SONET

This paper originated from D.K. Siu of Worcester Polytechnic Institute (c. 1997).



Introduction

SONET is an acronym for Synchronous Optical NETwork. It is a set of standards defining the rates and formats for optical networks, specified in ANSI T1.105, ANSI T1.106 and ANSI T1.117. A similar standard, the Synchronous Digital Hierarchy (SDH), has also been established in Europe by the ITU-T. SONET and SDH are technically consistent standards. The SONET standard is a long-term solution for a mid-span meet between vendors. Another advantage is the fact that SONET is synchronous: signals can be added and/or dropped with a single multiplexing process. SONET also integrates OAM&P into the network to further reduce the cost of transmission.

The SONET standard defines the rates and formats, the physical layer, network element architectural features, and network operational criteria. This project concentrates on the rates and formats, the multiplexing procedures, and the network synchronization and timing aspects of the network.



Typical End-to-End Connection

Many existing networks have not deployed standards such as those proposed for SONET. Communication between various localized networks is costly because of differences in digital signal hierarchies, encoding techniques and multiplexing strategies. For example, the DS1 signal consists of 24 voice signals and one framing bit per frame, for a rate of 1.544 Mbps. DS1 uses the AMI encoding scheme, which robs a bit from each eight-bit byte for signaling, leaving a rate of 56 kbps per channel. When the B8ZS bipolar-violation encoding scheme is used, every bit can be used for transmission, giving a rate of 64 kbps per channel. The CEPT-1 signal consists of 30 voice signals and 2 channels for framing and signaling, for a rate of 2.048 Mbps. CEPT-1 uses the HDB3 coding technique. Multiplexing procedures may also differ between signals: they can be byte interleaving or bit interleaving.
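
To make these rate figures concrete, here is a small illustrative sketch (not part of the original standards text) that reproduces the arithmetic behind the DS1 and CEPT-1 rates and the per-channel capacities:

```python
# Illustrative arithmetic only; the frame structures are simplified.

def ds1_rate():
    # 24 channels * 8 bits + 1 framing bit per frame, 8000 frames/sec
    return (24 * 8 + 1) * 8000

def cept1_rate():
    # 30 voice + 2 framing/signaling channels, 8 bits each, 8000 frames/sec
    return 32 * 8 * 8000

print(ds1_rate())    # 1544000 -> 1.544 Mbps
print(cept1_rate())  # 2048000 -> 2.048 Mbps

# Per-channel capacity: robbed-bit signaling leaves 7 usable bits per byte,
# B8ZS leaves all 8.
print(7 * 8000)  # 56000 -> 56 kbps
print(8 * 8000)  # 64000 -> 64 kbps
```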

Transporting a signal to a different network requires complicated multiplexing/demultiplexing and coding/decoding processes to convert a signal from one scheme to another. To solve this problem, SONET standardizes the rates and formats. The Synchronous Transport Signal (STS) is the basic building block of SONET optical interfaces, with a rate of 51.84 Mbps. The STS consists of two parts, the STS payload and the STS overhead. The STS payload carries the information portion of the signal; the STS overhead carries the signaling and protocol information. This allows communication between intelligent nodes on the network, permitting administration, surveillance, provisioning and control of a network from a central location.

The ends of a communication system involve signals with various rates and different formats. A signal is converted to STS and travels through various SONET networks in the STS format until it terminates. The terminating equipment converts the STS back to the user format. Now let's examine a typical SONET end-to-end connection as shown in the following figure.

Path Terminating Equipment (PTE)

The STS path terminating equipment is a network element that multiplexes/demultiplexes the STS payload. It can originate, access, modify or terminate the path overhead, or can perform any combination of these actions. For example, an STS path terminating equipment assembles 28 1.544 Mbps DS1 signals and inserts path overhead to form a 51.84 Mbps STS-1 signal.

Line Terminating Equipment (LTE)

Line terminating equipment is the network element that originates and/or terminates the line signal. It can originate, access, modify or terminate the line overhead, or can perform any combination of these actions.

Section Terminating Equipment (STE)

A section is the span between any two adjacent SONET network elements. Section terminating equipment can be a terminating network element or a regenerator. It can originate, access, modify or terminate the section overhead, or can perform any combination of these actions.

Before we examine the STS frame format and its overheads, we need to know the optical interface layers.



Optical Interface Layers

There are four optical interface layers in SONET (section 3.3.1 of [3] and section 9.1 of [2]): the path layer, line layer, section layer and photonic layer. The following figure shows the optical interface layer hierarchy.

The optical interface layers have a hierarchical relationship, with each layer building on the services provided by the lower layers. Each layer communicates with peer equipment at the same layer, processes information, and passes it up and down to the adjacent layers. For example, suppose two network nodes exchange DS1 signals. At the source node, the path layer maps 28 DS1 signals and path overhead to form an STS-1 SPE and hands this to the line layer. The line layer multiplexes 3 STS-1 SPEs and adds line overhead. This combined signal is then passed to the section layer. The section layer performs framing and scrambling and adds section overhead to form 3 STS-1 signals. Finally, the photonic layer converts the 3 electrical STS signals to an optical signal and transmits it to the distant node. The optical form of an STS signal is called an Optical Carrier (OC). The STS-1 signal and the OC-1 signal have the same rate.

At the distant node the process is reversed, from the photonic layer, where the optical signal is converted back to an electrical signal, to the path layer, where the DS1 signals terminate.



Functions of Photonic Layer

The photonic layer deals with the transport of bits across the physical medium. Its main function is the conversion between STS signals and OC signals. The issues are:

Pulse Shape

Power Level

Wavelength



Functions of Section Layer

The Section layer deals with the transport of an STS-N frame across the physical medium. The main functions are:

Framing

Scrambling

Error Monitoring

Section Maintenance

Orderwire



Functions of Line Layer

The line layer deals with the reliable transport of the path layer payload and its overhead across the physical medium. Its main functions are to provide synchronization and to perform multiplexing for the path layer. It also adds/interprets line overhead for maintenance and protection switching. The issues are:

Synchronization

Multiplexing

Error Monitoring

Line maintenance

Protection Switching



Functions of Path Layer

The path layer deals with the transport of services between path terminating equipment (PTE). Its main function is to map the signals into the format required by the line layer. It also reads, interprets, and modifies the path overhead for performance monitoring and automatic protection switching.



Frame Structures

Before we discuss the SONET frame and its overheads, we list the factors that determine the usage of the overhead bytes and the ways the input signals are mapped into the SPE.

STS-1 Frame

Like the common digital carriers, SONET adopts a frame length of 125 usec, or a frame rate of 8000 frames per second. The STS-1 (Synchronous Transport Signal level 1) is the basic signal rate of SONET. Each frame can be viewed as a 9-row by 90-column structure, a total of 810 bytes. The following figure shows the STS-1 frame structure.

Therefore, the STS-1 line rate can be derived as follows:

        9 rows * 90 columns * 8000 frames/sec * 8 bits/bytes = 51.84 Mbps

The first 3 columns are the transport overhead. Nine of the 27 (3 * 9) bytes of the transport overhead are used for section overhead; 18 of the 27 bytes are used for line overhead. Columns 4 to 90 are the Synchronous Payload Envelope (SPE). Thus the capacity of the STS-1 payload is:
        9 rows * 87 columns * 8000 frames/sec * 8 bits/bytes = 50.112 Mbps
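
The same arithmetic can be checked with a short illustrative sketch:

```python
# Derive the STS-1 line rate and payload capacity from the frame geometry.
ROWS, COLS, OH_COLS = 9, 90, 3       # frame rows, columns, transport OH columns
FRAMES_PER_SEC, BITS_PER_BYTE = 8000, 8

line_rate = ROWS * COLS * FRAMES_PER_SEC * BITS_PER_BYTE
payload_capacity = ROWS * (COLS - OH_COLS) * FRAMES_PER_SEC * BITS_PER_BYTE

print(line_rate)         # 51840000 -> 51.84 Mbps
print(payload_capacity)  # 50112000 -> 50.112 Mbps
```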

Each STS-1 frame is transmitted starting from the byte in row 1, column 1 to the byte in row 9, column 90. The most significant bit of a byte is transmitted first. The following algorithm shows the transmission sequence.

        for (row = 1; row <= 9; row++)
                for (column = 1; column <= 90; column++)
                        for (bit = 1; bit <= 8; bit++)
                                /* bit 1 is the most significant bit */
                                /* and bit 8 is the least significant bit */
                                bitTransmit = STSFrame[row][column][bit];

Line Rate

The SONET line rate is a synchronous hierarchy that is flexible enough to support signals of many different capacities. The STS-1/OC-1 line rate was chosen to be 51.84 Mbps to accommodate 28 DS1 signals or 1 DS3 signal. Also, three STS-1 signals fit perfectly into an STM-1 signal. The higher level signals are obtained by synchronously multiplexing lower level signals. These signals are represented by OC-N, where N is an integer; currently the values of N are 1, 3, 12, 48 and 192. For example, OC-48 has a rate of 2488.320 Mbps, 48 times the OC-1 rate.
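
The OC-N rates are exact multiples of the OC-1 rate; a quick illustrative sketch:

```python
OC1_RATE = 51_840_000  # bps

def oc_rate(n):
    # Higher level OC-N signals run at exactly N times the OC-1 rate.
    return n * OC1_RATE

for n in (1, 3, 12, 48, 192):
    print(f"OC-{n}: {oc_rate(n) / 1e6:.3f} Mbps")
```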

STS-N Frame

The STS-N signal is formed by byte interleaving N STS-1 signals. The transport overhead channels of the individual STS-1 signals must be frame-aligned before interleaving. Every STS-1 signal has a unique payload pointer to indicate the location of its SPE; therefore the associated STS SPEs are not required to be aligned.

STS-Nc Frame

Super-rate services such as broadband ISDN have rates higher than the STS-1 rate. Therefore, STS-1s are concatenated to form an STS-Nc signal, which is multiplexed, switched and transported through the network as a single entity. The STS-Nc SPE consists of N * 87 columns and 9 rows of bytes. Only one set of STS-1 path overhead is needed in the STS-Nc SPE. The other N-1 columns that normally would be used for path overhead can be used for payload.

Let's look at the similarities and the differences between the STS-3 and STS-3c frames.

Overhead

Before we look at the SONET transport overhead, we first examine the overhead of the DS1 signal hierarchy. The percentage overhead (OH) can be defined as:

        % OH = (Ct - Ci)/Ct * 100%
                where Ct = total capacity
                      Ci = information bearing capacity
        For 24 voice channels, the DS1 signal has
        % OH for DS1 = (1.544 - 1.536)/1.544 = 0.52%

As the rate increases, the percentage overhead increases. The additional overhead is used for control bits, alarm and signaling, parity bits and bit stuffing. The percentage overheads for DS2, DS3 and DS4 are shown below.

        % OH for DS2 = (6.312 - 96 * 64/1000)/6.312 = 2.7%
        % OH for DS3 = (44.736 - 672 * 64/1000)/44.736 = 3.9%
        % OH for DS4 = (274.176 - 4032 * 64/1000)/274.176 = 5.9%
The percentage of useful capacity decreases as the rate increases. SONET adopts fixed locations for overhead and a fixed percentage of overhead independent of the system rate. The percentage of SONET overhead is 4 columns * 100% / 90 columns, or 4.44%. The following figure shows the transport overhead and the path overhead of an STS-1 frame.
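
The percentage-overhead formula above can be applied uniformly across the hierarchy; this illustrative sketch reproduces the figures quoted in this section:

```python
def overhead_pct(total_mbps, info_mbps):
    # % OH = (Ct - Ci)/Ct * 100
    return (total_mbps - info_mbps) / total_mbps * 100

# Information capacity of a DSn signal = number of DS0 channels * 64 kbps.
print(round(overhead_pct(1.544, 24 * 0.064), 2))    # DS1: 0.52
print(round(overhead_pct(6.312, 96 * 0.064), 1))    # DS2: 2.7
print(round(overhead_pct(44.736, 672 * 0.064), 1))  # DS3: 3.9

# SONET overhead is a fixed 4 of 90 columns, independent of rate.
print(round(4 / 90 * 100, 2))                       # 4.44
```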

Section Overhead

The section overhead occupies the first 3 rows of the transport overhead. (section 3.3.2 of [3] and section 9 of [2])

Line Overhead

The line overhead occupies the last 6 rows of the transport overhead.

Path Overhead

The path overhead is assigned to, and transported with, the payload from the time it is created by the path terminating equipment as part of the SPE until the payload is demultiplexed at the terminating path equipment. In the case of super-rate services, only one set of path overhead is required and is contained in the first STS-1 of the STS-Nc. The path overhead supports four classes of functions:



Multiplexing Procedures

Since optical cable is capable of transmitting very high data rates, it is logical to multiplex multiple STS-1 signals to fully utilize the network capacity. Multiplexing is also required to provide super-rate services such as BISDN. In a synchronous environment such as SONET, multiple STS-1s travel together at a higher rate and yet remain visible as individual STS-1s through the interleaving process. (section 11 of [2] and section 5.1 of [3])

Interleaving

Interleaving, as defined in SONET, is a procedure for interlacing the individual bytes of a signal such that each component signal is visible within the combined signal. This eliminates the need to completely demultiplex an STS-N signal in order to access a single STS-1. In the interleaving process the STS-1s are first frame aligned and then byte interleaved to form an STS-N signal. The transport overhead is now N * 3 columns and the SPE is now N * 87 columns. As mentioned in the Transport Overhead section, not all the overhead is needed or defined. The following figure shows the resulting overhead after multiplexing 3 STS-1 signals.
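
A minimal sketch of round-robin byte interleaving (illustrative only, with made-up byte values) shows why a component signal stays directly accessible in the combined stream:

```python
def byte_interleave(streams):
    """Byte interleave equal-length component signals round-robin."""
    return [byte for group in zip(*streams) for byte in group]

def extract(interleaved, index, n):
    """Pull out component signal `index` of `n` without full demultiplexing."""
    return interleaved[index::n]

# Three toy "STS-1" byte streams (values are arbitrary placeholders).
a, b, c = [0xA1] * 6, [0xB2] * 6, [0xC3] * 6
sts3 = byte_interleave([a, b, c])

print(sts3[:6])                  # [161, 178, 195, 161, 178, 195]
print(extract(sts3, 1, 3) == b)  # True: signal b read back by simple striding
```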

The byte interleaving of STS-1 signals to form an STS-N signal can be done in a one-stage or a two-stage process.

Two-Stage Interleaving

The two-stage interleaving process accommodates the basic European rate of the ITU-T, the STM-1, which is equivalent to an STS-3. To create higher rates in the SDH, it is necessary to byte interleave the STM-1s. In order to be compatible with the equivalent European rates, STS-1 signals are first byte interleaved to form STS-3s, and the STS-3s are then byte interleaved into a higher rate STS-N signal. The following figure illustrates the two-stage byte interleaving process.

Single-Stage Interleaving

The single-stage interleaving process directly byte interleaves N STS-1 signals to form an STS-N signal without first creating the STS-3 signals. The outcome should be identical to that of the two-stage process; that is, the byte sequence obtained should be the same with either the two-stage or the single-stage process. The following figure illustrates the single-stage byte interleaving process.

Concatenation

If the rate is less than or equal to an STS-3c, the concatenated signal must be wholly contained within an STS-3 block formed in the intermediate step of the two-stage interleaving. If the rate is greater than STS-3c, all signals must be wholly contained within blocks that are multiples of STS-3 formed in the intermediate step of the two-stage interleaving.

Scrambling

Each SONET network element is required to have the capability to derive its clock timing from the incoming OC-N signal. All transmitted OC-N signals will be timed from this clock. Therefore, it is important to maintain sufficient ones density in the data stream. The technique commonly used with modems, called scrambling and descrambling, is used in SONET to make the data appear more random and to guarantee ones density.

The following paragraph, which describes the scrambling and descrambling procedure, is obtained from [3]:

A frame synchronous scrambler of sequence length 127, operating at the line rate, shall be used. The generating polynomial shall be 1+x^6+x^7. The scrambler shall be reset to '1111111' on the most-significant bit of the byte following the STS-1 number N C1 (STS-1 ID) byte. This bit and all the subsequent bits to be scrambled shall be added, modulo 2, to the output from the x^7 position of the scrambler as shown in the following figure.

The scrambling process is performed after the multiplexing step but before framing, C1 byte insertion, and the electrical-to-optical conversion. Therefore, the scrambler operates at the line rate, and the framing bytes and the C1 bytes are not scrambled. An exclusive-OR operation is performed on the STS-N frame, starting from the byte right after the Nth C1 byte, with the repeated 127-bit sequence. This sequence, generated by the pseudo-random number generator (with the generating polynomial 1 + x^6 + x^7), is obtained by dividing binary 1111111 by 10000011. The first 127 bits are:

111111100000010000011000010100 011110010001011001110101001111 010000011100010010011011010110 110111101100011010010111011100 0101010

The same operation is used for descrambling. For example, suppose the input data is 00000000001111111111.

        00000000001111111111  <-- input data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (scramble operation)
        11111110001110111110  <-- scrambled data
        11111110000001000001  <-- scramble sequence
        --------------------  <-- exclusive OR (descramble operation)
        00000000001111111111  <-- original data
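
The scrambler described above can be sketched as a 7-bit linear feedback shift register. The sketch below is an illustrative implementation (output tap at the x^7 position, feedback from x^6 and x^7, an arrangement inferred from the quoted sequence); it reproduces the first bits of the 127-bit sequence and the worked XOR example:

```python
def scrambler_sequence(nbits):
    """Frame-synchronous scrambler, generating polynomial 1 + x^6 + x^7,
    register reset to '1111111'. Returns the first nbits of the repeating
    127-bit sequence."""
    reg = [1] * 7                    # register positions x^1 .. x^7
    out = []
    for _ in range(nbits):
        out.append(reg[6])           # output is taken from the x^7 position
        feedback = reg[5] ^ reg[6]   # x^6 + x^7, modulo 2
        reg = [feedback] + reg[:6]   # shift one position
    return out

seq30 = "".join(str(b) for b in scrambler_sequence(30))
print(seq30)  # 111111100000010000011000010100

# Scrambling and descrambling are the same XOR operation.
data = [int(ch) for ch in "00000000001111111111"]
key = scrambler_sequence(len(data))
scrambled = [d ^ k for d, k in zip(data, key)]
print("".join(map(str, scrambled)))  # 11111110001110111110
assert [s ^ k for s, k in zip(scrambled, key)] == data  # XOR twice restores data
```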



Virtual Tributary

So far we have discussed the basic STS-1 signal and the higher rate signals, STS-N. We have also mentioned that an STS-1 signal may contain 28 DS1 signals. We now discuss how SONET handles signals with rates lower than the STS-1 rate, or sub-STS-1 signals. A sub-STS-1 signal is called a Virtual Tributary (VT) in SONET. (section 3.4.1 of [3] and 12.1 of [2])

Structure

The VT structure is designed for transporting and switching sub-STS-1 signals. There are four types (sizes) of VTs. Each size is designed to accommodate a certain size of digital signals. The following table shows the VT signals and their corresponding digital signals.

        ------------------------------------------------------
        |   SONET Hierarchy       |    Digital Signals       |
        |====================================================|
        |   VT-6 (6.912 Mbps)     |   DS2 (6.312Mbps)        |
        |----------------------------------------------------|
        |   VT-3 (3.456 Mbps)     |   DS1C (3.152 Mbps)      |
        |----------------------------------------------------|
        |   VT-2 (2.304 Mbps)     |   CEPT-1 (2.048 Mbps)    |
        |----------------------------------------------------|
        |   VT-1.5 (1.728 Mbps)   |   DS1 (1.544 Mbps)       |
        ------------------------------------------------------

Remember that the SPE is a 9-row * 87-column structure. Each column has a rate of 9 rows * 8 bits * 8000 SPEs/sec, or 0.576 Mbps. The number of columns needed to contain a VT-6 signal is 6.912 Mbps / 0.576 Mbps, or 12 columns. The numbers of columns needed for VT-3, VT-2 and VT-1.5 are 6, 4 and 3 columns respectively.
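
The column counts follow directly from the per-column rate; an illustrative check:

```python
COLUMN_RATE = 9 * 8 * 8000   # bits/sec carried by one SPE column (= 0.576 Mbps)

vt_rates = {"VT-6": 6_912_000, "VT-3": 3_456_000,
            "VT-2": 2_304_000, "VT-1.5": 1_728_000}

for name, rate in vt_rates.items():
    assert rate % COLUMN_RATE == 0           # each VT fills whole columns
    print(name, rate // COLUMN_RATE, "columns")  # 12, 6, 4, 3
```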

VT Group

In order to carry a mix of VT sizes in an STS-1 SPE in an efficient manner, the VT Group (VTG) is defined. The size of the VTG is 12 columns, which is the least common multiple of the four sizes of VTs. Only one type of VT can be contained within each VTG. Therefore, a VTG can be formed by byte-interleave multiplexing 1 VT-6, 2 VT-3s, 3 VT-2s or 4 VT-1.5s.

VT Mapping

Seven (87 columns / 12 columns per VTG) VTGs can be mapped into an STS-1 SPE. One column (column 1) is used for path overhead and the remaining 2 columns (columns 30 and 59) are fixed stuff. The following figure shows how VTGs are mapped onto the STS-1 SPE.

VT Modes

A VT can operate in two modes, locked mode and floating mode. The locked mode is designed for maximum efficiency of network elements performing DS0 switching. The floating mode minimizes delay for distributed VT switching.

The locked mode reduces the interface complexity in distributed 64 kbps (DS0) switching. It has a fixed mapping into the STS-1 SPE, providing a direct correspondence between the actual tributaries and their locations. In floating mode, the payload can float within the VT payload capacity. This requires a VT payload pointer and VT path overhead.

VT Superframe

In floating mode, four consecutive 125 usec frames of the STS-1 SPE are organized into a 500 usec superframe. The H4 (multiframe indicator) byte in the STS path overhead indicates the occurrence and the phase of the VT superframe. A value of '00' in the last two bits of the H4 byte indicates that the next STS frame contains the first frame of a VT superframe, '01' indicates the second frame of a VT superframe, and so on.

The VT superframe consists of the VT payload pointer, the VT synchronous payload envelope and the VT path overhead. The 4 VT payload pointer bytes (V1, V2, V3 and V4) are located in the first byte of each of the 4 VT frames of a VT superframe. The VT payload pointer allows dynamic alignment of the VT SPE within the VT superframe. The first byte of the VT SPE is the VT path overhead byte V5, which is used for error checking, signal label and path status.



Payload Pointers

Due to likely differences in the clocks of different systems, or due to faults, the frames from different systems will not align perfectly. To multiplex these signals, it is necessary to decouple the SPE frame alignment from the STS-1 frame alignment, and the VT SPE frame alignment from the VT superframe. The payload pointers provide a dynamic, floating SPE, and accommodate the phase and clock rate differences. (section 3.5 of [3] and section 10 of [2])

STS-1 pointers

The STS-1 pointer (H1 and H2 bytes) performs two functions:

The STS-1 pointer is divided into three parts. The first byte of the SPE is the trace byte, J1, of the POH. When the J1 byte is in row 4, column 4, the STS-1 pointer offset is 0. When the J1 byte is in row 4, column 5, the STS-1 pointer offset is 1. When the J1 byte is in row 3 of the next STS-1 frame, column 90, the STS-1 pointer offset is 782. The following figure shows a pointer offset of 89.
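
The offset-to-position rule can be sketched as follows (illustrative only; the offset counts through the 9 * 87 = 783 SPE byte positions, starting at row 4, column 4, the byte just after H3, and wrapping into rows 1-3 of the next frame):

```python
def j1_position(offset):
    """Map an STS-1 payload pointer offset (0..782) to the (row, column)
    holding the J1 byte, plus a flag for 'in the next STS-1 frame'."""
    assert 0 <= offset <= 782        # 9 rows * 87 SPE columns = 783 positions
    row = 4 + offset // 87           # each SPE row spans 87 byte positions
    col = 4 + offset % 87            # SPE occupies columns 4..90
    in_next_frame = row > 9
    if in_next_frame:
        row -= 9                     # wrap into rows 1-3 of the next frame
    return row, col, in_next_frame

print(j1_position(0))    # (4, 4, False)
print(j1_position(89))   # (5, 6, False)  -- the offset shown in the figure
print(j1_position(782))  # (3, 90, True)
```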

Frequency justification and pointers

When the input data rate is lower than the output data rate of a multiplexer, positive stuffing occurs. The input is stored in a buffer at a rate controlled by the WRITE clock. Since the output (READ) clock rate is higher than the WRITE clock rate, the buffer content would eventually be depleted. To avoid this condition, the buffer fill is constantly monitored and compared to a threshold. If the fill is below the threshold, the READ clock is inhibited and a stuff bit is inserted into the output stream. Meanwhile, the input data stream continues filling the buffer. The stuff bit location information must be transmitted to the receiver so that the receiver can remove the stuff bit.

When the input data rate is higher than the output data rate of a multiplexer, negative stuffing occurs. If negative stuffing occurs, the extra data can be transmitted through another channel. The receiver must know how to retrieve this data.

Positive Stuffing

If the frame rate of the STS SPE is too slow with respect to the STS-1 frame rate, then the alignment of the envelope should periodically slip back, i.e. the pointer should periodically be incremented by one. This operation is indicated by inverting the I bits of the 10-bit pointer. The byte right after the H3 byte is the stuff byte and should be ignored. The following frames then contain the new pointer. For example, suppose the 10-bit pointer in the H1 and H2 bytes has the value '0010010011' for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       1000111001  <-- the I bits are inverted, positive stuffing
                                    is required.
          N+2       0010010100  <-- the pointer is increased by 1

Negative Stuffing

If the frame rate of the STS SPE is too fast with respect to the STS-1 frame rate, then the alignment of the envelope should periodically advance, i.e. the pointer should periodically be decremented by one. This operation is indicated by inverting the D bits of the 10-bit pointer. The H3 byte then contains actual data. The following frames then contain the new pointer. For example, suppose again the 10-bit pointer in the H1 and H2 bytes has the value '0010010011' for STS-1 frame N.

        Frame #     IDIDIDIDID
        ----------------------
          N         0010010011
          N+1       0111000110  <-- the D bits are inverted, negative stuffing
                                    is required.
          N+2       0010010010  <-- the pointer is decreased by 1
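
With the pointer word laid out as IDIDIDIDID, both stuffing indications reduce to simple XOR masks. An illustrative sketch covering the two examples above:

```python
I_MASK = 0b1010101010   # I bits sit in the odd positions of IDIDIDIDID
D_MASK = 0b0101010101   # D bits sit in the even positions

def invert_i_bits(pointer):
    """Signal positive stuffing: send the pointer with its I bits inverted."""
    return pointer ^ I_MASK

def invert_d_bits(pointer):
    """Signal negative stuffing: send the pointer with its D bits inverted."""
    return pointer ^ D_MASK

ptr = 0b0010010011   # pointer value from the examples, frame N

print(format(invert_i_bits(ptr), "010b"))  # 1000111001
print(format(ptr + 1, "010b"))             # 0010010100 (frame N+2, positive)
print(format(invert_d_bits(ptr), "010b"))  # 0111000110
print(format(ptr - 1, "010b"))             # 0010010010 (frame N+2, negative)
```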

New Data Flag

The 4-bit NDF is a mechanism that allows the pointer to be changed arbitrarily. Normally the NDF is '0110'. When the NDF is set to '1001', the offset value indicated by the 10-bit pointer takes effect regardless of the state of the receiver.

VT pointers

The VT pointers provide a method of allowing flexible and dynamic alignment of the VT SPE within the VT superframe. The VT pointer (V1 and V2 bytes) can be divided into 3 parts:

Frequency Justification

The VT payload pointer is used to frequency justify the VT SPE in the same way as the STS-1 pointer. A positive stuff opportunity immediately follows the V3 byte. The V3 byte is used for actual data during negative stuffing.

New Data Flag

Similar to the STS SPE pointer NDF, the VT pointer NDF allows the pointer value to be changed arbitrarily. In addition, it also allows a change in the VT size. The normal value is '0110'. If the value is '1001', then the pointer and the size value accompanying the NDF take effect.



Signal Mapping

We have discussed the SONET basic STS-1 signal, which can be used to transport a DS3 signal. The STS-Nc signal is used to transport higher rate signals, and VTs are used to transport sub-STS-1 rate signals such as the DS1 signal. However, the digital signals that SONET carries do not fit into the SPE perfectly. SONET defines how these signals are mapped. (section 3.4 of [3] and section 12 of [2])

Asynchronous mapping for DS3 payload

The 44.736 Mbps asynchronous DS3 signal is first synchronized to form a 48.384 Mbps DS3 payload mapping. The STS-1 SPE is formed by adding the STS POH. The STS-1 SPE consists of 9 subframes every 125 usec. Each subframe consists of 621 information (i) bits, a set of 5 stuff control (c) bits, 1 stuff opportunity (s) bit and 2 overhead communication channel (o) bits. The remaining 59 bits are fixed stuff (r) bits. The following figure shows the mapping for the asynchronous DS3 payload.

The byte types are defined as:

        Byte Type      bits in the byte
        ---------      ----------------
           I           iiiiiiii
           R           rrrrrrrr
           X           rrciiiii
           Y           ccrrrrrr
           Z           ccrroors

        where i=information; r=fixed stuff; c=stuff control
                s=stuff opportunity; o=overhead

The set of five c bits determines the use of the s bit. If the c bits are all 0, the s bit is used for data. If the c bits are all 1, the s bit is undefined. The rates achieved are:
        using only the I bits      = 44.712 Mbps
        the DS3 rate               = 44.736 Mbps
        using the I bits and S bit = 44.784 Mbps
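
The DS3 figures can be checked per subframe (illustrative arithmetic only):

```python
SUBFRAMES_PER_FRAME, FRAMES_PER_SEC = 9, 8000
I_BITS, C_BITS, S_BITS, O_BITS, R_BITS = 621, 5, 1, 2, 59

# Each subframe is one 86-byte row of payload (87 columns minus the POH column).
assert I_BITS + C_BITS + S_BITS + O_BITS + R_BITS == 86 * 8

min_rate = I_BITS * SUBFRAMES_PER_FRAME * FRAMES_PER_SEC         # S bit never data
max_rate = (I_BITS + S_BITS) * SUBFRAMES_PER_FRAME * FRAMES_PER_SEC  # S bit always data
print(min_rate, max_rate)   # 44712000 44784000
assert min_rate < 44_736_000 < max_rate   # the 44.736 Mbps DS3 rate fits in between
```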

Asynchronous mapping for DS2 payload

The 6.312 Mbps asynchronous DS2 signal is first synchronized to form a 6.912 Mbps VT6 signal, which consists of the VT6 SPE and the VT POH. The mapping of the DS2 payload into the VT6 is defined only for the VT floating mode. The following figure shows the asynchronous mapping for DS2 into a floating VT6.

The byte types are defined as:

        Byte Type      bits in the byte
        ---------      ----------------
           I           iiiiiiii
           R           rrrrrrrr
           X1          c1c2ooooir
           X2          c3c4ooooir
           X3          c5c6ooooir
           X4          c7c8ooooir
           Y1          c1c2iiis1s2r
           Y2          c3c4iiis3s4r
           Y3          c5c6iiis5s6r
           Y4          c7c8iiis7s8r
           Z           iiiiiiir

        where i=information; r=fixed stuff; c=stuff control
                s=stuff opportunity; o=overhead

In this 500 usec VT superframe, there are four VT path overhead bytes (V5, V6, V7 and V8) and 3152 (16 * 24 * 8 + 4 * 8 + 4 * 7 + 4 * (1 + 1 + 3)) information bits. There are 8 sets of control bits, c1 to c8, eight stuff opportunities, s1 to s8, 32 overhead channel (o) bits, and the rest are fixed stuff (r) bits. The c1 bits control the s1 bit, the c2 bits control the s2 bit and so on. If the c bits are 0s, the corresponding s bit is data. If the c bits are 1s, the corresponding s bit is undefined. The DS2 rate is achieved by
        with only the I bits (3152 bits * 2000) = 6.304 Mbps
        normal DS2 rate                         = 6.312 Mbps
        with I and S bits (3160 bits * 2000)    = 6.320 Mbps

Asynchronous mapping for DS1C payload

The 3.152 Mbps asynchronous DS1C signal is first synchronized to form a 3.456 Mbps VT3 signal, which consists of the VT3 SPE and the VT POH. The mapping of the DS1C payload into the VT3 is defined only for the VT floating mode. The following figure shows the asynchronous mapping for DS1C into a floating VT3.

In this 500 usec VT superframe, there are four VT path overhead bytes (V5, V6, V7 and V8) and 1574 (8 * 24 * 8 + 2 * 8 + 2 * 6 + 2 * 3 + 4 * 1) information bits. There are 4 sets of control bits, C1, C2, C3 and C4, four stuff opportunities, the S1, S2, S3 and S4 bits, sixteen overhead channel (O) bits, and the rest are stuff (R) bits. The C1 bits control the S1 bit, the C2 bits control the S2 bit and so on. If the C bits are 0s, the corresponding S bit is data. If the C bits are 1s, the corresponding S bit is undefined. The DS1C rate is achieved by

        with only the I bits (1574 bits * 2000) = 3.148 Mbps
        normal DS1C rate                        = 3.152 Mbps
        with I and S bits (1578 bits * 2000)    = 3.156 Mbps

Asynchronous mapping for DS1 payload

The 1.544 Mbps asynchronous DS1 signal is first synchronized to form a 1.728 Mbps VT1.5 signal, which consists of the VT1.5 SPE and the VT POH. The mapping of the DS1 payload is defined only for the VT floating mode. The following figure shows the asynchronous mapping for DS1 into a floating VT1.5.

In this 500 usec VT superframe, there are four VT path overhead bytes (V5, V6, V7 and V8) and 771 (4 frames * 24 bytes * 8 bits + 3 bits) information bits. There are 2 sets of control bits, C1 and C2, two stuff opportunities, the S1 and S2 bits, eight overhead channel (O) bits, and the rest are stuff (R) bits. The C1 bits control the S1 bit and the C2 bits control the S2 bit. If the C bits are 0s, the corresponding S bit is data. If the C bits are 1s, the corresponding S bit is undefined. The DS1 rate is achieved by

        with only the I bits (771 bits * 2000) = 1.542 Mbps
        normal DS1 rate                        = 1.544 Mbps
        with I and S bits (773 bits * 2000)    = 1.546 Mbps

Byte synchronous mapping for DS1 payload

Byte synchronous mapping for the DS1 payload is defined for the VT floating mode; the VT locked mode has been removed from future applications. A byte synchronous mapping into a floating VT1.5 SPE is defined to allow downstream SONET NEs direct identification of and access to the 24 DS0 channels that are carried. The following figure shows the byte synchronous mapping for DS1 into a floating VT1.5.

In this 500 usec VT superframe, there are four VT path overhead bytes (V5, V6, V7 and V8) and 768 (4 frames * 24 bytes * 8 bits) information bits. For each set of 24 DS0 channels, there are 4 signaling (S1, S2, S3 and S4) bits, one framing (F) bit, two phase (P1 and P2) bits and one fixed stuff (R) bit. The P1 and P2 bits are allocated for indicating the phase of signaling. The DS1 rate is achieved by

        (768 bits + 4 framing bits) * 2000 = 1.544 Mbps

Asynchronous mapping for CEPT-1 payload

The 2.048 Mbps asynchronous CEPT-1 signal is first synchronized to form a 2.304 Mbps VT2 signal, which consists of the VT2 SPE and the VT POH. The mapping of the CEPT-1 payload into the VT2 floating mode is provisional and subject to further study. The following figure shows the asynchronous mapping for CEPT-1 into a floating VT2.

In this 500 usec VT superframe, there are four VT path overhead bytes (V5, V6, V7 and V8) and 1023 (3 * 32 * 8 + 31 * 8 + 7) information bits. There are 2 sets of control bits, C1 and C2, two stuff opportunities, the S1 and S2 bits, eight overhead channel (O) bits, and the rest are stuff (R) bits. The C1 bits control the S1 bit and the C2 bits control the S2 bit. If the C bits are 0s, the corresponding S bit is data. If the C bits are 1s, the corresponding S bit is undefined. The CEPT-1 rate is achieved by

        with only the I bits (1023 bits * 2000) = 2.046 Mbps
        normal CEPT-1 rate                      = 2.048 Mbps
        with I and S bits (1025 bits * 2000)    = 2.050 Mbps
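
The same window arithmetic applies to the VT2 mapping:

```python
# Rate range for the asynchronous CEPT-1 mapping into a floating VT2.
SUPERFRAMES_PER_SEC = 2000
info_bits = 3 * 32 * 8 + 31 * 8 + 7   # 1023 I bits per superframe
min_rate = info_bits * SUPERFRAMES_PER_SEC         # S1 and S2 stuffed
max_rate = (info_bits + 2) * SUPERFRAMES_PER_SEC   # S1 and S2 carry data
print(min_rate, max_rate)  # 2046000 2050000
assert min_rate <= 2_048_000 <= max_rate  # nominal CEPT-1 rate fits
```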

STS-3C mapping for ATM

The 53 byte ATM cells are mapped into the STS-3c payload as shown in the following figure.

The STS-3c has a payload capacity of (3 * 90 - 9 - 1) columns * 9 rows, or 2340 bytes. Because the STS-3c payload capacity is not an integer multiple of the ATM cell size, a cell is allowed to cross the SPE boundary. The H4 byte in the STS path overhead is used to indicate the offset (distance in bytes) between itself and the beginning of the next ATM cell. Since an ATM cell is 53 bytes long, the range of the offset is 0 to 52, and therefore only bit 3 to bit 8 of the H4 byte are used; the remaining 2 bits are undefined. (The definition of the H4 byte as a pointer to the beginning of the next ATM cell has been removed from the 1995 version of [3].)

In order to provide security against payload information replicating the frame scrambling sequence, the ATM payload is scrambled before STS-3c framing using the generating polynomial 1 + x^43. The scrambler operates only on the 48 byte ATM cell payload: it suspends its operation during the ATM header and resumes it during the cell payload. At startup, the scrambler state is set to all 1s.
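
The self-synchronous 1 + x^43 scrambler can be sketched bit by bit in Python. This is a didactic model under the description above, not a production implementation; real equipment also freezes the scrambler state over the 5 byte cell header:

```python
def scramble(bits, state=None):
    """Self-synchronous scrambler, generating polynomial 1 + x^43.

    Each output bit is the input bit XORed with the scrambler's own
    output delayed by 43 bits. State starts as all 1s per the standard.
    """
    state = list(state) if state is not None else [1] * 43
    out = []
    for b in bits:
        s = b ^ state[0]
        out.append(s)
        state = state[1:] + [s]   # shift in the *output* bit
    return out, state

def descramble(bits, state=None):
    """Matching descrambler: XOR with the *received* bits delayed by 43."""
    state = list(state) if state is not None else [1] * 43
    out = []
    for b in bits:
        out.append(b ^ state[0])
        state = state[1:] + [b]   # shift in the received (scrambled) bit
    return out, state
```

Because the descrambler's delay line is fed from the received stream, a state mismatch only corrupts the first 43 bits, after which the two ends fall back into step; this is the self-synchronizing property.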

Asynchronous mapping for FDDI

An asynchronous mapping for FDDI into a STS-3c SPE is defined for clear channel transport of FDDI signals as shown in the following figure.

Each 260 byte row of the STS-3c payload capacity is divided into 20 blocks of 13 bytes each. There are 5 block types, J, A, B, X and Y, which contain some combination of the 6 byte types, I, C, O, R, S1 and S2. The block types are defined as:

        Block Type     sequence of byte type contained
        ----------     -------------------------------
           J           O I I I R I I I R I I I I
           A           C I I I I I I I I I I I I
           B           R I I I R I I I R I I I I
           X           S1 I I I R I I I I I I I I
           Y           S2 I I I R I I I I I I I I
The byte types are defined as:
        Byte Type      bits in the byte
        ---------      ----------------
           I           iiiiiiii
           C           iiiiiiic
           O           iiiiiioo
           R           rrrrrrrr
           S1          iiiiiiis
           S2          iiiiiisr

        where i=information; r=fixed stuff; c=stuff control
                s=stuff opportunity; o=overhead
There are 15,621 information bits per STS-3c frame, capable of transporting a 124.968 Mbps data rate. The control bits and the stuff opportunities are used in the same manner as in the sub-rate mappings. With all the stuff bits used for data, the data rate can be 125.04 Mbps.
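
The frame arithmetic can be verified as below; note that the 9 stuff opportunity bits per frame is an inference from the two quoted rates, not a figure taken from the mapping itself:

```python
# FDDI into STS-3c: data rate window implied by the quoted figures.
FRAMES_PER_SEC = 8000
info_bits = 15_621
min_rate = info_bits * FRAMES_PER_SEC    # 124_968_000 bps
max_rate = 15_630 * FRAMES_PER_SEC       # 125_040_000 bps
print((max_rate - min_rate) // FRAMES_PER_SEC)  # 9 stuff bits per frame
# The 125 Mbps FDDI rate sits inside the stuffing window.
assert min_rate <= 125_000_000 <= max_rate
```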

Back to the Table of Contents


Synchronization/Timing

SONET eliminates the need for sync characters and message framing for clock synchronization between equipment. Instead, SONET relies on a synchronization network that transports timing references between locations.

Existing Scheme

SONET uses, and is compatible with, the existing synchronization network [4], which distributes timing signals based on the DS1 signal. Within a building, a single clock is designated as the Building Integrated Timing Supply (BITS). This clock should be the highest accuracy clock in the building; all other clocks within the building receive DS1/DS0 timing references from this BITS. Through DS1 facilities, this clock receives a DS1 timing reference from another office which has a better quality clock. Eventually, the network obtains the clock reference from the Primary Reference Source (PRS), which has an accuracy of Stratum 1.

This synchronization network is based on a tree type network topology. The quality of the synchronization at the end of the network depends on the accuracy of the slave clocks and the transmission links. If a node loses the reference to its master, it goes into holdover mode and generates clock signals for the downstream nodes.

Timing Modes

SONET NEs are required to support the three major timing modes:

The DS1s transported in the SONET VT payloads are not recommended for interoffice synchronization distribution, because of the phase variation created in those DS1 signals by pointer adjustment and the desynchronizer process. We shall discuss the problems resulting from pointer adjustment and the desynchronizer process in the Phase Variation section. SONET specifies the clock accuracy for its network elements using the synchronization clock hierarchy shown in the following table. Reference [3] specifies clock accuracy requirements for different types of network elements.
        -------------------------------------------------
        | Stratum | Minimum Accuracy |     Slip rate    |
        -------------------------------------------------
        |    1    |    10^-11        |     2.523/year   |
        -------------------------------------------------
        |    2    |    1.6 *10^-8    |     11.06/day    |
        -------------------------------------------------
        |    3    |    4.6 *10^-6    |     132.48/hr    |
        -------------------------------------------------
        |    4    |    3.2 *10^-5    |     15.36/min    |
        -------------------------------------------------

Timing Evolution

With the introduction of SONET NEs, synchronization can be distributed by both DS1 and OC-N signals. For SONET NEs, OC-N derived synchronization is the preferred choice. The long term plan is to evolve from the current DS1 based synchronization distribution network to an OC-N based network. Therefore, SONET also specifies the capability to supply a timing reference derived from an OC-N in the form of a DS1 in the superframe format. This DS1 signal should be framed and contain all 1s.

The distribution of the timing reference is enhanced by the automatic protection switching capability in SONET. In a ring or linear add/drop configuration, a node can receive OC-N signals from more than one node, so it has the capability to obtain its timing source from the node with the better clock. For example, nodes A, B and C are in the linear add/drop configuration shown in the following figure.

Node B is loop timed from node A. When node A loses its timing reference, it must go to the holdover mode and send a message to node B using the Z1 byte. When node B receives the message, it switches the timing source to node C which has a better clock.

Importance of Clock Accuracy

The accuracy of the clock is very important, especially in SONET, which is designed to carry very high data rates. For example, suppose a terminal equipment has a transmission rate of 1 Mbps and a clock accuracy of 10^-6.

Over one minute, the equipment may decode 60 bits more or 60 bits fewer than were sent. Not only does the equipment fail to obtain the right number of bits, but the transmitted data would not be decoded properly by the receiver. The difference in frequency and phase is known in general as phase variation. If the period of the clock pulse is T seconds and the phase variation is t seconds, then when t is equal to T, it is called 1 Unit Interval (UI) of phase variation.
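
The 60 bit figure follows directly from the frequency offset:

```python
# Bits gained or lost due to a frequency offset between two clocks.
rate_bps = 1_000_000    # 1 Mbps transmission rate
accuracy = 1e-6         # clock accuracy
drift_bits_per_sec = rate_bps * accuracy   # 1 bit of slip per second
drift_bits_per_min = drift_bits_per_sec * 60
print(drift_bits_per_min)  # 60.0 -- over a minute, 60 bits more or fewer
```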

Phase Variation

Phase variation covers both jitter and wander. The sources of jitter are multiplexing and regenerator equipment; temperature differences along different portions of a cable may cause wander. In SONET, the terms jitter and wander are defined based not on the cause but on how they are treated in the network. Jitter is the phase variation that is buffered and filtered by a phase locked loop. Wander is the phase variation that is tracked and passed on by a phase locked loop. Therefore, there is a two step process to handle phase variations:

  • to deal with wander by tracking the incoming signal
  • to filter jitter by pointer adjustment mechanism

    Wander

    Signals travelling through a transmission medium, which may span a long distance, may experience varying propagation delay because the effective length of the cable changes with temperature from one segment to another, or from time to time (e.g. day and night, summer and winter). This can result in pulse positions slowly shifting over a long period of time. Drift in regenerator laser wavelength and instabilities in the timing reference can also lead to wander.

    Wander shifts bits to unexpected time positions. As the bit streams flow into a multiplexer or a demultiplexer, the multiplexer tries to read the bit streams with a reference clock. Because it cannot correctly read the shifted bit streams, bit errors result.

    One way to solve the problem is to pass each bit stream to a PLL or surface acoustic wave (SAW) filter to extract the timing from the bit stream. With the extracted clock associated with each bit stream, each bit stream is then read into a separate elastic store. By using the same reference clock to read the elastic stores, the different bit streams can be synchronized to the reference clock. The following figure shows the block diagram of the synchronization of different bit streams in a multiplexer.

    Regenerator Jitter

    At a regenerator, the timing of the outgoing signal is extracted from the incoming line signal. Imperfections in the incoming signal, the circuitry of the regenerator, the high frequency, and the bit pattern passing through the regenerator all cause phase variation. Chapter 2 of reference [6] shows that when a passive lumped filter or PLL circuit is used at high frequencies, ripple in the passband of the bandpass filter and asymmetries in the filter shape lead to jitter peaking. Furthermore, the phase variation accumulates as the signal passes through successive regenerators.

    Frequency Offset

    If the local clock is not synchronous with the PRS but runs at a slightly different frequency, the difference in frequency produces a constantly increasing phase variation over time, as represented by the following graph.

    Clock Noise

    Temperature and voltage can affect the frequency of a clock, so temperature and voltage changes may induce phase variation in the clock. Regenerator jitter, frequency offset and clock noise are usually reduced by pointer adjustment.

    Justification Jitter

    During the mapping of tributaries into an SPE, the tributaries need not be aligned with each other nor with the output signal. The phase variation can be filtered for each of the tributaries by the pointer adjustment mechanism briefly described in the Payload Pointers section; the STS and VT pointer adjustment mechanisms are essentially the same. Each tributary is tracked as described in the Wander section, and a gapped clock is derived from the clock of the synchronous payload to read each of the tributaries. If a tributary lags behind (runs slower than) the clock of the synchronous payload, a positive pointer adjustment is made every time the phase exceeds a predetermined threshold. If a tributary advances ahead of (runs faster than) the clock of the synchronous payload, a negative pointer adjustment is made every time the phase exceeds a predetermined threshold.

    Usually, the clocks delivered to the SONET NEs are extremely stable. However, if an NE is operating in free run or holdover mode, the accuracy of its clock is around 4.6 * 10^-6 (Stratum 3). An OC-3 multiplexer operating in free run mode with a clock 4 * 10^-6 ahead of the OC-3 rate would have to adjust for 4 * 155.52 (~620) bits every second. The H3 bytes are used for pointer adjustment; at the STS-3 rate, 3 bytes or 24 bits are used, so each pointer adjustment moves the payload by 24 bits. This 24 bit step adjustment occurs 26 (620/24) times a second. The following graph shows the effects of pointer adjustment.
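
    The adjustment rate above can be reproduced numerically:

```python
# Pointer adjustment rate for an OC-3 NE free running 4e-6 fast.
line_rate = 155.52e6    # OC-3 line rate, bits per second
offset = 4e-6           # Stratum 3 class frequency offset
excess_bits = line_rate * offset    # ~622 bits/s must be absorbed
step = 24                           # 3 H3 bytes at the STS-3 rate
adjustments_per_sec = excess_bits / step
print(round(excess_bits))           # 622
print(round(adjustments_per_sec))   # 26
```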

    The pointer adjustment mechanism does not introduce jitter onto the STS-3 signal. However, it does introduce jitter onto the payload it carries.

    Synchronizer

    We have described the mapping of various signals into the STS-N signal. In this section, we shall describe the timing aspects of signal mapping. To simplify matters, the discussion is based on the DS3 to STS-1 mapping.

    The DS3 payload is first read into an elastic store with a DS3 rate clock. The data is read out of the elastic store with a gapped STS-1 rate clock so that the average frequency is the DS3 rate. The phase difference between the DS3 signal and the gapped STS-1 rate clock is adjusted according to the DS3 to STS-1 mapping described in the Signal Mappings section, using the stuff bits and control bits. The section, line and path overhead are then inserted; the values of the overhead bytes have been discussed in the Overhead section. The STS-1 pointer, the H1 and H2 bytes, contains a fixed pointer to the beginning of the payload in the SPE.

    Initially, signals are synchronized to the clock rate used by the transmitting equipment, and the STS-1 pointers point to the beginning of the SPE. As the signals travel through the network, phase variation may be introduced and pointer adjustments take place at various nodes. When the signal finally reaches the terminating equipment, a desynchronizer is used to reproduce the DS3 signal.

    Desynchronizer

    At the terminating node, the STS-N signal must be demultiplexed to the DS3 signal. The process is generalized in the following steps:

    The following figure shows the block diagram of the desynchronizer.

    Steps 3 and 4 of the above process are normally handled by a jitter reduction circuit, or desynchronizer, which consists of two major components: the bit leaking circuit and the elastic store.

    The bit leaking circuit stores incoming STS-1 pointer adjustments in a queue and leaks them out of the desynchronizer one bit at a time. The circuit employs a digital phase locked loop and a control network to lock on to the rate of incoming pointer adjustments and to estimate the leak rate of the stored bits. An STS-1 pointer adjustment represents an 8 bit adjustment to the instantaneous number of bits in a frame of the STS-1 SPE. The STS-1 SPE has a rate of 50.112 Mbps, so a 1 bit pointer adjustment represents 1/50.112 usec, or about 20 nsec, of phase.
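
    The phase step sizes follow from the SPE rate:

```python
# Phase represented by one leaked bit vs. one whole pointer adjustment.
spe_rate = 87 * 9 * 8 * 8000     # STS-1 SPE: 87 cols * 9 rows * 8 bits * 8000 frames/s
assert spe_rate == 50_112_000    # 50.112 Mbps
bit_time_ns = 1e9 / spe_rate     # phase of a single leaked bit, ~20 ns
adj_time_ns = 8 * bit_time_ns    # phase of an 8 bit pointer adjustment, ~160 ns
print(round(bit_time_ns, 2))
print(round(adj_time_ns, 1))
```

    Leaking the 8 bits out one at a time thus spreads one large phase step into eight much smaller ones, which the downstream PLL can filter.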

    The elastic store accepts an STS-1 data stream and a gapped clock. The gaps in the input clock inhibit the elastic store from writing anything but DS3 payload data bits into its storage cells. A filtered, or "smoothed," DS3 clock reads the DS3 payload out of the elastic store.

    The SONET standard defines the jitter tolerance based on current PLL technology. Reference [7] proposes two different desynchronizer circuit designs.

    Back to the Table of Contents


    Source Of Information and References

    Most of the material used for the Formats and Rates sections (sections 1-12) is obtained from sections 1-12 of reference [2] and sections 2, 3 and 5 of [3]. Reference [5] gives good insight into SDH; it uses SDH terminology and may be harder to read. The material in the Synchronization and Timing section is based on chapter 4 of that book, which also gives a mathematical analysis of phase variation in appendix A.

    Back to the Table of Contents