2. VLL Services

2.1. ATM VLL (Apipe) Services

This section provides information about the ATM VLL (Apipe) services and implementation.

This feature is supported on the 7450 ESS platform in mixed-mode.

2.1.1. Apipe For End-to-End ATM Service

An Apipe provides a point-to-point ATM service between users connected to 7450 ESS or 7750 SR nodes on an IP/MPLS network. Users are either directly connected to a PE or through an ATM access network. In both cases, an ATM PVC (for example, a virtual channel (VC) or a virtual path (VP)) is configured on the PE. This feature supports local cross-connecting when users are attached to the same PE node. VPI/VCI translation is supported in the Apipe.

PE1, PE2, and PE3 receive standard UNI/NNI cells on the ATM Service Access Point (SAP) that are then encapsulated into a pseudowire packet using the N:1 cell mode encapsulation or AAL5 SDU mode encapsulation according to RFC 4717, Encapsulation Methods for Transport of ATM Over MPLS Networks. When using N:1 cell mode encapsulation, cell concatenation into a pseudowire packet is supported. In this application, both VC- and VP-level connections are supported.

The ATM pseudowire is initiated using Targeted LDP (T-LDP) signaling as specified in RFC 4447, Pseudo-wire Setup and Maintenance using LDP. The SDP can be an MPLS or a GRE type.

Figure 1 shows an example of Apipe for end-to-end ATM service.

Figure 1:  Apipe for End-to-End ATM Service 
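
The following is a minimal configuration sketch of an Apipe using the AAL5 SDU encapsulation mode, shown in the classic CLI; the service ID, customer ID, ATM SAP (port:vpi/vci), and SDP binding are illustrative assumptions, and a pre-provisioned MPLS or GRE SDP toward the far-end PE is assumed. For an N:1 cell mode VC or VP cross-connect, vc-type atm-vcc or atm-vpc would be used instead.

    configure service apipe 100 customer 1 vc-type atm-sdu create
        sap 1/1/1:0/100 create               # ATM SAP on VPI 0, VCI 100 (example values)
        exit
        spoke-sdp 10:100 create              # binds the service to pre-provisioned SDP 10
        exit
        no shutdown
    exit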

2.1.2. ATM Virtual Trunk Over IP/MPLS Packet Switched Network

For 7450 ESS or 7750 SR OS, ATM virtual trunk (VT) implements a transparent trunking of user and control traffic between two ATM switches over an ATM pseudowire. Figure 2 shows ATM 2 and ATM 3 switches that appear as if they are directly connected over an ATM link. Control traffic includes PNNI signaling and routing traffic.

Figure 2:  ATM VT Application 

The VT SAP on a PE is identified by a tuple (port, VPI-range) meaning that all cells arriving on the specified port within the specified VPI range are fed into a single ATM pseudowire for transport across the IP/MPLS network. A user can configure the whole ATM port as a VT and does not need to specify a VPI range. No VPI/VCI translation is performed on ingress or egress. Cell order is maintained within a VT. As a special case, the two ATM ports could be on the same PE node.

By carrying all cells from all VPIs making up the VT in one pseudowire, a robust solution is provided; for example, it avoids black holes on some VPIs but not others. The solution is also operationally efficient because the entire VT can be managed as a single entity from the Network Manager (single point for configuration, status, alarms, statistics, and so on).

ATM virtual trunks use PWE3 N:1 ATM cell mode encapsulation to provide a cell-mode transport, supporting all AAL types, over the MPLS network. Cell concatenation on a pseudowire packet is supported. The SDP can be an MPLS or a GRE type.

The ATM pseudowire is initiated using Targeted LDP (T-LDP) signaling (defined in RFC 4447, Pseudowire Setup and Maintenance using LDP). In this application, there is no ATM signaling on the gateway nodes since both endpoints of the MPLS network are configured by the network operator. ATM signaling between the ATM nodes is passed transparently over the VT (along with user traffic) from one ATM port on a PE to another ATM port on a remote (or the same) SR PE.
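
The ATM VT case is configured in the same way, but with the N:1 cell mode pseudowire type and a SAP covering the whole ATM port; the following hedged sketch assumes the whole-port form of the SAP and uses illustrative identifiers.

    configure service apipe 110 customer 1 vc-type atm-cell create
        sap 1/1/2 create                     # whole ATM port acts as the virtual trunk (example port)
        exit
        spoke-sdp 10:110 create
        exit
        no shutdown
    exit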

2.1.3. Traffic Management Support

Traffic management is supported only on the 7750 SR.

2.1.3.1. Ingress Network Classification

Classification is based on the EXP value of the pseudowire label and EXP-to-FC mapping is determined by the network ingress QoS policy.

2.1.3.2. Ingress Queuing and Shaping on the IOM

Each SAP of an ATM VLL has an associated single ingress service queue on the IOM. The default QoS policy configures this queue to have CIR=0 and PIR=line rate. Other QoS policies can be applied if they specify a single service queue. Applying a non-default QoS policy allows the CIR/PIR of the incoming traffic to be controlled, regardless of whether ATM policing is configured, and provides queuing and shaping to smooth traffic flows on the ingress of the network.

2.1.3.3. Egress Queuing and Shaping on the IOM

Each SAP of an ATM VLL has an associated single egress service queue on the IOM. The default QoS policy configures this queue to have CIR=0 and PIR=line rate. Other QoS policies can be applied if they specify a single service queue. Applying a non-default QoS policy allows the CIR/PIR of the outgoing traffic to be controlled, regardless of whether ATM shaping is configured.
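
As an illustration of the single-queue requirement in the two preceding sections, the following sketch defines a SAP-ingress QoS policy with one queue and non-default CIR/PIR values and applies it to the ATM VLL SAP; the policy ID, rates (in kb/s), and SAP are assumptions, and the equivalent sap-egress policy would be applied under the SAP egress context.

    configure qos sap-ingress 10 create
        queue 1 create
            rate 50000 cir 10000             # PIR 50 Mb/s, CIR 10 Mb/s (example rates)
        exit
    exit
    configure service apipe 100
        sap 1/1/1:0/100
            ingress
                qos 10                       # apply the single-queue policy at ingress
            exit
        exit
    exit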

2.1.3.4. Egress Shaping/Scheduling

Each SAP of an ATM VLL has an associated egress ATM traffic descriptor. The default traffic descriptor has service category UBR with zero MIR, resulting in endpoints associated with this descriptor being scheduled at the lowest priority on the ATM MDA. Egress traffic may be shaped or scheduled, depending on the configuration of the egress ATM traffic descriptor associated with the SAP. Table 3 describes how the different service categories and shaping settings affect egress transmission rates and priorities.

Shaping applies to CBR, rtVBR, and nrtVBR service categories and results in cells being transmitted in such a way as to satisfy a downstream ATM UPC function. That is, the transmission rate is limited (in the case of CBR, there is a hard limit of PIR, while rtVBR/nrtVBR transmit at SIR with short bursts, constrained by MBS, of up to PIR), and the inter-cell gap is also controlled.

Service categories nrtVBR (when not shaped), UBR+, and UBR are scheduled on the WRR scheduler, with the configured rates (MIR for UBR+, SIR for nrtVBR) determining the weight applied to the flow. Weights are between 1 and 255 and are determined by a formula applied to the configured rate. UBR flows (that is, those with no MIR) receive a weight of 1, and the maximum weight of 255 is reached by flows with configured rates of around 8 Mb/s. Scheduling does not apply a limit to the transmission rate; the available port bandwidth is shared out by the scheduler according to the weight, so if the other flows are quiescent, one flow may burst up to port bandwidth.

Shaping and scheduling of egress ATM VLL traffic is performed entirely at the ATM layer and is, therefore, not forwarding-class-aware. If the offered rate is greater than can be transmitted toward the customer (either because the shaping rate limits transmission or because the SAP does not receive sufficient servicing in the weighted round-robin used for scheduled SAPs), the per-VC queue will begin to discard traffic. These discards trigger the congestion control mechanisms in the MDA queues or in the IOM service egress queues associated with the SAP. For AAL5 SDU VLLs, these discards occur at the AAL5 SDU level. For N-to-1 VLLs, these discards occur at the level of the cell or a block of cells when cell concatenation is enabled.

Table 3:  Service Categories and Relative Priorities  

Flow Type          Transmission Rate                                  Priority
shaped CBR         Limited to configured PIR                          Strict priority over all
                                                                      other traffic
shaped rtVBR       Limited to configured SIR, but with bursts up      Strict priority over all
                   to PIR within MBS                                  but shaped CBR
shaped nrtVBR      Limited to configured SIR, but with bursts up      Strict priority over all
                   to PIR within MBS                                  scheduled traffic
scheduled nrtVBR   Weighted share (according to SIR) of port          In the same WRR scheduler
                   bandwidth remaining after shaped traffic has       as UBR+ and UBR
                   been exhausted
scheduled UBR+     Weighted share (according to MIR) of port          In the same WRR scheduler
                   bandwidth remaining after shaped traffic has       as nrtVBR and UBR
                   been exhausted
scheduled UBR      Weighted share (with weight of 1) of port          In the same WRR scheduler
                   bandwidth remaining after shaped traffic has       as nrtVBR and UBR+
                   been exhausted
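
The egress behavior summarized in Table 3 is selected through the ATM traffic descriptor associated with the SAP. The following sketch shows the general shape of such a configuration for a shaped CBR endpoint; the profile ID, rate, and the exact keyword spellings (notably the traffic parameters and the shaping option) are assumptions to be checked against the QoS command reference.

    configure qos atm-td-profile 5 create
        description "shaped CBR example"
        service-category cbr
        traffic pir 20000 shaping            # example PIR; keyword forms assumed
    exit
    configure service apipe 100
        sap 1/1/1:0/100
            atm
                egress
                    traffic-desc 5           # associate the egress ATM traffic descriptor
                exit
            exit
        exit
    exit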

2.2. Circuit Emulation (Cpipe) Services

This section provides information about Circuit Emulation (Cpipe) services. Cpipe is supported for the 7450 ESS and 7750 SR only.

Note:

Cpipe VLL is not supported in System Profile B. To determine if Cpipes are currently provisioned, use the show service service-using cpipe command before configuring profile B.

2.2.1. Mobile Infrastructure

Packet infrastructure is required within 2G, 2.5G, and 3G mobile networks to handle SMS messaging, web browsing, and emerging applications such as streaming video, gaming, and video on demand. Within existing 2.5G and 3G mobile networks, ATM is defined as the transport protocol. Within existing 2G networks, TDM is defined as the transport protocol. Due to the relatively low bit rate of existing handsets, most cell sites use 2 to 10 DS1s or E1s to transport traffic. When using ATM over multiple DS1/E1 links, Inverse Multiplexing over ATM (IMA) is very effective for aggregating the available bandwidth for maximum statistical gain and providing automatic resilience in the case of a link failure. Also, multiple DS1s or E1s are required to transport the 2G voice traffic.

Typically, low-cost devices are used at the many cell sites to transport multiple DS1s or E1s using ATM/IMA and TDM over an Ethernet/MPLS infrastructure. In Nokia applications, the circuit emulation would currently be performed using the 7705 SAR. This could be performed by DMXplore at the cell site. However, a large number of cell sites aggregate into a single switching center. Book-ending 7705 SAR nodes would require a very large number of systems at the switching center (see Figure 3). Therefore, a channelized OC3/STM1 solution is much more efficient at the switching center. Table 4 defines the cellsite backhaul types, CSR roles, and transport acronyms used in Figure 3.

With a channelized OC3/STM1 CES CMA/MDA in the 7750 SR, Nokia can provide a converged, flexible solution for IP/MPLS infrastructures for 2G/2.5G/3G mobile networks supporting both the CES (by CES CMA/MDA) and ATM/IMA transported traffic (by the ASAP MDA).

Figure 3:  Mobile Infrastructure 
Table 4:  Mobile Infrastructure Definitions 

Cellsite Backhaul Type    CSR Role                               Transport Acronyms
Microwave                 Circuit emulation                      CSR: Cellsite Service Router
xDSL                      ATM IMA termination into pseudowire    MAR: Mobile Aggregation Router
Fiber, dark or light      Ethernet VLL switching                 MSR: Mobile Service Router
ATM, ATM IMA              IP/MPLS aggregation                    CEC: Circuit Emulation Concentrator
Leased line                                                      MCR: Mobile Core Router
                                                                 BR: Border Router

2.2.2. Circuit Emulation Modes

Two modes of circuit emulation are supported: unstructured and structured. Unstructured mode is supported for DS1 and E1 channels per RFC 4553, Structure-Agnostic Time Division Multiplexing (TDM) over Packet (SAToP). Structured mode is supported for N*64 kb/s circuits as per RFC 5086, Structure-Aware Time Division Multiplexed (TDM) Circuit Emulation Service over Packet Switched Network (CESoPSN). Also, DS1, E1, and N*64 kb/s circuits are supported (per MEF8). TDM circuits are optionally encapsulated in MPLS or Ethernet as per the referenced standards in the following examples.

RFC 4553 (SAToP) MPLS PSN Encapsulation:

0              1               2               3                
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1  
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
|                           ...                                 | 
|              MPLS Label Stack                                 | 
|                           ...                                 | 
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
|                  SAToP Control Word                           | 
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
|                       OPTIONAL                                | 
+--                                                           --+ 
|                                                               | 
+--                                                           --+ 
|                 Fixed RTP Header (see [RFC3550])              | 
+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
|                   Packetized TDM data (Payload)               | 
|                            ...                                | 
|                            ...                                | 
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 

CESoPSN Packet Format for an MPLS PSN:

 0               1               2               3                
  0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1  
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
 |                           ...                                 | 
 |                    MPLS Label Stack                           | 
 |                           ...                                 | 
 +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
 |                  CESoPSN Control Word                         | 
 +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
 |                       OPTIONAL                                | 
 +--                                                           --+ 
 |                                                               | 
 +--                                                           --+ 
 |                 Fixed RTP Header (see [RFC3550])              | 
 +=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+=+ 
 |                  Packetized TDM data (Payload)                | 
 |                            ...                                | 
 |                            ...                                | 
 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 

MEF8 PSN Encapsulation:

 0                   1                   2                   3 
       0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
                                      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
                                      |  Destination MAC Address 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
                          Destination MAC Address (cont)              | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      |                     Source MAC Address 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
          Source MAC Address  (cont)  |   VLAN Ethertype (opt)        | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      |VLP|C|      VLAN ID (opt)      |         Ethertype             | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      |              ECID (20 bits)           |   RES (set to 0x102)  | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      | RES(0)|L|R| M |FRG|  Length   |         Sequence Number       | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
   opt|RTV|P|X|  CC   |M|     PT      |      RTP Sequence Number      | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
   opt|                            Timestamp                          | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
   opt|                         SSRC identifier                       | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      |                                                               | 
      |                        Adapted Payload                        | 
      |                                                               | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 
      |                     Frame Check Sequence                      | 
      +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

2.2.3. Circuit Emulation Parameters

2.2.3.1. Circuit Emulation Modes

All channels on the CES CMA/MDA are supported as circuits to be emulated across the packet network. Structure-aware mode is supported for N*64 kb/s channel groups in DS1 and E1 carriers. Fragmentation is not supported for circuit emulation packets (structured or unstructured).

For DS1 and E1 unstructured circuits, the framing can be set to unframed. When channel group 1 is created on an unframed DS1 or E1, it is automatically configured to contain all 24 or 32 channels, respectively.

N*64 kb/s circuit emulation supports basic and Channel Associated Signaling (CAS) options for timeslots 1 to 31 (channels 2 to 32) on E1 carriers and channels 1 to 24 on DS1 carriers. CAS in-band is supported; therefore, no separate pseudowire support for CAS is provided. The CAS option can be enabled or disabled for all channel groups on a specific DS1 or E1. If CAS operation is enabled, timeslot 16 (channel 17) cannot be included in the channel group on E1 carriers. Common channel signaling (CCS) operation is not supported.
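
To give a feel for the carrier-level provisioning behind an N*64 kb/s channel group, the following hedged sketch creates a 12-timeslot channel group on a DS1; the channelized port path, the context names, and the encap-type keyword are assumptions that depend on the CMA/MDA in use and should be verified against the port configuration reference.

    configure port 1/2/1.1.1.1               # example path down to a DS1 on a channelized CES CMA/MDA
        tdm
            ds1
                channel-group 1 create
                    timeslots 1-12           # N=12 timeslots in this channel group
                    encap-type cem           # assumed keyword: channel group used for circuit emulation
                    no shutdown
                exit
            exit
        exit
    exit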

2.2.3.2. Absolute Mode Option

For all circuit emulation channels except those with differential clock sources, RTP headers in absolute mode can be optionally enabled (disabled by default). For circuit emulation channels that use differential clock sources, this configuration is blocked. All channel groups on a specific DS1 or E1 can be configured for the same mode of operation.

When enabled for absolute mode operation, an RTP header will be inserted. On transmit, the CES IWF will insert an incrementing (by 1 for each packet) timestamp into the packets. All other fields will be set to zero. The RTP header will be ignored on receipt. This mode is enabled for interoperability purposes only for devices that require an RTP header to be present.

2.2.3.3. Payload Size

For DS3, E3, DS1, and E1 circuit emulation, the payload size is configurable in octets. The default values for this parameter are shown in Table 5. Unstructured payload sizes must be a multiple of 32 octets and at least 64 octets. TDM satellite supports only unstructured payloads.

Table 5:  Unstructured Payload Defaults 

TDM Circuit    Default Payload Size
DS1            192 octets
E1             256 octets

For N*64 kb/s circuits, the number of DS1/E1 frames to be included in the TDM payload is configurable in the range of 4 to 128 frames, in increments of 1; alternatively, the payload size can be specified in octets. The default number of frames is shown in Table 6 with the associated packet sizes. For the number of 64 kb/s channels included (N), the following default number of frames applies for no CAS: N=1, 64 frames; 2<=N<=4, 32 frames; 5<=N<=15, 16 frames; N>=16, 8 frames.

For CAS circuits, the number of frames can be 24 for DS1 and 16 for E1, which yields a payload size of N*24 octets for DS1 and N*16 octets for E1. For CAS, the signaling portion is an additional ((N+1)/2) bytes, where N is the number of channels. The additional signaling bytes are not included in the TDM payload size, although they are included in the actual packet size shown in Table 6.

The full ABCD signaling value can be derived before the packet is sent. This occurs for every 24 frames for DS1 ESF and every 16 frames for E1. For DS1 SF, ABAB signaling is actually sent because SF framing only supports AB signaling every 12 frames.
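
As a worked example of these sizing rules, using the document's own N*24 and ((N+1)/2) formulas with integer arithmetic, a DS1 CAS channel group with N=7 timeslots gives:

    TDM payload   = N*24 = 7*24 = 168 octets
    CAS signaling = (N+1)/2 = (7+1)/2 = 4 octets
    packet size   = 168 + 4 = 172 octets

For the E1 CAS case with 16 frames, the payload is 7*16 = 112 octets and the packet size is 112 + 4 = 116 octets, matching the N=7 row of Table 6.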

Table 6:  Structured Number of Default Frames 

Num        No CAS                                 DS1 CAS             E1 CAS
Timeslots  num-frames   Default      Minimum      Payload      Packet       Payload      Packet
           default      Payload      Payload      (24 Frames)  Size         (16 Frames)  Size
1          64           64           40           24           25           16           17
2          32           64           64           48           49           32           33
3          32           96           96           72           74           48           50
4          32           128          128          96           98           64           66
5          16           80           80           120          123          80           83
6          16           96           96           144          147          96           99
7          16           112          112          168          172          112          116
8          16           128          128          192          196          128          132
9          16           144          144          216          221          144          149
10         16           160          160          240          245          160          165
11         16           176          176          264          270          176          182
12         16           192          192          288          294          192          198
13         16           208          208          312          319          208          215
14         16           224          224          336          343          224          231
15         16           240          240          360          368          240          248
16         8            128          128          384          392          256          264
17         8            136          136          408          417          272          281
18         8            144          144          432          441          288          297
19         8            152          152          456          466          304          314
20         8            160          160          480          490          320          330
21         8            168          168          504          515          336          347
22         8            176          176          528          539          352          363
23         8            184          184          552          564          368          380
24         8            192          192          576          588          384          396
25         8            200          200          NA           NA           400          413
26         8            208          208          NA           NA           416          429
27         8            216          216          NA           NA           432          446
28         8            224          224          NA           NA           448          462
29         8            232          232          NA           NA           464          479
30         8            240          240          NA           NA           480          495
31         8            248          248          NA           NA           NA           NA

Note:

For DS1 CAS, num-frames is a multiple of 24; for E1 CAS, num-frames is a multiple of 16.

2.2.3.4. Jitter Buffer

For each circuit, the maximum receive jitter buffer is configurable. Packet delay from this buffer starts when the buffer is 50% full, to give an operational packet delay variance (PDV) equal to 75% of the maximum buffer size. The default value for the jitter buffer is nominally 5 ms. However, for lower-speed N*64 kb/s circuits and CAS circuits, the following default values are used to align with the default number of frames (and resulting packetization delay) to allow at least two frames to be received before starting to play out the buffer. The jitter buffer is at least four times the packetization delay. The following default jitter buffer values for structured circuits apply:

Basic CES (DS1 and E1):

N=1, 32 ms

2<=N<=4, 16 ms

5<=N<=15, 8 ms

N>=16, 5 ms

2.2.3.5. CES Circuit Operation

The circuit status can be tracked to be either up, loss of packets, or administratively down. Statistics are available for the number of in-service seconds and the number of out-of-service seconds when the circuit is administratively up.

Jitter buffer overrun and underrun counters are available in statistics and optionally logged while the circuit is up. On overruns, excess packets are discarded and counted. On underruns, all ones are sent for unstructured circuits. For structured circuits, all ones or a user-defined data pattern is sent based on configuration. Also, if CAS is enabled, all ones or a user-defined signaling pattern is sent based on configuration.

For each CES circuit, alarms can be optionally disabled/enabled for stray packets, malformed packets, packet loss, receive buffer overrun, and remote packet loss. An alarm is raised if the defect persists for 3 seconds, and cleared when the defect no longer persists for 10 seconds. These alarms are logged and trapped when enabled.
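
These per-circuit parameters are set in the cem context of the Cpipe SAP. The following fragment is a hedged sketch with example values and assumed keyword forms (to be checked against the command reference); a complete Cpipe service sketch follows in the next section.

    configure service cpipe 200
        sap 1/2/1.1.1.1                      # example CEM SAP; the identifier format depends on the channelization
            cem
                packet jitter-buffer 8 payload-size 96    # 8 ms buffer and 96-octet payload (example values)
                rtp-header                                # optional absolute-mode RTP header insertion
                report-alarm pktloss overrun underrun     # enable a subset of the optional alarms (assumed keywords)
            exit
        exit
    exit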

2.2.4. Services for Transporting CES Circuits

Each circuit can be optionally encapsulated in MPLS or Ethernet packets. Circuits encapsulated in MPLS use circuit pipes (Cpipes) to connect to the far-end circuit. Cpipes support either SAP spoke-SDP or SAP-SAP connections. Cpipes are supported over MPLS and GRE tunnels. The Cpipe default service MTU is set to 1514 bytes.

Circuits encapsulated in Ethernet can be selected as a SAP in Epipes. Circuits encapsulated in Ethernet can be SAP spoke-SDP connections or Ethernet CEM SAP-to-Ethernet SAP for all valid Epipe SAPs. Circuits requiring CEM SAP-to-CEM SAP connections use Cpipes. A local and remote EC-ID and far-end destination MAC address are configurable for each circuit. The CMA/MDA MAC address will be used as the source MAC address for these circuits.

For all service types, there are deterministic PIR=CIR values with class=EF parameters based on the circuit emulation parameters.

All circuit emulation services support the display of status of up, loss of packet (LOP), or admin down. Also, any jitter buffer overruns or underruns are logged.

Non-stop services are supported for Cpipes and CES over Epipes.
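
A minimal Cpipe sketch tying a CEM SAP to a spoke-SDP is shown below; the service ID, SAP string, and SDP binding are illustrative, and the vc-type must match the circuit type (for example, cesopsn or cesopsn-cas for structured circuits, satop-t1 or satop-e1 for unstructured ones).

    configure service cpipe 200 customer 1 vc-type cesopsn create
        sap 1/2/1.1.1.1 create               # CEM SAP on the structured channel group (example identifier)
        exit
        spoke-sdp 10:200 create              # pre-provisioned MPLS or GRE SDP toward the far-end PE
        exit
        no shutdown
    exit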

2.2.5. Network Synchronization Considerations

Each OC-3/STM-1 port can be independently configured to be loop-timed or node-timed. Each OC-3/STM-1 port can be configured to be a timing source for the node. TDM satellites only support node-timed mode.

Each DS-1 or E-1 channel without CAS signaling enabled can be independently configured to be loop-timed, node-timed, adaptive-timed, or differential-timed. Each DS-1 or E-1 channel with CAS signaling enabled can be independently configured to be loop-timed or node-timed. Adaptive timing and differential timing are not supported on DS-1 or E-1 channels with CAS signaling enabled. For the TDM satellite, each DS1/E1 channel can be loop-timed, node-timed, or differential-timed.

The adaptive recovered clock of a CES circuit can be used as a timing reference source for the node (ref1 or ref2). This is required to distribute network timing to network elements that only have packet connectivity to the network. One timing source on the CMA/MDA can be monitored for timing integrity. Both timing sources can be monitored if they are configured on separate CMA/MDAs while respecting the timing subsystem slot requirements.

If a CES circuit is being used for adaptive clock recovery at the remote end (such that the local end is now an adaptive clock master), Nokia recommends setting the DS-1/E-1 to be node-timed to prevent potential jitter issues in the recovered adaptive clock at the remote device. This is not applicable to TDM satellites.

For differential-timed circuits, the following timestamp frequencies are supported: 103.68 MHz (for recommended >100 MHz operation), 77.76 MHz (for interoperability with SONET/SDH-based systems such as TSS-5) and 19.44 MHz (for Y.1413 compliance). TDM satellite supports only 77.76 MHz.

Adaptive and differential timing recovery must comply with published jitter and wander specifications (G.823, G.824, and G.8261) for traffic interfaces under typical network conditions and for synchronous interfaces under specified packet network delay, loss, and delay variance (jitter) conditions. The packet network requirements to meet the synchronous interface requirements are to be determined during the testing phase.

On the 7450 ESS and 7750 SR CES CMA, a BITS port is also provided. The BITS port can be used as one of the two timing reference sources in the system timing subsystem. The operation of BITS ports configured as ref1 or ref2 is the same as existing ports configured as ref1 and ref2 with all options supported. The operation of the 7450 ESS or 7750 SR BITS source is unchanged and the BITS ports are not available on the CES MDAs (only SF/CPM BITS are available).

2.2.6. Cpipe Payload

Figure 4 shows the format of the CESoPSN TDM payload (with and without CAS) for packets carrying trunk-specific 64 kb/s service. In CESoPSN, the payload size is dependent on the number of timeslots used. This is not applicable to TDM satellite since only unstructured DS1/E1 is supported.

Figure 4:  CESoPSN MPLS Payload Format 

2.3. Ethernet Pipe (Epipe) Services

This section provides information about the Epipe service and implementation.

2.3.1. Epipe Service Overview

An Epipe service is the Nokia implementation of an Ethernet VLL based on the IETF “Martini Drafts” (draft-martini-l2circuit-trans-mpls-08.txt and draft-martini-l2circuit-encap-mpls-04.txt) and the IETF Ethernet Pseudowire Draft (draft-so-pwe3-ethernet-00.txt).

An Epipe service is a Layer 2 point-to-point service where the customer data is encapsulated and transported across a service provider IP, MPLS, or Provider Backbone Bridging (PBB) VPLS network. An Epipe service is completely transparent to the customer data and protocols. The Epipe service does not perform any MAC learning. A local Epipe service consists of two SAPs on the same node, whereas a distributed Epipe service consists of two SAPs on different nodes. SDPs are not used in local Epipe services.

Each SAP configuration includes a specific port or channel on which service traffic enters the router from the customer side (also called the access side). Each port is configured with an encapsulation type. If a port is configured with an IEEE 802.1Q (referred to as dot1q) encapsulation, a unique encapsulation value (ID) must be specified.

Figure 5:  Epipe/VLL Service 
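
A minimal distributed Epipe sketch follows; the service ID, the dot1q SAP and its encapsulation value, and the SDP binding are illustrative assumptions, and a pre-provisioned SDP toward the far-end PE is assumed.

    configure service epipe 300 customer 1 create
        sap 1/1/3:100 create                 # dot1q SAP with encapsulation value 100 (example)
        exit
        spoke-sdp 10:300 create              # pseudowire toward the remote PE
        exit
        no shutdown
    exit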

2.3.2. Epipe Service Pseudowire VLAN Tag Processing

Distributed Epipe services are connected using a pseudowire, which can be provisioned statically or dynamically and is represented in the system as a spoke-SDP. The spoke-SDP can be configured to process zero, one, or two VLAN tags as traffic is transmitted and received; see Table 7 and Table 8 for the ingress and egress tag processing. In the transmit direction, VLAN tags are added to the frame being sent. In the receive direction, VLAN tags are removed from the frame being received. This is analogous to the SAP operations on a null, dot1q, and QinQ SAP.

The system expects a symmetrical configuration with its peer; specifically, it expects to remove the same number of VLAN tags from received traffic as it adds to transmitted traffic. When removing VLAN tags from a spoke-SDP, the system attempts to remove the configured number of VLAN tags. If fewer tags are found, the system removes the VLAN tags found and forwards the resulting packet.

Because some of the related configuration parameters are local and not communicated in the signaling plane, an asymmetrical behavior cannot always be detected and so cannot be blocked. With an asymmetrical behavior, a protocol extraction will not necessarily function as it would with a symmetrical configuration, resulting in an unexpected operation.

The VLAN tag processing is configured as follows on a spoke-SDP in an Epipe service:

  1. Zero VLAN tags processed — This requires the configuration of vc-type ether under the spoke-SDP, or in the related PW template.
  2. One VLAN tag processed — This requires one of the following configurations:
    1. vc-type vlan under the spoke-SDP or in the related PW template
    2. vc-type ether and force-vlan-vc-forwarding under the spoke-SDP or in the related PW template
  3. Two VLAN tags processed — This requires the configuration of force-qinq-vc-forwarding [c-tag-c-tag | s-tag-c-tag] under the spoke-SDP or in the related PW template.
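
The following hedged fragments show where the options above sit in the configuration; the service and SDP identifiers are illustrative, and the zero-tag case is simply the default vc-type ether spoke-SDP.

    configure service epipe 301 spoke-sdp 10:301 vc-type vlan create     # one VLAN tag, signaled as vc-type vlan
    exit
    configure service epipe 302 spoke-sdp 10:302 create                  # one VLAN tag, vc-type ether with
        force-vlan-vc-forwarding                                         #   locally forced VLAN forwarding
    exit
    configure service epipe 303 spoke-sdp 10:303 create                  # two VLAN tags
        force-qinq-vc-forwarding s-tag-c-tag
    exit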

The PW template configuration provides support for BGP VPWS services.

The following restrictions apply to VLAN tag processing:

  1. The configuration of vc-type vlan and force-vlan-vc-forwarding is mutually exclusive.
  2. force-qinq-vc-forwarding [c-tag-c-tag | s-tag-c-tag] can be configured with the spoke-SDP signaled as either vc-type ether or vc-type vlan.
  3. The following are not supported with force-qinq-vc-forwarding [c-tag-c-tag | s-tag-c-tag] configured under the spoke-SDP, or in the related PW template:
    1. Multi-segment pseudowires.
    2. PBB-Epipe services
    3. force-vlan-vc-forwarding under the same spoke-SDP or PW template
    4. Eth-CFM LM tests are NOT supported on UP MEPs when force-qinq-vc-forwarding is enabled.

Table 7 and Table 8 describe the VLAN tag processing with respect to the zero, one, and two VLAN tag configuration described for the VLAN identifiers, Ethertype, ingress QoS classification (dot1p/DE), and QoS propagation to the egress (which can be used for egress classification and/or to set the QoS information in the innermost egress VLAN tag).

Table 7:  Epipe Spoke-SDP VLAN Tag Processing: Ingress 

Ingress (Received on Spoke-SDP)

Zero VLAN Tags

One VLAN Tag

Two VLAN Tags (enabled by force-qinq-vc-forwarding [c-tag-c-tag | s-tag-c-tag])

VLAN identifiers

N/A

Ignored

Both inner and outer ignored

Ethertype (to determine the presence of a VLAN tag)

N/A

0x8100 or value configured under sdp vlan-vc-etype

Both inner and outer VLAN tags: 0x8100, or outer VLAN tag value configured under sdp vlan-vc-etype (inner VLAN tag value must be 0x8100)

Ingress QoS (dot1p/DE) classification

N/A

Ignored

Both inner and outer ignored

QoS (dot1p/DE) propagation to egress

Dot1p/DE=0

Dot1p/DE taken from received VLAN tag

Dot1p/DE taken as follows:

  1. If the egress encapsulation is a Dot1q SAP, Dot1p/DE bits are taken from the outer received VLAN tag
  2. If the egress encapsulation is QinQ SAP, the s-tag bits are taken from the outer received VLAN tag and the c-tag bits from the inner received VLAN tag

The egress cannot be a spoke-sdp since force-qinq-vc-forwarding does not support multi-segment PWs.

Table 8:  Epipe-Spoke SDP VLAN Tag Processing: Egress 

Egress (Sent on Mesh or Spoke-SDP)

Zero VLAN Tags

One VLAN Tag

Two VLAN Tags (enabled by force-qinq-vc-forwarding [c-tag-c-tag | s-tag-c-tag])

VLAN identifiers (set in VLAN tags)

N/A

The tag is derived from one of the following:

  1. the vlan-vc-tag value configured in PW template or under the spoke-SDP
  2. value from the inner tag received on a QinQ SAP or QinQ spoke-SDP
  3. value from the VLAN tag received on a dot1q SAP or spoke-SDP (with vc-type vlan or force-vlan-vc-forwarding)
  4. value from the outer tag received on a qtag.* SAP
  5. 0 if there is no service delimiting VLAN tag at the ingress SAP or spoke-SDP

The inner and outer VLAN tags are derived from one of the following:

  1. vlan-vc-tag value configured in the PW template or under the spoke-SDP:
    1. If c-tag-c-tag is configured, both inner and outer tags are taken from the vlan-vc-tag value
    2. If s-tag-c-tag is configured, only the s-tag value is taken from vlan-vc-tag
  2. value from the inner tag received on a QinQ SAP for the c-tag-c-tag option, and value from the outer/inner tags received on a QinQ SAP for the s-tag-c-tag option
  3. value from the VLAN tag received on a dot1q SAP for the c-tag-c-tag option, and value from the VLAN tag for the outer tag and zero for the inner tag for the s-tag-c-tag option
  4. value from the outer tag received on a qtag.* SAP for the c-tag-c-tag option, and value from that VLAN tag for the outer tag and zero for the inner tag for the s-tag-c-tag option
  5. value 0 if there is no service delimiting VLAN tag at the ingress SAP or spoke-SDP

Ethertype (set in VLAN tags)

N/A

0x8100 or value configured under sdp vlan-vc-etype

Both inner and outer VLAN tags: 0x8100, or outer VLAN tag value configured under sdp vlan-vc-etype (inner VLAN tag value will be 0x8100)

Egress QoS (dot1p/DE) (set in VLAN tags)

N/A

The tag taken from the innermost ingress service delimiting tag can be one of the following:

  1. The inner tag received on a QinQ SAP or QinQ spoke-SDP
  2. value from the VLAN tag received on a dot1q SAP or spoke-SDP (with vc-type vlan or force-vlan-vc-forwarding)
  3. value from the outer tag received on a qtag.* SAP

Inner and outer dot1p/DE:

If c-tag-c-tag is configured, the inner and outer dot1p/DE bits are both taken from the innermost ingress service delimiting tag. It can be one of the following:

  1. inner tag received on a QinQ SAP
  2. value from the VLAN tag received on a dot1q SAP
  3. value from the outer tag received on a qtag.* SAP
  4. value 0 if there is no service delimiting VLAN tag at the ingress SAP or spoke-SDP

Note: Neither the inner nor outer dot1p/DE values can be explicitly set.

If s-tag-c-tag is configured, the inner and outer dot1p/DE bits are taken from the inner and outer ingress service delimiting tag (respectively). They can be:

  1. inner and outer tags received on a QinQ SAP
  2. value from the VLAN tag received on a dot1q SAP for the outer tag and zero for the inner tag
  3. value from the outer tag received on a qtag.* SAP for the outer tag and zero for the inner tag
  4. value 0 if there is no service delimiting VLAN tag at the ingress SAP

Note: Neither the inner nor outer dot1p/DE values can be explicitly set.

Any non-service delimiting VLAN tags are forwarded transparently through the Epipe service. SAP egress classification is possible on the outermost customer VLAN tag received on a spoke-SDP using the ethernet-ctag parameter in the associated SAP egress QoS policy.

2.3.3. Epipe Up Operational State Configuration Option

By default, the operational state of the Epipe is tied to the state of the two connections that comprise the Epipe. If either of the connections in the Epipe is operationally down, the Epipe service that contains that connection is also operationally down. The operator can configure a single SAP within an Epipe that does not affect the operational state of that Epipe, using the optional ignore-oper-state command. Within an Epipe, if a SAP that includes this optional command becomes operationally down, the operational state of the Epipe does not transition to down; it remains up. This does not change the fact that the SAP is down and that no traffic can transit an operationally down SAP. Removing or adding this command on the fly causes the operational state of the service to be re-evaluated based on the state of its SAPs.

Service OAM (SOAM) designers may consider using this command if an operationally up MEP configured on the operationally down SAP within an Epipe is required to receive and process SOAM PDUs. When a service is operationally down, this is not possible. For SOAM PDUs to continue to arrive on an operationally up MEP configured on the failed SAP, the service must be operationally up. Consider the case where an operationally up MEP is placed on a UNI-N or E-NNI and the UNI-C or E-NNI peer is shut down in such a way that it causes the SAP to become operationally down.

Two connections must be configured within the Epipe; otherwise, the service will be operationally down regardless of this command. The ignore-oper-state functionality will only operate as intended when the Epipe has one ingress and one egress. This command is not to be used for Epipe services with redundant connections that provide alternate forwarding in case of failure, even though the CLI does not prevent this configuration.

Support is available on Ethernet SAPs configured on ports or Ethernet SAPs configured on LAG. However, it is not allowed on SAPs using LAG profiles or if the SAP is configured on a LAG that has no ports.
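
A hedged example of the option on one SAP of an Epipe follows (service and SAP identifiers are illustrative).

    configure service epipe 300
        sap 1/1/3:100
            ignore-oper-state                # this SAP no longer drives the Epipe operational state
        exit
    exit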

2.3.4. Epipe with PBB

A PBB tunnel may be linked to an Epipe to a B-VPLS. MAC switching and learning is not required for the point-to-point service. All packets ingressing the SAP are PBB encapsulated and forwarded to the PBB tunnel to the backbone destination MAC address. Likewise, all the packets ingressing the B-VPLS destined for the ISID are PBB de-encapsulated and forwarded to the Epipe SAP. A fully specified backbone destination address must be provisioned for each PBB Epipe instance to be used for each incoming frame on the related I-SAP. If the backbone destination address is not found in the B-VPLS FDB, packets may be flooded through the B-VPLSs.

All B-VPLS constructs may be used including B-VPLS resiliency and OAM. Not all generic Epipe commands are applicable when using a PBB tunnel.

2.3.5. Epipe over L2TPv3

The L2TPv3 feature provides a framework to transport Ethernet pseudowire services over an IPv6-only network without MPLS. This architecture relies on the abundance of address space in the IPv6 protocol to provide unique far-end and local-end addressing that uniquely identify each tunnel and service binding.

L2TPv3 provides the capability of transporting multiple Epipes (up to 16K per system), by binding multiple IPv6 addresses to each node and configuring one SDP per Epipe.

Because the IPv6 addressing uniqueness identifies the customer and service binding, the L2TPv3 control plane is disabled in this mode.

L2TPv3 is supported on non-12e 7750 SR and 7450 ESS (mixed mode) and 7950 XRS platforms.

ETH-CFM is supported for OAM services.

Figure 6:  L2TPv3 SDP 

2.3.6. Ethernet Interworking VLL

Figure 7 provides an example of an Ethernet interworking VLL. The Ethernet interworking VLL provides a point-to-point Ethernet VLL service between Frame Relay (FR) attached users, ATM-attached users, and Ethernet-attached users across an IP/MPLS packet switched network. It effectively provides ATM and FR bridged encapsulation termination on the existing Epipe service of the 7750 SR.

Figure 7:  Application of Ethernet Interworking VLL 

The following connectivity scenarios are supported:

  1. a Frame Relay or ATM user connected to an ATM network communicating with an Ethernet user connected to a 7750 SR PE node on an IP/MPLS network
  2. a Frame Relay or ATM user connected to a 7750 SR PE node communicating with an Ethernet user connected to a 7750 SR PE node on an IP/MPLS network. This feature supports local cross-connecting when these users are attached to the same 7750 SR PE node.

Users attach over an ATM UNI with RFC 2684, Multiprotocol Encapsulation over ATM Adaptation Layer 5, tagged/untagged bridged Ethernet PDUs, a FR UNI using RFC 2427, Multiprotocol Interconnect over Frame Relay, tagged/untagged bridged Ethernet PDUs, or an Ethernet tagged/untagged UNI interface. However, the VCI/VPI and the data-link connection identifier (DLCI) are the identifiers of the SAP in the case of ATM and FR, respectively, and the received tags are transparent to the service, so are preserved.

The Ethernet pseudowire is established using T-LDP signaling and can use the ether or vlan VC types on the SDP. The SDP can be either an MPLS or GRE type.

2.3.7. VLL CAC

The VLL Connection Admission Control (CAC) is supported for the 7750 SR only and provides a method to administratively account for the bandwidth used by VLL services inside an SDP that consists of RSVP LSPs.

The service manager keeps track of the available bandwidth for each SDP. The SDP available bandwidth is applied through a configured booking factor. An administrative bandwidth value is assigned to the spoke-SDP. When a VLL service is bound to an SDP, the amount of bandwidth is subtracted from the adjusted available SDP bandwidth. When the VLL service binding is deleted from the SDP, the amount of bandwidth is added back into the adjusted SDP available bandwidth. If the total adjusted SDP available bandwidth is overbooked when adding a VLL service, a warning is issued and the binding is rejected.

This feature does not guarantee bandwidth to a VLL service because there is no change to the datapath to enforce the bandwidth of an SDP by means such as shaping or policing of constituent RSVP LSPs.
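
As a sketch of how this accounting is typically provisioned, a booking factor is applied to the SDP and an administrative bandwidth to each spoke-SDP; the command names and values below are assumptions to verify against the command reference.

    configure service sdp 10
        booking-factor 80                    # assumed command: make 80% of the summed RSVP LSP bandwidth bookable
    exit
    configure service epipe 300
        spoke-sdp 10:300
            bandwidth 5000                   # assumed command: administrative bandwidth for this VLL, in kb/s
        exit
    exit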

2.3.8. MC-Ring and VLL

To support redundant VLL access in ring configurations, the multi-chassis ring (MC-Ring) feature is applicable to VLL SAPs. A conceptual drawing of the operation is shown in Figure 8. The specific CPE that is connected behind the ring node has access to both BSAs through the same VLAN provisioned in all ring nodes. There are two SAPs (with the same VLAN) provisioned on both nodes.

If a closed ring status occurs, one of the BSAs becomes the master and will signal an active status bit on the corresponding VLL pseudowire. Similarly, the standby BSA will signal a standby status. With this information, the remote node can choose the correct path to reach the CPE. In case of a broken ring, the node that can reach the ring node, to which the CPE is connected by RNCV check, will become master and will signal corresponding status on its pseudowire.

The mapping of individual SAPs to the ring nodes is done statically through CLI provisioning. To keep the convergence time to a minimum, MAC learning must be disabled on the ring node so all CPE originated traffic is sent in both directions. If the status is operationally down on the SAP on the standby BSA, that part of the traffic will be blocked and not forwarded to the remote site.

Figure 8:  MC-Ring in a Combination with VLL Service 

For further information about Multi-Chassis Ring Layer 2 (with ESM), refer to the Advanced Configuration Guide.

2.4. Frame Relay VLL (Fpipe) Services

This section provides information about the Frame Relay VLL (Fpipe) service and implementation. Fpipe is supported for the 7750 SR only.

2.4.1. Frame Relay VLL

Figure 9 shows an application of a Frame Relay VLL. The Fpipe provides a point-to-point Frame Relay service between users connected to 7750 SR nodes on the IP/MPLS network. Users are connected to the 7750 SR PE nodes using Frame Relay PVCs.

Figure 9:  Application of a Fpipe 

PE1, PE2, and PE3 receive a standard Q.922 Core frame on the Frame Relay SAP and encapsulate it into a pseudowire packet according to the 1-to-1 Frame Relay encapsulation mode in RFC 4619, Encapsulation Methods for Transport of Frame Relay Over MPLS Networks. The 7750 SR Fpipe feature supports local cross-connecting when the users are attached to the same 7750 SR PE node.

The FR pseudowire is initiated using T-LDP signaling as specified in RFC 4447, Pseudo-wire Setup and Maintenance using LDP. The SDP can be an MPLS or a GRE type.
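
A minimal Fpipe sketch follows; the service ID, the Frame Relay SAP (port:DLCI), and the SDP binding are illustrative assumptions, and the default 1-to-1 Frame Relay pseudowire type is assumed.

    configure service fpipe 600 customer 1 create
        sap 1/2/2:16 create                  # Frame Relay SAP on DLCI 16 (example)
        exit
        spoke-sdp 10:600 create
        exit
        no shutdown
    exit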

2.4.2. Frame Relay-to-ATM Interworking (FRF.5) VLL

Figure 10 provides an example of a point-to-point Frame Relay service between users where one user is connected to an existing ATM network, the other to a 7750 SR PE node on an IP/MPLS network.

Figure 10:  Frame Relay-to-ATM Network Interworking (FRF.5) VLL 

This VLL uses an ATM AAL5 SDU pseudowire between the 7750 SR PE nodes. It is configured by adding a FR SAP to an Apipe service using vc-type atm-sdu. The 7750 SR PE2 node performs an FRF.5 interworking function to interwork the ingress and egress data paths as well as the operations required in an FR and an ATM VLL.

The pseudowire is initiated using Targeted LDP signaling as specified in the IETF Draft draft-ietf-pwe3-control-protocol-xx.txt. The SDP can be an MPLS or a GRE type.

2.4.3. Traffic Management Support

2.4.3.1. Frame Relay Traffic Management

Traffic management of Fpipes is supported for the 7750 SR only and is achieved through the application of ingress and egress QoS policies to SAPs like other Frame Relay SAPs. No queuing occurs on the MDA; all queuing, policing, and shaping occurs on the IOM and, therefore, traffic management is forwarding-class-aware. Forwarding classes may be determined by inspecting the DSCP marking of contained IP packets (for example) and this will determine both the queuing and the EXP bit setting of packets on an Fpipe.

2.4.3.2. Ingress SAP Classification and Marking

Ingress SAP classification and marking is supported for the 7450 ESS and 7750 SR only. DE=0 frames are subject to the CIR marking algorithm in the queue. Drop preference for these packets will follow the state of the CIR bucket associated with the ingress queue. The value is marked in the drop preference bit of the internal header and in the DE bit in the Q.922 frame header. DE=1 frames are classified in “out-of-profile” state and are not overwritten by the CIR marking in the ingress queue. The drop preference is set to high.

2.4.3.3. Egress Network EXP Marking

FC-to-EXP mapping is supported for the 7450 ESS and 7750 SR only and is as per the Network Egress QoS policy. Marking of the EXP field in both label stacks is performed.

2.4.3.4. Ingress Network Classification

Classification is supported for the 7450 ESS and 7750 SR only and is based on the EXP value of the pseudowire label; EXP-to-FC mapping is as per the network ingress QoS policy.

2.5. IP Interworking VLL (Ipipe) Services

This section provides information about IP Interworking VLL (Ipipe) services.

2.5.1. Ipipe VLL

Figure 11 provides an example of IP connectivity between a host attached to a point-to-point access circuit (FR, ATM, PPP) with routed PDU IPv4 encapsulation and a host attached to an Ethernet interface. Both hosts appear to be on the same LAN segment. This feature is supported for the 7450 ESS and 7750 SR and enables service interworking between different link layer technologies. A typical use of this application is in a Layer 2 VPN when upgrading a hub site to Ethernet while keeping the spoke sites with their existing Frame Relay or ATM IPv4 (7750 SR only) routed encapsulation.

Note:

Ipipe VLL is not supported in System Profile B. To determine if Ipipes are currently provisioned, use the show service service-using ipipe command before configuring profile B.

Figure 11:  IP Interworking VLL (Ipipe) 

The ATM SAP is supported by the 7750 SR only. It carries the IPv4 packet using RFC 2684, Multiprotocol Encapsulation over ATM Adaptation Layer 5, VC-Mux or LLC/SNAP routed PDU encapsulation.

The Frame Relay SAP uses RFC 2427, Multiprotocol Interconnect over Frame Relay, routed PDU encapsulation of an IPv4 packet. A PPP interface uses RFC 1332, The PPP Internet Protocol Control Protocol (IPCP), PPP IPCP encapsulation of an IPv4 packet. A Cisco-HDLC SAP uses the routed IPv4 encapsulation. The pseudowire uses the IP Layer 2 transport pseudowire encapsulation type.

Note:

The Ipipe is a point-to-point Layer 2 service. All packets received on one SAP of the Ipipe will be forwarded to the other SAP. No IP routing of customer packets occurs.

2.5.2. IP Interworking VLL Datapath

In Figure 11, PE 2 is manually configured with both CE 1 and CE 2 IP addresses. These are host addresses and are entered in /32 format. PE 2 maintains an ARP cache context for each IP interworking VLL. PE 2 responds to ARP request messages received on the Ethernet SAP. PE 2 responds with the Ethernet SAP configured MAC address as a proxy for any ARP request for CE 1 IP address. PE 2 silently discards any ARP request message received on the Ethernet SAP for an address other than that of CE 1. Likewise, PE 2 silently discards any ARP request message with the source IP address other than that of CE 2. In all cases, PE 2 keeps track of the association of IP to MAC addresses for ARP requests it receives over the Ethernet SAP.

To forward unicast frames destined for CE 2, PE 2 needs to know the CE 2 MAC address. When the Ipipe SAP is first configured and administratively enabled, PE2 sends an ARP request message for CE 2 MAC address over the Ethernet SAP. Until an ARP reply is received from CE2, providing the CE2 MAC address, unicast IP packets destined for CE2 will be discarded at PE2. IP broadcast and IP multicast packets are sent on the Ethernet SAP using the broadcast or direct-mapped multicast MAC address.

To forward unicast frames destined for CE 1, PE 2 validates the MAC destination address of the received Ethernet frame. The MAC address should match that of the Ethernet SAP. PE 2 then removes the Ethernet header and encapsulates the IP packet directly into a pseudowire without a control word. PE 1 removes the pseudowire encapsulation and forwards the IP packet over the Frame Relay SAP using RFC 2427, Multiprotocol Interconnect over Frame Relay, routed PDU encapsulation.

To forward unicast packets destined for CE1, PE2 validates the MAC destination address of the received Ethernet frame. If the IP packet is unicast, the MAC destination must match that of the Ethernet SAP. If the IP packet is multicast or broadcast, the MAC destination address must be an appropriate multicast or broadcast MAC address.

The other procedures are similar to the case of communication between CE 1 and CE 2, except that the ATM SAP and the Ethernet SAP are cross-connected locally and IP packets do not get sent over an SDP.

A PE does not flush the ARP cache unless the SAP goes administratively or operationally down. The PE with the Ethernet SAP sends unsolicited ARP requests to refresh the ARP cache every “T” seconds. ARP requests are staggered at an increasing rate if no reply is received to the first unsolicited ARP request. The value of T is configurable by the user through the mac-refresh CLI command.
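
The following minimal sketch reflects the datapath described above for the PE with the Ethernet SAP, using statically configured /32 CE addresses; all identifiers and addresses are illustrative, and ce-address-discovery could be enabled at the service level instead of entering ce-address values (see the next section).

    configure service ipipe 500 customer 1 create
        sap 1/1/4:200 create                 # Ethernet (dot1q) SAP toward the locally attached CE
            ce-address 192.0.2.2             # address of the Ethernet-attached CE
        exit
        spoke-sdp 10:500 create              # pseudowire toward the PE with the FR/ATM/PPP SAP
            ce-address 192.0.2.1             # address of the remote CE
        exit
        no shutdown
    exit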

2.5.3. Extension to IP VLL for Discovery of Ethernet CE IP Address

VLL services provide IP connectivity between a host attached to a point-to-point access circuit (FR, ATM, PPP) with routed PDU encapsulation and a host attached to an Ethernet interface. Both hosts appear to be on the same IP interface. This feature is supported only for IPv4 payload.

In deployments where it is not practical for operators to obtain and configure their customer CE address, the following behaviors apply:

  1. A service comes up without prior configuration of the CE address parameter under both the SAP and the spoke-SDP.
  2. Operators rely solely on received ARP messages from the Ethernet SAP-attached CE device to update the ARP cache with no further check of the validity of the source IP address of the ARP request message and the target IP address being resolved.
  3. The LDP address list TLV signaling the learned CE IP address to the remote PE is supported. This is to allow the PE with the FR SAP to respond to an invFR ARP request message received from the FR-attached CE device. Only Ethernet SAP and FR SAP can learn the CE address through ARP and invFR ARP, respectively. The 7450 ESS and 7750 SR OS do not support invATM ARP on an ATM interface.

2.5.3.1. VLL Ethernet SAP Processes

The operator can enable the following CE address discovery processes by configuring the ce-address-discovery in the config>service>ipipe context.

  1. The service is brought up without the CE address parameter configured at either the SAP or the spoke-SDP.
  2. The operator cannot configure the ce-address parameter under the config>service>ipipe>sap or config>service>ipipe>spoke-sdp context when the ce-address-discovery in the config>service>ipipe context is enabled. Conversely, the operator is not allowed to enable the ce-address-discovery option under the Ipipe service if it has a SAP and/or spoke-SDP with a user-entered ce-address parameter.
  3. While an ARP cache is empty, the PE does not forward unicast IP packets over the Ethernet SAP but forwards multicast/broadcast packets.
  4. The PE waits for an ARP request from the CE to learn both IP and MAC addresses of the CE. Both entries are added into the ARP cache. The PE accepts any ARP request message received over Ethernet SAP and updates the ARP cache IP and MAC entries with no further check of the source IP address of the ARP request message or of the target IP address being resolved.
  5. The 7450 ESS, 7750 SR, and 7950 XRS routers will always reply to a received ARP request message from the Ethernet SAP with the SAP MAC address and a source IP address of the target IP address being resolved without any further check of the latter.
  6. If the router received an address list TLV from the remote PE node with a valid IP address of the CE attached to the remote PE, the router will not check the CE IP address against the target IP address being resolved when replying to an ARP request over the Ethernet SAP.
  7. The ARP cache is flushed when the SAP bounces or when the operator manually clears the ARP cache. This results in the clearing of the CE address discovered on this SAP. However, when the SAP comes up initially or comes back up from a failure, an unsolicited ARP request is not sent over the Ethernet SAP.
  8. If the Ipipe service uses a spoke-SDP, the router includes the address list TLV in the interface parameters field of the pseudowire Forwarding Equivalent Class (FEC) TLV in the label mapping message. The address list TLV contains the current value of the CE address in the ARP cache. If no address was learned, an address value of 0.0.0.0 must be used.
  9. If the remote PE included the address list TLV in the received label mapping message, the local router updates the remote PE node with the most current IP address of the Ethernet CE using a T-LDP notification message with the TLV status code set to 0x0000002C and containing an LDP address list. The notification message is sent each time an IP address different from the current value in the ARP cache is learned. This includes when the ARP is flushed and the CE address is reset to the value of 0.0.0.0.
  10. If the remote PE did not include the address list TLV in the received label mapping message, the local router will not send any notification messages containing the address list TLV during the lifetime of the IP pseudowire.
  11. If the operator disables the ce-address-discovery option under the VLL service, the service manager instructs LDP to withdraw the service label and the service is shut down. The pseudowire labels are signaled and the service comes back up only if the operator re-enables the option or manually enters the ce-address parameter under the SAP and spoke-SDP.

2.5.3.1.1. VLL FR SAP Procedures

The operator enables the following CE address dynamic learning procedures by enabling the ce-address-discovery option under the VLL service on the 7450 ESS or 7750 SR.

  1. Allow the service to come up without the CE address parameter configured at both the SAP and spoke-SDP. If one or both parameters are configured, they are ignored.
  2. The operator cannot configure the ce-address parameter under SAP or spoke-SDP when the ce-address-discovery option under the VLL service is enabled. Conversely, the operator is not allowed to enable the ce-address-discovery option under the Ipipe service if it has a SAP and/or spoke-SDP with a user-entered ce-address parameter.
  3. If the router receives an invFR ARP request message over the FR SAP, it updates the ARP cache with the FR CE address. It also replies with the IP address of the CE attached to the remote PE if a valid address was advertised in the address list TLV by this remote PE. Otherwise, the router updates the ARP cache but does not reply to the invFR ARP.
  4. If the Ipipe service uses a spoke-SDP, the router includes the address list TLV in the interface parameters field of the pseudowire FEC TLV in the label mapping message. The address list TLV contains the current value of the CE address in the ARP cache. If no address was learned, then an address value of 0.0.0.0 is used.
  5. If the remote PE included the address list TLV in the received label mapping message, the local router updates the remote PE node with the most current IP address of the FR CE using a T-LDP status notification message containing an LDP address list. The notification message is sent each time an IP address different from the current value in the ARP cache is learned. This includes when the ARP is flushed and the CE address is reset to the value of 0.0.0.0.
  6. If the remote PE did not include the address list TLV in the received label mapping message, the local router does not send any notification messages containing the address list TLV during the lifetime of the IP pseudowire.

2.5.3.1.2. VLL ATM SAP Procedures

The operator enables the following CE address dynamic learning procedures by enabling the ce-address-discovery option under the VLL service on the 7750 SR.

  1. Allow the service to come up without the ce-address parameter configured at both the SAP and spoke-SDP. If one or both parameters are configured, they are ignored.
  2. The operator is not allowed to configure the ce-address parameter under the SAP or spoke-SDP when the ce-address-discovery option under the VLL service is enabled. Conversely, the operator is not allowed to enable the ce-address-discovery option under the Ipipe service if it has a SAP and/or spoke-SDP with a user-entered ce-address parameter.
  3. If the router receives an invATM ARP request message over the ATM SAP, the router silently discards it. The router does not support receiving or sending of an invATM ARP message.
  4. If the Ipipe service uses a spoke-SDP, the router includes the address list TLV in the interface parameters field of the pseudowire FEC TLV in the label mapping message. The address list TLV contains an address value of 0.0.0.0.
  5. If the remote PE included the address list TLV in the received label mapping message, the local router will not make further updates to the address list TLV to the remote PE node using a T-LDP status notification message since the learned IP address of the ATM-attached CE will never change away from the value of 0.0.0.0.
  6. If the remote PE did not include the address list TLV in the received label mapping message, the local router will not send any notification messages containing the address list TLV during the lifetime of the IP pseudowire.

2.5.3.1.3. VLL PPP/IPCP and Cisco-HDLC SAP Procedures

The procedures are similar to the case of an ATM SAP. The remote CE address can be learned only in the case of a PPP SAP; it is not sent in the address list TLV to the remote PE in either the PPP or Cisco-HDLC SAP case.

2.5.4. IPv6 Support on IP Interworking VLL

The 7450 ESS, 7750 SR, and 7950 XRS nodes support both the transport of IPv6 packets and the interworking of IPv6 Neighbor discovery/solicitation messages on an IP Interworking VLL. IPv6 capability is enabled on an Ipipe using the ce-address-discovery ipv6 command in the CLI.

2.5.4.1. IPv6 Datapath Operation

The IPv6 Datapath operation uses ICMPv6 extensions to automatically resolve IP address and link address associations. These are IP packets, as compared to ARP and invARP in IPv4, which are separate protocols and not based on IP packets. Manual configuration of IPv6 addresses is not supported on the IP Interworking VLL.

Each PE device intercepts ICMPv6 Neighbor Discovery (RFC 2461) packets, whether received over the SAP or over the pseudowire. The device inspects the packets to learn IPv6 interface addresses and CE link-layer addresses, modifies these packets as required according to the SAP type, and then forwards them toward the original destination. The PE is also capable of generating packets to interwork between CEs that use IPv6 Neighbor Discovery and CEs that use other neighbor discovery protocols to bring up the link; for example, IPv6CP for PPP.

The PE device learns the IPv6 interface addresses for its directly-attached CE and other IPv6 interface addresses for the far-end CE. The PE device also learns the link-layer address of the local CE and uses it when forwarding traffic between the local and far-end CEs.

As with IPv4, the SAP accepts both unicast and multicast packets. For unicast packets, the PE checks that the MAC address/IP addresses are consistent with that in the ARP cache before forwarding; otherwise, the packet is silently discarded. Multicast packets are validated and forwarded. If more than one IP address is received per MAC address in a neighbor discovery packet, or if multiple neighbor discovery packets are received for a specific MAC address, the currently cached address is overwritten with the most recent value.

Figure 12 shows the data path operation for IPv6 on an IP Interworking VLL between the Ethernet and PPP (IPv6CP) SAPs.

Figure 12:  Data Path for Ethernet CE to PPP Attached CE 

With reference to neighbor discovery between Ethernet and PPP CEs in Figure 12, the steps are as follows:

  1. Ethernet-attached CE2 sends a Neighbor Solicitation message toward PE2 in order to begin the neighbor discovery process.
  2. PE2 snoops this message, and the MAC address and IP address of CE2 are stored in the ARP cache of PE2 before forwarding the Neighbor Solicitation on the IP pseudowire to PE1.
  3. PE1 snoops this message that arrives on the IP pseudowire and stores the IP address of the remote CE2. Since CE3 is attached to a PPP SAP, which uses IPv6CP to bring up the link, PE1 generates a neighbor advertisement message and sends it on the Ipipe toward PE2.
  4. PE2 receives the neighbor advertisement on the Ipipe from PE1. It must replace the Layer 2 address in the neighbor advertisement message with the MAC address of the SAP before forwarding to CE2.

2.5.4.2. IPv6 Stack Capability Signaling

The 7750 SR, 7450 ESS, and 7950 XRS support IPv6 capability negotiation between PEs at the ends of an IP interworking VLL. Stack capability negotiation is performed if stack-capability-signaling is enabled in the CLI. Stack capability negotiation is disabled by default. Therefore, it must be assumed that the remote PE supports both IPv4 and IPv6 transport over an Ipipe.

A stack-capability sub-TLV is signaled by the two PEs using T-LDP so that they can agree on which stacks they should be using. By default, the IP pseudowire will always be capable of carrying IPv4 packets. Therefore, this capability sub-TLV is used to indicate if other stacks need to be supported concurrently with IPv4.

The stack-capability sub-TLV is a part of the interface parameters of the pseudowire FEC. This means that any change to the stack support requires that the pseudowire be torn down and re-signaled.

A PE that supports IPv6 on an IP pseudowire must signal the stack-capability sub-TLV in the initial label mapping message for the pseudowire. For the 7750 SR, 7450 ESS, and 7950 XRS, this means that the stack-capability sub-TLV must be included if both the stack-capability-signaling and ce-address-discovery ipv6 options are enabled under the VLL service.
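
As an illustrative sketch only (the service identifier is hypothetical, and the exact command forms should be verified against the CLI reference), both options would be enabled under the Ipipe service so that the stack-capability sub-TLV is included in the initial label mapping message:

config
   service
      ipipe 300
         ce-address-discovery ipv6
         stack-capability-signaling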

In Release 14.0, if one PE of an IP interworking VLL supports IPv6, while the far-end PE does not support IPv6 (or ce-address-discovery ipv6 is disabled), the pseudowire does not come up.

If a PE that supports IPv6 (that is, stack-capability-signaling ipv6 is enabled) has already sent an initial label mapping message for the pseudowire, but does not receive a stack-capability sub-TLV from the far-end PE in the initial label mapping message, or one is received but it is set to a reserved value, then the PE assumes that a configuration error has occurred. That is, if the remote PE did not include the stack-capability sub-TLV in the received label mapping message, or it does include the sub-TLV but with the IPv6 bit cleared, and if stack-capability-signaling is enabled, the local node with ce-address-discovery ipv6 enabled withdraws its pseudowire label with the LDP status code “IP Address type mismatch”.

If a 7750 SR, 7450 ESS, and 7950 XRS PE that supports IPv6 (that is, stack-capability-signaling ipv6 is enabled) has not yet sent a label mapping message for the pseudowire and does not receive a stack-capability sub-TLV from the far-end PE in the initial label mapping message, or one is received but it is set to a reserved value, the PE assumes that a configuration error has occurred and does not send a label mapping message of its own.

If the IPv6 stack is not supported by both PEs, or if at least one of the PEs supports IPv6 but does not have the ce-address-discovery ipv6 option selected in the CLI, IPv6 packets received from the AC are discarded by the PE. IPv4 packets are always supported.

If IPv6 stack support is implemented by both PEs, but the ce-address-discovery ipv6 command was not enabled on both so that the IP pseudowire came up with only IPv4 support, and one PE is later toggled to ce-address-discovery ipv6, then that PE sends a label withdraw with the LDP status code meaning “Wrong IP Address Type” (Status Code 0x0000004B9).

If the IPv6 stack is supported by both PEs and, therefore, the pseudowire is established with IPv6 capability at both PEs, but the ce-address-discovery ipv6 command on one PE is later toggled to no ce-address-discovery ipv6 so that a PE ceases to support the IPv6 stack, then that PE sends a label withdraw with the LDP status code meaning “Wrong IP Address Type”.

2.6. Services Configuration for MPLS-TP

MPLS-TP PWs are supported in Epipe, Apipe, and Cpipe VLLs and Epipe spoke termination on IES/VPRN and VPLS, I-VPLS, and B-VPLS on the 7450 ESS and 7750 SR only.

This section describes how SDPs and spoke-SDPs are used with MPLS-TP LSPs and static pseudowires with MPLS-TP OAM. It also describes how to conduct test service throughput for PWs, using lock instruct messages and loopback configuration.

2.6.1. MPLS-TP SDPs

Only MPLS SDPs are supported.

An SDP used for MPLS-TP supports the configuration of an MPLS-TP identifier as the far-end address as an alternative to an IP address. IP addresses are used if IP/MPLS LSPs are used by the SDP, or if MPLS-TP tunnels are identified by IPv4 source/destination addresses. MPLS-TP node identifiers are used if MPLS-TP tunnels are used.

Only static SDPs with signaling off support MPLS-TP spoke-SDPs.

The following CLI shows the MPLS-TP options:

config
   service
      sdp 10 [mpls | GRE] [create] 
         signaling <off | on> 
         [no] lsp <xyz> 
         [no] accounting-policy <policy-id>
         [no] adv-mtu-override
         [no] booking-factor <percentage>
         [no] class-forwarding
         [no] collect-stats
         [no] description <description-string>
         [no] far-end <ip-address> | [node-id 
              {<ip-address> | <0…4,294,967,295>} [global-id <global-id>]]
         [no] tunnel-far-end <ip-address> 
         [no] keep-alive
         [no] mixed-lsp-mode
         [no] metric <metric>
         [no] network-domain <network-domain-name>
         [no] path-mtu <mtu>  
         [no] pbb-etype <ethertype>
         [no] vlan-vc-etype <ethertype>
         [no] shutdown 

The far-end node-id ip-address global-id global-id command is used to associate an SDP far end with an MPLS-TP tunnel whose far-end address is an MPLS-TP node ID. If the SDP is associated with an RSVP-TE LSP, the far end must be a routable IPv4 address.

The system accepts the node-id being entered in either 4-octet IP address format (a.b.c.d) or unsigned integer format.
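
For example, the following sketch (the SDP number, node ID, global ID, and LSP name are hypothetical) shows an SDP whose far end is identified by an MPLS-TP node ID, with signaling off and an associated MPLS-TP LSP:

config
   service
      sdp 10 mpls create
         signaling off
         far-end node-id 10.0.0.2 global-id 1
         lsp "lsp-to-pe2"
         no shutdown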

The SDP far end refers to an MPLS-TP node-id/global-id only if:

  1. delivery type is MPLS
  2. signaling is off
  3. keep-alive is disabled
  4. mixed-lsp-mode is disabled
  5. adv-mtu-override is disabled

An LSP is only allowed to be configured under the SDP if the far-end information matches the LSP far-end information (whether MPLS-TP or RSVP).

  1. Only one LSP is allowed if the far end is an MPLS-TP node-id/global-id.
  2. MPLS-TP or RSVP-TE LSPs are supported. However, LDP and BGP LSPs are not blocked in the CLI.

Signaling LDP or BGP is blocked if:

  1. far-end node-id/global-id is configured
  2. control-channel-status is enabled on any spoke (or mate vc-switched spoke)
  3. pw-path-id is configured on any spoke (or mate vc-switched spoke)
  4. IES/VPRN interface spoke control-word is enabled

The following commands are blocked if a far-end node-id/global-id is configured:

  1. class-forwarding
  2. tunnel-far-end
  3. mixed-lsp-mode
  4. keep-alive
  5. ldp or bgp-tunnel
  6. adv-mtu-override

2.6.2. VLL Spoke SDP Configuration

The system can be a T-PE or an S-PE for a pseudowire (a spoke-SDP) supporting MPLS-TP OAM. MPLS-TP related commands are applicable to spoke-SDPs configured under all services supported by MPLS-TP pseudowires. All commands and functions that are applicable to spoke-SDPs are supported, except for those that explicitly depend on T-LDP signaling of the pseudowire, or as stated following. Likewise, all existing functions on a specified service SAP are supported if the spoke-SDP that it is mated to is MPLS-TP.

vc-switching is supported.

The following describes how to configure MPLS-TP on an Epipe VLL. However, a similar configuration applies to other VLL types.

A spoke-SDP bound to an SDP with the mpls-tp keyword cannot be no shutdown unless the ingress label, the egress label, the control word, and the pw-path-id are configured, as follows:

config
   service
      epipe
         [no] spoke-sdp sdp-id[:vc-id]
              [no] hash-label
              [no] standby-signaling-slave
 
         [no] spoke-sdp sdp-id[:vc-id] [vc-type {ether | vlan}]
             [create] [vc-switching] [no-endpoint | {endpoint [icb]}] 
            egress
               vc-label <out-label>
            ingress
               vc-label <in-label>
            control-word 
            bandwidth <bandwidth> 
            [no] pw-path-id 
               agi <agi> 
               saii-type2 <global-id:node-id:ac-id>
               taii-type2 <global-id:node-id:ac-id>
               exit
             [no] control-channel-status 
                [no] refresh-timer <value> 
                [no] request-timer <request-timer-secs> retry-timer <retry-timer-secs>
                     [timeout-multiplier <multiplier>]
                [no] acknowledgment 
                [no] shutdown
                exit

The pw-path-id context is used to configure the end-to-end identifiers for an MS-PW. These may not coincide with those for the local node if the configuration is at an S-PE. The SAII and TAII are consistent with the source and destination of a label mapping message for a signaled PW.

The control-channel-status command enables static pseudowire status signaling. This is valid for any spoke-SDP where signaling none is configured on the SDP (for example, where T-LDP signaling is not in use). The refresh timer is specified in seconds, from 10-65535, with a default of 0 (off). This value can only be changed if control-channel-status is shutdown.

Commands that rely on PW status signaling are allowed if control-channel-status is configured for a spoke-SDP bound to an SDP with signaling off, but the system will use control channel status signaling rather than T-LDP status signaling. The ability to configure control channel status signaling on a specified spoke-SDP is determined by the credit-based algorithm described earlier. Control channel status for a pseudowire only counts against the credit-based algorithm if the pseudowire is in a no shutdown state and has a non-zero refresh timer and a non-zero request timer.

A shutdown of a service will result in the static PW status bits for the corresponding PW being set.

The spoke-SDP is held down unless the pw-path-id is complete.

The system will accept the node-id of the pw-path-id saii or taii being entered in either 4-octet IP address format (a.b.c.d) or unsigned integer format.

The control-word must be enabled to use MPLS-TP on a spoke-SDP.
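
The following sketch pulls these requirements together for an Epipe spoke-SDP bound to an MPLS-TP SDP; it assumes the SDP has signaling off and that the other conditions listed later in this section are met, and the service ID, label values, and AII values are hypothetical. The static ingress and egress labels, the control word, and the pw-path-id are all configured before the spoke-SDP is taken out of shutdown.

config
   service
      epipe 100
         spoke-sdp 10:100 create
            ingress
               vc-label 2001
            egress
               vc-label 2000
            control-word
            pw-path-id
               saii-type2 1:10.0.0.1:100
               taii-type2 1:10.0.0.2:100
               exit
            control-channel-status
               refresh-timer 60
               no shutdown
               exit
            no shutdown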

The optional acknowledgment to a static PW status message is enabled using the acknowledgment command. The default is no acknowledgment.

The pw-path-id is only configurable if all of the following are true:

  1. in network mode D
  2. sdp signaling is off
  3. control-word is enabled (control-word is disabled by default)
  4. on service type Epipe, VPLS, Cpipe, or IES/VPRN interface
  5. An MPLS-TP node-id/global-id is configured under the config>router>mpls>mpls-tp context. This is required for OAM to provide a reply address.

In the vc-switching case, if configured to make a static MPLS-TP spoke SDP to another static spoke SDP, the TAII of the spoke-SDP must match the SAII of its mate, and the SAII of the spoke-SDP must match the TAII of its mate.

A control-channel-status no shutdown is allowed only if all of the following are true:

  1. in network-mode D
  2. sdp signaling is off
  3. control-word is enabled (control-word by default is disabled)
  4. the service type is Epipe, Apipe, VPLS, Cpipe, or IES/VPRN interface
  5. pw-status-signaling is enabled (as follows)
  6. pw-path-id is configured for this spoke

The hash-label option is only configurable if the SDP far end is not a node-id/global-id.

The control channel status request mechanism is enabled when the request-timer parameter is non-zero. When enabled, this overrides the normal RFC-compliant refresh timer behavior: the refresh timer value in the status packet defined in RFC 6478 is always set to zero, and the refresh timer in the sending node is taken from the request-timer <timer1> value. The two mechanisms are not compatible with each other; one node cannot send request timers while the other is configured for the refresh timer. In a specified node, the request timer can only be configured with both the acknowledgment and refresh timers disabled.

When configured, the procedures following are used instead of the RFC 6478 procedures when a PW status changes.

The CLI commands to configure control channel status requests are as follows:

[no] control-channel-status 
     [no] refresh-timer <value> //0,10-65535, default:0 
     [no] request-timer <timer1> retry-timer <timer2> 
                  [timeout-multiplier <value>]
     [no] shutdown
     exit

request-timer <timer1>: 0, 10-65535 seconds, default: 0

  1. This parameter determines the interval at which PW status messages, including a reliable delivery TLV with the “request” bit set (as follows), are sent. This cannot be enabled if the refresh-timer is not equal to zero (0).

retry-timer <timer2>: 3-60 seconds

  1. This parameter determines the timeout interval if no response to a PW status request is received. It defaults to zero (0) when no retry-timer is configured.

timeout-multiplier <value>: 3-15

  1. If a requesting node does not get a response after retry-timer × timeout-multiplier, the node must assume that the peer is down. This defaults to zero (0) when no retry-timer is configured.

2.6.2.1. Epipe VLL Spoke SDP Termination on IES, VPRN, and VPLS

All existing commands (except for those explicitly specified following) are supported for spoke-SDP termination on IES, VPRN, and VPLS (VPLS, I-VPLS and B-VPLS and routed VPLS) services. Also, the MPLS-TP commands listed preceding are supported. The syntax, default values, and functional behavior of these commands are the same as for Epipe VLLs, as specified preceding.

Also, the PW Control Word is supported on spoke-SDP termination on IES/VPRN interfaces for pseudowires of type “Ether” with statically assigned labels (signaling off) for spoke-SDPs configured with MPLS-TP Identifiers.

The following CLI commands under spoke-SDP are blocked for spoke-SDPs with statically assigned labels (and the SDP has signaling off) and MPLS-TP identifiers:

  1. no status-signaling — This command causes the spoke-SDP to fall back to using PW label withdrawal as a status signaling method. However, T-LDP is not supported on MPLS-TP SDPs. Control channel status signaling should always be used for signaling PW status. Since active/standby dual-homing into a routed VPLS requires the use of T-LDP label withdrawal as the method for status signaling, active/standby dual-homing into routed VPLS is not supported if the spoke-SDPs are MPLS-TP.
  2. propagate-mac-flush — This command requires the ability to receive MAC Flush messages using T-LDP signaling and is blocked.

2.6.3. Configuring MPLS-TP Lock Instruct and Loopback

MPLS-TP supports lock instruct and loopback for PWs.

2.6.3.1. MPLS-TP PW Lock Instruct and Loopback Overview

The lock instruct and loopback capability for MPLS-TP PWs includes the ability to:

  1. administratively lock a spoke-SDP with MPLS-TP identifiers
  2. divert traffic to and from an external device connected to a SAP
  3. create a data path loopback on the corresponding PW at a downstream S-PE or T-PE that was not originally bound to the spoke-SDP being tested
  4. forward test traffic from an external test generator into an administratively locked PW, while simultaneously blocking the forwarding of user service traffic

MPLS-TP provides the ability to conduct test service throughput for PWs, using lock instruct messages and loopback configuration. To conduct a service throughput test, you can apply an administrative lock at each end of the PW. This creates a test service that contains the SAP connected to the external device. Lock request messaging is not supported. You can also configure a MEP to send a lock instruct message to the far-end MEP. The lock instruct message is carried in a G-ACh on Channel 0x0026. A lock can be applied using the CLI or NMS. The forwarding state of the PW can be either active or standby.

After locking a PW, you can put it into loopback mode (for two-way tests) so the ingress data path in the forward direction is cross-connected to the egress data path in the reverse direction of the PW. This is accomplished by configuring the source MEP to send a loopback request to an intermediate MIP or MEP. A PW loopback is created at the PW level, so everything under the PW label is looped back. This distinguishes a PW loopback from a service loopback, where only the native service packets are looped back. The loopback is also configured through CLI or NMS.

The following MPLS-TP lock instruct and loopback functionality is supported:

  1. An MPLS-TP loopback can be created for an Epipe, Cpipe or Apipe VLL.
  2. Test traffic can be inserted at an Epipe, Cpipe or Apipe VLL endpoint or at an Epipe spoke-sdp termination on a VPLS interface.

2.6.3.2. Lock PW Endpoint Model

You can administratively lock a spoke-SDP by locking the host service using the admin-lock parameter of the tools command. The following conditions and constraints apply:

  1. Both ends of a PW or MS-PW represented by a spoke-SDP must be administratively locked.
  2. Test traffic can be injected into the spoke-SDP using a SAP defined within a test service. The test service must be identified in the tools command at one end of the locked PW.
  3. All traffic is forwarded to and from the test SAP defined in the test service, which must be of a type that is compatible with the spoke-SDP.
  4. Traffic to and from a non-test SAP is dropped. If no test SAP is defined, all traffic received on the spoke-SDP is dropped, and all traffic received on the paired SAP is also dropped.
  5. If a spoke-SDP is administratively locked, it is treated as operationally down. If a VLL SAP is paired with a spoke-SDP that is administratively locked, the SAP OAM treats this as if the spoke-SDP is operationally down.
  6. If a VPLS interface is paired to a spoke-SDP that is administratively locked, the L2 interface is taken down locally.
  7. Control-channel-status must be shutdown prior to administratively locking a spoke-SDP.

2.6.3.3. PW Redundancy and Lock Instruct and Loopback

It is possible to apply an administrative lock and loopback to one or more spoke-SDPs within a redundant set. That is, it is possible to move a spoke-SDP from an existing endpoint to a test service. When an administrative lock is applied to a spoke-SDP, it becomes operationally down and cannot send or receive traffic from the normal service SAP or spoke interface. If the lock is applied to all the spoke-SDPs in a service, all the spoke-SDPs will become operationally down.

2.6.3.4. Configuring a Test SAP for an MPLS-TP PW

A test SAP is configured under a unique test service type. This looks similar to a normal service context, but will normally only contain a SAP configuration:

config
   service
      epipe <service-id> [test] [create]
         [no] sap <sap-id>
         [no] shutdown
      [no] shutdown
config
   service
      apipe <service-id> [vc-type {atm-vcc | atm-sdu | atm-vpc | atm-cell}]
       [test] [create]
         [no] sap <sap-id>
         [no] shutdown
      [no] shutdown
config
   service
      cpipe <service-id> [vc-type {satop-e1 | satop-t1 | cesopsn | cesopsncas}]
       [test] [create]
         [no] sap <sap-id>
         [no] shutdown
      [no] shutdown

You can define test SAPs appropriate to any service or PW type supported by MPLS-TP, including an Apipe, Cpipe or Epipe. The following test SAP types are supported:

  1. Ethernet NULL, 1q, Q-in-Q
  2. ATM VC, VP, VT, and so on
  3. TDM E1, E3, DS0, DS3, and so on

The following constraints and conditions apply:

  1. Up to a maximum of 16 test services can be configured per system.
  2. It is possible to configure access ingress and access egress QoS policies on a test SAP, as well as any other applicable SAP-specific commands and overrides.
  3. Vc-switching and spoke-SDP are blocked for services configured under the test context.
  4. The test keyword is mutually exclusive with vc-switching and customer.
  5. Valid commands under a compatible test service context do not need to be blocked just because the service is a test service.

2.6.3.5. Configuring an Administrative Lock

An administrative lock is configured on a spoke-SDP using the admin-lock option of the tools perform command, as follows:

tools
    perform
      service-id <svc-id> 
         admin-lock
            pw
               sdp <sdp-id> admin-lock [test-svc-id <id>]
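
For example, flattening the preceding command tree with hypothetical identifiers, an invocation that locks a spoke-SDP bound to SDP 10 in service 100 and uses test service 1000 for traffic injection might look as follows:

tools perform service-id 100 admin-lock pw sdp 10 admin-lock test-svc-id 1000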

The following conditions and constraints apply for configuring an administrative lock:

  1. The lock can be configured either on a spoke-SDP that is bound to a SAP, another spoke-SDP or a VPLS interface.
  2. The lock is only allowed if a PW path ID is defined (for example, for static PWs with MPLS-TP identifiers).
  3. The lock cannot be configured on spoke-SDPs that are an Inter-Chassis Backup (ICB) or if the vc-switching keyword is present.
  4. The control-channel-status must be shutdown. The operator should also shut down control-channel-status on spoke-SDPs belonging to an MS-PW at an S-PE whose far ends are administratively locked at its T-PEs. This should be enforced through network management if using the 5620 SAM.
  5. When enabled, all traffic on the spoke-SDP is sent to and from a paired SAP that has the test keyword present, if such a SAP exists in the X endpoint (see Pseudowire Redundancy Service Models). Otherwise, all traffic to and from the paired SAP is dropped.
  6. The lock can be configured at a spoke-SDP that is bound to a VLL SAP or a VPLS interface.
  7. The test-svc-id parameter refers to the test service that should be used to inject test traffic into the service. The test service must be of a compatible type to the existing spoke-SDP under test (see Table 9).
  8. If the test-svc-id parameter is not configured on an admin-locked spoke-SDP, user traffic is blocked on the spoke-SDP.

The service manager should treat an administrative lock as a fault from the perspective of a paired SAP that is not a test SAP. This will cause the appropriate SAP OAM fault indication.

Table 9 maps supported real services to their corresponding test services.

Table 9:  Mapping of Real Services to Test Service Types 

Service          Test Service
-------          ------------
CPIPE            CPIPE
EPIPE            EPIPE
APIPE            APIPE
VPLS             EPIPE
PBB VPLS         EPIPE

2.6.3.6. Configuring a Loopback

If a loopback is configured on a spoke-SDP, all traffic in the ingress direction of the spoke-SDP and associated with the ingress vc-label is forwarded to the egress direction of the spoke-SDP. A loopback may be configured at either a T-PE or an S-PE. It is recommended that an administrative lock be configured before configuring the loopback on a spoke-SDP. This is enforced by the NMS.

A data path loopback is configured using a tools perform command, as follows:

tools
   perform
      service-id <svc-id> 
         loopback
            pw
               sdp <sdp-id>:<vc-id> {start | stop}
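
For example, following the command tree above with hypothetical identifiers, a loopback on spoke-SDP 10:100 in service 100 would be started and later stopped as follows:

tools perform service-id 100 loopback pw sdp 10:100 start
tools perform service-id 100 loopback pw sdp 10:100 stop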

The following constraints and conditions apply for PW loopback configuration:

  1. The spoke-SDP cannot be an ICB or be bound to a VPLS interface.
  2. A PW path ID must be configured, that is, the spoke-SDP must be static and use MPLS-TP identifiers.
  3. The spoke-SDP must be bound to a VLL mate SAP or another spoke-SDP that is not an ICB.
  4. The control-channel-status must be shutdown.
  5. The following are disabled on a spoke-SDP for which a loopback is configured:
    1. Filters
    2. PW shaping
  6. Only network port QoS is supported.

2.6.4. Switching Static MPLS-TP to Dynamic T-LDP Signaled PWs

Some use cases for MPLS-TP require an MPLS-TP based aggregation network and an IP-based core network to interoperate, providing seamless transport of packet services across static MPLS-TP and dynamically signaled domains using an MS-PW. In this environment, end-to-end VCCV Ping and VCCV Trace may be used on the MS-PW, as illustrated in Figure 13.

Figure 13:  Static - Dynamic PW Switching with MPLS-TP 

Services are backhauled from the static MPLS-TP network on the left to the dynamic IP/MPLS network on the right. The router acts as an S-PE interconnecting the static and dynamic domains.

The router implementation supports such use cases through the ability to mate a static MPLS-TP spoke SDP, with a defined pw-path-id, to a FEC128 spoke SDP. The dynamically signaled spoke SDP must be MPLS; GRE PWs are not supported, but the T-LDP signaled PW can use any supported MPLS tunnel type (for example, LDP, RSVP-TE, static, BGP). The control-word must be enabled on both mate spoke SDPs.

Mapping of control channel status signaling to and from T-LDP status signaling at the router S-PE is also supported.

The use of VCCV Ping and VCCV Trace on an MS-PW composed of a mix of static MPLS-TP and dynamic FEC128 segments is described in more detail in the 7450 ESS, 7750 SR, 7950 XRS, and VSR OAM and Diagnostics Guide.

2.7. VCCV BFD support for VLL, Spoke-SDP Termination on IES and VPRN, and VPLS Services

This section provides information about VCCV BFD support for VLL, spoke-SDP Termination on IES and VPRN, and VPLS Services. VCCV BFD is supported on the 7450 ESS and 7750 SR only.

2.7.1. VCCV BFD Support

The SR OS supports RFC 5885, which specifies a method for carrying BFD in a pseudowire-associated channel. This enables BFD to monitor the pseudowire between its terminating PEs, regardless of how many P routers or switching PEs the pseudowire may traverse. This makes it possible for faults that are local to individual pseudowires to be detected, whether or not they also affect forwarding for other pseudowires, LSPs, or IP packets. VCCV BFD is ideal for monitoring specific high-value services, where detecting forwarding failures (and potentially restoring from them) in the minimal amount of time is critical.

VCCV BFD is supported on VLL services using T-LDP spoke-SDPs or BGP VPWS. It is supported for Apipe, Cpipe, Epipe, Fpipe, and Ipipe VLL services.

VCCV BFD is supported on IES/VPRN services with T-LDP spoke-SDP termination (for Epipes and Ipipes).

VCCV BFD is supported on LDP- and BGP-signaled pseudowires, and on pseudowires with statically configured labels, whether signaling is off or on for the SDP. VCCV BFD is not supported on MPLS-TP pseudowires.

VCCV BFD is supported on VPLS services (both spoke-SDPs and mesh SDPs). VCCV BFD is configured by:

  1. configuring generic BFD session parameters in a BFD template
  2. applying the BFD template to a spoke-SDP or pseudowire-template binding, using the bfd-template template_name command
  3. enabling the template on that spoke-SDP, mesh SDP, or pseudowire-template binding using the bfd-enable command

2.7.2. VCCV BFD Encapsulation on a Pseudowire

The SR OS supports IP/UDP encapsulation for BFD. With this encapsulation type, the UDP headers are included on the BFD packet. IP/UDP encapsulation is supported for pseudowires that use router alert (VCCV Type 2), and for pseudowires with a control word (VCCV Type 1). In the control word case, the IPv4 channel (channel type 0x0021) is used. On the node, the destination IPv4 address is fixed at 127.0.0.1 and the source address is 127.0.0.2.

VCCV BFD sessions run end-to-end on a switched pseudowire. They do not terminate on an intermediate S-PE; therefore, the TTL of the pseudowire label on VCCV BFD packets is always set to 255 to ensure that the packets reach the far-end T-PE of an MS-PW.

2.7.3. BFD Session Operation

BFD packets flow along the full length of a PW, from T-PE to T-PE. Since they are not intercepted at an S-PE, single-hop initialization procedures are used.

A single BFD session exists per pseudowire.

BFD runs in asynchronous mode.

BFD operates as a simple connectivity check on a pseudowire. The BFD session state is reflected in the MIBs and in the show>service id>sdp>vccv-bfd session command. Therefore, BFD operates in a similar manner to other proactive OAM tools, such as SAA with VCCV Ping. BFD is not used to change the operation state of the pseudowire or to modify pseudowire redundancy. Mapping the BFD state to SAP OAM is not supported.

VCCV BFD runs in software with a minimum supported timer interval of 1 s.

BFD is only used for fault detection. While RFC 5885 provides a mode in which VCCV BFD can be used to signal pseudowire status, this mode is only applicable for pseudowires that have no other status signaling mechanism in use. LDP status and static pseudowire status signaling always take precedence over BFD-signaled PW status, and BFD-signaled pseudowire status is not used on pseudowires that use LDP status or static pseudowire status signaling mechanisms.

2.7.4. Configuring VCCV BFD

Generic BFD session parameters are configured for VCCV using the bfd-template command, in the config>router>bfd context. However, there are some restrictions.

For VCCV, the BFD session cannot terminate on the CPM network processor. Therefore, an error is generated if the user tries to bind a BFD template using the type cpm-np command within the config>router>bfd>bfd-template context.

As well, the minimum supported value for the transmit-interval and receive-interval commands when BFD is used for VCCV-BFD is 1s. Attempting to bind a BFD template with any unsupported transmit or receive interval will generate an error.

Finally, attempting to commit changes to a BFD template that is already bound to a pseudowire where the new values are invalid for VCCV BFD will result in an error.

If the preceding BFD timer values are changed in a specified template, any BFD sessions on pseudowires to which that template is bound will try to renegotiate their timers to the new values.

Commands within the BFD-template use a begin-commit model. To edit any value within the BFD template, a begin command needs to be executed after the template context has been entered. However, a value will still be stored temporarily in the template-module until the commit command is issued. When the commit is issued, values will be used by other modules such as the MPLS-TP module and BFD module.

For pseudowires where the pseudowire template does not apply (for example, LDP-signaled spoke-SDPs for a VLL service that uses the pseudowire ID FEC (FEC128), or spoke-SDPs with static pseudowire labels with or without MPLS-TP identifiers), a named BFD template is configured on the spoke-SDP using the config service [epipe | cpipe | apipe | fpipe | ipipe] spoke-sdp bfd-template name command, then enabled using the config service [epipe | cpipe | apipe | fpipe | ipipe] spoke-sdp bfd-enable command.
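
As a sketch only, a BFD template could be defined and then bound and enabled on an FEC128 Epipe spoke-SDP as follows; the template name, interval values (assumed to be in milliseconds, corresponding to the 1 second minimum), and service identifiers are hypothetical, and the begin/commit editing model described earlier applies when changing template values.

config
   router
      bfd
         bfd-template "vccv-bfd" create
            transmit-interval 1000
            receive-interval 1000
         exit

config
   service
      epipe 100
         spoke-sdp 10:100 create
            bfd-template "vccv-bfd"
            bfd-enable
         exit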

Configuring and enabling a BFD template on a static pseudowire already configured with MPLS-TP identifiers (that is, with a pw-path-id) or on a spoke-SDP with a configured pw-path-id is not supported. Likewise, if a BFD template is configured and enabled on a spoke-SDP, a pw-path-id cannot be configured on the spoke-SDP.

The bfd-enable command is blocked on a spoke-SDP configured with VC-switching. This is because VCCV BFD always operates end-to-end on an MS-pseudowire. It is not possible to extract VCCV BFD packets at the S-PE.

For IES and VPRN spoke-SDP termination where the pseudowire template does not apply (that is, where the spoke-SDP is signaled with LDP and uses the pseudowire ID FEC (FEC128)), the BFD template is configured using the config service ies | vprn if spoke-sdp bfd-template name command, then enabled using the config service ies | vprn if spoke-sdp bfd-enable command.

For H-VPLS, where the pseudowire template does not apply (that is, LDP-VPLS spoke and mesh SDPs that use the pseudowire ID FEC(FEC128)) the BFD template is configured using the config service vpls spoke-sdp bfd-name name command or the config service vpls mesh-sdp bfd-name name command. VCCV BFD is then enabled with the bfd-enable command under the VPLS spoke-SDP or mesh-SDP context.

Pseudowires where the pseudowire template does apply and that support VCCV BFD are as follows:

  1. BGP-AD, which is signaled using the Generalized pseudowire ID FEC (FEC129) with Attachment Individual Identifier (AII) type I
  2. BGP VPLS
  3. BGP VPWS

For these pseudowire types, a named BFD template is configured and enabled from the pseudowire template binding context.

For BGP VPWS, the BFD template is configured using the config service epipe bgp pw-template-binding bfd-template name command, then enabled using the config service epipe bgp pw-template-binding bfd-enable command.
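
Following those command forms, a hypothetical BGP VPWS binding (the policy ID and template name are illustrative) would look like this:

config
   service
      epipe 100
         bgp
            pw-template-binding 1
               bfd-template "vccv-bfd"
               bfd-enable
            exit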

2.8. Pseudowire Switching

The pseudowire switching feature provides the user with the ability to create a VLL service by cross-connecting two spoke-SDPs. This feature allows the scaling of VLL and VPLS services in a large network in which the otherwise full mesh of PE devices would require thousands of Targeted LDP (T-LDP) sessions per PE node.

Services with one SAP and one spoke-SDP are created normally on the PE; however, the target destination of the SDP is the pseudowire switching node instead of what is normally the remote PE. Also, the user configures a VLL service on the pseudowire switching node using the two SDPs.

The pseudowire switching node acts in a passive role with respect to signaling of the pseudowires. It waits until one or both of the PEs sends the label mapping message before relaying it to the other PE. This is because it needs to pass the interface parameters of each PE to the other.

A pseudowire switching point TLV is inserted by the pseudowire switching node to record its system address when relaying the label mapping message. This TLV is useful in a few situations:

  1. It allows for troubleshooting of the path of the pseudowire especially if multiple pseudowire switching points exist between the two PEs.
  2. It helps in loop detection of the T-LDP signaling messages where a switching point would receive back a label mapping message it had already relayed.
  3. The switching point TLV is inserted in pseudowire status notification messages when they are sent end-to-end or from a pseudowire switching node toward a destination PE.

Pseudowire OAM is supported for the manual switching pseudowires and allows the pseudowire switching node to relay end-to-end pseudowire status notification messages between the two PEs. The pseudowire switching node can generate a pseudowire status and send it to one or both of the PEs by including its system address in the pseudowire switching point TLV. This allows a PE to identify the origin of the pseudowire status notification message.

In the following example, the user configures a regular Epipe VLL service PE1 and PE2. These services each consist of a SAP and a spoke-SDP. However, the target destination of the SDP is not the remote PE, but the pseudowire switching node. Also, the user configures an Epipe VLL service on the pseudowire switching node using the two SDPs.

|7450 ESS, 7750 SR, and 7950 XRS PE1 (Epipe)|---sdp 2:10---|7450 ESS, 7750 SR, and
7950 XRS PW SW (Epipe)|---sdp 7:15---|7450 ESS, 7750 SR, and 7950 XRS PE2 (Epipe)|

Configuration examples are in Configuring Two VLL Paths Terminating on T-PE2.
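
As a minimal sketch of the pseudowire switching node in this example (the service and customer IDs are hypothetical), the Epipe is created with the vc-switching keyword and cross-connects the two spoke-SDPs shown above:

config
   service
      epipe 5 customer 1 vc-switching create
         spoke-sdp 2:10 create
         exit
         spoke-sdp 7:15 create
         exit
         no shutdown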

2.8.1. Pseudowire Switching with Protection

Pseudowire switching scales VLL and VPLS services over a multi-area network by removing the need for a full mesh of targeted LDP sessions between PE nodes. Figure 14 shows the use of pseudowire redundancy to provide a scalable and resilient VLL service across multiple IGP areas in a provider network.

In the network in Figure 14, PE nodes act as masters and pseudowire switching nodes act as slaves for the purpose of pseudowire signaling. A switching node will need to pass the SAP interface parameters of each PE to the other PEs. T-PE1 sends a label mapping message for the Layer 2 FEC to the peer pseudowire switching node; for example, S-PE1. The label mapping message includes the SAP interface parameters, such as MTU. S-PE1 checks the FEC against the local information and, if a match exists, appends the optional pseudowire switching point TLV to the FEC TLV in which it records its system address. T-PE1 then relays the label mapping message to S-PE2. S-PE2 performs similar operations and forwards a label mapping message to T-PE2.

The same procedures are followed for the label mapping message in the reverse direction; for example, from T-PE2 to T-PE1. S-PE1 and S-PE2 will make the spoke-SDP cross-connect only when both directions of the pseudowire have been signaled and matched.

Figure 14:  VLL Resilience with Pseudowire Redundancy and Switching 

The pseudowire switching TLV is useful in a few situations. First, it allows for troubleshooting of the path of the pseudowire, especially if multiple pseudowire switching points exist between the two T-PE nodes. Second, it helps in loop detection of the T-LDP signaling messages where a switching point receives back a label mapping message that the point already relayed. Finally, it can be inserted in pseudowire status messages when they are sent from a pseudowire switching node toward a destination PE.

Pseudowire status messages can be generated by the T-PE nodes and/or the S-PE nodes. Pseudowire status messages received by a switching node are processed, and passed on to the next hop. An S-PE node appends the optional pseudowire switching TLV, with the S-PE's system address added to it, to the FEC in the pseudowire status notification message, only if that S-PE originated the message or the message was received with the TLV in it. Otherwise, the message was originated by a T-PE node and the S-PE should process and pass the message without changes, except for the VC-ID value in the FEC TLV.

2.8.2. Pseudowire Switching Behavior

In the network in Figure 14, PE nodes act as masters and pseudowire switching nodes act as slaves for the purpose of pseudowire signaling. This is because a switching node will need to pass the SAP interface parameters of each PE to the other. T-PE1 sends a label mapping message for the Layer 2 FEC to the peer pseudowire switching node; for example, S-PE1. It will include the SAP interface parameters, such as MTU, in the label mapping message. S-PE1 checks the FEC against the local information and, if a match exists, appends the optional pseudowire switching point TLV to the FEC TLV in which it records its system address. T-PE1 then relays the label mapping message to S-PE2. S-PE2 performs similar operations and forwards a label mapping message to T-PE2.

The same procedures are followed for the label mapping message in the reverse direction; for example, from T-PE2 to T-PE1. S-PE1 and S-PE2 will make the spoke-SDP cross-connect only when both directions of the pseudowire have been signaled and matched.

Pseudowire status messages can be generated by the T-PE nodes and/or the S-PE nodes. Pseudowire status messages received by a switching node are processed, then passed on to the next hop. An S-PE node appends the optional pseudowire switching TLV, with its system address added to it, to the FEC in the pseudowire status notification message, only if it originated the message or the message was received with the TLV in it. Otherwise, the message was originated by a T-PE node and the S-PE should process and pass the message without changes, except for the VC-ID value in the FEC TLV.

The merging of the received T-LDP status notification message and the local status for the spoke-SDPs from the service manager at a PE complies with the following rules:

  1. When the local status for both spoke-SDPs is up, the S-PE passes any received SAP or SDP-binding generated status notification message unchanged; for example, the status notification TLV is unchanged but the VC-ID in the FEC TLV is set to the value of the pseudowire segment to the next hop.
  2. When the local operational status for any of the spokes is down, the S-PE always sends SDP-binding down status bits, regardless of whether the received status bits from the remote node indicated SAP up or down or SDP-binding up or down.

2.8.2.1. Pseudowire Switching TLV

The format of the pseudowire switching TLV is as follows:

    0                   1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |1|0|    PW sw TLV (0x096D)     |       PW sw TLV Length        |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |     Type      |    Length     |    Variable Length Value      |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
   |                         Variable Length Value                 |
   |                             "                                 |
   +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

PW sw TLV Length — Specifies the total length of all the following pseudowire switching point TLV fields in octets

Type — Encodes how the Value field is to be interpreted

Length — Specifies the length of the Value field in octets

Value — Octet string of Length octets that encodes information to be interpreted as specified by the Type field

2.8.2.2. Pseudowire Switching Point Sub-TLVs

Following is information specific to pseudowire switching point sub-TLVs:

  1. Pseudowire ID of last pseudowire segment traversed — Sub-TLV that contains the pseudowire ID of the last pseudowire segment traversed
  2. Pseudowire switching point description string — An optional description text string of up to 80 characters
  3. IP address of pseudowire switching point — An optional sub-TLV containing the IPv4 or IPv6 address of the pseudowire switching point
  4. MH VCCV capability indication

2.8.3. Static-to-Dynamic Pseudowire Switching

When one segment of the pseudowire cross-connect at the S-PE is static while the other is signaled using T-LDP, the S-PE operates much like a T-PE from a signaling perspective and as an S-PE from a data plane perspective.

The S-PE signals a label mapping message as soon as the local configuration is complete. The control word C-bit field in the pseudowire FEC is set to the value configured on the static spoke-SDP.

When the label mapping for the egress direction is also received from the T-LDP peer, and the information in the FEC matches that of the local configuration, the static-to-dynamic cross-connect is created.

It is possible that end nodes of a static pseudowire segment are misconfigured. In this case, an S-PE or T-PE node may receive packets with the wrong encapsulation, so that an invalid payload could be forwarded over the pseudowire or the SAP, respectively. Also, if the S-PE or T-PE node is expecting the control word in the packet encapsulation and the received packet comes with no control word, but the first nibble below the label stack is 0x0001, the packet may be mistaken for a VCCV OAM packet and forwarded to the CPM. In that case, the CPM performs a check of the IP header fields, such as version, IP header length, and checksum. If any of these checks fail, the VCCV packet is discarded.

2.8.4. Ingress VLAN Swapping

This feature is supported on VPLS and VLL services where the end-to-end solution is built using two node solutions (requiring SDP connections between the nodes).

In VLAN swapping, only the VLAN ID value is copied to the inner VLAN position. The Ethertype of the inner tag is preserved and all consecutive nodes will work with that value. However, the dot1p bits value of the outer tag is not preserved.

Figure 15 describes a network where, at the user-access side (DSLAM-facing SAPs), every subscriber is represented by several QinQ SAPs, with the inner tag encoding the service and the outer tag encoding the subscriber (DSL line). At the aggregation side (BRAS- or PE-facing SAPs), every subscriber is represented by the DSL line number (inner VLAN tag) and the DSLAM (outer VLAN tag). The effective operation on the VLAN tag is to drop the inner tag at the access side and push another tag at the aggregation side.

Figure 15:  Ingress VLAN Swapping 

2.8.4.1. Ingress VLAN Translation

Figure 16 indicates an application where different circuits are aggregated in the VPLS-based network. The access side is represented by an explicit dot1q-encapsulated SAP. Because the VLAN ID is port-specific, circuits connected to different ports might have the same VLAN ID. The aggregation side is aggregated on the same port; therefore, a unique VLAN ID is required.

Figure 16:  Ingress VLAN Translation 

2.8.5. Pseudowire Redundancy

Pseudowire redundancy provides the ability to protect a pseudowire with a pre-provisioned secondary standby pseudowire and to switch traffic over to that secondary standby pseudowire in case of a SAP and/or network failure condition. Normally, pseudowires are redundant by virtue of the SDP redundancy mechanism. For instance, if the SDP is an RSVP LSP and is protected by a secondary standby path and/or by Fast-Reroute paths (FRR), the pseudowire is also protected. However, there are two applications in which SDP redundancy does not protect the end-to-end pseudowire path:

  1. There are two different destination PE nodes for the same VLL service. The main use case is the provision of dual-homing of a CPE or access node to two PE nodes located in different POPs. The other use case is the provision of a pair of active and standby BRAS nodes, or active and standby links to the same BRAS node, to provide service resiliency to broadband service subscribers.
  2. The pseudowire path is switched in the middle of the network and the pseudowire switching node fails.

Pseudowire and VPLS link redundancy extends link-level resiliency for pseudowires and VPLS to protect critical network paths against physical link or node failures. These innovations enable the virtualization of redundant paths across the metro or core IP network to provide seamless and transparent fail-over for point-to-point and multi-point connections and services. When deployed with multi-chassis LAG, the path for return traffic is maintained through the pseudowire or VPLS switchover, which enables carriers to deliver “always on” services across their IP/MPLS networks.

2.8.6. Dynamic Multi-Segment Pseudowire Routing

2.8.6.1. Overview

Dynamic Multi-Segment Pseudowire Routing (Dynamic MS-PWs) enables a complete multi-segment pseudowire to be established, while only requiring per-pseudowire configuration on the T-PEs. No per-pseudowire configuration is required on the S-PEs. End-to-end signaling of the MS-PW is achieved using T-LDP, while multi-protocol BGP is used to advertise the T-PEs, allowing dynamic routing of the MS-PW through the intervening network of S-PEs. Dynamic multi-segment pseudowires are described in the IETF Draft draft-ietf-pwe3-dynamic-ms-pw-13.txt.

Figure 17 shows the operation of dynamic MS-PWs.

Figure 17:  Dynamic MS-PW Overview 

The FEC 129 AII Type 2 structure depicted in Figure 18 is used to identify each individual pseudowire endpoint:

Figure 18:  MS-PW Addressing using FEC129 AII Type 2 

A 4-byte global-id followed by a 4-byte prefix and a 4-byte attachment circuit ID are used to provide for hierarchical, independent allocation of addresses on a per-service provider network basis. The first 8 bytes (global-id + prefix) may be used to identify each individual T-PE or S-PE as a loopback Layer 2 address.

The AII type is mapped into the MS-PW BGP NLRI (a BGP AFI of L2VPN, and SAFI for network layer reachability information for dynamic MS-PWs). As soon as a new T-PE is configured with a local prefix address of global-id:prefix, pseudowire routing proceeds to advertise this new address to all the other T-PEs and S-PEs in the network, as depicted in Figure 19.

Figure 19:  Advertisement of PE Addresses by PW Routing 

In step 1 of Figure 19, a new T-PE (T-PE2) is configured with a local prefix.

Next, in steps 2 to 5, MP-BGP will use the NLRI for the MS-PW routing SAFI to advertise the location of the new T-PE to all the other PEs in the network. Alternatively, static routes may be configured on a per T-PE/S-PE basis to accommodate non-BGP PEs in the solution.

As a result, pseudowire routing tables for all the S-PEs and remote T-PEs are populated with the next hop to be used to reach T-PE2.

VLL services can then be established, as illustrated in Figure 20.

Figure 20:  Signaling of Dynamic MS-PWs using T-LDP 

In steps 1 and 1' of Figure 20, the T-PEs are configured with the local and remote endpoint information: Source AII (SAII) and Target AII (TAII). On the router, the AIIs are locally configured for each spoke-SDP, according to the model shown in Figure 21. Therefore, the router provides for a flexible mapping of the AII to the SAP. That is, the values used for the AII are set through local configuration, and it is the context of the spoke-SDP that binds it to a specific SAP.

Figure 21:  Mapping of AII to SAP 

Before T-LDP signaling starts, the two T-PEs decide on an active and passive relationship using the highest AII (comparing the configured SAII and TAII) or the configured precedence. Next, the active T-PE (in the IETF draft, this is referred to as the source T-PE or ST-PE) checks the PW routing table to determine the next signaling hop for the configured TAII using the longest match between the TAII and the entries in the PW routing table.

This signaling hop is then used to choose the T-LDP session to the chosen next-hop S-PE. Signaling proceeds through each subsequent S-PE using similar matching procedures to determine the next signaling hop. If a subsequent S-PE does not support dynamic MS-PW routing and so uses a statically configured PW segment, the signaling of individual segments follows the procedures already implemented in the PW Switching feature.

BGP can install a PW AII route in the PW routing table with ECMP next-hops. However, when LDP needs to signal a PW with matching TAII, it will choose only one next-hop from the available ECMP next-hops. PW routing supports up to 4 ECMP paths for each destination.

The signaling of the forward path ends when the PE matches the TAII in the label mapping message with the SAII of a spoke-SDP bound to a local SAP. The signaling in the reverse direction can now be initiated, which follows the entries installed in the forward path. The PW routing tables are not consulted for the reverse path. This ensures that the reverse direction of the PW follows exactly the same set of S-PEs as the forward direction.

This solution can be used in either a MAN-WAN environment or in an Inter-AS/Inter-Provider environment as depicted in Figure 22.

Figure 22:  VLL Using Dynamic MS-PWs, Inter-AS Scenario 

Data plane forwarding at the S-PEs uses pseudowire service label switching, as per the pseudowire switching feature.

2.8.6.2. Pseudowire Routing

Each S-PE and T-PE has a pseudowire routing table that contains a reference to the T-LDP session to use to signal to a set of next hop S-PEs to reach a specific T-PE (or the T-PE if that is the next hop). For VLLs, this table contains aggregated AII Type 2 FECs and may be populated with routes that are learned through MP-BGP or that are statically configured.

MP-BGP is used to automatically distribute T-PE prefixes using the new MS-PW NLRI, or static routes can be used. The MS-PW NLRI is composed of a Length, an 8-byte route distinguisher (RD), a 4-byte global-id, a 4-byte local prefix, and (optionally) a 4-byte AC-ID. Support for the MS-PW address family is configured in CLI under the config>router>bgp>family ms-pw context.

MS-PW routing parameters are configured in the config>service>pw-routing context.

To enable support for dynamic MS-PWs on a 7750 SR, 7450 ESS, or 7950 XRS node to be used as a T-PE or S-PE, a single, globally unique, S-PE ID, known as the S-PE address, is first configured under config>service>pw-routing on each node to be used as a T-PE or S-PE. The S-PE address has the format global-id:prefix. It is not possible to configure any local prefixes used for pseudowire routing or to configure spoke-SDPs using dynamic MS-PWs at a T-PE unless an S-PE address has already been configured. The S-PE address is used as the address of a node used to populate the switching point TLV in the LDP label mapping message and the pseudowire status notification sent for faults at an S-PE.

Each T-PE is also configured with the following parameters:

  1. Global-id — This is a 4-byte identifier that uniquely identifies an operator or the local network.
  2. Local prefix — One or more local (Layer 2) prefixes (up to a maximum of 16), which are formatted in the style of a 4-octet IPv4 address. A local prefix identifies a T-PE or S-PE in the PW routing domain.
  3. For each local prefix, at least one 8-byte RD can be configured. It is also possible to configure an optional BGP community attribute.

For each local prefix, BGP then advertises each global-id/prefix tuple, together with its unique RD and any community attribute, using the MS-PW NLRI, based on the aggregated FEC129 AII Type 2 and the Layer 2 VPN/PW routing AFI/SAFI 25/6, to each T-PE/S-PE that is a T-LDP neighbor, subject to local BGP policies.

The dynamic advertisement of each of these pseudowire routes is enabled for each prefix and RD using the advertise-bgp command.
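The following sketch shows an S-PE address, a local prefix, and BGP advertisement enabled for that prefix. The spe-address and local-prefix values are reused from the example in section 2.8.7, while the RD value and the exact advertise-bgp parameters are illustrative assumptions to be confirmed against the command reference:

config
   service
      pw-routing
         spe-address 3:10.20.1.3
         local-prefix 3:10.20.1.3 create
            advertise-bgp route-distinguisher 65000:3
         exit
      exit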

An export policy is also required in order to export MS-PW routes in MP-BGP. This can be done using a default policy, such as the following:

*A:lin-123>config>router>policy-options# info
----------------------------------------------
            policy-statement "ms-pw"
                default-action accept
                exit
            exit
----------------------------------------------

However, this would export all routes. A recommended choice is to enable filtering per-family, as follows:

*A:lin-123>config>router>policy-options# info
----------------------------------------------
            policy-statement "to-mspw"
                entry 1
                    from
                        family ms-pw
                    exit
                    action accept
                    exit
                exit
            exit
----------------------------------------------

The following command is then added in the config>router>bgp context:

export "to-mspw"

Local-preference for IBGP and BGP communities can be configured under such a policy.

2.8.6.2.1. Static Routing

As well as support for BGP routing, static MS-PW routes may also be configured using the config>service>pw-routing static-route command. Each static route comprises the target T-PE global-id and prefix, and the IP address of the T-LDP session to the next-hop S-PE or T-PE that should be used.

If a static route is set to 0, this represents the default route. If a static route exists to a specified T-PE, that static route is used in preference to any BGP route that may exist.
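As an illustration only, using the addresses from the example in section 2.8.7, a static route toward T-PE 6:10.20.1.6 via the T-LDP session to S-PE 10.20.1.5 might look as follows; the exact static-route argument format is an assumption and should be verified against the command reference:

config
   service
      pw-routing
         static-route 6:10.20.1.6:10.20.1.5
      exit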

2.8.6.2.2. Explicit Paths

A set of default explicit routes to a remote T-PE or S-PE prefix may be configured on a T-PE under config>service>pw-routing using the path command. Explicit paths are used to populate the explicit route TLV used by MS-PW T-LDP signaling. Only strict (fully qualified) explicit paths are supported.

It is possible to configure explicit paths independently of the configuration of BGP or static routing.
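For example, the following path definition (reused from the example in section 2.8.7) specifies a strict path through S-PE 10.20.1.5 and then S-PE 10.20.1.2:

config
   service
      pw-routing
         path "path1_to_F" create
            hop 1 10.20.1.5
            hop 2 10.20.1.2
            no shutdown
         exit
      exit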

2.8.6.3. Configuring VLLs using Dynamic MS-PWs

One or more spoke-SDPs may be configured for distributed Epipe VLL services. Dynamic MS-PWs use FEC129 (also known as the Generalized ID FEC) with Attachment Individual Identifier (AII) Type 2 to identify the pseudowire, as opposed to FEC128 (also known as the PW ID FEC) used for traditional single segment pseudowires and for pseudowire switching. FEC129 spoke-SDPs are configured under the spoke-sdp-fec command in the CLI.

FEC129 AII Type 2 uses a Source Attachment Individual Identifier (SAII) and a Target Attachment Individual Identifier (TAII) to identify the end of a pseudowire at the T-PE. The SAII identifies the local end, while the TAII identifies the remote end. The SAII and TAII are each structured as follows (see the configuration sketch after this list):

  1. Global-id — This is a 4-byte identifier that uniquely identifies an operator or the local network.
  2. Prefix — A 4-byte prefix, which should correspond to one of the local prefixes assigned under pw-routing.
  3. AC-ID — A 4-byte identifier for the local end of the pseudowire. This should be locally unique within the scope of the global-id:prefix.
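For example, in the following sketch (taken from the example in section 2.8.7), the SAII 3:10.20.1.3:1 encodes global-id 3, prefix 10.20.1.3, and AC-ID 1, while the TAII 6:10.20.1.6:1 identifies the far-end T-PE and its attachment circuit:

   spoke-sdp-fec 1 fec 129 aii-type 2 create
      saii-type2 3:10.20.1.3:1
      taii-type2 6:10.20.1.6:1
      no shutdown
   exit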

2.8.6.3.1. Active/Passive T-PE Selection

Dynamic MS-PWs use single-sided signaling procedures with double-sided configuration; a fully qualified FEC must be configured at both endpoints. That is, one T-PE (the source T-PE, ST-PE) of the MS-PW initiates signaling for the MS-PW, while the other end (the terminating T-PE, TT-PE) passively waits for the label mapping message from the far end. This terminating end only responds with a label mapping message to set up the opposite direction of the MS-PW when it receives the label mapping from the ST-PE. By default, the router automatically determines which T-PE is the ST-PE (the active T-PE) and which is the TT-PE (the passive T-PE) by comparing the SAII with the TAII as unsigned integers. The T-PE with SAII > TAII assumes the active role. However, it is possible to override this behavior using the signaling {master | auto} command under spoke-sdp-fec. If master is selected at a specified T-PE, that T-PE assumes the active role. If a T-PE is at the endpoint of a spoke-SDP that is bound to a VLL SAP and single-sided auto-configuration is used (see 2.8.6.3.2), then that endpoint is always passive. Therefore, signaling master should only be used when it is known that the far end will assume a passive behavior.
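For example, the following sketch (values taken from the example in section 2.8.7) forces the local T-PE into the active role:

   spoke-sdp-fec 1 fec 129 aii-type 2 create
      signaling master
      saii-type2 3:10.20.1.3:1
      taii-type2 6:10.20.1.6:1
      no shutdown
   exit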

2.8.6.3.2. Automatic Endpoint Configuration

Automatic endpoint configuration allows the configuration of an endpoint without specifying the TAII associated with that spoke-sdp-fec. It allows a single-sided provisioning model where an incoming label mapping message with a TAII that matches the SAII of that spoke-SDP is automatically bound to that endpoint. This is useful in scenarios where a service provider wants to separate service configuration from the service activation phase.

Automatic endpoint configuration is supported for Epipe VLL spoke-sdp-fec endpoints bound to a VLL SAP. It is configured using the spoke-sdp-fec auto-config command and excluding the TAII from the configuration. When auto-configuration is used, the node assumes passive behavior from the point of view of T-LDP signaling (see 2.8.6.3.1). Therefore, the far-end T-PE must be configured as the signaling master for that spoke-sdp-fec.
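A minimal sketch of an auto-configured endpoint follows; note that the TAII is deliberately omitted, and the exact form of the auto-config command within the spoke-sdp-fec context should be confirmed against the command reference:

   spoke-sdp-fec 1 fec 129 aii-type 2 create
      auto-config
      saii-type2 6:10.20.1.6:1
      no shutdown
   exit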

2.8.6.3.3. Selecting a Path for an MS-PW

Path selection for signaling occurs in the outbound direction (ST-PE to TT-PE) for an MS-PW. In the TT-PE to ST-PE direction, a label mapping message follows the reverse of the path already taken by the outgoing label mapping.

A node can use explicit paths, static routes, or BGP routes to select the next hop S-PE or T-PE. The order of preference used in selecting these routes is:

  1. Explicit Path
  2. Static route
  3. BGP route

To use an explicit path for an MS-PW, an explicit path must have been configured in the config>service>pw-routing>path path-name context. The user must then configure the corresponding path path-name under spoke-sdp-fec.
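For example, assuming the path "path1_to_F" from the example in section 2.8.7 has been created under pw-routing, it can be referenced from the spoke-SDP as follows:

   spoke-sdp-fec 1 fec 129 aii-type 2 create
      path "path1_to_F"
      saii-type2 3:10.20.1.3:1
      taii-type2 6:10.20.1.6:1
      no shutdown
   exit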

If an explicit path name is not configured, the TT-PE or S-PE will perform a longest match lookup for a route (static if it exists, and BGP if not) to the next hop S-PE or T-PE to reach the TAII.

Pseudowire routing chooses the MS-PW path in terms of the sequence of S-PEs to use to reach a specified T-PE. It does not select the SDP to use on each hop, which is instead determined at signaling time. When a label mapping is sent for a specified pseudowire segment, an LDP SDP will be used to reach the next-hop S-PE/T-PE if such an SDP exists. If not, and an RFC 3107 labeled BGP SDP is available, then that will be used. Otherwise, the label mapping will fail and a label release will be sent.

2.8.6.3.4. Pseudowire Templates

Dynamic MS-PWs support the use of the pseudowire template for specifying generic pseudowire parameters at the T-PE. The pseudowire template to use is configured in the spoke-sdp-fec>pw-template-bind policy-id context. Dynamic MS-PWs do not support the provisioned SDPs specified in the pseudowire template. Auto-created GRE SDPs are supported with dynamic MS-PWs by creating the PW template used within the spoke-sdp-fec with the parameter auto-gre-sdp.
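A possible sketch follows; the template policy-id is arbitrary, and the auto-gre-sdp keyword and pw-template-bind placement are shown as described above and should be confirmed against the command reference:

   pw-template 1 auto-gre-sdp create
   exit
   epipe 1 customer 1 create
      spoke-sdp-fec 1 fec 129 aii-type 2 create
         pw-template-bind 1
         saii-type2 3:10.20.1.3:1
         taii-type2 6:10.20.1.6:1
         no shutdown
      exit
   exit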

2.8.6.4. Pseudowire Redundancy

Pseudowire redundancy is supported on dynamic MS-PWs used for VLLs. It is configured in a similar manner to pseudowire redundancy on VLLs using FEC128, whereby each spoke-sdp-fec within an endpoint is configured with a unique SAII/TAII.

Figure 23 shows the use of pseudowire redundancy.

Figure 23:  Pseudowire Redundancy 

The following is a summary of the key points to consider in using pseudowire redundancy with dynamic MS-PWs:

  1. Each MS-PW in the redundant set must have a unique SAII/TAII set and is signaled separately. The primary pseudowire is configured in the spoke-sdp-fec>primary context.
  2. Each MS-PW in the redundant set should use a diverse path (from the point of view of the S-PEs traversed) from every other MS-PW in that set if path diversity is possible in a specific network topology. There are a number of possible ways to achieve this:
    1. Configure an explicit path for each MS-PW.
    2. Allow BGP routing to automatically determine diverse paths using BGP policies applied to different local prefixes assigned to the primary and standby MS-PWs.
    3. Path diversity can be further provided for each primary pseudowire through the use of a BGP RD.

If the primary MS-PW fails, fail-over to a standby MS-PW occurs, as per the normal pseudowire redundancy procedures. A configurable retry timer for the failed primary MS-PW is then started. When the timer expires, attempts to reestablish the primary MS-PW using its original path occur, up to the maximum number of attempts set by the retry count parameter. On successful reestablishment, the T-PE may then optionally revert to the primary MS-PW.

Since the SDP ID is determined dynamically at signaling time, it cannot be used as a tie breaker to choose the primary MS-PW between multiple MS-PWs of the same precedence. The user should, therefore, explicitly configure the precedence values to determine which MS-PW is active in the final selection.
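The following sketch shows the general shape of a redundant configuration, mirroring the FEC128 examples later in this section; the endpoint keyword, the precedence values, and the second SAII/TAII pair are illustrative, and the exact spoke-sdp-fec options should be checked against the command reference:

   endpoint "y" create
   exit
   spoke-sdp-fec 1 fec 129 aii-type 2 endpoint "y" create
      precedence primary
      saii-type2 3:10.20.1.3:1
      taii-type2 6:10.20.1.6:1
      no shutdown
   exit
   spoke-sdp-fec 2 fec 129 aii-type 2 endpoint "y" create
      precedence 1
      saii-type2 3:10.20.1.3:2
      taii-type2 6:10.20.1.6:2
      no shutdown
   exit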

2.8.6.5. VCCV OAM for Dynamic MS-PWs

The primary difference between dynamic MS-PWs and those using FEC128 is support for FEC129 AII type 2. As in PW Switching, VCCV on dynamic MS-PWs requires the use of the VCCV control word on the pseudowire. Both the vccv-ping and vccv-trace commands support dynamic MS-PWs.

2.8.6.6. VCCV-Ping on Dynamic MS-PWs

VCCV-ping supports the use of FEC129 AII type 2 in the target FEC stack of the ping echo request message. The FEC to use in the echo request message is derived in one of two ways: Either the user can specify only the spoke-sdp-fec-id of the MS-PW in the vccv-ping command, or the user can explicitly specify the SAII and TAII to use.

If the SAII:TAII is entered by the user in the vccv-ping command, those values are used for the vccv-ping echo request, but their order is reversed before being sent so that they match the order for the downstream FEC element for an S-PE, or the locally configured SAII:TAII for a remote T-PE of that MS-PW. If SAII:TAII is entered as well as the spoke-sdp-fec-id, the system will verify the entered values against the values stored in the context for that spoke-sdp-fec-id.

Otherwise, if the SAII:TAII to use in the target FEC stack of the vccv-ping message is not entered by the user, and if a switching point TLV was previously received in the initial label mapping message for the reverse direction of the MS-PW (with respect to the sending PE), then the SAII:TAII to use in the target FEC stack of the vccv-ping echo request message is derived by parsing that switching point TLV based on the user-specified TTL (or a TTL of 255 if none is specified). In this case, the order of the SAII:TAII in the switching point TLV is maintained for the vccv-ping echo request message.

If no pseudowire switching point TLV was received, then the SAII:TAII values to use for the vccv-ping echo request are derived from the MS-PW context, but their order is reversed before being sent so that they match the order for the downstream FEC element for an S-PE, or the locally configured SAII:TAII for a remote T-PE of that MS-PW.

The use of spoke-sdp-fec-id in vccv-ping is only applicable at T-PE nodes, since it is not configured for a specified MS-PW at S-PE nodes.
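For example, at a T-PE, a VCCV ping for the MS-PW in the example in section 2.8.7 could be issued using the spoke-sdp-fec-id alone, or together with an explicit SAII/TAII; the exact option names are assumptions to be checked against the OAM command reference:

oam vccv-ping spoke-sdp-fec 1
oam vccv-ping spoke-sdp-fec 1 saii-type2 3:10.20.1.3:1 taii-type2 6:10.20.1.6:1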

2.8.6.7. VCCV-Trace on Dynamic MS-PWs

The 7750 SR, 7450 ESS, and 7950 XRS support the MS-PW path trace mode of operation for VCCV trace, as per pseudowire switching, but using FEC129 AII type 2. As in the case of vccv-ping, the SAII:TAII used in the VCCV echo request message sent from the T-PE or S-PE from which the VCCV trace command is executed is specified by the user or derived from the context of the MS-PW. The use of spoke-sdp-fec-id in vccv-trace is only applicable at T-PE nodes, since it is not configured for a specified MS-PW at S-PE nodes.

2.8.7. Example Dynamic MS-PW Configuration

This section describes an example of how to configure dynamic MS-PWs for a VLL service between a set of Nokia nodes. The network consists of two T-PEs and two nodes in the role of S-PEs, as shown in Figure 24. Each 7750 SR, 7450 ESS, or 7950 XRS peers with its neighbor using LDP and BGP.

Figure 24:  Dynamic MS-PW Example 

The example uses BGP to route dynamic MS-PWs and T-LDP to signal them. Therefore, each node must be configured to support the MS-PW address family under BGP, and BGP and LDP peerings must be established between the T-PEs/S-PEs. The appropriate BGP export policies must also be configured.

Next, pseudowire routing must be configured on each node. This includes an S-PE address for every participating node, and one or more local prefixes on the T-PEs. MS-PW paths and static routes may also be configured.

When this routing and signaling infrastructure is established, spoke-sdp-fecs can be configured on each of the T-PEs. In the following configurations, the first two blocks apply to the T-PEs (spe-addresses 3:10.20.1.3 and 6:10.20.1.6) and the last two to the S-PEs (spe-addresses 5:10.20.1.5 and 2:10.20.1.2):

config
   router
      ldp
         targeted-session
            peer 10.20.1.5
            exit
         exit
      policy-options
         begin
         policy-statement "exportMsPw"
            entry 10
               from
                  family ms-pw
               exit
               action accept
               exit
            exit
         exit
         commit
      exit
      bgp
         family ms-pw
         connect-retry 1
         min-route-advertisement 1
         export "exportMsPw" 
         rapid-withdrawal          
         group "ebgp"
            neighbor 10.20.1.5
               multihop 255
               peer-as 200
            exit
         exit
     exit
config
   service
      pw-routing
         spe-address 3:10.20.1.3
         local-prefix 3:10.20.1.3 create
         exit
         path "path1_to_F" create
            hop 1 10.20.1.5
            hop 2 10.20.1.2
            no shutdown
        exit
     exit
      epipe 1 name "XYZ Epipe 1" customer 1 create
        description "Default epipe description for service id 1"
        service-mtu 1400
        sap 2/1/1:1 create
        exit
        spoke-sdp-fec 1 fec 129 aii-type 2 create
           retry-timer 10
           retry-count 10
           saii-type2 3:10.20.1.3:1
           taii-type2 6:10.20.1.6:1
           no shutdown
        exit
        no shutdown
     exit
config
   router
      ldp
         targeted-session
            peer 10.20.1.2
            exit
         exit
         …
      policy-options
         begin
         policy-statement "exportMsPw"
            entry 10
               from
                  family ms-pw
               exit
               action accept
               exit
            exit
         exit
         commit
      exit
      
      bgp
         family ms-pw
         connect-retry 1
         min-route-advertisement 1
         export "exportMsPw" 
         rapid-withdrawal          
         group "ebgp"
            neighbor 10.20.1.2
               multihop 255
               peer-as 300
            exit
         exit
     exit
config
   service
      pw-routing
         spe-address 6:10.20.1.6
         local-prefix 6:10.20.1.6 create
         exit
         path "path1_to_F" create
            hop 1 10.20.1.2
            hop 2 10.20.1.5
            no shutdown
        exit
     exit
      epipe 1 name "XYZ Epipe 1" customer 1 create
        description "Default epipe description for service id 1"
        service-mtu 1400
        sap 1/1/3:1 create
        exit
        spoke-sdp-fec 1 fec 129 aii-type 2 create
           retry-timer 10
           retry-count 10
           saii-type2 6:10.20.1.6:1
           taii-type2 3:10.20.1.3:1
           no shutdown
        exit
        no shutdown
     exit
 
config
   router
      ldp
         targeted-session
            peer 10.20.1.3
            exit
            peer 10.20.1.2
            exit
         exit
         …
      bgp
         family ms-pw
         connect-retry 1
         min-route-advertisement 1
         rapid-withdrawal          
         group "ebgp"
            neighbor 10.20.1.2
               multihop 255
               peer-as 300
            exit
            neighbor 10.20.1.3
               multihop 255
               peer-as 100
            exit
         exit
     exit
 
   service
      pw-routing
         spe-address 5:10.20.1.5
      exit
config
   router
      ldp
         targeted-session
            peer 10.20.1.5
            exit
            peer 10.20.1.6
            exit
         exit
         …
      bgp
         family ms-pw
         connect-retry 1
         min-route-advertisement 1
         rapid-withdrawal          
         group "ebgp"
            neighbor 10.20.1.5
               multihop 255
               peer-as 200
            exit
            neighbor 10.20.1.6
               multihop 255
               peer-as 400
            exit           
         exit
     exit
service
      pw-routing
         spe-address 2:10.20.1.2
      exit

2.8.8. VLL Resilience with Two Destination PE Nodes

Figure 25 shows the application of pseudowire redundancy to provide Ethernet VLL service resilience for broadband service subscribers accessing the broadband service on the service provider BRAS.

Figure 25:  VLL Resilience 

If the Ethernet SAP on PE2 fails, PE2 notifies PE1 of the failure by either withdrawing the primary pseudowire label it advertised or by sending a pseudowire status notification with the code set to indicate a SAP defect. PE1 will receive it and will immediately switch its local SAP to forward over the secondary standby spoke-SDP. To avoid black holing of packets during the switching of the path, PE1 will accept packets received from PE2 on the primary pseudowire while transmitting over the backup pseudowire. However, in other applications such as those described in Access Node Resilience Using MC-LAG and Pseudowire Redundancy, it will be important to minimize service outage to end users.

When the SAP at PE2 is restored, PE2 updates the new status of the SAP by sending a new label mapping message for the same pseudowire FEC or by sending a pseudowire status notification message indicating that the SAP is back up. PE1 then starts a timer and reverts to the primary at the expiry of the timer. By default, the timer is set to 0, which means PE1 reverts immediately. A special value of the timer (infinity) means that PE1 should never revert to the primary pseudowire.
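For example, the revert timer is set under the endpoint that groups the primary and standby spoke-SDPs, as in the sample configurations later in this section:

configure service epipe 1
   endpoint Y
   revert-time 100
   exit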

The behavior of the pseudowire redundancy feature is the same if PE1 detects or is notified of a network failure that brought the spoke-SDP status to operationally down. The following are the events that will cause PE1 to trigger a switchover to the secondary standby pseudowire:

  1. T-LDP peer (remote PE) node withdrew the pseudowire label.
  2. T-LDP peer signaled a FEC status indicating a pseudowire failure or a remote SAP failure.
  3. T-LDP session to peer node times out.
  4. SDP binding and VLL service went down as a result of a network failure condition, such as the SDP to the peer node going operationally down.

The SDP type for the primary and secondary pseudowires need not be the same. That is, the user can protect an RSVP-TE based spoke-SDP with an LDP or GRE based one. This provides the ability to route the path of the two pseudowires over different areas of the network. All VLL service types, for example, Apipe, Epipe, Fpipe, and Ipipe, are supported on the 7750 SR.

Nokia routers support the ability to configure multiple secondary standby pseudowire paths. For example, PE1 uses the value of the user-configurable precedence parameter associated with each spoke-SDP to select the next available pseudowire path after the failure of the current active pseudowire (whether it is the primary or one of the secondary pseudowires). The revertive operation always switches the path of the VLL back to the primary pseudowire though. There is no revertive operation between secondary paths, meaning that the path of the VLL will not be switched back to a secondary pseudowire of higher precedence when the latter comes back up again.

Nokia routers support a user-initiated manual switchover of the VLL path to the primary or to any of the secondary pseudowires to divert user traffic in the case of a planned outage, such as a node upgrade procedure.

On the 7750 SR, this application can make use of all types of VLL supported on SR-series routers. However, if a SAP is configured on an MC-LAG instance, only the Epipe service type is allowed.

2.8.8.1. Master-Slave Operation

This section describes master-slave pseudowire redundancy. It adds the ability for the remote peer to react to the pseudowire standby status notification, even if only one spoke-SDP terminates on the VLL endpoint on the remote peer, by blocking the transmit (Tx) direction of a VLL spoke-SDP when the far-end PE signals standby. This solution enables the blocking of the Tx direction of a VLL spoke-SDP at both master and slave endpoints when standby is signaled by the master endpoint. This approach satisfies a majority of deployments where bidirectional blocking of the forwarding on a standby spoke-SDP is required.

Figure 26 shows the operation of master-slave pseudowire redundancy. In this scenario, an Epipe service is provided between CE1 and CE2. CE2 is dual-homed to PE2 and PE3; therefore, PE1 is dual-homed to PE2 and PE3 using Epipe spoke-SDPs. The objective of this feature is to ensure that only one pseudowire is used for forwarding in both directions by PE1, PE2, and PE3 in the absence of a native dual homing protocol between CE2 and PE2/PE3, such as MC-LAG. In normal operating conditions (the SAPs on PE2 and PE3 toward CE2 are both up and there are no defects on the ACs to CE2), PE2 and PE3 cannot choose which spoke-SDP to forward on, based on the status of the AC redundancy protocol.

Figure 26:  Master-Slave Pseudowire Redundancy 

Master-slave pseudowire redundancy adds the ability for the remote peer to react to the pseudowire standby status notification, even if only one spoke-SDP terminates on the VLL endpoint on the remote peer. When the CLI command standby-signaling-slave is enabled at the spoke-SDP or explicit endpoint level in PE2 and PE3, then any spoke-SDP for which the remote peer signals PW FWD Standby will be blocked in the transmit direction.

This is achieved as follows. The standby-signaling-master state is activated on the VLL endpoint in PE1. In this case, a spoke-SDP is blocked in the transmit direction at this master endpoint if it is either in operDown state, or it has lower precedence than the highest precedence spoke-SDP, or the specific peer PE signals one of the following pseudowire status bits:

  1. Pseudowire not forwarding (0x01)
  2. SAP (ingress) receive fault (0x02)
  3. SAP (egress) transmit fault (0x04)
  4. SDP binding (ingress) receive fault (0x08)
  5. SDP binding (egress) transmit fault (0x10)

The blocking of the specified spoke-SDP is signaled to the LDP peer through the pseudowire status bit PW FWD Standby (0x20). This prevents the remote peer from sending traffic over this spoke-SDP, but only if that remote peer supports and reacts to the pseudowire status notification. Previously, this applied only if the spoke-SDP terminated on an IES, VPRN, or VPLS. However, if standby-signaling-slave is enabled at the remote VLL endpoint, the Tx direction of the spoke-SDP is also blocked, according to the rules in Operation of Master-Slave Pseudowire Redundancy with Existing Scenarios.

Although master-slave operation provides bidirectional blocking of a standby spoke-SDP during steady-state conditions, it is possible that the Tx directions of more than one slave endpoint are active for transient periods during a fail-over operation. This occurs when slave endpoints that are transitioning a spoke-SDP from standby to active receive or process the pseudowire preferential forwarding status message before the endpoints that are transitioning a spoke-SDP from active to standby. This transient condition is most likely when a forced switchover is performed, when the relative preferences of the spoke-SDPs are changed, or when the active spoke-SDP is shut down at the master endpoint. During this period, loops of unknown traffic may be observed. Fail-overs due to common network faults that occur during normal operation, or a failure of connectivity on the path of the spoke-SDP or the SAP, do not result in such loops in the data path.

2.8.8.1.1. Interaction with SAP-Specific OAM

If all of the spoke-SDPs bound to a SAP at a slave PE are selected as standby, then this should be treated from a SAP OAM perspective in the same manner as a fault on the service: an SDP binding down or remote SAP down. That is, a fault should be indicated to the service manager. If SAP-specific OAM is enabled toward the CE, such as Ethernet Continuity Check Message (CCM), Ethernet Link Management Interface (E-LMI), or FR LMI, then this should result in the appropriate OAM message being sent on the SAP. This can enable the remote CE to avoid forwarding traffic toward a SAP that will drop it.

Figure 27 shows an example for the case of Ethernet LMI.

Figure 27:  Example of SAP OAM Interaction with Master-Slave Pseudowire Redundancy 

2.8.8.1.2. Local Rules at Slave VLL PE

It is not possible to configure a standby-signaling-slave on endpoints or spoke-SDPs that are bound to an IES, VPRN, ICB, or MC-EP, or that are part of an MC-LAG or MC-APS.

If standby-signaling-slave is configured on a specific spoke-SDP or explicit endpoint, then the following rules apply. The rules describe the case of several spoke-SDPs in an explicit endpoint. The same rules apply to the case of a single spoke-SDP outside of an endpoint where no endpoint exists:

  1. Rules for processing endpoint SAP active/standby status bits:
    1. Since the SAP in endpoint X is never a part of an MC-LAG/MC-APS instance, a forwarding status of active is always advertised.
  2. Rules for processing and merging local and received endpoint objects with an up or down operational status:
    1. Endpoint X is operationally up if at least one of its objects is operationally up. It is down if all of its objects are operationally down.
    2. If all objects in endpoint X did any or all of the following, the node must send status bits of SAP down over all Y endpoint spoke-SDPs:
      1. transitioned locally to down state
      2. received a SAP down notification via remote T-LDP or via a SAP-specific OAM signal
      3. received status bits of SDP-binding down
      4. received status bits of PW not forwarding
    3. Endpoint Y is operationally up if at least one of its objects is operationally up. It is down if all its objects are operationally down.
    4. If a spoke-SDP in endpoint Y, including the ICB spoke-SDP, transitions locally to down state, the node must send T-LDP SDP-binding down status bits on this spoke-SDP.
    5. If a spoke-SDP in endpoint Y received T-LDP SAP down status bits, and/or T-LDP SDP-binding down status bits, and/or status bits of PW not forwarding, the node saves this status and takes no further action. The saved status is used for selecting the active transmit endpoint object.
    6. If all objects in endpoint Y, or a single spoke-SDP that exists outside of an endpoint (where no endpoint exists), did any or all of the following, the node must send a SAP down notification on the X endpoint SAP via the SAP-specific OAM signal, if applicable:
      1. transitioned locally to down state
      2. received status bits of T-LDP SAP down
      3. received status bits of T-LDP SDP-binding down
      4. received status bits of PW not forwarding
      5. received status bits of PW FWD standby
    7. If the peer PE for a specified object in endpoint Y signals PW FWD standby, the spoke-SDP must be blocked in the transmit direction and is not eligible for selection by the active transmit selection rules.
    8. If the peer PE for a specified object in endpoint Y does not signal PW FWD standby, then the spoke-SDP is eligible for selection.

2.8.8.1.3. Operation of Master-Slave Pseudowire Redundancy with Existing Scenarios

This section describes how master-slave pseudowire redundancy operates with the existing VLL resilience scenarios.

2.8.8.1.3.1. VLL Resilience Path Example

Figure 28 shows a VLL resilience path example. A sample configuration follows.

Figure 28:  VLL Resilience 

A revert-time value of zero (default) means that the VLL path will be switched back to the primary immediately after it comes back up.

PE-1
configure service epipe 1
   endpoint X
   exit
   endpoint Y
   revert-time 0
   standby-signaling-master
   exit
   sap 1/1/1:100 endpoint X
   spoke-sdp 1:100 endpoint Y
      precedence primary
   spoke-sdp 2:200 endpoint Y
      precedence 1
 
PE-2
configure service epipe 1
   endpoint X
   exit
   sap 2/2/2:200 endpoint X
   spoke-sdp 1:100 
      standby-signaling-slave
 
PE-3
configure service epipe 1
   endpoint X
   exit
   sap 3/3/3:300 endpoint X
   spoke-sdp 2:200 
      standby-signaling-slave

2.8.8.1.4. VLL Resilience for a Switched Pseudowire Path

Figure 29 shows an example of VLL resilience for a switched pseudowire path. A sample configuration follows.

Figure 29:  VLL Resilience with Pseudowire Switching 
T-PE-1
configure service epipe 1
   endpoint X
   exit
   endpoint Y
   revert-time 100   
   standby-signaling-master
   exit
   sap 1/1/1:100 endpoint X
   spoke-sdp 1:100 endpoint Y 
      precedence primary
   spoke-sdp 2:200 endpoint Y 
      precedence 1
   spoke-sdp 3:300 endpoint Y 
      precedence 1
 
T-PE-2
configure service epipe 1
   endpoint X
   exit
   endpoint Y
   revert-time 100   
   standby-signaling-slave
   exit
   sap 2/2/2:200 endpoint X
   spoke-sdp 4:400 endpoint Y 
      precedence primary
   spoke-sdp 5:500 endpoint Y 
      precedence 1
   spoke-sdp 6:600 endpoint Y 
      precedence 1
S-PE-1

VC switching indicates a VC cross-connect so that the service manager does not signal the VC label mapping immediately but will put S-PE-1 into passive mode, as follows:

configure service epipe 1 vc-switching 
   spoke-sdp 1:100 
   spoke-sdp 4:400 

2.8.9. Pseudowire SAPs

Refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 3 Services Guide: IES and VPRN for information about how to use pseudowire SAPs with Layer 2 services.

2.8.10. Epipe Using BGP-MH Site Support for Ethernet Tunnels

Using Epipe in combination with G.8031 and BGP multi-homing, in the same manner as VPLS, offers a multi-chassis resiliency option for Epipe services that is a non-learning and non-flooding service. MC-LAG (see Access Node Resilience Using MC-LAG and Pseudowire Redundancy) offers access node redundancy with active/standby links, while Ethernet tunnels offer per-service redundancy with all-active links and active or standby services. G.8031 offers end-to-end service resiliency for Epipe and VPLS services. BGP-MH site support for Ethernet tunnels offers Ethernet edge resiliency for Epipe services that integrates with MPLS pseudowire redundancy.

Figure 30 shows BGP-MH site support for Ethernet tunnels, where a G.8031 edge device (A) is configured with two provider edge switches (B and C). G.8031 is configured on the access devices (A and F). An Epipe endpoint service is configured along with BGP multi-homing and pseudowire redundancy on the provider edge nodes (B/C and D/E). This configuration offers a fully redundant Epipe service.

Figure 30:  BGP-MH Site Support for Ethernet Tunnels 

2.8.10.1. Operational Overview

G.8031 offers a number of redundant configurations. Normally, it offers the ability to control two independent paths for 1:1 protection. In the BGP-MH site support for Ethernet tunnels case, BGP drives G.8031 as a slave service. In this case, the provider edge operates using only standard 802.1ag MEPs with CCM to monitor the paths. Figure 31 shows an Epipe service on a Customer Edge (CE) device that uses G.8031 with two paths and two MEPs. The paths can use a single VLAN of dot1q or QinQ encapsulation.

Figure 31:  G.8031 for Slave Operation 

In a single-service deployment, the control (CFM) and data share the same port and VID. To scale to multiple services, fate sharing is allowed between multiple SAPs, but all SAPs within a group must be on the same physical port.

To get fate sharing for multiple services with this feature, a dedicated G.8031 CE-based service (one VLAN) is connected to an Epipe SAP on a PE, which uses BGP-MH and operational groups to control other G.8031 tunnels. This dedicated G.8031 service still has data capabilities, but the corresponding Epipe service does not carry user data packets. On the CE, this G.8031 service is used only for group control. Making this a dedicated control (CFM) service for a set of G.8031 tunnels simplifies operation and allows individual services to be disabled. Using a dedicated G.8031 service both to control and to carry data traffic is allowed.

Fate sharing from the PE side is achieved using BGP and operational groups. G.8031 Epipe services can be configured on the CE as regular non-fate shared G.8031 services, but due to the configuration on the PE side, these Ethernet tunnels will be treated as a group following the one designated control service. The G.8031 control logic on the CE is a slave to the BGP-MH control.

On the CE, G.8031 allows independent configuration of VIDs on each path. On the PE, the Epipe or endpoint that connects to the G.8031 service must have a SAP with the corresponding VID. If the G.8031 service has a Maintenance End Point (MEP) for that VID, the SAP should be configured with a MEP. The MEPs on the paths on the CE signal standard interface status TLV (ifStatusTLV), No Fault (Up), and Fault (Down). The MEPs on the PE (Epipe or endpoint) also use signaling of ifStatusTLV No Fault (Up), and Fault (Down) to control the G.8031 SAP. However, in the 7750 SR, 7450 ESS, and 7950 XRS model, fate shared Ethernet tunnels with no MEP are allowed. In this case, it is up to the CE to manage these CE-based fate shared tunnels.

Interface status signaling (ifStatusTLV) is used to control the G.8031 tunnel from the PE side. Normally, the CE signals No Fault (Up) in the path SAP MEP ifStatusTLV before BGP-MH causes the PE SAP MEP to become active by signaling No Fault (Up).

2.8.10.2. Detailed Operation

For this feature, BGP-MH is used as the master control and the Ethernet tunnel is a slave. The G.8031 on the CE is unaware that it is being controlled. While a single Epipe service is configured and will serve as the control for the CE connection, allowing fate sharing, all signaling to the CE is based on the ifStatusTLV per G.8031 tunnel. By controlling G.8031 with BGP-MH, the G.8031 CE is forced to be a slave to the PE BGP-MH election. BGP-MH election is controlled by the received VPLS preference or BGP local-preference, or the PE ID (IP address of provider edge) if local-preference is equal to VPLS preference. There may be traps generated on the CE side for some G.8031 implementations, but these can be suppressed or filtered to allow this feature to operate.

There are two configuration options:

  1. Every G.8031 service SAP terminates on a single Epipe that has BGP-MH. These Epipes may use endpoints with or without ICBs.
  2. A control Epipe service monitors a single SAP that is used for group control of fate shared CE services. In this case, the Epipe service has a SAP that serves as the control termination for one Ethernet tunnel connection. The group fate sharing SAPs may or may not have MEPs if they use shared fate. In this case, the Epipe may have endpoints but will not support ICBs.

The MEP ifStatusTlv and CCM are used for monitoring the PE-to-CE SAP. The MEP ifStatusTlv is used to signal that the Ethernet tunnel is inactive, and CCM is used as an aliveness mechanism. There is no G.8031 logic on the PE; the PE SAP controls the corresponding CE SAP.

2.8.10.2.1. Sample Operation of G.8031 BGP-MH

Any Ethernet tunnel actions (force, lock) on the CE (single site) do not directly control the switching of paths, but they may influence the outcome of BGP-MH if they are performed on a control tunnel. If a path is disabled on the CE, the result may eventually force the SAP with a MEP on the PE to go down. Nokia recommends running commands from the BGP-MH side to control these connections.

Figure 32:  Full Redundancy G.8031 Epipe and BGP-MH 

Table 10 lists the SAP MEP signaling shown in Figure 32. For a description of the events shown in this sample operation, see Events in Sample Operation.

Table 10:  SAP MEP Signaling 

Row | G.8031 ET on CE | Path A MEP Facing Node B Local ifStatus | Path B MEP Facing Node C Local ifStatus | Path A PE MEP ifStatus | Path B PE MEP ifStatus
1 | Down (inactive) | No Fault (Note 1) | No Fault | Fault | Fault
2 | Up use Path A | No Fault | No Fault | No Fault | Fault
3 | Up use Path B | No Fault | No Fault | Fault | No Fault
4 | Down Path A fault | Fault (Note 2) | No Fault | Fault | Fault
5 | Down Path A and B fault at A | Fault | No Fault | Fault | Fault
6 | Partitioned Network: Use Path Precedence, Up use Path A | No Fault | No Fault | No Fault | No Fault

    Notes:

  1. No Fault = no ifStatusTlv transmit | CCM transmit normally
  2. Fault = ifStatusTlv transmit down | no CCM transmit

2.8.10.2.1.1. Events in Sample Operation

The following describes the events for switchover in Figure 32. This configuration uses operational groups. The nodes of interest are A, B, and C listed in Table 10.

  1. A single G.8031 SAP that represents the control for a group of G.8031 SAPs is configured on the CE.
    1. The Control SAP does not normally carry any data; however, it can if needed.
    2. An Epipe service is provisioned on each PE node (B/C), only for control (no customer traffic flows over this service).
    3. On CE A, there is an Epipe Ethernet tunnel (G.8031) control SAP.
    4. The Ethernet tunnel has two paths:
      1. one facing B
      2. one facing C
    5. PE B has an Epipe control SAP that is controlled by the BGP-MH site and PE C also has the corresponding SAP that is controlled by the same BGP-MH site.
  2. At node A, there are MEPs configured under each path that check connectivity on the A-B and A-C links. At nodes B and C, there is a MEP configured under their respective SAPs with fault propagation enabled with the use of ifStatusTlv.
  3. Initially, assume there is no link failure:
    1. SAPs on node A have ifStatusTLV No Fault to B and C (no MEP fault detected at A); see Table 10 row 1 (Fault is signaled in the other direction PE to CE).
    2. BGP-MH determines which is the master or Designated Forwarder (DF).
    3. Assume SAP on node B is picked as the DF.
    4. The MEP at Path A-B signals ifStatusTlv No Fault. Due to this signal, the MEP under the node A path facing node B detects the path to node B is usable by the path manager on A.
  4. At the CE node A, Path A-C becomes standby and is brought down; see Table 10 row 2.
    1. Since fault propagation is enabled under the SAP node C MEP, and ifStatusTLV is operationally Down, the Path remains in the present state.
    2. Under these conditions, the MEP under the node A path facing node C detects the fault and informs Ethernet manager on node A.
    3. Node A then considers bringing path A-C down.
    4. ET port remains up since path A-B is operationally up. This is a stable state.
  5. On nodes B and C, each Epipe-controlled SAP is the sole (controlling) member of an operational group.
    1. Other data SAPs may be configured for fate shared VLANs (Ethernet tunnels) and to monitor the control SAP.
    2. The SAPs facing the CE node A share the fate of the control SAP and follow the operation.
  6. If there is a break in path A-B connectivity (CCM timeout or LOS on the port for link A-B), then on node A the path MEP detects connectivity failure and informs Ethernet tunnel manager; see Table 10 row 4.
  7. At this point, the Ethernet tunnel is down since both path A-B and path A-C are down.
  8. The CE node A Ethernet tunnel goes down.
  9. At node B on the PE, the SAP also detects the failure and the propagation of fault status goes to BGP-MH; see Table 10 row 4.
  10. This in turn feeds into BGP-MH, which deems the site non-DF and makes the site standby.
  11. Since the SAP at node B is standby, the service manager feeds this to CFM, which then propagates a Fault toward node A. This is a cyclic fault propagation. However, since path A-B is broken, the situation is stable; see Table 10 row 5.
  12. There is traffic loss during the BGP-MH convergence.
    1. Load sharing mode is recommended when using a 7450 as a CE node A device.
    2. BGP-MH signals that node C is now the DF; see Table 10 row 3.
  13. BGP-MH on node C elects a SAP and brings it up.
  14. The ET port transitions to path A-C and is operationally up. This is a stable state. The A-C SAPs monitoring the operational group on C transition to operationally up.

Unidirectional failures: At point 6 the failure was detected at both ends. In the case of a unidirectional failure, CCM times out on one side.

  1. In the case where the PE detects the failure, it propagates the failure to BGP-MH and the BGP-MH takes the site down causing the SAPs on the PE to signal a Fault to the CE.
  2. In the case where G.8031 on the CE detects the failure, it takes the tunnel down and signals a fault to the PE, and then the SAP propagates that to BGP-MH.

2.8.10.3. BGP-MH Site Support for Ethernet Tunnels Operational Group Model

For operational groups, one or more services follow the controlling service. On node A, there is an ET SAP facing nodes B/C, and on nodes B/C there are SAPs of the Epipe on physical ports facing node A. Each of the PE data SAPs monitor their respective operational groups, meaning they are operationally up or down based on the operational status of the control SAPs. On node A, since the data SAP is on the ET logical port, it goes operationally down whenever the ET port goes down and similarly for going operationally up.
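The following sketch illustrates this model on a PE, reusing the oper-group and control SAP from the configuration examples later in this section. The data Epipe service and its SAP are hypothetical, and the monitor-oper-group command on the data SAP is shown as an assumption to be verified against the command reference:

    service
        oper-group "og-name-et" create
        exit
        epipe 1 customer 1 create
            sap 1/1/2:1.1 endpoint "x" create
                oper-group "og-name-et"
            exit
        exit
        epipe 2 customer 1 create
            sap 1/1/2:1.2 create
                monitor-oper-group "og-name-et"
            exit
            no shutdown
        exit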

Alternatively, an Epipe service may be provisioned on each node for each G.8031 data SAP (one-for-one service with no fate sharing). On CE node A, there will be a G.8031 Ethernet tunnel. The Ethernet tunnel has two paths: one facing node B and one facing node C. This option is the same as the control SAP, but there are no operational groups. However, now there is a BGP-MH site per service. For large sites, operational groups are more efficient.

2.8.10.4. BGP-MH Specifics for MH Site Support for Ethernet Tunnels

BGP Multi-Homing for VPLS describes the procedures for using BGP to control resiliency for VPLS. These procedures are the same except that an Epipe service can be configured for BGP-MH.

2.8.10.5. PW Redundancy for BGP-MH Site Support for Ethernet Tunnels

Pseudowire Redundancy Service Models and VLL Resilience with Pseudowire Redundancy and Switching are used for the MPLS network resiliency. BGP MH site support for Ethernet tunnels reuses this model.

2.8.10.6. T-LDP Status Notification Handling Rules of BGP-MH Epipes

Using Figure 35 as a reference, the following are the rules for generating, processing, and merging T-LDP status notifications in VLL service with endpoints.

2.8.10.6.1. Rules for Processing Endpoint SAP Active/Standby Status Bits

  1. The advertised admin forwarding status of active/standby reflects the status of the local Epipe SAP in the BGP-MH instance. If the SAP is not part of an MC-LAG instance or a BGP-MH instance, the forwarding status of Active is always advertised.
  2. When the SAP in endpoint X is part of a BGP-MH instance, a node must send T-LDP forwarding status bit of SAP active/standby over all Y endpoint spoke-SDPs, except the ICB spoke-SDP, whenever this (BGP-MH designated forwarder) status changes. The status bit sent over the ICB is always zero (Active by default).
  3. When the SAP in endpoint X is not part of an MC-LAG instance or BGP-MH instance, then the forwarding status sent over all Y endpoint spoke-SDPs should always be set to zero (Active by default).
  4. The received SAP active/standby status is saved and used for selecting the active transmit endpoint object as per the Pseudowire Redundancy procedures.

2.8.10.6.2. Rules for Processing, Merging Local, and Received Endpoint Operational Status

  1. Endpoint X is operationally up if at least one of its objects is operationally Up. It is Down if all its objects are operationally down.
  2. If the SAP in endpoint X transitions locally to the down state, or receives a SAP down notification via a SAP-specific OAM signal (SAP MEP), the node must send T-LDP SAP down status bits on the Y endpoint ICB spoke-SDP only. BGP-MH SAPs support MEPs for ifStatusTLV signaling. No other SAP types can exist on the same endpoint as an ICB spoke-SDP, since a non-Ethernet SAP cannot be part of an MC-LAG instance or a BGP-MH instance.
  3. If the ICB spoke-SDP in endpoint X transitions locally to Down state, the node must send T-LDP SDP-binding down status bits on this spoke-SDP.
  4. If the ICB spoke-SDP in endpoint X received T-LDP SDP-binding down status bits or PW not forwarding status bits, the node saves this status and takes no further action. The saved status is used for selecting the active transmit endpoint object as per Pseudowire Redundancy procedures.
  5. If all objects in endpoint X did any or all of the following, the node must send status bits of SAP Down over all Y endpoint spoke-SDPs, including the ICB:
    1. transitioned locally to the down state due to operator or BGP-MH DF election
    2. received a SAP down notification via remote T-LDP status bits or via SAP-specific OAM signal (SAP MEP)
    3. received status bits of SDP-binding down
    4. received status bits of PW not forwarding
  6. Endpoint Y is operationally up if at least one of its objects is operationally Up. It is Down if all its objects are operationally down.
  7. If a spoke-SDP in endpoint Y, including the ICB spoke-SDP, transitions locally to down state, the node must send T-LDP SDP-binding down status bits on this spoke-SDP.
  8. If a spoke-SDP in endpoint Y, including the ICB spoke-SDP, did any or all of the following, the node saves this status and takes no further action:
    1. received T-LDP SAP down status bits
    2. received T-LDP SDP-binding down status bits
    3. received PW not forwarding status bits
    The saved status is used for selecting the active transmit endpoint object as per Pseudowire Redundancy procedures.
  9. If all objects in endpoint Y, except the ICB spoke-SDP, did any or all of the following, the node must send status bits of SDP-binding down over the X endpoint ICB spoke-SDP only:
    1. transitioned locally to the down state
    2. received T-LDP SAP down status bits
    3. received T-LDP SDP-binding down status bits
    4. received PW not forwarding status bits
  10. If all objects in endpoint Y did any or all of the following, the node must send status bits of SDP-binding down over the X endpoint ICB spoke-SDP only, and must send a SAP down notification on the X endpoint SAP via the SAP-specific OAM signal:
    1. transitioned locally to down state
    2. received T-LDP SAP down status bits
    3. received T-LDP SDP-binding down status bits
    4. received PW not forwarding status bits
    In this case the SAP MEP ifStatusTLV is operationally down and also signals the BGP-MH site, if this SAP is part of a BGP Site.

2.8.10.6.3. Operation for BGP-MH Site Support for Ethernet Tunnels

A multi-homed site can be configured on up to four PEs although two PEs are sufficient for most applications, with each PE having a single object SAP connecting to the multi-homed site. SR OS G.8031 implementation with load sharing allows multiple PEs as well. The designated forwarder election chooses a single connection to be operationally up, with the other placed in standby. Only revertive behavior is supported in release 14.0.

Fate sharing (the status of one site can be inherited from another site) is achievable using monitor-groups.

The following are supported:

  1. All Ethernet-tunnel G.8031 SAPs on CE:
    1. 7750 SR, 7450 ESS, or 7950 XRS G.8031 in load sharing mode (recommended)
    2. 7750 SR, 7450 ESS, or 7950 XRS G.8031 in non-load sharing mode
  2. Epipe and endpoint with SAPs on PE devices.
  3. Endpoints with PW.
  4. Endpoints with active/standby PWs.

The following constraints apply to this feature:

  1. Not supported with PBB Epipes.
  2. Spoke SDP (pseudowire).
    1. BGP signaling is not supported.
    2. Cannot use BGP MH for auto-discovered pseudowire. This is achieved in a VPLS service using SHGs, which are not available in Epipes.
  3. Other multi-chassis redundancy features are not supported on the multi-homed site object, as follows:
    1. MC-LAG
    2. MC-EP
    3. MC-Ring
    4. MC-APS
  4. Master and Slave pseudowire is not supported.

Figure 33:  Sample Topology Full Redundancy 

See Configuration Examples for configurations derived from Figure 33.

2.8.10.6.3.1. Configuration Examples

Node-1: Using operational groups and Ethernet CFM per SAP

#--------------------------------------------------
echo "Eth-CFM Configuration"
#--------------------------------------------------
    eth-cfm
        domain 100 format none level 3
            association 2 format icc-based name "node-3-site-1-0"
                bridge-identifier 1
                exit
                remote-mepid 310 
            exit
            association 2 format icc-based name "node-3-site-1-1"
                bridge-identifier 100
                exit
                remote-mepid 311 
            exit
        exit
    exit
 
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
    service
        customer 1 create
            description "Default customer"
        exit
        sdp 2 mpls create
            far-end 10.1.1.4
            lsp "to-node-4-lsp-1"
            keep-alive
                shutdown
            exit
            no shutdown
        exit
        sdp 3 mpls create  //  Etcetera 
             
        pw-template 1 create
            vc-type vlan
        exit
        oper-group "og-name-et" create
        exit
        oper-group "og-name-et100" create
        exit
        epipe 1 customer 1 create
            service-mtu 500
            bgp
                route-distinguisher 65000:1
                route-target export target:65000:1 import target:65000:1
            exit
            site "site-1" create
                site-id 1
                sap 1/1/2:1.1
                boot-timer 100
                site-activation-timer 2
                no shutdown
            exit
            endpoint "x" create
            exit
            endpoint "y" create
            exit
            sap 1/1/2:1.1 endpoint "x" create
                eth-cfm
                    mep 130 domain 100 association 2 direction down
                        fault-propagation-enable use-if-tlv
                        ccm-enable
                        no shutdown
                    exit
                exit
                oper-group "og-name-et"
            exit
            spoke-sdp 2:1 endpoint "y" create
                precedence primary
                no shutdown
            exit
            spoke-sdp 3:1 endpoint "y" create
                precedence 2
                no shutdown
            exit
            no shutdown
        exit
        epipe 100 customer 1 create
            description "Epipe 100 in separate opergroup"
            service-mtu 500
            bgp
                route-distinguisher 65000:2
                route-target export target:65000:2 import target:65000:2
            exit
            site "site-name-et100" create
                site-id 1101
                sap 1/1/4:1.100
                boot-timer 100
                site-activation-timer 2
                no shutdown
            exit
  
            endpoint "x" create
            exit
            endpoint "y" create
            exit
            sap 1/1/4:1.100 endpoint "x" create
                    eth-cfm
                    mep 131 domain 1 association 2 direction down
                        fault-propagation-enable use-if-tlv
                        ccm-enable
                        no shutdown
                    exit
                exit
                oper-group "og-name-et100"
                
            exit
            spoke-sdp 2:2 vc-type vlan endpoint "y" create
                precedence 1
                no shutdown
            exit
            spoke-sdp 3:2 vc-type vlan endpoint "y" create
                precedence 2
                no shutdown
            exit
            no shutdown               
        exit
        
    exit
#--------------------------------------------------
echo "BGP Configuration"
#--------------------------------------------------
        bgp
            rapid-withdrawal
            rapid-update l2-vpn
            group "internal"
                type internal
                neighbor 10.1.1.2
                    family l2-vpn
                exit
            exit
        exit
    exit

Node-3: Using operational groups and Ethernet CFM per SAP

#--------------------------------------------------
echo "Eth-CFM Configuration"
#--------------------------------------------------
    eth-cfm
        domain 100 format none level 3
            association 2 format icc-based name "node-3-site-1-0"
                bridge-identifier 1
                exit
                ccm-interval 1
                remote-mepid 130
            exit
            association 2 format icc-based name "node-3-site-1-1"
                bridge-identifier 100
                exit
                ccm-interval 1
                remote-mepid 131
            exit
            association 3 format icc-based name "node-3-site-2-0"
                bridge-identifier 1
                exit
                ccm-interval 1
                remote-mepid 120
            exit
            association 3 format icc-based name "node-3-site-2-1"
                bridge-identifier 100
                exit
                ccm-interval 1
                remote-mepid 121
            exit
        exit
    exit
 
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
 
  eth-tunnel 1
        description "Eth Tunnel loadsharing mode QinQ example"
        protection-type loadsharing
        ethernet
            encap-type qinq
        exit
        path 1
            member 1/1/3
            control-tag 1.1
            eth-cfm
                mep 310 domain 100 association 2
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        path 2
            member 1/1/4
            control-tag 1.2
            eth-cfm
                mep 320 domain 100 association 3
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        no shutdown
    exit
#--------------------------------------------------
echo "Ethernet Tunnel Configuration"
#--------------------------------------------------
    eth-tunnel 2
        description "Eth Tunnel QinQ"
        revert-time 10
        path 1
            precedence primary
            member 1/1/1
            control-tag 1.100
            eth-cfm
                mep 311 domain 100 association 2
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        path 2
            member 1/1/2
            control-tag 1.100
            eth-cfm
                mep 321 domain 100 association 3
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        no shutdown
    exit
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
    service
        epipe 1 customer 1 create
            sap 2/1/2:1.1 create
            exit
            sap eth-tunnel-1 create
            exit
            no shutdown
        exit
        epipe 100 customer 1 create
            service-mtu 500
            sap 2/1/10:1.100 create
            exit
            sap eth-tunnel-2 create
            exit
            no shutdown
        exit

2.8.10.6.3.2. Configuration with Fate Sharing on Node-3

In this example, the SAPs monitoring the operational groups do not need CFM if the corresponding SAP on the CE side is using fate sharing.

Node-1:

#--------------------------------------------------
echo "Service Configuration"  Oper-groups 
#--------------------------------------------------
    service
        customer 1 create
            description "Default customer"
        exit
        sdp 2 mpls create
          ...
 
        exit
        pw-template 1 create
            vc-type vlan
        exit
        oper-group "og-name-et" create
        exit
        epipe 1 customer 1 create
            service-mtu 500
            bgp
                route-distinguisher 65000:1
                route-target export target:65000:1 import target:65000:1
            exit
            site "site-1" create
                site-id 1
                sap 1/1/2:1.1
                boot-timer 100
                site-activation-timer 2
                no shutdown
            exit
            endpoint "x" create
            exit
            endpoint "y" create
            exit
            sap 1/1/2:1.1 endpoint "x" create
                eth-cfm
                    mep 130 domain 100 association 1 direction down
                        fault-propagation-enable use-if-tlv
                        ccm-enable
                        no shutdown
                    exit
                exit
                oper-group "og-name-et"
            exit
            spoke-sdp 2:1 endpoint "y" create
                precedence primary
                no shutdown
            exit
            spoke-sdp 3:1 endpoint "y" create
                precedence 2
                no shutdown
            exit
            no shutdown
        exit
        epipe 2 customer 1 create
            description "Epipe 2 in opergroup with Epipe 1"
            service-mtu 500
            bgp
                route-distinguisher 65000:2
                route-target export target:65000:2 import target:65000:2
            exit
            endpoint "x" create
            exit
            endpoint "y" create
            exit
            sap 1/1/2:1.2 endpoint "x" create
                monitor-oper-group "og-name-et"
            exit
            spoke-sdp 2:2 vc-type vlan endpoint "y" create
                precedence 1
                no shutdown
            exit
            spoke-sdp 3:2 vc-type vlan endpoint "y" create
                precedence 2
                no shutdown
            exit
            no shutdown               
        exit
        
    exit

Node-3:

#--------------------------------------------------
echo "Eth-CFM Configuration" 
#--------------------------------------------------
    eth-cfm
        domain 100 format none level 3
            association 1 format icc-based name "node-3-site-1-0"
                bridge-identifier 1
                exit
                ccm-interval 1
                remote-mepid 130
            exit
            association 2 format icc-based name "node-3-site-2-0"
                bridge-identifier 2
                exit
                ccm-interval 1
                remote-mepid 120
            exit
        exit
    exit
 
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
 
  eth-tunnel 1
        description "Eth Tunnel loadsharing mode QinQ example"
        protection-type loadsharing
        ethernet
            encap-type qinq
        exit
        path 1
            member 1/1/1
            control-tag 1.1
            eth-cfm
                mep 310 domain 100 association 1
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        path 2
            member 1/1/2
            control-tag 1.1
            eth-cfm
                mep 320 domain 100 association 2
                    ccm-enable
                    control-mep
                    no shutdown
                exit
            exit
            no shutdown
        exit
        no shutdown
    exit
 
 
#--------------------------------------------------
echo "Service Configuration"
#--------------------------------------------------
    service
        epipe 1 customer 1 create
            sap 1/10/1:1 create
            exit
            sap eth-tunnel-1 create
            exit
            no shutdown
        exit
#--------------------------------------------------
echo "Service Configuration for a shared fate Ethernet Tunnel"
#--------------------------------------------------
        epipe 2 customer 1 create
            sap 1/10/2:3 create
            exit
            sap eth-tunnel-1:2 create
                eth-tunnel
                    path 1 tag 1.2
                    path 2 tag 1.2
                exit
            exit
            no shutdown
        exit

2.8.11. Access Node Resilience Using MC-LAG and Pseudowire Redundancy

Figure 34 shows the use of both Multi-Chassis Link Aggregation (MC-LAG) in the access network and pseudowire redundancy in the core network to provide a resilient end-to-end VLL service to the customers.

Figure 34:  Access Node Resilience 

In this application, a new pseudowire status bit of active or standby indicates the status of the SAP in the MC-LAG instance in the SR-series aggregation node. All spoke-SDPs are of secondary type and there is no use of a primary pseudowire type in this mode of operation. Node A is in the active state according to its local MC-LAG instance and therefore advertises active status notification messages to both its peer pseudowire nodes; for example, nodes C and D. Node D performs the same operation. Node B is in the standby state according to the status of the SAP in its local MC-LAG instance, so advertises standby status notification messages to both nodes C and D. Node C performs the same operation.

An SR-series node selects a pseudowire as the active path for forwarding packets when both the local pseudowire status and the received remote pseudowire status indicate active status. However, an SR-series device in standby status according to the SAP in its local MC-LAG instance is capable of processing packets for a VLL service received over any of the pseudowires that are up. This is to avoid black holing of user traffic during transitions. The standby SR-series node forwards these packets to the active node over the Inter-Chassis Backup (ICB) pseudowire for this VLL service. An ICB is a spoke-SDP used by an MC-LAG node to back up an MC-LAG SAP during transitions. The same ICB can also be used by the peer MC-LAG node to protect against network failures causing the active pseudowire to go down.

At configuration time, the user specifies a precedence parameter for each of the pseudowires that are part of the redundancy set, as described in the application in VLL Resilience with Two Destination PE Nodes. An SR-series node uses this to select which pseudowire to forward packets to in case both pseudowires show active/active for the local/remote status during transitions.

Only VLL service of type Epipe is supported in this application. Also, ICB spoke-SDP can only be added to the SAP side of the VLL cross-connect if the SAP is configured on an MC-LAG instance.
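
The following is a sketch of how an Epipe on one of the MC-LAG aggregation nodes might be configured, combining the MC-LAG SAP and its ICB spoke-SDP in one endpoint with the secondary spoke-SDPs toward the remote PEs and a second ICB in the other endpoint (all LAG, SDP, and VC identifiers are illustrative):

# illustrative sketch: LAG SAP, SDP, and VC identifiers are examples only
configure
    service
        epipe 10 customer 1 create
            endpoint "x" create
            exit
            endpoint "y" create
            exit
            sap lag-1:10 endpoint "x" create
            exit
            spoke-sdp 1:100 endpoint "x" icb
            exit
            spoke-sdp 20:10 endpoint "y" create
                precedence 1
                no shutdown
            exit
            spoke-sdp 30:10 endpoint "y" create
                precedence 2
                no shutdown
            exit
            spoke-sdp 1:200 endpoint "y" icb
            exit
            no shutdown
        exit

The peer MC-LAG node would mirror this configuration, with the VC IDs of the two ICB spoke-SDPs swapped between its X and Y endpoints, similar to the MC-APS example in MC-APS and MC-LAG.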

2.8.12. VLL Resilience for a Switched Pseudowire Path

Figure 35 shows the use of both pseudowire redundancy and pseudowire switching to provide a resilient VLL service across multiple IGP areas in a provider network.

Figure 35:  VLL Resilience with Pseudowire Redundancy and Switching 

Pseudowire switching is a method for scaling a large network of VLL or VPLS services by removing the need for a full mesh of T-LDP sessions between the PE nodes as the number of these nodes grows over time.

As in the application in VLL Resilience with Two Destination PE Nodes, the T-PE1 node switches the path of a VLL to a secondary standby pseudowire if a network-side failure causes the VLL binding status to go operationally down or if T-PE2 notifies it that the remote SAP went down. This application requires that pseudowire status notification messages generated by either a T-PE node or an S-PE node be processed and relayed by the S-PE nodes.

It is possible that the secondary pseudowire path terminates on the same target PE as the primary; for example, T-PE2. This provides protection against network-side failures but not against a remote SAP failure. When the target destination PE for the primary and secondary pseudowires is the same, T-PE1 will normally not switch the VLL path onto the secondary pseudowire upon receipt of a pseudowire status notification indicating the remote SAP is down, because the status notification is sent over both the primary and secondary pseudowires. However, the status notification on the primary pseudowire may arrive earlier than the one on the secondary pseudowire due to the differential delay between the paths. This will cause T-PE1 to switch the path of the VLL to the secondary standby pseudowire and remain there until the status notification is cleared, at which point the VLL path is switched back to the primary pseudowire because of the revertive behavior. The path will not switch back to a secondary path when that path comes up, even if it has a higher precedence than the currently active secondary path.

For the 7750 SR, this application can make use of all types of VLL supported on the routers; for example, Apipe, Fpipe, Epipe, and Ipipe services. A SAP can be configured on a SONET/SDH port that is part of an APS group. However, if a SAP is configured on an MC-LAG instance, only the Epipe service type will be allowed.

2.9. Pseudowire Redundancy Service Models

This section describes the various MC-LAG and pseudowire redundancy scenarios as well as the algorithm used to select the active transmit object in a VLL endpoint.

The redundant VLL service model is described in the following section, Redundant VLL Service Model.

2.9.1. Redundant VLL Service Model

To implement pseudowire redundancy, a VLL service accommodates more than a single object on the SAP side and on the spoke-SDP side. Figure 36 shows the model for a redundant VLL service based on the concept of endpoints.

Figure 36:  Redundant VLL Endpoint Objects 

By default a VLL service supports two implicit endpoints managed internally by the system. Each endpoint can only have one object: a SAP or a spoke-SDP.

To add more objects, up to two explicitly named endpoints may be created per VLL service. The endpoint name is locally significant to the VLL service. They are referred to as endpoint X and endpoint Y as illustrated in Figure 36.

Figure 36 is just an example; the Y endpoint can also have a SAP and/or an ICB spoke-SDP. The following describes the four types of endpoint objects supported and the rules used when associating them with an endpoint of a VLL service:

  1. SAP — There can only be a maximum of one SAP per VLL endpoint.
  2. Primary spoke-SDP — The VLL service always uses this pseudowire and only switches to a secondary pseudowire when this primary pseudowire is down; the VLL service switches the path to the primary pseudowire when it is back up. The user can configure a timer to delay reverting back to primary or to never revert. There can only be a maximum of one primary spoke-SDP per VLL endpoint.
  3. Secondary spoke-SDP — There can be a maximum of four secondary spoke-SDPs per endpoint. The user can configure the precedence of a secondary pseudowire to indicate the order in which a secondary pseudowire is activated.
  4. Inter-Chassis Backup (ICB) spoke-SDP — This special pseudowire is used for MC-LAG and pseudowire redundancy applications. Forwarding between ICBs is blocked on the same node. The user has to explicitly indicate that the spoke-SDP is an ICB at creation time. However, there are a few scenarios in which the user can configure the spoke-SDP as either an ICB or a regular spoke-SDP on a specified node; the CLI for those cases indicates both options.

A VLL service endpoint can only use a single active object to transmit at any specified time but can receive from all endpoint objects.

An explicitly named endpoint can have a maximum of one SAP and one ICB. When a SAP is added to the endpoint, only one more object of type ICB spoke-SDP is allowed. The ICB spoke-SDP cannot be added to the endpoint if the SAP is not part of an MC-LAG instance. Conversely, a SAP that is not part of an MC-LAG instance cannot be added to an endpoint that already has an ICB spoke-SDP.

An explicitly named endpoint that does not have a SAP object can have a maximum of four spoke-SDPs and can include any of the following (see the sketch after this list):

  1. a single primary spoke-SDP
  2. one or many secondary spoke-SDPs with precedence
  3. a single ICB spoke-SDP
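
As an illustration only, such an endpoint carrying a primary spoke-SDP, two secondary spoke-SDPs with different precedences, and a revert timer could be configured as follows (the SAP side of the service is omitted and all values are examples):

# illustrative sketch: service, SDP, and timer values are examples only
configure
    service
        epipe 1 customer 1 create
            endpoint "y" create
                revert-time 60
            exit
            spoke-sdp 10:1 endpoint "y" create
                precedence primary
                no shutdown
            exit
            spoke-sdp 20:1 endpoint "y" create
                precedence 1
                no shutdown
            exit
            spoke-sdp 30:1 endpoint "y" create
                precedence 2
                no shutdown
            exit
            no shutdown
        exit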

2.9.2. T-LDP Status Notification Handling Rules

Using Redundant VLL Endpoint Objects as a reference, the following are the rules for generating, processing, and merging T-LDP status notifications in a VLL service with endpoints. Any allowed combination of objects, as specified in Redundant VLL Service Model, can be used on endpoints X and Y. The following sections use the specific combination of objects shown in Figure 36 as an example to describe the more general rules.

2.9.2.1. Processing Endpoint SAP Active/Standby Status Bits

The advertised admin forwarding status of active/standby reflects the status of the local LAG SAP in an MC-LAG application. If the SAP is not part of an MC-LAG instance, the forwarding status of active is always advertised.

When the SAP in endpoint X is part of an MC-LAG instance, a node must send a T-LDP forwarding status bit of SAP active/standby over all Y endpoint spoke-SDPs, except the ICB spoke-SDP, whenever this status changes. The status bit sent over the ICB is always zero (active by default).

When the SAP in endpoint X is not part of an MC-LAG instance, then the forwarding status sent over all Y endpoint spoke-SDPs should always be set to zero (active by default).

2.9.2.2. Processing and Merging

Endpoint X is operationally up if at least one of its objects is operationally up. It is down if all of its objects are operationally down.

If the SAP in endpoint X transitions locally to the down state, or receives a SAP down notification by a SAP-specific OAM signal, the node must send T-LDP SAP down status bits on the Y endpoint ICB spoke-SDP only. An Ethernet SAP does not support a SAP OAM protocol. All other SAP types cannot exist on the same endpoint as an ICB spoke-SDP because a non-Ethernet SAP cannot be part of an MC-LAG instance.

If the ICB spoke-SDP in endpoint X transitions locally to down state, the node must send T-LDP SDP-binding down status bits on this spoke-SDP.

If the ICB spoke-SDP in endpoint X received T-LDP SDP-binding down status bits or pseudowire not forwarding status bits, the node saves this status and takes no further action. The saved status is used for selecting the active transmit endpoint object.

If all objects in endpoint X did any or all of the following, the node must send status bits of SAP down over all Y endpoint spoke-SDPs, including the ICB:

  1. transitioned locally to down state
  2. received a SAP down notification by remote T-LDP status bits or by SAP-specific OAM signal
  3. received SDP-binding down status bits
  4. received PW not forwarding status bits

Endpoint Y is operationally up if at least one of its objects is operationally up. It is down if all its objects are operationally down.

If a spoke-SDP in endpoint Y, including the ICB spoke-SDP, transitions locally to down state, the node must send T-LDP SDP-binding down status bits on this spoke-SDP.

If a spoke-SDP in endpoint Y, including the ICB spoke-SDP, did any or all of the following, the node saves this status and takes no further action:

  1. received T-LDP SAP down status bits
  2. received T-LDP SDP-binding down status bits
  3. received PW not forwarding status bits

The saved status is used for selecting the active transmit endpoint object.

If all objects in endpoint Y, except the ICB spoke-SDP, did any or all of the following, the node must send status bits of SDP-binding down over the X endpoint ICB spoke-SDP only:

  1. transitioned locally to the down state
  2. received T-LDP SAP down status bits
  3. received T-LDP SDP-binding down status bits
  4. received PW not forwarding status bits

If all objects in endpoint Y did any or all of the following, the node must send status bits of SDP-binding down over the X endpoint ICB spoke-SDP, and must send a SAP down notification on the X endpoint SAP by the SAP-specific OAM signal, if applicable:

  1. transitioned locally to down state
  2. received T-LDP SAP down status bits
  3. received T-LDP SDP-binding down status bits
  4. received PW not forwarding status bits

An Ethernet SAP does not support signaling status notifications.

2.10. High-Speed Downlink Packet Access (HSDPA) Off Load Fallback over ATM

For many Universal Mobile Telecommunications System (UMTS) networks planning to deploy High-Speed Downlink Packet Access (HSDPA), the existing mobile backhaul topology consists of a cell site that is partially backhauled over DSL (for the HSDPA portion) and partially over an existing TDM/ATM infrastructure (for UMTS voice traffic).

Figure 37:  HSDPA Off Load Fallback over ATM 

For example, the service provider may use a 7705 SAR with one or two ATM E1 uplinks for real-time voice traffic and an Ethernet uplink connected to a DSL modem for non-real-time (NRT) data traffic. At the RNC site, a 7750 SR service router can be used, connected by ASAP (E1 IMA bundles) or STM-n ATM to the TDM/ATM network, and by Ethernet to the DSL backhaul network.

On the MSC-located SR connected to the Radio Network Controller (RNC), there is a standard pseudowire (Ethernet or ATM) whose active path runs over IP/MPLS, but the standby path is not IP/MPLS capable. Therefore, the active/standby pseudowire concept is extended to allow the standby to be an access SAP to an ATM network (for an ATM pseudowire) or to Ethernet bridged over ATM (for an Ethernet pseudowire).

Normally, if the MPLS pseudowire path is active, this path is used. If a failure happens on the IP/MPLS path, detected through BFD-TLDP or local notification, traffic needs to switch to the SAP that is connected to the ATM/TDM backhaul network. As soon as the MPLS pseudowire path becomes available again, reversion back to the pseudowire path is supported.

2.10.1. Primary Spoke SDP Fallback to Secondary SAP

For HSDPA, Apipe and Epipe services terminate on the SR, where an endpoint X SAP connects to the mobile RNC (by ATM or Ethernet) and endpoint Y has a primary spoke-SDP and a secondary SAP on an SR ATM or ASAP MDA (with bridged PDU encapsulation for Epipes). The secondary SAP has the same restrictions as the SAP in endpoint X for Apipe and Epipe, respectively.

It is sufficient to have a single secondary SAP (without any precedence), which implies that it cannot be mixed with any secondary spoke-SDPs; 1+1 APS and MC-APS are supported on the secondary SAP interface.

Similar to the current pseudowire redundancy implementation, receive should be enabled on both objects even though transmit is only enabled on one.

It is expected that BFD for T-LDP will be used in most applications to decrease the fault detection times and minimize the outage times upon failure.
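
The following sketch shows this arrangement without MC-APS protection; endpoint X holds the RNC-facing SAP and endpoint Y holds the primary spoke-SDP and the fallback SAP toward the ATM/TDM network (ports, VPI values, and the SDP are illustrative):

# illustrative sketch: ports, VPI values, and the SDP are examples only
configure
    service
        epipe 1 customer 1 create
            endpoint "X" create
            exit
            endpoint "Y" create
            exit
            sap 1/1/2:0 endpoint "X" create
            exit
            spoke-sdp 10:500 endpoint "Y" create
                precedence primary
                no shutdown
            exit
            sap 1/1/3:0 endpoint "Y" create
            exit
            no shutdown
        exit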

2.10.2. Reversion to Primary Spoke SDP Path

The endpoint revert-time command in the config>service>apipe>endpoint and config>service>epipe>endpoint contexts controls reversion from the secondary to the primary path and is consistent with standard pseudowire redundancy. Various network configurations and equipment require different reversion configurations. The default revert-time is 0.
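
For example, a 30-second revert timer on the fallback endpoint of an Apipe would be configured as follows (the service ID and timer value are illustrative):

# illustrative sketch: the service ID and timer value are examples only
configure
    service
        apipe 1
            endpoint "Y"
                revert-time 30
            exit
        exit
    exit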

2.10.3. MC-APS and MC-LAG

In many cases, 7750 SRs are deployed in redundant pairs at the MSC. In this case, MC-APS is typically used for all ATM connections. Figure 38 shows this case, assuming that MC-APS is deployed on both the RNC connection and the ATM network connection. For MC-APS to be used, clear channel SONET or SDH connections should be used.

Figure 38:  HSDPA Off Load Fallback with MC-APS 

In this scenario, endpoint Y allows an ICB spoke-SDP as well as the primary spoke-SDP and secondary SAP. ICB operation is maintained as in the existing redundant pseudowire operation, and the ICB spoke-SDP is always given an active status. The ICB spoke-SDP is only used if both the primary spoke-SDP and the secondary SAP are not available. The secondary SAP is used if it is operationally up and the primary spoke-SDP pseudowire status is not active. Receive is enabled on all objects even though transmit is only enabled on one.

To allow correct operation in all failure scenarios, an ICB spoke-SDP must be added to endpoint X. The ICB spoke-SDP is only used if the SAP is operationally down.

The following is an example configuration of Epipes mapping to Figure 38. A SAP can be added to an endpoint with a non-ICB spoke-SDP only if the precedence of the spoke is primary.

7750 SR #1

*A:ALA-A>config>service#  epipe 1
----------------------------------------------
            endpoint X
            exit
            endpoint Y
            exit
            sap 1/1/2:0 endpoint X
              exit
            spoke-sdp 1:100 endpoint X icb
              exit
            spoke-sdp 10:500 endpoint Y 
              precedence primary
              exit 
            sap 1/1/3:0 endpoint Y
            exit
            spoke-sdp 1:200 endpoint Y icb
            exit
----------------------------------------------
*A:ALA-A>config>service#
 

7750 SR #2

*A:ALA-B>config>service# epipe 1
----------------------------------------------
            endpoint X
            exit
            endpoint Y
            exit
            sap 2/3/4:0 endpoint X
            exit
            spoke-sdp 1:200 endpoint X icb
            exit
            spoke-sdp 20:600 endpoint Y
              precedence primary 
              exit
            sap 2/3/5:0 endpoint Y
            exit
            spoke-sdp 1:100 endpoint Y icb
            exit
----------------------------------------------
*A:ALA-B>config>service#

2.10.3.1. Failure Scenario

Based on the previously mentioned rules, the following is an example of a failure scenario. Assuming both links are active on 7750 SR #1 and the Ethernet connection to the cell site fails (most likely failure scenario because the connection would not be protected), SDP1 would go down and the secondary SAP would be used in 7750 SR #1 and 7705 SAR, as shown in Figure 39.

Figure 39:  Ethernet Failure At Cell Site 

If the active link to the Layer 2 switched network was on 7750 SR #2 at the time of the failure, SAP1 would be operationally down (because the link is in standby) and ICBA would be used. Because the RNC SAP on 7750 SR #2 is on a standby APS link, ICBA would be active and it would connect to SAP2 because SDP2 is operationally down as well.

All APS link failures would be handled through the standard pseudowire status messaging procedures for the RNC connection and through standard ICB usage for the Layer 2 switched network connection.

2.11. VLL Using G.8031 Protected Ethernet Tunnels

The use of MPLS tunnels provides the 7450 ESS and 7750 SR OS a way to scale the core while offering fast failover times using MPLS FRR. In environments where Ethernet services are deployed using native Ethernet backbones, Ethernet tunnels are provided to achieve the same fast failover times as in the MPLS FRR case.

The Nokia VLL implementation offers the capability to use core Ethernet tunnels compliant with ITU-T G.8031 specification to achieve 50 ms resiliency for backbone failures. This is required to comply with the stringent SLAs provided by service providers. Epipe and Ipipe services are supported.

When using Ethernet tunnels, the Ethernet tunnel logical interface is created first. The Ethernet tunnel has member ports, which are the physical ports supporting the links. The Ethernet tunnel control SAPs carry G.8031 and 802.1ag control traffic and user data traffic. Ethernet service SAPs are configured on the Ethernet tunnel. Optionally, when tunnels follow the same paths, end-to-end services may be configured with fate shared Ethernet tunnel SAPs, which carry only user data traffic and share the fate of the Ethernet tunnel port (if correctly configured).

Ethernet tunnels provide a logical interface that VLL SAPs may use just as regular interfaces. The Ethernet tunnel provides resiliency by providing end-to-end tunnels. The tunnels are stitched together by VPLS or Epipe services at intermediate points. Epipes offer a more scalable option.

For further information, refer to 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide.

2.12. MPLS Entropy Label and Hash Label

The router supports the MPLS entropy label (RFC 6790) and the Flow Aware Transport label, known as the hash label (RFC 6391). These labels allow LSR nodes in a network to load-balance labeled packets in a much more granular fashion than allowed by just hashing on the standard label stack. See the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for further information.

2.13. BGP Virtual Private Wire Service (VPWS)

BGP Virtual Private Wire Service (VPWS) is a point-to-point L2 VPN service based on RFC 6624 Layer 2 Virtual Private Networks using BGP for Auto-Discovery and Signaling which in turn uses the BGP pseudowire signaling concepts from RFC 4761, Virtual Private LAN Service Using BGP for Auto-Discovery and Signaling.

The BGP signaled pseudowires created can use either automatic or pre-provisioned SDPs over LDP- or BGP-signaled tunnels (the choice of tunnel depends on the tunnel's preference in the tunnel table), or over GRE. Pre-provisioned SDPs must be configured when RSVP signaled transport tunnels are used.

The use of an automatically created GRE tunnel is enabled by creating the PW template used within the service with the auto-gre-sdp parameter. The GRE SDP and SDP binding are created after a matching BGP route is received.
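
For example, a pseudowire template that triggers automatic GRE SDP creation could be defined as follows and then referenced from the service with a pw-template-binding (the template ID is illustrative):

# illustrative sketch: the template ID is an example only
configure
    service
        pw-template 2 auto-gre-sdp create
            vc-type vlan
        exit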

Inter-AS model C and dual-homing are supported.

2.13.1. Single-Homed BGP VPWS

A single-homed BGP VPWS service is implemented as an Epipe connecting a SAP or static GRE tunnel (a spoke-SDP using a GRE SDP configured with static MPLS labels) and a BGP signaled pseudowire, maintaining the Epipe properties such as no MAC learning. The MPLS pseudowire data plane uses a two-label stack; the inner label is derived from the BGP signaling and identifies the Epipe service while the outer label is the tunnel label of an LSP transporting the traffic between the two end systems.

Figure 40 shows how this service would be used to provide a virtual leased line service (VLL) across an MPLS network between two sites: A and B.

Figure 40:  Single-Homed BGP-VPWS Example 

An Epipe is configured on PE1 and PE2 with BGP VPWS enabled. PE1 and PE2 are connected to site A and B, respectively, each using a SAP. The interconnection between the two PEs is achieved through a pseudowire that is signaled using BGP VPWS updates over a specific tunnel LSP.
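
A minimal configuration sketch for PE1 follows; it uses the same pattern as the vc-switching example shown later in BGP VPWS Pseudowire Switching, but with a local SAP in place of the static GRE spoke-SDP (all identifiers are illustrative):

# illustrative sketch: all identifiers are examples only
configure
    service
        pw-template 1 create
            vc-type vlan
        exit
        epipe 1 customer 1 create
            description "BGP VPWS single-homed"
            bgp
                route-distinguisher 65536:1
                route-target export target:65536:1 import target:65536:1
                pw-template-binding 1
                exit
            exit
            bgp-vpws
                ve-name "PE1"
                    ve-id 1
                exit
                remote-ve-name "PE2"
                    ve-id 2
                exit
                no shutdown
            exit
            sap 1/1/1:1 create
            exit
            no shutdown
        exit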

2.13.2. Dual-Homed BGP VPWS

A BGP-VPWS service can benefit from dual-homing, as described in IETF Draft draft-ietf-bess-vpls-multihoming-01. When using dual-homing, two PEs connect to a site, with one PE being the designated forwarder for the site and the other blocking its connection to the site. On failure of the active PE, its pseudowire, or its connection to the site, the other PE becomes the designated forwarder and unblocks its connection to the site.

2.13.2.1. Single Pseudowire Example

A pseudowire is established between the designated forwarder of the dual-homed PEs and the remote PE. If a failure causes a change in the designated forwarder, the pseudowire is deleted and reestablished between the remote PE and the new designated forwarder. This topology requires that the VE IDs on the dual-homed PEs are set to the same value.

A dual-homed, single pseudowire topology example is shown in Figure 41.

Figure 41:  Dual-Homed BGP VPWS with Single Pseudowire 

An Epipe with BGP VPWS enabled is configured on each PE. Site A is dual-homed to PE1 and PE2 with the remote PE (PE3) connecting to site B. An Epipe service is configured on each PE in which there is a SAP connecting to the local site.

The pair of dual-homed PEs perform a designated forwarder election, which is influenced by BGP route selection, the site state, and by configuring the site-preference. A site will only be eligible to be the designated forwarder if it is up (the site state will be down if there is no pseudowire established or if the pseudowire is in an oper down state). The winner, for example PE1, becomes the active switch for traffic sent to and from site A, while the loser blocks its connection to site A.

Pseudowires are signaled using BGP from PE1 and PE2 to PE3, but only from PE3 to the designated forwarder in the opposite direction (so only one bi-directional pseudowire is established). There is no pseudowire between PE1 and PE2; this is achieved by configuration.

Traffic is sent and received on the pseudowire connecting PE3 and the designated forwarder, PE1.

If the site state is oper down, both the D and Circuit Status Vector (CSV) bits (see the following for more details) are set in the BGP-VPWS update which will cause the remote PE to use the pseudowire to the new designated forwarder.

2.13.2.2. Active/Standby Pseudowire Example

Pseudowires are established between the remote PE and each dual-homed PE. The remote PE can receive traffic on either pseudowire, but will only send on the one to the designated forwarder. This creates an active/standby pair of pseudowires. At most, one standby pseudowire will be established; this being determined using the tie-breaking rules defined in the multi-homing draft. This topology requires each PE to have a different VE ID.

A dual-homed, active/standby pseudowires topology example is shown in Figure 42.

Figure 42:  Dual-homed BGP VPWS with Active/Standby Pseudowires 

An Epipe with BGP VPWS enabled is configured on each PE. Site A is dual-homed to PE1 and PE2 with the remote PE (PE3) connecting to site B. An Epipe service is configured on each PE in which there is a SAP connecting to the local site.

The pair of dual-homed PEs perform a designated forwarder election, which is influenced by configuring the site-preference. The winner, PE1 (based on its higher site-preference) becomes the active switch for traffic sent to and from site A, while the loser, PE2, blocks its connection to site A. Pseudowires are signaled using BGP between PE1 and PE3, and between PE2 and PE3. There is no pseudowire between PE1 and PE2; this is achieved by configuration. The active/standby pseudowires on PE3 are part of an endpoint automatically created in the Epipe service.

Traffic is sent and received on the pseudowire connected to the designated forwarder, PE1.

2.13.3. BGP VPWS Pseudowire Switching

Pseudowire switching is supported with a BGP VPWS service allowing the cross connection between a BGP VPWS signaled spoke-SDP and a static GRE tunnel, the latter being a spoke-SDP configured with static MPLS labels using a GRE SDP. No other spoke-SDP types are supported. Support is not included for BGP multi-homing using an active and a standby pseudowire to a pair of remote PEs.

Operational state changes to the GRE tunnel are reflected in the state of the Epipe and propagated accordingly in the BGP VPWS spoke-SDP status signaling, specifically using the BGP update D and CSV bits.

The following configuration is required:

  1. The Epipe service must be created using the vc-switching parameter.
  2. The GRE tunnel spoke-SDP must be configured using a GRE SDP with signaling off, and have the ingress and egress vc-labels statically configured.

An example configuration is as follows:

configure
    service
        sdp 1 create
            signaling off
            far-end 192.168.1.1
            keep-alive
                shutdown
            exit
            no shutdown
        exit
        pw-template 1 create
        exit
        epipe 1 customer 1 vc-switching create
            description "BGP VPWS service"
            bgp
                route-distinguisher 65536:1
                route-target export target:65536:1 import target:65536:1
                pw-template-binding 1
                exit
            exit
            bgp-vpws
                ve-name "PE1"
                    ve-id 1
                exit
                remote-ve-name "PE2"
                    ve-id 2
                exit
                no shutdown
            exit
            spoke-sdp 1:1 create
                ingress
                    vc-label 1111
                exit
                egress
                    vc-label 1122
                exit
                no shutdown
            exit
            no shutdown
        exit

2.13.3.1. Pseudowire Signaling

The BGP signaling mechanism used to establish the pseudowires is described in the BGP VPWS standards with the following differences:

  1. As stated in Section 3 of RFC 6624, there are two modifications of messages when compared to RFC 4761.
    1. the Encaps Types supported in the associated extended community
    2. the addition of a circuit status vector sub-TLV at the end of the VPWS NLRI
  2. The control flags and VPLS preference in the associated extended community are based on IETF Draft draft-ietf-bess-vpls-multihoming-01.

Figure 43 shows the format of the BGP VPWS update extended community.

Figure 43:  BGP VPWS Update Extended Community Format 
  1. Extended Community Type — The value allocated by IANA for this attribute is 0x800A.
  2. Encaps Type — Encapsulation type, identifies the type of pseudowire encapsulation. Ethernet VLAN (4) and Ethernet Raw mode (5), as described in RFC 4448, are the only values supported. If there is a mismatch between the Encaps Type signaled and the one received, the pseudowire is created but with the operationally down state.
  3. Control Flags — Control information regarding the pseudowires, see Figure 44 for more information.
  4. Layer 2 MTU — Maximum Transmission Unit to be used on the pseudowires. If the received Layer 2 MTU is zero, no MTU check is performed and the related pseudowire is established. If there is a mismatch between the local service-mtu and the received Layer 2 MTU, the pseudowire is created with the operationally down state and an MTU/Parameter mismatch indication.
  5. VPLS Preference – VPLS preference has a default value of zero for BGP-VPWS updates sent by the system, indicating that it is not in use. If the site-preference is configured, its value is used for the VPLS preference and is also used in the local designated forwarder election.
    On receipt of a BGP VPWS update containing a non-zero value, it will be used to determine to which system the pseudowire is established, as part of the VPWS update process tie-breaking rules. The BGP local preference of the BGP VPWS update sent by the system is set to the same value as the VPLS preference if the latter is non-zero, as required by the draft (as long as the D bit in the extended community is not set to 1). Consequently, attempts to change the BGP local preference when exporting a BGP VPWS update with a non-zero VPLS preference will be ignored. This prevents the updates being treated as malformed by the receiver of the update.
    For inter-AS, the preference information must be propagated between autonomous systems using the VPLS preference. Consequently, if the VPLS preference in a BGP-VPWS or BGP multi-homing update is zero, the local preference is copied by the egress ASBR into the VPLS preference field before sending the update to the eBGP peer. The adjacent ingress ASBR then copies the received VPLS preference into the local preference to prevent the update from being considered malformed.

The control flags are shown in Figure 44.

Figure 44:  Control Flags 

The following bits in the Control Flags are defined:

D — Access circuit down indicator from IETF Draft draft-kothari-l2vpn-auto-site-id-01. D is 1 if all access circuits are down, otherwise D is 0.

A — Automatic site ID allocation, which is not supported. This is ignored on receipt and set to 0 on sending.

F — MAC flush indicator. This is not supported because it relates to a VPLS service. This is set to 0 and ignored on receipt.

C — Presence of a control word. Control word usage is supported. When this is set to 1, packets are sent, and expected to be received, with a control word. When this is set to 0 (the default), packets are sent, and expected to be received, without a control word.

S — Sequenced delivery. Sequenced delivery is not supported. This is set to 0 on sending (no sequenced delivery) and, if a non-zero value is received (indicating sequenced delivery required), the pseudowire will not be created.

The BGP VPWS NLRI is based on that defined for BGP VPLS, but is extended with a circuit status vector, as shown in Figure 45.

Figure 45:  BGP VPWS NLRI 

The VE ID value is configured within each BGP VPWS service, the label base is chosen by the system, and the VE block offset corresponds to the remote VE ID because a VE block size of 1 is always used.

The circuit status vector is encoded as a TLV, as shown in Figure 46 and Figure 47.

Figure 46:  BGP VPWS NLRI TLV Extension Format 
Figure 47:  Circuit Status Vector TLV Type 

The circuit status vector is used to indicate the status of both the SAP/GRE tunnel and the status of the spoke-SDP within the local service. Because the VE block size used is 1, the most significant bit in the circuit status vector TLV value will be set to 1 if either the SAP/GRE tunnel or spoke-SDP is down, otherwise it will be set to 0. On receiving a circuit status vector, only the most significant byte of the CSV is examined for designated forwarder selection purposes.

If a circuit status vector length field of greater than 32 is received, the update will be ignored and not reflected to BGP neighbors. If the length field is greater than 800, a notification message will be sent and the BGP session will restart. Also, BGP VPWS services support a single access circuit, so only the most significant bit of the CSV is examined on receipt.

A pseudowire will be established when a BGP VPWS update is received that matches the service configuration, specifically the configured route-targets and remote VE ID. If multiple matching updates are received, the system to which the pseudowire is established is determined by the tie-breaking rules, as described in IETF Draft draft-ietf-bess-vpls-multihoming-01.

Traffic will be sent on the active pseudowire connected to the remote designated forwarder. Traffic can be received on either the active or standby pseudowire, although no traffic should be received on the standby pseudowire because the SAP/GRE tunnel on the non-designated forwarder should be blocked.

2.13.3.2. BGP-VPWS with Inter-AS Model C

BGP VPWS with inter-AS model C is supported both in a single-homed and dual-homed configuration.

When dual-homing is used, the dual-homed PEs must have different values configured for the site-preference (under the site within the Epipe service) to allow the PEs in a different AS to select the designated forwarder when all access circuits are up. The value configured for the site-preference is propagated between autonomous systems in the BGP VPWS and BGP multi-homing update extended community VPLS preference field. The receiving ingress ASBR copies the VPLS preference value into local preference of the update to ensure that the VPLS preference and local preference are equal, which prevents the update from being considered malformed.

2.13.3.3. BGP VPWS Configuration Procedure

In addition to configuring the associated BGP and MPLS infrastructure, the provisioning of a BGP VPWS service requires the following (a configuration sketch for a dual-homed PE follows this list):

  1. Configuring the BGP Route Distinguisher, Route Target
    1. Updates are accepted into the service only if they contain the configured import route-target
  2. Configuring a binding to the pseudowire template
    1. Multiple pseudowire template bindings can be configured with their associated route-targets used to control which is applied
  3. Configuring the SAP or static GRE tunnel
  4. Configuring the name of the local VE and its associate VE ID
  5. Configuring the name of the remote VE and its associated VE ID
  6. For a dual-homed PE
    1. Enabling the site
    2. Configuring the site with non-zero site-preference
  7. For a remote PE
    1. Configuring up to two remote VE names and associated VE IDs
  8. Enabling BGP VPWS
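
The following sketch shows how these steps might come together on one of the dual-homed PEs (for example, PE1 in Figure 42). All identifiers and the site-preference value are illustrative, and it is assumed that site-preference is configured under the site within the Epipe service, as described in BGP-VPWS with Inter-AS Model C:

# illustrative sketch: identifiers and the site-preference value are examples;
# the placement of site-preference under the site is an assumption
configure
    service
        pw-template 1 create
            vc-type vlan
        exit
        epipe 1 customer 1 create
            bgp
                route-distinguisher 65536:1
                route-target export target:65536:1 import target:65536:1
                pw-template-binding 1
                exit
            exit
            site "site-A" create
                site-id 1
                sap 1/1/1:1
                site-preference 200
                no shutdown
            exit
            bgp-vpws
                ve-name "PE1"
                    ve-id 1
                exit
                remote-ve-name "PE3"
                    ve-id 3
                exit
                no shutdown
            exit
            sap 1/1/1:1 create
            exit
            no shutdown
        exit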

2.13.3.4. Use of Pseudowire Template for BGP VPWS

The pseudowire template concept used for BGP AD is re-used for BGP VPWS to dynamically instantiate pseudowires (SDP-bindings) and the related SDPs (provisioned or automatically instantiated).

The settings for the L2-Info extended community in the BGP update sent by the system are derived from the pseudowire-template attributes. The following rules apply:

  1. If multiple pseudowire-template-bindings (with or without import-rt) are specified for the VPWS instance, the first (numerically lowest ID) pseudowire-template entry will be used.
  2. Both Ethernet VLAN and Ethernet Raw Mode Encaps Types are supported; these are selected by configuring the vc-type in the pseudowire template to be either vlan or ether, respectively. The default is ether.
    1. The same value must be used by the remote BGP VPWS instance to ensure that the related pseudowire will come up.
  3. Layer 2 MTU – derived from the service-mtu parameter of the service.
    1. The same value must be used by the remote BGP VPWS instance to ensure that the related pseudowire will come up.
  4. Control Flag C – can be 0 or 1, depending on the setting of the controlword parameter in the PW template.
  5. Control Flag S – always 0.

On reception, the values of the parameters in the L2-Info extended community of the BGP update are compared with the settings from the corresponding pseudowire-template. The following steps are used to determine the local pseudowire-template:

  1. The route-target values are matched to determine the pseudowire-template. The binding configured with the first matching route target is chosen.
  2. If a match is not found from the previous step, the lowest (numerically) pw-template-binding without any route-target configured is used.
  3. If the values used for Encaps Type or Layer 2 MTU do not match, the pseudowire is created but with the operationally down state.
    1. To interoperate with existing implementations, if the received MTU value = 0, then MTU negotiation does not take place; the related pseudowire is set up ignoring the MTU.
  4. If the value of the S flag is not zero, the pseudowire is not created.

The following pseudowire template parameters are supported when applied within a BGP VPWS service; the remainder are ignored:

configure service pw-template policy-id [use-provisioned-sdp | 
       [prefer-provisioned-sdp] [auto-gre-sdp]] [create] [name name]
   accounting-policy acct-policy-id 
   no accounting-policy
   [no] collect-stats
   [no] controlword
   egress 
      filter ipv6 ipv6-filter-id 
      filter ip ip-filter-id
      filter mac mac-filter-id
      no filter [ip ip-filter-id] [mac mac-filter-id] [ipv6 ipv6-filter-id]
      qos network-policy-id port-redirect-group queue-group-name instance instance-id
      no qos [network-policy-id] 
   [no] force-vlan-vc-forwarding
   hash-label [signal-capability] 
   no hash-label
   ingress 
      filter ipv6 ipv6-filter-id 
      filter ip ip-filter-id
      filter mac mac-filter-id
      no filter [ip ip-filter-id] [mac mac-filter-id] [ipv6 ipv6-filter-id]
      qos network-policy-id fp-redirect-group queue-group-name instance instance-id 
      no qos [network-policy-id]
   [no] sdp-exclude
   [no] sdp-include
   vc-type {ether | vlan}
   vlan-vc-tag vlan-id
   no vlan-vc-tag 

For more information about this command, refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Services Overview Guide.

The use-provisioned-sdp option is permitted when creating the pseudowire template if a pre-provisioned SDP is to be used. Pre-provisioned SDPs must be configured whenever RSVP-signaled transport tunnels are used.

When the prefer-provisioned-sdp option is specified, if the system finds an existing matching SDP that conforms to any restrictions defined in the pseudowire template (for example, sdp-include/sdp-exclude group), it uses this matching SDP (even if the existing SDP is operationally down); otherwise, it automatically creates an SDP.

When the auto-gre-sdp option is specified, a GRE SDP is automatically created.
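
For example, a template using the prefer-provisioned-sdp option restricted to a particular SDP group might be created as follows (the template ID is illustrative and "core-sdps" is an assumed SDP admin group defined elsewhere):

# "core-sdps" is an assumed SDP admin group name; the template ID is an example
configure
    service
        pw-template 3 prefer-provisioned-sdp create
            sdp-include "core-sdps"
        exit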

The tools perform command can be used in the same way as for BGP-AD to apply changes to the pseudowire template using the following format:

Example:
tools perform service [id service-id] eval-pw-template policy-id [allow-service-impact]

If a user configures a service using a pseudowire template with the prefer-provisioned-sdp option but no applicable SDP is provisioned, the system binds to an automatically created SDP. If the user subsequently provisions an appropriate SDP, the system does not automatically switch to the newly provisioned SDP; this only occurs if the pseudowire template is re-evaluated using the tools perform service id service-id eval-pw-template command.

2.13.3.5. Use of Endpoint for BGP VPWS

An endpoint is required on a remote PE connecting to two dual-homed PEs to associate the active/standby pseudowires with the Epipe service. An endpoint is automatically created within the Epipe service such that active/standby pseudowires are associated with that endpoint. The endpoint is created when bgp-vpws is enabled and deleted when it is disabled, and so exists in both the single-homed and dual-homed scenarios (this simplifies converting a single-homed service to a dual-homed service). The naming convention used is _tmnx_BgpVpws-x, where x is the service identifier. The automatically created endpoint has the default parameter values, although all are ignored in a BGP-VPWS service, and the description field is defined by the system.

The command:

tools perform service id <service-id> endpoint <endpoint-name> force-switchover

will have no effect on an automatically created VPWS endpoint.

2.14. VLL Service Considerations

This section describes the general 7450 ESS, 7750 SR, and 7950 XRS service features and any special capabilities or considerations as they relate to VLL services.

2.14.1. SDPs

The most basic SDPs must have the following:

  1. A locally unique SDP identification (ID) number.
  2. The system IP address of the originating and far-end routers.
  3. An SDP encapsulation type, either GRE or MPLS.

The most basic Apipe and Fpipe SDP configurations for the 7750 SR must have the following:

  1. A locally unique SDP identification (ID) number and VC-ID.

2.14.1.1. SDP Statistics for VPLS and VLL Services

The three-node network in Figure 48 shows two MPLS SDPs and one GRE SDP defined between the nodes. These SDPs connect VPLS1 and VPLS2 instances that are defined in the three nodes. With this feature, the operator has local CLI-based as well as SNMP-based statistics collection for each VC used in the SDPs. This allows for traffic management of tunnel usage by the different services and, with aggregation, of the total tunnel usage.

Figure 48:  SDP Statistics for VPLS and VLL Services 

2.14.2. SAP Encapsulations and Pseudowire Types

The Epipe service is designed to carry Ethernet frame payloads, so it can provide connectivity between any two SAPs that pass Ethernet frames. The following SAP encapsulations are supported on the 7450 ESS, 7750 SR, and 7950 XRS Epipe service:

  1. Ethernet null
  2. Ethernet dot1q
  3. QinQ
  4. SONET/SDH BCP-null for the 7450 ESS and 7750 SR
  5. SONET/SDH BCP-dot1q for the 7450 ESS and 7750 SR
  6. ATM VC with RFC 2684 Ethernet bridged encapsulation (see Ethernet Interworking VLL) for the 7750 SR
  7. FR VC with RFC 2427 Ethernet bridged encapsulation (see Ethernet Interworking VLL) for the 7450 ESS and 7750 SR

While different encapsulation types can be used, encapsulation mismatches can occur if the encapsulation behavior is not understood by the connecting devices, which are then unable to send and receive the expected traffic. For example, if the encapsulation type on one side of the Epipe is dot1q and the other is null, tagged traffic received on the null SAP will be double-tagged when it is transmitted out of the dot1q SAP.

ATM VLLs can be configured with both endpoints (SAPs) on the same router or with the two endpoints on different 7750 SRs. In the latter case, Pseudowire Emulation Edge-to-Edge (PWE3) signaling is used to establish a pseudowire between the devices, allowing ATM traffic to be tunneled through an MPLS or GRE network.

Two pseudowire encapsulation modes, that is, SDP vc-type, are available:

  1. PWE3 N-to-1 Cell Mode Encapsulation
  2. PWE3 AAL5 SDU Mode Encapsulation

The endpoints of Fpipes must be Data-Link Connection Identifiers (DLCIs) on any port that supports Frame Relay. The pseudowire encapsulation, or SDP vc-type, supported is the 1-to-1 Frame Relay encapsulation mode.

2.14.2.1. PWE3 N-to-1 Cell Mode

The endpoints of an N-to-1 mode VLL on a 7750 SR can be:

  1. ATM VCs — VPI/VCI translation is supported (that is, the VPI/VCI at each endpoint does not need to be the same).
  2. ATM VPs — VPI translation is supported (that is, the VPI at each endpoint need not be the same, but the original VCI will be maintained).
  3. ATM VTs (a VP range) — No VPI translation is supported (that is, the VPI/VCI of each cell is maintained across the network).
  4. ATM ports — No translation is supported (that is, the VPI/VCI of each cell is maintained across the network).

For N-to-1 mode VLLs, cell concatenation is supported. Cells will be packed on ingress to the VLL and unpacked on egress. As cells are being packed, the concatenation process may be terminated by:

  1. Reaching a maximum number of cells per packet.
  2. Expiry of a timer.
  3. (Optionally) change of the CLP bit.
  4. (Optionally) change of the PTI bits indicating the end of the AAL5 packet.

In N-to-1 mode, OAM cells are transported through the VLL as any other cell. The PTI and CLP bits are untouched, even if VPI/VCI translation is carried out.
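
As a sketch only, an N-to-1 VC-mode Apipe cross-connecting an ATM VC SAP to a pseudowire might be created as follows (the vc-type keyword, the ATM SAP format, and all identifiers are assumptions rather than values taken from this section):

# sketch only: the vc-type keyword, ATM SAP format, and identifiers are assumptions
configure
    service
        apipe 20 customer 1 vc-type atm-vcc create
            sap 1/1/1:0/100 create
            exit
            spoke-sdp 5:20 create
                no shutdown
            exit
            no shutdown
        exit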

2.14.2.2. PWE3 AAL5 SDU Mode

The endpoints of an AAL5 SDU mode VLL on a 7750 SR must be ATM VCs specified by port/vpi/vci. VPI/VCI translation is supported. The endpoint can also be an FR VC, specified by port/dlci. In this case, FRF.5 FR-ATM network interworking is performed between the ATM VC SAP or the SDP and the FR VC SAP.

In SDU mode, the mandatory PWE3 control word is supported. This allows the ATM VLL to transport OAM cells along with SDU frames, using the “T” bit to distinguish between them, to recover the original SDU length, and to carry CLP, EFCI, and UU information.

2.14.2.3. QoS Policies

When applied to 7450 ESS, 7750 SR, or 7950 XRS Epipe, Apipe, and Fpipe services, service ingress QoS policies only create the unicast queues defined in the policy. The multipoint queues are not created on the service.

With Epipe, Apipe, and Fpipe services, egress QoS policies function as with other services, where the class-based queues are created as defined in the policy. Both Layer 2 and Layer 3 criteria can be used in the QoS policies for traffic classification in a service. QoS policies on Apipes cannot perform any classification, and QoS policies on Fpipes can perform only Layer 3 (IP) classification.

2.14.2.4. Filter Policies

7450 ESS, 7750 SR, and 7950 XRS Epipe, Fpipe, and Ipipe services can have a single filter policy associated on both ingress and egress. Both MAC and IP filter policies can be used on Epipe services.

Filters cannot be configured on 7750 SR Apipe service SAPs.

2.14.2.5. MAC Resources

Epipe services are point-to-point Layer 2 VPNs capable of carrying any Ethernet payloads. Although an Epipe is a Layer 2 service, the 7450 ESS, 7750 SR, and 7950 XRS Epipe implementation does not perform any MAC learning on the service, so Epipe services do not consume any MAC hardware resources.