5. Virtual Private LAN Service

This chapter provides information about Virtual Private LAN Service (VPLS), process overview, and implementation notes.

5.1. VPLS Service Overview

Virtual Private LAN Service (VPLS) is a class of virtual private network service that allows the connection of multiple sites in a single bridged domain over a provider-managed IP/MPLS network. The customer sites in a VPLS instance appear to be on the same LAN, regardless of their location. VPLS uses an Ethernet interface on the customer-facing (access) side that simplifies the LAN/WAN boundary and allows for rapid and flexible service provisioning. The 7210 SAS supports provisioning of access or uplink spokes to connect to the provider edge (PE) IP/MPLS routers.

VPLS provides a balance between point-to-point Frame Relay service and outsourced routed services (VPRN). VPLS enables customers to maintain control of their own routing strategies. All customer routers in the VPLS service are part of the same subnet (LAN), which simplifies the IP addressing plan, especially when compared to a mesh constructed from many separate point-to-point connections. VPLS service management is simplified because the service is not aware of, nor does it participate in, the IP addressing and routing.

A VPLS service provides connectivity between two or more SAPs on one (which is considered a local service) or more (which is considered a distributed service) service routers. The connection appears to be a bridged domain to the customer sites, so protocols, including routing protocols, can traverse the VPLS service.
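
As an illustration, the following CLI sketch shows a minimal distributed VPLS service with one access SAP and one spoke-SDP. The service ID, customer ID, svc-sap-type value, and port/SDP identifiers are hypothetical, and the exact keywords can vary by 7210 SAS platform, operating mode, and release; refer to Configuring a VPLS Service with CLI for the authoritative syntax.

    # Hypothetical identifiers; syntax is approximate and release-dependent.
    configure service vpls 100 customer 1 svc-sap-type any create
        description "VPLS-100"
        sap 1/1/2:100 create       # access SAP toward the local customer site
        exit
        spoke-sdp 10:100 create    # spoke toward the PE for a distributed service
        exit
        no shutdown
    exit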

Other VPLS advantages include the following.

  1. VPLS is a transparent, protocol-independent service.
  2. There is no Layer 2 protocol conversion between LAN and WAN technologies.
  3. There is no need to design, manage, configure, and maintain separate WAN access equipment, which eliminates the need to train personnel on WAN technologies, such as Frame Relay.

5.1.1. VPLS Packet Walk-Through in Network Mode

This section provides an example of VPLS processing of a customer packet sent across the network from site-A, which is connected to PE-Router-A through a 7210 SAS, to site-C, which is connected to PE-Router-C through a 7210 SAS (Figure 56), in an H-VPLS configuration. This section describes the processing only on the 7210 SAS routers, not on the PE routers.

Figure 56:  VPLS Service Architecture 
  1. 7210-A (Figure 57)
    1. Service packets arriving at 7210-A are associated with a VPLS service instance based on the combination of the physical port and the IEEE 802.1Q tag (VLAN-ID) in the packet.
      Figure 57:  Access Port Ingress Packet Format and Lookup 
    2. 7210-A learns the source MAC address in the packet and creates an entry in the FIB table that associates the MAC address to the service access point (SAP) on which it was received.
    3. The destination MAC address in the packet is looked up in the FIB table for the VPLS instance. There are two possibilities: either the destination MAC address has already been learned (known MAC address), or the destination MAC address is not yet learned (unknown MAC address).
      For a Known MAC Address (Figure 58):
    4. If the destination MAC address has already been learned by 7210, an existing entry in the FIB table identifies the far-end PE-Router and the service VC-label (inner label) to be used before sending the packet to PE-Router-A.
    5. The customer packet is sent on this LSP after the IEEE 802.1Q tag is stripped and the service VC-label (inner label) and the transport label (outer label) are added to the packet.
      For an Unknown MAC Address (Figure 58):
    6. If the destination MAC address has not been learned, 7210 floods the packet to spoke-SDPs that are participating in the service.
      Figure 58:  Network Port Egress Packet Format and Flooding  
  2. Core Router Switching
    1. The PE router encapsulates this packet in an MPLS header and transports it across the core network to the remote 7210-C.
  3. 7210-C (Figure 57)
    1. 7210-C associates the packet with the VPLS instance based on the VC label in the received packet after the stripping of the tunnel label.
    2. 7210-C learns the source MAC address in the packet and creates an entry in the FIB table that associates the MAC address to the spoke-SDP on which the packet was received.
    3. The destination MAC address in the packet is looked up in the FIB table for the VPLS instance. Again, there are two possibilities: either the destination MAC address has already been learned (known MAC address), or the destination MAC address has not been learned on the access side of 7210-C (unknown MAC address).
    4. If the destination MAC address has been learned by 7210-C, an existing entry in the FIB table identifies the local access port and the IEEE 802.1Q tag (if any) to be added before sending the packet to customer Location-C. The egress Q tag may be different from the ingress Q tag.
    5. If the destination MAC address has not been learned, 7210 floods the packet to all the access SAPs that are participating in the service.

5.1.2. VPLS Packet Walk-Through in Access-uplink Mode

This section provides an example of VPLS processing of a customer packet sent across the network from site-A, which is connected to PE-Router-A through a 7210 SAS, to site-C, which is connected to PE-Router-C through a 7210 SAS (Figure 56), in an H-VPLS configuration. This section describes the processing only on the 7210 SAS routers, not on the PE routers.

  1. 7210-A (Figure 59)
    1. Service packets arriving at 7210-A are associated with a VPLS service instance based on the combination of the physical port and the IEEE 802.1Q tag (VLAN-ID) in the packet.
      Figure 59:  Access Port Ingress Packet Format and Lookup 
    2. 7210-A learns the source MAC address in the packet and creates an entry in the FIB table that associates the MAC address to the service access point (SAP) on which it was received.
    3. The destination MAC address in the packet is looked up in the FIB table for the VPLS instance. There are two possibilities: either the destination MAC address has already been learned (known MAC address) or the destination MAC address is not yet learned (unknown MAC address).
      For a Known MAC Address (Figure 58):
    4. If the destination MAC address has already been learned by 7210, an existing entry in the FIB table identifies the destination uplink QinQ SAP to be used for sending the packet toward PE-Router-A.
    5. The customer packet is sent on this uplink SAP after the IEEE 802.1Q tag is stripped and the uplink SAP tag is added to the packet.
      For an Unknown MAC Address (Figure 60):
    6. If the destination MAC address has not been learned, 7210 floods the packet to all the uplink SAPs (QinQ spokes) that are participating in the service.
      Figure 60:  Network Port Egress Packet Format and Flooding  
  2. Core Router Switching
    1. The PE router encapsulates this packet in the appropriate MPLS header and transports it across the core network to the remote 7210-C.
  3. 7210-C (Figure 57)
    1. 7210-C associates the packet with the VPLS instance based on the VLAN tags in the received packet.
    2. 7210-C learns the source MAC address in the packet and creates an entry in the FIB table that associates the MAC address to the access-uplink port on which the packet was received.
    3. The destination MAC address in the packet is looked up in the FIB table for the VPLS instance. Again, there are two possibilities: either the destination MAC address has already been learned (known MAC address) or the destination MAC address has not been learned on the access side of 7210-C (unknown MAC address).
    4. If the destination MAC address has been learned by 7210-C, an existing entry in the FIB table identifies the local access port and the IEEE 802.1Q tag (if any) to be added before sending the packet to customer Location-C. The egress Q tag may be different from the ingress Q tag.
    5. If the destination MAC address has not been learned, 7210 floods the packet to all the access SAPs that are participating in the service.

5.2. VPLS Features

5.2.1. VPLS Enhancements

The Nokia VPLS implementation includes several enhancements beyond basic VPN connectivity. The following VPLS features can be configured individually for each VPLS service instance:

  1. extensive MAC and IP filter support (up to Layer 4). Filters can be applied on a per-SAP basis.
  2. Forwarding Information Base (FIB) management features including:
    1. configurable FIB size limit
    2. FIB size alarms
    3. MAC learning disable
    4. discard unknown
    5. separate aging timers for locally and remotely learned MAC addresses
  3. ingress rate limiting for broadcast, multicast, and destination unknown flooding on a per-SAP basis
  4. implementation of Spanning Tree Protocol (STP) parameters on a per-VPLS, per-SAP, and per-spoke-SDP basis
  5. optional SAP and/or spoke-SDP redundancy to protect against node failure
  6. IGMP snooping on a per-SAP and SDP basis

5.2.2. VPLS over MPLS in Network Operating Mode

The VPLS architecture proposed in draft-ietf-ppvpn-vpls-ldp-0x.txt specifies the use of provider equipment (PE) that is capable of learning, bridging, and replication on a per-VPLS basis. The PE routers that participate in the service are connected using MPLS Label Switched Path (LSP) tunnels in a full-mesh composed of mesh SDPs or based on an LSP hierarchy (Hierarchical VPLS (H-VPLS)) composed of mesh SDPs and spoke-SDPs. The 7210 SAS supports only H-VPLS.

Multiple VPLS services can be offered over the same set of LSP tunnels. Signaling specified in RFC 4905 is used to negotiate a set of ingress and egress VC labels on a per-service basis. The VC labels are used by the PE routers for de-multiplexing traffic arriving from different VPLS services over the same set of LSP tunnels.

VPLS/H-VPLS is provided over MPLS by:

  1. connecting 7210 SAS to bridging-capable PE routers through a mesh/spoke-SDP. The PE routers are connected using a full mesh of LSPs.
  2. negotiating per-service VC labels using draft-Martini encapsulation
  3. replicating unknown and broadcast traffic in a service domain
  4. enabling MAC learning over tunnel and access ports (see VPLS MAC Learning and Packet Forwarding)
  5. using a separate FIB per VPLS service
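
The following hedged sketch ties the preceding list together: an MPLS SDP toward the PE is defined once and then bound into the VPLS as a spoke-SDP, over which per-service VC labels are signaled. All identifiers and the far-end address are examples only, and the exact syntax may differ by platform and release.

    # Approximate network-mode configuration; values are illustrative.
    configure service sdp 10 mpls create
        far-end 10.20.1.1          # system address of the PE router
        ldp                        # use an LDP-signaled transport tunnel
        no shutdown
    exit
    configure service vpls 200 customer 1 svc-sap-type any create
        sap 1/1/3:200 create       # customer-facing access SAP
        exit
        spoke-sdp 10:200 create    # H-VPLS spoke; VC label negotiated per service
        exit
        no shutdown
    exit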

5.2.3. VPLS over QinQ Spokes for 7210 SAS Devices Configured in Access-uplink Operating Mode

7210 SAS devices configured in access-uplink operating mode support QinQ spokes or dot1q spokes, which allow them to connect to upstream PE nodes that provide IP/MPLS transport.

VPLS is provided over QinQ/Dot1q spokes by:

  1. connecting bridging-capable 7210 SAS devices
  2. replicating unknown and broadcast traffic in a service domain
  3. enabling MAC learning over QinQ/Dot1q spokes and access ports (see VPLS MAC Learning and Packet Forwarding)
  4. using a separate FIB per VPLS service
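
A hedged access-uplink mode sketch follows; it assumes the uplink port has been placed in access-uplink mode with QinQ encapsulation, and all port, VLAN, and service values are examples.

    # Approximate access-uplink mode configuration; values are illustrative.
    configure port 1/1/10
        ethernet
            mode access uplink         # uplink toward the PE providing IP/MPLS transport
            encap-type qinq
        exit
        no shutdown
    exit
    configure service vpls 300 customer 1 svc-sap-type any create
        sap 1/1/2:300 create           # access SAP toward the customer site
        exit
        sap 1/1/10:1000.300 create     # QinQ uplink SAP toward the PE
        exit
        no shutdown
    exit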

5.2.4. VPLS MAC Learning and Packet Forwarding

The 7210 SAS edge devices perform the packet replication required for broadcast and multicast traffic across the bridged domain. MAC address learning is performed by the 7210 SAS device to reduce the amount of unknown destination MAC address flooding.

Each 7210 SAS maintains a Forwarding Information Base (FIB) for each VPLS service instance, and learned MAC addresses are populated in the FIB table of the service. All traffic is switched based on MAC addresses and forwarded between all participating nodes using the LSP tunnels. Unknown destination packets (that is, packets whose destination MAC address has not been learned) are flooded on all LSPs to all participating nodes for that service until the target station responds and the MAC address is learned by the 7210 SAS associated with that service.

5.2.5. IGMP Snooping in a VPLS Service

Note:

  1. IGMP snooping is supported on all 7210 SAS platforms as described in this document, including those configured in the access-uplink operating mode.
  2. This section provides information about IGMP snooping support in a VPLS service; it does not apply to R-VPLS services. IGMP snooping can also be enabled for R-VPLS services; see R-VPLS and IGMPv3 Snooping for more information.

In Layer 2 switches, multicast traffic is treated as an unknown MAC address or broadcast frame, which causes the incoming frame to be flooded out (broadcast) on every port within a VLAN. Although this is acceptable behavior for unknown and broadcast frames, this flooded multicast traffic may result in wasted bandwidth on network segments and end stations because IP multicast hosts can join and be interested in only specific multicast groups.

IGMP snooping uses information in Layer 3 protocol headers of multicast control messages to determine the processing at Layer 2. By doing so, an IGMP snooping switch provides the benefit of conserving bandwidth on those segments of the network in which no node has expressed interest in receiving packets addressed to the group address.

Note:

References to SDP in the following section about IGMP snooping are applicable only to 7210 SAS platforms operating in network mode.

IGMP snooping can be enabled in the context of VPLS services. The IGMP snooping optimizes the multicast data flow to only those SAPs or SDPs that are members of the group. The system builds a database of group members for each service by listening to IGMP queries and reports from each SAP or SDP, as follows.

  1. When the switch receives an IGMP report from a host for a particular multicast group, the switch adds the host port number to the forwarding table entry.
  2. When the switch receives an IGMP leave message from a host, it removes the host port from the table entry, if no other group members are present. It also deletes entries if it does not receive periodic IGMP membership reports from the multicast clients.

The following is a list of supported IGMP snooping features.

  1. IGMPv1, IGMPv2, and IGMPv3 are supported in accordance with RFC 1112, Host Extensions for IP Multicasting, RFC 2236, Internet Group Management Protocol, Version 2, and RFC 3376, Internet Group Management Protocol, Version 3.
    1. The 7210 SAS-M and 7210 SAS-T configured in the access-uplink operating mode support IGMPv1, IGMPv2, and IGMPv3 snooping in a VPLS service.
    2. All 7210 SAS platforms as described in this document, except those configured in the access-uplink operating mode, support only IGMPv1 and IGMPv2 snooping in a VPLS service.
  2. IGMP snooping can be enabled and disabled on individual VPLS service instances.
  3. IGMP snooping can be configured on individual SAPs that are part of a VPLS service. When IGMP snooping is enabled on a VPLS service, all its contained SAPs and SDPs automatically have snooping enabled.
  4. Fast leave terminates the multicast session immediately, rather than using the standard group-specific query to check if other group members are present on the network.
  5. SAPs and SDPs can be statically configured as multicast router ports. This allows the operator to control the set of ports to which IGMP membership reports are forwarded.
  6. Static multicast group membership on a per-SAP and a per-SDP basis can be configured.
  7. The maximum number of multicast groups (static and dynamic) that a SAP or SDP can join can be configured. An event is generated when the limit is reached.
  8. The maximum number of multicast groups (static and dynamic) that a VPLS instance simultaneously supports can be configured.
  9. Proxy summarization of IGMP messages reduces the number of IGMP messages processed by upstream devices in the network.
  10. IGMP filtering allows a subscriber to a service or the provider to block receive or transmit permission (or both) for individual hosts or a range of hosts. The following types of filters can be defined.
    1. Filter group membership reports from a particular host or range of hosts. This filtering is performed by importing a defined routing policy into the SAP or SDP.
    2. Filters that prevent a host from transmitting multicast streams into the network. The operator can define a data-plane filter (ACL) that drops all multicast traffic and apply this filter to a SAP or SDP.
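
The following sketch illustrates several of the features in the preceding list on a hypothetical service; the group address, limits, and port identifiers are examples only, and the exact CLI tree may differ by platform and release.

    # Illustrative IGMP snooping configuration; values are examples only.
    configure service vpls 100
        igmp-snooping
            no shutdown                 # enable snooping for the whole service
        exit
        sap 1/1/2:100
            igmp-snooping
                fast-leave              # terminate sessions without a group-specific query
                max-num-groups 100      # limit dynamic and static groups on this SAP
                static
                    group 239.1.1.1 create
                        starg           # static (*,G) membership
                    exit
                exit
            exit
        exit
        sap 1/1/5:100
            igmp-snooping
                mrouter-port            # statically mark the port toward the multicast router
            exit
        exit
    exit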

5.2.5.1. Configuration Guidelines for IGMP Snooping in VPLS Service

The following IGMP snooping considerations apply.

  1. Layer-2 multicast is supported in VPLS services.
  2. IGMP snooping is not supported for VCs (vc-ether or vc-vlan) with control-word enabled.
  3. IGMP snooping fast leave processing can be enabled only on SAPs and SDPs. IGMP snooping proxy summarization is enabled by default on SAPs and SDPs and cannot be disabled. Proxy summarization and fast leave processing are supported only on SDPs for which VCs are configured to use vc-type ether and do not have control-word enabled.
  4. IGMP filtering using policies is available on SAPs and SDPs. It is supported only on SDPs for which VCs are configured to use vc-type ether and do not have control-word enabled.
  5. Dynamic learning is only supported on SDPs for which VCs are configured to use vc-type ether and do not have control-word enabled.
  6. SDPs that are configured to use VCs of type vc-vlan that need to be mrouter ports must be configured statically. Multicast group memberships for such SDPs must be configured statically. Dynamic learning is not available for these SDPs.
  7. IGMP snooping is not supported for control-word enabled SDP.
  8. All 7210 SAS platforms as described in this document, except those configured in the access-uplink operating mode, support only IGMPv1 and IGMPv2 snooping in a VPLS service. These platforms do not support IGMPv3 snooping in a VPLS service.
  9. All 7210 SAS platforms as described in this document, except those configured in the access-uplink operating mode, support IGMPv3 in an R-VPLS service only. See R-VPLS and IGMPv3 Snooping for more information.

5.2.6. Multicast VLAN Registration (MVR) Support in VPLS Service

Multicast VLAN Registration (MVR) is a bandwidth optimization method for multicast in a broadband services network. MVR allows a subscriber on a port to subscribe and unsubscribe to a multicast stream on one or more network-wide multicast VPLS instances.

MVR assumes that subscribers join and leave multicast streams by sending IGMP join and leave messages. The IGMP join and leave messages are sent inside the VPLS to which the subscriber port is assigned. The multicast VPLS is shared in the network, while the subscribers remain in separate VPLS services. With MVR, users in different VPLS services cannot exchange any information between them, but multicast services are still provided to them.

On the MVR VPLS, IGMP snooping must be enabled. On the user VPLS, IGMP snooping and MVR work independently. If IGMP snooping and MVR are both enabled, MVR reacts only to join and leave messages from multicast groups configured under MVR. Join and leave messages from all other multicast groups are managed by IGMP snooping in the local VPLS. This way, potentially several MVR VPLS instances could be configured, each with its own set of multicast channels.

MVR by proxy — In some situations, the multicast traffic should not be copied from the MVR VPLS to the SAP on which the IGMP message was received (standard MVR behavior) but to another SAP. This is called MVR by proxy, shown in Figure 61.

Figure 61:  MVR and MVR by Proxy 

5.2.6.1. Configuration Guidelines for MVR in VPLS Services

In an MVR configuration, the svc-sap-type of the VPLS service that is the source (also known as MVR VPLS service) and the svc-sap-type of the VPLS service that is the sink (also known as user VPLS service) should match.
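
As a hedged illustration of the preceding guideline, the sketch below assumes VPLS 500 is the shared MVR (source) VPLS and VPLS 100 is the user (sink) VPLS, both created with the same svc-sap-type; all identifiers are hypothetical and the syntax is approximate.

    # Illustrative MVR configuration; IDs and group addresses are examples.
    configure service vpls 500 customer 1 svc-sap-type any create   # MVR (source) VPLS
        igmp-snooping
            no shutdown
        exit
        no shutdown
    exit
    configure service vpls 100                                      # user (sink) VPLS
        igmp-snooping
            no shutdown
        exit
        sap 1/1/2:100
            igmp-snooping
                mvr
                    from-vpls 500       # pull joined channels from the MVR VPLS
                    # mvr to-sap can optionally redirect the stream to another SAP (MVR by proxy)
                exit
            exit
        exit
    exit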

5.2.7. Layer 2 Forwarding Table Management

The following sections describe VPLS features related to management of the FIB.

5.2.7.1. FIB Size

The following MAC table management features are required for each instance of a SAP or spoke-SDP within a particular VPLS service instance:

  1. MAC FIB size limits
    Allows users to specify the maximum number of MAC FIB entries that are learned locally for a SAP or remotely for a spoke-SDP. If the configured limit is reached, no new addresses will be learned from the SAP or spoke-SDP until at least one FIB entry is aged out or cleared.
    1. When the limit is reached on a SAP or spoke-SDP, packets with unknown source MAC addresses are still forwarded (this default behavior can be changed by configuration). By default, if the destination MAC address is known, it is forwarded based on the FIB, and if the destination MAC address is unknown, it will be flooded. Alternatively, if discard unknown is enabled at the VPLS service level, unknown destination MAC addresses are discarded.
    2. The log event SAP MAC limit reached is generated when the limit is reached. When the condition is cleared, the log event SAP MAC Limit Reached Condition Cleared is generated.
    3. Disable learning at the VPLS service level allows users to disable the dynamic learning function on the service. Disable Learning is supported at the SAP and spoke-SDP level as well.
    4. Disable aging allows users to turn off aging for learned MAC addresses. It is supported at the VPLS service level, SAP level and spoke-SDP level.

5.2.7.2. FIB Size Alarms

The size of the VPLS FIB can be configured with a low watermark and a high watermark, expressed as a percentage of the total FIB size limit. If the actual FIB size grows above the configured high watermark percentage, an alarm is generated. If the FIB size falls below the configured low watermark percentage, the alarm is cleared by the system.
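
A minimal sketch combining the FIB management options from the two preceding subsections is shown below; the table size, per-SAP limit, and watermark percentages are arbitrary example values.

    # Illustrative FIB management settings; values are examples only.
    configure service vpls 100
        fdb-table-size 1000            # maximum MAC entries learned for this service
        fdb-table-high-wmark 90        # raise an alarm at 90% of the limit
        fdb-table-low-wmark 80         # clear the alarm when usage drops below 80%
        discard-unknown                # drop, rather than flood, unknown destination MACs
        sap 1/1/2:100
            max-nbr-mac-addr 200       # per-SAP MAC limit
            disable-learning           # optionally stop dynamic learning on this SAP
        exit
    exit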

5.2.7.3. Local and Remote Aging Timers

As with a Layer 2 switch, learned MACs within a VPLS instance can be aged out if no packets are sourced from the MAC address for a specified period of time (the aging time). In each VPLS service instance, there are independent aging timers for locally learned MAC and remotely learned MAC entries in the FIB. A local MAC address is a MAC address associated with a SAP because it ingressed on a SAP. A remote MAC address is a MAC address received by an SDP from another router for the VPLS instance. The local-age timer for the VPLS instance specifies the aging time for locally learned MAC addresses, and the remote-age timer specifies the aging time for remotely learned MAC addresses.

In general, the remote-age timer is set to a longer period than the local-age timer to reduce the amount of flooding required for destination unknown MAC addresses. The aging mechanism is considered a low priority process. In most situations, the aging out of MAC addresses can happen within tens of seconds beyond the age time. To minimize overhead, local MAC addresses on a LAG port and remote MAC addresses, in some circumstances, can take up to two times their respective age timer to be aged out.
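
For example, the following hedged snippet sets a shorter local age than remote age, in line with the guidance above; the values (in seconds) are arbitrary.

    # Illustrative aging timers (seconds); values are examples only.
    configure service vpls 100
        local-age 300      # aging time for MACs learned on local SAPs
        remote-age 900     # longer aging time for MACs learned over SDPs
    exit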

5.2.7.4. Disable MAC Aging

The MAC aging timers can be disabled, which prevents learned MAC entries from being aged out of the FIB. When aging is disabled, it is still possible to manually delete or flush learned MAC entries. Aging can be disabled for learned MAC addresses on a SAP or a spoke-SDP of a VPLS service instance.

5.2.7.5. Disable MAC Learning

When MAC learning is disabled for a service, new source MAC addresses are not entered in the VPLS FIB. MAC learning can be disabled for individual SAPs or spoke-SDPs.

5.2.7.6. Unknown MAC Discard

Unknown MAC discard is a feature that discards all packets ingressing the service where the destination MAC address is not in the FIB. The normal behavior is to flood these packets to all end points in the service.

Unknown MAC discard can be used with the disable MAC learning and disable MAC aging options to create a fixed set of MAC addresses allowed to ingress and traverse the service.

5.2.7.7. VPLS and Rate Limiting

Traffic that is flooded throughout the VPLS can be rate limited on SAP ingress through the use of service ingress QoS policies. In a service ingress QoS policy, individual meters can be defined per forwarding class to provide rate-limiting/policing of broadcast traffic, MAC multicast traffic, and unknown destination MAC traffic.

5.2.7.8. MAC Move

The MAC move feature is useful to protect against undetected loops in a VPLS topology as well as the presence of duplicate MACs in a VPLS service.

If two clients in the VPLS have the same MAC address, the VPLS will experience a high relearn rate for the MAC. When MAC move is enabled, the 7210 SAS will shut down the SAP or spoke-SDP and create an alarm event when the threshold is exceeded.

MAC move allows ports to be blocked in a sequential order. Some VPLS ports can be configured as “non-blockable”, which provides a simple level of control over which ports are blocked when a loop occurs.
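
A hedged example of enabling MAC move follows; the frequency and retry values are arbitrary and should be taken from the command reference for the specific release.

    # Illustrative MAC-move settings; thresholds are examples only.
    configure service vpls 100
        mac-move
            move-frequency 5       # relearn rate (moves per second) that triggers blocking
            retry-timeout 120      # seconds before a blocked SAP or spoke-SDP is re-enabled
            no shutdown
        exit
    exit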

5.2.7.8.1. Split Horizon SAP Groups and Split Horizon Spoke-SDP Groups

Note:

Per-service split horizon groups are supported on all 7210 SAS platforms as described in this document, except those operating in access-uplink mode.

Within the context of VPLS services, a loop-free topology inside a fully meshed VPLS core is achieved by applying a split-horizon forwarding concept. The packets received from a mesh SDP are never forwarded to other mesh SDPs within the same service. The advantage of this approach is that no protocol is required to detect loops within the VPLS core network.

In applications such as DSL aggregation, it is useful to extend this split-horizon concept to groups of SAPs and/or spoke-SDPs. This extension is referred to as a split horizon SAP group. Traffic arriving on a SAP or a spoke-SDP within a split horizon group is not forwarded to other SAPs and spoke-SDPs configured in the same split horizon group, but is forwarded to other SAPs and spoke-SDPs that are not part of the split horizon group.
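
The following sketch, with hypothetical names and ports, shows a per-service split horizon group in which member SAPs do not forward to each other but still forward to a SAP outside the group.

    # Illustrative split horizon group; names and ports are examples only.
    configure service vpls 100
        split-horizon-group "dsl-group1" create
        exit
        sap 1/1/2:100 split-horizon-group "dsl-group1" create   # group member
        exit
        sap 1/1/3:100 split-horizon-group "dsl-group1" create   # group member
        exit
        sap 1/1/4:100 create                                    # outside the group
        exit
    exit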

5.2.7.8.2. Configuration Guidelines for Use of Split Horizon Group in a VPLS Service

  1. On 7210 SAS-M, 7210 SAS-T, and 7210 SAS-Sx/S 1/10GE operating in network mode, mesh SDPs cannot be configured in a service which uses split horizon group (SHG). Conversely, if a service has a mesh SDP configured, split horizon group cannot be used in the same service.
  2. On 7210 SAS-Mxp, service based SHG can be configured along with mesh SDPs and spoke-SDPs.

5.2.8. VPLS and Spanning Tree Protocol

The Nokia VPLS service provides a bridged or switched Ethernet Layer 2 network. Equipment connected to SAPs forwards Ethernet packets into the VPLS service. The 7210 SAS devices participating in the service learn where the customer MAC addresses reside based on the ingress SAPs on which they are received.

Unknown destinations, broadcasts, and multicasts are flooded to all other SAPs in the service. If SAPs are connected together, either through misconfiguration or for redundancy purposes, loops can form and flooded packets can keep flowing through the network. The Nokia implementation of the Spanning Tree Protocol (STP) is designed to remove these loops from the VPLS topology. This is done by putting one or several SAPs in the discarding state.

The Nokia implementation of the Spanning Tree Protocol (STP) incorporates some modifications to make the operational characteristics of VPLS more effective.

The STP instance parameters allow balancing between the extremes of resiliency and speed of convergence. Modifying particular parameters can affect this behavior. For information about command usage, descriptions, and CLI syntax, refer to Configuring a VPLS Service with CLI.

5.2.8.1. Spanning Tree Operating Modes

For each VPLS instance, a preferred STP variant can be configured. The STP variants supported are:

  1. rstp
    Rapid Spanning Tree Protocol (RSTP) compliant with IEEE 802.1D-2004 - default mode
  2. dot1w
    compliant with IEEE 802.1w
  3. comp-dot1w
    operation as in RSTP but backwards compatible with IEEE 802.1w
    (this mode allows interoperability with some MTU types)
  4. mstp
    compliant with the Multiple Spanning Tree Protocol specified in IEEE 802.1Q-REV/D5.0-09/2005. This mode of operation is only supported in an mVPLS.

While the 7210 SAS initially uses the mode configured for the VPLS, it will dynamically fall back (on a per-SAP basis) to STP (IEEE 802.1D-1998) based on the detection of a BPDU of a different format. A trap or log entry is generated for every change in spanning tree variant.

Some older 802.1W compliant RSTP implementations may have problems with some of the features added in the 802.1D-2004 standard. Interworking with these older systems is improved with the comp-dot1w mode. The differences between the RSTP mode and the comp-dot1w mode are as follows.

  1. The RSTP mode implements the improved convergence over shared media feature, for example, RSTP will transition from discarding to forwarding in 4 seconds when operating over shared media. The comp-dot1w mode does not implement this 802.1D-2004 improvement and transitions conform to 802.1w in 30 seconds (both modes implement fast convergence over point-to-point links).
  2. In the RSTP mode, the transmitted BPDUs contain the port's designated priority vector (DPV) (conforms to 802.1D-2004). Older implementations may be confused by the DPV in a BPDU and may fail to recognize an agreement BPDU correctly. This would result in a slow transition to a forwarding state (30 seconds). For this reason, in the comp-dot1w mode, these BPDUs contain the port's port priority vector (conforms to 802.1w).

The 7210 SAS supports two BPDU encapsulation formats, and can dynamically switch between the following supported formats (on a per-SAP basis):

  1. IEEE 802.1D STP
  2. Cisco PVST
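
To tie the preceding subsection together, the following hedged snippet selects the STP variant for a VPLS instance; mstp applies only within an mVPLS, and the service ID is an example.

    # Illustrative per-VPLS STP variant selection.
    configure service vpls 100
        stp
            mode comp-dot1w    # alternatives: rstp (default), dot1w, mstp (mVPLS only)
            no shutdown
        exit
    exit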

5.2.8.2. Multiple Spanning Tree

The Multiple Spanning Tree Protocol (MSTP) extends the concept of the IEEE 802.1w Rapid Spanning Tree Protocol (RSTP) by allowing grouping and associating VLANs to Multiple Spanning Tree Instances (MSTI). Each MSTI can have its own topology, which provides architecture enabling load balancing by providing multiple forwarding paths. At the same time, the number of STP instances running in the network is significantly reduced as compared to Per VLAN STP (PVST) mode of operation. Network fault tolerance is also improved because a failure in one instance (forwarding path) does not affect other instances.

The 7210 SAS implementation of Management VPLS (mVPLS) is used to group different VPLS instances under a single RSTP instance. Introducing MSTP into the mVPLS allows the following:

  1. interoperation with traditional Layer 2 switches in access network
  2. an effective solution for dual homing of many business Layer 2 VPNs into a provider network

5.2.8.2.1. Redundancy Access to VPLS

The GigE MAN portion of the network is implemented with traditional switches. Running MSTP on the individual switches facilitates redundancy in this part of the network. To provide dual homing for all VPLS services accessed from this part of the network, the VPLS PEs must participate in MSTP.

This can be achieved by the following:

  1. configuring mVPLS on the VPLS PEs (only the PEs directly connected to the GigE MAN network)
  2. assigning different managed VLAN ranges to different MSTP instances

Typically, the mVPLS would have SAPs with null encapsulations (to send and receive MSTP BPDUs) and a mesh SDP to interconnect a pair of VPLS PEs.

Figure 62 shows examples of different access scenarios in which the access network is dually connected to the PBB PEs:

  1. Access Type A
    source devices connected by null or Dot1q SAPs
  2. Access Type B
    one QinQ switch connected by QinQ/802.1ad SAPs
  3. Access Type C
    two or more ES devices connected by QinQ/802.1ad SAPs
    Figure 62:  Access Resiliency 

The following mechanisms are supported for the I-VPLS.

  1. STP/RSTP can be used for all access types.
  2. M-VPLS with MSTP can be used as is just for access Type A. MSTP is required for access type B and C.
  3. LAG and MC-LAG can be used for access Type A and B.
  4. Split-horizon-group does not require residential.

5.2.8.3. MSTP for QinQ SAPs

MSTP runs in an mVPLS context and can control SAPs from source VPLS instances. QinQ SAPs are supported. The outer tag is considered by MSTP as part of the VLAN range control.

5.2.8.4. Provider MSTP

Note:

Provider MSTP is only supported on platforms that support PBB, and therefore is only supported on 7210 SAS-M and 7210 SAS-T operating in the network mode.

Provider MSTP is specified in IEEE 802.1ad-2005. It uses a provider bridge group address instead of the regular bridge group address used by STP, RSTP, and MSTP BPDUs. This allows for implicit separation of source and provider control planes.

The 802.1ad access network sends the PBB PE P-MSTP BPDUs using the specified MAC address; the protocol also works over QinQ interfaces. P-MSTP mode is used in PBBN for core resiliency and loop avoidance.

Similar to regular MSTP, the STP mode (for example, PMSTP) is only supported in VPLS services where the m-VPLS flag is configured.

5.2.8.4.1. MSTP General Principles

MSTP is a modification of RSTP that allows the grouping of different VLANs into multiple MSTIs. To allow different devices to participate in MSTIs, they must be consistently configured. A collection of interconnected devices that have the same MST configuration (region name, revision, and VLAN-to-instance assignment) comprises an MST region.

There is no limit to the number of regions in the network, but every region can support a maximum of 16 MSTIs. Instance 0 is a special instance for a region, known as the Internal Spanning Tree (IST) instance. All other instances are numbered from 1 to 4094. The IST is the only spanning-tree instance that sends and receives BPDUs (typically, BPDUs are untagged). All other spanning-tree instance information is included in MSTP records (M-records), which are encapsulated within MSTP BPDUs. This means that a single BPDU carries information for multiple MSTIs, which reduces the protocol overhead.

Any specific MSTI is local to an MSTP region and completely independent from an MSTI in other MST regions. Two redundantly connected MST regions will use only a single path for all traffic flows (no load balancing between MST regions or between an MST region and an SST region).

Traditional Layer 2 switches running the MSTP protocol assign all VLANs to the IST instance by default. The operator may then “re-assign” individual VLANs to a specific MSTI by configuring per-VLAN assignment. This means that an SR-series PE can be considered part of the same MST region only if the VLAN assignment to the IST and MSTIs is identical to that of the Layer 2 switches in the access network.

5.2.8.4.2. MSTP in the 7210 SAS Platform

The 7210 SAS platform uses a concept of mVPLS to group different SAPs under a single STP instance. The VLAN range covering SAPs to be managed by a specific mVPLS is declared under a specific mVPLS SAP definition. MSTP mode-of-operation is only supported in an mVPLS.

When running MSTP, by default, all VLANs are mapped to the CIST. At the VPLS level, VLANs can be assigned to specific MSTIs. When running RSTP, the operator must explicitly indicate, per SAP, which VLANs are managed by that SAP.
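
A hedged mVPLS/MSTP sketch follows; the region name, revision, VLAN ranges, instance numbers, and SAP values are examples, and the exact managed-VLAN and MSTI syntax should be checked against the command reference for the release in use.

    # Approximate mVPLS with MSTP; all values are illustrative.
    configure service vpls 1 customer 1 m-vpls svc-sap-type any create
        stp
            mode mstp
            mst-name "region-1"            # MST region name
            mst-revision 1                 # MST revision
            mst-instance 1 create
                vlan-range 100-199         # VLANs mapped to MSTI 1
            exit
            no shutdown
        exit
        sap 1/1/2:0 create
            managed-vlan-list
                range 100-199              # user-VPLS SAPs controlled through this mVPLS SAP
            exit
        exit
        no shutdown
    exit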

5.2.8.5. Enhancements to the Spanning Tree Protocol

To interconnect 7210 SAS devices (PE devices) across the backbone, service tunnels (SDPs) are used. These service tunnels are shared among multiple VPLS instances. The Nokia implementation of the Spanning Tree Protocol (STP) incorporates some enhancements to make the operational characteristics of VPLS more effective. The implementation of STP on the router is modified to guarantee that service tunnels will not be blocked in any circumstance without imposing artificial restrictions on the placement of the root bridge within the network. The modifications are fully compliant with the 802.1D-2004 STP specification.

When running MSTP, spoke-SDPs cannot be configured. Also, ensure that all bridges connected by mesh SDPs are in the same region. If they are not, the mesh is prevented from becoming active (a trap is generated).

To achieve this, all mesh SDPs are dynamically configured as either root ports or designated ports. The PE devices participating in each VPLS mesh determine (using the root path cost learned as part of the normal protocol exchange) which of the 7210 SAS devices is closest to the root of the network. This PE device is internally designated as the primary bridge for the VPLS mesh. As a result of this, all network ports on the primary bridges are assigned the designated port role and therefore remain in the forwarding state.

The second part of the solution ensures that the remaining PE devices participating in the STP instance see the SDP ports as a lower-cost path to the root rather than a path that is external to the mesh. Internal to the PE nodes participating in the mesh, the SDPs are treated as zero-cost paths toward the primary bridge. As a consequence, the paths through the mesh are seen as lower cost than any alternative, and the PE node designates the network port as the root port. This ensures that network ports always remain in the forwarding state.

A combination of the previously mentioned features ensures that network ports are never blocked and maintains interoperability with bridges external to the mesh that are running STP instances.

5.2.8.5.1. L2PT Termination

L2PT is used to transparently transport protocol data units (PDUs) of Layer 2 protocols, such as Cisco Discovery Protocol (CDP), Dynamic Trunking Protocol (DTP), Port Aggregation Protocol (PAGP), Spanning Tree Protocol (STP), Unidirectional Link Detection (UDLD), VLAN trunking protocol (VTP), and Link Layer Discovery Protocol (LLDP). This allows users to run these protocols between customer CPEs without involving backbone infrastructure.

The 7210 SAS routers support the transparent tunneling of PDUs across the VPLS core; however, in some network designs the VPLS PE is connected to CPEs through a legacy Layer 2 network, rather than via direct connections. In this type of environment, the termination of tunnels through the infrastructure is required.

L2PT tunnels transport protocol PDUs by overwriting MAC destination addresses at the ingress of the tunnel to a proprietary MAC address, such as 01-00-0c-cd-cd-d0. On egress of the tunnel, the MAC address is overwritten back to the MAC address of the respective Layer 2 protocol.

The 7210 SAS nodes support L2PT termination for STP BPDUs as follows.

  1. On ingress of every SAP or spoke-SDP that is configured as an L2PT termination, all PDUs with a MAC destination address of 01-00-0c-cd-cd-d0 are intercepted, and their MAC destination address is overwritten to the MAC destination address used for the corresponding protocol. The type of protocol can be derived from LLC and SNAP encapsulation.
  2. In the egress direction, PDUs of the corresponding protocol received on all VPLS ports are intercepted, and L2PT encapsulation is performed for SAPs and spoke-SDPs configured as L2PT termination points. For implementation reasons, PDU interception and redirection to the CPM can be performed only on ingress. Therefore, to comply with the preceding requirement, as soon as at least one port of a specific VPLS service is configured as an L2PT termination port, redirection of PDUs to the CPM is set on all other ports (SAPs and spoke-SDPs) of the VPLS service.

L2PT termination can be enabled only if STP is disabled in the context of the specific VPLS service.

5.2.8.5.2. BPDU Translation

VPLS networks are typically used to interconnect different customer sites using different access technologies, such as Ethernet and bridged-encapsulated ATM PVCs. Different Layer 2 devices can support different types of STP, even when they are from the same vendor. In some cases, it is necessary to provide BPDU translation to provide an interoperable end-to-end solution.

To address these network designs, BPDU format translation is supported on 7210 SAS devices. If enabled on a specific SAP or spoke-SDP, the system will intercept all BPDUs destined to that interface and perform required format translation such as STP-to-PVST or vice versa.

Similarly, BPDU interception and redirection to the CPM is performed only at ingress, meaning that as soon as at least one port within a specific VPLS service has BPDU translation enabled, all BPDUs received on any of the VPLS ports are redirected to the CPM.

BPDU translation involves all encapsulation actions that the datapath would perform for a specific outgoing port (such as adding VLAN tags depending on the outer SAP and the SDP encapsulation type) and adding or removing all the required VLAN information in a BPDU payload.

This feature can be enabled on a SAP/spoke only if STP is disabled in the context of the specific VPLS service.
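
The following hedged snippet shows L2PT termination and BPDU translation enabled on a SAP; note that both require STP to be disabled in the VPLS service, and the port and service values are examples.

    # Illustrative L2PT termination and BPDU translation on a SAP.
    configure service vpls 100
        stp
            shutdown                  # STP must be disabled in the service
        exit
        sap 1/1/2:100
            l2pt-termination          # terminate tunneled PDUs (Cisco L2PT MAC) on this SAP
            bpdu-translation auto     # translate BPDU formats (for example, STP <-> PVST)
        exit
    exit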

5.2.8.5.3. L2PT and BPDU Translation

L2PT termination is supported on 7210 SAS devices only for STP (Spanning Tree Protocol), PVST (Per-VLAN Spanning Tree Protocol), Cisco Discovery Protocol (CDP), Dynamic Trunking Protocol (DTP), Port Aggregation Protocol (PAgP), Unidirectional Link Detection (UDLD), and VLAN Trunking Protocol (VTP).

Enabling L2PT termination for any of these protocols also causes the other protocols tunneled by L2PT to be passed toward the CPM, because they all carry the same Cisco-specific destination MAC address.

The existing L2PT limitations apply.

  1. The protocols apply only to VPLS.
  2. The protocols are mutually exclusive with running STP on the same VPLS as soon as one SAP/spoke has L2PT/BPDU translation enabled.
  3. Forwarding occurs on the CPM and uses CPU processing cycles.

5.2.9. VPLS Redundancy

The VPLS standard (RFC 4762, Virtual Private LAN Services Using LDP Signalling) includes provisions for hierarchical VPLS, using point-to-point spoke-SDPs. Two applications have been identified for spoke-SDPs:

  1. to connect Multi-Tenant Units (MTUs) to PEs in a metro area network
  2. to interconnect the VPLS nodes of two networks

In both applications the spoke-SDPs serve to improve the scalability of VPLS. While node redundancy is implicit in non-hierarchical VPLS services (using a full mesh of SDPs between PEs), node redundancy for spoke-SDPs needs to be provided separately. In VPLS services, only two spoke-SDPs are allowed in an endpoint.

The 7210 SAS routers have implemented special features for improving the resilience of hierarchical VPLS instances, in both MTU and inter-metro applications.

5.2.9.1. Spoke-SDP Redundancy for Metro Interconnection

When two or more meshed VPLS instances are interconnected by redundant spoke-SDPs (as shown in Figure 63), a loop in the topology results. To remove such a loop from the topology, Spanning Tree Protocol (STP) can be run over the SDPs (links) which form the loop such that one of the SDPs is blocked. As running STP in each and every VPLS in this topology is not efficient, the node includes functionality which can associate a number of VPLSes to a single STP instance running over the redundant SDPs. Node redundancy is therefore achieved by running STP in one VPLS, and applying the conclusions of this STP to the other VPLS services. The VPLS instance running STP is referred to as the “management VPLS” or mVPLS.

In the case of a failure of the active node, STP on the management VPLS in the standby node will change the link states from disabled to active. The standby node will then broadcast a MAC flush LDP control message in each of the protected VPLS instances, so that the address of the newly active node can be relearned by all PEs in the VPLS.

It is possible to configure two management VPLS services, where both VPLS services have different active spokes (this is achieved by changing the path-cost in STP). By associating different user VPLSes with the two management VPLS services, load balancing across the spokes can be achieved.

Figure 63:  H-VPLS with Spoke Redundancy 

5.2.9.2. Spoke-SDP-Based Redundant Access

This feature allows a node deployed as an MTU (Multi-Tenant Unit switch) to be multi-homed for VPLS to multiple routers deployed as PEs, without requiring the use of mVPLS.

In the configuration example shown in Figure 63, the MTUs have spoke-SDPs to two PE devices. One spoke-SDP is designated as the primary and one as the secondary, based on a precedence value associated with each spoke. If the primary and secondary spoke-SDPs have the same precedence value, the spoke-SDP with the lower ID functions as the primary SDP.

The secondary spoke is in a blocking state (both receive and transmit) as long as the primary spoke is available. When the primary spoke becomes unavailable (due to link failure, PE failure, and so on), the MTU immediately switches traffic to the backup spoke and starts sending and receiving traffic on it. Optional revertive operation (with a configurable switch-back delay) is applicable only when one of the spokes is configured with a precedence of primary; otherwise, no revertive switchover takes place. Forced manual switchover is also supported.

To speed up the convergence time during a switchover, MAC flush is configured. The MTU generates a MAC flush message over the newly unblocked spoke when a spoke change occurs. As a result, the PEs receiving the MAC flush will flush all MACs associated with the impacted VPLS service instance and forward the MAC flush to the other PEs in the VPLS network if “propagate-mac-flush” is enabled.
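
A hedged MTU-side sketch of this redundancy model follows; the endpoint name, SDP IDs, and revert time are examples only.

    # Illustrative primary/secondary spoke-SDPs grouped under one endpoint.
    configure service vpls 100
        endpoint "core" create
            revert-time 60                       # optional revertive switch-back delay
        exit
        spoke-sdp 10:100 endpoint "core" create
            precedence primary                   # preferred spoke toward PE-1
        exit
        spoke-sdp 20:100 endpoint "core" create
            precedence 4                         # backup spoke toward PE-2
        exit
        propagate-mac-flush                      # forward received MAC flush to other peers
        no shutdown
    exit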

5.2.9.3. Inter-Domain VPLS Resiliency Using Multi-Chassis Endpoints

Note:

MC-EP is not supported on 7210 SAS platforms. This section provides an example of how 7210 SAS platforms can be used as MTU devices in an MC-EP solution. In this solution, 7750 SR routers provide the MC-EP functionality.

Inter-domain VPLS refers to a VPLS deployment where sites may be located in different domains. An example of inter-domain deployment can be where different Metro domains are interconnected over a Wide Area Network (Metro1-WAN-Metro2) or where sites are located in different autonomous systems (AS1-ASBRs-AS2).

Multi-chassis endpoint (MC-EP) provides an alternate solution that does not require RSTP at the gateway VPLS PEs while still using pseudowires to interconnect the VPLS instances located in the two domains.

MC-EP expands the single-chassis endpoint based on active-standby pseudowires for VPLS shown in Figure 64. In the solution depicted in Figure 64, 7210 SAS devices are used as MTUs.

Figure 64:  H-VPLS Resiliency Based on AS Pseudowires 

The active-standby pseudowire solution is appropriate for the scenario when only one VPLS PE (MTU-s) needs to be dual-homed to two core PEs (PE1 and PE2).

5.2.10. VPLS Access Redundancy

A second application of hierarchical VPLS uses MTUs that are MPLS-enabled and must have spoke-SDPs to the closest PE node. To protect against failure of the PE node, an MTU can be dual-homed.

The following are several mechanisms that can be used to resolve a loop in an access network where 7210 SAS devices are used:

  1. STP-based access, with or without mVPLS
  2. Ethernet APS using G.8032

5.2.10.1. STP-Based Redundant Access to VPLS

Figure 65:  Dual-homed MTUs in Two-Tier Hierarchy H-VPLS  

In the configuration shown in Figure 65, STP is activated on the MTU and the two PEs to resolve a potential loop.

To remove such a loop from the topology, the Spanning Tree Protocol (STP) can be run over the SDPs (links) that form the loop, such that one of the SDPs is blocked. Because running STP in every VPLS in this topology is not efficient, the node includes functionality that can associate a number of VPLSes with a single STP instance running over the redundant SDPs. Node redundancy is therefore achieved by running STP in one VPLS and applying the conclusions of that STP instance to the other VPLS services.

The VPLS instance running STP is referred to as the “management VPLS” or mVPLS. In the case of a failure of the active node, STP on the management VPLS in the standby node will change the link states from disabled to active. The standby node will then broadcast a MAC flush LDP control message in each of the protected VPLS instances, so that the address of the newly active node can be relearned by all PEs in the VPLS. It is possible to configure two management VPLS services, where both VPLS services have different active spokes (this is achieved by changing the path-cost in STP). By associating different user VPLSes with the two management VPLS services, load balancing across the spokes can be achieved.

In this configuration the scope of STP domain is limited to MTU and PEs, while any topology change needs to be propagated in the whole VPLS domain.

This is done by using “MAC-flush” messages defined by RFC 4762, Virtual Private LAN Services Using LDP Signaling. In the case where STP acts as a loop resolution mechanism, every Topology Change Notification (TCN) received in a context of STP instance is translated into an LDP-MAC address withdrawal message (also referred to as a MAC-flush message) requesting to clear all FDB entries except the ones learned from the originating PE. Such messages are sent to all PE peers connected through SDPs (mesh and spoke) in the context of VPLS services which are managed by the specific STP instance.

5.2.10.2. Redundant Access to VPLS without STP

The Nokia implementation also provides alternative methods for providing redundant access to Layer 2 services, such as MC-LAG. Also in this case, the topology change event needs to be propagated into the VPLS topology to provide fast convergence.

Figure 63 shows a dual-homed connection to the VPLS service (PE-A, PE-B, PE-C, PE-D) and the operation in the case of a link failure (between PE-C and Layer 2-B). Upon detection of a link failure, PE-C sends MAC-Address-Withdraw messages, which indicate to all LDP peers that they should flush all MAC addresses learned from PE-C. This leads to the flooding of packets addressed to the affected hosts and to a relearning process if an alternative path exists.

Note that the message described here is different from the message described in the previous section and in RFC 4762, Virtual Private LAN Services Using LDP Signaling. The difference is in the interpretation and the action performed in the receiving PE. According to the standard definition, upon receipt of a MAC withdraw message, all MAC addresses except the ones learned from the source PE are flushed. The message described in this section instead specifies that all MAC addresses learned from the source are flushed. This message has been implemented as an LDP address message with a vendor-specific type, length, value (TLV), and is called the flush-mine message.

The advantage of this approach (as compared to RSTP-based methods) is that only the affected MAC addresses are flushed, not the full forwarding database. While this method does not provide a mechanism to secure an alternative loop-free topology, the convergence time depends on the speed with which the specific CE device (the Layer 2-B switch in Figure 57) opens the alternative link, as well as on the speed with which the PE routers flush their FDB.

In addition, this mechanism is effective only if the PE and CE are directly connected (no hub or bridge), as it reacts only to a physical failure of the link.

5.2.11. MAC Flush Message Processing

The previous sections described the operating principles of several redundancy mechanisms available in the context of a VPLS service. All of them rely on the MAC flush message as a tool to propagate a topology change in the context of the specific VPLS. This section summarizes the basic rules for the generation and processing of these messages.

As described in the respective sections, the 7210 SAS supports two types of MAC flush messages: flush-all-but-mine and flush-mine. The main difference between these messages is the type of action they signal. Flush-all-but-mine requests the clearing of all FDB entries that were learned from all other LDP peers except the originating PE. This type is also defined by RFC 4762 as an LDP MAC address withdrawal with an empty MAC address list.

The flush-mine message requests the clearing of all FDB entries learned from the originating PE; that is, it has exactly the opposite effect of the flush-all-but-mine message. This type is not included in the RFC 4762 definition and is implemented using a vendor-specific TLV.

The advantages and disadvantages of the individual types should be apparent from the examples in the previous section. The description here summarizes the actions taken on reception of these messages and the conditions under which the individual messages are generated.

Upon reception of a MAC flush message (regardless of the type), an SR-series PE takes the following actions:

  1. clears the FDB entries of all indicated VPLS services, as specified by the message type
  2. propagates the message (preserving the type) to all LDP peers, if the “propagate-mac-flush” flag is enabled at the corresponding VPLS level

The flush-all-but-mine message is generated under the following conditions.

  1. The flush-all-but-mine message is received from an LDP peer and the propagate-mac-flush flag is enabled. The message is sent to all LDP peers in the context of the VPLS service in which it was received.
  2. A TCN message is received in the context of an STP instance. The flush-all-but-mine message is sent to all LDP peers connected with spoke and mesh SDPs in the context of the VPLS services controlled by the specific STP instance (based on the mVPLS definition). The message is sent only to LDP peers that are not part of the STP domain, meaning that the corresponding spoke and mesh SDPs are not part of the mVPLS.
  3. The flush-all-but-mine message is generated when a switchover between spoke-SDPs of the same endpoint occurs. The message is sent to the LDP peer connected through the newly active spoke-SDP.

The flush-mine message is generated under the following conditions.

  1. The flush-mine message is received from an LDP peer and the “propagate-mac-flush” flag is enabled. The message is sent to all LDP peers in the context of the VPLS service in which it was received.
  2. The flush-mine message is generated when a SAP or SDP transitions from an operationally up state to an operationally down state and the send-flush-on-failure flag is enabled in the context of the specific VPLS service. The message is sent to all LDP peers connected in the context of the specific VPLS service. The “send-flush-on-failure” flag is blocked in a VPLS service managed by an mVPLS; this prevents both messages from being sent at the same time.
  3. The flush-mine message is generated when an MC-LAG SAP transitions from an operationally up state to an operationally down state. The message is sent to all LDP peers connected in the context of the specific VPLS service.
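
The two service-level flags referenced in the preceding lists can be enabled as shown in this hedged sketch; the service ID is an example.

    # Illustrative MAC flush controls at the VPLS service level.
    configure service vpls 100
        propagate-mac-flush        # re-send received MAC flush messages to all LDP peers
        send-flush-on-failure      # send flush-mine when a SAP or SDP goes operationally down
    exit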

5.2.11.1. MAC Flush with STP

A second application of Hierarchical VPLS is in the use of Multi Tenant Units (MTU). MTUs are typically not MPLS-enabled, and therefore have Ethernet links to the closest PE node (see Figure 66). To protect against failure of the PE node, an MTU could be dual-homed and therefore have two SAPs on two PE nodes. To resolve the potential loop, STP is activated on the MTU and the two PEs.

As in the previous scenario, STP needs to run only in a single VPLS instance, and the results of the STP calculations are applied to all VPLSes on the link. Similarly, the standby node broadcasts MAC flush LDP messages in the protected VPLS instances when it detects that the active node has failed.

Figure 66:  H-VPLS with SAP Redundancy 

5.2.11.2. Selective MAC Flush

When using STP as described previously is not appropriate, the “Selective MAC flush” feature can be used instead.

In this scenario, the 7210 SAS that detects a port failure will send out a flush-all-from-ME LDP message to all PEs in the VPLS. The PEs receiving this LDP message will remove all MAC entries originated by the sender from the indicated VPLS.

A drawback of this approach is that the selective MAC flush does not signal that a backup path was found, only that the previous path is no longer available. In addition, the selective MAC Flush mechanism is effective only if the CE and PE are directly connected (no intermediate hubs or bridges) as it reacts only to a physical failure of the link. Consequently, Nokia recommends using the MAC flush with STP method previously described where possible.

5.2.11.3. Dual Homing to a VPLS Service

Figure 67:  Dual Homed CE Connection to VPLS 

Figure 67 shows a dual-homed connection to the VPLS service (PE-A, PE-B, PE-C, PE-D) and the operation in the case of a link failure (between PE-C and Layer 2-B). Upon detection of a link failure, PE-C sends MAC-Address-Withdraw messages, which indicate to all LDP peers that they should flush all MAC addresses learned from PE-C. This leads to the flooding of packets addressed to the affected hosts and to a relearning process if an alternative path exists.

Note that the message described here is different from the message described in draft-ietf-l2vpn-vpls-ldp-xx.txt, Virtual Private LAN Services over MPLS. The difference is in the interpretation and the action performed in the receiving PE. According to the draft definition, upon receipt of a MAC-withdraw message, all MAC addresses except the ones learned from the source PE are flushed. The message described in this section instead specifies that all MAC addresses learned from the source are flushed. This message has been implemented as an LDP address message with a vendor-specific type, length, value (TLV), and is called the flush-all-from-ME message.

The draft definition message is currently used in management VPLS, which uses RSTP for recovering from failures in Layer 2 topologies. The mechanism described in this document represents an alternative solution.

The advantage of this approach (compared to RSTP-based methods) is that only the affected MAC addresses are flushed, not the full forwarding database. Although this method does not provide a mechanism to secure an alternative loop-free topology, the convergence time depends on how quickly the specific CE device (the Layer 2-B switch in Figure 67) activates its alternative link, as well as on how quickly the PE routers flush their FDBs.

In addition, this mechanism is effective only if the PE and CE are directly connected (no intermediate hub or bridge), because it reacts only to a physical failure of the link.

5.2.12. VPLS Service Considerations

This section describes various 7210 SAS service features and special capabilities or considerations as they relate to VPLS services.

5.2.12.1. SAP Encapsulations

VPLS services are designed to carry Ethernet frame payloads, so they can provide connectivity between any SAPs that pass Ethernet frames. The following SAP encapsulations are supported on the VPLS service:

  1. Ethernet null
  2. Ethernet Dot1q
  3. Ethernet Dot1q Default
  4. Ethernet Dot1q Explicit Null

5.2.12.2. VLAN Processing

The SAP encapsulation definition on Ethernet ingress ports defines which VLAN tags are used to determine the service to which the packet belongs (see the illustrative SAP syntax after this list):

  1. Null encapsulation defined on ingress: any VLAN tags are ignored and the packet goes to a default service for the SAP.
  2. Dot1q encapsulation defined on ingress: only the first VLAN tag is considered.
  3. Dot1q default encapsulation defined on ingress: tagged packets that do not match any of the configured VLAN encapsulations are accepted. This acts as a default SAP for tagged packets.
  4. Dot1q explicit null encapsulation defined on ingress: any untagged or priority-tagged packets are accepted.
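The following sketch illustrates how these encapsulations are typically expressed as SAP identifiers. The service IDs and ports are examples only, port 1/1/1 is assumed to be dot1q-encapsulated and port 1/1/2 null-encapsulated, and the allowed SAP combinations remain subject to the svc-sap-type rules of the platform.

# Illustrative only; assumes VPLS 100 and VPLS 101 already exist.
# dot1q SAP: matches VLAN 100 only
configure service vpls 100 sap 1/1/1:100 create
exit all
# dot1q default SAP: tagged packets not matching another SAP on the port
configure service vpls 100 sap 1/1/1:* create
exit all
# dot1q explicit null SAP: untagged and priority-tagged packets
configure service vpls 100 sap 1/1/1:0 create
exit all
# null SAP on a null-encapsulated port: VLAN tags are ignored
configure service vpls 101 sap 1/1/2 create
exit all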

5.3. BGP Auto-Discovery for LDP VPLS

BGP Auto Discovery (BGP AD) for LDP VPLS is a framework for automatically discovering the endpoints of a Layer 2 VPN offering an operational model similar to that of an IP VPN. This model allows carriers to leverage existing network elements and functions, including but not limited to, route reflectors and BGP policies to control the VPLS topology.

BGP AD is an excellent complement to the already established and widely deployed Layer 2 VPN signaling mechanism, targeted LDP (T-LDP), providing one-touch provisioning for LDP VPLS, where all the related PEs are discovered automatically. The service provider may use existing BGP policies to regulate the exchanges between PEs in the same, or in different, autonomous system (AS) domains. The addition of BGP AD procedures does not require carriers to uproot their existing VPLS deployments or to change the signaling protocol.

5.3.1. BGP AD Overview

The BGP protocol establishes neighbor relationships between configured peers. An open message is sent after completion of the three-way TCP handshake. This open message contains information about the BGP peer sending the message, including the Autonomous System Number (ASN), BGP version, timer information, and operational parameters such as capabilities. The capabilities of a peer are exchanged using two numerical values: the Address Family Identifier (AFI) and the Subsequent Address Family Identifier (SAFI). These numbers are allocated by the Internet Assigned Numbers Authority (IANA). BGP AD uses AFI 25 (L2VPN) and SAFI 65 (VPLS).
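For reference, the following minimal sketch shows a BGP peering configured with the L2-VPN address family so that BGP AD information can be exchanged; the group name, AS number, and neighbor address are illustrative only.

configure router bgp
    group "bgp-ad-peers"
        family l2-vpn
        peer-as 65000
        neighbor 10.20.1.3
        exit
    exit
exit all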

5.3.2. Information Model

Following the establishment of the peer relationship, the discovery process begins as soon as a new VPLS service instance is provisioned on the PE.

Two VPLS identifiers are used to indicate the VPLS membership and the individual VPLS instance:

  1. VPLS-ID: membership information; a network-wide unique identifier with the same value assigned to all VPLS switch instances (VSIs) belonging to the same VPLS. It is encoded and carried as a BGP extended community in one of the following formats:
    1. A two-octet AS-specific extended community
    2. An IPv4 address-specific extended community
  2. VSI-ID: the unique identifier for each individual VSI, built by concatenating a route distinguisher (RD) with a 4-byte identifier (usually the system IP address of the VPLS PE); encoded and carried in the corresponding BGP NLRI.

To advertise this information, BGP AD employs a simplified version of the BGP VPLS NLRI where just the RD and the next 4 bytes are used to identify the VPLS instance. There is no need for Label Block and Label Size fields as T-LDP will take care of signaling the service labels later on.

The format of the BGP AD NLRI is very similar with the one used for IP VPN as shown in Figure 68. The system IP may be used for the last 4 bytes of the VSI-ID further simplifying the addressing and the provisioning process.

Figure 68:  BGP AD NLRI versus IP VPN NLRI 

Network Layer Reachability Information (NLRI) is exchanged between BGP peers indicating how to reach prefixes. The NLRI is used in the Layer 2 VPN case to tell PE peers how to reach the VSI rather than specific prefixes. The advertisement includes the BGP next hop and a route target (RT). The BGP next hop indicates the VSI location and is used in the next step to determine which signaling session is used for pseudowire signaling. The RT, also coded as an extended community, can be used to build a VPLS full mesh or a H-VPLS hierarchy through the use of BGP import or export policies.

BGP is only used to discover VPN endpoints and the corresponding far end PEs. It is not used to signal the pseudowire labels. This task remains the responsibility of targeted-LDP (T-LDP).

5.3.3. FEC Element for T-LDP Signaling

Two LDP FEC elements are defined in RFC 4447, Pseudowire Setup and Maintenance Using the Label Distribution Protocol (LDP). The original pseudowire-ID FEC element 128 (0x80) employs a 32-bit field to identify the virtual circuit ID, and it was used extensively in the initial VPWS and VPLS deployments. The simple format is easy to understand, but it does not provide the information model required for the BGP Auto-Discovery function. To support BGP AD and other new applications, a new Layer 2 FEC element, the generalized FEC element 129 (0x81), is required.

The generalized pseudowire-ID FEC element has been designed for auto-discovery applications. It provides a field, the address group identifier (AGI), that is used to signal the membership information from the VPLS-ID. Separate address fields are provided for the source and target addresses associated with the VPLS endpoints, called the Source Attachment Individual Identifier (SAII) and the Target Attachment Individual Identifier (TAII), respectively. These fields carry the VSI-ID values for the two instances that are to be connected through the signaled pseudowire.

The detailed format for FEC 129 is shown in Figure 69.

Figure 69:  Generalized Pseudowire-ID FEC Element 

Each of the FEC fields is designed as a sub-TLV equipped with its own type and length, providing support for new applications. To accommodate the BGP AD information model, the following FEC formats are used:

  1. AGI (type 1) is identical in format and content to the BGP extended community attribute used to carry the VPLS-ID value.
  2. Source AII (type 1) is a 4-byte value carrying the local VSI-ID (the outgoing NLRI minus the RD).
  3. Target AII (type 1) is a 4-byte value carrying the remote VSI-ID (the incoming NLRI minus the RD).

5.3.4. BGP-AD and Target LDP (T-LDP) Interaction

BGP is responsible for discovering the location of VSIs that share the same VPLS membership. LDP protocol is responsible for setting up the pseudowire infrastructure between the related VSIs by exchanging service-specific labels between them.

When the local VPLS information is provisioned in the local PE, the related PEs participating in the same VPLS are identified through BGP AD exchanges. A list of far-end PEs is generated and triggers the creation, if required, of the necessary T-LDP sessions to these PEs and the exchange of the service-specific VPN labels. The steps for the BGP AD discovery process and LDP session establishment and label exchange are shown in Figure 70.

Figure 70:  BGP-AD and T-LDP Interaction 

Key:

  1. Establish IBGP connectivity with the route reflector (RR).
  2. Configure VPN (10) on edge node (PE3).
  3. Announce VPN to RR using BGP-AD.
  4. Send membership update to each client of the cluster.
  5. LDP exchange or inbound FEC filtering (IFF) of non-match or VPLS down.
  6. Configure VPN (10) on edge node (PE2).
  7. Announce VPN to RR using BGP-AD.
  8. Send membership update to each client of the cluster.
  9. LDP exchange or inbound FEC filtering (IFF) of non-match or VPLS down.
  10. Complete LDP bidirectional pseudowire establishment FEC 129.

5.3.5. SDP Usage

Service Access Points (SAP) are linked to transport tunnels using Service Distribution Points (SDP). The service architecture of the 7210 platform allows services to be abstracted from the transport network.

MPLS transport tunnels are signaled using the Resource Reservation Protocol (RSVP-TE) or by the Label Distribution Protocol (LDP). The capability to automatically create an SDP only exists for LDP based transport tunnels. Using a manually provisioned SDP is available for both RSVP-TE and LDP transport tunnels. Refer to the 7210 SAS-M, T, R6, R12, Mxp, Sx, S MPLS Guide for more information about MPLS, LDP, and RSVP.

5.3.6. Automatic Creation of SDPs

When BGP AD is used for LDP VPLS and LDP is used as the transport tunnel there is no requirement to manually create an SDP. The LDP SDP can be automatically instantiated using the information advertised by BGP AD. This simplifies the configuration on the service node.

Enabling LDP on the IP interfaces connecting all nodes between the ingress and the egress builds transport tunnels based on the best IGP path. LDP bindings are automatically built and stored in the hardware. These entries contain an MPLS label pointing to the best next hop along the best path toward the destination.

When two endpoints need to connect and no SDP exists, a new SDP is automatically constructed. New services added between two endpoints that already have an automatically created SDP immediately use that SDP; no new SDP is constructed. The far-end information is gleaned from the BGP next-hop information in the NLRI. When services are withdrawn with a BGP_Unreach_NLRI, the automatically established SDP remains up as long as at least one service is connected between those endpoints. An automatically created SDP is removed and its resources released when the only or last service is removed.

5.3.7. Manually Provisioned SDP

The carrier is required to manually provision the SDP if they create transport tunnels using RSVP-TE. Operators also have the option to choose a manually configured SDP if they use LDP as the tunnel signaling protocol. The functionality is the same regardless of the signaling protocol.
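The following sketch shows the general form of a manually provisioned MPLS SDP; the SDP ID, far-end address, and LSP name are illustrative. An LDP-signaled SDP would reference the ldp keyword instead of a named RSVP-TE LSP.

configure service sdp 10 mpls create
    far-end 10.20.1.3
    lsp "to-PE3"
    keep-alive
        shutdown
    exit
    no shutdown
exit all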

Creating a BGP-AD enabled VPLS service on an ingress node with the manually provisioned SDP option causes the Tunnel Manager to search for an existing SDP that connects to the far-end PE. The far-end IP information is gleaned from the BGP next hop information in the NLRI. If a single SDP exists to that PE, it is used. If no SDP is established between the two endpoints, the service remains down until a manually configured SDP becomes active.

When multiple SDPs exist between two endpoints, the tunnel manager selects the appropriate SDP. The algorithm prefers SDPs with the best (lowest) metric. Should there be multiple SDPs with equal metrics, the operational state of the SDPs with the best metric is considered. If the operational state is the same, the SDP with the higher sdp-id is used. If an SDP with a preferred metric is found with an operational state that is not active, the tunnel manager flags it as ineligible and restarts the algorithm.

5.3.8. Automatic Instantiation of Pseudowires (SDP Bindings)

The choice of manually provisioned or auto-provisioned SDPs has limited impact on the amount of required provisioning. Most of the savings are achieved through the automatic instantiation of the pseudowire infrastructure (SDP bindings). This is achieved for every auto-discovered VSI through the use of the pseudowire template concept. Each VPLS service that uses BGP AD contains the pw-template-binding option, defining specific Layer 2 VPN parameters. This command references a pw-template, which defines the pseudowire parameters. The same pw-template may be referenced by multiple VPLS services. As a result, changes to these pseudowire templates have to be treated with great care, because they may impact many customers at once.
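For illustration, the following sketch shows one way a BGP AD-enabled VPLS might reference a pseudowire template. The service and template IDs, the VPLS-ID, and the SAP are examples only, and the exact CLI branch that holds the pw-template-binding command can vary by release; treat this as a schematic outline rather than an exact configuration.

# Illustrative only.
configure service pw-template 1 create
exit all
configure service vpls 300 customer 1 create
    bgp
        pw-template-binding 1
        exit
    exit
    bgp-ad
        vpls-id 65000:300
        no shutdown
    exit
    sap 1/1/5:300 create
    exit
    no shutdown
exit all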

The Nokia implementation provides for safe handling of pseudowire templates. Changes to the pseudowire templates are not automatically propagated. Tools are provided to evaluate and distribute the changes. The following command is used to distribute changes to a “pw-template” at the service level to one or all services that use that template.

PERs-4# tools perform service id 300 eval-pw-template 1 allow-service-impact

If the service ID is omitted, all services are updated. The type of change made to the pw-template influences how the service is impacted.

  1. Adding or removing a split-horizon-group causes the router to destroy the original object and recreate it using the new value.
  2. Changing parameters in the vc-type {ether | vlan} command requires LDP to re-signal the labels.

Both of these changes affect the services. Other changes are not service affecting.

5.3.9. Mixing Statically Configured and Auto-Discovered Pseudowires in a VPLS Service

The services implementation allows manually provisioned and auto-discovered pseudowires (SDP bindings) to coexist in the same VPLS instance (that is, both FEC 128 and FEC 129 are supported). This allows for a gradual introduction of auto-discovery into an existing VPLS deployment.

Because FEC 128 and FEC 129 represent different addressing schemes, it is important to ensure that only one is used at any point in time between the same two VPLS instances. Otherwise, both pseudowires may become active, causing a loop that might adversely impact the correct functioning of the service. It is recommended that the FEC 128 pseudowire be disabled as soon as the FEC 129 addressing scheme is introduced in a portion of the network. Alternatively, RSTP may be used during the migration as a safety mechanism to provide additional protection against operational errors.

5.3.10. Resiliency Schemes

The use of BGP-AD on the network side, or in the backbone, does not affect the different resiliency schemes Nokia has developed in the access network. This means that both Multi-Chassis Link Aggregation (MC-LAG) and Management-VPLS (M-VPLS) can still be used.

BGP-AD may coexist with Hierarchical-VPLS (H-VPLS) resiliency schemes (for example, dual homed MTU-s devices to different PE-rs nodes) using existing methods (M-VPLS and statically configured Active or Standby pseudowire endpoint).

If provisioned SDPs are used by BGP AD, M-VPLS may be employed to provide loop avoidance. However, it is currently not possible to auto-discover active or standby pseudowires and to instantiate the related endpoint.

5.4. Routed VPLS

Note:

R-VPLS with IPv6 interfaces is supported only on 7210 SAS-Mxp operating in the network mode.

Routed VPLS (R-VPLS) allows a VPLS instance to be associated with an IP interface.

Note:

  1. R-VPLS is supported on all 7210 SAS platforms as described in this document, including those operating in access-uplink mode.
  2. For 7210 SAS platforms operating in network mode, R-VPLS can provide both customer service and in-band management of the node.
  3. For 7210 SAS platforms operating in access-uplink mode, R-VPLS can only provide in-band management of the node.

Within an R-VPLS service, traffic with a destination MAC matching that of the associated IP interface is routed based on the IP forwarding table; all other traffic is forwarded based on the VPLS forwarding table.

In access-uplink mode, an R-VPLS service can be associated with an IPv4 interface and supports only static routing. It is primarily designed for in-band management of the node. It allows in-band management of the 7210 SAS nodes in a ring deployment using a single IPv4 subnet, reducing the number of IP subnets needed.

In network mode, an R-VPLS service can be associated with an IPv4 or IPv6 interface and supports static routing and other routing protocols. It can be used to provide a service to the customer or for in-band management of the node.

5.4.1. IES or VPRN IP Interface Binding

A standard IP interface within an existing IES or VPRN service context may be bound to a service name. A VPLS service only supports binding for a single IP interface.

While an IP interface may only be bound to a single VPLS service, the routing context containing the IP interface (IES or VPRN) may have other IP interfaces bound to other VPLS service contexts. That is, R-VPLS allows IP interfaces in IES and VPRN services to be bound to VPLS services.

5.4.2. Assigning a Service Name to a VPLS Service

When a service name is applied to any service context, the name and service ID association is registered with the system. A service name cannot be assigned to more than one service ID.

Special consideration is given to a service name that is assigned to a VPLS service that has the configure service vpls allow-ip-int-binding command enabled. If a name is applied to the VPLS service while the allow-ip-int-binding flag is set, the system scans the existing IES and VPRN services for an IP interface that is bound to the specified service name. If an IP interface is found, the IP interface is attached to the VPLS service associated with the name. Only one interface can be bound to the service with the specified name.

If the allow-ip-int-binding command is not enabled on the VPLS service, the system does not attempt to resolve the VPLS service name to an IP interface. As soon as the allow-ip-int-binding flag is configured on the VPLS service, the corresponding IP interface is bound and becomes operationally up. There is no need to toggle the shutdown or no shutdown command.

If an IP interface is not currently bound to the VPLS service name, no action is taken at the time of the service name assignment.
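The following minimal sketch, with illustrative service IDs, names, SAP, and address, shows the relationship between the VPLS service name, the allow-ip-int-binding flag, and the IES interface binding described above. Depending on the platform and mode, the VPLS may additionally need to be created with the r-vpls keyword and an appropriate svc-sap-type.

# Illustrative only.
configure service vpls 200 customer 1 create
    service-name "rvpls-1"
    allow-ip-int-binding
    sap 1/1/2:200 create
    exit
    no shutdown
exit all
configure service ies 100 customer 1 create
    interface "int-rvpls-1" create
        address 192.0.2.1/24
        vpls "rvpls-1"
exit all
configure service ies 100 no shutdown
exit all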

5.4.3. Service Binding Requirements

When a service is created on the system using a defined service name, the system checks that the service type is VPLS. If the created service type is VPLS, the IP interface is eligible to enter the operationally up state.

5.4.3.1. Bound Service Name Assignment

In the event that a bound service name is assigned to a service within the system, the system first checks to ensure that the service type is VPLS. Secondly, the system ensures that the service is not already bound to another IP interface through the service name. If the service type is not VPLS or the service is already bound to another IP interface through the service ID, the service name assignment fails.

A single VPLS instance cannot be bound to two separate IP interfaces.

5.4.3.2. Binding a Service Name to an IP Interface

An IP interface within an IES or VPRN service context may be bound to a service name at any time. Only one interface can be bound to a service. When an IP interface is bound to a service name and the IP interface is administratively up, the system scans for a VPLS service context using the name and takes the following actions.

  1. If the name is not currently in use by a service, the IP interface is placed in an operationally down: Non-existent service name or inappropriate service type state.
  2. If the name is currently in use by a non-VPLS service or the wrong type of VPLS service, the IP interface is placed in the operationally down: Non-existent service name or inappropriate service type state.
  3. If the name is currently in use by a VPLS service without the allow-ip-int-binding flag set, the IP interface is placed in the operationally down: VPLS service allow-ip-int-binding flag not set state. There is no need to toggle the shutdown or no shutdown command.
  4. If the name is currently in use by a valid VPLS service and the allow-ip-int-binding flag is set, the IP interface is eligible to be placed in the operationally up state depending on other operational criteria being met.

5.4.3.3. IP Interface Attached VPLS Service Constraints

When a VPLS service has been bound to an IP interface through its service name, the service name assigned to the service cannot be removed or changed unless the IP interface is first unbound from the VPLS service name.

A VPLS service that is currently attached to an IP interface cannot be deleted from the system unless the IP interface is unbound from the VPLS service name.

The allow-ip-int-binding flag within an IP interface attached VPLS service cannot be reset. The IP interface must first be unbound from the VPLS service name to reset the flag.

5.4.3.4. IP Interface and VPLS Operational State Coordination

When the IP interface is successfully attached to a VPLS service, the operational state of the IP interface is dependent upon the operational state of the VPLS service.

The VPLS service remains down until at least one virtual port (SAP, spoke-SDP, or mesh SDP) is operational.

5.4.3.5. IP Interface MTU and Fragmentation

Note:

VPLS service MTU is supported on all 7210 SAS platforms as described in this document, except 7210 SAS-M and 7210 SAS-T platforms operating in access-uplink mode.

The user must ensure that the port MTU is configured appropriately so that the largest packet traversing through any of the SAPs (virtual ports) of the VPLS service can be forwarded out of any of the SAPs. VPLS services do not support fragmentation and can discard packets larger than the configured port MTU.

When an IP interface is associated with a VPLS service, the IP MTU is based on either the administrative value configured for the IP interface or an operational value derived from the port MTUs of all the SAPs configured in the service. For every port that has a SAP configured in the VPLS service, the port MTU minus the Layer 2 header and tags is considered, and the minimum of these values is taken as the computed MTU. The operational value of the IP interface MTU is then set as follows; a worked example follows the list.

  1. If the configured (administrative) IP MTU is greater than the computed MTU, the operational IP MTU is set to the computed MTU.
  2. If the configured (administrative) IP MTU is less than or equal to the computed MTU, the operational IP MTU is set to the configured (administrative) value.
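For example, if the VPLS service has SAPs on two ports with port MTUs of 1514 bytes and 9212 bytes, and the Layer 2 header plus tags on those ports account for 14 bytes and 18 bytes respectively, the computed MTU is the minimum of (1514 - 14) and (9212 - 18), that is, 1500 bytes. An administrative IP MTU of 1600 would then result in an operational IP MTU of 1500, whereas an administrative IP MTU of 1480 would be used as configured.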

5.4.4. ARP and VPLS FIB Interactions

Two address-oriented table entries are used when routing into a VPLS service. On the routing side, an ARP entry is used to determine the destination MAC address used by an IP next hop. In the case where the destination IP address in the routed packet is a host on the local subnet represented by the VPLS instance, the destination IP address is used as the next-hop IP address in the ARP cache lookup. If the destination IP address is in a remote subnet that is reached through another router attached to the VPLS service, the routing lookup returns the local IP address of that remote router on the VPLS service. If the next hop is not currently in the ARP cache, the system generates an ARP request to determine the destination MAC address associated with the next-hop IP address. IP routing to all destination hosts associated with the next-hop IP address stops until the ARP cache is populated with an entry for the next hop. Dynamically populated ARP entries age out according to the ARP aging timer.

The second address table entry that affects VPLS routed packets is the destination MAC lookup in the VPLS service context. The MAC associated with the ARP table entry for the IP next hop may or may not currently be populated in the VPLS Layer 2 FIB table. While the destination MAC is unknown (not populated in the VPLS FIB), the system floods all packets destined for that MAC (routed or bridged) to all SAPs within the VPLS service context. When the MAC is known (populated in the VPLS FIB), all packets destined for the MAC (routed or bridged) are sent only to the specific SAP where the MAC has been learned. As with ARP entries, static MAC entries may be created in the VPLS FIB. Dynamically learned MAC addresses are allowed to age out or be flushed from the VPLS FIB, while static MAC entries always remain associated with a specific virtual port. A dynamic MAC may also be relearned on a VPLS SAP other than the current SAP in the FIB; in this case, the system automatically moves the MAC FIB entry to the new VPLS SAP.

Note:

  1. On the 7210 SAS, whenever a MAC entry is removed from the VPLS FIB (either explicitly by the user or because of MAC aging or a MAC move), ARP entries that match this MAC address are also removed from the ARP cache. ARP entries also age out and are removed from the ARP cache independently, even though the corresponding VPLS FIB entries are not removed.
  2. If the VPLS FIB limit is reached and the node is no longer able to learn new MAC addresses, the corresponding ARP entries are also not learned.

5.4.5. R-VPLS Specific ARP Cache Behavior

In typical routing behavior, the system uses the IP route table to select the egress interface, and an ARP entry is used to forward the packet to the appropriate Ethernet MAC address. With R-VPLS, the egress IP interface may be represented by multiple egress virtual ports (VPLS service SAPs).

Table 53 describes how the ARP cache and MAC FIB entry states interact.

Table 53:  Routing Behavior in R-VPLS and Interaction Between the ARP Cache and MAC FIB 

  1. ARP cache miss (no entry); MAC FIB entry known or unknown: the system triggers a request to the control-plane ARP processing module to send an ARP request out of all the SAPs (also known as virtual ports) of the VPLS instance.
  2. ARP cache hit; MAC FIB entry known: the packet is forwarded to the specific VPLS virtual port (SAP).
  3. ARP cache hit; MAC FIB entry unknown: this case does not typically occur on the 7210 SAS-D because, when a Layer 2 entry is removed from the FDB, the matching MAC address is also removed from the ARP cache. On the 7210 SAS-K, the packet is sent out of all the SAPs of the VPLS instance.

5.4.5.1. The allow-ip-int-binding VPLS Flag

The allow-ip-int-binding flag on a VPLS service context is used to inform the system that the VPLS service is enabled for routing support. The system uses the setting of the flag as a key to determine what type of ports the VPLS service may span.

The system also uses the flag state to define which VPLS features are configurable on the VPLS service to prevent enabling a feature that is not supported when routing support is enabled.

5.4.6. R-VPLS SAP Support on Standard Ethernet Ports

When the allow-ip-int-binding flag is set (routing support enabled) on a VPLS service, SAPs within the service can be created on standard Ethernet ports.

5.4.6.1. LAG Port Membership Constraints

If a LAG has an unsupported port type as a member, a SAP for the routing-enabled VPLS service cannot be created on that LAG. When one or more routing-enabled VPLS SAPs are associated with a LAG, an unsupported Ethernet port type cannot be added to the LAG membership.

5.4.6.2. VPLS Feature Support and Restrictions

When the allow-ip-int-binding flag is set on a VPLS service, the following restrictions and feature support apply (the flag also cannot be enabled while a conflicting feature is applied to the VPLS service). The restrictions apply to both network mode and access-uplink mode unless otherwise noted.

  1. In network mode, SDPs used in spoke or mesh SDP bindings cannot be configured.
  2. In access-uplink mode, the VPLS service type cannot be MVPLS.
  3. In network mode, the VPLS service type must be R-VPLS; no other VPLS service is allowed.
  4. In access-uplink mode, MVR from an R-VPLS SAP to another SAP is not supported.
  5. In access-uplink mode, default QinQ SAPs are not supported in an R-VPLS service.
  6. In network mode, MVR from an R-VPLS SAP to another R-VPLS SAP is supported. See R-VPLS and IGMPv3 Snooping for more information.
  7. The allow-ip-int-binding command cannot be used in a VPLS service that is acting as the G.8032 control instance.
  8. IP (IPv4 and IPv6) filters (ingress and egress) can be used with R-VPLS SAPs. Additionally, IP ingress override filters are supported, which affects the behavior of the IP filters attached to the R-VPLS SAPs.
  9. MAC filters (ingress and egress) are not supported for use with R-VPLS SAPs.
  10. In access-uplink mode, a VPLS IP interface is not allowed in an R-VPLS service, and an R-VPLS service/SAP cannot be configured with a VPLS IP interface.
  11. In access-uplink mode, the VPLS service can be configured with either access SAPs or access-uplink SAPs. In network mode, the VPLS service can be configured only with access SAPs or with SAPs on hybrid ports.
  12. In access-uplink mode, the VPLS service can use the following svc-sap-type values: any, dot1q-preserve, and null-star. Only specific SAP combinations are allowed for a specific svc-sap-type. Allowed SAP combinations are similar to those for plain VPLS service, except that default QinQ SAPs are not supported.
  13. In network mode, the VPLS service can use the following svc-sap-type values: any, null-star, and dot1q-preserve.
  14. G.8032 or MVPLS/STP based protection mechanisms can be used with an R-VPLS service. A separate G.8032 control instance or a separate MVPLS/STP instance must be used and the R-VPLS SAPs must be associated with these control instances such that the R-VPLS SAP forwarding state is driven by the control instance protocols.
  15. IP multicast is not supported in an R-VPLS service.
  16. IGMP snooping is supported in an R-VPLS service for 7210 SAS-M (network mode), 7210 SAS-T (network mode), 7210 SAS-Mxp, 7210 SAS-S, and 7210 SAS-Sx.
  17. In access-uplink mode, DHCP snooping is not supported for the SAPs configured in an R-VPLS service. Instead, DHCP relay can be enabled on the IES service associated with the R-VPLS service.
  18. In network mode, only on 7210 SAS-Mxp, DHCP IPv6 relay can be enabled on the IES service and VPRN service associated with the R-VPLS service. DHCPv6 snooping is not supported.
  19. In network mode, DHCP snooping is not supported for the SAPs configured in an R-VPLS service. Instead, DHCP relay (IPv4) can be enabled on the IES service associated with the R-VPLS service.
  20. In network mode, R-VPLS SAPs are allowed on access ports and hybrid ports.
  21. In network mode, an R-VPLS SAP drops packets received with extra tags. That is, if a packet is received on an R-VPLS SAP with more tags than the SAP tags to which it is mapped, it is dropped. This is true for all supported encapsulations (that is, null, dot1q, and QinQ encapsulations) of the port. For example, double-tagged packets received on a dot1q SAP configured in an R-VPLS service are dropped on ingress.
  22. In the saved configuration file, for the R-VPLS service, the R-VPLS service instance appears twice, once for service creation and once with all the other configuration parameters. This is required to resolve references to the R-VPLS service and to execute the configuration without any errors.

5.4.7. VPLS SAP Ingress IP Filter Override

When an IP Interface is attached to a VPLS service context, the VPLS SAP provisioned IP filter for ingress routed packets may be optionally overridden to provide special ingress filtering for routed packets. This allows different filtering for routed packets and non-routed packets. The filter override is defined on the IP interface bound to the VPLS service name. A separate override filter may be specified for IPv4 packet types.

If a filter for a specific packet type (IP) is not overridden, the SAP-specified filter is applied to the packet (if defined).

Table 54 and Table 55 list the ACL lookup behavior with and without an ingress override filter attached to an IES interface in an R-VPLS service.

Table 54:  ACL Lookup Behavior with Ingress Override Filter Attached to an IES Interface in an R-VPLS Service 

Type of Traffic (SAP Ingress IPv4 Filter / SAP Egress IPv4 Filter / Ingress Override IPv4 Filter)

  1. Destination MAC != IES IP interface MAC: Yes / Yes / No
  2. Destination MAC = IES IP interface MAC and destination IP on the same subnet as the IES IP interface: No / No / Yes
  3. Destination MAC = IES IP interface MAC, destination IP not on the same subnet as the IES IP interface, and no route to the destination IP exists: No / No / No
  4. Destination MAC = IES IP interface MAC, destination IP not on the same subnet as the IES IP interface, and a route to the destination IP exists: No / No / Yes
  5. Destination MAC = IES IP interface MAC and IP TTL = 1: No / No / No
  6. Destination MAC = IES IP interface MAC and IPv4 packet with options: No / No / No
  7. Destination MAC = IES IP interface MAC and IPv4 multicast packet: No / No / No

Table 55:  ACL Lookup Behavior Without Ingress Override Filter Attached to an IES Interface in an R-VPLS Service 

Type of Traffic (SAP Ingress IPv4 Filter / SAP Egress IPv4 Filter)

  1. Destination MAC != IES IP interface MAC: Yes / Yes
  2. Destination MAC = IES IP interface MAC and destination IP on the same subnet as the IES IP interface: Yes / No
  3. Destination MAC = IES IP interface MAC, destination IP not on the same subnet as the IES IP interface, and no route to the destination IP exists: No / No
  4. Destination MAC = IES IP interface MAC, destination IP not on the same subnet as the IES IP interface, and a route to the destination IP exists: Yes / No
  5. Destination MAC = IES IP interface MAC and IP TTL = 1: No / No
  6. Destination MAC = IES IP interface MAC and IPv4 packet with options: No / No
  7. Destination MAC = IES IP interface MAC and IPv4 multicast packet: No / No

5.4.7.1. QoS Support for VPLS SAPs and IP Interfaces in an R-VPLS Service

  1. SAP ingress classification (IPv4, IPv6, and MAC criteria) is supported for SAPs configured in the service. SAP ingress policies cannot be associated with IES IP interface.
  2. On 7210 SAS-M, 7210 SAS-T, 7210 SAS-Sx/S 1/10GE, and 7210 SAS-Sx 10/100GE, egress port-based queuing and shaping are available. It is shared among all the SAPs on the port.
  3. On the 7210 SAS-Mxp, when the node is operating in SAP-based queuing mode, unicast traffic sent out of R-VPLS SAPs uses SAP-based egress queues, while BUM traffic sent out of R-VPLS SAPs uses per-port egress queues. When the node is operating in port-based queuing mode, both unicast and BUM traffic sent out of R-VPLS SAPs use per-port egress queues. For more information, see the 7210 SAS-M, T, R6, R12, Mxp, Sx, S Quality of Service Guide.
  4. Port based Egress Marking is supported for both routed packets and bridged packets. The existing access egress QoS policy can be used for Dot1p marking and DSCP marking.
  5. In access-uplink mode, IES IP interfaces bound to R-VPLS services, IES IP interfaces on access SAPs, and IES IP interfaces on access-uplink SAPs are designed for in-band management of the node. Consequently, they share a common set of queues for CPU-bound management traffic. All CPU-bound traffic is policed to predefined rates before being queued into CPU queues for application processing. The system uses meters per application or per set of applications; it does not allocate meters per IP interface. These mechanisms reduce the possibility of CPU overloading; users must still apply appropriate security policies either on the node or in the network to ensure that overload does not occur.

5.4.7.2. R-VPLS and Routing Protocols Support

In access-uplink mode, R-VPLS is supported only in the base routing instance. Only IPv4 addressing support is available for IES interfaces associated with an R-VPLS service.

In network mode, R-VPLS is supported in both base routing instances (IES) and VPRN services. IPv4 and IPv6 addressing support is available for IES and VPRN IP interfaces associated with an R-VPLS service.

Table 56 lists the support available for routing protocols on IP interfaces bound to a VPLS service in access-uplink mode and network mode.

Table 56:  Routing Protocols on IP Interfaces Bound to a VPLS Service 

Protocol (Access-uplink Mode / Network Mode)

  1. Static routing: Supported / Supported
  2. BGP: Not supported / Supported
  3. OSPF: Not supported / Supported
  4. IS-IS: Not supported / Supported
  5. BFD: Not supported / Supported
  6. VRRP: Not supported / Supported
  7. ARP and proxy ARP: ARP is supported / Both are supported
  8. DHCP relay (see note below): Supported / Supported
  9. DHCPv6 relay: Not supported / Supported only on the 7210 SAS-Mxp operating in network mode

    Note:

  1. In both access-uplink and network modes, DHCP relay can be configured on the IES interface associated with the R-VPLS service; DHCP snooping cannot be configured on the VPLS SAPs in the R-VPLS service.

5.4.7.3. Spanning Tree and Split Horizon

The R-VPLS context supports all spanning tree capabilities that a non R-VPLS service supports. Service-based SHGs are not supported in an R-VPLS context.

5.4.8. R-VPLS and IGMPv3 Snooping

Note:

This feature is supported on all 7210 SAS platforms as described in this document, except those operating in access-uplink mode.

This feature (IGMPv3 snooping in R-VPLS) extends IGMP snooping to an R-VPLS service. On the 7210 SAS, VPLS services that use MPLS uplinks (network mode) support IGMP snooping with IGMP v1 and v2 only; IGMPv3 is not supported, which means that only Layer 2 multicast is supported. To provide source-based IP multicast, support for IGMP snooping v1, v2, and v3 is added to the R-VPLS service. The IGMPv3 snooping in R-VPLS feature gives customers the option to use an R-VPLS service without a configured IP interface association to deliver IP multicast traffic in access Layer 2 networks. Users also have the option to configure an MVR service.

IGMPv3 snooping in R-VPLS is supported only for IES (not for VPRNs).

For information about IGMP snooping in the context of VPLS, see IGMP Snooping in a VPLS Service.

5.4.8.1. Configuration Guidelines and Restrictions for IGMP Snooping in R-VPLS

The following items apply to IGMP snooping in R-VPLS and should be included with the regular VPLS multicast configuration guidelines (see Configuration Guidelines for IGMP Snooping in VPLS Service and R-VPLS Supported Functionality and Restrictions):

  1. R-VPLS without an IP interface association can be used to emulate VPLS service with support for IGMPv3 snooping.
  2. R-VPLS with or without an IP interface association can be used for IGMPv3 snooping. If MVR is enabled on the service, the service should not have an IP interface association.
  3. IGMPv3 snooping can be enabled in the context of the R-VPLS (both with and without MVR). It cannot be enabled in regular VPLS service. Regular VPLS service supports IGMP v1 and v2 only.
  4. MVR can be configured in an R-VPLS without an IP interface association. It can be used to leak multicast traffic to a user R-VPLS service with an IP interface configuration; therefore, a user R-VPLS can be used to forward both unicast and multicast services (see the sketch after this list).

In addition, the following list of guidelines and restrictions pertain to IGMP snooping in an R-VPLS service:

  1. An R-VPLS service can have only a single SAP per port. That is, two SAPs on the same port cannot be configured in the same service.
  2. Spoke-SDPs and mesh SDPs cannot be configured in an R-VPLS service.
  3. On 7210 SAS devices, multicast traffic received on a port can be processed in the context of either igmp-snooping (Layer 2 Ethernet multicast with IGMP v1 or v2 snooping) or l3-multicast (either multicast in a Layer 3 service or IGMP snooping in an R-VPLS), but not both. That is, it is not possible to configure SAPs on the port such that one SAP is a receiver for multicast traffic processed by IGMP snooping and another SAP is a receiver for multicast traffic processed by IP multicast in the context of a Layer 3 service or R-VPLS. A per-port option is available using the configure> port> ethernet> multicast-ingress {l2-mc | ip-mc} command to enable one or the other (see the command example after this list). Refer to the 7210 SAS-M, T, R6, R12, Mxp, Sx, S Interface Configuration Guide for more information about this command. By default, IGMP snooping is enabled for backward compatibility; users need to explicitly change this setting to allow processing of received multicast traffic as IP multicast in the context of a Layer 3 service or R-VPLS.
  4. If a VPLS SAP is configured on the same port as the port on which IP multicast is enabled, then multicast traffic received on the SAP is dropped. Unicast, broadcast, and unknown-unicast packets received on the SAP are forwarded appropriately. This behavior is true only for VPLS SAPs and does not apply to VPLS SDPs, Epipe SAPs, and Epipe SDPs.
  5. With R-VPLS multicast and when using MVR capability, a port on which receivers are present can be configured to perform either Layer 2 multicast replication (that is, no IP TTL decrement and no source MAC replacement) or Layer 3 multicast replication (that is, IP TTL is decremented and source MAC is replaced with 7210 SAS chassis MAC or IP interface MAC). An option to use either Layer 2 or Layer 3 multicast replication is available using the configure port ethernet multicast-egress {l2-switch | l3-forward} command. Refer to the 7210 SAS-M, T, R6, R12, Mxp, Sx, S Interface Configuration Guide for more information about this command. All SAPs on the port have the same behavior.
  6. An MVR R-VPLS must be configured without an IP interface and will support Layer 2 forwarding of both unicast and multicast traffic (that is, no IP forwarding).
  7. A user R-VPLS can be configured with an IP interface and supports Layer 2 forwarding of both unicast and multicast traffic (with (S,G) IP multicast replication) as well as Layer 3 forwarding of unicast traffic.
  8. On 7210 SAS-Mxp and 7210 SAS-R6, when using SAP-based egress queues and scheduler, R-VPLS BUM traffic uses per port egress queues—not per SAP egress queues.
  9. In an MVR configuration, the svc-sap-type of the R-VPLS service that is the source (also known as MVR R-VPLS service) and the svc-sap-type of the R-VPLS service that is the sink (also known as user R-VPLS service) should match.
  10. On 7210 SAS-Mxp, 7210 SAS-M, and 7210 SAS-T, the MVR R-VPLS service configured with IGMPv3 snooping shares resources with TWAMP. An increase in one decreases the amount of resources available for the other. Contact your Nokia representative for more information about scaling of these features.
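For reference, the port-level commands referred to in the list above take the following form; the port identifiers are examples only.

# Process multicast received on this port as IP multicast (Layer 3 or R-VPLS context).
configure port 1/1/10 ethernet multicast-ingress ip-mc
exit all
# Use Layer 3 multicast replication toward receivers on this port (MVR case).
configure port 1/1/11 ethernet multicast-egress l3-forward
exit all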

5.4.9. R-VPLS Supported Functionality and Restrictions

The following R-VPLS functionality and restrictions apply to both access-uplink and network modes, unless explicitly specified otherwise.

  1. Static ARP cannot be configured with an IES IP interface that is associated with an R-VPLS, though static MAC can be configured in an R-VPLS service.
  2. In access-uplink mode, only static routes are supported. No dynamic routing protocols are supported.
  3. In network mode, both static routing and dynamic routing protocols are supported.
  4. Whenever a VPLS FIB entry is removed either due to user action, aging or mac-move, the corresponding ARP entry whose MAC address matches that of the MAC in the FIB is removed from the ARP cache.
  5. In access-uplink mode, R-VPLS is supported only in the base routing instance. Only IPv4 addressing support is available for IES interfaces associated with an R-VPLS service.
  6. In network mode, R-VPLS is supported in both the base routing instance (IES) and VPRN services. IPv4 addressing support is available for IES and VPRN IP interfaces associated with an R-VPLS service.
  7. In access-uplink mode, IPv6 addressing support is not available for IES interfaces associated with an R-VPLS service.
  8. In network mode, only on the 7210 SAS-Mxp, IPv6 addressing support is available for IES and VPRN interfaces associated with an R-VPLS service.
  9. In both network mode and access-uplink mode, multiple SAPs configured on the same port cannot be part of the same R-VPLS service. That is, a single service can only be configured with a single SAP on a specific port.
  10. Service MTU configuration is not supported in the R-VPLS service.
  11. In network mode, in a service with svc-sap-type set to any, a null SAP accepts only untagged packets. Received tagged packets are dropped.
  12. In network mode, MPLS protocols (for example, RSVP and LDP) cannot be enabled on an R-VPLS IP interface.
  13. In network mode, MPLS-TP cannot use an R-VPLS IP interface.
  14. In network mode, R-VPLS SAPs can be configured on an MC-LAG LAG.
  15. Service-based SHGs are not supported in an R-VPLS service.

5.5. Epipe Emulation Using Dot1q VLAN Range SAP in VPLS with G.8032

Note:

This feature is only supported on 7210 SAS-M (10G variant) and 7210 SAS-T operating in access-uplink mode.


Figure 71 shows how two business offices served by an operator are connected in a ring network deployment using dot1q range SAPs and a VPLS service with G.8032 for protection.

Figure 71:  Epipe Emulation in a Ring using VPLS with G.8032 

The following are the requirements to provide for an Epipe service connectivity between two business sites:

  1. Transport all the VLANs used by the internal enterprise network of the businesses.
  2. Support high availability for the service between the business sites by protecting against failure of the links or nodes in the ring.

One way to achieve connectivity between the two business sites in access-uplink/Layer 2 mode is to configure SAPs for each of the individual VLANs used in the enterprise network in a VPLS service and to use G.8032 for protection. With this approach, the number of VLANs that can be supported is limited by the number of SAPs supported on the platform.

The 7210 SAS platforms currently support the use of dot1q range SAPs only with Epipe services, in either network/MPLS mode or access-uplink/Layer 2 mode. Dot1q range SAPs allow operators to transport a range of VLANs by providing similar service treatment (that is, the forwarding decision along with the encapsulation used, QoS and ACL processing, accounting, and so on) to all the VLANs configured in the range. This simplifies service configuration and allows operators to scale the number of VLANs that can be handled by the node, addressing the need to support hundreds of VLANs using a single SAP or a small number of SAPs. When the MPLS mode is deployed in a ring topology, operators have the option of using different redundancy mechanisms, such as FRR, primary/secondary LSPs, or active/standby PWs, to improve Epipe service availability. No such option is available to protect an Epipe service in Layer 2 mode when deployed in a ring topology. Additionally, many operators prefer a G.8032-based ring protection mechanism, because a single control instance on the ring can potentially protect all the VPLS services on the ring.

This feature allows operators to deploy Epipe services in a ring topology when using Layer 2 mode, by emulating an Epipe service using a VPLS service with G.8032 protection, while at the same time providing the benefits of dot1q range SAPs. The user should ensure that the VPLS service is a point-to-point service. This is achieved by configuring a VPLS service with an access dot1q range SAP at the customer handoff on one node in the ring and an access dot1q range SAP at the customer handoff of the VPLS service on another node (that is, at the other end of the Epipe), such that there are only two endpoints for the service in the network.

On the node where the service originates, in addition to the access dot1q range SAP, the service needs to be configured with access-uplink SAPs on the two G.8032 ring ports. The G.8032 mechanism is used for breaking the loop in the ring and for VPLS service protection. The intermediate nodes on the ring need to use a VPLS service with access-uplink SAPs on the ring ports and use the same G.8032 instance for protection as the one used for service protection on the originating node.

5.5.1. Epipe Emulation Configuration Guidelines and Restrictions

The VPLS service with dot1q-range SAPs uses svc-sap-type of dot1q-range and supports limited functionality in comparison to a normal VPLS service. The following list provides more information about the feature, configuration guidelines, and restrictions:

  1. The user can define access dot1q range SAPs, which specify a group of VLANs that receive similar service treatment (that is, forwarding behavior, SAP ingress QoS treatment, and so on, similar to the behavior available in an Epipe service), and configure them in a VPLS service.
    1. On the node where the service originates, in addition to the access dot1q range SAP, the service should be configured with Q1.* SAPs on the two G.8032 ring ports. Either access or access-uplink Q1.* SAPs can be used, but access-uplink SAPs are recommended. The user cannot configure any other SAPs in the same VPLS service.
    2. No special configuration is required on intermediate nodes, that is, the ring nodes that do not originate or terminate the service. These nodes should be configured to provide a transit VPLS service, and that VPLS service must use the same G.8032 instance for protection as is used by the service on the originating and terminating nodes.
    3. The Epipe service on the 7210 SAS currently does not check whether the inner tag received on a Q1.* SAP is within the range of the configured VLANs. The VPLS service has the same behavior.
  2. Support for SAP Ingress QoS, Ingress and Egress ACLs, accounting, and other services, for dot1q range SAP configured in a VPLS service matches the support available in Epipe service.
  3. The G.8032 mechanism is used for loop detection in the ring network and for service protection. A separate VPLS service representing the G.8032 control instance must be configured, and the service state should be associated with this control instance.
    1. Use of dot1q range SAPs to provide service on the interconnection node, in a G.8032 major-ring/sub-ring deployment, when using the virtual channel, is not supported. This restriction is not applicable when the interconnection node in a G.8032 major-ring/sub-ring is configured without a virtual channel.
  4. mVPLS/xSTP support is available for use with Q1.* SAPs on the ring ports to break the loop. This is an add-on to the G.8032 support.
  5. Broadcast, Unknown Unicast and Multicast (BUM) traffic is flooded in the service.
  6. Learning is enabled on the service by default, to avoid the need to flood the service traffic out of one of the ring ports, after network MAC addresses are learned. The user has an option to disable learning per service. Learning enable/disable per SAP is not supported.
  7. MAC limiting is available per service. MAC limiting per SAP is not supported.
  8. CFM OAM is supported. UP MEPs can be configured on the dot1q range SAP in the service for fault management and performance management using the CFM/Y.1731 OAM tools.
    1. Only UP MEPs can be configured, and only on the dot1q VLAN range SAPs. CFM/Y.1731 tools can be used for troubleshooting and performance measurements. The user must pick a VLAN value from the range of VLANs configured for the dot1q range SAP using the config>eth-cfm>domain>association>bridge-identifier vlan CLI command and enable its use with the primary-vlan-enable command in the MEP CLI context. This VLAN value is used as the VLAN tag in the packet header for all CFM/Y.1731 messages sent in the context of the UP MEP.
    2. Down MEPs and MIPs are not allowed to be configured.
    3. Fault propagation is not supported with UP MEPs for dot1q range SAP in access-uplink mode.
  9. CFM support is not available for SAPs on the ring ports.
  10. IGMP snooping and MVR are not supported.