5. Label Distribution Protocol

This chapter provides information about enabling and configuring the Label Distribution Protocol (LDP).

5.1. Label Distribution Protocol

Label Distribution Protocol (LDP) is used to distribute labels in non-traffic-engineered applications. LDP allows routers to establish LSPs through a network by mapping network-layer routing information directly to data-link layer LSPs.

An LSP is defined by the set of labels from the ingress LER to the egress LER. LDP associates a Forwarding Equivalence Class (FEC) with each LSP it creates. A FEC is a collection of common actions associated with a class of packets. When an ingress LER assigns a label to a FEC, it must let other LSRs in the path know about the label. LDP helps to establish the LSP by providing a set of procedures that LSRs can use to distribute labels.

The FEC associated with an LSP specifies which packets are mapped to that LSP. An LSP is extended through a network as each LSR splices the incoming label for a FEC to the outgoing label assigned to the next hop for that FEC.

LDP allows an LSR to request a label from a downstream LSR so it can bind the label to a specific FEC. The downstream LSR responds to the request from the upstream LSR by sending the requested label.

LSRs can distribute a FEC label binding in response to an explicit request from another LSR. This is known as Downstream On Demand (DOD) label distribution. LSRs can also distribute label bindings to LSRs that have not explicitly requested them. This is called Downstream Unsolicited (DU) label distribution. For LDP, the 7705 SAR implements DU mode.

5.1.1. LDP and MPLS

LDP performs dynamic label distribution in MPLS environments. LDP operation begins with a hello discovery process to form an adjacency with an LDP peer in the network. LDP peers are two MPLS routers that use LDP to exchange label/FEC mapping information. An LDP session is created between LDP peers. Because LDP is bidirectional, a single LDP session allows each peer to learn the other's label mappings and to distribute its own label binding information.

LDP signaling works with the MPLS label manager to manage the relationships between labels and the corresponding FEC. For service-based FECs, LDP works in tandem with the Service Manager to identify the virtual leased lines (VLLs) and pseudowires (PWs) to signal.

An MPLS label identifies a set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling protocol (in this case LDP) and is allocated a label. The mapping between the label and the FEC is communicated to the forwarding plane. So that this processing can occur at high speed, optimized tables that enable fast access and packet identification are maintained in the forwarding plane.

When an unlabeled packet ingresses the 7705 SAR, classification policies associate it with a FEC, the appropriate label is imposed on the packet, and then the packet is forwarded. Other actions can also take place on a packet before it is forwarded, including imposing additional labels, other encapsulations, or learning actions. Once all actions associated with the packet are completed, the packet is forwarded.

When a labeled packet ingresses the router, the label or stack of labels indicates the set of actions associated with the FEC for that label or label stack. The actions are performed on the packet and then the packet is forwarded.

The LDP implementation provides support for DU, ordered control, and liberal label retention mode.

For LDP label advertisement, DU mode is supported. To prevent unusable label information from filling the uplink bandwidth, Ordered Label Distribution Control mode is supported: an LSR advertises a label mapping for a FEC only when it has received a label for that FEC from its next hop or is itself the egress for the FEC.

A PW/VLL label can be dynamically assigned by targeted LDP operations. Targeted LDP allows the inner labels (that is, the VLL labels) in the MPLS headers to be managed automatically. This makes it easier for operators to manage the VLL connections. There is, however, additional signaling and processing overhead associated with this targeted LDP dynamic label assignment.

5.1.1.1. BFD for T-LDP

BFD is a simple protocol for detecting failures in a network. BFD uses a “hello” mechanism that sends control messages periodically to the far end and receives periodic control messages from the far end. BFD is implemented in asynchronous mode only, meaning that neither end responds to control messages; rather, each end transmits control messages at its configured interval.

A T-LDP session is a session between peers that are either directly or non-directly connected, and it requires that adjacencies be created between the two peers. BFD for T-LDP sessions allows tracking of failures of nodes that are not directly connected. BFD timers must be configured under the system router interface context before being enabled under T-LDP.

BFD tracking of an LDP session associated with a T-LDP adjacency allows for faster detection of the status of the session by registering the loopback address of the peer as the transport address.
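
The following minimal sketch shows this arrangement: BFD timers configured under the system router interface, and BFD then enabled for a targeted peer. The peer address, timer values, and the targeted-session context shown here are illustrative assumptions; refer to the LDP and Router Configuration command references for the exact syntax.

     config>router
         interface "system"
             bfd 100 receive 100 multiplier 3
         exit
     config>router>ldp
         targeted-session
             peer 10.0.0.2
                 bfd-enable
             exit
         exit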

5.1.2. LDP Architecture

LDP comprises a few processes that handle protocol PDU transmission, timers, and the protocol state machine. The number of processes is kept to a minimum to simplify the architecture and to allow for scalability. Scheduling within each process prevents starvation of any particular LDP session, while buffering alleviates TCP-related congestion issues.

The LDP subsystems and their relationships to other subsystems are illustrated in Figure 22. This illustration shows the interaction of the LDP subsystem with other subsystems, including memory management, label management, service management, SNMP, interface management, and RTM. In addition, debugging capabilities are provided through the logger.

Communication within LDP tasks is typically done by interprocess communication through the event queue, as well as through updates to the various data structures. The following list describes the primary data structures that LDP maintains:

  1. FEC/label database — this database contains all the FEC-to-label mappings, both sent and received, including address FECs (prefixes and host addresses) and service FECs (L2 VLLs)
  2. timer database — this database contains all the timers for maintaining sessions and adjacencies
  3. session database — this database contains all the session and adjacency records, and serves as a repository for the LDP MIB objects

5.1.3. LDP Subsystem Interrelationships

Figure 22 shows the relationships between LDP subsystems and other 7705 SAR subsystems. The following sections describe how the subsystems work to provide services.

5.1.3.1. Memory Manager and LDP

LDP does not use any memory until it is instantiated. At instantiation, it preallocates a fixed amount of memory so that initial startup actions can be performed. Memory allocation for LDP comes out of a pool reserved for LDP that can grow dynamically as needed.

Fragmentation is minimized by allocating memory in large chunks and managing the memory internally to LDP. When LDP is shut down, it releases all memory allocated to it.

5.1.3.2. Label Manager

LDP assumes that the label manager is up and running. LDP will abort initialization if the label manager is not running. The label manager is initialized at system boot-up; hence anything that causes it to fail will likely indicate that the system is not functional. The 7705 SAR uses a label range from 28 672 (28K) to 131 071 (128K-1) to allocate all dynamic labels, including VC labels.

Figure 22:  LDP Subsystem Interrelationships 

5.1.3.3. LDP Configuration

The 7705 SAR uses a single consistent interface to configure all protocols and services. CLI commands are translated to SNMP requests and are handled through an agent-LDP interface. LDP can be instantiated or deleted through SNMP. Also, targeted LDP sessions can be set up to specific endpoints. Targeted session parameters are configurable.

5.1.3.4. Logger

LDP uses the logger interface to generate debug information relating to session setup and teardown, LDP events, label exchanges, and packet dumps. Per-session tracing can be performed. Refer to the 7705 SAR System Management Guide for logger configuration information.

5.1.3.5. Service Manager

All interaction occurs between LDP and the service manager, since LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.

5.1.4. Execution Flow

LDP activity in the 7705 SAR is limited to service-related signaling. Therefore, the configurable parameters are restricted to system-wide parameters, such as hello and keepalive timeouts.

5.1.4.1. Initialization

MPLS must be enabled when LDP is initialized. LDP makes sure that the various prerequisites are met, such as ensuring that the system IP interface and the label manager are operational, and ensuring that there is memory available. It then allocates a pool of memory to itself and initializes its databases.

5.1.4.2. Session Lifetime

In order for a targeted LDP session to be established, an adjacency has to be created. The LDP extended discovery mechanism requires hello messages to be exchanged between two peers for session establishment. Once the adjacency is established, session setup is attempted.

5.1.4.2.1. Adjacency Establishment

In the 7705 SAR, adjacency management is done through the establishment of a Service Destination Point (SDP) object, which is a service entity in the Nokia service model.

The service model uses logical entities that interact to provide a service. The service model requires the service provider to create and configure four main entities:

  1. customers
  2. services
  3. Service Access Points (SAPs) on local 7705 SAR routers
  4. SDPs that connect to one or more remote 7705 SAR routers or 77x0 SR routers

An SDP is the network-side termination point for a tunnel to a remote 7705 SAR or 77x0 SR router. An SDP defines a local entity that includes the system IP address of the remote 7705 SAR routers and 77x0 SR routers, and a path type.

Each SDP comprises:

  1. the SDP ID
  2. the transport encapsulation type (MPLS)
  3. the far-end system IP address

If the SDP is identified as using LDP signaling, then an LDP extended hello adjacency is attempted.

If another SDP is created to the same remote destination and if LDP signaling is enabled, no further action is taken, since only one adjacency and one LDP session exist between the pair of nodes.

An SDP is a unidirectional object, so a pair of SDPs pointing at each other must be configured in order for an LDP adjacency to be established. Once an adjacency is established, it is maintained through periodic hello messages.

5.1.4.2.2. Session Establishment

When the LDP adjacency is established, the session setup follows as per the LDP specification. Initialization and keepalive messages complete the session setup, followed by address messages to exchange all interface IP addresses. Periodic keepalives or other session messages maintain the session liveness.

Since TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and reattempted as the back-pressure eases.

5.1.5. Label Exchange

Label exchange is initiated by the service manager. When an SDP is attached to a service (that is, once the service gets a transport tunnel), a message is sent from the service manager to LDP. This causes a label mapping message to be sent. Additionally, when the SDP binding is removed from the service, the VC label is withdrawn. The peer must send a label release to confirm that the label is not in use.

5.1.5.1. Implicit Null Label

The implicit null label option enables an eLER to receive MPLS packets from the previous-hop LSR without the outer LSP label.

The implicit null label is signaled by the eLER to the previous-hop LSR during FEC signaling by the LDP control protocol. When the implicit null label is signaled to the LSR, it pops the outer label before sending the MPLS packet to the eLER; this is known as penultimate hop popping.

The implicit null label option can be enabled for all LDP FECs for which the router is the eLER by using the implicit-null-label command in the config>router>ldp context.

If the implicit null configuration is changed, LDP withdraws all the FECs and readvertises them using the new label value.
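
As a minimal sketch, enabling the option on an eLER consists of a single command in the config>router>ldp context named above:

     config>router>ldp>implicit-null-label

Because changing this setting causes LDP to withdraw and readvertise all FECs, it is typically applied during a maintenance window.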

5.1.5.2. Other Reasons for Label Actions

Label actions can also occur for the following reasons:

  1. MTU changes — LDP withdraws the previously assigned label and resignals the FEC with the new Maximum Transmission Unit (MTU) in the interface parameter
  2. clear labels — when a service manager command is issued to clear the labels, the labels are withdrawn and new label mappings are issued
  3. SDP down — when an SDP goes administratively down, the VC label associated with that SDP for each service is withdrawn
  4. memory allocation failure — if there is no memory to store a received label, the received label is released
  5. VC type unsupported — when an unsupported VC type is received, the received label is released

5.1.5.3. Cleanup

LDP closes all sockets, frees all memory, and shuts down all its tasks when it is deleted, so that it uses no memory (0 bytes) when it is not running.

5.1.6. LDP Filters

The 7705 SAR supports both inbound and outbound LDP label binding filtering.

Inbound filtering (import policy) allows the user to configure a policy to control the label bindings an LSR (Label Switch Router) accepts from its peers.

Import policy label bindings can be filtered based on the following:

  1. neighbor — match on bindings received from the specified peer
  2. prefix-list — match on bindings with the specified prefix/prefixes

The default import behavior is to accept all FECs received from peers.

Outbound filtering (export policy) allows the user to configure a policy to control the set of LDP label bindings advertised by the LSR (Label Switch Router).

Because the default behavior is to originate label bindings for the system IP address only, when a non-default loopback address is used as the transport address, the 7705 SAR will not advertise the loopback FEC automatically. With LDP export policy, the user is now able to explicitly export the loopback address in order to advertise the loopback address label and allow the node to be reached by other network elements.

Export policy label bindings can be filtered based on the following:

  1. all — all local subnets by specifying “direct” as the match protocol
  2. prefix-list — match on bindings with the specified prefix/prefixes
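
The following sketch illustrates one possible export policy that advertises a non-default loopback address. The policy name, prefix-list name, prefix value, and the export command used to attach the policy to LDP are illustrative assumptions rather than prescribed syntax.

     config>router>policy-options
         begin
         prefix-list "loopbacks"
             prefix 10.10.10.1/32 exact
         exit
         policy-statement "ldp-export-loopback"
             entry 10
                 from
                     prefix-list "loopbacks"
                 exit
                 action accept
                 exit
             exit
         exit
         commit
     config>router>ldp
         export "ldp-export-loopback"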

Note:

In order for the 7705 SAR to consider a received label to be active, there must be an exact match to the FEC advertised together with the label found in the routing table, or a longest prefix match (if the aggregate-prefix-match option is enabled; see Multi-area and Multi-instance Extensions to LDP). This can be achieved by configuring a static route pointing to the prefix encoded in the FEC.

5.1.7. LDP FEC Statistics

LDP FEC statistics allow operators to monitor traffic being forwarded between any two PE routers and for all services using an LDP SDP. LDP FEC statistics are available for the egress data path at the ingress LER and LSR. Because an ingress LER is also potentially an LSR for an LDP FEC, combined egress data path statistics are provided whenever applicable. For more information, see RSVP LSP and LDP FEC Statistics.

5.1.8. Multi-area and Multi-instance Extensions to LDP

When a network has two or more IGP areas, or instances, inter-area LSPs are required for MPLS connectivity between the PE devices that are located in the distinct IGP areas. In order to extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036, LDP Specification, requires that all /32 prefixes of PEs be leaked between the areas or instances. IGP route leaking is the distribution of the PE loopback addresses across area boundaries. An exact match of the prefix in the routing table (RIB) is required to install the prefix binding in the FIB and set up the LSP.

This behavior is the default behavior for the 7705 SAR when it is configured as an Area Border Router (ABR). However, exact prefix matching causes performance issues for the convergence of IGP on routers deployed in networks where the number of PE nodes scales to thousands of nodes. Exact prefix matching requires the RIB and FIB to contain the IP addresses maintained by every LSR in the domain and requires redistribution of a large number of addresses by the ABRs. Security is a potential issue as well, as host routes leaked between areas can be used in DoS and DDoS attacks and spoofing attacks.

To avoid these performance and security issues, the 7705 SAR can be configured for an optional behavior in which LDP installs a prefix binding in the LDP FIB by performing a longest prefix match with an aggregate prefix in the routing table (RIB). This behavior is described in RFC 5283, LDP Extension for Inter-Area Label Switched Paths. The LDP prefix binding continues to be advertised on a per-individual /32 prefix basis.
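
As a sketch, the longest prefix match behavior is enabled with the aggregate-prefix-match option; the LDP context shown is an assumption, so refer to the LDP Command Reference for the exact form.

     config>router>ldp>aggregate-prefix-match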

When the longest prefix match option is enabled and an LSR receives a FEC-label binding from an LDP neighbor for a prefix-address FEC element, FEC1, it installs the binding in the LDP FIB if:

  1. the routing table (RIB) contains an entry that matches FEC1; the match can be either a longest IP match of the FEC prefix or an exact match
  2. the advertising LDP neighbor is the next hop to reach FEC1

When the FEC-label binding has been installed in the LDP FIB, LDP programs an NHLFE entry in the egress data path to forward packets to FEC1. LDP also advertises a new FEC-label binding for FEC1 to all its LDP neighbors.

When a new prefix appears in the RIB, LDP checks the LDP FIB to determine if this prefix is a closer match for any of the installed FEC elements. If a closer match is found, this may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed.

When a prefix is removed from the RIB, LDP checks the LDP FIB for all FEC elements that matched this prefix to determine if another match exists in the routing table. If another match exists, LDP must use it. This may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed. If another match does not exist, the LSR removes the FEC binding and sends a label withdraw message to its LDP neighbors.

If the next hop for a routing prefix changes, LDP updates the LDP FIB entry for the FEC elements that matched this prefix. It also updates the NHLFE entry for the FEC elements.

5.1.9. ECMP Support for LDP

Equal-Cost Multipath Protocol (ECMP) support for LDP performs load balancing for services that use LDP-based LSPs as transport tunnels, by having multiple equal-cost outgoing next hops for an IP prefix.

ECMP for LDP load-balances traffic across all equal-cost links based on the output of the hashing algorithm using the allowed inputs, based on the service type. For detailed information, refer to “LAG and ECMP Hashing” in the 7705 SAR Interface Configuration Guide.

There is only one next-hop peer per network link. To protect against a network link or next-hop peer failure, multiple network links can be configured, connecting either to different next-hop peers or to the same peer. For example, an MLPPP link and an Ethernet link can be connected to two peers, or two Ethernet links can be connected to the same peer. ECMP occurs when the cost of each link to a target IP prefix is equal.

The 7705 SAR uses a liberal label retention mode, which retains all labels for an IP prefix from all next-hop peers. A 7705 SAR acting as an LSR load-balances the MPLS traffic over multiple links using a hashing algorithm.

The 7705 SAR supports the following optional fields as hash inputs and supports profiles for various combinations:

  1. hashing algorithms
    1. label-only option: hashing is done on the MPLS label stack, up to a maximum of 10 labels (default)
    2. label-IP option: hashing is done on the MPLS label stack and the IPv4 source and destination IP address if an IPv4 header is present after the MPLS labels
    3. Layer 4 header (source or destination UDP or TCP port number) and TEID: hashing is done on the MPLS label stack, the IPv4 source and destination IP address (if present), then on the Layer 4 source and destination UDP or TCP port fields (if present) and the TEID in the GTP header (if present)
  2. label stack profile options on significance of the bottom-of-stack label (VC label)
    1. profile 1: favors better load balancing for pseudowires when the VC label distribution is contiguous (default)
    2. profile 2: similar to profile 1 where the VC labels are contiguous, but provides an alternate distribution
    3. profile 3: all labels have equal influence in hash key generation
  3. ingress LAG port at the LSR (default is disabled)
    The use-ingress-port option, when enabled, specifies that the ingress port will be used by the hashing algorithm at the LSR. This option should be enabled for ingress LAG ports because packets with the same label stack can arrive on all ports of a LAG interface. In this case, using the ingress port in the hashing algorithm will result in better egress load balancing, especially for pseudowires.
    The option should be disabled for LDP ECMP so that the ingress port is not used by the hashing algorithm. For ingress LDP ECMP, if the ingress port is used by the hashing algorithm, the hash distribution could be biased, especially for pseudowires.
  4. system IP address – hashing on the system IP address is enabled and disabled at the system level only

All of the above options can be configured with the lsr-load-balancing command, with the exception of the system IP address, which is configured with the system-ip-load-balancing command.
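
For example, the following sketch shows hashing configured at the system level and overridden on one ingress network interface, using the lbl-ip and lbl-only options discussed later in this section. The interface name is a placeholder, and the exact contexts are assumptions based on the guides referenced below.

     config>system
         lsr-load-balancing lbl-ip
         system-ip-load-balancing
     config>router
         interface "to-lsr-east"
             lsr-load-balancing lbl-only
         exit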

Note:

The global IF index is no longer a hash input for LSR ECMP load balancing. It has been replaced with the use-ingress-port configurable option in the lsr-load-balancing command. As well, the default treatment of the MPLS label stack has changed to focus on the bottom-of-stack label (VC label). In previous releases, all labels had equal influence.

LSR load balancing can be configured at the system level or interface level. Configuration at the interface level overrides the system-level settings for the specific interface. Configuration must be done on the ingress network interface (that is, the interface on the LDP LSR node that the packet is received on).

Configuration of load balancing at the interface level provides some control to the user; for example, the label-IP option can be disabled on a specific interface if labeled packets received on the interface include non-IP packets that can be confused by the hash routine for IP packets. Disabling the label-IP option can be used in cases where the first nibble of a non-IP packet is a 4, which would result in the packet being hashed incorrectly if the label-IP option was enabled.

If ECMP is not enabled, the label from only one of the next-hop peers is selected and installed in the forwarding plane. In this case, the algorithm used to distribute the traffic flow looks up the route information, and selects the network link with the lowest IP address. If the selected network link or next-hop peer fails, another next-hop peer is selected, and LDP reprograms the forwarding plane to use the label sent by the newly selected peer.

ECMP is supported on all Ethernet ports in network mode, and is also supported on the 4-port OC3/STM1 Clear Channel Adapter card when it is configured for POS (ppp-auto) encapsulation and network mode.

For information on configuring the 7705 SAR for LSR ECMP, refer to the lsr-load-balancing and system-ip-load-balancing commands in the 7705 SAR Basic System Configuration Guide, “System Information and General Commands” and the lsr-load-balancing command in the 7705 SAR Router Configuration Guide, “Router Interface Commands”.

For information on LDP treetrace commands for tracing ECMP paths, refer to the 7705 SAR OAM and Diagnostics Guide.

Note:

LDP treetrace works best with label-IP hashing (lbl-ip) enabled, rather than label-only (lbl-only) hashing. These options are set with the lsr-load-balancing command.

Note:

  1. Because of the built-in timeout for dynamic ARP entries, the MAC address of the remote peer must be renewed periodically; the flow of IP traffic resets the timers back to their maximum values. In the case of LDP ECMP, one link could be transporting user MPLS (pseudowire) traffic while the LDP session uses a different equal-cost link. For LDP using ECMP and for static LSPs, it is therefore important to ensure that the remote MAC address is learned and does not expire. Configuring static ARP entries or running continuous IP traffic ensures that the remote MAC address is always known. Running BFD for fast detection of Layer 2 faults or running any OAM tools with SAA ensures that the learned MAC addresses do not expire.
  2. ARP entries are refreshed by static ARP, BFD, SAA, OSPF, IS-IS, or BGP.
  3. For information on configuring static ARP and running BFD, refer to the 7705 SAR Router Configuration Guide.

5.1.9.1. Label Operations

If an LSR is the ingress router for a given IP prefix, LDP programs a PUSH operation for the prefix in the IOM. This creates an LSP ID to the Next Hop Label Forwarding Entry (NHLFE) mapping (LTN mapping) and an LDP tunnel entry in the forwarding plane. LDP will also inform the Tunnel Table Manager (TTM) about this tunnel. Both the LSP ID to NHLFE (LTN) entry and the tunnel entry will have an NHLFE for the label mapping that the LSR received from each of its next-hop peers.

If the LSR is to function as a transit router for a given IP prefix, LDP will program a SWAP operation for the prefix in the IOM. This involves creating an Incoming Label Map (ILM) entry in the forwarding plane. The ILM entry might need to map an incoming label to multiple NHLFEs.

If an LSR is an egress router for a given IP prefix, LDP will program a POP entry in the IOM. This too will result in an ILM entry being created in the forwarding plane, but with no NHLFEs.

When unlabeled packets arrive at the ingress LER, the forwarding plane consults the LTN entry and uses a hashing algorithm to map the packet to one of the NHLFEs (PUSH label) and forward the packet to the corresponding next-hop peer. For a labeled packet arriving at a transit or egress LSR, the forwarding plane consults the ILM entry and either uses a hashing algorithm to map it to one of the NHLFEs if they exist (SWAP label) or routes the packet if there are no NHLFEs (POP label).

5.1.10. Graceful Restart Helper

Graceful Restart (GR) is part of the LDP handshake process (that is, the LDP peering session initialization) and needs to be supported by both peers. GR provides a mechanism that allows the peers to cope with a service interruption due to a CSM switchover, which is a period of time when the standby CSM is not capable of synchronizing the states of the LDP sessions and labels being advertised and received.

Graceful Restart Helper (GR-Helper) decouples the data plane from the control plane so that if the control plane is not responding (that is, there is no LDP message exchange between peers), then the data plane can still forward frames based on the last known (advertised) labels.

Because the 7705 SAR supports non-stop services / high-availability for LDP (and MPLS), the full implementation of GR is not needed. However, GR-Helper is implemented on the 7705 SAR to support non-high-availability devices. With GR-Helper, if an LDP peer of the 7705 SAR requests GR during the LDP handshake, the 7705 SAR agrees to it but does not request GR. For the duration of the LDP session, if the 7705 SAR LDP peer fails, the 7705 SAR continues to forward MPLS packets based on the last advertised labels and will not declare the peer dead until the GR timer expires.

5.1.11. Graceful Handling of Resource Exhaustion

Graceful handling of resource exhaustion enhances the behavior of LDP when a data path or a CSM resource required for the resolution of a FEC is exhausted. In prior releases, the entire LDP protocol was shut down, causing all LDP peering sessions to be torn down and therefore impacting all peers. The user was required to fix the issue that caused the FEC scaling to be exceeded, and to restart the LDP session by executing the no shutdown CLI command. With graceful handling of resource exhaustion, only the responsible session or sessions are shut down, which impacts only the appropriate peer or peers.

Graceful handling of resources implements a capability by which the LDP interface to the peer, or the targeted peer in the case of a targeted LDP (T-LDP) session, is shut down.

If LDP tries to resolve a FEC over a link or a T-LDP session and runs out of data path or CSM resources, LDP brings down that interface or targeted peer, which brings down the Hello adjacency over that interface to all linked LDP peers or to the targeted peer. The interface is brought down for the LDP context only and is still available to other applications such as IP forwarding and RSVP LSP forwarding.

After taking action to free up resources, the user must manually perform a no shutdown command on the interface or the targeted peer to bring it back into operation. This re-establishes the Hello adjacency and resumes the resolution of FECs over the interface or to the targeted peer.

5.1.12. LDP Support for Unnumbered Interfaces

Unnumbered interfaces are point-to-point interfaces that are not explicitly configured with a dedicated IP address and subnet; instead, they borrow (or link to) an IP address from another interface on the system (the system IP address, another loopback interface, or any other numbered interface) and use it as the source IP address for packets originating from the interface. For more information on support for unnumbered interfaces, refer to the 7705 SAR Router Configuration Guide, “Unnumbered Interfaces”.

This feature allows LDP to establish a Hello adjacency and to resolve unicast FECs over unnumbered LDP interfaces.

For example, LSR A and LSR B are the two endpoints of an unnumbered link. These interfaces are identified on each system by their unique link local identifier. The combination of router ID and link local identifier uniquely identifies the interface in IS-IS throughout the network.

A borrowed IP address is also assigned to the interface to be used as the source address of IP packets that must originate from the interface. The borrowed IP address defaults to the system interface address, A and B in this example. The borrowed IP address can be taken from any IP interface by using the following CLI command: config>router>interface>unnumbered {ip-int-name | ip-address}.
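
For example, an unnumbered interface borrowing the system address could be configured as follows; the interface name and port are placeholders.

     config>router
         interface "to-lsr-b"
             port 1/1/1
             unnumbered "system"
         exit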

The fec-originate command, which defines how to originate a FEC for egress and non-egress LSRs, includes a parameter to specify the name of the interface that the label for the originated FEC is swapped to. For an unnumbered interface, this parameter is mandatory because an unnumbered interface does not have its own IP address.

When the unnumbered interface is added into LDP, the following behavior occurs.

For link LDP (L-LDP) sessions:

  1. The Hello adjacency is brought up using a link Hello packet with the source IP address set to the interface borrowed IP address and a destination IP address set to 224.0.0.2.
  2. Hello packets with the same source IP address should be accepted when received over parallel unnumbered interfaces from the same peer LSR ID. The corresponding Hello adjacencies are associated with a single LDP session.
  3. The transport address for the TCP connection, which is encoded in the Hello packet, is always set to the LSR ID of the node whether or not the interface option was enabled using the config>router>ldp>interface-parameters>interface>transport-address command.
  4. The local-lsr-id option can be configured on the interface and the value of the LSR ID can be changed to either the local interface or to some other interface name. If the local interface is selected or the provided interface name corresponds to an unnumbered IP interface, the unnumbered interface borrowed IP address is used as the LSR ID. In all cases, the transport address for the LDP session is updated to the new LSR ID value, but the link Hello packets continue to use the interface borrowed IP address as the source IP address.
  5. The LSR with the highest transport address, the LSR ID in this case, bootstraps the TCP connection and LDP session.
  6. The source and destination IP addresses of LDP packets are the transport addresses, that is, the LDP LSR IDs of the LSRs at the endpoints of the link (A and B in the example).

For targeted LDP (T-LDP) sessions:

  1. The source and destination addresses of the targeted Hello packet are the LDP LSR IDs of systems A and B.
  2. The local-lsr-id option can be configured on the interface for the targeted session and the value of the LSR ID can be changed to either the local interface or to some other interface name. If the local interface is selected or the provided interface name corresponds to an unnumbered IP interface, the unnumbered interface borrowed IP address is used as the LSR ID. In all cases, the transport address for the LDP session and the source IP address of the targeted Hello message are updated to the new LSR ID value.
  3. The LSR with the highest transport address, the LSR ID in this case, bootstraps the TCP connection and LDP session.
  4. The source and destination IP addresses of LDP packets are the transport addresses, that is, the LDP LSR IDs of the LSRs at the endpoints of the link (A and B in the example).

FEC resolution:

  1. LDP advertises/withdraws unnumbered interfaces using the Address/Address-Withdraw message. The borrowed IP address of the interface is used.
  2. A FEC can be resolved to an unnumbered interface in the same way as it is resolved to a numbered interface. The outgoing interface and next hop are looked up in the RTM cache. The next hop is the router ID and link identifier of the interface at the peer LSR.
  3. LDP FEC ECMP next hops over a mix of unnumbered and numbered interfaces are supported.
  4. All LDP FEC types are supported.
  5. The fec-originate command is supported when the next hop is over an unnumbered interface.

All LDP features supported for numbered IP interfaces are supported for unnumbered interfaces, with the following exceptions:

  1. BFD is not supported on unnumbered IP interfaces
  2. LDP FRR is not triggered by a BFD session timeout, only by a physical failure or the local interface going down
  3. unnumbered IP interfaces cannot be added into LDP global and peer prefix policies

The unnumbered interface feature also extends the support of LSP ping and LSP traceroute to test an LDP unicast FEC that is resolved over an unnumbered LDP interface.

5.1.13. LDP Fast Reroute (FRR)

LDP Fast Reroute (FRR) provides local protection for an LDP FEC by precalculating and downloading a primary and a backup NHLFE for the FEC to the LDP FIB. The primary NHLFE corresponds to the label of the FEC received from the primary next hop as per the standard LDP resolution of the FEC prefix in the RTM. The backup NHLFE corresponds to the label received for the same FEC from a Loop-Free Alternate (LFA) next hop.

LDP FRR protects against single link or single node failure. SRLG failure protection is not supported.

Without FRR, when a local link or node fails, the router must signal the failure to its neighbors via the IGP providing the routing (OSPF or IS-IS), recalculate primary next-hop NHLFEs for all affected FECs, and update the FIB. Until the new primary next hops are installed in the FIB, any traffic destined for the affected FECs is discarded. This process can take hundreds of milliseconds.

LDP FRR improves convergence in case of a local link or node failure in the network, by using the label-FEC binding received from the LFA next hop to forward traffic for a given prefix as soon as the primary next hop is not available. This means that a router resumes forwarding LDP packets to a destination prefix using the backup path without waiting for the routing convergence. Convergence times should be similar to RSVP-TE FRR, in the tens of milliseconds.

OSPF or IS-IS must perform the Shortest Path First (SPF) calculation of an LFA next hop, as well as the primary next hop, for all prefixes used by LDP to resolve FECs. The IGP also populates both routes in the RTM.

When LDP FRR is enabled and an LFA backup next hop exists for the FEC prefix in the RTM, or for the longest prefix the FEC prefix matches to when the aggregate-prefix-match option is enabled, LDP will program the data path with both a primary NHLFE and a backup NHLFE for each next hop of the FEC.

In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the standard FRR failover procedures, which are also supported for RSVP-TE FRR.

When any of the following events occurs, the backup NHLFE is enabled for each affected FEC next hop:

  1. an LDP interface goes operationally down or is administratively shut down
    In this case, LDP sends a neighbor/next hop down message to each LDP peer with which it has an adjacency over the interface.
  2. an LDP session to a peer goes down because the Hello timer or keepalive timer has expired over an interface
    In this case, LDP sends a neighbor/next hop down message to the affected peer.
  3. the TCP connection used by a link LDP session to a peer goes down
    In this case, LDP sends a neighbor/next hop down message to the affected peer.

Refer to RFC 5286, Basic Specification for IP Fast Reroute: Loop-Free Alternates, for more information on LFAs.

5.1.13.1. ECMP vs FRR

If ECMP is enabled, which provides multiple primary next hops for a prefix, LDP FRR is not used. That is, the LFA next hops are not populated in the RTM and the ECMP paths are used instead.

5.1.13.2. IGP Shortcuts (RSVP-TE Tunnels)

IGP shortcuts are an MPLS functionality where LSPs are treated like physical links within IGPs; that is, LSPs can be used for next-hop reachability. If an RSVP-TE LSP is used as a shortcut by OSPF or IS-IS, it is included in the SPF calculation as a point-to-point link for both primary and LFA next hops. It can also be advertised to neighbors so that the neighboring nodes can also use the links to reach a destination via the advertised next hop.

IGP shortcuts can be used to simplify remote LFA support and simplify the number of LSPs required in a ring topology.

When both IGP shortcuts and LFA are enabled under OSPF or IS-IS, and LDP FRR is also enabled, the following applies:

  1. a FEC that is resolved to a direct primary next hop can be backed up by a tunneled LFA next hop
  2. a FEC that is resolved to a tunneled primary next hop will not have an LFA next hop; it relies on RSVP-TE FRR for protection

5.1.13.3. LDP FRR Configuration

To configure LDP FRR, LFA calculation by the SPF algorithm must first be enabled under the OSPF or IS-IS protocol level with the command:

     config>router>ospf>loopfree-alternate

     or

     config>router>ospf3>loopfree-alternate

     or

     config>router>isis>loopfree-alternate

Next, LDP must be enabled to use the LFA next hop with the command config>router>ldp>fast-reroute.

If IGP shortcuts are used, they must be enabled under the OSPF or IS-IS routing protocol. As well, they must be enabled under the MPLS LSP context, using the command config>router>mpls>lsp>igp-shortcut.
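
Putting these steps together for an IS-IS-routed node, the enabling commands described above are:

     config>router>isis>loopfree-alternate

     config>router>ldp>fast-reroute

     config>router>mpls>lsp>igp-shortcut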

For information on LFA and IGP shortcut support for OSPF and IS-IS, refer to the 7705 SAR Routing Protocols Guide, “LDP and IP Fast Reroute for OSPF Prefixes” and “LDP and IP Fast Reroute for IS-IS Prefixes”.

Both LDP FRR and IP FRR are supported; for information on IP FRR, refer to the 7705 SAR Router Configuration Guide, “IP Fast Reroute (FRR)”.

5.1.14. LDP-to-Segment Routing Stitching for IPv4 /32 Prefixes (IS-IS)

This feature provides stitching between an LDP FEC and an SR node SID route for the same IPv4 /32 IS-IS prefix by allowing the export of SR tunnels from the Tunnel Table Manager (TTM) to LDP (IGP). In the LDP-to-SR data path direction, the LDP tunnel table route export policy supports the exporting of SR tunnels from the TTM to LDP.

A route policy option is configured to support LDP-to-SR stitching using the config>router>policy-options context. Refer to the 7705 SAR Router Configuration Guide, “Configuring LDP-to-Segment Routing Stitching Policies”, for a configuration example and to “Route Policy Command Reference” for information on the commands that are used.

After the route policy option is configured, the SR tunnels are exported from the TTM into LDP (IGP) using the config>router>ldp>export-tunnel-table command. See LDP Command Reference for more information on this command.

When configuring a route policy option, the user can restrict the exporting of SR tunnels from the TTM to LDP from a specific prefix list by excluding the prefix from the list.

The user can also restrict the exporting of SR tunnels from the TTM to a specific IS-IS IGP instance by specifying the instance ID in the from protocol statement. The from protocol statement is valid only when the protocol value is isis. Policy entries with any other protocol value are ignored when the route policy is applied. If the user configures multiple from protocol statements in the same policy or does not include the from protocol statement but adds a default-action of accept, then LDP routing uses the lowest instance ID in the IS-IS protocol to select the SR tunnel.
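
As an illustrative sketch, a route policy that exports IS-IS SR tunnels from the TTM to LDP, restricted to one IGP instance, and its application with the export-tunnel-table command might look as follows. The policy name and instance ID are assumptions, and the exact form of the from protocol statement may vary by release.

     config>router>policy-options
         begin
         policy-statement "export-sr-to-ldp"
             entry 10
                 from
                     protocol isis instance 1
                 exit
                 action accept
                 exit
             exit
         exit
         commit
     config>router>ldp
         export-tunnel-table "export-sr-to-ldp"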

When the routing policy is enabled, LDP checks the SR tunnel entries in the TTM. Whenever an LDP FEC primary next hop cannot be resolved using an RTM route and an SR tunnel of type isis to the same destination IPv4 /32 prefix matches an entry in the export policy, LDP programs an LDP ILM and stitches it to the SR node SID tunnel endpoint. LDP then originates a FEC for the prefix and redistributes it to its LDP peer. When an LDP FEC is stitched to an SR tunnel, forwarded packets benefit from the protection of the LFA/remote LFA or TI-LFA backup next hop of the SR tunnel.

When resolving a FEC, LDP attempts a resolution in the RTM before attempting a resolution in the TTM, when both are available. That is, a swapping operation from the LDP ILM to an LDP NHLFE is attempted before stitching the LDP ILM to an SR tunnel endpoint.

In the SR-to-LDP data path direction, the SR mapping server provides a global policy for the prefixes corresponding to the LDP FECs the SR needs to stitch to. Therefore, a tunnel table export policy is not used. The user enables the exporting of the LDP tunnels for FEC prefixes advertised by the mapping server to an IGP instance using the command config>router>isis>segment-routing>export-tunnel-table ldp. Refer to the 7705 SAR Routing Protocols Guide, “IS-IS Command Reference”, for more information on this command.

When the export-tunnel-table ldp command is enabled, the IGP monitors the LDP tunnel entries in the TTM. Whenever an IPv4 /32 LDP tunnel destination matches a prefix for which the IGP received a prefix SID sub-TLV from the mapping server, the IGP instructs the SR module to program the SR ILM and to stitch it to the LDP tunnel endpoint. The SR ILM can stitch to an LDP FEC resolved over the LDP link. When an SR tunnel is stitched to an LDP FEC, forwarded packets benefit from the protection of the LFA backup next hop of the LDP FEC.

When resolving a node SID, the IGP attempts a resolution of the prefix SID received in an IP reachability TLV before attempting a resolution of a prefix SID received via the mapping server, when both are available. That is, a swapping operation of the SR ILM to an SR NHLFE is attempted before stitching it to an LDP tunnel endpoint. Refer to the 7705 SAR Routing Protocols Guide, “Prefix SID Resolution for a Segment Routing Mapping Server”, for more information about prefix SID resolution.

It is recommended that the bfd-enable option be enabled on the interfaces for both LDP and IGP contexts to speed up the failure detection and the activation of the LFA/remote LFA backup next hop in either direction. This applies particularly for remote failures. For the LDP context, the config>router>ldp>interface-parameters>interface>bfd-enable command string is used; see LDP Commands. For the IGP context, the config>router>isis>interface>bfd-enable command string is used; refer to the 7705 SAR Routing Protocols Guide, “IS-IS Command Reference”.
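
For example, using the command strings above with a placeholder interface name:

     config>router>ldp>interface-parameters
         interface "to-r2"
             bfd-enable
         exit
     config>router>isis
         interface "to-r2"
             bfd-enable
         exit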

The sections that follow describe how stitching is performed in the LDP-to-SR and SR-to-LDP data path directions.

5.1.14.1. Stitching in the LDP-to-SR Direction

Stitching in the data plane in the LDP-to-SR direction is based on the LDP module monitoring the TTM for an SR tunnel of a prefix matching an entry in the LDP TTM export policy.

Figure 23:  Stitching in the LDP-to-SR Direction 

In Figure 23, router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP. Link R1-R2 is LDP-enabled, but router R2 does not support SR or SR is disabled.

The following steps describe how stitching is established at boundary router R1:

  1. Router R1 receives a prefix SID sub-TLV in an IS-IS IP reachability TLV originated by router Ry for prefix Y.
  2. R1 resolves the prefix SID and programs an NHLFE on the link toward the next hop in the SR domain. R1 programs an SR ILM and points it to the NHLFE.
  3. Because R1 is programmed to stitch LDP to SR, LDP in R1 checks the TTM and finds the SR tunnel to prefix Y. LDP programs an LDP ILM and points it to the SR tunnel. As a result, both the SR ILM and LDP ILM are now pointing to the SR tunnel, one via the SR NHLFE and the other via the SR tunnel endpoint.
  4. R1 advertises the LDP FEC for prefix Y to all its LDP peers. R2 is now able to install an LDP tunnel towards Ry.
  5. If R1 finds multiple SR tunnels to destination prefix Y, R1 uses the lowest instance ID in the IS-IS protocol to select the tunnel.
  6. If the user configured multiple from statements or did not include the from statement but added a default action of accept for the IS-IS protocol, R1 selects the tunnel to destination prefix Y by using the lowest instance ID in the IS-IS protocol.
    Note:

    If R1 has already resolved an LDP FEC for prefix Y, it has an ILM assigned to it. However, this ILM will not be updated to point toward the SR tunnel because LDP attempts a resolution in the RTM before attempting a resolution in the TTM. Therefore, an LDP tunnel is selected before an SR tunnel. Similarly, if an LDP FEC is received after the stitching is programmed, the LDP ILM is updated to point to the LDP NHLFE because LDP is able to resolve the LDP FEC in the RTM.

  7. The user enables SR in R2. R2 resolves the prefix SID for prefix Y and installs the SR ILM and the SR NHLFE. R2 is now able to forward packets over the SR tunnel to router Ry. There is no activity in R1 because the SR ILM is already programmed.
  8. The user disables LDP over the R1-R2 interface in both directions. This causes the LDP FEC ILM and NHLFE to be removed in R1 and in R2, which can then only do forwarding using the SR tunnel toward Ry.

5.1.14.2. Stitching in the SR-to-LDP Direction

Stitching in the data plane in the SR-to-LDP direction is based on the IGP monitoring the TTM for an LDP tunnel of a prefix matching an entry in the SR TTM export policy.

In Figure 23, router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP. Link R1-R2 is LDP-enabled but router R2 does not support SR or SR is disabled.

The following steps describe how stitching is established at boundary router R1:

  1. R1 receives an LDP FEC for prefix X from router Rx in the LDP domain. The RTM in R1 indicates that the interface to R2 is the next hop for prefix X.
  2. LDP in R1 resolves the received FEC in the RTM and creates an LDP ILM for the FEC with an ingress label (for example, label L1), and points it to an LDP NHLFE toward R2 with egress label L2.
  3. R1 receives a prefix SID sub-TLV from the R5 mapping server for prefix X.
  4. The IGP in R1 attempts to resolve in its routing table the next hop of prefix X over the interface to R2. R1 detects that R2 did not advertise support of SR and therefore the SID resolution for prefix X in the routing table fails.
  5. The IGP in R1 then attempts to resolve the prefix SID of prefix X in the TTM because it detects that it is configured for SR-to-LDP stitching. R1 finds an LDP tunnel to prefix X in the TTM, instructs the SR module to program an SR ILM with ingress label L3, and points it to the LDP tunnel endpoint, thus stitching ingress label L3 to egress label L2.
    Note:

    1. The ILMs for LDP and SR are both pointing to the same LDP tunnel, one via NHLFE and one via the tunnel endpoint.
    2. No SR tunnel to destination prefix X should be programmed in the TTM following the resolution of the prefix SID of prefix X in the TTM.
    3. If the IGP is not able to resolve the prefix SID for prefix X in step 4 and step 5, a trap is generated for the prefix SID resolution failure. An existing trap for the prefix SID resolution failure is enhanced to state whether the prefix SID that failed the resolution attempts was part of a mapping server TLV or an IP reachability TLV.
  6. The user enables segment routing on R2.
  7. The IGP in R1 discovers that R2 supports SR.
    Because R1 still has a prefix SID for prefix X from the mapping server R5, it maintains the stitching of the SR ILM for prefix X to the LDP FEC.
  8. The user disables the LDP interface between R1 and R2 in both directions. This causes the LDP FEC ILM and NHLFE for prefix X to be removed in R1 and triggers the re-evaluation of the SIDs.
  9. R1 first attempts the resolution in the routing table. Because the next hop for prefix X supports SR, the IGP instructs the SR module to program an NHLFE for the prefix SID of prefix X with egress label L4 and with an outgoing interface to R2. R1 creates an SR tunnel in the TTM for destination prefix X. R1 also changes the SR ILM with ingress label L3 to point to the SR NHLFE with egress label L4.
    Router R2 now becomes the SR-LDP stitching router.
  10. Router Rx, which owns prefix X, is upgraded to support SR. Rx sends a prefix SID sub-TLV to R1 in an IS-IS IP reachability TLV for prefix X. The SID information may or may not be the same as the information received from the mapping server R5. If the SID information is not the same, the IGP in R1 chooses the prefix SID originated by Rx and updates the SR ILM and NHLFE with the appropriate labels.
  11. The user then cleans up the mapping server and removes the mapping entry for prefix X, which is then withdrawn by IS-IS.

5.1.14.3. TTL Propagation and ICMP Tunneling

When stitching is performed between an LDP FEC and an SR IS-IS node SID tunnel, the TTL of the outer LDP or SR label is decreased, similar to a regular swapping operation at an LSR.

5.1.15. LDP FRR Remote LFA and TI-LFA Backup Using an SR Tunnel for IPv4 /32 Prefixes (IS-IS)

This feature allows an SR tunnel to be used as a remote LFA or TI-LFA backup tunnel next hop by an LDP FEC. The feature is enabled using the CLI command string config>router>ldp>fast-reroute backup-sr-tunnel. See LDP Commands for more information.
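
As a minimal sketch, combining the command above with one of the loopfree-alternate options described in the prerequisites below:

     config>router>isis>loopfree-alternate remote-lfa

     config>router>ldp>fast-reroute backup-sr-tunnel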

This feature requires the LDP-to-Segment Routing Stitching for IPv4 /32 Prefixes (IS-IS) feature as a prerequisite, because the LSR performs the stitching of the LDP ILM to an SR tunnel when the primary LDP next hop of the FEC fails. Therefore, LDP monitors SR tunnels programmed by the IGP in the TTM without the need for a mapping server.

It is assumed that:

  1. the backup-sr-tunnel option is enabled in LDP
  2. the loopfree-alternate ti-lfa or loopfree-alternate remote-lfa option is enabled in the IGP instance (refer to the 7705 SAR Routing Protocols Guide, “IS-IS Command Reference”)
    Note:

    The loopfree-alternate options can be enabled separately or together. If both options are enabled, TI-LFA backup takes precedence over remote LFA backup.

  3. LDP was able to resolve the primary next hop of the LDP FEC in the RTM

If the IGP LFA SPF does not find a regular LFA backup next hop for an LDP FEC prefix, it runs the TI-LFA and remote LFA algorithms. If the IGP LFA SPF finds a remote LFA or TI-LFA tunnel next hop, LDP programs the primary next hop of the FEC using an LDP NHLFE and programs the remote LFA or TI-LFA backup tunnel next hop using an LDP NHLFE pointing to the SR tunnel endpoint.

Note:

The LDP packet is not sent over the SR tunnel. The LDP label is stitched to the segment routing label stack. LDP points both the LDP ILM and the LTN to the backup LDP NHLFE, which uses the SR tunnel endpoint.

5.1.15.1. Feature Behavior

The following describes the behavior of this feature.

  1. When LDP resolves a primary next hop in the RTM or a remote LFA or TI-LFA backup next hop using an SR tunnel in the TTM, LDP programs a primary LDP NHLFE and a backup LDP NHLFE with an implicit null label value pointing to the SR tunnel that has the remote LFA or TI-LFA backup programmed for the same prefix.
  2. If the LDP FEC primary next hop fails and LDP has preprogrammed a remote LFA or TI-LFA backup next hop with an LDP backup NHLFE pointing to an SR tunnel, the LDP ILM/LTN switches to it.
    Note:

    If the LDP FEC primary next hop failure impacts only the LDP tunnel primary next hop but not the SR tunnel primary next hop, the LDP backup NHLFE points to the primary next hop of the SR tunnel; the LDP ILM/LTN traffic follows this path instead of the remote LFA or TI-LFA next hop of the SR tunnel until the remote LFA or TI-LFA next hop is activated.

  3. If the LDP FEC primary next hop becomes unresolved in the RTM, LDP switches the resolution to an SR tunnel in the TTM, if one exists, following the steps described in Stitching in the LDP-to-SR Direction.
  4. If both the LDP primary next hop and a regular LFA next hop become resolved in the RTM, LDP programs the primary NHLFE and backup NHLFE for the FEC.

5.1.16. TCP MD5 Authentication

The operation of a network can be compromised if an unauthorized system is able to form or hijack an LDP session and inject control packets by falsely representing itself as a valid neighbor. This risk can be mitigated by enabling TCP MD5 authentication on one or more of the sessions.

When TCP MD5 authentication is enabled on a session, every TCP segment exchanged with the peer includes a TCP option (19) containing a 16-byte MD5 digest of the segment (more specifically the TCP/IP pseudo-header, TCP header, and TCP data). The MD5 digest is generated and validated using an authentication key that must be known to both sides. If the received digest value is different from the locally computed one, the TCP segment is dropped, thereby protecting the router from a spoofed TCP segment.

The TCP Enhanced Authentication Option, as specified in draft-bonica-tcpauth-05.txt, Authentication for TCP-based Routing and Management Protocols, is a TCP extension that enhances security for LDP, BGP, and other TCP-based protocols. It extends the MD5 authentication option to include the ability to change keys in an LDP or BGP session seamlessly without tearing down the session, and allows for stronger authentication algorithms to be used. It is intended for applications where secure administrative access to both endpoints of the TCP connection is normally available.

TCP peers can use this extension to authenticate messages passed between one another. This strategy improves upon the practice described in RFC 2385, Protection of BGP Sessions via the TCP MD5 Signature Option. Using this new strategy, TCP peers can update authentication keys during the lifetime of a TCP connection. TCP peers can also use stronger authentication algorithms to authenticate routing messages.

TCP enhanced authentication uses keychains that are associated with every protected TCP connection.

Keychains are configured in the config>system>security>keychain context. For more information about configuring keychains, refer to the 7705 SAR System Management Guide, “TCP Enhanced Authentication and Keychain Authentication”.

5.2. LDP Point-to-Multipoint Support

The 7705 SAR supports point-to-multipoint mLDP. This section contains information on the following topics:

5.2.1. LDP Point-to-Multipoint Configuration

A node running LDP also supports point-to-multipoint LSP setup using LDP. By default, the node advertises this capability to a peer node using the point-to-multipoint capability TLV in the LDP initialization message.

The multicast-traffic configuration option (configured per interface) restricts or allows the use of an interface for forwarding LDP multicast traffic towards a downstream node. This option does not restrict or allow the exchange of point-to-multipoint FECs over an established session to a peer on the interface; it only restricts or allows the use of next hops over the interface.

5.2.2. LDP Point-to-Multipoint Protocol

Only a single generic identifier range is defined for signaling a multipoint data tree (MDT) for all client applications. The 7705 SAR implementation reserves the range 1 to 8292 for generic point-to-multipoint LSP ID values for static point-to-multipoint LSPs on the root node.

5.2.3. Make-Before-Break (MBB)

When a transit or leaf node detects that the upstream node towards the root node of a multicast tree has changed, the node follows a graceful procedure that allows a make-before-break transition to the new upstream node. Make-before-break support is optional and is configured via the mp-mbb-time command. If the new upstream node does not support MBB procedures, the downstream node waits for the configured timer to expire before switching over to the new upstream node.

5.2.4. ECMP Support

If multiple ECMP paths exist between two adjacent nodes, the upstream node of the multicast receiver programs all entries in the forwarding plane. Only one entry is active at a time; it is selected based on the ECMP hashing algorithm.

5.3. Multicast LDP Fast Upstream Switchover

This feature allows a downstream LSR of a multicast LDP (mLDP) FEC to perform a fast switchover and source the traffic from another upstream LSR while IGP and LDP converge following a failure of the upstream LSR, where the upstream LSR is the primary next hop of the root LSR for the point-to-multipoint FEC. The feature is enabled through the mcast-upstream-frr command.

The feature provides upstream fast reroute (FRR) node protection for mLDP FEC packets. The protection is at the expense of traffic duplication from two different upstream nodes into the node that performs the fast upstream switchover.

The detailed procedures for this feature are described in draft-pdutta-mpls-mldp-up-redundancy.

5.3.1. mLDP Fast Upstream Switchover Configuration

To enable the mLDP fast upstream switchover feature, configure the following option in the CLI:

     config>router>ldp>mcast-upstream-frr

When mcast-upstream-frr is enabled and LDP is resolving an mLDP FEC received from a downstream LSR, LDP checks for the existence of an ECMP next hop or a loop-free alternate (LFA) next hop to the root LSR node. If LDP finds one, it programs a primary incoming label map (ILM) on the interface corresponding to the primary next hop and a backup ILM on the interface corresponding to the ECMP or LFA next hop. LDP then sends the corresponding labels to both upstream LSR nodes. In normal operation, the primary ILM accepts packets and the backup ILM drops them. If the interface or the upstream LSR of the primary ILM goes down, causing the LDP session to go down, the backup ILM starts accepting packets.
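
The ILM programming and blocking behavior described above can be modeled with the following Python sketch; the data structures are hypothetical and for illustration only.

     def program_mldp_ilms(primary_interface, ecmp_or_lfa_interface):
         """Program a primary ILM and, if an ECMP or LFA next hop to the
         root LSR exists, a backup ILM that drops packets until needed."""
         ilms = [{"interface": primary_interface, "accept": True}]
         if ecmp_or_lfa_interface is not None:
             ilms.append({"interface": ecmp_or_lfa_interface, "accept": False})
         return ilms

     def on_primary_ldp_session_down(ilms):
         """Fast upstream switchover: reverse the ILM blocking state so
         the backup ILM starts accepting packets."""
         for ilm in ilms:
             ilm["accept"] = not ilm["accept"]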

To use the ECMP next hop, configure the ecmp max-ecmp-routes value in the system to be at least 2, using the following command:

     config>router>ecmp max-ecmp-routes

To use the LFA next hop, enable LFA using the following commands (as needed):

     config>router>isis>loopfree-alternate

     or

     config>router>ospf>loopfree-alternate

Enabling IP FRR or LDP FRR is not strictly required, since LDP only needs to know the location of the alternate next hop to the root LSR in order to send the label mapping message and program the backup ILM during the initial signaling of the tree. That is, enabling the LFA option is sufficient for providing the backup ILM information. However, if unicast IP and LDP prefixes need to be protected, then IP FRR and LDP FRR—and the mLDP fast upstream switchover—can be enabled concurrently using the following commands:

     config>router>ip-fast-reroute

     or

     config>router>ldp>fast-reroute

An mLDP FRR fast switchover relies on the fast detection of a lost LDP session to the upstream peer to which the primary ILM label had been advertised. To ensure fast detection of a lost LDP session, do the following:

  1. Enable BFD on all LDP interfaces to upstream LSR nodes. When BFD detects the loss of the last adjacency to the upstream LSR, it brings down the LDP session immediately, which causes the CSM to activate the backup ILM.
  2. If there is a concurrent T-LDP adjacency to the same upstream LSR node, enable BFD on the T-LDP peer in addition to enabling it on the interface.
  3. Enable the ldp-sync-timer option on all interfaces to the upstream LSR nodes. If an LDP session to the upstream LSR to which the primary ILM is resolved goes down for any reason other than a failure of the interface or the upstream LSR, then routing and LDP go out of synchronization. This means that the backup ILM remains activated until the next time SPF is run by IGP.
    By enabling the IGP-LDP synchronization feature, the advertised link metric changes to the maximum value as soon as the LDP session goes down. This, in turn, triggers an SPF, and LDP will likely download a new set of primary and backup ILMs.

5.3.2. mLDP Fast Upstream Switchover Behavior

This feature allows a downstream LSR to send a label binding to two upstream LSR nodes, but only accept traffic as follows:

  1. for normal operation, traffic is accepted from the ILM on the interface to the primary next hop of the root LSR for the point-to-multipoint FEC
  2. for failure operation, traffic is accepted from the ILM on the interface to the backup next hop

A candidate upstream LSR node must be either an ECMP next hop or an LFA next hop. Either option allows the downstream LSR to perform a fast switchover and to source the traffic from another upstream LSR while IGP is converging due to a failure of the LDP session of the upstream peer, which is the primary next hop of the root LSR for the point-to-multipoint FEC. That is, the candidate upstream LSR node provides upstream FRR node protection for the mLDP FEC packets.

Multicast LDP fast upstream switchover is illustrated in Figure 24. LSR U is the primary next hop for the root LSR R of the point-to-multipoint FEC. LSR U' is an ECMP or LFA backup next hop for the root LSR R of the same point-to-multipoint FEC.

Figure 24:  Multicast LDP Fast Upstream Switchover 

In Figure 24, downstream LSR Z sends a label mapping message to both upstream LSR nodes, and programs the primary ILM on the interface to LSR U and the backup ILM on the interface to LSR U'. The labels for the primary and backup ILMs must be different. Thus LSR Z attracts traffic from both ILMs. However, LSR Z blocks the ILM on the interface to LSR U' and only accepts traffic from the ILM on the interface to LSR U.

If the link to LSR U fails, or LSR U fails, causing the LDP session to LSR U to go down, LSR Z will detect the failure and reverse the ILM blocking state. In addition, LSR Z immediately starts receiving traffic from LSR U' until IGP converges and provides a new primary next hop and a new ECMP or LFA backup next hop, which may or may not be on the interface to LSR U'. When IGP convergence is complete, LSR Z updates the primary and backup ILMs in the datapath.

Note:

LDP uses the interface of either an ECMP next hop or an LFA next hop to the root LSR prefix, whichever is available, to program the backup ILM. However, the ECMP next hop and the LFA next hop are mutually exclusive for a given prefix: IGP installs the ECMP next hop in preference to the LFA next hop for a prefix in the routing table manager (RTM).

If one or more ECMP next hops for the root LSR prefix exist, LDP picks the interface for the primary ILM based on the rules of mLDP FEC resolution specified in RFC 6388, Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths:

  1. The candidate upstream LSRs are numbered from lowest to highest IP address.
  2. The following hash is performed:
           H = (CRC32(Opaque Value)) modulo N
           where N is the number of upstream LSRs
    The Opaque Value is the field in the point-to-multipoint FEC element immediately after the Opaque Length field. The Opaque Length indicates the length of the Opaque Value used in this calculation.
  3. The selected upstream LSR U is the LSR that has the number H.

LDP then picks the interface for the backup ILM using the following new rules:

           if (H + 1 < NUM_ECMP) {
                 // If the hashed entry is not the last of the next hops,
                 // pick the next entry as the backup.
                 backup = H + 1;
           } else {
                 // Wrap around and pick the first entry.
                 backup = 1;
           }
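
For illustration, the following Python sketch combines the RFC 6388 selection hash with the backup pick above. It assumes 0-based list indexing, so the wrap-around case picks the first entry of the list (index 0), matching the intent of the pseudocode; it is a model only, not the router implementation.

     import ipaddress
     import zlib

     def select_upstream_lsrs(candidates, opaque_value):
         """Pick the primary (H) and backup upstream LSRs for an mLDP FEC.

         candidates is a list of candidate upstream LSR addresses as
         strings, numbered from lowest to highest IP address; opaque_value
         is the raw Opaque Value of the point-to-multipoint FEC element.
         """
         lsrs = sorted(candidates, key=lambda a: int(ipaddress.ip_address(a)))
         n = len(lsrs)
         h = zlib.crc32(opaque_value) % n     # H = (CRC32(Opaque Value)) modulo N
         if n == 1:
             return lsrs[h], None             # no ECMP backup available
         backup = h + 1 if h + 1 < n else 0   # next entry, or wrap to the first
         return lsrs[h], lsrs[backup]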

In some topologies, it is possible that no ECMP or LFA next hop is found. In this case, LDP programs the primary ILM only.

5.4. LDP IPv6

The 7705 SAR extends the LDP control plane and data plane to support LDP IPv6 adjacencies and sessions using a 128-bit LSR ID.

The implementation allows for concurrent support of independent LDP IPv4 (which uses a 32-bit LSR ID) and LDP IPv6 adjacencies and sessions between peer LSRs, over the same interfaces or a different set of interfaces.

Figure 25 shows an example of an LDP adjacency and session over an IPv6 interface.

Figure 25:  LDP Adjacency and Session over an IPv6 Interface 

LSR-A and LSR-B have the following IPv6 LDP identifiers respectively:

  1. <LSR Id=A/128> : <label space id=0>
  2. <LSR Id=B/128> : <label space id=0>

By default, LSR-A/128 and LSR-B/128 use the system interface IPv6 address.

Although the LDP control plane can operate using only the IPv6 system address, it is recommended that the user configure an IPv4-formatted router ID so that OSPF, IS-IS, and BGP operate properly.

The following sections describe LDP IPv6 behavior on the 7705 SAR:

5.4.1. Link LDP

LDP IPv6 uses a 128-bit LSR ID as defined in draft-pdutta-mpls-ldp-v2-00. See LDP Process Overview for more information about interoperability of this implementation with a 32-bit LSR ID, as defined in draft-ietf-mpls-ldp-ipv6-14.

A Hello adjacency is brought up using a link Hello packet with a source IP address set to the interface link local unicast address and a destination IP address set to the link local multicast address FF02:0:0:0:0:0:0:2.

The transport address for the TCP connection, which is encoded in the Hello packet, is set by default to the LSR ID of the LSR. The transport address is instead set to the interface IPv6 address if the user enables the interface option in one of the following contexts:

  1. config>router>ldp>if-params>ipv6>transport-address
  2. config>router>ldp>if-params>if>ipv6>transport-address

The user can configure the local-lsr-id option on the interface and change the value of the LSR ID to either the local interface or another interface name, including a loopback. The global unicast IPv6 address corresponding to the primary IPv6 address of that interface is used as the LSR ID. If the interface does not have a global unicast IPv6 address in the configuration of the transport address or of the local-lsr-id option, the session does not come up and an error message is displayed.
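
The LSR ID derivation rule can be illustrated with the following Python sketch, which assumes the interface's IPv6 addresses are supplied as an ordered list of strings with the primary address first; it is not the actual implementation.

     import ipaddress

     def ldp_ipv6_lsr_id(interface_addresses):
         """Return the 128-bit LSR ID for an interface named in local-lsr-id:
         the global unicast IPv6 address corresponding to its primary address."""
         for addr in interface_addresses:
             ip = ipaddress.IPv6Address(addr)
             if ip.is_global:
                 return str(ip)
         # No global unicast address: the session does not come up and an
         # error message is displayed.
         raise ValueError("interface has no global unicast IPv6 address")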

The LSR with the highest transport address will bootstrap the IPv6 TCP connection and IPv6 LDP session.

The source and destination addresses of LDP/TCP session packets are the IPv6 transport addresses.

5.4.2. Targeted LDP

The source and destination addresses of targeted Hello packets are the LDP IPv6 LSR IDs of systems A and B in Figure 25.

The user can configure the local-lsr-id option on the targeted session and change the value of the LSR ID to either the local interface or some other interface name, whether a loopback interface or not. The global unicast IPv6 address corresponding to the primary IPv6 address of the interface is used as the LSR ID. If the user invokes an interface that does not have a global unicast IPv6 address in the configuration of the transport address or of the local-lsr-id option, the session does not come up and an error message is displayed. In all cases, the transport address for the LDP session and the source IP address of targeted Hello messages are updated with the new LSR ID value.

The LSR with the highest transport address (in this case, the LSR ID) will bootstrap the IPv6 TCP connection and IPv6 LDP session.

The source and destination IP addresses of LDP/TCP session packets are the IPv6 transport addresses, which in this case are the LDP LSR IDs of systems A and B in Figure 25.

5.4.3. FEC Resolution

LDP advertises and withdraws all interface IPv6 addresses using the Address/Address-Withdraw message. Both the link local unicast address and the configured global unicast addresses of an interface are advertised.

As with LDP IPv4 sessions, all LDP FEC types can be exchanged over an LDP IPv6 session. The LSR does not advertise a FEC for a link local address and, if one is received, the LSR does not resolve it.

An IPv4 or IPv6 prefix FEC can be resolved to an LDP IPv6 interface in the same way it is resolved to an LDP IPv4 interface. The outgoing interface and next hop are looked up in the RTM cache. The next hop can be the link local unicast address of the other side of the link or a global unicast address. The FEC is resolved to the LDP IPv6 interface of the downstream LDP IPv6 LSR that advertised the IPv4 or IPv6 address of the next hop.

A PW FEC can be resolved to a targeted LDP IPv6 adjacency with an LDP IPv6 LSR if there is a context for the FEC with local spoke SDP configuration or spoke SDP auto-creation from a service such as BGP-AD VPLS, BGP-VPWS, or dynamic MS-PW.

5.4.4. LDP Session Capabilities

LDP can advertise all FEC types over an LDP IPv4 or an LDP IPv6 session. The FEC types are: IPv4 prefix FEC, IPv6 prefix FEC, IPv4 P2MP FEC (with MVPN), and PW FEC 128.

LDP also supports signaling the enabling or disabling of the advertisement of the following subset of FEC types during the LDP IPv4 or IPv6 session initialization phase, and when the session is already up:

  1. IPv4 prefix FEC
    This is performed using the State Advertisement Control (SAC) capability TLV as specified in draft-ietf-mpls-ldp-ip-pw-capability. The SAC capability TLV includes the IPv4 SAC element having the D-bit (Disable-bit) set or reset to disable or enable this FEC type respectively. The LSR can send this TLV in the LDP Initialization message and subsequently in an LDP capability message.
  2. IPv6 prefix FEC
    This is performed using the State Advertisement Control (SAC) capability TLV as specified in draft-ietf-mpls-ldp-ip-pw-capability. The SAC capability TLV includes the IPv6 SAC element having the D-bit (Disable-bit) set or reset to disable or enable this FEC type respectively. The LSR can send this TLV in the LDP Initialization message and subsequently in an LDP capability message to update the state of this FEC type.
  3. P2MP FEC (IPv4 only)
    This is performed using the P2MP capability TLV as specified in RFC 6388. The P2MP capability TLV has the S-bit (State-bit) with a value of set or reset to enable or disable this FEC type respectively. The LSR can send this TLV in the LDP initialization message and, subsequently, in an LDP capability message to update the state of this FEC type.

During LDP session initialization, each LSR indicates to its peers which FEC type it supports by including the capability TLV for it in the LDP initialization message. The 7705 SAR enables the IPv4 and IPv6 Prefix FEC types by default and sends their corresponding capability TLVs in the LDP initialization message. If one or both peers advertise the disabling of a capability in the LDP Initialization message, no FECs of the corresponding FEC type are exchanged between the two peers for the lifetime of the LDP session unless a capability message is sent to explicitly enable it. The same behavior applies if no capability TLV for a FEC type is advertised in the LDP initialization message, except for the IPv4 prefix FEC which is assumed to be supported by all implementations by default.

Dynamic Capability, as defined in RFC 5561, allows all FEC types to update the enabled or disabled state after the LDP session initialization phase. An LSR informs its peer that it supports Dynamic Capability by including the Dynamic Capability Announcement TLV in the LDP initialization message. If both LSRs advertise this capability, the user can enable or disable any of the above FEC types while the session is up and the change takes effect immediately. The LSR then sends a SAC capability message with the IPv4 or IPv6 SAC element having the D-bit (Disable-bit) set or reset, or the P2MP capability TLV (IPv4 only) in a capability message with the S-bit (State-bit) set or reset. Each LSR then takes the consequent action of withdrawing or advertising the FECs of that type to the peer LSR. If one or both LSRs did not advertise the Dynamic Capability Announcement TLV in the LDP initialization message, any change to the enabled or disabled FEC types only takes effect the next time the LDP session is restarted.
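
The negotiation outcome can be modeled with the following Python sketch. It is illustrative only; a state is True (enabled), False (disabled), or None (no capability TLV advertised).

     def negotiated_fec_types(local, peer):
         """Return the set of FEC types that may be exchanged over a session.

         local and peer map a FEC type name to its advertised state. An
         absent capability TLV counts as disabled, except for the IPv4
         prefix FEC, which all implementations are assumed to support.
         """
         exchanged = set()
         for fec in ("prefix-ipv4", "prefix-ipv6", "p2mp-ipv4"):
             l_state = local.get(fec)
             p_state = peer.get(fec)
             if fec == "prefix-ipv4":
                 l_state = True if l_state is None else l_state
                 p_state = True if p_state is None else p_state
             if l_state and p_state:
                 exchanged.add(fec)
         return exchanged

     def can_update_while_up(local_dyn_cap, peer_dyn_cap):
         """Runtime enable/disable requires Dynamic Capability (RFC 5561)
         from both LSRs; otherwise a change applies only after the LDP
         session is restarted."""
         return local_dyn_cap and peer_dyn_cap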

The user can enable or disable a specific FEC type for a given LDP session to a peer by using the following CLI commands:

  1. config>router>ldp>session-params>peer>fec-type-capability>prefix-ipv4
  2. config>router>ldp>session-params>peer>fec-type-capability>prefix-ipv6
  3. config>router>ldp>session-params>peer>fec-type-capability>p2mp

5.4.5. LDP Adjacency Capabilities

Adjacency-level FEC-type capability advertisement is defined in draft-pdutta-mpls-ldp-adj-capability. By default, all FEC types supported by the LSR are advertised in the LDP IPv4 or IPv6 session initialization; see LDP Session Capabilities for more information. If a given FEC type is enabled at the session level, it can be disabled over a given LDP interface at the IPv4 or IPv6 adjacency level for all IPv4 or IPv6 peers over that interface. If a given FEC type is disabled at the session level, then FECs are not advertised and enabling that FEC type at the adjacency level has no effect. The LDP adjacency capability can be configured on link Hello adjacencies only and does not apply to targeted Hello adjacencies.

The LDP adjacency capability TLV is advertised in the Hello message with the D-bit (Disable-bit) set or reset to disable or enable the resolution of this FEC type over the link of the Hello adjacency. It is used to restrict which FECs can be resolved over a given interface to a peer. This provides the ability to dedicate links and data path resources to specific FEC types. For IPv4 and IPv6 prefix FECs, a subset of ECMP links to an LSR peer may be configured to carry one of the two FEC types. An mLDP P2MP FEC (IPv4 only) can exclude specific links to a downstream LSR from being used to resolve this type of FEC.

Like the LDP session-level FEC-type capability, the adjacency FEC-type capability is negotiated for both directions of the adjacency. If one or both peers advertise the disabling of a capability in the LDP Hello message, no FECs of the corresponding FEC type will be resolved by either peer over the link of this adjacency for the lifetime of the LDP Hello adjacency, unless one or both peers sends the LDP adjacency capability TLV subsequently to explicitly enable it.

The user can enable or disable a specific FEC type for a given LDP interface to a peer by using the following CLI commands:

  1. config>router>ldp>if-params>if>ipv4>fec-type-capability>p2mp-ipv4
  2. config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability>prefix-ipv4
  3. config>router>ldp>if-params>if>ipv4/ipv6>fec-type-capability>prefix-ipv6

These commands, when applied to the IPv4 P2MP FEC, deprecate the existing multicast-traffic command under the interface. Unlike the session-level capability, these commands can disable multicast FEC for IPv4.

The encoding of the adjacency capability TLV uses a PRIVATE Vendor TLV. It is used only in a Hello message to negotiate a set of capabilities for a specific LDP IPv4 or IPv6 Hello adjacency.

0                   1                   2                   3
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|1|0| ADJ_CAPABILITY_TLV       |      Length                    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    VENDOR_OUI                                 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|S| Reserved  |                                                 |
+-+-+-+-+-+-+-+-+                                               +
|          Adjacency capability elements                        |
+                                                               +
|                                                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

The value of the U-bit for the TLV is set to 1 so that a receiver silently ignores the TLV if it is deemed unknown.

The value of the F-bit is 0. After being advertised, this capability cannot be withdrawn; thus, the S-bit is set to 1 in a Hello message.

Adjacency capability elements are encoded as follows:

0 1 2 3 4 5 6 7 
+-+-+-+-+-+-+-+-+
|D| CapFlag     |
+-+-+-+-+-+-+-+-+

D bit: Controls the capability state.

  1. 1 : Disable capability
  2. 0 : Enable capability

CapFlag: The adjacency capability

  1. 1 : Prefix IPv4 forwarding
  2. 2 : Prefix IPv6 forwarding
  3. 3 : P2MP IPv4 forwarding
  4. 4 : P2MP IPv6 forwarding (not supported on the 7705 SAR)
  5. 5 : MP2MP IPv4 forwarding
  6. 6 : MP2MP IPv6 forwarding

Each CapFlag appears no more than once in the TLV. If duplicates are found, the D-bit of the first element is used. For forward compatibility, if the CapFlag is unknown, the receiver must silently discard the element and continue processing the rest of the TLV.
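
A decoder for these one-byte elements could look like the following Python sketch (illustrative only), which applies the two rules above: the first occurrence of a CapFlag wins and unknown CapFlags are skipped.

     CAPFLAGS = {
         1: "prefix-ipv4",
         2: "prefix-ipv6",
         3: "p2mp-ipv4",
         4: "p2mp-ipv6",   # not supported on the 7705 SAR
         5: "mp2mp-ipv4",
         6: "mp2mp-ipv6",
     }

     def parse_adjacency_capability_elements(data):
         """Parse one-byte elements of the form |D|CapFlag(7 bits)| and
         return a mapping of known capability names to enabled/disabled."""
         state = {}
         for octet in data:                   # iterating bytes yields ints
             capflag = octet & 0x7F
             if capflag not in CAPFLAGS:
                 continue                     # forward compatibility: skip unknown
             name = CAPFLAGS[capflag]
             if name in state:
                 continue                     # duplicate: the first element is used
             state[name] = (octet >> 7) == 0  # D-bit 0 = enable, 1 = disable
         return state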

5.4.6. IP Address and FEC Distribution

When an LDP LSR initializes the LDP session to the peer LSR and the session comes up, IP addresses and FECs are distributed. Local IPv4 and IPv6 interface addresses are exchanged using the Address and Address Withdraw messages. FECs are exchanged using label mapping messages.

By default, IPv6 address distribution is determined by whether the dual-stack capability TLV, which is defined in draft-ietf-mpls-ldp-ipv6, is present in the Hello message from the peer. This requirement is designed to address interoperability issues found with existing third-party LDP IPv4 implementations.

IP address and FEC distribution behavior is described below.

  1. If the peer LSR sent the dual-stack capability TLV in the Hello message, then local IPv6 addresses are sent to the peer. The user can configure an address export policy to restrict which local IPv6 interface addresses are sent to the peer.
    1. If the peer explicitly stated enabling of LDP IPv6 FEC type by including the IPv6 SAC TLV in the initialization message with the D-bit set to 0, then IPv6 FECs are also sent to the peer.
    2. If the peer sent the dual-stack capability TLV in the Hello message, but explicitly stated disabling of LDP IPv6 FEC type by including the IPv6 SAC TLV in the initialization message with the D-bit set to 1, then IPv6 local addresses instead of IPv6 FECs are sent to the peer. The user can configure an address export policy to further restrict which local IPv6 interface addresses to send to the peer.
  2. If the peer did not send the dual-stack capability TLV in the Hello message, then no IPv6 addresses or IPv6 FECs are sent to that peer, regardless of the presence or not of the IPv6 SAC TLV in the initialization message. This case is added to prevent interoperability issues with some third-party LDP IPv4 implementations. The user can override the distribution defined by the initial Hello message by explicitly configuring an address export policy and a FEC export policy to select IPv6 addresses and FECs to send to the peer.

The above behavior applies to LDP IPv4 and IPv6 addresses and FECs. The procedure is summarized in the flowchart diagrams in Figure 26 and Figure 27.

Figure 26:  LDP IPv6 Address and FEC Distribution Procedure 
Figure 27:  LDP IPv6 Address and FEC Distribution Procedure 
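
The following Python sketch models the same decision for the IPv6 state sent to a peer; it is a simplification of the flowcharts and applies before any configured export policies.

     def ipv6_state_to_send(peer_sent_dual_stack_tlv, peer_ipv6_sac_d_bit):
         """Decide whether to send local IPv6 addresses and IPv6 FECs.

         peer_ipv6_sac_d_bit is None when no IPv6 SAC element was received,
         0 when received with the D-bit reset (enable), and 1 when received
         with the D-bit set (disable). Returns (send_addresses, send_fecs).
         """
         if not peer_sent_dual_stack_tlv:
             # Guard against third-party LDP IPv4-only implementations:
             # send neither, regardless of any IPv6 SAC element.
             return (False, False)
         if peer_ipv6_sac_d_bit == 0:
             return (True, True)    # IPv6 addresses and IPv6 FECs
         return (True, False)       # IPv6 addresses only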

5.4.7. IGP and Static Route Synchronization with LDP

The IGP-LDP synchronization and the static route-to-LDP synchronization features are modified to operate on a dual-stack IPv4 or IPv6 LDP interface as follows.

  1. If the router interface goes down or both LDP IPv4 and LDP IPv6 sessions go down, IGP sets the interface metric to the maximum value and all static routes with the ldp-sync option enabled and resolved on this interface are deactivated.
  2. If the router interface is up and only one of the LDP IPv4 or LDP IPv6 interfaces goes down, no action is taken.
  3. When the router interface comes up from a down state and either the LDP IPv4 or LDP IPv6 session comes up, IGP starts the sync timer. When the sync timer expires, the interface metric is restored to its configured value and all static routes with the ldp-sync option enabled are activated.

Given the above behavior, it is recommended that the user configure the sync timer to a value that allows enough time for both the LDP IPv4 and LDP IPv6 sessions to come up.

5.4.8. BFD Operation

The operation of BFD over an LDP interface tracks the next hops of IPv4 and IPv6 prefix FECs in addition to tracking the LDP peer address of the Hello adjacency over that link. Tracking is required because LDP can resolve both IPv4 and IPv6 prefix FECs over a single IPv4 or IPv6 LDP session, so the next hop of a prefix does not necessarily match the LDP peer source address of the Hello adjacency. If any of the BFD tracking sessions fail, the LFA backup NHLFE for the FEC is activated or, if there is no FRR backup, the FEC is re-resolved.

The user can configure tracking with only an IPv4 BFD session, only an IPv6 BFD session, or with both using the config>router>ldp>if-params>if>bfd-enable [ipv4] [ipv6] command.

This command provides flexibility in case the user does not need to track both Hello adjacency and the next hops of FECs.

For example, if the user configures bfd-enable ipv6 only to save on the number of BFD sessions, then LDP will track the IPv6 Hello adjacency and the next hops of IPv6 prefix FECs. LDP will not track next hops of IPv4 prefix FECs resolved over the same LDP IPv6 adjacency. If the IPv4 data plane encounters errors and the IPv6 Hello adjacency is not affected and remains up, traffic for the IPv4 prefix FECs resolved over that IPv6 adjacency will be blackholed. If the BFD tracking the IPv6 Hello adjacency times out, then all IPv4 and IPv6 prefix FECs will be updated.

5.4.9. Services Using SDP with an LDP IPv6 FEC

The 7705 SAR supports SDPs of type LDP with far-end options using IPv6 addresses. The addresses need not be of the same family (IPv6 or IPv4) for the SDP configuration to be allowed. The user can have an SDP with an IPv4 (or IPv6) control plane for the T-LDP session and an IPv6 (or IPv4) LDP FEC as the tunnel.

Because IPv6 LSP is only supported with LDP, the use of a far-end IPv6 address is not allowed with a BGP or RSVP/MPLS LSP. In addition, the CLI does not allow an SDP with a combination of an IPv6 LDP LSP and an IPv4 LSP of a different control plane. As a result, the following commands are blocked in the SDP configuration context when the far end is an IPv6 address:

  1. bgp-tunnel
  2. lsp
  3. mixed-lsp-mode

SDP admin groups are not supported with an SDP using an LDP IPv6 FEC, and the attempt to assign them is blocked in CLI.

Services that use the LDP control plane (such as T-LDP VPLS and R-VPLS, VLL, and IES/VPRN spoke interface) have the spoke SDP (PW) signaled with an IPv6 T-LDP session when the far-end option is configured to an IPv6 address. By default, the spoke SDP for these services binds to an SDP that uses an LDP IPv6 FEC that matches the prefix of the far end address.

In addition, the IPv6 PW control word is supported with data plane packets and VCCV OAM packets. Hash label is also supported with the above services, including the signaling and negotiation of hash label support using T-LDP (Flow sub-TLV) with the LDP IPv6 control plane. Finally, network domains are supported in VPLS.

5.4.10. Mirror Services

The user can configure a spoke SDP bound to an LDP IPv6 LSP to forward mirrored packets from a mirror source to a remote mirror destination. In the configuration of the mirror destination service at the destination node, the remote-source command must use a spoke SDP with a VC ID that matches the VC-ID that is configured in the mirror destination service at the mirror source node. The far-end option is not supported with an IPv6 address.

5.4.10.1. Configuration at mirror source node

Use the following rules and syntax to configure a spoke SDP at the mirror source node.

  1. The sdp-id must match an SDP that uses an LDP IPv6 FEC.
  2. Configuring the egress-vc-label is optional.
CLI Syntax:
     no spoke-sdp sdp-id:vc-id
     spoke-sdp sdp-id:vc-id [create]
          egress
               vc-label egress-vc-label

5.4.10.2. Configuration at mirror destination node

Use the following rules and syntax to configure the mirror service at the mirror destination node.

  1. The far-end ip-address command is not supported with an LDP IPv6 transport tunnel. The user must instead reference a spoke SDP that uses an LDP IPv6 SDP coming from the mirror source node.
  2. In the spoke-sdp sdp-id:vc-id command, the vc-id should match that of the spoke-sdp configured in the mirror-destination context at the mirror source node.
  3. Configuring the ingress-vc-label is optional; both Static and T-LDP are supported.
CLI Syntax:
     far-end ip-address [vc-id vc-id] [ing-svc-label ingress-vc-label | tldp] [icb]
     no far-end ip-address
     spoke-sdp sdp-id:vc-id [create]
          ingress
               vc-label ingress-vc-label
          exit
          no shutdown
     exit

Mirroring is also supported with the PW redundancy feature when the endpoint spoke SDP, including the ICB, is using an LDP IPv6 tunnel.

5.4.11. OAM Support with LDP IPv6

MPLS OAM tools LSP ping and LSP trace can operate with LDP IPv6 and support the following:

  1. use of IPv6 addresses in the echo request and echo reply messages, including in DSMAP TLV, as per RFC 4379
  2. use of LDP IPv6 prefix target FEC stack TLV, as per RFC 4379
  3. use of IPv6 addresses in the DDMAP TLV and FEC stack change sub-TLV, as per RFC 6424
  4. use of a 127/8 IPv4-mapped IPv6 address, that is, an address in the range ::ffff:127/104, as the destination address of the echo request message, as per RFC 4379
  5. use of a 127/8 IPv4-mapped IPv6 address, that is, an address in the range ::ffff:127/104, as the path-destination address when the user wants to exercise a specific LDP ECMP path

The behavior at the sender and receiver nodes supports both LDP IPv4 and IPv6 target FEC stack TLVs. Specifically:

  1. The IP family (IPv4/IPv6) of the UDP/IP echo request message will always match the family of the LDP target FEC stack TLV as entered by the user in the prefix option.
  2. The src-ip-address option is extended to accept an IPv6 address of the sender node. If the user does not enter a source IP address, the system IPv6 address is used. If the user enters a source IP address of a different family than the LDP target FEC stack TLV, an error is returned and the command is aborted.
  3. The IP family of the UDP/IP echo reply message must match that of the received echo request message.
  4. For lsp-trace, the downstream information in DSMAP/DDMAP will be encoded as the same family as the LDP control plane of the link LDP or targeted LDP session to the downstream peer.
  5. The sender node inserts the experimental value of 65503 in the Router Alert Option in the echo request packet IPv6 header, as per RFC 5350. Once a value is allocated by IANA for MPLS OAM as part of draft-ietf-mpls-oam-ipv6-rao, it will be updated.

VCCV ping and VCCV trace for a single-hop PW support IPv6 PW FEC 128, as per RFC 6829. In addition, the PW OAM control word is supported with VCCV packets when the control-word option is enabled on the spoke SDP configuration. When the value of the Channel Type field is set to 0x57, it indicates that the Associated Channel carries an IPv6 packet, as per RFC 4385.

5.4.12. Interoperability

5.4.12.1. Interoperability with Implementations Compliant with draft-ietf-mpls-ldp-ipv6

The 7705 SAR uses a 128-bit LSR ID as defined in draft-pdutta-mpls-ldp-v2 to establish an LDP IPv6 session with a peer LSR. This is so that a routable system IPv6 address can be used by default to bring up the LDP task on the router and establish link LDP and T-LDP sessions to other LSRs. More importantly, using a 128-bit LSR ID allows for the establishment of control plane-independent LDP IPv4 and IPv6 sessions between two LSRs over the same interface or different set of interfaces because each session uses a unique LSR ID (32-bit for IPv4 and 128-bit for IPv6).

The 7705 SAR LDP implementation does not interoperate with a system using a 32-bit LSR ID (as defined in draft-ietf-mpls-ldp-ipv6) to establish an IPv6 LDP session. The latter specifies that an LSR can send both IPv4 and IPv6 Hello messages over an interface, allowing the system to establish either an IPv4 or an IPv6 LDP session with LSRs on the same subnet. It does not allow for separate LDP IPv4 and LDP IPv6 sessions between two routers.

The 7705 SAR LDP implementation interoperates with systems using a 32-bit LSR ID (as defined in draft-ietf-mpls-ldp-ipv6) to establish an IPv4 LDP session and to resolve both IPv4 and IPv6 prefix FECs.

The 7705 SAR otherwise complies with all other aspects of draft-ietf-mpls-ldp-ipv6, including the support of the dual-stack capability TLV in the Hello message. The latter is used by an LSR to inform its peer that it is capable of establishing either an LDP IPv4 or LDP IPv6 session and to convey the IP family preference for the LDP Hello adjacency and thus for the resulting LDP session. This is required because the implementation described in draft-ietf-mpls-ldp-ipv6 allows for a single session between LSRs, and both LSRs must agree if the session should be brought up using IPv4 or IPv6 when both IPv4 and IPv6 Hellos are exchanged between the two LSRs. The 7705 SAR implementation has a separate session for each IP family between two LSRs and, as such, this TLV is used to specify the family preference and to indicate that the system supports resolving IPv6 FECs over an IPv4 LDP session.

5.4.12.2. Interoperability with Implementations Compliant with RFC 5036 for IPv4 LDP Control Plane Only

Some third-party LDP implementations are compliant with RFC 5036 for LDP IPv4 but are not compliant with RFC 5036 for handling IPv6 address or IPv6 FECs over an LDP IPv4 session.

An LSR based on the 7705 SAR in a LAN with a broadcast interface can peer with any third-party LSR, including those that are incapable of handling IPv6 address or IPv6 FECs over an LDP IPv4 session. When the 7705 SAR uses the IPv4 LDP control plane to advertise IPv6 addresses or IPv6 FECs to that peer, it can cause the IPv4 LDP session to go down.

To address this issue, draft-ietf-mpls-ldp-ipv6 modifies RFC 5036 and requires compliant systems to check for the dual-stack capability TLV in the IPv4 Hello message from the peer. If the peer does not advertise this TLV, the LSR does not send IPv6 addresses and FECs to that peer. The 7705 SAR supports advertising and resolving IPv6 prefix FECs over an LDP IPv4 session using a 32-bit LSR ID in compliance with draft-ietf-mpls-ldp-ipv6.

5.4.13. Upgrading from IPv4 to IPv6

For a smooth transition from IPv4 to IPv6, it is recommended to follow the steps below.

  1. Create a new IPv6 interface in the 7750 SR management VPRN.
  2. Configure a new Layer 3 spoke SDP with LDPv6 and a far-end IPv6 address, and assign it to the new IPv6 interface.
  3. On the 7705 SAR, in the management Epipe, create an endpoint object and assign the endpoint to the existing IPv4 PW. Ensure that no traffic is lost during this step.
  4. On the 7705 SAR, create a new SDP with LDPv6 and a far-end IPv6 address.
  5. On the 7705 SAR, within the management Epipe, assign the new SDP to a spoke SDP with the same endpoint as the IPv4 spoke SDP.
  6. On the 7750 SR, shut down the IPv4 interface.
  7. On the 7705 SAR, start IPv6 traffic and ensure reachability to the 7705 SAR via the IPv6 SDP.
  8. Remove the IPv4 SDP and spokes from the 7705 SAR Epipe and 7750 SR VPRN.

Figure 28 shows an example of a network ensuring a smooth upgrade from IPv4 to IPv6 with PW redundancy.

Figure 28:  Smooth Management Transition From IPv4 to IPv6  

5.5. LDP Process Overview

Figure 29 displays the process to provision basic LDP parameters.

Figure 29:  LDP Configuration and Implementation 

5.6. Configuration Notes

Refer to the 7705 SAR Services Guide for information about signaling.

5.6.1. Reference Sources

For information on supported IETF drafts and standards, as well as standard and proprietary MIBs, refer to Standards and Protocol Support.