This chapter provides information about enabling the Label Distribution Protocol (LDP).
Topics in this chapter include:
Label Distribution Protocol (LDP) is used to distribute labels in non-traffic-engineered applications. LDP allows routers to establish LSPs through a network by mapping network-layer routing information directly to data link LSPs.
An LSP is defined by the set of labels from the ingress LER to the egress LER. LDP associates a Forwarding Equivalence Class (FEC) with each LSP it creates. A FEC is a collection of common actions associated with a class of packets. When an ingress LER assigns a label to a FEC, it must let other LSRs in the path know about the label. LDP helps to establish the LSP by providing a set of procedures that LSRs can use to distribute labels.
The FEC associated with an LSP specifies which packets are mapped to that LSP. LSPs are extended through a network as each LSR splices the incoming label for a FEC to the outgoing label assigned to the next hop for that FEC.
LDP allows an LSR to request a label from a downstream LSR so it can bind the label to a specific FEC. The downstream LSR responds to the request from the upstream LSR by sending the requested label.
LSRs can distribute a FEC label binding in response to an explicit request from another LSR. This is known as Downstream On Demand (DOD) label distribution. LSRs can also distribute label bindings to LSRs that have not explicitly requested them. This is called Downstream Unsolicited (DU). For LDP on the 7705 SAR, Downstream Unsolicited (DU) mode is implemented.
This section contains the following topics:
LDP performs dynamic label distribution in MPLS environments. LDP operation begins with a hello discovery process to form an adjacency with an LDP peer in the network. LDP peers are two MPLS routers that use LDP to exchange label/FEC mapping information. An LDP session is created between the peers. Because LDP is bidirectional, a single session allows each peer to learn the other's label mappings and to distribute its own label binding information.
LDP signaling works with the MPLS label manager to manage the relationships between labels and the corresponding FEC. For service-based FECs, LDP works in tandem with the Service Manager to identify the virtual leased lines (VLLs) and pseudowires (PWs) to signal.
An MPLS label identifies a set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling protocol (in this case LDP), and is allocated a label. The mapping between the label and the FEC is communicated to the forwarding plane. So that this processing can occur at high speeds, the forwarding plane maintains optimized tables that enable fast access and packet identification.
When an unlabeled packet ingresses the 7705 SAR, classification policies associate it with a FEC, the appropriate label is imposed on the packet, and then the packet is forwarded. Other actions can also take place on a packet before it is forwarded, including imposing additional labels, other encapsulations, or learning actions. Once all actions associated with the packet are completed, the packet is forwarded.
When a labeled packet ingresses the router, the label or stack of labels indicates the set of actions associated with the FEC for that label or label stack. The actions are performed on the packet and then the packet is forwarded.
The LDP implementation provides support for DU, ordered control, and liberal label retention mode.
For LDP label advertisement, DU mode is supported. To prevent filling the uplink bandwidth with unassigned label information, Ordered Label Distribution Control mode is supported.
A PW/VLL label can be dynamically assigned by targeted LDP operations. Targeted LDP allows the inner labels (that is, the VLL labels) in the MPLS headers to be managed automatically. This makes it easier for operators to manage the VLL connections. There is, however, additional signaling and processing overhead associated with this targeted LDP dynamic label assignment.
BFD is a simple protocol for detecting failures in a network. BFD uses a “hello” mechanism that sends control messages periodically to the far end and receives periodic control messages from the far end. BFD is implemented in asynchronous mode only, meaning that neither end responds to control messages; rather, each end sends its control messages at the interval configured locally.
A T-LDP session is a session between peers that are either directly or indirectly connected, and it requires that adjacencies be created between the two peers. BFD for T-LDP sessions supports the tracking of failures of nodes that are not directly connected. BFD timers must be configured under the system router interface context before BFD is enabled under T-LDP.
BFD tracking of an LDP session associated with a T-LDP adjacency allows for faster detection of the status of the session by registering the loopback address of the peer as the transport address.
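Based on the requirements above, a T-LDP BFD configuration might look like the following sketch. The peer address, timer values, and the exact placement of bfd-enable are illustrative assumptions and can vary by release:

```
config>router>interface "system"
    bfd 100 receive 100 multiplier 3
config>router>ldp
    targeted-session
        peer 10.0.0.2
            bfd-enable
```

The BFD timers are configured on the system interface first, as required, and BFD is then enabled for the targeted peer.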
LDP comprises a few processes that handle the protocol PDU transmission, timer-related issues, and protocol state machine. The number of processes is kept to a minimum to simplify the architecture and to allow for scalability. Scheduling within each process prevents starvation of any particular LDP session, while buffering alleviates TCP-related congestion issues.
The LDP subsystems and their relationships to other subsystems are illustrated in Figure 22. This illustration shows the interaction of the LDP subsystem with other subsystems, including memory management, label management, service management, SNMP, interface management, and RTM. In addition, debugging capabilities are provided through the logger.
Communication within LDP tasks is typically done by interprocess communication through the event queue, as well as through updates to the various data structures. The following list describes the primary data structures that LDP maintains:
Figure 22 shows the relationships between LDP subsystems and other 7705 SAR subsystems. The following sections describe how the subsystems work to provide services.
LDP does not use any memory until it is instantiated, at which point it preallocates a fixed amount of memory so that initial startup actions can be performed. Memory allocation for LDP comes out of a pool reserved for LDP that can grow dynamically as needed.
Fragmentation is minimized by allocating memory in large chunks and managing the memory internally to LDP. When LDP is shut down, it releases all memory allocated to it.
LDP assumes that the label manager is up and running. LDP will abort initialization if the label manager is not running. The label manager is initialized at system boot-up; hence anything that causes it to fail will likely indicate that the system is not functional. The 7705 SAR uses a label range from 28 672 (28K) to 131 071 (128K-1) to allocate all dynamic labels, including VC labels.
The 7705 SAR uses a single consistent interface to configure all protocols and services. CLI commands are translated to SNMP requests and are handled through an agent-LDP interface. LDP can be instantiated or deleted through SNMP. Also, targeted LDP sessions can be set up to specific endpoints. Targeted session parameters are configurable.
LDP uses the logger interface to generate debug information relating to session setup and teardown, LDP events, label exchanges, and packet dumps. Per-session tracing can be performed. Refer to the 7705 SAR System Management Guide for logger configuration information.
Most interaction occurs between LDP and the service manager, since LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.
LDP activity in the 7705 SAR is limited to service-related signaling. Therefore, the configurable parameters are restricted to system-wide parameters, such as hello and keepalive timeouts.
MPLS must be enabled when LDP is initialized. LDP makes sure that the various prerequisites are met, such as ensuring that the system IP interface and the label manager are operational, and ensuring that there is memory available. It then allocates a pool of memory to itself and initializes its databases.
In order for a targeted LDP session to be established, an adjacency has to be created. The LDP extended discovery mechanism requires hello messages to be exchanged between two peers for session establishment. Once the adjacency is established, session setup is attempted.
In the 7705 SAR, adjacency management is done through the establishment of a Service Destination Point (SDP) object, which is a service entity in the Nokia service model.
The service model uses logical entities that interact to provide a service. The service model requires the service provider to create and configure four main entities:
An SDP is the network-side termination point for a tunnel to a remote 7705 SAR or 77x0 SR router. An SDP defines a local entity that includes the system IP address of the remote 7705 SAR or 77x0 SR router and a path type.
Each SDP comprises:
If the SDP is identified as using LDP signaling, then an LDP extended hello adjacency is attempted.
If another SDP is created to the same remote destination and LDP signaling is enabled, no further action is taken, since only one adjacency and one LDP session exist between the pair of nodes.
An SDP is a unidirectional object, so a pair of SDPs pointing at each other must be configured in order for an LDP adjacency to be established. Once an adjacency is established, it is maintained through periodic hello messages.
When the LDP adjacency is established, the session setup follows as per the LDP specification. Initialization and keepalive messages complete the session setup, followed by address messages to exchange all interface IP addresses. Periodic keepalives or other session messages maintain the session liveness.
Since TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and reattempted as the back-pressure eases.
Label exchange is initiated by the service manager. When an SDP is attached to a service (that is, once the service gets a transport tunnel), a message is sent from the service manager to LDP. This causes a label mapping message to be sent. Additionally, when the SDP binding is removed from the service, the VC label is withdrawn. The peer must send a label release to confirm that the label is not in use.
The implicit null label option enables an eLER to receive MPLS packets from the previous-hop LSR without the outer LSP label.
The implicit null label is signaled by the eLER to the previous-hop LSR during FEC signaling by the LDP control protocol. When the implicit null label is signaled to the LSR, it pops the outer label before sending the MPLS packet to the eLER; this is known as penultimate hop popping.
The implicit null label option can be enabled for all LDP FECs for which the router is the eLER by using the implicit-null-label command in the config>router>ldp context.
If the implicit null configuration is changed, LDP withdraws all the FECs and readvertises them using the new label value.
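A minimal sketch of enabling this option, using the command and context named above:

```
config>router>ldp
    implicit-null-label
```

As described above, changing this setting causes LDP to withdraw all FECs and readvertise them using the new label value.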
Label actions can also occur for the following reasons:
LDP closes all sockets, frees all memory, and shuts down all its tasks when it is deleted, so that it uses no memory (0 bytes) when it is not running.
The 7705 SAR supports both inbound and outbound LDP label binding filtering.
Inbound filtering (import policy) allows the user to configure a policy to control the label bindings an LSR (Label Switch Router) accepts from its peers.
Import policy label bindings can be filtered based on the following:
The default import behavior is to accept all FECs received from peers.
Outbound filtering (export policy) allows the user to configure a policy to control the set of LDP label bindings advertised by the LSR (Label Switch Router).
Because the default behavior is to originate label bindings for the system IP address only, when a non-default loopback address is used as the transport address, the 7705 SAR will not advertise the loopback FEC automatically. With LDP export policy, the user is now able to explicitly export the loopback address in order to advertise the loopback address label and allow the node to be reached by other network elements.
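As a sketch, an export policy that advertises a non-default loopback FEC could be built as follows and applied with an LDP export policy. The policy name, prefix-list name, and loopback address are illustrative assumptions:

```
config>router>policy-options
    begin
    prefix-list "loopbacks"
        prefix 10.1.1.1/32 exact
    policy-statement "export-loopback"
        entry 10
            from
                prefix-list "loopbacks"
            action accept
    commit
config>router>ldp
    export "export-loopback"
```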
Export policy label bindings can be filtered based on the following:
Note: In order for the 7705 SAR to consider a received label to be active, the FEC advertised together with the label must have an exact match in the routing table, or a longest prefix match (if the aggregate-prefix-match option is enabled; see Multi-area and Multi-instance Extensions to LDP). This can be achieved by configuring a static route pointing to the prefix encoded in the FEC.
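For example, a static route covering the FEC prefix could be added as follows; the prefix and next-hop addresses are illustrative:

```
config>router
    static-route 10.2.2.2/32 next-hop 192.168.1.2
```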
LDP FEC statistics allow operators to monitor traffic being forwarded between any two PE routers and for all services using an LDP SDP. LDP FEC statistics are available for the egress data path at the ingress LER and LSR. Because an ingress LER is also potentially an LSR for an LDP FEC, combined egress data path statistics are provided whenever applicable. For more information, see RSVP LSP and LDP FEC Statistics.
When a network has two or more IGP areas, or instances, inter-area LSPs are required for MPLS connectivity between the PE devices that are located in the distinct IGP areas. In order to extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036, LDP Specification, requires that all /32 prefixes of PEs be leaked between the areas or instances. IGP route leaking is the distribution of the PE loopback addresses across area boundaries. An exact match of the prefix in the routing table (RIB) is required to install the prefix binding in the FIB and set up the LSP.
This behavior is the default behavior for the 7705 SAR when it is configured as an Area Border Router (ABR). However, exact prefix matching causes performance issues for the convergence of IGP on routers deployed in networks where the number of PE nodes scales to thousands of nodes. Exact prefix matching requires the RIB and FIB to contain the IP addresses maintained by every LSR in the domain and requires redistribution of a large number of addresses by the ABRs. Security is a potential issue as well, as host routes leaked between areas can be used in DoS and DDoS attacks and spoofing attacks.
To avoid these performance and security issues, the 7705 SAR can be configured for an optional behavior in which LDP installs a prefix binding in the LDP FIB by performing a longest prefix match with an aggregate prefix in the routing table (RIB). This behavior is described in RFC 5283, LDP Extension for Inter-Area Label Switched Paths. The LDP prefix binding continues to be advertised on a per-individual /32 prefix basis.
When the longest prefix match option is enabled and an LSR receives a FEC-label binding from an LDP neighbor for a prefix-address FEC element, FEC1, it installs the binding in the LDP FIB if:
When the FEC-label binding has been installed in the LDP FIB, LDP programs an NHLFE entry in the egress data path to forward packets to FEC1. LDP also advertises a new FEC-label binding for FEC1 to all its LDP neighbors.
When a new prefix appears in the RIB, LDP checks the LDP FIB to determine if this prefix is a closer match for any of the installed FEC elements. If a closer match is found, this may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed.
When a prefix is removed from the RIB, LDP checks the LDP FIB for all FEC elements that matched this prefix to determine if another match exists in the routing table. If another match exists, LDP must use it. This may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed. If another match does not exist, the LSR removes the FEC binding and sends a label withdraw message to its LDP neighbors.
If the next hop for a routing prefix changes, LDP updates the LDP FIB entry for the FEC elements that matched this prefix. It also updates the NHLFE entry for the FEC elements.
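The longest prefix match behavior described above is enabled with the aggregate-prefix-match option named earlier. A minimal sketch, assuming the option is an administratively enabled context directly under LDP:

```
config>router>ldp
    aggregate-prefix-match
        no shutdown
```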
Equal-Cost Multipath Protocol (ECMP) support for LDP performs load balancing for services that use LDP-based LSPs as transport tunnels, by having multiple equal-cost outgoing next hops for an IP prefix.
ECMP for LDP load-balances traffic across all equal-cost links using the output of a hashing algorithm whose allowed inputs depend on the service type. For detailed information, refer to “LAG and ECMP Hashing” in the 7705 SAR Interface Configuration Guide.
There is only one next-hop peer for a network link. To offer protection from a network link or next-hop peer failure, multiple network links can be configured to connect to different next-hop peers, or multiple links to the same peer. For example, an MLPPP link and an Ethernet link can be connected to two peers, or two Ethernet links can be connected to the same peer. ECMP occurs when the cost of each link reaching a target IP prefix is equal.
The 7705 SAR uses a liberal label retention mode, which retains all labels for an IP prefix from all next-hop peers. A 7705 SAR acting as an LSR load-balances the MPLS traffic over multiple links using a hashing algorithm.
The 7705 SAR supports the following optional fields as hash inputs and supports profiles for various combinations:
All of the above options can be configured with the lsr-load-balancing command, with the exception of the system IP address, which is configured with the system-ip-load-balancing command.
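Combining the commands named above, an LSR load-balancing configuration might look like the following sketch. The interface name is illustrative, and lbl-ip is one of the supported profile keywords:

```
config>system
    lsr-load-balancing lbl-ip
    system-ip-load-balancing
config>router
    interface "to-core"
        lsr-load-balancing lbl-ip use-ingress-port
```

The interface-level setting overrides the system-level setting for that interface only.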
Note: The global IF index is no longer a hash input for LSR ECMP load balancing. It has been replaced with the use-ingress-port configurable option in the lsr-load-balancing command. As well, the default treatment of the MPLS label stack has changed to focus on the bottom-of-stack label (VC label). In previous releases, all labels had equal influence.
LSR load balancing can be configured at the system level or interface level. Configuration at the interface level overrides the system-level settings for the specific interface. Configuration must be done on the ingress network interface (that is, the interface on the LDP LSR node that the packet is received on).
Configuration of load balancing at the interface level provides some control to the user; for example, the label-IP option can be disabled on a specific interface if labeled packets received on that interface include non-IP packets that the hash routine could mistake for IP packets. This can happen when the first nibble of a non-IP payload is 4, which would cause the packet to be hashed as an IPv4 packet if the label-IP option were enabled.
If ECMP is not enabled, the label from only one of the next-hop peers is selected and installed in the forwarding plane. In this case, the algorithm used to distribute the traffic flow looks up the route information, and selects the network link with the lowest IP address. If the selected network link or next-hop peer fails, another next-hop peer is selected, and LDP reprograms the forwarding plane to use the label sent by the newly selected peer.
ECMP is supported on all Ethernet ports in network mode, and is also supported on the 4-port OC3/STM1 Clear Channel Adapter card when it is configured for POS (ppp-auto) encapsulation and network mode.
For information on configuring the 7705 SAR for LSR ECMP, refer to the lsr-load-balancing and system-ip-load-balancing commands in the 7705 SAR Basic System Configuration Guide, “System Information and General Commands” and the lsr-load-balancing command in the 7705 SAR Router Configuration Guide, “Router Interface Commands”.
For information on LDP treetrace commands for tracing ECMP paths, refer to the 7705 SAR OAM and Diagnostics Guide.
Note: LDP treetrace works best with label-IP hashing (lbl-ip) enabled, rather than label-only (lbl-only) hashing. These options are set with the lsr-load-balancing command.
If an LSR is the ingress router for a given IP prefix, LDP programs a PUSH operation for the prefix in the IOM. This creates an LSP ID to the Next Hop Label Forwarding Entry (NHLFE) mapping (LTN mapping) and an LDP tunnel entry in the forwarding plane. LDP will also inform the Tunnel Table Manager (TTM) about this tunnel. Both the LSP ID to NHLFE (LTN) entry and the tunnel entry will have an NHLFE for the label mapping that the LSR received from each of its next-hop peers.
If the LSR is to function as a transit router for a given IP prefix, LDP will program a SWAP operation for the prefix in the IOM. This involves creating an Incoming Label Map (ILM) entry in the forwarding plane. The ILM entry might need to map an incoming label to multiple NHLFEs.
If an LSR is an egress router for a given IP prefix, LDP will program a POP entry in the IOM. This too will result in an ILM entry being created in the forwarding plane, but with no NHLFEs.
When unlabeled packets arrive at the ingress LER, the forwarding plane consults the LTN entry and uses a hashing algorithm to map the packet to one of the NHLFEs (PUSH label) and forward the packet to the corresponding next-hop peer. For a labeled packet arriving at a transit or egress LSR, the forwarding plane consults the ILM entry and either uses a hashing algorithm to map it to one of the NHLFEs if they exist (SWAP label) or routes the packet if there are no NHLFEs (POP label).
Graceful Restart (GR) is part of the LDP handshake process (that is, the LDP peering session initialization) and needs to be supported by both peers. GR provides a mechanism that allows the peers to cope with a service interruption due to a CSM switchover, which is a period of time when the standby CSM is not capable of synchronizing the states of the LDP sessions and labels being advertised and received.
Graceful Restart Helper (GR-Helper) decouples the data plane from the control plane so that if the control plane is not responding (that is, there is no LDP message exchange between peers), then the data plane can still forward frames based on the last known (advertised) labels.
Because the 7705 SAR supports non-stop services / high-availability for LDP (and MPLS), the full implementation of GR is not needed. However, GR-Helper is implemented on the 7705 SAR to support non-high-availability devices. With GR-Helper, if an LDP peer of the 7705 SAR requests GR during the LDP handshake, the 7705 SAR agrees to it but does not request GR. For the duration of the LDP session, if the 7705 SAR LDP peer fails, the 7705 SAR continues to forward MPLS packets based on the last advertised labels and will not declare the peer dead until the GR timer expires.
Graceful handling of resource exhaustion enhances the behavior of LDP when a data path or CSM resource required for the resolution of a FEC is exhausted. In prior releases, the entire LDP protocol was shut down, causing all LDP peering sessions to be torn down and therefore impacting all peers. The user was required to fix the issue that caused the FEC scaling limit to be exceeded, and then to restart LDP by executing the no shutdown CLI command. With graceful handling of resource exhaustion, only the responsible session or sessions are shut down, which impacts only the appropriate peer or peers.
Graceful handling of resources implements a capability by which the LDP interface to the peer, or the targeted peer in the case of a targeted LDP (T-LDP) session, is shut down.
If LDP tries to resolve a FEC over a link or a T-LDP session and runs out of data path or CSM resources, LDP brings down that interface or targeted peer, which brings down the Hello adjacency over that interface to all linked LDP peers or to the targeted peer. The interface is brought down for the LDP context only and is still available to other applications such as IP forwarding and RSVP LSP forwarding.
After taking action to free up resources, the user must manually perform a no shutdown command on the interface or the targeted peer to bring it back into operation. This re-establishes the Hello adjacency and resumes the resolution of FECs over the interface or to the targeted peer.
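For example, after freeing resources, the affected LDP interface or targeted peer is returned to service as in the following sketch; the interface name and peer address are illustrative:

```
config>router>ldp
    interface-parameters
        interface "to-peer-1"
            no shutdown
    targeted-session
        peer 10.0.0.2
            no shutdown
```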
Unnumbered interfaces are point-to-point interfaces that are not explicitly configured with a dedicated IP address and subnet; instead, they borrow (or link to) an IP address from another interface on the system (the system IP address, another loopback interface, or any other numbered interface) and use it as the source IP address for packets originating from the interface. For more information on support for unnumbered interfaces, refer to the 7705 SAR Router Configuration Guide, “Unnumbered Interfaces”.
This feature allows LDP to establish a Hello adjacency and to resolve unicast FECs over unnumbered LDP interfaces.
For example, suppose LSR A and LSR B are the two endpoints of an unnumbered link. These interfaces are identified on each system by their unique link local identifiers. The combination of router ID and link local identifier uniquely identifies the interface in IS-IS throughout the network.
A borrowed IP address is also assigned to the interface to be used as the source address of IP packets that must originate from the interface. The borrowed IP address defaults to the system interface address (A and B in this example). The address can be borrowed from any IP interface by using the following CLI command: config>router>interface>unnumbered {ip-int-name | ip-address}.
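For example, an unnumbered interface that borrows the system address could be configured as follows; the interface name and port identifier are illustrative:

```
config>router
    interface "to-lsr-b"
        port 1/1/1
        unnumbered "system"
```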
The fec-originate command, which defines how to originate a FEC for egress and non-egress LSRs, includes a parameter to specify the name of the interface that the label for the originated FEC is swapped to. For an unnumbered interface, this parameter is mandatory because an unnumbered interface does not have its own IP address.
When the unnumbered interface is added into LDP, the following behavior occurs.
For link LDP (L-LDP) sessions:
For targeted LDP (T-LDP) sessions:
FEC resolution:
All LDP features supported for numbered IP interfaces are supported for unnumbered interfaces, with the following exceptions:
The unnumbered interface feature also extends the support of LSP ping and LSP traceroute to test an LDP unicast FEC that is resolved over an unnumbered LDP interface.
LDP Fast Reroute (FRR) provides local protection for an LDP FEC by precalculating and downloading a primary and a backup NHLFE for the FEC to the LDP FIB. The primary NHLFE corresponds to the label of the FEC received from the primary next hop as per the standard LDP resolution of the FEC prefix in the RTM. The backup NHLFE corresponds to the label received for the same FEC from a Loop-Free Alternate (LFA) next hop.
LDP FRR protects against single link or single node failure. SRLG failure protection is not supported.
Without FRR, when a local link or node fails, the router must signal the failure to its neighbors via the IGP providing the routing (OSPF or IS-IS), recalculate primary next-hop NHLFEs for all affected FECs, and update the FIB. Until the new primary next hops are installed in the FIB, any traffic destined for the affected FECs is discarded. This process can take hundreds of milliseconds.
LDP FRR improves convergence in case of a local link or node failure in the network, by using the label-FEC binding received from the LFA next hop to forward traffic for a given prefix as soon as the primary next hop is not available. This means that a router resumes forwarding LDP packets to a destination prefix using the backup path without waiting for the routing convergence. Convergence times should be similar to RSVP-TE FRR, in the tens of milliseconds.
OSPF or IS-IS must perform the Shortest Path First (SPF) calculation of an LFA next hop, as well as the primary next hop, for all prefixes used by LDP to resolve FECs. The IGP also populates both routes in the RTM.
When LDP FRR is enabled and an LFA backup next hop exists for the FEC prefix in the RTM, or for the longest prefix the FEC prefix matches to when the aggregate-prefix-match option is enabled, LDP will program the data path with both a primary NHLFE and a backup NHLFE for each next hop of the FEC.
In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the standard FRR failover procedures, which are also supported for RSVP-TE FRR.
When any of the following events occurs, the backup NHLFE is enabled for each affected FEC next hop:
Refer to RFC 5286, Basic Specification for IP Fast Reroute: Loop-Free Alternates, for more information on LFAs.
If ECMP is enabled, which provides multiple primary next hops for a prefix, LDP FRR is not used. That is, the LFA next hops are not populated in the RTM and the ECMP paths are used instead.
IGP shortcuts are an MPLS functionality where LSPs are treated like physical links within IGPs; that is, LSPs can be used for next-hop reachability. If an RSVP-TE LSP is used as a shortcut by OSPF or IS-IS, it is included in the SPF calculation as a point-to-point link for both primary and LFA next hops. It can also be advertised to neighbors so that the neighboring nodes can also use the links to reach a destination via the advertised next hop.
IGP shortcuts can be used to simplify remote LFA support and simplify the number of LSPs required in a ring topology.
When both IGP shortcuts and LFA are enabled under OSPF or IS-IS, and LDP FRR is also enabled, the following applies:
To configure LDP FRR, LFA calculation by the SPF algorithm must first be enabled under the OSPF or IS-IS protocol level with the command:
config>router>ospf>loopfree-alternate
or
config>router>ospf3>loopfree-alternate
or
config>router>isis>loopfree-alternate
Next, LDP must be enabled to use the LFA next hop with the command config>router>ldp>fast-reroute.
If IGP shortcuts are used, they must be enabled under the OSPF or IS-IS routing protocol. As well, they must be enabled under the MPLS LSP context, using the command config>router>mpls>lsp>igp-shortcut.
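Putting the commands above together, an LDP FRR configuration using IS-IS with an IGP shortcut might look like the following sketch (the LSP name is illustrative):

```
config>router>isis
    loopfree-alternate
config>router>ldp
    fast-reroute
config>router>mpls
    lsp "to-pe2"
        igp-shortcut
```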
For information on LFA and IGP shortcut support for OSPF and IS-IS, refer to the 7705 SAR Routing Protocols Guide, “LDP and IP Fast Reroute for OSPF Prefixes” and “LDP and IP Fast Reroute for IS-IS Prefixes”.
Both LDP FRR and IP FRR are supported; for information on IP FRR, refer to the 7705 SAR Router Configuration Guide, “IP Fast Reroute (FRR)”.
This feature provides stitching between an LDP FEC and an SR node SID route for the same IPv4 /32 IS-IS prefix by allowing the export of SR tunnels from the Tunnel Table Manager (TTM) to LDP (IGP). In the LDP-to-SR data path direction, the LDP tunnel table route export policy supports the exporting of SR tunnels from the TTM to LDP.
A route policy option is configured to support LDP-to-SR stitching using the config>router>policy-options context. Refer to the 7705 SAR Router Configuration Guide, “Configuring LDP-to-Segment Routing Stitching Policies”, for a configuration example and to “Route Policy Command Reference” for information on the commands that are used.
After the route policy option is configured, the SR tunnels are exported from the TTM into LDP (IGP) using the config>router>ldp>export-tunnel-table command. See LDP Command Reference for more information on this command.
When configuring a route policy option, the user can restrict the exporting of SR tunnels from the TTM to LDP to the prefixes in a specific prefix list; excluding a prefix from the list prevents the corresponding SR tunnel from being exported.
The user can also restrict the exporting of SR tunnels from the TTM to a specific IS-IS IGP instance by specifying the instance ID in the from protocol statement. The from protocol statement is valid only when the protocol value is isis. Policy entries with any other protocol value are ignored when the route policy is applied. If the user configures multiple from protocol statements in the same policy or does not include the from protocol statement but adds a default-action of accept, then LDP routing uses the lowest instance ID in the IS-IS protocol to select the SR tunnel.
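As a sketch, an LDP-to-SR export policy restricted to a single IS-IS instance might be built as follows and applied with the export-tunnel-table command named below the policy. The policy name is illustrative, and the exact syntax for specifying the instance ID in the from protocol statement is an assumption that can vary by release:

```
config>router>policy-options
    begin
    policy-statement "ldp-to-sr"
        entry 10
            from
                protocol isis instance 0
            action accept
    commit
config>router>ldp
    export-tunnel-table "ldp-to-sr"
```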
When the routing policy is enabled, LDP checks the SR tunnel entries in the TTM. Whenever an LDP FEC primary next hop cannot be resolved using an RTM route and an SR tunnel of type isis to the same destination IPv4 /32 prefix matches an entry in the export policy, LDP programs an LDP ILM and stitches it to the SR node SID tunnel endpoint. LDP then originates a FEC for the prefix and redistributes it to its LDP peer. When an LDP FEC is stitched to an SR tunnel, forwarded packets benefit from the protection of the LFA/remote LFA or TI-LFA backup next hop of the SR tunnel.
When resolving a FEC, LDP attempts a resolution in the RTM before attempting a resolution in the TTM, when both are available. That is, a swapping operation from the LDP ILM to an LDP NHLFE is attempted before stitching the LDP ILM to an SR tunnel endpoint.
In the SR-to-LDP data path direction, the SR mapping server provides a global policy for the prefixes corresponding to the LDP FECs the SR needs to stitch to. Therefore, a tunnel table export policy is not used. The user enables the exporting of the LDP tunnels for FEC prefixes advertised by the mapping server to an IGP instance using the command config>router>isis>segment-routing>export-tunnel-table ldp. Refer to the 7705 SAR Routing Protocols Guide, “IS-IS Command Reference”, for more information on this command.
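For the SR-to-LDP direction, only the IGP-side command given above is needed; a minimal sketch:

```
config>router>isis
    segment-routing
        export-tunnel-table ldp
```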
When the export-tunnel-table ldp command is enabled, the IGP monitors the LDP tunnel entries in the TTM. Whenever an IPv4 /32 LDP tunnel destination matches a prefix for which the IGP received a prefix SID sub-TLV from the mapping server, the IGP instructs the SR module to program the SR ILM and to stitch it to the LDP tunnel endpoint. The SR ILM can stitch to an LDP FEC resolved over the LDP link. When an SR tunnel is stitched to an LDP FEC, forwarded packets benefit from the protection of the LFA backup next hop of the LDP FEC.
When resolving a node SID, the IGP attempts a resolution of the prefix SID received in an IP reachability TLV before attempting a resolution of a prefix SID received via the mapping server, when both are available. That is, a swapping operation of the SR ILM to an SR NHLFE is attempted before stitching it to an LDP tunnel endpoint. Refer to the 7705 SAR Routing Protocols Guide, “Prefix SID Resolution for a Segment Routing Mapping Server”, for more information about prefix SID resolution.
It is recommended that the bfd-enable option be enabled on the interfaces for both LDP and IGP contexts to speed up the failure detection and the activation of the LFA/remote LFA backup next hop in either direction. This applies particularly for remote failures. For the LDP context, the config>router>ldp>interface-parameters>interface>bfd-enable command string is used; see LDP Commands. For the IGP context, the config>router>isis>interface>bfd-enable command string is used; refer to the 7705 SAR Routing Protocols Guide, “IS-IS Command Reference”.
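A sketch of the recommended BFD enablement in both contexts (the interface name is hypothetical):

```
config>router>ldp
    interface-parameters
        interface "to-R2"        # hypothetical interface name
            bfd-enable
config>router>isis
    interface "to-R2"
        bfd-enable
```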
The sections that follow describe how stitching is performed in the LDP-to-SR and SR-to-LDP data path directions.
Stitching in the data plane in the LDP-to-SR direction is based on the LDP module monitoring the TTM for an SR tunnel of a prefix matching an entry in the LDP TTM export policy.
In Figure 23, router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP. Link R1-R2 is LDP-enabled, but router R2 does not support SR or SR is disabled.
The following steps are performed by the boundary router R1 to configure stitching:
![]() | Note: If R1 has already resolved an LDP FEC for prefix Y, it has an ILM assigned to it. However, this ILM will not be updated to point toward the SR tunnel because LDP attempts a resolution in the RTM before attempting a resolution in the TTM. Therefore, an LDP tunnel is selected before an SR tunnel. Similarly, if an LDP FEC is received after the stitching is programmed, the LDP ILM is updated to point to the LDP NHLFE because LDP is able to resolve the LDP FEC in the RTM. |
Stitching in the data plane in the SR-to-LDP direction is based on the IGP monitoring the TTM for an LDP tunnel of a prefix matching an entry in the SR TTM export policy.
In Figure 23, router R1 is at the boundary between an SR domain and an LDP domain and is configured to stitch between SR and LDP. Link R1-R2 is LDP-enabled but router R2 does not support SR or SR is disabled.
The following steps are performed by the boundary router R1 to configure stitching:
When stitching is performed between an LDP FEC and an SR IS-IS node SID tunnel, the TTL of the outer LDP or SR label is decreased, similar to a regular swapping operation at an LSR.
This feature allows an SR tunnel to be used as a remote LFA or TI-LFA backup tunnel next hop by an LDP FEC. The feature is enabled using the CLI command string config>router>ldp>fast-reroute backup-sr-tunnel. See LDP Commands for more information.
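A minimal sketch of enabling the feature together with an LFA option (the remote-lfa argument shown is an assumption; the loopfree-alternate options are discussed in the note that follows):

```
config>router>isis
    loopfree-alternate remote-lfa    # remote-lfa/ti-lfa arguments assumed
config>router>ldp
    fast-reroute backup-sr-tunnel
```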
This feature requires the LDP-to-Segment Routing Stitching for IPv4 /32 Prefixes (IS-IS) feature as a prerequisite, because the LSR performs the stitching of the LDP ILM to an SR tunnel when the primary LDP next hop of the FEC fails. Therefore, LDP monitors SR tunnels programmed by the IGP in the TTM without the need for a mapping server.
It is assumed that:
![]() | Note: The loopfree-alternate options can be enabled separately or together. If both options are enabled, TI-LFA backup takes precedence over remote LFA backup. |
If the IGP LFA SPF does not find a regular LFA backup next hop for an LDP FEC prefix, it runs the TI-LFA and remote LFA algorithms. If the IGP LFA SPF finds a remote LFA or TI-LFA tunnel next hop, LDP programs the primary next hop of the FEC using an LDP NHLFE and programs the remote LFA or TI-LFA backup tunnel next hop using an LDP NHLFE pointing to the SR tunnel endpoint.
![]() | Note: The LDP packet is not sent over the SR tunnel. The LDP label is stitched to the segment routing label stack. LDP points both the LDP ILM and the LTN to the backup LDP NHLFE, which uses the SR tunnel endpoint. |
The following describes the behavior of this feature.
![]() | Note: If the LDP FEC primary next hop failure impacts only the LDP tunnel primary next hop but not the SR tunnel primary next hop, the LDP backup NHLFE points to the primary next hop of the SR tunnel; the LDP ILM/LTN traffic follows this path instead of the remote LFA or TI-LFA next hop of the SR tunnel until the remote LFA or TI-LFA next hop is activated. |
The operation of a network can be compromised if an unauthorized system is able to form or hijack an LDP session and inject control packets by falsely representing itself as a valid neighbor. This risk can be mitigated by enabling TCP MD5 authentication on one or more of the sessions.
When TCP MD5 authentication is enabled on a session, every TCP segment exchanged with the peer includes a TCP option (19) containing a 16-byte MD5 digest of the segment (more specifically the TCP/IP pseudo-header, TCP header, and TCP data). The MD5 digest is generated and validated using an authentication key that must be known to both sides. If the received digest value is different from the locally computed one, the TCP segment is dropped, thereby protecting the router from a spoofed TCP segment.
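The digest computation can be sketched in Python (an illustrative sketch only; a real implementation operates on the exact wire-format pseudo-header and a TCP header with the checksum field zeroed and the MD5 option itself excluded):

```python
import hashlib
import socket
import struct

def tcp_md5_digest(src_ip: str, dst_ip: str, tcp_header: bytes,
                   tcp_data: bytes, key: bytes) -> bytes:
    """Compute an RFC 2385-style TCP MD5 digest (illustrative sketch).

    The digest covers the TCP/IP pseudo-header, the TCP header (checksum
    assumed already zeroed, options excluded), the segment data, and
    finally the shared authentication key.
    """
    seg_len = len(tcp_header) + len(tcp_data)
    # Pseudo-header: source address, destination address, zero, protocol (6), length
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 6, seg_len))
    return hashlib.md5(pseudo + tcp_header + tcp_data + key).digest()
```

Both peers must derive the same 16-byte digest from the shared key; a segment whose received digest differs from the locally computed one is dropped.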
The TCP Enhanced Authentication Option, as specified in draft-bonica-tcpauth-05.txt, Authentication for TCP-based Routing and Management Protocols, is a TCP extension that enhances security for LDP, BGP, and other TCP-based protocols. It extends the MD5 authentication option to include the ability to change keys in an LDP or BGP session seamlessly without tearing down the session, and allows for stronger authentication algorithms to be used. It is intended for applications where secure administrative access to both endpoints of the TCP connection is normally available.
TCP peers can use this extension to authenticate messages passed between one another. This strategy improves upon the practice described in RFC 2385, Protection of BGP Sessions via the TCP MD5 Signature Option. Using this new strategy, TCP peers can update authentication keys during the lifetime of a TCP connection. TCP peers can also use stronger authentication algorithms to authenticate routing messages.
TCP enhanced authentication uses keychains that are associated with every protected TCP connection.
Keychains are configured in the config>system>security>keychain context. For more information about configuring keychains, refer to the 7705 SAR System Management Guide, “TCP Enhanced Authentication and Keychain Authentication”.
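A heavily hedged sketch of associating a keychain with an LDP peer (all names, the key value, the peer address, and the exact entry syntax are assumptions; refer to the 7705 SAR System Management Guide for the authoritative keychain syntax):

```
config>system>security
    keychain "ldp-keys"                  # hypothetical keychain name
        direction
            bi
                entry 1 key "Secret-1" algorithm message-digest
                    begin-time now
        no shutdown
config>router>ldp
    peer-parameters
        peer 192.0.2.2                   # hypothetical peer address
            auth-keychain "ldp-keys"
```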
The 7705 SAR supports point-to-multipoint mLDP. This section contains information on the following topics:
A node running LDP also supports point-to-multipoint LSP setup using LDP. By default, the node advertises this capability to a peer node using the point-to-multipoint capability TLV in the LDP initialization message.
The multicast-traffic configuration option (per interface) restricts or allows the use of an interface for LDP multicast traffic forwarding towards a downstream node. The interface configuration option does not restrict or allow the exchange of the point-to-multipoint FEC by way of an established session to the peer on an interface, but only restricts or allows the use of next hops over the interface.
Only a single generic identifier range is defined for signaling a multipoint data tree (MDT) for all client applications. The implementation on the 7705 SAR reserves the range 1 to 8292 for generic point-to-multipoint LSP ID values for static point-to-multipoint LSPs on the root node.
When a transit or leaf node detects that the upstream node towards the root node of a multicast tree has changed, the node follows a graceful procedure that allows a make-before-break (MBB) transition to the new upstream node. MBB support is optional and is controlled via the mp-mbb-time command. If the new upstream node does not support MBB procedures, the downstream node waits for the configured timer to time out before switching over to the new upstream node.
If multiple ECMP paths exist between two adjacent nodes, then the upstream node of the multicast receiver programs all entries in the forwarding plane. Only one entry is active and it is based on the ECMP hashing algorithm.
This feature allows a downstream LSR of a multicast LDP (mLDP) FEC to perform a fast switchover in order to source the traffic from another upstream LSR while IGP and LDP are converging due to a failure of the upstream LSR, where the upstream LSR is the primary next hop of the root LSR for the point-to-multipoint FEC. The feature is enabled through the mcast-upstream-frr command.
The feature provides upstream fast reroute (FRR) node protection for mLDP FEC packets. The protection is at the expense of traffic duplication from two different upstream nodes into the node that performs the fast upstream switchover.
The detailed procedures for this feature are described in draft-pdutta-mpls-mldp-up-redundancy.
To enable the mLDP fast upstream switchover feature, configure the following option in the CLI:
config>router>ldp>mcast-upstream-frr
When mcast-upstream-frr is enabled and LDP is resolving an mLDP FEC received from a downstream LSR, LDP checks for the existence of an ECMP next hop or a loop-free alternate (LFA) next hop to the root LSR node. If LDP finds one, it programs a primary incoming label map (ILM) on the interface corresponding to the primary next hop and a backup ILM on the interface corresponding to the ECMP or LFA next hop. LDP then sends the corresponding labels to both upstream LSR nodes. In normal operation, the primary ILM accepts packets and the backup ILM drops them. If the interface or the upstream LSR of the primary ILM goes down, causing the LDP session to go down, the backup ILM starts accepting packets.
To use the ECMP next hop, configure the ecmp max-ecmp-routes value in the system to be at least 2, using the following command:
config>router>ecmp max-ecmp-routes
To use the LFA next hop, enable LFA using the following commands (as needed):
config>router>isis>loopfree-alternate
or
config>router>ospf>loopfree-alternate
Enabling IP FRR or LDP FRR is not strictly required, since LDP only needs to know the location of the alternate next hop to the root LSR in order to send the label mapping message and program the backup ILM during the initial signaling of the tree. That is, enabling the LFA option is sufficient for providing the backup ILM information. However, if unicast IP and LDP prefixes need to be protected, then IP FRR and LDP FRR—and the mLDP fast upstream switchover—can be enabled concurrently using the following commands:
config>router>ip-fast-reroute
or
config>router>ldp>fast-reroute
An mLDP FRR fast switchover relies on the fast detection of a lost LDP session to the upstream peer to which the primary ILM label had been advertised. To ensure fast detection of a lost LDP session, do the following:
This feature allows a downstream LSR to send a label binding to two upstream LSR nodes, but only accept traffic as follows:
A candidate upstream LSR node must be either an ECMP next hop or an LFA next hop. Either option allows the downstream LSR to perform a fast switchover and to source the traffic from another upstream LSR while IGP is converging due to a failure of the LDP session of the upstream peer, which is the primary next hop of the root LSR for the point-to-multipoint FEC. That is, the candidate upstream LSR node provides upstream FRR node protection for the mLDP FEC packets.
Multicast LDP fast upstream switchover is illustrated in Figure 24. LSR U is the primary next hop for the root LSR R of the point-to-multipoint FEC. LSR U' is an ECMP or LFA backup next hop for the root LSR R of the same point-to-multipoint FEC.
In Figure 24, downstream LSR Z sends a label mapping message to both upstream LSR nodes, and programs the primary ILM on the interface to LSR U and the backup ILM on the interface to LSR U'. The labels for the primary and backup ILMs must be different. Thus LSR Z attracts traffic from both ILMs. However, LSR Z blocks the ILM on the interface to LSR U' and only accepts traffic from the ILM on the interface to LSR U.
If the link to LSR U fails, or LSR U fails, causing the LDP session to LSR U to go down, LSR Z will detect the failure and reverse the ILM blocking state. In addition, LSR Z immediately starts receiving traffic from LSR U' until IGP converges and provides a new primary next hop and a new ECMP or LFA backup next hop, which may or may not be on the interface to LSR U'. When IGP convergence is complete, LSR Z updates the primary and backup ILMs in the datapath.
![]() | Note: LDP uses the interface of either an ECMP next hop or an LFA next hop to the root LSR prefix, whichever is available, to program the backup ILM. However, ECMP next hop and LFA next hop are mutually exclusive for a given prefix. IGP installs the ECMP next hop in preference to the LFA next hop as a prefix in the routing table manager (RTM). |
If one or more ECMP next hops for the root LSR prefix exist, LDP picks the interface for the primary ILM based on the rules of mLDP FEC resolution specified in RFC 6388, Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths:
LDP then picks the interface for the backup ILM using the following new rules:
if (H + 1 < NUM_ECMP) {
    // The hashed entry is not the last next hop; pick the next one as backup.
    backup = H + 1;
} else {
    // Wrap around and pick the first.
    backup = 1;
}
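The backup selection above can be sketched in Python (the 1-based next-hop indexing is an assumption carried over from the pseudocode's wrap-around to 1; H is the index the mLDP FEC hash chose for the primary ILM):

```python
def pick_backup_next_hop(h: int, num_ecmp: int) -> int:
    """Pick the backup ILM next-hop index from the ECMP set.

    h is the 1-based index selected for the primary ILM by the mLDP
    FEC hash; num_ecmp is the number of ECMP next hops to the root LSR.
    """
    if h + 1 < num_ecmp:
        # The hashed entry is not the last next hop; pick the next one as backup.
        return h + 1
    # Wrap around and pick the first.
    return 1
```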
In some topologies, it is possible that no ECMP or LFA next hop is found. In this case, LDP programs the primary ILM only.
The 7705 SAR extends the LDP control plane and data plane to support LDP IPv6 adjacencies and sessions using 128-bit LSR ID.
The implementation allows for concurrent support of independent LDP IPv4 (which uses a 32-bit LSR ID) and LDP IPv6 adjacencies and sessions between peer LSRs, over the same interface or over different sets of interfaces.
Figure 25 shows an example of an LDP adjacency and session over an IPv6 interface.
LSR-A and LSR-B have the following IPv6 LDP identifiers respectively:
By default, LSR-A/128 and LSR-B/128 use the system interface IPv6 address.
Although the LDP control plane can operate using only the IPv6 system address, it is recommended that the user configure the IPv4-formatted router ID in order for OSPF, IS-IS, and BGP to operate properly.
The following sections describe LDP IPv6 behavior on the 7705 SAR:
LDP IPv6 uses a 128-bit LSR ID as defined in draft-pdutta-mpls-ldp-v2-00. See LDP Process Overview for more information about interoperability of this implementation with a 32-bit LSR ID, as defined in draft-ietf-mpls-ldp-ipv6-14.
A Hello adjacency is brought up using a link Hello packet with a source IP address set to the interface link local unicast address and a destination IP address set to the link local multicast address FF02:0:0:0:0:0:0:2.
The transport address for the TCP connection, which is encoded in the Hello packet, is set by default to the LSR ID of the LSR. The transport address is instead set to the interface IPv6 address if the user enables the interface option in one of the following contexts:
The user can configure the local-lsr-id option on the interface and change the value of the LSR ID to either the local interface or another interface name, including a loopback. The global unicast IPv6 address corresponding to the primary IPv6 address of the interface is used as the LSR ID. If the interface used for the transport address or the local-lsr-id option does not have a global unicast IPv6 address, the session does not come up and an error message is displayed.
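An illustrative sketch of the options described above (the interface and loopback names are hypothetical, and the exact ipv6-context syntax may differ by release):

```
config>router>ldp
    interface-parameters
        interface "to-peer-1"                    # hypothetical interface name
            ipv6
                transport-address interface
                local-lsr-id interface-name "loopback-1"
```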
The LSR with the highest transport address will bootstrap the IPv6 TCP connection and IPv6 LDP session.
The source and destination addresses of LDP/TCP session packets are the IPv6 transport addresses.
The source and destination addresses of targeted Hello packets are the LDP IPv6 LSR IDs of systems A and B in Figure 25.
The user can configure the local-lsr-id option on the targeted session and change the value of the LSR ID to either the local interface or another interface name, including a loopback. The global unicast IPv6 address corresponding to the primary IPv6 address of the interface is used as the LSR ID. If the user invokes an interface that does not have a global unicast IPv6 address in the configuration of the transport address or of the local-lsr-id option, the session does not come up and an error message is displayed. In all cases, the transport address for the LDP session and the source IP address of targeted Hello messages are updated with the new LSR ID value.
The LSR with the highest transport address (in this case, the LSR ID) will bootstrap the IPv6 TCP connection and IPv6 LDP session.
The source and destination IP addresses of LDP/TCP session packets are the IPv6 transport addresses, which in this case are the LDP LSR IDs of systems A and B in Figure 25.
LDP advertises and withdraws all interface IPv6 addresses using the Address/Address-Withdraw message. Both the link local unicast address and the configured global unicast addresses of an interface are advertised.
Like LDP IPv4 sessions, LDP FEC types can be exchanged over an LDP IPv6 session. The LSR does not advertise a FEC for a link local address and, if received, the LSR will not resolve it.
An IPv4 or IPv6 prefix FEC can be resolved to an LDP IPv6 interface in the same way it is resolved to an LDP IPv4 interface. The outgoing interface and next hop are looked up in the RTM cache. The next hop can be the link local unicast address of the other side of the link or a global unicast address. The FEC is resolved to the LDP IPv6 interface of the downstream LDP IPv6 LSR that advertised the IPv4 or IPv6 address of the next hop.
A PW FEC can be resolved to a targeted LDP IPv6 adjacency with an LDP IPv6 LSR if there is a context for the FEC with local spoke SDP configuration or spoke SDP auto-creation from a service such as BGP-AD VPLS, BGP-VPWS, or dynamic MS-PW.
LDP can advertise all FEC types over an LDP IPv4 or an LDP IPv6 session. The FEC types are: IPv4 prefix FEC, IPv6 prefix FEC, IPv4 P2MP FEC (with MVPN), and PW FEC 128.
LDP also supports signaling the enabling or disabling of the advertisement of the following subset of FEC types during the LDP IPv4 or IPv6 session initialization phase, and when the session is already up.
During LDP session initialization, each LSR indicates to its peers which FEC type it supports by including the capability TLV for it in the LDP initialization message. The 7705 SAR enables the IPv4 and IPv6 Prefix FEC types by default and sends their corresponding capability TLVs in the LDP initialization message. If one or both peers advertise the disabling of a capability in the LDP Initialization message, no FECs of the corresponding FEC type are exchanged between the two peers for the lifetime of the LDP session unless a capability message is sent to explicitly enable it. The same behavior applies if no capability TLV for a FEC type is advertised in the LDP initialization message, except for the IPv4 prefix FEC which is assumed to be supported by all implementations by default.
Dynamic Capability, as defined in RFC 5561, allows all FEC types to update the enabled or disabled state after the LDP session initialization phase. An LSR informs its peer that it supports Dynamic Capability by including the Dynamic Capability Announcement TLV in the LDP initialization message. If both LSRs advertise this capability, the user can enable or disable any of the above FEC types while the session is up and the change takes effect immediately. The LSR then sends a SAC capability message with the IPv4 or IPv6 SAC element having the D-bit (Disable-bit) set or reset, or the P2MP capability TLV (IPv4 only) in a capability message with the S-bit (State-bit) set or reset. Each LSR then takes the consequent action of withdrawing or advertising the FECs of that type to the peer LSR. If one or both LSRs did not advertise the Dynamic Capability Announcement TLV in the LDP initialization message, any change to the enabled or disabled FEC types only takes effect the next time the LDP session is restarted.
The user can enable or disable a specific FEC type for a given LDP session to a peer by using the following CLI commands:
Adjacency-level FEC-type capability advertisement is defined in draft-pdutta-mpls-ldp-adj-capability. By default, all FEC types supported by the LSR are advertised in the LDP IPv4 or IPv6 session initialization; see LDP Session Capabilities for more information. If a given FEC type is enabled at the session level, it can be disabled over a given LDP interface at the IPv4 or IPv6 adjacency level for all IPv4 or IPv6 peers over that interface. If a given FEC type is disabled at the session level, then FECs will not be advertised and enabling that FEC type at the adjacency level will not have any effect. The LDP adjacency capability can be configured on link Hello adjacency only and does not apply to targeted Hello adjacency.
The LDP adjacency capability TLV is advertised in the Hello message with the D-bit (Disable-bit) set or reset to disable or enable the resolution of this FEC type over the link of the Hello adjacency. It is used to restrict which FECs can be resolved over a given interface to a peer. This provides the ability to dedicate links and data path resources to specific FEC types. For IPv4 and IPv6 prefix FECs, a subset of ECMP links to an LSR peer may be configured to carry one of the two FEC types. An mLDP P2MP FEC (IPv4 only) can exclude specific links to a downstream LSR from being used to resolve this type of FEC.
Like the LDP session-level FEC-type capability, the adjacency FEC-type capability is negotiated for both directions of the adjacency. If one or both peers advertise the disabling of a capability in the LDP Hello message, no FECs of the corresponding FEC type will be resolved by either peer over the link of this adjacency for the lifetime of the LDP Hello adjacency, unless one or both peers sends the LDP adjacency capability TLV subsequently to explicitly enable it.
The user can enable or disable a specific FEC type for a given LDP interface to a peer by using the following CLI commands:
These commands, when applied to the IPv4 P2MP FEC, deprecate the existing multicast-traffic command under the interface. Unlike the session-level capability, these commands can disable multicast FEC for IPv4.
The encoding of the adjacency capability TLV uses a PRIVATE Vendor TLV. It is used only in a Hello message to negotiate a set of capabilities for a specific LDP IPv4 or IPv6 hello adjacency.
The value of the U-bit for the TLV is set to 1 so that a receiver silently ignores the TLV if it is deemed unknown.
The value of the F-bit is 0. After being advertised, this capability cannot be withdrawn; thus, the S-bit is set to 1 in a Hello message.
Adjacency capability elements are encoded as follows:
D bit: Controls the capability state.
CapFlag: The adjacency capability
Each CapFlag appears no more than once in the TLV. If duplicates are found, the D-bit of the first element is used. For forward compatibility, if the CapFlag is unknown, the receiver must silently discard the element and continue processing the rest of the TLV.
When an LDP LSR initializes the LDP session to the peer LSR and the session comes up, IP addresses and FECs are distributed. Local IPv4 and IPv6 interface addresses are exchanged using the Address and Address Withdraw messages. FECs are exchanged using label mapping messages.
By default, IPv6 address distribution is determined by whether the dual-stack capability TLV, which is defined in draft-ietf-mpls-ldp-ipv6, is present in the Hello message from the peer. This requirement is designed to address interoperability issues found with existing third-party LDP IPv4 implementations.
IP address and FEC distribution behavior is described below.
The above behavior applies to LDP IPv4 and IPv6 addresses and FECs. The procedure is summarized in the flowchart diagrams in Figure 26 and Figure 27.
The IGP-LDP synchronization and the static route-to-LDP synchronization features are modified to operate on a dual-stack IPv4 or IPv6 LDP interface as follows.
Given the above behavior, it is recommended that the user configures the sync timer to a value which allows enough time for both the LDP IPv4 and LDP IPv6 sessions to come up.
The operation of BFD over an LDP interface tracks the next hop of an IPv4 or IPv6 prefix in addition to tracking the LDP peer address of the Hello adjacency over that link. Tracking is required because LDP can resolve both IPv4 and IPv6 prefix FECs over a single IPv4 or IPv6 LDP session; therefore, the next hop of a prefix does not necessarily match the LDP peer source address of the Hello adjacency. If any of the BFD tracking sessions fail, the LFA backup NHLFE for the FEC is activated or, if there is no FRR backup, the FEC is re-resolved.
The user can configure tracking with only an IPv4 BFD session, only an IPv6 BFD session, or with both using the config>router>ldp>if-params>if>bfd-enable [ipv4] [ipv6] command.
This command provides flexibility in case the user does not need to track both Hello adjacency and the next hops of FECs.
For example, if the user configures bfd-enable ipv6 only to save on the number of BFD sessions, then LDP will track the IPv6 Hello adjacency and the next hops of IPv6 prefix FECs. LDP will not track next hops of IPv4 prefix FECs resolved over the same LDP IPv6 adjacency. If the IPv4 data plane encounters errors and the IPv6 Hello adjacency is not affected and remains up, traffic for the IPv4 prefix FECs resolved over that IPv6 adjacency will be blackholed. If the BFD tracking the IPv6 Hello adjacency times out, then all IPv4 and IPv6 prefix FECs will be updated.
The 7705 SAR supports SDPs of type LDP with far-end options using IPv6 addresses. The addresses need not be of the same family (IPv6 or IPv4) for the SDP configuration to be allowed. The user can have an SDP with an IPv4 (or IPv6) control plane for the T-LDP session and an IPv6 (or IPv4) LDP FEC as the tunnel.
Because IPv6 LSP is only supported with LDP, the use of a far-end IPv6 address is not allowed with a BGP or RSVP/MPLS LSP. In addition, the CLI does not allow an SDP with a combination of an IPv6 LDP LSP and an IPv4 LSP of a different control plane. As a result, the following commands are blocked in the SDP configuration context when the far end is an IPv6 address:
SDP admin groups are not supported with an SDP using an LDP IPv6 FEC, and the attempt to assign them is blocked in CLI.
Services that use the LDP control plane (such as T-LDP VPLS and R-VPLS, VLL, and IES/VPRN spoke interface) have the spoke SDP (PW) signaled with an IPv6 T-LDP session when the far-end option is configured to an IPv6 address. By default, the spoke SDP for these services binds to an SDP that uses an LDP IPv6 FEC that matches the prefix of the far end address.
In addition, the IPv6 PW control word is supported with data plane packets and VCCV OAM packets. Hash label is also supported with the above services, including the signaling and negotiation of hash label support using T-LDP (Flow sub-TLV) with the LDP IPv6 control plane. Finally, network domains are supported in VPLS.
The user can configure a spoke SDP bound to an LDP IPv6 LSP to forward mirrored packets from a mirror source to a remote mirror destination. In the configuration of the mirror destination service at the destination node, the remote-source command must use a spoke SDP with a VC ID that matches the VC-ID that is configured in the mirror destination service at the mirror source node. The far-end option is not supported with an IPv6 address.
Use the following rules and syntax to configure a spoke SDP at the mirror source node.
Use the following rules and syntax to configure mirror service at the mirror destination node.
Mirroring is also supported with the PW redundancy feature when the endpoint spoke SDP, including the ICB, is using an LDP IPv6 tunnel.
MPLS OAM tools LSP ping and LSP trace can operate with LDP IPv6 and support the following:
The behavior at the sender and receiver nodes supports both LDP IPv4 and IPv6 target FEC stack TLVs. Specifically:
VCCV ping and VCCV trace for a single-hop PW support IPv6 PW FEC 128, as per RFC 6829. In addition, the PW OAM control word is supported with VCCV packets when the control-word option is enabled on the spoke SDP configuration. When the value of the Channel Type field is set to 0x57, it indicates that the Associated Channel carries an IPv6 packet, as per RFC 4385.
The 7705 SAR uses a 128-bit LSR ID as defined in draft-pdutta-mpls-ldp-v2 to establish an LDP IPv6 session with a peer LSR. This is so that a routable system IPv6 address can be used by default to bring up the LDP task on the router and establish link LDP and T-LDP sessions to other LSRs. More importantly, using a 128-bit LSR ID allows for the establishment of control plane-independent LDP IPv4 and IPv6 sessions between two LSRs over the same interface or different set of interfaces because each session uses a unique LSR ID (32-bit for IPv4 and 128-bit for IPv6).
The 7705 SAR LDP implementation does not interoperate with a system using a 32-bit LSR ID (as defined in draft-ietf-mpls-ldp-ipv6) to establish an IPv6 LDP session. The latter draft specifies that an LSR can send both IPv4 and IPv6 Hello messages over an interface, allowing the system to establish either an IPv4 or an IPv6 LDP session with LSRs on the same subnet. It does not allow for separate LDP IPv4 and LDP IPv6 sessions between two routers.
The 7705 SAR LDP implementation interoperates with systems using a 32-bit LSR ID (as defined in draft-ietf-mpls-ldp-ipv6) to establish an IPv4 LDP session and to resolve both IPv4 and IPv6 prefix FECs.
The 7705 SAR complies with all other aspects of draft-ietf-mpls-ldp-ipv6, including support of the dual-stack capability TLV in the Hello message. An LSR uses this TLV to inform its peer that it is capable of establishing either an LDP IPv4 or an LDP IPv6 session, and to convey the IP family preference for the LDP Hello adjacency and thus for the resulting LDP session. This is required because the implementation described in draft-ietf-mpls-ldp-ipv6 allows only a single session between LSRs, so when both IPv4 and IPv6 Hellos are exchanged, both LSRs must agree on whether the session should be brought up using IPv4 or IPv6. The 7705 SAR implementation maintains a separate session for each IP family between two LSRs and, as such, uses this TLV to specify the family preference and to indicate that the system supports resolving IPv6 FECs over an IPv4 LDP session.
Some third-party LDP implementations are compliant with RFC 5036 for LDP IPv4 but are not compliant with RFC 5036 for handling IPv6 addresses or IPv6 FECs over an LDP IPv4 session.
An LSR based on the 7705 SAR in a LAN with a broadcast interface can peer with any third-party LSR, including those that are incapable of handling IPv6 addresses or IPv6 FECs over an LDP IPv4 session. If the 7705 SAR uses the IPv4 LDP control plane to advertise IPv6 addresses or IPv6 FECs to such a peer, the IPv4 LDP session can go down.
To address this issue, draft-ietf-mpls-ldp-ipv6 modifies RFC 5036 and requires compliant systems to check for the dual-stack capability TLV in the IPv4 Hello message from the peer. If the peer does not advertise this TLV, the LSR does not send IPv6 addresses and FECs to that peer. The 7705 SAR supports advertising and resolving IPv6 prefix FECs over an LDP IPv4 session using a 32-bit LSR ID in compliance with draft-ietf-mpls-ldp-ipv6.
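The withholding rule above can be sketched as follows. This is an illustrative model, not the router's implementation; the `PrefixFec` type and function name are invented for the example.

```python
from collections import namedtuple

PrefixFec = namedtuple("PrefixFec", ["prefix", "family"])

def fecs_to_advertise(peer_sent_dual_stack_tlv: bool, fecs):
    """Per draft-ietf-mpls-ldp-ipv6: over an IPv4 LDP session, withhold
    IPv6 addresses/FECs from a peer that did not advertise the
    dual-stack capability TLV in its IPv4 Hello message."""
    if peer_sent_dual_stack_tlv:
        return list(fecs)
    return [f for f in fecs if f.family == "ipv4"]

fecs = [PrefixFec("10.0.0.0/8", "ipv4"), PrefixFec("2001:db8::/32", "ipv6")]
# Legacy peer (no dual-stack TLV): only the IPv4 FEC is advertised.
assert fecs_to_advertise(False, fecs) == [fecs[0]]
```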
For a smooth transition from IPv4 to IPv6, it is recommended to follow the steps below.
Figure 28 shows an example of a network ensuring a smooth upgrade from IPv4 to IPv6 with PW redundancy.
Figure 29 shows the process for provisioning basic LDP parameters.
Refer to the 7705 SAR Services Guide for information about signaling.
For information on supported IETF drafts and standards, as well as standard and proprietary MIBs, refer to Standards and Protocol Support.