5. Label Distribution Protocol

This chapter provides information to enable the Label Distribution Protocol (LDP).

Topics in this chapter include:

5.1. Label Distribution Protocol

Label Distribution Protocol (LDP) is used to distribute labels in non-traffic-engineered applications. LDP allows routers to establish LSPs through a network by mapping network-layer routing information directly to data link LSPs.

An LSP is defined by the set of labels from the ingress LER to the egress LER. LDP associates a Forwarding Equivalence Class (FEC) with each LSP it creates. A FEC is a collection of common actions associated with a class of packets. When an ingress LER assigns a label to a FEC, it must let other LSRs in the path know about the label. LDP helps to establish the LSP by providing a set of procedures that LSRs can use to distribute labels.

The FEC associated with an LSP specifies which packets are mapped to that LSP. LSPs are extended through a network hop by hop: each LSR splices the incoming label for a FEC to the outgoing label assigned to the next hop for that FEC.

LDP allows an LSR to request a label from a downstream LSR so it can bind the label to a specific FEC. The downstream LSR responds to the request from the upstream LSR by sending the requested label.

LSRs can distribute a FEC label binding in response to an explicit request from another LSR. This is known as Downstream On Demand (DOD) label distribution. LSRs can also distribute label bindings to LSRs that have not explicitly requested them. This is called Downstream Unsolicited (DU). For LDP on the 7705 SAR, Downstream Unsolicited (DU) mode is implemented.
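The difference between the two distribution modes can be pictured with a minimal Python sketch. All names here (Lsr, advertise, request_label) are illustrative and do not reflect any actual LDP implementation:

```python
# Hypothetical toy model of the two LDP label distribution modes.

class Lsr:
    def __init__(self, name):
        self.name = name
        self.next_label = 100
        self.bindings = {}          # FEC -> locally assigned label
        self.received = {}          # (peer name, FEC) -> label learned from peer

    def bind(self, fec):
        """Allocate a local label for a FEC if one does not exist yet."""
        if fec not in self.bindings:
            self.bindings[fec] = self.next_label
            self.next_label += 1
        return self.bindings[fec]

    def advertise(self, fec, peer):
        """Downstream Unsolicited (DU): push the binding without a request."""
        peer.received[(self.name, fec)] = self.bind(fec)

    def request_label(self, fec, downstream):
        """Downstream On Demand (DOD): upstream asks, downstream answers."""
        self.received[(downstream.name, fec)] = downstream.bind(fec)

a, b = Lsr("A"), Lsr("B")
b.advertise("10.0.0.0/24", a)      # DU: B sends the binding unsolicited
a.request_label("10.0.1.0/24", b)  # DOD: A explicitly requests a binding
```

On the 7705 SAR only the first pattern (DU) applies; the second is shown for contrast.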

This section contains the following topics:

5.1.1. LDP and MPLS

LDP performs dynamic label distribution in MPLS environments. LDP operation begins with a Hello discovery process to form an adjacency with an LDP peer in the network. LDP peers are two MPLS routers that use LDP to exchange label/FEC mapping information. An LDP session is created between LDP peers. A single LDP session allows each peer to learn the other's label mappings and to distribute its own label information; that is, LDP is bidirectional.

LDP signaling works with the MPLS label manager to manage the relationships between labels and the corresponding FEC. For service-based FECs, LDP works in tandem with the Service Manager to identify the virtual leased lines (VLLs) and pseudowires (PWs) to signal.

An MPLS label identifies a set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling protocol (in this case LDP), and is allocated a label. The mapping between the label and the FEC is communicated to the forwarding plane. In order for this processing of the packet to occur at high speeds, optimized tables that enable fast access and packet identification are maintained in the forwarding plane.

When an unlabeled packet ingresses the 7705 SAR, classification policies associate it with a FEC, the appropriate label is imposed on the packet, and then the packet is forwarded. Other actions can also take place on a packet before it is forwarded, including imposing additional labels, other encapsulations, or learning actions. Once all actions associated with the packet are completed, the packet is forwarded.

When a labeled packet ingresses the router, the label or stack of labels indicates the set of actions associated with the FEC for that label or label stack. The actions are performed on the packet and then the packet is forwarded.

The LDP implementation provides support for DU, ordered control, and liberal label retention mode.

For LDP label advertisement, DU mode is supported. To prevent filling the uplink bandwidth with unassigned label information, Ordered Label Distribution Control mode is supported.

A PW/VLL label can be dynamically assigned by targeted LDP operations. Targeted LDP allows the inner labels (that is, the VLL labels) in the MPLS headers to be managed automatically. This makes it easier for operators to manage the VLL connections. There is, however, additional signaling and processing overhead associated with this targeted LDP dynamic label assignment.

5.1.1.1. BFD for T-LDP

BFD is a simple protocol for detecting failures in a network. BFD uses a “hello” mechanism that sends control messages periodically to the far end and receives periodic control messages from the far end. BFD is implemented in asynchronous mode only, meaning that neither end responds to control messages; rather, the messages are sent in the time period configured at each end.

A T-LDP session is a session between peers that are either directly or indirectly connected, and requires that adjacencies be created between the two peers. BFD for T-LDP sessions supports tracking failures of nodes that are not directly connected. BFD timers must be configured under the system router interface context before BFD is enabled under T-LDP.

BFD tracking of an LDP session associated with a T-LDP adjacency allows for faster detection of the status of the session by registering the loopback address of the peer as the transport address.

5.1.2. LDP Architecture

LDP comprises a few processes that handle the protocol PDU transmission, timer-related issues, and protocol state machine. The number of processes is kept to a minimum to simplify the architecture and to allow for scalability. Scheduling within each process prevents starvation of any particular LDP session, while buffering alleviates TCP-related congestion issues.

The LDP subsystems and their relationships to other subsystems are illustrated in Figure 19. This illustration shows the interaction of the LDP subsystem with other subsystems, including memory management, label management, service management, SNMP, interface management, and RTM. In addition, debugging capabilities are provided through the logger.

Communication within LDP tasks is typically done by interprocess communication through the event queue, as well as through updates to the various data structures. The following list describes the primary data structures that LDP maintains:

  1. FEC/label database — this database contains all FEC-to-label mappings, both sent and received. It holds address FECs (prefixes and host addresses) as well as service FECs (L2 VLLs)
  2. Timer database — this database contains all the timers for maintaining sessions and adjacencies
  3. Session database — this database contains all the session and adjacency records, and serves as a repository for the LDP MIB objects
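The three databases above can be pictured with the following illustrative Python shapes. The field names are assumptions chosen for clarity, not the actual data structures:

```python
# Illustrative shapes for the three LDP databases; hypothetical fields.
from dataclasses import dataclass, field

@dataclass
class FecLabelDb:
    # FEC -> label, kept separately for sent and received mappings;
    # a FEC may be an address FEC (prefix/host) or a service FEC (L2 VLL)
    sent: dict = field(default_factory=dict)
    received: dict = field(default_factory=dict)

@dataclass
class TimerDb:
    # timers that maintain sessions and adjacencies
    hello: dict = field(default_factory=dict)
    keepalive: dict = field(default_factory=dict)

@dataclass
class SessionDb:
    # session and adjacency records; backs the LDP MIB objects
    sessions: dict = field(default_factory=dict)
    adjacencies: dict = field(default_factory=dict)
```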

5.1.3. LDP Subsystem Interrelationships

Figure 19 shows the relationships between LDP subsystems and other 7705 SAR subsystems. The following sections describe how the subsystems work to provide services.

5.1.3.1. Memory Manager and LDP

LDP does not use any memory until it is instantiated. It preallocates a fixed amount of memory so that initial startup actions can be performed. Memory allocation for LDP comes out of a pool reserved for LDP that can grow dynamically as needed.

Fragmentation is minimized by allocating memory in large chunks and managing the memory internally to LDP. When LDP is shut down, it releases all memory allocated to it.

5.1.3.2. Label Manager

LDP assumes that the label manager is up and running. LDP will abort initialization if the label manager is not running. The label manager is initialized at system boot-up; hence anything that causes it to fail will likely indicate that the system is not functional. The 7705 SAR uses a label range from 28 672 (28K) to 131 071 (128K-1) to allocate all dynamic labels, including VC labels.

Figure 19:  LDP Subsystem Interrelationships 

5.1.3.3. LDP Configuration

The 7705 SAR uses a single consistent interface to configure all protocols and services. CLI commands are translated to SNMP requests and are handled through an agent-LDP interface. LDP can be instantiated or deleted through SNMP. Also, targeted LDP sessions can be set up to specific endpoints. Targeted session parameters are configurable.

5.1.3.4. Logger

LDP uses the logger interface to generate debug information relating to session setup and teardown, LDP events, label exchanges, and packet dumps. Per-session tracing can be performed. Refer to the 7705 SAR System Management Guide for logger configuration information.

5.1.3.5. Service Manager

Most interaction is between LDP and the service manager, since LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.

5.1.4. Execution Flow

LDP activity in the 7705 SAR is limited to service-related signaling. Therefore, the configurable parameters are restricted to system-wide parameters, such as hello and keepalive timeouts.

5.1.4.1. Initialization

MPLS must be enabled when LDP is initialized. LDP makes sure that the various prerequisites are met, such as ensuring that the system IP interface and the label manager are operational, and ensuring that there is memory available. It then allocates a pool of memory to itself and initializes its databases.

5.1.4.2. Session Lifetime

In order for a targeted LDP session to be established, an adjacency has to be created. The LDP extended discovery mechanism requires hello messages to be exchanged between two peers for session establishment. Once the adjacency is established, session setup is attempted.

5.1.4.2.1. Adjacency Establishment

In the 7705 SAR, adjacency management is done through the establishment of a Service Destination Point (SDP) object, which is a service entity in the Nokia service model.

The service model uses logical entities that interact to provide a service. The service model requires the service provider to create and configure four main entities:

  1. customers
  2. services
  3. Service Access Points (SAPs) on local 7705 SAR routers
  4. SDPs that connect to one or more remote 7705 SAR routers or 77x0 SR routers

An SDP is the network-side termination point for a tunnel to a remote 7705 SAR or 77x0 SR router. An SDP defines a local entity that includes the system IP address of the remote 7705 SAR routers and 77x0 SR routers, and a path type.

Each SDP comprises:

  1. the SDP ID
  2. the transport encapsulation type, MPLS
  3. the far-end system IP address

If the SDP is identified as using LDP signaling, then an LDP extended hello adjacency is attempted.

If another SDP is created to the same remote destination and LDP signaling is enabled, no further action is taken, since only one adjacency and one LDP session exist between the pair of nodes.

An SDP is a unidirectional object, so a pair of SDPs pointing at each other must be configured in order for an LDP adjacency to be established. Once an adjacency is established, it is maintained through periodic hello messages.

5.1.4.2.2. Session Establishment

When the LDP adjacency is established, the session setup follows as per the LDP specification. Initialization and keepalive messages complete the session setup, followed by address messages to exchange all interface IP addresses. Periodic keepalives or other session messages maintain the session liveness.

Since TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and reattempted as the back-pressure eases.
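The buffering behavior described above can be sketched as follows. This is a hypothetical model (names like Session and on_pressure_eased are invented); real code would hook the TCP socket's writable event:

```python
# Sketch of pushing TCP back-pressure up into the protocol:
# messages that cannot be sent are buffered on the session object
# and reattempted as the back-pressure eases.
from collections import deque

class Session:
    def __init__(self, window):
        self.window = window          # bytes the peer will currently accept
        self.buffer = deque()         # messages held back by back-pressure

    def send(self, msg):
        """Try to send; on back-pressure, queue behind older messages."""
        if self.buffer or len(msg) > self.window:
            self.buffer.append(msg)   # preserve message ordering
            return False
        self.window -= len(msg)
        return True

    def on_pressure_eased(self, extra_window):
        """Reattempt buffered messages as the receiver drains."""
        self.window += extra_window
        while self.buffer and len(self.buffer[0]) <= self.window:
            self.window -= len(self.buffer.popleft())

s = Session(window=10)
s.send("keepalive")       # fits in the window, sent immediately
ok = s.send("label-map")  # window exhausted: buffered, not sent
```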

5.1.5. Label Exchange

Label exchange is initiated by the service manager. When an SDP is attached to a service (that is, once the service gets a transport tunnel), a message is sent from the service manager to LDP. This causes a label mapping message to be sent. Additionally, when the SDP binding is removed from the service, the VC label is withdrawn. The peer must send a label release to confirm that the label is not in use.

5.1.5.1. Other Reasons for Label Actions

Label actions can also occur for the following reasons:

  1. MTU changes — LDP withdraws the previously assigned label and resignals the FEC with the new Maximum Transmission Unit (MTU) in the interface parameter
  2. clear labels — when a service manager command is issued to clear the labels, the labels are withdrawn and new label mappings are issued
  3. SDP down — when an SDP goes administratively down, the VC label associated with that SDP for each service is withdrawn
  4. memory allocation failure — if there is no memory to store a received label, the received label is released
  5. VC type unsupported — when an unsupported VC type is received, the received label is released

5.1.5.2. Cleanup

LDP closes all sockets, frees all memory, and shuts down all its tasks when it is deleted, so that it uses no memory (0 bytes) when it is not running.

5.1.6. LDP Filters

The 7705 SAR supports both inbound and outbound LDP label binding filtering.

Inbound filtering (import policy) allows the user to configure a policy to control the label bindings an LSR accepts from its peers.

Import policy label bindings can be filtered based on the following:

  1. neighbor — match on bindings received from the specified peer
  2. prefix-list — match on bindings with the specified prefix/prefixes

The default import behavior is to accept all FECs received from peers.

Outbound filtering (export policy) allows the user to configure a policy to control the set of LDP label bindings advertised by the LSR.

Because the default behavior is to originate label bindings for the system IP address only, when a non-default loopback address is used as the transport address, the 7705 SAR does not advertise the loopback FEC automatically. With an LDP export policy, the user can explicitly export the loopback address in order to advertise the loopback address label and allow the node to be reached by other network elements.

Export policy label bindings can be filtered based on the following:

  1. all — all local subnets by specifying “direct” as the match protocol
  2. prefix-list — match on bindings with the specified prefix/prefixes
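The neighbor and prefix-list match criteria can be illustrated with a small Python helper. This is an assumption-laden sketch of the matching logic, not the 7705 SAR policy engine:

```python
# Hypothetical import-policy match on neighbor and prefix-list.
import ipaddress

def import_accept(binding, neighbor_match=None, prefix_list=None):
    """Return True if a received label binding passes the policy.
    With no criteria configured, the default is to accept all FECs."""
    peer, fec = binding
    if neighbor_match is not None and peer != neighbor_match:
        return False
    if prefix_list is not None:
        net = ipaddress.ip_network(fec)
        if not any(net.subnet_of(ipaddress.ip_network(p))
                   for p in prefix_list):
            return False
    return True

# accept bindings from peer 10.0.0.1 only, for prefixes under 192.0.2.0/24
accept = import_accept(("10.0.0.1", "192.0.2.0/25"),
                       neighbor_match="10.0.0.1",
                       prefix_list=["192.0.2.0/24"])
```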

Note:

In order for the 7705 SAR to consider a received label to be active, the FEC advertised together with the label must have an exact match in the routing table, or a longest prefix match (if the aggregate-prefix-match option is enabled; see Multi-area and Multi-instance Extensions to LDP). This can be achieved by configuring a static route pointing to the prefix encoded in the FEC.

5.1.7. Multi-area and Multi-instance Extensions to LDP

When a network has two or more IGP areas, or instances, inter-area LSPs are required for MPLS connectivity between the PE devices that are located in the distinct IGP areas. In order to extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036, LDP Specification, requires that all /32 prefixes of PEs be leaked between the areas or instances. IGP route leaking is the distribution of the PE loopback addresses across area boundaries. An exact match of the prefix in the routing table (RIB) is required to install the prefix binding in the FIB and set up the LSP.

This behavior is the default behavior for the 7705 SAR when it is configured as an Area Border Router (ABR). However, exact prefix matching causes performance issues for the convergence of IGP on routers deployed in networks where the number of PE nodes scales to thousands of nodes. Exact prefix matching requires the RIB and FIB to contain the IP addresses maintained by every LSR in the domain and requires redistribution of a large number of addresses by the ABRs. Security is a potential issue as well, as host routes leaked between areas can be used in DoS and DDoS attacks and spoofing attacks.

To avoid these performance and security issues, the 7705 SAR can be configured for an optional behavior in which LDP installs a prefix binding in the LDP FIB by performing a longest prefix match with an aggregate prefix in the routing table (RIB). This behavior is described in RFC 5283, LDP Extension for Inter-Area Label Switched Paths. The LDP prefix binding continues to be advertised on a per-individual /32 prefix basis.

When the longest prefix match option is enabled and an LSR receives a FEC-label binding from an LDP neighbor for a prefix-address FEC element, FEC1, it installs the binding in the LDP FIB if:

  1. the routing table (RIB) contains an entry that matches FEC1. Matching can either be a longest IP match of the FEC prefix or an exact match.
  2. the advertising LDP neighbor is the next hop to reach FEC1

When the FEC-label binding has been installed in the LDP FIB, LDP programs an NHLFE entry in the egress data path to forward packets to FEC1. LDP also advertises a new FEC-label binding for FEC1 to all its LDP neighbors.

When a new prefix appears in the RIB, LDP checks the LDP FIB to determine if this prefix is a closer match for any of the installed FEC elements. If a closer match is found, this may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed.

When a prefix is removed from the RIB, LDP checks the LDP FIB for all FEC elements that matched this prefix to determine if another match exists in the routing table. If another match exists, LDP must use it. This may mean that the LSR used as the next hop will change; if so, the NHLFE entry for that FEC must be changed. If another match does not exist, the LSR removes the FEC binding and sends a label withdraw message to its LDP neighbors.

If the next hop for a routing prefix changes, LDP updates the LDP FIB entry for the FEC elements that matched this prefix. It also updates the NHLFE entry for the FEC elements.
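The installation check described above can be sketched in Python. The function and argument names are hypothetical; the logic mirrors the two conditions (RIB coverage of the FEC, and the advertising neighbor being the next hop) per RFC 5283:

```python
# Sketch of the longest-prefix-match FEC binding check (assumed names).
import ipaddress

def install_binding(fec, advertising_peer, rib, aggregate_prefix_match=True):
    """Install a FEC-label binding if the RIB covers the FEC prefix
    and the advertising LDP neighbor is the next hop for that route."""
    target = ipaddress.ip_network(fec)
    best = None
    for prefix, next_hop in rib.items():
        net = ipaddress.ip_network(prefix)
        exact = net == target
        covers = target.subnet_of(net)           # longest-prefix candidate
        if exact or (aggregate_prefix_match and covers):
            if best is None or net.prefixlen > best[0].prefixlen:
                best = (net, next_hop)
    # condition 2: the advertising LDP neighbor must be the next hop
    return best is not None and best[1] == advertising_peer

rib = {"10.1.0.0/16": "peerB"}            # aggregate route only, no /32
ok = install_binding("10.1.2.3/32", "peerB", rib)       # longest match
strict = install_binding("10.1.2.3/32", "peerB", rib,
                         aggregate_prefix_match=False)  # exact match only
```

With the option enabled the /32 FEC resolves against the aggregate; with strict (default RFC 3036) matching it does not.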

5.1.8. ECMP Support for LDP

Equal-Cost Multipath (ECMP) support for LDP performs load balancing for services that use LDP-based LSPs as transport tunnels, by having multiple equal-cost outgoing next hops for an IP prefix.

ECMP for LDP load-balances traffic across all equal-cost links based on the output of the hashing algorithm using the allowed inputs, based on the service type. For detailed information, refer to “LAG and ECMP Hashing” in the 7705 SAR Interface Configuration Guide.

There is only one next-hop peer for a network link. To offer protection from a network link or next-hop peer failure, multiple network links can be configured to connect to different next-hop peers, or multiple links to the same peer. For example, an MLPPP link and an Ethernet link can be connected to two peers, or two Ethernet links can be connected to the same peer. ECMP occurs when the cost of each link reaching a target IP prefix is equal.

The 7705 SAR uses a liberal label retention mode, which retains all labels for an IP prefix from all next-hop peers. A 7705 SAR acting as an LSR load-balances the MPLS traffic over multiple links using a hashing algorithm.

The 7705 SAR supports the following optional fields as hash inputs and supports profiles for various combinations:

  1. hashing algorithms
    1. label-only option: hashing is done on the MPLS label stack, up to a maximum of 10 labels (default)
    2. label-IP option: hashing is done on the MPLS label stack and the IPv4 source and destination IP address if an IPv4 header is present after the MPLS labels
    3. Layer 4 header (source or destination UDP or TCP port number) and TEID: hashing is done on the MPLS label stack, the IPv4 source and destination IP address (if present), then on the Layer 4 source and destination UDP or TCP port fields (if present) and the TEID in the GTP header (if present)
  2. label stack profile options on significance of the bottom-of-stack label (VC label)
    1. profile 1: favors better load balancing for pseudowires when the VC label distribution is contiguous (default)
    2. profile 2: similar to profile 1 where the VC labels are contiguous, but provides an alternate distribution
    3. profile 3: all labels have equal influence in hash key generation
  3. ingress LAG port at the LSR (default is disabled)
    The use-ingress-port option, when enabled, specifies that the ingress port will be used by the hashing algorithm at the LSR. This option should be enabled for ingress LAG ports because packets with the same label stack can arrive on all ports of a LAG interface. In this case, using the ingress port in the hashing algorithm will result in better egress load balancing, especially for pseudowires.
    The option should be disabled for LDP ECMP so that the ingress port is not used by the hashing algorithm. For ingress LDP ECMP, if the ingress port is used by the hashing algorithm, the hash distribution could be biased, especially for pseudowires.
  4. system IP address – hashing on the system IP address is enabled and disabled at the system level only

All of the above options can be configured with the lsr-load-balancing command, with the exception of the system IP address, which is configured with the system-ip-load-balancing command.
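The hash-input options can be illustrated with a simplified sketch. The hash function and field handling here are assumptions for illustration only, not the 7705 SAR hashing algorithm:

```python
# Illustrative hash-input selection for LSR ECMP (not the real algorithm).
import zlib

def ecmp_hash(labels, src_ip=None, dst_ip=None, mode="lbl-only"):
    """Build a hash key from the label stack (up to 10 labels) and,
    in lbl-ip mode, the IPv4 addresses if an IP header follows."""
    key = b"".join(l.to_bytes(3, "big") for l in labels[:10])
    if mode == "lbl-ip" and src_ip is not None:
        key += src_ip.encode() + dst_ip.encode()
    return zlib.crc32(key)

def pick_nhlfe(nhlfes, labels, **kw):
    """Map a packet to one of the equal-cost next hops."""
    return nhlfes[ecmp_hash(labels, **kw) % len(nhlfes)]

nhlfes = ["link-1", "link-2"]
out = pick_nhlfe(nhlfes, [1001, 2002], src_ip="10.0.0.1",
                 dst_ip="10.0.0.2", mode="lbl-ip")
```

The key property is that the mapping is deterministic per flow: packets with the same label stack (and, in lbl-ip mode, the same IP pair) always select the same link.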

Note:

The global IF index is no longer a hash input for LSR ECMP load balancing. It has been replaced with the use-ingress-port configurable option in the lsr-load-balancing command. As well, the default treatment of the MPLS label stack has changed to focus on the bottom-of-stack label (VC label). In previous releases, all labels had equal influence.

LSR load balancing can be configured at the system level or interface level. Configuration at the interface level overrides the system-level settings for the specific interface. Configuration must be done on the ingress network interface (that is, the interface on the LDP LSR node that the packet is received on).

Configuration of load balancing at the interface level provides some control to the user; for example, the label-IP option can be disabled on a specific interface if labeled packets received on the interface include non-IP packets that the hash routine could mistake for IP packets. This can occur when the first nibble of a non-IP payload is 4, which would cause the packet to be hashed incorrectly as IPv4 if the label-IP option were enabled.

If ECMP is not enabled, the label from only one of the next-hop peers is selected and installed in the forwarding plane. In this case, the algorithm used to distribute the traffic flow looks up the route information, and selects the network link with the lowest IP address. If the selected network link or next-hop peer fails, another next-hop peer is selected, and LDP reprograms the forwarding plane to use the label sent by the newly selected peer.
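The non-ECMP selection and failover behavior can be sketched as follows. This is a hypothetical model of liberal label retention (all peers' labels retained, one programmed), with invented names:

```python
# Sketch of non-ECMP next-hop selection with liberal label retention.
import ipaddress

def select_next_hop(retained, failed=()):
    """retained: peer IP -> label received from that peer.
    Pick the live peer with the lowest IP address; on failure of the
    selected peer, reselect and reprogram with that peer's label."""
    live = {p: l for p, l in retained.items() if p not in failed}
    if not live:
        return None
    peer = min(live, key=lambda p: ipaddress.ip_address(p))
    return peer, live[peer]

retained = {"10.0.0.9": 301, "10.0.0.2": 302}   # labels from both peers kept
first = select_next_hop(retained)               # lowest IP wins
after_fail = select_next_hop(retained, failed={"10.0.0.2"})
```

Because the label from the other peer was retained, failover needs no new label exchange; the forwarding plane is simply reprogrammed.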

ECMP is supported on all Ethernet ports in network mode, and is also supported on the 4-port OC3/STM1 Clear Channel Adapter card when it is configured for POS (ppp-auto) encapsulation and network mode.

For information on configuring the 7705 SAR for LSR ECMP, refer to the lsr-load-balancing and system-ip-load-balancing commands in the 7705 SAR Basic System Configuration Guide, “System Information and General Commands” and the lsr-load-balancing command in the 7705 SAR Router Configuration Guide, “Router Interface Commands”.

For information on LDP treetrace commands for tracing ECMP paths, refer to the 7705 SAR OAM and Diagnostics Guide.

Note:

LDP treetrace works best with label-IP hashing (lbl-ip) enabled, rather than label-only (lbl-only) hashing. These options are set with the lsr-load-balancing command.

Note:

  1. Because of the built-in timeout for dynamic ARP, the MAC address of the remote peer needs to be renewed periodically. The flow of IP traffic resets the timers back to their maximum values. In the case of LDP ECMP, one link could be used for transporting user MPLS (pseudowire) traffic while the LDP session could be using a different equal-cost link. For LDP using ECMP and for static LSPs, it is important to ensure that the remote MAC address is learned and does not expire. Configuring static ARP entries or running continuous IP traffic ensures that the remote MAC address is always known. Running BFD for fast detection of Layer 2 faults or running any OAM tools with SAA ensures that the learned MAC addresses do not expire.
  2. ARP entries are refreshed by static ARP, BFD, SAA, OSPF, IS-IS, or BGP.
  3. For information on configuring static ARP and running BFD, refer to the 7705 SAR Router Configuration Guide.

5.1.8.1. Label Operations

If an LSR is the ingress router for a given IP prefix, LDP programs a PUSH operation for the prefix in the IOM. This creates an LSP ID to the Next Hop Label Forwarding Entry (NHLFE) mapping (LTN mapping) and an LDP tunnel entry in the forwarding plane. LDP will also inform the Tunnel Table Manager (TTM) about this tunnel. Both the LSP ID to NHLFE (LTN) entry and the tunnel entry will have an NHLFE for the label mapping that the LSR received from each of its next-hop peers.

If the LSR is to behave as a transit router for a given IP prefix, LDP will program a SWAP operation for the prefix in the IOM. This involves creating an Incoming Label Map (ILM) entry in the forwarding plane. The ILM entry might need to map an incoming label to multiple NHLFEs.

If an LSR is an egress router for a given IP prefix, LDP will program a POP entry in the IOM. This too will result in an ILM entry being created in the forwarding plane, but with no NHLFEs.

When unlabeled packets arrive at the ingress LER, the forwarding plane consults the LTN entry and uses a hashing algorithm to map the packet to one of the NHLFEs (PUSH label) and forward the packet to the corresponding next-hop peer. For a labeled packet arriving at a transit or egress LSR, the forwarding plane consults the ILM entry and either uses a hashing algorithm to map it to one of the NHLFEs if they exist (SWAP label) or routes the packet if there are no NHLFEs (POP label).
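The forwarding decision for the three label operations can be summarized in a toy function. The model (dict-based LTN/ILM tables, a stand-in hash) is an assumption for illustration:

```python
# Toy forwarding decision for PUSH, SWAP, and POP (assumed data model).

def forward(packet_labels, ilm, ltn, hash_fn=lambda labels: sum(labels)):
    """Unlabeled packets use the LTN entry (PUSH); labeled packets use
    the ILM entry: SWAP if NHLFEs exist, POP and route if none."""
    if not packet_labels:                      # ingress LER
        nhlfes, op = ltn, "PUSH"
    else:                                      # transit or egress LSR
        nhlfes = ilm.get(packet_labels[0], [])
        op = "SWAP" if nhlfes else "POP"
    if not nhlfes:
        return op, None                        # POP: route the payload
    return op, nhlfes[hash_fn(packet_labels) % len(nhlfes)]

ltn = [("peerA", 501), ("peerB", 502)]         # ingress: two ECMP NHLFEs
ilm = {100: [("peerC", 601)]}                  # transit: label 100 -> swap
push = forward([], ilm=ilm, ltn=ltn)           # unlabeled: PUSH
swap = forward([100], ilm=ilm, ltn=ltn)        # known label with NHLFE: SWAP
pop = forward([200], ilm=ilm, ltn=ltn)         # label with no NHLFE: POP
```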

5.1.9. Graceful Restart Helper

Graceful Restart (GR) is part of the LDP handshake process (that is, the LDP peering session initialization) and needs to be supported by both peers. GR provides a mechanism that allows the peers to cope with a service interruption due to a CSM switchover, which is a period of time when the standby CSM is not capable of synchronizing the states of the LDP sessions and labels being advertised and received.

Graceful Restart Helper (GR-Helper) decouples the data plane from the control plane so that if the control plane is not responding (that is, there is no LDP message exchange between peers), then the data plane can still forward frames based on the last known (advertised) labels.

Because the 7705 SAR supports non-stop services / high-availability for LDP (and MPLS), the full implementation of GR is not needed. However, GR-Helper is implemented on the 7705 SAR to support non-high-availability devices. With GR-Helper, if an LDP peer of the 7705 SAR requests GR during the LDP handshake, the 7705 SAR agrees to it but does not request GR. For the duration of the LDP session, if the 7705 SAR LDP peer fails, the 7705 SAR continues to forward MPLS packets based on the last advertised labels and will not declare the peer dead until the GR timer expires.
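The GR-Helper decision on peer failure reduces to a simple rule, sketched here with hypothetical names:

```python
# Sketch of the GR-Helper behavior on peer failure (assumed model):
# keep forwarding on the last advertised labels while the GR timer
# runs; declare the peer dead only when the timer expires.

def on_peer_failure(last_advertised_labels, gr_timer_remaining):
    if gr_timer_remaining > 0:
        return "forwarding", last_advertised_labels
    return "peer-dead", {}
```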

5.1.10. Graceful Handling of Resource Exhaustion

Graceful handling of resource exhaustion enhances the behavior of LDP when a data path or a CSM resource required for the resolution of a FEC is exhausted. In prior releases, the entire LDP protocol was shut down, causing all LDP peering sessions to be torn down and therefore impacting all peers. The user was required to fix the issue that caused the FEC scaling to be exceeded, and to restart the LDP session by executing the no shutdown CLI command. With graceful handling of resource exhaustion, only the responsible session or sessions are shut down, which impacts only the appropriate peer or peers.

Graceful handling of resource exhaustion implements a capability by which the LDP interface to the peer, or the targeted peer in the case of a targeted LDP (T-LDP) session, is shut down.

If LDP tries to resolve a FEC over a link or a T-LDP session and runs out of data path or CSM resources, LDP brings down that interface or targeted peer, which brings down the Hello adjacency over that interface to all link LDP peers or to the targeted peer. The interface is brought down for the LDP context only and is still available to other applications such as IP forwarding and RSVP LSP forwarding.

After taking action to free up resources, the user must manually perform a no shutdown command on the interface or the targeted peer to bring it back into operation. This re-establishes the Hello adjacency and resumes the resolution of FECs over the interface or to the targeted peer.

5.1.11. LDP Support for Unnumbered Interfaces

Unnumbered interfaces are point-to-point interfaces that are not explicitly configured with a dedicated IP address and subnet; instead, they borrow (or link to) an IP address from another interface on the system (the system IP address, another loopback interface, or any other numbered interface) and use it as the source IP address for packets originating from the interface. For more information on support for unnumbered interfaces, refer to the 7705 SAR Router Configuration Guide, “Unnumbered Interfaces”.

This feature allows LDP to establish a Hello adjacency and to resolve unicast FECs over unnumbered LDP interfaces.

For example, LSR A and LSR B are the two endpoints of an unnumbered link. These interfaces are identified on each system by their unique link local identifiers. The combination of router ID and link local identifier uniquely identifies the interface in IS-IS throughout the network.

A borrowed IP address is also assigned to the interface to be used as the source address of IP packets that must originate from the interface. The borrowed IP address defaults to the system interface address, A and B in this example. The borrowed IP address can be taken from any IP interface by using the following CLI command: config>router>interface>unnumbered {ip-int-name | ip-address}.

The fec-originate command, which defines how to originate a FEC for egress and non-egress LSRs, includes a parameter to specify the name of the interface that the label for the originated FEC is swapped to. For an unnumbered interface, this parameter is mandatory because an unnumbered interface does not have its own IP address.

When the unnumbered interface is added to LDP, the following behavior occurs.

For link LDP (L-LDP) sessions:

  1. The Hello adjacency is brought up using a link Hello packet with the source IP address set to the interface borrowed IP address and a destination IP address set to 224.0.0.2.
  2. Hello packets with the same source IP address should be accepted when received over parallel unnumbered interfaces from the same peer LSR ID. The corresponding Hello adjacencies are associated with a single LDP session.
  3. The transport address for the TCP connection, which is encoded in the Hello packet, is always set to the LSR ID of the node whether or not the interface option was enabled using the config>router>ldp>interface-parameters>interface>transport-address command.
  4. The local-lsr-id option can be configured on the interface and the value of the LSR ID can be changed to either the local interface or to some other interface name. If the local interface is selected or the provided interface name corresponds to an unnumbered IP interface, the unnumbered interface borrowed IP address is used as the LSR ID. In all cases, the transport address for the LDP session is updated to the new LSR ID value, but the link Hello packets continue to use the interface borrowed IP address as the source IP address.
  5. The LSR with the highest transport address, the LSR ID in this case, bootstraps the TCP connection and LDP session.
  6. The source and destination IP addresses of LDP packets are the transport addresses, that is, the LDP LSR IDs of the LSRs at the endpoints of the link (A and B in the example).
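The bootstrap rule in steps 5 and 6 can be sketched in Python. The function name and the addresses below are illustrative, not part of the 7705 SAR implementation; the rule itself follows RFC 5036, where the LSR with the higher transport address takes the active role and opens the TCP connection:

```python
import ipaddress

LDP_TCP_PORT = 646  # well-known LDP port (RFC 5036)

def active_role(local_transport: str, peer_transport: str) -> bool:
    """Return True if the local LSR should bootstrap the TCP
    connection: the LSR with the numerically higher transport
    address takes the active role."""
    return ipaddress.ip_address(local_transport) > ipaddress.ip_address(peer_transport)

# Hypothetical LSR IDs standing in for A and B in the example:
a, b = "10.20.1.1", "10.20.1.2"
print(active_role(b, a))  # -> True: B has the higher address, so B connects to A
```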

For targeted LDP (T-LDP) sessions:

  1. The source and destination addresses of the targeted Hello packet are the LDP LSR IDs of systems A and B.
  2. The local-lsr-id option can be configured on the interface for the targeted session and the value of the LSR ID can be changed to either the local interface or to some other interface name. If the local interface is selected or the provided interface name corresponds to an unnumbered IP interface, the unnumbered interface borrowed IP address is used as the LSR ID. In all cases, the transport address for the LDP session and the source IP address of the targeted Hello message are updated to the new LSR ID value.
  3. The LSR with the highest transport address, the LSR ID in this case, bootstraps the TCP connection and LDP session.
  4. The source and destination IP addresses of LDP packets are the transport addresses, that is, the LDP LSR IDs of the LSRs at the endpoints of the link (A and B in the example).

FEC resolution:

  1. LDP advertises/withdraws unnumbered interfaces using the Address/Address-Withdraw message. The borrowed IP address of the interface is used.
  2. A FEC can be resolved to an unnumbered interface in the same way as it is resolved to a numbered interface. The outgoing interface and next hop are looked up in the RTM cache. The next hop is the router ID and link identifier of the interface at the peer LSR.
  3. LDP FEC ECMP next hops over a mix of unnumbered and numbered interfaces are supported.
  4. All LDP FEC types are supported.
  5. The fec-originate command is supported when the next hop is over an unnumbered interface.

All LDP features supported for numbered IP interfaces are supported for unnumbered interfaces, with the following exceptions:

  1. BFD is not supported on unnumbered IP interfaces
  2. LDP FRR is not triggered by a BFD session timeout, only by a physical failure or the local interface going down
  3. unnumbered IP interfaces cannot be added into LDP global and peer prefix policies

The unnumbered interface feature also extends the support of LSP ping and LSP traceroute to test an LDP unicast FEC that is resolved over an unnumbered LDP interface.

5.1.12. LDP Fast Reroute (FRR)

LDP Fast Reroute (FRR) provides local protection for an LDP FEC by precalculating and downloading a primary and a backup NHLFE for the FEC to the LDP FIB. The primary NHLFE corresponds to the label of the FEC received from the primary next hop as per the standard LDP resolution of the FEC prefix in the RTM. The backup NHLFE corresponds to the label received for the same FEC from a Loop-Free Alternate (LFA) next hop.

LDP FRR protects against single link or single node failure. SRLG failure protection is not supported.

Without FRR, when a local link or node fails, the router must signal the failure to its neighbors via the IGP providing the routing (OSPF or IS-IS), recalculate primary next-hop NHLFEs for all affected FECs, and update the FIB. Until the new primary next hops are installed in the FIB, any traffic destined for the affected FECs is discarded. This process can take hundreds of milliseconds.

LDP FRR improves convergence in case of a local link or node failure in the network, by using the label-FEC binding received from the LFA next hop to forward traffic for a given prefix as soon as the primary next hop is not available. This means that a router resumes forwarding LDP packets to a destination prefix using the backup path without waiting for the routing convergence. Convergence times should be similar to RSVP-TE FRR, in the tens of milliseconds.

OSPF or IS-IS must perform the Shortest Path First (SPF) calculation of an LFA next hop, as well as the primary next hop, for all prefixes used by LDP to resolve FECs. The IGP also populates both routes in the RTM.

When LDP FRR is enabled and an LFA backup next hop exists for the FEC prefix in the RTM, or for the longest prefix the FEC prefix matches to when the aggregate-prefix-match option is enabled, LDP will program the data path with both a primary NHLFE and a backup NHLFE for each next hop of the FEC.
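The resolution described above, including the longest-match behavior when aggregate-prefix-match is enabled, can be sketched as follows. The RTM contents and the helper name are hypothetical; each RTM entry carries a primary next hop and an LFA backup next hop:

```python
import ipaddress

def resolve_fec(fec_prefix, rtm):
    """Return the longest RTM prefix containing the FEC prefix,
    together with its (primary, backup) next hops; None if the
    FEC cannot be resolved."""
    fec = ipaddress.ip_network(fec_prefix)
    matches = [p for p in rtm if fec.subnet_of(ipaddress.ip_network(p))]
    if not matches:
        return None
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return best, rtm[best]

# Hypothetical RTM: prefix -> (primary next hop, LFA backup next hop)
rtm = {
    "10.10.0.0/16": ("192.0.2.1", "192.0.2.2"),
    "10.10.1.0/24": ("192.0.2.3", "192.0.2.4"),
}
print(resolve_fec("10.10.1.5/32", rtm))
# -> ('10.10.1.0/24', ('192.0.2.3', '192.0.2.4'))  longest match wins
```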

In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the standard FRR failover procedures, which are also supported for RSVP-TE FRR.

When any of the following events occurs, the backup NHLFE is enabled for each affected FEC next hop:

  1. an LDP interface goes operationally down or is administratively shut down
    In this case, LDP sends a neighbor/next hop down message to each LDP peer with which it has an adjacency over the interface.
  2. an LDP session to a peer goes down because the Hello timer or keepalive timer has expired over an interface
    In this case, LDP sends a neighbor/next hop down message to the affected peer.
  3. the TCP connection used by a link LDP session to a peer goes down
    In this case, LDP sends a neighbor/next hop down message to the affected peer.
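The effect of these events on the data path can be sketched as follows. This is a minimal model, not the 7705 SAR implementation: a FEC entry holds a primary and a precomputed backup NHLFE, and a next hop down event for the primary switches forwarding to the backup:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FecEntry:
    """Simplified LDP FIB entry: one primary and one backup NHLFE."""
    primary_label: int
    primary_nh: str
    backup_label: Optional[int] = None
    backup_nh: Optional[str] = None
    active: str = "primary"

    def next_hop_down(self, nh: str) -> None:
        # On a neighbor/next hop down event for the primary next hop,
        # enable the precomputed backup NHLFE if one was programmed.
        if nh == self.primary_nh and self.backup_nh is not None:
            self.active = "backup"

    def forward(self) -> Tuple[int, str]:
        if self.active == "primary":
            return self.primary_label, self.primary_nh
        return self.backup_label, self.backup_nh

# Hypothetical FEC with an LFA backup next hop:
e = FecEntry(primary_label=1001, primary_nh="192.0.2.1",
             backup_label=2001, backup_nh="192.0.2.9")
e.next_hop_down("192.0.2.1")
print(e.forward())  # -> (2001, '192.0.2.9')
```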

Refer to RFC 5286, Basic Specification for IP Fast Reroute: Loop-Free Alternates, for more information on LFAs.

5.1.12.1. ECMP vs FRR

If ECMP is enabled, which provides multiple primary next hops for a prefix, LDP FRR is not used. That is, the LFA next hops are not populated in the RTM and the ECMP paths are used instead.

5.1.12.2. IGP Shortcuts (RSVP-TE Tunnels)

IGP shortcuts are an MPLS functionality where LSPs are treated like physical links within IGPs; that is, LSPs can be used for next-hop reachability. If an RSVP-TE LSP is used as a shortcut by OSPF or IS-IS, it is included in the SPF calculation as a point-to-point link for both primary and LFA next hops. It can also be advertised to neighbors so that the neighboring nodes can also use the links to reach a destination via the advertised next hop.

IGP shortcuts can be used to simplify remote LFA support and simplify the number of LSPs required in a ring topology.

When both IGP shortcuts and LFA are enabled under OSPF or IS-IS, and LDP FRR is also enabled, the following applies:

  1. a FEC that is resolved to a direct primary next hop can be backed up by a tunneled LFA next hop
  2. a FEC that is resolved to a tunneled primary next hop will not have an LFA next hop; it relies on RSVP-TE FRR for protection

5.1.12.3. LDP FRR Configuration

To configure LDP FRR, LFA calculation by the SPF algorithm must first be enabled under the OSPF or IS-IS protocol level with the command:

     config>router>ospf>loopfree-alternate

     or

     config>router>ospf3>loopfree-alternate

     or

     config>router>isis>loopfree-alternate

Next, LDP must be enabled to use the LFA next hop with the command config>router>ldp>fast-reroute.

If IGP shortcuts are used, they must be enabled under the OSPF or IS-IS routing protocol. They must also be enabled under the MPLS LSP context, using the command config>router>mpls>lsp>igp-shortcut.

For information on LFA and IGP shortcut support for OSPF and IS-IS, refer to the 7705 SAR Routing Protocols Guide, “LDP and IP Fast Reroute for OSPF Prefixes” and “LDP and IP Fast Reroute for IS-IS Prefixes”.

Both LDP FRR and IP FRR are supported; for information on IP FRR, refer to the 7705 SAR Router Configuration Guide, “IP Fast Reroute (FRR)”.

5.1.13. TCP MD5 Authentication

The operation of a network can be compromised if an unauthorized system is able to form or hijack an LDP session and inject control packets by falsely representing itself as a valid neighbor. This risk can be mitigated by enabling TCP MD5 authentication on one or more of the sessions.

When TCP MD5 authentication is enabled on a session, every TCP segment exchanged with the peer includes a TCP option (19) containing a 16-byte MD5 digest of the segment (more specifically the TCP/IP pseudo-header, TCP header, and TCP data). The MD5 digest is generated and validated using an authentication key that must be known to both sides. If the received digest value is different from the locally computed one, the TCP segment is dropped, thereby protecting the router from a spoofed TCP segment.
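A simplified sketch of the digest computation is shown below. The function name is illustrative, and the pseudo-header packing is abbreviated (a real implementation also zeroes the TCP checksum field before hashing); the point is that both peers produce the same 16-byte digest only when they share the same key:

```python
import hashlib
import struct

def tcp_md5_digest(src_ip: bytes, dst_ip: bytes, tcp_header: bytes,
                   tcp_data: bytes, key: bytes) -> bytes:
    """Compute the 16-byte digest carried in TCP option kind 19
    (RFC 2385): MD5 over the TCP pseudo-header, TCP header,
    segment data, and the shared key, in that order."""
    tcp_len = len(tcp_header) + len(tcp_data)
    # Pseudo-header: source IP, destination IP, zero pad, protocol 6 (TCP), length
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, tcp_len)
    return hashlib.md5(pseudo + tcp_header + tcp_data + key).digest()

d1 = tcp_md5_digest(b"\x0a\x14\x01\x01", b"\x0a\x14\x01\x02",
                    b"\x00" * 20, b"hello", b"secret")
d2 = tcp_md5_digest(b"\x0a\x14\x01\x01", b"\x0a\x14\x01\x02",
                    b"\x00" * 20, b"hello", b"wrong-key")
print(len(d1), d1 == d2)  # -> 16 False: a bad key yields a different digest
```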

The TCP Enhanced Authentication Option, as specified in draft-bonica-tcpauth-05.txt, Authentication for TCP-based Routing and Management Protocols, is a TCP extension that enhances security for LDP, BGP, and other TCP-based protocols. It extends the MD5 authentication option to include the ability to change keys in an LDP or BGP session seamlessly without tearing down the session, and allows for stronger authentication algorithms to be used. It is intended for applications where secure administrative access to both endpoints of the TCP connection is normally available.

TCP peers can use this extension to authenticate messages passed between one another. This strategy improves upon the practice described in RFC 2385, Protection of BGP Sessions via the TCP MD5 Signature Option. Using this new strategy, TCP peers can update authentication keys during the lifetime of a TCP connection. TCP peers can also use stronger authentication algorithms to authenticate routing messages.

TCP enhanced authentication uses keychains that are associated with every protected TCP connection.

Keychains are configured in the config>system>security>keychain context. For more information about configuring keychains, refer to the 7705 SAR System Management Guide, “TCP Enhanced Authentication and Keychain Authentication”.

5.2. LDP Point-to-Multipoint Support

The 7705 SAR supports point-to-multipoint mLDP. This section contains information on the following topics:

5.2.1. LDP Point-to-Multipoint Configuration

A node running LDP also supports point-to-multipoint LSP setup using LDP. By default, the node advertises the capability to a peer node using the point-to-multipoint capability TLV in the LDP initialization message.

The multicast-traffic configuration option (per interface) restricts or allows the use of an interface for LDP multicast traffic forwarding towards a downstream node. The interface configuration option does not restrict or allow the exchange of the point-to-multipoint FEC by way of an established session to the peer on an interface, but only restricts or allows the use of next hops over the interface.

5.2.2. LDP Point-to-Multipoint Protocol

Only a single generic identifier range is defined for signaling a multipoint data tree (MDT) for all client applications. Implementation on the 7705 SAR reserves the range 1 to 8292 for generic point-to-multipoint LSP ID values for static point-to-multipoint LSP on the root node.

5.2.3. Make-Before-Break (MBB)

When a transit or leaf node detects that the upstream node towards the root node of a multicast tree has changed, the node follows the graceful procedure that allows make-before-break transition to the new upstream node. Make-before-break support is optional and is configured with the mp-mbb-time command. If the new upstream node does not support MBB procedures, then the downstream node waits for the configured timer to time out before switching over to the new upstream node.

5.2.4. ECMP Support

If multiple ECMP paths exist between two adjacent nodes, then the upstream node of the multicast receiver programs all entries in the forwarding plane. Only one entry is active and it is based on the ECMP hashing algorithm.

5.3. Multicast LDP Fast Upstream Switchover

This feature allows a downstream LSR of a multicast LDP (mLDP) FEC to perform a fast switchover in order to source the traffic from another upstream LSR while IGP and LDP are converging due to a failure of the upstream LSR, where the upstream LSR is the primary next hop of the root LSR for the point-to-multipoint FEC. The feature is enabled through the mcast-upstream-frr command.

The feature provides upstream fast reroute (FRR) node protection for mLDP FEC packets. The protection is at the expense of traffic duplication from two different upstream nodes into the node that performs the fast upstream switchover.

The detailed procedures for this feature are described in draft-pdutta-mpls-mldp-up-redundancy.

5.3.1. mLDP Fast Upstream Switchover Configuration

To enable the mLDP fast upstream switchover feature, configure the following option in the CLI:

     config>router>ldp>mcast-upstream-frr

When mcast-upstream-frr is enabled and LDP is resolving an mLDP FEC received from a downstream LSR, LDP checks for the existence of an ECMP next hop or a loop-free alternate (LFA) next hop to the root LSR node. If LDP finds one, it programs a primary incoming label map (ILM) on the interface corresponding to the primary next hop and a backup ILM on the interface corresponding to the ECMP or LFA next hop. LDP then sends the corresponding labels to both upstream LSR nodes. In normal operation, the primary ILM accepts packets and the backup ILM drops them. If the interface or the upstream LSR of the primary ILM goes down, causing the LDP session to go down, the backup ILM starts accepting packets.

To use the ECMP next hop, configure the ecmp max-ecmp-routes value in the system to be at least 2, using the following command:

     config>router>ecmp max-ecmp-routes

To use the LFA next hop, enable LFA using the following commands (as needed):

     config>router>isis>loopfree-alternate

     or

     config>router>ospf>loopfree-alternate

Enabling IP FRR or LDP FRR is not strictly required, since LDP only needs to know the location of the alternate next hop to the root LSR in order to send the label mapping message and program the backup ILM during the initial signaling of the tree. That is, enabling the LFA option is sufficient for providing the backup ILM information. However, if unicast IP and LDP prefixes need to be protected, then IP FRR and LDP FRR—and the mLDP fast upstream switchover—can be enabled concurrently using the following commands:

     config>router>ip-fast-reroute

     or

     config>router>ldp>fast-reroute

An mLDP FRR fast switchover relies on the fast detection of a lost LDP session to the upstream peer to which the primary ILM label had been advertised. To ensure fast detection of a lost LDP session, do the following:

  1. Enable BFD on all LDP interfaces to upstream LSR nodes. When BFD detects the loss of the last adjacency to the upstream LSR, it brings down the LDP session immediately, which causes the CSM to activate the backup ILM.
  2. If there is a concurrent T-LDP adjacency to the same upstream LSR node, enable BFD on the T-LDP peer in addition to enabling it on the interface.
  3. Enable the ldp-sync-timer option on all interfaces to the upstream LSR nodes. If an LDP session to the upstream LSR to which the primary ILM is resolved goes down for any reason other than a failure of the interface or the upstream LSR, then routing and LDP go out of synchronization. This means that the backup ILM remains activated until the next time SPF is run by IGP.
    By enabling the IGP-LDP synchronization feature, the advertised link metric changes to the maximum value as soon as the LDP session goes down. This, in turn, triggers an SPF, and LDP will likely download a new set of primary and backup ILMs.

5.3.2. mLDP Fast Upstream Switchover Behavior

This feature allows a downstream LSR to send a label binding to two upstream LSR nodes, but only accept traffic as follows:

  1. for normal operation, traffic is accepted from the ILM on the interface to the primary next hop of the root LSR for the point-to-multipoint FEC
  2. for failure operation, traffic is accepted from the ILM on the interface to the backup next hop

A candidate upstream LSR node must be either an ECMP next hop or an LFA next hop. Either option allows the downstream LSR to perform a fast switchover and to source the traffic from another upstream LSR while IGP is converging due to a failure of the LDP session of the upstream peer, which is the primary next hop of the root LSR for the point-to-multipoint FEC. That is, the candidate upstream LSR node provides upstream FRR node protection for the mLDP FEC packets.

Multicast LDP fast upstream switchover is illustrated in Figure 20. LSR U is the primary next hop for the root LSR R of the point-to-multipoint FEC. LSR U' is an ECMP or LFA backup next hop for the root LSR R of the same point-to-multipoint FEC.

Figure 20:  Multicast LDP Fast Upstream Switchover 

In Figure 20, downstream LSR Z sends a label mapping message to both upstream LSR nodes, and programs the primary ILM on the interface to LSR U and the backup ILM on the interface to LSR U'. The labels for the primary and backup ILMs must be different. Thus LSR Z attracts traffic from both ILMs. However, LSR Z blocks the ILM on the interface to LSR U' and only accepts traffic from the ILM on the interface to LSR U.

If the link to LSR U fails, or LSR U fails, causing the LDP session to LSR U to go down, LSR Z will detect the failure and reverse the ILM blocking state. In addition, LSR Z immediately starts receiving traffic from LSR U' until IGP converges and provides a new primary next hop and a new ECMP or LFA backup next hop, which may or may not be on the interface to LSR U'. When IGP convergence is complete, LSR Z updates the primary and backup ILMs in the datapath.

Note:

LDP uses the interface of either an ECMP next hop or an LFA next hop to the root LSR prefix, whichever is available, to program the backup ILM. However, ECMP next hop and LFA next hop are mutually exclusive for a given prefix. IGP installs the ECMP next hop in preference to the LFA next hop as a prefix in the routing table manager (RTM).

If one or more ECMP next hops for the root LSR prefix exist, LDP picks the interface for the primary ILM based on the rules of mLDP FEC resolution specified in RFC 6388, Label Distribution Protocol Extensions for Point-to-Multipoint and Multipoint-to-Multipoint Label Switched Paths:

  1. The candidate upstream LSRs are numbered from lowest to highest IP address.
  2. The following hash is performed:
           H = (CRC32(Opaque Value)) modulo N
           where N is the number of upstream LSRs
    The Opaque Value is the field in the point-to-multipoint FEC element immediately after the Opaque Length field. The Opaque Length indicates the length of the opaque value used in this calculation.
  3. The selected upstream LSR U is the LSR that has the number H.

LDP then picks the interface for the backup ILM using the following new rules:

           if (H + 1 < NUM_ECMP) {
                 // If the hashed entry is not the last of the next hops,
                 // pick the next entry as the backup.
                 backup = H + 1;
           } else {
                 // Wrap around and pick the first.
                 backup = 1;
           }

In some topologies, it is possible that no ECMP or LFA next hop is found. In this case, LDP programs the primary ILM only.
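Steps 1 to 3 and the backup selection above can be combined in a short sketch. The function name and addresses are illustrative; indexing here is zero-based, so the wrap-around picks the first candidate:

```python
import zlib

def select_upstream(opaque_value, candidates):
    """Pick the primary and backup upstream LSRs for an mLDP FEC.
    Candidates are numbered from lowest to highest IP address, and
    H = CRC32(opaque value) modulo N selects the primary (RFC 6388)."""
    lsrs = sorted(candidates, key=lambda ip: tuple(map(int, ip.split("."))))
    n = len(lsrs)
    h = zlib.crc32(opaque_value) % n
    # If the hashed entry is not the last, pick the next as backup;
    # otherwise wrap around to the first candidate.
    backup = h + 1 if h + 1 < n else 0
    return lsrs[h], lsrs[backup]

# CRC32 of an empty opaque value is 0, so the lowest-address LSR is primary:
print(select_upstream(b"", ["10.0.0.2", "10.0.0.1"]))  # -> ('10.0.0.1', '10.0.0.2')
```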

5.4. LDP Process Overview

Figure 21 displays the process to provision basic LDP parameters.

Figure 21:  LDP Configuration and Implementation 

5.5. Configuration Notes

Refer to the 7705 SAR Services Guide for information about signaling.

5.5.1. Reference Sources

For information on supported IETF drafts and standards, as well as standard and proprietary MIBs, refer to Standards and Protocol Support.