This chapter provides information to enable Label Distribution Protocol (LDP).
Label Distribution Protocol (LDP) is a protocol used to distribute labels in non-traffic-engineered applications. LDP allows routers to establish label switched paths (LSPs) through a network by mapping network-layer routing information directly to data link layer-switched paths.
An LSP is defined by the set of labels from the ingress Label Switching Router (LSR) to the egress LSR. LDP associates a Forwarding Equivalence Class (FEC) with each LSP it creates. A FEC is a collection of common actions associated with a class of packets. When an LSR assigns a label to a FEC, it must let other LSRs in the path know about the label. LDP helps to establish the LSP by providing a set of procedures that LSRs can use to distribute labels.
The FEC associated with an LSP specifies which packets are mapped to that LSP. LSPs are extended through a network as each LSR splices incoming labels for a FEC to the outgoing label assigned to the next hop for the given FEC.
LDP allows an LSR to request a label from a downstream LSR so it can bind the label to a specific FEC. The downstream LSR responds to the request from the upstream LSR by sending the requested label.
LSRs can distribute a FEC label binding in response to an explicit request from another LSR. This is known as Downstream On Demand (DOD) label distribution. LSRs can also distribute label bindings to LSRs that have not explicitly requested them. This is called Downstream Unsolicited (DUS).
LDP performs the label distribution only in MPLS environments. The LDP operation begins with a hello discovery process to find LDP peers in the network. LDP peers are two LSRs that use LDP to exchange label/FEC mapping information. An LDP session is created between LDP peers. A single LDP session allows each peer to learn the other's label mappings (LDP is bi-directional) and to exchange label binding information.
LDP signaling works with the MPLS label manager to manage the relationships between labels and the corresponding FEC. For service-based FECs, LDP works in tandem with the Service Manager to identify the virtual leased lines (VLLs) and Virtual Private LAN Services (VPLSs) to signal.
An MPLS label identifies the set of actions that the forwarding plane performs on an incoming packet before forwarding it. The FEC is identified through the signaling protocol (in this case, LDP) and allocated a label. The mapping between the label and the FEC is communicated to the forwarding plane. In order for this processing on the packet to occur at high speeds, optimized tables are maintained in the forwarding plane that enable fast access and packet identification.
When an unlabeled packet ingresses the IP/MPLS router, classification policies associate it with a FEC. The appropriate label is imposed on the packet, and the packet is forwarded. Other actions that can take place before a packet is forwarded are imposing additional labels, other encapsulations, learning actions, etc. When all actions associated with the packet are completed, the packet is forwarded.
When a labeled packet ingresses the router, the label or stack of labels indicates the set of actions associated with the FEC for that label or label stack. The actions are performed on the packet and then the packet is forwarded.
The LDP implementation provides support for DOD, DUS, ordered control, and liberal label retention mode.
LDP comprises a few processes that handle the protocol PDU transmission, timer-related issues, and protocol state machine. The number of processes is kept to a minimum to simplify the architecture and to allow for scalability. Scheduling within each process prevents starvation of any particular LDP session, while buffering alleviates TCP-related congestion issues.
The LDP subsystems and their relationships to other subsystems are illustrated in Figure 29. This illustration shows the interaction of the LDP subsystem with other subsystems, including memory management, label management, service management, SNMP, interface management, and RTM. In addition, debugging capabilities are provided through the logger.
Communication within LDP tasks is typically done by inter-process communication through the event queue, as well as through updates to the various data structures. The primary data structures that LDP maintains are:
The following figure shows how LDP and the other subsystems work to provide services.
LDP does not use any memory until it is instantiated. It preallocates some amount of fixed memory so that initial startup actions can be performed. Memory allocation for LDP comes out of a pool reserved for LDP that can grow dynamically as needed. Fragmentation is minimized by allocating memory in larger chunks and managing the memory internally to LDP. When LDP is shut down, it releases all memory allocated to it.
LDP assumes that the label manager is up and running. LDP will abort initialization if the label manager is not running. The label manager is initialized at system boot-up; hence, anything that causes it to fail will likely imply that the system is not functional. The 7210 devices use a label range from 28672 (28K) to 131071 (128K-1) to allocate all dynamic labels, including RSVP-allocated labels and VC labels.
The 7210 SAS devices use a single consistent interface to configure all protocols and services. CLI commands are translated to SNMP requests and are handled through an agent-LDP interface. LDP can be instantiated or deleted through SNMP. Also, LDP targeted sessions can be set up to specific endpoints. Targeted-session parameters are configurable.
LDP uses the logger interface to generate debug information relating to session setup and teardown, LDP events, label exchanges, and packet dumps. Per-session tracing can be performed.
All interaction occurs between LDP and the service manager, since LDP is used primarily to exchange labels for Layer 2 services. In this context, the service manager informs LDP when an LDP session is to be set up or torn down, and when labels are to be exchanged or withdrawn. In turn, LDP informs the service manager of relevant LDP events, such as connection setups and failures, timeouts, and labels signaled or withdrawn.
LDP activity is limited to service-related signaling. Therefore, the configurable parameters are restricted to system-wide parameters, such as hello and keepalive timeouts.
MPLS must be enabled when LDP is initialized. LDP verifies that the prerequisites are met: the system IP interface is operational, the label manager is operational, and memory is available. It then allocates itself a pool of memory and initializes its databases.
In order for a targeted LDP (T-LDP) session to be established, an adjacency must be created. The LDP extended discovery mechanism requires hello messages to be exchanged between two peers for session establishment. After the adjacency establishment, session setup is attempted.
When the LDP adjacency is established, the session setup follows as per the LDP specification. Initialization and keepalive messages complete the session setup, followed by address messages to exchange all interface IP addresses. Periodic keepalives or other session messages maintain the session liveliness.
Since TCP is back-pressured by the receiver, it is necessary to be able to push that back-pressure all the way into the protocol. Packets that cannot be sent are buffered on the session object and re-attempted as the back-pressure eases.
Label exchange is initiated by the service manager. When an SDP is attached to a service (for example, the service gets a transport tunnel), a message is sent from the service manager to LDP. This causes a label mapping message to be sent. Additionally, when the SDP binding is removed from the service, the VC label is withdrawn. The peer must send a label release to confirm that the label is not in use.
Other reasons for label actions include:
LDP closes all sockets, frees all memory, and shuts down all its tasks when it is deleted, so its memory usage is 0 when it is not running.
The implicit null label option allows a 7210 SAS egress LER to receive MPLS packets from the previous hop without the outer LSP label. The operation of the previous hop is referred to as penultimate hop popping (PHP). This option is signaled by the egress LER to the previous hop during the FEC signaling by the control protocol.
The user can configure signaling of the implicit null option for all RSVP FECs for which this node is the egress LER, using the following command:
config>router>rsvp>implicit-null-label
When the user changes the implicit null configuration option, RSVP withdraws all the FECs and re-advertises them using the new label value.
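For illustration, a minimal configuration sketch follows; it assumes RSVP is already enabled on the node and shows only the command introduced above:

    configure
        router
            rsvp
                implicit-null-label    # signal implicit null for all RSVP FECs on this egress LER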
Both inbound and outbound LDP label binding filtering is supported.
Inbound filtering (import policy) allows configuration of a policy to control the label bindings an LSR accepts from its peers. Label bindings can be filtered based on:
Note: The default import behavior is to accept all FECs received from peers. The LDP export policy can be used to explicitly add FECs (or non-LDP routes) for label propagation; it does not filter out or stop propagation of any FEC received from neighbors.
Export policy enables configuration of a policy to advertise label bindings based on:
Note: The LDP export policy will not filter out FECs. It is only used to explicitly add FECs (or non-LDP routes) for label propagation.
The default export behavior is to originate label bindings for the system address and propagate all FECs received.
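As an illustration of inbound filtering, the following sketch shows an import policy that accepts label bindings only for prefixes in a hypothetical prefix list named "pe-loopbacks" and rejects the rest; the policy-statement syntax and the import statement under the LDP context are shown as assumptions and should be verified against the CLI reference:

    configure
        router
            policy-options
                begin
                prefix-list "pe-loopbacks"
                    prefix 10.20.1.0/24 longer    # hypothetical PE loopback range
                exit
                policy-statement "ldp-import-policy"
                    entry 10
                        from
                            prefix-list "pe-loopbacks"
                        exit
                        action accept
                        exit
                    exit
                    default-action reject
                exit
                commit
            exit
            ldp
                import "ldp-import-policy"    # assumed import statement for inbound filtering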
This feature performs load balancing for LDP-based LSPs by having multiple outgoing next-hops for a given IP prefix on ingress and transit LSRs.
An LSR that has multiple equal-cost paths to a given IP prefix can receive an LDP label mapping for this prefix from each downstream next-hop peer. The LDP implementation uses the liberal label retention mode, so it retains all the labels for an IP prefix received from multiple next-hop peers.
Without ECMP support (only for LDP LSR LSPs on 7210 SAS), only one of these next-hop peers is selected and installed in the forwarding plane. The next-hop peer selection algorithm looks up the route information obtained from the RTM for this prefix and finds the first valid LDP next-hop peer (that is, the first neighbor in the RTM entry from which a label mapping was received). If the outgoing label to the installed next-hop is no longer valid (for example, if the session to the peer is lost or the peer withdraws the label), a new valid LDP next-hop peer is selected from the existing next-hop peers, and LDP reprograms the forwarding plane to use the label sent by this peer.
With ECMP support, all the valid LDP next-hop peers, that is, those that sent a label mapping for a given IP prefix, are installed in the forwarding plane. At a transit LSR, an ingress label is mapped to the next-hops that are in the RTM and from which a valid mapping label has been received. The forwarding plane then uses an internal hashing algorithm to determine how the traffic is distributed among these multiple next-hops, assigning each “flow” to a particular next-hop.
For more information about the hash algorithms at transit LSR, refer to “LAG and ECMP Hashing” in the 7210 SAS-Mxp, R6, R12, S, Sx, T Interface Configuration Guide.
If an LSR is the ingress for a specific IP prefix, LDP programs a push operation for the prefix in the forwarding engine. This creates an LSP ID to the Next Hop Label Forwarding Entry (NHLFE) (LTN) mapping and an LDP tunnel entry in the forwarding plane. LDP will also inform the Tunnel Table Manager (TTM) of this tunnel. Both the LTN entry and the tunnel entry will have a NHLFE for the label mapping that the LSR received from each of its next-hop peers.
If the LSR is to behave as a transit for a given IP prefix, LDP will program a swap operation for the prefix in the forwarding engine. This results in the creation of an Incoming Label Map (ILM) entry in the forwarding plane. The ILM entry will have to map an incoming label to possibly multiple NHLFEs. If the LSR is an egress for a specific IP prefix, LDP programs a POP entry in the forwarding engine. Programming a POP entry results in an ILM entry in the forwarding plane, but with no NHLFEs.
When unlabeled packets arrive at the ingress LER, the forwarding plane will consult the LTN entry and will use a hashing algorithm to map the packet to one of the NHLFEs (push label) and forward the packet to the corresponding next-hop peer. For labeled packets arriving at a transit or egress LSR, the forwarding plane will consult the ILM entry and either use a hashing algorithm to map it to one of the NHLFEs if they exist (swap label) or simply route the packet if there are no NHLFEs (pop label).
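Whether multiple NHLFEs can actually be used also depends on the router-wide ECMP setting; a sketch that allows up to two equal-cost next-hops follows (the value 2 is illustrative):

    configure
        router
            ecmp 2    # maximum number of equal-cost routes used per destination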
Static FEC swap is not activated unless there is a matching route in the system route table that also matches the user-configured static FEC next-hop.
The following table lists the cases in which LDP LSR ECMP hashing occurs when an MPLS-encapsulated packet is received at the LSR, and the cases in which the MAC or IP packet address fields used in hashing vary.
Number and types of labels egressing iLER | Varying MAC | Varying IP | Hashing over LAG at LSR | Hashing over ECMP paths at LSR | Notes
2 (LDP transport label and service label) | | | | | —
| ✓ | | | |
| | ✓ | | |
| ✓ | ✓ | | |
3 (LDP transport label, service label, and hash label) | | | | | The last label egressing the iLER is a hash label, whose value differs from the other two labels because it varies with the MAC and IP fields of the service traffic packets.
| ✓ | | ✓ | ✓ |
| | ✓ | ✓ | ✓ |
| ✓ | ✓ | ✓ | ✓ |
3 (LDP/RSVP transport label, BGP 3107 label, and service label) | | | | | The packet egresses the LSR between the PE and ASBR with three labels. Each label has the same value in every stream for traffic forwarded in a specific service; however, the values are not the same for traffic forwarded in multiple services using the same LDP LSP.
| ✓ | | | |
| | ✓ | | |
| ✓ | ✓ | | |
4 (LDP/RSVP transport label, BGP 3107 label, service label, and hash label) | | | | | The packet egresses the LSR between the PE and ASBR with three labels plus a hash label, which is the fourth label in the packet. Hashing does not occur at the LSR between the PE and ASBR; however, if a LAG is configured on the egress of the ASBR, the packets are hashed over the LAG members.
| ✓ | | | |
| | ✓ | | |
| ✓ | ✓ | | |
The Hello adjacency is brought up using a link Hello packet with the source IP address set to the interface borrowed IP address and the destination IP address set to 224.0.0.2.
By default, the LDP session uses the system interface address as the LSR-ID unless explicitly configured using the command config>router>ldp>interface-parameters>interface>local-lsr-id interface. This command allows the user to use the local interface as both the LSR-ID and the transport address for the link-level LDP session. Note that when the interface option is selected, the transport connection (TCP) for the link LDP session also uses the address of the local LDP interface as the transport address. If system is the value configured under the command config>router>ldp>interface-parameters>interface>transport-address, it is overridden.
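For example, the following sketch (with a hypothetical interface name "to-B") makes the link-level LDP session use the local interface address as both the LSR-ID and the transport address:

    configure
        router
            ldp
                interface-parameters
                    interface "to-B"
                        local-lsr-id interface    # use the local interface address as LSR-ID and transport address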
The LSR with the highest transport address, that is, LSR-ID in this case, will bootstrap the TCP connection and LDP session.
Source and destination IP addresses of LDP packets are the transport addresses, that is, LDP LSR-IDs of systems A and B in this case.
Source and destination addresses of targeted Hello packets are the LDP LSR-IDs of systems A and B.
The user can configure the local-lsr-id option on the targeted session and change the value of the LSR-ID to either the local interface or to some other interface name, loopback or not. If the local interface is selected, the IP address of the local interface is used as the LSR-ID. In all cases, the transport address for the LDP session and the source IP address of targeted Hello messages are updated to the new LSR-ID value.
The LSR with the highest transport address, that is, LSR-ID in this case, will bootstrap the TCP connection and LDP session.
Source and destination IP addresses of LDP messages are the transport addresses, which, in this case, are the LDP LSR-IDs of systems A and B.
This feature allows LDP to establish a Hello adjacency and to resolve unicast and multicast FECs over unnumbered LDP interfaces.
This feature also extends the support of lsp-ping, p2mp-lsp-ping, and ldp-treetrace to test an LDP unicast or multicast FEC which is resolved over an unnumbered LDP interface.
This feature does not introduce a new CLI command for adding an unnumbered interface into LDP. Instead, the fec-originate command is extended to specify the interface name, because an unnumbered interface does not have an IP address of its own. The user can, however, specify the interface name for numbered interfaces.
See the CLI section for the changes to the fec-originate command.
Consider the setup shown in the following figure.
LSR A and LSR B have the following LDP identifiers respectively:
There are two P2P unnumbered interfaces between LSR A and LSR B. These interfaces are identified on each system with their unique local link identifier. In other words, the combination of {Router-ID, Local Link Identifier} uniquely identifies the interface in OSPF or IS-IS throughout the network.
A borrowed IP address is also assigned to the interface to be used as the source address of IP packets which need to be originated from the interface. The borrowed IP address defaults to the system loopback interface address, A and B respectively in this setup. The user can change the borrowed IP interface to any configured IP interface, loopback or not, by applying the following command:
config>router>if>unnumbered [<ip-int-name | ip-address>]
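For example, the following sketch (hypothetical interface and port names) creates a P2P interface that borrows the system address, which is the default borrowed IP address described above:

    configure
        router
            interface "to-lsr-b"
                port 1/1/1              # hypothetical port
                unnumbered "system"     # borrow the system interface address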
When the unnumbered interface is added into LDP, it will have the behavior described in the following sections.
All LDP features are supported except for the following:
LDP over RSVP-TE provides end-to-end tunnels that have two important properties, fast reroute and traffic engineering, which are not available in LDP. LDP over RSVP-TE is targeted at large networks (over 100 nodes). Simply using end-to-end RSVP-TE tunnels does not scale: while an LER may not have that many tunnels, any transit node will potentially have thousands of LSPs, and if each transit node also has to deal with detours or bypass tunnels, this number can overburden the LSR.
LDP over RSVP-TE allows tunneling of user packets using an LDP LSP inside an RSVP LSP. The main application of this feature is the deployment of MPLS-based services, for example, VPRN, VLL, and VPLS services, in large-scale networks across multiple IGP areas without requiring a full mesh of RSVP LSPs between PE routers.
The network displayed in Figure 31 consists of two metro areas, Area 1 and Area 2, and a core area, Area 3. Each area makes use of TE LSPs to provide connectivity between the edge routers. To enable services between PE1 and PE2 across the three areas, LSP1, LSP2, and LSP3 are set up using RSVP-TE. In fact, six LSPs are required for bidirectional operation, but we refer to each bidirectional LSP with a single name, for example, LSP1. A targeted LDP (T-LDP) session is associated with each of these bidirectional LSP tunnels. That is, a T-LDP adjacency is created between PE1 and ABR1 and is associated with LSP1 at each end. The same is done for the LSP tunnel between ABR1 and ABR2, and finally between ABR2 and PE2. The loopback address of each of these routers is advertised using T-LDP. Similarly, backup bidirectional LDP-over-RSVP tunnels, LSP1a and LSP2a, are configured via ABR3.
This setup effectively creates end-to-end LDP connectivity that can be used by all PEs to provision services. The RSVP LSPs serve as a transport vehicle to carry the LDP packets from one area to another. Note that only the user packets are tunneled over the RSVP LSPs. The T-LDP control messages are still sent unlabeled using the IGP shortest path.
Note that in this application, the bidirectional RSVP LSP tunnels are not treated as IP interfaces and are not advertised back into the IGP. A PE must always rely on the IGP to look up the next hop for a service packet. LDP-over-RSVP introduces a new tunnel type, tunnel-in-tunnel, in addition to the existing LDP tunnel and RSVP tunnel types. If multiple tunnel types match the destination PE FEC lookup, LDP prefers an LDP tunnel over an LDP-over-RSVP tunnel by default.
The design in Figure 31 allows a service provider to build and expand each area independently without requiring a full mesh of RSVP LSPs between PEs across the three areas.
In order to participate in a VPRN service, PE1 and PE2 perform the autobind to LDP. The LDP label that represents the target PE loopback address is used below the RSVP LSP label. Therefore, a three-label stack is required.
In order to provide a VLL service, PE1 and PE2 are still required to set up a targeted LDP session directly between them. Again, a three-label stack is required: the RSVP LSP label, followed by the LDP label for the loopback address of the destination PE, and finally the pseudowire label (VC label).
This implementation supports a variation of the application in Figure 31, in which area 1 is an LDP area. In that case, PE1 pushes a two-label stack, while ABR1 swaps the LDP label and pushes the RSVP label, as shown in Figure 32.
The user creates a targeted LDP (T-LDP) session to an ABR or the destination PE. This results in LDP hellos being sent between the two routers. These messages are sent unlabeled over the IGP path. Next, the user enables LDP tunneling on this T-LDP session and optionally specifies a list of LSP names to associate with this T-LDP session. By default, all RSVP LSPs which terminate on the T-LDP peer are candidates for LDP-over-RSVP tunnels. At this point in time, the LDP FECs resolving to RSVP LSPs are added into the Tunnel Table Manager as tunnel-in-tunnel type.
Note that if LDP is also running on regular interfaces, the prefixes LDP learns are distributed over both the T-LDP session and the regular IGP interfaces. The policy controls which prefixes go over the T-LDP session, for example, only /32 prefixes, or a particular prefix range.
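A sketch of this sequence follows; the tunneling and lsp options under the targeted-session peer context are assumed here, and the peer address and LSP name are hypothetical:

    configure
        router
            ldp
                targeted-session
                    peer 10.20.1.3               # hypothetical T-LDP peer (ABR or PE)
                        tunneling
                            lsp "LSP1"           # optional: restrict to specific RSVP LSPs
                        exit
                    exit
                exit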
LDP-over-RSVP works with both OSPF and IS-IS. These protocols include the advertising router when adding an entry to the RTM. LDP-over-RSVP tunnels can be used as shortcuts for BGP next-hop resolution.
When LDP tries to resolve a prefix received over a T-LDP session, it performs a lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the destination PE and the advertising router (ABR or the destination PE itself). If the next-hop router advertised the same FEC over link-level LDP, LDP prefers the LDP tunnel by default, unless the user explicitly changed the default preference using the system-wide prefer-tunnel-in-tunnel command. If the LDP tunnel becomes unavailable, LDP selects an LDP-over-RSVP tunnel if available.
When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router(s) with the best route. If the advertising router matches the T-LDP peer, LDP then performs a second lookup for the advertising router in the Tunnel Table Manager (TTM), which returns the user-configured RSVP LSP with the best metric. If more than one LSP is configured with the best metric, LDP selects the first available LSP.
If all user-configured RSVP LSPs are down, no more action is taken. If the user did not configure any LSPs under the T-LDP session, the lookup in the TTM returns the first available RSVP LSP that terminates on the advertising router with the lowest metric.
When LDP tries to resolve a prefix received over a T-LDP session, it performs a lookup in the Routing Table Manager (RTM). This lookup returns the next hop to the destination PE and the advertising router (ABR or destination PE itself).
When searching for an LDP-over-RSVP tunnel, LDP selects the advertising router(s) with the best route. If the advertising router matches the targeted LDP peer, LDP then performs a second lookup for the advertising router in the Tunnel Table Manager (TTM), which returns the user-configured RSVP LSP with the best metric. If more than one LSP is configured with the best metric, LDP selects the first available LSP.
If all user-configured RSVP LSPs are down, an LDP tunnel is selected if available.
If the user did not configure any LSPs under the T-LDP session, a lookup in the TTM returns the first available RSVP LSP that terminates on the advertising router. If none are available, an LDP tunnel is selected if available.
Every failure in the network can be protected against except for failures of the ingress and egress PEs. All other constructs, namely the LDP-over-RSVP tunnel and the ABR, have protection available.
An RSVP LSP can deal with a failure in two ways.
Note: Only FRR one-to-one is supported with LDP-over-RSVP, with use of the implicit NULL label. In other words, the implicit NULL label must be enabled to use FRR one-to-one. FRR facility cannot be used. The software does not make any checks to enforce these restrictions; operators must ensure this by network design and configuration.
If an ABR fails, routing around the ABR requires that a new next-hop LDP-over-RSVP tunnel be found to a backup ABR. When an ABR fails, the T-LDP adjacency fails as well. Eventually, the backup ABR becomes the new next hop (after SPF converges), and LDP learns of the new next hop and can reprogram the new path.
The user enables BFD tracking of a T-LDP session by using the config>router>ldp>targeted-session>bfd-enable command.
When this command is executed, LDP registers the address of the T-LDP session peer with BFD for tracking purposes. In other words, when the BFD session goes down, the T-LDP session is also brought down. However, the BFD session coming up does not affect the state of the T-LDP session, because T-LDP has to establish a proper Hello adjacency and then a TCP connection to the peer before the T-LDP session can come up.
The source and destination addresses of the BFD session depend on whether the T-LDP peer is directly reachable over a local interface or is more than one hop away.
When the peer is on the local subnet, the BFD session used is the one associated with the local interface on the direct link to the peer. In that case, the source and destination addresses in the BFD packets are those of the local end and the far end of that interface, respectively. If multiple interfaces exist to the peer because of parallel links, the BFD session must be associated with the interface currently used by the common LDP session shared by both the T-LDP and link-level LDP sessions.
The parameters used for the BFD session, that is, transmit-interval, receive-interval, multiplier, and echo-receive, are also configured under the local interface using the config>router>interface>bfd command.
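Combining the two commands, a sketch follows (the peer sub-context, peer address, interface name, and timer values are assumptions):

    configure
        router
            interface "to-peer"
                bfd 100 receive 100 multiplier 3    # assumed BFD parameter syntax
            exit
            ldp
                targeted-session
                    peer 10.20.1.5                  # hypothetical T-LDP peer
                        bfd-enable                  # register the peer address with BFD
                    exit
                exit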
Note that the local interface BFD session is used regardless of whether the LDP session, and underlying TCP connection, was bootstrapped by the link-level LDP Hello adjacency or the T-LDP Hello adjacency. Furthermore, if the BFD session goes down, it brings down the state of both the T-LDP session and the link-level LDP session sharing the same LDP session.
When the peer is several hops away, the BFD session used is the one associated with the loopback interface corresponding to the LSR-ID of the T-LDP session. The LSR-ID is used to establish the Hello adjacency with the peer. By default, the LSR-ID matches the system interface address, but the user can change it to any other loopback interface address. In that case, the source and destination addresses in the BFD packets match the local-end LSR-ID and the far-end address specified for the peer, respectively. The parameters used for the BFD session are also those configured under the loopback interface corresponding to the LSR-ID using the bfd command in the config>router>interface context.
Given that the BFD session used to track the same T-LDP peer may move from a link interface to a loopback interface depending on route reachability, it is important that the user configures the BFD session parameters consistently on both interfaces.
The link interface BFD session is sourced and maintained on the IOM while the loopback interface BFD session is sourced and maintained on the CPM. As a result, the system level BFD resource count reflects the worst case where each T-LDP session is using two BFD sessions.
The user enables the use of Downstream-on-Demand (DoD) label distribution by an LDP session using the command config>router>ldp>peer-parameters>peer>dod-label-distribution.
When this option is enabled, LDP sets the A-bit in the Label Initialization message when the LDP session to the peer is established. When both peers set the A-bit, both use the DoD label distribution method over the LDP session (RFC 5036).
This feature can only be enabled on a link-level LDP session and applies to prefix labels only, not service labels.
As soon as the link LDP session comes up, the 7210 SAS sends a label request to the DoD peer for the FEC prefix corresponding to the peer's LSR-ID. The DoD peer LSR-ID is found in the basic Hello discovery messages the peer used to establish the Hello adjacency with the 7210 SAS.
Similarly, if the 7210 SAS and the directly attached DoD peer enter into the extended discovery and establish a targeted LDP session, the 7210 SAS immediately sends a label request for the FEC prefix corresponding to the peer's LSR-ID found in the extended discovery messages.
However, the 7210 SAS node does not advertise any <FEC, label> bindings, including the FEC of its own LSR-ID, unless the DoD peer requested it through a Label Request message.
When the DoD peer sends a label request for any FEC prefix, the 7210 SAS replies with a <FEC, label> binding for that prefix if the FEC is already activated on the 7210 SAS. If not, the 7210 SAS replies with a notification message containing the status code “no route”. In the latter case, the 7210 SAS does not attempt to send a label request to the next hop for the FEC prefix when the LDP session to this next hop uses the DoD label distribution mode; hence the reference to single-hop LDP DoD procedures.
The single-hop LDP DoD procedures ensure that the 7210 SAS has a label for the LDP DoD peer whenever it is needed.
The 7210 SAS needs a label for the directly attached DoD peer in the following cases:
The 7210 SAS also supports sending and receiving the Label Abort Request message. This message is used to abort an outstanding request for a label when no response is received from the peer within a finite amount of time.
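For example, enabling DoD toward a directly attached peer (hypothetical peer address) uses the command introduced above:

    configure
        router
            ldp
                peer-parameters
                    peer 10.20.1.2                  # hypothetical directly attached peer
                        dod-label-distribution      # set the A-bit in the Label Initialization message
                    exit
                exit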
7210 SAS devices do not support ECMP for LDP over RSVP LSPs.
LDP Fast Re-Route (FRR) is a feature that allows the user to provide local protection for an LDP FEC by precomputing and downloading to the IOM both a primary and a backup NHLFE for the FEC.
The primary NHLFE corresponds to the label of the FEC received from the primary next-hop as per standard LDP resolution of the FEC prefix in RTM. The backup NHLFE corresponds to the label received for the same FEC from a Loop-Free Alternate (LFA) next-hop.
The LFA next-hop precomputation by IGP is described in RFC 5286 – “Basic Specification for IP Fast Reroute: Loop-Free Alternates”. LDP FRR relies on using the label-FEC binding received from the LFA next-hop to forward traffic for a given prefix as soon as the primary next-hop is not available. This means that a node resumes forwarding LDP packets to a destination prefix without waiting for the routing convergence. The label-FEC binding is received from the loop-free alternate next-hop ahead of time and is stored in the Label Information Base since LDP on the router operates in the liberal retention mode.
This feature requires that IGP performs the Shortest Path First (SPF) computation of an LFA next-hop, in addition to the primary next-hop, for all prefixes used by LDP to resolve FECs. IGP also populates both routes in the Routing Table Manager (RTM).
The user enables Loop-Free Alternate (LFA) computation by SPF under the IS-IS or OSPF routing protocol level:
config>router>isis>loopfree-alternate
config>router>ospf>loopfree-alternate
The above commands instruct the IGP SPF to attempt to precompute both a primary next-hop and an LFA next-hop for every learned prefix. When found, the LFA next-hop is populated into the RTM along with the primary next-hop for the prefix.
Next the user enables the use by LDP of the LFA next-hop by configuring the following option:
config>router>ldp>fast-reroute
When this command is enabled, LDP uses both the primary next-hop and the LFA next-hop, when available, for resolving the next-hop of an LDP FEC against the corresponding prefix in the RTM. This results in LDP programming a primary NHLFE and a backup NHLFE into the IOM for each next-hop of a FEC prefix for the purpose of forwarding packets over the LDP FEC.
Note that because LDP can detect the loss of a neighbor/next-hop independently, it is possible that it switches to the LFA next-hop while the IGP is still using the primary next-hop. To avoid this situation, it is recommended to enable IGP-LDP synchronization on the LDP interface:
config>router>interface>ldp-sync-timer seconds
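Putting these commands together, the following sketch enables LFA computation in IS-IS, LDP FRR, and IGP-LDP synchronization (the interface name and timer value are hypothetical):

    configure
        router
            isis
                loopfree-alternate          # precompute LFA next-hops in SPF
            exit
            ldp
                fast-reroute                # program primary and backup NHLFEs
            exit
            interface "to-lsr-b"
                ldp-sync-timer 10           # IGP-LDP synchronization hold timer (seconds)
            exit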
The user can instruct the IGP to exclude all interfaces participating in a specific IS-IS level or OSPF area from the SPF LFA computation. This provides a way of reducing the LFA SPF calculation where it is not needed:
config>router>isis>level>loopfree-alternate-exclude
config>router>ospf>area>loopfree-alternate-exclude
Note that if IGP shortcuts are also enabled in LFA SPF, LSPs with a destination address in that IS-IS level or OSPF area are also not included in the LFA SPF calculation.
The user can also exclude a specific IP interface from being included in the LFA SPF computation by IS-IS or OSPF:
config>router>isis>interface>loopfree-alternate-exclude
config>router>ospf>area>interface>loopfree-alternate-exclude
Note that when an interface is excluded from the LFA SPF in IS-IS, it is excluded in both level 1 and level 2. When the user excludes an interface from the LFA SPF in OSPF, it is excluded in all areas. However, the above OSPF command can only be executed under the area in which the specified interface is primary; once enabled, the interface is excluded in that area and in all other areas where the interface is secondary. If the user attempts to apply it to an area where the interface is secondary, the command fails.
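For example (hypothetical level and interface names), an entire IS-IS level and a specific IS-IS interface can be excluded from the LFA SPF:

    configure
        router
            isis
                level 1
                    loopfree-alternate-exclude      # exclude all Level 1 interfaces
                exit
                interface "to-edge"
                    loopfree-alternate-exclude      # exclude this interface in both levels
                exit
            exit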
The LDP FEC resolution when LDP FRR is not enabled operates as follows. When LDP receives a FEC/label binding for a prefix, it resolves it by checking whether the exact prefix, or a longest-match prefix when the aggregate-prefix-match option is enabled in LDP, exists in the routing table and is resolved against a next-hop that is an address belonging to the LDP peer that advertised the binding, as identified by its LSR-ID. When the next-hop is no longer available, LDP deactivates the FEC and deprograms the NHLFE in the datapath. LDP also immediately withdraws the labels it advertised for this FEC and deletes the ILM in the datapath, unless the user configured the label-withdrawal-delay option to delay this operation. Traffic received while the ILM is still in the datapath is dropped. When routing computes and populates the routing table with a new next-hop for the prefix, LDP resolves the FEC again and programs the datapath accordingly.
When LDP FRR is enabled and an LFA backup next-hop exists in the RTM for the FEC prefix, or for the longest prefix the FEC prefix matches when the aggregate-prefix-match option is enabled in LDP, LDP resolves the FEC as above but programs the datapath with both a primary NHLFE and a backup NHLFE for each next-hop of the FEC.
In order to perform a switchover to the backup NHLFE in the fast path, LDP follows the uniform FRR failover procedures, which are also supported with RSVP FRR.
When any of the following events occurs, LDP instructs the IOM in the fast path to enable the backup NHLFE for each FEC next-hop impacted by the event. The IOM does this by simply flipping a single state bit associated with the failed interface or neighbor/next-hop:
The tunnel-down-dump-time option or the label-withdrawal-delay option, when enabled, does not cause the corresponding timer to be activated for a FEC as long as a backup NHLFE is still available.
LDP can only track an LDP peer with which it established a link LDP session using the Hello and Keep-Alive timers. If an IGP protocol registered with BFD on an IP interface to track a neighbor and the BFD session times out, the next-hops for prefixes advertised by the neighbor are no longer resolved. This, however, does not bring down the link LDP session to the peer, because the LDP peer is not directly tracked by BFD. More importantly, the LSR-ID of the LDP peer may not coincide with the neighbor's router ID that the IGP is tracking by way of BFD.
In order to properly track the link LDP peer, LDP needs to track the Hello adjacency to its peer by registering with BFD. This way, the peer next-hop is tracked.
The user enables Hello adjacency tracking with BFD by enabling BFD on an LDP interface:
config>router>ldp>interface-parameters>interface>enable-bfd
The parameters used for the BFD session, that is, transmit-interval, receive-interval, and multiplier, are those configured under the IP interface in the existing implementation:
config>router>interface>bfd
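A combined sketch follows (interface name and timer values are hypothetical; the BFD parameter syntax is assumed):

    configure
        router
            interface "to-peer"
                bfd 100 receive 100 multiplier 3    # BFD timers on the IP interface
            exit
            ldp
                interface-parameters
                    interface "to-peer"
                        enable-bfd                  # track the Hello adjacency with BFD
                    exit
                exit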
When multiple links exist to the same LDP peer, a Hello adjacency is established over each link, but only a single LDP session exists to the peer, using a TCP connection over one of the link interfaces. Also, a separate BFD session should be enabled on each LDP interface. If a BFD session times out on a specific link, LDP immediately brings down the Hello adjacency on that link. In addition, if there are FECs that have their primary NHLFE over this link, LDP triggers the LDP FRR procedures by sending the neighbor/next-hop down message to the IOM. This results in moving the traffic of the impacted FECs to an LFA next-hop on a different link to the same LDP peer, or to an LFA backup next-hop on a different LDP peer, depending on the lowest backup cost path selected by the IGP SPF.
As soon as the last Hello adjacency goes down because BFD timed out, the LDP session goes down and the LDP FRR procedures are triggered. This results in moving the traffic to an LFA backup next-hop on a different LDP peer.
Whenever the SPF computation determines that there is more than one primary next-hop for a prefix, it does not program any LFA next-hop in the RTM. In this case, the LDP FEC resolves to the multiple primary next-hops, which provides the required protection.
Also note that when the system ECMP value is set to ecmp=1 or to no ecmp (both translate to the same behavior and are the default), SPF is still able to use the overflow ECMP links as LFA next-hops in these two cases.
This section describes support for LDP P2MP.
A node running LDP also supports P2MP LSP setup using LDP. By default, it advertises this capability to a peer node using the P2MP capability TLV in the LDP initialization message.
A per-interface configuration option is provided to restrict or allow the use of the interface for LDP multicast traffic forwarding toward a downstream node. This option does not restrict or allow the exchange of P2MP FECs over the established session to the peer on an interface; it only restricts or allows the use of next-hops over the interface. By default, the LDP P2MP capability is disabled on an interface.
Only a single generic identifier range is defined for signaling multipoint trees for all client applications. The implementation on the 7210 SAS reserves the range 1 to 8292 of generic LSP P2MP IDs on the root node for static P2MP LSPs.
SPF computation in IS-IS and OSPF is enhanced to compute LFA alternate routes for each learned prefix and populate them in the RTM.
The following figure shows a simple network topology with point-to-point (P2P) interfaces and highlights three routes to reach router R5 from router R1.
The primary route is by way of R3. The LFA route by way of R2 has two equal-cost paths to reach R5. The path by way of R3 protects against failure of link R1-R3. This route is computed by R1 by checking that the cost for R2 to reach R5 by way of R3 is lower than the cost by way of routers R1 and R3. This condition is referred to as the loop-free criterion: R2 must be loop-free with respect to source node R1.
The path by way of R2 and R4 can be used to protect against the failure of router R3. However, with the link R2-R3 metric set to 5, R2 sees the same cost to forward a packet to R5 by way of R3 and by way of R4. Thus, R1 cannot guarantee that enabling the LFA next-hop R2 will protect against R3 node failure. This means that the LFA next-hop R2 provides link protection only for prefix R5. If the metric of link R2-R3 is changed to 8, the LFA next-hop R2 provides node protection, since a packet to R5 always goes over R4. In other words, R2 must be loop-free with respect to both the source node R1 and the protected node R3.
Consider the case where the primary next-hop uses a broadcast interface shown in the following figure.
In order for next-hop R2 to be a link-protect LFA for route R5 from R1, it must be loop-free with respect to the Pseudo-Node (PN) of the R1-R3 link. However, because R2 also has a link to that PN, its costs to reach R5 by way of the PN or by way of router R4 are the same. Thus, R1 cannot guarantee that enabling the LFA next-hop R2 will protect against a failure impacting link R1-PN, since this may cause the entire subnet represented by the PN to go down. If the metric of link R2-PN is changed to 8, the R2 next-hop will be an LFA providing link protection.
The following are the detailed rules for this criterion as provided in RFC 5286:
For the case of a P2P interface, if SPF finds multiple LFA next-hops for a given primary next-hop, it applies the following selection algorithm:
For the case of a broadcast interface, a node-protect LFA is not necessarily a link-protect LFA if the path to the LFA next-hop goes over the same PN as the primary next-hop. Similarly, a link-protect LFA may not guarantee link protection if it goes over the same PN as the primary next-hop.
The selection algorithm when SPF finds multiple LFA next-hops for a given primary next-hop is modified as follows:
Note that this algorithm is more flexible than strictly applying Rule 3 above, the link-protect rule in the presence of a PN specified in RFC 5286. A node-protect LFA that does not avoid the PN, and therefore does not guarantee link protection, can still be selected as a last resort. Similarly, a link-protect LFA that does not avoid the PN may still be selected as a last resort. Both the computed primary next-hop and LFA next-hop for a given prefix are programmed into the RTM.
When SPF resolves OSPF inter-area prefixes or IS-IS inter-level prefixes, it will compute an LFA backup next-hop to the same exit area/border router as used by the primary next-hop.
An LFA SPF policy allows the user to apply specific criteria, such as admin group and SRLG constraints, to the selection of a LFA backup next-hop for a subset of prefixes that resolve to a specific primary next-hop. See more details in the Loop-Free Alternate Shortest Path First (LFA SPF) Policies section in the 7210 SAS-Mxp, R6, R12, S, Sx, T Routing Protocols Guide.
To extend LDP across multiple areas of an IGP instance or across multiple IGP instances, the current standard LDP implementation based on RFC 3036 requires that all the /32 prefixes of PEs be leaked between the areas or instances. This is because an exact match of the prefix in the routing table is required to install the prefix binding in the LDP Forwarding Information Base (FIB).
Multi-area and multi-instance extensions to LDP provide an optional behavior by which LDP installs a prefix binding in the LDP FIB by simply performing a longest prefix match with an aggregate prefix in the routing table (RIB). The ABR is configured to summarize the /32 prefixes of PE routers. This method is compliant with RFC 5283, LDP Extension for Inter-Area Label Switched Paths (LSPs).
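For example, assuming the aggregate-prefix-match context under LDP referenced earlier in this chapter, the longest-match behavior could be enabled as follows; the exact sub-options (such as prefix-exclude) should be verified against the CLI reference:

    configure
        router
            ldp
                aggregate-prefix-match      # resolve FECs by longest prefix match (RFC 5283)
                exit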
The following figure shows the basic LDP parameter provisioning process.
The following figure shows the LDP configuration and implementation process.