4. MPLS Forwarding Policy

The MPLS forwarding policy provides an interface for adding user-defined label entries into the label FIB of the router. The user defines an entry in the label FIB by configuring a set of primary and backup next-hops and binding a label to them. This feature is targeted at router programmability in SDN environments.

4.1. Introduction

This section provides information about configuring and operating an MPLS forwarding policy using the CLI.

An application of the MPLS forwarding policy is what is referred to as a static label route. A static label route allows the user to configure a swap operation of an incoming label to an implicit-null label and to forward the exposed packet over all the next-hops resolved by looking up the route of the configured indirect next-hop in the routing table. In addition, the static label route feature provides the ability to configure a swap operation of an incoming label to an implicit-null label and to forward the packet to a primary or backup direct next-hop. The policy associates the incoming label, referred to as a binding label, with a Next-Hop Group (NHG) in which the primary and backup direct or indirect next-hops are defined. This type of MPLS forwarding policy is referred to as a label-binding policy.

The following example shows the configuration commands for creating a static label route using an MPLS forwarding policy.

CLI Syntax:
config>router>mpls
    forwarding-policies
    no forwarding-policies
        reserved-label-block name <64 characters>
        no reserved-label-block
        [no] shutdown
        [no] forwarding-policy name <64 characters>
            binding-label label-value <32..1048575>
            no binding-label
            [no] ingress-statistics
                [no] shutdown
            preference preference-value <1..255>
            no preference
            revert-timer seconds <1..600> (default: 0)
            no revert-timer
            [no] shutdown
            next-hop-group index <1..32> [resolution-type {direct | indirect}]
            no next-hop-group index <1..32>
                primary-next-hop
                no primary-next-hop
                    next-hop {ip-address | ipv6-address}
                    no next-hop
                backup-next-hop
                    next-hop {ip-address | ipv6-address}
                    no next-hop
                [no] shutdown
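
The following is a minimal configuration sketch of a label-binding policy implementing a static label route with one NHG of resolution type direct. The reserved-label-block name, policy name, binding label, and next-hop addresses are illustrative only; the sketch assumes a reserved label block named "static-labels" covering label 20001 already exists on the router.

Example:
config>router>mpls
    forwarding-policies
        reserved-label-block "static-labels"
        forwarding-policy "static-lbl-route-1"
            binding-label 20001
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.10.10.1
                backup-next-hop
                    next-hop 10.10.20.1
                no shutdown
            no shutdown
        no shutdown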

4.2. Feature Validation and Operation Procedures

The MPLS forwarding policy follows a number of configuration and operation rules which are enforced for the lifetime of the policy.

There are two levels of validation:

  1. The first level of validation is performed at provisioning time. Once these validation rules are met, the user can bring up the policy (no shutdown command), and the policy is then stored in the forwarding policy database.
  2. The second level of validation is performed when the forwarding policy database resolves the policy.

4.2.1. Policy Parameters and Validation Procedure Rules

The following policy parameters and validation rules apply to the MPLS forwarding policy:

  1. The binding-label command allows the user to specify the label to bind to the policy, such that labeled packets matching the ILM of the binding label can be forwarded over the NHG of the policy.
    The ILM entry is created only when a label is configured. Only a provisioned binding label, from a reserved label block, is supported. The user must configure the name of the reserved label block using the reserved-label-block command.
    The payload of the packet forwarded using the ILM (the payload underneath the swapped label) can be IPv4, IPv6, or MPLS. The family of the primary and backup next-hops of the NHG within the policy is not relevant to the type of payload of the forwarded packets.
  2. Changes to the value of the binding-label parameter require a shutdown of the specific forwarding-policy context.
  3. A change to the name of the reserved-label-block requires a shutdown of the forwarding-policies context. The shutdown is not required if the user extends or shrinks the range of the reserved-label-block.
  4. The preference parameter allows the user to configure multiple label-binding policies with the same binding label, providing the capability to achieve a 1:N backup strategy for the forwarding policy (see the example after this list). Only the most preferred policy, that is, the one with the lowest numerical preference value, is activated in the data path, as explained in Policy Resolution and Operational Procedures.
  5. Changes to the value of the preference parameter require a shutdown of the specific forwarding-policy context.
  6. A maximum of eight label-binding policies, with different preference values, are allowed for each unique value of the binding label.
    Label-binding policies with exactly the same value of the tuple {binding label, preference} are duplicates, and their configuration is not allowed.
    The user cannot perform no shutdown on the duplicate policy.
  7. The revert-timer command configures the time to wait before switching the resolution back from the backup next-hop to the restored primary next-hop within a given NHG. By default, this timer is disabled, meaning that the NHG immediately reverts to the primary next-hop when it is restored.
  8. The MPLS forwarding policy feature allows for a maximum of one NHG consisting of, at most, one primary next-hop and one backup next-hop.
  9. The next-hop command allows the user to specify a direct next-hop address or an indirect next-hop address.
  10. The no shutdown command at the forwarding-policy context fails if:
    1. The address values of the primary and backup next-hops, within the same NHG, are duplicates.
  11. A second level of validation is performed by the forwarding policy database at resolution time, meaning each time the policy is re-evaluated:
    1. If the NHG primary or backup next-hop resolves to a route whose type does not match the configured value in resolution-type, that next-hop is made operationally “down”.
      A DOWN reason code is shown in the state of the next-hop.
    2. The primary and backup next-hops of an NHG are looked up in the routing table. The lookups can match a direct next-hop in the case of the direct resolution type. They can also match a static, IGP, or BGP route for the indirect resolution type, but only the set of IP next-hops of the route is selected. Tunnel next-hops are not selected, and if they are the sole next-hops for the route, the NHG is put into the operationally “down” state.
    3. The first 32, out of a maximum of 64, resolved IP next-hops are selected for resolving the primary or backup next-hop of an NHG of resolution-type indirect.
    4. If the primary next-hop is operationally “down”, the NHG uses the backup next-hop if it is UP. If both are operationally down, the NHG is DOWN.
    5. If the binding label is not available, meaning it is either outside the range of the configured reserved-label-block, or is used by another MPLS forwarding policy, or by another application, the label-binding policy is put operationally “down” and a retry mechanism will check the label availability in the background.
      A policy level DOWN reason code is added to alert users who may then choose to modify the binding label value.
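
The following sketch illustrates the 1:N protection of rule 4. Two label-binding policies share binding label 20001 with different preference values; while both are up, only the policy with preference 10 is activated in the data path, and the preference 20 policy takes over if the first goes down. All names and values are illustrative.

Example:
config>router>mpls
    forwarding-policies
        reserved-label-block "static-labels"
        forwarding-policy "lbl-20001-primary"
            binding-label 20001
            preference 10
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.10.10.1
                no shutdown
            no shutdown
        forwarding-policy "lbl-20001-standby"
            binding-label 20001
            preference 20
            next-hop-group 1 resolution-type direct
                primary-next-hop
                    next-hop 10.10.20.1
                no shutdown
            no shutdown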

4.2.2. Policy Resolution and Operational Procedures

The following are the details of these two levels of validation, the resolution of the policy, and the operational behavior of the forwarding policy:

  1. The forwarding policy database activates the best label-binding policy, among the named policies sharing the same binding label, by selecting the policy with the lowest preference value. This policy is then programmed into the label FIB table in the data path, as detailed in Data Path Support.
    If this policy goes down, the forwarding policy database performs a re-evaluation and activates the named policy with the next lowest preference value for the same binding label value.
    If a more preferred policy comes back up, the forwarding policy database reverts to the more preferred policy and activates it.
  2. A policy is considered UP when it is the best policy activated by the forwarding policy database and when its NHG is operationally UP. An NHG of an active policy is considered UP when at least one of the primary or backup next-hops is operationally UP.
  3. When the config>router>mpls or config>router>mpls>forwarding-policies context is set to shutdown, all forwarding policies are set to DOWN in the forwarding policy database and deprogrammed from the IOM and the data path.
  4. When a policy is set to shutdown, it is deleted from the forwarding policy database and deprogrammed from the IOM and the data path.
  5. When an NHG is set to shutdown, it is deprogrammed from the IOM and the data path.
  6. The no version of the forwarding-policies CLI node deletes all policies from the forwarding policy database.

4.3. Data Path Support

4.3.1. MPLS Forwarding Policy Data Path Model

The MPLS forwarding policy data path implementation adheres to the following model and principles of operation:

  1. NHG of resolution type indirect:
    1. The primary or backup next-hop is modeled as a swap operation from the binding label to an implicit-null label over multiple outgoing interfaces (multiple NHLFEs) corresponding to the resolved next-hops of the indirect route.
    2. Packets of flows are sprayed over the resolved next-hops of an NHG of resolution type indirect as one-level ECMP spraying. See Spraying of Packets over the Next-hops of an NHG for more details.
    3. An NHG of resolution type indirect does not support uniform failover; the CPM therefore programs only the active indirect next-hop, primary or backup, at any given point in time.
    4. The forwarding database tracks the primary or backup next-hop in the routing table. A route delete of the primary indirect next-hop causes CPM to program the backup indirect next-hop in data path.
      Note:

      A route is deleted from the routing table if it is no longer resolved by IGP, BGP, or the static route module. In the case of a static route, it is deleted when the user deletes the route.

      A route modify to the indirect primary or backup next-hop causes CPM to update its resolved next-hops and to update the data path if it is the active indirect next-hop.
    5. When the primary indirect next-hop is restored and is added back into the routing table, CPM waits for an amount of time equal to the user-programmed revert-timer before updating the data path. However, if the backup indirect next-hop fails while the timer is running, CPM updates the data path immediately.
  2. NHG of resolution type direct
    1. The primary or backup next-hop is modeled as a swap operation from the binding label to an implicit-null label over a single outgoing interface (single NHLFE) to the next-hop.
    2. An NHG of resolution type direct supports uniform failover. The forwarding policy database assigns a Protect-Group ID (PG-ID) to the primary next-hop and pre-programs the backup next-hop in the data path. During a failure affecting the primary next-hop, CPM signals the PG-ID switch to the data path, which then immediately begins using the NHLFE of the backup next-hop for flow packets mapped to NHGs of all forwarding policies that share this primary next-hop.
      An interface down event sent by CPM to the data path causes the data path to switch the PG-ID of all next-hops associated with this interface and perform the uniform failover procedure for NHGs of all policies which share these PG-IDs.
    3. Any subsequent network event causing a failure of the backup next-hop while the primary next-hop is still down blackholes the traffic of the NHG until CPM updates the policy with a restored primary or backup next-hop.
    4. The forwarding database tracks the primary or backup next-hop in the routing table. A route delete of the primary direct next-hop causes CPM to send a PG-ID switch to the data path.
      A route modify to the direct primary or backup next-hop causes CPM to update the MPLS forwarding database and to update the data path because both next-hops are programmed.
    5. When the primary direct next-hop is restored and is added back into the routing table, CPM waits for an amount of time equal to the user-programmed revert-timer before activating the primary next-hop and updating the data path. However, if the backup direct next-hop fails while the timer is running, CPM activates the primary next-hop and updates the data path immediately. A configuration example using the revert-timer is shown after this list.
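
The following sketch shows an NHG of resolution type indirect with a 60-second revert timer; after the primary indirect next-hop is restored, CPM waits 60 seconds before switching the data path back from the backup, unless the backup fails in the meantime. The label, timer, and address values are illustrative.

Example:
config>router>mpls
    forwarding-policies
        reserved-label-block "static-labels"
        forwarding-policy "lbl-20002-indirect"
            binding-label 20002
            revert-timer 60
            next-hop-group 1 resolution-type indirect
                primary-next-hop
                    next-hop 192.0.2.1
                backup-next-hop
                    next-hop 192.0.2.2
                no shutdown
            no shutdown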

4.3.2. Spraying of Packets over the Next-hops of an NHG

When an MPLS packet, including an MPLS-over-GRE packet, with a binding label in the label stack is received over any network IP interface, the packet is forwarded over the primary or backup next-hop of the NHG. Furthermore, spraying of packets is performed over the resolved next-hops of an NHG of resolution type indirect.

The router performs the following procedure:

  1. If the packet is MPLS-over-GRE encapsulated, the router performs the GRE header processing. Refer to section 2.6.2 MPLS-over-GRE and IP-over-GRE Termination Function in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Router Configuration Guide.
  2. The router pops one or more labels and, if there is a match with the ILM of a binding label, swaps the label to an implicit-null label and forwards the packet to the outgoing interface. The outgoing interface is selected from the set of ECMP next-hops of the active route, based on the LSR hash on the headers of the received MPLS packet.
    1. The hash calculation follows the method in the user configuration of the command lsr-load-balancing {lbl-only | lbl-ip | ip-only} for packets that are MPLS-only encapsulated (see the configuration sketch after this list).
    2. The hash calculation follows a different method for packets that are MPLS-over-GRE encapsulated. Refer to LSR Hashing of MPLS-over-GRE Encapsulated Packet in section Changing Default Per Flow Hashing Inputs of the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide.
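
As a sketch, the hashing method for MPLS-only encapsulated packets could be selected system-wide as follows; note that the placement of the lsr-load-balancing command under the system load-balancing context is an assumption here, and the authoritative context and options are described in the Interface Configuration Guide referenced above.

Example:
config>system>load-balancing
    lsr-load-balancing lbl-ip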

4.3.3. Outgoing Packet Ethertype Setting and TTL Handling in Label-binding Policy

The router sets the Ethertype field value of the outgoing packet according to the following rules:

  1. If the swapped label is not the Bottom-of-Stack label, Ethertype is set to the MPLS value.
  2. If the swapped label is the Bottom-of-Stack label, Ethertype is set to the IPv4 or IPv6 value when the first nibble of the exposed IP packet is 4 or 6 respectively.
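
For example, assuming standard Ethertype values, a swap at the Bottom-of-Stack exposing a packet whose first nibble is 4 results in the IPv4 Ethertype 0x0800 (0x86DD for a first nibble of 6), while a swap of a label that is not at the Bottom-of-Stack results in the MPLS Ethertype 0x8847.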

The router sets the TTL of the outgoing packet according to the behavior of a PHP LSR, as follows:

  1. The TTL of a forwarded IP packet is set to MIN(MPLS_TTL-1, IP_TTL), where MPLS_TTL refers to the TTL in the outermost label in the popped stack and IP_TTL refers to the TTL in the exposed IP header.
  2. The TTL of a forwarded MPLS packet is set to MIN(MPLS_TTL-1, INNER_MPLS_TTL), where MPLS_TTL refers to the TTL in the outermost label in the popped stack and INNER_MPLS_TTL refers to the TTL in the exposed label.
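
For example, if a packet arrives with a TTL of 255 in the outermost popped label and a TTL of 64 in the exposed IP header, the forwarded packet carries an IP TTL of MIN(255-1, 64) = 64. If the exposed IP TTL were instead 255, the forwarded packet would carry a TTL of MIN(255-1, 255) = 254.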

4.4. Statistics

4.4.1. Ingress Statistics

The user enables ingress statistics for an MPLS forwarding policy using the CLI provided in Introduction.

The ingress statistics feature is associated with the binding label, that is, the ILM of the forwarding policy, and provides aggregate packet and octet counters for packets matching the binding label.
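
The following is a minimal sketch of enabling ingress statistics within a forwarding policy, using the syntax shown in Introduction; the policy name is illustrative.

Example:
config>router>mpls
    forwarding-policies
        forwarding-policy "static-lbl-route-1"
            ingress-statistics
                no shutdown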

The per-ILM stat index for the MPLS forwarding policy feature is assigned at the time the first instance of the policy is programmed in the data path. All instances of the same policy, that is, instances with the same binding-label, share the same stat index regardless of the preference parameter value.

The stat index remains assigned as long as the policy exists and the ingress-statistics context is not shut down. If the last instance of the policy is removed from the forwarding policy database, the CPM frees the stat index and returns it to the pool.

If ingress statistics is not configured or is shut down in a specific instance of the forwarding policy, identified by a unique value of the pair {binding-label, preference}, an assigned stat index is not incremented if that instance of the policy is activated.

If a stat index is not available at allocation time, the allocation fails and a retry mechanism will check the stat index availability in the background.