The following rules are used for an NHG with a resolution type of direct:
Each NHG is modeled as a pair of {primary, backup} NHLFEs. The following are the specifics of the label operation:
For a label-binding policy, forwarding over the primary or backup next hop is modeled as a swap operation from the binding label to the configured label stack, or to an implicit-null label (if the pushed-labels command is not configured), over a single outgoing interface to the next hop.
For an endpoint policy, forwarding over the primary or backup next hop is modeled as a push operation of the configured label stack, or of an implicit-null label (if the pushed-labels command is not configured), over a single outgoing interface to the next hop.
The labels configured by the pushed-labels command are not validated.
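The following sketch models this label operation, assuming a simple Python list representation of the label stack; the function name, arguments, and implicit-null handling are illustrative only, not an SR OS interface.

```python
IMPLICIT_NULL = 3  # reserved MPLS label: signaled but never placed on the wire

def outgoing_stack(policy_type, binding_label, in_stack, pushed_labels=None):
    """Labels sent toward the (primary or backup) next hop of the NHG."""
    new_top = pushed_labels if pushed_labels else [IMPLICIT_NULL]
    if policy_type == "label-binding":
        # Swap: the binding label at the top of the incoming stack is
        # replaced by the configured labels (or by implicit-null).
        if not in_stack or in_stack[0] != binding_label:
            raise ValueError("top label is not the binding label")
        out = new_top + in_stack[1:]
    elif policy_type == "endpoint":
        # Push: the configured labels are pushed onto the packet as-is.
        out = new_top + in_stack
    else:
        raise ValueError(f"unknown policy type: {policy_type}")
    # Implicit-null is never encoded on the wire; drop it before sending.
    return [lbl for lbl in out if lbl != IMPLICIT_NULL]

# Swap binding label 20001 to the configured stack [1000, 2000]:
print(outgoing_stack("label-binding", 20001, [20001, 40], [1000, 2000]))
# -> [1000, 2000, 40]
```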
By default, packets of flows are sprayed over the set of NHGs of resolution type direct as one-level ECMP spraying. See Spraying of packets in an MPLS forwarding policy.
The user can enable weighted ECMP forwarding over the NHGs by configuring a weight against each NHG of the policy. See Spraying of packets in an MPLS forwarding policy.
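A minimal sketch of how per-flow spraying could select an NHG, assuming a CRC-based flow hash; the hash inputs and the weight normalization below are illustrative, not the router's actual hashing algorithm.

```python
import zlib

def pick_nhg(flow_key: bytes, nhgs):
    """nhgs: list of (nhg_id, weight) pairs; weight None means plain ECMP."""
    h = zlib.crc32(flow_key)
    if all(w is None for _, w in nhgs):
        # Plain one-level ECMP: every NHG gets an equal share of flows.
        return nhgs[h % len(nhgs)][0]
    # Weighted ECMP: each NHG's share of flows is proportional to its
    # weight (the policy requires a weight on every NHG for this mode).
    total = sum(w for _, w in nhgs)
    point = h % total
    for nhg_id, w in nhgs:
        if point < w:
            return nhg_id
        point -= w

# NHG 2 attracts roughly three times the flows of NHG 1.
print(pick_nhg(b"10.0.0.1->10.0.0.9:443", [(1, 10), (2, 30)]))
```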
Within an NHG of resolution type direct, the primary next hop is the preferred active path in the absence of any failure.
The NHG supports uniform failover. The forwarding policy database assigns a Protect-Group ID (PG-ID) to each of the primary and backup next hops and programs both in the data path. A failure of the active path switches traffic to the other path following the uniform failover procedures described in Active path determination and failover in a NHG of resolution type direct.
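An illustrative model of this failover behavior: because both next hops are pre-programmed in the data path, each with its own PG-ID, a failure is handled by a single PG-ID state flip rather than by reprogramming NHLFEs. The class and method names below are hypothetical.

```python
class NhgProtectGroup:
    def __init__(self, primary_pgid, backup_pgid):
        # Both paths are programmed up front; only their state changes later.
        # The primary is the preferred active path absent any failure.
        self.state = {primary_pgid: "active", backup_pgid: "inactive"}

    def pgid_switch(self, failed_pgid):
        """Uniform failover: the surviving path becomes active."""
        for pgid in self.state:
            self.state[pgid] = "inactive" if pgid == failed_pgid else "active"

nhg = NhgProtectGroup(primary_pgid=101, backup_pgid=102)
nhg.pgid_switch(failed_pgid=101)  # primary fails -> backup takes over
print(nhg.state)                  # {101: 'inactive', 102: 'active'}
```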
The forwarding database tracks the primary or backup next hop in the routing table. A route delete of the primary or backup direct next hop causes CPM to send the corresponding PG-ID switch to the data path.
A route modify of the direct primary or backup next hop causes CPM to update the MPLS forwarding database and to update the data path because both next hops are programmed.
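The sketch below contrasts the two routing-table events just described: a route delete needs only a PG-ID switch, while a route modify requires updating the programmed data-path entry. The event names and helper functions are hypothetical stand-ins for internal CPM-to-data-path messaging.

```python
from dataclasses import dataclass

@dataclass
class TrackedNextHop:
    pgid: int
    address: str

def send_pgid_switch(pgid):
    print(f"data path: switch away from PG-ID {pgid}")

def update_data_path(nh):
    print(f"data path: reprogram next hop {nh.address} (PG-ID {nh.pgid})")

def on_route_event(event, nh):
    if event == "delete":
        # Route delete: one PG-ID switch redirects traffic to the other
        # pre-programmed path; no NHLFE reprogramming is needed.
        send_pgid_switch(nh.pgid)
    elif event == "modify":
        # Route modify: both next hops are programmed, so the forwarding
        # database and the data-path entry are updated in place.
        update_data_path(nh)

on_route_event("delete", TrackedNextHop(pgid=101, address="10.10.10.2"))
```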
When the primary direct next hop is restored and is added back into the routing table, CPM waits for a time equal to the configured revert-timer value before activating it and updating the data path. However, if the backup direct next hop fails while the timer is running, CPM activates the restored primary next hop and updates the data path immediately. This failover to the restored primary next hop follows the uniform failover procedures described in Active path determination and failover in a NHG of resolution type direct.
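The revert-timer interaction can be summarized with the following state sketch: restoration of the primary starts a timer, but a backup failure during the wait pre-empts it. The class below simulates time with explicit timestamps; all names are hypothetical.

```python
class RevertState:
    """Tracks which next hop is active while a revert timer may be running."""
    def __init__(self, revert_timer_s):
        self.revert_timer_s = revert_timer_s
        self.active = "backup"   # the primary previously failed
        self.deadline = None     # set while the revert timer runs

    def primary_restored(self, now):
        # Do not switch back immediately; start the revert timer instead.
        self.deadline = now + self.revert_timer_s

    def tick(self, now):
        # Timer expiry: activate the restored primary, update the data path.
        if self.deadline is not None and now >= self.deadline:
            self._activate_primary()

    def backup_failed(self):
        # Backup fails while the timer runs: activate the restored primary
        # immediately, using the same uniform failover procedure.
        if self.deadline is not None:
            self._activate_primary()

    def _activate_primary(self):
        self.active, self.deadline = "primary", None

s = RevertState(revert_timer_s=30)
s.primary_restored(now=100)
s.backup_failed()   # pre-empts the running timer
print(s.active)     # primary
```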
For each NHG, CPM tracks the active or inactive state of the primary and backup next hops and updates the IOM following a failure event, a reversion to the primary next hop, or a successful next-hop switch request instruction (RIB API only).
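A minimal sketch of the per-NHG state notification CPM could push to the IOM after each of the three triggers; the message layout and trigger names are assumptions for illustration only.

```python
def notify_iom(nhg_id, primary_state, backup_state, trigger):
    """Report the active/inactive state of an NHG's next hops to the IOM."""
    assert primary_state in ("active", "inactive")
    assert backup_state in ("active", "inactive")
    print(f"NHG {nhg_id}: primary={primary_state} backup={backup_state} "
          f"(trigger: {trigger})")

notify_iom(1, "inactive", "active", trigger="failure")
notify_iom(1, "active", "inactive", trigger="revert")
notify_iom(1, "inactive", "active", trigger="rib-api-switch-request")
```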