EVPN is an IETF technology per RFC 7432, BGP MPLS-Based Ethernet VPN, that uses a new BGP address family and allows VPLS services to be operated as IP-VPNs, where the MAC addresses and the information to set up the flooding trees are distributed by BGP.
EVPN is defined to fill the gaps of other L2VPN technologies such as VPLS. The main objective of EVPN is to build E-LAN services in a similar way to RFC 4364 IP-VPNs, while supporting MAC learning within the control plane (distributed by MP-BGP), efficient multi-destination traffic delivery, and active-active multi-homing.
EVPN can be used as the control plane for different data plane encapsulations. The Nokia implementation supports the following data planes:
The 7750 SR, 7450 ESS, or 7950 XRS EVPN VXLAN implementation is integrated in the Nuage Data Center architecture, where the router serves as the DC GW.
Refer to the Nuage Networks Virtualized Service Platform Guide for more information about the Nuage Networks architecture and products. The following sections describe the applications supported by EVPN in the 7750 SR, 7450 ESS, or 7950 XRS implementation.
Figure 138 shows the use of EVPN for VXLAN overlay tunnels on the 7750 SR, 7450 ESS, or 7950 XRS when it is used as a Layer 2 DC GW.
DC providers require a DC GW solution that can extend tenant subnets to the WAN. Customers can deploy the NVO3-based solutions in the DC, where EVPN is the standard control plane and VXLAN is a predominant data plane encapsulation. The Nokia DC architecture (Nuage) uses EVPN and VXLAN as the control and data plane solutions for Layer 2 connectivity within the DC and so does the SR OS.
While EVPN VXLAN is used within the DC, most service providers use VPLS and H-VPLS as the solution to extend Layer 2 VPN connectivity. Figure 138 shows the Layer 2 DC GW function on the 7750 SR, 7450 ESS, and 7950 XRS routers, providing VXLAN connectivity to the DC and regular VPLS connectivity to the WAN.
The WAN connectivity is based on VPLS where SAPs (null, dot1q, and qinq), spoke SDPs (FEC type 128 and 129), and mesh-SDPs are supported.
The DC GWs can provide multi-homing resiliency through the use of BGP multi-homing.
EVPN-MPLS can also be used in the WAN. In this case, the Layer 2 DC GW function provides translation between EVPN-VXLAN and EVPN-MPLS. EVPN multi-homing can be used to provide DC GW redundancy.
If point-to-point services are needed in the DC, SR OS supports the use of EVPN-VPWS for VXLAN tunnels, including multi-homing, in compliance with RFC 8214.
Figure 139 shows the use of EVPN for VXLAN overlay tunnels on the 7750 SR, 7450 ESS, or 7950 XRS when the DC provides Layer 2 connectivity and the DC GW can route the traffic to the WAN through an R-VPLS and linked VPRN.
In some cases, the DC GW must provide a Layer 3 default gateway function to all the hosts in a specified tenant subnet. In this case, the VXLAN data plane is terminated in an R-VPLS on the DC GW, and connectivity to the WAN is accomplished through regular VPRN connectivity. The 7750 SR, 7450 ESS, and 7950 XRS support IPv4 and IPv6 interfaces as default gateways in this scenario.
Figure 140 shows the use of EVPN for VXLAN tunnels on the 7750 SR, 7450 ESS, or 7950 XRS when the DC provides distributed Layer 3 connectivity to the DC tenants.
Each tenant has several subnets for which each DC Network Virtualization Edge (NVE) provides intra-subnet forwarding. An NVE may be a Nuage VSG, VSC/VRS, or any other NVE in the market supporting the same constructs, and each subnet normally corresponds to an R-VPLS. For example, in Figure 140, subnet 10.20.0.0 corresponds to R-VPLS 2001 and subnet 10.10.0.0 corresponds to R-VPLS 2000.
In this example, the NVE provides inter-subnet forwarding too, by connecting all the local subnets to a VPRN instance. When the tenant requires Layer 3 connectivity to the IP-VPN in the WAN, a VPRN is defined in the DC GWs, which connects the tenant to the WAN. That VPRN instance is connected to the VPRNs in the NVEs by means of an IRB (Integrated Routing and Bridging) backhaul R-VPLS. This IRB backhaul R-VPLS provides a scalable solution because it allows Layer 3 connectivity to the WAN without the need for defining all of the subnets in the DC GW.
The 7750 SR, 7450 ESS, and 7950 XRS DC GW support the IRB backhaul R-VPLS model, where the R-VPLS runs EVPN-VXLAN and the VPRN instances exchange IP prefixes (IPv4 and IPv6) through the use of EVPN. Interoperability between the EVPN and IP-VPN for IP prefixes is also fully supported.
Figure 141 shows the use of EVPN for VXLAN tunnels on the 7750 SR, 7450 ESS, or 7950 XRS, when the DC provides distributed Layer 3 connectivity to the DC tenants and the VPRN instances are connected through EVPN tunnels.
The solution described in section EVPN for VXLAN Tunnels in a Layer 3 DC with Integrated Routing Bridging Connectivity among VPRNs provides a scalable IRB backhaul R-VPLS service where all the VPRN instances for a specified tenant can be connected by using IRB interfaces. When this IRB backhaul R-VPLS is exclusively used as a backhaul and does not have any SAPs or SDP bindings directly attached, the solution can be optimized by using EVPN tunnels.
EVPN tunnels are enabled using the evpn-tunnel command under the R-VPLS interface configured on the VPRN. EVPN tunnels provide the following benefits to EVPN-VXLAN IRB backhaul R-VPLS services:
Note: IPv6 interfaces do not require the provisioning of an IPv6 global address; a link-local address is automatically assigned to the IRB interface.
This optimization is fully supported by the 7750 SR, 7450 ESS, and 7950 XRS.
Figure 142 shows the use of EVPN for MPLS tunnels on the 7750 SR, 7450 ESS, and 7950 XRS. In this case, EVPN is used as the control plane for E-LAN services in the WAN.
EVPN-MPLS is standardized in RFC 7432 as an L2VPN technology that can fill the gaps in VPLS for E-LAN services. A significant number of service providers offering E-LAN services today are requesting EVPN for its multi-homing capabilities, as well as the optimizations EVPN provides. EVPN supports all-active multi-homing (per-flow load-balancing multi-homing) as well as single-active multi-homing (per-service load-balancing multi-homing).
EVPN is a standards-based technology that supports all-active multi-homing, and although VPLS already supports single-active multi-homing, EVPN's single-active multi-homing is perceived as a superior technology because of its mass-withdrawal capabilities, which speed up convergence in scaled environments.
EVPN technology provides a number of significant benefits, including:
The SR OS EVPN-MPLS implementation is compliant with RFC 7432.
EVPN-MPLS can also be enabled in R-VPLS services with the same feature-set that is described for VXLAN tunnels in sections EVPN for VXLAN Tunnels in a Layer 3 DC with Integrated Routing Bridging Connectivity among VPRNs and EVPN for VXLAN Tunnels in a Layer 3 DC with EVPN-Tunnel Connectivity among VPRNs.
The MPLS network used by EVPN for E-LAN services can also be shared by E-Line services using EVPN in the control plane. EVPN for E-Line services (EVPN-VPWS) is a simplification of the RFC 7432 procedures, and it is supported in compliance with RFC 8214.
The MPLS network used by E-LAN and E-Line services can also be shared by Ethernet-Tree (E-Tree) services using the EVPN control plane. EVPN E-Tree services use the EVPN control plane extensions described in IETF RFC 8317 and are supported on the 7750 SR, 7450 ESS, and 7950 XRS.
Figure 143 shows the use of EVPN for PBB over MPLS tunnels (PBB-EVPN) on the 7750 SR, 7450 ESS, and 7950 XRS. In this case too, EVPN is used as the control plane for E-LAN services in the WAN.
EVPN for PBB over MPLS (hereafter called PBB-EVPN) is specified in RFC 7623. It provides a simplified version of EVPN for cases where the network requires very high scalability and does not need all the advanced features supported by EVPN-MPLS (but still requires single-active and all-active multi-homing capabilities).
PBB-EVPN is a combination of 802.1ah PBB and RFC 7432 EVPN and reuses the PBB-VPLS service model, where BGP-EVPN is enabled in the B-VPLS domain. EVPN is used as the control plane in the B-VPLS domain to control the distribution of B-MACs and set up per-ISID flooding trees for I-VPLS services. The learning of the C-MACs, either on local SAPs/SDP bindings or associated with remote B-MACs, is still performed in the data plane. Only the learning of B-MACs in the B-VPLS is performed through BGP.
The SR OS PBB-EVPN implementation supports PBB-EVPN for I-VPLS and PBB-Epipe services, including single-active and all-active multi-homing.
This section provides information about EVPN for VXLAN tunnels and cloud technologies.
The SR OS and Nuage solution for DC supports VXLAN (Virtual eXtensible Local Area Network) overlay tunnels as per RFC 7348.
VXLAN addresses the data plane needs for overlay networks within virtualized data centers accommodating multiple tenants. The main attributes of the VXLAN encapsulation are:
Figure 144 shows an example of the VXLAN encapsulation supported by the Nokia implementation.
As shown in Figure 144, VXLAN encapsulates the inner Ethernet frames into VXLAN + UDP/IP packets. The main pieces of information encoded in this encapsulation are:
Note: The source MAC address is changed on all the IP hops along the path, as is usual in regular IP routing.
Note: All remote MACs are learned through EVPN BGP and associated with a remote VTEP address and VNI.
Some considerations related to the support of VXLAN on the 7750 SR, 7450 ESS, and 7950 XRS are:
The DC GW supports ECMP load balancing to reach the destination VTEP. Also, any intermediate core node in the Data Center should be able to provide further load balancing across ECMP paths because the source UDP port of each tunneled packet is derived from a hash of the customer inner packet. The following must be considered:
The following describes the behavior on the 7750 SR, 7450 ESS, and 7950 XRS with respect to VLAN tag handling for VXLAN VPLS services:
For VXLAN VPLS services, the network port MTU must be at least 50 Bytes (54 Bytes if dot1q) greater than the Service-MTU to allow enough room for the VXLAN encapsulation.
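For example, a service with a Service-MTU of 1514 bytes requires a network port MTU of at least 1564 bytes (1568 bytes if the network port uses dot1q encapsulation).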
The Service-MTU is only enforced on SAPs (any SAP ingress packet exceeding the service-mtu is discarded) and not on VXLAN termination (any VXLAN ingress packet makes it to the egress SAP regardless of the configured service-mtu).
Note: The router never fragments or reassembles VXLAN packets. In addition, the router always sets the DF (Do Not Fragment) flag in the VXLAN outer IP header.
VXLAN is a network port encapsulation; therefore, the QoS settings for VXLAN are controlled from the network QoS policies.
The network ingress QoS policy can be applied either to the network interface over which the VXLAN traffic arrives or under vxlan/network/ingress within the EVPN service.
Regardless of where the network QoS policy is applied, the ingress network QoS policy is used to classify the VXLAN packets based on the outer dot1p (if present), then the outer DSCP, to yield an FC/profile.
If the ingress network QoS policy is applied to the network interface over which the VXLAN traffic arrives then the VXLAN unicast traffic uses the network ingress queues configured on FP where the network interface resides. QoS control of BUM traffic received on the VXLAN tunnels is possible by separately redirecting these traffic types to policers within an FP ingress network queue group. This QoS control uses the per forwarding class fp-redirect-group parameter together with broadcast-policer, unknown-policer, and mcast-policer within the ingress section of a network QoS policy. This QoS control applies to all BUM traffic received for that forwarding class on the network IP interface on which the network QoS policy is applied.
The ingress network QoS policy can also be applied within the EVPN service by referencing an FP queue group instance, as follows:
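The following is a minimal sketch of this configuration; the service ID, VXLAN instance and VNI values, queue-group name, and network QoS policy ID are illustrative assumptions, and the exact hierarchy may vary between SR OS releases.

```
configure
    service
        vpls 100
            vxlan instance 1 vni 100 create
                network
                    ingress
                        # reference network QoS policy 10 and an ingress
                        # FP queue-group instance for this service's VXLAN traffic
                        qos 10 fp-redirect-group "vxlan-qg" instance 1
                    exit
                exit
            exit
        exit
```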
In this case, the redirection to a specific ingress FP queue group applies as a single entity (per forwarding class) to all VXLAN traffic received only by this service. This overrides the QoS applied to the related network interfaces for traffic arriving on VXLAN tunnels in that service but does not affect traffic received on a spoke SDP in the same service. It is possible to also redirect unicast traffic to a policer using the per forwarding class fp-redirect-group policer parameter, as well as the BUM traffic as above, within the ingress section of a network QoS policy. The ler-use-dscp, ip-criteria, and ipv6-criteria statements are ignored if configured in the ingress section of the referenced network QoS policy. If the instance of the named queue group template referenced in the qos command is not configured on an FP receiving the VXLAN traffic, then the traffic uses the ingress network queues or queue group related to the network interface.
On egress, there is no need to specify “remarking” in the policy to mark the DSCP. This is because the VXLAN encapsulation adds a new IPv4 header, and the DSCP is always marked based on the egress network QoS policy.
A new VXLAN troubleshooting tool, VXLAN Ping, is available to verify VXLAN VTEP connectivity. The VXLAN Ping command is available from interactive CLI and SNMP.
This tool allows the operator to specify a wide range of variables to influence how the packet is forwarded from the VTEP source to VTEP termination. The ping function requires the operator to specify a different test-id (equates to originator handle) for each active and outstanding test. The required local service identifier from which the test is launched determines the source IP (the system IP address) to use in the outer IP header of the packet. This IP address is encoded into the VXLAN header Source IP TLV. The service identifier also encodes the local VNI. The outer-ip-destination must equal the VTEP termination point on the remote node, and the dest-vni must be a valid VNI within the associated service on the remote node. The outer source IP address is automatically detected and inserted in the IP header of the packet. The outer source IP address uses the IPv4 system address by default.
If the VTEP is created using a non-system source IP address via the vxlan-src-vtep command, the outer source IP address uses the address specified by vxlan-src-vtep. The remainder of the variables are optional.
The VXLAN PDU is encapsulated in the appropriate transport header and forwarded within the overlay to the appropriate VTEP termination. The VXLAN router alert (RA) bit is set to prevent forwarding of the OAM PDU beyond the terminating VTEP. Because handling of the router alert bit was not defined in some early releases of VXLAN implementations, the VNI Informational bit (I-bit) is set to “0” for OAM packets. This indicates that the VNI is invalid, and the packet should not be forwarded. This safeguard can be overridden by including the i-flag-on option, which sets the bit to “1” (valid VNI). Ensure that OAM frames meant to be confined to the VTEP are not forwarded beyond its endpoints.
The supporting VXLAN OAM ping draft includes a requirement to encode a reserved IEEE MAC address as the inner destination value. However, at the time of implementation, that IEEE MAC address had not been assigned. The inner IEEE MAC address defaults to 00:00:00:00:00:00, but may be changed using the inner-l2 option. Inner IEEE MAC addresses that are included with OAM packets are not learned in the local Layer 2 forwarding databases.
The echo responder terminates the VXLAN OAM frame, takes the appropriate response action, and includes the relevant return codes. By default, the response is sent back using the IP network as an IPv4 UDP response. The operator can choose to override this default by changing the reply-mode to overlay. The overlay return mode forces the responder to use the VTEP connection representing the source IP and source VTEP. If a return overlay is not available, the echo response is dropped by the responder.
Support is included for:
The VXLAN OAM PDU includes two timestamps. These timestamps are used to report forward direction delay. Unidirectional delay metrics require accurate time of day clock synchronization. Negative unidirectional delay values are reported as “0.000”. The round trip value includes the entire round trip time including the time that the remote peer takes to process that packet. These reported values may not be representative of network delay.
The following example commands and outputs show how the VXLAN Ping function can be used to validate connectivity. The echo output includes a new header to better describe the VXLAN ping packet headers and the various levels.
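A hedged example invocation is shown below; the test ID, service ID, VNI, and VTEP address are placeholders, and the option names follow those described in this section.

```
# Launch a VXLAN ping from service 100 toward remote VTEP 192.0.2.2 / VNI 200,
# requesting that the response be returned over the overlay instead of the IP network
oam vxlan-ping test-id 1 service 100 dest-vni 200 outer-ip-destination 192.0.2.2 reply-mode overlay
```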
IPv4 and IPv6 multicast routing is supported in an EVPN-VXLAN VPRN and IES routed VPLS service through its IP interface when the source of the multicast stream is on one side of its IP interface and the receivers are on either side of the IP interface. For example, the source for multicast stream G1 could be on the IP side, sending to receivers on both other regular IP interfaces and the VPLS of the routed VPLS service, while the source for group G2 could be on the VPLS side sending to receivers on both the VPLS and IP side of the routed VPLS service. See IPv4 and IPv6 Multicast Routing Support for more details.
The delivery of IP multicast in VXLAN services can be optimized with IGMP and MLD snooping. IGMP and MLD snooping are supported in EVPN-VXLAN VPLS services and in EVPN-VXLAN VPRN/IES R-VPLS services. When enabled, IGMP and MLD reports are snooped on SAPs or SDP bindings, but also on VXLAN bindings, to create or modify entries in the MFIB for the VPLS service.
When configuring IGMP and MLD snooping in EVPN-VXLAN VPLS services, consider the following:
Note: MLD snooping uses MAC-based forwarding. See MAC-Based IPv6 Multicast Forwarding for more details.
PIM snooping for IPv4 and IPv6 is supported in an EVPN-VXLAN VPLS or R-VPLS service (with the R-VPLS attached to a VPRN or IES service). The snooping operation is similar to that within a VPLS service (see PIM Snooping for VPLS) and supports both PIM snooping and PIM proxy modes.
PIM snooping for IPv4 is enabled using the config>service>vpls>pim-snooping command.
PIM snooping for IPv6 is enabled using the no ipv6-multicast-disable command in the config>service>vpls>pim-snooping context.
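A minimal sketch follows, assuming VPLS service 100; depending on the release, a create keyword may be required when entering the pim-snooping context.

```
configure
    service
        vpls 100
            pim-snooping
                # PIM snooping for IPv4 is active once the context is enabled;
                # remove the IPv6 disable flag to also snoop IPv6 PIM
                no ipv6-multicast-disable
            exit
        exit
```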
When using PIM snooping for IPv6, the default forwarding is MAC-based with optional support for SG-based (see IPv6 Multicast Forwarding). SG-based forwarding requires FP3- or higher-based hardware.
It is not possible to configure max-num-groups for VXLAN bindings.
By default, the system IP address is used to terminate and generate VXLAN traffic. The following configuration example shows an Epipe service that supports static VXLAN termination:
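Because the original example is not reproduced here, the following is a minimal sketch only; the service and SAP IDs, VNI, VTEP address, and operational group name are assumptions, and the keyword order under vxlan and the egr-vtep/oper-group hierarchy may differ by release.

```
configure
    service
        epipe 1 customer 1 create
            vxlan instance 1 vni 100 create
                # static egress VTEP; VXLAN frames for this service are sent to 192.0.2.2
                egr-vtep 192.0.2.2
                    oper-group "vxlan-og"
                exit
            exit
            sap 1/1/1:100 create
            exit
            no shutdown
        exit
```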
Where:
Note: The operational group configured under egr-vtep cannot be monitored on the SAP of the Epipe where it is configured.
The following features are not supported by Epipe services with VXLAN destinations.
VXLAN instances in VPLS and R-VPLS services can be configured with egress VTEPs. This is referred to as a static vxlan-instance. The following configuration example shows a VPLS service that supports a static vxlan-instance.
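As with the Epipe example above, this VPLS sketch is illustrative; the service ID, VNI, and VTEP addresses are assumptions, and the exact syntax may differ by release.

```
configure
    service
        vpls 500 customer 1 create
            vxlan instance 1 vni 500 create
                # statically configured egress VTEPs for this VXLAN instance
                egr-vtep 192.0.2.2 create
                exit
                egr-vtep 192.0.2.3 create
                exit
            exit
            no shutdown
        exit
```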
Specifically the following can be stated:
Static VXLAN instances can use non-system IPv4/IPv6 termination.
By default, only VXLAN packets with the same IP destination address as the system IPv4 address of the router can be terminated and processed for a subsequent MAC lookup. A router can simultaneously terminate VXLAN tunnels destined for its system IP address and three additional non-system IPv4 or IPv6 addresses, which can be on the base router or VPRN instances. This section describes the configuration requirements for services to terminate VXLAN packets destined for a non-system loopback IPv4 or IPv6 address on the base router or VPRN.
Perform the following steps to configure a service with non-system IPv4 or IPv6 VXLAN termination:
FPE Creation
A Forwarding Path Extension (FPE) is required to terminate non-system IPv4 or IPv6 VXLAN tunnels.
In a non-system IPv4 VXLAN termination, the FPE function is used for additional processing required at ingress (VXLAN tunnel termination) only, and not at egress (VXLAN tunnel origination).
If the IPv6 VXLAN terminates on a VPLS or Epipe service, the FPE function is used at ingress only, and not at egress.
For R-VPLS services terminating IPv6 VXLAN tunnels and also for VPRN VTEPs, the FPE is used for the egress as well as the VXLAN termination function. In the case of R-VPLS, an internal static SDP is created to allow the required extra processing.
See the “Forwarding Path Extension” section of the 7450 ESS, 7750 SR, 7950 XRS, and VSR Interface Configuration Guide for information about FPE configuration and functions.
FPE Association with VXLAN Termination
The FPE must be associated with the VXLAN termination application. The following sample configuration shows two FPEs and their corresponding association. FPE 1 uses the base router and FPE 2 is configured for VXLAN termination on VPRN 10.
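A sketch of the association is shown below; the PXC paths are assumptions (see the Interface Configuration Guide for full FPE provisioning), while the vxlan-termination association follows the description above.

```
configure
    fwd-path-ext
        fpe 1 create
            path pxc 1
            # VXLAN termination on the base router
            vxlan-termination
        exit
        fpe 2 create
            path pxc 2
            # VXLAN termination on VPRN 10
            vxlan-termination router 10
        exit
```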
VXLAN Router Loopback Interface
Create the interface that terminates and originates the VXLAN packets. The interface is created as a router interface, which is added to the Interior Gateway Protocol (IGP) and used by the BGP as the EVPN NLRI next hop.
Because the system cannot terminate the VXLAN on a local interface address, a subnet must be assigned to the loopback interface and not a host IP address that is /32 or /128. In the following example, all the addresses in subnet 11.11.11.0/24 (except 11.11.11.1, which is the interface IP) and subnet 10.1.1.0/24 (except 10.1.1.1) can be used for tunnel termination. The subnet is advertised using the IGP and is configured on either the base router or a VPRN. In the example, two subnets are assigned, in the base router and VPRN 10 respectively.
A local interface address cannot be configured as a VXLAN tunnel-termination IP address in the CLI, as shown in the following example.
The subnet can be up to 31 bits. For example, to use 10.11.11.1 as the VXLAN termination address, the subnet should be configured and advertised as shown in the following sample configuration.
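A minimal sketch, assuming an interface named "vxlan-term": the loopback is given the address 10.11.11.0/31, the /31 subnet is advertised in the IGP, and the remaining address in the subnet (10.11.11.1) is then available as the VXLAN termination address.

```
configure
    router
        interface "vxlan-term"
            loopback
            # 10.11.11.0/31 is the interface address; 10.11.11.1 is left
            # free in the subnet for VXLAN tunnel termination
            address 10.11.11.0/31
        exit
```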
It is not a requirement for the remote PEs and NVEs to have the specific /32 or /128 IP address in their RTM to resolve the BGP EVPN NLRI next hop or forward the VXLAN packets. An RTM with a subnet that contains the remote VTEP can also perform these tasks.
Note: The system does not check for a pre-existing local base router loopback interface with a subnet corresponding to the VXLAN tunnel termination address. If a tunnel termination address is configured and the FPE is operationally up, the system starts terminating VXLAN traffic and responding to ICMP messages for that address. The following conditions are ignored in this scenario:
The following sample output includes an IPv6 address in the base router. It could also be configured in a VPRN instance.
VXLAN Termination VTEP Addresses
The service>system>vxlan>tunnel-termination context allows the user to configure non-system IP addresses that can terminate the VXLAN and their corresponding FPEs.
As shown in the following example, an IP address may be associated with a new or existing FPE already terminating the VXLAN. The list of addresses that can terminate the VXLAN can include IPv4 and IPv6 addresses.
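The following sketch associates the termination addresses used later in this section with their FPEs; the FPE IDs are assumptions.

```
configure
    service
        system
            vxlan
                # bind each non-system termination address to an FPE
                tunnel-termination 10.11.11.1 fpe 1 create
                tunnel-termination 2001:db8:1000::1 fpe 2 create
            exit
        exit
```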
The tunnel-termination command creates internal loopback interfaces that can respond to ICMP requests. In the following sample output, an internal loopback is created when the tunnel termination address is added (for 10.11.11.1 and 2001:db8:1000::1). The internal FPE router interfaces created by the VXLAN termination function are also shown in the output. Similar loopback and interfaces are created for tunnel termination addresses in a VPRN (not shown).
VXLAN Services
By default, the VXLAN services use the system IP address as the source VTEP of the VXLAN encapsulated frames. The vxlan-src-vtep command in the config>service>vpls or config>service>epipe context enables the system to use a non-system IPv4 or IPv6 address as the source VTEP for the VXLAN tunnels in that service.
A different vxlan-src-vtep can be used for different services, as shown in the following example where two different services use different non-system IP addresses as source VTEPs.
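A minimal sketch, assuming VPLS 100 and Epipe 200 as the two services; the source VTEP addresses reuse the termination addresses configured earlier.

```
configure service vpls 100
    vxlan-src-vtep 10.11.11.1
exit

configure service epipe 200
    vxlan-src-vtep 2001:db8:1000::1
exit
```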
In addition, if a vxlan-src-vtep is configured and the service uses EVPN, the IP address is also used to set the BGP NLRI next hop in EVPN route advertisements for the service.
Note: The BGP EVPN next hop can be overridden by the use of export policies based on the following rules.
After the preceding steps are performed to configure a VXLAN termination, the VPLS, R-VPLS, or Epipe service can be used normally, except that the service terminates VXLAN tunnels with a non-system IPv4 or IPv6 destination address (in the base router or a VPRN instance) instead of the system IP address only.
The FPE vxlan-termination function creates internal router interfaces and loopbacks that are displayed by the show commands. When configuring IPv6 VXLAN termination on an R-VPLS service, as well as the internal router interfaces and loopbacks, the system creates internal SDP bindings for the required egress processing. The following output shows an example of an internal FPE-type SDP binding created for IPv6 R-VPLS egress processing.
When BGP EVPN is used, the BGP peer over which the EVPN-VXLAN updates are received can be an IPv4 or IPv6 peer, regardless of whether the next-hop is an IPv4 or IPv6 address.
The same VXLAN tunnel termination address cannot be configured on different router instances; that is, on two different VPRN instances or on a VPRN and the base router.
This section describes the specifics of EVPN for non-MPLS Overlay tunnels.
RFC 8365 describes EVPN as the control plane for overlay-based networks. The 7750 SR, 7450 ESS, and 7950 XRS support all routes and features described in RFC 7432 that are required for the DC GW function. EVPN multihoming and BGP multihoming based on the L2VPN BGP address family are both supported if redundancy is needed.
Figure 145 shows the EVPN MP-BGP NLRI, required attributes and extended communities, and two route types supported for the DC GW Layer 2 applications:
EVPN Route Type 3 – Inclusive Multicast Ethernet Tag Route
Route type 3 is used to set up the flooding tree (BUM flooding) for a specified VPLS service in the data center. The received inclusive multicast routes add entries to the VPLS flood list in the 7750 SR, 7450 ESS, and 7950 XRS. The tunnel types supported in an EVPN route type 3 when BGP-EVPN MPLS is enabled are ingress replication, P2MP MLDP, and composite tunnels.
Ingress Replication (IR) and Assisted Replication (AR) are supported for VXLAN tunnels. See Layer 2 Multicast Optimization for VXLAN (Assisted-Replication) for more information about the AR.
If ingress-repl-inc-mcast-advertisement is enabled, a route type 3 is generated by the router per VPLS service as soon as the service is in an operationally up state. The following fields and values are used:
| AR Role | Function | Inclusive Mcast Routes Advertisement |
|---------|----------|--------------------------------------|
| AR-R | Assists AR-LEAFs | Replicator-AR inclusive multicast route (AR IP, T=1); in addition, an IR inclusive multicast route (IR IP) if ingress-repl-inc-mcast-advertisement is enabled |
| AR-LEAF | Sends BM only to AR-Rs | IR inclusive multicast route (IR IP, T=2) if ingress-repl-inc-mcast-advertisement is enabled |
| RNVE | Non-AR support | IR inclusive multicast route (IR IP) if ingress-repl-inc-mcast-advertisement is enabled |
EVPN Route Type 2 – MAC/IP Advertisement Route
The 7750 SR, 7450 ESS, and 7950 XRS generate this route type for advertising MAC addresses. The router generates MAC advertisement routes for the following:
The route type 2 generated by a router uses the following fields and values:
Note: The RD can be configured or derived from the bgp-evpn evi value.
When EVPN-VXLAN multihoming is enabled, type 1 routes (Auto-Discovery per-ES and per-EVI routes) and type 4 routes (ES routes) are also generated and processed. See BGP-EVPN Control Plane for MPLS Tunnels for more information about route types 1 and 4.
EVPN Route Type 5 – IP Prefix Route
Figure 147 shows the IP prefix route or route-type 5.
The router generates this route type for advertising IP prefixes in EVPN. The router generates IP prefix advertisement routes for:
The route-type 5 generated by a router uses the following fields and values:
All the routes in EVPN-VXLAN are sent with the RFC 5512 tunnel encapsulation extended community, with the tunnel type value set to VXLAN.
The EVPN-VXLAN service is designed around the current VPLS objects and the additional VXLAN construct.
Figure 138 shows a DC with a Layer 2 service that carries the traffic for a tenant who wants to extend a subnet beyond the DC. The DC PE function is carried out by the 7750 SR, 7450 ESS, and 7950 XRS where a VPLS instance exists for that particular tenant. Within the DC, the tenant has VPLS instances in all the Network Virtualization Edge (NVE) devices where they require connectivity (such VPLS instances can be instantiated in TORs, Nuage VRS, VSG, and so on). The VPLS instances in the redundant DC GW and the DC NVEs are connected by VXLAN bindings. BGP-EVPN provides the required control plane for such VXLAN connectivity.
The DC GW routers are configured with a VPLS per tenant that will provide the VXLAN connectivity to the Nuage VPLS instances. On the router, each tenant VPLS instance is configured with:
The bgp-evpn context specifies the encapsulation type (only vxlan is supported) to be used by EVPN and other parameters like the unknown-mac-route and mac-advertisement commands. These commands are typically configured in three different ways:
Other parameters related to EVPN or VXLAN are:
After the VPLS is configured and operationally up, the router will send/receive inclusive multicast Ethernet Tag routes, and a full-mesh of VXLAN connections will be automatically created. These VXLAN “auto-bindings” can be characterized as follows:
After the flooding domain is set up, the routers and DC NVEs start advertising MAC addresses, and the routers can learn MACs and install them in the FDB. Some considerations are the following:
Note: The unknown-mac-route is not installed in the FDB and, therefore, does not show up in the show service id svc-id fdb detail command.
The DC overlay infrastructure relies on IP tunneling, that is, VXLAN; therefore, the underlay IP layer resolves failures in the DC core. The IGP should be optimized to achieve the fastest convergence.
From a service perspective, resilient connectivity to the WAN may be provided by BGP-Multi-homing.
BGP-EVPN (control plane for a VXLAN DC), BGP-AD (control plane for MPLS-based spoke SDPs connected to the WAN), and one site for BGP multi-homing (control plane for the multi-homed connection to the WAN) can all be configured in one service in a specified system. If that is the case, the following considerations apply:
This section describes the behavior of the EVPN-VXLAN service in the router when the unknown-mac-route and BGP-MH are configured at the same time.
The use of EVPN, as the control plane of NVO networks in the DC, provides a significant number of benefits as described in IETF Draft draft-ietf-bess-evpn-overlay.
However, there is a potential issue that must be addressed when a VPLS DCI is used for an NVO3-based DC: all the MAC addresses learned from the WAN side of the VPLS must be advertised by BGP EVPN updates. Even if optimized BGP techniques like RT-constraint are used, the number of MAC addresses to advertise or withdraw (in case of failure) from the DC GWs can be difficult to control and overwhelming for the DC network, especially when the NVEs reside in the hypervisors.
The 7750 SR, 7450 ESS, and 7950 XRS solution to this issue is based on the use of an unknown-mac-route address that is advertised by the DC PEs. By using this unknown-mac-route advertisement, the DC tenant may decide to optionally turn off the advertisement of WAN MAC addresses in the DC GW, therefore, reducing the control plane overhead and the size of the FDB tables in the NVEs.
The use of the unknown-mac-route is optional and helps to reduce the amount of unknown-unicast traffic within the data center. All the receiving NVEs supporting this concept will send any unknown-unicast packet to the owner of the unknown-mac-route, as opposed to flooding the unknown-unicast traffic to all other NVEs that are part of the same VPLS.
Note: Although the router can be configured to generate and advertise the unknown-mac-route, the router will never honor the unknown-mac-route and will flood to the TLS-flood list when an unknown-unicast packet arrives at an ingress SAP or SDP binding.
The use of the unknown-mac-route assumes the following:
Therefore, when unknown-mac-route is configured, it will only be generated when one of the following applies:
Figure 139 shows a DC with a Layer 2 service that carries the traffic for a tenant who extends a subnet within the DC, while the DC GW is the default gateway for all the hosts in the subnet. The DC GW function is carried out by the 7750 SR, 7450 ESS, and 7950 XRS where an R-VPLS instance exists for that particular tenant. Within the DC, the tenant will have VPLS instances in all the NVE devices where they require connectivity (such VPLS instances can be instantiated in TORs, Nuage VRS, VSG, and so on). The WAN connectivity will be based on existing IP-VPN features.
In this model, the DC GW routers are configured with an R-VPLS (bound to the VPRN that provides the WAN connectivity) per tenant that provides the VXLAN connectivity to the Nuage VPLS instances. This model provides inter-subnet forwarding for L2-only TORs and other L2 DC NVEs.
On the router:
On the Nuage VSGs and NVEs:
Other considerations:
Note: ND entries received from the EVPN are installed as Router entries. The ARP/ND entries coming from the EVPN will be tagged as evpn.
EVPN-enabled R-VPLS services are also supported on IES interfaces.
Figure 140 shows a Layer 3 DC model, where a VPRN is defined in the DC GWs, connecting the tenant to the WAN. That VPRN instance is connected to the VPRNs in the NVEs by means of an IRB backhaul R-VPLS. Because the IRB backhaul R-VPLS provides connectivity only to the IRB interfaces and the DC GW VPRN is not directly connected to all the tenant subnets, the WAN IP prefixes in the VPRN routing table must be advertised in EVPN. In the same way, the NVEs send IP prefixes in EVPN that are received by the DC GW and imported in the VPRN routing table.
Note: To generate or process IP prefixes sent or received in EVPN route type 5, support for IP route advertisement must be enabled in BGP-EVPN using the bgp-evpn>ip-route-advertisement command. This command is disabled by default and must be explicitly enabled. The command is tied to the allow-ip-int-bind command required for R-VPLS, and it is not supported on an R-VPLS linked to IES services.
Local router interface host addresses are not advertised in EVPN by default. To advertise them, the ip-route-advertisement incl-host command must be enabled. For example:
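Because the original output is not reproduced here, the following minimal sketch (assuming R-VPLS 501) shows where the command sits.

```
configure
    service
        vpls 501
            bgp-evpn
                # advertise local router interface host addresses in addition to subnets
                ip-route-advertisement incl-host
            exit
        exit
```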
For a configuration such as the example above, the behavior is the following:
Below is an example of VPRN (500) with two IRB interfaces connected to backhaul R-VPLS services 501 and 502 where EVPN-VXLAN runs:
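A minimal sketch of such a VPRN follows; the interface names, service names, and IP addresses are assumptions (chosen to match the subnets in Figure 140), and the R-VPLS services 501 and 502 are assumed to be configured with BGP-EVPN VXLAN and allow-ip-int-bind.

```
configure
    service
        vprn 500 customer 1 create
            interface "irb-1" create
                address 10.10.0.1/24
                # bind the IRB interface to the EVPN-VXLAN backhaul R-VPLS
                vpls "r-vpls-501"
                exit
            exit
            interface "irb-2" create
                address 10.20.0.1/24
                vpls "r-vpls-502"
                exit
            exit
            no shutdown
        exit
```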
When the above commands are enabled, the router will:
Note: IRB MAC and IP addresses are advertised in the IRB backhaul R-VPLS using route type 2.
The VPRN routing table can receive routes from all the supported protocols (BGP-VPN, OSPF, IS-IS, RIP, static routing) as well as from IP prefixes from EVPN, as shown below:
The following considerations apply:
Although the description above is focused on IPv4 interfaces and prefixes, it applies to IPv6 interfaces too. The following considerations are specific to IPv6 VPRN R-VPLS interfaces:
Figure 141 shows an L3 connectivity model that optimizes the solution described in EVPN for VXLAN in IRB Backhaul R-VPLS Services and IP Prefixes. Instead of regular IRB backhaul R-VPLS services for the connectivity of all the VPRN IRB interfaces, EVPN tunnels can be configured. The main advantage of using EVPN tunnels is that they don't need the configuration of IP addresses, as regular IRB R-VPLS interfaces do.
In addition to the ip-route-advertisement command, this model requires the configuration of the config>service>vprn>if>vpls <name> evpn-tunnel.
Note: EVPN tunnels can be enabled independently of the ip-route-advertisement command; however, no route-type 5 advertisements are sent or processed in that case. Neither the evpn-tunnel nor the ip-route-advertisement command is supported on R-VPLS services linked to IES interfaces.
The example below shows a VPRN (500) with an EVPN-tunnel R-VPLS (504):
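A minimal sketch, assuming R-VPLS 504 is named "evpn-vxlan-504"; note that no IP address is required on the interface.

```
configure
    service
        vprn 500 customer 1 create
            interface "evpn-tunnel-1" create
                # no IPv4/IPv6 address is needed on an EVPN-tunnel interface
                vpls "evpn-vxlan-504"
                    evpn-tunnel
                exit
            exit
        exit
```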
A specified VPRN supports regular IRB backhaul R-VPLS services as well as EVPN tunnel R-VPLS services.
Note: EVPN tunnel R-VPLS services do not support SAPs or SDP-binds.
The process followed upon receiving a route-type 5 on a regular IRB R-VPLS interface differs from the one for an EVPN-tunnel type:
The GW-MAC as well as the rest of the IP prefix BGP attributes are displayed by the show>router>bgp>routes>evpn>ip-prefix command.
EVPN tunneling is also supported on IPv6 VPRN interfaces. When sending IPv6 prefixes from IPv6 interfaces, the GW-MAC in the route type 5 (IP-prefix route) is always zero. If no specific Global Address is configured on the IPv6 interface, the routes type 5 for IPv6 prefixes will always be sent using the Link Local Address as GW-IP. The following example output shows an IPv6 prefix received via BGP EVPN.
BGP-EVPN Control Plane for EVPN-VPWS
EVPN-VPWS uses route-type 1 and route-type 4; it does not use route-types 2, 3 or 5. Figure 148 shows the encoding of the required extensions for the Ethernet A-D per-EVI routes. The encoding follows the guidelines described in RFC 8214.
If the advertising PE has an access SAP-SDP or spoke SDP that is not part of an Ethernet Segment (ES), then the PE populates the fields of the AD per-EVI route with the following values:
If the advertising PE has an access SAP-SDP or spoke SDP that is part of an ES, the AD per-EVI route is sent with the information described above, with the following minor differences:
Also, ES and AD per-ES routes are advertised and processed for the Ethernet-Segment, as described in RFC 7432 ESs. The ESI label sent with the AD per-ES route is used by BUM traffic on VPLS services; it is not used for Epipe traffic.
EVPN-VPWS for VXLAN Tunnels in Epipe Services
BGP-EVPN can be enabled in Epipe services with either SAPs or spoke SDPs at the access, as shown in Figure 149.
EVPN-VPWS is supported in VXLAN networks that also run EVPN-VXLAN in VPLS services. From a control plane perspective, EVPN-VPWS is a simplified point-to-point version of RFC 7432 for E-Line services for the following reasons:
In the following configuration example, Epipe 2 is an EVPN-VPWS service between PE2 and PE4 (as shown in Figure 149).
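The original example is not reproduced here; the following sketch for PE2 is illustrative only. The SAP, VNI, EVI, and Ethernet tag values are assumptions, PE4 would mirror the configuration with the local and remote Ethernet tags swapped, and the bgp-evpn>vxlan qualifiers may differ by release.

```
configure
    service
        epipe 2 customer 1 create
            vxlan instance 1 vni 2 create
            exit
            sap 1/1/1:100 create
            exit
            bgp
            exit
            bgp-evpn
                evi 2
                # local and remote attachment circuits identify the two
                # endpoints of the EVPN-VPWS service by Ethernet tag
                local-attachment-circuit "AC-PE2" create
                    eth-tag 100
                exit
                remote-attachment-circuit "AC-PE4" create
                    eth-tag 200
                exit
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
            exit
            no shutdown
        exit
```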
The following considerations apply for the above example configuration:
EVPN-VPWS Epipes can also be configured with the following characteristics:
Using A/S PW and MC-LAG with EVPN-VPWS Epipes
The use of A/S PW (for access spoke SDPs) and MC-LAG (for access SAPs) provides an alternative redundant solution for EVPN-VPWS that does not use the EVPN multi-homing procedures described in RFC 8214. Figure 150 shows the use of both mechanisms in a single Epipe.
In Figure 150, an A/S PW connects the CE to PE1 and PE2 (left-hand side of the diagram), and an MC-LAG connects the CE to PE3 and PE4 (right-hand side of the diagram). As EVPN multi homing is not used, there are no AD per-ES routes or ES routes. The redundancy is handled as follows:
EVPN Multi-homing for EVPN-VPWS Services
EVPN multi homing is supported for EVPN-VPWS Epipe services with the following considerations:
The DF election for Epipes defined in an all-active multi-homing ES is not relevant, because all PEs in the ES behave in the same way, as follows:
Aliasing is supported for traffic sent to an ES destination. If ECMP is enabled on the ingress PE, per-flow load balancing is performed to all PEs that advertise P=1. The PEs that advertise P=0 are not considered as next hops for an ES destination.
Note: The ingress PE will load balance the traffic if shared queuing or ingress policing is enabled on the access SAPs.
Although DF election is not relevant for Epipes in an all-active multi homing ES, it is essential for the following forwarding and backup functions in a single-active multihoming ES.
Non-system IPv4/IPv6 VXLAN Termination for EVPN-VPWS Services
EVPN-VPWS services support non-system IPv4/IPv6 VXLAN termination. For system configuration information, see Non-System IPv4 and IPv6 VXLAN Termination in VPLS, R-VPLS, and Epipe Services.
EVPN multi-homing is supported when the PEs use non-system IP termination, however some extra configuration steps are needed in this case.
The Nuage VSD (Virtual Services Directory) provides automation in the Nuage DC. The VSD is a programmable policy and analytics engine. It provides a flexible and hierarchical network policy framework that enables IT administrators to define and enforce resource policies.
The VSD contains a multi-tenant service directory that supports role-based administration of users, computing, and network resources. The VSD also manages network resource assignments such as IP addresses and ACLs.
To communicate with the Nuage controllers and gateways (including the 7750 SR, 7450 ESS, or 7950 XRS DC GW), VSD uses an XMPP (eXtensible Messaging and Presence Protocol) communication channel. The router can receive service parameters from the Nuage VSD through XMPP and add them to the existing VPRN/VPLS service configuration.
Note: The service must be pre-provisioned in the router using the CLI, SNMP, or other supported interfaces. The VSD will only push a limited number of parameters into the configuration. This router-to-VSD integration model is known as a Static-Dynamic provisioning model, because only a few parameters are dynamically pushed by VSD, as opposed to a Fully Dynamic model, where the entire service can be created dynamically by VSD.
The router – VSD integration comprises the following building blocks:
These building blocks are described in more detail in the following subsections.
The Extensible Messaging and Presence Protocol (XMPP) is an open technology for real-time communication using XML (Extensible Markup Language) as the base format for exchanging information. The XMPP provides a way to send small pieces of XML from one entity to another in close to real time.
In a Nuage DC, an XMPP ejabberd server will have an interface to the Nuage VSD as well as the Nuage VSC/VSG and the 7750 SR, 7450 ESS, or 7950 XRS DC GW.
Figure 152 shows the basic XMPP architecture in the data center. While a single XMPP server is represented in the diagram, XMPP allows for easy server clustering and performs message replication to the cluster. It is similar to how BGP can scale and replicate the messages through the use of route reflectors.
Similarly, the VSD is represented as a single server, but a cluster of VSD servers (using the same database) is a very common configuration in a DC.
In the Nuage solution, each XMPP client, including the 7750 SR, 7450 ESS, and 7950 XRS, is referred to with a JID (JabberID) in the following format: username@xmppserver.domain. The xmppserver.domain points to the XMPP Server.
To enable the XMPP interface on the 7750 SR, 7450 ESS, or 7950 XRS, the following command must be added to indicate to which XMPP server address the DC GW has to register, as well as the router’s JID:
Where:
Note: The DNS must be configured on the router so that the XMPP server name can be resolved. XMPP relies on the Domain Name System (DNS) to provide the underlying structure for addressing, instead of using raw IP addresses. The DNS is configured using the following BOF commands: bof primary-dns, bof secondary-dns, bof dns-domain.
After the XMPP server is properly configured, the router can generate or receive XMPP stanza elements, such as presence and IQ (Information/Query) messages. IQ messages are used between the VSD and the router to request and receive configuration parameters. The status of the XMPP communication channel can be checked with the following command:
In addition to the XMPP server, the router must be configured with a VSD system-id that uniquely identifies the router in the VSD:
After the above configuration is complete, the router will subscribe to a VSD XMPP PubSub node to discover the available VSD servers. Then, the router will be discovered in the VSD UIs. On the router, the available VSD servers can be shown with the following command.
In the Static-Dynamic integration model, the DC and DC GW management entities can be the same or different. The DC GW operator will provision the required VPRN and VPLS services with all the parameters needed for the connectivity to the WAN. VSD will only push the required parameters so that those WAN services can be attached to the existing DC domains.
Figure 153 shows the workflow for the attachment of the WAN services defined on the DC GW to the DC domains.
The Static-Dynamic VSD integration model can be summarized in the steps shown in Figure 153 and described in the following procedure.
Note: The parameters between brackets “[..]” are not configured at this step. They will be pushed by the VSD through XMPP.
In the Static-Dynamic integration model, VSD can only provision certain parameters in VPLS and/or VPRN services. When VSD and the DC GW exchange XMPP messages for a specified service, they use vsd-domains to identify those services. A vsd-domain is a tag that will be used by the 7750 SR, 7450 ESS, or 7950 XRS router and the VSD to correlate a configuration to a service. When redundant DC GWs are available, the vsd-domain for the same service can have the same or a different name in the two redundant DC GWs.
There are four different types of vsd-domains that can be configured in the router:
The domains will be configured in the config>service# context and assigned to each service.
L2-DOMAIN VSD-domains will be associated with VPLS services configured without a route-target and vxlan VNI. VSD will configure the route-target and VNI in the router VPLS service. Some considerations related to L2-DOMAINs are:
L2-DOMAIN-IRB VSD-domains will be associated with R-VPLS services configured without a static route-target and vxlan VNI. VSD will configure the dynamic route-target and VNI in the router VPLS service. The same considerations described for L2-DOMAINs apply to L2-DOMAIN-IRB domains with one exception: allow-ip-int-bind is now allowed.
VRF-GRE VSD-domains will be associated with VPRN services configured without a static route-target. In this case, the VSD will push a route-target that the router will add to the VPRN service. The system will check whether the VPRN service has a configured policy:
Note: In cases where a vrf-import policy is used, the user will provision the WAN route-target statically in a vrf-export policy. This route-target will also be used for the routes advertised to the DC.
An example of the auto-generated entry is shown below:
VRF-VXLAN VSD-domains will be associated with R-VPLS services configured without a static route-target and vxlan VNI. VSD will configure the dynamic route-target and VNI in the router VPLS service. Some considerations related to VRF-VXLAN domains are:
The following commands help show the association between the 7750 SR, 7450 ESS, and 7950 XRS router services and VSD-domains, as well as statistics and configuration errors sent/received to/from VSD.
In the Static-Dynamic VSD integration model, the VPLS/VPRN service, as well as most of the parameters, must be provisioned “statically” through usual procedures (CLI, SNMP, and so on). VSD will “dynamically” send the parameters that are required for the attachment of the VPLS/VPRN service to the L2/L3 domain in the Data Center. In the Fully-Dynamic VSD integration model, the entire VPLS/VPRN service configuration will be dynamically driven from VSD and no static configuration is required. Through the existing XMPP interface, the VSD will provide the 7750 SR, 7450 ESS, and 7950 XRS routers with a handful of parameters that will be translated into a service configuration by a python-script. This python-script provides an intermediate abstraction layer between VSD and the router, translating the VSD data model into a 7750 SR, 7450 ESS, or 7950 XRS CLI data model.
In this Fully-Dynamic integration model, the DC and DC GW management entities are usually the same. The DC GW operator will provision the required VPRN and VPLS services with all the parameters needed for the connectivity to the WAN and the DC. VSD will push the required parameters so that the router can create the service completely and get it attached to an existing DC domain.
The workflow of the Fully-Dynamic integration model is shown in Figure 154.
The Fully-Dynamic VSD integration model can be summarized in the steps shown in Figure 154 and described in the following procedure.
Note: The keys or values do not need to follow any specific format. The python script interprets and translates them into the router data model.
Note: The python-script cannot access all the CLI commands and nodes in the system. A white-list of nodes and commands is built, and Python will only have access to those nodes and commands. The list can be displayed using the following command: tools dump service vsd-services command-list.
A python-script provides an intermediate abstraction layer between VSD and the router, translating the VSD data model into the 7750 SR, 7450 ESS, or 7950 XRS router CLI data model. VSD uses metadata key=value parameters to provision service-specific attributes on the router. The XMPP messages reach the router and are passed transparently to the Python module so that the CLI is generated and the service created.
Note: The CLI generated by the python-script is not saved in the configuration file; it can be displayed by running the info include-dynamic command on the service contexts. The configuration generated by the python-script is protected and cannot be changed by a CLI user.
The following example shows how the python-script works for a VSD generated configuration:
The user should consider the following:
Note: The CLI-configured services cannot be modified when the user is in enable-vsd-config mode.
The following example shows the use of the tools perform service vsd evaluate-script command:
The following example output shows a python-script that can set up or tear down L2-DOMAINs.
For instance, assuming that the VSD sends the following:
The system will execute the setup script as follows:
IP, IPv6, and MAC filters can be configured from the VSD within the context of the Fully Dynamic XMPP provisioning model for VPRN and VPLS services.
The VSD filters or filter entries are intended for use in two DC environments:
The Assisted-Replication feature for IPv4 VXLAN tunnels (both Leaf and Replicator functions) is supported in compliance with the non-selective mode described in IETF Draft draft-ietf-bess-evpn-optimized-ir.
The Assisted-Replication feature is a Layer 2 multicast optimization feature that helps software-based PE and NVEs with low-performance replication capabilities to deliver broadcast and multicast Layer 2 traffic to remote VTEPs in the VPLS service.
The EVPN and proxy-ARP/ND capabilities can reduce the amount of broadcast and unknown unicast in the VPLS service; ingress replication is sufficient for most use cases in this scenario. However, when multicast applications require a significant amount of replication at the ingress node, software-based nodes struggle due to their limited replication performance. By enabling the Assisted-Replication Leaf function on the software-based SR-series router, all the broadcast and multicast packets are sent to a 7x50 router configured as a Replicator, which replicates the traffic to all the VTEPs in the VPLS service on behalf of the Leaf. This guarantees that the broadcast or multicast traffic is delivered to all the VPLS participants without any packet loss caused by performance issues.
The Leaf or Replicator function is enabled per VPLS service by the config>service>vpls>vxlan>assisted-replication {replicator | leaf} command. In addition, the Replicator requires the configuration of an Assisted-Replication IP (AR-IP) address. The AR-IP loopback address indicates whether the received VXLAN packets have to be replicated to the remote VTEPs. The AR-IP address is configured using the config>service>system>vxlan>assisted-replication-ip <ip-address> command.
Based on the assisted-replication {replicator | leaf} configuration, the SR-series router can behave as a Replicator (AR-R), Leaf (AR-L), or Regular Network Virtualization Edge (RNVE) router. An RNVE router does not support the Assisted-Replication feature. Because it is configured with no assisted replication, the RNVE router ignores the AR-R and AR-L information and replicates to its flooding list where VTEPs are added based on the regular ingress replication routes.
An AR-R configuration is shown in the following example.
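The configuration is summarized in the following minimal sketch, which uses the two commands described above. The service ID, customer, and VNI values are illustrative assumptions and the exact CLI tree may vary by release; the AR-IP 10.2.2.2 matches the address referenced in the next paragraph.

```
# AR-IP loopback used as the tunnel-id/next-hop of the AR route
configure service system vxlan
    assisted-replication-ip 10.2.2.2
# Illustrative VPLS acting as AR-R
configure service vpls 300 customer 1 create
    vxlan vni 300 create
        assisted-replication replicator
    exit
    bgp-evpn
        evi 300
        vxlan
            no shutdown
    no shutdown
```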
In this example configuration, BGP advertises a new inclusive multicast route with tunnel-type = AR, type (T) = AR-R, and tunnel-id = originating-ip = next-hop = assisted-replication-ip (IP address 10.2.2.2 in the preceding example). In addition to the AR route, the AR-R sends a regular IR route if ingress-repl-inc-mcast-advertisement is enabled.
![]() | Note: You should disable the ingress-repl-inc-mcast-advertisement command if the AR-R does not have any SAP or SDP bindings and is used solely for Assisted-Replication functions. |
The AR-R builds a flooding list composed of ACs (SAPs and SDP bindings) and VXLAN tunnels to remote nodes in the VPLS. All objects in the flooding list are broadcast/multicast (BM) and unknown unicast (U) capable. The following sample output of the show service id vxlan command shows that the VXLAN destinations in the flooding list are tagged as “BUM”.
When the AR-R receives a BUM packet on an AC, the AR-R forwards the packet to its flooding list (including the local ACs and remote VTEPs).
When the AR-R receives a BM packet on a VXLAN tunnel, it checks the IP DA of the underlay IP header and performs the following BM packet processing.
![]() | Note: The AR-R function is only relevant to BM packets; it does not apply to unknown unicast packets. If the AR-R receives unknown unicast packets, it sends them to the flooding list, skipping the VXLAN tunnels. |
An AR-L is configured as shown in the following example.
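A minimal AR-L sketch follows, based on the assisted-replication leaf option described above. Service and VNI values are illustrative, and the replicator-activation-time parameter (discussed later in this section) is assumed to be configured together with the leaf option.

```
# Illustrative VPLS acting as AR-L
configure service vpls 300 customer 1 create
    vxlan vni 300 create
        assisted-replication leaf replicator-activation-time 30
    exit
    bgp-evpn
        evi 300
        vxlan
            no shutdown
    no shutdown
```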
In this example configuration, BGP advertises a new inclusive multicast route with tunnel-type = IR, type (T) = AR-L, and tunnel-id = originating-ip = next-hop = IR-IP (the IP address that normally terminates VXLAN, either the system-ip or the vxlan-src-vtep address).
The AR-L builds a single flooding list per service, controlled by the BM and U flags. These flags are displayed in the following show service id vxlan command sample output.
The AR-L creates the following VXLAN destinations when it receives and selects a Replicator-AR route or the Regular-IR routes.
The BM traffic is only sent to the selected AR-R, whereas the U (unknown unicast) traffic is sent to all the destinations with the U flag.
The AR-L performs per-service load-balancing of the BM traffic when two or more AR-Rs exist in the same service. The AR-L creates an ordered list of candidate AR-Rs (ordered by IP and VNI, with candidate 0 having the lowest IP and VNI). The replicator is selected based on a modulo function of the service-id and the number of replicators; for example, with two AR-Rs in service 301, 301 mod 2 = 1, so candidate 1 is selected. This is shown in the following sample output.
A change in the number of Replicator-AR routes (for example, if a route is withdrawn or a new route appears) affects the result of the hashing, which may cause a different AR-R to be selected.
![]() | Note: An AR-L waits for the configured replicator-activation-time before sending the BM packets to the AR-R. In the interim, the AR-L uses regular ingress replication procedures. This activation time allows the AR-R to program the Leaf VTEP. If the timer is zero, the AR-R may receive packets from a not-yet-programmed source VTEP, in which case it will discard the packets. |
The following list summarizes other aspects of the AR-L behavior.
Figure 155 shows the expected replication behavior for BM traffic when received at the access on an AR-R, AR-L, or RNVE router. Unknown unicast follows regular ingress replication behavior regardless of the role of the ingress node for the specific service.
The Assisted-Replication feature has the following limitations.
The Nuage Virtual Services Platform (VSP) supports a service chaining function that ensures that traffic between application hosts traverses a number of services, also known as Service Functions (FW, LB, NAT, IPS/IDS, and so on), when the operator requires it. In the DC, tenants want the ability to specify these functions and their sequence, so that services can be added or removed without requiring changes to the underlying application.
This service chaining function is built on a series of policy-based routing/forwarding redirect rules that are automatically coordinated and abstracted by the Nuage Virtual Services Directory (VSD). From a networking perspective, the packets are redirected hop-by-hop based on the location of the corresponding Service Function (SF) in the DC fabric. The location of the SF is specified by its VTEP and VNI and is advertised by BGP-EVPN along with an Ethernet Segment Identifier that is uniquely associated with the SF.
Refer to the Nuage VSP documentation for more information about the Nuage Service Chaining solution.
The 7750 SR, 7450 ESS, or 7950 XRS can be integrated as the first hop in the chain in a Nuage DC. This service chaining integration is intended to be used as described in the following three use cases.
Figure 156 shows the 7750 SR, 7450 ESS, and 7950 XRS Service Chaining integration with the Nuage VSP on VPLS services. In this example, the DC GW, PE1, is connected to an L2-DOMAIN that exists in the DC and must redirect the traffic to the Service Function SF-1. The regular Layer 2 forwarding procedures would have taken the packets to PE2, as opposed to SF-1.
An operator must configure a PBF match/action filter policy entry in an IPv4 or MAC ingress access or network filter deployed on a VPLS interface using CLI/SNMP/NETCONF management interfaces. The PBF target is the first service function in the chain (SF-1) that is identified by an ESI.
In the example shown in Figure 156, the PBF filter will redirect the matching packets to ESI 0x01 in VPLS-1.
![]() | Note: Figure 156 represents ESI as “0x01” for simplicity; in reality, the ESI is a 10-byte number. |
As soon as the redirection target is configured and associated with the vport connected to SF-1, the Nuage VSC (Virtual Services Controller, or the remote PE3 in the example) advertises the location of SF-1 via an Auto-Discovery Ethernet Tag route (route type 1) per-EVI. In this AD route, the ESI associated with SF-1 (ESI 0x01) is advertised along with the VTEP (PE3's IP) and VNI (VNI-1) identifying the vport where SF-1 is connected. PE1 will send all the frames matching the ingress filter to PE3's VTEP and VNI-1.
![]() | Note: When packets get to PE3, VNI-1 (the VNI advertised in the AD route) will indicate that a cut-through switching operation is needed to deliver the packets straight to the SF-1 vport, without the need for a regular MAC lookup. |
The following filter configuration shows an example of PBF rule redirecting all the frames to an ESI.
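The filter is summarized in the following sketch. The filter ID, entry ID, SAP, and 10-byte ESI value are illustrative assumptions; the svc-id refers to the VPLS where the PBF applies (VPLS-301, matching the show commands below), and the exact syntax of the forward esi action may vary by release.

```
configure filter mac-filter 1 create
    default-action forward
    entry 10 create
        action
            forward esi 01:00:00:00:00:00:00:00:00:01 svc-id 301
        exit
    exit
# Apply the filter on the VPLS SAP (illustrative SAP)
configure service vpls 301
    sap 1/1/1:301 create
        ingress
            filter mac 1
```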
When the filter is properly applied to the VPLS service (VPLS-301 in this example), it will show 'Active' in the following show commands as long as the Auto-Discovery route for the ESI is received and imported.
Details of the received AD route that resolves the filter forwarding are shown in the following 'show router bgp routes' command.
This AD route, when used for PBF redirection, is added to the list of EVPN-VXLAN bindings for the VPLS service and shown as 'L2 PBR' type:
If the AD route is withdrawn, the binding will disappear and the filter will be inactive again. The user can control whether the matching packets are dropped or forwarded if the PBF target cannot be resolved by BGP.
![]() | Note: ES-based PBF filters can be applied only on services with the default bgp (vxlan) instance (instance 1). |
Figure 157 shows the 7750 SR, 7450 ESS, and 7950 XRS Service Chaining integration with the Nuage VSP on L2-DOMAIN-IRB domains. In this example, the DC GW, PE1, is connected to an L2-DOMAIN-IRB that exists in the DC and must redirect the traffic to the Service Function SF-1 with IP address 10.10.10.1. The regular Layer 3 forwarding procedures would have taken the packets to PE2, as opposed to SF-1.
In this case, an operator must configure a PBR match/action filter policy entry in an IPv4 ingress access or network filter deployed on an IES/VPRN interface using CLI, SNMP, or NETCONF management interfaces. The PBR target identifies the first service function in the chain (ESI 0x01 in Figure 157, identifying where the Service Function is connected, and the IPv4 address of the SF) and the EVPN VXLAN egress interface on the PE (VPRN routing instance and R-VPLS interface name). The BGP control plane, together with the ESI PBR configuration, is used to forward the matching packets to the next-hop in the EVPN-VXLAN data center chain (through resolution to a VNI and VTEP). If the BGP control plane information is not available, the packets matching the ESI PBR entry are, by default, forwarded using regular routing. Optionally, an operator can choose to drop the packets when the ESI PBR target is not reachable.
The following filter configuration shows an example of a PBR rule redirecting all the matching packets to an ESI.
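A sketch of such a PBR entry follows. The sf-ip (10.10.10.1) and router instance (300) are the values discussed below; the filter ID, ESI value, and vas-interface name are illustrative assumptions, and the exact ordering of the action parameters may vary by release.

```
configure filter ip-filter 1 create
    default-action forward
    entry 10 create
        action
            forward esi 01:00:00:00:00:00:00:00:00:01 sf-ip 10.10.10.1 vas-interface "vas-1" router 300
        exit
    exit
```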
In this use case, the following are required in addition to the ESI: the sf-ip (10.10.10.1 in the example above), the router instance (300), and the vas-interface.
The sf-ip is used by the system to determine which inner MAC DA to use when sending the redirected packets to the SF. The sf-ip is resolved to the SF MAC following regular ARP procedures in EVPN-VXLAN.
The router instance may be the same as the one where the ingress filter is configured, or it may be different; for instance, the ingress PBR filter can be applied on an IES interface pointing at a VPRN router instance that is connected to the DC fabric.
The vas-interface refers to the R-VPLS interface name through which the SF can be reached. The VPRN instance may have more than one R-VPLS interface; therefore, the R-VPLS interface to use must be specified.
When the filter is properly applied to the VPRN or IES service (VPRN-300 in this example), it shows 'Active' in the following show commands as long as the Auto-Discovery route for the ESI is received and imported and the sf-ip is resolved to a MAC address.
In the FDB for R-VPLS 301, the MAC address is associated with the VTEP and VNI specified by the AD route, and no longer by the MAC/IP route. When a PBR filter with a forward action to an ESI and sf-ip (Service Function IP) exists, a MAC route is auto-created by the system, and this route has higher priority than the remote MAC or IP routes for the MAC (see BGP and EVPN Route Selection for EVPN Routes).
The following shows that the AD route creates a new EVPN-VXLAN binding and the MAC address associated with the SF-IP uses that 'binding':
As in the Layer 2 use case, if the AD route is withdrawn or the sf-ip ARP is not resolved, the filter becomes inactive again. The user can control whether the matching packets are dropped or forwarded when the PBR target cannot be resolved by BGP.
SR OS supports EVPN VXLAN multihoming as specified in RFC 8365. Similar to EVPN-MPLS, as described in EVPN for MPLS Tunnels, ESs and virtual ESs can be associated with VPLS and R-VPLS services where BGP-EVPN VXLAN is enabled. Figure 158 illustrates the use of ESs in EVPN VXLAN networks.
As described in EVPN Multi-Homing in VPLS Services, the multihoming procedures consist of three components:
DF election is the mechanism by which the PEs attached to the same ES elect a single PE to forward all traffic (in case of single-active mode) or all BUM traffic (in case of all-active mode) to the multi-homed CE. The same DF Election mechanisms described in EVPN for MPLS Tunnels are supported for VXLAN services.
Split-horizon is the mechanism by which BUM traffic received from a peer ES PE is filtered so that it is not looped back to the CE that first transmitted the frame. It is applicable to all-active multi-homing. This is illustrated in Figure 158, where PE4 receives BUM traffic from PE3 but, in spite of being the DF for ES-2, PE4 filters the traffic and does not send it back to host-1. While split-horizon filtering uses ESI-labels in EVPN MPLS services, an alternative procedure called “Local Bias” is applied in VXLAN services, as described in RFC 8365. In MPLS services, split-horizon filtering may be used in single-active mode to avoid in-flight BUM packets from being looped back to the CE during transient times. In VXLAN services, split-horizon filtering is only used with all-active mode.
Aliasing is the procedure by which PEs that are not attached to the ES can process MAC/IP routes with a non-zero ESI, together with the AD routes, and create ES destinations to which per-flow ECMP can be applied. Aliasing only applies to all-active mode.
As an example, the configuration of an ES that is used for VXLAN services follows. Note that this ES can be used for VXLAN services and MPLS services (in both cases VPLS and Epipes).
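A minimal sketch of the ES configuration follows; the ES name, esi value, and lag ID are illustrative assumptions.

```
configure service system bgp-evpn
    ethernet-segment "ES-1" create
        esi 01:00:00:00:00:12:00:00:00:01
        multi-homing all-active
        lag 1
        no shutdown
```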
An example of configuration of a VXLAN service using the above ES follows:
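The following minimal sketch shows a VPLS attached to the ES above. Service, VNI, and SAP values are illustrative, and the exact placement of the auto-disc-route-advertisement and mh-mode commands in the bgp-evpn context may vary by release; the commands themselves are explained next.

```
configure service vpls 100 customer 1 create
    vxlan vni 100 create
    exit
    bgp
    bgp-evpn
        evi 100
        vxlan
            auto-disc-route-advertisement
            mh-mode network
            ecmp 2
            no shutdown
    sap lag-1:100 create
    no shutdown
```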
The commands auto-disc-route-advertisement and mh-mode network are required in all services that are attached to at least one ES, and they must be configured both in the PEs attached locally to the ES and in the remote PEs in the same service. The former enables the advertising of multihoming routes in the service, whereas the latter activates the multihoming procedures for the service, including the local bias mode for split-horizon.
In addition, the configuration of vpls>bgp-evpn>vxlan>ecmp 2 (or greater) is required so that VXLAN ES destinations with two or more next hops can be used for per-flow load balancing. The following command shows how PE1, as shown in Figure 158, creates an ES destination composed of two VXLAN next hops.
EVPN MPLS, as described in EVPN for MPLS Tunnels, uses ESI-labels to identify the BUM traffic sourced from a given ES. The egress PE performs a label lookup to find the ESI label below the EVI label and to determine if a frame can be forwarded to a local ES. Because VXLAN does not support ESI-labels, or any MPLS label for that matter, the split-horizon filtering must be based on the tunnel source IP address. This also implies that the SAP-to-SAP forwarding rules must be changed when the SAPs belong to local ESs, irrespective of the DF state. This new forwarding is what RFC 8365 refers to as local bias. Figure 159 illustrates the local bias forwarding behavior.
Local bias is based on the following principles.
With this approach, the ingress PE must perform replication locally to all directly-attached ESs (regardless of the DF Election state) for all flooded traffic coming from the access interfaces. BUM frames received on any SAP are flooded to:
As an example, in Figure 159, PE2 receives BUM traffic from Host-3 and it forwards it to the remote PEs and the local ES SAP, even though the SAP is in NDF state.
The following rules apply to egress PE forwarding for EVPN-VXLAN services.
For example, in Figure 159, PE3 receives BUM traffic on VXLAN. PE3 identifies the source VTEP as a PE with which two ESs are shared, hence it does not forward the BUM frames to the two shared ESs. It forwards to the non-shared ES (Host-5) because it is in DF state. PE4 receives BUM traffic and forwards it based on normal rules because it does not share any ESs with PE2.
The following command can be used to check whether the local PE has enabled the local bias procedures for a given ES:
In EVPN MPLS networks, an ingress PE that uses ingress replication to flood unknown unicast traffic pushes a BUM MPLS label that is different from a unicast label. The egress PEs use this BUM label to identify such BUM traffic and thus apply DF filtering for All-Active multi-homed sites. In PBB-EVPN, in addition to the multicast label, the egress PE can also rely on the multicast B-MAC DA to identify customer BUM traffic.
In VXLAN there are no BUM labels or any tunnel indication that can assist the egress PE in identifying the BUM traffic. As such, the egress PE must solely rely on the C-MAC destination address, which may create some transient issues that are depicted in Figure 160.
As shown in the top diagram of Figure 160, in the absence of the mentioned unknown unicast traffic indication, there can be transient duplicate traffic to all-active multi-homed sites under the following condition: CE1's MAC address is learned by the egress PEs (PE1 and PE2) and advertised to the ingress PE3; however, the MAC advertisement has not yet been received or processed by the ingress PE, so the host MAC address is unknown on the ingress PE3 but known on the egress PEs. When a packet destined for CE1 arrives at PE3, it is flooded via ingress replication to PE1 and PE2 and, because CE1's MAC is known to PE1 and PE2, multiple copies are sent to CE1.
Another issue is shown at the bottom of Figure 160. In this case, CE1’s MAC address is known on the ingress PE3 but unknown on PE1 and PE2. If PE3’s aliasing hashing picks up the path to the ES’ NDF, a black-hole occurs.
Both issues are solved in MPLS because known and unknown unicast frames are identified with different labels.
Finally, another issue is outlined in Figure 161. Under normal circumstances, when CE3 sends BUM traffic to PE3, the traffic is “local-biased” to PE3’s SAP3 even though it is NDF for the ES. The flooded traffic to PE2 is forwarded to CE2, but not to SAP2 because the local bias split-horizon filtering takes place.
The right side of the diagram in Figure 161 shows an issue when SAP3 is manually shut down. In this case, PE3 withdraws the AD per-EVI route corresponding to SAP3; however, this does not change the local bias filtering for SAP2 in PE2. Therefore, when CE3 sends BUM traffic, it can neither be forwarded to CE2 via local SAP3 nor be forwarded by PE2.
EVPN VXLAN multi-homing is supported on VPLS and R-VPLS services when the PEs use non-system IPv4 or IPv6 termination; however, as with EVPN-VPWS services, additional configuration steps are required.
This section provides information about EVPN for MPLS tunnels.
Table 79 lists all the EVPN routes supported in 7750 SR, 7450 ESS, or 7950 XRS SR OS and their usage in EVPN-VXLAN, EVPN-MPLS, and PBB-EVPN.
![]() | Note: Route type 1 is not required in PBB-EVPN as per RFC 7623. |
| EVPN Route | Usage | EVPN-VXLAN | EVPN-MPLS | PBB-EVPN |
| --- | --- | --- | --- | --- |
| Type 1 - Ethernet Auto-Discovery route (A-D) | Mass-withdraw, ESI labels, Aliasing | Only EVPN-VPWS | Y | — |
| Type 2 - MAC/IP Advertisement route | MAC/IP advertisement, IP advertisement for ARP resolution | Y | Y | Y |
| Type 3 - Inclusive Multicast Ethernet Tag route | Flooding tree setup (BUM flooding) | Y | Y | Y |
| Type 4 - ES route | ES discovery and DF election | Only EVPN-VPWS | Y | Y |
| Type 5 - IP Prefix advertisement route | IP Routing | Y | Y | — |
RFC 7432 describes the BGP-EVPN control plane for MPLS tunnels. If EVPN multi-homing is not required, two route types are needed to set up a basic EVI (EVPN Instance): MAC/IP Advertisement and the Inclusive Multicast Ethernet Tag routes. If multi-homing is required, the ES and the Auto-Discovery routes are also needed.
The route fields and extended communities for route types 2 and 3 are shown in Figure 145 (see BGP-EVPN Control Plane for VXLAN Overlay Tunnels). The changes compared to their use in EVPN-VXLAN are described below.
EVPN Route Type 3 – Inclusive Multicast Ethernet Tag Route
As in EVPN-VXLAN, route type 3 is used for setting up the flooding tree (BUM flooding) for a specified VPLS service. The received inclusive multicast routes add entries to the VPLS flood list in the 7750 SR, 7450 ESS, and 7950 XRS. Ingress replication, P2MP mLDP, and composite tunnels are supported as tunnel types in route type 3 when BGP-EVPN MPLS is enabled.
The following route values are used for EVPN-MPLS services:
IMET-P2MP-IR routes are used in EVIs with a few root nodes and a significant number of leaf-only PEs. In this scenario, a combination of P2MP and IR tunnels can be used in the network, such that the root nodes use P2MP tunnels to send broadcast, unknown unicast, and multicast traffic, while the leaf-PE nodes use IR to send traffic to the roots. This use case is documented in RFC 8317 and its main advantage is the significant savings in P2MP tunnels that the PE/P routers in the EVI need to handle (as opposed to a full mesh of P2MP tunnels among all the PEs in an EVI).
In this case, the root PEs signal a special tunnel type in the PTA, indicating that they intend to transmit BUM traffic using an mLDP P2MP tunnel but can also receive traffic over an IR evpn-mpls binding. An IMET route with this special “composite” tunnel type in the PTA is called an IMET-P2MP-IR route, and the encoding of its PTA is shown in Figure 162.
EVPN Route Type 2 - MAC/IP Advertisement Route
The 7750 SR, 7450 ESS, or 7950 XRS router generates this route type for advertising MAC addresses (and IP addresses if proxy-ARP/proxy-ND is enabled). The router generates MAC advertisement routes for the following:
![]() | Note: The unknown-mac-route is not supported for EVPN-MPLS services. |
The route type 2 generated by a router uses the following fields and values:
When EVPN multi-homing is enabled in the system, two more routes are required. Figure 163 shows the fields in routes type 1 and 4 and their associated extended communities.
EVPN Route Type 1 - Ethernet Auto-discovery Route (AD route)
The 7750 SR, 7450 ESS, or 7950 XRS router generates this route type for multi-homing functions. The system can generate two types of AD routes:
The Ethernet AD per-ESI route generated by a router uses the following fields and values:
The system can either send a separate Ethernet AD per-ESI route per service, or a few Ethernet AD per-ESI routes aggregating the route-targets for multiple services. While both alternatives interoperate, RFC 7432 states that the EVPN Auto-Discovery per-ES route must be sent with a set of route-targets corresponding to all the EVIs defined on the Ethernet-Segment. Either option can be enabled using the following command: config>service>system>bgp-evpn#ad-per-es-route-target <[evi-rt] | [evi-rt-set route-distinguisher <ip-address>]>
The default option, ad-per-es-route-target evi-rt, configures the system to send a separate AD per-ES route per service. When enabled, the evi-rt-set option allows the aggregation of routes: a single AD per-ES route with the associated RD (ip-address:1) and a set of EVI route-targets is advertised, up to a maximum of 128 route-targets per route. When the number of EVIs defined on the Ethernet-Segment (and hence the number of route-targets) is significant, the system sends more than one route. For example:
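The following sketch (with an illustrative IP address) enables the aggregated option:

```
configure service system bgp-evpn
    ad-per-es-route-target evi-rt-set route-distinguisher 192.0.2.1
```

With this configuration, the AD per-ES routes are advertised with RD 192.0.2.1:1, each carrying up to 128 EVI route-targets; additional routes are sent as more EVIs are defined on the Ethernet-Segment.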
![]() | Note: When evi-rt-set is configured, no vsi-export policies are possible on the services defined on the Ethernet-Segment. If vsi-export policies are configured for a service, the system will send an individual AD per-ES route for that service. The maximum standard BGP update size is 4KB, with a maximum of 2KB for the route-target extended community attribute. |
The Ethernet AD per-EVI route generated by a router uses the following fields and values:
![]() | Note: The AD per-EVI route is not sent with the ESI label Extended Community. |
EVPN Route Type 4 - ES route
The router generates this route type for multi-homing ES discovery and DF (Designated Forwarder) election.
EVPN Route Type 5 - IP Prefix Route
IP Prefix Routes are also supported for MPLS tunnels. The route fields for route type 5 are shown in Figure 147. The 7750 SR, 7450 ESS, or 7950 XRS router will generate this route type for advertising IP prefixes in EVPN using the same fields that are described in section BGP-EVPN Control Plane for VXLAN Overlay Tunnels, with the following exceptions:
RFC 5512 - BGP Tunnel Encapsulation Extended Community
The following routes are sent with the RFC 5512 BGP Encapsulation Extended Community: MAC/IP, Inclusive Multicast Ethernet Tag, and AD per-EVI routes. ES and AD per-ESI routes are not sent with this Extended Community.
The router processes the following BGP Tunnel Encapsulation tunnel values registered by IANA for RFC 5512:
Any other tunnel value causes the route to be handled as 'treat-as-withdraw'.
If the encapsulation value is MPLS, BGP validates the high-order 20 bits of the label field, ignoring the low-order 4 bits. If the encapsulation is VXLAN, BGP takes the entire 24-bit value encoded in the MPLS label field as the VNI.
If the encapsulation extended community (as defined in RFC 5512) is not present in a received route, BGP treats the route as MPLS or VXLAN based on the configuration of the config>router>bgp>neighbor# def-recv-evpn-encap [mpls | vxlan] command. The command is also available at the bgp and group levels.
EVPN can be used in MPLS networks where PEs are interconnected through any type of tunnel, including RSVP-TE, Segment-Routing TE, LDP, BGP, Segment Routing IS-IS, Segment Routing OSPF, RIB-API, MPLS-forwarding-policy, SR-Policy, or MPLSoUDP. As with VPRN services, tunnel selection for a VPLS service (with BGP-EVPN MPLS enabled) is based on the auto-bind-tunnel command. The BGP EVPN routes next-hops can be IPv4 or IPv6 addresses and can be resolved to a tunnel in the IPv4 tunnel-table or IPv6 tunnel-table.
EVPN-MPLS is modeled similar to EVPN-VXLAN, that is, using a VPLS service where EVPN-MPLS “bindings” can coexist with SAPs and SDP bindings. The following shows an example of a VPLS service with EVPN-MPLS.
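The example is summarized in the following minimal sketch, using illustrative service and SAP values; the commands shown are the ones discussed next.

```
configure service vpls 1 customer 1 create
    bgp
    bgp-evpn
        evi 1
        vxlan
            shutdown
        mpls
            auto-bind-tunnel
                resolution any
            no shutdown
    sap 1/1/1:1 create
    no shutdown
```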
First configure a bgp-evpn context where vxlan must be shutdown and mpls no shutdown. In addition to the mpls no shutdown command, the minimum set of commands to be configured to set up the EVPN-MPLS instance are the evi and the auto-bind-tunnel resolution commands. Other relevant configuration options are described below.
evi {1..65535} — This EVPN identifier is unique in the system; it is used by the service-carving algorithm for multi-homing (if configured) and for auto-deriving the route-target and route-distinguisher of the service. It can be used for EVPN-MPLS and EVPN-VXLAN services.
If this EVPN identifier is not specified, the value is zero and no route-distinguisher or route-targets are auto-derived from it. If it is specified and no other route-distinguisher/route-target is configured in the service, the following applies:
![]() | Note: When the vsi-import/export polices are configured, the route-target must be configured in the policies and those values take preference over the auto-derived route-targets. The operational route-target for a service will be displayed by the show service id svc-id bgp command. If the bgp-ad>vpls-id is configured in the service, the vpls-id derived route-target takes precedence over the evi-derived route-target. |
When the evi is configured, a config>service>vpls>bgp node (even empty) is required to allow the user to see the correct information on the show service id 1 bgp and show service system bgp-route-distinguisher commands.
The configuration of an evi is enforced for EVPN services with SAPs/SDP bindings in an ethernet-segment. See EVPN Multi-Homing in VPLS Services for more information about ESs.
The following options are specific to EVPN-MPLS (and defined in bgp-evpn>mpls):
![]() | Note: This command may be used in conjunction with the sap ingress vlan-translation command. If so, the configured translated VLAN ID will be the VLAN ID sent to the EVPN binds as opposed to the service-delimiting tag VLAN ID. If the ingress SAP/binding is null-encapsulated, the output VLAN ID and pbits will be zero. |
In addition to these options, the following bgp-evpn commands are also available for EVPN-MPLS services:
When EVPN-MPLS is established among some PEs in the network, EVPN unicast and multicast 'bindings' are created on each PE to the remote EVPN destinations. A specified ingress PE will create:
Those bindings, as well as the MACs learned on them, can be checked through the following show commands. In the following example, the remote PE (192.0.2.69) is configured with no ingress-replication-bum-label and PE (192.0.2.70) is configured with ingress-replication-bum-label. Hence, the DUT has a single EVPN-MPLS destination binding to PE (192.0.2.69) and two bindings (unicast and multicast) to PE (192.0.2.70).
The 7750 SR, 7450 ESS, or 7950 XRS router SR OS EVPN implementation supports RFC 8560 so that EVPN-MPLS and VPLS can be integrated into the same network and within the same service. Because EVPN is not always deployed in greenfield networks, this feature is useful for the integration of both technologies and even for the migration of VPLS services to EVPN-MPLS.
The following behavior enables the integration of EVPN and SDP bindings in the same VPLS network:
a) Systems with EVPN endpoints and SDP bindings to the same far-end bring down the SDP bindings.
b) The user can add spoke SDPs and all the EVPN-MPLS endpoints in the same split horizon group (SHG).
c) The system disables the advertisement of MACs learned on spoke SDPs and SAPs that are part of an EVPN split horizon group.
The operation in services with BGP-VPLS and BGP-EVPN is equivalent to the one outlined above for BGP-AD and BGP-EVPN.
In a VPLS service to which multiple EVPN PEs and BGP-VPLS PEs are attached, single-active multi-homing is supported on two or more of the EVPN PEs with no special considerations. All-active multi-homing is not supported, since the traffic from the all-active multi-homed CE could cause a MAC flip-flop effect on remote BGP-VPLS PEs, asymmetric flows, or other issues.
Figure 165 illustrates a scenario with a single-active ethernet-segment used in a service where EVPN PEs and BGP-VPLS are integrated.
Although other single-active examples are supported, in Figure 165, CE1 is connected to the EVPN PEs via a single LAG (lag-1). The LAG is associated with ethernet-segment 1 on PE1 and PE2, which is configured as single-active and with oper-group 1. PE1 and PE2 make use of lag>monitor-oper-group 1 so that the non-DF PE can signal the non-DF state to CE1 (in the form of LACP out-of-sync or power-off).
In addition to the BGP-VPLS routes sent for the service ve-id, the multi-homing PEs in this case need to generate additional BGP-VPLS routes per Ethernet Segment (per VPLS service) for the purpose of MAC flush on the remote BGP-VPLS PEs in case of failure.
The sap>bgp-vpls-mh-veid number command should be configured on the SAPs that are part of an EVPN single-active Ethernet Segment, and allows the advertisement of L2VPN routes that indicate the state of the multi-homed SAPs to the remote BGP-VPLS PEs. Upon a Designated Forwarder (DF) switchover, the F and D bits of the generated L2VPN routes for the SAP ve-id are updated so that the remote BGP-VPLS PEs can perform a mac-flush operation on the service and avoid blackholes.
As an example, in case of a failure on the ethernet-segment SAP on PE1, PE1 must indicate to PE3 and PE4 that they need to flush the MAC addresses learned from PE1 (a flush-all-from-me message). Otherwise, PE3 would continue sending traffic with MAC DA = CE1 to PE1, and PE1 would blackhole the traffic.
In the Figure 165 example:
Other considerations:
In a VPLS service, multiple BGP families and protocols can be enabled at the same time. When bgp-evpn is enabled, bgp-ad and bgp-mh are supported as well. A single RD is used per service and not per BGP family/protocol.
The following rules apply:
![]() | Note: When the RD changes, the active routes for that VPLS will be withdrawn and re-advertised with the new RD. |
![]() | Note: This reconfiguration will fail if the new RD already exists in a different VPLS/epipe. |
EVPN multi-homing implementation is based on the concept of the ethernet-segment. An ethernet-segment is a logical structure that can be defined in one or more PEs and identifies the CE (or access network) multi-homed to the EVPN PEs. An ethernet-segment is associated with port, LAG, PW port, or SDP objects and is shared by all the services defined on those objects. In the case of virtual ESs, individual VID or VC-ID ranges can be associated with the port, LAG, PW port, or SDP objects defined in the ethernet-segment.
Each ethernet-segment has a unique Ethernet Segment Identifier (esi) that is 10 bytes long and is manually configured in the router.
![]() | Note: The esi is advertised in the control plane to all the PEs in an EVPN network; therefore, it is very important to ensure that the 10-byte esi value is unique throughout the entire network. Single-homed CEs are assumed to be connected to an Ethernet-Segment with esi = 0 (single-homed Ethernet-Segments are not explicitly configured). |
This section describes the behavior of the EVPN multi-homing implementation in an EVPN-MPLS service.
As described in RFC 7432, all-active multi-homing is only supported on access LAG SAPs, and it is mandatory that the CE is configured with a LAG to avoid duplicate packets to the network. LACP is optional. SR OS also supports the association of a PW port with an all-active multi-homing ES.
Three different procedures are implemented in 7750 SR, 7450 ESS, and 7950 XRS SR OS to provide all-active multi-homing for a specified Ethernet-Segment:
Figure 166 shows the need for DF election in all-active multi-homing.
The DF election in EVPN all-active multi-homing avoids duplicate packets on the multi-homed CE. The DF election procedure is responsible for electing one DF PE per ESI per service; the rest of the PEs being non-DF for the ESI and service. Only the DF will forward BUM traffic from the EVPN network toward the ES SAPs (the multi-homed CE). The non-DF PEs will not forward BUM traffic to the local Ethernet-Segment SAPs.
![]() | Note: BUM traffic from the CE to the network and known unicast traffic in any direction is allowed on both the DF and non-DF PEs. |
Figure 167 shows the EVPN split-horizon concept for all-active multi-homing.
The EVPN split-horizon procedure ensures that BUM traffic originated by the multi-homed PE and sent from the non-DF to the DF is not replicated back to the CE (echoed packets on the CE). To avoid these echoed packets, the non-DF (PE1) sends all the BUM packets to the DF (PE2) with an indication of the source Ethernet-Segment. That indication is the ESI label (ESI2 in the example), previously signaled by PE2 in the AD per-ESI route for the Ethernet-Segment. When PE2 receives an EVPN packet, after the EVPN label lookup, it finds the ESI label that identifies its local Ethernet-Segment ESI2. The BUM packet is replicated to other local CEs but not to the ESI2 SAP.
Figure 168 shows the EVPN aliasing concept for all-active multi-homing.
Because CE2 is multi-homed to PE1 and PE2 using an all-active Ethernet-Segment, 'aliasing' is the procedure by which PE3 can load-balance the known unicast traffic between PE1 and PE2, even if the destination MAC address was only advertised by PE1, as in the example. When PE3 installs MAC1 in the FDB, it associates MAC1 not only with the advertising PE (PE1) but also with all the PEs advertising the same esi (ESI2) for the service. In this example, PE1 and PE2 advertise an AD per-EVI route for ESI2; therefore, PE3 installs the two next-hops associated with MAC1.
Aliasing is enabled by configuring ECMP greater than 1 in the bgp-evpn mpls context.
The following shows an example PE1 configuration that provides all-active multi-homing to the CE2 shown in Figure 168.
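The PE1 example is summarized in the following minimal sketch; the ES name, esi value, lag ID, and service values are illustrative assumptions.

```
configure service system bgp-evpn
    ethernet-segment "ESI-2" create
        esi 01:00:00:00:00:72:00:00:00:02
        multi-homing all-active
        lag 1
        no shutdown
configure service vpls 1 customer 1 create
    bgp
    bgp-evpn
        evi 1
        mpls
            no shutdown
    sap lag-1:1 create
    no shutdown
```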
In the same way, PE2 is configured as follows:
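PE2's sketch mirrors PE1's: the esi value must match on both PEs, whereas the lag ID is local to each PE.

```
configure service system bgp-evpn
    ethernet-segment "ESI-2" create
        esi 01:00:00:00:00:72:00:00:00:02
        multi-homing all-active
        lag 1
        no shutdown
```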
The preceding configuration will enable the all-active multi-homing procedures. The following must be considered:
![]() | Note: The source-bmac-lsb attribute must be defined for PBB-EVPN (so that it will only be used in PBB-EVPN, and ignored by EVPN). Other than EVPN-MPLS and PBB-EVPN I-VPLS/Epipe services, no other Layer 2 services are allowed in the same ethernet-segment (regular VPLS or EVPN-VXLAN SAPs defined on the ethernet-segment will be kept operationally down). |
The ES discovery and DF election is implemented in three logical steps, as shown in Figure 169.
Ethernet-segment ESI-1 is configured as per the previous section, with all the required parameters. When ethernet-segment no shutdown is executed, PE1 and PE2 will advertise an ES route for ESI-1. They will both include the route-target auto-derived from the MAC portion of the configured ESI. If the route-target address family is configured in the network, this will allow the RR to keep the dissemination of the ES routes under control.
In addition to the ES route, PE1 and PE2 will advertise AD per-ESI routes and AD per-EVI routes.
When ES routes exchange between PE1 and PE2 is complete, both run the DF election for all the services in the ethernet-segment.
PE1 and PE2 elect a Designated Forwarder (DF) per <ESI, service>. The default DF election mechanism in 7750 SR, 7450 ESS, and 7950 XRS SR OS is service-carving (as per RFC 7432). The following applies when enabled on a specified PE:
![]() | Note: The remote PE IPs must be present in the local PE's RTM so that they can participate in the DF election. |
![]() | Note: The algorithm takes the configured evi in the service as opposed to the service-id itself. The evi for a service must match in all the PEs that are part of the ESI. This guarantees that the election algorithm is consistent across all the PEs of the ESI. The evi must be always configured in a service with SAPs/SDP bindings that are created in an ES. |
config>service>system>bgp-evpn>eth-seg>service-carving> mode off
The following show command displays the ethernet-segment configuration and DF status for all the EVIs and ISIDs (if PBB-EVPN is enabled) configured in the ethernet-segment.
Based on the result of the DF election or the manual service-carving, the control plane on the non-DF (PE1) instructs the data path to remove the LAG SAP (associated with the ESI) from the default flooding list for BM traffic (unknown unicast traffic may still be sent if the EVI label is a unicast label and the source MAC address is not associated with the ESI). On PE1 and PE2, both LAG SAPs learn the same MAC address (coming from the CE). For instance, in the following show commands, 00:ca:ca:ba:ce:03 is learned on both the PE1 and PE2 access LAGs (on ESI-1). However, PE1 learns the MAC as 'Learned' whereas PE2 learns it as 'Evpn'. This is because CE2 hashes the traffic for that source MAC to PE1. PE2 learns the MAC through EVPN but associates it with the ESI SAP, because the MAC belongs to the ESI.
When PE1 (non-DF) and PE2 (DF) exchange BUM packets for evi 1, all those packets will be sent including the ESI label at the bottom of the stack (in both directions). The ESI label advertised by each PE for ESI-1 can be displayed by the following command:
Following the example in Figure 169, if the service configuration on PE3 has ECMP > 1, PE3 adds PE1 and PE2 to the list of next-hops for ESI-1. As soon as PE3 receives a MAC for ESI-1, it starts load-balancing the flows to the remote ESI CE between PE1 and PE2. The following command shows the FDB in PE3.
![]() | Note: mac 00:ca:ca:ba:ce:03 is associated with the Ethernet-Segment eES:01:00:00:00:00:71:00:00:00:01 (esi configured on PE1 and PE2 for ESI-1). |
The following command shows all the EVPN-MPLS destination bindings on PE3, including the ES destination bindings.
The Ethernet-Segment eES:01:00:00:00:00:71:00:00:00:01 is resolved to PE1 and PE2 addresses:
PE3 performs aliasing for all the MACs associated with that ESI. This is possible because PE3 is configured with an ECMP parameter greater than 1:
Figure 170 shows the behavior on the remote PEs (PE3) when there is an ethernet-segment failure.
The unicast traffic behavior on PE3 is as follows:
For BUM traffic, the following events would trigger a DF election on a PE and only the DF would forward BUM traffic after the esi-activation-timer expiration (if there was a transition from non-DF to DF).
Be aware of the effects triggered by certain 'failure scenarios'; some of these scenarios are shown in Figure 171:
If an individual VPLS service is shut down in PE1 (the example is also valid for PE2), the corresponding LAG SAP goes operationally down. This event triggers the withdrawal of the AD per-EVI route for that particular SAP. PE3 removes PE1 from its list of aliased next-hops, and PE2 takes over as DF (if it was not the DF already). However, this does not prevent the network from black-holing the traffic that CE2 hashes to the link to PE1. Traffic sent from CE2 to PE2, or traffic from the rest of the CEs to CE2, is unaffected, so this situation is not easily detected on the CE.
The same result occurs if the ES SAP is administratively shutdown instead of the service.
![]() | Note: When bgp-evpn mpls shutdown is executed, the SAP associated with the ES is brought operationally down (flag StandByForMHProtocol) and so is the entire service if there are no other SAPs or SDP bindings in the service. However, if there are other SAPs/SDP bindings, the service remains operationally up. |
Some situations may cause potential transient issues to occur. These are shown in Figure 172 and explained below.
Transient packet duplication caused by delay in PE3 to learn MAC1:
This scenario is illustrated by the diagram on the left in Figure 172. In an all-active multi-homing scenario, if a specified MAC address (for example, MAC1) is not yet learned in a remote PE (for example, PE3), but is known in the two PEs of the ES (for example, PE1 and PE2), the latter PEs might send duplicated packets to the CE.
This issue is solved by the use of ingress-replication-bum-label in PE1 and PE2. If configured, PE1 and PE2 know that the received packet is an unknown unicast packet; therefore, the NDF (PE1) does not send the packets to the CE and there is no duplication.
![]() | Note: Even without the ingress-replication-bum-label, this is only a transient situation that would be solved as soon as MAC1 is learned in PE3. |
Transient blackhole caused by delay in PE1 to learn MAC1:
This case is illustrated by the diagram on the right in Figure 172. In an all-active multi-homing scenario, MAC1 is known in PE3 and aliasing is applied to MAC1. However, MAC1 is not known yet in PE1, the NDF for the ES. If PE3 hashing picks up PE1 as the destination of the aliased MAC1, the packets will be blackholed. This case is solved on the NDF by not blocking unknown unicast traffic that arrives with a unicast label. If PE1 and PE2 are configured using ingress-replication-bum-label, PE3 will send unknown unicast with a BUM label and known unicast with a unicast label. In the latter case, PE1 considers it is safe to forward the frame to the CE, even if it is unknown unicast. It is important to note that this is a transient issue and as soon as PE1 learns MAC1 the frames are forwarded as known unicast.
The 7750 SR, 7450 ESS, and 7950 XRS SR OS supports single-active multi-homing on access LAG SAPs, regular SAPs, and spoke SDPs for a specified VPLS service. For LAG SAPs, the CE will be configured with a different LAG to each PE in the Ethernet-Segment (as opposed to a single LAG in an all-active multi-homing).
The following SR OS procedures support EVPN single-active multi-homing for a specified Ethernet-Segment:
The following shows an example of PE1 configuration that provides single-active multi-homing to CE2, as shown in Figure 173.
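The PE1 example is summarized in the following minimal sketch. The ES name and esi value match the show commands later in this section; the spoke-SDP association (sdp 1) is an illustrative assumption, and a lag or port could be used instead.

```
configure service system bgp-evpn
    ethernet-segment "ESI-7413" create
        esi 01:74:13:00:74:13:00:00:74:13
        multi-homing single-active
        sdp 1
        no shutdown
```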
The PE2 example configuration for this scenario is as follows:
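PE2's sketch is symmetric: the esi value must match PE1's, whereas the SDP ID is local to each PE.

```
configure service system bgp-evpn
    ethernet-segment "ESI-7413" create
        esi 01:74:13:00:74:13:00:00:74:13
        multi-homing single-active
        sdp 2
        no shutdown
```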
In single-active multi-homing, the non-DF PEs for a specified ESI will block unicast and BUM traffic in both directions (upstream and downstream) on the object associated with the ESI. Other than that, single-active multi-homing is similar to all-active multi-homing with the following differences:
![]() | Note: In this case, the key, system-id, and system-priority must be different on the PEs that are part of the Ethernet-Segment. |
In all-active multi-homing, the non-DF keeps the SAP up, although it removes it from the default flooding list. In the single-active multi-homing implementation the non-DF will bring the SAP or SDP binding operationally down. Refer to the ES Discovery and DF Election Procedures for more information.
The following show commands display the status of the single-active ESI-7413 in the non-DF. The associated spoke SDP is operationally down and it signals PW Status standby to the multi-homed CE:
A remote PE (PE3 in Figure 173) will import the AD routes per ESI, where the single-active flag is set. PE3 will interpret that the Ethernet-Segment is single-active if at least one PE sends an AD route per-ESI with the single-active flag set. MACs for a specified service and ESI will be learned from a single PE, that is, the DF for that <ESI, EVI>.
The remote PE will install a single EVPN-MPLS destination (TEP, label) for a received MAC address and a backup next-hop to the PE for which the AD routes per-ESI and per-EVI are received. For instance, in the following command, 00:ca:ca:ba:ca:06 is associated with the remote ethernet-segment eES 01:74:13:00:74:13:00:00:74:13. That eES is resolved to PE(192.0.2.73), which is the DF on the ES.
If PE3 sees only two single-active PEs in the same ESI, the second PE becomes the backup PE. Upon receiving an AD per-ES/per-EVI route withdrawal for the ESI from the primary PE, PE3 immediately starts sending the unicast traffic to the backup PE.
If PE3 receives AD routes for the same ESI and EVI from more than two PEs, it does not install any backup route in the data path. Upon receiving an AD per-ES/per-EVI route withdrawal for the ESI, it flushes the MACs associated with the ESI.
Figure 174 shows the remote PE (PE3) behavior when there is an Ethernet-Segment failure.
The PE3 behavior for unicast traffic is as follows:
Also, a DF election on PE1 is triggered. In general, a DF election is triggered by the same events as for all-active multi-homing. In this case, the DF forwards traffic to CE2 after the esi-activation-timer expires (the timer starts when there is a transition from non-DF to DF).
P2MP mLDP tunnels for BUM traffic in EVPN-MPLS services are supported and enabled through the use of the provider-tunnel context. If EVPN-MPLS takes ownership of the provider-tunnel, bgp-ad is still supported in the service, but it does not generate BGP updates that include the PMSI Tunnel Attribute. The following CLI example shows an EVPN-MPLS service that uses P2MP mLDP LSPs for BUM traffic.
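The example is summarized in the following minimal sketch. Service values are illustrative, and the owner bgp-evpn-mpls and mldp keywords are assumptions based on the provider-tunnel options described in this section; exact keywords may vary by release.

```
configure service vpls 1 customer 1 create
    bgp
    bgp-evpn
        evi 1
        mpls
            no shutdown
    provider-tunnel
        inclusive
            owner bgp-evpn-mpls
            root-and-leaf
            mldp
            no shutdown
    no shutdown
```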
When provider-tunnel inclusive is used in EVPN-MPLS services, the following commands can be used in the same way as for BGP-AD or BGP-VPLS services:
The following commands are used by provider-tunnel in BGP-EVPN MPLS services:
Figure 175 shows the use of P2MP mLDP tunnels in an EVI with a root node and a few leaf-only nodes.
Consider the use case of a root-and-leaf PE4 where the other nodes are configured as leaf-only nodes (no root-and-leaf). This scenario is handled as follows:
As described in IETF Draft draft-ietf-bess-evpn-etree, mLDP and Ingress Replication (IR) can work in the same network for the same service; that is, EVI1 can have some nodes using mLDP (for example, PE1) and others using IR (for example, PE2). For scaling, this is especially important in services that consist of a pair of root nodes sending BUM traffic in P2MP tunnels and hundreds of leaf-only nodes that only need to send BUM traffic to the roots. By using IMET-P2MP-IR routes from the roots, the operator ensures that the leaf-only nodes can send BUM traffic to the root nodes without the need to set up P2MP tunnels from the leaf nodes.
When both static and dynamic P2MP mLDP tunnels are used on the same router, Nokia recommends that the static tunnels use a tunnel ID lower than 8193. If a tunnel ID is statically configured with a value equal to or greater than 8193, BGP-EVPN may attempt to use the same tunnel ID for services with enabled provider-tunnel, and fail to set up an mLDP tunnel.
Inter-AS option C or seamless-MPLS models for non-segmented mLDP trees are supported with EVPN for BUM traffic. The leaf PE that joins an mLDP EVPN root PE supports Recursive and Basic Opaque FEC elements (types 7 and 1, respectively). Therefore, packet forwarding is handled as follows:
For more information about mLDP opaque FECs, refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 3 Services Guide: IES and VPRN and the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide.
All-active multihoming and single-active multihoming with an ESI label are supported in EVPN-MPLS services together with P2MP mLDP tunnels. Both use an upstream-allocated ESI label, as described in RFC 7432 section 8.3.1.2, which is popped at the leaf PEs. This means that, in addition to the root PE, all EVPN-MPLS P2MP leaf PEs must support this capability (including the PEs not connected to the multihoming ES).
This section contains information about EVPN-VPWS for MPLS tunnels.
EVPN-VPWS for MPLS tunnels uses the RFC 8214 BGP extensions described in EVPN-VPWS for VXLAN Tunnels, with the following differences for the Ethernet AD per-EVI routes:
The use and configuration of EVPN-VPWS services is described in EVPN-VPWS for VXLAN Tunnels with the following differences when the EVPN-VPWS services use MPLS tunnels instead of VXLAN.
When MPLS tunnels are used, the bgp-evpn>mpls context must be configured in the Epipe. As an example, if Epipe 2 is an EVPN-VPWS service that uses MPLS tunnels between PE2 and PE4, this would be its configuration:
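The example is summarized in the following minimal sketch for PE2 (PE4 would be symmetric); service and SAP values are illustrative.

```
configure service epipe 2 customer 1 create
    bgp
    bgp-evpn
        evi 2
        mpls
            auto-bind-tunnel
                resolution any
            no shutdown
    sap 1/1/1:2 create
    no shutdown
```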
Where the following BGP-EVPN commands, specific to MPLS tunnels, are supported in the same way as in VPLS services:
EVPN-VPWS Epipes with MPLS tunnels can also be configured with the following characteristics:
The following features, described in EVPN-VPWS for VXLAN Tunnels, are also supported for MPLS tunnels:
PW ports in ESs and virtual ESs (vESs) are supported for EVPN-VPWS MPLS services. In addition to lag, port and sdp objects, pw-port pw-port-id can be configured. PW-port based ESs support all-active mode only, and not single-active mode. The following statements apply:
For all the above scenarios, fault-propagation to the access CE only works in case of physical failures. Administrative shutdown of individual Epipes, PW-SAPs, Ethernet-segments or BGP-EVPN may result in traffic black-holes.
Figure 176 illustrates the use of pw-ports in ESs. In this example, an FPE-based pw-port is associated with the ES, where the stitching service itself uses EVPN-VPWS as well.
In this example, the following applies:
EVPN-MPLS and IP-prefix advertisement (enabled by the ip-route-advertisement command) are fully supported in routed VPLS services and provide the same feature-set as EVPN-VXLAN. The following capabilities are supported in a service where bgp-evpn mpls is enabled:
IES interfaces do not support either ip-route-advertisement or evpn-tunnel.
SAP and spoke SDP based ESs are supported on R-VPLS services where bgp-evpn mpls is enabled.
Figure 177 shows an example of EVPN-MPLS multi-homing in R-VPLS services, with the following assumptions.
In the example in Figure 177, the hosts connected to CE1 and CE4 could use regular VRRP for default gateway redundancy; however, this may not be the most efficient way to provide upstream routing.
For example, if PE1 and PE2 are using regular VRRP, the upstream traffic from CE1 may be hashed to the backup IRB VRRP interface, instead of being hashed to the master. The same thing may occur for single-active multi-homing and regular VRRP for PE3 and PE4. The traffic from CE4 is sent to PE3, while PE4 may be the VRRP master. In that case, PE3 has to send the traffic to PE4, instead of routing it directly.
In both cases, unnecessary bandwidth between the PEs is used to get to the master IRB interface. In addition, VRRP scaling is limited if aggressive keepalive timers are used.
Because of these issues, passive VRRP is recommended as the best method when EVPN-MPLS multi-homing is used in combination with R-VPLS redundant interfaces.
Passive VRRP is a VRRP setting in which the transmission and reception of keepalive messages are completely suppressed; therefore, the VPRN interface always behaves as the master. Passive VRRP is enabled by adding the passive keyword to the VRRP instance at creation, as shown in the following examples:
config service vprn 1 interface int-1 vrrp 1 passive
config service vprn 1 interface int-1 ipv6 vrrp 1 passive
For example, if PE1, PE2, and PE5 in Figure 177 use passive VRRP, even if each individual R-VPLS interface has a different MAC/IP address, because they share the same VRRP instance 1 and the same backup IP, the three PEs will own the same virtual MAC and virtual IP address (for example, 00-00-5E-00-00-01 and 10.0.0.254). The virtual MAC is auto-derived from 00-00-5E-00-00-VRID per RFC 3768. The following is the expected behavior when passive VRRP is used in this example.
The following list summarizes the advantages of using passive VRRP mode versus regular VRRP for EVPN-MPLS multi-homing in R-VPLS services.
This section contains information on PBB-EVPN.
PBB-EVPN uses a reduced subset of the routes and procedures described in RFC 7432. The supported routes are:
This route is used to advertise the ISIDs that belong to I-VPLS services as well as the default multicast tree. PBB-epipe ISIDs are not advertised in Inclusive Multicast routes. The following fields are used:
![]() | Note: This label will be the same label used in the B-MAC routes for the same B-VPLS service unless bgp-evpn mpls ingress-replication-bum-label is configured in the B-VPLS service. |
![]() | Note: The mLDP P2MP tunnel type is supported on PBB-EPVN services, but it can be used in the default multicast tree only. |
The 7750 SR, 7450 ESS, or 7950 XRS will generate this route type for advertising B-MAC addresses for the following:
The route type 2 generated by the router uses the following fields and values:
This route type is used for DF election as described in section BGP-EVPN Control Plane for MPLS Tunnels.
![]() | Note: The EVPN route type 1—Ethernet Auto Discovery route is not used in PBB-EVPN. |
The 7750 SR, 7450 ESS, and 7950 XRS SR OS implementation of PBB-EVPN reuses the existing PBB-VPLS model, where N I-VPLS (or Epipe) services can be linked to a B-VPLS service. BGP-EVPN is enabled in the B-VPLS and the B-VPLS becomes an EVI (EVPN Instance). Figure 178 shows the PBB-EVPN model in the SR OS.
Each PE in the B-VPLS domain advertises its source-bmac as either configured in (b)vpls>pbb>source-bmac or auto-derived from the chassis mac. The remote PEs will install the advertised B-MACs in the B-VPLS FDB. If a specified PE is configured with an ethernet-segment associated with an I-VPLS or PBB Epipe, it may also advertise an es-bmac for the Ethernet-Segment.
In the example shown in Figure 178, when a frame with MAC DA = AA gets to PE1, a MAC lookup is performed on the I-VPLS FDB and B-MAC-34 is found. A B-MAC lookup on the B-VPLS FDB then yields the next-hop (or next-hops, if the destination is in an all-active Ethernet-Segment) to which the frame is sent. As in PBB-VPLS, the frame is encapsulated with the corresponding PBB header. The EVPN label for the B-VPLS and the MPLS transport label are also added.
If the lookup on the I-VPLS FDB fails, the system sends the frame encapsulated into a PBB packet with B-MAC DA = Group B-MAC for the ISID. That packet will be distributed to all the PEs where the ISID is defined and contains the EVPN label distributed by the Inclusive Multicast routes for that ISID, as well as the transport label.
For PBB-Epipes, all the traffic is sent in a unicast PBB packet to the B-MAC configured in the pbb-tunnel.
The following CLI output shows an example of the configuration of an I-VPLS, PBB-Epipe, and their corresponding B-VPLS.
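The example is summarized in the following minimal sketch. The B-VPLS ID (10) and the I-VPLS ISID (2001) match the values referenced later in this section; all other values (customer, SAPs, Epipe ID, backbone-dest-mac) are illustrative assumptions.

```
# B-VPLS with BGP-EVPN MPLS enabled
configure service vpls 10 customer 1 b-vpls create
    bgp-evpn
        evi 10
        mpls
            no shutdown
    no shutdown
# I-VPLS linked to B-VPLS 10 (ISID 2001)
configure service vpls 2001 customer 1 i-vpls create
    pbb
        backbone-vpls 10
    sap 1/1/1:2001 create
    no shutdown
# PBB-Epipe linked to the same B-VPLS
configure service epipe 2002 customer 1 create
    pbb
        tunnel 10 backbone-dest-mac 00:ca:fe:ca:fe:00 isid 2002
    sap 1/1/2:2002 create
    no shutdown
```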
Configure the bgp-evpn context as described in section EVPN for MPLS Tunnels in VPLS Services (EVPN-MPLS).
Some EVPN configuration options are not relevant to PBB-EVPN and are not supported when BGP-EVPN is configured in a B-VPLS; these are as follows:
When bgp-evpn>mpls no shutdown is added to a specified B-VPLS instance, the following considerations apply:
In general, PBB technologies in the 7750 SR, 7450 ESS, or 7950 XRS SR OS support a way to contain the flooding for a specified I-VPLS ISID, so that BUM traffic for that ISID only reaches the PEs where the ISID is locally defined. Each PE will create an MFIB per I-VPLS ISID on the B-VPLS instance. That MFIB supports SAP or SDP bindings endpoints that can be populated by:
In PBB-EVPN, B-VPLS EVPN endpoints can be added to the MFIBs using EVPN Inclusive Multicast Ethernet Tag routes.
The example in Figure 179 shows how the MFIBs are populated in PBB-EVPN.
When the B-VPLS 10 is enabled, PE1 will advertise as follows:
![]() | Note: The MPLS label that will be advertised for the MAC routes and the inclusive multicast routes for a specified B-VPLS can be the same label or a different label. As in regular EVPN-MPLS, this will depend on the [no] ingress-replication-bum-label command. |
When I-VPLS 2001 (ISID 2001) is enabled as per the CLI in the preceding section, PE1 will advertise as follows:
This configuration has the following effect for the ISID range:
The ISID flooding behavior on B-VPLS SAPs and SDP bindings is as follows:
![]() | Note: The configuration of PBB-Epipes does not trigger any IMET advertisement. |
The 7750 SR, 7450 ESS, and 7950 XRS SR OS EVPN implementation supports RFC 8560 so that PBB-EVPN and PBB-VPLS can be integrated into the same network and within the same B-VPLS service.
All the concepts described in section EVPN and VPLS Integration are also supported in B-VPLS services so that B-VPLS SAP or SDP bindings can be integrated with PBB-EVPN destination bindings. The features described in that section also facilitate a smooth migration from B-VPLS SDP bindings to PBB-EVPN destination bindings.
The 7750 SR, 7450 ESS, and 7950 XRS SR OS PBB-EVPN implementation supports all-active and single-active multi-homing for I-VPLS and PBB Epipe services.
PBB-EVPN multi-homing reuses the ethernet-segment concept described in section EVPN Multi-Homing in VPLS Services. However, unlike EVPN-MPLS, PBB-EVPN does not use AD routes; it uses B-MACs for split-horizon checks and aliasing.
RFC 7623 describes two types of B-MAC assignments that a PE can implement:
In this document and in 7750 SR, 7450 ESS, and 7950 XRS SR OS terminology:
B-MAC selection and use depends on the multi-homing model; for single-active mode, the type of B-MAC will impact the flooding in the network as follows:
Figure 180 shows the use of all-active multi-homing in the 7750 SR, 7450 ESS, and 7950 XRS SR OS PBB-EVPN implementation.
For example, the following shows the ESI-1 and all-active configuration in PE3 and PE4. As in EVPN-MPLS, all-active multi-homing is only possible if a LAG is used at the CE. All-active multi-homing uses es-bmacs; that is, each ESI is assigned a dedicated B-MAC, and all the PEs that are part of the ES source traffic using the same es-bmac.
In Figure 180 and the following configuration, the es-bmac used by PE3 and PE4 is B-MAC-34, that is, 00:00:00:00:00:34. The es-bmac for a specified ethernet-segment is configured by the source-bmac-lsb command, along with the (b-)vpls>pbb>use-es-bmac command.
Configuration in PE3:
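A minimal sketch follows; the ESI, LAG ID, MAC values, and the source-bmac-lsb format are illustrative assumptions:

*A:PE3>config>service# info
----------------------------------------------
        system
            bgp-evpn
                ethernet-segment "ESI-1" create
                    esi 01:00:00:00:00:12:12:12:12:12
                    source-bmac-lsb 00:34
                    multi-homing all-active
                    lag 1
                    no shutdown
                exit
            exit
        exit
        vpls 1 customer 1 b-vpls create
            pbb
                source-bmac 00:00:00:00:00:03
                use-es-bmac
            exit
            bgp-evpn
                evi 1
                mpls
                    ingress-replication-bum-label
                    no shutdown
                exit
            exit
            no shutdown
        exit
----------------------------------------------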
Configuration in PE4:
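PE4 mirrors PE3 except for its own source-bmac; assuming the es-bmac is derived from the four most significant bytes of the source-bmac plus the configured lsb, both PEs derive the same es-bmac (00:00:00:00:00:34):

*A:PE4>config>service# info
----------------------------------------------
        system
            bgp-evpn
                ethernet-segment "ESI-1" create
                    esi 01:00:00:00:00:12:12:12:12:12
                    source-bmac-lsb 00:34
                    multi-homing all-active
                    lag 1
                    no shutdown
                exit
            exit
        exit
        vpls 1 customer 1 b-vpls create
            pbb
                source-bmac 00:00:00:00:00:04
                use-es-bmac
            exit
            bgp-evpn
                evi 1
                mpls
                    ingress-replication-bum-label
                    no shutdown
                exit
            exit
            no shutdown
        exit
----------------------------------------------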
The above configuration will enable the all-active multi-homing procedures for PBB-EVPN.
![]() | Note: The ethernet-segment ESI-1 can also be used for regular VPLS services. |
The following considerations apply when the ESI is used for PBB-EVPN.
In the configuration above, a PBB-Epipe is configured in PE3 and PE4, both pointing at the same remote pbb tunnel backbone-dest-mac. On the remote PE, i.e. PE1, the configuration of the PBB-Epipe will point at the es-bmac:
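A sketch of both sides follows (PE1's source-bmac is assumed to be 00:00:00:00:00:01):

# PE3 and PE4: the Epipe points at PE1's B-MAC
*A:PE3>config>service# epipe 2002
    pbb
        tunnel 1 backbone-dest-mac 00:00:00:00:00:01 isid 2002
    exit

# PE1: the Epipe points at the es-bmac of ESI-1
*A:PE1>config>service# epipe 2002
    pbb
        tunnel 1 backbone-dest-mac 00:00:00:00:00:34 isid 2002
    exit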
When PBB-Epipes are used in combination with all-active multi-homing, Nokia recommends using bgp-evpn mpls ingress-replication-bum-label in the PEs where the ethernet-segment is created, that is in PE3 and PE4. This guarantees that in case of flooding in the B-VPLS service for the PBB Epipe, only the DF will forward the traffic to the CE.
![]() | Note: PBB-Epipe traffic always uses B-MAC DA = unicast; therefore, the DF cannot check whether the inner frame is unknown unicast based on the group B-MAC. As a result, the use of an EVPN BUM label is highly recommended. |
Aliasing for PBB-Epipes with all-active multi-homing only works if shared-queuing or ingress policing is enabled on the ingress PE Epipe. In any other case, the IOM sends the traffic to a single destination (no ECMP is used, regardless of the bgp-evpn mpls ecmp setting).
All-active multi-homed es-bmacs are treated by the remote PEs as eES:MAX-ESI BMACs. The following example shows the FDB in B-VPLS 1 in PE1 as shown in Figure 180:
The show service id evpn-mpls command output on PE1 shows that the remote es-bmac, i.e. 00:00:00:00:00:34, has two associated next-hops, i.e. PE3 and PE4:
ES failures are resolved by the PEs withdrawing the es-bmac. The remote PEs will withdraw the route and update their list of next-hops for a specified es-bmac.
No mac-flush of the I-VPLS FDB tables is required as long as the es-bmac is still in the FDB.
When the route corresponding to the last next-hop for a specified es-bmac is withdrawn, the es-bmac will be flushed from the B-VPLS FDB and all the C-MACs associated with it will be flushed too.
The following events will trigger a withdrawal of the es-bmac and the corresponding next-hop update in the remote PEs:
![]() | Note: Individual SAPs going operationally down in an ES do not generate any BGP withdrawal or other indication that would allow the remote nodes to flush their C-MACs. This is solved in EVPN-MPLS by the use of AD routes per EVI; however, PBB-EVPN has no equivalent mechanism to indicate a partial failure in an ESI. |
In single-active multi-homing, the non-DF PEs for a specified ESI will block unicast and BUM traffic in both directions (upstream and downstream) on the object associated with the ESI. Other than that, single-active multi-homing will follow the same service model defined in the PBB-EVPN All-Active Multi-Homing Service Model section with the following differences:
ESI failures are resolved depending on the B-MAC address assignment chosen by the user:
The following events will trigger a C-MAC flush notification. A 'C-MAC flush notification' means the withdrawal of a specified B-MAC or the update of B-MAC with a higher sequence number (SQN). Both BGP messages will make the remote PEs flush all the C-MACs associated with the indicated B-MAC:
EVPN multi-homing is supported with PBB-EVPN Epipes, but only in a limited number of scenarios. In general, the following applies to PBB-EVPN Epipes:
Figure 182 shows the EVPN MH support in a three-node scenario.
EVPN MH support in a three-node scenario has the following characteristics:
Figure 183 shows the EVPN MH support in a two-node scenario.
EVPN MH support in a two-node scenario has the following characteristics, as shown in Figure 183:
P2MP mLDP tunnels can also be used in PBB-EVPN services. The use of provider-tunnel inclusive MLDP is only supported in the B-VPLS default multicast list; that is, no per-ISID IMET-P2MP routes are supported. IMET-P2MP routes in a B-VPLS are always advertised with Ethernet tag zero. All-active EVPN multihoming is supported in PBB-EVPN services together with P2MP mLDP tunnels; however, single-active multihoming is not supported. This capability is only required on the P2MP root PEs within PBB-EVPN services using all-active multihoming.
B-VPLS supports the use of MFIBs for ISIDs using ingress replication. The following considerations apply when provider-tunnel is enabled in a B-VPLS service.
In the following CLI example, a range of ISIDs (1000 to 2000) uses the P2MP tree, and the system does not advertise IMET-IR routes for those ISIDs. Other local ISIDs advertise the IMET-IR routes and use the MFIB to forward BUM packets to the EVPN-MPLS destinations created by the IMET-IR routes.
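A sketch of such a configuration follows; the isid-policy entry number is illustrative, and the exact option set may vary by release:

*A:PE1>config>service# vpls 1
*A:PE1>config>service>vpls# info
----------------------------------------------
        isid-policy
            entry 10 create
                use-def-mcast
                no advertise-local
                range 1000 to 2000
            exit
        exit
        provider-tunnel
            inclusive
                owner bgp-evpn-mpls
                root-and-leaf
                mldp
                no shutdown
            exit
        exit
----------------------------------------------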
SR OS supports ISID-based C-MAC flush procedures for PBB-EVPN I-VPLS services where no single-active ESs are used. SR OS also supports these C-MAC flush procedures in scenarios where other redundancy mechanisms, such as BGP-MH, require them to avoid blackholes caused by a SAP or spoke SDP failure.
The C-MAC flush procedures are enabled on the I-VPLS service using the config>service>vpls>pbb>send-bvpls-evpn-flush CLI command. The feature can be disabled on a per-SAP or per-spoke SDP basis by using the disable-send-bvpls-evpn-flush command in the config>service>vpls>sap or config>service>vpls>spoke-sdp context.
With the feature enabled on an I-VPLS service and a SAP or spoke SDP, if there is a SAP or spoke SDP failure, the router sends a C-MAC flush notification for the corresponding B-MAC and ISID. The router receiving the notification flushes all the C-MACs associated with the indicated B-MAC and ISID when the config>service>vpls>bgp-evpn>accept-ivpls-evpn-flush command is enabled for the B-VPLS service.
The C-MAC flush notification consists of an EVPN B-MAC route that is encoded as follows: the ISID to be flushed is encoded in the Ethernet Tag field and the sequence number is incremented with respect to the previously advertised route.
If send-bvpls-evpn-flush is configured on an I-VPLS with SAPs or spoke SDPs, one of the following rules must be observed.
ISID-based C-MAC flush can be enabled in I-VPLS services with ES or vES. If enabled, the expected interaction between the RFC 7623-based C-MAC flush and the ISID-based C-MAC flush is as follows.
Figure 184 shows an example where the ISID-based C-MAC flush prevents blackhole situations for a CE that is using BGP-MH as the redundancy mechanism in the I-VPLS with an ISID of 3000.
When send-bvpls-evpn-flush is enabled, the I-VPLS service is ready to send per-ISID C-MAC flush messages in the form of B-MAC/ISID routes. The first B-MAC/ISID route for an I-VPLS service is sent with sequence number zero; subsequent updates for the same route increment the sequence number. A B-MAC/ISID route for the I-VPLS is advertised or withdrawn in the following cases:
If the user explicitly configures disable-send-bvpls-evpn-flush for a SAP or spoke SDP, the system will not send per-ISID C-MAC flush messages for failures on that SAP or spoke SDP.
The B-VPLS on the receiving node must be configured with bgp-evpn>accept-ivpls-evpn-flush to accept and process C-MAC flush non-zero Ethernet-tag MAC routes. If the accept-ivpls-evpn-flush command is enabled (the command is disabled by default), the node accepts non-zero Ethernet-tag MAC routes (B-MAC/ISID routes) and processes them. When a new B-MAC/ISID update (with an incremented sequence number) for an existing route is received, the router will flush all the C-MACs associated with that B-MAC and ISID. The B-MAC/ISID route withdrawals will also cause a C-MAC flush.
![]() | Note: Only B-MAC routes with the Ethernet Tag field set to zero are considered for B-MAC installation in the FDB. |
The following CLI example shows the commands that enable the C-MAC flush feature on PE1 and PE3.
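A minimal sketch, using an I-VPLS with ISID 3000 on PE1 and the corresponding B-VPLS on PE3 (service and SAP IDs are illustrative):

*A:PE1>config>service# vpls 3000 customer 1 i-vpls create
    pbb
        backbone-vpls 1
        exit
        send-bvpls-evpn-flush
    exit
    sap 1/1/1:3000 create
    exit
    no shutdown

*A:PE3>config>service# vpls 1 customer 1 b-vpls create
    bgp-evpn
        accept-ivpls-evpn-flush
        evi 1
        mpls
            no shutdown
        exit
    exit
    no shutdown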
In the preceding example, with send-bvpls-evpn-flush enabled on the I-VPLS service of PE1, a B-MAC/ISID route (for the pbb source-bmac address 00:..:01 and ISID 3000) is advertised. If the SAP goes operationally down, PE1 sends an update of the source B-MAC address (00:..:01) for ISID 3000 with a higher sequence number.
With accept-ivpls-evpn-flush enabled on PE3’s B-VPLS service, PE3 flushes all the C-MACs associated with B-MAC 00:..:01 and ISID 3000. The C-MACs associated with other B-MACs or ISIDs are retained in PE3’s FDB.
Routers with PBB-EVPN services use the following route types to advertise the ISID of a specific service.
Although the preceding routes are only relevant for routers where the advertised ISID is configured, they are sent with the B-VPLS route-target by default. As a result, the routes are unnecessarily disseminated to all the routers in the B-VPLS network.
SR OS supports the use of per-ISID or group of ISID route-targets, which limits the dissemination of IMET-ISID or BMAC-ISID routes for a specific ISID to the PEs where the ISID is configured.
The config>service>(b-)vpls>isid-route-target>isid-range from [to to] [auto-rt | route-target rt] command allows the user to determine whether the IMET-ISID and BMAC-ISID routes are sent with the B-VPLS route-target (default option, no command), or a route-target specific to the ISID or range of ISIDs.
The following configuration example shows how to configure ISID ranges as auto-rt or with a specific route-target.
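For example (the route-target value and ISID ranges are illustrative):

*A:PE1>config>service# vpls 1
*A:PE1>config>service>vpls# isid-route-target
*A:PE1>config>service>vpls>isid-route-target# info
----------------------------------------------
        isid-range 1000 to 1999 auto-rt
        isid-range 2000 route-target target:65000:2000
----------------------------------------------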
The auto-rt option auto-derives a route-target per ISID in the following format:
<2-byte-as-number>:<4-byte-value>
Where: 4-byte-value = 0x30+ISID, as described in RFC 7623. Figure 185 shows the format of the auto-rt option.
Where:
If isid-route-target is enabled, the export and import directions for IMET-ISID and BMAC-ISID route processing are modified as follows.
SR OS supports virtual Ethernet Segments (vES) for EVPN multi-homing in accordance with draft-ietf-bess-evpn-virtual-eth-segment.
Regular ESs can only be associated to ports, LAGs, and SDPs, which satisfies the redundancy requirements for CEs that are directly connected to the ES PEs by a port, LAG, or SDP. However, this implementation does not work when an aggregation network exists between the CEs and the ES PEs, because multiple ESs would then need to be defined on the same port, LAG, or SDP.
Figure 186 shows an example of how CE1 and CE2 use all-active multi-homing to the EVPN-MPLS network despite the third-party Ethernet aggregation network to which they are connected.
The ES association can be made in a more granular way by creating a vES. A vES can be associated to the following:
The following CLI examples show the vES configuration options:
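The sketch below shows a dot1q vES on a LAG and a qinq vES on a port; the ESIs and ranges are illustrative, and the qtag keywords follow the forms listed in Table 80:

*A:PE1>config>service>system>bgp-evpn# info
----------------------------------------------
        ethernet-segment "vES-1" virtual create
            esi 01:00:00:00:00:71:00:00:00:01
            multi-homing all-active
            lag 1
            dot1q
                qtag-range 100 to 102
            exit
            no shutdown
        exit
        ethernet-segment "vES-2" virtual create
            esi 01:00:00:00:00:72:00:00:00:01
            multi-homing single-active
            port 1/1/1
            qinq
                s-tag-range 100 to 102
            exit
            no shutdown
        exit
----------------------------------------------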
Where:
Although qtag values 0, * and 1 to 4094 are allowed, the following considerations must be taken into account when configuring a dot1q or qinq vES:
Table 80 lists examples of the supported qtag values between 1 to 4094.
vES Configuration for Port 1/1/1 | SAP Association |
dot1q qtag-range 100 | 1/1/1:100 |
dot1q qtag-range 100 to 102 | 1/1/1:100, 1/1/1:101, 1/1/1:102 |
qinq s-tag 100 c-tag-range 200 | 1/1/1:100.200 |
qinq s-tag-range 100 | All the SAPs 1/1/1:100.x, where x is a value from 1 to 4094, 0, or *
qinq s-tag-range 100 to 102 | All the SAPs 1/1/1:100.x, 1/1/1:101.x, 1/1/1:102.x, where x is a value from 1 to 4094, 0, or *
Table 81 lists all the supported combinations that include qtag values <0, *, null>. Any other combination of these special values is not supported.
vES Configuration for Port 1/1/1 | SAP Association |
dot1q qtag-range 0 | 1/1/1:0 |
dot1q qtag-range * | 1/1/1:* |
qinq s-tag 0 c-tag-range * | 1/1/1:0.* |
qinq s-tag * c-tag-range * | 1/1/1:*.* |
qinq s-tag * c-tag-range null | 1/1/1:*.null |
qinq s-tag x c-tag-range 0 | 1/1/1:x.0, where x is a value from 1 to 4094
qinq s-tag x c-tag-range * | 1/1/1:x.*, where x is a value from 1 to 4094
On vESs, the single-active and all-active modes are supported for EVPN-MPLS VPLS, Epipe, and PBB-EVPN services. Single-active multi-homing is supported on port and SDP-based vESs, and all-active multi-homing is only supported on LAG-based vESs.
The following considerations apply if the vES is used with PBB-EVPN services:
In addition to the ES service-carving modes auto and off, the manual mode also supports the preference-based algorithm with the non-revertive option, as described in draft-rabadan-bess-evpn-pref-df.
When ES is configured to use the preference-based algorithm, the ES route is advertised with the Designated Forwarder (DF) election extended community (sub-type 0x06). Figure 187 shows the DF election extended community.
In the extended community, a DF type 2 preference algorithm is advertised with a 2-byte preference value (32767 by default) if the preference-based manual mode is configured. The Don't Preempt Me (DP) bit is set if the non-revertive option is enabled.
The following CLI excerpt shows the relevant commands to enable the preference-based DF election on a specific ES (regular or virtual):
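A sketch of the relevant context follows; the token order of the preference command is an assumption, and the value shown overrides the 32767 default:

*A:PE1>config>service>system>bgp-evpn# ethernet-segment "ES-1"
    service-carving
        mode manual
        manual
            preference non-revertive create
                value 150
            exit
        exit
    exit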
Where:
The following configuration example shows the use of the preference-based algorithm and non-revertive option in an ES defined in PE1 and PE2.
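For instance, with illustrative values and assuming the default highest-preference election, PE1 could be configured with preference 200 and PE2 with preference 100, both non-revertive, so that PE1 is initially the DF and does not preempt PE2 after a failure and recovery:

# PE1
service-carving
    mode manual
    manual
        preference non-revertive create
            value 200

# PE2
service-carving
    mode manual
    manual
        preference non-revertive create
            value 100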
Based on the configuration in the preceding example, the PE behavior is as follows:
IPv4 multicast routing is supported in an EVPN-MPLS VPRN routed VPLS service through its IP interface, when the source of the multicast stream is on one side of its IP interface and the receivers are on either side of the IP interface. For example, the source for multicast stream G1 could be on the IP side sending to receivers on both other regular IP interfaces and the VPLS of the routed VPLS service, while the source for group G2 could be on the VPLS side sending to receivers on both the VPLS and IP side of the routed VPLS service. See IPv4 and IPv6 Multicast Routing Support for more details.
IGMP snooping is supported in EVPN-MPLS VPLS and PBB-EVPN I-VPLS (where BGP EVPN is running in the associated B-VPLS service) services. It is also supported in EVPN-MPLS VPRN and IES R-VPLS services. It is required in scenarios where the operator does not want to flood all of the IP multicast traffic to the access nodes or CEs, and only wants to deliver IP multicast traffic for which IGMP reports have been received.
The following points apply when IGMP snooping is configured in EVPN-MPLS VPLS or PBB-EVPN I-VPLS services.
![]() | Note: When IGMP snooping is enabled with P2MP LSPs, at least one EVPN-MPLS multicast destination must be established to enable the processing of IGMP messages by the system. The use of P2MP LSPs is not supported when sending IPv4 multicast into an EVPN-MPLS R-VPLS service from its IP interface. |
In the following show command output, the EVPN-MPLS destinations are shown as part of the MFIB (when igmp-snooping is in a no shutdown state), and the EVPN-MPLS logical interface is shown as an mrouter.
The equivalent output for PBB-EVPN services is similar to the output above for EVPN-MPLS services, with the exception that the EVPN destinations are named "b-EVPN-MPLS".
When single-active multihoming is used, the IGMP snooping state is learned on the active multihoming object. If a failover occurs, the system with the newly active multihoming object must wait for IGMP messages to be received to instantiate the IGMP snooping state after the ES activation timer expires; this could result in an increased outage.
The outage can be reduced by using MCS synchronization, which is supported for IGMP snooping in both EVPN-MPLS and PBB-EVPN services (see Multi-Chassis Synchronization for Layer 2 Snooping States). However, MCS only supports synchronization between two PEs, whereas EVPN multihoming is supported between a maximum of four PEs. Also, IGMP snooping state can be synchronized only on a SAP.
An increased outage would also occur when using all-active EVPN multihoming. The IGMP snooping state on an ES LAG SAP or virtual ES to the attached CE must be synchronized between all the ES PEs, as the LAG link used by the DF PE might not be the same as that used by the attached CE. MCS synchronization is not applicable to all-active multihoming as MCS only supports active/standby synchronization.
To eliminate any additional outage on a multihoming failover, IGMP snooping messages can be synchronized between the PEs on an ES using data-driven IGMP snooping state synchronization, which is supported in EVPN-MPLS services, PBB-EVPN services, EVPN-MPLS VPRN and IES R-VPLS services. The IGMP messages received on an ES SAP or spoke SDP are sent to the peer ES PEs with an ESI label (for EVPN-MPLS) or ES B-MAC (for PBB-EVPN) and these are used to synchronize the IGMP snooping state on the ES SAP or spoke SDP on the receiving PE.
Data-driven IGMP snooping state synchronization is supported for both all-active multihoming and single-active with an ESI label multihoming in EVPN-MPLS, EVPN-MPLS VPRN and IES R-VPLS services, and for all-active multihoming in PBB-EVPN services. All PEs participating in a multihomed ES must be running an SR OS version supporting this capability. PBB-EVPN with IGMP snooping using single-active multihoming is not supported.
Data-driven IGMP snooping state synchronization is also supported with P2MP mLDP LSPs in both EVPN-MPLS and PBB-EVPN services. When P2MP mLDP LSPs are used in EVPN-MPLS services, all PEs (including the PEs not connected to a multihomed ES) in the EVPN-MPLS service must be running an SR OS version supporting this capability with IGMP snooping enabled and all network interfaces must be configured on FP3 or higher-based line cards.
Figure 188 shows the processing of an IGMP message for EVPN-MPLS. In PBB-EVPN services, the ES B-MAC is used instead of the ESI label to synchronize the state.
Data-driven synchronization is enabled by default when IGMP snooping is enabled within an EVPN-MPLS service using all-active multihoming or single-active with an ESI label multihoming, or in a PBB-EVPN service using all-active multihoming. If IGMP snooping MCS synchronization is enabled on an EVPN-MPLS or PBB-EVPN (I-VPLS) multihoming SAP then MCS synchronization takes precedence over the data-driven synchronization and the MCS information is used. Mixing data-driven and MCS IGMP synchronization within the same ES is not supported.
When using EVPN-MPLS, the ES should be configured as non-revertive to avoid an outage when a PE takes over the DF role. The Ethernet A-D per ESI route update is withdrawn when the ES is down which prevents state synchronization to the PE with the ES down, as it does not advertise an ESI label. The lack of state synchronization means that if the ES comes up and that PE becomes DF after the ES activation timer expires, it might not have any IGMP snooping state until the next IGMP messages are received, potentially resulting in an additional outage. Configuring the ES as non-revertive will avoid this potential outage. Configuring the ES to be non-revertive would also avoid an outage when PBB-EVPN is used, but there is no outage related to the lack of the ESI label as it is not used in PBB-EVPN.
The following steps can be used when enabling IGMP snooping in EVPN-MPLS and PBB-EVPN services.
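In all cases, IGMP snooping itself is enabled per service (service ID illustrative):

*A:PE1>config>service# vpls 1
*A:PE1>config>service>vpls# igmp-snooping
*A:PE1>config>service>vpls>igmp-snooping# no shutdown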
If P2MP mLDP LSPs are also configured, the following steps can be used when enabling IGMP snooping in EVPN-MPLS and PBB-EVPN services.
To aid with troubleshooting, the debug packet output displays the IGMP packets used for the snooping state synchronization. An example of a join sent on ES esi-1 from one ES PE and the same join received on another ES PE follows.
PIM Snooping for VPLS allows a VPLS PE router to build multicast states by snooping PIM protocol packets that are sent over the VPLS. The VPLS PE then forwards multicast traffic based on the multicast states. When all receivers in a VPLS are IP multicast routers running PIM, multicast forwarding in the VPLS is efficient when PIM snooping for VPLS is enabled.
PIM snooping for IPv4 is supported in EVPN-MPLS (for VPLS and R-VPLS) and PBB-EVPN I-VPLS (where BGP EVPN is running in the associated B-VPLS service) services. It is enabled using the following command (as IPv4 multicast is enabled by default):
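A minimal sketch follows; the mode and hold-time options shown are assumptions (the hold-time value relates to the flooding behavior described later in this section):

*A:PE1>config>service# vpls 1
*A:PE1>config>service>vpls# pim-snooping create
*A:PE1>config>service>vpls>pim-snooping# mode proxy
*A:PE1>config>service>vpls>pim-snooping# hold-time 300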
PIM snooping on SAPs and spoke SDPs operates in the same way as in a plain VPLS service. However, EVPN-MPLS/PBB-EVPN B-VPLS destinations are treated as a single PIM interface, specifically:
PIM snooping for IPv4 is supported in EVPN-MPLS services using P2MP LSPs and PBB-EVPN I-VPLS services with P2MP LSPs in the associated B-VPLS service. When PIM snooping is enabled with P2MP LSPs, at least one EVPN-MPLS multicast destination is required to be established to enable the processing of PIM messages by the system.
Multi-chassis synchronization (MCS) of PIM snooping for IPv4 state is supported for both SAPs and spoke SDPs which can be used with single-active multihoming. Care should be taken when using *.null to define the range for a QinQ virtual ES if the associated SAPs are also being synchronized by MCS, as there is no equivalent MCS sync-tag support to the *.null range.
PBB-EVPN services operate in a similar way to regular PBB services, specifically:
In EVPN-MPLS services, the individual EVPN-MPLS destinations appear in the MFIB but the information for each EVPN-MPLS destination entry will always be identical, as shown below:
Similarly for the PIM neighbors:
A single EVPN-MPLS interface is shown in the outgoing interface, as can be seen in the following output:
An example of the debug trace output for a join received on an EVPN-MPLS destination is shown below:
The equivalent output for PBB-EVPN services is similar to that above for EVPN-MPLS services, with the exception that the EVPN destinations are named “b-EVPN-MPLS”.
When single-active multihoming is used, PIM snooping for IPv4 state is learned on the active multihoming object. If a failover occurs, the system with the newly active multihoming object must wait for IPv4 PIM messages to be received to instantiate the PIM snooping for IPv4 state after the ES activation timer expires, which could result in an increased outage.
This outage can be reduced by using MCS synchronization, which is supported for PIM snooping for IPv4 in both EVPN-MPLS and PBB-EVPN services (see Multi-Chassis Synchronization for Layer 2 Snooping States). However, MCS only supports synchronization between two PEs, whereas EVPN multihoming is supported between a maximum of four PEs.
An increased outage would also occur when using all-active EVPN multihoming. The PIM snooping for IPv4 state on an all-active ES LAG SAP or virtual ES to the attached CE must be synchronized between all the ES PEs, as the LAG link used by the DF PE might not be the same as that used by the attached CE. MCS synchronization is not applicable to all-active multihoming as MCS only supports active/standby synchronization.
To eliminate any additional outage on a multihoming failover, snooped IPv4 PIM messages should be synchronized between the PEs on an ES using data-driven PIM snooping for IPv4 state synchronization, which is supported in both EVPN-MPLS and PBB-EVPN services. The IPv4 PIM messages received on an ES SAP or spoke SDP are sent to the peer ES PEs with an ESI label (for EVPN-MPLS) or ES B-MAC (for PBB-EVPN) and are used to synchronize the PIM snooping for IPv4 state on the ES SAP or spoke SDP on the receiving PE.
Data-driven PIM snooping state synchronization is supported for all-active multihoming and single-active with an ESI label multihoming in EVPN-MPLS services. All PEs participating in a multihomed ES must be running an SR OS version supporting this capability with PIM snooping for IPv4 enabled. It is also supported with P2MP mLDP LSPs in the EVPN-MPLS services, in which case all PEs (including the PEs not connected to a multihomed ES) must have PIM snooping for IPv4 enabled and all network interfaces must be configured on FP3 or higher-based line cards.
In addition, data-driven PIM snooping state synchronization is supported for all-active multihoming in PBB-EVPN services and with P2MP mLDP LSPs in PBB-EVPN services. All PEs participating in a multihomed ES, and all PEs using PIM proxy mode (including the PEs not connected to a multihomed ES) in the PBB-EVPN service must be running an SR OS version supporting this capability and must have PIM snooping for IPv4 enabled. PBB-EVPN with PIM snooping for IPv4 using single-active multihoming is not supported.
Figure 189 shows the processing of an IPv4 PIM message for EVPN-MPLS. In PBB-EVPN services, the ES B-MAC is used instead of the ESI label to synchronize the state.
Data-driven synchronization is enabled by default when PIM snooping for IPv4 is enabled within an EVPN-MPLS service using all-active multihoming or single-active with an ESI label multihoming, or in a PBB-EVPN service using all-active multihoming. If PIM snooping for IPv4 MCS synchronization is enabled on an EVPN-MPLS or PBB-EVPN (I-VPLS) multihoming SAP or spoke SDP, then MCS synchronization takes precedence over the data-driven synchronization and the MCS information is used. Mixing data-driven and MCS PIM synchronization within the same ES is not supported.
When using EVPN-MPLS, the ES should be configured as non-revertive to avoid an outage when a PE takes over the DF role. The Ethernet A-D per ESI route update is withdrawn when the ES is down, which prevents state synchronization to the PE with the ES down as it does not advertise an ESI label. The lack of state synchronization means that if the ES comes up and that PE becomes DF after the ES activation timer expires, it might not have any PIM snooping for IPv4 state until the next PIM messages are received, potentially resulting in an additional outage. Configuring the ES as non-revertive will avoid this potential outage. Configuring the ES to be non-revertive would also avoid an outage when PBB-EVPN is used, but there is no outage related to the lack of the ESI label as it is not used in PBB-EVPN.
The following steps can be used when enabling PIM snooping for IPv4 (using PIM snooping and PIM proxy modes) in EVPN-MPLS and PBB-EVPN services.
If P2MP mLDP LSPs are also configured, the following steps can be used when enabling PIM snooping for IPv4 (using PIM snooping and PIM proxy modes) in EVPN-MPLS and PBB-EVPN services.
In the above steps, when PIM snooping for IPv4 is enabled, the traffic loss can be reduced or eliminated by configuring a larger hold-time (up to 300 seconds), during which multicast traffic is flooded.
To aid with troubleshooting, the debug packet output displays the PIM packets used for the snooping state synchronization. An example of a join sent on ES esi-1 from one ES PE and the same join received on another ES PE follows:
This section contains information about EVPN E-Tree.
The BGP EVPN control plane is extended and aligned with IETF RFC 8317 to support EVPN E-Tree services. Figure 190 shows the main EVPN extensions for the EVPN E-Tree information model.
The following BGP extensions are implemented for EVPN E-Tree services.
EVPN E-Tree services are modeled as VPLS services configured as E-Trees with bgp-evpn mpls enabled.
The following example CLI shows an excerpt of a VPLS E-Tree service with EVPN E-Tree service enabled.
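A sketch of such an excerpt follows; the service ID, SAPs, and BGP values are illustrative:

*A:PE1>config>service# vpls 2100 customer 1 etree create
    bgp
        route-distinguisher 65001:2100
        route-target export target:65000:2100 import target:65000:2100
    exit
    bgp-evpn
        evi 2100
        mpls
            no shutdown
        exit
    exit
    sap 1/1/1:2100 leaf-ac create
    exit
    sap 1/1/2:2100 create
    exit
    no shutdown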
The following considerations apply to the configuration of the EVPN E-Tree service.
EVPN E-Tree supports all operations related to flows among local root-ac and leaf-ac objects in accordance with RFC 8317. This section describes the extensions required to forward to or from BGP-EVPN destinations.
Known unicast traffic forwarding is based on ingress PE filtering. Figure 191 shows an example of EVPN-E-Tree forwarding behavior for known unicast.
MAC addresses learned on leaf-ac objects are advertised in EVPN with their corresponding leaf indication.
In Figure 191, PE1 advertises MAC1 using the E-Tree EC and leaf indication, and PE2 installs MAC1 with a leaf flag in the FDB.
Assuming the MAC DA is present in the local FDB when PE2 receives a frame (MAC1 in the FDB of PE2), the frame is handled as follows.
The ingress filtering for E-Tree leaf-to-leaf traffic requires the implementation of an extra leaf EVPN MPLS destination per remote PE (containing leaf objects) per E-Tree service. The ingress filtering for E-Tree leaf-to-leaf traffic is as follows.
BUM traffic forwarding is based on egress PE filtering. Figure 192 shows an example of EVPN E-Tree forwarding behavior for BUM traffic.
In Figure 192, BUM frames are handled as follows when they ingress PE2.
The BUM-encapsulated packet is received on the network ingress interface at the egress PE (PE1). The packet is processed as follows.
The egress PE checks the MAC Source Address (SA) for traffic received without the leaf MPLS label. This check covers corner cases where the ingress PE sends traffic originating from a leaf-ac but without a leaf indication.
In Figure 192, PE2 receives a frame with MAC DA = MAC3 and MAC SA = MAC2. Because MAC3 is a root MAC, the MAC lookup at PE2 allows the system to unicast the packet to PE1 without the leaf label. If MAC3 were no longer in PE1's FDB, PE1 would flood the frame to all the root and leaf-acs, despite the frame having originated from a leaf-ac.
To prevent leaf traffic from leaking to other leaf-acs (as described in the preceding case), the egress PE always performs a MAC SA check for all types of traffic. The data path performs MAC SA-based egress filtering as follows.
EVPN E-Tree procedures support all-active and single-active EVPN multi-homing. Ingress filtering can handle MACs learned on ES leaf-ac SAP or SDP bindings. If a MAC associated with an ES leaf-ac is advertised with a different E-Tree indication or if the AD per-EVI routes have inconsistent leaf indications, then the remote PEs performing the aliasing treat the MAC as root.
Figure 193 shows the expected behavior for multi-homing and egress BUM filtering.
Multi-homing and egress BUM filtering in Figure 193 is handled as follows:
If the PE receives an ES MAC from a peer that shares the ES and decides to install it against the local ES SAP that is oper-up, it checks the E-Tree configuration (root or leaf) of the local ES SAP against the received MAC route. The MAC route is processed as follows.
SR OS supports PBB-EVPN E-Tree services in accordance with IETF RFC 8317. PBB-EVPN E-Tree services are modeled as PBB-EVPN services where some I-VPLS services are configured as etree and some of their SAP or spoke SDPs are configured as leaf-acs.
The procedures for the PBB-EVPN E-Tree are similar to those for the EVPN E-Tree, except that the egress leaf-to-leaf filtering for BUM traffic is based on the B-MAC source address. Also, the leaf label and the EVPN AD routes are not used.
The PBB-EVPN E-Tree operation is as follows.
The following CLI example shows an I-VPLS E-Tree service that uses PBB-EVPN E-Tree. The leaf-source-bmac address must be configured prior to the configuration of the I-VPLS E-Tree. As is the case in regular E-Tree services, SAP and spoke SDPs that are not explicitly configured as leaf-acs are considered root-ac objects.
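A sketch follows; the location of the leaf-source-bmac command (config>service>pbb) is an assumption, and all addresses and IDs are illustrative:

*A:PE1>config>service# pbb
*A:PE1>config>service>pbb# leaf-source-bmac 00:00:00:00:00:1e

*A:PE1>config>service# vpls 2101 customer 1 i-vpls etree create
    pbb
        backbone-vpls 1
        exit
    exit
    sap 1/1/1:2101 leaf-ac create
    exit
    sap 1/1/2:2101 create
    exit
    no shutdown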
The following considerations apply to PBB-EVPN E-Trees and multi-homing.
The router supports the MPLS entropy label (RFC 6790) and the Flow Aware Transport label (known as the hash label) (RFC 6391). These labels allow LSR nodes in a network to load-balance labeled packets in a much more granular fashion than allowed by simply hashing on the standard label stack. The entropy label can be enabled on BGP-EVPN services (VPLS and Epipe). See the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for further information.
Inter-AS Option B and Next-Hop-Self Route-Reflector (VPN-NH-RR) functions are supported for the BGP-EVPN family in the same way both functions are supported for IP-VPN families.
A typical use case for EVPN Inter-AS Option B or EVPN VPN-NH-RR is Data Center Interconnect (DCI) networks, where cloud and service providers are looking for efficient ways to extend their Layer 2 and Layer 3 tenant services beyond the data center and provide a tighter DC-WAN integration. While the instantiation of EVPN services in the DC GW to provide this DCI connectivity is a common model, some operators use Inter-AS Option B or VPN-NH-RR connectivity to allow the DC GW to function as an ASBR or ABR respectively, and the services are only instantiated on the edge devices.
Figure 194 shows a DCI example where the EVPN services in two DCs are interconnected without the need for instantiating services on the DC GWs.
The ASBRs or ABRs connect the DC to the WAN at the control plane and data plane levels where the following considerations apply.
Refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Unicast Routing Protocols Guide for more information about the next-hop resolution of BGP-labeled routes.
Inter-AS Option B for EVPN services on ASBRs and VPN-NH-RR on ABRs re-use the existing commands enable-inter-as-vpn and enable-rr-vpn-forwarding, respectively. The two commands enable the ASBR or ABR function for both EVPN and IP-VPN routes. These two features can be used with the following EVPN services:
The following sub-sections clarify some aspects of EVPN when used in an Inter-AS Option B or VPN-NH-RR network.
When enable-rr-vpn-forwarding or enable-inter-as-vpn is configured, only EVPN-MPLS routes are processed for label swap and the next hop is changed. EVPN-VXLAN routes are re-advertised without a change in the next hop.
The following shows how the router processes and re-advertises the different EVPN route types. Refer to BGP-EVPN Control Plane for MPLS Tunnels for detailed information about the route fields.
Inter-AS Option B and VPN-NH-RR support the use of non-segmented trees for forwarding BUM traffic in EVPN.
For ingress replication and non-segmented trees, the ASBR or ABR performs an EVPN BUM label swap without any aggregation or further replication. This concept is shown in Figure 195.
In Figure 195, when PE2, PE3, and PE4 advertise their IMET routes, the ABRs re-advertise the routes with next-hop-self (NHS) and a different label. However, IMET routes are not aggregated; therefore, PE1 sets up three different EVPN multicast destinations and sends three copies of every BUM packet, even if they are sent to the same ABR. This example is also applicable to ASBRs and Inter-AS Option B.
P2MP mLDP may also be used with VPN-NH-RR, but not with Inter-AS Option B. The ABRs, however, will not aggregate or change the mLDP root IP addresses in the IMET routes. The root IP addresses must be leaked across IGP domains. For example, if PE2 advertises an IMET route with mLDP or composite tunnel type, PE1 will be able to join the mLDP tree if the root IP is leaked into PE1’s IGP domain.
In general, EVPN multi-homing is supported in Inter-AS Option B or VPN-NH-RR networks with the following limitations.
In Figure 196, PE1’s aliasing and backup functions to the remote ES-1 are supported. However, PE1 cannot identify the originating PE for the received AD per-ES routes because they are both arriving with the same next hop (ASBR/ABR4) and RDs may not help to correlate each AD per-ES route to a given PE. Therefore, if there is a failure on PE2’s ES link, PE1 cannot remove PE2 from the destinations list for ES-1 based on the AD per-ES route. PE1 must wait for the AD per-EVI route withdrawals to remove PE2 from the list. In summary, when the ES PEs and the remote PE are in different ASs or IGP domains, per-service withdrawal based on AD per-EVI routes is supported, but mass-withdrawal based on AD per-ES routes is not supported.
The EVPN-MPLS E-Tree known unicast procedures are supported in Inter-AS Option B and VPN-NH-RR scenarios; however, the BUM filtering procedures are affected.
As described in EVPN E-Tree, leaf-to-leaf BUM filtering is based on the Leaf Label identification at the egress PE. In a non-Inter-AS or non-VPN-NH-RR scenario, EVPN E-tree AD per-ES (ESI-0) routes carrying the Leaf Label are distinguished by the advertised next hop. In Inter-AS or VPN-NH-RR scenarios, all the AD per-ES routes are received with the ABR or ASBR next hop. Therefore, AD per-ES routes originating from different PEs would all have the same next hop, and the ingress PE would not be able to determine which leaf label to use for a given EVPN multicast destination.
A simplified EVPN E-Tree solution is supported, where an E-Tree Leaf Label is not installed in the IOM if the PE receives more than one E-Tree AD per-ES route, with different RDs, for the same next hop. In this case, leaf BUM traffic is transmitted without a Leaf Label and the leaf-to-leaf traffic filtering depends on the egress source MAC filtering on the egress PE. See EVPN E-Tree Egress Filtering Based on MAC Source Address.
PBB-EVPN E-tree services are not affected by Inter-AS or VPN-NH-RR scenarios, as AD per-ES routes are not used.
ECMP is supported for EVPN route next hops that are resolved to EVPN-MPLS destinations as follows:
In all of these cases, the config>service>epipe/vpls>bgp-evpn>mpls>auto-bind-tunnel>ecmp number command determines the number of Traffic Engineering (TE) tunnels that an EVPN next hop can be resolved to. TE tunnels refer to the RSVP-TE or SR-TE types. For shortest path tunnels, such as ldp, sr-isis, sr-ospf, udp, and so on, the number of tunnels in the ECMP group is determined by the config>router>ecmp command.
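Both levels are sketched below with illustrative values; here, EVPN next hops resolving to RSVP-TE tunnels can spray over up to four tunnels, while LDP-resolved next hops follow the router-level ECMP setting:

*A:PE1>config>service# vpls 1
    bgp-evpn
        mpls
            auto-bind-tunnel
                resolution-filter
                    ldp
                    rsvp
                exit
                resolution filter
                ecmp 4
            exit
        exit
    exit

*A:PE1>config>router# ecmp 8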
EVPN MPLS services can be deployed in a pure IPv6 network infrastructure, where IPv6 addresses are used as next-hops of the advertised EVPN routes, and EVPN routes received with IPv6 next-hops are resolved to tunnels in the IPv6 tunnel-table.
To change the system-ipv4 address that is advertised as the next-hop for a local EVPN MPLS service by default, configure the config>service>vpls|epipe>bgp-evpn>mpls>route-next-hop {system-ipv4 | system-ipv6 | ip-address} command.
The configured IP address is used as a next-hop for the MAC/IP, IMET, and AD per-EVI routes advertised for the service. Note that this configured next-hop can be overridden by a policy with the next-hop-self command.
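For example, to advertise the system IPv6 address as the next-hop of the service routes (a sketch; an explicit non-system address could be supplied instead):

*A:PE1>config>service# vpls 1
    bgp-evpn
        mpls
            route-next-hop system-ipv6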
In the case of Inter-AS model B or next-hop-self route-reflector scenarios, at the ASBR/ABR:
EVPN MPLS multi-homing is supported on PEs that use non-system IPv4 or IPv6 addresses for tunnel resolution. As with multi-homing in EVPN VXLAN networks (see Non-system IPv4 and IPv6 VXLAN Termination for EVPN VXLAN Multihoming), additional configuration steps are required.
When multi-homing is used in the service, the same IP address should be configured in all three of the commands detailed above, so the DF Election candidate list is built correctly.
This section provides information on general topics related to EVPN.
VPLS services support proxy-ARP (Address Resolution Protocol) and proxy-ND (Neighbor Discovery) functions that can be enabled or disabled independently per service. When enabled (proxy-ARP/proxy-ND no shutdown), the system populates the corresponding proxy-ARP/proxy-ND table with IP-MAC entries learned from the following sources:
In addition, any ingress ARP or ND frame on a SAP or SDP binding will be intercepted and processed. ARP requests and Neighbor Solicitations will be answered by the system if the requested IP address is present in the proxy table.
Figure 197 shows an example of how proxy-ARP is used in an EVPN network. Proxy-ND would work in a similar way. The MAC address notation in the diagram is shortened for readability.
PE1 is configured as follows:
Figure 197 shows the following steps, assuming proxy-ARP is no shutdown on PE1 and PE2, and the tables are empty:
From this point onward, the PEs reply to any ARP-request for 00:01 or 00:03, without the need for flooding the message in the EVPN network. By replying to known ARP-requests / Neighbor Solicitations, the PEs help to significantly reduce the flooding in the network.
Use the following commands to customize proxy-ARP/proxy-ND behavior:
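The sketch below illustrates a few such knobs under the proxy-arp context; the specific options and values shown are assumptions, and proxy-nd offers an analogous context:

*A:PE1>config>service# vpls 1
    proxy-arp
        age-time 900
        send-refresh 300
        table-size 1000
        dup-detect window 3 num-moves 5 hold-down 9
        no shutdown
    exit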
![]() | Note: A static IP-MAC entry requires the addition of the MAC address to the FDB as either learned or CStatic (conditional static mac) in order to become active (Status —> active). |
Table 82 shows the combinations that produce a proxy-ARP entry with Status = Active. The system replies to proxy-ARP requests for active entries. Any other combination results in a Status = inActv entry. If the service is not active, the proxy-ARP entries are not active either, regardless of the FDB entries.
![]() | Note: A static entry is active in the FDB even when the service is down. |
Proxy-arp Entry Type | FDB Entry Type (for the same MAC) |
Dynamic | learned |
Static | learned |
Dynamic | CStatic/Static |
Static | CStatic/Static |
EVPN | EVPN, learned/CStatic/Static with matching ESI |
Duplicate | — |
When proxy-ARP/proxy-ND is enabled on services with all-active multi-homed Ethernet segments, a proxy-arp entry type evpn might be associated with learned/CStatic/Static FDB entries (because, for example, the CE can send traffic for the same MAC to all the multi-homed PEs in the ES). If this is the case, the entry is active if the ESI of the EVPN route and the FDB entry match, or inactive otherwise, as per Table 82.
When proxy-ARP/proxy-ND is enabled, the system starts populating the proxy table and responding to ARP-requests/NS messages. To keep the active IP-MAC entries alive and ensure that all the host/routers in the service update their ARP/ND caches, the system may generate the following three types of ARP/ND messages for a specified IP-MAC entry:
RFC 4861 describes the use of the (R) or “Router” flag in NA messages as follows:
The use of the “R” flag in NA messages impacts how the hosts select their default gateways when sending packets “off-link”. Therefore, it is important that the proxy-ND function on the 7750 SR, 7450 ESS, or 7950 XRS must meet one of the following criteria:
Due to the use of the “R” flag, the procedure for learning proxy-ND entries and replying to NS messages differs from the procedures for proxy-ARP in IPv4: the router or host flag is added to each entry, and that flag determines the flag to use when responding to an NS.
The procedure to add the R flag to a specified entry is as follows:
SR OS supports the association of configured MAC lists with a configured dynamic proxy-ARP or proxy-ND IP address. The actual proxy-ARP or proxy-ND entry is not created until an ARP or Neighbor Advertisement message is received for the IP and one of the MACs in the associated MAC list. This is in accordance with IETF Draft draft-ietf-bess-evpn-proxy-arp-nd, which states that a proxy-ARP or proxy-ND IP entry can be associated with one MAC among a list of allowed MACs.
The following example shows the use of MAC lists for dynamic entries.
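A sketch follows, assuming the mac-list is defined under config>service and referenced by a configured dynamic proxy-ARP IP; the command names, list contents, and resolve interval are illustrative assumptions:

*A:PE1>config>service# info
----------------------------------------------
        mac-list "ml-1" create
            mac 00:00:5e:00:01:00 mask ff:ff:ff:ff:ff:00
        exit
        vpls 1 customer 1 create
            proxy-arp
                dynamic 192.0.2.10 create
                    mac-list "ml-1"
                    resolve 5
                exit
                no shutdown
            exit
        exit
----------------------------------------------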
where:
Although no new proxy-ARP or proxy-ND entries are created when a dynamic IP is configured, the router triggers the following resolve procedure.
![]() | Note: The dynamic entry is created only if an ARP, GARP, or NA message is received for the configured IP, and the associated MAC belongs to the configured MAC list of the IP. If the MAC list is empty, the proxy-ARP or proxy-ND entry is not created for the configured IP. |
After a dynamic entry (with a MAC address included in the list) is successfully created, its behavior (for send-refresh, age-time, and other activities) is the same as a configured dynamic entry with the following exceptions.
EVPN defines a mechanism to allow the smooth mobility of MAC addresses from an NVE to another NVE. The 7750 SR, 7450 ESS, and 7950 XRS support this procedure as well as the MAC-mobility extended community in MAC advertisement routes as follows:
![]() | Note: When EVPN multi-homing is used in EVPN-MPLS, the ESI is compared to determine whether a MAC received from two different PEs has to be processed within the context of MAC mobility or multi-homing. Two MAC routes that are associated with the same remote or local ESI but different PEs are considered reachable through all those PEs. Mobility procedures are not triggered as long as the MAC route still belongs to the same ESI. |
EVPN defines a mechanism to protect the EVPN service from control plane churn as a result of loops or accidental duplicated MAC addresses. The 7750 SR, 7450 ESS, and 7950 XRS support an enhanced version of this procedure as described in this section.
A situation may arise where the same MAC address is learned by different PEs in the same VPLS because two (or more) hosts are misconfigured with the same (duplicate) MAC address. In such a situation, the traffic originating from these hosts triggers continuous MAC moves among the PEs attached to them. It is important to recognize this situation and avoid incrementing the sequence number (in the MAC Mobility attribute) to infinity.
To remedy this situation, a router that detects a MAC mobility event by way of local learning starts a window <in-minutes> timer (default value of window = 3), and if it detects num-moves <num> before the timer expires (default value of num-moves = 5), it concludes that a duplicate MAC situation has occurred. The router then alerts the operator with a trap message. The offending MAC address can be shown using the show service id svc-id bgp-evpn command:
After detecting the duplicate, the router stops sending and processing any BGP MAC advertisement routes for that MAC address until one of the following occurs:
![]() | Note: The other routers in the VPLS instance will forward the traffic for the duplicate MAC address to the router advertising the best route for the MAC. |
The values of num-moves and window are configurable to allow for the required flexibility in different environments. In scenarios where BGP rapid-update evpn is configured, the operator might want to configure a shorter window timer than in scenarios where BGP updates are sent every (default) min-route-advertisement interval.
MAC duplication is always enabled in EVPN-VXLAN VPLS services, and the previously described MAC duplication parameters can be configured per VPLS service under the bgp-evpn mac-duplication context:
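For example (num-moves and window are shown at their defaults; the retry value is illustrative):

*A:PE1>config>service# vpls 1
    bgp-evpn
        mac-duplication
            detect num-moves 5 window 3
            retry 9
        exit
    exit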
RFC 7432 defines the use of the sticky bit in the mac-mobility extended community to signal static mac addresses. These addresses must be protected in case there is an attempt to dynamically learn them in a different place in the EVPN-VXLAN VPLS service.
In the 7750 SR, 7450 ESS, and 7950 XRS, any conditional static MAC defined in an EVPN-VXLAN VPLS service is advertised by BGP-EVPN as a static address, that is, with the sticky bit set. An example of the configuration of a conditional static MAC is shown below:
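A minimal sketch (the MAC and SAP values are illustrative):

*A:PE1>config>service# vpls 10
    static-mac
        mac 00:ca:fe:00:00:01 create sap 1/1/1:10 monitor fwd-status
    exit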
Local static MACs or remote MACs with sticky bit are considered as 'protected'. A packet entering a SAP / SDP binding will be discarded if its source MAC address matches one of these 'protected' MACs.
Auto-learn MAC protect, together with the ability to restrict where the protected source MACs are allowed to enter the service, can be enabled in EVPN-MPLS and EVPN-VXLAN VPLS and routed VPLS services, but not in PBB-EVPN services. The protection, using the auto-learn-mac-protect command (described in Auto-Learn MAC Protect), and the restrictions, using the restrict-protected-src [discard-frame] command, operate in the same way as in a non-EVPN VPLS service.
In addition, the following behavioral differences are specific to EVPN services:
Conditional static MACs, EVPN static MACs and locally protected MACs are marked as protected within the FDB, as shown in the example output.
In this output:
The command auto-learn-mac-protect can be optionally extended with an exclude-list by using the following command:
auto-learn-mac-protect [exclude-list name]
This list refers to a mac-list <name> created under the config>service context and contains a list of MACs and associated masks.
When auto-learn-mac-protect [exclude-list name] is configured on a service object, dynamically learned MACs are excluded from being learned as protected if they match a MAC entry in the MAC list. Dynamically learned MAC SAs are protected only if they are learned on an object with ALMP configured and:
The MAC lists can be used in multiple objects of the same or different service. When empty, ALMP does not exclude any learned MAC from protection on the object. This extension allows the mobility of certain MACs in objects where MACs are learned as protected.
A blackhole MAC is a local FDB record. It is similar to a conditional static MAC; it is associated with a black-hole (similar to a VPRN blackhole static-route in VPRNs) instead of a SAP or SDP binding. A blackhole MAC can be added by using the following command:
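For example, using the MAC address referenced below:

*A:PE1>config>service# vpls 10
    static-mac
        mac 00:00:ca:fe:ca:fe create black-hole
    exit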
The static blackhole MAC can have security applications (for example, replacement of MAC filters) for certain MACs. When used in combination with restrict-protected-src, the static blackhole MAC provides a simple and scalable way to filter MAC DA or SA in the data plane, regardless of how the frame arrived at the system (using SAP or SDP bindings or EVPN endpoints).
For example, when a specified static-mac mac 00:00:ca:fe:ca:fe create black-hole is added to a service, the following behavior occurs:
Blackhole MACs can also be used in services with proxy-ARP/proxy-ND enabled to filter traffic destined for the anti-spoof-mac. The anti-spoof-mac provides a way to attract traffic to a specified IP when a duplicate condition is detected for that IP address (see section ARP/ND Snooping and Proxy Support for more information); however, the system still needs to drop the traffic addressed to the anti-spoof-mac by using either a MAC filter or a blackhole MAC.
The user does not need to configure MAC filters when configuring a static-black-hole MAC address for the anti-spoof-mac function. To use a blackhole MAC entry for the anti-spoof-mac function in a proxy-ARP/proxy-ND service, the user needs to configure:
When this configuration is complete, the behavior of the anti-spoof-mac function changes as follows:
When the static-black-hole option is not configured with the anti-spoof-mac, the behavior of the anti-spoof-mac function, as described in ARP/ND Snooping and Proxy Support, remains unchanged. In particular:
SR OS can combine a blackhole MAC address concept and the EVPN MAC duplication procedures to provide loop protection in EVPN networks. The feature is compliant with the MAC mobility and multi-homing functionality in RFC 7432. The config>service>vpls>bgp-evpn>mac-duplication>black-hole-dup-mac CLI command enables the feature.
If enabled, there are no apparent changes in the MAC duplication; however, if a duplicated MAC is detected (for example, M1), then the router performs the following:
While the MAC type value remains EvpnD:P, the following additional operational details apply.
The following sample CLI shows an example EVPN-MPLS service where black-hole-dup-mac is enabled and MAC duplication programs the duplicate MAC as a blackhole.
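A sketch of the relevant part of such a service follows (service 30 matches the clear commands referenced below; the detect and retry values are illustrative):

*A:PE1>config>service# vpls 30 customer 1 create
    bgp-evpn
        mac-duplication
            black-hole-dup-mac
            detect num-moves 5 window 3
            retry 9
        exit
        mpls
            no shutdown
        exit
    exit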
If the retry time expires, the MAC is flushed from the FDB and the process starts again. The clear service id 30 evpn mac-dup-detect {ieee-address | all} command clears the duplicate blackhole MAC address.
![]() | Note: The clear service id 30 fdb command clears learned MAC addresses; blackhole MAC addresses are not cleared. |
Support for the black-hole-dup-mac command and the preceding associated loop detection procedures is as follows:
Ethernet Connectivity and Fault Management (ETH-CFM) allows the operator to validate and measure Ethernet Layer 2 services using standard IEEE 802.1ag and ITU-T Y.1731 protocols. Each tool performs a unique function and adheres to that tool's specific PDU and frame format and the associated rules governing the transmission, interception, and processing of the PDU. Detailed information describing the ETH-CFM architecture, the tools, and various functions is located in the various OAM and Diagnostics guides and is not repeated here.
EVPN provides powerful solution architectures, and ETH-CFM is supported in the various Layer 2 EVPN architectures. Because the destination Layer 2 MAC address, unicast or multicast, is ETH-CFM tool dependent (i.e. ETH-CC is sent as an L2 multicast and ETH-DM is sent as an L2 unicast), the ETH-CFM function is allowed to multicast and broadcast to the virtual EVPN connections. The Maintenance Endpoint (MEP) and Maintenance Intermediate Point (MIP) do not populate the local Layer 2 MAC address forwarding database (FDB) with the MAC related to the MEP and MIP. This means that the 48-bit IEEE MAC address is not exchanged with peers, and all ETH-CFM frames are broadcast across all virtual connections. To prevent the flooding of unicast packets and allow the remote forwarding databases to learn the remote MEP and MIP Layer 2 MAC addresses, the cfm-mac-advertisement command must be configured under the config>service>vpls>bgp-evpn context. This allows the MEP and MIP Layer 2 IEEE MAC addresses to be exchanged with peers. The command tracks configuration changes and sends the required updates via the EVPN notification process.
Up MEP, Down MEP, and MIP creation is supported on the SAP, spoke-SDP, and mesh-SDP connections within the EVPN service. There is no support for the creation of ETH-CFM Management Points (MPs) on the virtual connections. Virtual MEP (vMEP) is supported within a VPLS context and the applicable EVPN Layer 2 VPLS solution architectures. The vMEP follows the same rules as the general MPs. When a vMEP is configured within a supported EVPN service, the ETH-CFM extraction routines are installed on the SAP, binding, and EVPN connections within an EVPN VPLS service. The vMEP extraction within the EVPN-PBB context requires the vmep-extensions parameter to install the extraction on the EVPN connections.
When MPs are used in combination with EVPN multi-homing, the following must be considered:
Due to the above considerations, the use of ETH-CFM in EVPN multi-homed SAPs/SDP bindings is only recommended on operationally down MEPs and single-active multi-homing. ETH-CFM is used in this case to notify the CE of the DF or non-DF status.
When two BGP instances are added to a VPLS service, both BGP-EVPN MPLS and BGP-EVPN VXLAN can be configured at the same time in the service. A maximum of two BGP instances are supported in the same VPLS, where BGP-EVPN MPLS and BGP-EVPN VXLAN can both use BGP instance 1 or 2, as long as they use different instances.
For example, in a service where EVPN-VXLAN and EVPN-MPLS are configured together, the config>service>vpls>bgp-evpn>vxlan bgp 1 and config>service>vpls>bgp-evpn>mpls bgp 2 commands allow the user to associate the BGP-EVPN MPLS to a different instance from that associated with BGP-EVPN VXLAN, and have both encapsulations simultaneously enabled in the same service. At the control plane level, MAC or IP routes received in one instance are consumed and re-advertised in the other instance as long as the route is the best route for a specific MAC. Inclusive multicast routes are independently generated for each BGP instance. From a data plane perspective, the EVPN-MPLS and EVPN-VXLAN destinations are instantiated in different implicit Split Horizon Groups (SHGs) so that traffic can be forwarded between them.
The following example shows a VPLS service with two BGP instances and both VXLAN and MPLS encapsulations configured for the same BGP-EVPN service.
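A sketch of such a service follows (IDs, VNI, and route-targets are illustrative; the exact per-release syntax may differ). Note that the two instances use different route distinguishers, as required when both instances are enabled:

```
configure
    service
        vpls 500 name "vpls-500" customer 1 create
            vxlan instance 1 vni 500 create
            exit
            bgp 1
                route-distinguisher 192.0.2.1:500
                route-target export target:64500:500 import target:64500:500
            exit
            bgp 2
                route-distinguisher 192.0.2.1:501
                route-target export target:64500:501 import target:64500:501
            exit
            bgp-evpn
                evi 500
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
                mpls bgp 2
                    auto-bind-tunnel
                        resolution any
                    exit
                    no shutdown
                exit
            exit
            no shutdown
        exit
    exit
```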
The following list describes the preceding example:
Note: The bgp-evpn vxlan no shutdown command is only allowed if bgp-evpn mpls shutdown is configured, or if the BGP instance associated with MPLS has a different route distinguisher than the VXLAN instance.
The following features are not supported when two BGP instances are enabled on the same VPLS/R-VPLS service:
The command service>vpls>bgp-evpn>ip-route-advertisement is not supported on R-VPLS services with two BGP instances.
From a BGP perspective, the two BGP instances configured in the service are independent of each other. The redistribution of routes between the BGP instances is resolved at the EVPN application layer.
By default, if BGP-EVPN VXLAN and BGP-EVPN MPLS are both enabled in the same service, BGP sends the generated EVPN routes twice: once with the RFC 5512 BGP encapsulation extended community set to VXLAN and a second time with the encapsulation type set to MPLS.
Usually, a DC gateway peers with a pair of Route Reflectors (RRs) in the DC and a pair of RRs in the WAN. For this reason, the user needs to add router policies so that EVPN-MPLS routes are only sent to the WAN RRs and EVPN-VXLAN routes are only sent to the DC RRs. The following examples show how to configure such router policies.
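The following sketch illustrates such policies, assuming support for matching the RFC 5512 BGP encapsulation extended community in a community expression (policy and community names are illustrative):

```
configure
    router
        policy-options
            begin
            community "vxlan-encap" members "bgp-tunnel-encap:VXLAN"
            community "mpls-encap" members "bgp-tunnel-encap:MPLS"
            # do not send VXLAN routes to the WAN RRs
            policy-statement "export-wan-rr"
                entry 10
                    from
                        community "vxlan-encap"
                    exit
                    action reject
                exit
                default-action accept
                exit
            exit
            # do not send MPLS routes to the DC RRs
            policy-statement "export-dc-rr"
                entry 10
                    from
                        community "mpls-encap"
                    exit
                    action reject
                exit
                default-action accept
                exit
            exit
            commit
        exit
    exit
```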
In a BGP instance, the EVPN routes are imported based on the route-targets and regular BGP selection procedures, regardless of their encapsulation.
The BGP-EVPN routes are generated and redistributed between BGP instances based on the following rules.
Figure 198 shows the Anycast mechanism used to support gateway redundancy for dual BGP instance services. The example shows two redundant DC gateways (DC GWs) where the VPLS services contain two BGP instances: one each for EVPN-VXLAN and EVPN-MPLS.
The example shown in Figure 198 depends on the ability of the two DC GWs to send the same inclusive multicast route to the remote PE or NVEs, such that:
This solution avoids loops for BUM traffic, and known unicast traffic can use either DC GW router, depending on the BGP selection. The following CLI example output shows the configuration of each DC GW.
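A minimal sketch of the service configuration follows, identical on both DC GWs (values are illustrative; the incl-mcast-orig-ip command is assumed here as the way to set a common originating IP, 192.0.2.4 in this example, in the inclusive multicast routes of both GWs):

```
configure
    service
        vpls 1 customer 1 create
            vxlan instance 1 vni 1 create
            exit
            bgp 1
                route-distinguisher 64500:1
            exit
            bgp 2
                route-distinguisher 64500:2
            exit
            bgp-evpn
                incl-mcast-orig-ip 192.0.2.4     # same value on both DC GWs
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
                mpls bgp 2
                    auto-bind-tunnel
                        resolution any
                    exit
                    no shutdown
                exit
            exit
            no shutdown
        exit
    exit
```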
Based on the preceding configuration example, the behavior of the DC GWs in this scenario is as follows:
In the example shown in Figure 198, PE-1 picks up DC GW-1's inclusive multicast route (because of its lower BGP next-hop) and creates a BUM destination to 192.0.2.4. When sending BUM traffic for VPLS-1, it only sends the traffic to DC GW-1. In the same way, the DC GWs do not set up BUM destinations between each other, as they use the same originating-ip in their inclusive multicast routes.
The remote PE or NVEs perform a similar BGP selection for MAC or IP routes, as a specific MAC is sent by the two DC GWs with the same route-key. A PE or NVE sends known unicast traffic for a specific MAC to only one DC GW.
Figure 199 shows an example of a common BGP EVPN service configured in redundant Anycast DC GWs and mLDP used in the MPLS instance.
Note: Packet duplication may occur if the service configuration is not performed carefully.
When mLDP is used with multiple Anycast multi-homing DC GWs, the same originating IP address must be used by all the DC GWs. Failure to do so may result in packet duplication.
In the example shown in Figure 199, each pair of DC GWs (DCGW1/DCGW2 and DCGW3/DCGW4) is configured with a different originating IP address, which causes the following behavior.
To avoid the packet duplication shown in Figure 199, Nokia recommends configuring the same originating IP address in all four DC GWs (DCGW1/DCGW2 and DCGW3/DCGW4). However, the route-distinguishers can be different per pair.
The following behavior occurs if the same originating IP address is configured on the DC GW pairs shown in Figure 199.
Note: This configuration allows the use of mLDP as long as BUM traffic is not required between the two DCs. Ingress Replication must be used if BUM traffic between the DCs is required.
SR OS supports Interconnect ESs (I-ES) for VXLAN as per IETF Draft draft-ietf-bess-dci-evpn-overlay. An I-ES is a virtual ES that allows DC GWs with two BGP instances to handle VXLAN access networks as any other type of ES. I-ESs support the RFC 7432 multi-homing functions, including single-active and all-active, ESI-based split-horizon filtering, DF election, and aliasing and backup on remote EVPN-MPLS PEs.
In addition to the EVPN multi-homing features, the main advantages of the I-ES redundant solution compared to the redundant solution described in Anycast Redundant Solution for Dual BGP Instance Services are as follows.
Where EVPN-MPLS networks are interconnected to EVPN-VXLAN networks, the I-ES concept applies only to the access VXLAN network; the EVPN-MPLS network does not modify its existing behavior.
Figure 200 shows the use of I-ES for Layer 2 EVPN DCI between VXLAN and MPLS networks.
The following example shows how I-ES-1 would be provisioned on DC GW1 and how the I-ES is associated with a given VPLS service. A similar configuration would occur on DC GW2 in the I-ES.
New I-ES configuration:
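A minimal sketch follows (the ESI, multi-homing mode, and ranges are illustrative; the network-interconnect-vxlan and service-id service-range commands are assumed to associate the I-ES with VXLAN instance 1 of the services in the range):

```
configure
    service
        system
            bgp-evpn
                ethernet-segment "I-ES-1" create
                    esi 01:00:00:00:00:12:12:12:12:01
                    multi-homing all-active
                    network-interconnect-vxlan 1
                    service-id
                        service-range 1 to 2
                    exit
                    no shutdown
                exit
            exit
        exit
    exit
```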
Service configuration:
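And the associated service, sketched below (only VPLS 1 is shown; VPLS 2 would be analogous, and RD/RT details are omitted). The I-ES applies to this service through the service-range configured above:

```
configure
    service
        vpls 1 customer 1 create
            vxlan instance 1 vni 1 create
            exit
            bgp 1
            exit
            bgp 2
            exit
            bgp-evpn
                vxlan bgp 1 vxlan-instance 1     # within the I-ES service-range
                    no shutdown
                exit
                mpls bgp 2
                    no shutdown
                exit
            exit
            no shutdown
        exit
    exit
```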
The above configuration associates I-ES-1 to the VXLAN instance in services VPLS 1 and VPLS 2. The I-ES is modeled as a virtual ES, with the following considerations.
The configuration of an I-ES on DC GWs with two BGP-instances has the following impact on the advertisement and processing of BGP-EVPN routes.
In general, when I-ESs are used for redundancy, the use of router policies is needed to avoid control plane loops with MAC/IP routes. Consider the following to avoid control plane loops:
The following outlines the considerations for BGP peer policies on DC GW1 to avoid control plane loops. Similar policies would be configured on DC GW2.
The following shows the CLI configuration.
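A sketch of how such policies could be attached to the WAN and DC peer groups follows (group names and neighbor addresses are illustrative; the export policies are those sketched earlier in this section):

```
configure
    router
        bgp
            group "wan-rr"
                family evpn
                neighbor 192.0.2.100
                    export "export-wan-rr"
                exit
            exit
            group "dc-rr"
                family evpn
                neighbor 192.0.2.200
                    export "export-dc-rr"
                exit
            exit
        exit
    exit
```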
When an I-ES is configured as single-active and no shutdown with at least one associated service, the DC GWs send ES and AD routes as for any ES, and run DF election as normal based on the ES routes, with the candidate list being pruned by the AD routes.
Figure 201 shows the expected behavior for a single-active I-ES.
As shown in Figure 201, the Non-Designated-Forwarder (NDF) for a specified service carries out the following tasks.
The I-ES DF PE for the service continues advertising IMET and MAC/IP routes for the associated VXLAN instance as usual, as well as forwarding on the DF VXLAN bindings. When the DF DC GW receives BUM traffic, it sends the traffic with the egress ESI label if needed.
The same considerations for ES and AD routes, and DF election apply for all-active multi-homing as for single-active multi-homing; the difference is in the behavior on the NDF DC GW. The NDF for a specified service performs the following tasks.
The I-ES DF PE for the service continues advertising IMET and MAC/IP routes for the associated VXLAN instance as usual. When the DF DC GW receives BUM traffic, it sends the traffic with the egress ESI label if needed.
As described in Configuring EVPN-VXLAN and EVPN-MPLS in the Same VPLS/R-VPLS Service, two BGP instances are supported in VPLS services, where one instance can be associated to EVPN-VXLAN and the other instance to EVPN-MPLS. In addition, both BGP instances in a VPLS/R-VPLS service can also be associated to EVPN-VXLAN.
For example, a VPLS service can be configured with two VXLAN instances that use VNI 500 and 501 respectively, and those instances are associated to different BGP instances:
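The relevant fragment of such a service could look like the following sketch (VNIs as in the text; other service parameters omitted):

```
configure
    service
        vpls 500 name "vpls-500" customer 1 create
            vxlan instance 1 vni 500 create
            exit
            vxlan instance 2 vni 501 create
            exit
            bgp-evpn
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
                vxlan bgp 2 vxlan-instance 2
                    no shutdown
                exit
            exit
        exit
    exit
```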
From a data plane perspective, each VXLAN instance is instantiated in a different implicit SHG, so that traffic can be forwarded between them.
At a control plane level, the processing of MAC or IP routes and inclusive multicast routes is described in BGP-EVPN Routes in Services Configured with Two BGP Instances with the differences described in BGP-EVPN Routes in Multi-Instance EVPN-VXLAN Services.
The following features are not supported in conjunction with dual BGP instance EVPN-VXLAN services:
In addition, multi-instance EVPN-VXLAN services support:
If two BGP instances with the same encapsulation (VXLAN) are configured in the same VPLS service, different import route-targets in each BGP instance are mandatory (although this is not enforced).
BGP-EVPN Routes in Services Configured with Two BGP Instances describes the use of policies to avoid sending WAN routes (routes meant to be redistributed from DC to WAN) to the DC again and DC routes (routes meant to be redistributed from WAN to DC) to the WAN again. Those policies are based on export policy statements that match on the RFC 5512 BGP Encapsulation Extended Community (MPLS and VXLAN respectively).
When the two BGP instances are VXLAN-based, the above policies matching on different BGP encapsulation extended communities are not feasible, because both instances advertise routes with VXLAN encapsulation. Because the export route targets in the two BGP instances must be different, the policies to avoid sending WAN routes back to the WAN and DC routes back to the DC can be based on export policies that prevent routes with a DC route target from being sent to the WAN peers (and conversely for routes with a WAN route target).
In scaled scenarios, matching based on route targets does not scale well. An alternative and preferred solution is to configure a default-route-tag that identifies all the BGP-EVPN instances connected to the DC, and a different default-route-tag in all the BGP-EVPN instances connected to the WAN. Anycast Redundant Solution for Multi-Instance EVPN-VXLAN Services shows an example that demonstrates the use of default-route-tags.
Other than the specifications described in this section, the processing of MAC or IP routes and inclusive multicast Ethernet tag routes in multi-instance EVPN-VXLAN services follow the rules described in BGP-EVPN Routes in Services Configured with Two BGP Instances.
The solution described in Anycast Redundant Solution for Dual BGP Instance Services is also supported in multi-instance EVPN-VXLAN VPLS services.
The following CLI sample output shows the configuration of DCGW-1 and DCGW-2 in Figure 198 where VPLS 500 is a multi-instance EVPN-VXLAN service and BGP instance 2 is associated to VXLAN instead of MPLS.
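A sketch of the DC GW service and policy configuration follows, identical on both GWs (tags, IDs, and policy names are illustrative; the incl-mcast-orig-ip command is assumed for the common originating IP, as in the earlier anycast example):

```
configure
    service
        vpls 500 customer 1 create
            vxlan instance 1 vni 500 create
            exit
            vxlan instance 2 vni 501 create
            exit
            bgp 1
            exit
            bgp 2
            exit
            bgp-evpn
                incl-mcast-orig-ip 192.0.2.4
                vxlan bgp 1 vxlan-instance 1     # DC-facing instance
                    default-route-tag 500
                    no shutdown
                exit
                vxlan bgp 2 vxlan-instance 2     # WAN-facing instance
                    default-route-tag 501
                    no shutdown
                exit
            exit
            no shutdown
        exit
    exit
    router
        policy-options
            begin
            policy-statement "export-to-wan"
                entry 10
                    from
                        tag 500                  # routes generated in the DC-facing instance
                    exit
                    action reject
                exit
                default-action accept
                exit
            exit
            policy-statement "export-to-dc"
                entry 10
                    from
                        tag 501                  # routes generated in the WAN-facing instance
                    exit
                    action reject
                exit
                default-action accept
                exit
            exit
            commit
        exit
    exit
```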
Different default-route-tags are used in BGP instance 1 and instance 2, so that in the export route policies, DC routes are not advertised to the WAN, and WAN routes are not advertised to the DC, respectively.
Refer to Anycast Redundant Solution for Dual BGP Instance Services for a full description of this solution.
The Interconnect Ethernet Segment (I-ES), or network-interconnect-vxlan Ethernet Segment, is described in Interconnect Ethernet-Segment Solution for Dual BGP Instance Services. I-ESs are also supported on VPLS and R-VPLS services with two EVPN-VXLAN instances.
Figure 203 illustrates the use of an I-ES in a dual EVPN-VXLAN instance service.
Similar to (single-instance) EVPN-VXLAN all-active multi-homing, the BUM forwarding procedures follow the “Local Bias” behavior.
At the ingress PE these are the forwarding rules for EVPN-VXLAN services:
These are the forwarding rules at the egress PE:
The following configuration example shows how I-ES-1 would be provisioned on DC GW1 and how the I-ES is associated with a given VPLS service. A similar configuration would occur on DC GW2 in the I-ES.
I-ES configuration:
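A sketch mirroring the earlier I-ES example (ESI and service range are illustrative):

```
configure
    service
        system
            bgp-evpn
                ethernet-segment "I-ES-1" create
                    esi 01:00:00:00:00:13:13:13:13:01
                    multi-homing all-active
                    network-interconnect-vxlan 1
                    service-id
                        service-range 500 to 501
                    exit
                    no shutdown
                exit
            exit
        exit
    exit
```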
Service configuration:
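A sketch of the associated dual-VXLAN-instance service (values are illustrative; the I-ES applies to the service through the service-range above):

```
configure
    service
        vpls 500 name "vpls-500" customer 1 create
            vxlan instance 1 vni 500 create
            exit
            vxlan instance 2 vni 501 create
            exit
            bgp 1
                route-distinguisher 64500:500
            exit
            bgp 2
                route-distinguisher 64500:501
            exit
            bgp-evpn
                vxlan bgp 1 vxlan-instance 1     # instance associated with the I-ES
                    no shutdown
                exit
                vxlan bgp 2 vxlan-instance 2
                    no shutdown
                exit
            exit
            no shutdown
        exit
    exit
```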
See Interconnect Ethernet-Segment Solution for Dual BGP Instance Services for information about how the EVPN routes are processed and advertised in an I-ES.
In some DC Gateway use cases, static VXLAN must be used to connect DC switches that do not support EVPN to the WAN so that a tenant subnet can be extended to the WAN. For those cases, the DC Gateway is configured with VPLS services that include a static VXLAN instance and a BGP-EVPN instance in the same service. The following combinations are supported in the same VPLS/R-VPLS service:
When a static VXLAN instance coexists with EVPN-MPLS in the same VPLS/R-VPLS service, the VXLAN instance can be associated to a network-interconnect-vxlan ES if VXLAN uses instance 1. Both single-active and all-active multi-homing modes are supported as follows:
The use of the rx-discard-on-ndf options is illustrated in the following cases.
Use Case 1: static VXLAN with anycast VTEPs and all-active ES
This use case, which is illustrated in Figure 204, works only for all-active I-ESs.
In this use case, the DCGWs use anycast VTEPs, that is, PE1 has a single egress VTEP configured to the DCGWs, for example, 12.12.12.12. Normally, PE1 finds ECMP paths to send the traffic to both DCGWs. However, because a given BUM flow can be sent to either the DF or the NDF (but not to both at the same time), the DCGWs must be configured with the following option so that BUM is not discarded on the NDF:
rx-discard-on-ndf none
Similar to any LAG-like scenario at the access, the access CE load balances the traffic to the multi-homed PEs, but a given flow is only sent to one of these PEs. With the option none, the BUM traffic on RX is accepted, and there are no duplicate packets or black-holed packets.
Use Case 2: static VXLAN with non-anycast VTEPs
This use case, which is illustrated in the following figure, works with single or all-active multi-homing.
In this case, the DCGWs use different VTEPs, for example 1.1.1.1 and 2.2.2.2 respectively. PE1 has two separate egress VTEPs to the DCGWs. Therefore, PE1 sends BUM flows to both DCGWs at the same time. Concerning all-active multi-homing, if the default option for rx-discard-on-ndf is configured, PE2 and PE3 receive duplicate unknown unicast packets from PE1 (because the default option accepts unknown unicast on the RX of the NDF). So, the DCGWs must be configured with the following option:
rx-discard-on-ndf bum
Any use case in which the access PE sends BUM flows to all multi-homed PEs, including the NDF, is similar to Figure 205. BUM traffic must be blocked on the NDF’s RX to avoid duplicate unicast packets.
For single-active multi-homing, the rx-discard-on-ndf is irrelevant because BUM and known unicast are always discarded on the NDF.
Also, when non-anycast VTEPs are used on DCGWs, the following can be stated:
When a static VXLAN instance coexists with EVPN-VXLAN in the same VPLS or R-VPLS service, no VXLAN instance should be associated to an all-active network-interconnect-vxlan ES. This is because when multi-homing is used with an EVPN-VXLAN core, the non-DF PE always discards unknown unicast traffic to the static VXLAN instance (this is not the case with EVPN-MPLS if the unknown traffic has a BUM label) and traffic blackholes may occur. This is discussed in the following example:
Due to the behavior illustrated above, when a static VXLAN instance coexists with an EVPN-VXLAN instance in the same VPLS/R-VPLS service, redundancy based on an all-active I-ES is not recommended; single-active multi-homing or an anycast solution without I-ES should be used instead. Anycast solutions are discussed in Anycast Redundant Solution for Multi-Instance EVPN-VXLAN Services; in this case, a static VXLAN instance is used in instance 1 instead of EVPN-VXLAN.
SR OS supports the three IP-VRF-to-IP-VRF models defined in draft-ietf-bess-evpn-prefix-advertisement for EVPN-VXLAN and EVPN-MPLS R-VPLS services. Those three models are known as:
SR OS supports all three models for IPv4 and IPv6 prefixes. The three models have pros and cons, and different vendors have chosen different models depending on the use cases that they intend to address. When a third-party vendor is connected to an SR OS router, it is important to know which of the three models the third-party vendor implements. The following sections describe the models and the required configuration in each of them.
The SBD is equivalent to an R-VPLS that connects all the PEs that are attached to the same tenant VPRN. Interface-ful refers to the fact that there is a full IRB interface between the VPRN and the SBD (an interface object with MAC and IP addresses, over which interface parameters can be configured).
Figure 206 illustrates this model.
Figure 206 shows a 7750 SR and a third-party router using the interface-ful IP-VRF-to-IP-VRF with SBD IRB model. The two routers are attached to a VPRN for the same tenant, and those VPRNs are connected by R-VPLS-2, or SBD. Both routers exchange IP prefix routes with a non-zero GW IP (the IP address of the SBD IRB). The SBD IRB MAC and IP are advertised in a MAC/IP route. On reception, the IP prefix route creates a route-table entry in the VPRN, where the GW IP must be recursively resolved to the information provided by the MAC/IP route and installed in the ARP and FDB tables.
This model is explained in detail in EVPN for VXLAN in IRB Backhaul R-VPLS Services and IP Prefixes. As an example, and based on Figure 206, the following CLI output shows the configuration of a 7750 SR SBD and VPRN using this interface-ful with SBD IRB model:
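The following is a minimal sketch of such a configuration (addresses, IDs, and names are illustrative, and the exact per-release syntax may differ):

```
configure
    service
        vpls 2 name "sbd-2" customer 1 create    # SBD (R-VPLS 2)
            allow-ip-int-bind
            exit
            vxlan instance 1 vni 2 create
            exit
            bgp
            exit
            bgp-evpn
                evi 2
                ip-route-advertisement
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
            exit
            no shutdown
        exit
        vprn 10 customer 1 create
            interface "sbd-irb" create
                address 10.10.10.1/24            # SBD IRB IP, advertised as the GW IP
                vpls "sbd-2"
                exit
            exit
            no shutdown
        exit
    exit
```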
The model is also supported for IPv6 prefixes. There are no configuration differences except the ability to configure an IPv6 address and interface.
Interface-ful refers to the fact that there is a full IRB interface between the VPRN and the SBD. However, the SBD IRB is unnumbered in this model, which means no IP address is configured on it. In SR OS, an unnumbered SBD IRB is equivalent to an R-VPLS linked to a VPRN interface through an EVPN tunnel. See EVPN for VXLAN in EVPN Tunnel R-VPLS Services for more information.
Figure 207 illustrates this model.
Figure 207 shows a 7750 SR and a third-party router running the interface-ful IP-VRF-to-IP-VRF with unnumbered SBD IRB model. The IP prefix routes are now expected to have a zero GW IP, with the MAC in the Router's MAC extended community used for the recursive resolution to a MAC/IP route.
The corresponding configuration of the 7750 SR VPRN and SBD in the example could be:
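A sketch follows, similar to the previous example but with the SBD IRB unnumbered and the evpn-tunnel command enabling the model (values are illustrative):

```
configure
    service
        vpls 2 name "sbd-2" customer 1 create    # SBD (R-VPLS 2)
            allow-ip-int-bind
            exit
            vxlan instance 1 vni 2 create
            exit
            bgp
            exit
            bgp-evpn
                evi 2
                ip-route-advertisement
                vxlan bgp 1 vxlan-instance 1
                    no shutdown
                exit
            exit
            no shutdown
        exit
        vprn 10 customer 1 create
            interface "sbd-irb" create           # unnumbered: no IP address configured
                vpls "sbd-2"
                    evpn-tunnel                  # for IPv6: evpn-tunnel ipv6-gateway-address mac
                exit
            exit
            no shutdown
        exit
    exit
```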
The evpn-tunnel command controls the use of the Router's MAC extended community and the zero GW IP in the IPv4-prefix route. For IPv6, the ipv6-gateway-address mac option makes the router advertise the IPv6-prefix routes with a Router's MAC extended community and a zero GW IP.
This model is called interface-less because it does not require an SBD connecting the VPRNs for the tenant, and no recursive resolution is required upon receiving an IP prefix route. In other words, the IP prefix route's next hop is directly resolved to an EVPN tunnel, without the need for any other route. The ingress PE uses the received router's MAC extended community address as the inner destination MAC address for the EVPN packets sent to the prefix.
Figure 208 illustrates this model.
In SR OS, this model is implemented as follows:
A typical configuration of a PE's SBD and VPRN that work in the interface-less model for IPv4 and IPv6 follows:
SR OS supports the creation of host routes for IP addresses that are present in the ARP or neighbor tables of a routing context. These host routes are referred to as ARP-ND routes and can be advertised using EVPN or IP-VPN families. A typical use case where ARP-ND routes are needed is the extension of Layer 2 Data Centers (DCs). Figure 209 illustrates this use case.
Subnet 10.0.0.0/16 in Figure 209 is extended throughout two DCs. The DC gateways are connected to the users of subnet 20.0.0.0/24 on PE1 using IP-VPN (or EVPN). If the virtual machine VM 10.0.0.1 is connected to DC1, when PE1 needs to send traffic to host 10.0.0.1, it performs a Longest Prefix Match (LPM) lookup on the VPRN’s route table. If the only IP prefix advertised by the four DC GWs was 10.0.0.0/16, PE1 could send the packets to the DC where the VM is not present.
To provide efficient downstream routing to the DC where the VM is located, DCGW1 and DCGW2 must generate host routes for the VMs to which they connect. When the VM moves to the other DC, DCGW3 and DCGW4 must be able to learn the VM’s host route and advertise it to PE1. DCGW1 and DCGW2 must withdraw the route for 10.0.0.1, since the VM is no longer in the local DC.
In this case, the SR OS is able to learn the VM’s host route from the generated ARP or ND messages when the VM boots or when the VM moves.
A route owner type called “ARP-ND” is supported in the base or VPRN route table. The ARP-ND host routes have a preference of 1 in the route table and are automatically created out of the ARP or ND neighbor entries in the router instance.
The following commands enable ARP-ND host routes to be created in the applicable route tables:
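The following sketch shows an assumed per-interface placement of these commands in a VPRN (the command form follows the names used in this section; the exact placement and syntax may vary by release):

```
configure
    service
        vprn 100 customer 1 create
            interface "subnet-10" create
                address 10.0.0.254/16
                vpls "bd-10"
                exit
                arp-host-route-populate          # IPv4: ARP entries create ARP-ND host routes
                nd-host-route-populate           # IPv6: ND neighbor entries create ARP-ND host routes
            exit
        exit
    exit
```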
When the command is enabled, the EVPN, dynamic, and static ARP entries of the routing context create ARP-ND host routes in the route table. Similarly, ARP-ND host routes are created in the IPv6 route table out of static, dynamic, and EVPN neighbor entries if the command is enabled.
The arp-host-route-populate and nd-host-route-populate commands are used with the following features:
Note: The ARP-ND host routes are created in the route table but not in the routing context FIB. This helps preserve the FIB scale in the router.
In Figure 209, enabling arp-host-route-populate on the DCGWs allows them to learn or advertise the ARP-ND host route 10.0.0.1/32 when the VM is locally connected and to remove or withdraw the host routes when the VM is no longer present in the local DC.
ARP-ND host routes installed in the route table can be exported to VPN IPv4, VPN IPv6, or EVPN routes. No other BGP families or routing protocols are supported.
EVPN host mobility is supported in SR OS as in Section 4 of draft-ietf-bess-evpn-inter-subnet-forwarding. When a host moves from a source PE to a target PE, it can behave in one of the following ways:
The SR OS supports the above scenarios as follows.
Figure 210 shows an example of a host connected to a source PE, PE1, that moved to the target, PE2. The figure shows the expected configuration on the VPRN interface, where R-VPLS 1 is attached (for both PE1 and PE2). PE1 and PE2 are configured with an “anycast gateway”, that is, a VRRP passive instance with the same backup MAC and IP in both PEs.
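A sketch of the anycast gateway interface, identical on PE1 and PE2, follows (addresses and IDs are illustrative; the same VRID on both PEs yields the same virtual MAC):

```
configure
    service
        vprn 1 customer 1 create
            interface "int-rvpls-1" create
                address 10.0.0.254/24
                vpls "rvpls-1"
                exit
                vrrp 1 passive
                    backup 10.0.0.1              # same backup IP on both PEs
                exit
            exit
        exit
    exit
```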
In this initial phase:
The configuration described in this section and the cases in the following sections are for IPv4 hosts; however, the functionality is also supported for IPv6 hosts. The IPv6 configuration requires the equivalent commands, which use the prefix "nd-" instead of "arp-". The only exception is the flood-garp-and-unknown-req command, which does not have an equivalent command for ND.
Host Initiates an ARP/GARP upon Moving to the Target PE
An example is illustrated in Figure 211. This is the expected behavior based on the configuration described in EVPN Host Mobility Configuration.
After step 5, no one replies to PE1’s ARP request and the procedure is over. If a host replied to the ARP for 10.1, the process starts again.
Host Sends a Data Packet upon a Move to Target PE
In this case, the host does not send a GARP/ARP packet when moving to the target PE. Only regular data packets are sent. The steps are illustrated in Figure 212.
Silent Host upon a Move to the Target PE
This case assumes the host moves but it stays silent after the move. The steps are illustrated in Figure 213.
When two or more EVPN routes are received at a PE, BGP route selection typically takes place when the route key of the routes is equal. When the route key is different, but the PE has to make a selection (for instance, the same MAC is advertised in two routes with different RDs), BGP hands over the routes to EVPN and the EVPN application performs the selection.
EVPN and BGP selection criteria are described below.
Note: In case BGP has to run an actual selection and a specified (otherwise valid) EVPN route 'loses' to another EVPN route, the non-selected route is displayed by the show router bgp routes evpn x detail command with a tie-breaker reason.
Note: Protected MACs do not overwrite EVPN static MACs. In other words, if a MAC is in the FDB and protected because it was received with the sticky/static bit set in a BGP EVPN update, and a frame is received with the same source MAC on an object configured with auto-learn-mac-protect, that frame is dropped due to the implicit restrict-protected-src discard-frame. The reverse is not true; when a MAC is learned and protected using auto-learn-mac-protect, its information is not overwritten with the contents of a BGP update containing the same MAC address.
It is possible to constrain the tunnels used by the system for resolution of BGP next-hops or prefixes and BGP labeled unicast routes using LSP administrative tags. Refer to “LSP Tagging and Auto-Bind Using Tag Information” in the 7450 ESS, 7750 SR, 7950 XRS, and VSR MPLS Guide for further details.
Operational groups, also referred to as oper-groups, are supported in EVPN services. In addition to supporting SAP and SDP-binds, oper-groups can also be configured under the following objects:
These oper-groups can be monitored in LAGs or service objects. The following applications can particularly benefit from them:
CE-to-CE fault propagation in EVPN-VPWS services is supported in SR OS by using ETH-CFM fault-propagation. That is, upon detecting a CE failure, an EVPN-VPWS PE withdraws the corresponding Auto-Discovery per-EVI route, which then triggers a down MEP on the remote PE that signals the fault to the connected CE. In cases where the CE connected to EVPN-VPWS services does not support ETH-CFM, the fault can be propagated to the remote CE by using LAG standby-signaling, which can be LACP-based or simply power-off.
Figure 214 illustrates an example of link loss forwarding for EVPN-VPWS.
In this example, PE1 is configured as follows:
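The following is a minimal sketch of the PE1 side (IDs, tags, and names are illustrative; the placement of the oper-group under the EVPN instance is an assumption based on the description in this section):

```
configure
    service
        oper-group "og-1" create
        exit
        epipe 1 customer 1 create
            bgp
            exit
            bgp-evpn
                evi 1
                local-attachment-circuit "ac-1" create
                    eth-tag 100
                exit
                remote-attachment-circuit "ac-2" create
                    eth-tag 200
                exit
                mpls bgp 1
                    oper-group "og-1"            # assumed placement under the EVPN instance
                    auto-bind-tunnel
                        resolution any
                    exit
                    no shutdown
                exit
            exit
            sap lag-1:100 create
            exit
            no shutdown
        exit
    exit
    lag 1                                        # access-mode LAG
        monitor-oper-group "og-1"
    exit
```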
where:
Note: Do not configure the operational group under config>service>epipe, as circular dependencies are created when the access SAPs go down due to the LAG monitor-oper-group command.
Note: The configure>lag>monitor-oper-group name command is only supported in access mode. Any encap-type can be used.
As shown in Figure 214, upon failure on CE2, the following events occur.
Figure 215 illustrates how blackholes can be avoided when a PE becomes isolated from the core.
In this example, consider PE2 and PE1 are single-active multi-homed to CE1. If PE2 loses all its core links, PE2 must somehow notify CE1 so that PE2 does not continue attracting traffic and so that PE1 can take over. This notification is achieved by using oper-groups under the BGP-EVPN instance in the service. The following output is an example of the PE2 configuration.
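A compact sketch of the relevant PE2 configuration follows (illustrative values; the oper-group tracks the core EVPN instance and the access LAG monitors it, with the same placement assumption as in the previous example):

```
configure
    service
        oper-group "og-core" create
        exit
        vpls 10 customer 1 create
            bgp
            exit
            bgp-evpn
                evi 10
                mpls bgp 1
                    oper-group "og-core"         # assumed placement under the EVPN instance
                    no shutdown
                exit
            exit
            sap lag-1:10 create
            exit
            no shutdown
        exit
    exit
    lag 1                                        # access-mode LAG toward CE1
        monitor-oper-group "og-core"
    exit
```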
With the PE2 configuration and Figure 215 example, the following steps occur:
Generally, when oper-groups are associated to EVPN instances:
As described in EVPN for MPLS Tunnels, EVPN single-active multi-homing PEs that are elected as non-DF must notify their attached CEs so that the CE does not send traffic to the non-DF PE. This can be performed on a per-service basis using ETH-CFM and fault-propagation. However, sometimes ETH-CFM is not supported in multi-homed CEs and other notification mechanisms are needed, such as LACP standby or power-off. This scenario is illustrated in Figure 216.
Based on Figure 216, the multi-homed PEs are configured with multiple EVPN services that use ES-1. ES-1 and its associated LAG are configured as follows:
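A sketch follows (ESI and timers are illustrative; the hold-time down value is chosen per the guidance later in this section):

```
configure
    service
        oper-group "og-es1" create
            hold-time
                down 5                           # greater than the es-activation-timer below
            exit
        exit
        system
            bgp-evpn
                ethernet-segment "ES-1" create
                    esi 01:00:00:00:00:21:21:21:21:01
                    multi-homing single-active
                    oper-group "og-es1"
                    lag 1
                    es-activation-timer 3
                    no shutdown
                exit
            exit
        exit
    exit
    lag 1                                        # access-mode LAG toward the CE
        monitor-oper-group "og-es1"
    exit
```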
When the oper-group is configured on the ES and monitored on the associated LAG:
In order to avoid flapping issues, the oper-group configured in the ES must be configured with a hold-time down that is a non-zero value and greater than the ES es-activation-timer. This is because, when a PE transitions from non-DF to DF and the oper-group transitions to the up state, the PE takes a certain amount of time for the LAG to clear the standby conditions. During this time, the PE transitions back to non-DF. When the standby conditions are cleared, and after the es-activation-timer, then the PE finalizes as DF. To debounce this flap, the hold-time down should be configured as es-activation-timer plus a delta, where this delta is the time that it takes for the LAG to come up after the power-off or LACP standby conditions are cleared (this is typically 1 to 2 seconds, or longer).
Oper-groups cannot be assigned to ESs that are configured as virtual, all-active or service-carving mode auto.
OISM is an EVPN-based solution that optimizes the forwarding of IP multicast across R-VPLS services of the same or different subnets. EVPN OISM is supported for EVPN-MPLS and EVPN-VXLAN services and for IPv4 and IPv6 multicast groups, and is described in this section.
EVPN OISM is similar to Multicast VPN (MVPN) in some aspects: it performs IP multicast routing in VPNs, uses MP-BGP to signal a PE's interest in a given multicast group, and uses Provider Multicast Service Interface (PMSI) trees among the PEs to send and receive the IP multicast traffic.
However, OISM is simpler than MVPN and allows efficient multicast in networks that integrate Layer 2 and Layer 3; that is, networks where PEs may be attached to different subnets, but could also be attached to the same subnet.
OISM is simpler than MVPN since:
EVPN OISM is defined by draft-ietf-bess-evpn-irb-mcast and uses the following terminology, which is also used in the rest of this section:
In an EVPN OISM network, it is assumed that the sources and receivers are connected to ordinary BDs and EVPN is the only multicast control plane protocol used among the PEs. Also, the subnets (and optionally hosts) are advertised normally by the EVPN IP Prefix routes. The IP-Prefix routes are installed in the PEs' VPRN route tables and are used for multicast RPF checks when routing multicast packets. Figure 217 illustrates a simple EVPN OISM network.
In Figure 217, and from the perspective of the multicast flow (S1,G1), PE1 is considered an upstream PE, whereas PE2 and PE3 are downstream PEs. The OISM forwarding rules are as follows.
Note: OISM does not use any multicast Designated Router (DR) concept; hence, the upstream PE always routes locally as long as it has local receivers.
Note: In order for PE3 to receive the multicast traffic on the SBD, the source PE, PE1, forms an EVPN destination from BD1 to PE3's SBD. This EVPN destination on PE1 is referred to as an SBD destination.
OISM uses the Selective Multicast Ethernet Tag (SMET) route, or route type 6, to signal interest in a given (S,G) or (*,G). Figure 218 provides an example.
As shown in Figure 218, a PE with local receivers interested in a multicast group G1 issues an SMET route encoding the source and group information (upon receiving local IGMP join messages for that group). EVPN OISM uses the SMET route in the following way.
Note: MVPN uses different route types or even families to address the different multicast group types.
EVPN OISM supports multi-homed multicast sources and receivers.
While MVPN requires complex UMH (Upstream Multicast Hop) selection procedures to provide multi-homing for sources, EVPN simply reuses the existing EVPN multi-homing procedures. Figure 219 illustrates an example of a multi-homed source that makes use of EVPN All-Active multi-homing.
The source S1 is attached to a switch SW1 that is connected via a single LAG to PE1 and PE2, a pair of EVPN OISM PEs. PE1 and PE2 define Ethernet Segment ES-1 for SW1, where ES-1 is all-active in this case (single-active multi-homing is supported too). Even in the case of all-active multi-homing, the multicast flow for (S1,G1) is only sent to one OISM PE, and the regular all-active multi-homing procedures (Split-Horizon) make sure that the other multi-homed PE does not send the multicast traffic back to SW1. This is true for EVPN-MPLS and EVPN-VXLAN BDs.
Convergence in case of failure is very fast, because the downstream PEs advertise the SMET route for (*,G1) with the SBD route-target, and that route is imported by both multi-homed PEs, PE1 and PE2. In case of failure on the PE that is forwarding the flow, the other multi-homed PE already has state for (*,G1) and can forward the multicast traffic immediately.
EVPN OISM also supports multi-homed receivers. Figure 220 illustrates an example of multi-homed receivers.
Multi-homed receivers, as depicted in Figure 220, require the support of multicast state synchronization on the multi-homing PEs to avoid blackholes. As an example, consider that SW1 hashes an IGMP join (*,G1) to PE2, and PE2 adds the ES-1 SAP to the OIF list for (*,G1). Consider that PE1 is the ES-1 DF. Unless the (*,G1) state is synchronized on PE1, the multicast traffic is pulled to PE2 only and then discarded. The state synchronization on PE1 pulls the multicast traffic to PE1 too, and PE1 forwards it to the receiver using its DF SAP. The IGMP state is synchronized using the Multi-Chassis Synchronization (MCS) protocol.
This section shows a configuration example for the network illustrated in Figure 221.
The following CLI excerpt shows the configuration required on PE4 for services 2000 (VPRN), BD-2003 and BD-2004 (ordinary BDs) and BD-2002 (SBD).
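The excerpt below is a partial sketch of such a configuration, limited to the commands discussed in this section (addresses, SAPs, and IDs are illustrative; OISM-specific binding options not covered here are omitted):

```
configure
    service
        vprn 2000 customer 1 create
            interface "int-bd-2003" create
                address 10.0.3.254/24
                vpls "bd-2003"
                exit
            exit
            interface "int-bd-2004" create
                address 10.0.4.254/24
                vpls "bd-2004"
                exit
            exit
            interface "int-sbd-2002" create      # SBD interface
                vpls "bd-2002"
                exit
            exit
            igmp
                no shutdown
            exit
            pim
                no shutdown
            exit
            no shutdown
        exit
        vpls 2003 name "bd-2003" customer 1 create    # ordinary BD
            allow-ip-int-bind
                forward-ipv4-multicast-to-ip-int
            exit
            bgp
            exit
            bgp-evpn
                evi 2003
                mpls bgp 1
                    no shutdown
                exit
            exit
            igmp-snooping
                no shutdown
            exit
            sap 1/1/1:2003 create
            exit
            no shutdown
        exit
        # bd-2004 is configured like bd-2003
        vpls 2002 name "bd-2002" customer 1 create    # SBD
            allow-ip-int-bind
                forward-ipv4-multicast-to-ip-int
            exit
            bgp
            exit
            bgp-evpn
                evi 2002
                sel-mcast-advertisement
                mpls bgp 1
                    no shutdown
                exit
            exit
            igmp-snooping
                no shutdown
            exit
            no shutdown
        exit
    exit
```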
As shown in the previous configuration commands, the VPRN must be configured as follows.
Besides the VPRN, BD-2003, BD-2004 and BD-2002 (SBD) must be configured as follows.
As shown in the previous configuration commands, igmp-snooping no shutdown must be configured in ordinary and SBD R-VPLS services so that IGMP messages or SMET routes are processed by the IGMP module. Also, the command forward-ipv4-multicast-to-ip-int allows the system to forward multicast traffic from the R-VPLS to the VPRN interface.
Finally, bgp-evpn>sel-mcast-advertisement must be enabled on the SBD R-VPLS, so that the SBD aggregates the multicast state for all the ordinary BDs and advertises the corresponding SMET routes with the SBD route-target. In OISM mode, the SMET route is only advertised from the SBD R-VPLS and not from the other, ordinary BD R-VPLS services.
PE2 and PE3 are configured with the VPRN (2000), ordinary BD (BD-2001) and SBD (BD-2002) as above. In addition, PE2 and PE3 are attached to ES-1 where a receiver is connected. Since multicast state synchronization is required, MCS is configured appropriately too:
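A sketch of the MCS peering on PE2 follows (the peer address and sync-tag are illustrative, and the exact sync sub-command syntax may vary by release):

```
configure
    redundancy
        multi-chassis
            peer 192.0.2.3 create                # PE3's system address
                sync
                    igmp-snooping
                    port lag-1 sync-tag "es-1" create
                    exit
                    no shutdown
                exit
                no shutdown
            exit
        exit
    exit
```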
After the previous configuration is executed on the three nodes, the EVPN routes are exchanged. BD-2003 in PE4 receives IMET routes from the remote SBD PEs and creates "SBD" destinations to PE2 and PE3. Those SBD destinations are used to forward multicast traffic to PE2 and PE3, following the OISM forwarding procedures explained in OISM Forwarding Plane. The following command shows an example of an IMET route (flagged as an SBD route working in OISM mode) and an SMET route received on PE4 from PE2.
When PE4 receives the IMET routes from PE2 and PE3 SBDs, it identifies the routes as SBD routes in OISM mode, and PE4 creates special EVPN destinations on the BD-2003 service that are used to forward the multicast traffic. The SBD destinations are shown as Sup BCast Domain in the show command output.
Based on the reception of the SMET routes from PE2 and PE3, PE4 adds the SBD EVPN destinations to its MFIB on BD-2003.
PE2 and PE3 also create regular destinations and SBD destinations based on the reception of IMET routes. As an example, the show service id 2001 evpn-mpls command shows the destinations created by PE3 in the ordinary BD-2001.
In case of an SBD destination and a non-SBD destination to the same PE (PE2), IGMP only uses the non-SBD one in the MFIB. The non-SBD destination always has priority over the SBD destination. This can be seen in the show service id igmp-snooping base command in PE3, where the SBD destination to PE2 is down as long as the non-SBD destination is up.
Finally, to check the Layer 3 IIF and OIF entries on the VPRN services, enter the show router id pim group detail command. As an example, the command is executed in PE2:
This section contains information about EVPN and how it interacts with other features.
When enabling existing VPLS features in an EVPN-VXLAN or an EVPN-MPLS enabled service, the following must be considered:
Note: MAC duplication already provides protection against MAC moves between EVPN and SAPs/SDP bindings.
Note: EVPN provides its own protection mechanism for static MAC addresses.
Note: The number of BGP-MH sites per EVPN-VXLAN service is limited to 1.
In addition to the B-VPLS considerations described in section Interaction of EVPN-VXLAN and EVPN-MPLS with Existing VPLS Features, the following specific interactions for PBB-EVPN should also be considered.
When enabling existing VPRN features on interfaces linked to VXLAN R-VPLS (static or BGP-EVPN based), or EVPN-MPLS R-VPLS interfaces, consider that the following are not supported:
When enabling existing IES features on interfaces linked to VXLAN R-VPLS or EVPN-MPLS R-VPLS interfaces, the following are not supported:
Routing policies can match on specific fields when importing or exporting EVPN routes. These matching fields (excluding route-table EVPN IP-prefix routes, unless explicitly mentioned) are:
Additionally, the route tags can be used on export policies to match EVPN routes that belong to a service and BGP instance, routes that are created by the proxy-arp/nd application, or IP-Prefix routes that are added to the route-table with a route tag.
EVPN can pass only one route tag to BGP to achieve matching on export policies. In case of a conflict, the default-route-tag has the least priority of the three potential tags added by EVPN.
For instance, if VPLS 10 is configured with proxy-arp>evpn-route-tag 20 and bgp-evpn>mpls>default-route-tag 10, all MAC/IP routes generated by the proxy-arp application use route tag 20. Export policies can then use "from tag 20" to match all those routes. In this case, inclusive multicast routes are matched by using "from tag 10".
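As a sketch, the corresponding service and export policy entries could look like the following (IDs are illustrative):

```
configure
    service
        vpls 10 customer 1 create
            proxy-arp
                evpn-route-tag 20
            exit
            bgp-evpn
                mpls bgp 1
                    default-route-tag 10
                    no shutdown
                exit
            exit
        exit
    exit
    router
        policy-options
            begin
            policy-statement "evpn-export"
                entry 10
                    from
                        tag 20                   # MAC/IP routes generated by proxy-ARP
                    exit
                    action accept
                    exit
                exit
                entry 20
                    from
                        tag 10                   # inclusive multicast routes
                    exit
                    action accept
                    exit
                exit
            exit
            commit
        exit
    exit
```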
BGP routing policies are supported for IP prefixes imported or exported through BGP-EVPN.
When applying routing policies to control the distribution of prefixes between EVPN and IP-VPN, the user must consider that both families are completely separate as far as BGP is concerned and that when prefixes are imported in the VPRN routing table, the BGP attributes are lost to the other family. The use of route tags allows the controlled distribution of prefixes across the two families.
Figure 222 shows an example of how VPN-IPv4 routes are imported into the RTM (Routing Table Manager), and then passed to EVPN for its own process.
Note: VPN-IPv4 routes can be tagged at ingress and the tag is preserved throughout the RTM and EVPN processing, so that the tag can be matched at the egress BGP routing policy.
Policy tags can be used to match EVPN IP prefixes that were learned not only from BGP VPN-IPv4 but also from other routing protocols. The tag range supported for each protocol is different:
Figure 223 shows an example of the reverse workflow, routes imported from EVPN and exported from RTM to BGP VPN-IPv4.
The behavior described above and the use of tags are also valid for vsi-import and vsi-export policies in the R-VPLS.
In summary, the policy behavior for EVPN IP prefixes is as follows:
Policy entries add tags for static routes, RIP, OSPF, IS-IS, BGP, and ARP-ND routes, which can then be matched on the BGP peer export policy, or on the VSI export policy for EVPN prefix routes.