Configuring ETH-CFM requires commands at two different hierarchy levels of the CLI.
The configuration under the config>eth-cfm hierarchy defines the domains, associations, and the applicable global parameters for each of those contexts, including the linkage to the service using the bridge-identifier option. After this configuration is complete, the Management Points (MPs = MEPs and MIPs) may be defined referencing the appropriate global context.
As described in the 7450 ESS, 7750 SR, 7950 XRS, and VSR OAM and Diagnostics Guide, MEPs can be implemented at the service or the facility level. The focus of this guide is on how ETH-CFM MPs are configured within the service hierarchy. However, because of the wide range of functions that the ITU-T has defined in Recommendation Y.1731 (Fault Management, Performance Management and Protection Mechanisms), these functions may also be applied to other features and hierarchies. For example, Ethernet Ring Protection (G.8032) also makes use of various ETH-CFM functions. Different sections in this guide may contain ETH-CFM-specific material as it applies to that specific feature.
The following is an example of how domains and associations could be constructed, illustrating how the different services are linked to the contexts.
config>eth-cfm# info
----------------------------------------------
    domain 3 format none level 3
        association 1 format icc-based name "03-0000000101"
            bridge-identifier 100
            exit
        exit
    exit
    domain 4 format none level 4
        association 1 format icc-based name "04-0000000102"
            bridge-identifier 100
            exit
            remote-mepid 200
            ccm-interval 60
        exit
    exit
----------------------------------------------
The following configuration examples illustrate how different services make use of the domain and association configuration. Epipe, VPLS, and IES services are shown in this example. See the preceding table for the supported services and management points.
# configure service epipe 100 customer 1 create
* config>service>epipe# info
----------------------------------------------
    sap 1/1/2:100.31 create
        eth-cfm
            mep 111 domain 3 association 1 direction down
                mac-address d0:0d:1e:00:01:11
                no shutdown
            exit
        exit
    exit
    sap 1/1/10:100.31 create
        eth-cfm
            mep 101 domain 4 association 1 direction up
                mac-address d0:0d:1e:00:01:01
                ccm-enable
                no shutdown
            exit
        exit
    exit
    no shutdown
----------------------------------------------
# configure service vpls 100 customer 1 create
* config>service>vpls# info
----------------------------------------------
    sap 1/1/2:100.31 create
        eth-cfm
            mep 111 domain 3 association 1 direction down
                mac-address d0:0d:1e:00:01:11
                no shutdown
            exit
        exit
    exit
    sap 1/1/10:100.31 create
        eth-cfm
            mep 101 domain 4 association 1 direction up
                mac-address d0:0d:1e:00:01:01
                ccm-enable
                no shutdown
            exit
        exit
    exit
    no shutdown
----------------------------------------------
# configure service ies 100 customer 1 create
config>service>ies# info
----------------------------------------------
interface "test" create
address 10.1.1.1/30
sap 1/1/9:100 create
eth-cfm
mep 111 domain 3 association 1 direction down
ccm-enable
no shutdown
exit
exit
exit
exit
no shutdown
----------------------------------------------
A Virtual MEP (vMEP) is a MEP that is configured at the service level instead of on a SAP or SDP binding. A vMEP sends ETH-CFM packets to the SAPs and SDP bindings in the VPLS, depending on the type of traffic. Multicast traffic is forwarded out all SAPs and SDP bindings. Unicast traffic is forwarded appropriately, based on the type of ETH-CFM packet and the forwarding tables. Packets inbound to a context containing a vMEP undergo normal processing and forwarding through the data plane, with a copy of the ETH-CFM packet delivered to the local MEP for the appropriate levels. The local MEP determines whether it should process a copied inbound ETH-CFM frame, acting in accordance with standard rules.
Configuring a vMEP is similar in concept to placing down MEPs on the individual SAPs and SDP bindings in the associated VPLS. This ensures that packets inbound to the service get redirected to the vMEP for processing. Correct domain nesting must be followed to avoid ETH-CFM error conditions.
vMEPs are supported in VPLS, M-VPLS, B-VPLS, and I-VPLS contexts.
A vMEP in an I-VPLS context can only extract packets that arrive on local SAPs and SDP bindings. This extraction does not include packets that are mapped to the I-VPLS from an associated B-VPLS context. If this type of extraction is required in an I-VPLS context, UP MEPs are required on the appropriate SAPs and SDP bindings in the I-VPLS service.
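For illustration only, the following sketch shows such an UP MEP in a hypothetical I-VPLS service. The service ID (200), the SAP (1/1/3:200), the MEP ID (201), and the additional association (domain 4, association 2, with a bridge-identifier referencing service 200) are assumptions introduced for this sketch and are not part of the preceding examples.
config>eth-cfm# info
----------------------------------------------
    domain 4 format none level 4
        association 2 format icc-based name "04-0000000202"
            bridge-identifier 200
            exit
        exit
    exit
----------------------------------------------
config>service# vpls 200 customer 1 i-vpls create
config>service>vpls$ info
----------------------------------------------
    sap 1/1/3:200 create
        eth-cfm
            mep 201 domain 4 association 2 direction up
                no shutdown
            exit
        exit
    exit
    no shutdown
----------------------------------------------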
The wider scope of the vMEP feature requires all the SAPs within the service, and every network port on the node, to be on FP2-based or later hardware.
As with the original vMEP functionality introduced for B-VPLS contexts, DOWN MEPs are supported on the individual SAPs or SDP bindings as long as domain nesting rules are not violated. Local UP MEPs are supported only at the same level as the vMEP; otherwise, various CCM defect conditions are raised (assuming CCM is enabled) and leaking of ETH-CFM packets occurs (lower-level ETH-CFM packets arriving on a lower-level MEP). Domain nesting must be properly deployed to avoid unexpected defect conditions and leaking between ETH-CFM domains.
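The following minimal sketch, built from the domains and associations defined earlier, shows one arrangement that respects these nesting rules: a vMEP at level 4 (domain 4) at the service level and a DOWN MEP at level 3 (domain 3) on one of the SAPs. The MEP IDs reuse those from the earlier examples; the combination itself is purely illustrative.
config>service# vpls 100
config>service>vpls# info
----------------------------------------------
    eth-cfm
        mep 100 domain 4 association 1
            no shutdown
        exit
    exit
    sap 1/1/2:100.31 create
        eth-cfm
            mep 111 domain 3 association 1 direction down
                no shutdown
            exit
        exit
    exit
    no shutdown
----------------------------------------------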
MIPs may be configured on the SAPs and spoke SDPs at or above the level of the vMEP.
An optional vmep-filter provides a coarse means of silently dropping all ETH-CFM packets that would normally be redirected to the CPU following egress processing. This includes any ETH-CFM level equal to or lower than the vMEP and any level equal to or lower than any other management points on the same SAP or SDP binding that includes the vmep-filter. MIPs are automatically deleted when they coexist on the same SAP or spoke SDP as the vmep-filter. Because DOWN MEPs are ingress processed, they are supported in combination with a vMEP and operate normally regardless of any vmep-filter. Domain nesting rules must be adhered to.
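As an illustrative sketch, the vmep-filter is applied under the eth-cfm context of the SAP (or SDP binding) whose redirected ETH-CFM packets should be silently dropped. The SAP shown here (1/1/4:100.31) is an assumption and is not part of the earlier examples; the vMEP reuses the one configured at the end of this section.
config>service# vpls 100
config>service>vpls# info
----------------------------------------------
    eth-cfm
        mep 100 domain 3 association 1
            no shutdown
        exit
    exit
    sap 1/1/4:100.31 create
        eth-cfm
            vmep-filter
        exit
    exit
    no shutdown
----------------------------------------------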
If the operator requires an MP on the SAP or SDP binding, an UP MEP may be created at the same level as the vMEP on the appropriate SAP or SDP binding to perform the same function as the filter, but at the specific level of the MEP. Scalability needs to be clearly understood because this redirects the ETH-CFM packets to the CPU (consider using CPU protection). Consideration must also be given to the impact this approach could have on the total number of MEPs required. There are a number of other approaches that may lend themselves to the specific network architecture.
vMEP filtering is not supported within a PBB VPLS because it already provides separation between B-components (typically the core) and I-components (typically the customer).
vMEPs do not support any ETH-AIS functionality and do not support fault propagation functions.
The following example shows the configuration of a vMEP in a VPLS context.
config>service# vpls 100 customer 1 create
config>service>vpls$ info
----------------------------------------------
    stp
        shutdown
    exit
    eth-cfm
        mep 100 domain 3 association 1
            mac-address d0:0d:1e:00:01:11
            ccm-enable
            no shutdown
        exit
    exit
    no shutdown
----------------------------------------------