3. OAM and SAA

This chapter provides information about the Operations, Administration and Management (OAM) and Service Assurance Agent (SAA) commands available in the CLI for troubleshooting services.

3.1. OAM overview

Delivery of services requires that a number of operations occur correctly and at different levels in the service delivery model. For example, operations such as the association of packets with a service must be performed correctly in the forwarding plane for the service to function correctly. To verify that a service is operational, a set of in-band, packet-based Operation, Administration, and Maintenance (OAM) tools is required, with the ability to test each of the individual packet operations.

For in-band testing, the OAM packets closely resemble customer packets to effectively test the customer forwarding path, but they are distinguishable from customer packets so they are kept within the service provider network and not forwarded to the customer.

The suite of OAM diagnostics supplements the basic IP ping and traceroute operations with diagnostics specialized for the different levels in the service delivery model, including diagnostics for services.

3.1.1. LSP diagnostics: LSP ping and trace

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

The following guidelines and restrictions apply:

  1. LDP LER ECMP is not supported. The following description of LDP ECMP behavior on an LER node does not apply to 7210 SAS nodes.
  2. The 7210 SAS supports LDP and BGP 3107 labeled routes. The 7210 SAS does not support stitching of an LDP FEC to a BGP 3107 labeled route, or vice versa. The following description of stitching BGP 3107 labeled routes to LDP FECs describes the behavior in an end-to-end network solution deployed using 7210 SAS and 7750 SR nodes, with the 7210 SAS acting as the LER node.
  3. The 7210 SAS does not support ECMP for BGP 3107 labeled routes. The description of the use of ECMP routes for BGP 3107 labeled routes does not apply to the 7210 SAS; it is provided for completeness and a better understanding of the behavior in an end-to-end network solution deployed using 7210 SAS and 7750 SR nodes.

The router LSP diagnostics are implementations of LSP ping and LSP trace based on RFC 4379, Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures. LSP ping provides a mechanism to detect data plane failures in MPLS LSPs. LSP ping and LSP trace are modeled after the ICMP echo request/reply used by ping and trace to detect and localize faults in IP networks.

For a specific LDP FEC, RSVP P2P LSP, or BGP IPv4 label route, LSP ping verifies whether the packet reaches the egress label edge router (LER), while in LSP trace mode, the packet is sent to the control plane of each transit label switched router (LSR), which performs various checks to determine whether it is actually a transit LSR for the path.

The downstream mapping TLV is used in lsp-ping and lsp-trace to provide a mechanism for the sender and responder nodes to exchange and validate interface and label stack information for each downstream of an LDP FEC or an RSVP LSP and at each hop in the path of the LDP FEC or RSVP LSP.

Two downstream mapping TLVs are supported: the original Downstream Mapping (DSMAP) TLV defined in RFC 4379, and the newer Downstream Detailed Mapping (DDMAP) TLV defined in RFC 6424.

When the responder node has multiple equal cost next-hops for an LDP FEC prefix, the downstream mapping TLV can further be used to exercise a specific path of the ECMP set using the path-destination option. The behavior in this case is described in the following ECMP subsection.

3.1.1.1. LSP ping/trace for an LSP using a BGP IPv4 label route

This feature adds support for the target FEC stack TLV of type BGP Labeled IPv4 /32 Prefix, as defined in RFC 4379.

The new TLV structure is shown in the following figure.

Figure 7:  Target FEC stack TLV for a BGP labeled IPv4 prefix 

The user issues an LSP ping using the existing CLI command and specifying the new type of prefix:

oam lsp-ping bgp-label prefix ip-prefix/mask [src-ip-address ip-address] [fc fc-name [profile {in|out}]] [size octets] [ttl label-ttl] [send-count send-count] [timeout timeout] [interval interval] [path-destination ip-address [interface if-name | next-hop ip-address]] [detail]

The path-destination option is used for exercising specific ECMP paths in the network when the LSR performs hashing on the MPLS packet.

Similarly, the user issues an LSP trace using the following command:

oam lsp-trace bgp-label prefix ip-prefix/mask [src-ip-address ip-address] [fc fc-name [profile {in|out}]] [max-fail no-response-count] [probe-count probes-per-hop] [size octets] [min-ttl min-label-ttl] [max-ttl max-label-ttl] [timeout timeout] [interval interval] [path-destination ip-address [interface if-name | next-hop ip-address]] [detail]

The following are the procedures for sending and responding to an LSP ping or LSP trace packet. These procedures are valid when the downstream mapping is set to the DSMAP TLV. The detailed procedures with the DDMAP TLV are presented in Using DDMAP TLV in LSP stitching and LSP hierarchy:

  1. The next-hop of a BGP label route for a core IPv4 /32 prefix is always resolved to an LDP FEC or an RSVP LSP. Therefore the sender node encapsulates the packet of the echo request message with a label stack which consists of the LDP/RSVP outer label and the BGP inner label.
    If the packet expires on an RSVP or LDP LSR node that does not have context for the BGP label IPv4 /32 prefix, the node validates the outer label in the stack and, if the validation is successful, replies in the same way it does when it receives an echo request message for an LDP FEC that is stitched to a BGP IPv4 label route; that is, it replies with return code 8 “Label switched at stack-depth <RSC>.”
  2. The LSR node that is the next-hop for the BGP label IPv4 /32 prefix as well as the LER node that originated the BGP label IPv4 prefix both have full context for the BGP IPv4 target FEC stack and, as a result, can perform full validation of it.
  3. If the BGP IPv4 label route is stitched to an LDP FEC, the egress LER for the resulting LDP FEC will not have context for the BGP IPv4 target FEC stack in the echo request message and replies with return code 4 “Replying router has no mapping for the FEC at stack-depth <RSC>.” This is the same behavior as that of an LDP FEC which is stitched to a BGP IPv4 label route when the echo request message reaches the egress LER for the BGP prefix.
Note:

Only BGP label IPv4 /32 prefixes are supported, because only these prefixes can be used as tunnels on the node. BGP label IPv6 /128 prefixes cannot currently be used as tunnels on a node and are not supported in LSP ping or LSP trace.

3.1.1.2. ECMP considerations

Note:

The following restrictions apply to this section.

  1. LDP ECMP for LER and LSR is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.
  2. BGP 3107 labeled route ECMP is not supported on 7210 SAS platforms. References to BGP 3107 labeled route ECMP are included in this section only for completeness of the feature description.

When the responder node has multiple equal cost next-hops for an LDP FEC or a BGP label IPv4 prefix, it replies in the DSMAP TLV with the downstream information of the outgoing interface that is part of the ECMP next-hop set for the prefix.

However, when a BGP label route is resolved to an LDP FEC (of the BGP next-hop of the BGP label route), ECMP can exist at both the BGP and LDP levels. The following next-hop selection is performed in this case:

  1. For each BGP ECMP next hop of the label route, a single LDP next hop is selected even if multiple LDP ECMP next hops exist. Therefore, the number of ECMP next hops for the BGP IPv4 label route is equal to the number of BGP next-hops.
  2. ECMP for a BGP IPv4 label route is only supported at the provider edge (PE) router (BGP label push operation) and not at ABR and ASBR (BGP label swap operation). Therefore, at an LSR, a BGP IPv4 label route is resolved to a single BGP next hop, which is resolved to a single LDP next hop.
  3. LSP trace will return one downstream mapping TLV for each next-hop of the BGP IPv4 label route. It will also return the exact LDP next-hop that the datapath programmed for each BGP next-hop.

In the following description of LSP ping and LSP trace behavior, generic references are made to specific terms as follows:

  1. FEC can represent either an LDP FEC or a BGP IPv4 label route, and a Downstream Mapping TLV can represent either the DSMAP TLV or the DDMAP TLV.
  2. If the user initiates an LSP trace of the FEC without the path-destination option specified, the sender node does not include multi-path information in the Downstream Mapping TLV in the echo request message (multipath type=0). In this case, the responder node replies with a Downstream Mapping TLV for each outgoing interface that is part of the ECMP next-hop set for the FEC. The sender node will select the first Downstream Mapping TLV only for the subsequent echo request message with incrementing TTL.
  3. If the user initiates an LSP ping of the FEC with the path-destination option specified, the sender does not include the Downstream Mapping TLV. However, the user can configure the interface option, which is part of the same path-destination option, to direct the echo request message at the sender node to be sent from a specific outgoing interface that is part of an ECMP path set for the FEC.
  4. If the user initiates an LSP trace of the FEC with the path-destination option specified but configured to exclude a Downstream Mapping TLV in the MPLS echo request message using the downstream-map-tlv {none} CLI option, the sender node does not include the Downstream Mapping TLV. However, the user can configure the interface option, which is part of the same path-destination option, to direct the echo request message at the sender node to be sent out a specific outgoing interface that is part of an ECMP path set for the FEC.
  5. If the user initiates an LSP trace of the FEC with the path-destination option specified, the sender node includes the multipath information in the Downstream Mapping TLV in the echo request message (multipath type=8). The path-destination option allows the user to exercise a specific path of a FEC in the presence of ECMP. The user enters a specific address from the 127/8 range, which is then inserted in the multipath type 8 information field of the Downstream Mapping TLV. The CPM code at each LSR in the path of the target FEC runs the same hash routine as the datapath and replies in the Downstream Mapping TLV with the specific outgoing interface the packet would have been forwarded to if it had not expired at this node and if the DEST IP field in the packet’s header was set to the 127/8 address value inserted in the multipath type 8 information.
  6. The ldp-treetrace tool always uses the multipath type=8 value and inserts a range of 127/8 addresses instead of a single address to exercise multiple ECMP paths of an LDP FEC. The behavior is the same as the lsp-trace command with the path-destination option enabled.
  7. The path-destination option can also be used to exercise a specific ECMP path of an LDP FEC tunneled over an RSVP LSP or ECMP path of an LDP FEC stitched to a BGP FEC in the presence of BGP ECMP paths. The user must enable the use of the DDMAP TLV either globally (config>test-oam>mpls-echo-request-downstream-map ddmap) or within the specific ldp-treetrace or LSP trace test (downstream-map-tlv ddmap option).
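The multipath selection described in the preceding points can be sketched as follows. This is a hypothetical illustration only, not the actual CPM hash routine: the function name, the CRC-based hash, and the interface names are all invented for the example, and the real datapath hash is platform-specific.

```python
# Hypothetical sketch of the ECMP path-selection step described above:
# the responder hashes the 127/8 destination address carried in the
# multipath type 8 field and picks one interface from the ECMP set.
# A simple modulo over a stable hash stands in for the real CPM routine.
import ipaddress
import zlib

def select_ecmp_interface(path_destination: str, ecmp_interfaces: list) -> str:
    """Pick the outgoing interface a packet to `path_destination` would use."""
    addr = ipaddress.IPv4Address(path_destination)
    if addr not in ipaddress.IPv4Network("127.0.0.0/8"):
        raise ValueError("path-destination must be in the 127/8 range")
    # Stable hash of the address bytes, reduced modulo the ECMP set size.
    h = zlib.crc32(addr.packed)
    return ecmp_interfaces[h % len(ecmp_interfaces)]

# Sweeping a range of 127/8 addresses (as ldp-treetrace does) exercises
# multiple members of the ECMP set instead of a single path.
paths = {select_ecmp_interface(f"127.0.0.{i}", ["to-LSR-B", "to-LSR-C"])
         for i in range(1, 32)}
```

The key property illustrated is determinism: the same 127/8 address always maps to the same ECMP member, so the sender can exercise one specific path per probe.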

3.1.1.3. LSP ping and LSP trace over unnumbered IP interface

LSP ping operates over a network using unnumbered links without any changes. LSP trace is modified so that the unnumbered interface is correctly encoded in the downstream mapping (DSMAP/DDMAP) TLV.

In an RSVP P2P LSP, the upstream LSR encodes the downstream router ID in the Downstream IP Address field and the local unnumbered interface index value in the Downstream Interface Address field of the DSMAP/DDMAP TLV as per RFC 4379. Both values are taken from the TE database.

For an LDP unicast FEC, the interface index assigned by the peer LSR is not readily available to the LDP control plane. In this case, the alternative method described in RFC 4379 is used: the upstream LSR sets the Address Type to IPv4 Unnumbered, the Downstream IP Address to 127.0.0.1, and the interface index to 0. If an LSR receives an echo request packet with this encoding in the DSMAP/DDMAP TLV, it bypasses interface verification but continues with label validation.
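As a rough illustration of the two encodings above, the following sketch packs only the address-type, downstream IP address, and interface index fields. The helper name and the simplified layout are assumptions; the full DSMAP TLV carries additional fields, as defined in RFC 4379.

```python
# Illustrative packing of the downstream mapping fields for the two
# unnumbered-interface cases described above. Layout simplified; the
# real DSMAP TLV (RFC 4379 section 3.3) has more fields.
import struct
import ipaddress

ADDR_TYPE_IPV4_UNNUMBERED = 2  # per the RFC 4379 address-type values

def dsmap_unnumbered(downstream_ip: str, if_index: int) -> bytes:
    """Pack address type, downstream IPv4 address, and interface index."""
    return struct.pack(
        "!B4sI",
        ADDR_TYPE_IPV4_UNNUMBERED,
        ipaddress.IPv4Address(downstream_ip).packed,
        if_index,
    )

# RSVP P2P LSP: downstream router ID and TE-database interface index.
rsvp_fields = dsmap_unnumbered("10.20.1.4", 5)
# LDP unicast FEC: peer ifindex unknown, so 127.0.0.1 and index 0,
# which tells the receiver to bypass interface verification.
ldp_fields = dsmap_unnumbered("127.0.0.1", 0)
```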

3.1.1.4. Downstream Detailed Mapping (DDMAP) TLV

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

The DDMAP TLV provides exactly the same features as the existing DSMAP TLV, plus enhancements to trace the details of LSP stitching and LSP hierarchy. The latter is achieved using a new sub-TLV of the DDMAP TLV called the FEC stack change sub-TLV. Figure 8 and Figure 9 show the structures of these two objects as defined in RFC 6424.

Figure 8:  DDMAP TLV 

The DDMAP TLV format is derived from the DSMAP TLV format. The key change is that variable length and optional fields have been converted into sub-TLVs. The fields have the same use and meaning as in RFC 4379.

Figure 9:  FEC stack change sub-TLV 

The operation type specifies the action associated with the FEC stack change. The following operation types are defined.

          Type #     Operation
          ------     ---------
             1          Push
             2          Pop

More details on the processing of the fields of the FEC stack change sub-TLV are provided later in this section.
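The Push and Pop operation types above can be modeled with a short sketch of how a sequence of FEC stack changes is applied to a target FEC stack. The function name and the string representation of FECs are illustrative only.

```python
# Minimal model of applying FEC stack change operations (1 = Push,
# 2 = Pop, per the table above) to a target FEC stack. FECs are plain
# strings here purely for illustration.
OP_PUSH, OP_POP = 1, 2

def apply_fec_stack_changes(fec_stack: list, changes: list) -> list:
    """Return the new target FEC stack after applying (op, fec) changes."""
    stack = list(fec_stack)
    for op, fec in changes:
        if op == OP_POP:
            stack.pop()           # remove the topmost FEC
        elif op == OP_PUSH:
            stack.append(fec)     # push the tunneling/stitching FEC
        else:
            raise ValueError(f"unknown operation type {op}")
    return stack

# A stitching node pops the stitched FEC and pushes the stitching FEC:
new_stack = apply_fec_stack_changes(
    ["LDP:10.20.1.2/32"],
    [(OP_POP, None), (OP_PUSH, "SR-ISIS:10.20.1.2/32")],
)
# new_stack is now ["SR-ISIS:10.20.1.2/32"]
```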

The following shows the command usage to configure which downstream mapping TLV to use globally on a system.

configure test-oam mpls-echo-request-downstream-map {dsmap | ddmap}

This command specifies which format of the downstream mapping TLV to use in all LSP trace packets and LDP tree trace packets originated on this node. The Downstream Mapping (DSMAP) TLV is the original format in RFC 4379 and is the default value. The Downstream Detailed Mapping (DDMAP) TLV is the new enhanced format specified in RFC 6424.

This command applies to LSP trace of an RSVP P2P LSP, a BGP IPv4 label route, or an LDP unicast FEC, and to LDP tree trace of a unicast LDP FEC. It does not apply to LSP trace of an RSVP P2MP LSP, which always uses the DDMAP TLV.

The global DSMAP/DDMAP setting impacts the behavior of both OAM LSP trace packets and SAA test packets of type lsp-trace and is used by the sender node when one of the following events occurs:

  1. An SAA test of type lsp-trace is created (not modified) and no value is specified for the per-test downstream-map-tlv {dsmap | ddmap | none} option. In this case the SAA test downstream-map-tlv value defaults to the global mpls-echo-request-downstream-map value.
  2. An OAM test of type lsp-trace test is executed and no value is specified for the per-test downstream-map-tlv {dsmap | ddmap | none} option. In this case, the OAM test downstream-map-tlv value defaults to the global mpls-echo-request-downstream-map value.

A consequence of the preceding rules is that a change to the value of mpls-echo-request-downstream-map option does not affect the value inserted in the downstream mapping TLV of existing tests.
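A minimal sketch of this defaulting rule, with an invented helper name, is shown below; the point is that the global value is captured when the test is created or executed, so later global changes do not affect it.

```python
# Sketch of the defaulting rule described above: the per-test
# downstream-map-tlv setting wins; otherwise the test captures the
# global mpls-echo-request-downstream-map value at creation/execution
# time. Helper name is hypothetical.
def effective_downstream_map_tlv(per_test_value, global_value):
    """Per-test setting wins; otherwise fall back to the global value."""
    return per_test_value if per_test_value is not None else global_value

# Captured when the SAA test is created (not when it later runs):
captured = effective_downstream_map_tlv(None, "dsmap")
global_value = "ddmap"   # a later change to the global option...
# ...does not alter the value already captured by the existing test.
assert captured == "dsmap"
```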

The following are the details of the processing of the new DDMAP TLV:

  1. When either the DSMAP TLV or the DDMAP TLV is received in an echo request message, the responder node will include the same type of TLV in the echo reply message with the correct downstream interface information and label stack information.
  2. If an echo request message without a Downstream Mapping TLV (DSMAP or DDMAP) expires at a node which is not the egress for the target FEC stack, the responder node always includes the DSMAP TLV in the echo reply message. This can occur in the following cases:
    1. The user issues an LSP trace from a sender node with a min-ttl value higher than 1 and a max-ttl value lower than the number of hops to reach the egress of the target FEC stack. This is the sender node behavior when the global configuration or the per-test setting of the DSMAP/DDMAP is set to DSMAP.
    2. The user issues an LSP ping from a sender node with a ttl value lower than the number of hops to reach the egress of the target FEC stack. This is the sender node behavior when the global configuration of the DSMAP/DDMAP is set to DSMAP.
    3. The behavior in case (1) is changed when the global configuration or the per-test setting of the Downstream Mapping TLV is set to DDMAP. In this case, the sender node includes the DDMAP TLV with the Downstream IP address field set to the all-routers multicast address, as per Section 3.3 of RFC 4379. The responder node then bypasses the interface and label stack validation and replies with a DDMAP TLV with the correct downstream information for the target FEC stack.
  3. A sender node never includes the DSMAP or DDMAP TLV in an lsp-ping message.

3.1.1.5. Using DDMAP TLV in LSP stitching and LSP hierarchy

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

In addition to supporting the same features as the DSMAP TLV, the new DDMAP TLV addresses the following scenarios:

  1. Full validation of an LDP FEC stitched to a BGP IPv4 label route. In this case, the LSP trace message is inserted from the LDP LSP segment or from the stitching point.
  2. Full validation of a BGP IPv4 label route stitched to an LDP FEC. The LSP trace message is inserted from the BGP LSP segment or from the stitching point.
  3. Full validation of an LDP FEC which is stitched to a BGP LSP and stitched back into an LDP FEC. In this case, the LSP trace message is inserted from the LDP segments or from the stitching points.
  4. Full validation of an LDP FEC tunneled over an RSVP LSP using LSP trace.
  5. Full validation of a BGP IPv4 label route tunneled over an RSVP LSP or an LDP FEC.

To correctly check a target FEC which is stitched to another FEC (stitching FEC) of the same or a different type, or which is tunneled over another FEC (tunneling FEC), it is necessary for the responding nodes to provide details about the FEC manipulation back to the sender node. This is achieved via the use of the new FEC stack change sub-TLV in the Downstream Detailed Mapping TLV (DDMAP) defined in RFC 6424.

When the user configures the use of the DDMAP TLV on a trace for an LSP that does not undergo stitching or tunneling operation in the network, the procedures at the sender and responder nodes are the same as in the case of the existing DSMAP TLV.

This feature however introduces changes to the target FEC stack validation procedures at the sender and responder nodes in the case of LSP stitching and LSP hierarchy. These changes pertain to the processing of the new FEC stack change sub-TLV in the new DDMAP TLV and the new return code 15 Label switched with FEC change. The following is a description of the main changes which are a superset of the rules described in Section 4 of RFC 6424 to allow greater scope of interoperability with other vendor implementations.

3.1.1.5.1. Responder node procedures

  1. As a responder node, the node always inserts a global return code of either 3 Replying router is an egress for the FEC at stack-depth <RSC> or 14 See DDMAP TLV for Return Code and Return Subcode.
  2. When the responder node inserts a global return code of 3, it will not include a DDMAP TLV.
  3. When the responder node includes the DDMAP TLV, it inserts a global return code 14 See DDMAP TLV for Return Code and Return Subcode and:
    1. On a success response, include a return code of 15 in the DDMAP TLV for each downstream that has a FEC stack change sub-TLV.
    2. On a success response, include a return code 8 Label switched at stack-depth <RSC> in the DDMAP TLV for each downstream if no FEC stack change sub-TLV is present.
    3. On a failure response, include an appropriate error return code in the DDMAP TLV for each downstream.
  4. A tunneling node indicates that it is pushing a FEC (the tunneling FEC) on top of the target FEC stack TLV by including a FEC stack change sub-TLV in the DDMAP TLV with a FEC operation type value of PUSH. It also includes a return code 15 Label switched with FEC change. The downstream interface address and downstream IP address fields of the DDMAP TLV are populated for the pushed FEC. The remote peer address field in the FEC stack change sub-TLV is populated with the address of the control plane peer for the pushed FEC. The Label stack sub-TLV provides the full label stack over the downstream interface.
  5. A node that is stitching a FEC indicates that it is performing a POP operation for the stitched FEC followed by a PUSH operation for the stitching FEC and potentially one PUSH operation for the transport tunnel FEC. It therefore includes two or more FEC stack change sub-TLVs in the DDMAP TLV in the echo reply message. It also includes a return code 15 Label switched with FEC change. The downstream interface address and downstream IP address fields of the DDMAP TLV are populated for the stitching FEC. The remote peer address field in the FEC stack change sub-TLV of type POP is populated with a null value (0.0.0.0). The remote peer address field in the FEC stack change sub-TLV of type PUSH is populated with the address of the control plane peer for the tunneling FEC. The Label stack sub-TLV provides the full label stack over the downstream interface.
  6. If the responder node is the egress for one or more FECs in the target FEC stack, it must reply with no DDMAP TLV and with a return code 3 Replying router is an egress for the FEC at stack-depth <RSC>. RSC must be set to the depth of the topmost FEC. This operation is iterative: on receipt of the echo reply message, the sender node pops the topmost FEC from the target FEC stack TLV and resends the echo request message with the same TTL value. The responder node then performs exactly the same operation until all FECs are popped, or until the topmost FEC in the target FEC stack TLV matches the tunneled or stitched FEC. In the latter case, processing of the target FEC stack TLV again follows step (1) or (2).
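The responder-node rules above can be condensed into a small sketch. The function and field names are invented for illustration, and the error-path handling from step (3) is omitted.

```python
# Condensed model of the responder-node return-code rules described
# above: global code 3 (egress, no DDMAP TLV) or 14 (see DDMAP TLV),
# with per-downstream code 15 when a FEC stack change sub-TLV is
# present and 8 ("Label switched at stack-depth") otherwise.
def responder_reply(is_egress: bool, downstreams: list) -> dict:
    """downstreams: list of dicts, each with a 'fec_stack_changes' list."""
    if is_egress:
        # Egress for the topmost FEC: global code 3, no DDMAP TLV.
        return {"global_return_code": 3, "ddmap": None}
    reply = {"global_return_code": 14, "ddmap": []}
    for ds in downstreams:
        code = 15 if ds.get("fec_stack_changes") else 8
        reply["ddmap"].append({"return_code": code, **ds})
    return reply
```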

3.1.1.5.2. Sender node procedures

  1. If the echo reply message contains the return code 14 See DDMAP TLV for Return Code and Return Subcode and the DDMAP TLV has a return code 15 Label switched with FEC change, the sender node adjusts the target FEC Stack TLV in the echo request message for the next value of the TTL to reflect the operation on the current target FEC stack as indicated in the FEC stack change sub-TLV received in the DDMAP TLV of the last echo reply message. In other words, one FEC is popped at most and one or more FECs are pushed as indicated.
  2. If the echo reply message contains the return code 3 Replying router is an egress for the FEC at stack-depth <RSC>, then:
    1. If the value for the label stack depth specified in the Return Sub-Code (RSC) field is the same as the depth of current target FEC Stack TLV, then the sender node considers the trace operation complete and terminates it. A responder node will cause this case to occur as per step (6) of the responder node procedures.
    2. If the value for the label stack depth specified in the Return Sub-Code (RSC) field is different from the depth of the current target FEC stack TLV, the sender node must continue the LSP trace with the same TTL value after adjusting the target FEC stack TLV by removing the top FEC. This step continues iteratively until the value for the label stack depth specified in the RSC field is the same as the depth of the current target FEC stack TLV, at which point step (1) is performed. A responder node will cause this case to occur as per step (6) of the responder node procedures.
    3. If a DDMAP TLV, with or without a FEC stack change sub-TLV, is included, the sender node must ignore it and processing is performed as per step (1) or (2). A responder node will not cause this case to occur, but a third-party implementation may.
  3. As a sender node, the node can accept an echo reply message with a global return code of either 14 (with a DDMAP TLV return code of 15 or 8) or 15, and correctly process the FEC stack change sub-TLV as per step (1) of the sender node procedures.
  4. If an LSP ping is performed directly to the egress LER of the stitched FEC, there is no DDMAP TLV included in the echo request message and therefore the responder node, which is the egress node, still replies with return code 4 Replying router has no mapping for the FEC at stack-depth <RSC>. This case cannot be resolved with this feature.
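The sender-node procedures above can be sketched as a single decision function. The names and the simplified reply dictionary are assumptions, and only the success-path cases are modeled.

```python
# Hedged sketch of the sender-node handling described above: on return
# code 3 with a Return Sub-Code (RSC) shallower than the current target
# FEC stack, the sender pops the top FEC and retries with the SAME TTL;
# on return code 14 with DDMAP code 15, it applies the FEC stack changes
# and moves to the next TTL.
def next_probe(fec_stack, ttl, reply):
    """Return (new_fec_stack, new_ttl, done) for the next echo request."""
    if reply["code"] == 3:                       # egress for topmost FEC
        if reply["rsc"] == len(fec_stack):
            return fec_stack, ttl, True          # trace complete
        return fec_stack[:-1], ttl, False        # pop top FEC, same TTL
    if reply["code"] == 14 and reply.get("ddmap_code") == 15:
        stack = list(fec_stack)
        for op, fec in reply["fec_stack_changes"]:
            stack = stack[:-1] if op == "POP" else stack + [fec]
        return stack, ttl + 1, False             # adjusted stack, next TTL
    return fec_stack, ttl + 1, False             # e.g. plain code 8 hop
```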

Note the following limitation when a BGP IPv4 label route is resolved to an LDP FEC, which is in turn resolved to an RSVP LSP, all on the same node. This two-level LSP hierarchy is not supported as a feature on SR OS, but the user is not prevented from configuring it. In that case, user and OAM packets are forwarded by the sender node using two labels (T-LDP and BGP). The LSP trace fails on the downstream node with return code 1 Malformed echo request received, because there is no label entry for the RSVP label.

3.1.2. MPLS OAM support in segment routing

Note:

This feature is supported only on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

MPLS OAM supports segment routing extensions to lsp-ping and lsp-trace as specified in draft-ietf-mpls-spring-lsp-ping.

When the data plane uses MPLS encapsulation, MPLS OAM tools such as lsp-ping and lsp-trace can be used to check connectivity and trace the path to any midpoint or endpoint of an SR-ISIS or SR-OSPF shortest path tunnel.

The CLI options for lsp-ping and lsp-trace are under OAM and SAA for SR-ISIS and SR-OSPF node SID tunnels.

3.1.2.1. SR extensions for LSP-PING and LSP-TRACE

This section describes how MPLS OAM models the SR tunnel types.

An SR shortest path tunnel, SR-ISIS or SR-OSPF tunnel, uses a single FEC element in the target FEC stack TLV. The FEC corresponds to the prefix of the node SID in a specific IGP instance.

The following figure shows the format of the IPv4 IGP-prefix segment ID.

Figure 10:  IPv4 IGP-prefix SID format 

In this format, the fields are as follows:

  1. IPv4 Prefix
    The IPv4 Prefix field carries the IPv4 prefix to which the segment ID is assigned. For an anycast segment ID, this field carries the IPv4 anycast address. If the prefix is shorter than 32 bits, trailing bits must be set to zero.
  2. Prefix Length
    The Prefix Length field is one octet. It gives the length of the prefix in bits; allowed values are 1 to 32.
  3. Protocol
    The Protocol field is set to 1 if the IGP protocol is OSPF and 2 if the IGP protocol is IS-IS.
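A rough packing of the three fields above (prefix with trailing bits zeroed, one-octet prefix length, one-octet protocol) might look like the following. The exact sub-TLV layout, including any reserved or padding octets, is defined in draft-ietf-mpls-spring-lsp-ping and is deliberately simplified here.

```python
# Illustrative packing of the IPv4 IGP-prefix SID FEC fields described
# above. Simplified layout; not the exact wire format of the sub-TLV.
import struct
import ipaddress

def pack_igp_prefix_sid(prefix: str, protocol: int) -> bytes:
    """protocol: 1 = OSPF, 2 = IS-IS, per the field description above."""
    # strict=False zeroes any trailing bits beyond the prefix length.
    net = ipaddress.IPv4Network(prefix, strict=False)
    return struct.pack("!4sBB", net.network_address.packed,
                       net.prefixlen, protocol)

# A node SID prefix advertised by IS-IS:
fec = pack_igp_prefix_sid("10.20.1.6/32", 2)
```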

Both lsp-ping and lsp-trace apply to the following contexts:

  1. SR-ISIS or SR-OSPF shortest path IPv4 tunnel
  2. SR-ISIS IPv4 tunnel stitched to an LDP IPv4 FEC
  3. BGP IPv4 LSP resolved over an SR-ISIS IPv4 tunnel or an SR-OSPF IPv4 tunnel, including support for BGP LSP across AS boundaries and for ECMP next-hops at the transport tunnel level

3.1.2.2. Operating guidelines on SR-ISIS or SR-OSPF tunnels

The following operating guidelines apply to lsp-ping and lsp-trace.

  1. The sender node builds the target FEC stack TLV with a single FEC element corresponding to the destination node SID of the SR-ISIS or SR-OSPF tunnel.
  2. A node SID label swapped at the LSR results in return code 8, “Label switched at stack-depth <RSC>” in accordance with RFC 4379.
  3. A node SID label that is popped at the LSR results in return code 3, “Replying router is an egress for the FEC at stack-depth <RSC>”.
  4. The lsp-trace command is supported with the inclusion of the DSMAP TLV, the DDMAP TLV, or none. If none is configured, no map TLV is sent. The downstream interface information is returned, along with the egress label for the node SID tunnel and the protocol that resolved the node SID at the responder node.

The following figure shows a sample topology for an lsp-ping and lsp-trace for SR-ISIS node SID tunnels.

Figure 11:  Testing MPLS OAM with SR tunnels 

Given this topology, the following output is an example of LSP-PING on DUT-A for target node SID on DUT-F.

*A:Dut-A# oam lsp-ping sr-isis prefix 10.20.1.6/32 igp-instance 0 detail
LSP-PING 10.20.1.6/32: 80 bytes MPLS payload
Seq=1, send from intf int_to_B, reply from 10.20.1.6
       udp-data-len=32 ttl=255 rtt=3.2ms rc=3 (EgressRtr)
---- LSP 10.20.1.6/32 PING Statistics ----
1 packets sent, 1 packets received, 0.00% packet loss
round-trip min = 3.2ms, avg = 3.2ms, max = 3.2ms, stddev = 0.000ms

The following output is an example of LSP-TRACE on DUT-A for target node SID on DUT-F (DSMAP TLV):

*A:Dut-A# oam lsp-trace sr-isis prefix 10.20.1.6/32 igp-instance 0 detail
lsp-trace to 10.20.1.6/32: 0 hops min, 0 hops max, 108 byte packets
1  10.20.1.2  rtt=2.29ms rc=8(DSRtrMatchLabel) rsc=1
     DS 1: ipaddr=10.10.4.4 ifaddr=10.10.4.4 iftype=ipv4Numbered MRU=1496
           label[1]=26406 protocol=6(ISIS)
2  10.20.1.4  rtt=3.74ms rc=8(DSRtrMatchLabel) rsc=1
     DS 1: ipaddr=10.10.9.6 ifaddr=10.10.9.6 iftype=ipv4Numbered MRU=1496
           label[1]=26606 protocol=6(ISIS)
3  10.20.1.6  rtt=4.97ms rc=3(EgressRtr) rsc=1

The following output is an example of LSP-TRACE on DUT-A for target node SID on DUT-F (DDMAP TLV).

*A:Dut-A# oam lsp-trace sr-isis prefix 10.20.1.6/32 igp-instance 0 downstream-map-tlv ddmap detail
lsp-trace to 10.20.1.6/32: 0 hops min, 0 hops max, 108 byte packets
1  10.20.1.2  rtt=2.56ms rc=8(DSRtrMatchLabel) rsc=1
     DS 1: ipaddr=10.10.4.4 ifaddr=10.10.4.4 iftype=ipv4Numbered MRU=1496
           label[1]=26406 protocol=6(ISIS)
2  10.20.1.4  rtt=3.59ms rc=8(DSRtrMatchLabel) rsc=1
     DS 1: ipaddr=10.10.9.6 ifaddr=10.10.9.6 iftype=ipv4Numbered MRU=1496
           label[1]=26606 protocol=6(ISIS)
3  10.20.1.6  rtt=5.00ms rc=3(EgressRtr) rsc=1

3.1.2.3. Operating guidelines on SR-ISIS tunnel stitched to LDP FEC

The following operating guidelines apply to lsp-ping and lsp-trace:

  1. The responder and sender nodes must be in the same domain (SR or LDP) for lsp-ping tool operation.
  2. The lsp-trace tool can operate in both LDP and SR domains. When used with the DDMAP TLV, lsp-trace provides the details of the SR-LDP stitching operation at the boundary node. The boundary node as a responder node replies with the FEC stack change TLV, which contains the following operations:
    1. a PUSH operation of the SR (LDP) FEC in the LDP-to-SR (SR-to-LDP) direction
    2. a POP operation of the LDP (SR) FEC in the LDP-to-SR (SR-to-LDP) direction

The following is an output example of the lsp-trace command of the DDMAP TLV for LDP-to-SR direction (symmetric topology LDP-SR-LDP).

*A:Dut-E# oam lsp-trace prefix 10.20.1.2/32 detail downstream-map-tlv ddmap  
lsp-trace to 10.20.1.2/32: 0 hops min, 0 hops max, 108 byte packets
1  10.20.1.3  rtt=3.25ms rc=15(LabelSwitchedWithFecChange) rsc=1 
     DS 1: ipaddr=10.10.3.2 ifaddr=10.10.3.2 iftype=ipv4Numbered MRU=1496 
           label[1]=26202 protocol=6(ISIS)
           fecchange[1]=POP  fectype=LDP IPv4 prefix=10.20.1.2 remotepeer=0.0.0.0 (Unknown)
           fecchange[2]=PUSH fectype=SR Ipv4 Prefix prefix=10.20.1.2 remotepeer=10.10.3.2
2  10.20.1.2  rtt=4.32ms rc=3(EgressRtr) rsc=1 
*A:Dut-E#

The following output is an example of the lsp-trace command of the DDMAP TLV for SR-to-LDP direction (symmetric topology LDP-SR-LDP).

*A:Dut-B# oam lsp-trace prefix 10.20.1.5/32 detail downstream-map-tlv ddmap sr-isis 
lsp-trace to 10.20.1.5/32: 0 hops min, 0 hops max, 108 byte packets
1  10.20.1.3  rtt=2.72ms rc=15(LabelSwitchedWithFecChange) rsc=1 
     DS 1: ipaddr=10.11.5.5 ifaddr=10.11.5.5 iftype=ipv4Numbered MRU=1496 
           label[1]=262143 protocol=3(LDP)
            fecchange[1]=POP  fectype=SR Ipv4 Prefix prefix=10.20.1.5 remotepeer=0.0.0.0 (Unknown)
           fecchange[2]=PUSH fectype=LDP IPv4 prefix=10.20.1.5 remotepeer=10.11.5.5 
2  10.20.1.5  rtt=4.43ms rc=3(EgressRtr) rsc=1

3.1.2.4. Operation on a BGP IPv4 LSP resolved over an SR-ISIS IPv4 tunnel or an SR-OSPF IPv4 tunnel

The 7210 SAS enhances lsp-ping and lsp-trace of a BGP IPv4 LSP resolved over an SR-ISIS IPv4 tunnel or an SR-OSPF IPv4 tunnel. The enhancement reports the full set of ECMP next-hops for the transport tunnel at both the ingress PE and the ABR or ASBR. The list of downstream next-hops is reported in the DSMAP or DDMAP TLV.

If an lsp-trace of the BGP IPv4 LSP is initiated with the path-destination option specified, the CPM hash code at the responder node selects the outgoing interface to return in the DSMAP or DDMAP TLV. The decision is based on the modulo operation of the hash value on the label stack or the IP headers (where the DST IP is replaced by the specific 127/8 prefix address in the multipath type 8 field of the DSMAP or DDMAP) of the echo request message and the number of outgoing interfaces in the ECMP set.
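The modulo-based interface selection described above can be sketched as follows. This is an illustrative stand-in only: the real CPM hash algorithm is not specified here, so a CRC32 over the label stack and the 127/8 path-destination address is used as a deterministic placeholder.

```python
import zlib

def pick_downstream_interface(label_stack, path_destination, ecmp_interfaces):
    """Illustrative stand-in for the responder's CPM hash: hash the label
    stack together with the 127/8 path-destination address, then take the
    result modulo the number of outgoing interfaces in the ECMP set."""
    key = ",".join(str(label) for label in label_stack) + "|" + path_destination
    hash_value = zlib.crc32(key.encode("ascii"))
    return ecmp_interfaces[hash_value % len(ecmp_interfaces)]
```

For a given echo request, the same inputs always select the same member of the ECMP set, which is what allows successive lsp-trace probes to exercise a specific path.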

The following figure shows a sample topology used in the subsequent BGP over SR-OSPF and BGP over SR-ISIS examples.

Figure 12:  Sample topology for BGP over SR-OSPF and SR-ISIS 

The following outputs are examples of the lsp-trace command for a hierarchical tunnel consisting of a BGP IPv4 LSP resolved over an SR-ISIS IPv4 tunnel or an SR-OSPF IPv4 tunnel.

The following output is an example of BGP over SR-OSPF.

*A:Dut-A# oam lsp-trace bgp-label prefix 10.21.1.6/32 detail downstream-map-tlv ddmap path-destination 127.1.1. 
lsp-trace to 10.21.1.6/32: 0 hops min, 0 hops max, 168 byte packets
1  10.20.1.3  rtt=2.31ms rc=8(DSRtrMatchLabel) rsc=2 
     DS 1: ipaddr=10.10.5.5 ifaddr=10.10.5.5 iftype=ipv4Numbered MRU=1496 
           label[1]=27506 protocol=5(OSPF)
           label[2]=262137 protocol=2(BGP)
     DS 2: ipaddr=10.10.11.4 ifaddr=10.10.11.4 iftype=ipv4Numbered MRU=1496 
           label[1]=27406 protocol=5(OSPF)
           label[2]=262137 protocol=2(BGP)
     DS 3: ipaddr=10.10.11.5 ifaddr=10.10.11.5 iftype=ipv4Numbered MRU=1496 
           label[1]=27506 protocol=5(OSPF)
           label[2]=262137 protocol=2(BGP)
2  10.20.1.4  rtt=4.91ms rc=8(DSRtrMatchLabel) rsc=2 
     DS 1: ipaddr=10.10.9.6 ifaddr=10.10.9.6 iftype=ipv4Numbered MRU=1492 
           label[1]=27606 protocol=5(OSPF)
           label[2]=262137 protocol=2(BGP)
3  10.20.1.6  rtt=4.73ms rc=3(EgressRtr) rsc=2 
3  10.20.1.6  rtt=5.44ms rc=3(EgressRtr) rsc=1 
*A:Dut-A# 

The following output is an example of BGP over SR-ISIS.

A:Dut-A# oam lsp-trace bgp-label prefix 10.21.1.6/32 detail downstream-map-tlv ddmap path-destination 127.1.1.1 
lsp-trace to 10.21.1.6/32: 0 hops min, 0 hops max, 168 byte packets
1  10.20.1.3  rtt=3.33ms rc=8(DSRtrMatchLabel) rsc=2 
     DS 1: ipaddr=10.10.5.5 ifaddr=10.10.5.5 iftype=ipv4Numbered MRU=1496 
           label[1]=28506 protocol=6(ISIS)
           label[2]=262139 protocol=2(BGP)
     DS 2: ipaddr=10.10.11.4 ifaddr=10.10.11.4 iftype=ipv4Numbered MRU=1496 
           label[1]=28406 protocol=6(ISIS)
           label[2]=262139 protocol=2(BGP)
     DS 3: ipaddr=10.10.11.5 ifaddr=10.10.11.5 iftype=ipv4Numbered MRU=1496 
           label[1]=28506 protocol=6(ISIS)
           label[2]=262139 protocol=2(BGP)
2  10.20.1.4  rtt=5.12ms rc=8(DSRtrMatchLabel) rsc=2 
     DS 1: ipaddr=10.10.9.6 ifaddr=10.10.9.6 iftype=ipv4Numbered MRU=1492 
           label[1]=28606 protocol=6(ISIS)
           label[2]=262139 protocol=2(BGP)
3  10.20.1.6  rtt=8.41ms rc=3(EgressRtr) rsc=2 
3  10.20.1.6  rtt=6.93ms rc=3(EgressRtr) rsc=1 

Assuming the topology in the following figure includes an eBGP peering between nodes B and C, the BGP IPv4 LSP spans the AS boundary and resolves to an SR-ISIS tunnel within each AS.

Figure 13:  Sample topology for BGP over SR-ISIS in inter-AS option C  

The following output is an example of BGP over SR-ISIS using inter-AS option C.

*A:Dut-A# oam lsp-trace bgp-label prefix 10.20.1.6/32 src-ip-address 10.20.1.1 detail downstream-map-tlv ddmap path-destination 127.1.1.1 
lsp-trace to 10.20.1.6/32: 0 hops min, 0 hops max, 168 byte packets
1  10.20.1.2  rtt=2.69ms rc=3(EgressRtr) rsc=2 
1  10.20.1.2  rtt=3.15ms rc=8(DSRtrMatchLabel) rsc=1 
     DS 1: ipaddr=10.10.3.3 ifaddr=10.10.3.3 iftype=ipv4Numbered MRU=0 
           label[1]=262127 protocol=2(BGP)
2  10.20.1.3  rtt=5.26ms rc=15(LabelSwitchedWithFecChange) rsc=1 
     DS 1: ipaddr=10.10.5.5 ifaddr=10.10.5.5 iftype=ipv4Numbered MRU=1496 
           label[1]=26506 protocol=6(ISIS)
           label[2]=262139 protocol=2(BGP)
            fecchange[1]=PUSH fectype=SR Ipv4 Prefix prefix=10.20.1.6 remotepeer=10.10.5.5 
3  10.20.1.5  rtt=7.08ms rc=8(DSRtrMatchLabel) rsc=2 
     DS 1: ipaddr=10.10.10.6 ifaddr=10.10.10.6 iftype=ipv4Numbered MRU=1496 
           label[1]=26606 protocol=6(ISIS)
            label[2]=262139 protocol=2(BGP)
4  10.20.1.6  rtt=9.41ms rc=3(EgressRtr) rsc=2 
4  10.20.1.6  rtt=9.53ms rc=3(EgressRtr) rsc=1

3.1.3. SDP diagnostics

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

The 7210 SAS SDP diagnostics are SDP ping and SDP MTU path discovery.

3.1.3.1. SDP ping

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

SDP ping performs in-band unidirectional or round-trip connectivity tests on SDPs. The SDP ping OAM packets are sent in-band, in the tunnel encapsulation, so they follow the same path as traffic within the service. The SDP ping response can be received out-of-band in the control plane, or in-band using the data plane for a round-trip test.

For a uni-directional test, SDP ping tests:

  1. egress SDP ID encapsulation
  2. ability to reach the far-end IP address of the SDP ID within the SDP encapsulation
  3. path MTU to the far-end IP address over the SDP ID
  4. forwarding class mapping between the near-end SDP ID encapsulation and the far-end tunnel termination

For a round-trip test, SDP ping uses a local egress SDP ID and an expected remote SDP ID. Since SDPs are uni-directional tunnels, the remote SDP ID must be specified and must exist as a configured SDP ID on the far-end 7210 SAS. SDP round trip testing is an extension of SDP connectivity testing with the additional ability to test:

  1. remote SDP ID encapsulation
  2. potential service round trip time
  3. round trip path MTU
  4. round trip forwarding class mapping

3.1.3.2. SDP MTU path discovery

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

In a large network, network devices support a variety of packet sizes that can be transmitted across their interfaces. This capability is referred to as the Maximum Transmission Unit (MTU) of network interfaces. It is important to understand the end-to-end path MTU when provisioning services, especially for virtual leased line (VLL) services, where the service must support the ability to transmit the largest customer packet.

The path MTU discovery tool enables the service provider to determine the exact MTU supported by the network's physical links between the service ingress and service termination points, accurate to one byte.
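The one-byte accuracy suggests a probe-based search. The following sketch is a hypothetical illustration, not the tool's actual algorithm; the probe callback and the size bounds are assumptions. It binary-searches for the largest message size that still gets through.

```python
def discover_path_mtu(probe, low=512, high=9212):
    """Return the largest size for which probe(size) succeeds, accurate to
    one byte. probe() stands in for sending an OAM message of that size
    end-to-end and waiting for a reply; sizes below `low` are not tried."""
    largest_ok = None
    while low <= high:
        mid = (low + high) // 2
        if probe(mid):
            largest_ok = mid     # this size fits; try larger
            low = mid + 1
        else:
            high = mid - 1       # this size was dropped; try smaller
    return largest_ok
```

With a simulated path that drops anything above 1496 bytes, the search converges on exactly 1496 in a handful of probes.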

3.1.4. Service diagnostics

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

The Nokia service ping feature provides end-to-end connectivity testing for an individual service. Service ping operates at a higher level than the SDP diagnostics in that it verifies an individual service and not the collection of services carried within an SDP.

Service ping is initiated from a 7210 SAS router to verify round-trip connectivity and delay to the far end of the service. The Nokia implementation functions for MPLS tunnels and tests the following from edge to edge:

  1. tunnel connectivity
  2. VC label mapping verification
  3. service existence
  4. service provisioned parameter verification
  5. round trip path verification
  6. service dynamic configuration verification

3.1.5. VPLS MAC diagnostics

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

While the LSP ping, SDP ping and service ping tools enable transport tunnel testing and verify whether the correct transport tunnel is used, they do not provide the means to test the learning and forwarding functions on a per-VPLS-service basis.

It is conceivable that, while tunnels are operational and correctly bound to a service, an incorrect Forwarding Information Base (FIB) table for a service could cause connectivity issues in the service that are not detected by the ping tools. Nokia has developed VPLS OAM functionality to specifically test all the critical functions on a per-service basis. These tools are based primarily on the IETF document draft-stokes-vkompella-ppvpn-hvpls-oam-xx.txt, Testing Hierarchical Virtual Private LAN Services.

The VPLS OAM tools are:

  1. MAC ping: provides an end-to-end test to identify the egress customer-facing port where a customer MAC address was learned. MAC ping can also be used with a broadcast MAC address to identify all egress points of a service for the specified broadcast MAC.
  2. MAC trace: provides the ability to trace a specified MAC address hop-by-hop until the last node in the service domain. An SAA test with MAC trace is considered successful when there is a reply from a far-end node indicating that it has the destination MAC address on an egress SAP or the CPM.
  3. CPE ping: provides the ability to check network connectivity to the specified client device within the VPLS. CPE ping returns the MAC address of the client, as well as the SAP and PE at which it was learned.
  4. MAC populate: allows specified MAC addresses to be injected into the VPLS service domain. This triggers learning of the injected MAC address by all participating nodes in the service. This tool is generally followed by MAC ping or MAC trace to verify that correct learning occurred.
  5. MAC purge: allows MAC addresses to be flushed from all nodes in a service domain.

3.1.5.1. MAC ping

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

For a MAC ping test, the destination MAC address (unicast or multicast) to be tested must be specified. A MAC ping packet can be sent through the control plane or the data plane. When sent by the control plane, the ping packet goes directly to the destination IP in a UDP/IP OAM packet. If it is sent by the data plane, the ping packet goes out with the data plane format.

In the control plane, a MAC ping is forwarded along the flooding domain if no MAC address bindings exist. If MAC address bindings exist, then the packet is forwarded along those paths (if they are active). Finally, a response is generated only when there is an egress SAP binding to that MAC address. A control plane request is responded to via a control reply only.

In the data plane, a MAC ping is sent with a VC label TTL of 255. This packet traverses each hop using forwarding plane information for next hop, VC label, etc. The VC label is swapped at each service-aware hop, and the VC TTL is decremented. If the VC TTL is decremented to 0, the packet is passed up to the management plane for processing. If the packet reaches an egress node, and would be forwarded out a customer facing port, it is identified by the OAM label after the VC label and passed to the management plane.

MAC pings are flooded when they are unknown at an intermediate node. They are responded to only by the egress nodes that have mappings for that MAC address.
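The data-plane handling described above amounts to a per-hop decision driven by the VC TTL and the node's role. The following is a simplified sketch, not the actual forwarding-plane logic.

```python
def mac_ping_hop(vc_ttl, egress_with_mac_mapping):
    """Decide what a service-aware hop does with an in-band MAC ping.
    The VC label is swapped and its TTL decremented at each such hop."""
    vc_ttl -= 1
    if vc_ttl == 0:
        # TTL exhausted: pass the packet up to the management plane.
        return "management-plane", vc_ttl
    if egress_with_mac_mapping:
        # Egress node about to forward out a customer-facing port: the
        # OAM label after the VC label identifies the packet instead.
        return "management-plane", vc_ttl
    return "forward", vc_ttl
```

An intermediate hop with no mapping for the MAC simply floods the request onward; only egress nodes with a mapping generate a response.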

3.1.5.2. MAC trace

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

A MAC trace functions like an LSP trace with some variations. Operations in a MAC trace are triggered when the VC TTL is decremented to 0.

Like a MAC ping, a MAC trace can be sent either by the control plane or the data plane.

For MAC trace requests sent by the control plane, the destination IP address is determined from the control plane mapping for the destination MAC. If the destination MAC is known to be at a specific remote site, the far-end IP address of that SDP is used. If the destination MAC is not known, the packet is sent unicast to all SDPs in the service with the appropriate squelching.

A control plane MAC traceroute request is sent via UDP/IP. The destination UDP port is the LSP ping port. The source UDP port is assigned by the system; this source UDP port is actually the demultiplexer that identifies the particular instance that sent the request when correlating the reply. The source IP address is the system IP address of the sender.

When a traceroute request is sent via the data plane, the data plane format is used. The reply can be via the data plane or the control plane.

A data plane MAC traceroute request includes the tunnel encapsulation, the VC label, and the OAM label, followed by an Ethernet DLC header and a UDP/IP header. If the mapping for the MAC address is known at the sender, the data plane request is sent down the known SDP with the appropriate tunnel encapsulation and VC label. If it is not known, the request is sent down every SDP (with the appropriate tunnel encapsulation per SDP and the appropriate egress VC label per SDP binding).

The tunnel encapsulation TTL is set to 255. The VC label TTL is initially set to the min-ttl (default is 1). The OAM label TTL is set to 2. The destination IP address is the all-routers multicast address. The source IP address is the system IP of the sender.

The destination UDP port is the LSP ping port. The source UDP port is assigned by the system; this source UDP port is actually the demultiplexer that identifies the particular instance that sent the request when correlating the reply.

The Reply Mode is either 3 (reply via the control plane) or 4 (reply through the data plane), depending on the reply-control option. By default, the data plane request is sent with Reply Mode 3 (control plane reply).

The Ethernet DLC header source MAC address is set to either the system MAC address (if no source MAC is specified) or to the specified source MAC. The destination MAC address is set to the specified destination MAC. The EtherType is set to IP.
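The field values from the preceding paragraphs can be collected into a single sketch. The dictionary layout is purely illustrative, and 224.0.0.2 is assumed to be the all-routers multicast address referred to in the text.

```python
def build_mac_trace_request(system_ip, system_mac, dest_mac,
                            min_ttl=1, source_mac=None):
    """Assemble the header fields of a data-plane MAC traceroute request
    as described in the text (illustrative layout, not a wire format)."""
    return {
        "tunnel_ttl": 255,               # tunnel encapsulation TTL
        "vc_ttl": min_ttl,               # VC label TTL starts at min-ttl
        "oam_ttl": 2,                    # OAM label TTL
        "dst_ip": "224.0.0.2",           # all-routers multicast address
        "src_ip": system_ip,             # system IP of the sender
        "src_mac": source_mac or system_mac,
        "dst_mac": dest_mac,
        "ethertype": "IP",
    }
```

If no source MAC is specified, the system MAC address is used, matching the DLC header behavior described above.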

3.1.5.3. CPE ping

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

The MAC ping OAM tool makes it possible to detect whether a particular MAC address has been learned in a VPLS.

The cpe-ping command extends this capability to detect end-station IP addresses inside a VPLS. A CPE ping for a specific destination IP address within a VPLS is translated to a MAC ping toward a broadcast MAC address. Upon receiving such a MAC ping, each peer PE within the VPLS context triggers an ARP request for the specific IP address. The PE receiving a response to this ARP request reports back to the requesting 7210 SAS. Use of the source IP address 0.0.0.0 is encouraged, to prevent the provider IP address from being learned by the CE.

3.1.5.4. MAC populate

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

MAC populate is used to send a message through the flooding domain to learn a MAC address as if a customer packet with that source MAC address had flooded the domain from that ingress point in the service. This allows the provider to craft a learning history and engineer packets in a particular way to test forwarding plane correctness.

The MAC populate request is sent with a VC TTL of 1, which means that it is received at the forwarding plane at the first hop and passed directly up to the management plane. The packet is then responded to by populating the MAC address in the forwarding plane, like a conventional learning event, although the MAC address is installed as an OAM-type MAC in the FIB to distinguish it from customer MAC addresses.

This packet is then taken by the control plane and flooded out the flooding domain (squelching appropriately, the sender and other paths that would be squelched in a typical flood).

This controlled population of the FIB is important for managing the expected results of an OAM test. The same functions are available by sending the OAM packet as a UDP/IP OAM packet; it is then forwarded to each hop and the management plane performs the flooding.

Options for MAC populate are to force the MAC in the table to type OAM (in case it already existed as dynamic, static, or an OAM-induced learning with some other binding), to prevent new dynamic learning from overwriting the existing OAM MAC entry, and to allow customer packets with this MAC to ingress or egress the network while still using the OAM MAC entry.

Finally, an option to flood the MAC populate request causes each upstream node to learn the MAC address (that is, to populate the local FIB with an OAM MAC entry) and to flood the request along the data plane using the flooding domain.

An age can be provided to age out a particular OAM MAC address after a different interval than other MAC addresses in the FIB.

3.1.5.5. MAC purge

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

MAC purge is used to clear the FIBs of any learned information for a particular MAC address. This allows a controlled OAM test to be performed without learning induced by customer packets. In addition to clearing the FIB of a particular MAC address, the purge can also indicate to the control plane not to allow further learning from customer packets. This allows the FIB to be kept clean and populated only via a MAC populate.

MAC purge follows the same flooding mechanism as the MAC populate.

A UDP/IP version of this command is also available that does not follow the forwarding notion of the flooding domain, but the control plane notion of it.

3.1.6. VLL diagnostics

This section describes VLL diagnostics.

3.1.6.1. VCCV ping

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

VCCV ping is used to check connectivity of a VLL in-band. It checks that the destination (target) PE is the egress for the Layer 2 FEC. It provides a cross-check between the data plane and the control plane. It is in-band, meaning that the VCCV ping message is sent using the same encapsulation and along the same path as user packets in that VLL. This is equivalent to the LSP ping for a VLL service. VCCV ping reuses an LSP ping message format and can be used to test a VLL configured over an MPLS SDP.

3.1.6.1.1. VCCV ping application

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

VCCV effectively creates an IP control channel within the pseudowire between PE1 and PE2. PE2 should be able to distinguish, on the receive side, VCCV control messages from user packets on that VLL. There are three possible methods of encapsulating a VCCV message in a VLL, which translate into three types of control channels:

  1. Use of a Router Alert Label immediately preceding the VC label. This method has the drawback that, if ECMP is applied to the outer LSP label (for example, the transport label), the VCCV message will not follow the same path as the user packets, which means it will not troubleshoot the appropriate path. This method is supported by the 7210 SAS.
  2. Use of the OAM control word as shown in the following figure.
    Figure 14:  OAM control word format 
    The first nibble is set to 0x1. The Format ID and the reserved fields are set to 0 and the channel type is the code point associated with the VCCV IP control channel as specified in the PWE3 IANA registry (RFC 4446). The channel type value of 0x21 indicates that the Associated Channel carries an IPv4 packet.
    The use of the OAM control word assumes that the draft-martini control word is also used on the user packets. This means that if the control word is optional for a VLL and is not configured, the 7210 SAS PE node will only advertise the router alert label as the CC capability in the Label Mapping message. This method is supported by the 7210 SAS.
  3. Set the TTL in the VC label to 1 to force PE2 control plane to process the VCCV message. This method is not guaranteed to work under all circumstances. For instance, the draft mentions some implementations of penultimate hop popping overwrite the TTL field. This method is not supported by the 7210 SAS.

When sending the label mapping message for the VLL, PE1 and PE2 must indicate which of the preceding OAM packet encapsulation methods (for example, which control channel type) they support. This is accomplished by including an optional VCCV TLV in the pseudowire FEC Interface Parameter field. The format of the VCCV TLV is shown in the following figure.

Figure 15:  VCCV TLV 

Note that the absence of the optional VCCV TLV in the Interface parameters field of the pseudowire FEC indicates the PE has no VCCV capability.

The Control Channel (CC) Type field is a bitmask used to indicate if the PE supports none, one, or many control channel types, as follows:

  1. 0x00 None of the following VCCV control channel types are supported
  2. 0x01 PWE3 OAM control word (see Figure 14)
  3. 0x02 MPLS Router Alert Label
  4. 0x04 MPLS inner label TTL = 1

If both PE nodes support more than one of the CC types, then a 7210 SAS PE will make use of the one with the lowest type value. For instance, OAM control word will be used in preference to the MPLS router alert label.
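The lowest-type-value preference can be expressed as a bitmask operation over the advertised CC type masks. A minimal sketch:

```python
CC_TYPE_NAMES = {
    0x01: "PWE3 OAM control word",
    0x02: "MPLS router alert label",
    0x04: "MPLS inner label TTL = 1",
}

def negotiate_cc_type(local_cc_mask, peer_cc_mask):
    """Pick the control channel type: of the CC types both PEs advertise,
    use the one with the lowest type value. Returns None if there is no
    common capability (mask 0x00 on either side, or no overlap)."""
    common = local_cc_mask & peer_cc_mask
    if common == 0:
        return None
    return common & -common   # isolate the lowest set bit
```

For example, if both PEs advertise 0x03 (control word plus router alert label), the OAM control word (0x01) is chosen, matching the preference described above.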

The Connectivity Verification (CV) bit mask field is used to indicate the specific type of VCCV packets to be sent over the VCCV control channel. The valid values are:

0x00 None of the following VCCV packet types are supported.

0x01 ICMP ping. Not applicable to a VLL over an MPLS SDP and, as such, not supported by the 7210 SAS.

0x02 LSP ping. This is used in VCCV-Ping application and applies to a VLL over an MPLS SDP. This is supported by the 7210 SAS.

A VCCV ping is an LSP echo request message as defined in RFC 4379. It contains an L2 FEC stack TLV, which must include the sub-TLV of type 10, “FEC 128 Pseudowire”. It also contains a field that indicates to the destination PE which reply mode to use. The four reply modes defined in RFC 4379 are as follows:

  1. Do not reply. This mode is supported by the 7210 SAS.
  2. Reply via an IPv4 UDP packet. This mode is supported by the 7210 SAS.
  3. Reply with an IPv4 UDP packet with a router alert. This mode sets the router alert bit in the IP header and is not to be confused with the CC type that makes use of the router alert label. This mode is not supported by the 7210 SAS.
  4. Reply via application-level control channel. This mode sends the reply message in-band over the pseudo-wire from PE2 to PE1. PE2 encapsulates the echo reply message using the CC type negotiated with PE1. This mode is supported by the 7210 SAS.

The reply is an LSP echo reply message as defined in RFC 4379. The message is sent as per the reply mode requested by PE1. The return codes supported are the same as those supported in the 7210 SAS LSP ping capability.

The VCCV ping feature (Figure 16) is in addition to the service ping OAM feature, which can be used to test a service between 7210 SAS nodes. The VCCV ping feature can test connectivity of a VLL with any third-party node that is compliant with RFC 5085.

Figure 16:  VCCV ping application 

3.1.6.1.2. VCCV ping in a multi-segment pseudo-wire

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

Pseudo-wire switching is a method for scaling a large network of VLL or VPLS services by removing the need for a full mesh of T-LDP sessions between the PE nodes as the number of these nodes grows over time. Pseudo-wire switching is also used whenever there is a need to deploy a VLL service across two separate routing domains.

In the network, a Termination PE (T-PE) is where the pseudo-wire originates and terminates.

VCCV ping is extended to be able to perform the following OAM functions:

  1. VCCV ping to a destination PE. A VLL FEC ping is a message sent by T-PE1 to test the FEC at T-PE2. The operation at T-PE1 and T-PE2 is the same as in the case of a single-segment pseudo-wire. The pseudo-wire switching node, S-PE1, pops the outer label, swaps the inner (VC) label, decrements the TTL of the VC label, and pushes a new outer label. The 7210 SAS S-PE1 node does not process the VCCV OAM control word unless the VC label TTL expires; in that case, the message is sent to the CPM for further validation and processing. This is the method described in draft-hart-pwe3-segmented-pw-vccv.

3.1.6.2. Automated VCCV-trace capability for MS-pseudo-wire

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

Although tracing of the MS-pseudo-wire path is possible using the methods described in previous sections, these require multiple manual iterations, and the FEC of the last pseudo-wire segment to the target T-PE/S-PE must be known a priori at the node originating the echo request message for each iteration. This mode of operation is referred to as a “ping” mode.

The automated VCCV-trace can trace the entire path of a pseudo-wire with a single command issued at the T-PE or at an S-PE. This is equivalent to LSP-trace and is an iterative process by which the ingress T-PE or S-PE sends successive VCCV ping messages with incrementing TTL values, starting from TTL=1.

The method is described in draft-hart-pwe3-segmented-pw-vccv, VCCV Extensions for Segmented Pseudo-Wire, and is pending acceptance by the PWE3 working group. In each iteration, the source T-PE or S-PE builds the MPLS echo request message in a way similar to VCCV ping. The first message, with TTL=1, has the next-hop S-PE T-LDP session source address in the Remote PE Address field of the pseudo-wire FEC TLV. Each S-PE that terminates and processes the message includes in the MPLS echo reply message the FEC 128 TLV corresponding to the pseudo-wire segment to its downstream node. The inclusion of the FEC TLV in the echo reply message is allowed in RFC 4379, Detecting Multi-Protocol Label Switched (MPLS) Data Plane Failures. The source T-PE or S-PE can then build the next echo request message, with TTL=2, to test the next-next hop of the MS-pseudo-wire. It copies the FEC TLV it received in the echo reply message into the new echo request message. The process terminates when the reply is from the egress T-PE or when a timeout occurs. If specified, the max-ttl parameter in the vccv-trace command causes the trace to stop at the S-PE reached with that TTL, before the T-PE is reached.

The results of a VCCV-trace can be displayed for a subset of the pseudo-wire segments of the end-to-end MS-pseudo-wire path. In this case, the min-ttl and max-ttl parameters are configured accordingly. However, the T-PE/S-PE node still probes all hops up to min-ttl to correctly build the FEC of the desired subset of segments.

Note that this method does not require the use of the downstream mapping TLV in the echo request and echo reply messages.

3.1.6.2.1. VCCV for static pseudo-wire segments

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

MS pseudo-wire is supported with a mix of static and signaled pseudo-wire segments. However, VCCV ping and VCCV-trace are not supported when at least one segment of the MS pseudo-wire is static. Users cannot test the static segment, nor can they test the contiguous signaled segments of the MS-pseudo-wire. VCCV ping and VCCV trace are not supported in static-to-dynamic configurations.

3.1.6.2.2. Detailed VCCV-trace operation

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

A trace can be performed on the MS-pseudo-wire originating from T-PE1 by a single operational command. The following process occurs:

  1. T-PE1 sends a VCCV echo request with TTL set to 1 and a FEC 128 containing the pseudo-wire information of the first segment (pseudowire1 between T-PE1 and S-PE) to S-PE for validation.
  2. S-PE validates the echo request with the FEC 128. Since it is a switching point between the first and second segment it builds an echo reply with a return code of 8 and includes the FEC 128 of the second segment (pseudowire2 between S-PE and T-PE2) and sends the echo reply back to T-PE1.
  3. T-PE1 builds a second VCCV echo request based on the FEC128 in the echo reply from the S-PE. It increments the TTL and sends the next echo request out to T-PE2. Note that the VCCV echo request packet is switched at the S-PE datapath and forwarded to the next downstream segment without any involvement from the control plane.
  4. T-PE2 receives and validates the echo request with the FEC 128 of the pseudowire2 from T-PE1. Since T-PE2 is the destination node or the egress node of the MS-pseudo-wire it replies to T-PE1 with an echo reply with a return code of 3 (egress router) and no FEC 128 is included.
  5. T-PE1 receives the echo reply from T-PE2. T-PE1 is made aware that T-PE2 is the destination of the MS pseudo-wire because the echo reply does not contain the FEC 128 and because its return code is 3. The trace process is completed.
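The five steps above amount to a TTL-incrementing loop driven by the return code and the FEC 128 carried in each reply. The following is a simplified sketch; the send_echo callback is a stand-in for actually transmitting the VCCV echo request.

```python
RC_EGRESS = 3          # EgressRtr: destination T-PE reached
RC_SWITCH_POINT = 8    # label switched at stack-depth: an S-PE

def vccv_trace(first_segment_fec, send_echo, max_ttl=255):
    """Trace an MS-pseudo-wire. send_echo(ttl, fec) stands in for sending
    a VCCV echo request and returns (return_code, fec_of_next_segment)."""
    hops = []
    fec, ttl = first_segment_fec, 1
    while ttl <= max_ttl:
        return_code, next_fec = send_echo(ttl, fec)
        hops.append((ttl, return_code))
        if return_code == RC_EGRESS:
            break                  # egress T-PE reached: trace complete
        if return_code == RC_SWITCH_POINT and next_fec is not None:
            fec = next_fec         # copy FEC 128 from reply into next request
            ttl += 1
        else:
            break                  # timeout or unexpected reply
    return hops
```

For the two-segment example above, the loop records return code 8 from the S-PE at TTL=1 and return code 3 from T-PE2 at TTL=2, then stops.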

3.1.6.2.3. Control plane processing of a VCCV echo message in a MS-pseudo-wire

Note:

This feature is only supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C.

3.1.6.2.3.1. Sending a VCCV echo request

When in the ping mode of operation, the sender of the echo request message requires the FEC of the last segment to the target S-PE/T-PE node. This information can either be configured manually or be obtained by inspecting the corresponding sub-TLVs of the pseudo-wire switching point TLV. However, the pseudo-wire switching point TLV is optional and there is no guarantee that all S-PE nodes will populate it with their system address and the pseudo-wire ID of the last pseudo-wire segment traversed by the label mapping message. Therefore, the 7210 SAS implementation always makes use of the user configuration for these parameters.

3.1.6.2.3.2. Receiving a VCCV echo request

Upon receiving a VCCV echo request the control plane on S-PEs (or the target node of each segment of the MS pseudo-wire) validates the request and responds to the request with an echo reply consisting of the FEC 128 of the next downstream segment and a return code of 8 (label switched at stack-depth) indicating that it is an S-PE and not the egress router for the MS-pseudo-wire.

If the node is the T-PE or the egress node of the MS-pseudo-wire, it responds to the echo request with an echo reply with a return code of 3 (egress router) and no FEC 128 is included.

3.1.6.2.3.3. Receiving a VCCV echo reply

The operation to be taken by the node that receives the echo reply in response to its echo request depends on its current mode of operation (ping or trace).

In ping mode, the node may choose to ignore the target FEC 128 in the echo reply and report only the return code to the operator.

3.2. IP performance monitoring

The 7210 SAS OS supports Two-Way Active Measurement Protocol (TWAMP) and Two-Way Active Measurement Protocol Light (TWAMP Light).

3.2.1. Two-Way Active Measurement Protocol

Two-Way Active Measurement Protocol (TWAMP) provides a standards-based method for measuring the round-trip IP performance (packet loss, delay and jitter) between two devices. TWAMP uses the methodology and architecture of One-Way Active Measurement Protocol (OWAMP) to define a way to measure two-way or round-trip metrics.

There are four logical entities in TWAMP:

  1. the control-client
  2. the session-sender
  3. the server
  4. the session-reflector

The control-client and session-sender are typically implemented in one physical device (the “client”) and the server and session-reflector in a second physical device (the “server”) with which the two-way measurements are performed. The 7210 SAS acts as the server. The control-client and server establish a TCP connection and exchange TWAMP-Control messages over this connection. When the control-client wants to start testing, it communicates the test parameters to the server. If the server agrees to conduct the described tests, the test begins as soon as the client sends a Start-Sessions message. As part of a test, the session-sender sends a stream of UDP-based test packets to the session-reflector, and the session-reflector responds to each received packet with a UDP-based response test packet. When the session-sender receives the response packets from the session-reflector, the information is used to calculate two-way delay, packet loss, and packet delay variation between the two devices.
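The two-way delay calculation uses the four timestamps carried in the test packets: sender transmit (T1), reflector receive (T2), reflector transmit (T3), and sender receive (T4). This is the standard TWAMP arithmetic; a minimal sketch:

```python
def twamp_two_way_delay(t1, t2, t3, t4):
    """Round-trip delay with the reflector's processing time (t3 - t2)
    removed, so only the network transit time in both directions remains."""
    return (t4 - t1) - (t3 - t2)
```

Packet loss is detected from gaps in sequence numbers, and packet delay variation is derived by comparing the delays of successive test packets.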

3.2.1.1. Configuration notes

The following are the configuration notes:

  1. Unauthenticated mode is supported. Encrypted and Authenticated modes are not supported.
  2. TWAMP is supported only in the base router instance.
  3. By default, the 7210 SAS uses TCP port number 862 to listen for TWAMP control connections. This is not user configurable.

3.2.2. Two-Way Active Measurement Protocol Light

Note:

This feature is supported on all 7210 SAS platforms as described in this document, except the 7210 SAS-D.

TWAMP Light is an optional model included in the TWAMP standard (RFC 5357) that uses standard TWAMP test packets but provides a lightweight approach to gathering ongoing IP delay performance data for base router and per-VPRN statistics. Full details are described in Appendix I of RFC 5357 (A Two-Way Active Measurement Protocol). The 7210 SAS implementation supports the TWAMP Light model for gathering delay and loss statistics.

For TWAMP Light, the TWAMP Client/Server model is replaced with the Session Controller/Responder model. In general, the Session Controller is the launch point for the test packets and the Responder performs the reflection function.

TWAMP Light maintains the TWAMP test packet exchange but replaces the TWAMP TCP control connection with local configuration; however, not all negotiated control parameters have a local-configuration equivalent. For example, CoS parameters communicated over the TWAMP control channel are replaced with a reply-in-kind approach. The reply-in-kind model reflects back the received CoS parameters, which are influenced by the reflector QoS policies.

The responder function is configured under the config>router>twamp-light command hierarchy for base router reflection, and under the config>service>vprn>twamp-light command hierarchy for per VPRN reflection. The TWAMP Light reflector function is configured per context and must be activated before reflection can occur; the function is not enabled by default for any context. The reflector requires the operator to define the TWAMP Light UDP listening port that identifies the TWAMP Light protocol and the prefixes that the reflector will accept as valid sources for a TWAMP Light request. If the configured TWAMP Light listening UDP port is in use by another application on the system, a Minor OAM message will be presented indicating that the port is unavailable and that the activation of the reflector is not allowed. If the source IP address in the TWAMP Light packet arriving on the responder does not match a configured IP address prefix, the packet is dropped. Multiple prefix entries may be configured per context on the responder. An inactivity timeout under the config>test-oam>twamp>twamp-light hierarchy defines the amount of time the reflector will keep the individual reflector sessions active in the absence of test packets. A responder requires CPM3 or better hardware.
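The source-prefix check described above can be sketched with Python's standard ipaddress module. The prefixes mirror the sample VPRN reflector configuration shown later in this section; the function itself is illustrative, not the actual implementation.

```python
import ipaddress

# Prefixes a reflector context would accept (taken from the sample
# configuration in this section).
allowed_prefixes = [
    ipaddress.ip_network("10.2.1.1/32"),
    ipaddress.ip_network("172.16.1.0/24"),
]

def accept_twamp_light(src_ip):
    """Reflect only if the packet's source matches a configured prefix."""
    src = ipaddress.ip_address(src_ip)
    return any(src in prefix for prefix in allowed_prefixes)
```

A packet from 10.2.1.2 would be dropped because only the /32 host 10.2.1.1 is configured, while any host in 172.16.1.0/24 is accepted.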

TWAMP Light test packet launching is controlled by the OAM Performance Monitoring (OAM-PM) architecture and adheres to those rules; this includes the assignment of a "Test-Id". TWAMP Light does not carry the 4-byte test ID in the packet, in order to remain locally significant and uniform with other protocols under the control of the OAM-PM architecture. The OAM-PM construct allows the various test parameters to be defined. These test parameters include the IP session-specific information that allocates the test to a specific routing instance, the source and destination IP addresses, the destination UDP port (which must match the listening UDP port on the reflector), and a number of other options that allow the operator to influence the packet handling. The probe interval and padding size can be configured under the specific session. All-“0” padding can be included to ensure that the TWAMP packet is the same size in both directions; the TWAMP PDU definition does not accomplish symmetry by default. A pad size of 27 bytes accomplishes symmetrical TWAMP frame sizing in each direction.
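The 27-byte figure follows from the unauthenticated TWAMP-Test PDU sizes in RFC 5357: the session-sender packet is 14 bytes before padding, the session-reflector response is 41 bytes, and the reflector truncates the reflected padding by that 27-byte difference. A sketch of the arithmetic:

```python
SENDER_PDU = 14      # unauthenticated session-sender test packet, no padding
REFLECTOR_PDU = 41   # unauthenticated session-reflector response, no padding

def frame_sizes(pad_size):
    """Return (sender_size, reflector_size) for a given sender pad size."""
    # The reflector consumes 27 bytes of the pad for its larger header.
    reflected_pad = max(0, pad_size - (REFLECTOR_PDU - SENDER_PDU))
    return SENDER_PDU + pad_size, REFLECTOR_PDU + reflected_pad
```

With a pad size of 27 both directions carry 41-byte test PDUs; with the default pad size of 0 the response is 27 bytes larger than the request.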

The OAM-PM architecture does not perform any validation of the session information. The test will be allowed to be activated regardless of the validity of this information. For example, if the configured source IP address is not local within the router instance to which the test is allocated, the test will start sending TWAMP Light packets but will not receive any responses.

The OAM-PM section of this guide provides more information describing the integration of TWAMP Light and the OAM-PM architecture, including hardware dependencies.

The following TWAMP Light functions are supported on the 7210 SAS-K 2F1C2T:

  1. base router instance for IES services
  2. IPv4
    1. Must be unicast
  3. reflector prefix definition for acceptable TWAMP Light sources:
    1. Prefix list may be added and removed without shutting down the reflector function.
    2. If no prefixes are defined, the reflector will drop all TWAMP Light packets.
  4. integration with OAM-PM architecture capturing delay and loss measurement statistics:
    1. Not available from interactive CLI.
    2. Multiple test sessions can be configured between the same source and destination IP endpoints. The tuple of Source IP, Destination IP, Source UDP, and Destination UDP provides a unique index for each test.

The following TWAMP Light functions are supported on the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C:

  1. base router instances for network interfaces and IES services
  2. per-VPRN service context
  3. IPv4 and IPv6:
    1. must be unicast
  4. reflector prefix definition for acceptable TWAMP Light sources:
    1. prefix list may be added and removed without shutting down the reflector function
    2. if no prefixes are defined, the reflector will drop all TWAMP Light packets
  5. integration with OAM-PM architecture capturing delay and loss measurement statistics:
    1. Not available from interactive CLI.
    2. Multiple test sessions can be configured between the same source and destination IP endpoints. The tuple of Source IP, Destination IP, Source UDP, and Destination UDP provides a unique index for each test.
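The uniqueness rule in the last point can be sketched as a dictionary keyed by the 4-tuple; the addresses and port numbers below are illustrative only.

```python
sessions = {}

def add_session(src_ip, dst_ip, src_udp, dst_udp, name):
    """Index a TWAMP Light test by (Source IP, Dest IP, Source UDP, Dest UDP)."""
    key = (src_ip, dst_ip, src_udp, dst_udp)
    if key in sessions:
        raise ValueError("duplicate test index: %s" % (key,))
    sessions[key] = name

# Two tests between the same IP endpoints stay distinct because the
# source UDP port differs (port values are hypothetical).
add_session("10.2.1.1", "10.1.1.1", 64364, 15000, "test-500")
add_session("10.2.1.1", "10.1.1.1", 64365, 15000, "test-501")
```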

The following sample reflector configuration output shows the use of TWAMP Light to monitor two IP endpoints in a VPRN service on the 7210 SAS, including the default TWAMP Light values that were not overridden with configuration entries.

config>test-oam>twamp>twamp-light# info detail
--------------------------------------------------------------------------
(default)      inactivity-timeout 100 
-------------------------------------------------------------------------- 
 
config>service>vprn# info
--------------------------------------------------------------------------- 
            route-distinguisher 65535:500
            auto-bind ldp
            vrf-target target:65535:500
            interface "to-cpe31" create
                address 10.1.1.1/30
                sap 1/1/2:500 create
                exit
            exit
            static-route 192.168.1.0/24 next-hop 10.1.1.2
            bgp
                no shutdown
            exit
            twamp-light
                reflector udp-port 15000 create
                    description "TWAMP Light reflector VPRN 500"
                    prefix 10.2.1.1/32 create
                        description "Process only 10.2.1.1 TWAMP Light Packets"
                    exit
                    prefix 172.16.1.0/24 create
                        description "Process all 172.16.1.0 TWAMP Light packets"
                    exit
                    no shutdown
                exit
            exit
            no shutdown
------------------------------------------------------------------------------

The following is a sample session controller configuration output.

config>service>vprn# info
-------------------------------------------------------------------------
            route-distinguisher 65535:500
            auto-bind ldp
            vrf-target target:65535:500
            interface "to-cpe28" create
                address 10.2.1.1/30
                sap 1/1/4:500 create
                exit
            exit
            static-route 192.168.2.0/24 next-hop 10.2.1.2
            no shutdown
------------------------------------------------------------------------- 
 
config>oam-pm>session# info detail
-------------------------------------------------------------------------
            bin-group 2
            meas-interval 15-mins create
                intervals-stored 8
            exit
            ip
                dest-udp-port 15000
                destination 10.1.1.1
                fc "l2"
(default)           no forwarding 
                profile in
                router 500
                source 10.2.1.1
(default)          ttl 255  
                twamp-light test-id 500 create
(default)              interval 100
(default)              pad-size 0
(default)              no test-duration
                   no shutdown
                exit
            exit
------------------------------------------------------------------------- 
 

3.3. Ethernet Connectivity Fault Management

The IEEE and the ITU-T have cooperated to define the protocols, procedures, and managed objects to support service-based fault management. Both IEEE 802.1ag standard (Ethernet Connectivity Fault Management (ETH-CFM)) and the ITU-T Y.1731 recommendation support a common set of tools that allow operators to deploy the necessary administrative constructs, management entities and functionality. The ITU-T has also implemented a set of advanced ETH-CFM and performance management functions and features that build on the proactive and on demand troubleshooting tools.

CFM uses Ethernet frames and is distinguishable by ether-type 0x8902. In certain cases, the different functions use a reserved multicast address that can also be used to identify specific functions at the MAC layer. However, the multicast MAC addressing is not used for every function or in every case. The Operational Code (OpCode) in the common CFM header is used to identify the type of function carried in the CFM packet. CFM frames are only processed by IEEE MAC bridges. With CFM, interoperability can be achieved between different vendor equipment in the service provider network up to and including customer premise bridges.

The implemented IEEE 802.1ag and ITU-T Y.1731 functions are available on the 7210 SAS platforms.

The following table lists the CFM-related acronyms used in this section.

Table 9:  ETH-CFM acronym expansions 

Acronym   Expansion
1DM       One-way Delay Measurement (Y.1731)
AIS       Alarm Indication Signal
BNM       Bandwidth Notification Message (Y.1731 sub-OpCode of GNM)
CCM       Continuity Check Message
CFM       Connectivity Fault Management
DMM       Delay Measurement Message (Y.1731)
DMR       Delay Measurement Reply (Y.1731)
GNM       Generic Notification Message
LBM       Loopback Message
LBR       Loopback Reply
LTM       Linktrace Message
LTR       Linktrace Reply
ME        Maintenance Entity
MA        Maintenance Association
MA-ID     Maintenance Association Identifier
MD        Maintenance Domain
MEP       Maintenance Association Endpoint
MEP-ID    Maintenance Association Endpoint Identifier
MHF       MIP Half Function
MIP       Maintenance Domain Intermediate Point
OpCode    Operational Code
RDI       Remote Defect Indication
TST       Ethernet Test (Y.1731)

3.3.1. ETH-CFM building blocks

The IEEE and the ITU-T use their own nomenclature when describing administrative contexts and functions. This introduces a level of complexity to configuration and discussion, compounded by different vendors' naming conventions. The 7210 SAS OS CLI standardizes on the IEEE 802.1ag naming where overlap exists. ITU-T naming is used when no equivalent is available in the IEEE standard. In the following definitions, both the IEEE and ITU-T names are provided for completeness, using the format IEEE Name/ITU-T Name.

Maintenance Domain (MD)/Maintenance Entity (ME) is the administrative container that defines the scope, reach and boundary for faults. It is typically the area of ownership and management responsibility. The IEEE allows for various formats to name the domain, allowing up to 45 characters, depending on the format selected. ITU-T supports only a format of “none” and does not accept the IEEE naming conventions, as follows:

  1. 0
    Undefined and reserved by the IEEE.
  2. 1
    No domain name. It is the only format supported by Y.1731, as the ITU-T specification does not use the domain name. This format is supported in the IEEE 802.1ag standard but is not currently implemented for 802.1ag-defined contexts.
  3. 2,3,4
    Provides the ability to input various textual formats of up to 45 characters. The string format (2) is the default, so the keyword is not shown when displaying the configuration.

Maintenance Association (MA)/Maintenance Entity Group (MEG) is the construct in which the different management entities are contained. Each MA is uniquely identified by its MA-ID. The MA-ID comprises the MD level, the MA name, and the associated format. This is another administrative context where the linkage is made between the domain and the service, using the bridging-identifier configuration option. The IEEE and the ITU-T use their own specific formats. The MA short name formats (0-255) have been divided between the IEEE (0-31, 64-255) and the ITU-T (32-63), with five currently defined (1-4, 32). Even though the different standards bodies do not have specific support for each other's formats, a Y.1731 context can be configured using the IEEE format options, as follows:

  1. 1 (Primary VID) - Values 0 — 4094
  2. 2 (String) - raw ASCII, excluding 0-31 decimal/0-1F hex (which are control characters) from the ASCII table
  3. 3 (2-octet integer) - 0 — 65535
  4. 4 (VPN ID) - hex value as described in RFC 2685, Virtual Private Networks Identifier
  5. 32 (icc-format) — exactly 13 characters from the ITU-T recommendation T.50
Note:

When a VID is used as the short MA name, 802.1ag will not support VLAN translation because the MA-ID must match all the MEPs. The default format for a short MA name is an integer. Integer value 0 means the MA is not attached to a VID. This is useful for VPLS services on 7210 SAS platforms because the VID is locally significant.

Maintenance Domain Level (MD Level)/Maintenance Entity Group Level (MEG Level) is the numerical value (0-7) representing the width of the domain. The wider the domain, the higher the numerical value and the farther the ETH-CFM packets can travel. It is important to understand that the level establishes the processing boundary for the packets. Strict rules control the flow of ETH-CFM packets and are used to ensure correct handling, forwarding, processing, and dropping of these packets. Simply put, ETH-CFM packets with higher numerical level values flow through MEPs and MIPs on SAPs configured with lower level values. This allows the operator to implement different areas of responsibility and nest domains within each other. A maintenance association (MA) includes a set of MEPs, each configured with the same MA-ID and MD level, used to verify the integrity of a single service instance.
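The level-based processing rule can be sketched as follows; the function is an illustration of the nesting behavior, not the actual packet path.

```python
def handle_cfm_pdu(pdu_level, mp_level):
    """Decide what a maintenance point at mp_level does with a CFM PDU."""
    if pdu_level > mp_level:
        return "forward"   # a wider domain's packet passes through transparently
    if pdu_level == mp_level:
        return "process"   # this maintenance point's own domain
    return "drop"          # a narrower domain's packet must not leak outward
```

A level-4 maintenance point therefore passes level-7 CFM PDUs transparently, terminates level-4 PDUs, and discards anything below level 4.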

Maintenance Endpoint (MEP)/MEG Endpoint (MEP) are the workhorses of ETH-CFM. A MEP is uniquely identified within the association by its MEP ID (0-8191); that is, each MEP is uniquely identified by the MA-ID, MEP-ID tuple. This management entity is responsible for initiating, processing, and terminating ETH-CFM functions, following the nesting rules. MEPs form the boundaries that prevent ETH-CFM packets from flowing beyond their specific scope of responsibility. A MEP has a direction, up or down, which indicates where its packets are generated: up toward the switch fabric, or down toward the SAP, away from the fabric. Each MEP has an active and a passive side. Packets that enter the active side of the MEP are compared to the existing level and processed accordingly. Packets that enter the passive side of the MEP are passed transparently through the MEP. The MEPs contained within the same maintenance association and with the same level (MA-ID) represent points within a single service. MEP creation on a SAP is allowed only for Ethernet ports with null, q-tag, or q-in-q encapsulations. MEPs may also be created on SDP bindings.

Maintenance Intermediate Point (MIP)/MEG Intermediate Point (MIP) are management entities between the terminating MEPs along the service path. These provide insight into the service path connecting the MEPs. MIPs only respond to Loopback Messages (LBM) and Linktrace Messages (LTM). All other CFM functions are transparent to these entities. Only one MIP is allowed per SAP or SDP. The creation of the MIPs can be done when the lower level domain is created (explicit). This is controlled by the use of the mhf-creation mode within the association under the bridge-identifier. MIP creation is supported on a SAP and SDP, not including Mesh SDP bindings. By default, no MIPs are created.

There are two locations in the configuration where ETH-CFM is defined. The domains, associations (including the linkage to the service id), MIP creation method, common ETH-CFM functions, and remote MEPs are defined under the top-level eth-cfm command. It is important to note that when Y.1731 functions are required, the context under which the MEPs are configured must follow the Y.1731-specific formats (domain format of none, MA format icc-format). When these parameters have been entered, the MEP and possibly the MIP can be defined within the service under the SAP or SDP.

Table 10, Table 11, Table 12, Table 13, and Table 14 are general tables that indicate the ETH-CFM support for the different services and endpoints. They are not meant to indicate the services that are supported or the requirements for those services on the individual platforms.

Table 10:  ETH-CFM support matrix for 7210 SAS-D 

Service

Ethernet

connection type

MEP

MIP

Primary VLAN

Down MEP

Up MEP

Ingress MIP

Egress MIP

Epipe

SAP (access and access-uplink SAP)

VPLS

SAP (access and access-uplink SAP)

 1

R-VPLS

SAP

IES

IES IPv4 interface

SAP

    Note:

  1. Supported only with Up MEP and dot1q range SAPs. Refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Services Guide for more information.
Table 11:  ETH-CFM support matrix for 7210 SAS-Dxp 

Service

Ethernet

Connection Type

MEP

MIP

Primary VLAN

Down MEP

Up MEP

Ingress MIP

Egress MIP

Epipe

SAP (access and access-uplink SAP)

VPLS

SAP (access and access-uplink SAP)

 1

R-VPLS

SAP

IES

IES IPv4 interface

SAP

    Note:

  1. Supported only with Up MEP and dot1q range SAPs. Refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Services Guide for more information.
Table 12:  ETH-CFM support matrix for 7210 SAS-K 2F1C2T 

Service

Ethernet

Connection Type

MEP

MIP

Primary VLAN

Down MEP

Up MEP

Ingress MIP

Egress MIP

Epipe

SAP (access and access-uplink SAP)

 1

VPLS

SAP (access and access-uplink SAP)

R-VPLS

SAP

IES

IES IPv4 interface

SAP

    Note:

  1. Supported for Up MEPs only
Table 13:  ETH-CFM support matrix for 7210 SAS-K 2F6C4T 

Service

Ethernet

Connection Type

MEP

MIP

Primary VLAN

Down MEP

Up MEP

Ingress MIP

Egress MIP

Epipe

SAP (access and access-uplink SAP)

Spoke-SDP

VPLS

SAP (access and access-uplink SAP)

Spoke-SDP

Mesh-SDP

R-VPLS

SAP

IP interface

(IES or

VPRN)

IES

IES IPv4 interface

SAP

Table 14:  ETH-CFM support matrix for 7210 SAS-K 3SFP+ 8C 

Service

Ethernet

Connection Type

MEP

MIP

Primary VLAN

Down MEP

Up MEP

Ingress MIP

Egress MIP

Epipe

SAP (access and access-uplink SAP)

 1

Spoke-SDP

VPLS

SAP (access and access-uplink SAP)

Spoke-SDP

Mesh-SDP

R-VPLS

SAP

IP interface

(IES or

VPRN)

IES

IES IPv4 interface

SAP

    Note:

  1. Supported for Up MEPs only

The following figures show the detailed IEEE representation of MEPs, MIPs, levels and associations, using the standards defined icons.

Figure 17:  MEP and MIP 
Figure 18:  MEP, MIP and MD levels 

3.3.1.1. Loopback

A loopback message is generated by a MEP to its peer MEP (Figure 19). The function is similar to an IP ping, verifying Ethernet connectivity between the nodes.

Figure 19:  CFM loopback 

The following loopback-related functions are supported:

  1. Loopback message functionality on a MEP or MIP can be enabled or disabled.
  2. A MEP supports generating loopback messages and responding to loopback messages with loopback reply messages.
  3. A MIP supports responding to loopback messages with loopback reply messages when loopback messages are targeted to itself.
  4. The Sender ID TLV may optionally be configured to carry the Chassis ID. When configured, the following information will be included in LBM messages:
    1. Only the Chassis ID portion of the TLV will be included.
    2. The Management Domain and Management Address fields are not supported on transmission.
    3. As per the specification, the LBR function copies and returns any TLVs received in the LBM message. This means that the LBR message will include the original Sender ID TLV.
    4. The Sender ID TLV is supported for service (id-permission) MEPs.
    5. The Sender ID TLV is supported for both MEPs and MIPs.
  5. Loopback test results are displayed on the originating MEP. There is a limit of 10 outstanding tests per node.

3.3.1.2. Linktrace

A linktrace message is originated by a MEP and targeted to a peer MEP in the same MA and within the same MD level (see Figure 20). Its function is similar to IP traceroute. Linktrace traces a specific MAC address through the service. The peer MEP responds with a linktrace reply message after successful inspection of the linktrace message. The MIPs along the path also process the linktrace message and respond with linktrace replies to the originating MEP if the received linktrace message has a TTL greater than 1; the MIPs also forward the linktrace message if a lookup of the target MAC address in the Layer 2 FIB is successful. The originating MEP will receive multiple linktrace replies and from processing the linktrace replies, it can determine the route to the target bridge.

A traced MAC address (the targeted MAC address) is carried in the payload of the linktrace message. Each MIP and MEP receiving the linktrace message checks whether it has learned the target MAC address. To use linktrace, the target MAC address must have been learned by the nodes in the network. If the address has been learned, a linktrace reply message is sent back to the originating MEP. A MIP forwards the linktrace message out of the port where the target MAC address was learned.

The linktrace message has a multicast destination address. On a broadcast LAN, it can be received by multiple nodes connected to that LAN; however, only one node will send a reply.
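The MIP behavior described above can be sketched as follows, assuming a simple MAC-to-port dictionary standing in for the Layer 2 FIB; the function and port names are illustrative.

```python
def mip_handle_ltm(target_mac, ttl, fib):
    """Return (send_reply, forward_port) for a linktrace message at a MIP."""
    send_reply = ttl > 1                        # reply only while TTL permits
    # Forward only if the target MAC has been learned (FIB lookup succeeds).
    out_port = fib.get(target_mac) if send_reply else None
    return send_reply, out_port

# Hypothetical FIB: the traced MAC was learned on port 1/1/2.
fib = {"00:00:00:00:00:30": "1/1/2"}
```

With TTL 1 the MIP neither replies nor forwards; with an unlearned target MAC it replies (TTL permitting) but has no port to forward out of.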

Figure 20:  CFM linktrace 

The following linktrace-related functions are supported:

  1. Linktrace functions can be enabled or disabled on a MEP.
  2. A MEP supports generating linktrace messages and responding with linktrace reply messages.
  3. Linktrace test results are displayed on the originating MEP. There is a limit of 10 outstanding tests per node. Storage is provided for up to 10 MEPs and for the last 10 responses. If more than 10 responses are received, older entries will be overwritten.
  4. The Sender ID TLV may optionally be configured to carry the Chassis ID. When configured, the following information will be included in LTM and LTR messages:
    1. Only the Chassis ID portion of the TLV will be included.
    2. The Management Domain and Management Address fields are not supported on transmission.
    3. The LTM message will include the Sender ID TLV that is configured on the launch point. The LTR message will include the Sender ID TLV information from the responder (MIP or MEP) if it is supported.
    4. The Sender ID TLV is supported for service (id-permission) MEPs.
    5. The Sender ID TLV is supported for both MEPs and MIPs.

The following display output has been updated to include the Sender ID TLV contents if they are included in the LTR.

oam eth-cfm linktrace 00:00:00:00:00:30 mep 28 domain 14 association 2
Index Ingress Mac          Egress Mac           Relay     Action
----- -------------------- -------------------- ---------- ----------
1     00:00:00:00:00:00    00:00:00:00:00:30    n/a       terminate
SenderId TLV: ChassisId (local)
              access-012-west
----- -------------------- -------------------- ---------- ----------
No more responses received in the last 6 seconds.

3.3.1.3. Continuity Check (CC)

A Continuity Check Message (CCM) is a multicast frame that is generated by a MEP and multicast to all other MEPs in the same MA. The CCM does not require a reply message. To identify faults, the receiving MEP maintains an internal list of remote MEPs it should be receiving CCM messages from.

This list is based on the remote MEP ID configuration within the association the MEP is created in. When the local MEP does not receive a CCM from one of the configured remote MEPs within a preconfigured period, the local MEP raises an alarm.

The following figure shows a CFM continuity check.

Figure 21:  CFM continuity check 

The following figure shows a CFM CC failure scenario.

Figure 22:  CFM CC failure scenario 

The following functions are supported:

  1. CC can be enabled or disabled for a MEP.
  2. MEP entries can be configured and deleted in the CC MEP monitoring database manually. Only remote MEPs must be configured. Local MEPs are automatically added to the database when they are created.
  3. The CCM transmit interval can be configured for 100 ms (only supported on the 7210 SAS-D).
    When configuring MEPs with subsecond CCM intervals, bandwidth consumption must be taken into consideration. Each CCM PDU is 100 bytes (800 bits). Taken individually, this is a small value. However, the bandwidth consumption increases rapidly as multiple MEPs are configured with 10 ms timers, 100 packets per second. Subsecond CCM-enabled MEPs are supported on the following:
    1. Down MEPs configured on Ethernet SAPs.
    2. Lowest MD-level, when multiple MEPs exist on the same Ethernet SAP.
    3. Individual Ethernet tunnel paths requiring EAPS, but not the Ethernet tunnel itself. This requires the MEPs to be part of the Y.1731 context because of the EAPS.
  4. The CCM will declare a fault when:
    1. the CCM stops hearing from one of the remote MEPs for 3.5 times the CC interval
    2. the CCM hears from a MEP with a lower MD level
    3. the CCM hears from a MEP that is not part of the local MEP MA
    4. the CCM hears from a MEP that is in the same MA but not in the configured MEP list
    5. the CCM hears from a MEP in the same MA with the same MEP ID as the receiving MEP
    6. the CC interval of the remote MEP does not match the local configured CC interval
    7. the remote MEP is declaring a fault
  5. An alarm is raised and a trap is sent if the defect is greater than or equal to the configured low-priority-defect value.
  6. Remote Defect Indication (RDI) is supported but by default is not recognized as a defect condition because the low-priority-defect setting default does not include RDI.
  7. The Sender ID TLV may optionally be configured to carry the Chassis ID. When configured, the following information will be included in CCM messages:
    1. Only the Chassis ID portion of the TLV will be included.
    2. The Management Domain and Management Address fields are not supported on transmission.
    3. The Sender ID TLV is not supported with subsecond CCM-enabled MEPs.
    4. The Sender TLV is supported for service (id-permission) MEPs.
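Two of the numbers above lend themselves to quick arithmetic: the 3.5 × interval loss-of-continuity timeout and the bandwidth cost of fast CCM timers. A sketch, with illustrative values:

```python
def ccm_defect(last_rx, now, cc_interval):
    """Remote MEP fault: no CCM received for 3.5 times the CC interval."""
    return (now - last_rx) > 3.5 * cc_interval

CCM_BITS = 800  # each CCM PDU is 100 bytes (800 bits)

def ccm_bandwidth_bps(cc_interval):
    """Per-MEP transmit bandwidth for a given CC interval, in seconds."""
    return CCM_BITS / cc_interval
```

With a 1-second interval, 4 seconds of silence raises the defect while 3 seconds does not; a 10 ms timer (100 packets per second) costs 80 kb/s per MEP.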

3.3.1.4. Alarm Indication Signal (ETH-AIS Y.1731)

Alarm Indication Signal (AIS) provides a Y.1731-capable MEP with the ability to signal a fault condition in the reverse direction of the MEP, out the passive side. When a fault condition is detected, the MEP generates AIS packets at the configured client levels and at the specified AIS interval until the condition is cleared. A MEP configured to generate AIS must do so at a level higher than its own. The MEP on the service receiving the AIS packets must have its active side facing the received AIS packets and must be at the same level as the AIS. The absence of AIS packets for 3.5 times the AIS interval set by the sending node clears the condition on the receiving MEP.

It is important to note that AIS generation is not supported to an explicitly configured endpoint. An explicitly configured endpoint is an object that contains multiple individual endpoints, as in PW redundancy.

3.3.1.5. Test (ETH-TST Y.1731)

Ethernet test provides a Y.1731-capable MEP with the ability to run an in-service, on-demand function to test connectivity between two MEPs. The test is generated on the local MEP and the results are verified on the destination MEP. Any generated ETH-TST packet that exceeds the MTU is silently dropped by the lower-level processing of the node.

3.3.2. Y.1731 timestamp capability

Timestamps for different Y.1731 messages are obtained as follows:

  1. The 7210 SAS-D support is as follows:
    1. Y.1731 2-DM messages for Down MEPs use hardware timestamps for both Rx (packets received by the node) and Tx (packets sent out of the node). The timestamps are obtained from a free-running hardware clock. This provides accurate 2-way delay measurements but is not recommended for computing 1-way delay.
    2. Y.1731 2-DM messages for Up MEPs, 1-DM for both Down MEPs and Up MEPs, and 2-SLM for both Down MEPs and Up MEPs use software-based timestamps on Tx and hardware-based timestamps on Rx. The timestamps are obtained from the system clock (free-running or synchronized to NTP).
  2. For the 7210 SAS-Dxp, Y.1731 2-DM and 1-DM messages for both Down MEPs and Up MEPs use software-based timestamps on Tx and hardware-based timestamps on Rx. Timestamps are obtained from the system clock, which is free-running or synchronized to NTP.
  3. The 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C support hardware time-stamping for Y.1731 1-DM, 2-DM and SLM. The timestamp values are obtained using a synchronized clock. The synchronized clock provides timestamp values as follows:
    1. When neither PTP nor NTP is enabled in the system, the synchronized clock is the same as the free-running system clock. Therefore, accurate two-way delay measurements are possible, but one-way delay measurements can result in unexpected values.
    2. When NTP is enabled in the system, the synchronized clock is derived from the NTP clock.
    3. When PTP is enabled in the system, the synchronized clock is derived from the PTP clock.
Note:

Accurate results for one-way and two-way delay measurement tests using Y.1731 messages are obtained if the nodes are capable of time stamping packets in hardware.

3.3.2.1. One-way delay measurement (ETH-1DM Y.1731)

One-way delay measurement allows the operator to check the unidirectional delay between MEPs. An ETH-1DM packet is timestamped by the generating MEP and sent to the remote node. The remote node timestamps the packet on receipt and generates the results. The results, available from the receiving MEP, indicate the delay and jitter. Jitter, or delay variation, is the difference in delay between tests; this means the delay variation on the first test will not be valid. It is important to ensure that the clocks are synchronized on both nodes for the results to be accurate. NTP can be used to achieve a level of wall-clock synchronization between the nodes.

3.3.2.2. Two-way delay measurement (ETH-DMM Y.1731)

Two-way delay measurement is similar to one-way delay measurement, except that it measures the round-trip delay from the generating MEP. In this case, wall-clock synchronization issues do not influence the test results because four timestamps are used. This allows the remote node's time to be removed from the calculation, so clock variances are not included in the results. The same considerations stated for one-way delay measurement, regarding the first test and hardware-based timestamping, apply to two-way delay measurement.
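The claim that wall-clock offset drops out of the two-way result can be checked with the four-timestamp formula; the numbers below are invented for illustration.

```python
def dmm_round_trip(t1, t2, t3, t4):
    """Two-way delay from four timestamps: sender round trip minus the
    responder's dwell time (t3 - t2)."""
    return (t4 - t1) - (t3 - t2)

# Remote clock offset by +500 s: both remote timestamps shift together,
# so the offset cancels inside (t3 - t2) and the result is unchanged.
true_delay = dmm_round_trip(0.0, 10.0, 11.0, 21.0)
skewed_delay = dmm_round_trip(0.0, 510.0, 511.0, 21.0)
```

Both calls yield the same round-trip delay even though the remote clock is wildly out of sync, which is why only one-way tests require clock synchronization.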

Delay can be measured using the one-way and two-way on-demand functions. The two-way test results are available single-ended; the test is initiated, and the calculation and results are viewed, on the same node. No specific configuration under the MEP on the SAP is required to enable this function. An example of an on-demand test and its results follows. Only the latest test result is stored for viewing; further tests overwrite the previous results. Delay variation is only valid if more than one test has been executed.

oam eth-cfm two-way-delay-test d0:0d:1e:00:01:02 mep 101 domain 4 association 1
 
Two-Way-Delay-Test Response:
Delay 2955 microseconds        Variation 111 microseconds
 
# show eth-cfm mep 101 domain 4 association 1 two-way-delay-test
===================================================================
Eth CFM Two-way Delay Test Result Table
===================================================================
Peer Mac Addr         Delay (us)          Delay Variation (us)
-------------------------------------------------------------------
d0:0d:1e:00:01:02     2955                111

3.3.3. ITU-T Y.1731 Ethernet Bandwidth Notification

Note:

This feature is supported only on the 7210 SAS-K 2F1C2T and 7210 SAS-K 2F6C4T.

The Ethernet Bandwidth Notification (ETH-BN) function is used by a server MEP to signal link bandwidth changes to a client MEP.

This functionality is for point-to-point microwave radios. When a microwave radio uses adaptive modulation, the capacity of the radio can change based on the condition of the microwave link. For example, in adverse weather conditions that cause link degradation, the radio can change its modulation scheme to a more robust one (which will reduce the link bandwidth) to continue transmitting.

This change in bandwidth is communicated from the server MEP on the radio, using an Ethernet Bandwidth Notification Message (ETH-BNM), to the client MEP on the connected router. The server MEP transmits periodic frames with ETH-BN information, including the transmission period and the nominal and currently available bandwidth. A port MEP with the ETH-BN feature enabled processes the information contained in the CFM PDU and appropriately adjusts the rate of traffic sent to the radio.

A port MEP that is not a LAG member port supports the client side reception and processing of the ETH-BN CFM PDU sent by the server MEP. By default, processing is disabled. The config>port>ethernet>eth-cfm>mep>eth-bn>receive CLI command sets the ETH-BN processing state on the port MEP. A port MEP supports untagged packet processing of ETH-CFM PDUs at domain levels 0 and 1 only. The port client MEP sends the ETH-BN rate information received to be applied to the port egress rate in a QoS update. A pacing mechanism limits the number of QoS updates sent. The config>port>ethernet>eth-cfm>mep>eth-bn>rx-update-pacing CLI command allows the updates to be paced using a configurable range of 1 to 600 seconds (the default is 5 seconds).

The pacing timer begins to count down following the most recent QoS update sent to the system for processing. When the timer expires, the most recent update that arrived from the server MEP is compared to the most recent value sent for system processing. If the value of the current bandwidth is different from the previously processed value, the update is sent and the process begins again. Updates with a different current bandwidth that arrive when the pacing timer has already expired are not subject to a timer delay. Refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Interface Configuration Guide for more information about these CLI commands.
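The pacing behavior described above can be sketched as follows. This is a hedged, illustrative Python model, not the 7210 SAS code; the class and method names are assumptions.

```python
# Illustrative model of the ETH-BN QoS update pacing described above: a QoS
# update is sent at most once per pacing interval, and only when the most
# recently received current bandwidth differs from the last value sent for
# processing. An update that arrives after the timer has already expired is
# sent without delay. Names and structure are assumptions, not the 7210 SAS code.

class EthBnPacer:
    def __init__(self, pacing_s=5):            # default pacing is 5 seconds
        self.pacing_s = pacing_s               # configurable 1..600 seconds
        self.timer_expiry = None               # None => timer not running
        self.last_sent_bw = None               # last value sent to QoS
        self.latest_rx_bw = None               # most recent value from server MEP

    def on_bnm(self, now_s, current_bw_mbps):
        """Handle a received BNM; return the bandwidth sent to QoS, or None."""
        self.latest_rx_bw = current_bw_mbps
        if self.timer_expiry is not None and now_s < self.timer_expiry:
            return None                        # pacing timer still running: hold
        return self._maybe_send(now_s)

    def on_timer_expiry(self, now_s):
        """Pacing timer fired; compare latest value with last one sent."""
        self.timer_expiry = None
        return self._maybe_send(now_s)

    def _maybe_send(self, now_s):
        if self.latest_rx_bw is not None and self.latest_rx_bw != self.last_sent_bw:
            self.last_sent_bw = self.latest_rx_bw
            self.timer_expiry = now_s + self.pacing_s  # restart pacing timer
            return self.last_sent_bw
        return None

pacer = EthBnPacer(pacing_s=5)
assert pacer.on_bnm(0, 300) == 300       # first update: sent immediately
assert pacer.on_bnm(2, 100) is None      # within pacing window: held
assert pacer.on_timer_expiry(5) == 100   # timer expiry: latest value sent
```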

A complementary QoS configuration is required to allow the system to process current bandwidth updates from the CFM engine. The config>port>ethernet>eth-bn-egress-rate-changes CLI command is required to enable the QoS function to update the port egress rates based on the current available bandwidth updates from the CFM engine. By default, the function is disabled.

Both the CFM and QoS functions must be enabled for the changes in current bandwidth to dynamically update the egress rate.

When the MEP enters a state that prevents it from receiving the ETH-BNM, the current bandwidth last sent for processing is cleared and the egress rate reverts to the configured rate. Under these conditions, the last update cannot be guaranteed as current. Explicit notification is required to dynamically update the port egress rate. The following types of conditions lead to ambiguity:

  1. administrative MEP shut down
  2. port admin down
  3. port link down
  4. the eth-bn no receive command transitioning the ETH-BN function to disabled

If the eth-bn-egress-rate-changes command is disabled using the no option, CFM continues to send updates, but the updates are held without affecting the port egress rate.

The ports supporting ETH-BN MEPs can be configured for the network, access, hybrid, and access-uplink modes. When ETH-BN is enabled on a port MEP and the config>port>ethernet>eth-cfm>mep>eth-bn>receive and the QoS config>port>ethernet>eth-bn-egress-rate-changes contexts are configured, the egress rate is dynamically changed based on the current available bandwidth indicated by the ETH-BN server.

Note:

For SAPs configured on an access port or hybrid port, changes in port bandwidth on reception of ETH-BN messages result in changes to the port egress rate, but the SAP egress aggregate shaper rate and queue egress shaper rate provisioned by the user are unchanged, which may result in oversubscription of the committed bandwidth. Consequently, Nokia recommends that the user change the SAP egress aggregate shaper rate and queue egress shaper rate for all SAPs configured on the port from an external management station after egress rate changes are detected on the port.

The port egress rate is capped by the minimum of the configured egress-rate and the maximum port rate. The minimum egress rate using ETH-BN is 1024 kb/s. If a current bandwidth of zero is received, it does not affect the egress port rate; the previously processed current bandwidth continues to be used.
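The rate-selection rules described above can be sketched as follows. This is an illustrative Python sketch; the function name and kb/s values are assumptions, not the 7210 SAS implementation.

```python
# Illustrative sketch of the egress-rate rules stated above: the applied rate
# is capped by the minimum of the configured egress-rate and the maximum port
# rate; the minimum usable ETH-BN rate is 1024 kb/s; and a current bandwidth
# of zero is ignored (the previously processed value stays in effect).
# Names and values are assumptions, not the 7210 SAS code.

MIN_ETH_BN_RATE_KBPS = 1024

def effective_egress_rate_kbps(bn_current_kbps, configured_kbps,
                               port_max_kbps, previous_kbps):
    if bn_current_kbps == 0:
        bn_current_kbps = previous_kbps        # zero is ignored
    bn_current_kbps = max(bn_current_kbps, MIN_ETH_BN_RATE_KBPS)
    return min(bn_current_kbps, configured_kbps, port_max_kbps)

rate = effective_egress_rate_kbps(300_000, 500_000, 1_000_000, 500_000)
held = effective_egress_rate_kbps(0, 500_000, 1_000_000, 300_000)  # zero ignored
```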

The client MEP requires explicit notification of changes to update the port egress rate. The system does not timeout any previously processed current bandwidth rates using a timeout condition. The specification does allow a timeout of the current bandwidth if a frame has not been received in 3.5 times the ETH-BNM interval. However, the implicit approach can lead to misrepresented conditions and has not been implemented.

When you start or restart the system, the configured egress rate is used until an ETH-BNM arrives on the port with a new bandwidth request from the ETH-BN server MEP.

An event log is generated each time the egress rate is changed based on reception of a BNM. If a BNM is received that does not result in a bandwidth change, no event log is generated.

The destination MAC address can be a Class 1 multicast MAC address (that is, 01-80-C2-00-0x) or the MAC address of the port MEP configured. Standard CFM validation and identification must be successful to process CFM PDUs.

For information on the eth-bn-egress-rate-changes command, refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Interface Configuration Guide.

The Bandwidth Notification Message (BNM) PDU used for ETH-BN information is a sub-OpCode within the Ethernet Generic Notification Message (ETH-GNM).

The following table shows the BNM PDU format fields.

Table 15:  BNM PDU format fields

Label               Description
------------------  -----------------------------------------------------------
MEG Level           Carries the MEG level of the client MEP (0 to 7). This
                    field must be set to either 0 or 1 to be recognized as a
                    port MEP.
Version             The current version is 0.
OpCode              The value for this PDU type is GNM (32).
Flags               Contains one information element: Period (3 bits), which
                    indicates how often ETH-BN messages are transmitted by the
                    server MEP. The valid values are 100 (1 frame/s),
                    101 (1 frame/10 s), and 110 (1 frame/min).
TLV Offset          This value is set to 13.
Sub-OpCode          The value for this PDU type is BNM (1).
Nominal Bandwidth   The nominal full bandwidth of the link, in Mb/s. This
                    information is reported in the display but not used to
                    influence QoS egress rates.
Current Bandwidth   The current bandwidth of the link, in Mb/s. This value is
                    used to influence the egress rate.
Port ID             A non-zero unique identifier for the port associated with
                    the ETH-BN information, or zero if not used. This
                    information is reported in the display but not used to
                    influence QoS egress rates.
End TLV             An all-zeros octet value.
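Based on the field layout in the table above (a TLV Offset of 13 implies that the Sub-OpCode, Nominal Bandwidth, Current Bandwidth, and Port ID fields occupy the 13 octets that follow the common CFM header), a BNM parse can be sketched as follows. This is an illustrative Python sketch under that assumed byte layout, not the 7210 SAS implementation.

```python
# Hedged parsing sketch based on the field layout in Table 15 (assumed byte
# layout: common CFM header of MEG level/version, OpCode, Flags, TLV Offset,
# then Sub-OpCode, Nominal Bandwidth, Current Bandwidth, Port ID, End TLV).
# Illustrative only, not the 7210 SAS code.
import struct

def parse_bnm(pdu: bytes) -> dict:
    (lvl_ver, opcode, flags, tlv_offset, sub_opcode,
     nominal_bw, current_bw, port_id) = struct.unpack("!BBBBBIII", pdu[:17])
    meg_level = lvl_ver >> 5              # top 3 bits of the first octet
    version = lvl_ver & 0x1F              # low 5 bits
    assert opcode == 32, "OpCode must be GNM (32)"
    assert sub_opcode == 1, "Sub-OpCode must be BNM (1)"
    assert tlv_offset == 13, "TLV Offset is 13 for BNM"
    return {
        "meg_level": meg_level,
        "version": version,
        "period": flags & 0x07,           # 3-bit Period information element
        "nominal_bw_mbps": nominal_bw,
        "current_bw_mbps": current_bw,
        "port_id": port_id,
    }

# Level 1, version 0, Period 100b (1 frame/s), 400/300 Mb/s, port 7, End TLV:
pdu = struct.pack("!BBBBBIIIB", (1 << 5), 32, 0x04, 13, 1, 400, 300, 7, 0)
info = parse_bnm(pdu)
```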

On the 7210 SAS, port-level MEPs with level 0 or 1 should be implemented to support this application. A port-level MEP must support CCM, LBM, LTM, RDI, and ETH-BN, but can be used for ETH-BN only.

The show eth-cfm mep eth-bandwidth-notification display output includes the ETH-BN values received and extracted from the PDU, including a last reported value and the pacing timer. If the n/a value appears in a field, that field has not been processed.

The base show eth-cfm mep output is expanded to include the disposition of the ETH-BN receive function and the configured pacing timer.

The show port port-id detail is expanded to include an Ethernet Bandwidth Notification Message Information section. This section includes the ETH-BN Egress Rate disposition and the current Egress BN rate being used.

3.3.4. Port-based MEPs

The 7210 SAS supports port-based MEPs for use with CFM ETH-BN. The port MEP must be configured at level 0 and can be used for ETH-BN message reception and processing as described in ITU-T Y.1731 Ethernet Bandwidth Notification. Port-based MEPs only support CFM CC, LT, LS, and RDI message processing. No other CFM and Y.1731 messages are supported on these port-based MEPs.

Note:

Port-based MEPs are designed to be used with the ETH-BN application. Nokia recommends not to use port-based MEPs for other applications.

3.3.5. ETH-CFM statistics

Note:

The ETH-CFM statistics feature is supported on all platforms as described in this document, except the 7210 SAS-D.

A number of statistics are available to view the current processing requirements for CFM. Any packet that is counted against the CFM resource is included in the statistics counters. The counters do not include sub-second CCM, ETH-CFM PDUs generated by non-ETH-CFM functions (such as OAM-PM and SAA), or PDUs filtered by a security configuration.

SAA and OAM-PM use standard CFM PDUs. The reception of these packets is included in the receive statistics. However, SAA and OAM-PM launch their own test packets and do not consume ETH-CFM transmission resources.

Per-system and per-MEP statistics are included with a per-OpCode breakdown. These statistics help operators determine the busiest active MEPs on the system and provide a breakdown of per-OpCode processing at the system and MEP level.

Use the show eth-cfm statistics command to view the statistics at the system level. Use the show eth-cfm mep mep-id domain md-index association ma-index statistics command to view the per-MEP statistics. Use the clear eth-cfm mep mep-id domain md-index association ma-index statistics command to clear statistics. The clear command clears the statistics for only the specified function. For example, clearing the system statistics does not clear the individual MEP statistics because each MEP maintains its own unique counters.

All known OpCodes are listed in the transmit and receive columns. Different versions for the same OpCode are not displayed. This does not imply that the network element supports all functions listed in the table. Unknown OpCodes are dropped.

Use the tools dump eth-cfm top-active-meps command to display the top ten active MEPs in the system. This command provides a near real-time view of the busiest MEPs by displaying both active (not shutdown) and inactive (shutdown) MEPs in the system. ETH-CFM MEPs that are shut down continue to consume CPM resources because the main task is syncing the PDUs. The counts begin from the last time the command was issued using the clear option.

tools dump eth-cfm top-active-meps
Calculating most active MEPs in both direction without clear ...
 
MEP                  Rx Stats     Tx Stats     Total Stats
-------------------- ------------ ------------ ------------
12/4/28              3504497      296649       3801146
14/1/28              171544       85775        257319
14/2/28              150942       79990        230932
 
tools dump eth-cfm top-active-meps clear
Calculating most active MEPs in both direction with clear ...
 
MEP                  Rx Stats     Tx Stats     Total Stats
-------------------- ------------ ------------ ------------
12/4/28              3504582      296656       3801238
14/1/28              171558       85782        257340
14/2/28              150949       79997        230946
 
tools dump eth-cfm top-active-meps clear
Calculating most active MEPs in both direction with clear ...
 
MEP                  Rx Stats     Tx Stats     Total Stats
-------------------- ------------ ------------ ------------
12/4/28              28           2            30
14/1/28              5            2            7
14/2/28              3            2            5 

3.3.6. Synthetic Loss Measurement (ETH-SL)

Nokia applied pre-standard OpCodes 53 (Synthetic Loss Reply) and 54 (Synthetic Loss Message) for the purpose of measuring loss using synthetic packets.

Note: These OpCodes will be changed to the assigned standard values in a future release. This means that Release 4.0R6 is pre-standard and will not interoperate with future releases of SLM or SLR that support the standard OpCode values.

This synthetic loss measurement approach is a single-ended feature that allows the operator to run on-demand and proactive tests to determine “in” loss, “out” loss, and “unacknowledged” packets. This approach can be used between peer MEPs in both point-to-point and multipoint services. Only remote MEP peers within the association and matching the unicast destination respond to the SLM packet.

The specification uses various sequence numbers to determine in which direction the loss occurred. Nokia has implemented the required counters to determine loss in each direction. To use the gathered information correctly, the following terms are defined:

  1. count
    The count is the number of probes that are sent when the last frame is not lost. When the last frame or frames are lost, the count and unacknowledged equal the number of probes sent.
  2. out-loss (far-end)
    Out-loss packets are lost on the way to the remote node, from test initiator to the test destination.
  3. in-loss (near-end)
    In-loss packets are lost on the way back from the remote node to the test initiator.
  4. unacknowledged
    Unacknowledged packets are the number of packets at the end of the test that were not responded to.
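The relationship between these four terms can be sketched as follows. This is an illustrative Python model, not the actual SLM computation (which works from the sequence numbers carried in the SLM and SLR PDUs); `peer_received` stands for an assumed counter of probes that reached the responder.

```python
# Illustrative model of the directional loss terms defined above:
#   out-loss: probes lost on the way to the peer (far-end loss)
#   in-loss:  replies lost on the way back (near-end loss)
#   unacknowledged: trailing probes with no reply at the end of the test
# "peer_received" is an assumed counter, not a 7210 SAS field name.

def slm_summary(sent, peer_received, replies_received, unacknowledged):
    out_loss = sent - unacknowledged - peer_received   # lost toward the peer
    in_loss = peer_received - replies_received         # lost on the return path
    return {"out-loss": out_loss, "in-loss": in_loss,
            "unacknowledged": unacknowledged}

def saa_summary(result):
    """When SAA summarizes a run, unacknowledged packets count as in-loss."""
    return {"out-loss": result["out-loss"],
            "in-loss": result["in-loss"] + result["unacknowledged"]}

r = slm_summary(sent=100, peer_received=95, replies_received=93,
                unacknowledged=2)
s = saa_summary(r)   # per-probe detail is not kept for scheduled tests
```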

The per-probe loss indicators are available when looking at the on-demand test runs or the individual probe information stored in the MIB. When tests are scheduled by the Service Assurance Agent (SAA), the per-probe data is summarized and per-probe information is not maintained. Any “unacknowledged” packets are recorded as “in-loss” when summarized.

The on-demand function can be executed from the CLI or SNMP. On-demand tests are meant to provide the carrier with a way to perform on-the-spot testing; this approach is not meant as a method for storing archived data for later processing. The probe count for on-demand SLM has a range of 1 to 100, with configurable probe spacing between one second and ten seconds. This means a single test run can take up to 1000 seconds, although the majority of on-demand cases are likely to use 100 probes or fewer at a one-second interval. A node may only initiate and maintain a single active on-demand SLM test at any specific time. A maximum of one storage entry per remote MEP is maintained in the results table; subsequent runs to the same peer overwrite the results for that peer. This means, when using on-demand testing, the test should be run and the results checked before starting another test.

The proactive measurement functions are linked to SAA. This backend provides the scheduling, storage, and summarization capabilities. Scheduling may be either continuous or periodic. It also allows for the interpretation and representation of data that may enhance the specification. As an example, an optional TLV has been included to allow for the measurement of both loss and delay or jitter with a single test. The implementation does not cause any interoperability issues because the optional TLV is ignored by equipment that does not support it. In mixed-vendor environments, loss measurement continues to be tracked, but delay and jitter can only report round-trip times. It is important to point out that the round-trip times in these mixed-vendor environments include the remote node's processing time, because only two timestamps are included in the packet. In an environment where both nodes support the optional TLV to include timestamps, unidirectional and round-trip times are reported. Because all four timestamps are included in the packet, the round-trip time in this case does not include remote node processing time. Operators that wish to run delay measurement and loss measurement at different frequencies are free to run both ETH-SL and ETH-DM functions; ETH-SL does not replace ETH-DM. Service assurance is only briefly described here to provide some background on the basic functionality. For more information about SAA functions, see Service Assurance Agent overview.

The ETH-SL packet format contains a test-id that is internally generated and not configurable. The test-id is visible for the on-demand test in the display summary. It is possible for a remote node processing the SLM frames to receive overlapping test-ids as a result of multiple MEPs measuring loss to the same remote MEP. For this reason, the uniqueness of a test is based on the remote MEP-ID, the test-id, and the source MAC of the packet.

ETH-SL is applicable to Up and Down MEPs and, as per the recommendation, transparent to MIPs. There is no coordination between various fault conditions that could impact loss measurement. This is also true for conditions where MEPs are placed in the shutdown state as a result of linkage to a redundancy scheme such as MC-LAG. Loss measurement is based on the ETH-SL function and is not coordinated across different functional aspects of the network element. ETH-SL is supported on service-based MEPs.

It is possible that two MEPs may be configured with the same MAC on different remote nodes. This causes various issues in the FDB for multipoint services and is considered a misconfiguration for most services. It is possible to have a valid configuration where multiple MEPs on the same remote node have the same MAC; in fact, this is likely to happen. In this release, only the first responder is used to measure packet loss; the second response is dropped. Because the same MAC for multiple MEPs is only truly valid on the same remote node, this is an acceptable approach.

There is no way for the responding node to know when a test is completed. For this reason, a configurable inactivity timer determines the length of time a test is valid. The timer maintains an active test as long as it is receiving packets for that specific test, defined by the test-id, remote MEP-ID, and source MAC. When there is a gap between the packets that exceeds the inactivity timer, the responding node responds with a sequence number of one, regardless of the sequence number the instantiating node sent. This means the remote MEP accepts that the previous test has expired and that these probes are part of a new test. The default inactivity timer is 100 seconds, with a range of 10 to 100 seconds.
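The responder-side inactivity behavior described above can be sketched as follows. This is an illustrative Python model; the class name and the simplified sequence handling are assumptions, not the 7210 SAS implementation.

```python
# Illustrative model of the responder-side inactivity behavior: a test is
# identified by (test-id, remote MEP-ID, source MAC). If the gap since the
# last probe for that test exceeds the inactivity timer (default 100 s,
# range 10..100 s), the responder treats the probe as the start of a new
# test and responds with sequence number 1. Sequence handling is simplified.

class SlmResponder:
    def __init__(self, inactivity_timer_s=100):
        self.inactivity_timer_s = inactivity_timer_s
        self.tests = {}   # (test_id, remote_mep_id, src_mac) -> (last_rx, seq)

    def on_slm(self, now_s, test_id, remote_mep_id, src_mac):
        key = (test_id, remote_mep_id, src_mac)
        last = self.tests.get(key)
        if last is None or now_s - last[0] > self.inactivity_timer_s:
            seq = 1                       # gap too large: treat as a new test
        else:
            seq = last[1] + 1
        self.tests[key] = (now_s, seq)
        return seq

resp = SlmResponder(inactivity_timer_s=100)
assert resp.on_slm(0, 143, 4, "d0:0d:1e:00:01:02") == 1
assert resp.on_slm(1, 143, 4, "d0:0d:1e:00:01:02") == 2
assert resp.on_slm(200, 143, 4, "d0:0d:1e:00:01:02") == 1  # test expired
```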

The responding node is limited to a fixed number of SLM tests per platform. Any test that attempts to involve a node that is already actively processing more than the system limit of SLM tests shows up as “out-loss” or “unacknowledged” packets on the node that instantiated the test, because the packets are silently discarded at the responder. It is important for the operator to understand that this discard is silent; no log entries or alarms are raised. It is also important to keep in mind that these packets are ETH-CFM based, and the stated ETH-CFM receive rate for each platform must not be exceeded. ETH-SL provides a mechanism for operators to proactively trend packet loss for service-based MEPs.

3.3.6.1. Configuration example

The following figure shows the configuration required for proactive SLM test using SAA.

Figure 23:  SLM example 

The following sample shows the configuration for an on-demand test; Node1 is tested in this example. The SAA configuration does not include the accounting policy required to collect the statistics before they are overwritten. NODE2 does not have an SAA configuration; NODE2 includes only the configuration to build the MEP in the VPLS service context.

config>eth-cfm# info
----------------------------------------------
     domain 3 format none level 3
          association 1 format icc-based name "03-0000000100"
               bridge-identifier 100
               exit
               ccm-interval 1
               remote-mepid 101
          exit
     exit
----------------------------------------------
config>service>vpls# info
----------------------------------------------
     stp
          shutdown
     exit
     sap 1/1/3:100.100 create
     exit
     sap lag-1:100.100 create
          eth-cfm
               mep 100 domain 3 association 1 direction down
                    ccm-enable
                    mac-address d0:0d:1e:00:01:00
                    no shutdown
          exit
     exit
exit
no shutdown
----------------------------------------------
*A:7210SAS>config>service>vpls
 
*A:7210SAS>config>saa# info detail
----------------------------------------------
        test "SLM" owner "TiMOS CLI"
            no description
            type
                eth-cfm-two-way-slm 
00:01:22:22:33:34 mep 1 domain 1 association 1 size 0 fc "nc" count 100 timeout
 1 interval 1
            exit
            trap-gen
                no probe-fail-enable
                probe-fail-threshold 1
                no test-completion-enable
                no test-fail-enable
                test-fail-threshold 1
          exit
          continuous
          no shutdown
          exit
----------------------------------------------
*A:7210SAS>config>saa#
 

The following sample output demonstrates the different loss conditions that an operator may see. The total number of attempts is “100” because the final probe in the test was not acknowledged.

 
*A:7210SAS# show saa SLM42
 
===============================================================================
SAA Test Information
===============================================================================
Test name                    : SLM42
Owner name                   : TiMOS CLI
Description                  : N/A
Accounting policy            : None
Continuous                   : Yes
Administrative status        : Enabled
Test type                    : eth-cfm-two-way-slm 00:25:ba:02:a6:50 mep 4
                               domain 1 association 1 fc "h1" count 100
                               timeout 1 interval 1
Trap generation              : None
Test runs since last clear   : 117
Number of failed test runs   : 1
Last test result             : Success
-------------------------------------------------------------------------------
Threshold
Type        Direction Threshold  Value      Last Event          Run #
-------------------------------------------------------------------------------
Jitter-in   Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Jitter-out  Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Jitter-rt   Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Latency-in  Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Latency-out Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Latency-rt  Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Loss-in     Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Loss-out    Rising    None       None       Never               None     
            Falling   None       None       Never               None     
Loss-rt     Rising    None       None       Never               None     
            Falling   None       None       Never               None     
 
===============================================================================
Test Run: 116
Total number of attempts: 100
Number of requests that failed to be sent out: 0
Number of responses that were received: 100
Number of requests that did not receive any response: 0
Total number of failures: 0, Percentage: 0
 (in ms)            Min          Max      Average       Jitter
Outbound  :         8.07         8.18         8.10        0.014
Inbound   :        -7.84        -5.46        -7.77        0.016
Roundtrip :        0.245         2.65        0.334        0.025
Per test packet: 
  Sequence     Outbound      Inbound    RoundTrip Result
         1         8.12        -7.82        0.306 Response Received
         2         8.09        -7.81        0.272 Response Received
         3         8.08        -7.81        0.266 Response Received
         4         8.09        -7.82        0.270 Response Received
         5         8.10        -7.82        0.286 Response Received
         6         8.09        -7.81        0.275 Response Received
         7         8.09        -7.81        0.271 Response Received
         8         8.09        -7.82        0.277 Response Received
         9         8.11        -7.81        0.293 Response Received
        10         8.10        -7.82        0.280 Response Received
        11         8.11        -7.82        0.293 Response Received
        12         8.10        -7.82        0.287 Response Received
        13         8.10        -7.82        0.286 Response Received
        14         8.09        -7.82        0.276 Response Received
        15         8.10        -7.82        0.284 Response Received
        16         8.09        -7.82        0.271 Response Received
        17         8.11        -7.81        0.292 Response Received
===============================================================================

The following is an example of an on-demand test and the associated output. Only single test runs are stored and can be viewed after the fact.

#oam eth-cfm two-way-slm-test 00:25:ba:04:39:0c mep 4 domain 1 association 1 send-count 10 interval 1 timeout 1
Sending 10 packets to 00:25:ba:04:39:0c from MEP 4/1/1 (Test-id: 143)
Sent 10 packets, 10 packets received from MEP ID 3, (Test-id: 143)
(0 out-loss, 0 in-loss, 0 unacknowledged)
 
*A:7210SAS>show# eth-cfm mep 4 domain 1 association 1 two-way-slm-test
 
===============================================================================
Eth CFM Two-way SLM Test Result Table (Test-id: 143)
===============================================================================
Peer Mac Addr      Remote MEP       Count     In Loss    Out Loss         Unack
-------------------------------------------------------------------------------
00:25:ba:04:39:0c           3          10           0           0            0
===============================================================================
*A:7210SAS>show#
 

3.3.7. ETH-CFM QoS considerations

Up MEPs and Down MEPs have been aligned as of this release to better emulate service data. When an Up MEP or Down MEP is the source of the ETH-CFM PDU, the priority value configured as part of the configuration of the MEP or specific test is treated as the forwarding class (FC) by the egress QoS policy. If there is no egress QoS policy, the priority value is mapped to the CoS values in the frame; however, an egress QoS policy may overwrite this original value. The Service Assurance Agent (SAA) uses the fc {fc-name} option to accomplish similar functionality.

UP MEPs and DOWN MEPs terminating an ETH-CFM PDU will use the received FC as the return priority for the appropriate response, again feeding into the egress QoS policy as the FC.

ETH-CFM PDUs received on the MPLS-SDP bindings will now correctly pass the EXP bit values to the ETH-CFM application to be used in the response.

These are default behavioral changes without CLI options.

3.3.8. ETH-CFM configuration guidelines

The following are ETH-CFM configuration guidelines:

  1. The 7210 SAS platforms support only ingress MIPs in some services or bidirectional MIPs (that is, ingress and egress MIPs) in some services. Table 10, Table 11, Table 12, Table 13, and Table 14 list the MIP and MEP support for different services on different platforms.
  2. On 7210 SAS-D and 7210 SAS-Dxp, Up MEPs cannot be created by default on system bootup. Before Up MEPs can be created, the user must first use the configure>system>resource-profile context to explicitly allocate hardware resources for use with this feature. The software will reject the configuration to create an Up MEP and generate an error until resources are allocated. Refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Basic System Configuration Guide for more information.
  3. On the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C, no explicit resource allocation is required before configuring Up MEPs.
  4. On the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C, all MIPs are bidirectional. A MIP responds to OAM messages that are received from the wire and also responds to OAM messages that are being sent out to the wire. MIP support for SAP, SDP bindings, and services varies. Refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Services Guide for more information.
  5. On 7210 SAS platforms Ethernet Linktrace Response (ETH-LTR) is always sent out with priority 7.
  6. 7210 SAS platforms send out all CFM packets as in-profile. Currently, there is no mechanism in the SAA tools to specify the profile of the packet.
    This behavior is applicable to all 7210 platforms on which the feature is supported, in both access-uplink mode and network mode of operation. Refer to the 7210 SAS Software Release Notes 22.x.Rx for more information about which platforms and MEPs or MIPs support this feature.
  7. To enable DMM version 1 message processing on the 7210 SAS-D (access-uplink mode) and 7210 SAS-Dxp (access-uplink mode) platforms, the configure>eth-cfm>system>enable-dmm-version-interop command must be used.
  8. To achieve better scaling on the 7210 SAS-D and 7210 SAS-Dxp, Nokia recommends that the MEPs are configured at specific levels. The recommended levels are 0, 1, 3, and 7.
  9. Ethernet rings are not configurable under all service types. Any service restrictions for the MEP direction or MIP support will override the generic capability of the Ethernet ring MPs. For more information about Ethernet rings, refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Interface Configuration Guide.
  10. The supported minimum CCM transmission interval values vary depending on the MEP type and 7210 SAS platform. The following table lists the supported minimum CCM timer values.
    Table 16:  Minimum CCM transmission interval value support by 7210 SAS platform

    Platform             G.8032 down MEP    Service down MEP    Service up MEP
    -------------------  -----------------  ------------------  --------------
    7210 SAS-D           100 ms             100 ms              1 s
    7210 SAS-Dxp         10 ms              1 s                 1 s
    7210 SAS-K 2F1C2T    10 ms              1 s                 1 s
    7210 SAS-K 2F6C4T    10 ms              1 s                 1 s
    7210 SAS-K 3SFP+ 8C  10 ms              1 s                 1 s

  11. Sender-ID TLV processing is supported only for service MEPs. It is not supported for G.8032 MEPs.
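Item 7 above names the command path for enabling DMM version 1 message interoperability. As a hedged sketch (the context is taken directly from that item; verify the exact syntax against the command reference for your release), the configuration is:

```
*A:7210SAS# configure eth-cfm system
*A:7210SAS>config>eth-cfm>system# enable-dmm-version-interop
```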

3.4. OAM mapping

OAM mapping is a mechanism that enables OAM to be deployed end-to-end in a network where different OAM tools are used in different segments. For instance, an Epipe service could span the network using Ethernet access (with CFM used for OAM).

In the 7210 SAS implementation, the Service Manager (SMGR) is used as the central point of OAM mapping. It receives and processes the events from different OAM components, then decides the actions to take, including triggering OAM events to remote peers.

Fault propagation for CFM is by default disabled at the MEP level to maintain backward compatibility. When required, it can be explicitly enabled by configuration.

Fault propagation for a MEP can only be enabled when the MA is comprised of no more than two MEPs (point-to-point).
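As a sketch of how fault propagation is enabled per MEP: SR OS-based CLIs typically provide a fault-propagation-enable command in the MEP context with use-if-tlv and suspend-ccm options. The command name and option keywords below are assumptions to verify against the command reference for your release.

```
CLI Syntax:
config>service>epipe>sap>eth-cfm# mep mep-id domain md-index association ma-index direction {up | down}
config>service>epipe>sap>eth-cfm>mep# ccm-enable
config>service>epipe>sap>eth-cfm>mep# fault-propagation-enable {use-if-tlv | suspend-ccm}
```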

3.4.1. CFM connectivity fault conditions

A CFM MEP declares a connectivity fault when its defect flag is equal to or higher than its configured lowest defect priority. Depending on configuration, the defect can be any of the following:

  1. DefRDICCM
  2. DefMACstatus
  3. DefRemoteCCM
  4. DefErrorCCM
  5. DefXconCCM

The following additional fault condition applies to Y.1731 MEPs:

  1. reception of AIS for the local MEP level

Setting the lowest defect priority to allDef may cause problems when fault propagation is enabled on the MEP. In this scenario, when MEP A sends a CCM to MEP B with interface status down, MEP B responds with a CCM with the RDI bit set. If MEP A is configured to accept RDI as a fault, the two MEPs enter a deadlock state in which both declare a fault and can never recover. The default lowest defect priority is DefMACstatus, which does not cause this problem when the interface status TLV is used. It is also very important that different Ethernet OAM strategies do not overlap each other's span. In some cases, independent functions attempting to perform their normal fault handling can negatively impact each other. This interaction can lead to fault propagation in the direction toward the original fault, a false positive, or worse, a deadlock condition that may require the operator to modify the configuration to escape the condition. For example, overlapping Link Loss Forwarding (LLF) and ETH-CFM fault propagation could cause these issues.

The DefRemoteCCM fault is raised when any remote MEP is down. Therefore, whenever a remote MEP fails and fault propagation is enabled, a fault is propagated to the SMGR.
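The defect conditions listed above are ranked against the MEP's configured lowest defect priority, set with the low-priority-defect command. As a hedged example (the allDef keyword appears elsewhere in this chapter; the full keyword list below follows common SR OS usage and is an assumption to verify against the command reference):

```
CLI Syntax:
config>service>epipe>sap>eth-cfm>mep# low-priority-defect {allDef | macRemErrXcon | remErrXcon | errXcon | xcon}

Example:
config>service>epipe>sap>eth-cfm>mep 1# low-priority-defect allDef
```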

3.4.2. CFM fault propagation methods

When CFM is the OAM module at the other end, one of the following methods is used (depending on local configuration) to notify the remote peer:

  1. generating AIS for certain MEP levels
  2. sending CCM with interface status TLV “down”
  3. stopping CCM transmission
Note:

7210 SAS platforms expect that a fault notified using the interface status TLV is cleared explicitly by the remote MEP when the fault is no longer present on the remote node. On the 7210 SAS-D and 7210 SAS-Dxp, configuring CCM with the interface status TLV set to down is not recommended with a Down MEP, unless it is known that the remote MEP clears the fault explicitly.

Users can configure Up MEPs to use the interface status TLV with fault propagation. The special considerations described in the preceding note apply only to Down MEPs.

When a fault is propagated by the service manager, if AIS is enabled on the SAP/SDP-binding, then AIS messages are generated for all the MEPs configured on the SAP/SDP-binding using the configured levels.

The existing AIS procedure still applies even when fault propagation is disabled for the service or the MEP. For example, when a MEP loses connectivity to a configured remote MEP, it generates AIS if AIS is enabled. The procedure defined in this document introduces an additional fault condition for AIS generation, a fault propagated from the SMGR, which applies when fault propagation is enabled for the service and the MEP.

The transmission of CCM with interface status TLV must be done instantly without waiting for the next CCM transmit interval. This rule applies to CFM fault notification for all services.

Notifications from SMGR to the CFM MEPs for fault propagation should include a direction for the propagation (up or down: up means in the direction of coming into the SAP/SDP-binding; down means in the direction of going out of the SAP/SDP-binding), so that the MEP knows what method to use. For instance, an up fault propagation notification to a down MEP will trigger an AIS, while a down fault propagation to the same MEP can trigger a CCM with interface TLV with status down.

For a specific SAP/SDP-binding, CFM and SMGR can only propagate one single fault to each other for each direction (up or down).

When there are multiple MEPs (at different levels) on a single SAP/SDP-binding, the fault reported from CFM to SMGR will be the logical OR of results from all MEPs. Basically, the first fault from any MEP will be reported, and the fault will not be cleared as long as there is a fault in any local MEP on the SAP/SDP-binding.

3.4.3. Epipe services

Down and up MEPs are supported for Epipe services as well as fault propagation. When there are both up and down MEPs configured in the same SAP/SDP-binding and both MEPs have fault propagation enabled, a fault detected by one of them will be propagated to the other, which in turn will propagate fault in its own direction.

3.4.3.1. CFM detected fault

When a MEP detects a fault and fault propagation is enabled for the MEP, CFM communicates the fault to the SMGR, which marks the SAP/SDP-binding as faulty but still oper-up. CFM traffic can still be transmitted to or received from the SAP/SDP-binding to ensure that when the fault is cleared, the SAP returns to the normal operational state. Because the operational status of the SAP/SDP-binding is not affected by the fault, no fault handling is performed. For example, applications relying on the operational status are not affected.

If the MEP is an up MEP, the fault is propagated to the OAM components on the same SAP/SDP-binding; if the MEP is a down MEP, the fault is propagated to the OAM components on the mate SAP/SDP-binding at the other side of the service.

3.4.3.2. Service down

This section describes the scenario where an Epipe service is brought down by an administrative shutdown. When the service is administratively shut down, the fault is propagated to the SAP/SDP-bindings in the service.

3.4.3.3. LLF and CFM fault propagation

LLF and CFM fault propagation are mutually exclusive. CLI protection is in place to prevent enabling both LLF and CFM fault propagation in the same service, on the same node and at the same time. However, there are still instances where irresolvable fault loops can occur when the two schemes are deployed within the same service on different nodes. This is not preventable by the CLI. At no time should these two fault propagation schemes be enabled within the same service.

3.4.3.4. 802.3ah EFM OAM mapping and interaction with service manager

802.3ah EFM OAM declares a link fault when any of the following occurs:

  1. loss of OAMPDU for a certain period of time
  2. receiving OAMPDU with link fault flags from the peer

When 802.3ah EFM OAM declares a fault, the port goes into operation state down. The SMGR communicates the fault to CFM MEPs in the service. OAM fault propagation in the opposite direction (SMGR to EFM OAM) is not supported.
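As a hedged sketch, 802.3ah EFM OAM is enabled per port from the efm-oam context. The transmit-interval and multiplier parameters below, which govern the OAMPDU loss timeout mentioned above, are assumptions to verify against the Interface Configuration Guide.

```
CLI Syntax:
config>port>ethernet>efm-oam# transmit-interval interval [multiplier multiplier]
config>port>ethernet>efm-oam# no shutdown
```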

3.4.4. Fault propagation to access dot1q/QinQ ports with access-uplink ports

A fault on the access-uplink port brings down all access ports with services, independent of the encapsulation type of the access port (null, dot1q, or QinQ); that is, Link Loss Forwarding (LLF) is supported. The propagation of a fault from the access-uplink port to the access ports is based on configuration. A fault is propagated in a single direction only, from the access-uplink port to the access ports.

A fault on the access-uplink port is detected using Loss of Signal (LoS) and EFM-OAM.

The following figure shows local fault propagation.

Figure 24:  Local fault propagation  

3.4.4.1. Configuring fault propagation

The operational group functionality, also referred to as oper-group, is used to detect faults on access-uplink ports and propagate them to all interested access ports, regardless of their encapsulation. On the 7210 SAS operating in access-uplink mode, ports can be associated with oper-groups. Perform the following procedure to configure the oper-group functionality for fault detection on a port, and the monitor-oper-group functionality to track the oper-group status and propagate the fault based on the operational state of the oper-group:

  1. Create an oper-group (for example, “uplink-to-7210”).
  2. Configure an access-uplink port whose operational state is to be tracked (for example, 1/1/20) and associate it with the oper-group created in step 1 (that is, uplink-to-7210).
  3. Configure dot1q access ports for which the operational state must be driven by the operational state of the access-uplink port (for example, 1/1/1 and 1/1/5) as the monitor-oper-group.
  4. To detect a fault on the access-uplink port and change the operational state, use either the LoS or EFM OAM feature.
  5. When the operational state of the access-uplink port changes from up to down, the state of all access ports configured to monitor the group changes to down. Similarly, a change in state from down to up changes the operational state of the access port to up. When the operational state of the access port is brought down, the laser of the port is also shut down. The hold-timers command is supported to avoid the flapping of links.
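The steps above can be sketched in CLI form as follows. The oper-group name and port IDs come from the procedure; the exact contexts for associating a port with an oper-group are assumptions to verify against the Basic System Configuration Guide.

```
CLI Syntax:
config>system# oper-group group-name create
config>port>ethernet# oper-group group-name
config>port>ethernet# monitor-oper-group group-name

Example:
config>system# oper-group "uplink-to-7210" create
config>port>ethernet# oper-group "uplink-to-7210"          (on access-uplink port 1/1/20)
config>port>ethernet# monitor-oper-group "uplink-to-7210"  (on access ports 1/1/1 and 1/1/5)
```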

3.4.4.1.1. Configuration example for fault propagation using oper-group

The following is a sample oper-group system configuration output.

*A:7210SAS>config>system>oper-group# info detail
----------------------------------------------
            hold-time
                group-down 0
                group-up 4
            exit
----------------------------------------------
*A:7210SAS>config>system>oper-group#
Note:

For more details about the CLI, refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Basic System Configuration Guide.

3.4.5. Fault propagation to access port using CFM on uplinks

Note:

This feature is not supported on the 7210 SAS-D.

Fault propagation allows the user to track an Epipe service link connectivity failure and propagate the failure toward customer devices connected to access ports.

The following figure shows an example Epipe service architecture.

Figure 25:  Example Epipe service architecture 

In the preceding figure, the 7210 SAS node is deployed as the network interface device (NID) and the customer premises equipment (CPE) is connected to the 7210 SAS over a dot1q SAP (may be a single VLAN ID dot1q SAP, range dot1q SAP, or default dot1q SAP). The 7210 SAS adds an S-tag to the packet received from the customer, and the packet is transported over the backhaul network to the service edge, which is typically a 7750 SR node acting as an external network-to-network interface (ENNI) where the service provider (SP) is connected. At the ENNI, the 7750 SR hands off the service to the SP over a SAP.

Service availability must be tracked end-to-end between the uplink on the 7210 SAS and the customer hand-off point. If there is a failure or a fault in the service, the access port toward the SP is brought down. For this reason, a Down MEP is configured on the 7210 SAS access-uplink SAP facing the network, or alternatively, an Up MEP can be configured on the access port connected to the CPE. The Down MEP has a CFM session with a remote Up MEP configured on the SAP facing the SP node that is the service hand-off point toward the SP. The customer-facing access port and the access-uplink or access SAP facing the network with the Down MEP are configured in an Epipe service on the 7210 SAS.

Connectivity fault management (CFM) continuity check messages (CCMs) are enabled on the Down MEP and used to track end-to-end availability. On detection of a fault that is higher than the lowest-priority defect of the configured MEP, the access port facing the customer is brought down (with the port tx-off action), which immediately indicates the failure to the CPE device connected to the 7210 SAS and allows the node to switch to another uplink, if available.

To enable fault propagation from the access-uplink SAP to the access port, the user can configure an operational group using the defect-oper-group command. An operational group configured on a service object (the CFM MEP on the Epipe SAP in this use case) inherits the operational status of that service object and on change in the operational state notifies the service object that is monitoring its operational state. The user can configure monitoring using the monitor-oper-group command under the service object (the access port connected to the customer in this use case).

For this use case, the user can configure an operational group under the CFM MEP to track the CFM MEP fault status so that when the CFM MEP reports a fault, the corresponding access ports that are monitoring the CFM MEP state are brought down (by switching off the laser on the port, similar to link loss forwarding (LLF)). When the CFM MEP fault is cleared, the operational group must notify the service object monitoring the MEP (that is, the access port) so that the access port can be brought up (by switching on the laser on the port, similar to LLF). The user can use the low-priority-defect command to configure the CFM session events that cause the MEP to enter the fault state.

If the MEP reports a fault greater than the configured low-priority defect, the software brings down the operational group so that the port configured to monitor that operational group becomes operationally down. After the MEP fault is cleared (that is, the MEP reports a fault lower than the configured low-priority defect), the software brings up the operational group so that the port monitoring the operational group becomes operationally up.

Fault detection is supported in only one direction from the MEP toward the other service endpoint. The fault is propagated in the opposite direction from the MEP toward the access port. Fault detection and propagation must not be configured in the other direction.

Note:

This feature does not support two-way fault propagation and must not be used for scenarios that require it.

3.4.5.1. Configuring an operational group for fault propagation

Before configuring an operational group, ensure that the low-priority-defect allDef option is configured on the MEP so that a fault is raised for all errors or faults detected by the MEP.

Use the following syntax to configure an operational group for a Down MEP configured in an Epipe service on an uplink SAP.

CLI Syntax:
config>service>epipe>sap>eth-cfm>mep mep-id# defect-oper-group name

The following example shows the command usage.

Example:
config>service>epipe>sap>eth-cfm>mep 1# defect-oper-group “ccm-track-uplink”

The access port on the 7210 SAS node to which the enterprise CPE is connected must be configured with the corresponding monitor object using the monitor-oper-group command.

Use the following syntax to configure monitoring of an operational group.

CLI Syntax:
config>lag# monitor-oper-group
config>port>ethernet# monitor-oper-group

The following example shows the command usage.

Example:
config>port>ethernet# monitor-oper-group “ccm-track-uplink”
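Putting the preceding syntax together, a minimal end-to-end sketch for this use case might look as follows. The group name is illustrative, the low-priority-defect allDef setting follows the guidance at the start of this section, and the initial creation of the operational group under config>system is an assumption to verify against the Basic System Configuration Guide.

```
Example:
config>system# oper-group "ccm-track-uplink" create
config>service>epipe>sap>eth-cfm>mep 1# low-priority-defect allDef
config>service>epipe>sap>eth-cfm>mep 1# defect-oper-group "ccm-track-uplink"
config>port>ethernet# monitor-oper-group "ccm-track-uplink"   (access port to the CPE)
```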

3.4.5.2. Fault propagation restrictions

The following restrictions apply for fault propagation:

  1. This feature is supported only on the 7210 SAS-Dxp, 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C.
  2. This feature is supported only for an Epipe service. It is not supported for a VPLS or R-VPLS service.
  3. The Epipe service can be configured with two access SAPs, or two access-uplink SAPs, or one access SAP and one access-uplink SAP. SDP bindings (spoke-SDP) are not supported:
    1. On the 7210 SAS-K 2F6C4T and 7210 SAS-K 3SFP+ 8C, the SAPs in the Epipe service can be configured on access, access-uplink, or hybrid ports or LAGs.
    2. On the 7210 SAS-Dxp and 7210 SAS-K 2F1C2T, the SAPs in the Epipe service can be configured on access or access-uplink ports or LAGs.
    3. All supported encapsulations can be used with access, access-uplink, and hybrid ports. That is, unlike LLF, which provides a similar capability but only for null SAPs, this feature is not restricted to null-encapsulated ports.
  4. A MEP (Up or Down) with defect-oper-group enabled can be configured on one of the SAPs of the Epipe service:
    1. The Down MEP, which is used to track service uplink connectivity, must be configured on a different port, and not the port to which the fault is propagated. That is, the Down MEP with defect-oper-group configured and the port with monitor-oper-group configured must not be the same.
    2. The Up MEP used to track service uplink connectivity must be configured on the same port to which the fault is propagated. That is, the Up MEP with defect-oper-group configured and the port with the monitor-oper-group configured must be the same. In this case, it is mandatory for the user to enable config>service>epipe>sap>ignore-oper-down so that the Up MEP can continue to process the CFM messages received from the remote end. When an Up MEP is used, the fault must not be propagated to the other local endpoint in an Epipe service because it is likely to block all communications with the remote endpoint of an Epipe service.
    3. The defect-oper-group configuration can only track the fault state of a single MEP. Multiple MEPs are not supported. Consequently, only a single uplink can be tracked in an Epipe service.
  5. The monitor-oper-group command can be configured on any port or LAG other than the one on which the CFM defect-oper-group command is configured. It does not need to be restricted to the port or LAG that has a SAP configured in the same Epipe service, even though this is the most common use case. Allowing the configuration of the monitor-oper-group command on any port allows the user to have one service to track multiple services between the same two endpoints and propagate the fault to all the service ports that use the same uplink or path for service connectivity to the same remote endpoint.

3.5. Service Assurance Agent overview

In the last few years, service delivery to customers has drastically changed. The introduction of Broadband Service Termination Architecture (BSTA) applications such as Voice over IP (VoIP), TV delivery, video, and high-speed Internet services forces carriers to produce services where the health and quality of Service Level Agreement (SLA) commitments are verifiable to the customer and internally within the carrier.

SAA is a feature that monitors network operations using statistics such as jitter, latency, response time, and packet loss. The information can be used for troubleshooting network problems, problem prevention, and network topology planning.

The results are saved in SNMP tables that are queried by either the CLI or a management system. Threshold monitors allow for both rising and falling threshold events to alert the provider if SLA performance statistics deviate from the required parameters.

SAA allows two-way timing for several applications. This provides the carrier and their customers with data to verify that the SLA agreements are being correctly enforced.

3.5.1. Traceroute implementation

In the 7210 SAS, for various applications such as IP traceroute, the control CPU inserts the timestamp in software.

When interpreting these timestamps, care must be taken because some nodes are not capable of providing timestamps; as such, timestamps must be associated with the same IP address that is returned to the originator to indicate which hop is being measured.

3.5.2. NTP

Because NTP precision can vary (+/- 1.5ms between nodes even under best case conditions), SAA one-way latency measurements might display negative values, especially when testing network segments with very low latencies. The one-way time measurement relies on the accuracy of NTP between the sending and responding nodes.

3.5.3. Ethernet CFM

Loopback (LBM), linktrace (LTR), and two-way delay measurement (Y.1731 ETH-DMM) tests can be scheduled using SAA. Additional timestamping is required for non-Y.1731 delay measurement tests, specifically loopback and linktrace tests. An organization-specific TLV is used on both sender and receiver nodes to carry the timestamp information. Currently, timestamps are only applied by the sender node. This means that any time measurements resulting from loopback and linktrace tests include the packet processing time of the remote node. Because Y.1731 ETH-DMM uses a four-timestamp approach to remove the remote processing time, it should be used for accurate delay measurements.

The SAA versions of the CFM loopback, linktrace, and ETH-DMM tests support send-count, interval, timeout, and FC. The existing CFM OAM commands have not been extended to support send-count and interval natively. The summary of the test results is stored in an accounting file that is specified in the SAA accounting-policy.

3.5.4. Writing SAA results to accounting files

SAA supports writing test statistics to an accounting file. When results are calculated, an accounting record is generated.

To write the SAA results to an accounting file in a compressed XML format at the termination of every test, the results must be collected, and, in addition to creating the entry in the appropriate MIB table for this SAA test, a record must be generated in the appropriate accounting file.

3.5.4.1. Accounting file management

Because the SAA accounting files have a similar role to existing accounting files that are used for billing purposes, existing file management information is leveraged for these accounting (billing) files.

3.5.4.2. Assigning SAA to an accounting file ID

When an accounting file has been created, accounting information can be specified and is collected by the config>log>acct-policy> to file log-file-id context.
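As a hedged sketch of this assignment (the record saa keyword and the accounting-policy assignment under the SAA test follow common SR OS log configuration and are assumptions to verify against the System Management Guide):

```
CLI Syntax:
config>log# file-id log-file-id create
config>log>file-id# location cflash-id
config>log# accounting-policy acct-policy-id create
config>log>acct-policy# record saa
config>log>acct-policy# to file log-file-id
config>saa>test# accounting-policy acct-policy-id
```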

3.5.4.3. Continuous testing

When you configure a test, use the config>saa>test>continuous command to make the test run continuously. Use the no continuous command to disable continuous testing and shutdown to disable the test completely. When you have configured a test as continuous, you cannot start or stop it by using the saa test-name [owner test-owner] {start | stop} [no-accounting] command.
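As a sketch, a continuous test is configured as follows, while a non-continuous test is started and stopped with the operational command quoted above:

```
CLI Syntax:
config>saa>test# continuous
config>saa>test# no shutdown

Example (non-continuous test):
saa "abc" start
saa "abc" stop
```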

3.5.5. Configuring SAA test parameters

The following is a sample SAA configuration output.

 
*A:7210 SAS>config>saa# info
----------------------------------------------
        test "abc"
            shutdown
            description "test"
            jitter-event rising-threshold 100 falling-threshold 10
            loss-event rising-threshold 300 falling-threshold 30
            latency-event rising-threshold 100 falling-threshold 20
        exit
----------------------------------------------
*A:7210 SAS>config>saa# 
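The sample above configures only event thresholds; before the test can run, a test type must also be configured and the test enabled. The following sketch assumes an ICMP ping test type; the icmp-ping parameters and the target address are illustrative assumptions to verify against the command reference.

```
Example:
config>saa>test "abc"# type
config>saa>test>type# icmp-ping 10.0.0.1 count 10 interval 1
config>saa>test>type# exit
config>saa>test# no shutdown
```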
 

3.6. Y.1564 testhead OAM tool

The 7210 SAS provides support for both the original Y.1564 testhead OAM tool implementation and an enhanced implementation of the tool, referred to as the service test testhead OAM tool. The 7210 SAS-D supports only the original Y.1564 testhead OAM tool implementation. The 7210 SAS-K 2F1C2T and 7210 SAS-K 2F6C4T support both the original and the enhanced implementations. The 7210 SAS-K 3SFP+ 8C supports only the enhanced implementations. For information about the enhanced implementation, see Service test testhead OAM Tool for the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C in this section.

Note:

  1. Port loopback with MAC swap and the Y.1564 testhead OAM tool is only supported on the 7210 SAS-D and 7210 SAS-Dxp, and only for Epipe and VPLS services.
  2. Per-SAP loopback with MAC swap and the Y.1564 testhead OAM tool is supported on the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C, and only for Epipe and VPLS services. Port loopback with MAC swap is not supported on these platforms.

ITU-T Y.1564 defines the out-of-service test methodology to be used and the parameters to be measured to test service SLA conformance during service turn-up. It primarily defines two test phases. The first test phase is the service configuration test, which consists of validating whether the service is configured correctly. As part of this test, the throughput, frame delay, frame delay variation (FDV), and frame loss ratio (FLR) are measured for each service. This test is typically run for a short duration. The second test phase consists of validating the quality of service delivered to the end customer and is referred to as the service performance test. These tests are typically run for a longer duration. All traffic is generated up to the configured rate for all the services simultaneously, and the service performance parameters are measured for each service.

The 7210 SAS supports the service configuration test for a user-configured rate and measurement of delay, delay variation, and frame loss ratio with the testhead OAM tool. The testhead OAM tool supports bidirectional measurement. On the 7210 SAS-D, the testhead OAM tool can generate test traffic for only one service at a specific time. On the 7210 SAS-K 2F1C2T and 7210 SAS-K 2F6C4T, the testhead OAM tool can generate test traffic for up to four services simultaneously. On the 7210 SAS-K 3SFP+ 8C, the testhead OAM tool can generate test traffic for up to eight streams or services simultaneously. The tool validates if the user-specified rate is available and computes the delay, delay variation, and frame loss ratio for the service under test at the specified rate. The tool is capable of generating traffic at a rate of up to 1 Gb/s on the 7210 SAS-D, 7210 SAS-K 2F1C2T, and 7210 SAS-K 2F6C4T. The tool is capable of generating traffic at a rate of up to approximately 10 Gb/s on the 7210 SAS-Dxp and 7210 SAS-K 3SFP+ 8C. On some 7210 SAS devices, the resources needed for this feature must be configured on the front-panel port; on other 7210 SAS devices, the resources needed for this feature are automatically allocated by software from the internal ports. See Configuration guidelines for information about which 7210 SAS platforms need user configuration.

The following figure shows the remote loopback required and the flow of the frame through the network generated by the testhead tool.

Figure 26:  7210 SAS acting as traffic generator and traffic analyzer 

The tool allows the user to specify the frame payload header parameters independent of the test SAP configuration parameters. This capability gives the user the flexibility to test different possible frame header encapsulations. The user can specify the appropriate VLAN tags, Ethertype, and Dot1p values independent of the SAP configuration, as with actual service testing. In other words, the software does not use parameters such as the SAP ID, source MAC, and destination MAC to build the test frames when the testhead tool is invoked; it uses only the parameters specified in the frame-payload CLI command. The software does not verify that the specified parameters match the service configuration used for testing; for example, it does not check whether the specified VLAN tags match the SAP tags, or whether the specified Ethertype matches the user-configured port Ethertype. It is expected that the user configures the frame-payload appropriately so that the traffic matches the SAP configuration.

The 7210 SAS-D and 7210 SAS-Dxp support the Y.1564 testhead for performing CIR or PIR tests in both color-blind and color-aware modes. In color-aware mode, users can perform service turn-up tests to validate the performance characteristics (delay, jitter, and loss) for the committed rate (CIR) and the excess rate above CIR (that is, the PIR rate). The testhead OAM tool uses the in-profile and out-of-profile packet marking values to differentiate between committed traffic and traffic in excess of the CIR. Traffic within the CIR (that is, committed traffic) is expected to be treated as in-profile traffic in the network, and traffic in excess of the CIR (that is, PIR traffic) is expected to be treated as out-of-profile traffic, allowing the network to prioritize committed traffic over PIR traffic. The testhead OAM tool allows the user to configure individual thresholds for green (in-profile) packets and yellow (out-of-profile) packets. The tool compares the measured values for green and yellow packets against the configured thresholds and reports success or failure.

Note:

CIR and PIR tests in color-aware mode are only supported on the 7210 SAS-D and 7210 SAS-Dxp.

The 7210 SAS testhead OAM tool supports the following functionality:

  1. Supports configuration of only access SAPs as the test measurement point.
  2. Supports all port encapsulations on all service SAP types, with some exceptions as indicated in the following Configuration guidelines.
  3. Supported for SAPs configured for VPLS and Epipe service. SAPs configured for other services are not supported.
  4. Supports two-way measurement of service performance metrics. The tests measure throughput, frame delay, frame delay variation, and frame loss ratio.
  5. On the 7210 SAS-D and 7210 SAS-Dxp, for two-way measurement of the service performance metrics such as frame delay and frame delay variation, test frames (also known as marker packets) are injected into the test flow at a low rate at periodic intervals. Frame delay and frame delay variation is computed for these frames. On the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C, there are no separate marker packets used for delay measurements. Instead, samples of the test traffic are used for measuring the delay. Hardware-based timestamps are used for delay computation.
  6. The 7210 SAS supports configuration of a rate value and provides an option to measure performance metrics. The testhead OAM tool generates traffic up to the specified rate and measures service performance metrics such as delay, jitter, and loss for in-profile and out-of-profile traffic.
  7. The testhead tool can generate traffic at a maximum rate of 1 Gb/s on the 7210 SAS-D, 7210 SAS-K 2F1C2T, and 7210 SAS-K 2F6C4T. The testhead tool can generate traffic at a maximum rate of approximately 10 Gb/s on the 7210 SAS-Dxp and 7210 SAS-K 3SFP+ 8C. The CIR and PIR can be specified by the user. These rates are rounded off to the nearest hardware-supported rate by using the adaptation rule configured by the user.
  8. Allows the user to specify a frame size ranging from 64 bytes to 9212 bytes. Only a single frame size can be specified.
  9. The user can configure the following frame payload types: L2 payload, IP payload, and IP/TCP/UDP payload. The testhead tool uses these configured values for the IP header fields and TCP header fields in the generated frames. The user can optionally specify the data pattern to use in the payload field of the frame/packet.
  10. Allows the user to configure the duration of the test up to a maximum of 24 hours, 60 minutes, and 60 seconds. The test performance measurements are done after the specified rate is achieved. At any time, the user can query the system for the current status and progress of the test.
  11. Supports configuration of the forwarding class (FC). The user is expected to define consistent QoS classification policies that map the packet header fields to the configured FC on the test SAP ingress on the local node, on the nodes through which the service transits in the network, and on the SAP ingress of the remote node.
  12. On the 7210 SAS-D, 7210 SAS-K 2F1C2T, and 7210 SAS-K 2F6C4T, the user can use the testhead tool to configure a test profile, also known as a policy template, that defines the test configuration parameters. The user can start a test using a preconfigured test policy for a specific SAP and service. The test profile allows the user to configure the acceptance criteria, that is, thresholds that indicate the acceptable range for the service performance metrics. For more information, see Configuring testhead tool parameters. At the end of the test, the measured values for frame delay (FD), frame delay variation (FDV), and frame loss ratio (FLR) are compared against the configured thresholds to determine the pass or fail criteria; if the test results exceed the configured thresholds, an event is generated and a trap is sent to the management station. If the acceptance criteria are not configured, the test result is declared a pass if the throughput is achieved and frame loss is 0 (zero).
  13. ITU-T Y.1564 specifies the following tests:
    1. service configuration tests
      Short duration tests used to check if the configuration is correct and can meet requested SLA metrics such as throughput, delay, and loss:
      1. CIR configuration test (color-aware and non-color aware)
      2. PIR configuration test (color-aware and non-color aware)
      3. traffic policing test (color-aware and non-color aware)
    2. service performance test
      Long duration test used to check the service performance metrics.
    Service configuration tests can be run by setting the rate value appropriately for the specific test. For example, traffic policing tests can be executed by specifying a PIR to be 125% of the desired PIR. A traffic policing test can be executed in either color-aware mode or color-blind (non-color-aware) mode. The 7210 SAS-D and 7210 SAS-Dxp support both color-aware and color-blind mode. The 7210 SAS-K 2F1C2T and 7210 SAS-K 2F6C4T support color-blind mode only. The 7210 SAS-K 3SFP+ 8C supports color-blind mode only using the service test testhead OAM tool.
  14. ITU-T Y.1564 specifies separate test methodologies for color-aware and non-color-aware tests. The standard requires a single test to be capable of generating both green/in-profile traffic for rates within the CIR and yellow/out-of-profile traffic for rates above the CIR and within the EIR. Because SAP ingress does not support color-aware metering, it is not possible to support EIR color-aware and traffic policing color-aware tests end-to-end in a network (that is, from test SAP to test SAP). It is, however, possible to use the tests to measure the performance parameters from the other endpoint (for example, an access-uplink SAP) in the service, through the network, to the remote test SAP, and back again to the local test SAP.
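The rounding of configured CIR/PIR values to hardware-supported rates under a user-selected adaptation rule (item 7 in the list above) can be sketched as follows. This is an illustrative model only: the 100 kb/s hardware rate step is a hypothetical value, as the actual rate granularity is platform-specific.

```python
# Sketch of mapping a configured rate to a hardware-supported rate under
# the three common adaptation rules. The step size is hypothetical.

def adapt_rate(configured_kbps: int, rule: str, step_kbps: int = 100) -> int:
    """Return the hardware rate chosen for a configured rate.

    rule: 'max'     -> highest hardware rate not exceeding the configured rate
          'min'     -> lowest hardware rate not below the configured rate
          'closest' -> hardware rate closest to the configured rate
    """
    floor = (configured_kbps // step_kbps) * step_kbps
    ceil = floor if floor == configured_kbps else floor + step_kbps
    if rule == "max":
        return floor
    if rule == "min":
        return ceil
    if rule == "closest":
        return floor if configured_kbps - floor <= ceil - configured_kbps else ceil
    raise ValueError(f"unknown adaptation rule: {rule}")
```

For example, with a 100 kb/s step, a configured rate of 1250 kb/s maps to 1200 kb/s under max and 1300 kb/s under min.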

3.6.1. Prerequisites for using the testhead tool

This section describes the prerequisites the user must be aware of before using the testhead OAM tool. The generic prerequisites applicable to all 7210 SAS platforms are listed first, followed by prerequisites specific to the 7210 SAS-D and 7210 SAS-Dxp, and finally prerequisites specific to the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C.

3.6.1.1. Generic prerequisites for use of the Y.1564 testhead OAM tool (applicable to all 7210 SAS platforms)

  1. The user is expected to configure the appropriate ACL and QoS policies so that testhead traffic is processed as intended by the local and remote node/SAP. In particular, QoS policies must ensure that the rates of the SAP ingress meters equal or exceed the rate configured for the testhead tests, and that the classification policies map testhead packets to the appropriate FCs/queues using the packet header fields configured in the frame-payload (the FC classification must match the FC specified in the testhead-test CLI command). Similarly, ACL policies must ensure that testhead traffic is not blocked.
  2. The testhead OAM tool does not check the state of the service or the SAPs on the local endpoint before initiating the tests. The operator must ensure that the service and SAPs used for the test are up before the tests are started; if they are not, the testhead tool reports a failure.
  3. The configuration of the ports used for validation (for example, the access port on which the test SAP is configured and the access-uplink/network port) must not be modified after the testhead tool is invoked. Any modifications can be made only when the testhead tool is not running.
  4. The testhead tool can be used to test only unicast traffic flows. It must not be used to test BUM traffic flows.
  5. Only out-of-service performance metrics can be measured using the testhead OAM tool. For in-service performance metrics, the user can use SAA-based Y.1731/CFM tools or OAM-PM-based tools.

The following list describes some prerequisites for using the testhead tool on the 7210 SAS-D and 7210 SAS-Dxp:

  1. The configuration guidelines and prerequisites that apply when the port loopback with MAC swap feature is used standalone also apply when it is used with the testhead tool. For more information, see the description in the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Interface Configuration Guide.
  2. The user must allocate resources for ACL MAC criteria in the ingress-internal-tcam using the config>system>resource-profile>ingress-internal-tcam>acl-sap-ingress>mac-match-enable command, and must also allocate resources to egress ACL MAC or IPv4 or IPv6 64-bit criteria (using the config>system>resource-profile>egress-internal-tcam>acl-sap-egress>mac-ipv4-match-enable or mac-ipv6-64bit-enable commands). The testhead tool uses resources from these pools; if no resources are allocated to these pools, or no resources are available for use in them, the testhead tool fails to function. The testhead tool needs a minimum of about 6 entries from the ingress-internal-tcam pool and 2 entries from the egress-internal-tcam pool. If the user allocates resources to egress ACL IPv6 128-bit match criteria (using the config>system>resource-profile>egress-internal-tcam>acl-sap-egress>ipv6-128bit-match-enable command), the testhead tool fails to function.
  3. For both Epipe and VPLS services, the test can be used to perform only a point-to-point test between a specific source and destination MAC address. Port loopback with MAC swap functionality must be used for both Epipe and VPLS services. The configured source and destination MAC addresses are associated with the two SAPs configured in the service and used as the two endpoints. In other words, the user-configured source and destination MAC addresses are used by the testhead tool on the local node to identify packets as belonging to the testhead application and process them appropriately; at the remote end, these packets are processed by the port loopback with MAC swap application.
  4. Configure the MACs (source and destination) statically for VPLS service.
  5. Port loopback must be in use on both endpoints (that is, on the local node on the port on which the test SAP is configured, and on the remote node on the port on which the remote SAP is configured) for both Epipe and VPLS services. Port loopback with MAC swap must be set up by the user on both the local end and the remote end before invoking the testhead tool. These must match appropriately for traffic to flow; otherwise, there is no traffic flow and the testhead tool reports a failure at the end of the test run.
  6. Port loopback with MAC swap must be used at both ends, and if any other services/SAPs are configured on the test port, they must be shut down to avoid packets being dropped on the non-test SAPs. The frames generated by the testhead tool egress the access SAP and ingress back on the same port, using the resources of the loopback ports configured for use with this tool (one for the testhead and another for the MAC swap functionality), before being sent out the network side (typically an access-uplink SAP) to the remote end. At the remote end, the frames egress the SAP under test and ingress back in through the same port, going through another loopback (with MAC swap), before being sent back to the local node where the testhead application is running.
  7. The FC specified is used to determine the queue to enqueue the marker packets generated by testhead application on the egress of the test SAP on the local node.
  8. The use of port loopback is service affecting; it affects all services configured on the port. It is not recommended to use a test SAP on a port that is used to transport service packets toward the core: because a port loopback is required for the testhead tool to function correctly, doing so might result in loss of connectivity to the node when in-band management is in use, and all services being transported to the core are affected.
  9. Port loopback also affects the service being delivered on the test SAP. Only out-of-service performance metrics can be measured using the testhead OAM tool. For in-service performance metrics, the user can use SAA-based Y.1731/CFM tools.
  10. The testhead tool uses marker packets with special header values. The QoS and ACL policies must ensure that marker packets receive the same treatment as testhead traffic. Marker packets are IPv4 packets with an IP option set and the IP protocol set to 252. They use the source and destination MAC addresses, Dot1p, IP ToS, IP DSCP, IP TTL, and IP source and destination addresses as configured in the frame-payload; they do not use the IP protocol and TCP/UDP port numbers from the configured frame-payload. If the payload-type is “l2” and these values are not explicitly configured in the frame-payload, the IP addresses are set to 0.0.0.0 and the IP TTL, IP ToS, and DSCP are set to 0. The Ethertype configured in the frame-payload is not used for marker packets; it is always set to 0x0800 (the Ethertype for IPv4) because marker packets are IPv4 packets. QoS policies applied in the network must be configured so that marker packets are classified in the same way as service packets. An easy way to do this is to use the header fields that are common to marker packets and service packets, such as the source and destination MAC addresses, VLAN ID, Dot1p, source and destination IPv4 addresses, IP DSCP, and IP ToS. Use of other fields, which differ between marker packets and service packets, is not recommended. ACL policies in the network must ensure that marker packets are not dropped.
  11. The MAC swap loopback port, the testhead loopback port and the uplink port must not be modified after the testhead tool is invoked. Any modifications can be made only when the testhead tool is not running.
  12. Link-level protocols (for example, LLDP and EFM) must not be enabled on the port on which the test SAP is configured. In general, no other traffic must be sent out of the test SAP while the testhead tool is running.
  13. The frame payload must be configured so that the number of tags matches the number of SAP tags. For example, for a 0.* SAP, the frame payload must be untagged or priority tagged, and it cannot contain another tag following the priority tag.
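The marker-packet header rules in item 10 above can be summarized in a small sketch. The dict-based frame-payload model and field names here are illustrative assumptions, not the actual implementation.

```python
# Sketch of the marker-packet header rules: marker packets are always IPv4
# (Ethertype 0x0800, IP protocol 252), reuse most fields from the configured
# frame-payload, and zero the IP fields when they are not explicitly
# configured (the "l2" payload-type case). Field names are illustrative.

def marker_header(frame_payload: dict) -> dict:
    """Return the header fields a marker packet would carry."""
    header = {
        "ethertype": 0x0800,  # always IPv4; the configured Ethertype is ignored
        "ip_protocol": 252,   # fixed IP protocol number for marker packets
        "src_mac": frame_payload["src_mac"],
        "dst_mac": frame_payload["dst_mac"],
        "dot1p": frame_payload.get("dot1p"),
    }
    # IP fields come from the frame-payload when configured; otherwise they
    # default to zero values
    zero_defaults = {"src_ip": "0.0.0.0", "dst_ip": "0.0.0.0",
                     "ip_ttl": 0, "ip_tos": 0, "ip_dscp": 0}
    for name, zero in zero_defaults.items():
        header[name] = frame_payload.get(name, zero)
    # the configured IP protocol and TCP/UDP ports are intentionally not copied
    return header
```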

The following list describes the prerequisites for using the Y.1564 testhead OAM functionality on the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C.

  1. For both Epipe and VPLS services, the test can be used to perform only a point-to-point test between a specific source and destination MAC address. SAP loopback with MAC swap functionality must be used for both Epipe and VPLS services. The configured source and destination MAC addresses are associated with the two SAPs configured in the service and used as the two endpoints.
  2. It is recommended to configure the MACs (source and destination) statically for VPLS service.
  3. Software validates the frame payload to ensure the VLAN tag matches the SAP configuration used for the test. The VLAN tag (or untagged) is the only field validated by the software. No other fields are validated by software.
  4. SAP loopback needs to be configured only on the remote endpoint, that is, on the remote node on which the remote SAP is configured in both Epipe and VPLS services. SAP loopback with MAC swap must be set up by the user on the remote end before invoking the testhead tool. The testhead tool injects packets on SAP ingress; therefore, a SAP loopback on the local endpoint (where the test SAP is located) is not required.
  5. Use of per-SAP loopback does not affect other services configured on the same port. It does, however, affect the service being delivered on the looped-back SAP.
  6. A cookie that identifies testhead packets is added after the protocol headers of every frame generated by the testhead tool. The cookie follows immediately after the protocol header configured by the user; for example, in a TCP/IP packet configured by the user, the cookie is added immediately after the TCP/IP header and forms the first 8 bytes of the payload, after which the data pattern specified by the user is added to the packet.
  7. The 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C do not require any loopback ports to be assigned for the testhead OAM tool.
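The cookie placement described in item 6 above can be illustrated with a short sketch. The cookie value and the byte-level model are hypothetical; only the layout (protocol headers, then an 8-byte cookie, then the data pattern) follows the description above.

```python
# Sketch of the testhead frame layout: headers, an 8-byte identifying
# cookie, then the user data pattern repeated to fill the frame.
# The cookie value below is a placeholder, not the tool's actual value.

COOKIE = b"\xde\xad\xbe\xef\xca\xfe\xf0\x0d"  # placeholder 8-byte cookie

def build_frame(headers: bytes, data_pattern: bytes, frame_size: int) -> bytes:
    payload_len = frame_size - len(headers) - len(COOKIE)
    if payload_len < 0:
        raise ValueError("frame size too small for headers and cookie")
    # repeat the data pattern to fill the remaining payload bytes
    filler = (data_pattern * (payload_len // len(data_pattern) + 1))[:payload_len]
    return headers + COOKIE + filler
```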

3.6.2. Service test testhead OAM tool for the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C

The 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C support the service test framework through the service test testhead OAM tool. This tool allows configuration of multiple streams (also called flows) for which service performance metrics can be obtained. See the Platform Scaling Guide for the number of streams supported by each platform. With multiple streams, it is possible, for example, to configure two service tests to validate two services each with two forwarding classes (FCs), to validate a single service with four FCs, or to validate a mix of services and FCs, as long as the number of streams is within the limit supported by the platform.

A set of streams under a single service test can be grouped together using the service-stream configuration commands, and each stream can be configured with the following options:

  1. The CIR and/or PIR can be configured for each of the streams, along with the frame payload contents and the frame size.
  2. Different acceptance criteria per stream can be configured and used to determine pass/fail criteria for the stream, along with the ability to monitor the streams that are in progress.
  3. For each stream, it is possible to use a single command to run a service configuration test, CIR test, PIR test, and service performance test concurrently instead of running each test individually.
  4. Rather than using threshold parameters to determine the pass/fail criteria for a test, it is possible to configure the margin by which the measured throughput may differ from the configured throughput to determine the pass/fail criteria. The margin is configured using the use-m-factor CLI command.
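As a rough illustration of the margin-based pass/fail check in item 4, the following sketch treats the m-factor as an absolute margin subtracted from the configured rate. The exact semantics of use-m-factor may differ, so this interpretation is an assumption.

```python
# Sketch of a margin-based throughput pass/fail check. Treating the
# m-factor as an absolute kb/s margin is an assumption for illustration.

def throughput_passes(measured_kbps: float, configured_kbps: float,
                      m_factor_kbps: float = 0.0) -> bool:
    """Pass if the measured throughput is within m_factor_kbps of the
    configured rate (a margin of 0 requires the configured rate to be met)."""
    return measured_kbps >= configured_kbps - m_factor_kbps
```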

The test results can be stored in an accounting record in XML format. The XML file contains the keywords and MIB references listed in the following table.

Table 17: OAM Y.1564 XML keywords and MIB reference

Each entry lists the XML file keyword, its description, and the corresponding object in the TIMETRA-SAS-OAM-Y1564-MIB.

acceptanceCriteriaId (tmnxY1564StreamAccCritId)
  The ID of the acceptance criteria policy used to compare the measured results
accountingPolicy (tmnxY1564ServTestAccPolicy)
  The ID of the accounting policy, which determines the properties of the generated accounting record, such as how frequently records are written, the rollover interval, and so on
achievedThroughput (tmnxY1564StreamResAchvdThruput)
  The throughput measured by the tool, as observed by measuring the rate of testhead packets received by the tool
cirAdaptRule (tmnxY1564StreamCIRAdaptation)
  The adaptation rule applied to the configured CIR rate to match it to hardware-supported rates
cirRate (tmnxY1564StreamAdminCIR)
  The user-configured CIR rate
cirTestDur (tmnxY1564ServTestCirTestDuration)
  The duration, in seconds, of the CIR configuration test
cirThreshold (tmnxY1564AccCritCirThres)
  The CIR rate threshold to compare with the measured value
dataPattern (tmnxY1564PayLddataPattrn)
  The data pattern to include in the packet generated by the service testhead tool
description (tmnxY1564ServTestDescription)
  The user-configured description for the test
desiredThroughput (tmnxY1564StreamResDesiredThruput)
  The user-configured target rate; either the configured CIR rate or PIR rate, depending on the test type
dstIp (tmnxY1564PayLdDstIpv4Addr)
  The destination IP address to use in the packet generated by the tool
dstMac (tmnxY1564PayLdDstMac)
  The destination MAC address to use in the packet generated by the tool
dstPort (tmnxY1564PayLdDstPort)
  The destination TCP/UDP port to use in the packet generated by the tool
endTime (tmnxY1564ServTestCompletionTime)
  The time (wall-clock time) the test was completed
etherType (tmnxY1564PayLdEthertype)
  The Ethertype value to use in the packet generated by the tool
fc (tmnxY1564StreamFc)
  The forwarding class for which the tool is being used to measure the performance metrics
fixedFrameSize (tmnxY1564StreamFrameSize)
  The frame size to use for the generated packet; used to specify a single value for all frames generated by the tool
flr (tmnxY1564StreamResMeasuredFLR)
  The measured frame loss ratio
flrAcceptance (tmnxY1564StreamResFLRAcceptanceResult)
  Indicates whether the measured FLR is within the configured loss threshold
frameLossThreshold (tmnxY1564AccCritLossRiseThres)
  The loss threshold configured in the acceptance criteria
frameMixId (tmnxY1564StreamFrameMixId)
  The ID of the frame-mix policy; the testhead tool generates packet sizes as specified in the frame-mix policy, which specifies a mix of frames with different sizes to be generated by the tool
framePayloadId (tmnxY1564StreamPayLdId)
  The ID of the frame payload, which defines the format of the payload and provides the frame/packet header values and data pattern to use for the payload
id (tmnxY1564StreamFrameMixId, tmnxY1564StreamAccCritId, tmnxY1564StreamPayLdId)
  The frame-mix policy ID, acceptance criteria policy ID, or frame payload ID used for the service test, depending on the context in which it appears
ipDscp (tmnxY1564PayLdDSCP)
  The IP DSCP value used in the frame payload
ipProto (tmnxY1564PayLdIpProto)
  The IP protocol value used in the frame payload
ipTos (tmnxY1564PayLdIpTos)
  The IP ToS bits value used in the frame payload
ipTtl (tmnxY1564PayLdIpTTL)
  The IP TTL value used in the frame payload
jitter (tmnxY1564StreamResMeasuredJitter)
  The measured jitter value
jitterAcceptance (tmnxY1564StreamResJitterAcceptanceResult)
  Indicates whether the measured jitter is within the configured jitter threshold
jitterThreshold (tmnxY1564AccCritJittrRiseThres)
  The jitter threshold configured in the acceptance criteria
latencyAcceptance (tmnxY1564StreamResLatencyAcceptanceResult)
  Indicates whether the measured latency is within the configured latency threshold
latencyAvg (tmnxY1564StreamResMeasuredLatency)
  The average of the latency values computed for the test stream
latencyMax (tmnxY1564StreamResMaxLatency)
  The maximum latency value measured by the tool
latencyMin (tmnxY1564StreamResMinLatency)
  The minimum latency value measured by the tool
latencyThreshold (tmnxY1564AccCritLatRiseThres)
  The latency threshold configured in the acceptance criteria
measuredCir (tmnxY1564StreamResMeasuredCIR)
  The measured CIR rate
measuredpir (tmnxY1564StreamResMeasuredPIR)
  The measured PIR rate
measuredThroughput (tmnxY1564StreamResMeasuredThruput)
  The measured throughput
mfactor (tmnxY1564AccCritUseMFactor)
  A factor used as the margin by which the observed throughput may differ from the configured throughput when determining whether a service test passes or fails
perfTestDur (tmnxY1564ServTestPerformanceTestDuration)
  The duration, in seconds, of the performance test
pirAdaptRule (tmnxY1564StreamPIRAdaptation)
  The PIR adaptation rule used
pirRate (tmnxY1564StreamAdminPIR)
  The configured PIR rate
pirTestDur (tmnxY1564ServTestCirPirTestDuration)
  The PIR test duration, in seconds
pirThreshold (tmnxY1564AccCritPirThres)
  The PIR threshold configured in the acceptance criteria
pktCountRx (tmnxY1564StreamResRecvCount)
  The received packet count
pktCountTx (tmnxY1564StreamResTransCount)
  The transmitted packet count
policingTestDur (tmnxY1564ServTestPolicingTestDuration)
  The policing test duration, in seconds
resultStatus (tmnxY1564StreamResStatus)
  Indicates whether the stream has passed or failed
runningInstance (tmnxY1564ServTestRunningInstance)
  A counter used to indicate the run instance of the test
sap (tmnxY1564StreamSapPortId, tmnxY1564StreamSapEncapValue)
  The SAP used as the test endpoint
sequence (tmnxY1564StreamFrameMixSeq)
  The sequence of payload sizes specified in the frame-mix policy
serviceTest (no MIB object)
  A tag to indicate the start of the service test in the accounting record
sizeA (tmnxY1564FrameMixSizeA)
  The frame size for packets identified with the letter ‘a’ in the frame sequence; a frame sequence uses the letters a to h and u to specify the sequence of frame sizes generated by the tool
sizeB (tmnxY1564FrameMixSizeB)
  The frame size for packets identified with the letter ‘b’ in the frame sequence
sizeC (tmnxY1564FrameMixSizeC)
  The frame size for packets identified with the letter ‘c’ in the frame sequence
sizeD (tmnxY1564FrameMixSizeD)
  The frame size for packets identified with the letter ‘d’ in the frame sequence
sizeE (tmnxY1564FrameMixSizeE)
  The frame size for packets identified with the letter ‘e’ in the frame sequence
sizeF (tmnxY1564FrameMixSizeF)
  The frame size for packets identified with the letter ‘f’ in the frame sequence
sizeG (tmnxY1564FrameMixSizeG)
  The frame size for packets identified with the letter ‘g’ in the frame sequence
sizeH (tmnxY1564FrameMixSizeH)
  The frame size for packets identified with the letter ‘h’ in the frame sequence
sizeU (tmnxY1564FrameMixSizeU)
  The frame size for packets identified with the letter ‘u’ in the frame sequence; sizeU is the user-defined packet size
srcIp (tmnxY1564PayLdSrcIpv4Addr)
  The source IP address in the frame payload generated by the tool
srcMac (tmnxY1564PayLdSrcMac)
  The source MAC address in the frame payload generated by the tool
srcPort (tmnxY1564PayLdSrcPort)
  The source TCP/UDP port in the frame payload generated by the tool
startTime (tmnxY1564ServTestStartTime)
  The time (wall-clock time) the test was started
streamId (tmnxY1564StreamId)
  The stream identifier
streamOrdered (tmnxY1564ServTestStreamOrder)
  Indicates whether the streams configured for the service test were run one after another or in parallel
testCompleted (tmnxY1564StreamResCompleted)
  Indicates whether the test completed
testCompletion (tmnxY1564ServTestCompletion)
  The execution status of the test (either completed or running)
testDuration (tmnxY1564ServTestTime)
  The duration of the entire test (including all test types)
testIndex (tmnxY1564ServTestIndex)
  The configured service test index
testResult (tmnxY1564ServTestTestResult)
  Indicates the result of the test
testStopped (tmnxY1564ServTestStopped)
  Indicates if the test was stopped and did not complete
testTime (tmnxY1564StreamResTestTime)
  The time taken for each stream; if the test is stopped, the time given is the execution time of the stream up until it was stopped
testTypeCir (tmnxY1564StreamTests)
  The CIR configuration test
testTypeCirPir (tmnxY1564StreamTests)
  The CIR-PIR configuration test
testTypePerf (tmnxY1564StreamTests)
  The performance test
testTypePolicing (tmnxY1564StreamTests)
  The policing test
throughputAcceptance (tmnxY1564StreamResThruputAcceptanceResult)
  Indicates whether the measured throughput matches the configured CIR/PIR rate
trapEnabled (tmnxY1564ServTestTrapEnable)
  Indicates whether a trap is to be sent on completion of the test
type (tmnxY1564PayLdType)
  The test type configured for the stream ID
vlanTag1Dei (tmnxY1564PayLdVTagOneDei)
  The DEI value set for VLAN Tag #1 (outermost VLAN tag) in the frame payload generated by the tool
vlanTag1Dot1p (tmnxY1564PayLdVTagOneDot1p)
  The Dot1p value set for VLAN Tag #1 (outermost VLAN tag) in the frame payload generated by the tool
vlanTag1Id (tmnxY1564PayLdVTagOne)
  The VLAN ID value set for VLAN Tag #1 (outermost VLAN tag) in the frame payload generated by the tool
vlanTag1Tpid (tmnxY1564PayLdVTagOneTpid)
  The TPID value set for VLAN Tag #1 (outermost VLAN tag) in the frame payload generated by the tool
vlanTag2Dei (tmnxY1564PayLdVTagTwoDei)
  The DEI value set for VLAN Tag #2 (inner VLAN tag) in the frame payload generated by the tool
vlanTag2Dot1p (tmnxY1564PayLdVTagTwoDot1p)
  The Dot1p value set for VLAN Tag #2 (inner VLAN tag) in the frame payload generated by the tool
vlanTag2Id (tmnxY1564PayLdVTagTwo)
  The VLAN ID value set for VLAN Tag #2 (inner VLAN tag) in the frame payload generated by the tool
vlanTag2Tpid (tmnxY1564PayLdVTagTwoTpid)
  The TPID value set for VLAN Tag #2 (inner VLAN tag) in the frame payload generated by the tool

3.6.2.1. Ethernet frame size specification

On the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C, the service test testhead tool can be configured to generate a flow containing a sequence of frames of different sizes. The frame sizes and designations are defined by ITU-T Y.1564 and listed in the following table.

Table 18: Ethernet frame sizes and size designations

  Designation:        a    b    c    d    e     f     g     h    u
  Frame size (bytes): 64   128  256  512  1024  1280  1518  MTU  User defined

The test tool can be configured to generate a flow of frames with sizes ranging from 64 bytes to 9212 bytes. A single frame size is configured using the fixed-size command; the configured size applies to all the frames in the test flow.

It is also possible to configure the test tool to generate frames of different sizes in a flow. The frame-mix command creates a template that specifies the sizes of the frames to be generated by the testhead tool; the frame sizes are defined in ITU-T Y.1564. The frame-sequence command configures a string of up to 16 characters that specifies the order in which the frames are generated for a specific template. For example, a frame-sequence configured as aabcdduh indicates that a packet of size-a configured in the specified frame-mix template is generated first, followed by another packet of size-a, then a packet of size-b, and so on until a packet of size-h is generated. The tool then repeats the sequence until the rate of packets generated matches the required rate.
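The frame-sequence expansion described above can be modeled as follows, using the Y.1564 size designations from Table 18. The mtu and user_size parameters stand in for the h (MTU) and u (user-defined) sizes, which in practice come from the frame-mix template.

```python
# Sketch of expanding a frame-sequence string into per-frame sizes using
# the Y.1564 size designations. "h" (MTU) and "u" (user-defined) are
# resolved from values supplied by the caller.

Y1564_SIZES = {"a": 64, "b": 128, "c": 256, "d": 512,
               "e": 1024, "f": 1280, "g": 1518}

def expand_sequence(sequence: str, mtu: int, user_size: int) -> list:
    if not (1 <= len(sequence) <= 16):
        raise ValueError("frame-sequence must be 1 to 16 characters")
    sizes = dict(Y1564_SIZES, h=mtu, u=user_size)
    return [sizes[ch] for ch in sequence]
```

For the example sequence aabcdduh, the tool would cycle through the resulting list of sizes repeatedly until the required rate is reached.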

3.6.3. Configuration guidelines

This section lists the configuration guidelines for the testhead OAM tool. These guidelines apply to all platforms described in this guide unless a specific platform is called out explicitly, and to both the testhead OAM tool and the service test testhead OAM tool unless indicated otherwise:

  1. SAPs configured on a LAG cannot be configured for testing with the testhead tool. Service endpoints other than the test SAP (for example, SAPs or SDP bindings) configured in the service can be over a LAG.
  2. On the 7210 SAS-D, software automatically assigns the resources associated with internal ports for use with this feature.
  3. On the 7210 SAS-D, the testhead OAM tool uses internal loopback ports. On the 7210 SAS-Dxp, the user can assign the resources of an internal loopback port to the testhead OAM tool or to port loopback with MAC swap. Depending on the throughput being validated, either a 1GE or a 10GE internal loopback port can be used.
  4. On the 7210 SAS-D and 7210 SAS-Dxp, port loopback with MAC swap is used at both ends; all services on the port on which the test SAP is configured, and all SAPs in the VPLS service other than the test SAP, should be shut down or should not receive any traffic.
  5. The configured CIR/PIR is rounded off to the nearest available hardware rate. The user can select the adaptation rule to use (similar to the support available for QoS policies).
  6. On the 7210 SAS-D and 7210 SAS-Dxp, the FC specified is used to determine the queue to enqueue the marker packets generated by testhead application on the egress of the test SAP on the local node.
  7. On the 7210 SAS-Dxp and 7210 SAS-K 3SFP+ 8C, the testhead OAM tool allows validation of speeds up to approximately 10 Gb/s.
  8. ITU-T Y.1564 recommends providing an option to configure the CIR step size and the step duration for the service configuration tests. This is not supported directly on the 7210 SAS. It can be achieved through the NSP NFM-P, a third-party NMS, or another application by configuring the rate and test duration to correspond to the desired CIR step size and step duration, then repeating the test with a different rate (that is, CIR step size) and duration (that is, step duration), and so on.
  9. The testhead tool waits for about 5 seconds at the end of the configured test duration before collecting statistics. This allows all in-flight packets to be received by the node and accounted for in the test measurements. The user cannot start another test during this period.
  10. When using the testhead tool to test the bandwidth available between SAPs configured in a VPLS service, operators must ensure that no other SAPs in the VPLS service are exchanging any traffic, particularly BUM traffic and unicast traffic destined for either the local test SAP or the remote SAP. BUM traffic consumes network resources that are also used by testhead traffic.
  11. It is possible for test packets (data and/or marker packets) to remain in the loop created for testing when the tests are stopped. This is most likely when QoS policies with low shaper rates are used, resulting in high latency for packets flowing through the network loop. The user must remove the loop at both ends when the test is complete or stopped, and wait for a suitable interval before starting the next test for the same service, to ensure that packets drain out of the network for that service. If this is not done, subsequent tests might process and account for these stale packets, resulting in incorrect results. Software cannot detect stale packets in the loop because it does not associate or check each packet against a test session.
  12. Traffic received from the remote node and looped back into the test port (where the test SAP is configured) on the local end (that is, the end where the testhead tool is invoked) is dropped by hardware after processing and is not sent back to the remote end. The SAP ingress QoS policies and SAP ingress filter policies must match the packet header fields specified by the user in the testhead profile, except that the source and destination MAC addresses are swapped.
  13. On the 7210 SAS-D and 7210 SAS-Dxp, latency is not computed if marker packets are not received by the local node where the test is generated, and it is printed as 0 (zero) in such cases. If jitter = 0 and latency > 0, the calculated jitter is less than the precision used for measurement. There is also a small chance that jitter was not actually calculated, that is, only one latency value was computed. This typically indicates a network issue, rather than a testhead issue.
  14. When the throughput is not met, FLR cannot be calculated. If the measured throughput is within approximately +/-10% of the user-configured rate, the FLR value is displayed; otherwise, the software prints “Not Applicable”. The percentage of variance of the measured bandwidth depends on the packet size in use and the configured rate.
  15. On the 7210 SAS-D and 7210 SAS-Dxp, the user must not use the CLI command to clear statistics of the test SAP port, testhead loopback port and MAC swap loopback port when the testhead tool is running. The port statistics are used by the tool to determine the Tx/Rx frame count.
  16. On the 7210 SAS-D and 7210 SAS-Dxp, the testhead tool generates traffic at a rate slightly above the CIR. The additional bandwidth is attributable to the marker packets used for latency measurements. This is not expected to affect the latency measurement or the test results in a significant way.
  17. If the operational throughput is 1 kb/s and it is achieved in the test loop, the computed throughput may still be printed as 0 if it is < 1 kb/s (0.99 kb/s, for example). In such cases, if FLR is PASS, the tool indicates that the throughput has been achieved.
  18. The testhead tool displays a failure result if the received frame count is less than the injected frame count, even though the FLR might be displayed as 0. This happens because FLR results are truncated to 6 decimal places, and can occur when the loss is very small.
  19. On the 7210 SAS-D and 7210 SAS-Dxp, as the rate approaches the maximum rate supported on the platform or the maximum rate of the loop (which includes the capacities of the internal loopback ports in use), the user must account for the marker packet rate and the meter behavior when configuring the CIR. For example, to test 1 Gb/s with a frame size of 512 bytes, configure about 962396 Kb/s instead of 962406 Kb/s, the maximum rate that can be achieved for this frame size. In general, configure about 98% to 99% (based on packet size) of the maximum possible rate to account for marker packets when testing at rates close to the bandwidth available in the network. At the maximum rate, the injection of marker packets by the CPU results in drops of either the injected data traffic or the marker packets themselves, because the net rate exceeds the capacity. These drops cause the testhead to always report a failure unless the rate is marginally reduced.
  20. The testhead uses the Layer 2 rate, which is calculated by subtracting the Layer 1 overhead added to every Ethernet frame by the IFG and preamble (typically about 20 bytes: IFG = 12 bytes and preamble = 8 bytes). The testhead tool uses the user-configured frame size to compute the Layer 2 rate and does not allow the user to configure a value greater than that rate. For 512-byte Ethernet frames, the Layer 2 rate is 962406 Kb/s when the Layer 1 rate is 1 Gb/s.
  21. The operator is not expected to use the testhead tool to measure throughput or other performance parameters of the network during a network event. Network events could affect the other SAPs/SDP-bindings/PWs configured in the service; examples are the transition of a SAP due to a G.8032 ring failure and the transition of an active/standby SDP-binding/PW due to link or node failures.
  22. On the 7210 SAS-D and 7210 SAS-Dxp, the 2-way delay (also known as latency) values measured by the testhead tool are more accurate than those obtained using OAM tools, as the timestamps are generated in hardware.
  23. On the 7210 SAS-D and 7210 SAS-Dxp, the profile assigned to the packets generated by the testhead is ignored on access SAP ingress. A 7210 SAS service access port, access-uplink port, or network port can mark the packets appropriately on egress to allow subsequent nodes in the network to differentiate in-profile and out-of-profile packets and provide them with appropriate QoS treatment. 7210 SAS access-uplink ingress and network port ingress are capable of providing appropriate QoS treatment to in-profile and out-of-profile packets.
  24. On the 7210 SAS-D and 7210 SAS-Dxp, the marker packets are sent over and above the configured CIR or PIR. The tool cannot determine the number of green and yellow marker packets injected individually. Therefore, marker packets are not accounted for in the injected or received green (in-profile) and yellow (out-of-profile) packet counts; they are accounted for only in the Total Injected and Total Received counts. As a result, the overall FLR metric accounts for marker packet loss (if any), while the green and yellow FLR metrics do not.
  25. On the 7210 SAS-D and 7210 SAS-Dxp, marker packets are used to measure the latency and jitter of both green (in-profile) and yellow (out-of-profile) packets. These marker packets are identified as green or yellow based on the packet marking (for example, dot1p). The latency values can differ for green and yellow packets based on the treatment provided to the packets by the network QoS configuration.
  26. The following table describes the SAP encapsulations supported for the testhead on the 7210 SAS-D and 7210 SAS-Dxp.
    Table 19:  SAP encapsulations supported for testhead on 7210 SAS-D and 7210 SAS-Dxp 

    Epipe service configured      Test SAP encapsulations
    with svc-sap-type
    ----------------------------------------------------------
    null-star                     Null, :*, 0.*, Q.*
    Any                           Null, :0, :Q, :Q1.Q2
    dot1q-preserve                :Q
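
Two of the numeric behaviors in the list above (the Layer 1 to Layer 2 rate conversion in item 20 and the FLR truncation in item 18) can be sketched in a few lines of Python; the helper names are illustrative, not part of the product:

```python
import math

# Sketch of the Layer 1 -> Layer 2 rate conversion and the FLR display
# truncation described above. Helper names are illustrative only.

L1_OVERHEAD_BYTES = 12 + 8  # IFG (12 bytes) + preamble (8 bytes)

def l2_rate_kbps(l1_rate_kbps: int, frame_size: int) -> int:
    """Maximum Layer 2 rate achievable for a given frame size."""
    return l1_rate_kbps * frame_size // (frame_size + L1_OVERHEAD_BYTES)

def displayed_flr(injected: int, received: int) -> float:
    """Frame loss ratio truncated to 6 decimal places, as displayed."""
    loss_ratio = (injected - received) / injected
    return math.floor(loss_ratio * 1_000_000) / 1_000_000

# 512-byte frames on a 1 Gb/s (1,000,000 kb/s) port:
print(l2_rate_kbps(1_000_000, 512))           # 962406 kb/s
# Losing 1 frame out of ~42 million still displays as FLR 0.000000,
# even though received < injected (the item-18 failure case):
print(displayed_flr(42_290_555, 42_290_554))  # 0.0
```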

3.6.4. Configuring testhead tool parameters

Note:

  1. On the 7210 SAS-D, the MAC-swap and testhead loopback ports are allocated by software and do not require dedicated front-panel ports.
  2. On the 7210 SAS-Dxp, the MAC-swap and testhead loopback ports must be configured and can use either a 1GE or 10GE internal loopback port, depending on the throughput being measured.

3.6.4.1. Allocating CAM resources for the OAM testhead tool on the 7210 SAS-D

The following output is an example of CAM resource allocation for use by the OAM testhead tool.

config>system>resource-profile
.............
resource-profile
    ingress-internal-tcam
        qos-sap-ingress-resource 5
        exit
        acl-sap-ingress 5
            mac-match-enable max
        exit
    exit
    egress-internal-tcam
        acl-sap-egress 2
            mac-ipv4-match-enable max
        exit
    exit
exit
..............

3.6.4.2. Loopback ports on the 7210 SAS-D

Note:

The user does not need to allocate loopback ports on the 7210 SAS-D. They are allocated by the software and can be displayed using the show system internal-loopback-ports command.

The following output is an example of internal-loopback-ports information.

A:Dut-B# show system internal-loopback-ports 
===========================================================================
Internal Loopback Port Status
===========================================================================
Port           Loopback       Application    Service        Speed Type
Id             Type                          Enabled        1G/10G
---------------------------------------------------------------------------
1/1/11         Virtual        Mac-Swap       No             1G
1/1/12         Virtual        Dot1q-Mirror   No             1G
1/1/13         Virtual        Testhead       No             1G
===========================================================================

The preceding example shows that virtual ports 1/1/11 and 1/1/13 are allocated for the MAC-swap and testhead applications, respectively.

3.6.4.3. Configuring the testhead profile for the OAM testhead tool

The following output is an example of the testhead profile.

A:Dut-B>config>test-oam# info detail 
----------------------------------------------
        testhead-profile 1 create
            description "Testhead_Profile_1"
            frame-size 512
            rate cir 962388 adaptation-rule max
            no dot1p
            no test-duration
            no test-completion-trap-enable
            frame-payload 1 payload-type tcp-ipv4 create
                description "Frame-payload_1"
                no data-pattern
                dscp "af11"
                dst-ip ipv4 10.2.2.2
                dst-mac 00:00:00:00:00:02
                src-mac 00:00:00:00:00:01
                dst-port 50
                src-port 40
                no ethertype
                ip-proto 6
                ip-tos 8
                ip-ttl 64
                src-ip ipv4 10.1.1.1
                no vlan-tag-1
                no vlan-tag-2
            exit
        exit 
----------------------------------------------

3.6.4.4. Starting the testhead test and displaying results

Before starting the test, return the testhead packets to the local node by ensuring that loopback with mac-swap is configured on the remote end point.

The following is an example of the CLI command to start the testhead session.

oam testhead testhead-profile 1 test-me owner owner-me sap 1/1/2 fc be frame-payload 1 color-aware disable

The following is an example of Result and Sample testhead output.

*A:Dut-B# show testhead "test-me" owner "owner-me" detail 
===============================================================================
Y.1564 Testhead Session
===============================================================================
Owner              : owner-me
Test               : test-me
Profile Id         : 1                        SAP               : 1/1/2
Accept. Crit. Id   : 0                        Completed         : Yes
Frame Payload Id   : 1                        Stopped           : No
Frame Payload Type : tcp-ipv4                 FC                : be
Color Aware Test   : No                       
Start Time         : 08/29/2018 08:56:03      
End Time           : 08/29/2018 08:59:09      
Total time taken   : 0d 00:03:05              
-------------------------------------------------------------------------------
Latency Results
-------------------------------------------------------------------------------
 (total pkts in us):       Min       Max   Average    Jitter
         Roundtrip :        51        51        51         0
-------------------------------------------------------------------------------
Packet Count
-------------------------------------------------------------------------------
Total Injected     : 42290555                 
Total Received     : 42290555                 
-------------------------------------------------------------------------------
Test Compliance Report
-------------------------------------------------------------------------------
Throughput Configd : 962388                   
Throughput Oper    : 962384                   
Throughput Measurd : 962345                   
FLR Configured     : None                     
FLR Measurd        : 0.000000                 
FLR Acceptance     : Pass                     
Latency Configd(us): None                     
Latency Measurd(us): 51                       
Latency Acceptance : Not Applicable           
Jitter Configd(us) : None                     
Jitter Measurd(us) : None                     
Jitter Acceptance  : Not Applicable           
Total Pkts. Tx.    : 110                      Latency Pkts. Tx. : 100
OutPrf Latency Pkt*: 0                        InPrf Latency Pkt*: 0
Total Tx. Fail     : 0                        
===============================================================================
*indicates that the corresponding row element may have been truncated.

The following CLI command stops a testhead session.

oam testhead "test-me" owner "owner-me" stop

The following CLI command clears testhead session results.

clear testhead result "test-me" owner "owner-me"

3.6.5. Configuring service test OAM testhead tool parameters

Note:

This feature is only supported on the 7210 SAS-K 2F1C2T, 7210 SAS-K 2F6C4T, and 7210 SAS-K 3SFP+ 8C.

The following output is an example of frame-mix parameters.

*A:Dut-B# configure test-oam frame-mix 1 create
*A:Dut-B>config>test-oam>frame-mix# info detail
----------------------------------------------
no size-a
no size-b
no size-c
no size-d
no size-e
no size-f
no size-g
no size-h
no size-u
----------------------------------------------
*A:Dut-B>config>test-oam>frame-mix#
Note:

In the following example, fixed-sized frames are used. The device provides an option to configure a stream with frames of different frame sizes using the frame-mix option shown in the preceding example.

The following output is an example of frame-payload parameters.

*A:Dut-B>config>test-oam>fr-payld# info detail
----------------------------------------------
description "Frame_Payload_1"
no data-pattern
no dscp
no dst-ip
dst-mac 00:00:00:00:00:02
src-mac 00:00:00:00:00:01
no dst-port
no src-port
no ethertype
no ip-proto
no ip-tos
no ip-ttl
no src-ip
no vlan-tag-1
no vlan-tag-2
----------------------------------------------
*A:Dut-B>config>test-oam>fr-payld#

The following output is an example of acceptance-criteria parameters.

*A:Dut-B# configure test-oam acceptance-criteria 1
*A:Dut-B>config>test-oam>acc-criteria# info detail
----------------------------------------------
description "Acceptance_Criteria_1"
jitter-rising-threshold 0
no jitter-rising-threshold-in
no jitter-rising-threshold-out
latency-rising-threshold 45
no latency-rising-threshold-in
no latency-rising-threshold-out
loss-rising-threshold 0
no loss-rising-threshold-in
no loss-rising-threshold-out
cir-threshold 199990
pir-threshold 249750
use-m-factor 1000
----------------------------------------------
*A:Dut-B>config>test-oam>acc-criteria#

The following output is an example of a service-test configuration.

*A:Dut-B>config>test-oam# service-test 1 
*A:Dut-B>config>test-oam>serv-test# info detail 
----------------------------------------------
            description "Service_Test_1"
            stream-run-type ordered
            no test-completion-trap-enable
            no collect-stats
            no accounting-policy
            test-duration cir minutes 1
            test-duration cir-pir minutes 3
            service-stream 1 create
                description "Stream_1"
                no fc
                frame-payload 1
                acceptance-criteria 1
                sap 1/1/2
                no wait-timeout
                frame-size fixed-size 512
                rate cir 200000 cir-adaptation-rule min pir 250000 pir-adaptation-rule closest
                no dot1p
                test-type cir
                no shutdown
            exit
            service-stream 2 create
                description "Stream_2"
                no fc
                frame-payload 1
                acceptance-criteria 1
                sap 1/1/2
                no wait-timeout
                frame-size fixed-size 512
                rate cir 200000 cir-adaptation-rule min pir 250000 pir-adaptation-rule closest
                no dot1p
                test-type cir
                shutdown
            exit
            service-stream 3 create
                description "Stream_3"
                no fc
                frame-payload 1
                acceptance-criteria 1 
                sap 1/1/2
                no wait-timeout
                frame-size fixed-size 512
                rate cir 200000 cir-adaptation-rule min pir 250000 pir-adaptation-rule closest
                no dot1p
                test-type cir
                shutdown
            exit
            service-stream 4 create
                description "Stream_4"
                no fc
                frame-payload 1
                acceptance-criteria 1
                sap 1/1/2
                no wait-timeout
                frame-size fixed-size 512
                rate cir 200000 cir-adaptation-rule min pir 250000 pir-adaptation-rule closest
                no dot1p
                test-type cir
                shutdown
            exit
            no shutdown
----------------------------------------------
*A:Dut-B>config>test-oam>serv-test# 

The following output is an example of the CLI command used to start a service test.

*A:Dut-B# oam service-test 1 start 
INFO: CLI service test 1 started. instance is 3.

The following output is an example of show service-test results.

*A:Dut-B# show test-oam service-test 1 results 
===============================================================================
Y.1564 service test 1 instance 3 results
===============================================================================
Service test id  : 1                       Run instance  : 3
Status           : finished                Stopped       : no
Start Time       : 06/17/2018 04:16:01     End Time      : 06/17/2018 04:19:06
Test result      : pass                    Total time ta*: 0d 00:03:05
===============================================================================
* indicates that the corresponding row element may have been truncated.
===============================================================================
Y.1564 service test 1 stream 1
===============================================================================
Stream Id        : 1                       Desc          : Stream_1
SAP              : 1/1/2                   FC            : be
Accept. Crit.    : 2                       Frame Payload : 1
Frame size       : 512                     
Cir              : 200000                  adaptation    : min
Pir              : 250000                  adaptation    : closest
Tests            : cir
===============================================================================
===============================================================================
Y.1564 service test 1 stream 1 instance 3 test-type cir results
===============================================================================
Start Time       : 06/17/2018 04:16:01     End Time      : 06/17/2018 04:19:06
Time taken       : 0d 00:03:05             Stopped       : no
-------------------------------------------------------------------------------
Packet Count And Roundtrip Latency Results
-------------------------------------------------------------------------------
Total Pkts Injected:              8788729
Total Pkts Received:              8788729
    Min Latency(us):                   34
    Max Latency(us):                   53
Average Latency(us):                   34
     M-Factor(kbps):                 1000
-------------------------------------------------------------------------------
                                      
-------------------------------------------------------------------------------
Test Compliance Report
-------------------------------------------------------------------------------
          Criteria : thruput(kbps) FLR(10000th%)   latency(us)    jitter(us)
        acceptable :        199990             0            45             0
        configured :        200000             0            45             0
          measured :        199992             0            34             0
            result :          Pass          Pass          Pass          Pass
-------------------------------------------------------------------------------

The following output is an example of an overview of all the service test results using result-summary.

A:Dut-B# show test-oam service-test 1 results-summary 
-------------------------------------------------------------------------------
Y.1564 service test 1 results
-------------------------------------------------------------------------------
Run Inst.    |Svc |     |                    |                    | | | | | | 
/SAP         |Str |Test |Start time          |End time            |S|R|B|F|L|J
-------------------------------------------------------------------------------
2            -    -     06/17/2018 04:14:57  06/17/2018 04:15:54  |Y|P|P|P|P|P
1/1/2        1    cir   06/17/2018 04:14:57  NA                   |Y|P|P|P|P|P
-------------------------------------------------------------------------------
3            -    -     06/17/2018 04:16:01  06/17/2018 04:19:06  |N|P|P|P|P|P
1/1/2        1    cir   06/17/2018 04:16:01  NA                   |N|P|P|P|P|P
-------------------------------------------------------------------------------
S - test stopped - Yes/No;
R - Result for all tests - Pass/Fail/NA;
B - Bandwidth/Throughput measured - Pass/Fail/NA;
F - Frame Loss Ratio(FLR) measured - Pass/Fail/NA;
L - Latency measured - Pass/Fail/NA;
J - Jitter measured - Pass/Fail/NA;
-------------------------------------------------------------------------------

3.6.6. Port loopback for Ethernet ports

Note:

Port loopback with MAC swap is only supported on the 7210 SAS-D.

7210 SAS devices support port loopback for Ethernet ports. There are two types of port loopback commands: port loopback without MAC swap and port loopback with MAC swap. Both commands are helpful for testing the service configuration and measuring performance parameters such as throughput, delay, and jitter on service turn-up. Typically, a third-party external test device is used to inject packets at the desired rate into the service at a central office location. For detailed information about port loopback functionality, refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Interface Configuration Guide.

3.6.7. SAP loopback for Ethernet SAPs

Note:

Per-SAP loopback with MAC swap is supported on all 7210 SAS platforms as described in this document, except the 7210 SAS-D.

Per-SAP loopback with MAC swap is useful for testing the service configuration and measuring performance parameters such as throughput, delay, and jitter on service turn-up. Typically, the testhead OAM tool is used to inject packets at a desired rate into the service from the remote site. It is also possible to use a third-party external test device to inject packets at a desired rate into the service at a central office location. For detailed information about SAP loopback functionality, refer to the 7210 SAS-D, Dxp, K 2F1C2T, K 2F6C4T, K 3SFP+ 8C Services Guide.

3.7. OAM Performance Monitoring (OAM-PM)

OAM-PM provides an architecture for gathering and computing key performance indicators (KPIs) using standard protocols and a robust collection model. The architecture comprises the following foundational components:

  1. session
    The overall collection of different tests, test parameters, measurement intervals, and mappings to configured storage models. It is the overall container that defines the attributes of the session.
  2. standard PM packets
    The protocols defined by various standards bodies that contain the necessary fields to collect statistical data for the performance attributes they represent. OAM-PM leverages single-ended protocols, which typically follow a message-response model: a message is sent by a launch point, and the response is updated and reflected by a responder.
  3. Measurement Intervals (MI)
    Time-based non-overlapping windows that capture all results that are received in that window of time.
  4. data structures
    The unique counters and measurement results that represent the specific protocol.
  5. bin group
    Ranges in microseconds that count the results that fit into the range.

The following figure shows the hierarchy of the architecture. This diagram is only meant to show the relationship between the components. It is not meant to depict all details of the required parameters.

Figure 27:  OAM-PM architecture hierarchy 

OAM-PM configurations are not dynamic environments. All aspects of the architecture must be carefully considered before configuring the various architectural components, making external references to other related components, or activating the OAM-PM architecture. No modifications are allowed to any components that are active or have any active subcomponents. Any function being referenced by an active OAM-PM function or test cannot be modified or shut down. For example, to change any configuration element of a session, all active tests must be in a shutdown state. To change any bin group configuration (described later in this section), all sessions that reference the bin group must have every test shut down. The description parameter is the only exception to this rule.

Session source and destination configuration parameters are not validated by the test that makes use of that information. When the test is activated with a no shutdown command, the test engine will attempt to send the test packets even if the session source and destination information does not accurately represent the entity that must exist to successfully transmit packets. If the entity does not exist, the transmit count for the test will be zero.

OAM-PM is not a hitless operation. If a high availability event occurs that causes the backup CPM to become the active CPM, or when ISSU functions are performed, the test data will not be correctly reported. There is no synchronization of state between the active and backup control modules, so all OAM-PM statistics stored in volatile memory will be lost. When the reload or high availability event is completed and all services are operational, the OAM-PM functions resume.

It is possible that during times of network convergence, high CPU utilizations, or contention for resources, OAM-PM may not be able to detect changes to an egress connection or allocate the necessary resources to perform its tasks.

Note:

OAM-PM is supported on all 7210 SAS platforms as described in this document, except the 7210 SAS-D.

3.7.1. Session

This is the overall collection of different tests, the test parameters, measurement intervals, and mapping to configured storage models. It is the overall container that defines the attributes of the session:

  1. session type
    The session type is the impetus of the test, which is either proactive (default) or on-demand. Individual test timing parameters are influenced by this setting. A proactive session will start immediately following the execution of a no shutdown command for the test. A proactive test will continue to execute until a manual shutdown stops the individual test. On-demand tests will also start immediately following the no shutdown command. However, the operator can override the no test-duration default and configure a fixed amount of time that the test will execute, up to 24 hours (86400 seconds). If an on-demand test is configured with a test-duration, it is important to shut down tests when they are completed. In the event of a high availability event causing the backup CPM to become the active CPM, all on-demand tests that have a test-duration statement will restart and run for the configured amount of time regardless of their progress on the previously active CPM.
  2. test family
    The test family is the main branch of testing that addresses a specific technology. The available test for the session are based on the test family. The destination, source, and priority are common to all tests under the session and are defined separately from the individual test parameters.
  3. test parameters
    Test parameters are included in individual tests, as well as the associated parameters including start and stop times and the ability to activate and deactivate the individual test.
  4. measurement interval
    A measurement interval is the assignment of collection windows to the session with the appropriate configuration parameters and accounting policy for that specific session.

The session can be viewed as the single container that brings all aspects of individual tests and the various OAM-PM components under a single umbrella. If any aspects of the session are incomplete, the individual test cannot be activated with a no shutdown command, and an “Invalid ethernet session parameters” error will occur.

3.7.2. Standard PM packets

A number of standards bodies define performance monitoring packets that can be sent from a source, processed, and responded to by a reflector. The protocols available to carry out the measurements are based on the test family type configured for the session.

Ethernet PM delay measurements are carried out using the Two Way Delay Measurement Protocol version 1 (DMMv1) defined in Y.1731 by the ITU-T. This allows for the collection of Frame Delay (FD), InterFrame Delay Variation (IFDV), Frame Delay Range (FDR), and Mean Frame Delay (MFD) measurements for round trip, forward, and backward directions.
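
The round-trip Frame Delay derived from a DMM/DMR exchange follows the standard four-timestamp calculation, which removes the responder's processing time from the measurement. A minimal sketch (the timestamp values below are illustrative):

```python
# Round-trip frame delay from a Y.1731 DMM/DMR exchange.
# The responder's processing time (t3 - t2) is subtracted so that only
# network transit time contributes to the measurement.

def two_way_frame_delay(t1: int, t2: int, t3: int, t4: int) -> int:
    """t1 = TxTimeStampf (DMM sent by the initiator)
    t2 = RxTimeStampf (DMM received by the responder)
    t3 = TxTimeStampb (DMR sent by the responder)
    t4 = time the DMR arrives back at the initiator
    All values in microseconds."""
    return (t4 - t1) - (t3 - t2)

# 60 us of elapsed time minus 9 us of responder processing:
print(two_way_frame_delay(t1=100, t2=130, t3=139, t4=160))  # 51
```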

DMMv1 adds the following to the original DMM definition:

  1. the Flag Field (1 bit – LSB) is defined as the Type (Proactive=1 | On-Demand=0)
  2. the TestID TLV (32 bits) is carried in the Optional TLV portion of the PDU

DMMv1 and DMM are backwards compatible and the interaction is defined in Y.1731 ITU-T-2011 Section 11 "OAM PDU validation and versioning".

Ethernet PM loss measurements are carried out using Synthetic Loss Measurement (SLM), which is defined in Y.1731 by the ITU-T. This allows for the calculation of Frame Loss Ratio (FLR) and availability.

A session can be configured with one or more tests. Depending on the session test type family, one or more test configurations may need to be included in the session to gather both delay and loss performance information. Each test that is configured shares the common session parameters and the common measurement intervals. However, each test can be configured with unique per-test parameters. Using Ethernet as an example, both DMM and SLM would be required to capture both delay and loss performance data.

Each test must be configured with a TestID as part of the test parameters, which uniquely identifies the test within the specific protocol. A TestID must be unique within the same test protocol. Again using Ethernet as an example, DMM and SLM tests within the same session can use the same TestID because they are different protocols. However, if a TestID is applied to a test protocol (like DMM or SLM) in any session, it cannot be used for the same protocol in any other session. When a TestID is carried in the protocol, as it is with DMM and SLM, this value does not have global significance. When a responding entity must index for the purpose of maintaining sequence numbers, as in the case of SLM, the TestID, Source MAC, and Destination MAC are used to maintain the uniqueness of the responder. This means that the TestID has only local, and not global, significance.
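
The responder-side indexing described above can be sketched as follows; `SlmResponderIndex` is a hypothetical illustration of keeping per-peer sequence state keyed by (TestID, source MAC, destination MAC), not an actual implementation detail of the product:

```python
# Hypothetical sketch: sequence-number state is kept per
# (TestID, source MAC, destination MAC), which is why a TestID needs
# only local, not global, significance.

from typing import Dict, Tuple

ResponderKey = Tuple[int, str, str]  # (test_id, src_mac, dst_mac)

class SlmResponderIndex:
    def __init__(self) -> None:
        self._next_seq: Dict[ResponderKey, int] = {}

    def next_sequence(self, test_id: int, src_mac: str, dst_mac: str) -> int:
        """Return and advance the sequence number for this peer tuple."""
        key = (test_id, src_mac, dst_mac)
        seq = self._next_seq.get(key, 0)
        self._next_seq[key] = seq + 1
        return seq

idx = SlmResponderIndex()
# The same TestID from two different sources is tracked independently:
print(idx.next_sequence(7, "00:00:00:00:00:01", "00:00:00:00:00:02"))  # 0
print(idx.next_sequence(7, "00:00:00:00:00:01", "00:00:00:00:00:02"))  # 1
print(idx.next_sequence(7, "00:00:00:00:00:03", "00:00:00:00:00:02"))  # 0
```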

3.7.3. Measurement intervals

A measurement interval is a window of time that compartmentalizes the measurements gathered for an individual test during that time. Allocation of measurement intervals, which equates to system memory, is based on the metrics being collected. This means that when both delay and loss metrics are being collected, each allocates its own set of measurement intervals. If the operator is executing multiple delay and loss tests under a single session, multiple measurement intervals are allocated, one interval per criterion per test.

Measurement intervals can be 15 minutes (15-min), 1 hour (1-hour), or 1 day (1-day) in duration. The boundary-type defines the start of the measurement interval and can be aligned to the local time-of-day clock, with or without an optional offset, or aligned to the test using the test-aligned option, which means that the start of the measurement interval coincides with the activation of the test. By default, the start boundary is clock-aligned without an offset; in this configuration, the measurement interval starts at zero, in relation to the length. When a boundary is clock-aligned and an offset is configured, the specified amount of time is applied to the measurement interval. Offsets are configured on a per-measurement-interval basis and are only applicable to clock-aligned measurement intervals. Only offsets less than the measurement interval duration are allowed. The following table describes examples of the start times of each measurement interval.

Table 20:  Measurement interval start times

Offset        15-min            1-hour                  1-day
--------------------------------------------------------------------------
0 (default)   00, 15, 30, 45    00 (top of the hour)    midnight
10 minutes    10, 25, 40, 55    10 min after the hour   10 min after midnight
30 minutes    rejected          30 min after the hour   30 min after midnight
60 minutes    rejected          rejected                01:00 AM
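The boundary arithmetic in the table can be modeled with a short Python sketch (illustrative only; the function name is an assumption, not a CLI command or MIB object):

```python
def interval_starts(duration_min, offset_min):
    """Return the clock-aligned start times (minutes after midnight)
    for one day of measurement intervals, or None if the offset is
    rejected (it must be less than the interval duration)."""
    if offset_min >= duration_min:
        return None  # rejected, as in Table 20
    return [t + offset_min for t in range(0, 24 * 60, duration_min)]

# 15-min intervals with a 10-minute offset start at 10, 25, 40, 55 ...
print(interval_starts(15, 10)[:4])   # [10, 25, 40, 55]
# A 30-minute offset is rejected for 15-min intervals
print(interval_starts(15, 30))       # None
# A 1-day interval with a 60-minute offset starts at 01:00 AM
print(interval_starts(24 * 60, 60))  # [60]
```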

Although the test-aligned approach may appear simpler, it has drawbacks. Time-based, well-defined collection windows allow measurements to be compared across common windows of time throughout the network and allow different tests or sessions to be related. It is suggested that proactive sessions use the default clock-aligned boundary type. On-demand sessions may make use of test-aligned boundaries, because on-demand tests are typically used for troubleshooting or short-term monitoring that does not require alignment or comparison to other PM data.

The statistical data collected and the computed results from each measurement interval are maintained in volatile system memory by default. The number of intervals stored is configurable per measurement interval. Different measurement intervals will have different defaults and ranges. The interval-stored parameter defines the number of completed individual test runs to store in volatile memory. There is an additional allocation to account for the active measurement interval. To look at the statistical information for the individual tests and a specific measurement interval stored in volatile memory, the show oam-pm statistics … interval-number command can be used. If there is an active test, it can be viewed by using the interval number 1. In this case, the first completed record would be interval number 2, and previously completed records would increment up to the maximum intervals stored value plus one.

As new tests for the measurement interval are completed, the older entries are renumbered to maintain their relative position to the current test. If the retained test data for a measurement interval consumes the final entry, any subsequent entries cause the removal of the oldest data.
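A minimal Python model of this numbering scheme, assuming a fixed intervals-stored capacity (the class and method names are hypothetical, not SR OS internals):

```python
from collections import deque

class IntervalStore:
    """Sketch of the volatile-memory numbering: index 1 is the active
    interval; completed runs are renumbered so the newest completed
    run is always index 2. Capacity is intervals-stored plus one."""
    def __init__(self, intervals_stored):
        self.completed = deque(maxlen=intervals_stored)
        self.active = "active"
    def complete_interval(self, data):
        # a full store silently drops the oldest completed record
        self.completed.appendleft(data)
    def get(self, n):
        return self.active if n == 1 else self.completed[n - 2]

store = IntervalStore(intervals_stored=3)
for run in ("run1", "run2", "run3", "run4"):
    store.complete_interval(run)
print(store.get(2))  # run4 (newest completed run)
print(store.get(4))  # run2 (run1 was dropped at capacity)
```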

There are drawbacks to this storage model. Any high availability function that causes an active CPM switch will flush the results that are in volatile memory. Another consideration is the large amount of system memory consumed using this type of model. Due to the risks and resource consumption this model incurs, an alternate method of storage is supported. An accounting policy can be applied to each measurement interval to write the completed data in system memory to non-volatile flash memory in an XML format. The amount of system memory consumed by historically completed test data must be balanced with an appropriate accounting policy. It is recommended that only necessary data be stored in non-volatile memory to avoid unacceptable risk and unnecessary resource consumption. It is further suggested that a large overlap between the data written to flash memory and stored in volatile memory is unnecessary.

The statistical information in system memory is also available through SNMP. If this method is chosen, a balance must be struck between the number of intervals retained and the times at which the SNMP queries collect the data. Determining the collection times through SNMP must be done with caution: if a measurement interval completes while another is being retrieved through SNMP, the indexing changes to maintain the relative position to the current run. Correct spacing of the collections is key to ensuring data integrity.

The OAM-PM XML file contains the keywords and MIB references listed in the following table.

Table 21:  OAM-PM XML keywords and MIB reference

XML file   Description                                               TIMETRA-OAM-PM-MIB object
keyword
-------------------------------------------------------------------------------------------
oampm      Keywords shared by all OAM-PM protocols                   None - header only
sna        OAM-PM session name                                       tmnxOamPmCfgSessName
mi         Measurement interval record                               None - header only
dur        Measurement interval duration (minutes)                   tmnxOamPmCfgMeasIntvlDuration (enumerated)
ivl        Measurement interval number                               tmnxOamPmStsIntvlNum
sta        Start timestamp                                           tmnxOamPmStsBaseStartTime
ela        Elapsed time (seconds)                                    tmnxOamPmStsBaseElapsedTime
ftx        Frames sent                                               tmnxOamPmStsBaseTestFramesTx
frx        Frames received                                           tmnxOamPmStsBaseTestFramesRx
sus        Suspect flag                                              tmnxOamPmStsBaseSuspect
dmm        Delay record                                              None - header only
mdr        Minimum frame delay, round-trip                           tmnxOamPmStsDelayDmm2wyMin
xdr        Maximum frame delay, round-trip                           tmnxOamPmStsDelayDmm2wyMax
adr        Average frame delay, round-trip                           tmnxOamPmStsDelayDmm2wyAvg
mdf        Minimum frame delay, forward                              tmnxOamPmStsDelayDmmFwdMin
xdf        Maximum frame delay, forward                              tmnxOamPmStsDelayDmmFwdMax
adf        Average frame delay, forward                              tmnxOamPmStsDelayDmmFwdAvg
mdb        Minimum frame delay, backward                             tmnxOamPmStsDelayDmmBwdMin
xdb        Maximum frame delay, backward                             tmnxOamPmStsDelayDmmBwdMax
adb        Average frame delay, backward                             tmnxOamPmStsDelayDmmBwdAvg
mvr        Minimum inter-frame delay variation, round-trip           tmnxOamPmStsDelayDmm2wyMin
xvr        Maximum inter-frame delay variation, round-trip           tmnxOamPmStsDelayDmm2wyMax
avr        Average inter-frame delay variation, round-trip           tmnxOamPmStsDelayDmm2wyAvg
mvf        Minimum inter-frame delay variation, forward              tmnxOamPmStsDelayDmmFwdMin
xvf        Maximum inter-frame delay variation, forward              tmnxOamPmStsDelayDmmFwdMax
avf        Average inter-frame delay variation, forward              tmnxOamPmStsDelayDmmFwdAvg
mvb        Minimum inter-frame delay variation, backward             tmnxOamPmStsDelayDmmBwdMin
xvb        Maximum inter-frame delay variation, backward             tmnxOamPmStsDelayDmmBwdMax
avb        Average inter-frame delay variation, backward             tmnxOamPmStsDelayDmmBwdAvg
mrr        Minimum frame delay range, round-trip                     tmnxOamPmStsDelayDmm2wyMin
xrr        Maximum frame delay range, round-trip                     tmnxOamPmStsDelayDmm2wyMax
arr        Average frame delay range, round-trip                     tmnxOamPmStsDelayDmm2wyAvg
mrf        Minimum frame delay range, forward                        tmnxOamPmStsDelayDmmFwdMin
xrf        Maximum frame delay range, forward                        tmnxOamPmStsDelayDmmFwdMax
arf        Average frame delay range, forward                        tmnxOamPmStsDelayDmmFwdAvg
mrb        Minimum frame delay range, backward                       tmnxOamPmStsDelayDmmBwdMin
xrb        Maximum frame delay range, backward                       tmnxOamPmStsDelayDmmBwdMax
arb        Average frame delay range, backward                       tmnxOamPmStsDelayDmmBwdAvg
fdr        Frame delay bin record, round-trip                        None - header only
fdf        Frame delay bin record, forward                           None - header only
fdb        Frame delay bin record, backward                          None - header only
fvr        Inter-frame delay variation bin record, round-trip        None - header only
fvf        Inter-frame delay variation bin record, forward           None - header only
fvb        Inter-frame delay variation bin record, backward          None - header only
frr        Frame delay range bin record, round-trip                  None - header only
frf        Frame delay range bin record, forward                     None - header only
frb        Frame delay range bin record, backward                    None - header only
lbo        Configured lower bound of the bin                         tmnxOamPmCfgBinLowerBound
cnt        Number of measurements within the configured delay        tmnxOamPmStsDelayDmmBinFwdCount
           range                                                     tmnxOamPmStsDelayDmmBinBwdCount
                                                                     tmnxOamPmStsDelayDmmBin2wyCount
slm        Synthetic loss measurement record                         None - header only
txf        Transmitted frames in the forward direction               tmnxOamPmStsLossSlmTxFwd
rxf        Received frames in the forward direction                  tmnxOamPmStsLossSlmRxFwd
txb        Transmitted frames in the backward direction              tmnxOamPmStsLossSlmTxBwd
rxb        Received frames in the backward direction                 tmnxOamPmStsLossSlmRxBwd
avf        Available count in the forward direction                  tmnxOamPmStsLossSlmAvailIndFwd
avb        Available count in the backward direction                 tmnxOamPmStsLossSlmAvailIndBwd
uvf        Unavailable count in the forward direction                tmnxOamPmStsLossSlmUnavlIndFwd
uvb        Unavailable count in the backward direction               tmnxOamPmStsLossSlmUnavlIndBwd
uaf        Undetermined available count in the forward direction     tmnxOamPmStsLossSlmUndtAvlFwd
uab        Undetermined available count in the backward direction    tmnxOamPmStsLossSlmUndtAvlBwd
uuf        Undetermined unavailable count in the forward direction   tmnxOamPmStsLossSlmUndtUnavlFwd
uub        Undetermined unavailable count in the backward direction  tmnxOamPmStsLossSlmUndtUnavlBwd
hlf        Count of HLIs in the forward direction                    tmnxOamPmStsLossSlmHliFwd
hlb        Count of HLIs in the backward direction                   tmnxOamPmStsLossSlmHliBwd
chf        Count of CHLIs in the forward direction                   tmnxOamPmStsLossSlmChliFwd
chb        Count of CHLIs in the backward direction                  tmnxOamPmStsLossSlmChliBwd
mff        Minimum FLR in the forward direction                      tmnxOamPmStsLossSlmMinFlrFwd
xff        Maximum FLR in the forward direction                      tmnxOamPmStsLossSlmMaxFlrFwd
aff        Average FLR in the forward direction                      tmnxOamPmStsLossSlmAvgFlrFwd
mfb        Minimum FLR in the backward direction                     tmnxOamPmStsLossSlmMinFlrBwd
xfb        Maximum FLR in the backward direction                     tmnxOamPmStsLossSlmMaxFlrBwd
afb        Average FLR in the backward direction                     tmnxOamPmStsLossSlmAvgFlrBwd
-------------------------------------------------------------------------------------------

Note: for the cnt keyword, the session_name, interval_duration, interval_number, {fd, fdr, ifdv}, bin_number, and {forward, backward, round-trip} indices are provided by the surrounding XML context.

By default, the 15-min measurement interval stores 33 test runs (32+1) with a configurable range of 1 to 96, and the 1-hour measurement interval stores 9 test runs (8+1) with a configurable range of 1 to 24. The 1-day measurement interval stores 2 test runs (1+1); this value cannot be changed.

All three measurement intervals may be added to a single session if required. Each measurement interval included in a session is updated simultaneously for each test that is executing. If a measurement interval length is not required, it should not be configured. In addition to the three fixed-length measurement intervals, a fourth, always-on raw measurement interval is allocated at test creation. Data collection for the raw measurement interval commences immediately following the execution of a no shutdown command. It is a valuable tool for real-time troubleshooting because it maintains the same performance information and relates to the same bins as the fixed-length collection windows. The operator may clear the contents of the raw measurement interval to flush stale statistical data and look at current conditions. This measurement interval has no configuration options, cannot be written to flash memory, and cannot be disabled; it is a single, never-ending window.

Memory allocation for the measurement intervals is performed when the test is configured. Volatile memory is not flushed until the test is deleted from the configuration, a high availability event causes the backup CPM to become the newly active CPM, or some other event clears the active CPM system memory. Shutting down a test does not release the allocated memory for the test.

Measurement intervals also include a suspect flag. The suspect flag is used to indicate that data collected in the measurement interval may not be representative. The flag will be set to true only under the following conditions:

  1. Time of day clock is adjusted by more than 10 seconds
  2. Test start does not align with the start boundary of the measurement interval. This would be common for the first execution for clock aligned tests.
  3. Test stopped before the end of the measurement interval boundary

The suspect flag is not set for times of service disruption, maintenance windows, discontinuity, low packet counts, or other such events. Higher-level systems are required to interpret and correlate those types of events with the measurement intervals that executed during the related interruption or condition. Because each measurement interval contains a start and stop time, the information is readily available for higher-level systems to discount the affected windows of time.

3.7.4. Data structures and storage

There are two main metrics that are the focus of OAM-PM: delay and loss. The different metrics have two unique storage structures and will allocate their own measurement intervals for these structures. This occurs regardless of whether the performance data is gathered with a single packet or multiple packet types.

Delay metrics include Frame Delay (FD), InterFrame Delay Variation (IFDV), Frame Delay Range (FDR) and Mean Frame Delay (MFD). Unidirectional and round trip results are stored for each metric:

  1. Frame Delay (FD)
    FD is the amount of time required to send and receive the packet.
  2. InterFrame Delay Variation (IFDV)
    IFDV is the difference in the delay metrics between two adjacent packets.
  3. Frame Delay Range (FDR)
    FDR is the difference between the minimum frame delay and the individual packet.
  4. Mean Frame Delay (MFD)
    MFD is the mathematical average for the frame delay over the entire window.

FD, IFDV and FDR statistics are binnable results. FD, IFDV, FDR and MFD all include minimum, maximum, and average values. Unidirectional and round trip results are stored for each metric.
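The four delay metrics can be computed from a window of one-way delay samples, as in the following Python sketch. This is illustrative only; as an assumption, FDR is taken here against the minimum delay of the same window, and IFDV against the previous adjacent packet.

```python
def delay_metrics(delays_us):
    """Compute FD min/max, MFD, and per-packet FDR and IFDV from a
    window of one-way delay samples in microseconds (illustrative)."""
    fd_min, fd_max = min(delays_us), max(delays_us)
    mfd = sum(delays_us) / len(delays_us)          # mean frame delay
    fdr = [d - fd_min for d in delays_us]          # delta from the window minimum
    ifdv = [abs(b - a) for a, b in zip(delays_us, delays_us[1:])]
    return fd_min, fd_max, mfd, fdr, ifdv

fd_min, fd_max, mfd, fdr, ifdv = delay_metrics([1000, 1200, 900, 1100])
print(fd_min, fd_max, mfd)   # 900 1200 1050.0
print(fdr)                   # [100, 300, 0, 200]
print(ifdv)                  # [200, 300, 200]
```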

Unidirectional frame delay and frame delay range measurements require exceptional time of day clock synchronization. If the time of day clock does not exhibit extremely tight synchronization, unidirectional measurements will not be representative. In one direction, the measurement will be artificially increased by the difference in the clocks. In the other direction, the measurement will be artificially decreased by the difference in the clocks. This level of clocking accuracy is not available with NTP. To achieve this level of time of day clock synchronization, Precision Time Protocol (PTP) 1588v2 should be considered.

Round trip metrics do not require clock synchronization between peers, since the four timestamps allow for accurate representation of the round trip delay. The mathematical computation removes remote processing and any difference in time of day clocking. Round trip measurements do require stable local time of day clocks.
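The four-timestamp computation can be sketched as follows. The variable names are illustrative, following the DMM timestamp order: transmit at the source, receive at the reflector, transmit at the reflector, receive back at the source.

```python
def round_trip_delay(t1, t2, t3, t4):
    """Round-trip delay from the four DMM timestamps:
    t1 = Tx at source, t2 = Rx at reflector,
    t3 = Tx at reflector, t4 = Rx back at source.
    Subtracting the reflector's residence time (t3 - t2) removes both
    remote processing delay and any offset between the two clocks."""
    return (t4 - t1) - (t3 - t2)

# Reflector clock is 1_000_000 us ahead of the source clock; the
# offset cancels and only the 2 x 500 us of wire time remains.
t1 = 0
t2 = 1_000_000 + 500      # arrives 500 us later (reflector clock)
t3 = 1_000_000 + 700      # 200 us residence time at the reflector
t4 = 1200                 # 500 us back (source clock)
print(round_trip_delay(t1, t2, t3, t4))  # 1000
```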

Any delay metric that is negative will be treated as zero and placed in bin 0, the lowest bin which has a lower boundary of 0 microseconds.

Delay results are mapped to the measurement interval that is active when the result arrives back at the source.

There are no supported log events based on delay metrics.

Loss metrics are only unidirectional and will report frame loss ratio (FLR) and availability information. Frame loss ratio is the computation of loss (lost/sent) over time. Loss measurements during periods of unavailability are not included in the FLR calculation as they are counted against the unavailability metric.

Availability requires relating three different functions. First, the individual probes are marked as available or unavailable based on sequence numbers in the protocol. A number of probes are rolled up into a small measurement window, typically 1 s. Frame loss ratio is computed over all the probes in a small window. If the resulting percentage is higher than the configured threshold, the small window is marked as unavailable. If the resulting percentage is lower than the threshold, the small window is marked as available. A sliding window is defined as some number of small windows, typically 10. The sliding window is used to determine availability and unavailability events. Switching from one state to the other requires every small window in the sliding window to be the same state and different from the current state.
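The small-window marking and sliding-window state machine described above can be sketched in Python. This is an illustrative model only, assuming a small window at or above the FLR threshold is marked unavailable and the new state is declared on the last window of the qualifying run.

```python
def availability_states(window_flrs, threshold=0.5, sliding=10):
    """Mark each small window available/unavailable against the FLR
    threshold, then apply the sliding-window rule: the state flips only
    when `sliding` consecutive windows all disagree with it.
    Returns the declared per-window availability state (sketch)."""
    marks = [flr < threshold for flr in window_flrs]  # True = available
    state, states, run = True, [], 0
    for m in marks:
        run = run + 1 if m != state else 0
        if run == sliding:
            state, run = m, 0   # every window in the sliding window disagreed
        states.append(state)
    return states

# 13 clean windows, then sustained 100% loss: the unavailable state is
# declared on the 10th consecutive lossy window (index 22, 0-based).
flrs = [0.0] * 13 + [1.0] * 11
states = availability_states(flrs)
print(states.index(False))  # 22
```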

The availability and unavailability counters are incremented by the number of small windows that occurred during available and unavailable periods, respectively.

Availability and unavailability using synthetic loss measurements is meant to capture the loss behavior for the service. It is not meant to capture and report on service outages or communication failures. Communication failures of a bidirectional or unidirectional nature must be captured using some other means of connectivity verification, alarming, or continuity checking. During times of complete or extended failure periods it becomes necessary to timeout individual test probes. It is not possible to determine the direction of the loss because no response packets are being received back on the source. In this case, the statistics calculation engine maintains the previous state, updating the appropriate directional availability or unavailability counter. At the same time, an additional per-direction undetermined counter is updated. This undetermined counter is used to indicate that the availability or unavailability statistics could not be determined for a number of small windows.

During connectivity outages, the higher level systems can be used to discount the loss measurement interval, which covers the same span as the outage.

Availability and unavailability computations may delay the completion of a measurement interval. The declaration of a state change, or the delay in closing a measurement interval, can be up to the length of the sliding window plus the timeout of the last packet. A measurement interval cannot close until the sliding window has determined availability or unavailability. If the availability state is changing and the determination crosses two measurement intervals, the measurement interval does not complete until the declaration has occurred. Typically, standards bodies indicate the timeout per packet. For Ethernet, the DMMv1 and SLM timeout values are set at 5 s and cannot be configured.

There are no log events based on availability or unavailability state changes.

During times of availability, there can be high loss intervals (HLI) or consecutive high loss intervals (CHLI). These indicate that the service was available, but individual small windows or consecutive small windows experienced frame loss ratios exceeding the configured acceptable limit. An HLI is any single small window that exceeds the configured frame loss ratio; this could equate to a severely errored second, assuming the small window is one second. A CHLI is a run of consecutive high loss intervals that exceeds a configured consecutive threshold within the sliding window. Only one HLI is counted per small window.
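A hedged Python sketch of the HLI/CHLI counting rules. Assumptions, not confirmed by the source: one HLI is counted per lossy small window, and one CHLI is counted per run that exceeds the chli-threshold.

```python
def count_hli_chli(window_flrs, flr_threshold=0.5, chli_threshold=4):
    """Count high loss intervals (HLI) and consecutive high loss
    intervals (CHLI) over small windows during available time.
    A window exceeding the FLR threshold is one HLI; a run of HLIs
    longer than chli_threshold also increments the CHLI count.
    Illustrative sketch of the counting rules described above."""
    hli = chli = run = 0
    for flr in window_flrs:
        if flr > flr_threshold:
            hli += 1
            run += 1
            if run == chli_threshold + 1:  # run has exceeded the threshold
                chli += 1
        else:
            run = 0
    return hli, chli

# Seven lossy windows in total; the run of five exceeds chli-threshold 4.
print(count_hli_chli([0.9, 0.9, 0.0, 0.9, 0.9, 0.9, 0.9, 0.9, 0.0]))
# (7, 1)
```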

Availability can only be reasonably determined with synthetic packets, because the synthetic packet is the packet being counted and provides a uniform packet flow that can be used for the computation. Transmit- and receive-counter-based approaches cannot reliably determine availability, because there is no guarantee that service data is on the wire, and the lack of uniformity in the service data that is on the wire could make a valid declaration difficult.

Figure 28 shows loss in a single direction using synthetic packets, and demonstrates what happens when a possible unavailability event crosses a measurement interval boundary. In the diagram, the first 13 small windows are all marked available (1), which means that the loss probes that fit into each of those small windows did not equal or exceed a frame loss ratio of 50%. The next 11 small windows are marked as unavailable, which means that the loss probes that fit into each of those small windows were equal to or above a frame loss ratio of 50%. After the 10th consecutive small window of unavailability, the state transitions from available to unavailable. The 25th small window is the start of the new available state which is declared following the 10th consecutive available small window. Notice that the frame loss ratio is 00.00%; this is because all the small windows that are marked as unavailable are counted toward unavailability, and as such are excluded from impacting the FLR. If there were any small windows of unavailability that were outside of an unavailability event, they would be marked as HLI or CHLI and be counted as part of the frame loss ratio.

Figure 28:  Evaluating and computing loss and availability 

3.7.5. Bin groups

Bin groups are templates that are referenced by the session. Three types of binnable statistics are available: FD, IFDV, and FDR, all of which are available in forward, backward, and round trip directions. Each of these metrics can have up to ten bin groups configured to group the results. Bin groups are configured by indicating a lower boundary. Bin 0 has a lower boundary that is always zero and is not configurable. The microsecond range of the bins is the difference between the adjacent lower boundaries. For example, bin-type fd bin 1 configured with lower-bound 1000 means that bin 0 will capture all frame delay statistics results between 0 and 1 ms. Bin 1 will capture all results above 1 ms and below the bin 2 lower boundary. The last bin to be configured would represent the bin that collects all the results at and above that value. Not all ten bins must be configured.
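Mapping a delay result to its bin from the configured lower bounds can be sketched with Python's bisect module. This is illustrative only; the bounds used below match the fd bins of the bin-group 2 configuration example later in this section.

```python
import bisect

def bin_for_delay(lower_bounds_us, delay_us):
    """Map a delay result to a bin index given the configured lower
    bounds (microseconds). Bin 0 always starts at 0; a negative result
    is treated as zero and lands in bin 0; the last bin is unbounded."""
    delay_us = max(delay_us, 0)            # negative delays go to bin 0
    return bisect.bisect_right(lower_bounds_us, delay_us) - 1

# fd lower bounds from the bin-group 2 example (bin 1 at 1000 us, ...)
bounds = [0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 10000]
print(bin_for_delay(bounds, 999))     # 0  (below 1 ms)
print(bin_for_delay(bounds, 1000))    # 1  (at the bin 1 lower bound)
print(bin_for_delay(bounds, 250000))  # 9  (last bin catches the rest)
print(bin_for_delay(bounds, -40))     # 0  (negative treated as zero)
```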

Each binnable delay metric type requires its own values for the bin groups. Each bin in a type is configurable for one value; it is not possible to configure a bin with different values for round trip, forward, and backward. Consideration must be given to configuring boundaries that represent the important statistics for the specific service.

As stated earlier in this section, this is not a dynamic environment. A bin group cannot be shut down while it is referenced by any active test, and a bin group must be shut down before it can be modified. If the configuration of a bin group must be changed and a large number of sessions reference that bin group, migrating the existing sessions to a new bin group with the new parameters can reduce the maintenance window. To modify any session parameter, every test in the session must be shut down.

Bin group 1 is the default bin group. Every session requires a bin group to be assigned. By default, bin group 1 is assigned to every OAM-PM session that does not have a bin group explicitly configured. Bin group 1 cannot be modified. The bin group 1 configuration parameters are as follows:

-------------------------------------------------------------------------------
Configured Lower Bounds for Delay Measurement (DMM) Tests, in microseconds
-------------------------------------------------------------------------------
Group Description                    Admin Bin     FD(us)    FDR(us)   IFDV(us)
-------------------------------------------------------------------------------
1     OAM PM default bin group (not*    Up   0          0          0          0
                                             1       5000       5000       5000
                                             2      10000          -          -
-------------------------------------------------------------------------------

3.7.6. Relating the components

The following figure shows how all of the OAM-PM concepts previously described relate to each other. It shows a more detailed hierarchy than shown in the introduction, including the relationship between the tests, the measurement intervals, and the storage of the results.

Figure 29:  Relating OAM-PM components 

3.7.7. Monitoring

The following configuration examples are used to demonstrate the different show and monitoring commands available to check OAM-PM.

3.7.7.1. Accounting policy configuration

config>log# info
----------------------------------------------
        file-id 1
            description "OAM PM XML file Parameters"
            location cf2:
            rollover 10 retention 2
        exit
        accounting-policy 1
            description "Default OAM PM Collection Policy for 15-min Bins"
            record complete-pm
            collection-interval 5
            to file 1
            no shutdown
        exit
        log-id 1
        exit
----------------------------------------------

3.7.7.2. ETH-CFM configuration

config>eth-cfm# info
----------------------------------------------
        domain 12 format none level 2
            association 4 format string name "vpls4-0000001"
                bridge-identifier 4
                    id-permission chassis
                exit
                ccm-interval 1
                remote-mepid 30
            exit
        exit

3.7.7.3. Service configuration

config>service>vpls# info
----------------------------------------------
            description "OAM PM Test Service to v30"
            stp
                shutdown
            exit
            sap 1/1/10:4.* create
                eth-cfm
                    mep 28 domain 12 association 4 direction up
                        ccm-enable
                        mac-address 00:00:00:00:00:28
                        no shutdown
                    exit
                exit
            exit
            sap 1/2/1:4.* create
            exit
            no shutdown

3.7.7.4. OAM-PM configuration

config>oam-pm#info detail
----------------------------------------------- 
        bin-group 2 fd-bin-count 10 fdr-bin-count 2 ifdv-bin-count 10 create
            no description
            bin-type fd
                bin 1
                    lower-bound 1000
                exit
                bin 2
                    lower-bound 2000
                exit
                bin 3
                    lower-bound 3000
                exit
                bin 4
                    lower-bound 4000
                exit
                bin 5
                    lower-bound 5000
                exit
                bin 6
                    lower-bound 6000
                exit
                bin 7
                    lower-bound 7000
                exit
                bin 8
                    lower-bound 8000
                exit
                bin 9
                    lower-bound 10000
                exit
            exit
            bin-type fdr
                bin 1
                    lower-bound 5000
                exit
            exit
            bin-type ifdv
                bin 1
                    lower-bound 100
                exit
                bin 2
                    lower-bound 200
                exit
                bin 3
                    lower-bound 300
                exit
                bin 4
                    lower-bound 400
                exit
                bin 5
                    lower-bound 500
                exit
                bin 6
                    lower-bound 600
                exit
                bin 7
                    lower-bound 700
                exit
                bin 8
                    lower-bound 800
                exit
                bin 9
                    lower-bound 1000
                exit
            exit
            no shutdown
        exit
        session "eth-pm-service-4" test-family ethernet session-type proactive create
            bin-group 2
            no description
            meas-interval 15-mins create
                no accounting-policy
                boundary-type clock-aligned
                clock-offset 0
                intervals-stored 32
            exit
            ethernet
                dest-mac 00:00:00:00:00:30
                priority 0
                source mep 28 domain 12 association 4
                dmm test-id 10004 create
                    data-tlv-size 1000
                    interval 1000
                    no test-duration
                    no shutdown
                exit
                slm test-id 10004 create
                    data-tlv-size 1000
                    flr-threshold 50
                    no test-duration
                    timing frames-per-delta-t 10 consec-delta-t 10 interval 100 
                           chli-threshold 4
                    no shutdown
                exit
            exit
        exit

3.7.7.5. Show and monitor commands

The monitor command can be used to automatically update the statistics for the raw measurement interval.

show oam-pm bin-group
-------------------------------------------------------------------------------
Configured Lower Bounds for Delay Measurement (DMM) Tests, in microseconds
-------------------------------------------------------------------------------
Group Description                    Admin Bin     FD(us)    FDR(us)   IFDV(us)
-------------------------------------------------------------------------------
1     OAM PM default bin group (not*    Up   0          0          0          0
                                             1       5000       5000       5000
                                             2      10000          -          -
-------------------------------------------------------------------------------
2                                       Up   0          0          0          0
                                             1       1000       5000        100
                                             2       2000          -        200
                                             3       3000          -        300
                                             4       4000          -        400
                                             5       5000          -        500
                                             6       6000          -        600
                                             7       7000          -        700
                                             8       8000          -        800
                                             9      10000          -       1000
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
* indicates that the corresponding row element may have been truncated.
 
show oam-pm bin-group 2
-------------------------------------------------------------------------------
Configured Lower Bounds for Delay Measurement (DMM) Tests, in microseconds
-------------------------------------------------------------------------------
Group Description                    Admin Bin     FD(us)    FDR(us)   IFDV(us)
-------------------------------------------------------------------------------
2                                       Up   0          0          0          0
                                             1       1000       5000        100
                                             2       2000          -        200
                                             3       3000          -        300
                                             4       4000          -        400
                                             5       5000          -        500
                                             6       6000          -        600
                                             7       7000          -        700
                                             8       8000          -        800
                                             9      10000          -       1000
------------------------------------------------------------------------------- 
 
show oam-pm bin-group-using
=========================================================================
OAM Performance Monitoring Bin Group Configuration for Sessions
=========================================================================
Bin Group       Admin   Session                            Session State
-------------------------------------------------------------------------
2               Up      eth-pm-service-4                             Act
-------------------------------------------------------------------------
========================================================================= 
 
show oam-pm bin-group-using bin-group 2
=========================================================================
OAM Performance Monitoring Bin Group Configuration for Sessions
=========================================================================
Bin Group       Admin   Session                            Session State
-------------------------------------------------------------------------
2               Up      eth-pm-service-4                             Act
-------------------------------------------------------------------------
========================================================================= 
 
show oam-pm sessions test-family ethernet
============================================================================
OAM Performance Monitoring Session Summary for the Ethernet Test Family
============================================================================
Session                          State   Bin Group   Sess Type   Test Types
----------------------------------------------------------------------------
eth-pm-service-4                   Act           2   proactive      DMM SLM
============================================================================ 
 
show oam-pm session "eth-pm-service-4" all
-------------------------------------------------------------------------------
Basic Session Configuration
-------------------------------------------------------------------------------
Session Name      : eth-pm-service-4
Description       : (Not Specified)
Test Family       : ethernet            Session Type       : proactive
Bin Group         : 2
-------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
Ethernet Configuration
-------------------------------------------------------------------------------
Source MEP        : 28                  Priority           : 0
Source Domain     : 12                  Dest MAC Address   : 00:00:00:00:00:30
Source Assoc'n    : 4
-------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
DMM Test Configuration and Status
-------------------------------------------------------------------------------
Test ID           : 10004               Admin State        : Up
Oper State        : Up                  Data TLV Size      : 1000 octets
On-Demand Duration: Not Applicable      On-Demand Remaining: Not Applicable
Interval          : 1000 ms
-------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
SLM Test Configuration and Status
-------------------------------------------------------------------------------
Test ID           : 10004               Admin State        : Up
Oper State        : Up                  Data TLV Size      : 1000 octets
On-Demand Duration: Not Applicable      On-Demand Remaining: Not Applicable
Interval          : 100 ms
CHLI Threshold    : 4 HLIs              Frames Per Delta-T : 10 SLM frames
Consec Delta-Ts   : 10                  FLR Threshold      : 50%
-------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
15-mins Measurement Interval Configuration
-------------------------------------------------------------------------------
Duration          : 15-mins             Intervals Stored   : 32
Boundary Type     : clock-aligned       Clock Offset       : 0 seconds
Accounting Policy : none
-------------------------------------------------------------------------------
 
-------------------------------------------------------------------------------
Configured Lower Bounds for Delay Measurement (DMM) Tests, in microseconds
-------------------------------------------------------------------------------
Group Description                    Admin Bin     FD(us)    FDR(us)   IFDV(us)
-------------------------------------------------------------------------------
2                                       Up   0          0          0          0
                                             1       1000       5000        100
                                             2       2000          -        200
                                             3       3000          -        300
                                             4       4000          -        400
                                             5       5000          -        500
                                             6       6000          -        600
                                             7       7000          -        700
                                             8       8000          -        800
                                             9      10000          -       1000
------------------------------------------------------------------------------- 
 
show oam-pm statistics session "eth-pm-service-4" dmm meas-interval 15-mins interval-number 2 all
------------------------------------------------------------------------------
Start (UTC)       : 2014/02/01 10:00:00          Status          : completed
Elapsed (seconds) : 900                          Suspect         : no
Frames Sent       : 900                          Frames Received : 900
------------------------------------------------------------------------------
 
----------------------------------------------------------------------
Bin Type     Direction     Minimum (us)   Maximum (us)   Average (us)
----------------------------------------------------------------------
FD           Forward                  0           8330            712
FD           Backward               143          11710           2605
FD           Round Trip            1118          14902           3111
FDR          Forward                  0           8330            712
FDR          Backward               143          11710           2605
FDR          Round Trip               0          13784           1990
IFDV         Forward                  0           8330            431
IFDV         Backward                 1          10436            800
IFDV         Round Trip               2          13542           1051
----------------------------------------------------------------------
 
---------------------------------------------------------------
Frame Delay (FD) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us           624            53             0
1            1000 us           229           266           135
2            2000 us            29           290           367
3            3000 us             4           195           246
4            4000 us             7            71            94
5            5000 us             5            12            28
6            6000 us             1             7            17
7            7000 us             0             1             5
8            8000 us             1             4             3
9           10000 us             0             1             5
---------------------------------------------------------------
 
---------------------------------------------------------------
Frame Delay Range (FDR) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us           893           875           873
1            5000 us             7            25            27
---------------------------------------------------------------
 
---------------------------------------------------------------
Inter-Frame Delay Variation (IFDV) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us           411           162            96
1             100 us           113           115           108
2             200 us            67            84            67
3             300 us            56            67            65
4             400 us            36            46            53
5             500 us            25            59            54
6             600 us            25            27            38
7             700 us            29            34            22
8             800 us            41            47            72
9            1000 us            97           259           325
--------------------------------------------------------------- 
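The bin-count tables above are built by placing each delay sample into the highest configured bin whose lower bound does not exceed the sample. A minimal sketch of that bucketing, using the FD lower bounds from bin group 2 shown earlier (an illustration only, not the router's implementation; function names are mine):

```python
# Hedged sketch: assign each delay sample (in microseconds) to the
# highest bin whose configured lower bound is <= the sample value.
# Lower bounds taken from the FD(us) column of bin group 2 above.
from bisect import bisect_right

FD_LOWER_BOUNDS_US = [0, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 10000]

def bin_index(sample_us, lower_bounds=FD_LOWER_BOUNDS_US):
    """Return the bin number a delay sample falls into."""
    return bisect_right(lower_bounds, sample_us) - 1

def count_bins(samples_us, lower_bounds=FD_LOWER_BOUNDS_US):
    """Accumulate per-bin counts the way the FD table reports them."""
    counts = [0] * len(lower_bounds)
    for s in samples_us:
        counts[bin_index(s, lower_bounds)] += 1
    return counts
```

For example, a forward FD sample of 712 us lands in bin 0, and the maximum round-trip sample of 14902 us lands in bin 9 (the 10000 us bin has no upper bound).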
 
show oam-pm statistics session "eth-pm-service-4" slm meas-interval 15-mins interval-number 2
------------------------------------------------------------------------------
Start (UTC)       : 2014/02/01 10:00:00          Status          : completed
Elapsed (seconds) : 900                          Suspect         : no
Frames Sent       : 9000                         Frames Received : 9000
------------------------------------------------------------------------------
 
------------------------------------------------------
                    Frames Sent       Frames Received
------------------------------------------------------
Forward                    9000                  9000
Backward                   9000                  9000
------------------------------------------------------
 
-------------------------------------------
Frame Loss Ratios
-------------------------------------------
             Minimum    Maximum    Average
-------------------------------------------
Forward       0.000%     0.000%     0.000%
Backward      0.000%     0.000%     0.000%
-------------------------------------------
 
-------------------------------------------------------------------------------
Availability Counters (Und = Undetermined)
-------------------------------------------------------------------------------
           Available   Und-Avail Unavailable Und-Unavail        HLI       CHLI
-------------------------------------------------------------------------------
Forward          900           0           0           0          0          0
Backward         900           0           0           0          0          0
------------------------------------------------------------------------------- 
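The Frame Loss Ratios rows above follow the usual ITU-T Y.1731 loss-measurement definition: frames lost divided by frames sent, per direction. A hedged sketch of that arithmetic (helper name is mine, not a CLI or router function):

```python
def frame_loss_ratio(sent, received):
    """Fraction of transmitted frames lost in one direction (0.0 to 1.0)."""
    if sent == 0:
        return 0.0
    return (sent - received) / sent

# For the 15-minute interval above: 9000 sent and 9000 received in
# each direction, so forward and backward FLR are both 0.000%.
fwd_flr = frame_loss_ratio(9000, 9000)
```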
 
show oam-pm statistics session "eth-pm-service-4" dmm meas-interval raw
------------------------------------------------------------------------------
Start (UTC)       : 2014/02/01 09:43:58          Status          : in-progress
Elapsed (seconds) : 2011                         Suspect         : yes
Frames Sent       : 2011                         Frames Received : 2011
------------------------------------------------------------------------------
 
----------------------------------------------------------------------
Bin Type     Direction     Minimum (us)   Maximum (us)   Average (us)
----------------------------------------------------------------------
FD           Forward                  0          11670            632
FD           Backward                 0          11710           2354
FD           Round Trip            1118          14902           2704
FDR          Forward                  0          11670            611
FDR          Backward                 0          11710           2353
FDR          Round Trip               0          13784           1543
IFDV         Forward                  0          10027            410
IFDV         Backward                 0          10436            784
IFDV         Round Trip               0          13542           1070
----------------------------------------------------------------------
 
---------------------------------------------------------------
Frame Delay (FD) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us          1465           252             0
1            1000 us           454           628           657
2            2000 us            62           593           713
3            3000 us             8           375           402
4            4000 us            11           114           153
5            5000 us             7            26            41
6            6000 us             2            10            20
7            7000 us             0             2             8
8            8000 us             1            10            11
9           10000 us             1             1             6
---------------------------------------------------------------
 
---------------------------------------------------------------
Frame Delay Range (FDR) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us          2001          1963          1971
1            5000 us            11            49            41
---------------------------------------------------------------
 
---------------------------------------------------------------
Inter-Frame Delay Variation (IFDV) Bin Counts
---------------------------------------------------------------
Bin      Lower Bound       Forward      Backward    Round Trip
---------------------------------------------------------------
0               0 us           954           429           197
1             100 us           196           246           197
2             200 us           138           168           145
3             300 us           115           172           154
4             400 us            89            96           136
5             500 us            63            91           108
6             600 us            64            53            89
7             700 us            61            55            63
8             800 us           112            82           151
9            1000 us           219           619           771
--------------------------------------------------------------- 
 
show oam-pm statistics session "eth-pm-service-4" slm meas-interval raw
------------------------------------------------------------------------------
Start (UTC)       : 2014/02/01 09:44:03          Status          : in-progress
Elapsed (seconds) : 2047                         Suspect         : yes
Frames Sent       : 20470                        Frames Received : 20469
------------------------------------------------------------------------------
 
------------------------------------------------------
                    Frames Sent       Frames Received
------------------------------------------------------
Forward                   20329                 20329
Backward                  20329                 20329
------------------------------------------------------
 
-------------------------------------------
Frame Loss Ratios
-------------------------------------------
             Minimum    Maximum    Average
-------------------------------------------
Forward       0.000%     0.000%     0.000%
Backward      0.000%     0.000%     0.000%
-------------------------------------------
 
-------------------------------------------------------------------------------
Availability Counters (Und = Undetermined)
-------------------------------------------------------------------------------
           Available   Und-Avail Unavailable Und-Unavail        HLI       CHLI
-------------------------------------------------------------------------------
Forward         2033           0           0           0          0          0
Backward        2033           0           0           0          0          0
-------------------------------------------------------------------------------
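The availability counters above derive from the SLM delta-t parameters shown in the session configuration (10 frames per delta-t, 50% FLR threshold, 10 consecutive delta-ts). A much-simplified sketch of the counting idea follows; the actual Y.1731 mechanism is a sliding-window availability state machine, so treat this only as an illustration of how high-loss intervals (HLIs) and unavailability relate:

```python
def availability_counters(deltat_flrs, flr_threshold=0.5, consec=10):
    """Simplified model: a delta-t whose FLR meets the threshold is an
    HLI; a run of `consec` or more back-to-back HLIs marks those
    delta-ts as unavailable. Returns (hli_count, unavailable_count)."""
    hli = sum(1 for f in deltat_flrs if f >= flr_threshold)
    unavailable = 0
    run = 0
    for f in deltat_flrs:
        if f >= flr_threshold:
            run += 1
        else:
            if run >= consec:
                unavailable += run
            run = 0
    if run >= consec:
        unavailable += run
    return hli, unavailable
```

In the raw SLM output above no delta-t crossed the 50% threshold, so the HLI, CHLI, and Unavailable columns are all zero.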