This section lists the configuration guidelines for the testhead OAM tool. These guidelines apply to all platforms described in this guide unless a specific platform is called out explicitly. They also apply to both the testhead OAM tool and the service test testhead OAM tool, unless indicated otherwise:
SAPs configured on a LAG cannot be configured for testing with the testhead tool. Service endpoints other than the test SAP (for example, SAPs and SDP bindings) configured in the service can be over a LAG.
On the 7210 SAS-D, software automatically assigns the resources associated with internal ports for use with this feature.
On the 7210 SAS-D, the testhead OAM tool uses internal loopback ports. On the 7210 SAS-Dxp, the user can assign the resources of an internal loopback port to the testhead OAM tool or to port loopback with MAC swap. Depending on the throughput being validated, either a 1GE or a 10GE internal loopback port can be used.
On the 7210 SAS-D and 7210 SAS-Dxp, port loopback with MAC swap is used at both ends. All services on the port on which the test SAP is configured, and all SAPs in the VPLS service other than the test SAP, should be shut down or should not receive any traffic.
The configured CIR/PIR is rounded off to the nearest available hardware rate. The user is provided with an option to select the adaptation rule to use (similar to the support available for QoS policies).
On the 7210 SAS-D and 7210 SAS-Dxp, the specified FC is used to determine the queue used to enqueue the marker packets generated by the testhead application on the egress of the test SAP on the local node.
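For illustration only, the sketch below shows how an adaptation rule might map a configured rate onto discrete hardware rates. The 8 kb/s step granularity is an assumption made for this example, not the platform's actual hardware step, and the rule names mirror the min/max/closest options familiar from QoS policies.

```python
import math

def adapt_rate(configured_kbps, hw_step_kbps=8, rule="closest"):
    """Map a configured CIR/PIR onto a discrete hardware rate.

    The 8 kb/s step is an assumed granularity for illustration only;
    the actual hardware step size is platform- and rate-dependent.
    """
    if rule == "max":    # lowest hardware rate >= configured rate
        return math.ceil(configured_kbps / hw_step_kbps) * hw_step_kbps
    if rule == "min":    # highest hardware rate <= configured rate
        return math.floor(configured_kbps / hw_step_kbps) * hw_step_kbps
    # "closest": hardware rate nearest to the configured rate
    return round(configured_kbps / hw_step_kbps) * hw_step_kbps

print(adapt_rate(1005, rule="max"))      # 1008
print(adapt_rate(1005, rule="min"))      # 1000
print(adapt_rate(1005, rule="closest"))  # 1008
```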
On the 7210 SAS-Dxp and 7210 SAS-K 3SFP+ 8C, the testhead OAM tool allows validation of speeds up to approximately 10 Gb/s.
ITU-T Y.1564 recommends providing an option to configure the CIR step size and the step duration for the service configuration tests. This is not supported directly on the 7210 SAS. It can be achieved using the NSP NFM-P, a third-party NMS, or another application by configuring the rate and duration to correspond to the desired CIR step size and step duration, then repeating the test with a different rate (that is, CIR step size) and duration (that is, step duration), and so on.
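A minimal sketch of that outer loop is shown below, assuming a hypothetical run_testhead(rate_kbps, duration_s) callback that stands in for whatever NMS or script mechanism actually configures the profile and starts each test run.

```python
def emulate_cir_steps(cir_step_kbps, step_duration_s, num_steps, run_testhead):
    """Approximate Y.1564 CIR stepping by repeating the testhead run.

    run_testhead(rate_kbps, duration_s) is a hypothetical callback that
    would configure the testhead profile with the given rate and duration
    and start the test (for example, through the NSP NFM-P or a script).
    """
    results = []
    for step in range(1, num_steps + 1):
        rate_kbps = step * cir_step_kbps        # rate for this CIR step
        results.append(run_testhead(rate_kbps, step_duration_s))
    return results

# Stub callback so the sketch runs standalone; a real deployment would
# trigger the actual test and collect its results here.
def run_testhead_stub(rate_kbps, duration_s):
    return {"rate_kbps": rate_kbps, "duration_s": duration_s, "result": "PASS"}

print(emulate_cir_steps(cir_step_kbps=250_000, step_duration_s=60,
                        num_steps=4, run_testhead=run_testhead_stub))
```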
The testhead waits for about 5 seconds at the end of the configured test duration before collecting statistics. This allows all in-flight packets to be received by the node and accounted for in the test measurements. The user cannot start another test during this period.
When using the testhead to test the bandwidth available between SAPs configured in a VPLS service, operators must ensure that no other SAPs in the VPLS service are exchanging traffic, particularly BUM traffic and unicast traffic destined for either the local test SAP or the remote SAP. BUM traffic consumes network resources that are also used by the testhead traffic.
It is possible that test packets (data and/or marker packets) remain in the loop created for testing when the tests are stopped. This is most likely when the QoS policies use much lower shaper rates, resulting in high latency for packets flowing through the network loop. The user must remove the loop at both ends when the test is complete or when the test is stopped, and wait for a suitable interval before starting the next test for the same service, to ensure that packets drain out of the network for that service. If this is not done, subsequent tests might process and account for these stale packets, resulting in incorrect results. The software cannot detect stale packets in the loop because it does not associate or check every packet with a test session.
Traffic received from the remote node and looped back into the test port (where the test SAP is configured) on the local end (that is, the end where the testhead tool is invoked) is dropped by the hardware after processing and is not sent back to the remote end. The SAP ingress QoS policies and SAP ingress filter policies must match the packet header fields specified by the user in the testhead profile, except that the source and destination MAC addresses are swapped.
On the 7210 SAS-D and 7210 SAS-Dxp, latency is not computed if marker packets are not received by the local node where the test is generated; in such cases, it is printed as 0 (zero). If jitter = 0 and latency > 0, the calculated jitter is less than the precision used for measurement. There is also a small chance that jitter was not actually calculated, that is, only one latency value was computed. This typically indicates a network issue rather than a testhead issue.
When the throughput is not met, FLR cannot be calculated. If the measured throughput is within approximately +/-10% of the user-configured rate, the FLR value is displayed; otherwise, the software prints "Not Applicable". The percentage of variance of the measured bandwidth depends on the packet size in use and the configured rate.
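The check below restates that rule as code, purely for illustration. The fixed 10% tolerance and the function name are assumptions for the example; as noted above, the actual variance depends on packet size and configured rate.

```python
def flr_display(measured_kbps, configured_kbps, flr, tolerance=0.10):
    """Return the FLR string per the +/-10% guideline (illustrative only)."""
    low = configured_kbps * (1 - tolerance)
    high = configured_kbps * (1 + tolerance)
    if low <= measured_kbps <= high:
        return f"{flr:.6f}"          # FLR value is displayed
    return "Not Applicable"          # throughput too far from configured rate

print(flr_display(905_000, 1_000_000, 0.000002))  # within 10%: '0.000002'
print(flr_display(850_000, 1_000_000, 0.000002))  # outside 10%: 'Not Applicable'
```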
On the 7210 SAS-D and 7210 SAS-Dxp, the user must not use the CLI command to clear the statistics of the test SAP port, the testhead loopback port, or the MAC swap loopback port while the testhead tool is running. The tool uses the port statistics to determine the Tx/Rx frame counts.
On the 7210 SAS-D and 7210 SAS-Dxp, the testhead tool generates traffic at a rate slightly above the CIR. The additional bandwidth is attributable to the marker packets used for latency measurements. This is not expected to affect the latency measurement or the test results in a significant way.
If the operational throughput is 1 kb/s and it is achieved in the test loop, the computed throughput could still be printed as 0 if it is less than 1 kb/s (for example, 0.99 kb/s). In such cases, if the FLR result is PASS, the tool indicates that the throughput has been achieved.
The testhead tool displays a failure result if the received frame count is less than the injected frame count, even though the FLR may be displayed as 0. This occurs because FLR results are truncated to 6 decimal places and can happen when the loss is very small.
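The arithmetic behind this case, shown with illustrative frame counts: a very small loss truncated to six decimal places displays as 0.000000 even though the received count is below the injected count.

```python
import math

injected = 10_000_000
received = 9_999_999            # one frame lost in the loop

flr = (injected - received) / injected            # 1e-07
flr_truncated = math.floor(flr * 10**6) / 10**6   # truncate to 6 decimal places

print(f"{flr_truncated:.6f}")   # 0.000000, yet received < injected -> failure
```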
On the 7210 SAS-D and 7210 SAS-Dxp, as the rate approaches the maximum rate supported on the platform or the maximum rate of the loop (which includes the capacities of the internal loopback ports in use), the user needs to account for the marker packet rate and the meter behavior when configuring the CIR. For example, to test 1 Gb/s with a frame size of 512 bytes, configure about 962396 kb/s instead of 962406 kb/s, the maximum rate that can be achieved for this frame size. In general, configure about 98% to 99% (based on packet size) of the maximum possible rate to account for marker packets when testing at rates close to the bandwidth available in the network (see the worked example after the next guideline). At the maximum rate, the injection of marker packets by the CPU results in drops of either the injected data traffic or the marker packets themselves, because the net rate exceeds the capacity. These drops cause the testhead to always report a failure unless the rate is marginally reduced.
The testhead uses the Layer 2 rate, which is calculated by subtracting the Layer 1 overhead added to every Ethernet frame by the IFG and preamble (typically about 20 bytes per frame: IFG = 12 bytes and preamble = 8 bytes). The testhead tool uses the user-configured frame size to compute the Layer 2 rate and does not allow the user to configure a value greater than that rate. For 512-byte Ethernet frames, the Layer 2 rate is 962406 kb/s and the Layer 1 rate is 1 Gb/s.
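The worked example below reproduces the 962406 kb/s figure from the 20-byte IFG plus preamble overhead, and then applies the 98% to 99% headroom suggested in the previous guideline. The 0.99 factor is only an illustration of that guideline; the exact value depends on packet size.

```python
def l2_rate_kbps(l1_rate_kbps, frame_size_bytes, l1_overhead_bytes=20):
    """Layer 2 rate after removing IFG (12 B) + preamble (8 B) per frame."""
    return l1_rate_kbps * frame_size_bytes / (frame_size_bytes + l1_overhead_bytes)

l1 = 1_000_000                      # 1 Gb/s Layer 1 rate, in kb/s
l2 = l2_rate_kbps(l1, 512)          # Layer 2 rate for 512-byte frames
print(round(l2))                    # 962406

# Leave headroom for CPU-injected marker packets when testing near line rate;
# the 0.99 factor is only an illustration of the 98-99% guideline above.
print(round(l2 * 0.99))             # ~952782 kb/s
```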
The operator is not expected to use the testhead tool to measure the throughput or other performance parameters of the network during a network event. Network events could affect the other SAPs, SDP bindings, or PWs configured in the service; examples are the transition of a SAP because of a G.8032 ring failure, or the transition of an active/standby SDP binding/PW because of link or node failures.
On the 7210 SAS-D and 7210 SAS-Dxp, the 2-way delay (also known as latency) values measured by the testhead tool are more accurate than those obtained using OAM tools, as the timestamps are generated in hardware.
On the 7210 SAS-D and 7210 SAS-Dxp, the profile assigned to the packets generated by the testhead is ignored on access SAP ingress. A 7210 SAS service access port, access-uplink port, or network port can mark the packets appropriately on egress to allow subsequent nodes in the network to differentiate in-profile and out-of-profile packets and provide them with the appropriate QoS treatment. 7210 SAS access-uplink port ingress and network port ingress are capable of providing the appropriate QoS treatment to in-profile and out-of-profile packets.
On the 7210 SAS-D and 7210 SAS-Dxp, the marker packets are sent over and above the configured CIR or PIR. The tool cannot individually determine the number of green and yellow packets injected. Therefore, marker packets are not accounted for in the injected or received green (in-profile) and yellow (out-of-profile) packet counts; they are only accounted for in the total injected and total received counts. As a result, the overall FLR metric accounts for marker packet loss (if any), while the green and yellow FLR metrics do not account for any marker packet loss.
On the 7210 SAS-D and 7210 SAS-Dxp, marker packets are used to measure the latency and jitter of green (in-profile) packets and of yellow (out-of-profile) packets. These marker packets are identified as green or yellow based on the packet marking (for example, dot1p). The latency values can differ for green and yellow packets based on the treatment provided to the packets by the network QoS configuration.
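To make the accounting concrete, the sketch below uses illustrative counts only: marker packets appear solely in the total injected/received counts, so only the overall FLR can reflect marker loss.

```python
# Illustrative counts only; marker packets are counted in the totals,
# not in the green (in-profile) or yellow (out-of-profile) counts.
green_tx, green_rx = 8_000_000, 8_000_000
yellow_tx, yellow_rx = 2_000_000, 2_000_000
marker_tx, marker_rx = 1_000, 999            # one marker packet lost

total_tx = green_tx + yellow_tx + marker_tx
total_rx = green_rx + yellow_rx + marker_rx

flr_total = (total_tx - total_rx) / total_tx      # reflects marker loss
flr_green = (green_tx - green_rx) / green_tx      # 0.0, markers excluded
flr_yellow = (yellow_tx - yellow_rx) / yellow_tx  # 0.0, markers excluded

print(flr_total, flr_green, flr_yellow)
```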
The following table describes the SAP encapsulations that are supported for the testhead on the 7210 SAS-D and 7210 SAS-Dxp.
| Epipe service configured with svc-sap-type | Test SAP encapsulations |
|---|---|
| null-star | Null, :*, 0.*, Q.* |
| any | Null, :0, :Q, :Q1.Q2 |
| dot1q-preserve | :Q |