The Service Assurance Agent (SAA) tool allows operators to configure tests that provide performance information, such as delay, jitter, and loss, for services or network segments. The test results are saved in SNMP tables or summarized XML files. These results can be collected and reported on using network management systems.
SAA uses resources allocated to various OAM processes. These processes are not dedicated to SAA but are shared throughout the system. Table: SAA test and descriptions describes the logical groups of different OAM functions.
| Test | Description |
|---|---|
| Background | Tasks configured outside the SAA hierarchy that consume OAM task resources. Specifically, these include SDP keepalive, static route CPE check, filter redirect policy, ping test, and VRRP policy host unreachable. These are critical tasks that ensure network operation and may affect data forwarding or network convergence. |
| SAA Continuous | SAA tests configured as continuous (always scheduled). |
| SAA non-continuous | SAA tests that are not configured as continuous and are scheduled outside the SAA application. The `oam saa test-name start` command is required to initiate the test run. |
| Non-SAA (Directed) | Tasks that do not include any configuration under SAA. These tests are initiated via SNMP or the CLI and are used to troubleshoot or profile network conditions. They take the form `oam test-type`, or ping or traceroute with the specific test parameters. |
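To illustrate the scheduling distinction, a non-continuous SAA test must be launched on demand, while a directed test needs no SAA configuration at all. A minimal sketch; the test name and target address are placeholders and exact syntax may vary by release:

```
# Launch a non-continuous SAA test on demand ("rtt-test" is a placeholder name)
oam saa "rtt-test" start

# Directed (non-SAA) equivalent, run without any SAA configuration
ping 192.0.2.1 count 10 interval 1
```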
Y.1731 defines two approaches for measuring frame delay and frame delay variation: single-ended and dual-ended. SAA supports the single-ended approach.
SAA test types are restricted to tests that use a request response mechanism; that is, single-ended tests. Dual-ended tests that initiate the test on one node but require the statistical gathering on the other node are not supported under SAA.
Post-processing analysis of individual test runs can be used to determine the success or failure of those runs. The operator can set rising and falling thresholds for delay, jitter, and loss. Exceeding a threshold causes the Last Test Result field to display the Failed keyword, and a trap can be generated when the test fails. The operator can also configure a probe-failure threshold and generate a trap when that threshold is exceeded.
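The threshold and trap behavior described above might be configured along the following lines in the classic CLI. This is an illustrative sketch only; the test name, target address, and threshold values are placeholders, and exact command syntax varies by release:

```
configure saa test "delay-check"
    type icmp-ping 192.0.2.1 interval 1 count 100
    # Raise an event when round-trip delay crosses the rising threshold,
    # clear it again at the falling threshold (values in milliseconds)
    latency-event rising-threshold 100 falling-threshold 50
    # Mark the run Failed when 10 or more probes are lost
    loss-event rising-threshold 10
    trap-gen
        # Trap on overall test failure and on consecutive probe failures
        test-fail-enable
        probe-fail-enable
        probe-fail-threshold 5
    exit
    continuous
    no shutdown
exit
```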
Each supported test type has test-specific configuration properties. Not all options, intervals, and parameters are available for all tests. Some configuration parameters, such as the sub-second probe interval, require specific hardware.
Trace-type tests apply the timeout to each individual packet, which may affect probe spacing: the packet timeout must expire before the test moves from one probe to the next. For tests that do not require this behavior, typically ping and ETH-CFM PM functions, the probes are sent at the specified interval and the timeout is applied only at the end of the test, if any probe was lost during the run. In that case, the test run is considered complete when either all responses have been received or the timeout has expired at the end of the run. For tests marked continuous (always scheduled), the spacing between runs may be delayed by the timeout value when a packet is lost.
To preserve system resources, specifically memory, the operator should store only summarized history results. By default, summary results are stored for tests configured with sub-second probe intervals, with a probe count above 100, or whose results are written to a file. By default, per-probe information is stored for tests configured with probe intervals of one second or longer and probe counts of 100 or less, whose results are not written to a file.

The operator may override these defaults using the probe-history {keep | drop | auto} command options. The auto option applies the preceding defaults. The other options override the default retention scheme based on operator requirements: the keep option retains and stores per-probe information, and the drop option stores summary-only information. The probe data can be viewed using the show saa test command. If per-probe information is retained, this data is available at the completion of the test run; the summary data is updated throughout the test run.

Overall system memory usage is available using the show system memory-pools command. The OAM entry represents the overall OAM memory usage, which includes the history data stored for SAA tests. A clear saa testname option is available to release the memory and flush test results.
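For example, to force summary-only retention on a long-running test and then inspect results and memory usage, a sequence along these lines could be used (the test name is a placeholder):

```
# Store summary-only history for this test, regardless of the defaults
configure saa test "delay-check" probe-history drop

# View the summarized results; updated throughout the test run
show saa "delay-check"

# Check overall memory usage; the OAM entry includes SAA history data
show system memory-pools

# Release memory and flush the stored results for the test
clear saa "delay-check"
```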
SAA-launched tests maintain the results of the two most recently completed test runs and one in-progress run. It is important to ensure that the collection and accounting record process is configured to write the data to file before it is overwritten; after the results are overwritten, they are lost.
Any data not written to file is lost on a CPU switchover.
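A hypothetical sketch of directing SAA results to file via the accounting record process is shown below. The file and policy IDs are placeholders, and the exact record name and syntax depend on the release:

```
configure log
    # Log file destination on compact flash (location is illustrative)
    file-id 10
        location cf1:
    exit
    # Accounting policy that collects SAA records and writes them to the file
    accounting-policy 9
        record saa
        to file 10
        no shutdown
    exit
exit

# Reference the accounting policy from the SAA test so results are written
# to file before they are overwritten
configure saa test "delay-check" accounting-policy 9
```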
The operator can use the following show, clear, and monitor commands to monitor the test OAM toolset:
- The show test-oam oam-config-summary command provides information about the configured tests.
- The show test-oam oam-perf command provides the locally launched test transmit rate and the remotely launched test receive rate on the local network element.
- The clear test-oam oam-perf command clears the test OAM performance statistics to provide a current view of the rates reported by the oam-perf command.
- The monitor test-oam oam-perf command provides time-sliced performance statistics for test OAM functions.
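A typical inspection sequence using these commands might look as follows; the monitor interval and repeat values are illustrative:

```
# Summarize the configured OAM tests
show test-oam oam-config-summary

# View current transmit/receive rates, then reset the counters
show test-oam oam-perf
clear test-oam oam-perf

# Watch time-sliced statistics over repeated intervals
monitor test-oam oam-perf interval 10 repeat 5
```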