This chapter provides an overview of the 7705 SAR Quality of Service (QoS) and information about QoS policy management.
Topics in this chapter include:
This section contains the following overview topics related to QoS:
In order to provide what network engineers call Quality of Service (QoS), the flow of data in the form of packets must be predetermined and resources must be somehow assured for that predetermined flow. Simple routing does not provide a predetermined path for the traffic, and priorities that are described by Class of Service (CoS) coding simply increase the odds of successful transit for one packet over another. There is still no guarantee of service quality. The guarantee of service quality is what distinguishes QoS from CoS. CoS is an element of overall QoS.
By using the traffic management features of the 7705 SAR, network engineers can achieve a QoS for their customers. Multiprotocol Label Switching (MPLS) provides a predetermined path, while policing, shaping, scheduling, and marking features ensure that traffic flows in a predetermined and predictable manner.
When managing traffic flow, there is a need to distinguish between high-priority traffic (that is, mission-critical traffic such as signaling) and best-effort traffic. Within these priority levels, a second level of prioritization is important: between the volume of traffic that is contracted to be transported, and any additional traffic that is transported only if system resources allow. Throughout this guide, contracted traffic is referred to as in-profile traffic. Traffic that exceeds the user-configured traffic limits is either serviced at a lower priority or discarded in an appropriate manner to ensure that the overall quality of service is achieved.
The 7705 SAR must be properly configured to provide QoS. To ensure end-to-end QoS, each and every intermediate node together with the egress node must be coherently configured. Proper QoS configuration requires careful end-to-end planning, allocation of appropriate resources and coherent configuration among all the nodes along the path of a given service. Once properly configured, each service provided by the 7705 SAR will be contained within QoS boundaries associated with that service and the general QoS parameters assigned to network links.
The 7705 SAR is designed with QoS mechanisms at both egress and ingress to support different customers and different services per physical interface or card, concurrently and harmoniously (refer to Egress and Ingress Traffic Direction for a definition of egress and ingress traffic). The 7705 SAR has extensive and flexible capabilities to classify, police, shape and mark traffic to make this happen.
Note: The characteristics and nature of traffic flows in the ingress and egress directions are usually quite different. For example, traffic is usually shaped at egress for pacing purposes and to meet the jitter tolerance imposed by the network transport rules, whereas at ingress, traffic is usually policed to ensure that it fits within the traffic volumes defined in the Service Level Agreement. Segregating ingress and egress therefore not only offers the flexibility to address different requirements in each direction, but also allows the appropriate parameters to be fine-tuned in each direction.
The 7705 SAR supports multiple forwarding classes (FCs) and associated class-based queuing. Ingress traffic can be classified to multiple FCs, and the FCs can be flexibly associated with queues. This provides the ability to control the priority and drop priority of a packet while allowing the fine-tuning of bandwidth allocation to individual flows.
Each forwarding class is important only in relation to the other forwarding classes. A forwarding class allows network elements to weigh the relative importance of one packet over another. With such flexible queuing, packets belonging to a specific flow within a service can be preferentially forwarded based on the CoS of a queue. The forwarding decision is based on the forwarding class of the packet, as assigned by the ingress QoS policy defined for the service access point (SAP).
7705 SAR routers use QoS policies to control how QoS is handled at distinct points in the service delivery model within the device. QoS policies act like a template. Once a policy is created, it can be applied to many other similar services and ports. As an example, if there is a group of Node Bs connected to a 7705 SAR node, one QoS policy can be applied to all services of the same type, such as High-Speed Downlink Packet Access (HSDPA) offload services.
There are different types of QoS policies that cater to the different QoS needs at each point in the service delivery model. QoS policies are defined in a global context in the 7705 SAR and only take effect when the policy is applied to a relevant entity.
QoS policies are uniquely identified with a policy ID number or a policy ID name. Policy ID 1 and policy ID “default” are reserved for the default policy, which is used if no policy is explicitly applied.
The different QoS policies within the 7705 SAR can be divided into two main types.
The sections that follow provide an overview of the QoS traffic management performed on the 7705 SAR.
Throughout this document, the terms ingress and egress, when describing traffic direction, are always defined relative to the fabric. For example:
When combined with the terms access and network, which are port and interface modes, the four traffic directions relative to the fabric are (see Figure 1):
Note: Throughout this guide, the terms access ingress/egress and service ingress/egress are interchangeable. This section (QoS Overview) uses the term access, and the following sections (beginning with QoS Policies Overview) use the term service.
On the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module, traffic can flow between the Layer 2 bridging domain and the Layer 3 IP domain (see Figure 2). In the bridging domain, ring traffic flows from one ring port to another, as well as to and from the add/drop port. From the network point of view, traffic from the ring towards the add/drop port and the v-port is considered ingress traffic (drop traffic). Similarly, traffic from the fabric towards the v-port and the add/drop port is considered egress traffic (add traffic).
The 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module function as an add/drop card to a network side 10 Gb/s optical ring. Conceptually, the card and module should be envisioned as having two domains—a Layer 2 bridging domain where the add/drop function operates, and a Layer 3 IP domain where the normal IP processing and IP nodal traffic flows are managed. Ingress and egress traffic flow remains in the context of the nodal fabric. The ring ports are considered to be east-facing and west-facing and are referenced as Port 1 and Port 2. A virtual port (or v-port) provides the interface to the IP domain within the structure of the card or module.
Queues can be created for each forwarding class to determine the manner in which the queue output is scheduled and the type of parameters the queue accepts. The 7705 SAR supports eight forwarding classes per SAP. Table 2 shows the default mapping of these forwarding classes in order of priority, with Network Control having the highest priority.
FC Name | FC Designation | Queue Type | Typical Use
Network Control | NC | Expedited | For network control and traffic synchronization
High-1 | H1 | Expedited | For delay/jitter-sensitive traffic
Expedited | EF | Expedited | For delay/jitter-sensitive traffic
High-2 | H2 | Expedited | For delay/jitter-sensitive traffic
Low-1 | L1 | Best Effort | For best-effort traffic
Assured | AF | Best Effort | For best-effort traffic
Low-2 | L2 | Best Effort | For best-effort traffic
Best Effort | BE | Best Effort | For best-effort traffic
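The default priority order in Table 2, together with the rule (described in the text that follows) that a queue serving multiple forwarding classes takes the queue type of its lowest-priority class, can be sketched as follows. This is an illustrative model only, not the actual scheduler implementation; the names are taken from Table 2.

```python
# Default forwarding classes in priority order (highest first), per Table 2.
FC_PRIORITY = ["NC", "H1", "EF", "H2", "L1", "AF", "L2", "BE"]

# Default queue type per FC: the top four are expedited, the rest best-effort.
QUEUE_TYPE = {fc: ("expedited" if i < 4 else "best-effort")
              for i, fc in enumerate(FC_PRIORITY)}

def queue_type_for(mapped_fcs):
    """When several FCs share one queue, the queue type defaults to that of
    the lowest-priority FC mapped to it (illustrative model)."""
    lowest = max(mapped_fcs, key=FC_PRIORITY.index)  # highest index = lowest priority
    return QUEUE_TYPE[lowest]
```

For example, mapping NC and AF to one queue yields a best-effort queue type, because AF is the lower-priority class of the two.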
The traffic flows of different forwarding classes are mapped to the queues. This mapping is user-configurable. Each queue has a unique priority, and packets from high-priority queues are scheduled before packets from low-priority queues. More than one forwarding class can be mapped to a single queue; in that case, the queue type defaults to that of the lowest-priority forwarding class (see Queue Type for more information on queue type). By default, the following logical order is followed:
At access ingress, traffic can be classified as unicast traffic or one of the multipoint traffic types (broadcast, multicast, or unknown (BMU)). After classification, traffic can be assigned to a queue that is configured to support one of the four traffic types, namely:
The scheduler modes available on adapter cards are profiled (2-priority), 4-priority, and 16-priority. Which modes are supported on a particular adapter card depends on whether the adapter card is a first-, second-, or third-generation card.
Note: Throughout the 7705 SAR documentation set, first-, second-, and third-generation Ethernet adapter cards and Ethernet ports on fixed platforms are also referred to as Gen-1, Gen-2, and Gen-3 hardware.
On Gen-3 hardware, 4-priority scheduling mode is the implicit default scheduling mode and is not user-configurable. Gen-3 platforms with a TDM block support 4-priority scheduling mode. Gen-2 adapter cards support 16-priority and 4-priority scheduling modes. Gen-1 adapter cards support profiled (2-priority) and 4-priority scheduling modes.
For details on differences between Gen-1, Gen-2, and Gen-3 hardware related to scheduling mode QoS behavior, see QoS for Gen-3 Adapter Cards and Platforms.
For information on scheduling modes as they apply to traffic direction, refer to the following sections:
Most 7705 SAR systems are susceptible to network processor congestion if the rate of small packets received on a node or card exceeds its processing capacity. If a node or card receives a high rate of small-packet traffic, it enters overload mode. Prior to the introduction of intelligent discards, when a node or card entered an overload state, the network processor would drop packets at random.
The “intelligent discards during overload” feature allows the network processor to discard packets according to a preset priority order. In the egress direction, intelligent discards is applied to traffic entering the card from the fabric.
Traffic is discarded in the following order: low-priority out-of-profile user traffic is discarded first, followed by high-priority out-of-profile user traffic, then low-priority in-profile user traffic, then high-priority in-profile user traffic, and lastly control plane traffic. In the ingress direction, intelligent discards is applied to traffic entering the card from the physical ports, and traffic is discarded in the following order: low-priority user traffic first, followed by high-priority user traffic. In both directions, low-priority user traffic is always the most susceptible to discards.
In the egress direction, the system differentiates between high-priority and low-priority user traffic based on the internal forwarding class and queue-type fabric header markers. In the ingress direction, the system differentiates between high-priority and low-priority user traffic based on packet header bits. Table 3 details the classification of user traffic in the ingress direction.
Packet Header Marker | High-priority Values | Low-priority Values
MPLS TC | 7 to 4 | 3 to 0 |
IP DSCP | 63 to 32 | 31 to 0 |
Eth Dot1p | 7 to 4 | 3 to 0 |
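Under the thresholds in Table 3, the ingress high/low classification can be sketched as a simple threshold check. The helper function below is hypothetical, for illustration only; it is not a CLI command or product API.

```python
def ingress_priority(marker, value):
    """Classify user traffic as high- or low-priority from packet header
    bits, per the thresholds in Table 3 (illustrative sketch)."""
    thresholds = {
        "mpls-tc": 4,   # MPLS TC 7 to 4 is high, 3 to 0 is low
        "ip-dscp": 32,  # IP DSCP 63 to 32 is high, 31 to 0 is low
        "dot1p": 4,     # Ethernet dot1p 7 to 4 is high, 3 to 0 is low
    }
    return "high" if value >= thresholds[marker] else "low"
```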
Intelligent discards during overload ensures priority-based handling of traffic and helps existing traffic management implementations. It does not change how QoS-based classification, buffer management, or scheduling operates on the 7705 SAR. If the node or card is not in overload operation mode, there is no change to the way packets are handled by the network processor.
There are no commands to configure intelligent discards during overload; the feature is automatically enabled on the following cards, modules, and ports:
Buffer space is allocated to queues based on the committed buffer space (CBS), the maximum buffer space (MBS), the availability of resources, and the total amount of buffer space. The CBS and the MBS define the queue depth for a particular queue. The MBS represents the maximum buffer space that can be allocated to a particular queue; whether that much space can actually be allocated depends on buffer usage (that is, on the number of other queues and their sizes).
Memory allocation is optimized to guarantee the CBS for each queue. The allocated queue space beyond the CBS is limited by the MBS and depends on the use of buffer space and the guarantees accorded to queues as configured in the CBS.
This section contains information on the following topics:
The 7705 SAR supports two types of buffer pools that allocate memory as follows:
Both buffer pools can be displayed in the CLI using the show pools command.
On the access side, CBS is configured in bytes and MBS in bytes or kilobytes using the CLI. See, for example, the config>qos>sap-ingress/egress>queue>cbs and mbs configuration commands.
On the network side, CBS and MBS values are expressed as a percentage of the total number of available buffers. If the buffer space is further segregated into pools (for example, ingress and egress, access and network, or a combination of these), the CBS and MBS values are expressed as a percentage of the applicable buffer pool. See the config>qos>network-queue>queue>cbs and mbs configuration commands.
The configured CBS and MBS values are converted to the number of buffers by dividing the CBS or MBS value by a fixed buffer size of 512 bytes or 2304 bytes, depending on the type of adapter card or platform. The number of buffers can be displayed for an adapter card using the show pools command.
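As a sketch, the byte-to-buffer conversion might look like the following. The rounding behavior (shown here as rounding up) is an assumption for illustration; this guide does not specify it.

```python
import math

def buffers_for(configured_bytes, buffer_size):
    """Convert a configured CBS or MBS byte value into a number of
    fixed-size buffers. buffer_size is 512 or 2304 bytes, depending on
    the adapter card or platform. Rounding up is an assumption."""
    return math.ceil(configured_bytes / buffer_size)
```

For example, a 10240-byte CBS on a card with 512-byte buffers corresponds to 20 buffers.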
When a packet is being multicast to two or more interfaces on the egress adapter card or block of fixed ports, or when a packet at port ingress is mirrored, one extra buffer per packet is used.
In previous releases, this extra buffer was not added to the queue count. When checking CBS during multicast traffic enqueuing, the CBS was divided by two to prevent buffer overconsumption by the extra buffers. As a result, during multicast traffic enqueuing, the CBS buffer limit for the queue was considered reached when half of the available buffers were in use.
In Release 8.0 of the 7705 SAR, the CBS is no longer divided by two. Instead, the extra buffers are added to the queue count when enqueuing, and are removed from the queue count when the multicast traffic exits the queue. The full CBS value is used, and the extra buffer allocation is visible in buffer allocation displays.
When upgrading to Release 8.0 of the 7705 SAR, ensure that previous CBS and MBS configurations take extra buffers for multicast packets into account, and that maximum delay budgets are not affected adversely.
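The accounting change described above can be sketched as follows: before Release 8.0, the CBS check was effectively halved for multicast traffic, while Release 8.0 counts the extra buffer explicitly against the full CBS. This is a hypothetical model for illustration, not the actual buffer manager logic.

```python
def cbs_reached_pre_8_0(queue_count_buffers, cbs_buffers):
    # Pre-8.0: the extra multicast buffer was not added to the queue count;
    # the CBS was divided by two during multicast enqueuing to compensate.
    return queue_count_buffers >= cbs_buffers // 2

def cbs_reached_8_0(queue_count_buffers, extra_buffers, cbs_buffers):
    # Release 8.0: the extra buffer per multicast packet is added to the
    # queue count, and the full CBS value is used.
    return queue_count_buffers + extra_buffers >= cbs_buffers
```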
Packetization buffers and queues are supported in the packet memory of each adapter card or platform. All adapter cards and platforms allocate a fixed space for each buffer. The 7705 SAR supports two buffer sizes: 512 bytes or 2304 bytes, depending on the type of adapter card or platform.
Adapter cards and platforms that use a 2304-byte buffer size do not support buffer chaining (see the description below) and allow only a one-to-one correspondence of packets to buffers.
Adapter cards and platforms that use a 512-byte buffer size use a method called buffer chaining to process packets that are larger than 512 bytes. To accommodate such packets, these adapter cards or platforms divide the packet dynamically into a series of concatenated 512-byte buffers. An internal 64-byte header is prepended to the packet, so only 448 bytes of buffer space is available for customer traffic in the first buffer. The remaining customer traffic is split among the concatenated 512-byte buffers.
Note: The 8-port Ethernet Adapter card (version 2) uses an additional 64 bytes at the end of the first buffer for packets larger than 452 bytes (448 bytes plus 4 bytes for FCS); for example, when 449 bytes need to be buffered, the available space in the first buffer is 384 bytes, whereas all other adapter cards that support buffer chaining will have 448 bytes of buffer space available in the first buffer.
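The chaining arithmetic can be sketched as follows (this sketch deliberately ignores the 8-port Ethernet Adapter card version 2 exception in the note): the first 512-byte buffer carries 448 bytes of customer data after the 64-byte internal header, and the remainder fills additional 512-byte buffers.

```python
import math

def chained_buffers(packet_bytes):
    """Number of 512-byte buffers needed to hold one packet on a card that
    supports buffer chaining (illustrative sketch)."""
    FIRST_BUFFER_PAYLOAD = 512 - 64  # a 64-byte internal header is prepended
    if packet_bytes <= FIRST_BUFFER_PAYLOAD:
        return 1
    remainder = packet_bytes - FIRST_BUFFER_PAYLOAD
    return 1 + math.ceil(remainder / 512)
```

For example, a 64-byte packet needs one buffer (512 bytes allocated) and a 1280-byte packet needs three buffers (1536 bytes allocated), matching the efficiency comparison given later in this section.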
Table 4 shows the supported buffer sizes on the 7705 SAR adapter cards and platforms. If a version number or variant is not specified, the entry applies to all versions of the adapter card or all variants of the platform. Adapter cards and platforms that support buffer chaining have a 512-byte buffer size (“Yes”); those that do not support buffer chaining have a 2304-byte buffer size (“No”).
Adapter Card or Platform | Buffer Space per Card/Platform (Mbytes) | Buffer Chaining Support |
2-port 10GigE (Ethernet) Adapter card | 268; 201 (for L2 bridging domain) | Yes; Yes (each L2 bridging domain buffer unit is 768 bytes)
2-port 10GigE (Ethernet) module | 201 (for L2 bridging domain) | Yes (each buffer unit is 768 bytes) |
2-port OC3/STM1 Channelized Adapter card | 310 | No |
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | 217 | Yes |
4-port OC3/STM1 Clear Channel Adapter card | 352 | No |
4-port DS3/E3 Adapter card | 280 | No |
6-port E&M Adapter card | 38 | No |
6-port FXS Adapter card | 38 | No |
6-port Ethernet 10Gbps Adapter card | 1177 | Yes |
8-port Ethernet Adapter card | 156 | Yes |
8-port FXO Adapter card | 38 | No |
8-port Gigabit Ethernet Adapter card | 268 | Yes |
8-port Voice & Teleprotection card | 38 | No |
10-port 1GigE/1-port 10GigE X-Adapter card | 537 | Yes |
12-port Serial Data Interface card, version 1 | 38 | No |
12-port Serial Data Interface card, version 2 | 268 | Yes |
16-port T1/E1 ASAP Adapter card | 38 | No |
32-port T1/E1 ASAP Adapter card | 57 | No |
Integrated Services card | 268 | Yes |
Packet Microwave Adapter card | 268 | Yes |
7705 SAR-A | 268 | Yes |
7705 SAR-Ax | 268 | Yes |
7705 SAR-H | 268 | Yes |
7705 SAR-Hc | 268 | Yes |
7705 SAR-M | 268 | Yes |
7705 SAR-W | 268 | Yes |
7705 SAR-Wx | 268 | Yes |
7705 SAR-X (Ethernet ports) 1 | 1177 | Yes |
7705 SAR-X (TDM ports) 1 | 46 | Yes |
Note:
Buffer chaining offers improved efficiency, which is especially evident when smaller packet sizes are transmitted. For example, to queue a 64-byte packet, a card with a fixed buffer of 2304 bytes allocates 2304 bytes, whereas a card with a fixed buffer of 512 bytes allocates only 512 bytes. To queue a 1280-byte packet, a card with a fixed buffer of 2304 bytes allocates 2304 bytes, whereas a card with a fixed buffer of 512 bytes allocates only 1536 bytes (that is, 512 bytes × 3 buffers).
This section contains overview information as well as information on the following topics:
This section provides information on per-SAP aggregate shapers for Gen-2 adapter cards and platforms. For information on Gen-3 adapter cards and platforms, see QoS for Gen-3 Adapter Cards and Platforms.
Hierarchical QoS (H-QoS) provides the 7705 SAR with the ability to shape traffic on a per-SAP basis for traffic from up to eight CoS queues associated with that SAP.
On Gen-2 hardware, the per-SAP aggregate shapers apply to access ingress and access egress traffic and operate in addition to the 16-priority scheduler, which must be used for per-SAP aggregate shaping.
The 16-priority scheduler acts as a soft policer, servicing the SAP queues in strict priority order, with conforming traffic (less than CIR) serviced prior to non-conforming traffic (between CIR and PIR). The 16-priority scheduler on its own cannot enforce a traffic limit on a per-SAP basis; to do this, per-SAP aggregate shapers are required (see H-QoS Example).
The per-SAP shapers are considered aggregate shapers because they shape traffic from the aggregate of one or more CoS queues assigned to the SAP.
Figure 3 and Figure 4 illustrate per-SAP aggregate shapers for access ingress and access egress, respectively. They indicate how shaped and unshaped SAPs are treated.
H-QoS is not supported on the 4-port SAR-H Fast Ethernet module.
Shaped SAPs have user-configured rate limits (PIR and CIR)—called the aggregate rate limit—and must use 16-priority scheduling mode. Unshaped SAPs use default rate limits (PIR is maximum and CIR is 0 kb/s) and can use 4-priority or 16-priority scheduling mode.
Shaped 16-priority SAPs are configured with a PIR and a CIR using the agg-rate-limit command in the config>service>service-type service-id>sap context, where service-type is epipe, ipipe, ies, vprn, or vpls (including routed VPLS). The PIR is set using the agg-rate variable and the CIR is set using the cir-rate variable.
Unshaped 4-priority SAPs are considered unshaped by definition of the default PIR and CIR values (PIR is maximum and CIR is 0 kb/s). Therefore, they do not require any configuration other than to be set to 4-priority scheduling mode.
Unshaped 16-priority SAPs are created when 16-priority scheduling mode is selected and the default PIR (maximum) and CIR (0 kb/s) are retained; these are the same default settings as for a 4-priority SAP. The main reason to prefer unshaped SAPs using 16-priority scheduling over unshaped SAPs using 4-priority scheduling is to have coherent scheduler behavior (one scheduling model) across all SAPs.
In order for unshaped 4-priority SAPs to compete fairly for bandwidth with 16-priority shaped and unshaped SAPs, a single, aggregate CIR for all the 4-priority SAPs can be configured. This aggregate CIR is applied to all the 4-priority SAPs as a group, not to individual SAPs. In addition, the aggregate CIR is configured differently for access ingress and access egress traffic. On the 7705 SAR-8 and 7705 SAR-18, access ingress is configured in the config>qos>fabric-profile context. On the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, access ingress is configured in the config>system>qos>access-ingress-aggregate-rate context. For all platforms, access egress configuration uses the config>port>ethernet context.
For more information about access ingress scheduling and traffic arbitration from the 16-priority and 4-priority schedulers toward the fabric, see Access Ingress Per-SAP Aggregate Shapers (Access Ingress H-QoS).
The per-SAP aggregate shapers are supported in both access ingress and access egress directions and can be enabled on the following Ethernet access ports:
A typical example in which H-QoS is used is where a transport provider uses a 7705 SAR as a PE device and sells 100 Mb/s of fixed bandwidth for point-to-point Internet access, and offers premium treatment to 10% of the traffic. A customer can mark up to 10% of their critical traffic such that it is classified into high-priority queues and serviced prior to low-priority traffic.
Without H-QoS, there is no way to enforce a limit to ensure that the customer does not exceed the leased 100 Mb/s bandwidth, as illustrated in the following two scenarios.
The second-tier shaper (that is, the per-SAP aggregate shaper) is used to limit the traffic to a configured rate on a per-SAP basis. The per-queue rates and behavior are not affected when the aggregate shaper is enabled; as long as the aggregate rate is not reached, there is no change in behavior. If the aggregate rate limit is reached, the per-SAP aggregate shaper throttles the traffic at the configured aggregate rate while preserving the 16-priority scheduling priorities used on shaped SAPs.
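A highly simplified model of this behavior: serve the SAP's queues in strict priority order, conforming (within-CIR) traffic before non-conforming traffic, but stop once the configured aggregate rate is consumed. This is an illustrative sketch, not the scheduler implementation; queue names and rates (in Mb/s) are hypothetical.

```python
def hqos_serve(queues, agg_rate):
    """queues: list of (name, cir, offered) tuples in strict priority order.
    Returns the rate granted per queue: CIR-conforming traffic is served
    first, then remaining (up-to-PIR) traffic, all capped by the per-SAP
    aggregate rate (illustrative sketch, rates in Mb/s)."""
    granted = {name: 0 for name, _, _ in queues}
    budget = agg_rate
    # First pass: conforming (within-CIR) traffic, in priority order.
    for name, cir, offered in queues:
        take = min(cir, offered, budget)
        granted[name] += take
        budget -= take
    # Second pass: non-conforming traffic, in priority order.
    for name, cir, offered in queues:
        take = min(max(offered - cir, 0), offered - granted[name], budget)
        granted[name] += take
        budget -= take
    return granted
```

For example, with two queues each offering 60 Mb/s (CIR 10 Mb/s each) under a 100 Mb/s aggregate rate, the high-priority queue is served in full (60 Mb/s) and the low-priority queue is throttled to the remaining 40 Mb/s.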
This section provides information on per-VLAN network egress shapers for Gen-1 and Gen-2 adapter cards and platforms. For information on Gen-3 adapter cards and platforms, see QoS for Gen-3 Adapter Cards and Platforms.
The 7705 SAR supports a set of eight network egress queues on a per-port or on a per-VLAN basis for network Ethernet ports. Eight unique per-VLAN CoS queues are created for each VLAN when a per-VLAN shaper is enabled. When using per-VLAN shaper mode, in addition to the per-VLAN eight CoS queues, there is a single set of eight queues for hosting traffic from all unshaped VLANs, if any. VLAN shapers are enabled on a per-interface basis (that is, per VLAN) when a network queue policy is assigned to the interface. See Per-VLAN Shaper Support for a list of cards and nodes that support per-VLAN shapers.
On a network port with dot1q encapsulation, shaped and unshaped VLANs can coexist. In such a scenario, each shaped VLAN has its own set of eight CoS queues and is shaped with its own configured dual-rate shaper. The remaining VLANs (that is, the unshaped VLANs) are serviced using the unshaped-if-cir rate, which is configured using the config>port>ethernet>network>egress>unshaped-if-cir command. Assigning a rate to the unshaped VLANs is required for arbitration between the shaped VLANs and the bulk (aggregate) of unshaped VLANs, where each shaped VLAN has its own shaping rate while the aggregate of the unshaped VLANs has a single rate assigned to it.
Per-VLAN shapers are supported on dot1q-encapsulated ports. They are not supported on null- or qinq-encapsulated ports.
Figure 5 illustrates the queuing and scheduling blocks for network egress VLAN traffic.
Note: Due to space limitations in Figure 5, the second-tier, per-VLAN aggregate shapers are represented as a single loop containing the label “per VLAN”, even though they are dual-rate shapers similar to the third-tier network aggregate shaper.
Shaped VLANs have user-configured rate limits (PIR and CIR)—called the aggregate rate limits—and must use 16-priority scheduling mode. Shaped VLANs operate on a per-interface basis and are enabled after a network queue policy is assigned to the interface. If a VLAN does not have a network queue policy assigned to the interface, it is considered an unshaped VLAN.
Note: On the 8-port Ethernet Adapter card (version 2), profiled scheduling must be used for VLAN shaping. For these cases, fewer shaped VLANs are supported.
To configure a shaped VLAN with aggregate rate limits, use the agg-rate-limit command in the config>router>if>egress context. If the VLAN shaper is not enabled, then the agg-rate-limit settings do not apply. The default aggregate rate limit (PIR) is set to the port egress rate.
Unshaped VLANs use default rate limits (PIR is the maximum possible port rate and CIR is 0 kb/s) and use 16-priority scheduling mode. All unshaped VLANs are classified, queued, buffered, and scheduled into an aggregate flow that gets prepared for third-tier arbitration by a single VLAN aggregate shaper.
In order for the aggregated unshaped VLANs to compete fairly for bandwidth with the shaped VLANs, a single, aggregate CIR for all the unshaped VLANs can be configured using the unshaped-if-cir command. The aggregate CIR is applied to all the unshaped VLANs as a group, not to individual VLANs, and is configured in the config>port>ethernet> network>egress context.
The following cards and nodes support network egress per-VLAN shapers:
Note:
This section describes the following two scenarios:
One of the main uses of per-VLAN network egress shapers is to enable load balancing across dual uplinks out of a spoke site. Figure 6 represents a typical hub-and-spoke mobile backhaul topology. To ensure high availability through the use of redundancy, a mobile operator invests in dual 7750 SR nodes at the MTSO. Dual 7750 SR nodes at the MTSO offer equipment protection, as well as protection against backhaul link failures.
In this example, the cell site 7705 SAR is dual-homed to 7750 SR_1 and SR_2 at the MTSO, using two disjoint Ethernet virtual connections (EVCs) leased from a transport provider. Typically, the EVCs have the same capacity and operate in a forwarding/standby manner: at any given time, one of the EVCs transports all the user/mobile traffic to and from the cell site, while the other EVC transports only minor volumes of control plane traffic between network elements (the 7705 SAR and the 7750 SR). Leasing two EVCs with the same capacity and actively using only one of them wastes bandwidth and is expensive (the mobile operator pays for two EVCs with the same capacity).
Mobile operators with increasing volumes of mobile traffic look for ways to utilize both of the EVCs simultaneously, in an active/active manner. In this case, using per-VLAN shapers would ensure that each EVC is loaded up to the leased capacity. Without per-VLAN shapers, the 7705 SAR supports a single per-port shaper, which does not meet the active/active requirement.
Another typical use of per-VLAN shapers at network egress is shown in Figure 7. The figure shows a hub-and-spoke mobile backhaul network where EVCs leased from a transport provider are groomed to a single port, typically a 10-Gigabit Ethernet or a 1-Gigabit Ethernet port, at the hand-off point at the hub site. Traffic from different cell sites is handed off to the aggregation node over a single port, where each cell site is uniquely identified by the VLAN assigned to it.
In the network egress direction of the aggregation node, per-VLAN shaping is required to ensure traffic to different cell sites is shaped at the EVC rate. The EVC for each cell site would typically have a different rate. Therefore, every VLAN feeding into a particular EVC needs to be shaped at its own rate. For example, compare a relatively small cell site (Cell Site-1) at 20 Mb/s rate with a relatively large cell site (Cell Site-2) at 200 Mb/s rate. Without the granularity of per-VLAN shaping, shaping only at the per-port level cannot ensure that an individual EVC does not exceed its capacity.
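In the simplest terms, per-VLAN shaping at the hand-off port caps each VLAN at its own EVC rate, which a single per-port shaper cannot do. A minimal sketch (rates in Mb/s; the cell-site names are hypothetical):

```python
def shape_per_vlan(offered, evc_rate):
    """Cap each VLAN's traffic at its own EVC rate (illustrative sketch).
    offered and evc_rate are dicts mapping VLAN -> Mb/s."""
    return {vlan: min(load, evc_rate[vlan]) for vlan, load in offered.items()}
```

For example, with a 20 Mb/s EVC to a small cell site and a 200 Mb/s EVC to a large one, each VLAN is throttled independently at its own EVC rate regardless of the total port rate.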
This section provides information on per-customer aggregate shapers for Gen-2 adapter cards and platforms. For information on Gen-3 adapter cards and platforms, see QoS for Gen-3 Adapter Cards and Platforms.
A per-customer aggregate shaper is an aggregate shaper into which multiple SAP aggregate shapers can feed. The SAPs can be shaped at a desired rate called the Multiservice Site (MSS) aggregate rate. At ingress, SAPs that are bound to a per-customer aggregate shaper can span a whole Ethernet MDA, meaning that SAPs mapped to the same MSS can reside on any port of a given Ethernet MDA. At egress, SAPs that are bound to a per-customer aggregate shaper can span only a single port. Towards the fabric at ingress and towards the port at egress, multiple per-customer aggregate shapers are shaped at their respective configured rates to ensure fair sharing of available bandwidth among different per-customer aggregate shapers. Deep ingress queuing capability ensures that traffic bursts are absorbed rather than dropped. Multi-tier shapers are based on an end-to-end backpressure mechanism that uses the following order (egress is given as an example):
To configure per-customer aggregate shaping, a shaper policy must be created and shaper groups must be created within that shaper policy. For access ingress per-customer aggregate shaping, a shaper policy must be assigned to an Ethernet MDA and SAPs on that Ethernet MDA must be bound to a shaper group within the shaper policy bound to that Ethernet MDA. For access egress per-customer aggregate shaping, a shaper policy must be assigned to a port and SAPs on that port must be bound to a shaper group within the shaper policy bound to that port. The unshaped SAP shaper group within the policy provides the shaper rate for all the unshaped SAPs (4-priority scheduled SAPs). For each shaped SAP, however, an ingress or egress shaper group can be specified. For more information on shaper policies, see Applying a Shaper QoS Policy and Shaper Groups.
The access ingress shaper policy is configured at the MDA level for fixed platforms. The default value for an access ingress shaper policy for each MDA and module is blank, as configured using the no shaper-policy command. On all 7705 SAR fixed platforms (with the exception of the 7705 SAR-X), when no MSS is configured, the existing access ingress aggregate rate is used as the shaper rate for the bulk of access ingress traffic. In order to use MSS, a shaper policy must be assigned to the access ingress interface of one MDA, and the shaper policy change is cascaded to all MDAs and modules in the chassis.
Before the access ingress shaper policy is assigned, the config system qos access-ingress-aggregate-rate 10000000 unshaped-sap-cir max command must be configured. Once a shaper policy is assigned to an access ingress MDA, the values configured using the access-ingress-aggregate-rate command cannot be changed.
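The required ordering can be sketched as follows. The aggregate rate values are the ones shown above; the policy name is a placeholder and the exact MDA-level context is an assumption.

```
# 1. configure the chassis-level aggregate rate first
configure system qos access-ingress-aggregate-rate 10000000 unshaped-sap-cir max

# 2. then assign the shaper policy to one Ethernet MDA; the change is
#    cascaded to all MDAs and modules in the chassis
configure card 1 mda 1 access ingress shaper-policy "mss-policy"
```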
On all 7705 SAR fixed platforms (with the exception of the 7705 SAR-X), when a shaper policy is assigned to an Ethernet MDA for access ingress aggregate shaping, it is automatically assigned to all the Ethernet MDAs in that chassis. The shaper group members contained in the shaper policy span all the Ethernet MDAs. SAPs on different Ethernet MDAs configured with the same ingress shaper group will share the shaper group rate.
Once the first MSS is configured, traffic from the designated SAPs is mapped to the MSS and shaped at the configured rate. The remainder of the traffic is then shaped according to the configured unshaped SAP rate. When a second MSS is added, SAPs that are mapped to the second MSS are shaped at their configured rate, and traffic is arbitrated among the first MSS, the second MSS, and the unshaped SAP traffic.
In the access egress direction, the default shaper policy is assigned to each MSS-capable port. Ports that cannot support MSS are assigned a blank value, as configured using the no shaper-policy command. The default egress shaper group is assigned to each egress SAP that supports MSS. If the SAP does not support MSS, the egress SAP is assigned a blank value, as configured using the no shaper-group command.
The following cards, modules, and platforms support MSS:
![]() | Note:
|
A SAP that uses a LAG can include two or more ports from the same adapter card or two different adapter cards.
In the access egress direction, each port can be assigned an access shaper policy and can have shaper groups configured with different shaping rates. If a shaper group is not defined, the default shaper group is used. When configured on a LAG, the port egress shaper policy must be configured on the primary LAG member. The shaper policy is then propagated to each LAG port member, ensuring that all LAG port members have the same shaper policy.
The following egress MSS restrictions ensure that both active and standby LAG members have the same configuration.
In the ingress direction, there can be two different shaper policies on two different adapter cards for the two port members in a LAG. When assigning a shaper group to an ingress LAG SAP, each shaper policy assigned to the LAG port MDAs must contain that shaper group, or the shaper group cannot be assigned. In addition, after a LAG activity switch occurs, the CIR/PIR configuration of the corresponding subgroup in the policy on the adapter card of the newly active member is used.
The following ingress MSS restrictions allow the configuration of shaper groups for LAG SAPs, but the router ignores shaper groups that do not meet the restrictions.
This section provides information on QoS for hybrid ports on Gen-2 adapter cards and platforms. For information on Gen-3 adapter cards and platforms, see QoS for Gen-3 Adapter Cards and Platforms.
In the ingress direction of a hybrid port, traffic management behavior is the same as it is for access and network ports. See Access Ingress and Network Ingress.
In the egress direction of a hybrid port, access and network aggregate shapers are used to arbitrate between the bulk (aggregate) of access and network traffic flows. As shown in Figure 8, on the access side (above the solid line), both the access egress SAP aggregates (#1) and the unshaped SAP shaper (#2) feed into the access egress aggregate shaper (#3). On the network side (below the solid line), both the per-VLAN shapers (#4) and the unshaped interface shaper (#5) feed into the network egress aggregate shaper (#6). Then, the access and the network aggregate shapers are arbitrated in a dual-rate manner, in accordance with their respective configured committed and peak rates (#7). As a last step, the egress-rate for the port (when configured) applies backpressure to both the access and the network aggregate shapers, which apply backpressure all the way to the FC queues belonging to both access and network traffic.
![]() | Note: Due to space limitations in Figure 8, the second-tier, per-SAP and per-VLAN aggregate shapers are represented as a single loop containing the label “per SAP” or “per VLAN”, even though they are dual-rate shapers similar to the third-tier network aggregate shaper. Tiers are labeled at the top of the figure. |
As part of the hybrid port traffic management solution, access and network second-tier shapers are bound to access and network aggregate shapers, respectively. The hybrid port egress datapath can be visualized as access and network datapaths that coexist separately up until the access and network aggregate shapers at Tier 3 (#3 and #6).
In Figure 8, the top half is identical to access egress traffic management, where CoS queues (Tier 1) feed into either per-SAP shapers for shaped SAPs (#1) or a single second-tier shaper for all unshaped SAPs (#2). Up to the end of the second-tier, per-SAP aggregate shapers, the access egress datapath is maintained in the same manner as an Ethernet port in access mode. The same logic applies for network egress. The bottom half of the figure shows the datapath from the CoS queues to the per-VLAN shapers, which is identical to the datapath for any other Ethernet port in network mode.
The main difference between hybrid mode and access and network modes is shown when the access and the network traffic is arbitrated towards the port (Tier 3). At this point, a new set of dual-rate shapers (called shaper groups) is introduced: one shaper for the aggregate (bulk) of the access traffic (#3) and another for the aggregate of the network traffic (#6). This ensures rate-based arbitration between access and network traffic.
Depending on the use and the application, the committed rate for any one mode of flow might need to be fine-tuned to minimize delay, jitter, and loss. In addition, through the use of egress-rate limiting, a fourth level of shaping can be achieved.
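For example, this fourth level can be sketched as follows; the port ID and rate are placeholders, and the rate units (kb/s) are an assumption.

```
# limit the hybrid port to a sub-line rate; this backpressures both the
# access and the network aggregate shapers
configure port 1/1/1 ethernet egress-rate 500000
```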
When egress-rate is configured (under config>port>ethernet), the following events occur:
Third-generation (Gen-3) Ethernet adapter cards and Ethernet ports on Gen-3 platforms support 4-priority scheduling.
The main differences between Gen-3 hardware and Gen-1 and Gen-2 hardware are that on Gen-3 hardware:
See Scheduling Modes for a summary of scheduler mode support. For information on adapter card generations, refer to the “Evolution of Ethernet Adapter Cards, Modules, and Platforms” section in the 7705 SAR Interface Configuration Guide.
![]() | Note:
|
![]() | Caution: Any Gen-3 adapter card or platform running Release 7.0.R6 or later software uses 4-priority scheduling instead of 4-priority-hqos scheduling, which was supported previously. The migration of scheduler mode is automatic with an upgrade and there is no operator action required. As part of the migration, all CIR values at second-tier (per-SAP and per-VLAN) and third-tier (per-customer (MSS)) aggregate shaper levels are set to zero. Operators must exercise caution when performing an upgrade to Release 7.0.R6 or later from a previous Release 7.0 version, and must adjust the affected CIR values in accordance with the needs of their applications as soon as possible. The shapers with affected CIR values are listed later in this section. |
Table 5 describes the access, network, and hybrid port scheduling behavior after upgrading to 7705 SAR Release 7.0.R6 (or later) for Gen-3 hardware, and compares it with the scheduling behavior of Gen-2 and Gen-1 hardware.
| Port Type | Configuration | Gen-3 Hardware with 4-Priority Mode | Gen-1 or Gen-2 Hardware with 4-Priority Mode | Gen-1 Hardware with Profiled Mode |
| --- | --- | --- | --- | --- |
| Access | Within a SAP | EXP over BE | EXP over BE | N/A |
| Access | Default configuration | Simple round-robin (RR) scheduling among SAPs | EXP (across all queues, no SAP boundaries) over BE | N/A |
| Access | H-QoS and MSS aggregate shapers | RR among aggregates based on PIR and CIR (SAP at tier 2, MSS at tier 3) | N/A | N/A |
| Network | Default configuration (8 queues per port) | EXP over BE | EXP over BE | Conforming over non-conforming traffic in RR among queues |
| Network | Per-VLAN shaper | EXP over BE | RR among VLAN shapers based on PIR and CIR | RR among VLAN shapers based on PIR and CIR |
| Hybrid | Default configuration (8 queues per port) | EXP over BE | EXP (across all SAPs and network queues) over BE | N/A |
| Hybrid | Per-VLAN shaper | RR among VLAN shapers based on PIR and CIR | RR among VLAN shapers based on PIR and CIR | N/A |
In summary, with Release 7.0.R6 and later, the following updates to Gen-3 scheduling are implemented:
![]() | Note: For network egress traffic when the port is in network mode, there is no change to CIR-based shaping for per-VLAN shapers (that is, PIR-based shaping only, CIR-based shaping is not enabled), and backpressure to the FC queues based on the relative priority among all VLAN-bound IP interfaces is still enabled. |
The egress datapath shapers on a 6-port SAR-M Ethernet module operate on the same frame size as any other shaper. These egress datapath shapers are:
The egress port shaper on a 6-port SAR-M Ethernet module does not account for the 4-byte FCS. Packet byte offset can be used to make adjustments to match the desired operational rate or eliminate the implications of FCS. See Packet Byte Offset (PBO) for more information.
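As a hedged illustration, a 4-byte offset could be added so that shaping accounts for the FCS. The exact context in which packet-byte-offset is configured depends on the queue being adjusted, so the line below is a sketch rather than a complete command path; see Packet Byte Offset (PBO) for details.

```
# add 4 bytes to each packet's accounted size to compensate for the FCS
packet-byte-offset add 4
```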
For access ingress, access egress, and network ingress traffic, the behavior of 4-priority scheduling on Gen-3 hardware is similar to 4-priority scheduling on Gen-1 and Gen-2 hardware. See Figure 9 (access ingress), Figure 10 (access egress), Figure 11 (network ingress, destination mode), and Figure 12 (network ingress, aggregate mode).
For network egress traffic through a network port on Gen-3 hardware, the behavior of 4-priority scheduling is as follows: traffic priority is determined at the queue-level scheduler, which is based on the queue PIR and CIR and the queue type. The queue-level priority is carried through the various shaping stages and is used by the 4-priority Gen-3 VLAN scheduler at network egress. See Figure 13 and its accompanying description.
For hybrid ports, both access and network egress traffic use 4-priority scheduling that is similar to 4-priority scheduling on Gen-1 and Gen-2 hardware. See Figure 14 and its accompanying description.
![]() | Note: The 7705 SAR-X defaults to 4-priority mode and does not support fabric shapers. Access and network traffic is arbitrated in a round-robin manner towards the egress datapath.
In Figure 9, the shaper groups all belong within one shaper policy and only one shaper policy is assigned to an ingress adapter card. Each SAP can be associated with one shaper group. Multiple SAPs can be associated with the same shaper group. All the access ingress traffic flows to the access ingress fabric shaper, in-profile (conforming) traffic first, then out-of-profile (non-conforming) traffic. Network ingress traffic behaves similarly.
The 4-priority schedulers on the Gen-1/Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP basis.
Figure 10 shows 4-priority scheduling for access egress on Gen-3 hardware. QoS behavior for access egress is similar to QoS behavior for access ingress.
The 4-priority schedulers on the Gen-1/Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP basis.
Figure 11 and Figure 12 show network ingress scheduling for per-destination and aggregate modes, which are configured under the fabric-profile command. Traffic arriving on a network port is examined for its destination MDA and directed to the QoS block that sends traffic to the appropriate MDA. There is one set of queues for each block, and an additional set for multipoint traffic.
In Figure 11, there is one per-destination shaper for each destination MDA. In Figure 12, there is a single shaper to handle all the traffic.
Figure 13 shows 4-priority scheduling for Gen-3 hardware at network egress. Queue-level CIR and PIR values and the queue type are determined at queuing and provide the scheduling priority for a given flow across all shapers towards the egress port (#1 in Figure 13). At the per-VLAN aggregate level (#2), only a single rate—the total aggregate rate (PIR)—can be configured; CIR configuration is not supported at the per-VLAN aggregate-shaper level for network egress traffic. All VLANs are aggregated and scheduled by a 4-priority aggregate scheduler (#3). The flow is then fed to the port shaper and processed at the egress rate. In case of congestion, the port shaper provides backpressure, resulting in the buffering of traffic by individual FC queues until the congested state ends.
![]() | Note: The behavior of 4-priority scheduling for network egress traffic through a network port differs from the behavior for network egress traffic through a hybrid port. Figure 13 and Figure 14 show the differences. |
Figure 14 shows 4-priority scheduling for Gen-3 hardware where ports are in hybrid mode. The QoS behavior for both access and network egress traffic is similar except that the access egress path includes tier 3, per-customer aggregate shapers. Access and network shapers prepare and present traffic to the port shaper, which arbitrates between access and network traffic.
The 4-priority schedulers on the Gen-1/Gen-2 and Gen-3 hardware are very similar, except that 4-priority scheduling on Gen-3 hardware is done on a per-SAP or a per-VLAN basis (for access egress and network egress, respectively).
When a Gen-3-based port and a Gen-2-based port are attached to a LAG SAP (also referred to as a mix-and-match LAG), the scheduler mode must still be configured for the LAG SAP because it is used by the Gen-2 port; it is ignored by the Gen-3 port. When a Gen-3-based port and a Gen-1-based port are attached to a LAG SAP, the scheduler mode cannot be changed on either port.
For more information, refer to the “LAG Support on Third-Generation Ethernet Adapter Cards, Ports, and Platforms” section in the 7705 SAR Interface Configuration Guide.
This section contains overview information as well as information on the following topics:
Figure 15 shows a simplified diagram of the ports on a 2-port 10GigE (Ethernet) Adapter card (also known as a ring adapter card). The ports can also be conceptualized the same way for a 2-port 10GigE (Ethernet) module. A ring adapter card or module has physical Ethernet ports used for Ethernet bridging in a ring network (labeled Port 1 and Port 2 in the figure). These ports are referred to as the ring ports because they connect to the Ethernet ring. The ring ports operate on the Layer 2 bridging domain side of the ring adapter card or module, as does the add/drop port, which is an internal port on the card or module.
On the Layer 3 IP domain side of a ring adapter card or module, there is a virtual port (v-port) and a fabric port. The v-port is also an internal port. Its function is to help control traffic on the IP domain side of a ring adapter card or module.
To manage ring and add/drop traffic mapping to queues in the L2 bridging domain, a ring type network QoS policy can be configured for the ring at the adapter card level (under the config>card>mda context). To manage ring and add/drop traffic queuing and scheduling in the L2 bridging domain, network queue QoS policies can be configured for the ring ports and the add-drop port.
To manage add/drop traffic classification and remarking in the L3 IP domain, ip-interface type network QoS policies can be configured for router interfaces on the v-port. To manage add/drop traffic queuing and scheduling in the L3 IP domain, network queue QoS policies can be configured for the v-port and at network ingress at the adapter card level (under the config>card>mda context).
All ports on a ring adapter card or module are possible congestion points and therefore can have network queue QoS policies applied to them.
In the bridging domain, a single ring type network QoS policy can be applied at the adapter card level and operates on the ring ports and the add/drop port. In the IP domain, IP interface type network QoS policies can be applied to router interfaces.
Network QoS policies are created using the config>qos>network command, which includes the network-policy-type keyword to specify the type of policy:
Once the policy has been created, its default action and classification mapping can be configured.
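For example (the policy IDs are placeholders):

```
# ring type policy, for the L2 bridging domain
configure qos network 10 network-policy-type ring create

# ip-interface type policy, for router interfaces on the v-port
configure qos network 11 network-policy-type ip-interface create
```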
Network QoS policies are applied to the ring ports and the add/drop port using the qos-policy command found under the config>card>mda context. These ports are not explicitly specified in the command.
Network queue QoS policies are applied to the ring ports and the v-port using the queue-policy command found under the config>port context. Similarly, a network queue policy is applied to the add/drop port using the add-drop-port-queue-policy command, found under the config>card>mda context. The add/drop port is not explicitly specified in this command.
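A sketch of where these policies attach is shown below. The card/MDA and port identifiers and the policy names are placeholders, and intermediate CLI subcontexts are abbreviated.

```
# ring ports and add/drop port: adapter card level (config>card>mda)
configure card 1 mda 2 qos-policy 10
configure card 1 mda 2 add-drop-port-queue-policy "adp-queues"

# ring ports and v-port: port level (config>port)
configure port 1/2/1 queue-policy "ring-queues"
```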
The CLI commands for applying QoS policies are given in this guide. Their command descriptions are given in the 7705 SAR Interface Configuration Guide.
The following notes apply to configuring and applying QoS policies to a ring adapter card or module, as well as other adapter cards.
For specific information about QoS for IPSec traffic, refer to the “QoS” section in the “IPSec” chapter in the 7705 SAR Services Guide.
The 7705 SAR provides priority and scheduling for traffic into the encryption and decryption engines on nodes that support network group encryption (NGE). This applies to traffic at network ingress or network egress.
For specific information, refer to the “QoS for NGE Traffic” section in the “Network Group Encryption” chapter in the 7705 SAR Services Guide.
This section contains the following topics for traffic flow in the access ingress direction:
Traffic classification identifies a traffic flow and maps the packets belonging to that flow to a preset forwarding class, so that the flow can receive the required special treatment. Up to eight forwarding classes are supported for traffic classification. Refer to Table 2 for a list of these forwarding classes.
For TDM channel groups, all of the traffic is mapped to a single forwarding class. Similarly, for ATM VCs, each VC is linked to one forwarding class. On Ethernet ports and VLANs, up to eight forwarding classes can be configured based on 802.1p (dot1p) bits or DSCP bits classification. On PPP/MLPPP, FR (for Ipipes), or cHDLC SAPs, up to eight forwarding classes can be configured based on DSCP bits classification. FR (for Fpipes) and HDLC SAPs are mapped to one forwarding class.
![]() | Note:
|
Once the classification takes place, forwarding classes are mapped to queues as described in the sections that follow.
The various traffic classification methods used on the 7705 SAR are described in Table 6. A list of classification rules follows the table.
| Traffic Classification Based on... | Description |
| --- | --- |
a channel group (n × DS0) | Applies to 16-port T1/E1 ASAP Adapter card and 32-port T1/E1 ASAP Adapter card ports, 2-port OC3/STM1 Channelized Adapter card ports, 12-port Serial Data Interface card ports, 4-port T1/E1 and RS-232 Combination module ports, and 6-port E&M Adapter card ports in structured or unstructured circuit emulation mode. In this mode, a number of DS0s are transported within the payload of the same Circuit Emulation over Packet Switched Networks (CESoPSN) packet, Circuit Emulation over Ethernet (CESoETH) packet, or Structure-Agnostic TDM over Packet (SAToP) packet. Thus the timeslots transporting the same type of traffic are classified all at once. |
an ATM VCI | On ATM-configured ports, any virtual connection regardless of service category is mapped to the configured forwarding class. One-to-one mapping is the only supported option. VP- or VC-based classifications are both supported. A VC with a specified VPI and VCI is mapped to the configured forwarding class. A VP connection with a specified VPI is mapped to the configured forwarding class. |
an ATM service category | Similar ATM service categories can be mapped against the same forwarding class. Traffic from a given VC with a specified service category is mapped to the configured forwarding class. VC selection is based on the ATM VC identifier. |
an Ethernet port | All the traffic from an access ingress Ethernet port is mapped to the selected forwarding class. More granular classification can be performed based on dot1p or DSCP bits of the incoming packets. Classification rules applied to traffic flows on Ethernet ports behave similarly to access/filter lists. There can be multiple tiers of classification rules associated with an Ethernet port. In this case, classification is performed based on priority of classifier. The order of the priorities is described in Hierarchy of Classification Rules. |
an Ethernet VLAN (dot1q or qinq) | Traffic from an access Ethernet VLAN (dot1q or qinq) interface can be mapped to a forwarding class. Each VLAN can be mapped to one forwarding class. |
IEEE 802.1p bits (dot1p) | The dot1p bits in the Ethernet/VLAN ingress packet headers are used to map the traffic to up to eight forwarding classes. |
PPP/MLPPP, FR (for Ipipes), and cHDLC SAPs | Traffic from an access ingress SAP is mapped to the selected forwarding class. More granular classification can be performed based on DSCP bits of the incoming packets. |
FR (for Fpipes) and HDLC SAPs | Traffic from an access ingress SAP is mapped to the selected (default) forwarding class. |
DSCP bits | When the Ethernet payload is IP, ingress traffic can be mapped to a maximum of eight forwarding classes based on DSCP bit values. DSCP-based classification supports untagged, single-tagged, double-tagged, and triple-tagged Ethernet frames. If an ingress frame has more than three VLAN tags, then dot1q or qinq dot1p-based classification must be used. |
Multi-field classifiers | Traffic is classified based on any IP criteria currently supported by the 7705 SAR filter policies; for example, source and destination IP address, source and destination port, whether or not the packet is fragmented, ICMP code, and TCP state. For information on multi-field classification, refer to the 7705 SAR Router Configuration Guide, “Multi-field Classification (MFC)” and “IP, MAC, and VLAN Filter Entry Commands”. |
Table 7 shows classification options for various access entities (SAP identifiers) and service types. For example, traffic from a TDM port using a TDM (Cpipe) PW maps to one FC (all traffic has the same CoS). Traffic from an Ethernet port using an Epipe PW can be classified to as many as eight FCs based on DSCP classification rules, while traffic from a SAP with dot1q or qinq encapsulation can be classified to up to eight FCs based on dot1p or DSCP rules.
For Ethernet traffic, dot1p-based classification for dot1q or QinQ SAPs takes precedence over DSCP-based classification. For null-encapsulated Ethernet ports, only DSCP-based classification applies. In either case, when defining classification rules, a more specific match rule is always preferred to a general match rule.
For additional information on hierarchy rules, see Table 19 in the Service Ingress QoS Policies section.
| Access Type (SAP) | TDM PW | ATM PW | FR PW | HDLC PW | Ethernet PW | IP PW | VPLS | VPRN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TDM port | 1 FC | | | | | | | |
| Channel group | 1 FC | | | | | | | |
| ATM virtual connection identifier | | 1 FC | | | | 1 FC | | |
| FR | | | 1 FC | | | DSCP, up to 8 FCs | | |
| HDLC | | | | 1 FC | | | | |
| PPP / MLPPP | | | | | | DSCP, up to 8 FCs | | DSCP, up to 8 FCs |
| cHDLC | | | | | | DSCP, up to 8 FCs | | |
| Ethernet port | | | | | DSCP, up to 8 FCs | DSCP, up to 8 FCs | DSCP, up to 8 FCs | DSCP, up to 8 FCs |
| Dot1q encapsulation | | | | | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs |
| QinQ encapsulation | | | | | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs | Dot1p or DSCP, up to 8 FCs |
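As a hedged sketch, a service ingress policy implementing the dot1p/DSCP mappings described above might look like the following; the policy ID, match values, and forwarding-class choices are illustrative only. On a dot1q or qinq SAP, the dot1p entries take precedence over the DSCP entries, per the precedence rules above.

```
configure qos sap-ingress 20 create
    default-fc "be"          # catch-all for unmatched traffic
    dot1p 5 fc "ef"          # tagged frames with dot1p value 5
    dscp ef fc "ef"          # IP payload marked DSCP EF
    dscp af41 fc "af"        # IP payload marked DSCP AF41
exit
```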
Once the traffic is mapped to a forwarding class, the discard probability for the traffic can be configured as high or low priority at ingress. Once the traffic is further classified as high or low priority, different congestion management schemes can be applied based on this priority. For example, WRED curves can be run against the high- and low-priority traffic separately, as described in Slope Policies (WRED and RED).
The ability to specify the discard probability is significant because it controls the amount of traffic that is discarded under congestion or high usage. If you know the characteristics of your traffic, particularly its burst characteristics, changing the discard probability can be used to great advantage. The objective is to customize the properties of the random discard functionality so that the minimum amount of data is discarded.
Once the traffic is classified to different forwarding classes, the next step is to create the ingress queues and bind forwarding classes to these queues.
There is no restriction to a one-to-one association between forwarding classes and queues; that is, more than one forwarding class can be mapped to the same queue. This capability allows a bulk-sum amount of resources to be allocated to traffic flows of a similar nature. For example, in the case of 3G UMTS services, HSDPA and OAM traffic are both considered BE in nature. HSDPA traffic can be mapped to a better forwarding class (such as L2) while OAM traffic remains mapped to the BE forwarding class, yet both can be mapped to a single queue to control the total amount of resources for the aggregate of the two flows.
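The HSDPA/OAM example can be sketched as follows; the policy ID and queue number are placeholders.

```
configure qos sap-ingress 30 create
    queue 2 create
    exit
    fc "l2" create
        queue 2              # HSDPA traffic: better forwarding class
    exit
    fc "be" create
        queue 2              # OAM traffic: same queue, so both flows
                             # share one aggregate resource limit
    exit
exit
```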
A large but finite amount of memory is available for the queues. Within this memory space, many queues can be created. The queues are defined by user-configurable parameters. This flexibility and complexity is necessary in order to create services that offer optimal quality of service and is much better than a restrictive and fixed buffer implementation alternative.
Memory allocation is optimized to guarantee the CBS for each queue. The queue space beyond the CBS, bounded by the MBS, is allocated depending on the usage of buffer space and the existing guarantees to other queues (that is, their CBS values). The CBS defaults to 8 kbytes (for a 512-byte buffer size) or 18 kbytes (for a 2304-byte buffer size) for all access ingress queues on the 7705 SAR. With a small default queue depth (CBS) allocated for each queue, all services at full scale are guaranteed to have buffers for queuing. The default value may need to be altered to meet the requirements of a specific traffic flow or flows.
Traffic management on the 7705 SAR uses a packet-based implementation of the dual leaky bucket model. Each queue has a guaranteed space limited with CBS and a maximum depth limited with MBS. New packets are queued as they arrive. Any packet that causes the MBS to be exceeded is discarded.
The packets in the queue are serviced by two different profiled (rate-based) schedulers, the In-Profile and Out-of-Profile schedulers, where CIR traffic is scheduled before PIR traffic. These two schedulers empty the queue continuously.
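A minimal sketch of setting the dual-bucket parameters on a queue follows; the values are placeholders, and the units (kb/s for rates, kbytes for CBS/MBS) are assumptions.

```
configure qos sap-ingress 31 create
    queue 1 create
        rate 10000 cir 5000   # PIR 10 Mb/s, CIR 5 Mb/s
        cbs 18                # guaranteed queue depth
        mbs 180               # maximum queue depth; packets arriving
                              # beyond the MBS are discarded
    exit
exit
```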
For 4-priority scheduling, rate-based schedulers (CIR and PIR) are combined with queue-type schedulers (EXP or BE). For 16-priority scheduling, the rate-based schedulers are combined with the strict priority schedulers (CoS-8 queue first to CoS-1 queue last).
![]() | Note: For access ingress and egress, the 16-priority schedulers use additional hardware resources and capabilities, which results in increased throughput. |
Access ingress scheduling is supported on the adapter cards and ports listed in Table 8. The supported scheduling modes are 4-priority scheduling and 16-priority scheduling. Table 8 shows which scheduling mode each card and port supports at access ingress.
This section also contains information on the following topics:
| Adapter Card or Port | 4-Priority | 16-Priority |
| --- | --- | --- |
| 8-port Ethernet Adapter cards, versions 1 and 2 on the 7705 SAR-8 and version 2 on the 7705 SAR-18 | ✓ | |
| 8-port Gigabit Ethernet Adapter card | ✓ | ✓ |
| Packet Microwave Adapter card | ✓ | ✓ |
| 6-port Ethernet 10Gbps Adapter card | ✓ 1 | |
| 10-port 1GigE/1-port 10GigE X-Adapter card (in 10-port 1GigE mode) | ✓ | ✓ |
| 4-port SAR-H Fast Ethernet module | ✓ | |
| 6-port SAR-M Ethernet module | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-A (both variants) | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Ax | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-H | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Hc | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-M (all variants) | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-W | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-Wx (all variants) | ✓ | ✓ |
| DSL ports on the 7705 SAR-Wx | ✓ | ✓ |
| Ethernet ports on the 7705 SAR-X | ✓ 1 | |
| 16-port T1/E1 ASAP Adapter card | ✓ | |
| 32-port T1/E1 ASAP Adapter card | ✓ | |
| 2-port OC3/STM1 Channelized Adapter card | ✓ | |
| 4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
| 4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
| 4-port DS3/E3 Adapter card | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-A (variants with T1/E1 ASAP ports) | ✓ | |
| T1/E1 ASAP ports on the 7705 SAR-M (variants with T1/E1 ASAP ports) | ✓ | |
| TDM ports on the 7705 SAR-X | ✓ | |
| 12-port Serial Data Interface card | ✓ | |
| 6-port E&M Adapter card | ✓ | |
| 6-port FXS Adapter card | ✓ | |
| 8-port FXO Adapter card | ✓ | |
| 8-port Voice & Teleprotection card | ✓ | |
| Integrated Services card | ✓ | |
Note:
Each queue is serviced based on the user-configured CIR and PIR values. If the packets that are collected by a scheduler from a queue are flowing at a rate that is less than or equal to the CIR value, then the packets are scheduled as in-profile. Packets with a flow rate that exceeds the CIR value but is less than the PIR value are scheduled as out-of-profile. Figure 16 depicts this behavior by the “In-Prof” and “Out-Prof” labels. This behavior is comparable to the dual leaky bucket implementation in ATM networks. With in-profile and out-of-profile scheduling, traffic that flows at rates up to the traffic contract (that is, CIR) from all the queues is serviced prior to traffic that flows at rates exceeding the traffic contract. This mode of operation ensures that Service Level Agreements are honored and traffic that is committed to be transported is switched prior to traffic that exceeds the contract agreement.
![]() | Note: A profile is an arithmetical analysis of the rates that are permitted for a particular packet flow; therefore, profiled scheduling may also be called rate-based scheduling. |
In addition to the profiled scheduling described above, queue-type scheduling is supported at access ingress. Queues are divided into two categories: those serviced by the Expedited scheduler and those serviced by the Best Effort scheduler.
The Expedited scheduler has precedence over the Best Effort scheduler. Thus, at access ingress, CoS queues that are marked with an Expedited priority are serviced first. Then, the Best Effort marked queues are serviced. In a default configuration, the Expedited scheduler services the following CoS queues before the Best Effort scheduler services the rest:
If a packet with an Expedited forwarding class arrives while a Best Effort queue is being serviced, the Expedited scheduler takes over and services the Expedited CoS queue as soon as the current Best Effort packet has been serviced.
The schedulers at access ingress in the 7705 SAR service the group of all Expedited queues exhaustively ahead of the group of all Best Effort queues. This means that all Expedited queues must be empty before any packet from a Best Effort queue is serviced.
Note: There is no user configuration for the schedulers. The operation of the schedulers is described for informational purposes. A user can control the mapping of traffic flows based on classification controls, that is, by mapping forwarding classes to as many as eight CoS queues.
The following basic rules apply to the queue-type scheduling of CoS queues.
4-priority scheduling combines profiled scheduling and queue-type scheduling and applies the combination to all of the access ingress queues, providing the flexibility and scalability needed to meet the stringent QoS requirements of modern network applications. See Profiled (Rate-based) Scheduling and Queue-Type Scheduling for information on these types of scheduling.
Packets with a flow rate that is less than or equal to the CIR value of a queue are scheduled as in-profile. Packets with a flow rate that exceeds the CIR value but is less than the PIR value of a queue are scheduled as out-of-profile.
The scheduling cycle for 4-priority scheduling of CoS queues is shown in Figure 16. The following basic steps apply:
Note: If a packet arrives at any of the queues marked for Expedited scheduling while the scheduler is servicing a packet from a Best Effort queue or is servicing an out-of-profile packet, the scheduler finishes servicing the current packet and then returns to the Expedited queues immediately.
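Assuming the conventional combined order, conforming (in-profile) traffic ahead of non-conforming, and Expedited ahead of Best Effort within each profile, the resulting 4-priority service order can be sketched as follows (an illustrative model with invented field names; the exact priority list is an assumption consistent with the description above):

```python
# Strict 4-priority order: Expedited in-profile, Best Effort in-profile,
# Expedited out-of-profile, Best Effort out-of-profile.
ORDER = [("expedited", "in"), ("best-effort", "in"),
         ("expedited", "out"), ("best-effort", "out")]

def service_order(packets):
    """Sort packets into the order a 4-priority scheduler drains them."""
    return sorted(packets, key=lambda p: ORDER.index((p["qtype"], p["profile"])))

pkts = [{"qtype": "best-effort", "profile": "out", "id": 1},
        {"qtype": "expedited",   "profile": "in",  "id": 2},
        {"qtype": "best-effort", "profile": "in",  "id": 3},
        {"qtype": "expedited",   "profile": "out", "id": 4}]
ids = [p["id"] for p in service_order(pkts)]   # [2, 3, 4, 1]
```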
At access ingress, 4-priority scheduling for Gen-3 hardware is the same as 4-priority scheduling for Gen-2 or Gen-1 hardware, except that scheduling is done on a per-SAP basis. For more information, see QoS for Gen-3 Adapter Cards and Platforms.
For 16-priority scheduling, the rate-based schedulers (CIR and PIR) are combined with the strict priority schedulers (CoS-8 queue first to CoS-1 queue last).
For general information on 16-priority scheduling, see Network Egress 16-Priority Scheduling. Access ingress 16-priority scheduling functions in a similar fashion to network egress 16-priority scheduling.
The 7705 SAR treats broadcast, multicast, and unknown (BMU) traffic in the same way as unicast traffic. After being classified, BMU traffic can be mapped to individual queues in order to be forwarded to the fabric. Classification of unicast and BMU traffic does not differ, which means that BMU traffic that has been classified to a BMU-designated queue can be shaped at its own rate, offering better control and fairer use of fabric resources. For more information, see BMU Support.
On the 7705 SAR, H-QoS adds second-tier (or second-level), per-SAP aggregate shapers. As shown in Figure 17, traffic ingresses at an Ethernet SAP and is classified and mapped to up to eight different CoS queues on a per-ingress-SAP basis. The aggregate rate CIR and PIR values are then used to shape the traffic. The conforming loop (aggregate CIR loop) schedules the packets out of the eight CoS queues in strict priority order (queue priority CIRs followed by queue priority PIRs). If the aggregate CIR is exceeded at any time during the scheduling operation, regardless of the per-queue CIR/PIR configuration, the aggregate conforming loop for the SAP ends and the aggregate non-conforming loop begins. The aggregate non-conforming loop schedules the packets out of the eight CoS queues in strict priority order. SAPs sending traffic to the 4-priority scheduler do not have a second-tier per-SAP aggregate shaper unless traffic arbitration is desired, in which case an aggregate CIR for all the 4-priority SAPs can be configured (see Access Ingress Per-SAP Shapers Arbitration). See Per-SAP Aggregate Shapers (H-QoS) On Gen-2 Hardware for general information.
The aggregate rate limit for the per-SAP aggregate shaper is configured in the service context, using the sap>ingress>agg-rate-limit or sap>egress>agg-rate-limit command.
For per-SAP aggregate shaping on Gen-2 adapter cards, the SAP must be scheduled using a 16-priority scheduler.
Note: The default setting for scheduler-mode is 4-priority on Gen-2 adapter cards and platforms. The user must toggle the scheduling mode to 16-priority for a given SAP before SAP aggregate shaper rates (agg-rate-limit) can be configured. Before changing the scheduling mode, the SAP must be shut down.
The 16-priority scheduler can be used without setting an aggregate rate limit for the SAP, in which case traffic out of the SAP queues is serviced in strict priority order, with conforming traffic serviced before non-conforming traffic. Using 16-priority schedulers without a configured per-SAP aggregate shaper (PIR = maximum and CIR = 0 kb/s) might be preferred over 4-priority mode for the following reasons:
As indicated in Figure 17, all the traffic leaving the shaped SAPs must be serviced using 16-priority scheduling mode.
The SAPs without an aggregate rate limit, which are called unshaped SAPs, can be scheduled using either 4-priority or 16-priority mode as one of the following:
The arbitration of access ingress traffic leaving the 4-priority and 16-priority schedulers and continuing towards the fabric is described in the following section.
The 7705 SAR provides per-SAP aggregate shapers for access ingress SAPs. With this feature, both shaped and unshaped SAPs can coexist on the same adapter card. When switching traffic from shaped and unshaped SAPs to the fabric, arbitration is required.
Figure 18 shows how the 7705 SAR arbitrates traffic to the fabric between 4-priority unshaped SAPs, and 16-priority shaped and unshaped SAPs.
All SAPs support configurable CIR and PIR rates on a per-CoS queue basis (per-queue level). In addition, each 16-priority SAP has its own configurable per-SAP aggregate CIR and PIR rates that operate one level above the per-queue rates.
To allow the 4-priority unshaped SAPs to compete for fabric bandwidth with the aggregate CIR rates of the shaped SAPs, the 4-priority unshaped SAPs (as a group) have their own configurable unshaped SAP aggregate CIR rate, which is configured on the 7705 SAR-8 and 7705 SAR-18 under the config>qos>fabric-profile aggregate-mode context using the unshaped-sap-cir parameter. On the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, the CIR rate is configured in the config>system>qos>access-ingress-aggregate-rate context.
The configured CIR and PIR for the 16-priority shaped SAPs dictate committed and uncommitted fabric bandwidth for each of these SAPs. Configuring the unshaped-sap-cir parameter for the group (aggregate) of 4-priority unshaped SAPs ensures that the unshaped SAPs can compete for fabric bandwidth with the aggregate CIR rates of the shaped SAPs. Otherwise, the unshaped SAPs would only be able to send traffic into the fabric after the aggregate CIR rates of all the shaped SAPs were serviced. The 16-priority unshaped SAPs are serviced as if they were non-conforming traffic for the 16-priority shaped SAPs.
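The effect of the unshaped-sap-cir parameter can be illustrated with a simplified two-pass allocation model. This is hypothetical arithmetic for illustration only; the actual scheduler operates per packet in round-robin fashion, not in bulk:

```python
def fabric_allocation(shaped, unshaped_group):
    """Two-pass sketch: committed (CIR) demand of each shaped SAP and of
    the unshaped-SAP aggregate is satisfied first; only then does the
    excess (up-to-PIR) demand compete for the remaining fabric bandwidth.
    Each entry is a dict with 'demand' and 'cir' in the same units."""
    entries = shaped + [unshaped_group]
    committed = [min(e["demand"], e["cir"]) for e in entries]
    excess = [e["demand"] - c for e, c in zip(entries, committed)]
    return committed, excess

shaped = [{"demand": 300, "cir": 200}, {"demand": 100, "cir": 150}]
unshaped = {"demand": 250, "cir": 100}   # group CIR set via unshaped-sap-cir
committed, excess = fabric_allocation(shaped, unshaped)
# committed = [200, 100, 100]; excess = [100, 0, 150]
```

Without a group CIR (unshaped CIR of 0), the unshaped aggregate would land entirely in the excess pass, serviced only after every shaped SAP's committed rate.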
The aggregate fabric shaper (Figure 18) performs round-robin selection between the 16-priority SAPs (shaped and unshaped) and the 4-priority unshaped SAP aggregate until:
Note: The CLI does not block a fabric profile with an unshaped SAP CIR configuration on an 8-port Ethernet Adapter card, a 6-port Ethernet 10Gbps Adapter card, or an ASAP card (16-port T1/E1 ASAP Adapter card or 32-port T1/E1 ASAP Adapter card). However, the unshaped SAP CIR configuration has no effect and is ignored on these cards.
After the traffic is scheduled, it must be sent to the fabric interface. In order to avoid congestion in the fabric and ease the effects of possible bursts, a shaper is implemented on each adapter card.
The shapers smooth out any packet bursts and ease the flow of traffic onto the fabric. The shapers use buffer space on the adapter cards and eliminate the need for large ingress buffers in the fabric.
The ingress to-fabric shapers are user-configurable. For the 7705 SAR-8 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable Ingress Shaping to Fabric (Access and Network) for details. For the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, the shapers can operate at a maximum rate of 5 Gb/s. For the 7705 SAR-X, the shapers are not user-configurable. See Fabric Shaping on the Fixed Platforms (Access and Network) for details.
After the shaping function, all of the traffic is forwarded to the fabric interface in round-robin fashion, one packet at a time, from every access ingress adapter card.
Note: Starting with Release 8.0.R4, unless per-service-hashing is enabled, a 4-byte hash value is appended to the internal overhead for VPLS multicast traffic at ingress. The internal hash value is discarded at egress before scheduling. Therefore, shaping rates at access and network ingress and for fabric policies may need to be adjusted accordingly. In addition, the 4-byte internal hash value may be included in any affected statistics counters.
Fabric shapers support both unicast and multipoint traffic. Multipoint traffic can be any combination of broadcast, multicast, and unknown (BMU) frames. From access ingress to the fabric, BMU traffic is treated as unicast traffic. A single copy of BMU traffic is handed off to the fabric, where it is replicated and sent to all potential destination adapter cards.
An aggregate mode shaper provides a single aggregate shaping rate. The rate defines the maximum bandwidth that an adapter card can switch through its fabric interface at any given time. The rate is a bulk value and is independent of the destination or the type of traffic. For example, in aggregate mode, an ingress adapter card might use the full rate to communicate with a single destination adapter card, or it might use the same rate to communicate with multiple egress adapter cards.
Aggregate mode and the aggregate rate apply to fabric shapers that handle combined unicast/BMU traffic, unicast-only traffic, or BMU-only traffic. One aggregate rate sets the rate on all adapter cards. The proportional distribution between unicast and BMU traffic can be fine-tuned using queue-level schedulers, while the to-fabric shaper imposes a maximum rate that ensures fairness on the fabric for traffic from all adapter cards.
When services (IES, VPRN, and VPLS) are enabled, the fabric profile mode for access ingress should be set to aggregate mode.
Destination mode offers granular to-fabric shaping rates on a per-destination adapter card basis. While destination mode offers more flexibility and gives more control than aggregate mode, it also requires a greater understanding of network topology and flow characteristics under conditions such as node failures and link, adapter card, or port failures.
In a destination mode fabric profile, the unicast traffic and BMU traffic are always shaped separately.
For unicast traffic, individual destination rates can be configured on each adapter card. For BMU traffic, one multipoint rate sets the rate on all adapter cards. Fairness among different BMU flows is ensured by tuning the QoS queues associated with the port.
Fabric shapers support access ingress traffic being switched from a SAP to another SAP residing on a port that is part of a Link Aggregation Group (LAG). Either the aggregate mode or destination mode can be used for fabric shaping.
When the aggregate mode is used, one aggregate rate sets the rate on all adapter cards. When the destination mode is used, the multipoint shaper is used to set the fabric shaping rate for traffic switched to a LAG SAP.
Note: Even though the multipoint shaper is used to set the fabric shaping rate for traffic switched to a LAG SAP, it is the per-destination unicast counters that are incremented to show the fabric statistics rather than the multipoint counter. Only the fabric statistics of the active port of the LAG are incremented, not the standby port.
The use of fabric profiles allows the ingress (to the fabric) shapers to be user-configurable for access ingress and network ingress traffic.
For the 7705 SAR-8 and 7705 SAR-18, the maximum rates are:
For information about fabric shapers on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-X, see Fabric Shaping on the Fixed Platforms (Access and Network).
Because a rate of 1 Gb/s or higher can be configured from any adapter card to the fabric, the fabric may become congested. Therefore, the collection and display of fabric statistics are provided. These statistics report on fabric traffic flow and potential discards. Refer to the 7705 SAR Interface Configuration Guide, “Configuring Adapter Card Fabric Statistics”, “Configuration Command Reference”, and “Show, Monitor, Clear, and Debug Command Reference” for information on how to configure, show, and monitor fabric statistics on an adapter card.
The ingress buffers for a card are much larger than the ingress buffers for the fabric; therefore, it is advantageous to use the larger card buffers for ingress shaping. In order to use the ingress card buffers and have much more granular control over traffic, two fabric profile modes are supported, per-destination mode and aggregate mode. Both modes offer shaping towards the fabric from an adapter card, but per-destination shapers offer the maximum flexibility by precisely controlling the amount of traffic to each destination card at a user-defined rate. Aggregate mode is used for simpler deployments, where the amount of traffic flowing to a destination adapter card is not controlled.
The default mode of operation for the 7705 SAR is aggregate mode, with a fixed aggregate rate of 200 Mb/s set for both access ingress and network ingress traffic. Therefore, in a default configuration, each adapter card can switch up to 200 Mb/s of access ingress and network ingress traffic towards the fabric.
All the switched traffic can be destined for a single adapter card or it can be spread among multiple adapter cards. For higher-bandwidth applications, a network traffic analysis is recommended to determine which shaper rates would best suit the application and traffic patterns of a particular environment.
The to-fabric shapers are provided on the 7705 SAR to ensure adequate use of ingress buffers in case of congestion. With the ingress shapers, the large ingress card buffers can be configured to absorb bursty traffic and pace the traffic for better use of resources.
For example, if the average access ingress traffic bandwidth for an adapter card is 400 Mb/s and the peak bandwidth is 800 Mb/s, the rate of the to-fabric shapers can be configured to be 400 Mb/s. The initial burst is then absorbed in the ingress buffers of the adapter card where the bursty traffic ingresses the 7705 SAR, and the traffic is paced onto the fabric at the shaped rate of 400 Mb/s, so the fabric buffers are not exhausted by any single adapter card. The same example applies to network ingress traffic.
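The buffer arithmetic behind this example can be sketched as follows: while a burst arrives faster than the shaper rate, backlog builds in the card buffer at the difference of the two rates. This is illustrative arithmetic only, not a product sizing formula:

```python
def backlog_bits(peak_bps, shaper_bps, burst_seconds):
    """Backlog that accumulates in the ingress card buffer while an
    arriving burst exceeds the to-fabric shaper rate."""
    return max(0.0, peak_bps - shaper_bps) * burst_seconds

# An 800 Mb/s burst lasting 10 ms against a 400 Mb/s shaper backlogs
# (800e6 - 400e6) * 0.010 = 4,000,000 bits, or 500 kB of card buffer.
bits = backlog_bits(800e6, 400e6, 0.010)
```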
Table 9 summarizes the different capabilities offered by the two modes.
Capability | Per Destination Mode | Aggregate Mode |
Access ingress to-fabric shapers | ✓ | ✓ |
Network ingress to-fabric shapers | ✓ | ✓ |
Individual shaping from an ingress card towards each destination card based on a user-defined rate | ✓ | |
Aggregate/bulk sum shaping regardless of destination from an ingress card | | ✓
Figure 19 and Figure 20 illustrate the functionality of fabric shapers in per-destination mode and aggregate mode, respectively.
Referring to Figure 19, once the per-destination prioritization and scheduling described in previous sections of this chapter have taken place, the per-destination adapter card shapers take effect. With per-destination shapers, the maximum amount of bandwidth that each destination adapter card can receive from the fabric can be controlled. For example, the maximum amount of bandwidth that adapter card 1 can switch to the remaining adapter cards, as well as the amount of bandwidth switched back to adapter card 1, can be configured at a set rate.
Figure 20 illustrates the functionality of fabric shapers in aggregate mode. Once the policing, classification, queuing, and per-destination priority queuing described in previous sections of this chapter have taken place, the aggregate mode adapter card shapers take effect. In aggregate mode, the aggregate of all the access ingress and network ingress traffic is shaped at a user-configured rate, regardless of the destination adapter card.
Mixing different fabric shaper modes within the same chassis and on the same adapter card is not recommended; however, it is supported. As an example, an 8-port Gigabit Ethernet Adapter card in a 7705 SAR-18 can be configured for aggregate mode for access ingress and for per-destination mode for network ingress. The same chassis can also contain an adapter card (for example, the 8-port Ethernet Adapter card) that is configured for per-destination mode for all traffic. This setup is shown in the following example.
MDA | MDA Type | Access Fabric Mode | Network Fabric Mode
1/1 | a8-ethv2 | Destination | Destination |
1/2 | a8-1gb-sfp | Aggregate | Destination |
1/3 | a4-oc3 | Destination | Destination |
1/4 | a32-chds1v2 | Destination | Destination |
1/X1 | x-10GigE | Aggregate | Destination |
Note:
The 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-W, and 7705 SAR-Wx support user-configurable fabric shapers at rates of up to 5 Gb/s for access ingress and network ingress traffic. The fabric interface on these nodes is a shared resource between both traffic types, and one buffer pool serves all MDAs (ports).
These nodes do not support fabric profiles; instead, they have a single aggregate rate limit for restricting access traffic into the fabric and a single aggregate rate limit for restricting network traffic into the fabric. These limits apply to all MDAs. Both access ingress and network ingress traffic can be configured to shaping rates of between 1 kb/s and 5 Gb/s. The default rate for access ingress traffic is 500 Mb/s. The default rate for network ingress traffic is 2 Gb/s. Statistics can be viewed for aggregate access and network traffic flow through the fabric and possible discards.
The 7705 SAR-X fabric shaper rate is not configurable for access ingress or network ingress traffic, and is set to the maximum rate for the platform. There are three buffer pools on the 7705 SAR-X, one for each MDA (block of ports).
This section contains the following topics for traffic flow in the access ingress direction:
The 7705 SAR uses an Ethernet-based fabric. Each packet that is sent to the fabric is equipped with a fabric header that contains its specific CoS requirement. Since all of the packets switched across the fabric are already classified, queued, scheduled and marked according to the desired QoS parameters, each of these packets has been passed through the Traffic Management (TM) block on an adapter card, or the Control and Switching Module (CSM) in the case of control packets. Therefore, each packet arrives at the fabric having been already scheduled for optimal flow. The function of the fabric is to switch each packet through to the appropriate destination adapter card, or CSM in the case of control packets, in an efficient manner.
Since the traffic is shaped at a certain rate by the ingress adapter card (that is, bursts are smoothed by the traffic management function), minimal buffering should be needed on the switch fabric. However, buffer space allocation and usage are in accordance with the priorities at the ingress adapter card. As is the case with the schedulers on the adapter cards, two priorities are supported on the switch fabric. The switch fabric serves the traffic in the following priority order:
Note:
This section contains the following topics for traffic flow in the network egress direction:
BMU traffic at network egress is handled in the same way as unicast traffic in terms of scheduling, queuing, and port-level shaping. Both unicast and BMU traffic are mapped to queues according to their FC markings. Traffic from these queues, whether unicast or BMU, is scheduled according to user-configured rates. Port-level shapers treat all the queues identically, regardless of traffic type.
After traffic is switched through the fabric from one or several access ingress adapter cards to a network egress adapter card, queuing-level aggregation on a per-forwarding-class basis is performed on all of the received packets.
An adapter card that is used for network egress can receive, and likely will receive, packets from multiple adapter cards that are configured for access ingress operations, as well as from the CSM. Adapter cards that are configured for network access permit user configuration of queues and the association of forwarding classes with the queues. These are the same configuration principles that are used for adapter cards configured for access ingress connectivity. As at access ingress, more than one forwarding class can share the same queue.
Aggregation of different forwarding classes under queues takes place for each bundle or port. If a port is a member of a bundle, such as a Multilink Point-to-Point Protocol (MLPPP) bundle, then the aggregation and queuing are implemented for the entire bundle. If a port is a standalone port, that is, not a member of a bundle, then the queuing takes place per port.
Network Ethernet ports support network egress per-VLAN (per-interface) shapers with eight CoS queues per VLAN, in addition to the eight per-port CoS queues shared by all unshaped VLANs. When the VLAN shaper is enabled, eight unique CoS queues are created for each shaped VLAN; the eight queues shared by all the remaining unshaped VLANs are referred to as the unshaped VLAN CoS queues. VLAN shapers are enabled when the queue-policy command is used to assign a network queue policy to the interface.
For details on per-VLAN network egress queuing and scheduling, see Per-VLAN Network Egress Shapers.
Network egress scheduling is supported on the adapter cards and ports listed in Table 10. The supported scheduling modes are profiled, 4-priority, and 16-priority. Table 10 shows which scheduling mode each card and port supports at network egress.
This section also contains information on the following topics:
Adapter Card or Port | Profiled | 4-Priority | 16-Priority |
8-port Ethernet Adapter cards, versions 1 and 2 on the 7705 SAR-8 and version 2 on the 7705 SAR-18 | ✓ | ✓ |
8-port Gigabit Ethernet Adapter card | | | ✓
Packet Microwave Adapter card | | | ✓
2-port 10GigE (Ethernet) Adapter card/module | | | ✓
6-port Ethernet 10Gbps Adapter card | | ✓ 1 |
10-port 1GigE/1-port 10GigE X-Adapter card | | | ✓
4-port SAR-H Fast Ethernet module | | | ✓
6-port SAR-M Ethernet module | | | ✓
Ethernet ports on the 7705 SAR-A (both variants) | | | ✓
Ethernet ports on the 7705 SAR-Ax | | | ✓
Ethernet ports on the 7705 SAR-H | | | ✓
Ethernet ports on the 7705 SAR-Hc | | | ✓
Ethernet ports on the 7705 SAR-M (all variants) | | | ✓
Ethernet ports on the 7705 SAR-W | | | ✓
Ethernet ports on the 7705 SAR-Wx (all variants) | | | ✓
DSL ports on the 7705 SAR-Wx | | | ✓
Ethernet ports on the 7705 SAR-X | | ✓ 1 |
16-port T1/E1 ASAP Adapter card | | ✓ |
32-port T1/E1 ASAP Adapter card | | ✓ |
2-port OC3/STM1 Channelized Adapter card | | ✓ |
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | | ✓ |
4-port OC3/STM1 Clear Channel Adapter card | | ✓ |
4-port DS3/E3 Adapter card | | ✓ |
T1/E1 ASAP ports on the 7705 SAR-A (variants with T1/E1 ASAP ports) | | ✓ |
T1/E1 ASAP ports on the 7705 SAR-M (variants with T1/E1 ASAP ports) | | ✓ |
TDM ports on the 7705 SAR-X | | ✓ |
Note:
The implementation of network egress scheduling on the cards and ports listed in Table 10 under “4-Priority” is very similar to the scheduling mechanisms used for adapter cards that are configured for access ingress traffic. 4-priority scheduling is a combination of queue-type scheduling (Expedited vs. Best-effort scheduling) and profiled scheduling (rate-based scheduling).
Note: The encapsulation type must be ppp-auto for PPP/MLPPP bundles on the following:
Packets arriving at rates less than or equal to the CIR are scheduled as in-profile. Packets that arrive at rates greater than the CIR, but less than the PIR, are scheduled as out-of-profile. In-profile traffic is exhaustively transmitted from the queues before out-of-profile traffic is transmitted. That is, all of the in-profile packets must be transmitted before any out-of-profile packets are transmitted. In addition, Expedited queues are always scheduled before Best Effort queues.
The default configuration of scheduling CoS queues provides a logical and consistent means to manage the traffic priorities. The default configuration is as follows:
Note: Default configuration means that the queues are configured according to the tables and defaults described in this guide. Customers can configure the queues differently.
The order shown below is maintained when scheduling the traffic on the adapter card’s network ports. A strict priority is applied between the four schedulers, and all four schedulers are exhaustive:
The adapter cards and ports that support 4-priority scheduling for network egress traffic on Gen-3 hardware are identified in Table 10. This type of scheduling takes into consideration the traffic’s profile type and the CoS queue priority. It also uses priority information to apply backpressure to lower-level CoS queues. See QoS for Gen-3 Adapter Cards and Platforms for details.
Network egress ports on the 8-port Ethernet Adapter card support both profiled scheduling (rate-based scheduling) and 4-priority scheduling (queue-type scheduling combined with profiled scheduling). Users can configure either scheduling mode on a per-port basis. The default is profiled scheduling.
If a port is configured for profiled scheduling, packets arriving at rates less than or equal to the CIR are scheduled as in-profile, and packets that arrive at rates greater than the CIR, but less than the PIR, are scheduled as out-of-profile. In-profile traffic is exhaustively transmitted from the queues before out-of-profile traffic is transmitted. That is, all of the in-profile packets must be transmitted before any out-of-profile packets are transmitted.
The following profiled schedulers service the queues in the order shown:
If a port is configured for 4-priority scheduling, the traffic on the port is scheduled in the same way as with non-Ethernet cards and ports. For details, refer to Network Egress 4-Priority Scheduling.
The scheduler mode is set at the interface level in the config>port>ethernet>network context. The encapsulation type for an 8-port Ethernet Adapter card in profiled scheduling mode or 4-priority scheduling mode can be either null or dot1q.
The adapter cards and ports that support 16-priority scheduling for network egress traffic are listed in Table 10. This type of scheduling takes into consideration the traffic’s profile type and the priority of the CoS queue that the traffic is coming from.
Packets arriving at rates less than or equal to the CIR are scheduled as in-profile. Packets that arrive at rates greater than the CIR, but less than the PIR, are scheduled as out-of-profile. Eight CoS queues in total are available for packets to go through.
In-profile traffic is exhaustively transmitted from the queues, starting with the highest-priority CoS queue. A strict priority is applied between the eight CoS queues. If a packet arrives at a queue of higher priority than the one being serviced, the scheduler services the packet at the higher-priority queue as soon as it finishes servicing the current packet.
Once all the in-profile traffic is transmitted, the out-of-profile traffic is transmitted, still maintaining the priority order of the queues. If an in-profile packet arrives while the scheduler is servicing an out-of-profile packet, the scheduler finishes servicing the out-of-profile packet and then immediately services the in-profile packet.
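This drain order can be sketched as a two-level sort, profile first and then CoS queue priority (an illustrative model with invented field names, not the product implementation):

```python
def sixteen_priority_order(packets):
    """Drain order sketch for 16-priority scheduling: all in-profile
    traffic first, from the highest CoS queue (8) down to the lowest (1),
    then out-of-profile traffic in the same strict queue order."""
    return sorted(packets,
                  key=lambda p: (0 if p["profile"] == "in" else 1, -p["cos"]))

pkts = [{"cos": 3, "profile": "out", "id": "a"},
        {"cos": 8, "profile": "in",  "id": "b"},
        {"cos": 5, "profile": "out", "id": "c"},
        {"cos": 1, "profile": "in",  "id": "d"}]
ids = [p["id"] for p in sixteen_priority_order(pkts)]
# In-profile b (CoS 8) and d (CoS 1) drain before out-of-profile c and a.
```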
The order of priority in the default configuration is as follows:
Note: Default configuration means that the queues are configured according to the tables and defaults described in this guide. Customers can configure the queues differently.
All the network egress traffic is shaped at the bundle or interface rate. An interface does not necessarily correspond directly to a port; it can be a sub-channel of a port. For example, Fast Ethernet could be used for network egress while the leased bandwidth is only a fraction of the port speed. In this case, it is possible to shape at an interface rate of, for example, 15 Mb/s.
The same also applies to MLPPP bundles. The shaping takes place per MLPPP bundle, and the traffic is shaped at the aggregate rate of the MLPPP bundle.
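The pacing effect of shaping at an interface rate below the port speed can be sketched as follows. This is an illustrative model that considers only serialization at the shaped rate and ignores burst tolerance:

```python
def departure_times(pkt_bytes, shaper_bps):
    """Departure time of each packet when paced at the interface (or
    MLPPP bundle) shaping rate rather than the physical port rate."""
    t, times = 0.0, []
    for size in pkt_bytes:
        t += size * 8 / shaper_bps     # serialization time at the shaped rate
        times.append(t)
    return times

# Two 1500-byte packets shaped at 15 Mb/s on a Fast Ethernet port
# depart roughly 0.8 ms apart, even though the port could send them
# back to back at 100 Mb/s.
departure_times([1500, 1500], 15e6)
```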
Hybrid ports use a third-tier, dual-rate aggregate shaper to provide arbitration between the bulk of access and network egress traffic flows. For details, see QoS for Hybrid Ports on Gen-2 Hardware.
Network egress VLAN traffic uses second-tier (or second-level), per-VLAN shapers to prepare network egress traffic for arbitration with the aggregate of the unshaped VLANs. All the shaped VLAN shapers are arbitrated with one unshaped VLAN shaper.
Note: This section applies to Gen-1 and Gen-2 adapter cards. For information on per-VLAN shapers for Gen-3 adapter cards and platforms, such as the 6-port Ethernet 10Gbps Adapter card and the 7705 SAR-X, see Figure 13 in the QoS for Gen-3 Adapter Cards and Platforms section.
As shown in Figure 21, traffic from the fabric flows to one or more VLANs, where it is classified and mapped to up to eight different CoS queues on a per-VLAN basis. The VLANs can be shaped or unshaped. Each shaped VLAN has its own set of CoS queues. The aggregate of unshaped VLANs uses the same set of CoS queues (that is, one set of queues for all unshaped VLANs).
For more information, see Per-VLAN Network Egress Shapers and Shaped and Unshaped VLANs.
Note: Due to space limitations in Figure 21, the second-tier, per-VLAN aggregate shapers are represented as a single loop containing the label “per VLAN”, even though they are dual-rate shapers similar to the third-tier network aggregate shaper.
Because the per-VLAN shapers are dual-rate shapers, their aggregate rate CIR and PIR values shape the traffic, as follows.
A shaped VLAN configured with default aggregate rate limits (PIR = maximum and CIR = 0 kb/s) is equivalent to an unshaped VLAN, except that its traffic flows through a per-VLAN shaper rather than being combined with the bulk (aggregate) of the unshaped VLANs. Using a shaped VLAN in this way (default rate limits) might be preferred over using an unshaped VLAN for the following reasons:
The arbitration of shaped and unshaped VLAN traffic at the third-tier shaper is described in the following section.
For shaped VLANs, the configured CIR and PIR limits dictate committed and uncommitted port bandwidth for each of these VLANs. To ensure that the bulk (aggregate) of unshaped VLANs can compete for port bandwidth with the aggregate of CIR rates for the shaped VLANs, the unshaped VLANs (as a group) have their own aggregate CIR rate, which is configured using the unshaped-if-cir command (under the config>port>ethernet>network>egress context). Otherwise, without their own aggregate CIR rate, the unshaped VLANs are only able to send traffic into the port after the aggregate CIR rates of all the shaped VLANs are serviced. Shaped VLANs using default aggregate rate limits (PIR = maximum and CIR = 0 kb/s) are serviced as if they were non-conforming traffic for shaped VLANs.
Note: This section applies to Gen-2 adapter cards. For information on per-VLAN shaper arbitration for Gen-3 adapter cards and platforms, such as the 6-port Ethernet 10Gbps Adapter card and the 7705 SAR-X, see Figure 13 in the QoS for Gen-3 Adapter Cards and Platforms section.
Referring to Figure 21, at the port shaper, conforming (CIR) traffic has priority over non-conforming traffic. The arbitration between the shaped VLANs and unshaped VLANs is handled in the following priority order:
EXP bits can be marked at network egress, using the EXP bit markings configured for the forwarding class; both the tunnel and pseudowire EXP bits are marked with the forwarding class value.
The default network egress QoS marking settings are given in Table 24.
For MPLS tunnels, if network egress Ethernet ports are used, dot1p bit marking can be enabled in conjunction with EXP bit marking. In this case, the tunnel and pseudowire EXP bits do not have to be the same as the dot1p bits.
For GRE and IP tunnels, dot1p marking and pseudowire EXP marking can be enabled, and DSCP marking can also be enabled.
Network egress dot1p is supported for Ethernet frames, which can carry IPv4, IPv6, or MPLS packets. EXP re-marking is supported for MPLS packets.
This section contains the following topics for traffic flow in the network ingress direction:
Network ingress traffic originates from a network egress port located on another interworking device, such as a 7710 or 7750 Service Router or another 7705 SAR, and flows from the network toward the fabric in the 7705 SAR.
The ingress MPLS packets can be mapped to forwarding classes based on EXP bits that are part of the headers in the MPLS packets. These EXP bits are used across the network to ensure an end-to-end network-wide QoS offering. With pseudowire services, there are two labels, one for the MPLS tunnel and one for the pseudowire. Mapping is performed using the EXP values from the outer tunnel MPLS label. This ensures that the EXP bit settings, which may have been altered along the path by the tandem label switch routers (LSRs), are used to identify the forwarding class of the encapsulated traffic.
Ingress GRE and IP packets are mapped to forwarding classes based on DSCP bit settings of the IP header. GRE tunnels are not supported for IPv6; therefore, DSCP bit classification of GRE packets is only supported for IPv4. DSCP bit classification of IP packets is supported for both IPv4 and IPv6.
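The classification rules above (outer-label EXP bits for MPLS, DSCP bits for GRE/IP) can be sketched as a lookup. The forwarding class names follow the FC order of Table 11 (BE through NC), but the specific EXP-to-FC and DSCP-to-FC entries below are example policy entries, not the router defaults.

```python
# Example ingress classification entries -- illustrative, not defaults.
EXP_TO_FC = {0: "be", 1: "l2", 2: "af", 3: "l1",
             4: "h2", 5: "ef", 6: "h1", 7: "nc"}
DSCP_TO_FC = {46: "ef", 0: "be"}  # example entries only

def classify(packet):
    """Classify an ingress packet to a forwarding class.

    MPLS packets are classified on the outer tunnel label's EXP bits;
    GRE (IPv4 only) and IP (IPv4/IPv6) packets on the DSCP bits of the
    IP header. Unmatched values fall back to best effort here."""
    if packet["encap"] == "mpls":
        return EXP_TO_FC.get(packet["outer_exp"], "be")
    return DSCP_TO_FC.get(packet["dscp"], "be")

print(classify({"encap": "mpls", "outer_exp": 5}))  # ef
print(classify({"encap": "ip", "dscp": 46}))        # ef
```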
Untrusted traffic uses multi-field classification (MFC), where the traffic is classified based on any IP criteria currently supported by the 7705 SAR filter policies; for example, source and destination IP address, source and destination port, whether or not the packet is fragmented, ICMP code, and TCP state. For information on MFC, refer to the 7705 SAR Router Configuration Guide, “Multi-field Classification (MFC)” and “IP, MAC, and VLAN Filter Entry Commands”.
In order to simplify QoS management through the network core, some operators aggregate multiple forwarding classes of traffic at the ingress LER or PE and use two or three QoS markings instead of the eight different QoS markings that a customer device might be using to dictate QoS treatment. However, in order to ensure the end-to-end QoS enforcement required by the customer, the aggregated markings must be mapped back to their original forwarding classes at the egress LER (eLER) or PE.
For IP traffic (including IPSec packets) riding over MPLS or GRE tunnels that will be routed to the base router, a VPRN interface, or an IES interface at the tunnel termination point (the eLER), the 7705 SAR can be configured to ignore the EXP/DSCP bits in the tunnel header when the packets arrive at the eLER. Instead, classification is based on the inner IP header which is essentially the customer IP packet header. This configuration is done using the ler-use-dscp command.
When the command is enabled on an ingress network IP interface, the IP interface will ignore the tunnel’s QoS mapping and will derive the internal forwarding class and associated profile state based on the DSCP values of the IP header ToS field rather than on the network QoS policy defined on the IP interface. This function is useful when the mapping for the tunnel QoS marking does not completely reflect the required QoS handling for the IP packet. The command applies only on the eLER where the tunnel or service is terminated and the next header in the packet is IP.
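The decision rule for ler-use-dscp can be paraphrased as a small sketch. This is an informal restatement of the conditions given above (command enabled, tunnel terminates on this node, next header is IP), not actual router logic; the function and return labels are hypothetical.

```python
def select_classification_field(ler_use_dscp, is_eler_termination,
                                next_header_is_ip):
    """Choose which header drives forwarding-class classification at a
    network ingress IP interface.

    Only when ler-use-dscp is enabled, the tunnel or service terminates
    on this node (eLER), and the next header in the packet is IP does
    the inner (customer) IP header's DSCP override the network QoS
    policy defined on the interface."""
    if ler_use_dscp and is_eler_termination and next_header_is_ip:
        return "inner-ip-dscp"
    return "network-qos-policy"

print(select_classification_field(True, True, True))    # inner-ip-dscp
print(select_classification_field(True, False, True))   # network-qos-policy
```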
Network ingress traffic can be classified into up to eight different forwarding classes, which are served by 16 queues (eight queues for unicast traffic and eight queues for multicast (BMU) traffic). Each queue serves at least one of the eight forwarding classes identified by the incoming EXP bits. These queues are automatically created by the 7705 SAR. Table 11 shows the default network QoS policy for the 16 CoS queues.
The values for CBS and MBS are percentages of the size of the buffer pool on the adapter card. MBS space can be shared across queues, which allows overbooking to occur.
Queue /FC | CIR (%) | PIR (%) | CBS (%) | MBS (%) |
Queue-1/BE | 0 | 100 | 0.1 | 5 |
Queue-2/L2 | 25 | 100 | 0.25 | 5 |
Queue-3/AF | 25 | 100 | 0.75 | 5 |
Queue-4/L1 | 25 | 100 | 0.25 | 2.5 |
Queue-5/H2 | 100 | 100 | 0.75 | 5 |
Queue-6/EF | 100 | 100 | 0.75 | 5 |
Queue-7/H1 | 10 | 100 | 0.25 | 2.5 |
Queue-8/NC | 10 | 100 | 0.25 | 2.5 |
Queue-9/BE | 0 | 100 | 0.1 | 5 |
Queue-10/L2 | 5 | 100 | 0.1 | 5 |
Queue-11/AF | 5 | 100 | 0.1 | 5 |
Queue-12/L1 | 5 | 100 | 0.1 | 2.5 |
Queue-13/H2 | 100 | 100 | 0.1 | 5 |
Queue-14/EF | 100 | 100 | 0.1 | 5 |
Queue-15/H1 | 10 | 100 | 0.1 | 2.5 |
Queue-16/NC | 10 | 100 | 0.1 | 2.5 |
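Since the CBS and MBS values in Table 11 are percentages of the adapter card's buffer pool, the byte values they yield depend on the pool size. A minimal sketch, assuming a hypothetical pool size (the actual pool size is hardware-specific and not given here):

```python
def queue_buffer_kbytes(pool_kbytes, pct):
    """Convert a CBS or MBS percentage from Table 11 into kbytes for a
    given buffer pool size (pool size is a hypothetical example)."""
    return pool_kbytes * pct / 100.0

pool = 131_072  # assumed pool size in kbytes -- illustrative only
print(queue_buffer_kbytes(pool, 0.25))  # CBS for Queue-2/L2 -> 327.68
print(queue_buffer_kbytes(pool, 5))     # MBS for Queue-2/L2 -> 6553.6

# Because MBS space is shared across queues, the sum of all queues' MBS
# allocations may exceed the pool (overbooking); CBS space is committed.
```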
At network ingress, broadcast, multicast, and unknown (BMU) traffic identified using DSCP and/or EXP (also known as LSP TC) is mapped to a forwarding class (FC). Since BMU traffic is considered to be multipoint traffic, the queue hosting BMU traffic must be configured with the multipoint keyword. Queues 9 through 16 support multipoint traffic (see Table 11). On any adapter card hosting network ports, up to 16 queues can be configured: 8 unicast queues and 8 multipoint queues.
Similar to unicast queues, BMU queues require configuration of:
In addition, as is the case for unicast queues, all other queue-based congestion management techniques apply to multipoint queues.
The benefits of using multipoint queues occur when the to-fabric shapers begin scheduling traffic towards the destination line card. To-fabric shapers can be configured for aggregate or per-destination mode. For more information, see BMU Support.
Network ingress scheduling is supported on the adapter cards and ports listed in Table 12. The supported scheduling modes are profiled, 4-priority, and 16-priority. Table 12 shows which scheduling mode each card and port supports at network ingress.
This section also contains information on the following topics:
Adapter Card or Port | Profiled | 4-Priority | 16-Priority |
8-port Ethernet Adapter cards, versions 1 and 2 on the 7705 SAR-8 and version 2 on the 7705 SAR-18 | ✓ | ||
8-port Gigabit Ethernet Adapter card | ✓ | ||
Packet Microwave Adapter card | ✓ | ||
2-port 10GigE (Ethernet) Adapter card/module | | ✓ |
6-port Ethernet 10Gbps Adapter card | ✓ 1 | ||
10-port 1GigE/1-port 10GigE X-Adapter card | ✓ | ||
4-port SAR-H Fast Ethernet module | ✓ | ||
6-port SAR-M Ethernet module | ✓ | ||
Ethernet ports on the 7705 SAR-A (all variants) | ✓ | ||
Ethernet ports on the 7705 SAR-Ax | ✓ | ||
Ethernet ports on the 7705 SAR-M (all variants) | ✓ | ||
Ethernet ports on the 7705 SAR-H | ✓ | ||
Ethernet ports on the 7705 SAR-Hc | ✓ | ||
Ethernet ports on the 7705 SAR-W | ✓ | ||
Ethernet ports on the 7705 SAR-Wx (all variants) | ✓ | ||
DSL ports on the 7705 SAR-Wx | ✓ | ||
Ethernet ports on the 7705 SAR-X | ✓ 1 | ||
16-port T1/E1 ASAP Adapter card | ✓ | ||
32-port T1/E1 ASAP Adapter card | ✓ | ||
2-port OC3/STM1 Channelized Adapter card | ✓ | ||
4-port OC3/STM1 Clear Channel Adapter card | ✓ | ||
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | ||
4-port DS3/E3 Adapter card | ✓ | ||
T1/E1 ASAP ports on the 7705 SAR-A (variants with T1/E1 ASAP ports) | ✓ |
T1/E1 ASAP ports on the 7705 SAR-M (variants with T1/E1 ASAP ports) | ✓ | ||
TDM ports on the 7705 SAR-X | ✓ |
Note:
The adapter cards listed in Table 12 under “4-Priority” can receive network ingress traffic when one or more ports on the card are configured for PPP/MLPPP.
The implementation of network ingress scheduling on the cards listed in Table 12 under “4-Priority” is very similar to the scheduling mechanisms used for adapter cards that are configured for access ingress traffic. That is, 4-priority scheduling is used (queue-type scheduling combined with profiled scheduling).
Note: The encapsulation type must be ppp-auto for PPP/MLPPP bundles on the following:
The adapter cards provide sets of eight queues for incoming traffic: 7 sets of queues for the 7705 SAR-8 and 17 sets of queues for the 7705 SAR-18. Each set of queues is specific to a destination adapter card. For the 7705 SAR-8 and 7705 SAR-18 respectively, 6 and 16 sets of queues are automatically created, one for each access egress adapter card, plus 1 set of queues for multicast traffic.
There is one additional set of queues for slow-path (control) traffic destined for the CSMs.
The individual queues within each set of queues provide buffer space for traffic isolation based on the CoS values being applied (from the received EXP bits).
All of the network ingress ports of the adapter card share the same sets of queues, which are created automatically.
Once the packets received from the network are mapped to queues, four access ingress-like queue-type and profile (rate-based) schedulers per destination card service the queues in strict priority. The following queue-type and profiled schedulers service the queues in the order listed.
To complete the operation, user-configurable shapers send the traffic into the fabric. See Configurable Ingress Shaping to Fabric (Access and Network) for details. Throughout this operation, each packet retains its individual CoS value.
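The combined queue-type and profiled scheduling can be modeled as a strict four-level service order. The specific order shown (expedited in-profile first, best-effort out-of-profile last) is an assumption based on the combined queue-type and profiled scheduling described above, not a verbatim specification.

```python
# Assumed strict service order for 4-priority (queue-type + profile)
# scheduling -- illustrative, not an official specification.
PRIORITY_ORDER = [
    ("expedited", "in-profile"),
    ("best-effort", "in-profile"),
    ("expedited", "out-of-profile"),
    ("best-effort", "out-of-profile"),
]

def service_order(backlog):
    """Return packets in the order a strict 4-priority scheduler would
    serve them. `backlog` maps (queue_type, profile) to packet lists."""
    out = []
    for key in PRIORITY_ORDER:
        out.extend(backlog.get(key, []))
    return out

backlog = {
    ("best-effort", "in-profile"): ["b1"],
    ("expedited", "out-of-profile"): ["e2"],
    ("expedited", "in-profile"): ["e1"],
}
print(service_order(backlog))  # ['e1', 'b1', 'e2']
```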
The adapter cards and ports that support 4-priority (Gen-3) scheduling for network ingress traffic are listed in Table 12. See QoS for Gen-3 Adapter Cards and Platforms for details.
Network ingress ports on the 8-port Ethernet Adapter card support profiled scheduling (rate-based scheduling). This scheduling is done in the same manner as for access ingress traffic described in Profiled (Rate-based) Scheduling.
The 8-port Ethernet Adapter card provides sets of eight queues for incoming traffic. Each set of queues is specific to a destination adapter card. For the 7705 SAR-8, 6 sets of queues are automatically created for each access egress adapter card, plus 1 set of queues for multicast traffic. For the 7705 SAR-18, 16 sets of queues are automatically created, plus 1 set of queues for multicast traffic. For both the 7705 SAR-8 and the 7705 SAR-18, there is 1 additional set of queues for slow-path (control) traffic that is destined for the CSMs.
Each queue within each set provides buffer space for traffic isolation based on the classification carried out on EXP bits of the MPLS packet header (that is, the CoS setting).
All of the network ingress ports on the 8-port Ethernet Adapter card share the same sets of queues, which are created automatically.
Once the packets received from the network are mapped to queues, two profiled (rate-based) schedulers per destination card service the queues in strict priority. The following profiled schedulers service the queues in the order shown.
To complete the operation, user-configurable shapers send the traffic into the fabric.
See Configurable Ingress Shaping to Fabric (Access and Network) for details. Throughout this operation, each packet retains its individual CoS value.
The cards and ports that support 16-priority scheduling for network ingress traffic are listed in Table 12.
For a detailed description of how 16-priority scheduling functions, refer to Network Egress 16-Priority Scheduling.
The 7705 SAR-8 and 7705 SAR-18 adapter cards, and the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx ports provide sets of 8 queues for incoming traffic: 7 sets of queues for the 7705 SAR-8, 17 sets of queues for the 7705 SAR-18, and 4 sets of queues for the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx.
Each set of queues is specific to a destination adapter card. For the 7705 SAR-8, 6 sets of queues are automatically created for each access egress adapter card, plus 1 set of queues for multicast traffic. For the 7705 SAR-18, 16 sets of queues are automatically created, plus 1 set of queues for multicast traffic. For the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, 3 sets of queues are automatically created, plus 1 set of queues for multicast traffic. For all these platforms, there is 1 additional set of queues for slow-path (control) traffic that is destined for the CSMs.
Each queue within each set provides buffer space for traffic isolation based on the classification carried out on EXP bits of the MPLS packet header (that is, the CoS setting).
All of the network ingress ports on an adapter card on a 7705 SAR-8 or 7705 SAR-18 share the same sets of queues, which are created automatically. All of the network ingress ports across the entire 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, or 7705 SAR-Wx also share the same sets of queues, which are created automatically.
After the traffic is scheduled, it must be sent to the fabric interface. In order to avoid congestion in the fabric and ease the effects of possible bursts, a shaper is implemented on each adapter card.
Network ingress shaping to the fabric operates in a similar fashion to access ingress shaping to the fabric. See Ingress Shaping to Fabric (Access and Network) for details.
Configuring network ingress shapers to the fabric is similar to configuring access ingress shapers to the fabric.
The ingress to-fabric shapers are user-configurable. For the 7705 SAR-8 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable Ingress Shaping to Fabric (Access and Network) for details.
For information about fabric shapers on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, see Fabric Shaping on the Fixed Platforms (Access and Network). The 7705 SAR-X does not support configurable network ingress shapers.
The 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-W, and 7705 SAR-Wx support user-configurable fabric shapers at rates of up to 5 Gb/s for access ingress traffic and network ingress traffic.
On the 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-W, and 7705 SAR-Wx, network ingress shapers to the fabric operate similarly to access ingress shapers to the fabric. The 7705 SAR-X does not support configurable network ingress shapers. See Fabric Shaping on the Fixed Platforms (Access and Network) for more information.
This section contains the following topics for traffic flow in the access ingress direction:
The following sections discuss the queuing and scheduling of access egress traffic, which is traffic that egresses the fabric on the access side:
Access egress scheduling takes place at the native traffic layer. As an example, once the ATM pseudowire payload is delivered from the network ingress to the access egress, the playback of the ATM cells to the appropriate ATM SAP is done according to ATM traffic management specifications.
Access egress scheduling is supported on the adapter cards and ports listed in Table 13. The supported scheduling modes are 4-priority and 16-priority. Table 13 shows which scheduling mode each card and port supports at access egress.
Note: For access ingress and egress, the 16-priority schedulers use additional hardware resources and capabilities, which results in increased throughput.
Adapter Card or Port | 4-Priority | 16-Priority |
8-port Ethernet Adapter cards, versions 1 and 2 on the 7705 SAR-8 and version 2 on the 7705 SAR-18 | ✓ | |
8-port Gigabit Ethernet Adapter card | ✓ | ✓ |
Packet Microwave Adapter card | ✓ | ✓ |
6-port Ethernet 10Gbps Adapter card | ✓ 1 | |
10-port 1GigE/1-port 10GigE X-Adapter card (10-port 1GigE mode) | ✓ | ✓ |
4-port SAR-H Fast Ethernet module | ✓ | |
6-port SAR-M Ethernet module | ✓ | ✓ |
Ethernet ports on the 7705 SAR-A (both variants) | ✓ | ✓ |
Ethernet ports on the 7705 SAR-Ax | ✓ | ✓ |
Ethernet ports on the 7705 SAR-H | ✓ | ✓ |
Ethernet ports on the 7705 SAR-Hc | ✓ | ✓ |
Ethernet ports on the 7705 SAR-M (all variants) | ✓ | ✓ |
Ethernet ports on the 7705 SAR-W | ✓ | ✓ |
Ethernet ports on the 7705 SAR-Wx (all variants) | ✓ | ✓ |
DSL ports on the 7705 SAR-Wx | ✓ | ✓ |
Ethernet ports on the 7705 SAR-X | ✓ 1 | |
16-port T1/E1 ASAP Adapter card | ✓ | |
32-port T1/E1 ASAP Adapter card | ✓ | |
2-port OC3/STM1 Channelized Adapter card | ✓ | |
4-port OC3/STM1 Clear Channel Adapter card | ✓ | |
4-port OC3/STM1 / 1-port OC12/STM4 Adapter card | ✓ | |
4-port DS3/E3 Adapter card | ✓ | |
T1/E1 ASAP ports on the 7705 SAR-A (variants with T1/E1 ASAP ports) | ✓ | |
T1/E1 ASAP ports on the 7705 SAR-M (variants with T1/E1 ASAP ports) | ✓ |
TDM ports on the 7705 SAR-X | ✓ | |
12-port Serial Data Interface card | ✓ | |
6-port E&M Adapter card | ✓ | |
6-port FXS Adapter card | ✓ | |
8-port FXO Adapter card | ✓ | |
8-port Voice & Teleprotection card | ✓ | |
Integrated Services card | ✓ |
Note:
At access egress, the 7705 SAR handles traffic management for unicast and BMU traffic in the same way. Unicast and/or BMU traffic is mapped to a queue and the mapping is based on the FC classification. Individual queues are then scheduled based on the available traffic.
After the ATM pseudowire is terminated at the access egress, all the ATM cells are mapped to the default queue (queue 1), and queuing is performed per SAP. ATM access egress queuing and scheduling applies to the 16-port T1/E1 ASAP Adapter card, 32-port T1/E1 ASAP Adapter card, and 2-port OC3/STM1 Channelized Adapter card with atm/ima encapsulation, and to the 4-port OC3/STM1 Clear Channel Adapter card and 4-port DS3/E3 Adapter card with atm encapsulation.
Once the per-SAP queuing takes place, the ATM Scheduler services these queues in the fashion and order defined below, based on the service categories assigned to each of these SAPs.
At access egress, CBR and rt-VBR VCs are always shaped; there is no option for the user to turn shaping off. Shaping for nrt-VBR is optional.
Strict priority scheduling in an exhaustive fashion takes place for the shaped VCs in the order listed below:
UBR traffic is not shaped. To offer maximum flexibility to the user, unshaped (also known as scheduled) nrt-VBR is also supported.
ATM traffic is serviced in priority order. CBR traffic has the highest priority and is serviced ahead of all other traffic. After all of the CBR traffic has been serviced, rt-VBR traffic is serviced. Then, nrt-VBR traffic is serviced.
After scheduling all the other traffic from the CBR and VBR service categories, UBR is serviced. If there is no other traffic, UBR can burst up to the line rate. Scheduled nrt-VBR is treated the same way as UBR. Both UBR and unshaped nrt-VBR are scheduled using the weighted round-robin scheduler.
The scheduler weight assigned to queues hosting scheduled nrt-VBR and UBR traffic is determined by the configured traffic rate. The weight used by the scheduler for UBR+ VCs depends on the Minimum Information Rate (MIR) defined by the user; UBR traffic with no MIR is treated as having an MIR of 0. Similarly, for scheduled nrt-VBR, the scheduler weight depends on the Sustained Information Rate (SIR). The weight used by the scheduler is programmed automatically based on the user-configured MIR or SIR value and is not user-configurable.
For UBR+, Table 14 and Table 15 are used to determine the weight of a UBR+ VC. These tables are also applicable to scheduled nrt-VBR weight determination. Instead of the MIR, the SIR is used to determine the scheduler weight.
Minimum Information Rate | Scheduler Weight |
<64 kb/s | 1 |
<128 kb/s | 2 |
<256 kb/s | 3 |
<512 kb/s | 4 |
<1024 kb/s | 5 |
<1536 kb/s | 6 |
<1920 kb/s | 7 |
≥1920 kb/s | 8 |
Rate Range (OC3 ATM) | Rate Range (DS3 ATM) | Scheduler Weight |
0 to 1 Mb/s | 0 to 512 kb/s | 1 |
>1 Mb/s to 4 Mb/s | >512 kb/s to 1 Mb/s | 2 |
>4 Mb/s to 8 Mb/s | >1 Mb/s to 2 Mb/s | 3 |
>8 Mb/s to 16 Mb/s | >2 Mb/s to 4 Mb/s | 4 |
>16 Mb/s to 32 Mb/s | >4 Mb/s to 8 Mb/s | 5 |
>32 Mb/s to 50 Mb/s | >8 Mb/s to 16 Mb/s | 6 |
>50 Mb/s to 100 Mb/s | >16 Mb/s to 32 Mb/s | 7 |
>100 Mb/s | >32 Mb/s | 8 |
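The weight lookup of Table 14 can be expressed directly in code. This sketch encodes the table's thresholds verbatim; the same function applies to scheduled nrt-VBR by passing the SIR instead of the MIR.

```python
# Scheduler-weight thresholds from Table 14 (rate in kb/s).
_THRESHOLDS = [(64, 1), (128, 2), (256, 3), (512, 4),
               (1024, 5), (1536, 6), (1920, 7)]

def scheduler_weight(rate_kbps):
    """Return the WRR weight for a UBR+ VC (by MIR) or a scheduled
    nrt-VBR VC (by SIR), per Table 14. UBR with no MIR is treated as
    rate 0 and therefore gets the lowest weight, 1."""
    for threshold, weight in _THRESHOLDS:
        if rate_kbps < threshold:
            return weight
    return 8  # >= 1920 kb/s

print(scheduler_weight(0))     # 1  (UBR, no MIR)
print(scheduler_weight(512))   # 5
print(scheduler_weight(4000))  # 8
```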
The access egress ATM scheduling behavior is shown in Table 16. For UBR traffic, the lowest possible scheduler weight (1) is always used. Only cell-based operations are carried out.
Flow type | Transmission Rate | Priority |
Shaped CBR | Limited to configured PIR | Strict priority over all other traffic |
Shaped rt-VBR | Limited to configured SIR, but with bursts up to PIR within MBS | Strict priority over all but shaped CBR |
Shaped nrt-VBR | Limited to configured SIR, but with bursts up to PIR within MBS | Strict priority over all scheduled traffic |
Scheduled nrt-VBR | Weighted share (according to SIR) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as UBR+ and UBR |
Scheduled UBR+ | Weighted share (according to MIR) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as nrt-VBR and UBR |
Scheduled UBR | Weighted share (with weight of 1) of port bandwidth remaining after shaped traffic has been exhausted | In the same WRR scheduler as nrt-VBR and UBR+ |
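The priority structure of Table 16 can be sketched as follows: shaped CBR, rt-VBR, and nrt-VBR are served in strict priority, and the remaining flows share leftover bandwidth in one WRR scheduler. This is an illustrative model; the WRR flows are returned together sorted by weight to show relative share, not the actual cell-level interleaving.

```python
def atm_egress_order(flows):
    """Order ATM flows for service per Table 16: shaped CBR first, then
    shaped rt-VBR, then shaped nrt-VBR; scheduled nrt-VBR, UBR+, and UBR
    then share the remainder in one WRR scheduler (shown here sorted by
    descending weight). `flows` is a list of dicts."""
    strict = {"shaped-cbr": 0, "shaped-rt-vbr": 1, "shaped-nrt-vbr": 2}
    shaped = sorted((f for f in flows if f["category"] in strict),
                    key=lambda f: strict[f["category"]])
    wrr = sorted((f for f in flows if f["category"] not in strict),
                 key=lambda f: -f["weight"])
    return [f["name"] for f in shaped + wrr]

flows = [
    {"name": "ubr", "category": "ubr", "weight": 1},
    {"name": "cbr1", "category": "shaped-cbr"},
    {"name": "ubr-plus", "category": "ubr+", "weight": 5},
    {"name": "rtvbr", "category": "shaped-rt-vbr"},
]
print(atm_egress_order(flows))  # ['cbr1', 'rtvbr', 'ubr-plus', 'ubr']
```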
Ethernet access egress queuing and scheduling is very similar to the Ethernet access ingress behavior. Once the Ethernet pseudowire is terminated, traffic is mapped to up to eight different forwarding classes per SAP. The mapping is based on the EXP bit settings of the received Ethernet pseudowire, as determined by network ingress classification.
Queue-type and profile scheduling are both supported for Ethernet access egress ports. If the queues are configured according to the tables and defaults described in this guide (implying a default mode of operation), the configuration is as follows:
In this default configuration, for queue-type scheduling, CoS-8 to CoS-5 are serviced by the Expedited scheduler, and CoS-4 to CoS-1 are serviced by the Best Effort scheduler. This default mode of operation can be altered to better fit the operating characteristics of certain SAPs.
With profile scheduling, the Ethernet frames can be either in-profile or out-of-profile, and scheduling takes into account the state of the Ethernet frames in conjunction with the configured CIR and PIR rates.
After the queuing, an aggregate queue-type and profile scheduling takes place in the following order:
Once the traffic is scheduled using the aggregate queue-type and profile schedulers, the per-port shapers shape the traffic at a sub-rate (that is, at the configured/shaped port rate), ensuring that the sub-rate is enforced at all times.
Per-SAP aggregate shapers in the access egress direction operate in a similar fashion to aggregate shapers for access ingress, except that egress traffic goes through the schedulers to the egress port shaper instead of through the schedulers to the fabric port as in the access ingress case. For information on how access egress and access ingress per-SAP shaping is similar, see Access Ingress Per-SAP Aggregate Shapers (Access Ingress H-QoS). For general information on per-SAP shapers, see Per-SAP Aggregate Shapers (H-QoS) On Gen-2 Hardware.
The arbitration of access egress traffic from the per-SAP aggregate shapers to the schedulers is described in the following section.
The arbitration of traffic from 4-priority and 16-priority schedulers towards an access egress port is achieved by configuring a committed aggregate rate limit for the aggregate of all the 4-priority unshaped SAPs. By configuring the 4-priority unshaped SAPs committed aggregate rate, the arbitration between the 16-priority shaped SAPs, 16-priority unshaped SAPs, and 4-priority unshaped SAPs is handled in the following priority order:
Figure 22 illustrates the traffic treatment for a single Ethernet port. It also illustrates that the shaped SAP aggregate CIR rate competes with the unshaped 4-priority aggregate CIR rate for port bandwidth. Once the aggregate CIR rates are satisfied, the shaped SAP aggregate PIR rate competes with the 4-priority PIR rate (always maximum) for port bandwidth.
The egress aggregate CIR rate limit for all the unshaped 4-priority SAPs is configured using the config>port>ethernet>access>egress>unshaped-sap-cir command.
Hybrid ports use a third-tier, dual-rate aggregate shaper to provide arbitration between the bulk of access and network egress traffic flows. For details, see QoS for Hybrid Ports on Gen-2 Hardware.
The adapter cards and ports that support 4-priority (Gen-3) scheduling for access egress traffic are listed in Table 13. See QoS for Gen-3 Adapter Cards and Platforms for details.
At access egress, where the network-wide QoS boundary is reached, there may be a requirement to mark or re-mark the CoS indicators to match customer requirements. Dot1p and DSCP marking and re-marking is supported at Ethernet access egress.
Note: When dot1p re-marking is needed for a qinq egress SAP, it may be necessary to use the qinq-mark-top-only command to indicate which qtag needs to have its dot1p bits re-marked. The qinq-mark-top-only command is found under the config>service context. Refer to the 7705 SAR Services Guide, “VLL Services Command Reference”, for details.
Similar to access ingress for Ethernet, DSCP marking or re-marking is supported for untagged, single-tagged, or double-tagged Ethernet frames.
On Ipipe SAPs over an Ethernet VLAN, both dot1p and DSCP marking and re-marking are supported at access egress. On Ipipe SAPs over PPP/MLPPP, DSCP marking and re-marking is supported at access egress. DSCP re-marking is supported for Ipipes using FR or cHDLC SAPs at access egress.
Packet byte offset (PBO), or internal headerless rate, allows 7705 SAR schedulers to operate on a modified packet size by adding or subtracting a certain number of bytes. The actual packet size remains the same but schedulers take into account the modified size as opposed to the actual size of the packet. One of the main uses of the packet byte offset feature is to allow scheduling, at access ingress, to be carried out on the received packet size without taking into account service (for example, PW, MPLS) or internal overhead. Transport providers who sell bandwidth to customers typically need the 7705 SAR shapers/schedulers to only take into account the received packet size without the added overhead in order to accurately calculate the bandwidth they need to provide to their customers. Packet byte offset addresses this requirement. Another common use is at egress where port shapers can take into account four additional bytes, associated with Ethernet FCS.
Packet byte offset is configured under QoS profiles. Packet size modification might be desired to accommodate inclusion or exclusion of certain headers or even fields of headers during the scheduling operation. The packet size that the schedulers take into account is altered to accommodate or omit the desired number of bytes. Both addition and subtraction options are supported by the packet-byte-offset command. The actual packet size is not modified by the command; only the size used by ingress or egress schedulers is changed. The scheduling rates are affected by the offset, as well as the statistics (accounting) associated with the queue. Packet byte offset does not affect port-level and service-level statistics. It only affects the queue statistics.
When a QoS policy configured with packet byte offset is applied to a SAP or network interface, all the octet counters and statistics operate and report based on the new adjusted value. If configured, per-SAP aggregate shapers and per-customer aggregate shapers also operate on the adjusted packet sizes. The only exception is the egress port shapers, which operate on the final packet size and do not take the adjusted packet size into account, except on the 8-port Ethernet Adapter card, version 2.
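The packet byte offset arithmetic itself is simple: schedulers and queue statistics account for an adjusted size while the packet is unchanged. A minimal sketch; the 14-byte subtraction in the second example is a hypothetical overhead figure, and the valid offset range is hardware-specific.

```python
def scheduled_size(actual_bytes, offset_bytes):
    """Packet size as seen by schedulers and queue statistics when a
    packet-byte-offset is configured. Positive offsets add bytes,
    negative offsets subtract; the packet on the wire is unchanged.

    Illustrative arithmetic only; clamped at zero for safety."""
    return max(actual_bytes + offset_bytes, 0)

# Egress example: account for the 4-byte Ethernet FCS at the scheduler.
print(scheduled_size(1500, 4))    # 1504
# Access ingress example: subtract an assumed 14 bytes of service
# overhead so scheduling reflects the received customer packet size.
print(scheduled_size(1514, -14))  # 1500
```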
Note: The fabric shaper, in general, does not take the adjusted packet size into account, except on the 8-port Ethernet Adapter card, version 2.
Table 17 shows PBO support on the 7705 SAR.
Traffic Direction and PBO Count | Second / Third Generation Adapter Cards and Platforms | First Generation Adapter Cards | |||||
Per SAP CoS Queue | Per SAP Shaper | Per Customer Shaper | Fabric Shaper 7705 SAR-8 / 7705 SAR-18 | Fabric Shaper 7705 SAR-8 / 7705 SAR-18 | |||
Sum of adjusted MSS shapers ≤ fabric shapers | Sum of adjusted MSS shapers > fabric shapers |
Access ingress | ✓ | ✓ | ✓ | internal packet size, no FCS | internal packet size, no FCS | ✓ | |
auto | 3 | 3 | 3 | 3 | internal packet size, no FCS (fabric shaper rate) | 3 | |
add 50 | 2 | 2 | 2 | 2 | 3 | 2 | |
subtract 50 | 6 | 6 | 6 | 6 | 3 | 6 | |
Per CoS Queue | Bypass | Fabric Shaper (on Non-Chassis Based Nodes) Otherwise Bypass | Fabric Shaper 7705 SAR-8 / 7705 SAR-18 | Fabric Shaper 7705 SAR-8 / 7705 SAR-18 | |||
Network ingress | ✓ | n/a | ✓ | ✓ | ✓ | ||
add 50 | 2 | n/a | 2 | 2 | 2 | ||
subtract 50 | 6 | n/a | 6 | 6 | 6 | ||
Per SAP CoS Queue | Per SAP Shaper | Per Customer Shaper | Port Shaper | Port Shaper | |||
Sum of adjusted MSS shapers ≤ egress rate | Sum of adjusted MSS shapers > egress rate |
Access egress | ✓ | ✓ | ✓ | ✓ | final packet size, FCS optional | ✓ | |
add 50 | 2 | 2 | 2 | 2 (room for 3) | 3 | 2 | |
subtract 50 | 6 | 6 | 6 | 6 | 3 | 6 | |
Per CoS Queue | Per VLAN Shaper | Bypass | Port Shaper | Port Shaper | |||
Sum of adjusted VLAN shapers ≤ egress rate | Sum of adjusted VLAN shapers > egress rate |
Network egress | ✓ | ✓ | n/a | ✓ | final packet size, FCS optional | ✓ | n/a if VLAN shaper is in use |
add 50 | 2 | 2 | n/a | 2 (room for 3) | 2 (room for 3) | 2 | n/a |
subtract 50 | 6 | 6 | n/a | 6 | 3 | 6 | n/a |
Per Access / Network CoS queue | SAP / VLAN Shaper | Per Customer Shaper Access / Network Arbitrator | Port Shaper | n/a | |||
Sum of adjusted MSS/NW arbitrator shapers ≤ egress rate | Sum of adjusted MSS/NW arbitrator shapers > egress rate |
Hybrid egress | ✓ | ✓ | ✓ | ✓ | final packet size, FCS optional | n/a | |
add 50 | 2 | 2 | 2 | 2 (room for 3) | 3 | n/a | |
subtract 50 | 6 | 6 | 6 | 6 | 3 | n/a |
This section contains the following topics related to QoS policies:
7705 SAR QoS policies are applied on service ingress, service egress, and network interfaces. The service ingress and service egress points may be considered as the network QoS boundaries for the service being provided.
The QoS policies define:
There are several types of QoS policies (see Table 18 for summaries and references to details):
Note: The terms access ingress/egress and service ingress/egress are interchangeable. The previous sections used the term access, and the sections that follow use the term service.
Service ingress QoS policies are applied to the customer-facing Service Access Points (SAPs) and map traffic to forwarding class queues on ingress. The mapping of traffic to queues can be based on combinations of customer QoS marking (dot1p bits and DSCP values). The number of forwarding class queues for ingress traffic and the queue characteristics are defined within the policy. There can be up to eight ingress forwarding class queues in the policy, one for each forwarding class.
Within a service ingress QoS policy, up to three queues per forwarding class can be used for multipoint traffic for multipoint services. Multipoint traffic consists of broadcast, multicast, and unknown (BMU) traffic types. For VPLS, four types of forwarding are supported (which are not to be confused with forwarding classes): unicast, broadcast, multicast, and unknown. The BMU types are flooded to all destinations within the service, while the unicast forwarding type is handled in a point-to-point fashion within the service.
Service ingress QoS policies on the 7705 SAR permit flexible arrangement of these queues. For example, more than one FC can be mapped to a single queue, both unicast and multipoint (BMU) traffic can be mapped to a single queue, or unicast and BMU traffic can be mapped to separate queues. Therefore, customers are not limited to the default configurations that are described in this guide.
Service egress QoS policies are applied to egress SAPs and provide the configurations needed to map forwarding classes to service egress queues. Each service can have up to eight queues configured, since a service may require multiple forwarding classes. A service egress QoS policy also defines how to re-mark dot1p bits and DSCP values of the customer traffic in native format based on the forwarding class of the customer traffic.
Network ingress and egress QoS policies are applied to network interfaces. On ingress for traffic received from the network, the policy maps incoming EXP values to forwarding classes and profile states. On egress, the policy maps forwarding classes and profile states to EXP values for traffic to be transmitted into the network.
On the network side, there are two types of QoS policies: network and network queue (see Table 18). The network type of QoS policy is applied to the network interface under the config>router>interface command and contains the EXP marking rules for both ingress and egress. The network queue type of QoS policy defines all of the internal settings; that is, how the queues, or sets of queues (for ingress), are set up and used per physical port on egress and per adapter card for ingress.
A ring type network policy can be applied to the ring ports and the add/drop port on a ring adapter card. The policy is created under the config>qos>network command, and applied at the adapter card level under the config>card>mda command. The policy maps each dot1p value to a queue and a profile state.
If GRE or IP tunneling is enabled, policy mapping can be set up to use DSCP bits.
Network queue policies are applied on egress to network ports and channels and on ingress to adapter cards. The policies define the forwarding class queue characteristics for these entities.
Service ingress, service egress, and network QoS policies are defined with a scope of either template or exclusive. Template policies can be applied to multiple SAPs or interfaces, whereas exclusive policies can only be applied to a single entity.
One service ingress QoS policy and one service egress QoS policy can be applied to a specific SAP. One network QoS policy can be applied to a specific interface. A network QoS policy defines both ingress and egress behavior. If no QoS policy is explicitly applied to a SAP or network interface, a default QoS policy is applied.
Table 18 provides a summary of the major functions performed by the QoS policies.
Policy Type | Applied at… | Description | Section |
Service Ingress | SAP ingress | Defines up to eight forwarding class queues and queue parameters for traffic classification Defines match criteria to map flows to the queues based on combinations of customer QoS (dot1p bits and DSCP values) | |
Service Egress | SAP egress | Defines up to eight forwarding class queues and queue parameters for traffic classification Maps one or more forwarding classes to the queues | |
MC-MLPPP | SAP egress | Defines up to eight forwarding class queues and queue parameters for traffic classification Maps one or more forwarding classes to the queues | |
Network | Network interface | Packets are marked using QoS policies on edge devices, such as the 7705 SAR at access ingress. Invoking a QoS policy on a network port allows for the packets that match the policy criteria to be re-marked at network egress for appropriate CoS handling across the network | |
Network Queue | Adapter card network ingress and egress | Defines forwarding class mappings to network queues | |
Slope | Adapter card ports | Enables or disables the high-slope and low-slope parameters within the egress or ingress queue | |
ATM Traffic Descriptor Profile | SAP ingress | Defines the expected rates and characteristics of traffic. Specified traffic parameters are used for policing ATM cells and for selecting the service category for the per-VC queue. | |
SAP egress | Defines the expected rates and characteristics of traffic. Specified traffic parameters are used for scheduling and shaping ATM cells and for selecting the service category for the per-VC queue. | ||
Fabric Profile | Adapter card access and network ingress | Defines access and network ingress to-fabric shapers at user-configurable rates | |
Shaper | Adapter card ports | Defines dual-rate shaping parameters for a shaper group in a shaper policy |
Service ingress QoS policies define ingress service forwarding class queues and map flows to those queues. When a service ingress QoS policy is created, it always has a default ingress traffic queue defined that cannot be deleted. These queues exist within the definition of the policy. The queues only get created when the policy is applied to a SAP.
In the simplest service ingress QoS policy, all traffic is treated as a single flow and mapped to a single queue. The required elements to define a service ingress QoS policy are:
Optional service ingress QoS policy elements include:
Each queue can have unique queue parameters to allow individual policing and rate shaping of the flow mapped to the forwarding class. Figure 23 depicts service traffic being classified into three different forwarding class queues.
Mapping flows to forwarding classes is controlled by comparing each packet to the match criteria in the QoS policy. The ingress packet classification to forwarding class and enqueuing priority is subject to a classification hierarchy. Each type of classification rule is interpreted with a specific priority in the hierarchy.
Table 19 is given as an example for an Ethernet SAP (that is, a SAP defined over a whole Ethernet port, over a single VLAN, or over QinQ VLANs). It lists the classification rules in the order in which they are evaluated.
Rule | Forwarding Class | Enqueuing Priority | Comments |
default-fc | Set to the policy’s default FC. | Set to the policy default | All packets match the default rule |
dot1p dot1p-value | Set when an fc-name exists in the policy. Otherwise, preserve from the previous match. | Set when the priority parameter is high or low. Otherwise, preserve from the previous match. | Each dot1p-value must be explicitly defined. Each packet can only match a single dot1p rule. For QinQ applications, the dot1p-value used (top or bottom) is specified by the match-qinq-dot1p command. |
dscp dscp-name | Set when an fc-name exists in the policy. Otherwise, preserve from the previous match. | Set when the priority parameter is high or low in the entry. Otherwise, preserve from the previous match. | Each dscp-name that defines the DSCP value must be explicitly defined. Each packet can only match a single DSCP rule. |
The enqueuing priority is specified as part of the classification rule and is set to high or low. The enqueuing priority relates to the forwarding class queue’s high-priority-only allocation, where only packets with a high enqueuing priority are accepted into the queue once the queue’s depth reaches the defined threshold. See High-Priority-Only Buffers.
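The evaluation order in Table 19 can be sketched in code. The following Python fragment is an illustrative model only — the policy dictionary and field names are invented and do not reflect the actual SAR data model. It shows how a later matching rule overrides only the fields it explicitly sets, preserving the rest from the previous match.

```python
# Hypothetical sketch of the service ingress classification hierarchy
# (default-fc, then dot1p, then DSCP). Structure names are illustrative.

def classify(packet, policy):
    """Return (forwarding_class, enqueuing_priority) for a packet.

    Rules are evaluated in order; a later match overrides only the
    fields it explicitly sets ("preserve from the previous match").
    """
    fc = policy["default_fc"]
    prio = policy["default_priority"]

    # dot1p rule: each dot1p value must be explicitly defined
    rule = policy.get("dot1p", {}).get(packet.get("dot1p"))
    if rule:
        fc = rule.get("fc", fc)            # set only if fc-name exists
        prio = rule.get("priority", prio)  # set only if high/low given

    # DSCP rule: evaluated after dot1p, so it takes precedence
    rule = policy.get("dscp", {}).get(packet.get("dscp"))
    if rule:
        fc = rule.get("fc", fc)
        prio = rule.get("priority", prio)

    return fc, prio

policy = {
    "default_fc": "be", "default_priority": "low",
    "dot1p": {5: {"fc": "ef", "priority": "high"}},
    "dscp": {"af11": {"fc": "af"}},  # no priority -> preserved
}
print(classify({"dot1p": 5, "dscp": "af11"}, policy))  # ('af', 'high')
```

Note how the DSCP rule changes the forwarding class but, lacking a priority parameter, leaves the enqueuing priority set by the dot1p match.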
The mapping of ingress traffic to a forwarding class based on dot1p or DSCP bits is optional. The default service ingress policy is implicitly applied to all SAPs that do not explicitly have another service ingress policy assigned. The characteristics of the default policy are listed in Table 20.
Characteristic | Item | Definition |
Queues | Queue 1 | One queue for all ingress traffic:
|
Flows | Default FC | One flow defined for all traffic:
|
Service egress queues are implemented at the transition from the service network to the service access network. The advantages of per-service queuing before transmission into the access network are:
The subrate capabilities and per-service scheduling control are required to make multiple services per physical port possible. Without egress shaping, it is impossible to support more than one service per port. There is no way to prevent service traffic from bursting to the available port bandwidth and starving other services.
For accounting purposes, per-service statistics can be logged. When statistics from service ingress queues are compared with service egress queues, the ability to conform to per-service QoS requirements within the service network can be measured. The service network statistics are a major asset to network provisioning tools.
Service egress QoS policies define egress service queues and map forwarding class flows to queues. In the simplest service egress QoS policy, all forwarding classes are treated as a single flow and mapped to a single queue.
To define a basic service egress QoS policy, the following are required:
Optional service egress QoS policy elements include:
Each queue in a policy is associated with one or more of the supported forwarding classes. Each queue can have its individual queue parameters, allowing individual rate shaping of the forwarding classes mapped to the queue. More complex service queuing models are supported in the 7705 SAR where each forwarding class is associated with a dedicated queue.
The forwarding class determination per service egress packet is determined at ingress. If the packet ingressed the service on the same 7705 SAR router, the service ingress classification rules determine the forwarding class of the packet. If the packet was received over a service transport tunnel, the forwarding class is marked in the tunnel transport encapsulation.
Service egress QoS policy ID 1 is reserved as the default service egress policy. The default policy cannot be deleted or changed.
The default service egress policy is applied to all SAPs that do not have another service egress policy explicitly assigned. The characteristics of the default policy are listed in Table 21.
Characteristic | Item | Definition |
Queues | Queue 1 | One queue defined for all traffic classes:
|
Flows | Default action | One flow defined for all traffic classes:
|
Note:
SAPs running MC-MLPPP have their own SAP egress QoS policies that differ from standard policies. Unlike standard SAP policies, MC-MLPPP SAP egress policies do not contain queue types, CIR, CIR adaptation rules, or dot1p re-marking.
Standard and MC-MLPPP SAP egress policies can never have the same policy ID except when the policy ID is 1 (default). Standard SAP egress QoS policies cannot be applied to SAPs running MC-MLPPP. Similarly, MC-MLPPP SAP egress QoS policies cannot be applied to standard SAPs. The default policy can be applied to both MC-MLPPP and other SAPs. It will remain the default policy regardless of SAP type.
MC-MLPPP on the 7705 SAR supports scheduling based on multi-class implementation. Instead of the standard profiled queue-type scheduling, an MC-MLPPP encapsulated access port performs class-based traffic servicing.
The four MC-MLPPP classes are scheduled in a strict priority fashion, as shown in Table 22.
MC-MLPPP Class | Priority |
0 | Priority over all other classes |
1 | Priority over classes 2 and 3 |
2 | Priority over class 3 |
3 | No priority |
For example, if a packet is sent to an MC-MLPPP class 3 queue and all other queues are empty, the 7705 SAR fragments the packet according to the configured fragment size and begins sending the fragments. If a new packet is sent to an MC-MLPPP class 2 queue, the 7705 SAR finishes sending any fragments of the class 3 packet that are on the wire, then holds back the remaining fragments in order to service the higher-priority packet. The fragments of the first packet remain at the top of the class 3 queue. For packets of the same class, MC-MLPPP class queues operate on a first-in, first-out basis.
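The strict-priority servicing of Table 22 amounts to always draining the lowest-numbered non-empty class. The sketch below is a conceptual model only; the queue structures are invented for illustration.

```python
from collections import deque

# Strict-priority selection among the four MC-MLPPP classes (Table 22):
# the scheduler always services the lowest-numbered non-empty class.

def next_class(queues):
    for cls in range(4):          # class 0 has priority over all others
        if queues[cls]:
            return cls
    return None                   # all queues empty

queues = {c: deque() for c in range(4)}
queues[3].append("frag-a")        # class 3 fragment waiting
queues[2].append("pkt-b")         # class 2 packet arrives
print(next_class(queues))         # 2: class 2 serviced before class 3
```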
The user configures the required number of MLPPP classes to use on a bundle. The forwarding class of the packet, as determined by the ingress QoS classification, is used to determine the MLPPP class for the packet. The mapping of forwarding class to MLPPP class is a function of the user-configurable number of MLPPP classes. The default mapping for a 4-class, 3-class, and 2-class MLPPP bundle is shown in Table 23.
FC ID | FC Name | MLPPP Class 4-class bundle | MLPPP Class 3-class bundle | MLPPP Class 2-class bundle |
7 | NC | 0 | 0 | 0 |
6 | H1 | 0 | 0 | 0 |
5 | EF | 1 | 1 | 1 |
4 | H2 | 1 | 1 | 1 |
3 | L1 | 2 | 2 | 1 |
2 | AF | 2 | 2 | 1 |
1 | L2 | 3 | 2 | 1 |
0 | BE | 3 | 2 | 1 |
If one or more forwarding classes are mapped to a queue, the scheduling priority of the queue is based on the lowest forwarding class mapped to it. For example, if forwarding classes 0 and 7 are mapped to a queue, the queue is serviced by MC-MLPPP class 3 in a 4-class bundle model.
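The default mappings in Table 23, together with the lowest-FC rule above, can be captured as a simple lookup. This is an illustrative transcription of the table, not the router's internal representation.

```python
# Default forwarding-class to MLPPP class mapping from Table 23,
# keyed by FC ID (0 = be ... 7 = nc) and bundle class count.
DEFAULT_MLPPP_CLASS = {
    7: {4: 0, 3: 0, 2: 0},  # nc
    6: {4: 0, 3: 0, 2: 0},  # h1
    5: {4: 1, 3: 1, 2: 1},  # ef
    4: {4: 1, 3: 1, 2: 1},  # h2
    3: {4: 2, 3: 2, 2: 1},  # l1
    2: {4: 2, 3: 2, 2: 1},  # af
    1: {4: 3, 3: 2, 2: 1},  # l2
    0: {4: 3, 3: 2, 2: 1},  # be
}

def queue_mlppp_class(fc_ids, num_classes=4):
    """MLPPP class used to service a queue: determined by the
    lowest forwarding class mapped to that queue."""
    return DEFAULT_MLPPP_CLASS[min(fc_ids)][num_classes]

print(queue_mlppp_class({0, 7}))  # 3: FCs be and nc share a queue
```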
The QoS mechanisms within the 7705 SAR are specialized for the type of traffic on the interface. For customer interfaces, there is service ingress and service egress traffic, and for network interfaces, there is network ingress and network egress traffic.
The 7705 SAR uses QoS policies applied to a SAP for a service or to a network port to define the queuing, queue attributes, and QoS marking/interpretation.
The 7705 SAR supports the following types of network and service QoS policies:
Note: Queuing parameters are the same for both network and service QoS policies. See Network and Service QoS Queue Parameters.
Network QoS policies define egress QoS marking and ingress QoS classification for traffic on network interfaces. The 7705 SAR automatically creates egress queues for each of the forwarding classes on network interfaces.
A network QoS policy defines ingress, egress, and ring handling of QoS on the network interface. The following functions are defined:
The required elements to be defined in a network QoS policy are:
Optional ip-interface type network QoS policy elements include the LSP EXP value or DSCP name to forwarding class and profile state mappings for all EXP values or DSCP values received. Optional ring type network QoS policy elements include the dot1p bits value to queue and profile state mappings for all dot1p bit values received.
Network policy ID 1 is reserved as the default network QoS policy. The default policy cannot be deleted or changed. The default network QoS policy is applied to all network interfaces and ring ports (for ring adapter cards) that do not have another network QoS policy explicitly assigned.
Table 24, Table 25, Table 26, and Table 27 list the various network QoS policy default mappings.
Table 24 lists the default mapping of forwarding class to LSP EXP values and DSCP names for network egress.
FC-ID | FC Name | FC Label | DiffServ Name | Egress LSP EXP Marking | Egress DSCP Marking | ||
In-Profile | Out-of-Profile | In-Profile Name | Out-of-Profile Name | ||||
7 | Network Control | nc | NC2 | 111 - 7 | 111 - 7 | nc2 111000 - 56 | nc2 111000 - 56 |
6 | High-1 | h1 | NC1 | 110 - 6 | 110 - 6 | nc1 110000 - 48 | nc1 110000 - 48 |
5 | Expedited | ef | EF | 101 - 5 | 101 - 5 | ef 101110 - 46 | ef 101110 - 46 |
4 | High-2 | h2 | AF4 | 100 - 4 | 100 - 4 | af41 100010 - 34 | af42 100100 - 36 |
3 | Low-1 | l1 | AF2 | 011 - 3 | 010 - 2 | af21 010010 - 18 | af22 010100 - 20 |
2 | Assured | af | AF1 | 011 - 3 | 010 - 2 | af11 001010 - 10 | af12 001100 - 12 |
1 | Low-2 | l2 | CS1 | 001 - 1 | 001 - 1 | cs1 001000 - 8 | cs1 001000 - 8 |
0 | Best Effort | be | BE | 000 - 0 | 000 - 0 | be 000000 - 0 | be 000000 - 0 |
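The egress marking in Table 24 reduces to a lookup keyed on forwarding class and profile state. The sketch below transcribes only the EXP columns of the table; the function and structure names are invented for illustration.

```python
# Egress LSP EXP marking from Table 24: FC label -> (in-profile EXP,
# out-of-profile EXP). Only l1 and af mark in- and out-of-profile
# traffic differently by default.
EGRESS_EXP = {
    "nc": (7, 7), "h1": (6, 6), "ef": (5, 5), "h2": (4, 4),
    "l1": (3, 2), "af": (3, 2), "l2": (1, 1), "be": (0, 0),
}

def egress_exp(fc, in_profile):
    in_exp, out_exp = EGRESS_EXP[fc]
    return in_exp if in_profile else out_exp

print(egress_exp("af", True))   # 3
print(egress_exp("af", False))  # 2
```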
For network ingress, Table 25 lists the default mapping of DSCP name to forwarding class and profile state for the default network QoS policy.
Ingress DSCP | Forwarding Class | ||||
DSCP Name | DSCP Value | FC ID | Name | Label | Profile State |
Default 1 | 0 | Best-Effort | be | Out | |
ef | 101110 - 46 | 5 | Expedited | ef | In |
cs1 | 001000 - 8 | 1 | Low-2 | l2 | In |
nc-1 | 110000 - 48 | 6 | High-1 | h1 | In |
nc-2 | 111000 - 56 | 7 | Network Control | nc | In |
af11 | 001010 - 10 | 2 | Assured | af | In |
af12 | 001100 - 12 | 2 | Assured | af | Out |
af13 | 001110 - 14 | 2 | Assured | af | Out |
af21 | 010010 - 18 | 3 | Low-1 | l1 | In |
af22 | 010100 - 20 | 3 | Low-1 | l1 | Out |
af23 | 010110 - 22 | 3 | Low-1 | l1 | Out |
af31 | 011010 - 26 | 3 | Low-1 | l1 | In |
af32 | 011100 - 28 | 3 | Low-1 | l1 | Out |
af33 | 011110 - 30 | 3 | Low-1 | l1 | Out |
af41 | 100010 - 34 | 4 | High-2 | h2 | In |
af42 | 100100 - 36 | 4 | High-2 | h2 | Out |
af43 | 100110 - 38 | 4 | High-2 | h2 | Out |
Table 26 lists the default mapping of LSP EXP values to forwarding class and profile state for the default network QoS policy.
Ingress LSP EXP | Forwarding Class | ||||
LSP EXP ID | LSP EXP Value | FC ID | Name | Label | Profile State |
Default 1 | 0 | Best-Effort | be | Out | |
1 | 001 - 1 | 1 | Low-2 | l2 | In |
2 | 010 - 2 | 2 | Assured | af | Out |
3 | 011 - 3 | 2 | Assured | af | In |
4 | 100 - 4 | 4 | High-2 | h2 | In |
5 | 101 - 5 | 5 | Expedited | ef | In |
6 | 110 - 6 | 6 | High-1 | h1 | In |
7 | 111 - 7 | 7 | Network Control | nc | In |
Table 27 lists the default mapping of dot1p values to queue and profile state for the default network QoS policy.
Dot1p Value | Queue | Profile State |
0 | 1 | Out |
1 | 2 | In |
2 | 3 | Out |
3 | 3 1 | In |
4 | 5 | In |
5 | 6 | In |
6 | 7 | In |
7 | 8 | In |
The 7705 SAR is the source of some types of traffic; for example, a link state PDU for sending IS-IS topology updates or an SNMP trap sent to indicate that an event has happened. This type of traffic that is created by the 7705 SAR is considered to be self-generated traffic (SGT). Another example of self-generated traffic is Telnet, but in that application, user commands initiate the sending of the Telnet traffic.
Network operators often have different QoS models throughout their networks and apply different QoS schemes to portions of the networks in order to better accommodate delay, jitter, and loss requirements of different applications. The class of service (DSCP or dot1p) bits of self-generated traffic can be marked on a per-application basis to match the network operator’s QoS scheme. This marking option enhances the ability of the 7705 SAR to match the various requirements of these applications.
The 7705 SAR supports marking self-generated traffic for the base routers and for virtual routers. Refer to “QoS Policies” in the 7705 SAR Services Guide for information on SGT QoS as applied to virtual routers (for VPRN services).
The DSCP and dot1p values of the self-generated traffic, where applicable, are marked in accordance with the values that are configured under the sgt-qos command. In the egress direction, self-generated traffic is forwarded using the egress control queue to ensure premium treatment, unless SGT redirection is configured (see SGT Redirection). PTP (IEEE 1588v2) and SAA-enabled ICMP traffic is forwarded using the CoS queue. The next-hop router uses the DSCP values to classify the traffic accordingly.
Note: IS-IS and ARP traffic are not IP-generated traffic types and are not DSCP-configurable; however, the dot1p bits can be configured in the same way as the DSCP bits. The default setting for the dot1p bits for both types of traffic is 111. For all other applications, the dot1p bits are marked based on the mapped network egress forwarding class.
Table 28 lists various applications and indicates whether they have configurable DSCP or dot1p markings.
Application | Supported Marking | Default DSCP/dot1p |
ARP | dot1p | 7 |
IS-IS | dot1p | 7 |
BGP | DSCP | NC1 |
DHCP | DSCP | NC1 |
DNS | DSCP | AF41 |
FTP | DSCP | AF41 |
ICMP (ping) | DSCP | BE |
IGMP | DSCP | NC1 |
LDP (T-LDP) | DSCP | NC1 |
MLD | DSCP | NC1 |
NDIS | DSCP | NC1 |
NTP | DSCP | NC1 |
OSPF | DSCP | NC1 |
PIM | DSCP | NC1 |
1588 PTP | DSCP | NC1 |
RADIUS | DSCP | AF41 |
RIP | DSCP | NC1 |
RSVP | DSCP | NC1 |
SNMP (get, set, etc.) | DSCP | AF41 |
SNMP trap/log | DSCP | AF41 |
SSH (SCP) | DSCP | AF41 |
syslog | DSCP | AF41 |
TACACS+ | DSCP | AF41 |
Telnet | DSCP | AF41 |
TFTP | DSCP | AF41 |
Traceroute | DSCP | BE |
VRRP | DSCP | NC1 |
The 7705 SAR can be used in deployments where the uplink bandwidth capacity is considerably less than in typical fixed or mobile backhaul applications. However, the 7705 SAR is optimized to operate in environments with megabits per second of uplink capacity; many of its software timers are therefore designed to ensure the fastest possible detection of failures, without considering bandwidth limitations. In deployments with very low bandwidth, the system must also be optimized for effective operation of the routers without any interruption to mission-critical customer traffic.
In lower-bandwidth deployments, SGT can impact mission-critical user traffic such as TDM pseudowire traffic. To minimize the impact on this traffic, SGT can be redirected to a data queue rather than to the high-priority control queue on egress. All SGT applications can be redirected to a data queue, but the type of application must be considered because not all SGT is suitable to be scheduled at a lower priority. SGT applications such as FTP, TFTP, and syslog can be mapped to a lower-priority queue.
As an example, in a scenario where the uplink bandwidth is limited to a fractional E1 link with 2 x DS0 channel groups, downloading software for a new release can disrupt TDM pseudowire traffic, especially if SGT traffic is always serviced first over all other traffic flows. Having the option to map a subset of SGT to data queues will ensure that the mission-critical traffic flows effectively. For example, if FTP traffic is redirected to the best-effort forwarding queue, FTP traffic is then serviced only after all higher-priority traffic is serviced, including network control traffic and TDM pseudowire traffic. This redirection ensures the proper treatment of all traffic types matching the requirements of the network.
Redirection of SGT applications is done using the config>router>sgt-qos> application>fc-queue or config>service>vprn>sgt-qos>application>fc-queue command.
Redirection of the global ping application is not done through the sgt-qos menu hierarchy; this is configured using the fc-queue option in the ping command. Refer to the 7705 SAR OAM and Diagnostics Guide, “OAM and SAA Command Reference”, for details.
SGT redirection is supported on the base router and the virtual routers on ports with Ethernet or PPP/MLPPP encapsulation.
Network queue policies define the queue characteristics that are used in determining the scheduling and queuing behavior for a given forwarding class or forwarding classes. Network queue policies are applied on ingress and egress network ports, as well as on the ring ports and the add/drop port on the 2-port 10GigE (Ethernet) Adapter card and 2-port 10GigE (Ethernet) module.
Network queue policies are identified with a unique policy name that conforms to the standard 7705 SAR alphanumeric naming conventions. The policy name is user-configured when the policy is created.
Network queue policies can be configured to use up to 16 queues (8 unicast and 8 multicast). This means that the number of queues can vary. Not all user-created policies will require and use 16 queues; however, the system default network queue policy (named default) does define 16 queues.
The queue characteristics that can be configured on a per-forwarding class basis are:
Table 29 describes the default network queue policy definition.
Forwarding Class | Queue | Definition | Queue | Definition |
Network-Control (nc) | 8 | Rate = 100% CIR = 10% MBS = 2.5% CBS = 0.25% High-Prio-Only = 10% | 16 | Rate = 100% CIR = 10% MBS = 2.5% CBS = 0.1% High-Prio-Only = 10% |
High-1 (h1) | 7 | Rate = 100% CIR = 10% MBS = 2.5% CBS = 0.25% High-Prio-Only = 10% | 15 | Rate = 100% CIR = 10% MBS = 2.5% CBS = 0.1% High-Prio-Only = 10% |
Expedited (ef) | 6 | Rate = 100% CIR = 100% MBS = 5% CBS = 0.75% High-Prio-Only = 10% | 14 | Rate = 100% CIR = 100% MBS = 5% CBS = 0.1% High-Prio-Only = 10% |
High-2 (h2) | 5 | Rate = 100% CIR = 100% MBS = 5% CBS = 0.75% High-Prio-Only = 10% | 13 | Rate = 100% CIR = 100% MBS = 5% CBS = 0.1% High-Prio-Only = 10% |
Low-1 (l1) | 4 | Rate = 100% CIR = 25% MBS = 2.5% CBS = 0.25% High-Prio-Only = 10% | 12 | Rate = 100% CIR = 5% MBS = 2.5% CBS = 0.25% High-Prio-Only = 10% |
Assured (af) | 3 | Rate = 100% CIR = 25% MBS = 5% CBS = 0.75% High-Prio-Only = 10% | 11 | Rate = 100% CIR = 5% MBS = 5% CBS = 0.1% High-Prio-Only = 10% |
Low-2 (l2) | 2 | Rate = 100% CIR = 25% MBS = 5% CBS = 0.25% High-Prio-Only = 10% | 10 | Rate = 100% CIR = 5% MBS = 5% CBS = 0.1% High-Prio-Only = 10% |
Best Effort (be) | 1 | Rate = 100% CIR = 0% MBS = 5% CBS = 0.1% High-Prio-Only = 10% | 9 | Rate = 100% CIR = 0% MBS = 5% CBS = 0.1% High-Prio-Only = 10% |
The following queue parameters are provisioned on network and service queues:
The queue ID is used to uniquely identify the queue. The queue ID is only unique within the context of the QoS policy within which the queue is defined.
The CIR for a queue defines a limit for scheduling. Packets queued at service ingress queues are serviced by in-profile or out-of-profile schedulers based on the queue’s CIR and the rate at which the packets are flowing. For each packet in a service ingress queue, the CIR is checked against the current transmission rate of the queue. If the current rate is at or below the CIR threshold, the transmitted packet is internally marked in-profile. If the current rate is above the threshold, the transmitted packet is internally marked out-of-profile.
All 7705 SAR queues support the concept of in-profile and out-of-profile. The network QoS policy applied at network egress determines how or if the profile state is marked in packets transmitted into the network core. This is done by enabling or disabling the appropriate priority marking of network egress packets within a particular forwarding class. If the profile state is marked in the packets that are sent toward the network core, then out-of-profile packets are preferentially dropped over in-profile packets at congestion points in the network.
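The in-profile/out-of-profile decision described above amounts to a one-line check of the queue's current rate against its CIR. This is a conceptual sketch, not the scheduler implementation; the function name and units are invented.

```python
# Conceptual profile-marking check: a packet leaving a service ingress
# queue is marked in-profile if the queue's current transmission rate
# is at or below the queue's CIR, otherwise out-of-profile.

def mark_profile(current_rate_kbps, cir_kbps):
    return "in-profile" if current_rate_kbps <= cir_kbps else "out-of-profile"

print(mark_profile(800, 1000))   # in-profile
print(mark_profile(1500, 1000))  # out-of-profile
```

At congestion points deeper in the network, packets marked out-of-profile by this check are the ones preferentially dropped.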
When defining the CIR for a queue, the value specified is the administrative CIR for the queue. The 7705 SAR maps a user-configured value to a hardware supported rate that it uses to determine the operational CIR for the queue. The user has control over how the administrative CIR is converted to an operational CIR if a slight adjustment is required. The interpretation of the administrative CIR is discussed in Adaptation Rule.
The CIR value for a service queue is assigned to ingress and egress service queues based on service ingress QoS policies and service egress QoS policies, respectively.
The CIR value for a network queue is defined within a network queue policy specifically for the forwarding class. The queue-id parameter links the CIR values to the forwarding classes. The CIR values for the forwarding class queues are defined as a percentage of the network interface bandwidth.
The PIR value defines the maximum rate at which packets are allowed to exit the queue. It does not specify the maximum rate at which packets may enter the queue; this is governed by the queue’s ability to absorb bursts and is user-configurable using its maximum burst size (MBS) value.
The PIR value is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively.
The PIR values for network queues are defined within network queue policies and are specific for each forwarding class. The PIR value for each queue for the forwarding class is defined as a percentage of the network interface bandwidth.
When defining the PIR for a queue, the value specified is the administrative PIR for the queue. The 7705 SAR maps a user-configured value to a hardware supported rate that it uses to determine the operational PIR for the queue. The user has control over how the administrative PIR is converted to an operational PIR if a slight adjustment is required. The interpretation of the administrative PIR is discussed in Adaptation Rule.
The schedulers on the network processor can only operate with a finite set of rates. These rates are called the operational rates. The configured rates for PIR and CIR do not necessarily correspond to the operational rates. In order to offer maximum flexibility to the user, the adaptation-rule command can be used to choose how an operational rate is selected based on the configured PIR or CIR rate.
The max parameter causes the network processor to be programmed at an operational rate that is less than the configured PIR or CIR rate by up to 1.0%. The min parameter causes the network processor to be programmed at an operational rate that is greater than the configured PIR or CIR rate by up to 1.0%. The closest parameter causes the network processor to be programmed at an operational rate that is closest to the configured PIR or CIR rate.
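Conceptually, the three adaptation rules select from the finite set of hardware-supported operational rates as follows. The rate set in this sketch is invented for illustration; the real operational rates are hardware-dependent and are not published as a simple list.

```python
# Illustrative adaptation-rule selection. hw_rates stands in for the
# finite set of rates the network processor supports (invented values).

def operational_rate(configured, hw_rates, rule="closest"):
    below = [r for r in hw_rates if r <= configured]
    above = [r for r in hw_rates if r >= configured]
    if rule == "max":
        return max(below)   # never exceeds the configured rate
    if rule == "min":
        return min(above)   # never below the configured rate
    return min(hw_rates, key=lambda r: abs(r - configured))

rates = [950, 990, 1005, 1050]  # invented hardware rate set (kbps)
print(operational_rate(1000, rates, "max"))      # 990
print(operational_rate(1000, rates, "min"))      # 1005
print(operational_rate(1000, rates, "closest"))  # 1005
```

If the configured rate happens to match a supported rate exactly, all three rules program that rate.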
A 4-priority scheduler on the network processor of a third-generation (Gen-3) Ethernet adapter card or platform can be programmed with an operational CIR rate that deviates from the configured CIR rate by more than 1.0%. The PIR rate (that is, the maximum rate for the queue) and the SAP aggregate rates (CIR and PIR) maintain an accuracy of +/- 1.0% of the configured rates.
The average difference between the configured CIR rate and the programmed (operational) CIR rate is as follows:
Note: The percentages in the above list are averages only; the actual values can be higher.
The Gen-3 network processor PIR rate is programmed to an operational PIR rate that is within 1.0% of the configured rate, which ensures that the FC/CoS queue does not exceed its fair share of the total bandwidth.
The CBS parameter specifies the committed buffer space allocated for a given queue.
The CBS is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively. The CBS for a queue is specified in kilobytes.
The CBS values for network queues are defined within network queue policies based on the forwarding class. The CBS values for the queues for the forwarding class are defined as a percentage of buffer space for the pool.
Once the reserved buffers for a given queue have been used, the queue contends with other queues for additional buffer resources up to the maximum burst size. The MBS parameter specifies the maximum queue depth to which a queue can grow. This parameter ensures that a traffic flow (that is, a customer or a traffic type within a customer port) that is massively or continuously oversubscribing the PIR of a queue does not consume all the available buffer resources. For high-priority forwarding class service queues, the MBS can be small because high-priority service packets are scheduled with priority over other service forwarding classes; only very small queues are needed for high-priority traffic, since their contents are drained promptly by the scheduler.
The MBS value is provisioned on ingress and egress service queues within service ingress QoS policies and service egress QoS policies, respectively. The MBS value for a queue is specified in bytes or kilobytes.
The MBS values for network queues are defined within network queue policies based on the forwarding class. The MBS values for the queues for the forwarding class are defined as a percentage of buffer space for the pool.
High-priority-only buffers are defined on a queue and allow buffers to be reserved for traffic classified as high priority. When the queue depth reaches a specified level, only high-priority traffic can be enqueued. The high-priority-only reservation for a queue is defined as a percentage of the MBS value and has a default value of 10% of the MBS value.
On service ingress, the high-priority-only reservation for a queue is defined in the service ingress QoS policy. High-priority traffic is specified in the match criteria for the policy.
On service egress, the high-priority-only reservation for a queue is defined in the service egress QoS policy. Service egress queues are specified by forwarding class. High-priority traffic for a given traffic class is traffic that has been marked as in-profile either on ingress classification or based on interpretation of the QoS markings.
The high-priority-only buffers for network queues are defined within network queue policies based on the forwarding class. High-priority-only traffic for a specific traffic class is marked as in-profile either on ingress classification or based on interpretation of the QoS markings.
The high/low priority feature allows a provider to offer a customer the ability to have some packets treated with a higher priority when buffered to the ingress queue. If the queue is configured with a high-prio-only setting (which sets the high-priority MBS threshold higher than the queue’s low-priority MBS threshold), a portion of the ingress queue’s allowed buffers is reserved for high-priority traffic. An access ingress packet must hit an ingress QoS action in order for the ingress forwarding plane to treat the packet as high priority (the default is low priority).
If the packet’s ingress queue is above the low-priority MBS, the packet will be discarded unless it has been classified as high priority. The priority of the packet is not retained after the packet is placed into the ingress queue. Once the packet is scheduled out of the ingress queue, the packet will be considered in-profile or out-of-profile based on the dynamic rate of the queue relative to the queue’s CIR parameter.
If an ingress queue is not configured with a high-prio-only parameter (the parameter is set to 0%), the low-priority and high-priority MBS thresholds are the same. There is no difference in high-priority and low-priority packet handling. At access ingress, the priority of a packet has no effect on which packets are scheduled first. Only the first buffering decision is affected. At ingress and egress, the current dynamic rate of the queue relative to the queue’s CIR does affect the scheduling priority between queues going to the same destination (egress port).
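The first buffering decision described above can be sketched as follows. This is a minimal illustration of the assumed logic, not actual 7705 SAR code; the function and parameter names are hypothetical, and the high-prio-only value is expressed as a percentage of the MBS, as in the text:

```python
def admit_packet(queue_depth, high_prio_only_pct, mbs, is_high_priority):
    """Sketch of the ingress buffering decision: above the low-priority MBS
    threshold, only packets classified as high priority are enqueued; at the
    full MBS, every arriving packet is tail-dropped."""
    low_prio_mbs = mbs * (1 - high_prio_only_pct / 100.0)
    if queue_depth >= mbs:
        return False               # queue full: tail drop for all traffic
    if queue_depth >= low_prio_mbs:
        return is_high_priority    # only high-priority traffic admitted
    return True                    # below the low-priority threshold: admit all

# With the default high-prio-only of 10% and an MBS of 1000 buffers,
# low-priority packets are dropped once the queue reaches 900 buffers.
```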
From highest to lowest, the strict operating priority for queues is:
For access ingress, the CIR controls both dynamic scheduling priority and the marking threshold. At network ingress, the queue’s CIR affects the scheduling priority but does not provide a profile marking function (as the network ingress policy trusts the received marking of the packet based on the network QoS policy).
At egress, the profile of a packet is only important for egress queue buffering decisions and egress marking decisions, not for scheduling priority. The egress queue’s CIR will determine the dynamic scheduling priority, but will not affect the packet’s ingress determined profile.
The 7705 SAR maintains extensive counters for queues within the system to allow granular or extensive debugging and planning; that is, the usage of queues and the scheduler used for servicing a queue or packet is extremely useful in network planning activities. The following separate billing and accounting counters are maintained for each queue:
The 7705 SAR allows two kinds of queue types: Expedited queues and Best-Effort queues. Users can configure the queue type manually using the expedite and best-effort keywords, or automatically using the auto-expedite keyword. The queue type is specified as part of the queue command (for example, config>qos>sap-ingress>queue queue-id queue-type create).
With expedite, the queue is treated in an expedited manner, independent of the forwarding classes mapped to the queue.
With best-effort, the queue is treated in a non-expedited (best-effort) manner, independent of the forwarding classes mapped to the queue.
With auto-expedite, the queue type is automatically determined by the forwarding classes that are assigned to the queue. The queues that are set as auto-expedite are still either Expedited or Best-Effort queues, but whether a queue is Expedited or Best-Effort is determined by its assigned forwarding classes. In the default configuration, four of the eight forwarding classes (NC, H1, EF, and H2) result in an Expedited queue type, while the other four forwarding classes (L1, AF, L2, and BE) result in a Best-Effort queue type.
Assigning one or more of the L1, AF, L2, or BE forwarding classes to an auto-expedite queue results in a Best-Effort queue type. See Forwarding Classes for more information on default configuration values.
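The auto-expedite determination described above can be sketched as a simple rule over the assigned forwarding classes. This is an illustrative model only (the function name is hypothetical), using the default FC-to-queue-type mapping from the text:

```python
# Default mapping from the text: NC, H1, EF, H2 -> Expedited;
# L1, AF, L2, BE -> Best-Effort.
EXPEDITED_FCS = {"nc", "h1", "ef", "h2"}
BEST_EFFORT_FCS = {"l1", "af", "l2", "be"}

def auto_expedite_type(assigned_fcs):
    """Return the effective queue type for an auto-expedite queue: any
    best-effort forwarding class assignment demotes the whole queue."""
    fcs = {fc.lower() for fc in assigned_fcs}
    if fcs & BEST_EFFORT_FCS:
        return "best-effort"
    return "expedited"
```

For example, a queue carrying only EF and NC is treated as Expedited, while adding AF to the same queue makes it Best-Effort.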
The expedite, best-effort, and auto-expedite queue types are mutually exclusive. Each defines the method that the system uses to service the queue from a hardware perspective.
The 7705 SAR supports egress-rate limiting and ingress-rate limiting on Ethernet ports.
The egress rate is set at the port level in the config>port>ethernet context.
Egress-rate limiting sets a limit on the amount of traffic that can leave the port to control the total bandwidth on the interface. If the egress-rate limit is reached, the port applies backpressure on the queues, which stops the flow of traffic until the queue buffers are emptied. This feature is useful in scenarios where there is a fixed amount of bandwidth; for example, a mobile operator who has leased a fixed amount of bandwidth from the service provider.
The ingress-rate command configures a policing action to rate-limit the ingress traffic. Ingress-rate enforcement uses dedicated hardware for rate limiting; however, software configuration is required at the port level (ingress-rate limiter) to ensure that the network processor, adapter card, or port never receives more traffic than it is optimized for.
The configured ingress rate ensures that the network processor does not receive traffic greater than this configured value on a per-port basis. Once the ingress-rate value is reached, all subsequent frames are dropped. The ingress-rate limiter drops excess traffic without determining whether the traffic has a higher or lower priority.
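The drop-excess-without-priority behavior described above can be illustrated with a generic token-bucket policer. This is a conceptual sketch only, not the 7705 SAR implementation; the class name, rate, and burst values are assumptions for illustration:

```python
class IngressRatePolicer:
    """Minimal token-bucket sketch of port-level ingress-rate policing:
    frames arriving faster than the configured rate are dropped outright,
    with no check of the frame's priority."""
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0       # refill rate in bytes per second
        self.burst = burst_bytes         # bucket depth
        self.tokens = burst_bytes        # bucket starts full
        self.last = 0.0                  # timestamp of the last arrival

    def admit(self, now, frame_bytes):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if frame_bytes <= self.tokens:
            self.tokens -= frame_bytes
            return True
        return False                     # excess frame dropped, priority ignored
```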
For more information about egress and ingress rate limiting, refer to the egress-rate and ingress-rate command descriptions in the 7705 SAR Interface Configuration Guide.
As part of 7705 SAR queue management, policies for WRED and/or RED queue management (also known as congestion management or buffer management) to manage the queue depths can be enabled at both access and network ports and associated with both ingress and egress queues. WRED policies can also be enabled on bridged domain (ring) ports.
Without WRED and RED, once a queue reaches its maximum fill size, the queue discards any new packets arriving at the queue (tail drop).
WRED and RED policies prevent a queue from reaching its maximum size by starting random discards once the queue reaches a user-configured threshold value. This avoids the impact of discarding all newly arriving packets. By starting random discards at this threshold, end-system devices can adjust to the available bandwidth.
As an example, TCP has built-in mechanisms to adjust for packet drops. TCP-based flows lower the transmission rate when some of the packets fail to reach the far end. This mode of operation provides a much better way of dealing with congestion than dropping all the packets after the whole queue space is depleted.
The WRED and RED curve algorithms are based on two user-configurable thresholds (minThreshold and maxThreshold) and a discard probability factor (maxDProbability) (see Figure 24). The minThreshold (minT) indicates the level where discards start and the discard probability is zero. The maxThreshold (maxT) indicates the level where the discard probability reaches its maximum value. Beyond the maxT level, all newly arriving packets are discarded. The steepness of the slope between minT and maxT is derived from the maxDProbability (maxDP); that is, the maxDP indicates the random discard probability at the maxT level.
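The curve just described can be written out directly. The following sketch (illustrative only; the function name is hypothetical) returns the discard probability for a given average queue depth: zero below minT, rising linearly to maxDP at maxT, and 1.0 (certain discard) beyond maxT:

```python
def wred_discard_probability(avg_queue, min_t, max_t, max_dp):
    """Linear WRED/RED discard curve: 0 below minT, max_dp at maxT,
    and a step to 1.0 for arrivals beyond maxT."""
    if avg_queue < min_t:
        return 0.0
    if avg_queue > max_t:
        return 1.0                 # beyond maxT: all new packets discarded
    # Linear interpolation between minT (probability 0) and maxT (maxDP).
    return max_dp * (avg_queue - min_t) / (max_t - min_t)
```

For example, with minT = 100, maxT = 800, and maxDP = 0.5, a queue averaging 450 buffers discards new packets with probability 0.25.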
The main difference between WRED and RED is that with WRED, there can be more than one curve managing the fill rate of the same queue.
WRED slope curves can run against high-priority and low-priority traffic separately for ingress and egress queues. This allows the flexibility to treat low-priority and high-priority traffic differently. WRED slope policies are used to configure the minT, maxT and maxDP values, instead of configuring these thresholds against every queue. It is the slope policies that are then applied to individual queues. Thus, WRED slope policies affect how and when the high-priority and low-priority traffic is discarded within the same queue.
Referring to Figure 24, one WRED slope curve can manage discards on high-priority traffic and another WRED slope curve can manage discards on low-priority traffic. The minT, maxT, and maxDP values configured for high-priority and low-priority traffic can be different and can start discarding traffic at different thresholds. Use the start-avg, max-avg, and max-prob commands to set the minThreshold, maxThreshold, and maxDProbability values, respectively.
![]() | Note: The figure shows a step function at maxT. The maxDP value is the target value entered for the configuration and it partly determines the slope of the weighting curve. At maxT, if the arrival of a new packet will overflow the buffer, the discard probability jumps to 1, which is not the maxDP value. Therefore, a step function exists in this graph. |
The formula to calculate the average queue size is:
average queue size = (previous average × (1 – 1/2^TAF)) + (current queue size × 1/2^TAF)
The Time Average Factor (TAF) is the exponential weight factor used in calculating the average queue size. The time_average_factor parameter is not user-configurable and is set to a system-wide default value of 3. By locking TAF to a static value of 3, the average queue size closely tracks the current queue size so that WRED can respond quickly to long queues.
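The averaging formula above translates directly into code. This sketch (function name hypothetical) applies the fixed system-wide TAF of 3, which gives a smoothing weight of 1/8 on the current queue size:

```python
TAF = 3  # fixed system-wide Time Average Factor, per the text

def average_queue_size(previous_avg, current_size, taf=TAF):
    """average = previous * (1 - 1/2^TAF) + current * (1/2^TAF)."""
    w = 1.0 / (2 ** taf)   # 1/8 when TAF = 3
    return previous_avg * (1 - w) + current_size * w
```

With TAF locked at 3, a sudden burst moves the average quickly: an empty average with a current depth of 800 buffers yields a new average of 100 after a single update.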
CBS is configured in kilobytes through the CLI; MBS is configured in bytes or kilobytes. These configured values are converted to the corresponding number of buffers. The conversion factor is a non-user-configurable, fixed default value that is equal to the system-defined maximum frame size, ensuring that even the largest frames can be hosted in the allocated buffer pools. This type of WRED is called buffer-based WRED.
User-defined minThreshold and maxThreshold values, each defined as a percentage, are also converted to the number of buffers. The minT is converted to the system-minThreshold, and the maxT is converted to the system-maxThreshold.
The system-minT must be the absolute closest value to the minT that satisfies the formula below (2^x means 2 to the exponent x):
system-maxThreshold – system-minThreshold = 2^x
![]() | Note: The 6-port Ethernet 10Gbps Adapter card and the 7705 SAR-X Ethernet ports use payload-based WRED (also called byte-based WRED); see Payload-based WRED. The above “system-minT” calculation does not apply to payload-based WRED. The 7705 SAR-X TDM ports use buffer-based WRED. |
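The closest-value search implied by the formula above can be sketched as follows. This is an assumed reading of the rule, not the actual system algorithm: it takes system-maxThreshold to equal the configured maxT (in buffers) and, on a tie, keeps the larger candidate:

```python
def system_min_threshold(min_t, max_t):
    """Find the system-minT closest to the configured minT such that
    (max_t - system-minT) is an exact power of two."""
    best = None
    x = 0
    while (1 << x) <= max_t:          # try every power of two up to maxT
        candidate = max_t - (1 << x)
        if best is None or abs(candidate - min_t) < abs(best - min_t):
            best = candidate
        x += 1
    return best
```

For example, with maxT = 800 buffers and a configured minT of 100 buffers, the nearest value satisfying the power-of-two constraint is 288 (since 800 - 512 = 288).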
The bridging domain queues support the following DP (discard probability) values: 0% to 10%, 25%, 50%, 75%, and 100%. User-configured values are rounded down to match these DP values.
For example, configuring a DP to be 74% means that the actual value used is 50%.
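The round-down rule for bridging domain DP values can be sketched as follows (illustrative only; the function name is hypothetical), taking "0% to 10%" to mean the integer percentages 0 through 10:

```python
# Supported DP values per the text: 0% through 10%, then 25%, 50%, 75%, 100%.
SUPPORTED_DP = list(range(0, 11)) + [25, 50, 75, 100]

def effective_dp(configured_dp):
    """Round a configured discard probability (percent) down to the
    nearest supported value, e.g. 74 -> 50."""
    return max(v for v in SUPPORTED_DP if v <= configured_dp)
```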
![]() | Note: Tail drop for out-of-profile traffic begins when the queue occupancy of the out-of-profile traffic exceeds that of the committed buffer size. |
The third-generation Ethernet adapter cards and platforms use payload-based WRED rather than buffer-based WRED (see WRED MinThreshold and MaxThreshold Computation). Payload-based WRED does not count the unused overhead space (empty space in the buffer) when making discard decisions, whereas buffer-based WRED counts the unused overhead space. Payload-based WRED is also referred to as byte-based WRED.
When a queue on an adapter card that uses payload-based WRED reaches its maximum fill (that is, the total byte count exceeds the configured maximum threshold), tail drop begins and operates in the same way as it does on any other adapter card or platform.
With payload-based WRED, the discard decision is based on the number of bytes in the queue instead of the number of buffers in the queue. For example, accumulating 512 bytes of payload in a queue takes four buffers if the frame size is 128 bytes, but only one buffer if the frame size is 512 bytes or more. Basing discards on bytes rather than buffers improves the efficient use of queues. In either case, byte- or buffer-based WRED, random discards begin at the minimum threshold (minT) point.
For example, assume a queue has MBS set to 512 kilobytes (converts to 1000 buffers), minT (start-avg) is set to 10% (100 buffers), and maxT (max-avg) is set to 80% (800 buffers). Table 30 shows when discards and tail drop start when payload-based WRED is used.
Frame Size | Discards Start | Tail Drop Starts
--- | --- | ---
128 bytes | 400 buffers in the queue (100 x 4) | 3200 buffers in the queue (800 x 4)
512 bytes | 100 buffers in the queue | 800 buffers in the queue
1024 bytes | 100 buffers in the queue | 800 buffers in the queue
For tail drop, if the high-priority-only threshold is set to 10%:
![]() | Note:
|
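The threshold arithmetic behind the table above can be sketched as follows. This is an illustration of the worked example only (the buffer size of 512 bytes and the function name are taken from and fitted to that example, not stated as system constants): a byte threshold expressed in buffers is reached after proportionally more buffers when frames are smaller than the buffer size, because each buffer then carries less payload:

```python
import math

BUFFER_SIZE = 512  # bytes per buffer, assumed from the example's MBS conversion

def buffers_at_threshold(threshold_buffers, frame_size):
    """Number of queued buffers at which a payload-based (byte-counted)
    threshold is crossed, for a given frame size."""
    payload_per_buffer = min(frame_size, BUFFER_SIZE)   # a buffer holds one frame
    threshold_bytes = threshold_buffers * BUFFER_SIZE
    return math.ceil(threshold_bytes / payload_per_buffer)
```

With minT at 100 buffers and maxT at 800 buffers, 128-byte frames cross those thresholds at 400 and 3200 buffers respectively, matching the table.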
Traffic descriptor profiles capture the cell arrival pattern for resource allocation. Source traffic descriptors for an ATM connection include at least one of the following:
QoS traffic descriptor profiles are applied on ATM VLL (Apipe) SAPs.
Fabric profiles allow access and network ingress to-fabric shapers to have user-configurable rates of switching throughput from an adapter card towards the fabric.
Two fabric profile modes are supported: per-destination mode and aggregate mode. Both modes offer shaping towards the fabric from an adapter card, but per-destination shapers offer the maximum flexibility by precisely controlling the amount of traffic to each destination card at a user-defined rate.
For the 7705 SAR-8 and the 7705 SAR-18, the maximum rate depends on a number of factors, including platform, chassis variant, and slot type. See Configurable Ingress Shaping to Fabric (Access and Network) for details. For information about fabric shaping on the 7705 SAR-M, 7705 SAR-H, 7705 SAR-Hc, 7705 SAR-A, 7705 SAR-Ax, 7705 SAR-W, and 7705 SAR-Wx, see Fabric Shaping on the Fixed Platforms (Access and Network).
Shaper policies define dual-rate shaper parameters that control access or network traffic by providing tier-3 aggregate shaping to:
![]() | Note: For network egress traffic on a non-hybrid Gen-3 port, the CIR value of the shaper group is ignored because of the behavior of the 4-priority scheduler on Gen-3 hardware at network egress. For more information, see QoS for Gen-3 Adapter Cards and Platforms. |
See Per-SAP Aggregate Shapers (H-QoS) On Gen-2 Hardware and Per-VLAN Network Egress Shapers for details on per-SAP and per-VLAN shapers.
Services are configured with default QoS policies. Additional policies must be explicitly created and associated. There is one default service ingress QoS policy, one default service egress QoS policy, and one default network QoS policy. Only one ingress QoS policy and one egress QoS policy can be applied to a SAP or network port.
When you create a new QoS policy, default values are provided for most parameters with the exception of the policy ID and queue ID values, descriptions, and the default action queue assignment. Each policy has a scope, default action, a description, and at least one queue. The queue is associated with a forwarding class.
All QoS policy parameters can be configured in the CLI. QoS policies can be applied to the following service types:
QoS policies can be applied to the following network entities:
Default QoS policies treat all traffic with equal priority and allow an equal chance of transmission (Best Effort forwarding class) and an equal chance of being dropped during periods of congestion. QoS prioritizes traffic according to the forwarding class and uses congestion management to control access ingress, access egress, and network traffic with queuing according to priority.
The following guidelines and caveats apply to the implementation of QoS policies.
For information on supported IETF drafts and standards, as well as standard and proprietary MIBs, refer to Standards and Protocol Support.