This section provides information about configuring queue groups using the CLI.
Queue groups are objects created on an access or network Ethernet port, or on the ingress forwarding plane of an IOM/IMM/XMA, that allow SAP or IP interface forwarding classes to be redirected from the normal type of queue mapping to a shared queue. Queue groups may contain queues, policers, or a combination of the two, depending on the type of queue group. The following types of queue groups are supported:
Queue sharing and redirection is supported on the SR and ESS platforms with the following IOM types:
Queue sharing and redirection are also supported in conjunction with existing Ethernet MDAs, Ethernet CMAs, the HS-MDA, and the VSM MDA.
Normally, each SAP (Service Access Point) has dedicated ingress and egress queues that are only used by that particular SAP. The SAP queues are created based on the queue definitions within the SAP ingress and SAP egress QoS policy applied to the SAP. Each packet that enters or egresses the SAP has an associated forwarding class. The QoS policy is used to map the forwarding class to one of the SAP’s local queue IDs. This per-SAP queuing has advantages over a shared queuing model in that it allows each SAP to have a unique scheduling context per queue. During congestion, SAPs operating within their conforming bandwidth will experience little impact since they do not need to compete for queue buffer space with misbehaving or heavily loaded SAPs.
The situation is different in a shared or port-queuing model, which polices color packets that conform to or exceed a static rate before the single queue, and which uses WRED or tail-drop functions to essentially reserve room for the conforming packets.
In this model, there is no way for conforming packets to move to the head of the line from the perspective of the port scheduler. Another advantage of per-SAP queuing is the ability of the SAP queues to perform shaping, controlling burst sizes and forwarding rates based on the SAP's defined SLA. This is especially beneficial when a provider is enforcing a sub-line rate bandwidth limit and the customer does not have the ability to shape at the CE.
However, there are cases where per-SAP queuing is not preferred. Per-SAP queuing requires a more complex provisioning model in order to properly configure each SAP's ingress and egress SLAs. This requires service awareness at some points in the network where an aggregation function is being performed. In such cases, a shared queuing or per-port queuing model suffices. Creating ingress and egress access queue groups and mapping the SAPs' forwarding classes to the queues within the queue group provides this capability.
A further use case is where a set of ingress SAPs, which may represent a subset of the total number of ingress SAPs, is to be shaped or policed on an aggregate per-forwarding class basis when those SAPs are spread across a LAG on multiple ingress ports, and where color-aware treatment is required so that explicitly in-profile traffic is honored up to the CIR, but above the CIR is marked as out-of-profile.
The above scenarios can be supported with access queue groups. A single ingress queue group is supported per access port, while multiple ingress queue group instances are supported per IOM/IMM/XMA forwarding plane. To provide more flexibility on the egress side of the access port, multiple egress access queue group instances are supported per egress access port.
Since queue redirection is defined per forwarding class, it is possible to redirect some forwarding classes to a queue group while having others on the SAP use the SAP local queues. This is helpful when shared queuing is only desired for a few applications such as VOIP or VOD while other applications still require queuing at the SAP level.
When ingress access port queue groups are configured, hardware queues are allocated to each switch fabric destination for each queue configured in the queue group template.
The allocation of ingress access port queue group hardware queues has been optimized for the 7950 XRS-20 systems to avoid allocating ingress hardware queues to XCMs in slots 11 and above.
When the first XCM in slot 11 or above is provisioned, additional ingress hardware queues are allocated to XCMs in slots 11 to 20 for any configured ingress access port queue group queue. If sufficient hardware queues are unavailable, the XCM provisioning fails. Adding queues to the queue group template or adding more ingress access port queue groups requires further hardware queues to be allocated, with the configuration failing if insufficient queues are available. When the last XCM in slot 11 or above is unprovisioned, the additional hardware queues allocated to the XCMs in slots 11 and above are freed.
Queue groups may be created on egress network ports in order to provide network IP interface queue redirection. A single set of egress port-based forwarding class queues is available by default, and all IP interfaces on the port share these queues. Creating a network queue group allows one or more IP interfaces to selectively redirect forwarding classes to the group in order to override the default behavior. Using network egress queue groups, it is possible to provide dedicated queues for each IP interface.
Non-IPv4/non-IPv6/non-MPLS packets remain on the regular network port queues. Therefore, when using an egress port-scheduler, it is important to parent the related regular network port queues to appropriate port-scheduler priority levels to ensure the desired operation under port congestion. This is particularly important for protocol traffic such as LACP, EFM-OAM, ETH-CFM, ARP, and IS-IS, which by default use the FC NC regular network port queue.
This feature allows the user to perform ingress and egress data path shaping of packets forwarded within a spoke-sdp (PW). It applies to a VLL service, a VPLS/B-VPLS service, and an IES/VPRN spoke-interface.
The ingress PW rate-limiting feature uses a policer in the queue-group provisioning model. This model allows the mapping of one or more PWs to the same instance of policers that are defined in a queue-group template.
Operationally, the provisioning model in the case of the ingress PW shaping feature consists of the following steps:
One or more spoke-sdps can have their FCs redirected to use policers in the same policer queue-group instance.
The egress PW shaping provisioning model allows the mapping of one or more PWs to the same instance of queues, or policers and queues, that are defined in the queue-group template.
Operationally, the provisioning model consists of the following steps:
One or more spoke-sdps can have their FCs redirected to use queues only, or queues and policers in the same queue-group instance.
Traffic is tunneled between VPRN service instances on different PEs over service tunnels bound to MPLS LSPs or GRE tunnels. The binding of the service tunnels to the underlying transport is achieved either automatically (using the auto-bind-tunnel command) or statically (using the spoke-sdp command; note that this is not the spoke-sdp command under the VPRN IP interface). QoS control can be applied to the service tunnels for traffic ingressing into a VPRN service, see Figure 19.
An ingress queue group must be configured and applied to the ingress network FP where the traffic is received for the VPRN. All traffic received on that FP for any binding in the VPRN (whether automatically or statically configured) that is redirected to a policer in the FP queue group (using fp-redirect-group in the network QoS policy) is controlled by that policer. As a result, the traffic from all such bindings is treated as a single entity (per forwarding class) with regard to ingress QoS control. Any fp-redirect-group multicast-policer, broadcast-policer or unknown-policer commands in the network QoS policy are ignored for this traffic (IP multicast traffic would use the ingress network queues or queue group related to the network interface).
Ingress classification is based on the configuration of the ingress section of the specified network QoS policy, noting that the dot1p and exp classification is based on the outer Ethernet header and MPLS label whereas the DSCP applies to the outer IP header if the tunnel encapsulation is GRE, or the DSCP in the first IP header in the payload if ler-use-dscp is enabled in the ingress section of the referenced network QoS policy.
Ingress bandwidth control does not take into account the outer Ethernet header, the MPLS labels/control word or GRE headers, or the FCS of the incoming frame.
The following command configures the association of the network QoS policy and the FP queue group and instance within the network ingress of a VPRN:
When this command is configured, it overrides the QoS applied to the related network interfaces for unicast traffic arriving on bindings in that VPRN. The IP and IPv6 criteria statements are not supported in the applied network QoS policy.
This is supported for all available transport tunnel types and is independent of the label mode (vrf or next-hop) used within the VPRN. It is also supported for Carrier-Supporting-Carrier VPRNs.
The ingress network interfaces on which the traffic is received must be on FP2- or higher-based hardware. The above command is ignored on FP1-based hardware.
Before a queue group with a specific name may be created on a port or an IOM/IMM/XMA ingress forwarding plane, a queue group template with the same name must first be created. The template defines each queue or policer, its scheduling attributes, and its default parameters. When a queue or policer is defined in a queue group template, it exists in every instance of a port or forwarding plane queue group with that template name. The default queue or policer parameters (such as rate or mbs values) may be overridden with a specific value in each queue group. This works in a similar manner to SAP ingress and SAP egress QoS policies.
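For example, an egress queue group template containing two queues might be defined as follows. The template name, queue IDs, and rates are illustrative only, and the exact syntax may vary by release:

    configure qos queue-group-templates
        egress
            queue-group "qg-egr" create
                queue 1 create
                    rate 100000 cir 10000
                exit
                queue 2 create
                    rate 50000
                exit
            exit
        exit
    exit

Every port queue group instantiated from "qg-egr" then contains queues 1 and 2 with these defaults, unless overridden in a specific instance.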
Queue sharing is also supported when the High Scale MDA (HSMDA) is used. On ingress, HSMDA queues are bypassed and the queue group on the IOM forwarding plane is used. On egress, it is possible to redirect forwarding classes from multiple SAPs to an HSMDA queue group. The HSMDA also uses the term queue group to describe a group of eight preconfigured hardware queues on its egress port. When queue sharing and redirection are configured on egress, a set of eight HSMDA queues can be configured as part of the queue group template; these correspond to eight hardware queues on the HSMDA. When all eight egress FCs are mapped to the queue group instantiated on the egress port, the per-SAP HSMDA queue group resource is freed.
Once an ingress or egress queue group template is defined, a port based queue group with the same name may be created. Port queue groups are named objects that act as a container for a group of queues. The queues are created based on the defined queue IDs within the associated queue group template. Port queue groups must be created individually on the ingress and egress sides of the port, but multiple port queue groups of the same template name may be created on egress ports if they have a different instance identifier. These are termed ‘queue group instances’. Each instance of a named queue group created on a port is an independent set of queues structured as per the queue group template. Port queue groups are only supported on Ethernet ports and may be created on ports within a LAG.
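For example, port queue groups could be instantiated from previously defined templates as follows. The port ID, template names, and instance ID are illustrative; the instance keyword applies on egress, where multiple instances of the same template are supported:

    configure port 1/1/1 ethernet
        access
            ingress
                queue-group "qg-ing" create
                exit
            exit
            egress
                queue-group "qg-egr" instance 1 create
                exit
            exit
        exit
    exit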
The percent-rate command is supported in a queue group template for pir and cir parameters only for egress queues. The user has the option of specifying percent-rate for pir and cir parameters. For pir, the range is 0.01 to 100.00, and for cir, the range is 0.00 to 100.00.
The rate can also be configured using the existing rate keyword, in kbps.
When the queue rate is configured with percent-rate, a port-limit is applied; specifically, the percent-rate is relative to the rate of the port to which the queue is attached.
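For example, an egress queue group template queue could be given a PIR of 50% and a CIR of 10% of the port rate. The names and values below are illustrative:

    configure qos queue-group-templates
        egress
            queue-group "qg-egr" create
                queue 1 create
                    percent-rate 50.00 cir 10.00
                exit
            exit
        exit
    exit

On a 10 Gbps port, this queue would operate with a PIR of 5 Gbps and a CIR of 1 Gbps.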
Ingress forwarding plane queue groups allow groups of SAPs on one or more ports, or on a LAG on the IOM, IMM, or XMA, to be bundled together from a QoS enforcement perspective with an aggregate rate limit to be enforced across all SAPs of a bundle. Multiple queue groups are supported per IOM/IMM/XMA or port on access ingress. These are implemented at the forwarding plane level on the ingress IOM so that SAPs residing on different ingress ports or SAPs on a LAG spread across ports on a given IOM can be redirected to the same queue group.
Once an ingress queue group template is defined, a forwarding plane queue group with the same name may be created on an ingress forwarding plane of an IOM, IMM, or XMA. Forwarding plane queue groups are named objects that act as a container for a group of policers; queues are not supported in forwarding plane queue groups, only hierarchical policers. These policers may be configured to use profile-aware behavior. The policers are created based on the defined policer IDs within the associated queue group template. Multiple forwarding plane queue groups of the same template name may be created on ingress if they have a different instance identifier. These are termed queue group instances. Each instance of a named queue group created on a forwarding plane is an independent set of policers structured as per the queue group template. Forwarding plane queue groups are only supported with Ethernet ports and may be created on IOMs, IMMs, or XMAs with ports in a LAG.
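For example, an ingress template containing a policer could be defined and then instantiated on a forwarding plane as follows. The template name, policer rates, and card/FP identifiers are illustrative:

    configure qos queue-group-templates
        ingress
            queue-group "qg-ing" create
                policer 1 create
                    rate 200000 cir 20000
                exit
            exit
        exit
    exit
    configure card 1 fp 1 ingress access queue-group "qg-ing" instance 1 create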
Two models are supported for forwarding class redirection. In the first, the actual instance of a queue group to use for forwarding class redirection is named in the QoS policy. This is termed policy-based redirection.
In the second model, the forwarding class queue or policers to apply redirection to are identified in the ingress or egress QoS policy. However, the specific named queue group instance is not identified until a QoS policy is applied to a SAP. This is termed SAP-based redirection.
Policy-based redirection allows different forwarding classes in the same QoS policy to be redirected to different queue groups, but it requires at least one QoS policy to be configured per queue group instance.
SAP-based redirection can require fewer QoS policies to be configured, since the policy does not have to name the queue group. However, if redirected, all forwarding classes of a given SAP must use the same named queue group instance.
Policy-based redirection is applicable to port queue groups on access ingress and on access and network egress, while SAP-based redirection is applicable to forwarding plane queue groups on access and network ingress, and to port queue groups on access and network egress.
Forwarding class redirection is provisioned within the SAP ingress or SAP egress QoS policy. In each policy, the forwarding class to queue ID mapping may optionally specify a named queue group instance (policy-based redirection) or may simply tag the forwarding class for redirection (SAP-based redirection). When the name is specified, the defined queue ID must exist in the queue group template with the same name.
Redirecting a SAP forwarding class to a queue within a port based queue group using policy-based redirection requires four steps:
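The four steps might be sketched as follows, using illustrative service, policy, and queue group names; with policy-based redirection, the queue group name is stated directly in the SAP egress QoS policy:

    configure qos queue-group-templates egress queue-group "qg-egr" create queue 1 create
    configure port 1/1/1 ethernet access egress queue-group "qg-egr" create
    configure qos sap-egress 10 create fc be create queue 1 group "qg-egr"
    configure service epipe 100 sap 1/1/1:50 create egress qos 10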
Redirecting a SAP forwarding class to a queue within an egress port based or ingress forwarding plane queue group using SAP-based redirection requires four steps:
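With SAP-based redirection, the policy only tags the forwarding class for redirection, and the queue group instance is named when the QoS policy is applied to the SAP. A sketch with illustrative names, for the egress port queue group case:

    configure qos sap-egress 11 create fc be create queue 1 port-redirect-group-queue
    configure service epipe 101 sap 1/1/2:60 create egress qos 11 port-redirect-group "qg-egr" instance 1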
Redirection to a queue group on the HSMDA supports the SAP-based provisioning model only.
The association rules between SAP ingress and egress QoS policies and queue group templates are simple since both the target queue group name and queue ID within the group are explicitly stated within the access QoS policies.
The following association rules apply when the policy based provisioning model is applied with port queue groups.
When a SAP ingress QoS policy forwarding class is redirected to a queue group queue ID:
When a SAP ingress QoS policy forwarding class redirection is removed from a queue group queue ID:
When a SAP egress QoS policy forwarding class is redirected to a queue group queue ID:
When a SAP egress QoS policy forwarding class redirection is removed from a queue group queue ID:
If the above operation is successful then:
When a SAP ingress QoS policy with a forwarding class redirection to a queue group queue ID is applied to a SAP:
If the operation above is successful, then:
When a SAP ingress QoS policy with a forwarding class redirection to a queue group queue ID is removed from a SAP:
If the operation above is successful, then:
When a redirection to a named forwarding plane queue group instance is applied to a SAP on ingress:
If the operation above is successful, then:
When redirection to a named queue group is removed from an ingress SAP:
If the operation above is successful, then:
The system decrements the forwarding plane queue group template association counter for each ingress queue group where redirection is applied to the ingress SAP.
For the SAP-based provisioning model, the rules for redirecting a forwarding class queue to an egress port queue group are similar to those on ingress.
When a forwarding class is redirected to an ingress or egress port queue group queue, the packets sent to the queue are statistically tracked by a set of counters associated with the queue group queue, and not by any of the counters associated with the SAP.
This means that it is not possible to perform accounting within a queue group based on the source SAPs feeding packets to the queue. The statistics associated with the SAP will not reflect packets redirected to a port queue group queue.
The set of statistics per queue is eligible for collection in a similar manner to SAP queues. The collect-stats command enables or disables statistics collection into a billing file based on the accounting policy applied to the queue group.
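For example, an accounting policy could be attached to the port queue group and collection enabled as follows; the port ID, queue group name, and accounting policy ID are illustrative:

    configure port 1/1/1 ethernet access egress queue-group "qg-egr" create
        accounting-policy 5
        collect-stats
    exit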
When a forwarding class is redirected to a forwarding plane queue group queue or policer, the packets sent to the queue or policer are statistically tracked by a set of counters associated with the queue group queue/policer and not with any of the counters associated with the SAP.
This means that it is not possible to perform accounting within a queue group based on the source SAPs feeding packets to the queue. That is, the statistics associated with the SAP will not include packets redirected to a queue group queue.
If the user enables the packet-byte-offset {add bytes | subtract bytes} option under the ingress queue-group policer, the byte counters of that policer will reflect the adjusted packet size.
The set of statistics per queue is eligible for collection in a similar manner to SAP queues. The collect-stats command enables or disables statistics collection into a billing file based on the accounting policy applied to the queue group.
Forwarding class redirection for a network IP interface is defined in a four step process.
The association rules work differently for network egress IP interfaces than they do for access SAPs. Since the network QoS policy does not directly reference the queue group names, the system is unable to check for queue group template existence or queue ID existence when the forwarding class queue redirection is defined. The configuration can only be verified at the time the network QoS policy is applied to a network IP interface.
The system keeps an association counter for each queue group template and an association counter for each queue ID within the template. The system also keeps an association counter for each queue group created on a port.
When a network QoS policy is applied to an IP interface with the queue group parameter specified:
If the operation above is successful, then:
When the queue group parameter is removed from an IP interface:
When a network QoS policy egress forwarding class redirection to a queue ID is removed or added:
If the operation above is successful, then:
When an IP interface associated with a queue group is bound to a port:
If the operation above is successful, then:
When an IP interface associated with a queue group is unbound from a port:
The statistics for network interfaces work differently than statistics on SAPs. Counter sets are created for each egress IP interface and not per egress queue. When a forwarding class for an egress IP interface is redirected from the default egress port queue to a queue group queue, the system continues to use the same counter set.
This feature adds support for separate ingress IPv4 and IPv6 statistics on IP interfaces: IES and VPRN interfaces, and subscriber group interfaces on IES and VPRN, as well as for uRPF. In previous releases, the ingress statistics for IPv4 and IPv6 traffic were combined into a single set of packet and byte counters. The existing counters now count only IPv4 traffic, while new separate counters are available for IPv6 traffic.
The feature introduces a new CLI command to explicitly enable ingress statistics on IP interfaces, changing the default to disabled.
The user applies a network QoS policy to the ingress context of a spoke-SDP to redirect the mapping of a Forwarding Class (FC) to a policer defined in a queue-group template that is instantiated on the ingress Forwarding Plane (FP) where the PW packets are received. (This feature applies to both spoke-SDP and mesh-SDP. Spoke-SDP is used throughout for ease of reading.)
config>service>vprn>if>spoke-sdp>ingress>qos network-policy-id fp-redirect-group queue-group-name instance instance-id
Let us refer to a queue-group containing policers as a policer queue-group. The user must instantiate this queue-group by applying the following command:
config>card>fp>ingress>network>queue-group queue-group-name instance instance-id
The policers are instantiated at the ingress FP, one instance per destination tap, and are used to service packets of this spoke-SDP that are received on any port on the FP (to support a network IP interface on a LAG) and on any network IP interface (to support ECMP on the network IP interface and LSP reroutes to a different network IP interface on the same FP).
In the ingress context of the network QoS policy, the user defines the mapping of a FC to a policer-id and instructs the code to redirect the mapping to the policer of the same ID in some queue-group:
config>qos>network>ingress>fc>fp-redirect-group policer policer-id
config>qos>network>ingress>fc>fp-redirect-group broadcast-policer policer-id
config>qos>network>ingress>fc>fp-redirect-group unknown-policer policer-id
config>qos>network>ingress>fc>fp-redirect-group mcast-policer policer-id
The user can redirect the unicast, broadcast, unknown, and multicast packets of a FC to different policers to allow for different policing rates for these packet types (broadcast and unknown are only applicable to VPLS services). However, the queue-group is explicitly named only at the time the network QoS policy is applied to the spoke-SDP as shown above with the example of the VPRN service.
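Assembled into a fuller sketch, the ingress redirection might look as follows; the policy ID, service ID, SDP binding, queue group name, and interface name are all illustrative:

    configure qos network 20 create
        ingress
            fc be
                fp-redirect-group policer 1
            exit
        exit
    exit
    configure card 1 fp 1 ingress network queue-group "qg-pol" instance 1 create
    configure service vprn 300 interface "to-core" spoke-sdp 5:1 ingress qos 20 fp-redirect-group "qg-pol" instance 1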
When the FC of a PW is redirected to use a policer in the named queue-group, the policer feeds the existing per-FP ingress shared queues referred to as policer-output-queues. These queues are shared by both access and network policers configured on the same ingress FP. The shared queue parameters are configurable using the following command:
config>qos>shared-queue policer-output-queues
The CLI configuration in this section uses a spoke-SDP defined in the context of a VPRN interface. However, the PW shaping feature is supported with all PW-based services, including the PW template.
Operationally, the provisioning model in the case of the ingress PW shaping feature consists of the following steps:
The following are the constraints and rules of this provisioning model when used in the ingress PW shaping feature:
When a PW is redirected to use a policer queue-group, the classification of the packet for the purpose of FC and profile determination is performed according to the default classification rules or the QoS filters defined in the ingress context of the network QoS policy applied to the PW. This is true regardless of whether an instance of the named policer queue-group exists on the ingress FP on which the PW packet is received. The user can apply a QoS filter matching the dot1p in the VLAN tag corresponding to the Ethernet port encapsulation, the EXP in the outer label when the tunnel is an LSP, the DSCP in the IP header if the tunnel encapsulation is GRE, and the DSCP in the payload's IP header if the user enabled the ler-use-dscp option and the PW terminates in an IES or VPRN service (spoke-interface).
When the policer queue-group name to which the PW is redirected does not exist, the redirection command fails. In this case, packet classification is performed according to the default classification rules or the QoS filters defined in the ingress context of the network QoS policy applied to the network IP interface on which the PW packet is received.
The user applies a network QoS policy to the egress context of a spoke-SDP to redirect the mapping of a Forwarding Class (FC) to a policer and/or a queue that is part of a queue-group instance created on the egress of a network port.
config>service>vprn>if>spoke-sdp>egress>qos network-policy-id port-redirect-group queue-group-name instance instance-id
The queue-group queues or policers are instantiated at the egress port, one instance per network port and per link of a LAG network port, and are used to service packets of this spoke-SDP that are forwarded over any network IP interface on this port.
config>port>ethernet>network>egress>queue-group queue-group-name instance instance-id
In the egress context of the network QoS policy, the user defines the mapping of a FC to a policer-id or a queue-id and instructs the code to redirect the mapping to the queue or policer of the same ID in some queue-group. However, the queue-group is explicitly named only at the time the network QoS policy is applied to the spoke-SDP as shown above with the example of the VPRN service. The command is as follows:
config>qos>network>egress>fc>port-redirect-group {queue queue-id | policer policer-id [queue queue-id]}
There are three possible outcomes when executing this command.
The CLI configuration in this section uses a spoke-sdp defined in the context of a VPRN interface. However, the PW shaping feature is supported with all PW-based services, including the PW template.
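The egress pieces might be assembled as in the following sketch; the policy ID, service ID, SDP binding, port ID, and queue group name are illustrative:

    configure qos network 21 create
        egress
            fc af
                port-redirect-group queue 2
            exit
        exit
    exit
    configure port 1/1/2 ethernet network egress queue-group "qg-egr" instance 1 create
    configure service vprn 300 interface "to-core" spoke-sdp 6:1 egress qos 21 port-redirect-group "qg-egr" instance 1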
This provisioning model allows the mapping of one or more PWs to the same instance of queues, or policers and queues, that are defined in the queue-group template.
Operationally, the provisioning model consists of the following steps:
The following are the constraints and rules of this provisioning model:
When the queue-group name to which the PW is redirected exists and the redirection succeeds, the marking of the packet's DEI/dot1p/DSCP and the tunnel's DEI/dot1p/DSCP/EXP is performed according to the relevant mappings of the {FC, profile} in the egress context of the network QoS policy applied to the PW. This is true if an instance of the queue-group exists on the egress port to which the PW packet is forwarded. If the packet's profile value changed due to egress child policer CIR profiling, the new profile value is used to mark the packet's DEI/dot1p and the tunnel's DEI/dot1p/EXP, and the DSCP/prec is remarked if enable-dscp-prec-remarking is enabled under the policer.
When the redirection command succeeds but there is no instance of the queue-group on the egress port, or when the redirection command fails due to a nonexistent queue-group name, the marking of the packet's DEI/dot1p/DSCP and the tunnel's DEI/dot1p/DSCP/EXP fields is performed according to the relevant commands in the egress context of the network QoS policy applied to the network IP interface to which the PW packet is forwarded.
The user enables IP precedence or DSCP based egress re-classification by applying the following command in the context of the network QoS policy applied to the egress context of a spoke-SDP.
config>qos>network>egress>prec ip-prec-value [fc fc-name] [profile {exceed | out | in | inplus}]
config>qos>network>egress>dscp dscp-name [fc fc-name] [profile {exceed | out | in | inplus}]
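For example, a network QoS policy could reclassify traffic on egress as follows; the policy ID, match values, FCs, and profiles are illustrative:

    configure qos network 22 create
        egress
            prec 5 fc ef profile in
            dscp af31 fc af profile out
        exit
    exit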
The IP precedence bits used to match against prec reclassification rules come from the Type of Service (ToS) field within the IPv4 header or the Traffic Class field within the IPv6 header.
The IP DSCP bits used to match against DSCP reclassification rules come from the Type of Service (ToS) field within the IPv4 header or the Traffic Class field from the IPv6 header.
If the packet does not have an IP header, DSCP or IP-precedence based matching is not performed.
The IP precedence and DSCP based re-classification are supported on a network interface, on a CSC network interface in a VPRN, and on a PW used in an IES or VPRN spoke-interface. The CLI blocks the application of a network QoS policy with the egress re-classification commands to a network IP interface or to a spoke-SDP that is part of an L2 service. Conversely, the CLI does not allow the user to add the egress re-classification commands to a network QoS policy if it is being used by an L2 spoke-SDP.
In addition, the egress re-classification commands only take effect if the redirection of the spoke-SDP or CSC interface to use an egress port queue-group succeeds; for example, the following CLI commands succeed:
config>service>vprn>if>spoke-sdp>egress>qos network-policy-id port-redirect-group queue-group-name instance instance-id
config>service>ies>if>spoke-sdp>egress>qos network-policy-id port-redirect-group queue-group-name instance instance-id
config>service>vprn>nw-if>qos network-policy-id port-redirect-group queue-group-name instance instance-id
When the redirection command fails in CLI, the PW uses the network QoS policy assigned to the network IP interface; however, any reclassification in the network QoS policy applied to the network interface is ignored.
A new statistic displaying the number of valid ingress packets received on a SAP, or subscribers on that SAP, is shown below in the sap-stats output. This is available for SAPs in all services. This is particularly useful to display SAP level traffic statistics when forwarding classes in a SAP ingress policy have been redirected to an ingress queue group.
In the example below, traffic is received on an ingress FP policer with a packet-byte-offset of subtract 10. It can be seen that the ingress queueing stats and offered forwarding engine stats are all zero as the traffic is using the FP ingress policer. The Received Valid statistic is non-zero and matches that seen on the ingress FP queue group, with the difference being that the packet-byte-offset is applied to the queue group policer octets but not the Received Valid octets.
The value in the Received Valid field may not instantaneously match the sum of the offered stats (even in the case where all traffic is using the SAP queues) when traffic is being forwarded, however, once the traffic has stopped the Received Valid will equal the sum of the offered stats.
The PW forwarded packet and octet statistics (SDP binding statistics) are currently supported for both ingress and egress and are available via show command, monitor command, and accounting file. These statistics consist of the ingress-forwarded and ingress-dropped packet and octet counters, as well as the egress-forwarded packet and octet counters. However, they do not include discards in the ingress network queues. The latter are counted in the stats of the queues defined in the network-queue policy applied to the ingress of the MDA/FP.
The ingress and egress SDP binding stats do not count the label stack of the PW packet but count the PW Control Word (CW) if included in the packet.
With the introduction of the PW shaping feature, the ingress or egress queue-group policer or queue to which a PW FC is redirected also provides packet and octet forwarded and dropped statistics, available by means of the show command, monitor command, and accounting file of the ingress or egress queue-group instance.
Similar to the SDP binding stats, the ingress policer stats for a spoke-SDP do not count the label stack. When the spoke-SDP is part of an L2 service, they count the L2 encapsulation (minus the CRC, and minus the VLAN tag if popped), and they also count the PW CW, if included in the packet. When the spoke-SDP is part of an L3 service, the policer stats count only the IP payload and do not count the PW CW. Unlike the ingress SDP binding stats, if the user enables the packet-byte-offset {add bytes | subtract bytes} option under the queue-group policer, then the policer stats reflect the adjusted packet size in both L2 and L3 spoke-SDPs.
The egress queue-group policer and/or queue counts the full label stack of the PW packet including the CW. If the user enables the packet-byte-offset {add bytes | subtract bytes} option under the queue-group policer and queue-group queue, then the policer and queue stats reflect the adjusted packet size.
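As an illustration of the packet-byte-offset behavior described above, the option is configured under the policer (or queue) within a queue-group template. The template name, policer ID, and rate in this sketch are illustrative, not taken from this section:

```
configure
    qos
        queue-group-templates
            ingress
                queue-group "ingress-qg-1" create
                    policer 1 create
                        rate 10000
                        packet-byte-offset subtract 10
                    exit
                exit
            exit
        exit
```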
The SDP binding and queue-group statistics do, however, remain separate, as one or more PWs can have FCs redirected to the same policer ID in the queue-group instance.
When a port queue group is created on a Link Aggregation Group (LAG) context, it is individually instantiated on each link in the LAG.
The queue parameters for a queue within the queue group are used for each port queue and are not divided or split between the port queues representing the queue group queue. For instance, when a queue rate of 100Mbps is defined on a queue group queue, each instance of the queue group (on each LAG port) will have a rate of 100Mbps.
A queue group must be created on the primary (lowest port ID) port of the LAG. If an attempt is made to create a queue group on a port other than the primary, the attempt will fail. When the group is defined on the primary port, the system attempts to create the queue group on each port of the LAG. If sufficient resources are not available on every port, the attempt to create the queue group will fail.
Any queue group queue overrides defined on the primary port will be automatically replicated on all other ports within the LAG.
When adding a port to a LAG group, the port must have the same queue groups defined as the existing ports on the LAG before it will be allowed as a member. This includes all queue group override parameters.
A queue group must be removed from the primary port of the LAG. The queue group will be deleted by the system from each of the port members of the LAG.
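The LAG behavior described above can be sketched as follows; the LAG ID, member ports, and queue-group name are illustrative. The queue group is configured only on the primary (lowest port ID) member, and the system replicates it on the other members:

```
configure lag 1
    port 1/1/1
    port 1/1/2
exit

configure port 1/1/1 ethernet access ingress
    queue-group "ingress-qg-1" create
    exit
```

An attempt to configure the same queue group directly on port 1/1/2 (a non-primary member) would fail.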
The following displays an ingress queue group template configuration example:
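The example output itself is not reproduced in this extract; a representative sketch of an ingress queue group template, with an illustrative template name, queue/policer IDs, and rates, could be:

```
configure
    qos
        queue-group-templates
            ingress
                queue-group "ingress-qg-1" create
                    description "Ingress queue group template"
                    queue 1 create
                        rate 50000
                    exit
                    policer 1 create
                        rate 10000
                    exit
                exit
            exit
        exit
```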
Note: To fully use the queue group feature to save queues, you must explicitly map all forwarding classes to queue group queues. This rule is applicable to SAP ingress, SAP egress and network QoS policies.
The following displays an egress queue group template configuration example:
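The example output itself is not reproduced in this extract; a representative sketch of an egress queue group template, with an illustrative name and rates, could be:

```
configure
    qos
        queue-group-templates
            egress
                queue-group "egress-qg-1" create
                    description "Egress queue group template"
                    queue 1 create
                        rate 50000
                    exit
                exit
            exit
        exit
```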
The following displays a SAP ingress policy configuration with group queue-group-name specified:
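The example itself is missing from this extract; a minimal sketch of such a policy, using an illustrative policy ID, forwarding class, queue IDs, and queue-group name, might look like:

```
configure
    qos
        sap-ingress 100 create
            queue 1 create
            exit
            fc "af" create
                queue 2 group "ingress-qg-1"
            exit
        exit
```

The group keyword redirects the FC from a local SAP queue to queue 2 of the named queue group.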
The following displays a SAP egress policy configuration with group queue-group-name specified:
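The example itself is missing from this extract; a minimal sketch, again with illustrative IDs and names:

```
configure
    qos
        sap-egress 100 create
            queue 1 create
            exit
            fc af create
                queue 2 group "egress-qg-1"
            exit
        exit
```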
The following displays a SAP egress policy configuration with the port-redirect-group-queue construct (shown for both regular and HS-MDA egress queues), where the actual queue-group-name is determined by the SAP egress QoS configuration:
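The example itself is missing from this extract. A sketch of the regular (non-HS-MDA) variant is shown here with illustrative IDs and names; in this construct the policy names no queue group, and the group and instance are supplied when the policy is applied on the SAP:

```
configure
    qos
        sap-egress 200 create
            fc af create
                queue 2 port-redirect-group-queue
            exit
        exit

configure
    service
        vpls 1
            sap 1/1/1:100 create
                egress
                    qos 200 port-redirect-group "egress-qg-1" instance 1
                exit
            exit
```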
The provisioning steps involved in using a queue-group queue on an ingress port are:
The following displays an Ethernet access ingress port queue-group configuration example:
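The example itself is missing from this extract; a sketch of instantiating an ingress queue-group template on an access port, with an illustrative port ID and template name:

```
configure
    port 1/1/1
        ethernet
            mode access
            access
                ingress
                    queue-group "ingress-qg-1" create
                    exit
                exit
            exit
        exit
```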
The following output displays a port queue group queue override example.
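The output itself is missing from this extract; a sketch of overriding a template queue's rate on a specific port instance, with illustrative values:

```
configure
    port 1/1/1
        ethernet
            access
                ingress
                    queue-group "ingress-qg-1" create
                        queue-overrides
                            queue 1 create
                                rate 25000
                            exit
                        exit
                    exit
                exit
            exit
        exit
```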
The provisioning steps involved in using a queue-group queue on an egress access port are:
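The steps above typically conclude with instantiating the egress queue-group template on the access port; a sketch with illustrative names:

```
configure
    port 1/1/1
        ethernet
            access
                egress
                    queue-group "egress-qg-1" create
                    exit
                exit
            exit
        exit
```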
The provisioning steps involved in using a queue-group queue on an egress network port are:
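For the network egress case, the queue group is instantiated on the network side of the port, and multiple instances of the same template may be created; the port ID, template name, and instance number here are illustrative:

```
configure
    port 1/1/2
        ethernet
            network
                egress
                    queue-group "egress-qg-1" instance 1 create
                    exit
                exit
            exit
        exit
```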
Once a queue within a template is mapped by a forwarding class on any object, the queue may be edited, but not deleted.
The provisioning steps involved in using a queue-group for ingress traffic on a network interface are:
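For ingress traffic on a network interface, the queue group is instantiated on the ingress forwarding plane (FP) rather than on a port, and the network QoS policy redirects forwarding classes to a policer in that group. A sketch, with illustrative card/FP numbers, policy ID, and names:

```
configure
    card 1
        fp 1
            ingress
                network
                    queue-group "ingress-qg-1" instance 1 create
                    exit
                exit
            exit
        exit

configure
    qos
        network 10 create
            ingress
                fc af
                    fp-redirect-group policer 1
                exit
            exit
        exit
```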
The following output displays a VPLS service configuration example.
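The output itself is missing from this extract; a minimal sketch of a VPLS service applying SAP-level QoS policies (whose FCs may be redirected to queue groups), with illustrative service, customer, SAP, and policy IDs:

```
configure
    service
        vpls 100 customer 1 create
            sap 1/1/1:100 create
                ingress
                    qos 100
                exit
                egress
                    qos 100
                exit
            exit
            no shutdown
        exit
```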