Diameter-Based Restrictions:
Gx-Based Restrictions:
The term Gx interface (or simply Gx) is used to refer to the implementation of the Gx reference point on the node. Gx reference points are defined in the 3GPP 29.212 document.
The Enhanced Subscriber Management (ESM) subscriber is a host or a collection of hosts instantiated on the Broadband Network Gateway. The ESM subscriber represents a household or a business entity for which various services with committed Service Level Agreements (SLA) can be delivered.
An AA subscriber is the representation of an ESM subscriber in the MS-ISA for the purpose of managing its traffic based on applications (Layer 7 awareness). An AA subscriber has no concept of ESM hosts.
BNG refers to the network element on which a Gx interface is implemented and policy rules are enforced (PCEF).
Flow – A flow in Gx context represents traffic matching criteria (traffic classification or traffic identification) based on any combination of the following fields:
A Gx flow is defined in the Flow-Information AVP:
Gx flows are similar to dynamically created filter or ip-criteria (QoS) entries and are inserted within the entry range configured for the base filter/qos-policy.
IP criterion – Fields in the IPv4/v6 packet header that are used as match criteria. The supported fields are the DSCP bits and the 5 tuple. This is part of traffic classification (or traffic identification) within the PCC rule or within the static qos-policy/filter entry.
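As an illustration, the match semantics described above (a classifier matching on any combination of DSCP and 5-tuple fields) can be sketched in Python; the dictionary-based field names are hypothetical and do not reflect SR OS syntax:

```python
def matches(classifier, packet):
    """Return True when every field the classifier specifies equals the
    corresponding packet field; unspecified fields match anything."""
    for field in ("src_ip", "dst_ip", "src_port", "dst_port", "protocol", "dscp"):
        wanted = classifier.get(field)  # None means "any"
        if wanted is not None and packet.get(field) != wanted:
            return False
    return True

# A classifier matching HTTP traffic towards one destination:
rule = {"dst_ip": "10.0.0.1", "dst_port": 80, "protocol": "tcp"}
pkt = {"src_ip": "192.0.2.1", "dst_ip": "10.0.0.1",
       "src_port": 40000, "dst_port": 80, "protocol": "tcp", "dscp": 46}
```

A classifier with no fields set matches all traffic, mirroring a wildcard entry.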
Gx policy rule – There are three types of Gx policy rules supported:
A PCC rule represents one or more IP-based classifiers (DSCP bits or 5 tuple) associated with one or more actions.
For example:
Each PCC rule can be removed via Gx from the node by referencing its name (Charging-Rule-Name AVP).
The PCC rule can contain a combination of QoS and IPv4/v6 filter actions as they pertain to the node.
PCC Rule Classifier — A flow-based (5 tuple) or a DSCP classifier defined in the Flow-Information AVP within the PCC Rule. There can be a single classifier or multiple classifiers (Flow-Information AVPs) within a single PCC rule. A PCC classifier (Flow-Information AVP) corresponds to an entry (match criteria) within the filter/ip-criteria definition.
CAM entry – A single entry in the CAM that counts toward the CAM scaling limit. For example, a match condition within the ip-criteria of a filter or qos-policy can evaluate into a single CAM entry or into multiple entries (where port ranges are configured in the classifier, or where matching against the UDP and TCP protocols is enabled simultaneously).
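To illustrate the expansion described above, the following Python sketch estimates how many CAM entries a classifier consumes; the prefix-aligned range expansion is an assumption for illustration, as the actual hardware expansion is platform specific:

```python
def range_to_masked_entries(lo, hi):
    """Split port range [lo, hi] into the minimal set of prefix-aligned
    (value, mask) entries, the classic TCAM range-expansion approach."""
    entries = []
    while lo <= hi:
        # largest power-of-two block starting at lo that fits in [lo, hi]
        size = lo & -lo if lo else 1 << 16
        while size > hi - lo + 1:
            size >>= 1
        entries.append((lo, 0xFFFF ^ (size - 1)))
        lo += size
    return entries

def cam_entries(port_lo, port_hi, protocols):
    # Matching both UDP and TCP duplicates every port entry.
    return len(range_to_masked_entries(port_lo, port_hi)) * len(protocols)
```

A single port with a single protocol needs one entry; an aligned range such as 1024-2047 is still one entry, but matching it for both TCP and UDP doubles the count.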
Subscriber Host Policy – A collection of PCC rules (classifiers and actions), Gx overrides, NAS filter inserts and statically configured rules (CLI defined QoS or filter) that are together applied to the subscriber host.
An updated Diameter base has been implemented to facilitate future enhancements. The new Diameter base implementation provides the following advantages:
The legacy Diameter base is still supported along with the new implementation, although the two cannot be used simultaneously for an application (NASREQ/Gx/Gy) within a given Diameter node instance in the SR OS. Operators are advised to deploy the new Diameter base, as there are no plans to add new functionality to the legacy Diameter implementation. Furthermore, the legacy Diameter implementation will be discontinued in a future SR OS release, in accordance with the Nokia software discontinuation policy.
This section describes the operation of the updated Diameter base. The principal difference between the two implementations from the operator’s point of view is in the CLI. For this reason, the CLI descriptions are provided for both implementations. Some of the CLI commands remain the same for both implementations, and some of the commands are specific to their respective implementation (new or legacy). Each command that is specific to an implementation contains an explicit statement about the implementation to which it pertains.
The Diameter base protocol is used to provide reliable and secure communication for Diameter applications between and across Diameter peers (Diameter clients, agents, or servers). Its routing capabilities to transport traffic across Diameter nodes rely on Diameter Identities, which consist of host and realm names. The Diameter applications supported in SR OS are NASREQ, Gx, and Gy.
In SR OS, Diameter base protocol runs over TCP. It starts by establishing a TCP connection with peers, followed by capability exchanges. A data exchange occurs in the form of Attribute Value Pairs (AVPs). Some of these AVPs are used by the base protocol itself, while others are used by Diameter applications that are layered on top of it.
Diameter peers exchange messages via a transport level connection (TCP in SR OS). Each peer is identified by a unique Fully Qualified Domain Name (FQDN). On the IP level, Diameter peers are always directly connected.
Applications (NASREQ/Gx/Gy) relying on the Diameter base protocol use the concept of a session to communicate between two end nodes that may be adjacent to each other or multiple Diameter hops apart.
This concept is shown in Figure 190.
SR OS supports Diameter client functionality where it performs access control for a device running on the edge of the network. A Diameter client in the SR OS can only initiate a connectivity request towards a peer; it does not accept a connectivity initiation request from a peer.
Upon TCP connectivity establishment, Diameter peers in SR OS exchange Capability Exchange messages. These messages contain information about the peer such as the peer’s identity, supported applications, the protocol version, and security mechanisms. Capability Exchange messages do not traverse Diameter nodes but instead they are exchanged only between the peers (immediate Diameter next hops).
In this phase, the SR OS node performs peer authorization: the peer identity received in the Capability Exchange message is compared to the locally configured peer name. This peer name comparison in the SR OS is case insensitive. If the compared peer names do not match, the TCP session is closed.
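The case-insensitive peer authorization check can be sketched as follows (the function name is illustrative):

```python
def peer_authorized(configured_peer, received_origin_host):
    """Compare the locally configured peer name with the Origin-Host
    received in the Capability Exchange, case-insensitively."""
    return configured_peer.lower() == received_origin_host.lower()
```

If the comparison fails, the TCP session is closed.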
The SR OS node always advertises all three supported applications (NASREQ, Gx, and Gy) in the Capability Exchange, regardless of whether these applications are configured in the SR OS node.
If no common applications are negotiated, the peer (the receiver of a Capability Exchange Request - CER) must return a Capability Exchange Answer (CEA) with the Result-Code AVP set to DIAMETER_NO_COMMON_APPLICATION and should disconnect the transport layer connection.
If the application ID of 0xffffffff (Diameter Relay Agent - DRA) is received by the SR OS, then this is interpreted as having a common application with the peer.
Diameter names and realms learned from peers through the Capability Exchange are used in the SR OS to populate local peer and realm routing tables that are used for forwarding and routing Diameter messages.
The SR OS may initiate termination of the peering connection due to the following reasons:
Diameter realms are administrative domains that have a relationship with the user account. For example, a Diameter client can be common to multiple departments with different domain or realm names, for example, xyz.com and wvu.com. In some cases, realms can represent different geographical regions. Regardless of the case, the realm names are maintained in Diameter nodes (as ‘routes’) and are used for routing of Diameter messages.
In general, the realm can be regarded as the string in a Network Access Identifier (NAI) that immediately follows the ‘.’ character (for example, jdoe.example.com). In SR OS, the realms are explicitly configured with a management interface as part of the Diameter configuration.
Forwarding and routing of the Diameter messages depends on the message type (requests vs answers). Diameter requests are forwarded/routed based on destination hosts/realms.
Forwarding and routing of request messages is based on the Diameter hosts or realms and is dependent on the two tables maintained in each node between the source and destination:
Selecting the peer to which a Diameter request message is sent may be a two-step process. First, if the request message contains the Destination-Host AVP, the peer table is checked for a matching peer. If one is available (the destination is directly connected), the request message is forwarded to it without further checking the routing table for the next hop. This is called the forwarding phase. If the matching host is not present in the peer table (the destination is not directly connected), or the Destination-Host AVP is not present in the request message (for example, CCR-I), then the Diameter routing table is checked for the matching realm. This is referred to as the routing phase. Each routable request message must contain the Destination-Realm AVP, and the destination realm is a mandatory configuration parameter in SR OS for a given Diameter application within the SR OS Diameter node.
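The two-phase decision described above can be sketched as follows, with the peer and realm tables reduced to dictionaries (all names and values are illustrative):

```python
def next_hop(request, peer_table, realm_table):
    """Pick the peer for a routable Diameter request."""
    # Forwarding phase: Destination-Host present and directly connected.
    host = request.get("Destination-Host")
    if host and host in peer_table:
        return peer_table[host]
    # Routing phase: route on the mandatory Destination-Realm.
    realm = request["Destination-Realm"]
    return realm_table[realm]

peers = {"pcrf1.example.com": "peer-1"}
realms = {"example.com": "peer-1", "ocs.net": "peer-2"}
```

A CCR-I without a Destination-Host AVP falls through to the routing phase and is routed on its realm alone.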
Diameter answer messages do not rely on the routing or forwarding tables. They are forwarded in the reverse direction of the matching requests using the transactional cache of each traversed Diameter node and the Hop-by-Hop AVP. The Hop-by-Hop AVP is set to a locally unique value by each Diameter node that forwards request messages. This AVP is then used to match request and answer messages in the reverse direction. It allows a Diameter answer to follow the same route as the corresponding Diameter request.
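The Hop-by-Hop matching described above can be sketched as follows; the dictionary-based transactional cache and message layout are simplifying assumptions, not SR OS internals:

```python
import itertools

_hbh = itertools.count(1)   # locally unique Hop-by-Hop values for this node
pending = {}                # our hop-by-hop id -> (upstream peer, upstream id)

def forward_request(msg, from_peer):
    """Rewrite the Hop-by-Hop value and remember the reverse path."""
    new_id = next(_hbh)
    pending[new_id] = (from_peer, msg["hop_by_hop"])
    return dict(msg, hop_by_hop=new_id)

def forward_answer(msg):
    """Send the answer back to the peer the request came from,
    restoring that peer's original Hop-by-Hop value."""
    peer, original_id = pending.pop(msg["hop_by_hop"])
    return peer, dict(msg, hop_by_hop=original_id)
```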
Host names and realm names are essential in Diameter routing. They are conveyed in Diameter messages through the following AVPs:
Origin-Host and Origin-Realm AVPs are used to identify nodes in Diameter networks. Every Diameter message must carry these two AVPs. These two parameters are statically provisioned per SR OS node. The origin realm name is an optional CLI parameter; if it is not configured, it is extracted from the origin host name. The first ‘.’ in the configured origin host name is used as a delimiter between the host portion and the realm portion. If there is no ‘.’ in the configured origin host name, then the origin host name is used as both the origin host name and the origin realm name. Origin host and realm names do not support online changes; once they are configured, they cannot be changed without first deleting the Diameter node hierarchy. When received in Capability Exchange Answer messages, they are used to populate the peer and realm tables in SR OS.
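The delimiter rule for deriving the origin realm from the origin host name can be sketched as:

```python
def derive_origin_realm(origin_host):
    """Everything after the first '.' is the realm; with no '.',
    the host name doubles as the realm name."""
    host, dot, realm = origin_host.partition(".")
    return realm if dot else origin_host
```

For example, an origin host of "bng1.operator.com" yields the realm "operator.com", while "bng1" yields "bng1" for both names.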
Destination-Host/Realm AVPs are used in forwarding or routing of Diameter request messages. The destination host name that is used by applications (Gx, Gy, and NASREQ) can only be learned from the incoming application level messages. The destination-realm is a mandatory configuration parameter in SR OS. The configured value is used in the initial request messages (CCR-I) when the destination host is not yet known (has not been learned yet). However, the configured destination-realm can be overwritten for the session if the value of the Destination-Realm AVP in an incoming application message differs from the configured value. In other words, the Destination-Realm AVP in an outgoing message is set to the value that is learned from the incoming message within the session.
Destination-Host or Destination-Realm AVPs are never present in:
Origins are creation-time parameters in the Diameter node CLI hierarchy. Multiple unique Diameter nodes can be instantiated within the same SR OS node.
Applications (Gx, Gy, and NASREQ) are associated with the Diameter node (and the origins) through the diameter-application-policy:
The destination-realm referenced in the diameter-node example above is used to determine the next hop for the initial request messages. A peer with the matching realm must be configured and becomes the next hop for all messages destined to this destination realm.
The centerpiece of the legacy Diameter base configuration in SR OS is the diameter-peer-policy, where all the Diameter base parameters related to communication with other external Diameter peers are defined (peering connections, DiameterIdentity, timers, and so on). This diameter-peer-policy is then associated with the Diameter applications (NASREQ, Gx, and Gy), which rely on it to interact with the outside world.
In the legacy Diameter base, the origins are configured under the diameter-peer-policy:
while the destinations are configured one level below, under the peers:
If these parameters are not configured, the diameter-peer-policy is operationally down.
Applications (NASREQ, Gx, and Gy) are associated with the diameter-peer-policy through the diameter-application-policy:
The Diameter configurations that are dynamically learned from a peer through the Capability Exchange are the received origin realm name (through the Origin-Realm AVP) and the App-ID. Both are used to populate the realm table (<realm, app-id> to peer mapping) and are used to find the next hop for the given destination-realm.
The Origin-Host AVP from the CEA is used to crosscheck the peer name in the peer table. A mismatch between the configured and learned (Origin-Host AVP in the CEA) peer names causes the TCP connection to be closed.
The destination-host name is learned by the application from the CCA-I and can be re-learned through subsequent incoming application level messages. For example, the origin host name in incoming application level messages becomes the destination host name in outgoing application level messages.
Message retransmissions are performed on the Diameter application level (NASREQ, Gx, and Gy) or on the TCP level. Diameter base itself does not retransmit messages.
Routable Diameter request messages can be retransmitted by the application (NASREQ, Gx, and Gy). If retransmission is enabled (by failover configuration on the application level), the original message is retransmitted only once and has the T-bit (retransmission bit) set.
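The single retransmission with the T-bit set can be sketched as follows; the 0x10 flag value follows the Diameter header definition in RFC 6733, while the message representation is illustrative:

```python
T_BIT = 0x10  # T (retransmission) flag in the Diameter header command flags

def retransmit_once(request):
    """Return the one-and-only retransmitted copy of a routable request,
    identical to the original except for the T-bit."""
    retry = dict(request)
    retry["flags"] = request.get("flags", 0) | T_BIT
    return retry
```

The T-bit lets the receiver recognize a possible duplicate after a failover.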
An application level retransmission of the routable Diameter request messages may occur due to:
Answer messages are never retransmitted on the Diameter application level. However, they can be retransmitted on a TCP level.
Retransmission of non-routable Diameter request messages is driven by an internal timer which is set to a fixed value of 10 seconds. Non-routable Diameter messages are messages required for maintenance of the peering connection. They never propagate beyond the peer. This internal timer is used in the following cases:
Diameter Watchdog messages receive special treatment. They are sent in the absence of any traffic on the peer level and only when the peer level watchdog timer expires. However, only one watchdog message is sent, and it is not retransmitted. Watchdog messages are used to detect the liveness of a peer.
Consecutive attempts to re-open a failed TCP peering connection are governed by the connection timer configured under the peer.
The following is a summary of retransmissions and timers:
Retransmissions:
Diameter watchdog message transmissions are governed by the peer level watchdog timer.
Timers include:
Python processing of Diameter messages is supported in the Diameter base, where processing can be enabled per message type and per direction.
The 3GPP-based Diameter Credit Control Online charging applications allow the control of subscriber access to services based on a pre-paid credit. The volume and time accounting on the node supports online charging using the Diameter Credit-Control Application (DCCA). The node supports Session Charging with Unit Reservation (SCUR), allowing the node to reserve volume and time quotas for rating-groups. Furthermore, the node supports centralized unit determination and centralized rating: it requests quotas and reports usage against the quota provided by the Online Charging Server (OCS). Credit control is always on a per-rating group basis. A rating group maps to a category inside a category map of the node volume-based and time-based accounting function.
The following are the basic configuration steps:
The following are examples of Diameter online charging flows:
Scenario 1 — Depicts a redirect use-case:
When the quota is depleted, the subscriber is redirected to a web portal. When the credit is refilled, the OCS server notifies the BNG and provides a new quota. When a Final Unit Indication is received with an action other than terminate, the configured out-of-credit-action is installed. See Figure 191 and Figure 192.
Scenario 2 — Depicts a terminate use case:
When the quota is depleted after reception of a Final Unit Indication with action set to Terminate, the subscriber host is disconnected. The configured out-of-credit-action is ignored in this case. See Figure 193.
Abbreviations used in the previous drawings:
Abbreviation | Expansion |
CCR | Credit Control Request (-Initial, -Update, -Terminate) |
CCA | Credit Control Answer (-Initial, -Update, -Terminate) |
RAR | Re-Authentication Request |
RAA | Re-Authentication Answer |
MSCC | Multiple Services Credit Control |
GSU | Granted Service Unit |
RSU | Requested Service Unit |
USU | Used Service Unit |
RC | Result Code |
RR | Reporting Reason |
VT | Validity Time |
When all quota of a Diameter Gy credit control rating group is consumed and no additional quota is granted by the server, the out of credit action as configured in the credit-control-policy or category-map is executed.
The out of credit action is one of the following:
RFC 4006, Diameter Credit Control Application, specifies a graceful service termination mechanism using the Final-Unit-Indication to indicate that an Answer message contains the final units for the service. When the final units are consumed, the action is specified with the Final-Unit-Action AVP.
In SR OS, with the Final-Unit-Action AVP = TERMINATE, the subscriber host or session is disconnected. With the Final-Unit-Action = REDIRECT or RESTRICT_ACCESS, the out-of-credit-action as specified in the credit-control-policy or category-map is executed. In the case of REDIRECT, the URL specified in the Redirect-Server AVP is used when IPv4 HTTP redirect is enabled as the out of credit action and the following conditions are met:
In all other cases, the URL specified in the Redirect-Server-Address AVP is ignored and the configured URL is used if HTTP redirect is enabled as the out of credit action.
The Restriction-Filter-Rule and Filter-Id AVPs included in the Final-Unit-Indication AVP are ignored.
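The Final-Unit-Action handling described above can be sketched as follows; the action constants follow RFC 4006, while the out-of-credit action selection is reduced to a simplified placeholder:

```python
# Final-Unit-Action values per RFC 4006
TERMINATE, REDIRECT, RESTRICT_ACCESS = 0, 1, 2

def on_final_units_consumed(final_unit_action, server_url, configured_url,
                            server_url_usable):
    """Decide what happens when the final granted units are consumed.
    server_url_usable stands in for the conditions under which the
    Redirect-Server URL may be used (simplified here)."""
    if final_unit_action == TERMINATE:
        return ("disconnect", None)
    # REDIRECT / RESTRICT_ACCESS: execute the configured out-of-credit
    # action; for REDIRECT, prefer the server-provided URL only when usable.
    if final_unit_action == REDIRECT and server_url_usable:
        return ("http-redirect", server_url)
    return ("http-redirect", configured_url)
```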
Use the following show commands to find the active URL when an out-of-credit-action change-service-level is triggered that includes an IPv4 HTTP redirect action in the credit control ingress IP filter entries:
In a Diameter Gy application, that is, a Diameter Credit Control Application (DCCA), Credit Control Failure Handling (CCFH) determines the behavior of a credit control client in fault situations. When the CCFH value is set to CONTINUE and a failure occurs, the credit control client first attempts a failover procedure. If failover is not enabled or not supported by both client and server, or the failover is not successful, the client deletes the credit control session and continues the service to the end user without the Diameter Gy session until the user disconnects.
Extended Failure Handling (EFH) enables the credit control client to establish a new Diameter Gy session with the Online Charging Server (OCS) in failure situations where CCFH is triggered and the CCFH value is set to CONTINUE.
The following occurs when EFH is enabled and the CCFH value is set to CONTINUE.
Figure 194 shows an example of EFH.
If a failure occurs when EFH is active, a preconfigured time interim credit or volume interim credit with an optional validity time is assigned to all rating groups. A new Diameter Gy session setup is attempted each time the interim credit is used or the validity time expires. The following occurs when an attempt to re-establish the Diameter Gy session is made.
Figure 195 shows a sample call flow with EFH enabled. The following describes the call flow.
For EFH to become active, the Credit Control Failure Handling (CCFH) value must be set to CONTINUE.
For a new session, the CCFH value is set in the configuration:
For ongoing sessions, the CCFH value is determined from the configuration or can be overridden by the OCS by including the following AVP in an answer message (CCA-I or CCA-U):
EFH is triggered when the CCFH value is set to CONTINUE and one of the following conditions occurs:
Result Code | | Command Level (CCA-I) | Command Level (CCA-U) | MSCC Level |
4001 | DIAMETER_AUTHENTICATION_REJECTED | known | known | unknown |
4010 | DIAMETER_END_USER_SERVICE_DENIED | unknown | known | known |
4011 | DIAMETER_CREDIT_CONTROL_NOT_APPLICABLE | known | known | known |
4012 | DIAMETER_CREDIT_LIMIT_REACHED | unknown | known | known |
5003 | DIAMETER_AUTHORIZATION_REJECTED | known | known | known |
5030 | DIAMETER_USER_UNKNOWN | known | known | unknown |
5031 | DIAMETER_RATING_FAILED | unknown | known | known |
EFH interim credit can be specified in two ways:
A single validity time value can be specified and applied to all rating groups that have interim credit assigned, regardless of whether the interim credit is configured for all rating groups in the Diameter application policy or per rating group in the category map.
The maximum number of times that interim credit is assigned to all rating groups of a Diameter Gy session when EFH is active can be limited in the configuration. The max-attempts value also corresponds to the maximum number of attempts to establish a new Diameter Gy session with the OCS (the default maximum attempts = 10).
An attempt to establish a new Diameter Gy session with the OCS is made when one of the following conditions occurs.
When the maximum number of attempts is reached and a new Diameter Gy session is not yet successfully established, then the user session is terminated; that is, the corresponding subscriber hosts are deleted from the system.
The reporting of used EFH interim credit can be enabled using the following CLI command:
With reporting enabled, the following accumulated used credit is reported when a new Diameter Gy session is established with the OCS and usage reporting is triggered for a rating group:
By default, reporting is disabled and all the used credit from the initial Gy session and all used interim credit are discarded.
EFH is enabled by specifying no shutdown in the extended-failure-handling CLI context in a Gy Diameter application policy.
Table 187 lists the EFH states for a Diameter Gy session.
Extended Failure Handling State | Description |
Disabled | EFH is disabled (shutdown). EFH cannot be triggered for the Diameter Gy session. |
Enabled - active | EFH is enabled (no shutdown) and active. A failure event occurred that triggered EFH. Interim credit is assigned to all rating groups and when either the validity time expires or the interim credit is used for a rating group, a new attempt is made to establish a Diameter Gy session with the OCS. |
Enabled - inactive | EFH is enabled (no shutdown) and inactive. A Diameter Gy session with the OCS is established. EFH is activated for the Diameter Gy session if a trigger condition occurs. |
In this example, EFH is enabled and when active, 100 Mbytes of volume interim credit is assigned to all rating groups of the Diameter Gy session with a validity time of 900 s. The maximum number of attempts to establish a Diameter Gy session with the OCS is 96.
Therefore, a maximum of 96 x 100 Mbytes or 9.6 Gbytes can be consumed before the user session is terminated (that is, the subscriber host is deleted from the system) when the OCS remains unreachable. Alternatively, when less than 100 Mbytes per rating group is consumed every 15 minutes (900 s), the user is disconnected after 900 s x 96 = 24 hours when the OCS remains unreachable.
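The arithmetic in this example can be checked with a short calculation (values taken from the example above):

```python
# EFH example parameters: 96 attempts, 100 Mbytes interim volume credit,
# 900 s validity time per attempt.
max_attempts = 96
interim_volume_mbytes = 100
validity_time_s = 900

# Worst-case volume consumed before the user session is terminated:
total_volume_gbytes = max_attempts * interim_volume_mbytes / 1000   # 9.6

# Worst-case time before disconnect when usage stays under the credit:
total_time_hours = max_attempts * validity_time_s / 3600            # 24.0
```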
In this example, EFH is enabled and when active, the following interim credit is assigned:
For inactive users, the validity time ensures that new Diameter Gy session connection attempts with the OCS are made at regular intervals.
For each attempt to establish a Diameter Gy session with the OCS, a new session ID is used.
When a new Diameter Gy session is successfully established, the EFH unreported credit for each rating group is included in the used service units when reporting is triggered for that rating group on the new Diameter Gy session.
When no new Diameter Gy session is established after 96 attempts, the user session is terminated; that is, the subscriber hosts are deleted from the system.
Subscribers with Diameter Gy sessions that have EFH enabled can be displayed with the following CLI command:
where the EFH state can be set to:
Example output:
The following information is displayed in the Extended Failure Handling (EFH) section of the example:
State indicates that EFH is enabled and active. Another possible state is “inactive”. When EFH is disabled, no EFH information is included.
Attempts indicates the number of times interim credit has been assigned to all categories followed by an attempt to establish a new Diameter Gy session with the OCS. When the attempt to establish a new Diameter Gy session with the OCS is still failing after the Maximum Attempts value is reached, then the user session is terminated (that is, the subscriber hosts are deleted from the system).
Active time indicates the time since the EFH state became active for this subscriber session.
Total Active time indicates the cumulative time of all occurrences that EFH was active during the lifetime of this subscriber session.
Total Active Count indicates the number of times that EFH was active during the lifetime of this subscriber session.
For each category (rating group), the EFH Unreported Credit is displayed:
The Current Volume and Current Time counters contain, respectively, the unreported volume and time credit for the current occurrence of the EFH in an active state. These counters include the unreported used credit for the initial Diameter Gy session that caused the EFH state to become active and the unreported interim credit from previous attempts. Used interim credit for the current attempt per category (rating group) is shown in the following counters:
The Total Volume and Total Time counters contain, respectively, the accumulated total unreported volume and time credit for the previous occurrences of the EFH active state. The total counters are updated when the EFH state toggles from active to inactive. When interim credit reporting is enabled, the counters are reset to zero when the actual usage reporting happens for that rating group. When interim credit reporting is disabled, the counters accumulate the total unreported credit during the lifetime of the subscriber session.
Current and Total Volume EFH Unreported Credit counters are the sum of used ingress and egress octets.
For each category (rating group), the validity time is displayed:
The following fields are only displayed when the EFH state is active:
When EFH is disabled (shutdown), then the EFH information is not included in the credit control output.
The call flow in Figure 199 shows a scenario where EFH is activated before the session is established with the OCS. The scenario is similar when EFH is activated by a CCR-U message.
The call flow in Figure 200 shows a scenario where the maximum number of attempts to establish a Diameter Gy session with the OCS is reached.
This section describes two scenarios where EFH is activated during a graceful service termination initiated by the OCS with a FUI AVP. A graceful service termination with a FUI action equal to REDIRECT or RESTRICT_ACCESS relies on a validity time or RAR to trigger a new credit negotiation with the OCS. Because the OCS is unreachable, it cannot be verified whether a new quota has been granted. With EFH enabled, interim credit is assigned to guarantee service to the user until connectivity with the OCS is restored.
In the first scenario shown in Figure 201, the OCS initiates the graceful service termination with the Final-Unit-Action AVP = REDIRECT or RESTRICT_ACCESS. EFH is activated immediately after the out-of-credit action is installed.
In the second scenario shown in Figure 202, the OCS initiates the graceful service termination with the Final-Unit-Action AVP = REDIRECT or RESTRICT_ACCESS. EFH is activated after the FUI validity time expires.
In Diameter credit control, if the OCS is unreachable, EFH can be enabled to re-establish a Gy session with the OCS when the server becomes reachable again. This guarantees that no usage information is lost for billing. EFH applies to new and ongoing subscriber sessions.
If the OCS is unreachable after subscribers disconnect, the CCR-t that contains the final usage reporting is lost. CCR-t replay is a mechanism to recover final usage reporting data during OCS failure by replaying the CCR-t at configured intervals for a configured maximum lifetime or until an answer (CCA-t) is received from the OCS.
When enabled, CCR-t replay is triggered after:
Figure 203 illustrates CCR-t replay in action when the OCS is unreachable.
Where:
The following configuration example enables CCR-t replay for Diameter Gy sessions:
The ccrt-replay interval value can be configured in seconds between 60 seconds (1 minute) and 86,400 seconds (24 hours).
The ccrt-replay max-lifetime value can be configured in hours between 1 hour and 24 hours.
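The configured bounds for both ccrt-replay parameters can be sketched as a simple validation (the function name is illustrative):

```python
def validate_ccrt_replay(interval_s, max_lifetime_h):
    """Check the CCR-t replay bounds: interval 60..86,400 seconds,
    max-lifetime 1..24 hours."""
    if not 60 <= interval_s <= 86_400:
        raise ValueError("interval must be 60..86400 seconds")
    if not 1 <= max_lifetime_h <= 24:
        raise ValueError("max-lifetime must be 1..24 hours")
    return True
```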
Use the show subscriber-mgmt diameter-session ccrt-replay command to display Diameter Gy sessions that are in CCR-t replay mode.
Use the show subscriber-mgmt diameter-session ccrt-replay diameter-application-policy name command to display Diameter Gy sessions from the specified diameter application policy that are in CCR-t replay mode and per-diameter application policy statistics for those sessions.
Use the clear subscriber-mgmt diameter-session ccrt-replay diameter-application-policy name sessions command to drop all Diameter Gy sessions that are in CCR-t replay mode.
Use the clear subscriber-mgmt diameter-session ccrt-replay diameter-application-policy name statistics command to clear the CCR-t replay statistics and update the “Statistics last cleared time” timestamp.
Gx is a reference point in the network architecture model describing mobile service delivery. The network elements are described in various technical documents under the umbrella of 3GPP and are used to deliver, manage, report on, and charge end-user traffic for mobile users. The Gx reference point is used for policy control and charging control. As shown in Figure 204, it is placed between a policy server (Policy and Charging Rules Function (PCRF)) and a traffic forwarding node (Policy and Charging Enforcement Function (PCEF)) that enforces rules set by the policy server.
The Gx reference point is defined in the Policy and Charging Control (PCC) architecture within the 3GPP standardization body. The PCC architecture is defined in document 23.203, while the Gx functionality is defined in document 29.212. Gx is an application of the Diameter protocol (RFC 3588/6733).
Although the Gx reference point is defined within the 3GPP standardization body (spurred by the mobile/wireless industry), its applicability has spread to wire-line operation as well. For example, mobile operators that also have fixed line customers (residential plus business) would like to streamline policy management in their mobile and non-mobile domains with a single, already existing Gx-based policy management infrastructure. In other words, they want to integrate policy management of nodes serving fixed line subscribers into the system that currently manages the mobile subscriber base.
In such mixed environments, the node plays the role of a PCEF with an integrated TDF (Traffic Detection Function, or Application Awareness [AA] in ALU terminology) where policies and charging rules can be managed via PCRF.
With WiFi Offload as a new emerging application, supporting Gx reference point on nodes is becoming even more important.
The Gx interface on the node encompasses the following functionality:
Gx is applicable to Enhanced Subscriber Management (ESM) as well as to AA.
The Gx application is defined as a vendor specific Diameter application, where the vendor is 3GPP and the Application-ID for Gx application is 16777238. The vendor identifier assigned to 3GPP by IANA is 10415.
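For illustration, these identifiers appear on the wire in the Diameter AVP format defined in RFC 6733 §4.1. The following Python sketch encodes a 3GPP vendor-specific AVP; the helper is illustrative only and not part of any product API (the Charging-Rule-Name AVP code 1005 comes from 3GPP 29.212):

```python
import struct

VENDOR_3GPP = 10415           # IANA vendor ID assigned to 3GPP
GX_APPLICATION_ID = 16777238  # 3GPP Gx application ID

FLAG_VENDOR = 0x80     # V-bit: vendor-specific AVP
FLAG_MANDATORY = 0x40  # M-bit: receiver must understand this AVP

def encode_avp(code, data, vendor_id=None, flags=FLAG_MANDATORY):
    """Encode a single Diameter AVP (RFC 6733 section 4.1)."""
    if vendor_id is not None:
        flags |= FLAG_VENDOR
    header_len = 12 if vendor_id is not None else 8
    length = header_len + len(data)           # length excludes padding
    avp = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    if vendor_id is not None:
        avp += struct.pack("!I", vendor_id)
    avp += data
    avp += b"\x00" * (-len(data) % 4)         # pad to a 32-bit boundary
    return avp

# Charging-Rule-Name (AVP code 1005, 3GPP vendor-specific)
avp = encode_avp(1005, b"rule-1", vendor_id=VENDOR_3GPP)
```

Decoders recognize the AVP as vendor-specific by the V-bit and the Vendor-ID field carrying 10415.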
When the Diameter protocol runs over the Gx interface, the node (PCEF) acts as a Diameter client and the PCRF acts as a Diameter server. The Gx Diameter application uses existing Diameter command codes from the Diameter Base Protocol (RFC 6733) and the Diameter Credit Control Application (RFC 4006), both of which are already implemented on the node.
Gx uses Attribute-Value Pairs (AVPs) to represent data within its messaging structures (command codes). AVPs in Gx come from various sources:
The initialization and maintenance of the connection between the node (PCEF) and the PCRF is defined by the underlying Diameter protocol as defined in RFC 3588/6733.
Subscriber and AA policies on the node (PCEF with integrated TDF) are assigned through the Gx protocol from the policy server (PCRF).
There are two modes of operation:
In the pull mode, during the host creation process, a user is authenticated by the AAA server. This process is independent of the PCRF. Once the user is authenticated and an IP address is allocated to it, the node sends a request for policies to the PCRF via a CCR-i (initialization request) message. This communication occurs via the Gx interface. The subscriber-host must be uniquely identified in this request towards the PCRF. This subscriber identification over the Gx interface can be by means of an IP address, username, SAP ID, and so on.
Based on the user identification, PCRF submits policies to the node. Those policies can range from subscriber strings (sub/sla-profiles/AA-profiles) to QoS and filter-related parameters.
In the push mode, the PCRF initiates the mid-session policy change through the Re-Authentication Request (RAR) message (Figure 205).
If Usage-Monitoring is requested, PCRF-submitted policy changes are triggered by Credit Control Request (Update) messages. This is based on ESM or AA Usage-Monitoring. Once the specified usage threshold is reached on the session level, credit-category level, PCC rule level, or application level on the node, Usage-Monitoring is reported from the node to the PCRF in a CCR-u message. Refer to the 7450 ESS, 7750 SR, and VSR Multiservice Integrated Service Adapter Guide for details on AA-based Usage-Monitoring (Figure 206).
Alternatively, the PCRF can request usage reporting on demand via an RAR message.
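The threshold-triggered reporting described above can be sketched as a minimal Python model; the class and field names are illustrative, not an SR OS API:

```python
class UsageMonitor:
    """Threshold-based usage reporting: once the accumulated usage reaches
    the granted threshold, a usage report is due in a CCR-u message.
    Minimal sketch under assumed semantics."""

    def __init__(self, threshold_bytes):
        self.threshold = threshold_bytes
        self.used = 0

    def account(self, nbytes):
        """Accumulate usage; return a report dict when the threshold is hit."""
        self.used += nbytes
        if self.used >= self.threshold:
            report, self.used = self.used, 0       # report usage and reset
            return {"usage-report-bytes": report}  # would go into a CCR-u
        return None
```

For example, with a 1000-byte grant, accounting 600 bytes returns nothing, and a further 500 bytes triggers a report of the full 1100 bytes consumed.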
The IP Connectivity Access Network (IP-CAN) session is a concept that has its roots in mobile applications. A policy rule can be applied or modified via the Gx interface on an entity that is identified as an IP-CAN session (in addition to individual bearers within the IP-CAN session; the bearer concept is currently not applicable to the BNG). For example, a UE (User Equipment, such as a mobile phone) can host several services, each of which appears as a separate IP-CAN session associated with the same IP address. In mobile technologies, an IP-CAN session can be defined as <IP_address, APN, IMSI>, where:
In wireline environment (ESM deployments), an IP-CAN session identifies an entity to which the policy can be applied or modified. In SR OS, this can be a single or dual stack IPoE host, IPoE session, or PPPoE session.
For the purpose of identifying the subscriber host or session in the SR OS node in all Gx-related transactions, the SR OS node generates a unique Gx session-id AVP (RFC 6733, §8.8) per single or dual stack IPoE host, IPoE session, or PPPoE session. Note that the Gx Session-Id AVP is not the same as the Acct-Session-Id attribute used in RADIUS accounting.
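The Session-Id format mandated by RFC 6733 §8.8 can be illustrated with a short sketch; the Diameter identity and the optional value are placeholders, not actual SR OS output:

```python
import itertools
import time

_counter = itertools.count()

def make_session_id(diameter_identity, optional=None):
    """Build a Diameter Session-Id per RFC 6733 section 8.8:
    <DiameterIdentity>;<high 32 bits>;<low 32 bits>[;<optional value>]
    Illustrative sketch; the optional value could carry a host reference."""
    high = int(time.time()) & 0xFFFFFFFF  # coarse time component
    low = next(_counter) & 0xFFFFFFFF     # monotonically increasing counter
    parts = [diameter_identity, str(high), str(low)]
    if optional is not None:
        parts.append(optional)
    return ";".join(parts)

sid = make_session_id("bng1.example.com", optional="host-42")
```

The combination of the originator's identity and a monotonically increasing pair keeps the Session-Id globally and eternally unique, which is what allows it to key all Gx transactions for one host or session.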
If the IPoE session is enabled, the Gx session key can be based on one of the following combinations (configuration dependent):
If the IPoE session is disabled, the Gx session key for IPoE hosts (dual or single stack) is, by default, based on the {SAP,MAC} combination.
In an environment where a Layer 3 node is in front of the BNG, the MAC address of arriving packets is that of the Layer 3 node. In that case, it is not possible to differentiate between IPoE hosts on the same SAP unless the IPoE session concept is enabled in SR OS. Each IPoE session must then have a Circuit-id or Remote-id as a differentiator.
A session concept is native to PPPoE where the Gx session equates to a single or dual stack PPPoE session. This means that the Gx session key is based on a {MAC,SAP, PPPoE session-id} combination.
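The session-key rules above (PPPoE versus IPoE, with and without the IPoE session concept) can be condensed into a sketch; the dictionary keys are assumptions for illustration, not SR OS data structures:

```python
def gx_session_key(host):
    """Derive the Gx session key for a subscriber host or session,
    following the rules described above. Field names are illustrative."""
    if host["type"] == "pppoe":
        # PPPoE: the session concept is native (a PPPoE session-id exists)
        return (host["mac"], host["sap"], host["pppoe_session_id"])
    if host.get("ipoe_session_enabled"):
        # IPoE session enabled: Circuit-id or Remote-id differentiates hosts
        return (host["mac"], host["sap"],
                host.get("circuit_id") or host.get("remote_id"))
    # IPoE session disabled: default {SAP, MAC} key
    return (host["mac"], host["sap"])
```

Note how, behind a Layer 3 aggregation node, all IPoE hosts would share the same {MAC, SAP} key unless the IPoE session concept adds a Circuit-id or Remote-id component.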
The following identification related AVPs are sent to the PCRF through Gx messages to aid in IP-CAN session identification:
Physical and logical access IDs are also defined in BBF TR-134 (§7.1.4.1).
Table 188 shows PDP to PEP direction parameters.
Parameter | Category | Type | Description |
Logical access ID | User identification | Octet String | The identity of the logical access to which the user device is connected. It is stored temporarily in the AAA function connected to the PDP. This corresponds to the Agent Circuit ID of DHCP option 82 in the case of IPv4 and to the Interface-Id option in the case of IPv6 |
Physical Access ID | User identification | UTF8String | The identity of the physical access to which the user device is connected. It is stored temporarily in the AAA function connected to the PDP. This corresponds to the Agent Remote ID |
A Subscription-id AVP is most commonly used to identify the subscriber but any combination of the above listed parameters can be used to uniquely identify the IP-CAN session on PCRF and consequently identify the subscriber.
In addition, NAS-Port, NAS-Port-Type, and Called-Station-ID AVPs from RFC 4005 (§4.2, §4.4, §4.5) can be passed to the PCRF.
The node allows the NAS-Port-Id to be carried within the Subscription-Id AVP. Since the NAS-Port-Id may not be unique network-wide (two independent nodes may use the same NAS-Port-Id), there is a need for another identifier in conjunction with NAS-Port-Id to make the Subscription-Id unique across the network. This additional identifier is a custom string that can be appended to the NAS-Port-Id. It is defined when the NAS-Port-Id is configured for inclusion in Gx messages. Refer to the 7750 SR and VSR RADIUS Attributes Reference Guide for information to format NAS-Port-Id AVP on the node.
The string can be used to identify the location of the node. For example:
An example of the augmented NAS-Port-Id is 'lag-1.1/1/2:23.2000@ALU-MOV-SITE-1', where 'lag-1.1/1/2:23.2000' is the part referencing the SAP on the node (port plus VLAN tags) and '@ALU-MOV-SITE-1' identifies the node itself.
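The concatenation itself is a simple string append; the suffix below is the example string from above, not a required format:

```python
def augment_nas_port_id(nas_port_id, node_suffix=""):
    """Append a node-identifying string so that the NAS-Port-Id carried in
    the Subscription-Id AVP becomes unique network-wide (sketch only)."""
    return nas_port_id + node_suffix

sub_id = augment_nas_port_id("lag-1.1/1/2:23.2000", "@ALU-MOV-SITE-1")
# sub_id == "lag-1.1/1/2:23.2000@ALU-MOV-SITE-1"
```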
The NAS-Port-Id can be then inserted in the Subscription-Id AVP.
Policy management via Gx enables operators to consolidate policy management systems used in wireline (mostly based on RADIUS/CoA) and wireless environment (PCRF/Gx) into a single system (PCRF/Gx).
The model for policy instantiation/modification via Gx is similar to the one using RADIUS CoA. The authentication and IP address assignment is provided outside of Gx while the policy management function is provided via Gx.
Some PCRFs may require that the IP address information is passed from the node in CCR-i. This assumes that the IP address assignment phase (via LUDB, RADIUS or DHCP Server) is completed before the PCRF is contacted via CCR-i. Message flow for various protocols (DHCP, AAA, Gx) related to IPv4 subscriber-host instantiation phase is shown in Figure 207.
The CCR-i message is sent to the PCRF once the DHCP Ack is received from the DHCP server. Relaying the DHCP Ack to the client in the final phase of the host instantiation process depends on the answer from the PCRF and, if no answer is received, on the configuration settings of the fallback function.
This model allows the IP address of the host to be sent in the CCR-i message, even though the subscriber-host is not fully instantiated at the time when the CCR-i message is generated.
AAA/LUDB must still be used for authentication and assignment of parameters necessary to place the subscriber host in the proper routing context (service-id, grp-id, msap-policy).
The start of the accounting process fits naturally into this model, since the host is not instantiated until the policy information from all sources (Gx, AAA, defaults) is known. Once the final sub-profile (containing the acct-policy) is known, the host is instantiated and accounting can be activated.
The IP address itself cannot be assigned via Gx, and this functionality is outside of the Gx scope (3GPP TS 23.203 Rel12, Annex S, IP-CAN Session Establishment section).
The purpose of the CCR-i message is the following:
Figure 207 shows the message flow during the DHCPv4 host instantiation phase.
Message flow for PPPoEv4 host is similar. The host is instantiated once the answer from PCRF is received.
However, IPCP negotiation and Gx negotiation (CCR/CCA) are performed in parallel, independently of each other, and therefore the node does not wait for the Gx session to be established before the last IPCP ConfAck is sent (as it does for the DHCP Ack).
Once the host is instantiated on the node (after the CCA-i is received or as defined by the fallback action if the PCRF is not available), the Accounting-Start message is sent by the node (assuming that accounting is enabled).
The message flow is shown in Figure 208.
The host is created when the Gx session is established and therefore the subscriber host transitions into the traffic forwarding state once the Gx processing is completed. If the PCRF is unavailable or unresponsive, the host creation or termination is driven by the fallback configuration.
Dual-stack (DS) hosts are treated as a single session from the Gx perspective, regardless of whether the IPoE session concept is enabled. The difference between the IPoE session and non-IPoE session models lies in how hosts are identified for policy management purposes.
If the IPoE session concept is disabled, a dual-stack host is identified by its {MAC,SAP} combination. Consequently, the key used for Gx session creation is based on the same {MAC,SAP} combination.
In certain scenarios, this is not sufficient to differentiate hosts on the same SAP for Gx policy management purposes; as a result, a single Gx session is created for all hosts on that SAP. To avoid this, the IPoE session concept must be enabled in SR OS, where a Circuit-id or a Remote-id can be used for further differentiation between the Gx-managed entities (subscriber-hosts).
The PCRF submits the policy rules per Gx session. The rules are then applied to the underlying entity that is managed by this Gx session (single or dual stack IPoE host or IPoE session). For PPPoE deployments, the Gx session is automatically tied to the PPPoE session. The notion of a session is native to PPPoE (there is a session ID in PPPoE), and thus it is natural to conceptualize the relationship between a PPPoE session (for single or dual-stack hosts) and a Gx session. By contrast, the concept of a session for IPoE is artificial, which is why additional consideration is required for IPoE hosts, as described above.
The following example examines a case where IPoE session concept is disabled and consequently the IPoE dual-stack host is tied by a {MAC,SAP} combination.
The CCR-i contains the IP address that was allocated first (that is, the first IP address that triggered the session creation). The request for the second IP address family triggers (if enabled by configuration) an additional CCR-u that carries the IP address allocation update to the PCRF along with the UE_IP_ADDRESS_ALLOCATE (18) event.
Separately, the CCR-u content mirrors the content of the CCR-i, with the exception of the pre-allocated IP address or addresses. A single Gx message (CCR-i or CCR-u) carries the update for the DHCPv6 IA-NA+IA-PD and DHCPv6/PPPoE NA+PD address/prefix. This assumes that the NA+PD is requested in a single DHCP message.
Similarly, for the Gx session teardown, the CCR-u messages are sent carrying the UE_IP_ADDRESS_RELEASE event, followed by the CCR-t message.
The message flow is depicted in Figure 209.
For a Dual-Stack PPPoE host, the CCR-i is sent when the first IP address is assigned to the host. In the example in Figure 209, processing of the DHCPv6 Reply and CCR-u messages is performed in parallel. In other words, sending the DHCPv6 Reply message to the client is not delayed until the response from the PCRF is received. The reason is that the Gx session is already established (triggered by the IPv4 host in our example) and all parameters for IPv4 and IPv6 are already known, as received in the CCA-i. The CCR-u message is then simply a notification, informing the PCRF about the new IPv6 address/prefix being assigned to an existing client.
For PPPoE v6 hosts, the IPv6 address is not obtained during the IPCP phase (only the interface-id is negotiated). Consequently, the node waits until the IPv6 address/prefix is allocated to the IPv6 host before it sends the CCR-i message. Otherwise, the IP address would not be available in the CCR-i. This is shown in Figure 210.
The Gx fallback functionality refers to the behavior related to the subscriber host instantiation in situations where the PCRF is unresponsive while peering connection(s) are up or the PCRF is unavailable with all peering connections down. This functionality affects only Gx session processing related to CCR-i messages on the node and has no effect on already established Gx sessions.
The fallback behavior can be controlled via local configuration on the node or can be controlled via certain AVPs provided by PCRF.
PCRF-provided AVPs that control fallback behavior are:
If the fallback-related AVPs are not provided via PCRF, the node can provide a local configuration option to define the fallback behavior. If the response from the PCRF cannot be obtained, the local configuration can allow the subscriber host to be instantiated with default parameters, or alternatively the local configuration can deny subscriber host instantiation.
PCRF provided AVPs overrule the local configuration.
The local configuration that defines Gx fallback behavior can be found under the following CLI hierarchy:
The failover configuration option (equivalent to the CC-Session-Failover AVP) controls whether the secondary peer is used if the primary peer is unresponsive. Unresponsiveness is determined by the timeout of the previously sent message.
The handling configuration option (equivalent to Credit-Control-Failure-Handling AVP) controls whether the subscriber is terminated or instantiated with default parameters if the PCRF is unresponsive.
The fallback behavior for the combinations of the failover and handling settings is as follows:

Failover: ENABLED
- Once the message sent to the primary peer times out, the secondary peer (and consecutive peers after that) is attempted. Once the message times out after it has been sent to all available peers, the handling action determines whether to terminate the host instantiation attempt or to instantiate the host with default parameters.
- With handling CONTINUE, the subscriber host is instantiated with default parameters (if they are configured).
- With handling RETRY-AND-TERMINATE, the subscriber host instantiation is terminated.
- With handling TERMINATE, the message is sent to the primary peer only; once it times out, the subscriber host instantiation is terminated (no failover is attempted).

Failover: DISABLED
- Only the primary peer is attempted. Once the message sent to the primary peer times out, the handling action determines the outcome.
- With handling CONTINUE, the subscriber host is instantiated with default parameters (if they are configured).
- With handling RETRY-AND-TERMINATE or TERMINATE, the subscriber host instantiation is terminated.
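The fallback matrix above reduces to two questions: which peers are tried, and what happens to the host. A compact sketch of that decision, for illustration only:

```python
def host_fate_on_timeout(failover_enabled, handling):
    """Outcome of an unanswered CCR-i, following the fallback rules above.

    handling is one of "continue", "retry-and-terminate", "terminate".
    Returns (peers_tried, fate). Illustrative sketch only."""
    # TERMINATE never fails over; with failover disabled only the primary is tried
    peers_tried = ("all" if failover_enabled and handling != "terminate"
                   else "primary-only")
    # Only CONTINUE falls back to default parameters (if configured)
    fate = "defaults" if handling == "continue" else "terminated"
    return peers_tried, fate
```

For example, with failover enabled and handling CONTINUE, all peers are tried and the host comes up with defaults; with handling TERMINATE, a single primary-peer timeout ends the instantiation attempt.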
The CCR retransmissions are controlled by the tx-timer command under diameter-application-policy. See Subscriber Management Command Reference for the description of retransmission handling.
If all peers are down (no connections are open), the handling action determines the behavior. If the action is set to continue, the subscriber host is immediately instantiated with the default-settings (provided that the defaults are available). In all other action cases, the host instantiation is immediately terminated.
As described in the previous section, the subscriber host can optionally (controlled by configuration) be established with default settings (sla-profile, sub-profile, app-profile) if the PCRF is not available to answer the CCR-i. This results in a subscriber-host state mismatch between the node and the PCRF, where the subscriber-host is established on the node but there is no corresponding Gx session established in the PCRF.
In order to resolve this situation, ESM periodically sends CCR-i for the Gx orphaned subscriber-host until the response from PCRF is received. The CCR-i is periodically retransmitted every 60 seconds.
Termination of the subscriber-host on the node without termination of the corresponding Gx session in PCRF results in a state mismatch between the node and the PCRF, whereby the host Gx session is present in the PCRF while it is removed from the node.
Some PCRFs can cope with such an out-of-sync condition by periodically auditing all existing Gx sessions. For example, a probing RAR can be sent periodically for each active Gx session. The sole purpose of this probing RAR is to solicit a response from the PCEF (node) and provide an indication of whether the corresponding Gx session is still alive on the node or has vanished. The probing RAR can contain an Event-Trigger that is already applied on the node or, if none is applied, the Event-Trigger can contain NO_EVENT_TRIGGER. In either case, the probing RAR does not cause any specific action to be taken on the node; it is used only to solicit a reply from the node.
To minimize the impact on performance, probing RARs are sent infrequently; therefore, it may take days to discover a stale Gx session on the PCRF. The node supports a mechanism that can clear a stale session in the PCRF sooner. It does this by replaying CCR-t messages until the proper response (CCA-t) is received from the PCRF. The CCR-t messages are replayed for up to 24 hours. Both the interval at which the CCR-t messages are replayed and the max-lifetime period are configurable. If the max-lifetime period expires before a valid answer is received, the CCR-t is deleted and a log is generated. The log contains the Gx Session-Id.
The T-bit (retransmission bit) is set in all replayed CCR-t messages.
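The replay schedule can be sketched as a simplified model; actual timer behavior is implementation-specific:

```python
def ccr_t_replay_offsets(replay_interval, max_lifetime=24 * 3600):
    """Offsets (in seconds) at which a CCR-t is replayed until the
    max-lifetime expires; every replay carries the T-bit. Sketch only."""
    offsets, t = [], 0
    while t + replay_interval <= max_lifetime:
        t += replay_interval
        offsets.append(t)
    return offsets
```

A 60-second replay interval over a 10-minute lifetime yields 10 replays; the default 24-hour lifetime with a one-hour interval would yield 24.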
The following command clears all orphaned sessions on the node for a specified Diameter application policy:
Certain scenarios allow the PCRF to send a RAR message to an orphaned Gx session running CCR-t replays on the node. The ESM host associated with the orphaned Gx session does not exist and therefore RAR cannot be applied.
In this scenario, the node replies with an RAA carrying Result-Code = DIAMETER_UNKNOWN_SESSION_ID (5002).
The first CCR-t replay for each Gx session is synchronized, but consecutive CCR-t replays for the same Gx session are not. Once the answer (CCA-t) is received, the CCR-t replay is terminated and this event (deletion of the CCR-t replay) is synchronized to the other node.
CCR-t replays are sent from the node that was in SRRP active state at the time when the CCR-t was initiated. They continue to be sent from the same node even if the SRRP is switched over in the meantime.
This entire process can be thought of as if the CCR-t initiating node (active SRRP) armed its MCS peer with CCR-t replay for a given Gx session. This occurs at the very beginning, when a CCR-t replay is first initiated for a given Gx session. The armed node stays silent until the MCS peer that is actively sending CCR-t replays for a given Gx session, reboots. Only when the MCS peer reboots, the armed node starts sending CCR-t replays for a given Gx session in the following fashion: first message is sent with cleared T-bit, followed by replays at the configured replay interval and a fresh 24 hour lifetime.
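The ownership rules above can be summarized in a sketch; the event names are illustrative, not product terminology:

```python
def ccr_t_replayer(owner, armed_peer, event=None):
    """Which redundant node sends CCR-t replays for a given Gx session.
    The initiating (owner) node keeps replaying through SRRP switchovers;
    the armed peer takes over only when the owner reboots. Sketch only."""
    if event == "owner-reboot":
        # The armed peer starts fresh: first message without the T-bit,
        # then replays at the configured interval for a new 24-hour lifetime
        return armed_peer
    # Includes event == "srrp-switchover": ownership is unchanged
    return owner
```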
On CPM switchover, the newly active CPM resumes the CCR-t replay with T-bit set until the lifetime, which is synchronized between the CPMs, expires.
During the subscriber-host setup phase, the first allocated IP address is sent in the CCR-i message from the node to the PCRF.
Each subsequent IP address allocation or de-allocation for the same host can optionally trigger a CCR-u, notifying the PCRF of the IP address allocation/de-allocation event.
This behavior can be enabled via the following CLI command:
The IP address allocation/de-allocation event driven CCR-u message carries the respective event code [UE_IP_ADDRESS_ALLOCATE(18) or UE_IP_ADDRESS_RELEASE(19)] along with the corresponding IP address.
The IP address allocation/de-allocation events are applicable to the following addresses:
These event-codes are only sent in CCR-u messages and not in CCR-i and CCR-t messages (when the host is instantiated and terminated).
Examples:
If the IP address notification event is enabled, the node-originated Gx message carries all known IP addresses/prefixes associated with the subscriber-host (Gx session), unless those messages contain one of the two event codes:
UE_IP_ADDRESS_ALLOCATE(18) or UE_IP_ADDRESS_RELEASE(19).
If one of those two events is present in the Gx message, the IP address/prefix carried in that message is only relevant to the event contained in the message (address/prefix allocated or released).
If the IP address notification event is disabled, the node only sends the IP address of the first host. This IP address is included in all messages related to the Gx session. If this IP address is removed (de-allocated) mid-session from the dual-stack host, the node stops including it, or any other address, in Gx messages for that particular session.
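The address-selection rules above can be condensed into a sketch; the parameter names are assumptions for illustration:

```python
UE_IP_ADDRESS_ALLOCATE, UE_IP_ADDRESS_RELEASE = 18, 19

def addresses_in_message(session_addrs, event=None, event_addr=None,
                         notify_enabled=True):
    """Select which IP addresses/prefixes a node-originated Gx message
    carries, per the rules described above. Sketch only."""
    if event in (UE_IP_ADDRESS_ALLOCATE, UE_IP_ADDRESS_RELEASE):
        return [event_addr]            # only the address the event refers to
    if notify_enabled:
        return list(session_addrs)     # all known addresses for the session
    return list(session_addrs[:1])     # only the first allocated address
```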
If re-authentication for DHCPv4/v6 hosts is enabled, any policy change that may be submitted during re-authentication (for example, an sla-profile update via Access-Accept) overwrites the one previously applied, regardless of the source of the policy update. For example, if the Gx policy is applied to a subscriber host via RAR (mid-session policy update) and some time later an overlapping policy with different values is submitted via RADIUS or LUDB during the re-authentication phase, the RADIUS/LUDB-submitted policy overwrites the one applied via Gx. In other words, the origin of the policy currently in effect is not maintained internally in the system, and therefore an overlapping policy update cannot be prioritized according to the source of the policy.
The following guidelines should be followed if the policy is provided via Gx:
These guidelines are not applicable for PPPoE subscriber-hosts since re-authentication cannot be enabled for PPPoE hosts. Consequently, LUDB or RADIUS parameters cannot override Gx provided parameters.
Coexistence of RADIUS CoA and Gx for the same host is allowed. The two policy change mechanisms are independent of each other and as such they can override each other. For example, if the RADIUS CoA for policy change for the host is received, the policy is updated but the PCRF (Gx) is not notified of the change. If both policy management mechanisms are deployed simultaneously, then it is the operator’s responsibility to synchronize the actions between the two.
Although the ESM subscriber and the AA subscriber are two separate instantiations on the node, their policy management and Usage-Monitoring are handled uniformly through a single Gx session.
Since the ESM and AA modules are part of an integrated service offering (ESM with residential AA on the same node), they share the same subscriber-id string. However, the Gx interface in ESM is primarily applicable to hosts (the basic entity to which policy is applied), while AA has no awareness of hosts. AA is only aware of subscribers (in broader terms, a collection of hosts within a residence). Refer to the 7450 ESS, 7750 SR, and VSR Multiservice Integrated Service Adapter Guide for details on Application Assurance concepts.
AA subscriber state must exist for App-profiles and ASO overrides to be applied.
The app-profile for the aa-sub is applied explicitly by a CCR-i or RAR message with an AVP AA-App-Profile-Name.
App-profiles interact with ASO characteristics in this way:
![]() | Note: If an app-prof AVP is present, even if it is the same app-profile as currently applied, all previous ASO override policies are removed for the sub. |
The state of the subscriber policy attributes is modified by ASO AVPs in this way:
A policy change can be implicitly requested by the node at IP-CAN session establishment time via the CCR-i message. The node supplies user identification attributes to the PCRF so that the PCRF can identify the rules to be applied. However, the node does not explicitly request a specific policy update, for example via Event-Trigger = RESOURCE_MODIFICATION_REQUEST.
Another way to request a policy update on the node is via an RAR message in the push model.
Gx policies on the node can be enforced via these three mechanisms:
Gx-based overrides refer to the activation or modification of the existing subscriber host-related objects on the node.
Subscriber host-related objects are shown in Figure 211. A subscriber represents a residence or home and it is identified by Subscriber-Id string on the node. A subscriber on the node can consist of multiple hosts in a bridged home environment or a single host in a routed home environment.
The two basic concepts in ESM context are sla-profile with its associated objects and sub-profile with its associated objects.
For a list of Gx related AVPs supported on the node, refer to the 7750 SR Gx AVPs Reference Guide.
Gx overrides are installed via Charging-Rule-Install AVP (for ESM or AA) or ADC-Rule-Install AVP (for AA only – 3GPP Release 11) sent from the PCRF towards the node.
AVP Format:
Every Gx override must have a Charging-Rule-Name (ESM) or ADC-Rule-Name (AA - 3GPP Release 11 and Release 12) associated with it. This is important in order to return the override status from the node to the PCRF upon the override instantiation.
The objects (subscriber-hosts) to which the new overrides are applied must exist on the node; otherwise, the override installation fails.
The parameters defining a new override simply replace the parameters already applied to the subscriber-host, without the need to remove the previously installed parameters.
There are four types of overrides that are currently supported via Gx:
A Charging-Rule-Name AVP within the Charging-Rule-Install grouped AVP can have several meanings.
In all of the above cases, the existing objects applied to the subscriber-host are replaced with the referenced object.
It is important to distinguish two locations for invoking the Charging-Rule-Name AVP for overrides:
![]() | Note: AA-Function AVP and AA-Usage-Monitoring cannot co-exist in the same ADC rule. |
The Charging-Rule-Definition AVP (AVP code 1003, 3GPP 29.212 §5.3.4) is of type Grouped, and it defines the override sent by the PCRF to the node.
The Charging-Rule-Name in this AVP can be arbitrarily set and it is used to uniquely identify the override in error reporting.
The ADC-Rule-Definition AVP (AVP code 1094, 3GPP 29.212 §5.3.87) is of type Grouped, and it defines the ADC override sent by the PCRF to the node. The ADC-Rule-Name AVP within the ADC-Rule-Definition AVP uniquely identifies the ADC policy rule and it is used to reference to a policy rule in communication between the node and the PCRF within one IP CAN session.
HTTP redirect override submitted via the Gx interface overrides the current URL string defined in the filter that is currently applied to the subscriber-host. The override is implemented through the standard Redirect-Information AVP nested within the Charging-Rule-Definition (CRD) AVP.
IPv4
IPv6
The keywords v4-http-url and v6-http-url are special keywords that must be part of the Charging-Rule-Name (CRN) AVP. These keywords can be followed by an arbitrary string name.
The purpose of the <name> string in the CRN AVP is for the PCRF to differentiate between different HTTP redirect overrides. However, the name string in the context of the http url host override command in a filter has no meaning on the node, and therefore it is ignored. This means that there can be only one HTTP redirect override per host and per address family on a node.
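A sketch of how the CRN keywords might be interpreted (illustrative only; the node's actual parsing is not exposed):

```python
def parse_redirect_crn(crn):
    """Interpret a Charging-Rule-Name for HTTP redirect overrides.
    Returns (address_family, name); the name part is ignored by the node,
    so only one override per host and address family can exist. Sketch."""
    for keyword, family in (("v4-http-url", 4), ("v6-http-url", 6)):
        if crn.startswith(keyword):
            return family, crn[len(keyword):]
    return None, crn    # not an HTTP redirect override CRN
```

Because the name part is discarded, installing "v4-http-url_portalA" and later "v4-http-url_portalB" would simply replace the same IPv4 override.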
The outcome of this Gx directive (Redirect-Information AVP without the Flow-Information AVP within the Charging-Rule-Definition AVP) is the override of the HTTP redirect URL in the currently applied subscriber-host filter. The filter definition must explicitly allow overrides via the allow-radius-override keyword.
As long as the override rule is present in the system (meaning it has been submitted via Gx and has not been explicitly removed since), the override tries to enforce itself whenever both of the following conditions are met:
If the above conditions are not met, the override is accepted (the node responds with RAA=OK) and stored by the system, although it will not be applied until the above conditions are met.
For the HTTP URL host, the CRD directive must not contain any flow information or any other action besides the Redirect-Information AVP. Otherwise, the Diameter encoding fails and an error response is generated for an RAR, while a CCA-i is silently dropped.
With the exception of HTTP redirect override, overrides cannot be removed by the Charging-Rule-Remove AVP. They can only be overridden, and consequently the Charging-Rule-Remove AVP is ignored. It is ignored only for regular overrides and not for PCC rules (see PCC Rules) or for HTTP redirect override. An HTTP redirect override can be removed whether it is active (a filter with HTTP redirect action is applied) or inactive (a filter without HTTP redirection is applied).
The name string in the CRN AVP is ignored in the context of HTTP redirect override. This means that the removal of HTTP redirect override with any name removes the currently installed HTTP redirect override.
Similarly, the installation of the HTTP redirect override replaces any currently installed HTTP redirect override, regardless of the name string (implicit removal of the current HTTP redirect override, followed by the installation of the new one).
The node replies with RAA=OK if a properly formatted Charging-Rule-Remove directive with any name is received for HTTP redirect override.
The instantiation of HTTP redirect overrides via the Gx can be summarized as:
The following is an example of Gx override instantiation where all Gx overrides are submitted under a single Charging-Rule-Install AVP. The AVPs in this example can be included in the CCA-i, CCA-u or RAR messages sent from the PCRF.
The outcome of the override is the following:
In the following example, all Gx overrides are submitted via a separate Charging-Rule-Install AVP:
Gx overrides (QoS rates, sub/sla-profiles, filters, and so on) can be examined individually with subscriber specific operational commands. In the example below, fields in bold can be overridden.
A generic use case for flow-based dynamic policy is related to customized network level treatment of on-demand services. Such services can represent a wide range of applications, such as video-on-demand or access to a specific application in the network. The service can be identified by traffic destination parameters or DSCP bits. Once the service is identified, a set of actions can be applied to the service (rate change, forwarding-class change, Usage-Monitoring, and so on).
Typical flow of events for service activation is shown in Figure 212:
1) An established user subscribes to a service in the network via a Web portal at any given time.
2) Once the authentication/payment is accepted, the back end (for example, the Web portal integrated in OSS) identifies the service and submits the parameters defining the network delivery of the offered service to the PCRF.
3) The PCRF converts those parameters into rules and submits those rules to the subscriber-host on the BNG via the Gx. The rules identify the service on the network level (destination IP@ and port) along with the desired action.
4) [and 5) and 6)] Before the service can be started, the action of individual policy management elements must be acknowledged to ensure that the resources for the service delivery are available and instantiated before the service is delivered to the subscriber.
7) The service traffic can be started from the subscriber side. Network requirements for the successful service delivery are enforced on a per flow or DSCP basis as defined by the PCC rule.
A PCC rule consists of traffic classifiers (Flow-Information AVPs) required for traffic identification, and of one or more actions associated with the classified traffic. PCC rules are unidirectional, which means that each rule is applied on either ingress or egress. They are provisioned from the PCRF via the Gx interface.
Traffic classification is based on:
Supported actions are:
A PCC rule that is submitted to the node via PCRF is internally instantiated using two basic policy constructs, QoS policy and filter policy (ACL). This internal division is transparent to the operator at the time of the rule provisioning. The operator perceives Gx as a unified method for provisioning policy rules, whether the rule is QoS-related or filter-related.
The type of action within the PCC rule determines whether the PCC rule is split between the QoS policy and the filter policy.
Rules with actions:
are converted into a QoS policy while the rules with actions:
are converted into filter rules.
The operator should be aware of this division for dimensioning (scaling) purposes. Operational commands can be utilized to reveal resource consumption on the node.
A PCC rule is addressed to a subscriber-host (single stack or dual stack) via the Diameter session-id. However, qos-policy-related entries are applied per sla-profile instance, since QoS resources are allocated per sla-profile instance. An sla-profile and an sla-profile instance are two distinct concepts: the sla-profile is a configuration construct in which parameters are defined, while the sla-profile instance is its instantiation. An sla-profile is instantiated per subscriber-host, or multiple subscriber-hosts can share an sla-profile instance as long as they belong to the same SAP and have the same subscriber ID.
This means that all hosts sharing the same sla-profile instance inherit the change.
Filter-related entries are applied per each subscriber-host, whether the hosts are sharing or not sharing an sla-profile instance.
The concept of splitting the rules is shown in Figure 213.
The PCC rule instantiation fails if a PCC rule contains only actions without any classification, or if it contains only classification without any actions.
A subscriber host must have an explicit static (or base) filter or qos-policy before any dynamic entries can be inserted via Gx. For example, a base filter/qos-policy can be referenced by an sla-profile when the subscriber is instantiated. However, the parameters in the base qos-policy and base filter cannot be modified via Gx.
In the absence of an explicitly defined qos-policy for the subscriber host, the default qos-policy 1 is in effect. In that case, PCC rules with a QoS-related action cannot be applied.
PCC rule entries can be inserted in a specifically allocated range in the base filter or qos-policy. The insertion point is controlled by the operator, as shown in Figure 214. The entries reserved for PCC rules start at the beginning of the range specified by the following CLI command:
under the following CLI hierarchy:
An entry corresponds to a Flow-Information AVP and is equivalent to a match condition defined as any combination of the following parameters under a filter or qos-policy ip-criteria:
Each such entry maps into a single CAM entry, with the exception of a port range configured as a match criterion, whereby a single port range command can expand into multiple CAM entries.
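To illustrate why a single port range command can consume several CAM entries, the following sketch decomposes a range into prefix-style (value, mask) pairs, the way a TCAM typically stores ranges. This is not the node's documented algorithm, only a minimal illustration; the range 40000-40010 matches the example used later in this section.

```python
def range_to_masks(lo, hi, width=16):
    """Return (value, mask) pairs covering the inclusive range [lo, hi].
    `mask` has 1-bits for the positions the CAM entry cares about."""
    full = (1 << width) - 1
    entries = []
    while lo <= hi:
        # Largest power-of-two block that is aligned at `lo` and fits in the range
        size = (lo & -lo) if lo else (1 << width)
        while size > hi - lo + 1:
            size >>= 1
        entries.append((lo, full & ~(size - 1)))
        lo += size
    return entries
```

For example, the TCP port range 40000-40010 decomposes into three entries (40000-40007, 40008-40009, and 40010), so one range criterion occupies three CAM entries in this model.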
Static entries in filter/qos-policy can be inserted before and after the range reserved for PCC rules.
Policy (defined in this context as a collection of static and dynamic rules) sharing between the subscriber hosts is depicted in Figure 215. To simplify CAM scaling explanations, the examples in this section assume that one rule within the policy occupies exactly one CAM entry. For simplicity, only PCC rules are shown, but in reality a subscriber-host policy consists of PCC rules together with the base qos-policy/filter.
A policy, as a set of rules, can be shared among subscriber-hosts. However, when a new rule is added to one of the subscriber-hosts, the newly created set of rules for this host becomes unique, and a new policy is instantiated for it. This new policy consumes additional resources for all the old rules (a clone of the old policy) along with the new rule. The figure below shows that a new policy (3) is instantiated when rule D is added to User 1, even though rules A, B, and C remain the same for Users 1 and 2. Policy 3 is a clone of Policy 1 with Rule D added to it. Conversely, when rule C is applied to User 3, its set of rules becomes identical to that of User 2; the two hosts then share the same policy, and the resources held by the old one are freed.
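The cloning and sharing behavior just described can be modeled as reference-counted policies keyed by the set of rules a host carries. This is a conceptual sketch, not the node's implementation; the scenario mirrors the Users 1-3 example above.

```python
# Illustrative model: a "policy" is the set of rules applied to a host;
# hosts with identical sets share one instance (and its CAM entries).
class PolicyTable:
    def __init__(self):
        self.refs = {}   # frozenset(rules) -> number of hosts sharing it
        self.host = {}   # host -> frozenset(rules)

    def apply(self, host, rule):
        old = self.host.get(host, frozenset())
        new = old | {rule}
        if old:
            self.refs[old] -= 1
            if self.refs[old] == 0:
                del self.refs[old]               # last user gone: resources freed
        self.refs[new] = self.refs.get(new, 0) + 1  # clone, or share an existing policy
        self.host[host] = new

    def instantiated_policies(self):
        return len(self.refs)
```

Replaying the example: Users 1 and 2 hold {A, B, C} and User 3 holds {A, B} (two policies); adding D to User 1 clones a third policy; adding C to User 3 makes it share User 2's policy, freeing one again.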
Each PCC rule has subscriber-host scope and is referenced by its name, which is assigned by the operator on the PCRF. Rules with exactly the same content but different rule names are evaluated as separate rules. To optimize performance and maximize scale, it is recommended that rules sharing the same content have the same name (as provisioned in the PCRF).
PCC rules can be removed from the node via a Gx directive by referencing the PCC rule name. The rule name is supplied via the Charging-Rule-Name AVP at the time of the rule submission to the node by the PCRF. There is no Gx mechanism that would remove all PCC rules at once. Each PCC rule must be removed individually.
The AVP used to remove the rule from the node is:
An example of rule instantiation and rule removal is shown in Figure 216.
Entries in IPv4/v6 filter and QoS policy created via CLI are ordered according to the numerical value associated with each entry command (which corresponds to the match condition) within the policy. CLI rules can be re-ordered with the renum command (in filters and QoS policies).
On the other hand, PCC rules are ordered in one of two ways. The difference between the ordering of the entries within the rules and the ordering of the rules themselves is:
The ordering of PCC rules has no effect on the ordering of the static entries in the base qos-policy or filter.
Mixing PCC rules with and without the Precedence AVP is allowed for the same subscriber-host. PCC rules without the Precedence AVP are inserted after all PCC rules that have the Precedence value set explicitly. In other words, the Precedence value for PCC rules without an explicitly configured Precedence AVP is assumed to be the highest, and such rules are automatically inserted at the bottom of the PCC rule range.
A distinction should be made between the order of PCC rules in a PCC rule set and the order between the entries within each PCC rule. A PCC rule contains a group of classifiers that are all associated with the same actions. Therefore, the order of the entries (equivalent to match conditions) within any given PCC rule does not matter (all entries result in the same action). For this reason, PCC rules with identical name and identical entries but different order of the entries are automatically ordered in a way that would allow more optimal sharing of the rules between different subscribers.
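The two ordering behaviors described above can be sketched as follows: rules carrying a Precedence value sort by it, rules without one sink to the bottom of the PCC rule range, and the entries inside each rule are canonically sorted (their relative order cannot change the outcome, since they all map to the same actions). This is an illustrative model, not the node's actual algorithm.

```python
def order_rules(rules):
    """rules: list of dicts with 'name', 'precedence' (int or None), 'entries'."""
    for rule in rules:
        # Canonical entry order inside a rule aids sharing between subscribers
        rule['entries'] = sorted(rule['entries'])
    # Stable sort: rules without Precedence keep arrival order, after all others
    return sorted(
        rules,
        key=lambda r: float('inf') if r['precedence'] is None else r['precedence'],
    )
```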
A PCC rule applied to a subscriber-host on the node can be overridden by re-submitting the PCC rule with the same name but different contents.
If at least one new flow is sent in the PCC rule update, the existing flows are removed and replaced with the new flows. If no new flows are submitted, the existing flows stay in place.
If there are conflicting parameters between the existing rule and the modified rule (for example the combination of the unsupported actions), the PCC rule override fails.
An action within a PCC rule can be applied to a set of IP criteria.
For example, a single policer can be instantiated for a set of flows for rate-limiting purposes.
A pseudo Gx directive would look like this:
All three flows are fed into the same rate limiter (policer).
IPv4 flow entries and IPv6 flow entries can be combined within the same PCC rule. Actions that carry an IP address are address-type-specific (for example, next-hop-redirect). All other actions (rate-limit, FC change, and so on) are universal and are applied to both flow types (IPv4 and IPv6). The node automatically sorts the flow types (IPv4 and IPv6) within the rule and applies the corresponding actions.
If the rule contains a mismatching flow type and actions (for example, IPv4 flows and IPv6 specific actions), the rule is rejected. It is the operator’s responsibility to ensure that the address-type-specific actions in the rule have corresponding flows to which they can be applied.
PCC rules can contain multiple actions. For the list of support action combinations, refer to the PCC Rules.
A Gx rule (as defined in a single Charging-Rule-Definition AVP) can contain either Flow-Information AVP or Alc-NAS-Filter-Rule-Shared AVP, but not both simultaneously.
Presence of either AVP within the Charging-Rule-Definition AVP determines the mode of operation for the rule:
The Alc-NAS-Filter-Rule-Shared AVP indicates the mode of operation in which the permit or deny action is part of the flow definition itself (Alc-NAS-Filter-Rule-Shared AVP). This mode of operation is referred to as NAS filter inserts. The basic format of the AVP is the following (RFC 4849 and RFC 4005; AVP Code 400):
There can be multiple ip-criteria definitions within the rule per subscriber-host, and each ip-criteria carries its own permit/deny action. There can be only one such rule (Charging-Rule-Definition) per subscriber-host. The rule entries are installed within the filter range defined by the following command:
Such a rule cannot be removed by a Charging-Rule-Remove directive referencing the rule name. Instead, each such Gx rule overwrites the previous one.
Flow-Information AVP indicates the mode of operation whereby all the flows in the rule share the same actions carried in separate AVPs. This mode of operation is referred to as PCC rule inserts. The rule entries are installed within the filter or qos-policy range defined by the following command:
There can be multiple flow-based rules, maintained in order, and each rule can be individually removed by referencing its name.
Both modes of operation are supported simultaneously for the subscriber host.
Gx and RADIUS (CoA) policy management interfaces are simultaneously supported for the same subscriber-host.
RADIUS and Gx share the same entries for filter entry inserts (NAS-Filter-Rules and Alc-NAS-Filter-Rule-Shared), and therefore the most recent insert overrides the previous one. Similar logic applies to subscriber-string overrides and QoS overrides, where the most recent source overrides the previous one.
However, PCC rules (IP-criteria based Gx rules) are provisioned in a separate filter ‘entry’ space from RADIUS and Gx filter inserts and therefore the PCC rules and RADIUS/Gx based filter inserts can independently coexist.
Filter/QoS-policy entry order is shown in Figure 219. The order of configuration blocks (static, PCC rules or NAS filter inserts) is configurable. For example, an operator can specify that static filter entries are populated before PCC rules which are then populated before NAS filter inserts.
Once PCC rules are applied to a subscriber-host, the operator is allowed to modify via CLI some of the parameters in the base filter/qos-policy. For example, the operator is allowed to add/remove terms in the base ACL filter.
The list of the parameters in the QoS policy that can be changed is shown in Table 3. Adding/removing queue/policer, re-mapping of FC, modifying dscp-map or modifying static ip-criteria is not allowed.
Modified parameters in the base policy/filter referenced in the sla-profile affect all subscribers using this sla-profile. Replacing the base qos-policy/filter in the sla-profile is not allowed for any subscriber-host if a clone of the base qos-policy/filter exists anywhere in the system.
However, replacing the base filter-id for a host via CoA or Gx override is allowed. In that case, only the targeted host is affected, and all existing PCC rules for this host are merged with the new filter.
CLI
config>qos>sap-ingress>queue
  [no] cbs - Specify CBS
  drop-tail low [no] percent-reduction-from-mbs - Specifies the drop tail for out-of-profile packets
  [no] mbs - Specify MBS
  [no] packet-byte-of* - Specify packet byte offset
  [no] parent - Specify the scheduler to which this queue feeds
  [no] percent-rate - Specify percent rates (CIR and PIR)
  [no] rate - Specify rates (CIR and PIR)
config>qos>sap-egress>queue
  [no] cbs - Specify CBS
  drop-tail low [no] percent-reduction-from-mbs - Specifies the drop tail for out-of-profile packets
  drop-tail exceed [no] percent-reduction-from-mbs - Specifies the drop tail for exceed-profile packets
  drop-tail high [no] percent-reduction-from-mbs - Specifies the drop tail for in-profile packets
  drop-tail highplus [no] percent-reduction-from-mbs - Specifies the drop tail for inplus-profile packets
  [no] mbs - Specify MBS
  [no] parent - Specify the scheduler to which this queue feeds
  [no] percent-rate - Specify percent rates (CIR and PIR)
  [no] port-parent - Specify the port-scheduler to which this queue feeds
  [no] rate - Specify rates (CIR and PIR)
  [no] packet-byte-of* - Specify packet byte offset
config>qos>sap-ingress>policer
  [no] cbs - Specify CBS
  [no] high-prio-only - Specify high priority only percent-of-mbs
  [no] mbs - Specify MBS
  [no] packet-byte-of* - Specify packet byte offset
  [no] parent - Specify the arbiter to which this policer feeds
  [no] percent-rate - Specify percent rates (CIR and PIR)
  [no] rate - Specify rates (CIR and PIR)
config>qos>sap-egress>policer
  [no] cbs - Specify CBS
  [no] high-prio-only - Specify high priority only percent-of-mbs
  [no] mbs - Specify MBS
  [no] packet-byte-of* - Specify packet byte offset
  [no] parent - Specify the scheduler to which this policer feeds
  [no] percent-rate - Specify percent rates (CIR and PIR)
  [no] rate - Specify rates (CIR and PIR)
PCC rules are unidirectional. The PCC rule direction is determined based on the value of the Flow-Direction AVP within the Flow-Information AVP. In the absence of the Flow-Direction AVP, the PCC rule direction is determined based on the Flow-Description AVP (as part of IPFilterRule direction field). Both of these AVPs (Flow-Direction and Flow-Description) are part of the PCC rule definition.
If the action within the PCC rule is in conflict with the direction of the flow, the PCC rule instantiation fails. For example, an error is raised if the flow direction is UPSTREAM, while the action is ‘Max-Requested-Bandwidth-DL’ (downstream bandwidth limit).
A PCC rule may contain multiple actions. Each action is carried in a separate, action-specific AVP. The action specified in the Flow-Description AVP (IPFilterRule data type) is ignored. If a rule contains multiple instances of the same action, each with a different value, the last occurrence of the action value is in effect.
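The two behaviors just described can be sketched together: a direction/action mismatch fails the rule, and when the same action appears more than once, only the last value takes effect. The sets of direction-specific actions below are limited to the bandwidth AVPs named in this section; everything else is treated as direction-neutral for illustration.

```python
DOWNSTREAM_ONLY = {'Max-Requested-Bandwidth-DL'}
UPSTREAM_ONLY = {'Max-Requested-Bandwidth-UL'}

def resolve_actions(direction, actions):
    """direction: 'UPSTREAM' or 'DOWNSTREAM'; actions: ordered (name, value) pairs.
    Returns the effective action set, or raises ValueError on a conflict."""
    effective = {}
    for name, value in actions:
        if direction == 'UPSTREAM' and name in DOWNSTREAM_ONLY:
            raise ValueError(f'{name} conflicts with an UPSTREAM flow')
        if direction == 'DOWNSTREAM' and name in UPSTREAM_ONLY:
            raise ValueError(f'{name} conflicts with a DOWNSTREAM flow')
        effective[name] = value      # last occurrence of a repeated action wins
    return effective
```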
Not all of the action types can be applied at the same time. The allowed combination of the actions per direction is given in Table 190 and Table 191.
Rate-limiting action is implemented via policers. The policer is dynamically created at the PCC rule instantiation time. The rate can be enforced based on Layer 2 rates or Layer 3 rates.
Dynamically instantiated policers have their own policer id range to avoid the conflict with static policers.
The dynamically created policers share common properties configured under the dynamic-policer CLI hierarchy:
The configured dynamic policer parameters can be overridden per PCC rule by including the Alc-Dynamic-Policer grouped AVP in the QoS-Information AVP. All AVPs are optional and when specified override the configured value:
The policer rates are part of the PCC rule itself and are not part of the static configuration.
The generic Gx directive for rate-limiting action is:
The above rate limits refer to PIR and CIR rates of the dynamic policer in the respective direction.
Once traffic is processed by the dynamic policers on ingress, the traffic flows through the shared queues of the policer-output-queues queue-group. Traffic through dynamic policers always bypasses subscriber queues or policers that are statically configured in the base QoS policy on ingress.
Similar behavior is exhibited when static policers are configured on egress. Traffic exiting a dynamic policer is never mapped to another static policer. Instead, such traffic is mapped to the corresponding shared queue in a queue-group. By default, this queue-group is the policer-output-queues group. However, the selection of the queue-group is configurable.
In contrast to the above, traffic processed by dynamic policers can be fed into statically configured subscriber (local) queues on egress. Dynamic policers and subscriber queues are tied through the forwarding-class.
The policer-to-local-queue mapping and the inheritance of the forwarding-class are shown. In this example, the mapping of traffic to forwarding-class in rule 2 (flow 2) depends on the DSCP bits in the traffic flow. If the DSCP value in this traffic flow is different from the explicitly configured DSCP values in the static (base) QoS policy, the traffic is mapped to the default forwarding-class.
By default, policer rates are configured based on Layer 2 frame length (for example, the Ethernet header plus the IP packet). This can be changed by the packet-byte-offset (PBO) command under the policer. If the policer is fed into a local queue, the PBO of the policer will not affect the PBO of the local queue it is feeding.
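The effect of a packet-byte-offset on charged volume is simple arithmetic, sketched below. The floor at 1 byte and the offset value of -14 (approximating the removal of a hypothetical 14-byte Ethernet header to get Layer 3 lengths) are assumptions of this illustration, not documented node behavior.

```python
def charged_octets(frame_lengths_l2, packet_byte_offset=0):
    """Total octets a policer charges for a list of Layer 2 frame lengths,
    with each packet's length adjusted by the packet-byte-offset (PBO)."""
    # Assumed floor of 1 byte per packet so an offset can never charge <= 0
    return sum(max(length + packet_byte_offset, 1) for length in frame_lengths_l2)
```

Note that, per the text above, this offset applies only to the policer itself; a local queue fed by the policer keeps its own PBO.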
The rates for local subscriber queues can be independently measured based on Layer 2 or Layer 1 frame length, and the queue statistics can be measured based on Layer 1, Layer 2, or Layer 3 (IP-only) frame length. The IP-only statistics for queues can be configured with the sub-profile>volume-stats-type {ip|default} command.
Dynamic policer (instantiated due to rate limiting or usage-monitoring action in PCC rules) statistics are not reported in RADIUS-based accounting. On egress, this has no effect on volume counters in RADIUS-based accounting, since the dynamic policers are normally fed into local queues whose statistics are reported in RADIUS-based accounting. However, on ingress, the dynamic policers are always fed into the queue-group queues, which are excluded from RADIUS-based accounting. The consequence is that the ingress RADIUS-based accounting lacks statistics for the traffic that is flowing via dynamic policers.
If the dynamic policer is feeding a local queue, the aggregate statistics in show commands for such queue are not reported in order to avoid double counting (since the traffic statistics in show commands are already reported for the dynamic policer). However, the per-queue statistics are reported in show commands, irrespective of whether the policer is mapped to the local queue or not.
To avoid losing aggregate SAP or subscriber statistics in show commands, the recommendation is to have policers feed into local queues that are not already mapped to an FC. For example, with the following mapping:
FC BE, FC L1 —> queue 4
FC EF —> policer 2, queue 4
traffic from queue 4 is not counted in the aggregate stats at all, and consequently the aggregate accounting information is lost for FC BE and FC L1.
Traffic can be re-prioritized via PCC rule by re-classification into a different forwarding class. The forwarding-class can be changed in several cases.
The original static mapping between traffic type, forwarding-class and the queue/policer in the base qos-policy is configured outside of the ip-criteria CLI hierarchy.
For example:
Such mapping is configured outside of CAM and as such has lower evaluation priority than the mapping configured via a PCC rule, which is installed in CAM.
The original static mapping is provisioned in the base qos-policy via ip-criteria CLI hierarchy.
For example:
In that case, the configured entry range for PCC rules must precede the static entry (match criteria) in which the original forwarding-class is configured. The insertion point (entry) is controlled via configuration with the sub-insert-shared-pccrule start-entry <entry-id> count <count> command under the qos-policy.
In both of the above cases, the following PCC rule would override the forwarding-class from af to h2 for traffic with a DSCP value of 10 (af11 traffic class).
The eight forwarding classes are mapped to QCIs (3GPP TS 23.203 §6.1.7.2) in the following manner:
BE —> QCI 8
L2 —> QCI 7
AF —> QCI 6
L1 —> QCI 4
H2 —> QCI 2
EF —> QCI 3
H1 —> QCI 1
NC —> QCI 5
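The fixed mapping above can be expressed as a simple lookup table, using the forwarding-class names exactly as listed:

```python
# Forwarding class -> QCI, per the mapping listed above (3GPP TS 23.203 §6.1.7.2)
FC_TO_QCI = {'BE': 8, 'L2': 7, 'AF': 6, 'L1': 4, 'H2': 2, 'EF': 3, 'H1': 1, 'NC': 5}

def qci_for_fc(fc):
    """Return the QCI for a forwarding-class name (case-insensitive)."""
    return FC_TO_QCI[fc.upper()]
```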
The generic Gx directive for forwarding-class change:
Create an ip-criteria and/or ipv6-criteria entry with no action specified. Matching traffic is forwarded without a QoS action and does not match on a subsequent entry (match-and-exit behavior). This is equivalent to a white-list entry. The CLI equivalent is:
The generic Gx directive for QoS forwarding:
The next hop redirection explicitly or implicitly changes the next hop for the traffic flow within the same service ID (routing context) or a different service ID (routing context).
If the next hop is not explicitly provided, the next hop is selected automatically, according to the routing lookup in the referenced service ID.
The generic Gx directive:
This action overwrites the routing table lookup based on the destination IP and sets the next hop to the:
The next hop search is indirect, which means that if the explicitly provided next hop in the PCC rule cannot be found in the routing table, then an additional routing table lookup is performed to find the path (next hop) to the indirect next hop from the PCC rule.
If only the service-id is specified in PCC rule (without the next hop), then the next hop is selected from the specified service-id based on the destination IP address of the packet.
HTTP redirect utilizes Redirect-Information AVP from 3GPP 29.212, §5.3.82.
The generic Gx directive:
This action is used to control traffic flow within a PCC rule by using ALU specific AVP. PCC rules are utilizing filters and QoS policies as distinct building blocks. This action within a PCC rule creates an IP or IPv6 filter entry with an action forward or drop.
The CLI equivalent follows:
The generic Gx directive for filter forward/drop is implemented through ALU specific AVP:
The service gating function is used to enable or disable the service that is represented by the PCC rule. This action is enforced through a Flow-Status AVP (AVP code 511) - 3GPP 29.214, §5.3.11. The system supports the following values (actions) for the Flow-Status:
The service gating function is applicable in the direction that is associated with the rule (PCC rules in the system are unidirectional).
Flow-Status is enabled (2) by default (if the Flow-Status AVP is not explicitly specified within the PCC rule). Flow-Status=Enabled must be accompanied by one or more additional actions in the same PCC rule (refer to Gx Rules with Multiple Actions and Action Sharing for a list of allowed simultaneous actions); otherwise, the PCC rule instantiation on the node fails.
If the Flow-Status is set to disabled (3), all other actions within the same rule lose their meaning, since the packet is dropped. The disabled directive disables the flow of packets through the system. A disabled Flow-Status is equivalent to Alc-Filter-Action = Drop (2).
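The Flow-Status semantics described above can be condensed into a small decision sketch (illustrative only; action names are placeholders):

```python
ENABLED, DISABLED = 2, 3   # Flow-Status values supported per 3GPP 29.214 §5.3.11

def gate(flow_status, other_actions):
    """Return 'drop' or 'forward' for a rule, or raise on an invalid rule."""
    if flow_status is None:
        flow_status = ENABLED            # default when the AVP is absent
    if flow_status == DISABLED:
        return 'drop'                    # equivalent to Alc-Filter-Action = Drop (2)
    if flow_status == ENABLED and not other_actions:
        # Enabled must be accompanied by at least one other action
        raise ValueError('Flow-Status=Enabled requires additional actions')
    return 'forward'
```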
This AVP is carried inside of Charging-Rule-Definition (3GPP 29.212, §5.3.5):
The following is an example of PCC rule provisioning in a CCA-I message:
In this example, the host is instantiated using two Charging-Rule-Install AVPs. The first is used to instantiate the host. The second is used to instantiate the IP-criterion based service named service-1. Service-1 is defined as the upstream traffic flow with traffic class AF11, destined to the TCP port range 40000-40010 on the node with IP address 10.10.10.10/32.
The actions for this traffic flow are:
The commands used to examine dynamic rules and NAS filter inserts associated with the subscriber hosts are shown in Figure 221.
One of the most important factors to be considered for capacity planning with PCC rules is the number of unique policies that are applied to subscribers.
A unique policy consists of a base QoS policy or filter ID along with all PCC rules that are applied to a subscriber or a set of subscribers.
Now examine an example where there are ‘n’ PCC rules in the system (‘n’ QoS rules and ‘n’ ACL filter rules). Those rules are applied to IPv4 traffic in the ingress direction. Further, assume that the PCC rules do not have the Precedence AVP defined, which means that the system can optimize their order for maximum sharing and maximum scale. Then, the ‘n’ PCC rules can be combined, through various permutations, into 2^n-1 unique combinations. The next assumption is that there are five possible base qos-policies for IPv4 traffic in the ingress direction and five possible base filters for IPv4 traffic in the ingress direction.
Given the above, the unique PCC rule combinations (2^n-1) together with the five base QoS policies produce 5*(2^n-1) unique qos-policies for ingress IPv4. The same logic can be applied to ingress IPv4 filters.
This exercise must be repeated for the egress direction as well as for IPv6 traffic, taking into consideration the number of respective base qos-policies/filters and the number of PCC rules.
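The dimensioning arithmetic above is simple enough to capture directly: with n freely orderable PCC rules, a subscriber can hold any non-empty subset of them (2^n-1 possibilities), and each subset can sit on top of any base policy.

```python
def unique_policies(n_rules, n_base_policies):
    """Upper bound on unique policy instances for one direction/address type:
    every non-empty subset of the n PCC rules, times each base policy."""
    return n_base_policies * (2 ** n_rules - 1)
```

For example, n=3 PCC rules over five base policies gives 5*(2^3-1) = 35 unique ingress IPv4 qos-policies; the same calculation is repeated per direction and per address family.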
Once the number of unique policy combinations is determined and verified to be within the system limits, each policy must be further evaluated to determine the number of entries it takes in CAM.
Figure 222 depicts an example relevant to capacity planning, focusing on the scaling limits pertaining to the number of PCC rules and their mutual combinations when applied to subscriber hosts.
This example focuses on an IPv4 filter applied in the ingress direction, but similar logic can be used to understand other policy types (QoS, egress, IPv6).
The system/line card limits in this example are set to the following values for illustration purposes only:
Note: The actual CAM limits vary per policy type (filter/QoS), direction, and IP address type (v4 vs v6). The actual scaling limits can be found in the Scaling Guides for the relevant software release.
Gx filter entries inserted via the NAS-Filter-Rule are subscriber-host-specific entries. This means that in the upstream direction, the source IP address in the NAS-Filter-Rule is always internally set by the node to the IP address of the subscriber host itself. Similarly, in the downstream direction, the destination IP address in the NAS-Filter-Rule is set by the node to be the IP address of the subscriber-host itself.
On the other hand, the entries in the Alc-NAS-Filter-Rule-Shared AVP are processed as received without any modifications. This means that such entries can be shared with all the hosts that have the same Alc-NAS-Filter-Rule-Shared applied.
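The difference between the two entry types can be sketched as a substitution step (rule syntax heavily simplified; the function and field names are illustrative, not the node's data model):

```python
def install_entry(rule, host_ip, direction, shared):
    """rule: (source, destination) strings. Host-specific NAS-Filter-Rule entries
    get the subscriber's own address patched in; shared entries install verbatim."""
    src, dst = rule
    if not shared:
        if direction == 'upstream':
            src = host_ip        # node forces source IP = subscriber host IP
        else:
            dst = host_ip        # node forces destination IP = subscriber host IP
    return (src, dst)
```

Because shared entries are installed exactly as received, every host given the same Alc-NAS-Filter-Rule-Shared content can share one set of entries, while NAS-Filter-Rule entries are necessarily per-host.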
Similar to QoS overrides, NAS filter entries are not predefined on the node but instead they are defined under the Charging-Rule-Install — Charging-Rule-Definition AVP.
The Charging-Rule-Name AVP for NAS filter inserts is an arbitrary name that is part of Charging-Rule-Definition AVP in which NAS-Filter-Rule AVP or Alc-NAS-Filter-Rule-Shared is provided. Such Charging-Rule-Name is used to report errors related to instantiation of the inserts.
The following AVPs identify NAS filter inserts that are applied to a subscriber host. Those AVPs can be included in CCA-i, CCA-u or RAR message sent from the PCRF.
In this example, the filter entry defined in Alc-NAS-Filter-Rule-Shared AVP is inserted in the clone of the existing base filter for the subscriber(s).
The Gx rule (overrides, PCC rules or NAS filter inserts) instantiation failure can occur on two levels:
Reporting an AVP decoding problem in Gx is described in the following example:
A Gx directive is received to install two overrides on the node. The two overrides are supposed to change the sla-profile and sub-profile for the subscriber host. The AVP that is used to change the sla-profile is malformed: the predefined sla-profile keyword in the Charging-Rule-Install AVP is misspelled as spa-profile instead of sla-profile.
Since the Charging-Rule-Name AVP has the M-bit set, the whole message fails and an error is reported. No rules within this Gx message are installed, not even the valid one (which in this case would be Charging-Rule-Name = “Sub-Profile:prem”).
Note: If the M-bit were clear in the Charging-Rule-Name AVP, the erroneous AVP would simply be ignored and the node would proceed with the installation of the remaining, correctly formatted rules.
The nature of the error depends on the original directive sent by the PCRF (RAR or CCA – push or pull model).
If the directive from the PCRF is received via CCA (pull model), the response is a CCR-u with the following error-related AVPs:
Similarly, if the number of filter entries for each entry type (NAS-Filter-Rule — host-specific or Alc-NAS-Filter-Rule-Shared — shared) exceeds the maximum supported number (see the 7750 SR Gx AVPs Reference Guide), the whole message fails the decoding phase.
The reason that the Result-Code AVP is present in the RAA message and not in the CCR-u message is that this code is only allowed to be present in the answer messages, according to the standard.
This assumes that the rule installation directives are successfully passed from the Gx module to the ESM module and the failure to install rules occurs in the ESM module.
In the Gx override example below, the referenced sla-profile is unknown. As a result, all directives passed to the ESM module fail, and consequently no rules or overrides are installed. The sub-profile change fails as well, although the prem sub-profile is known in the system.
The error reporting flow is as follows:
Similar behavior would be exhibited if the directive were sent to the UM or AA modules. However, ESM, UM, and AA are separate modules, and failure to install a rule in one module does not affect rule installation in another.
Failure reporting in AA is performed in a similar fashion as in ESM.
Instead of the Charging-Rule-Report AVP, the ADC-Rule-Report AVP is used:
Table 191 summarizes Gx failure reporting on the node.
| Failure Event | Gx Message Received via CCA (Pull Model) | Gx Message Received via RAR (Push Model) |
|---|---|---|
| AVP decoding/interpreting failure; M-bit cleared | Ignore AVP | Ignore AVP |
| AVP decoding/interpreting failure; M-bit set | A CCR-u with the error-related AVPs is sent by the node. No rules within the message are instantiated on the node. | An RAA with the error-related AVPs is sent by the node. No rules within the message are instantiated on the node. |
| Rule failure in ESM | A CCR-u with the error-related AVPs is sent by the node. No rules are instantiated in the ESM module. | An RAA with the Result-Code AVP ‘success’ (2001) is sent by the node, followed by a CCR-u with the error-related AVPs. No rules are instantiated in the ESM module. |
| Rule failure in Usage-Monitoring (UM) | A CCR-u with the error-related AVPs is sent by the node. No rules are instantiated in the UM module. | An RAA with the Result-Code AVP ‘success’ (2001) is sent by the node, followed by a CCR-u with the error-related AVPs. No rules are instantiated in the UM module. |
| Rule failure in AA | A CCR-u with the error-related AVPs is sent by the node. No AA rules are instantiated in the AA module. | An RAA with the Result-Code AVP ‘success’ (2001) is sent by the node, followed by a CCR-u with the error-related AVPs. No rules are instantiated in the AA module. |
Usage-Monitoring and reporting refers to the collection and reporting of octets (volume) that a service or application on the node has consumed during a certain period. The usage on the node is reported via the Gx interface to the PCRF. Based on this information, the PCRF can apply a specific action (policy change) to the entity being monitored. For example, QoS can be modified, or the service can be blocked when specific thresholds are reached.
Usage-Monitoring and reporting is performed over a single Gx session for the ESM/AA subscriber. In other words, there is only a single session for an ESM subscriber-host and corresponding AA subscriber. Via this single Gx session, Usage-Monitoring can be requested simultaneously in ESM context (PCC rule level, credit-category and/or IP-CAN session) and AA context (application based Usage-Monitoring).
In the ESM context, volume consumption (octets - 3GPP 23.203 §4.4) can be monitored on three levels:
Usage can be monitored simultaneously on all three levels.
An IP-CAN session on the node represents a subscriber-host whose service types are determined by the sla-profile instance. In per IP-CAN session volume monitoring, the aggregated queue or policer counters are reported per direction (in or out). This includes dynamic policers that are instantiated as a result of a Gx action; for example, rate-limiting.
The following configuration is necessary to allow per IP-CAN session level Usage-Monitoring to be enabled for sessions associated with the category map:
If the sla-profile instance changes mid-session, the counters are reset.
One obvious difference between regular RADIUS accounting and Gx Usage-Monitoring is that in RADIUS accounting the cumulative byte count for the sla-profile instance is presented in each report (interim-update or stop accounting messages), while in Usage-Monitoring this count is reset between two consecutive reports (a usage report is triggered when the quota is reached).
Per credit-category monitoring refers to volume monitoring of a single queue/policer or a set of queues/policers within the sla-profile instance. Each queue/policer (or set of queues/policers as a subset of the sla-profile instance) represents a service for which the Usage-Monitoring is required. Those queues/policers (services) are organized on the node in credit categories.
Each service category has a name that is used to reference the category in Usage-Monitoring and reporting.
The category-map (predefined on the node) that is used in Usage-Monitoring can be associated with the subscriber-host through the following methods (in the order of priority):
PCC rule Usage-Monitoring reports volume usage per flow or set of flows. PCC rule Usage-Monitoring is described in a separate section below.
Usage-Monitoring for the subscriber host can be configured on the node, but it is not active until it is turned on by the PCRF via CCA-i, CCA-u, or RAR.
Usage-Monitoring can be enabled per ingress and/or egress direction or as a total count. However, monitoring the total count is mutually exclusive with per-direction counts. For example, total Usage-Monitoring cannot be enabled simultaneously with ingress (or egress) Usage-Monitoring for the same monitoring entity (session or category).
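The mutual exclusivity between total and per-direction monitoring can be sketched as a small validation routine. The function and value names below are illustrative only, not an SR OS API:

```python
# Hypothetical validator for the constraint above: for one monitoring entity,
# total-volume monitoring cannot coexist with per-direction monitoring.

def validate_monitoring_request(existing: set, requested: str) -> bool:
    """Return True if 'requested' ('total', 'ingress' or 'egress') can be
    enabled given the directions already active for the same entity."""
    if requested == "total":
        # total is blocked if any per-direction count is already active
        return not ({"ingress", "egress"} & existing)
    # a per-direction request is blocked only by an active total count
    return "total" not in existing

# Ingress and egress may be combined, but total excludes both.
print(validate_monitoring_request({"ingress"}, "egress"))  # True
print(validate_monitoring_request({"ingress"}, "total"))   # False
```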
In AA, charging groups (CG), application groups (AG) and applications are monitored. Refer to the 7450 ESS, 7750 SR, and VSR Multiservice Integrated Service Adapter Guide for details.
Gx Usage-Monitoring is activated explicitly from the PCRF via CCA-I, CCA-U or RAR. It is triggered via the Usage-Monitoring-Information AVP along with the event-trigger = usage-report (33). The Usage-Monitoring-Information AVP contains the following AVPs:
There can be multiple instances of the Usage-Monitoring-Information AVP present in a single CCA or RAR message. For example, simultaneous Usage-Monitoring at the IP-CAN session level, credit-category level, or PCC rule level can be requested.
Usage-Monitoring-Level for IP-CAN session is set to SESSION_LEVEL (0)
Usage-Monitoring-Level for category-map is set to PCC_RULE_LEVEL (1)
Usage-Monitoring-Level for PCC rules is set to PCC_RULE_LEVEL (1)
The node reports usage information to the PCRF under the following conditions:
To report accumulated usage for a specific monitoring-key, the node sends a CCR with the Usage-Monitoring-Information AVP containing the accumulated usage information since the last report. For each of the enabled monitoring-keys, the Usage-Monitoring-Information AVP includes the Monitoring-Key AVP and the accumulated volume usage in the Used-Service-Unit AVP.
A usage report is triggered on the node when the usage threshold communicated by the PCRF is reached. The node then sends a CCR-u message carrying the accumulated usage for that monitoring entity along with the Event-Trigger AVP set to USAGE_REPORT.
In response to the CCR-u message, the PCRF communicates to the node via a CCA-u message whether the Usage-Monitoring should continue:
Thresholds are incremental. For example, if the quota of 100 MB is submitted to the node, the usage should be reported when that quota is reached. At that point, the user can be granted another 100 MB. The new usage report on the node is triggered when another 100 MB are accumulated. Absence of the threshold for a given entity in the CCA-u message is an indication that the Usage-Monitoring should stop.
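The incremental-threshold behavior described above can be sketched as a simplified model. The class and field names are assumptions for illustration, not the node's implementation:

```python
class UsageMonitor:
    """Minimal sketch of incremental-threshold Usage-Monitoring: octets
    accumulate until the PCRF-granted quota is reached, a usage report is
    emitted, and the counter restarts for the next grant."""

    def __init__(self, quota_octets: int):
        self.quota = quota_octets   # threshold granted by the PCRF
        self.used = 0               # octets since the last report

    def account(self, octets: int):
        """Add traffic; return the reported volume when the quota is hit,
        or None if no report is due yet."""
        self.used += octets
        if self.quota and self.used >= self.quota:
            report, self.used = self.used, 0  # counter resets between reports
            return report
        return None

mon = UsageMonitor(quota_octets=100 * 1024 * 1024)  # 100 MB grant
assert mon.account(60 * 1024 * 1024) is None        # below threshold
print(mon.account(45 * 1024 * 1024))                # reports 105 MB of usage
```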
When the PCRF informs the node that Usage-Monitoring should stop (by not including thresholds in CCA-u), the node does not report usage which has accumulated between sending the CCR and receiving the CCA.
Another possibility is on-demand usage reporting. In this scenario, usage for one or more monitoring keys is reported regardless of whether the usage threshold has been reached. This is achieved by sending the node the Usage-Monitoring-Report AVP (within the Usage-Monitoring-Information AVP) set to USAGE_MONITORING_REPORT_REQUIRED. If the Monitoring-Key AVP is omitted in such a request, Usage-Monitoring for all enabled entities is reported to the PCRF.
If the credit-category is removed from the subscriber host (the sla-profile instance referencing the category-map is changed for the subscriber host), the node reports the outstanding usage in a CCR-u message with the Event-Trigger AVP set to USAGE_REPORT.
When the PCRF explicitly disables Usage-Monitoring on the node, the node reports the usage accumulated while Usage-Monitoring was enabled.
To disable Usage-Monitoring for an entity, the PCRF sends the Usage-Monitoring-Information AVP referencing only the applicable monitoring entity with the Monitoring-Key AVP and the Usage-Monitoring-Support AVP set to USAGE_MONITORING_DISABLED.
When the PCRF disables Usage-Monitoring in a RAR or CCA command, the node sends a new CCR-u with the Event-Trigger AVP set to USAGE_REPORT to report the accumulated usage for the disabled Usage-Monitoring entities.
Each PCC rule for which Usage-Monitoring is required contains the Monitoring-Key AVP.
Usage-Monitoring for PCC rules is implemented through a dynamic policer. The policer is instantiated at the time when the PCC rule with Monitoring-Key AVP is installed.
The same monitoring-key can be used in multiple PCC rules assuming that these rules are for the same direction. In other words, the charging rule is rejected if the same monitoring-key is used for ingress and egress.
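The direction constraint on shared monitoring-keys can be sketched as follows (illustrative names; not an SR OS API):

```python
# Illustrative check for the constraint above: a monitoring-key may be shared
# across PCC rules only if all those rules apply to the same direction.

def accept_rule(key_directions: dict, mon_key: str, direction: str) -> bool:
    """Record 'direction' ('ingress'/'egress') for mon_key; reject the rule
    if the key is already bound to the opposite direction."""
    bound = key_directions.setdefault(mon_key, direction)
    return bound == direction

dirs = {}
print(accept_rule(dirs, "mk1", "ingress"))  # True  - first use of mk1
print(accept_rule(dirs, "mk1", "ingress"))  # True  - same direction, OK
print(accept_rule(dirs, "mk1", "egress"))   # False - charging rule rejected
```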
At IP-CAN session termination, the node sends the accumulated usage information for all entities for which Usage-Monitoring is enabled in the CCR-t.
A Diameter Gx session from which Usage Monitoring is started controls the Usage Monitoring for the entire SLA profile instance. Only one Diameter Gx session can control the Usage Monitoring per SPI at a given time. That is, Usage Monitoring can only be started from a single Gx session when multiple subscriber hosts or sessions share an SLA profile instance.
The session ID of the Diameter Gx session that controls the Usage Monitoring is displayed as the “Diameter Session Gx” field in the output of the following show command.
A Diameter Gx session stops being the controlling Gx session for Usage Monitoring of the SLA profile instance in the following situations:
When the Usage Monitoring terminates, the usage of subscriber hosts or sessions sharing the SLA profile instance is no longer accounted for. Usage Monitoring can now be started from another Diameter Gx session, which then becomes the controlling Gx session for Usage Monitoring on the SLA profile instance.
For the description of the specific AVP, refer to the 7750 SR Gx AVPs Reference Guide.
IP-CAN session Usage-Monitoring
A category-map with gx-session-level-usage-monitoring must be associated with the subscriber host or session:
PCRF in RAR sends the following AVPs (among all the other mandatory ones: session-id, etc.)
When the thresholds are reached some time later, the node reports usage in a CCR-U. The usage is monitored internally on the node based on the current sla-profile instance.
The PCRF instructs the node to continue Usage-Monitoring with the new thresholds in the CCA-U:
Category Usage-Monitoring
Assume that the following category-map is associated with the subscriber host:
The PCRF sends the following AVPs in the RAR message (among all the other mandatory ones: session-id, etc.)
The node reports usage when the thresholds are reached some time later in the CCR-U:
The PCRF instructs the node to continue Usage-Monitoring with the new thresholds in the CCA-U:
The PCRF may subscribe to an event trigger on the node. The PCRF subscribes to new event triggers or removes armed event triggers unsolicited at any time. When an event matching the event trigger occurs, the node reports the event to the PCRF. The event triggers that are required in procedures are unconditionally reported (for example, IP address allocation/de-allocation) from the node, while the PCRF may subscribe to the remaining events (for example Usage-Monitoring).
When sent from the PCRF to the node, the Event Trigger AVP indicates an event that triggers an action in the node. When sent from the node to the PCRF, the Event Trigger AVP indicates that the corresponding event has occurred. If no Event Trigger AVP is included in a CCA or RAR operation, any previously provisioned event trigger is still applicable.
The PCRF may remove all previously provided event triggers by providing the Event-Trigger AVP set to the value NO_EVENT_TRIGGERS. When an Event-Trigger AVP is provided with this value, no other Event-Trigger AVP is provided in the cca or rar command. Upon reception of an Event-Trigger AVP with this value, the node does not inform the PCRF of any event except for those events that are always reported and do not require provisioning from the PCRF.
When the PCRF subscribes to one or more event triggers by using the rar command, the node will send the corresponding currently applicable values to the PCRF in the RAA if available, and in this case, the Event-Trigger AVPs are not included.
For a list of the supported events on the node, refer to the 7750 SR Gx AVPs Reference Guide.
At any time, the PCRF can query the node for the presence of the subscriber-host via a RAR message.
The node responds with the following result-codes in RAA:
The PCRF can request IP-CAN session termination on the node via two messages:
Upon the arrival of either of those messages, the node starts the IP-CAN session termination procedure (CCR-t with a corresponding Termination-Cause AVP is sent to the PCRF). This is described in the 3GPP 29.212 document, §4.5.9.
For a list of the supported Termination-Cause AVP values on the node, refer to the 7750 SR Gx AVPs Reference Guide.
When a WiFi subscriber moves between the access points (APs), a CCR-u message is triggered on the node, carrying the Called-Station-Id AVP. The Called-Station-Id AVP carries the MAC address of the new AP. This functionality allows the PCRF to make a location-based policy decision.
This functionality is enabled via event trigger USER_LOCATION_CHANGE (13) [3GPP 29.212, §5.3.7] sent to the node by a PCRF in a CCA or RAR message.
The same event is reported back from the node to the PCRF in a CCR-u message when the user location changes.
Redundancy in Gx relies on the Diameter redundancy mechanisms described in Diameter Redundancy.
Persistency and Origin-State-ID AVP (RFC 6733, §8.6 and §8.16).
Persistency (saving the state of IPoE hosts on the compact flash) for Gx sessions is not supported. This means that, on reboot, the node restores the subscriber-hosts from the persistency but the Gx session awareness for the recovered hosts is lost. Any previously applied QoS or filter overrides are lost. However, subscriber-strings (subscriber-id, sub-profile, sla-profile, aa-profile) can be made persistent and can be preserved across reboots.
The Origin-State-Id (OSI) AVP is not stored in persistency. If the node reboots, the Origin-State-ID AVP is set to boot time (UTC).
The Origin-State-Id AVP is contained in the CER messages and application messages that are sent from the node to the PCRF/DRA. In the other direction, sent by the PCRF to the node, the OSI is ignored.
To restore a lost session after the reboot, the node initiates a CCR-i message for every host that has recovered from persistency. The CCR-i contains the new session-id and origin-state-id. Based on this CCR-i, it is expected that the PCRF returns the most current policy for the host.
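The recovery step can be sketched as below. The Session-Id formatting is an illustrative assumption (the actual format is node-specific); the point is that each restored host gets a CCR-i with a fresh Session-Id and an Origin-State-Id set to boot time:

```python
import time

# Sketch only: after reboot, the OSI is not restored from persistency and is
# set to the node's boot time (UTC); each recovered host triggers a CCR-i.
BOOT_TIME_UTC = int(time.time())

def build_recovery_ccr_i(host: str, seq: int) -> dict:
    """Build the key AVPs of a post-reboot CCR-i (dict form is illustrative)."""
    return {
        "Session-Id": f"node;{BOOT_TIME_UTC};{seq};{host}",  # new session-id
        "Origin-State-Id": BOOT_TIME_UTC,
        "CC-Request-Type": 1,  # INITIAL_REQUEST
    }

msg = build_recovery_ccr_i("host-10.0.0.5", 1)
print(msg["CC-Request-Type"])  # 1
```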
Each SR OS node has a receiving queue per Gx application (ESM, UM, AA). Each queue can hold 10,000 messages. While the queue is in the overloaded state, the SR OS node replies to every new RAR message with an RAA (ACK) immediately followed by a CCR-U containing an error-message with the description ‘Overload’. This can be considered explicit signaling that notifies the PCRF of the condition on the SR OS node.
If the messages in the overloaded queue do not require an answer (for example, when the queue contains CCA-I/U messages), the TCP window fills up and TCP ACKs are not sent; this is an implicit notification to the PCRF to slow down.
If the SR OS node receives a response from an overloaded PCRF (Result-Code = DIAMETER_TOO_BUSY), the SR OS node times out (tx-timer) the originally sent message. Once the message is timed out, the configuration settings (on-failure) determine whether to trigger the peer-failover procedure (peer failover based on the DIAMETER_TOO_BUSY Result-Code is recommended in RFC 6733, §7.1.3).
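The explicit overload signaling (RAA acknowledgement immediately followed by a CCR-U carrying an ‘Overload’ error message) can be sketched as follows; the queue class and message tuples are illustrative only:

```python
from collections import deque

QUEUE_LIMIT = 10_000  # per-application receive queue size stated above

class GxReceiveQueue:
    """Sketch of the overload behavior described above (illustrative only)."""

    def __init__(self):
        self.q = deque()

    def receive_rar(self, rar_id: int):
        """Return the messages the node emits for an incoming RAR."""
        if len(self.q) >= QUEUE_LIMIT:
            # Overloaded: ACK immediately, then signal 'Overload' via CCR-U.
            return [("RAA", rar_id), ("CCR-U", "Error-Message=Overload")]
        self.q.append(rar_id)
        return []  # queued for normal processing

q = GxReceiveQueue()
q.q.extend(range(QUEUE_LIMIT))  # fill the queue to its limit
print(q.receive_rar(1))         # explicit overload signaling toward the PCRF
```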
The Diameter NASREQ application is used for Authentication, Authorization, and Accounting services in the Network Access Server (NAS) environment. The SR OS supports a stateless operation of NASREQ authentication and authorization, interacting with a NASREQ server that does not maintain session state.
Subscriber host or session authentication results in an AA-Request (AAR) message being sent to the Diameter NASREQ server. An Auth-Session-State AVP with value equal to 1 (No State Maintained) is included in the AAR to inform the server of the stateless mode. The server responds with an AA-Answer (AAA) message and must include the Auth-Session-State AVP with value equal to 1 (No State Maintained), together with the authorization AVPs.
Diameter NASREQ accounting is not supported.
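The stateless AAR described above can be sketched as below. The dictionary representation and helper name are illustrative; only the Auth-Session-State value (1 = No State Maintained) comes from the text:

```python
NO_STATE_MAINTAINED = 1  # Auth-Session-State value signaling stateless mode

def build_aar(session_id: str, user_name: str) -> dict:
    """Sketch of the key AVPs a stateless NASREQ AA-Request would carry."""
    return {
        "Session-Id": session_id,
        "Auth-Session-State": NO_STATE_MAINTAINED,  # informs the server
        "User-Name": user_name,
    }

aar = build_aar("bng1;1234;5678", "sub-01@isp.net")  # illustrative values
print(aar["Auth-Session-State"])  # 1
```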
Table 192 lists the supported Diameter NASREQ messages. Vendor-specific AVPs are shown as: v-<vendor-id>-<AVP-id>.
Abbreviation | Diameter Message | Code |
AAR | AA-Request | 265 |
AAA | AA-Answer | 265 |
Diameter NASREQ authentication is supported for IPoE hosts and sessions, PPPoE PTA PAP/CHAP authentication. Diameter NASREQ authentication is not supported for L2TP LAC/LNS.
NASREQ and RADIUS authentication cannot be configured simultaneously on a capture-sap, local-user-database, or group-interface. They have the same priority in the hierarchy of different sources (such as local user database, Gx, defaults, etc.) for obtaining the subscriber host or session authorization parameters.
Multi-chassis redundancy is supported via separate Diameter NASREQ peers on each redundant node. Each node of the multi-chassis redundancy pair has its own Diameter Identity (origin host/realm). The subscriber host or session is authenticated on the BNG where it is initially connected. Due to the stateless operation, there is no need to synchronize NASREQ session state. Alternatively, the Diameter proxy can be used if it is required to have a single Diameter Identity (origin host/realm) per pair of multi-chassis redundant nodes.
There is no NASREQ re-authentication for active subscriber hosts or sessions, except for a forced re-authentication when the circuit ID/interface ID or remote ID of a DHCP host is changed.
Stateless NASREQ authentication can be complemented with Diameter Gx policy management for policy control and mid-session changes. Diameter NASREQ and Gx applications are supported simultaneously on a single Diameter peer.
Figure 223 shows a sample call flow for a subscriber using Diameter NASREQ for authentication and Diameter Gx for policy management.
Table 193 lists the authorization AVPs that are accepted in a Diameter NASREQ AA-Answer message. Vendor-specific AVPs are shown in the table as: v-<vendor-id>-<AVP-id>.
AVP ID | AVP Name | Description |
1 | User-Name | Overrides the RADIUS User-Name. |
8 | Framed-IP-Address | The IPv4 address of the subscriber host. |
9 | Framed-IP-Netmask | The IPv4 netmask of the subscriber host. |
22 | Framed-Route | IPv4 managed route to be configured on the NAS for a routed subscriber host. |
25 | Class | Opaque value; echoed in RADIUS accounting. |
88 | Framed-Pool | The name of an IPv4 address pool. |
97 | Framed-IPv6-Prefix | SLAAC IPv6 prefix (wan-host). |
99 | Framed-IPv6-Route | IPv6 managed route to be configured on the NAS for a v6 routed subscriber host. |
100 | Framed-IPv6-Pool | The name of an IPv6 IA-NA address pool (wan-host). |
123 | Delegated-IPv6-Prefix | DHCPv6 IA-PD IPv6 prefix (pd-host). |
26.6527.9 | Alc-Primary-Dns | The IPv4 address of the primary DNS server. |
26.6527.10 | Alc-Secondary-Dns | The IPv4 address of the secondary DNS server. |
26.6527.11 | Alc-Subsc-ID-Str | Unique subscriber ID string. |
26.6527.12 | Alc-Subsc-Prof-Str | Subscriber profile string. |
26.6527.13 | Alc-SLA-Prof-Str | SLA profile string. |
26.6527.16 | Alc-ANCP-Str | ANCP string. |
26.6527.17 | Alc-Retail-Serv-Id | The service-id of the retailer to which this subscriber host belongs. |
26.6527.18 | Alc-Default-Router | The default gateway for the user (DHCP option 3 default-router for a DHCPv4 proxy). |
26.6527.28 | Alc-Int-Dest-Id-Str | Intermediate destination ID string. |
26.6527.29 | Alc-Primary-Nbns | The IPv4 address of the primary NetBios Name Server (NBNS). |
26.6527.30 | Alc-Secondary-Nbns | The IPv4 address of the secondary NetBios Name Server (NBNS). |
26.6527.31 | Alc-MSAP-Serv-Id | Service ID where the managed SAP is to be created. |
26.6527.32 | Alc-MSAP-Policy | Managed SAP policy used to create the MSAP. |
26.6527.33 | Alc-MSAP-Interface | Group-interface name where the managed SAP is to be created. |
26.6527.45 | Alc-App-Prof-Str | Application profile string. |
26.6527.99 | Alc-Ipv6-Address | DHCPv6 IA-NA IPv6 address (wan-host). |
26.6527.105 | Alc-Ipv6-Primary-Dns | The IPv6 address of the primary DNSv6 server. |
26.6527.106 | Alc-Ipv6-Secondary-Dns | The IPv6 address of the secondary DNSv6 server. |
26.6527.131 | Alc-Delegated-Ipv6-Pool | The name of an IPv6 IA-PD prefix pool (pd-host). |
26.6527.161 | Alc-Delegated-Ipv6-Prefix-Length | DHCPv6 IA-PD prefix length (pd-host). |
26.6527.174 | Alc-Lease-Time | The lease-time for proxy, in seconds. |
26.6527.181 | Alc-SLAAC-IPv6-Pool | The name of an IPv6 SLAAC prefix pool (wan-host). |
26.6527.1036 | Alc-SPI-Sharing | Grouped AVP. Sets the SLA Profile Instance (SPI) sharing method for this subscriber session to per-group or default SPI sharing. |
26.6527.1037 | Alc-SPI-Sharing-Type | Must be included in an Alc-SPI-Sharing grouped AVP. Sets the SPI sharing method. Value 0 = default, as specified in the SLA profile with def-instance-sharing; the Alc-SPI-Sharing-Id AVP should not be present. Value 2 = per group; the group identifier is specified with the Alc-SPI-Sharing-Id AVP. |
26.6527.1038 | Alc-SPI-Sharing-Id | Must be included in an Alc-SPI-Sharing grouped AVP. Specifies the group identifier when SPI sharing is per group. |
To specify the peers to reach the Diameter NASREQ server in a diameter peer policy:
To specify the Diameter NASREQ application specific parameters, such as AVP format and values, in a Diameter application policy:
To apply the Diameter NASREQ application policy as Diameter authentication policy at a VPLS capture SAP, at an IES/VPRN group-interface and/or at a local user database:
![]() | Note: A Diameter authentication policy cannot be configured simultaneously with a RADIUS authentication policy on the same group-interface or capture SAP, nor for the same host in a local user database. |
If no AA-Answer message is received from the primary or secondary Diameter peer, then the host or session can be instantiated with the configured defaults. This is achieved by the following NASREQ application policy configuration:
To enable flexible integration with different NASREQ servers, a Python policy can be configured on the Diameter peer policy. The Python script can interact on the AVPs present in the AA-Request and AA-Answer messages.
Diameter redundancy is supported on multiple levels:
Each configured diameter node can support several peers that are simultaneously open. Only one of those peers is used to forward application messages for a given user session. If there are multiple peers for the same realm, the next-hop peer is selected in the order of configured preference. If all peers have the same preference for a given realm, the peer index is used to break the tie.
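The selection order (lowest configured preference wins; the peer index breaks ties) can be sketched as follows. The tuple fields are assumptions for illustration, not an SR OS data structure:

```python
# Sketch of next-hop peer selection: order by preference, then peer index.

def select_peer(peers):
    """peers: list of (name, preference, index) tuples for open peers."""
    return min(peers, key=lambda p: (p[1], p[2]))[0]

open_peers = [("peer-b", 10, 2), ("peer-a", 10, 1), ("peer-c", 20, 1)]
print(select_peer(open_peers))  # peer-a: ties with peer-b, but lower index
```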
Peer failover is performed by Diameter base protocol and is supported only for routable diameter request messages of the same user session (for example, peer failover does not apply to a CER message). A peer failover triggers clearing of the destination host name from the user session, which results in a Diameter routing lookup based on the destination-realm (since destination host name is cleared). A new next hop in the chain of priorities for a given destination realm within the realm routing table is selected.
Routable answer messages do not rely on peer failover procedures since their forwarding is governed on a hop-by-hop basis (exact reverse path of the request message).
A failover peer procedure for a user session is triggered under the following conditions related to the peer connectivity problems:
Once the failed peer is restored, forwarding is resumed on the restored peer for all new and existing sessions if this peer is the best next hop for the user session (dictated by peering and realm table). There is no user session stickiness to the peer and instead the routing of the Diameter request messages always follows the routing and forwarding tables. It is important to note that peer failover refers to the Diameter base ability to select a new best peer in the list of available peers, in the case where the current peer becomes unavailable. This rerouting of Diameter application messages is independent of the application level message retransmissions. Once the Diameter base determines that it cannot deliver the request message (for example, CCR times out or the next hop to the destination-realm is unavailable), the Diameter base notifies the application level. The application (NASREQ, Gx, or Gy) then decides (by a configuration option) whether it re-transmits timed-out request messages (CCR).
The Diameter Multi-Chassis Redundancy solution on the node is based on the model where communication with the PCRF/DRA occurs over a single Diameter peering connection for the redundant pair of nodes; that is, an active Diameter proxy module running on the node is front-ending the communication with the PCRF/DRA, and is relaying messages between the Diameter clients (in redundant nodes) and the DRA/PCRF. Both nodes run the Diameter proxy module, but only the active one opens the TCP peering connection towards the DRA/PCRF.
The benefits of the Diameter proxy model are:
The goal of this redundancy model is to provide a predictable and quick recovery after a node failure, PCRF failure, or relevant components within those two entities (such as line cards, MDAs, and physical ports).
Figure 224 illustrates the basic concept for Diameter Multi-Chassis Redundancy for a Gx application. The model shows two nodes (BNGs). Each BNG contains an ESM module and a Gx/Diameter module which have a peering connection to the active Diameter proxy module. The peering connections are IP connections. Both nodes communicate with the PCRF/DRA through the active Diameter proxy which maintains a peering connection with the PCRF/DRA.
The fundamental principles of the Diameter proxy-based redundant solution are described below (also refer to Figure 224):
The Diameter proxy with the highest system MAC address assumes the controller role. The controller node decides which proxy becomes ACTIVE or STANDBY. Activity election information is processed by the controller node and then the controller node delegates the actual ACTIVE/STANDBY roles to Diameter proxies. The ACTIVE proxy may not necessarily be the same node as the controller node.
The activity selection (by the controller node) in the Diameter proxy is based on the current states of both Diameter Proxies (local and remote) and the system MAC.
Preemption is not supported, which means that newly brought up Diameter proxy does not overtake the activity state from the existing active Diameter proxy, regardless of the system MAC addresses.
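A simplified sketch of the activity decision: an already-active proxy is never preempted; otherwise the higher system MAC wins. The real controller logic also takes the full proxy state machines into account, so this is only illustrative:

```python
# Illustrative activity decision for a redundant Diameter proxy pair.

def elect_active(local_mac: str, remote_mac: str,
                 local_active: bool, remote_active: bool) -> str:
    """Return 'local' or 'remote' for the ACTIVE role."""
    if local_active:    # no preemption of an existing active proxy
        return "local"
    if remote_active:
        return "remote"
    # neither side is active yet: higher system MAC wins
    return "local" if local_mac > remote_mac else "remote"

print(elect_active("00:aa", "00:bb", False, False))  # remote (higher MAC)
print(elect_active("00:aa", "00:bb", True, False))   # local stays active
```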
Once the node becomes active, it advertises the new state to the MCS Diameter proxy peer, tries to open a DRA/PCRF peering connection, and at the same time accepts client connections. The active Diameter proxy replies to the client with a DIAMETER_UNABLE_TO_DELIVER error code when server-side peers cannot be opened.
All application level (Gx or NASREQ) sessions related parameters are synchronized on the ESM level via MCS.
The parameters synchronized on the ESM level are:
The Diameter proxy module is synchronized via MCS; the information passed between the two nodes is:
The above information is used to determine the activity of the Diameter proxy at each node.
If an MCS link fails, the nodes become isolated. Each node acts independently and tries to become active. This scenario is described in Isolated Chassis.
The handling of Diameter retransmissions is crucial for the Diameter Multi-Chassis Redundancy operation. Retransmissions provide the means to recover a Diameter session that was left in an unacknowledged state due to failure of the path between the node and the DRA/PCRF.
Retransmissions of Diameter messages are handled on two levels by a pair of redundant nodes:
A more detailed explanation of the processing that occurs on each level for a Gx application is given below.
In summary:
These scenarios are shown in Figure 225, Figure 226, and Figure 227.
Multi-chassis redundancy is less concerned with the retransmissions of the answer messages (RAA) since, if the answer is not received, the PCRF retransmits the request (RAR). Retransmission of the answer messages is performed only on the TCP level within a single node. It is not performed on the Diameter level.
When the request message is retransmitted by the Diameter application (due to the Tx timer timeout, primary peer failure - DWR timeout, or receipt of the answer message with E-bit set), the content of the message stays the same, including the CC-Request numbers but the T-bit in the Diameter header is set. The T-bit indicates to the PCRF that the message is retransmitted (mostly used for accounting purposes so that the counting records are not duplicated). It also signals to the Diameter proxy that the message rerouting to the secondary peer should be performed.
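A minimal sketch of the retransmission marking: the T-bit (0x10 in the Diameter header command flags, per RFC 6733) is set while the message content, including the CC-Request-Number, is left untouched. The dictionary message representation is illustrative:

```python
T_BIT = 0x10  # 'retransmitted' command flag in the Diameter header (RFC 6733)

def mark_retransmission(message: dict) -> dict:
    """On retransmission the content (including CC-Request-Number) is
    unchanged; only the T-bit in the header flags is set."""
    retrans = dict(message)                      # same AVPs, same CC-Req-Num
    retrans["flags"] = message["flags"] | T_BIT
    return retrans

ccr = {"flags": 0x80, "CC-Request-Number": 3}    # 0x80 = request (R) bit
rtx = mark_retransmission(ccr)
print(hex(rtx["flags"]), rtx["CC-Request-Number"])  # 0x90 3
```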
The Diameter proxy is applied to an IPv4 or IPv6 address within a routing-context on a node. This IP address is a Diameter proxy listening IP address that is associated with an interface on a node, including the system interface (system IPv4 or IPv6 address) or loopback interface (loopback IPv4 or IPv6 address).
The number of Diameter Proxies per listening IP address is limited to one. That is, each proxy diameter-peer-policy requires a unique combination of source-ip (listening IP address) and the routing-context (node).
The number of Diameter-peer-policies on a node is limited to 32. This means that the combined number of Diameter clients and Proxies on a node cannot exceed 32.
The Diameter Proxy has the following role on a node:
![]() | Note: This re-routing operation in the proxy is performed per message, not per session as is the case for a Diameter client. |
Regular Diameter Client | Diameter Proxy |
Initiates messages. | Transparently passes all messages between the client and the server. Never initiates the messages. |
Buffering is implemented, thus retransmissions are supported. | Buffering (a pending queue) is not implemented, thus messages are never retransmitted. |
When retransmitting, it sets the T-bit in the Diameter Header. | Never retransmits the messages. |
Failover to the secondary peer is triggered by:
| Failover to the secondary peer is triggered by:
|
Diameter client performs peer failover per session. | Diameter proxy performs peer failover per message (with the T-bit set). |
The CC-Request-Number AVP (RFC 4006, §8.2) is typically used to match requests with answers. The Session-Id and CC-Request-Number together form a unique per-message pair that identifies a transaction (matching request and answer messages) on a global level.
The Diameter proxy does not rewrite the CC-Request-Number in the messages received from the client.
CC-Request-Numbers are synchronized at the ESM level. This is needed so that operation with the proper CC-Request-Number can resume after a switchover.
For example, the following CC-Req-Num sequence for the session is preserved across SRRP switch-overs:
The Diameter proxy does not maintain any session state. Forwarding is based on transactions which are short lived. Transactions are based on a pairing request/answer messages matched by the same hop-by-hop identifier and the peer from which the request was received. In this fashion, answer messages coming from the DRA/PCRF can be unambiguously forwarded to the proper Diameter client (from which the request was received).
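The transaction matching described above can be sketched as a short-lived map from the hop-by-hop identifier to the client peer that sent the request; the structure below is illustrative, not the actual implementation:

```python
# Sketch of stateless transaction matching in the proxy: pending transactions
# are remembered only long enough to relay the answer from the DRA/PCRF back
# to the Diameter client that originated the request.

class ProxyTransactions:
    def __init__(self):
        self.pending = {}

    def forward_request(self, hop_by_hop: int, client_peer: str):
        self.pending[hop_by_hop] = client_peer  # short-lived transaction

    def forward_answer(self, hop_by_hop: int) -> str:
        # pop: the transaction ends once the answer is relayed
        return self.pending.pop(hop_by_hop)

tx = ProxyTransactions()
tx.forward_request(0x51AB, "client-bng-1")
print(tx.forward_answer(0x51AB))  # client-bng-1
```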
Since the session state is not kept in the Diameter proxy, RAR requests are flooded to both Diameter clients.
The following are four types of switchovers that are most likely to occur:
Each switchover type for a Gx application is discussed in more detail below:
In cases where the Diameter proxy changes its states (INIT, ACTIVE, STANDBY), a log/trap is generated. This log is enabled by default in log-event control. The notification name is tmnxDiamProxyStateChange.
The AN-GW-Address carried in the CCR-I message for the Diameter application session (for example, Gx) is the IP address of the node on which the underlying SRRP instance (for this Gx session) is in the SRRP master state.
When the SRRP switches over due to a failure in the access part of the network (including the ports on a node), a CCR-U can optionally be sent (configuration dependent) with the AN-GW-Address AVP set to the address of the node on which the SRRP instance transitioned into the master state. This behavior is controlled via event trigger id 13 (USER_LOCATION_CHANGE).
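The configuration-dependent behavior above can be sketched as follows (an illustration with hypothetical names, not node code):

```python
# Illustrative sketch: on an SRRP master transition, a CCR-U carrying the
# new master's address in AN-GW-Address is generated only if event
# trigger 13 (USER_LOCATION_CHANGE) is provisioned.

USER_LOCATION_CHANGE = 13  # 3GPP 29.212 Event-Trigger value

def on_srrp_master_transition(new_master_ip, provisioned_event_triggers):
    """Return the CCR-U to send, or None if the trigger is not provisioned."""
    if USER_LOCATION_CHANGE not in provisioned_event_triggers:
        return None  # configuration dependent: no update is sent
    return {
        "msg": "CCR-U",
        "AN-GW-Address": new_master_ip,
        "Event-Trigger": USER_LOCATION_CHANGE,
    }
```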
In cases where the MCS connection is broken, the Diameter proxies on both nodes try to become active, since each considers itself the only functional node; from its local point of view, the MCS peer is dead.
In this isolation scenario, both nodes are most likely able to open TCP peering sessions with the PCRF/DRA (see Figure 231).
Once the MCS is recovered, the states are re-synchronized.
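The split-brain behavior and subsequent re-synchronization can be sketched as follows (an illustration only; the role-resolution input is an assumption, not a documented mechanism):

```python
# Illustrative sketch: while MCS is down, each node considers itself the
# only functional node and runs its Diameter proxy as ACTIVE. Once MCS
# recovers, the re-synchronized roles (one ACTIVE, one STANDBY) apply.

def local_proxy_state(mcs_up, resynchronized_role):
    """Return this node's Diameter proxy state."""
    if not mcs_up:
        return "ACTIVE"  # isolation: the MCS peer appears dead
    return resynchronized_role  # state restored after re-synchronization
```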
Diameter identities (origin-host/realm) can be configured to be the same on both nodes. This ensures that the redundant pair of nodes appears as a single node at the Diameter level (Diameter Identities).
A CPM switchover on the active Diameter proxy causes the peering connections between the client and the proxy to be lost. Consequently, the clients have to re-establish their peering connections. Peering connections on the active Diameter proxy towards the server remain uninterrupted.
Gx specific behavior in a multi-chassis configuration is as follows:
This section explains the debugging capabilities for troubleshooting a Diameter or Diameter application problem.
Debugging should be as specific as possible and limited to relevant messages only. Enabling detailed debugging for all diameter messages on a production node generates a flood of information and is not very helpful in isolating the problem; it can, however, be a valid scenario for lab testing.
Diameter debug output can be limited to:
Only messages that match all specified criteria are shown in the debug output.
The diameter debug detail-level command can be set to:
For a per-diameter-peer policy, additional criteria can be specified to further restrict the debug output to:
Only messages sent or received on a peer that belongs to the diameter peer policy and matches all specified criteria are shown in the debug output.
To restrict debug output to messages from a given Diameter session, AVP matching with session learning is required. AVP 263 Session-Id is the only Diameter AVP that is present in all messages of a Diameter session, and it is dynamically assigned when the Diameter session is initiated. The session ID is not useful as a debug criterion because its value is unknown up front. Instead, using AVP matching, it is possible to learn the session ID from Diameter application messages (NASREQ, Gx, and Gy) matching a message type and one or more AVP values. All subsequent Diameter application messages that belong to the learned session ID are then included in the debug output, as shown in the following example:
In this configuration, the debug output for Diameter messages is restricted to messages of type ccr, cca, rar, or raa that are sent or received on diameter peer policy “diam-pol-1” and that belong to Diameter sessions for which:
The rules for Diameter debug AVP matching are summarized below: