4. Filter Policies

4.1. ACL Filter Policy Overview

ACL filter policies, also referred to as Access Control Lists (ACLs) or simply “filters”, are sets of ordered rule entries specifying packet match criteria and the actions to be performed on a packet upon a match. Filter policies are created with a unique filter ID and filter name. The filter name is assigned when the filter policy is created. If a name is not specified at creation time, SR OS assigns a string version of the filter-id as the name.

There are three main filter policy types: ip-filter for IPv4, ipv6-filter for IPv6, and mac-filter for MAC-level filtering. Additionally, the filter policy scope defines whether the policy can be reused between different interfaces, embedded in another filter policy, or applied at the system level:

  1. An exclusive filter defines policy rules explicitly for a single interface. An exclusive filter allows the highest level of customization but uses the most resources, because each exclusive filter consumes hardware resources on line cards on which the interface exists.
  2. A template filter uses an identical set of policy rules across multiple interfaces. Template filters use a single set of resources per line card, regardless of how many interfaces use a specific template filter policy on that line card. Template filter policies used on access interfaces consume resources on line cards only if at least one access interface for a specific template filter policy is configured on a specific line card.
  3. An embedded filter defines a common set of policy rules that can then be used (embedded) by other exclusive or template filters in the system. This allows optimized management of filter policies.
  4. A system filter policy defines a common set of policy rules that can then be activated within other exclusive/template filters. It can be used, for example, as a system-level set of deny rules. This allows optimized management of common rules (similarly to embedded filters). However, active system filter policy entries are not duplicated inside each policy that activates the system policy (as is the case when embedding is used). The active system policy is downloaded once to line cards, and activated filter policies are chained to it.
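
The scope described above is set when the policy is defined. The following is an illustrative classic CLI sketch only; the filter ID and description are hypothetical, and exact syntax varies by SR OS release:

```
configure
    filter
        ip-filter 100 create
            description "shared template filter"
            scope template    # one of: exclusive | template | embedded | system
        exit
    exit
```

A template-scoped policy such as this one can then be referenced from many interfaces while consuming a single set of resources per line card.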

After the filter policy is created, it must be associated with interfaces/services/subscribers or with other filter policies (if the created policy cannot be deployed directly on an interface/service/subscriber) so that the incoming/outgoing traffic is subjected to the filter rules. Filter policies are associated with interfaces/services/subscribers separately in the ingress and egress directions. The policies deployed in the ingress and egress directions can be the same or different. In general, Nokia recommends using different filter policies for the ingress and egress directions, and different filter policies per service type, because filter policies support different match criteria and different actions for different directions and service contexts.

A filter policy is applied to a packet in the ascending rule entry order. When a packet matches all the parameters specified in a filter entry’s match criteria, the system takes the action defined for that entry. If a packet does not match the entry parameters, the packet is compared to the next higher numerical filter entry rule, and so on. If the packet does not match any of the entries, the system executes the default-action specified in the filter policy: drop or forward.
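
This ordered evaluation can be sketched as follows in classic CLI. The entry IDs, addresses, and ports are hypothetical, and the exact action syntax varies by SR OS release:

```
configure filter ip-filter 100 create
    default-action drop          # executed when no entry matches
    entry 10 create              # lowest entry ID, evaluated first
        match protocol tcp
            dst-port eq 22
        exit
        action drop
    exit
    entry 20 create              # evaluated only if entry 10 did not match
        match
            src-ip 192.0.2.0/24
        exit
        action forward
    exit
exit
```

A packet is checked against entry 10 first; only if it does not match all of entry 10's criteria is entry 20 evaluated, and a packet matching neither entry is dropped by the default action.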

For Layer 2 services, either an IPv4/IPv6 or a MAC filter policy can be applied. For Layer 3 services and network interfaces, an IPv4/IPv6 policy can be applied. For an R-VPLS service, a Layer 2 filter policy can be applied to Layer 2 forwarded traffic and a Layer 3 filter policy can be applied to Layer 3 routed traffic. For dual-stack interfaces, if both IPv4 and IPv6 filter policies are configured, the policy applied is based on the outer IP header of the packet. Non-IP packets are not affected by an IP filter policy, so the default action in the IP filter policy does not apply to these packets. Egress IPv4 QoS-based classification criteria are ignored when an egress MAC filter policy is configured on the same interface.

Additionally, platforms that support Network Group Encryption (NGE) can use IP exception filters. IP exception filters scan all outbound traffic entering an NGE domain and allow packets that match the exception filter criteria to transit the NGE domain unencrypted. See Router Encryption Exceptions using ACLs for information about IP exception filters supported by NGE nodes.

4.1.1. Filter Policy Basics

The following subsections describe the main functionality supported by filter policies.

4.1.1.1. Filter Policy Packet Match Criteria

This section defines the packet match criteria supported on SR OS for IPv4, IPv6, and MAC filters. The supported criteria types depend on the hardware platform and filter direction; contact your Nokia representative for more information.

General notes:

  1. If multiple unique match criteria are specified in a single filter policy entry, all criteria must be met in order for the packet to be considered a match against that filter policy entry (logical AND).
  2. Any match criterion not explicitly defined is ignored during the match.
  3. An ACL filter policy entry with match criteria defined, but no action configured, is considered incomplete and inactive (an entry is not downloaded to the line card). A filter policy must have at least one entry active for the policy to be considered active.
  4. An ACL filter entry with no match conditions defined matches all packets.
  5. Because an ACL filter policy is an ordered list, entries should be configured (numbered) from the most explicit to the least explicit.

4.1.1.2. IPv4/IPv6 Filter Policy Entry Match Criteria

The IPv4 and IPv6 match criteria supported by SR OS are listed below. The criteria are evaluated against the outer IPv4/IPv6 header and a Layer 4 header that follows (if applicable). Support for match criteria may depend on hardware or filter direction, as described below. Nokia recommends not configuring a filter in a direction or on hardware where a match criterion is not supported as this may lead to unwanted behavior. Some match criteria may be grouped in match lists and may be auto-generated based on router configuration, see Filter Policy Advanced Topics for more information.

IPv4 and IPv6 filter policies support three different filter types, normal, src-mac, and packet-length, each supporting a different set of match criteria.

The match criteria available using the normal filter type are defined below. Layer 3 match criteria include:

  1. dscp — Match for the specified DSCP value against the Differentiated Services Code Point/Traffic Class field in the IPv4 or IPv6 packet header.
  2. src-ip/dst-ip — Match for the specified source/destination IPv4/IPv6 address prefix against the source/destination IPv4/IPv6 address field in the IPv4/IPv6 packet header. The operator can optionally configure a mask to be used in a match.
  3. flow-label — Match for the specified flow label against the Flow label field in IPv6 packets. The operator can optionally configure a mask to be used in a match. This operation is supported on ingress filters.
  4. protocol — Matches the specified protocol against the Protocol field in the outer IPv4 packet header (for example, TCP, UDP, IGMP). “*” can be used to specify a TCP or UDP upper-layer protocol match (logical OR).
  5. next-header — Matches the specified upper-layer protocol (such as TCP, UDP, or IGMPv6) against the Next Header field of the IPv6 packet header. “*” can be used to specify a TCP or UDP upper-layer protocol match (logical OR). When config>system>ip>ipv6-eh max is configured, the next-header value is the last Next Header field in the last extension header; up to six extension headers are supported. When config>system>ip>ipv6-eh limited is configured, the next-header value is the Next Header field from the IPv6 header.
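
Several of the Layer 3 criteria above can be combined in a single entry (logical AND), as in this illustrative sketch; the policy ID, prefix, and DSCP value are hypothetical and syntax may vary by release:

```
configure filter ipv6-filter 200 create
    entry 10 create
        match next-header udp    # upper-layer protocol criterion
            dscp ef              # Traffic Class / DSCP criterion
            src-ip 2001:db8::/32 # source prefix criterion
        exit
        action forward
    exit
exit
```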

Fragmentation match criteria:

  1. fragment — Match for the presence of fragmented packet. For IPv4, match against the MF bit or Fragment Offset field to determine whether the packet is a fragment. For IPv6, match against the Next Header Field for Fragment Extension Header value to determine whether the packet is a fragment. Up to six extension headers are matched against to find the Fragmentation Extension Header.
    IPv4 and IPv6 filters support matching the initial fragment using first-only, or non-initial fragments using non-first-only.
    IPv4 match fragment true or false criteria are supported on both ingress and egress.
    IPv4 match fragment first-only or non-first-only are supported on ingress only.
    Operational note for fragmented traffic — IP and IPv6 filters defined to match TCP, UDP, ICMP, or SCTP criteria (such as src-port, dst-port, port, tcp-ack, tcp-syn, icmp-type, and icmp-code) with values of zero or false also match non-first fragment packets if other match criteria within the same filter entry are also met. Non-initial fragment packets do not contain a UDP, TCP, ICMP or SCTP header.
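
As an example, the following sketch drops non-initial IPv4 fragments on ingress. The policy ID is hypothetical and the exact keyword forms vary by SR OS release:

```
configure filter ip-filter 210 create
    entry 10 create
        match
            fragment non-first-only   # ingress only; true/false matches any/no fragment
        exit
        action drop
    exit
exit
```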

IPv4 options match criteria:

  1. ip-option — Matches the specified option value in the first option of the IPv4 packet. The operator can optionally configure a mask to be used in a match.
  2. option-present — Matches the presence of IP options in the IPv4 packet. Padding and EOOL are also considered as IP options. Up to six IP options are matched against.
  3. multiple-option — Matches the presence of multiple IP options in the IPv4 packet.
  4. src-route-option — Matches the presence of IP Option 3 or 9 (Loose or Strict Source Route) in the first three IP options of the IPv4 packet. A packet also matches this rule if the packet has more than three IP options.
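
For illustration, the following sketch drops source-routed IPv4 packets using the src-route-option criterion described above (hypothetical policy ID; syntax may vary by release):

```
configure filter ip-filter 220 create
    entry 10 create
        match
            src-route-option true   # IP Option 3 or 9 in the first three options
        exit
        action drop
    exit
exit
```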

IPv6 Extension Header match criteria:

Up to six extension headers are matched against when config>system>ip>ipv6-eh max is configured. When config>system>ip>ipv6-eh limited is configured, the next header value of the IPv6 header is used instead.

  1. ah-ext-header — Matches for the presence of the Authentication Header extension header in the IPv6 packet. This match criterion is supported on ingress only.
  2. esp-ext-header — Matches for the presence of the Encapsulating Security Payload extension header in the IPv6 packet. This match criterion is supported on ingress only.
  3. hop-by-hop-opt — Matches for the presence of Hop-by-hop options extension header in the IPv6 packet. This match criterion is supported on ingress only.
  4. routing-type0 — Matches for the presence of Routing extension header type 0 in the IPv6 packet. This match criterion is supported on ingress only.
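
An ingress IPv6 entry using one of these extension-header criteria might be sketched as follows (illustrative policy ID; syntax may vary by release):

```
configure filter ipv6-filter 230 create
    entry 10 create
        match
            routing-type0 true    # ingress only; Routing extension header type 0
        exit
        action drop
    exit
exit
```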

Upper-layer protocol match criteria:

  1. icmp-code — Matches the specified value against the Code field of the ICMP/ICMPv6 header of the packet. This match is supported only for entries that also define protocol/next-header match for “ICMP”/”ICMPv6” protocol.
  2. icmp-type — Matches the specified value against the Type field of the ICMP/ICMPv6 header of the packet. This match is supported only for entries that also define protocol/next-header match for “ICMP”/”ICMPv6” protocol.
  3. src-port/dst-port/port — Matches the specified port value, port list, or port range against the Source Port Number/Destination Port Number of the UDP/TCP/SCTP packet header. An option to match either source or destination (Logical OR) using a single filter policy entry is supported by using a directionless “port” command. Source/destination match is supported only for entries that also define protocol/next-header match for “TCP”, “UDP”, “SCTP”, or “TCP or UDP” protocols. A non-initial fragment never matches an entry with non-zero port criteria specified. Match on SCTP src-port, dst-port, or port is supported on ingress filter policy.
  4. tcp-ack/tcp-cwr/tcp-ece/tcp-fin/tcp-ns/tcp-psh/tcp-rst/tcp-syn/tcp-urg — Matches the presence or absence of the TCP flags defined in RFC 793/3168/3540 in the TCP header of the packet. This match criteria also requires defining the protocol/next-header match as TCP. tcp-cwr, tcp-ece, tcp-fin, tcp-ns, tcp-psh, tcp-rst, tcp-urg are supported on FP4-based line cards only. When configured on other line cards, the bit for the unsupported TCP flags is ignored.
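
Port and TCP-flag criteria require the protocol/next-header match described above, as in this sketch (addresses, ports, and exact syntax are illustrative):

```
configure filter ip-filter 240 create
    entry 10 create
        match protocol tcp        # required for port and TCP-flag criteria
            dst-port eq 179
            tcp-syn true
            tcp-ack false         # match new-session SYN packets only
        exit
        action forward
    exit
exit
```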

Filter type match criteria:

Additional match criteria for src-mac, packet-length and destination-class are available using different filter types. See Filter Policy Type for more information.

4.1.1.3. MAC Filter Policy Entry Match Criteria

MAC filter policies support three different filter types, normal, isid, and vid, each supporting a different set of match criteria.

The following list describes the MAC match criteria supported by SR OS for all types of MAC filters (normal, isid, and vid). The criteria are evaluated against the Ethernet header of the Ethernet frame. Support for a match criterion may depend on the hardware and filter direction, as described below. A match criterion is blocked if it is not supported by a specified frame-type or MAC filter type. Nokia recommends not configuring a filter in a direction or on hardware where a match criterion is not supported, as this may lead to unwanted behavior.

  1. frame-type — The filter searches to match a specific type of frame format. For example, configuring frame-type ethernet_II matches only Ethernet-II frames.
  2. src-mac — The filter searches to match source MAC address frames. The operator can optionally configure a mask to be used in a match.
  3. dst-mac — The filter searches to match destination MAC address frames. The operator can optionally configure a mask to be used in a match.
  4. dot1p — The filter searches to match 802.1p frames. The operator can optionally configure a mask to be used in a match.
  5. etype — The filter searches to match Ethernet II frames. The Ethernet type field is a two-byte field used to identify the protocol carried by the Ethernet frame.
  6. ssap — The filter searches to match frames with a source access point on the network node designated in the source field of the packet. The operator can optionally configure a mask to be used in a match.
  7. dsap — The filter searches to match frames with a destination access point on the network node designated in the destination field of the packet. The operator can optionally configure a mask to be used in a match.
  8. snap-oui — The filter searches to match frames with the specified three-byte OUI field.
  9. snap-pid — The filter searches to match frames with the specified two-byte protocol ID that follows the three-byte OUI field.
  10. isid — The filter searches to match Ethernet frames with the specified 24-bit ISID value from the PBB I-TAG. This match criterion is mutually exclusive with all the other match criteria under a specific MAC filter policy and is applicable to MAC filters of type isid only. The resulting MAC filter can only be applied on a BVPLS SAP or PW in the egress direction.
  11. inner-tag/outer-tag — The filter searches to match Ethernet frames with the non-service delimiting tags, as described in the VID MAC Filters section. This match criterion is mutually exclusive with all other match criteria under a specific MAC filter policy and is applicable to MAC filters of type vid only.
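
A normal-type MAC filter entry combining several of the criteria above might be sketched as follows (addresses and IDs are hypothetical; exact syntax varies by SR OS release):

```
configure filter mac-filter 30 create
    entry 10 create
        match frame-type ethernet_II
            etype 0x0806                                  # ARP
            src-mac 00:00:5e:00:53:01 ff:ff:ff:ff:ff:ff   # address and mask
        exit
        action drop
    exit
exit
```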

4.1.1.4. IP Exception Filters

An NGE node supports IPv4 exception filters. See Router Encryption Exceptions using ACLs for information about IP exception filters supported by NGE nodes.

4.1.1.5. Filter Policy Actions

The following actions are supported by ACL filter policies:

  1. drop — Allows operators to deny traffic ingressing or egressing the system.
    1. IPv4 packet-length and IPv6 payload-length conditional drop — Traffic can be dropped based on IPv4 packet length or IPv6 payload length by specifying a packet length or payload length value or range within the drop filter action (the IPv6 payload length field does not account for the size of the fixed IP header, which is 40 bytes).
      This filter action is supported on ingress IPv4 and IPv6 filter policies only; if the filter is configured on an egress interface, the packet-length or payload-length match condition is always true.
      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.
      Packets that match a filter policy entry match criteria and the drop packet-length-value or payload-length-value are dropped. Packets that match only the filter policy entry match criteria and do not match the drop packet-length-value or drop payload-length-value are forwarded with no further matching in following filter entries.
      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.
    2. IPv4 TTL and IPv6 hop limit conditional drop — Traffic can be dropped based on a IPv4 TTL or IPv6 hop limit by specifying a TTL or hop limit value or range within the drop filter action.
      This filter action is supported on ingress IPv4 and IPv6 filter policies only. If the filter is configured on an egress interface, the ttl or hop-limit match condition is always true.
      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.
      Packets that match filter policy entry match criteria and the drop ttl-value or hop-limit-value are dropped. Packets that match only the filter policy entry match criteria and do not match the drop ttl-value or hop-limit-value are forwarded with no further match in following filter entries.
      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.
    3. Pattern conditional drop — Traffic can be dropped based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset-type, and offset-value match in the first 256 bytes of a packet.
      The pattern expression is up to 8 bytes long. The offset-type-value identifies the starting point for the offset-value and the supported offset type values are:
      1. layer-3: layer 3 IP header
      2. layer-4: layer 4 protocol header
      3. data: data payload for TCP or UDP protocols
      4. dns-qtype: DNS request or response query type
      The content of the packet is compared with the expression/mask value found at the offset-type-value and offset-value as defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, offset-value 20, then the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.
      This drop condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.
      Packets that match a filter policy's entry match criteria and the pattern are dropped. Packets that match only the filter policy's entry match criteria and do not match the pattern are forwarded without a further match in subsequent filter entries.
      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4-based line cards, and cannot be configured on egress. A filter entry using a pattern is not supported on FP2- or FP3-based line cards. If programmed, the pattern is ignored and the action is forward.
      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.
  2. drop-extracted-traffic — Traffic extracted to the CPM can be dropped using ingress IPv4 and IPv6 filter policies based on filter match criteria. Any IP traffic extracted to the CPM is subject to this filter action, including routing protocols, snooped traffic, and TTL expired traffic.
    Packets that match the filter entry match criteria and extracted to the CPM are dropped. Packets that match only the filter entry match criteria and are not extracted to the CPM are forwarded with no further match in the subsequent filter entries.
    Cflowd, log, mirror, and statistics apply to all traffic matching the filter entry, regardless of drop or forward action.
  3. forward — Allows operators to accept traffic ingressing or egressing the system; the traffic is then subject to regular processing.
  4. forward-when — Allows operators to configure a conditional accept action.
    1. Pattern conditional accept — Traffic can be accepted based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset-type, and offset-value match in the first 256 bytes of a packet. The pattern expression is up to 8 bytes long. The offset-type identifies the starting point for the offset-value and the supported offset types are:
      1. layer-3: Layer 3 IP header
      2. layer-4: Layer 4 protocol header
      3. data: data payload for TCP or UDP protocols
      4. dns-qtype: DNS request or response query type
      The content of the packet is compared with the expression/mask value found at the offset-type and offset-value defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, and offset-value 20, then the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.
      This accept condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria. Packets that match a filter policy's entry match criteria and the pattern are accepted. Packets that match only the filter policy's entry match criteria and do not match the pattern are dropped without a further match in subsequent filter entries.
      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4-based line cards and cannot be configured on egress. A filter entry using a pattern is not supported on FP2 or FP3-based line cards. If programmed, the pattern is ignored and the action is drop.
      Packets matching this filter entry and not matching the conditional criteria are not logged, counted, or mirrored.
  5. fc — Allows operators to mark the forwarding class (FC) of packets. This command is supported on ingress IP and IPv6 filter policies. This filter action can be combined with the rate-limit value action.
    Packets matching this filter entry action bypass QoS FC marking and are still subject to QoS queuing, policing and priority marking.
    The QPPB forwarding class takes precedence over the filter FC marking action.
  6. rate-limit — This action allows operators to rate-limit traffic matching a filter entry match criteria using IPv4, IPv6, or MAC filter policies.
    If multiple interfaces (including LAG interfaces) use the same rate-limit filter policy on different FPs, then the system allocates a rate limiter resource for each FP; an independent rate limit applies to each FP.
    If multiple interfaces (including LAG interfaces) use the same rate-limit filter policy on the same FP, then the system allocates a single rate limiter resource to the FP; a common aggregate rate limit is applied to those interfaces.
    Note that traffic extracted to the CPM is not rate limited by an ingress rate-limit filter policy while any traffic generated by the router can be rate limited by an egress rate-limit filter policy.
    rate-limit filter policy entries can coexist with cflowd, log, and mirror regardless of the outcome of the rate limit. This filter action is not supported on egress on 7750 SR-a.
    Rate limit policers are configured with MBS and CBS equal to 10 ms of traffic at the configured rate, and with high-prio-only equal to 0.
    Interaction with QoS: Packets matching an ingress rate-limit filter policy entry bypass ingress QoS queuing or policing, and only the filter rate limit policer is applied. Packets matching an egress rate-limit filter policy bypass egress QoS policing, normal egress QoS queuing still applies.
    1. IPv4 packet-length and IPv6 payload-length conditional rate limit — Traffic can be rate limited based on the IPv4 packet length and IPv6 payload length by specifying a packet-length value or payload-length value or range within the rate-limit filter action. The IPv6 payload-length field does not account for the size of the fixed IP header, which is 40 bytes.
      This filter action is supported on ingress IPv4 and IPv6 filter policies only and cannot be configured on egress access or network interfaces.
      This rate-limit condition is part of a filter entry action evaluation, and not a filter entry match evaluation. It is checked after the packet is determined to match the entry based on the configured filter entry match criteria.
      Packets that match a filter policy’s entry match criteria and the rate-limit packet-length-value or rate-limit payload-length-value are rate limited. Packets that match only the filter policy’s entry match criteria and do not match the rate-limit packet-length-value or rate-limit payload-length-value are forwarded with no further match in subsequent filter entries.
      Cflowd, logging, and mirroring apply to all traffic matching the ACL entry regardless of the outcome of the rate limiter and regardless of the packet-length-value or payload-length-value.
    2. IPv4 TTL and IPv6 hop-limit conditional rate limit — Traffic can be rate limited based on the IPv4 TTL or IPv6 hop limit by specifying a TTL or hop limit value or range within the rate-limit filter action using ingress IPv4 or IPv6 filter policies.
      The match condition is part of the action evaluation (that is, it is checked after the packet is determined to match the entry based on the other configured match criteria). Packets that match a filter policy entry match criteria and the rate-limit ttl or hop-limit value are rate limited. Packets that match only the filter policy entry match criteria and do not match the rate-limit ttl or hop-limit value are forwarded with no further matching in the subsequent filter entries.
      Cflowd, logging, and mirroring apply to all traffic matching the ACL entry regardless of the outcome of the rate-limit value and the ttl-value or hop-limit-value.
    3. Pattern conditional rate limit — Traffic can be rate limited based on a pattern found in the packet header or data payload. The pattern is defined by an expression, mask, offset-type, and offset-value match in the first 256 bytes of a packet. The pattern expression is up to 8 bytes long. The offset-type-value identifies the starting point for the offset-value and the supported offset type values are:
      1. layer-3: layer 3 IP header
      2. layer-4: layer 4 protocol header
      3. data: data payload for TCP or UDP protocols
      4. dns-qtype: DNS request or response query type
      The content of the packet is compared with the expression/mask value found at the offset-type-value and offset-value defined in the filter entry. For example, if the pattern is expression 0xAA11, mask 0xFFFF, offset-type data, and offset-value 20, then the filter entry compares the content of the first 2 bytes in the packet data payload found 20 bytes after the TCP/UDP header with 0xAA11.
      This rate limit condition is a filter entry action evaluation, and not a filter entry match evaluation. Within this evaluation, the condition is checked after the packet matches the entry based on the specified filter entry match criteria.
      Packets that match a filter policy's entry match criteria and the pattern are rate limited. Packets that match only the filter policy's entry match criteria and do not match the pattern are forwarded without a further match in subsequent filter entries.
      This filtering capability is supported on ingress IPv4 and IPv6 policies using FP4-based line cards and cannot be configured on egress. A filter entry using a pattern is not supported on FP2 or FP3-based line cards. If programmed, the pattern is ignored and the action is forward.
      Cflowd, logging, and mirroring apply to all traffic matching this filter entry regardless of the pattern value.
  7. forward “Policy-based Routing/Forwarding (PBR/PBF) action” — Allows operators to permit ingress traffic but change the regular routing/forwarding that a packet would be subject to. PBR/PBF is applicable to unicast traffic only. The following PBR/PBF actions are supported (see the CLI section for command details):
    1. egress-pbr — Enabling egress-pbr activates a PBR action on egress, while disabling egress-pbr activates a PBR action on ingress (default).
      The following subset of the PBR actions (defined as follows) can be activated on egress: redirect-policy, next-hop router, and esi.
      Egress PBR is supported in IPv4 and IPv6 filter policies for ESM only. Unicast traffic that is subject to slow-path processing on ingress (for example, IPv4 packets with options or IPv6 packets with a hop-by-hop extension header) does not match egress PBR entries. Filter logging, cflowd, and mirror source are mutually exclusive with configuring a filter entry with an egress PBR action. Configuring pbr-down-action-override, if supported with a specific PBR ingress action type, is also supported when the action is an egress PBR action. Processing defined by pbr-down-action-override does not apply if the action is deployed in the wrong direction. If a packet matches a filter PBR entry and the entry is not activated for the direction in which the filter is deployed, the forward action is executed. Egress PBR cannot be enabled in system filters.
    2. esi — Forwards the incoming traffic using VXLAN tunnel resolved using EVPN MP BGP control plane to the first service chain function identified by ESI (Layer 2) or ESI/SF-IP (Layer 3). Supported with VPLS (Layer 2) and IES/VPRN (Layer 3) services. If the service function forwarding cannot be resolved, traffic matches an entry and action forward is executed.
      For VPLS, no cross-service PBF is supported; that is, the filter specifying ESI PBF entry must be deployed in the VPLS service where BGP EVPN control plane resolution takes place as configured for a specific ESI PBF action. The functionality is supported in filter policies deployed on ingress VPLS interfaces. BUM traffic that matches a filter entry with ESI PBF is unicast forwarded to the VTEP:VNI resolved through PBF forwarding.
      For IES/VPRN, the outgoing R-VPLS interface can be in any VPRN service. The outgoing interface and VPRN service for BGP EVPN control plane resolution must again be configured as part of ESI PBR entry configuration. The functionality is supported in filter policies deployed on ingress IES/VPRN interfaces and in filter policies deployed on ingress and egress for ESM subscribers. Only unicast traffic is subject to ESI PBR; any other traffic matching a filter entry with Layer 3 ESI action is subjected to action forward.
      When deployed in unsupported direction, traffic matching a filter policy ESI PBR/PBF entry is subject to action forward.
    3. lsp — Forwards the incoming traffic onto the specified LSP. Supports RSVP-TE LSPs (type static or dynamic only), MPLS-TP LSPs, or SR-TE LSPs. Supported for ingress IPv4/IPv6 filter policies and only deployed on IES SAPs or network interfaces. If the configured LSP is down, traffic matches the entry and action forward is executed.
    4. mpls-policy — Redirects the incoming traffic to the active instance of the MPLS forwarding policy specified by its endpoint. This policy is applicable on any ingress interface (egress is blocked). The traffic is subject to a plain forward if no policy matches the one specified, or if the policy has no programmed instance, or if it is applied on non-L3 interface.
    5. next-hop — Changes the IP destination address used in routing from the address in the packet to the address configured in this PBR action. The operator can configure whether the next-hop IP address must be direct (local subnet only) or indirect (any IP). This functionality is supported for ingress IPv4/IPv6 filter policies only, and is deployed on Layer 3 interfaces. If the configured next-hop is not reachable, traffic is dropped and an ICMP destination unreachable message is sent. If the indirect keyword is not specified but the IP address is a remote IP address, traffic is dropped.
      1. interface — Forwards the incoming traffic onto the specified IPv4 interface. Supported for ingress IPv4 filter policies in global routing table instance. If the configured interface is down or not of the supported type, traffic is dropped.
    6. redirect-policy — Implements PBR next-hop or PBR next-hop router action with ability to select and prioritize multiple redirect targets and monitor the specified redirect targets so PBR action can be changed if the selected destination goes down. Supported for ingress IPv4 and IPv6 filter policies deployed on Layer 3 interfaces only. See section Redirect Policies for further details.
    7. remark dscp — Allows an operator to remark the DiffServ Code Points of packets matching filter policy entry criteria. Packets are remarked regardless of QoS-based in-profile/out-of-profile classification, and QoS-based DSCP remarking is overridden. DSCP remarking is supported both as a main action and as an extended action. As a main action, this functionality applies to IPv4 and IPv6 filter policies of any scope and can only be applied at ingress on either access or network interfaces of Layer 3 services. Although the filter is applied at ingress, the DSCP remarking is effectively performed at egress. As an extended action, this functionality applies to IPv4 and IPv6 filter policies of any scope and can be applied at ingress on either access or network interfaces of Layer 3 services, or at egress on Layer 3 subscriber interfaces.
    8. router — Changes the routing instance a packet is routed in from the incoming interface’s instance to the routing instance specified in the PBR action (supports both GRT and VPRN redirect). It is supported for ingress IPv4/IPv6 filter policies deployed on Layer 3 interfaces. The action can be combined with the next-hop action specifying a direct/indirect IPv4/IPv6 next hop. Packets are dropped if they cannot be routed in the configured routing instance. For further details, see section “Traffic Leaking to GRT” in the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 3 Services Guide: IES and VPRN.
    9. sap — Forwards the incoming traffic onto the specified VPLS SAP. Supported for ingress IPv4/IPv6 and MAC filter policies deployed in VPLS service. The SAP that the traffic is to egress on must be in the same VPLS service as the incoming interface. If the configured SAP is down, traffic is dropped.
    10. sdp — Forwards the incoming traffic onto the specified VPLS SDP. Supported for ingress IPv4/IPv6 and MAC filter policies deployed in VPLS service. The SDP that the traffic is to egress on must be in the same VPLS service as the incoming interface. If the configured SDP is down, traffic is dropped.
    11. srte-policy — Redirects the incoming traffic to the active instance of the SR-TE forwarding policy specified by its endpoint and color. This action is applicable on any ingress interface (egress is blocked). The traffic is subject to a plain forward if no SR-TE forwarding policy matches the specified endpoint and color, if the policy has no programmed instance, or if the filter is applied on a non-Layer 3 interface.
    12. vprn-target — Redirects the incoming traffic in a similar manner to combined next-hop and LSP redirection actions, but with greater control and slightly different behavior. This action is supported for both IPv4 and IPv6 filter policies and is applicable on ingress of access interfaces of IES/VPRN services. See Filter Policy Advanced Topics for further details.
  8. forward “isa action” — ISA processing actions allow the operator to permit ingress traffic and send it for ISA processing according to the specified ISA action. The following ISA actions are supported (see the CLI section for command details):
    1. gtp-local-breakout — Forwards matching traffic to NAT instead of being GTP tunneled to the mobile operator’s PGW or GGSN. The action applies to GTP-subscriber-hosts. If the filter is deployed on other entities, action forward is applied. Supported for IPv4 ingress filter policies only. If ISAs performing NAT are down, traffic is dropped.
    2. nat — Forwards matching traffic for NAT. Supported for IPv4/IPv6 filter policies for Layer 3 services in the GRT or a VPRN. If ISAs performing NAT are down, traffic is dropped (see the CLI for options).
    3. reassemble — Forwards matching packets to the reassembly function. Supported for IPv4 ingress filter policies only. If ISAs performing reassemble are down, traffic is dropped.
    4. tcp-mss-adjust — Forwards matching packets (TCP Syn) to an ISA BB Group for MSS adjustment. In addition to the IP filter, the operator also needs to configure the mss-adjust-group command under the Layer 3 service to specify the bb-group-id and the new segment-size.
  9. http-redirect — Implements the HTTP redirect captive portal. HTTP GET is forwarded to CPM card for captive portal processing by router. See the HTTP-Redirect (Captive Portal) section for more information.
  10. ignore-match — This action allows the operator to disable a filter entry; as a result, the entry is not programmed in hardware.
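As an illustration of a PBR entry, the following combines the next-hop and router actions from the list above (a hedged sketch, not taken from the product documentation: the filter ID, match criteria, next-hop address, and service ID are hypothetical, and exact syntax may vary by release):

*A:7750>config>filter# info
----------------------------------------------
        ip-filter 50 name "50" create
            entry 10 create
                match
                    dst-ip 172.16.0.0/12
                exit
                action
                    forward next-hop 10.255.0.1 router 100
                exit
            exit
        exit
----------------------------------------------

A packet matching entry 10 would be routed in VPRN 100 toward next-hop 10.255.0.1 instead of being routed in the incoming interface’s instance.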

In addition to the above actions:

  1. An operator can select a default-action for a filter policy. The default action is executed on packets subjected to an active filter when none of the filter’s active entries matches the packet. By default, filter policies have the default action set to drop, but the operator can set the default action to forward instead.
  2. An operator can override the default action applied to packets matching a PBR/PBF entry when the PBR/PBF target is down by using pbr-down-action-override. Supported options are to drop the packet, forward the packet, or apply the same action as configured for the filter policy's default-action. The override is supported for the following PBR/PBF actions; for the last three actions, the override is supported whether in redundancy mode or not.
    1. forward esi (Layer 2 or Layer 3)
    2. forward sap
    3. forward sdp
    4. forward next-hop [indirect] router
    Table 9 defines default behavior for packets matching a PBR/PBF filter entry when a target is down.
    Table 9:  Default behavior when a PBR/PBF target is down

    PBR/PBF action               Default behavior when down
    ---------------------------  ------------------------------------------------
    forward esi (any type)       Forward
    forward lsp                  Forward
    forward mpls-policy          Forward
    forward next-hop (any type)  Drop
    forward redirect-policy      Forward when redirect policy is shutdown
    forward redirect-policy      Forward when destination tests are enabled and
                                 the best destination is not reachable
    forward redirect-policy      Drop when destination tests are not enabled and
                                 the best destination is not reachable
    forward sap                  Drop
    forward sdp                  Drop
    forward srte-policy          Forward
    forward router               Drop
    forward vprn-target          Forward
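As an illustration, the default behavior for a forward sap entry (drop when the SAP is down, per Table 9) could be overridden to forward with a configuration along these lines (a hedged sketch: the filter ID, match criteria, and SAP identifier are hypothetical, and the exact placement of the override command may vary by release):

*A:7750>config>filter# info
----------------------------------------------
        ip-filter 60 name "60" create
            entry 10 create
                match
                    dst-ip 192.0.2.0/24
                exit
                action
                    forward sap 1/1/3:100
                exit
                pbr-down-action-override forward
            exit
        exit
----------------------------------------------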

4.1.1.6. Viewing Filter Policy Actions

A number of parameters determine the behavior of a packet after it has been matched to a defined criterion or set of criteria:

  1. the action configured by the user
  2. the context in which a filter policy is applied. For example, applying a filter policy in an unsupported context can result in simply forwarding the packet rather than applying the configured action.
  3. external factors, such as the reachability (according to a given test criteria) of a target

Because of this, SR OS provides the following commands that enable the user to capture this context globally and identify how a packet is handled by the system:

  1. show>filter>ip
  2. show>filter>ipv6
  3. show>filter>mac

This section describes the key information displayed as part of the output for the show commands listed above, and explains how to interpret it.

From a configuration point of view, the show command output displays the main action (primary and secondary), as well as the extended action.

The “PBR Target Status” field shows the basic information that the system has of the target based on simple verification methods. This information is only shown for the filter entries which are configured in redundancy mode (that is, with both primary and secondary main actions configured), and for ESI redirections. Specifically, the target status in the case of redundancy depends on several factors; for example, on a match in the routing table for next-hop redirects, or on VXLAN tunnel resolution for ESI redirects.

The “Downloaded Action” field specifically describes the action that the system performs on the packets that match the criterion (or criteria). This typically depends on the context in which the filter has been applied (whether it is supported or not), but in the case of redundancy, it also depends on the target status. For example, the downloaded action is the secondary main action when the target associated with the primary action is down. In the nominal (that is, non-failure) case, the “Downloaded Action” reflects the behavior a packet is subject to. However, in transient cases (for example, in the case of a failure), it may not capture what effectively happens to the packet.

The output also displays relevant information such as the default action when the target is down (see Table 9) as well as the overridden default action when pbr-down-action-override has been configured.

There are situations where, collectively, this information does not capture what effectively happens to the packet throughout the system. The effective-action keyword of the show>filter>[{ip | ipv6 | mac}] commands enables advanced checks to be performed and accurate packet fates to be displayed.

The criteria for determining when a target is down also matter. While there is little ambiguity on that aspect when the target is local to the system performing the steering action, the ambiguity is much more prominent when the target is distant. Therefore, because the use of effective-action triggers advanced tests, a discrepancy can be introduced compared to the action displayed when the effective-action keyword is not used. This is, for example, the case for redundant actions.

4.1.1.7. Filter Policy Statistics

Filter policies support per-entry packet/byte match statistics. The cumulative matched packet/byte counters are available per ingress and per egress direction. Every packet arriving on an interface/service/subscriber using a filter policy increments the ingress or egress (as applicable) matched packet/byte count for the filter entry the packet matches (if any) on the line card the packet ingresses/egresses. For each policy, the counters for all entries are collected from all line cards, summarized, and made available to the operator.

Filter policies applied on access interfaces are downloaded only to line cards that have interfaces associated with those filter policies. If a filter policy is not downloaded to any line card, the statistics show 0. If a filter policy is removed from any of the line cards the policy is currently downloaded to (as a result of an association change or when a filter becomes inactive), the associated statistics are reset to 0.

When a filter policy is downloaded to a new line card, the existing statistics continue to increment.

Operational notes:

  1. Conditional action match criteria filter entries for ttl, hop-limit, packet-length, and payload-length support logging and statistics when the condition is met, allowing visibility of filter matched and action executed. If the condition is not met, packets are not logged and statistics against the entry are not incremented.

4.1.1.8. Filter Policy Logging

SR OS supports logging of the information from the packets that match a specific filter policy. Logging is configurable per filter policy entry by specifying a preconfigured filter log (config>filter>log). A filter log can be applied to ACL filters and CPM hardware filters. Operators can configure multiple filter logs and specify: memory allocated to a filter log destination, syslog ID for a filter log destination, filter logging summarization, and wrap-around behavior.
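As an illustration, a filter log with a syslog destination might be defined and then referenced from a filter entry along these lines (a hedged sketch: the log ID, syslog ID, filter ID, and match criteria are hypothetical, and exact syntax may vary by release):

*A:7750>config>filter# info
----------------------------------------------
        log 101 create
            destination syslog 5
        exit
        ip-filter 70 name "70" create
            entry 10 create
                match protocol tcp
                    dst-port eq 22
                exit
                log 101
                action
                    drop
                exit
            exit
        exit
----------------------------------------------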

Notes related to filter log summarization:

  1. The implementation of the feature applies to filter logs with destination syslog.
  2. Summarization logging is the collection and summarization of log messages for one specific log ID within a period of time.
  3. The summarization interval is 100 seconds.
  4. Upon activation of a summary, a mini-table with src/dst-address and count is created for each type (IPv4/IPv6/MAC).
  5. Every received log packet (due to filter match) is examined for source or destination address.
  6. If the source/destination address of the log packet matches a source/destination address entry already present in the mini-table (from a previously received packet), the summary counter of the matching address is incremented.
  7. If the source or destination address of the log message does not match an entry already present in the table, the address is stored in a free entry in the mini-table.
  8. If the mini-table has no more free entries, only the total counter is incremented.
  9. At expiry of the summarization interval, the mini-table for each type is flushed to the syslog destination.

Operational note:

  1. Conditional action match criteria filter entries for ttl, hop-limit, packet-length, and payload-length support logging and statistics when the condition is met, allowing visibility of filter matched and action executed. If the condition is not met, packets are not logged and statistics against the entry are not incremented.

4.1.1.9. Filter Policy cflowd Sampling

Filter policies can be used to control how cflowd sampling is performed on an IP interface. If an IP interface has cflowd sampling enabled, an operator can exclude some flows for interface sampling by configuring filter policy rules that match the flows and by disabling interface sampling as part of the filter policy entry configurations (interface-disable-sample). If an IP interface has cflowd sampling disabled, an operator can enable cflowd sampling on a subset of flows by configuring filter policy rules that match the flows and by enabling cflowd sampling as part of the filter policy entry configurations (filter-sample).

The above cflowd filter sampling behavior is exclusively driven by match criteria. The sampling logic applies regardless of whether an action was executed (including evaluation of conditional action match criteria, for example, packet-length or ttl).
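As an illustration, enabling cflowd sampling for only a subset of flows, on an interface where interface sampling is otherwise disabled, might look as follows (a hedged sketch: the filter ID and match criteria are hypothetical):

*A:7750>config>filter# info
----------------------------------------------
        ip-filter 80 name "80" create
            entry 10 create
                match protocol tcp
                    dst-port eq 443
                exit
                filter-sample
                action
                    forward
                exit
            exit
        exit
----------------------------------------------

Conversely, configuring interface-disable-sample in an entry would exclude matching flows from sampling on an interface where cflowd sampling is enabled.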

4.1.1.10. Filter Policy Management

4.1.1.10.1. Modifying Existing Filter Policy

There are several ways to modify an existing filter policy. A filter policy can be modified through configuration change or can have entries populated through dynamic, policy-controlled interfaces; for example, RADIUS, OpenFlow, FlowSpec, or Gx. Although, in general, SR OS ensures filter resources exist before a filter can be modified, because of the dynamic nature of the policy-controlled interfaces, a configuration that was accepted may not be applied in hardware due to a lack of resources. When that happens, an error is raised.

A filter policy can be modified directly (by changing, adding, or deleting an entry in that filter policy) or indirectly. Examples of indirect changes include changing an entry of an embedded filter that this policy embeds (see the Filter Policy Scope and Embedded Filters section) or changing a redirect policy that this filter policy uses.

Finally, a filter policy deployed on a specific interface can be changed by changing the policy the interface is associated with.

All of the above changes can be done in service. A filter policy that is associated with service/interface cannot be deleted unless all associations are removed first.

For a large (complex) filter policy change, it may take a few seconds to load and initiate the filter policy configuration. Filter policy changes are downloaded to line cards immediately; therefore, operators should use filter policy copy or transactional CLI to ensure that a partial policy change is not activated.

4.1.1.10.2. Filter Policy Copy and Renumbering

To assist operators in filter policy management, SR OS supports entry copy and entry renumbering operations.

Filter copy allows operators to perform bulk operations on filter policies by copying one filter’s entries to another filter. Either all entries or a specified entry of the source filter can be selected for copying. When entries are copied, entry order is preserved unless the destination filter’s entry ID is specified (applicable to a single-entry copy). Filter copy allows overwriting of existing entries in the destination filter by specifying the overwrite option in the copy command. Filter copy can be used, for example, when creating new policies from existing policies or when modifying an existing filter policy (an existing source policy is copied to a new destination policy, the new destination policy is modified, and then the new destination policy is copied back to the source policy with overwrite specified).

Entry renumbering allows operators to change relative order of a filter policy entry by changing the entry ID. Entry renumbering can also be used to move two entries closer together or further apart, thereby creating additional entry space for new entries.
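For example, entry renumbering is performed directly in the filter context (a hedged sketch: the entry IDs are hypothetical and the exact command form may vary by release):

*A:7750>config>filter>ip-filter# renum 20 15

This moves entry 20 to entry-id 15, changing its relative order and freeing entry-id 20 for a new entry.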

4.1.2. Filter Policy Advanced Topics

4.1.2.1. Match List for Filter Policies

The filter match lists ip-prefix-list, ipv6-prefix-list, protocol-list, and port-list define lists of IP prefixes, IP protocols, and TCP/UDP ports that can be used as match criteria in line card IP and IPv6 filters. Additionally, ip-prefix-list, ipv6-prefix-list, and port-list can also be used in CPM filters.

A match list simplifies the filter policy configuration with multiple prefixes, protocols, or ports that can be matched in a single filter entry instead of creating an entry for each.

The same match list can be used in one or many filter policies. A change in match list content is automatically propagated across all policies that use that list.
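As an illustration, a single filter entry can match several prefixes and ports through match lists (a hedged sketch: the list names, prefixes, port, and filter ID are hypothetical, and exact syntax may vary by release):

*A:7750>config>filter# info
----------------------------------------------
        match-list
            ip-prefix-list "trusted-dns" create
                prefix 192.0.2.0/24
                prefix 198.51.100.0/24
            exit
            port-list "dns-ports" create
                port 53
            exit
        exit
        ip-filter 90 name "90" create
            entry 10 create
                match protocol udp
                    dst-ip ip-prefix-list "trusted-dns"
                    dst-port port-list "dns-ports"
                exit
                action
                    forward
                exit
            exit
        exit
----------------------------------------------

Without the match lists, one entry per prefix/port combination would be needed.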

4.1.2.1.1. Apply-Path

The router supports the auto-generation of IPv4 and IPv6 prefix list entries for BGP peers that are configured in the Base router or in VPRN services, using the match-list apply-path command. This capability simplifies the management of CPM filters so that BGP control traffic is allowed from trusted, configured peers only. By using the match-list apply-path command, the operator can:

  1. specify one or more regular-expression matches per match list, including wildcard matches (".*")
  2. mix auto-generated entries with statically configured entries within a match list
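As an illustration, auto-generated entries derived from configured BGP peers can be mixed with a static entry along these lines (a hedged sketch: the list name, regular expressions, and static prefix are hypothetical, and exact syntax may vary by release):

*A:7750>config>filter>match-list# info
----------------------------------------------
        ip-prefix-list "bgp-peers" create
            apply-path
                bgp-peers 1 group "EBGP.*" neighbor ".*" router-instance "Base"
            exit
            prefix 192.0.2.1/32
        exit
----------------------------------------------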

Additional rules are applied when using apply-path as follows:

  1. Operational and administrative states of a specific router configuration are ignored when auto-generating address prefixes.
  2. Duplicates are not removed when populated by different auto-generation matches and static configuration.
  3. Configuration fails if auto-generation of an address prefix results in the filter policy resource exhaustion on a filter entry, system, or line card level.

4.1.2.1.2. Prefix-exclude

A prefix can be excluded from an IPv4 or IPv6 prefix list by using the prefix-exclude command.

For example, when the operator needs to rate-limit traffic to 10.0.0.0/16 with the exception of 10.0.2.0/24, then the following options are available.

  1. By applying prefix-exclude, a single IP prefix list with two prefixes is configured:
            ip-prefix-list "list-1" create
                prefix 10.0.0.0/16
                prefix-exclude 10.0.2.0/24
            exit
     
  2. Without applying prefix-exclude, all eight included subnets should be manually configured in the ip-prefix-list:
            ip-prefix-list "list-1" create
                prefix 10.0.0.0/23
                prefix 10.0.3.0/24
                prefix 10.0.4.0/22
                prefix 10.0.8.0/21
                prefix 10.0.16.0/20
                prefix 10.0.32.0/19
                prefix 10.0.64.0/18
                prefix 10.0.128.0/17
            exit
     
    This is a time-consuming and error-prone task compared to using the prefix-exclude command.

The filter resources, consumed in hardware, are identical between the two configurations.

A filter match-list using prefix-exclude is mutually exclusive with apply-path, and is not supported as a match criterion in cpm-filter.

Configured prefix-exclude prefixes are ignored when no overlapping larger subnet is configured in the prefix list. For example, prefix-exclude 1.1.1.0/24 is ignored if the only included prefix is 10.0.0.0/16.

4.1.2.2. Filter Policy Scope and Embedded Filters

The system supports four different filter policy scopes:

  1. scope template
  2. scope exclusive
  3. scope embedded
  4. scope system

Each scope provides different characteristics and capabilities: deploying a filter policy on a single interface or across multiple interfaces, optimizing the use of system resources, or simplifying the management of filter policies that share a common set of filter entries.

Template and Exclusive

A filter policy of scope template can be re-used across multiple interfaces. This filter policy uses a single set of resources per line card regardless of how many interfaces use it. Template filter policies used on access interfaces consume resources only on line cards where those access interfaces are configured. A filter policy of scope template is the most common type of filter policy configured in a router.

A filter policy of scope exclusive defines a filter dedicated to a single interface. An exclusive filter allows the highest level of customization but uses the most resources on the system line cards as it cannot be shared with other interfaces.

Embedded

To simplify the management of filters sharing a common set of filter entries, the operator can create a filter policy of scope embedded. This filter can then be included in (embedded into) a filter of scope template, exclusive or system.

Using filter scope embedded, a common set of filter entries can be updated in a single place and deployed across multiple filter policies. The scope embedded is supported for IPv4 and IPv6 filter policies.

A filter policy of scope embedded is not directly downloaded to a line card and cannot be directly referenced in an interface. However, this policy helps the network operator provision a common set of rules across different filter policies.

The following rules apply when using filter policy of scope embedded:

  1. The operator explicitly defines the offset at which to insert a filter of scope embedded in a template, exclusive or system filter. The embedded filter entry-id X becomes entry-id (X + offset) in the main filter.
  2. Multiple filter policies of scope embedded can be included (embedded into) in a single filter policy of scope template, exclusive or system.
  3. The same filter policy of scope embedded can be included in multiple filter policies of scope template, exclusive or system.
  4. Configuration modifications to embedded filter policy entries are automatically applied to all filter policies that embed this filter.
  5. The system performs a resource management check when a filter policy of scope embedded is updated or embedded in a new filter. If resources are not available, the configuration is rejected. In rare cases, a filter policy resource check may pass but the filter policy can still fail to load due to a resource exhaustion on a line card (for example, when other filter policy entries are dynamically configured by applications like RADIUS in parallel). If that is the case, the embedded filter policy configured is deactivated (configuration is changed from activate to inactivate).
  6. An embedded filter is never partially embedded in a single filter: resources must exist to embed all of its entries in a given exclusive, template, or system filter. However, an embedded filter may be embedded in only a subset of the filters that reference it, specifically those for which sufficient resources are available.
  7. Overlapping of filter entries between an embedded filter and a filter of scope template, exclusive or system filter can happen but should be avoided. It is recommended instead that network operators use a large enough offset value and an appropriate filter entry-id in the main filter policy to avoid overlapping. In case of overlapping entries, the main filter policy entry overwrites the embedded filter entry.
  8. Configuring a default action in a filter of scope embedded is not required as this information is not used to embed filter entries.

Figure 22 shows a configuration with two filter policies of scope template, filter 100 and filter 200, each embedding filter policy 10 at a different offset:

  1. Filter policy 100 and 200 are of scope template.
  2. Filter policy 10 of scope embedded is configured with 4 filter entries: entry-id 10, 20, 30, 40.
  3. Filter policy 100 embeds filter 10 at offset 0 and includes two additional static entries with entry-id 20010 and 20020.
  4. Filter policy 200 embeds filter 10 at offset 10000 and includes two additional static entries with entry-id 100 and 110.
  5. As a result, filter 100 automatically creates entries 10, 20, 30, and 40, while filter 200 automatically creates entries 10010, 10020, 10030, and 10040. Filter policies 100 and 200 consume a total of 12 entries when both policies are installed on the same line card.
*A:7750>config>filter# info
----------------------------------------------
        ip-filter 10 name "10" create
            scope embedded
            entry 10 create
            ... ...
            exit
            entry 20 create
            ... ...
            exit
            entry 30 create
            ... ...
            exit
            entry 40 create
            ... ...
            exit
        exit
        ip-filter 100 name "100" create
            scope template
            embed-filter 10
            entry 20010 create
            ... ...
            exit
            entry 20020 create
            ... ...
            exit
        exit
        ip-filter 200 name "200" create
            scope template
            embed-filter 10 offset 10000
            entry 100 create
            ... ...
            exit
            entry 110 create
            ... ...
            exit
        exit
----------------------------------------------
Figure 22:  Embedded Filter Policy 

System

The filter policy of scope system provides the most optimized use of hardware resources by programming filter entries once in the line cards, regardless of how many IPv4 or IPv6 filter policies of scope template or exclusive use this filter. The system filter policy entries are not duplicated inside each policy that uses them; instead, template or exclusive filter policies are chained to the system filter using the chain-to-system-filter command.

When a template or exclusive filter policy is chained to the system filter, the system filter rules are evaluated first, before any rules of the chaining filter are evaluated (that is, the chaining filter’s rules are matched only if no system filter match took place).

The system filter policy is intended primarily to deploy a common set of system-level deny rules and infrastructure-level filtering rules that allow, block, or rate limit traffic. Other actions, such as PBR actions or redirection to ISAs, should not be used unless the system filter policy is activated only in filters used by services that support such actions. The “nat” action is not supported and should not be configured. Failure to observe these restrictions can lead to unwanted behavior, as system filter actions are not verified against the services for which the chaining filters are deployed. System filter policy entries also cannot be the sources of mirroring.

System filter policies can be populated using CLI, SNMP, NETCONF, OpenFlow, and FlowSpec. System filter policy entries cannot be populated using RADIUS or Gx.

An example for IPv4 system filter configuration is shown as follows:

  1. System filter policy 10 includes a single entry to rate limit NTP traffic to the Infrastructure subnets.
  2. Filter policy 100 of scope template is configured to use the system filter using the chain-to-system-filter command.
*7750>config>filter# info
----------------------------------------------
        ip-filter 10 name "10" create
            scope system
            entry 10 create
                description "Rate Limit NTP to the Infrastructure"
                match protocol udp
                    dst-ip ip-prefix-list "Infrastructure IPs"
                    dst-port eq 123
                exit
                action
                    rate-limit 2000
                exit
            exit
        exit
        ip-filter 100 name "100" create
            chain-to-system-filter
            description "Filter scope template for network interfaces"
        exit
        system-filter
            ip 10
        exit
----------------------------------------------

4.1.2.3. Filter Policy Type

The filter policy type defines the list of match criteria available in a filter policy. It provides filtering flexibility by reallocating the CAM in the line card at the filter policy level, to filter traffic using additional match criteria not available with filter type normal. The filter type is specific to the filter policy; it is not a system-wide or line card setting. The operator can configure different filter policy types on different interfaces of the same system and line card.

MAC filters support three filter types: normal, isid, or vid.

IPv4 and IPv6 filters support four filter types: normal, src-mac, packet-length, or destination-class.

4.1.2.3.1. IPv4, IPv6 Filter Type Src-Mac

This filter policy type provides src-mac match criterion for IPv4 and IPv6 filters.

The following match criteria are not available for filter entries in a filter policy of type src-mac:

  1. IPv4 — src-ip, dscp, ip-option, option-present, multiple-option, src-route-option
  2. IPv6 — src-ip

For a QoS policy assigned to the same service or interface endpoint as a filter policy of type src-mac, QoS IP criteria cannot use src-ip or dscp and QoS IPv6 criteria cannot use src-ip.

Filter type src-mac is available for egress filtering on VPLS services only. R-VPLS endpoints are not supported.

Dynamic filter entry embedding using OpenFlow, FlowSpec, and VSD is not supported using this filter type.

4.1.2.3.2. IPv4, IPv6 Filter Type Packet-Length

This filter policy type provides packet-length match criterion capability. For this match, the system uses the total packet length including both the IP header and payload for IPv4 and IPv6.

The following match criteria are not available for filter entries in a filter policy of type packet-length:

  1. IPv4 — dscp, ip-option, option-present, multiple-option, src-route-option
  2. IPv6 — flow-label

For a QoS policy assigned to the same service or interface endpoint on egress as a filter policy of type packet-length, QoS IP criteria cannot use the dscp match criterion; there is no such restriction at ingress.

This filter type is available for both ingress and egress on all service and router interface endpoints, with the exception of FPIPE, IPIPE, Video ISA, service templates, and PW templates.

Dynamic filter entry embedding using OpenFlow and VSD is not supported using this filter type.

4.1.2.3.3. IPv4, IPv6 Filter Type Destination-Class

This filter policy provides BGP destination-class value match criterion capability using egress IPv4 and IPv6 filters, and is supported on network, IES, VPRN, and R-VPLS.

The following match criteria from filter type normal are not available using filter type destination-class:

  1. IPv4 — dscp, ip-option, option-present, multiple-option, src-route-option
  2. IPv6 — flow-label

Filtering egress traffic on destination-class requires the destination-class-lookup command to be enabled on the interface on which the packet ingresses. For a QoS policy or filter policy assigned to the same interface, the DSCP remarking action is performed only if a destination-class was not identified for the packet.

System filters, as well as dynamic filter embedding using OpenFlow, FlowSpec, and VSD, are not supported using this filter type.
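The following sketch pairs the ingress destination-class-lookup enablement with an egress filter matching on a destination class; the type keyword (shown as dst-class), the class value, the interface name, and the IDs are all illustrative assumptions:

A:7750>config>router# info
----------------------------------------------
        interface "to-core"
            ingress
                destination-class-lookup
            exit
        exit
----------------------------------------------
A:7750>config>filter# info
----------------------------------------------
        ip-filter 220 name "220" create
            type dst-class
            entry 10 create
                match
                    dst-class 1
                exit
                action drop
            exit
        exit
----------------------------------------------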

4.1.2.3.4. IPv4 and IPv6 Filter Type and Embedding

An IPv4 or IPv6 filter policy of scope embedded must have the same type as the main filter policy (of scope template, exclusive, or system) embedding it:

  1. If this condition is not met, the filter cannot be embedded.
  2. Once embedded, the main filter policy cannot change the filter type if one of the embedded filters is of a different type.
  3. Once embedded, the embedded filter cannot change the filter type if it does not match the main filter policy.

Similarly, the system filter type must be identical to the template or exclusive filter to allow chaining when using the chain-to-system-filter command.
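To illustrate the type-matching rule, the following sketch embeds a packet-length embedded filter into a template filter of the same type; the filter IDs, names, and offset are illustrative:

A:7750>config>filter# info
----------------------------------------------
        ip-filter 230 name "embed-pl" create
            scope embedded
            type packet-length
        exit
        ip-filter 240 name "main-pl" create
            scope template
            type packet-length
            embed-filter 230 offset 1000
        exit
----------------------------------------------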

4.1.2.4. Filter Policies and Dynamic Policy-Driven Interfaces

Filter policy entries can be statically configured using CLI, SNMP, or NETCONF or dynamically created using BGP FlowSpec, OpenFlow, VSD (XMPP), or RADIUS/Diameter for ESM subscribers.

Dynamic filter entries for FlowSpec, OpenFlow, and VSD can be inserted in a filter policy of scope template or exclusive using the embed-filter command in IPv4 and IPv6 filter policies. Additionally, FlowSpec embedding is supported using a filter policy of scope system.

BGP flowspec

BGP FlowSpec routes are learned in a particular routing instance and can be used to dynamically create filter entries in a given filter policy using the embed-filter flowspec command.

The following rules apply to FlowSpec embedding:

  1. The operator explicitly defines both the offset at which to insert FlowSpec filter entries and the router instance the FlowSpec routes belong to. The embedded FlowSpec filter entry ID is chosen by the system following RFC 5575, Dissemination of Flow Specification Rules. Note that these entry IDs are not necessarily sequential and do not necessarily follow the order in which rules are received.
  2. The maximum number of FlowSpec filter entries in a given filter policy is configurable by the operator at the router or VPRN level using the ip-filter-max-size and ipv6-filter-max-size commands. This limit defines the boundary for FlowSpec embedding in a filter policy (offset and ip-filter-max-size).
  3. When using a filter policy of scope template or exclusive, the router instance defined in the embed-filter flowspec command must match the router interface that the filter policy is applied to and the router instance that the FlowSpec routes are received from.
  4. When using a filter policy of scope system, embedding FlowSpec entries from different router instances is allowed and can be applied to any router interfaces.
  5. See section IPv4/IPv6 Filter Policy Entry Match Criteria on embedded filter scope for recommendations on filter entry ID spacing and overlapping of entries.

The following is a FlowSpec configuration example:

  1. The maximum number of FlowSpec routes in the base router instance is configured for 50,000 entries using the ip-filter-max-size command.
  2. The filter policy 100 of scope template is configured to embed FlowSpec routes from the base router instance at offset 100,000. The offset chosen in this example avoids overlapping with statically defined entries in the same policy. In this case, the statically defined entries can use the entry ID ranges 1-99999 and 150000-2M for defining static entries before or after the FlowSpec filter entries.
A:7750>config>router#
----------------------------------------------
        flowspec
            ip-filter-max-size 50000
        exit
----------------------------------------------
A:7750>config>filter# info
----------------------------------------------
        ip-filter 100 name "100" create
            embed-filter flowspec router "Base" offset 100000
        exit
----------------------------------------------

OpenFlow

The embedded filter infrastructure is used to insert OpenFlow rules into an existing filter policy. See Hybrid OpenFlow Switch for more details. Policy-controlled auto-created filters are re-created on system reboot, but policy-controlled filter entries are lost on system reboot and must be re-programmed.

VSD

VSD filters are created dynamically using XMPP and managed using a Python script so rules can be inserted into or removed from the correct VSD template or embedded filters. XMPP messages received by the 7750 SR are passed transparently to the Python module to generate the appropriate CLI. For more information about VSD filter provisioning, automation, and Python scripting details refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 2 Services and EVPN Guide: VLL, VPLS, PBB, and EVPN.

RADIUS/Diameter for Subscriber Management

The operator can assign filter policies or filter entries used by a subscriber within a preconfigured filter entry range defined for RADIUS or Diameter. Refer to the 7450 ESS, 7750 SR, and VSR Triple Play Service Delivery Architecture Guide and filter RADIUS-related commands for more details.

4.1.2.5. Primary and Secondary Filter Policy Action for PBR/PBF Redundancy

In some deployments, operators may want to specify a backup PBR/PBF target if the primary target is down. SR OS allows the configuration of a primary action (config>filter>{ip-filter | ipv6-filter | mac-filter}>entry>action) and a secondary action (config>filter>{ip-filter | ipv6-filter | mac-filter}>entry>action secondary) as part of a single filter policy entry. The secondary action can only be configured if the primary action is configured.

For Layer 2 PBF redundancy, the operator can configure the following redundancy options:

  1. action forward sap AND action secondary forward sap
  2. action forward sdp AND action secondary forward sdp
  3. action forward sap AND action secondary forward sdp
  4. action forward sdp AND action secondary forward sap

For Layer 3 PBR redundancy, an operator can configure any of the following actions as a primary action and any of the following (either the same as or different from the primary) as a secondary action. Furthermore, none of the parameters need to be the same between the primary and secondary actions. Although the following commands refer to IPv4 in the ip-address parameter, they also apply to IPv6.

  1. forward next-hop ip-address router router-instance
  2. forward next-hop ip-address router service-name service-name
  3. forward next-hop indirect ip-address router router-instance
  4. forward next-hop indirect ip-address router service-name service-name

When primary and secondary actions are configured, PBR/PBF uses the primary action if its target is operationally up, or it uses the secondary action if the primary PBR/PBF target is operationally down. If both targets are down, the default action when the target is down (see Table 9), as per the primary action, is used, unless pbr-down-action-override is configured.

When PBR/PBF redundancy is configured, the operator can use sticky destination functionality for a redundant filter entry. When sticky destination is configured (config>filter>{ip-filter | ipv6-filter | mac-filter}>entry>sticky-dest), the functionality mimics that of sticky destination configured for redirect policies. To force a switchover from the secondary to the primary action when sticky destination is enabled and secondary action is selected, the operator can use the tools>perform>filter>{ip-filter | ipv6-filter | mac-filter}>entry>activate-primary-action command. Sticky destination can be configured even if no secondary action is configured.
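The redundancy described above can be sketched as follows; the addresses, prefixes, router instance, and IDs are illustrative, and sticky-dest is shown without its optional hold timer:

A:7750>config>filter# info
----------------------------------------------
        ip-filter 250 name "250" create
            entry 10 create
                match
                    dst-ip 192.0.2.0/24
                exit
                action forward next-hop 10.1.1.1 router "Base"
                action secondary forward next-hop 10.2.2.1 router "Base"
                sticky-dest
            exit
        exit
----------------------------------------------

With this entry in place, a manual switchover back to the primary action would use tools perform filter ip-filter 250 entry 10 activate-primary-action.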

The control plane monitors whether primary and secondary actions can be performed and programs forwarding filter policy to use either the primary or secondary action as required. More generally, the state of PBR/PBF targets is monitored in the following situations:

  1. when a secondary action is configured
  2. when sticky destination is configured
  3. when a pbr-down-action-override is configured

The show>filter>{ip-filter | ipv6-filter | mac-filter} [entry] command displays which redundant action is activated or downloaded, including when both PBR/PBF targets are down. The following example shows partial output of the command as applicable for PBF redundancy.

*A:vsim-200001# show filter ip 10 entry 1000
Primary Action      : Forward (SAP)         <- details of (primary) action
  Next Hop          : 1/1/1
  Service Id        : Not configured
  PBR Target Status : Does not exist
Secondary Action    : Forward (SAP)         <- details of secondary action
  Next Hop          : 1/1/2
  Service Id        : Not configured
  PBR Target Status : Does not exist
PBR Down Action     : Forward (pbr-down-action-override) <- PBR down behavior
Downloaded Action   : None                  <- currently downloaded action
Dest. Stickiness    : 1000                  Hold Remain    : 0 <- sticky dest details

4.1.2.6. Extended Action for Performing Two Actions at a Time

In certain deployment scenarios, for example to realize service function chaining, operators may want to perform a second action in addition to a traffic steering action. SR OS allows this behavior by configuring an extended action for a main action. This functionality is supported for Layer 3 traffic steering (that is, PBR) and more specifically for the following main actions:

  1. forward esi (Layer 3 version)
  2. forward lsp
  3. forward next-hop [indirect] [router]
  4. forward next-hop interface
  5. forward redirect-policy
  6. forward router
  7. forward vprn-target

Furthermore, the capability to specify an extended action is also supported in the case of PBR redundancy, therefore for the following action:

  1. forward next-hop [indirect] router

The supported extended action is:

  1. remark dscp dscp-name

The extended action is available in the following contexts: config>filter>ip-filter>entry>action>extended-action and config>filter>ipv6-filter>entry>action>extended-action.
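A hedged sketch combining a steering main action with the remark dscp extended action follows; the address, prefix, IDs, and the dscp-name af41 are illustrative, and the exact nesting of the action context should be verified against the command reference:

A:7750>config>filter# info
----------------------------------------------
        ip-filter 260 name "260" create
            entry 10 create
                match
                    dst-ip 198.51.100.0/24
                exit
                action
                    forward next-hop 10.1.1.1 router "Base"
                    extended-action
                        remark dscp af41
                    exit
                exit
            exit
        exit
----------------------------------------------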

If the status of the target of the main action is tracked, which is the case for PBR/PBF redundancy, the extended action listed above is not performed when the PBR target is down. Moreover, a filter policy containing an entry with the extended action remark dscp is blocked in the following cases:

  1. if applied on ingress with the egress-pbr flag set
  2. if applied on egress without the egress-pbr flag set

The latter case includes actions that are not supported on egress (and for which egress-pbr cannot be set).

4.1.2.7. Advanced VPRN Redirection

The vprn-target action is a resilient redirection capability that combines data-path and control plane lookups to achieve the desired redirection. It allows for the following redirection models:

  1. redirection towards the default PE while selecting a specific LSP to use
  2. redirection towards an alternative PE, with or without selecting a specific LSP to use. If a specific LSP is not selected, the system automatically selects one based on the BGP next-hop tunnel resolution mechanism
  3. all of the above within any VPRN

When configuring this action, the user must specify the target BGP next-hop (bgp-nh) towards which the redirection should occur, as well as the routing context (router) in which the necessary lookups are performed (to derive the service label).

The target BGP next-hop can be configured with any label allocation method (label per VRF, label per next-hop, label per prefix). These methods entail different forwarding behaviors; however, the steering node is not aware of the configuration of the target node. If the user does not specify an advertised route prefix (adv-prefix), the steering node assumes that label per VRF is used by the target node and selects the service label accordingly. If the target node is not operating according to the label per VRF method, the user must specify an appropriate route prefix for which a service label is advertised by the target node, keeping in mind the resulting forwarding behavior at the target node of the redirected packet. This specification instructs the steering node to use that specific service label.

Be aware that the system performs an exact match between the specified ip-address/mask (or ipv6-address/prefix-length) and the advertised route.

The user can specify an LSP (RSVP-TE, MPLS-TP, or SR-TE LSP) to use towards the BGP next-hop. If no LSP is specified, the system automatically selects one the same way it would have done when normally forwarding a packet towards the BGP next-hop.

Note:

While the system only performs the redirection when the traffic is effectively able to reach the target BGP next-hop, it does not verify whether the redirected packets effectively reach their destination after that.

This action is resilient in that it tracks events affecting the redirection at the service level and reacts to those events. The system performs the redirection as long as it can reach the target BGP next-hop using the proper service label. If the redirection cannot be performed (for example, if no LSP is available, the peer is down, or there is no more specific labeled route), the system reverts to normal forwarding. This can be overridden and configured to drop. A maximum of 8K unique (3-tuple {bgp-nh, router, adv-prefix}) redirection targets can be tracked.
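A sketch of the vprn-target action follows; the BGP next-hop, router service ID, LSP name, prefix, and IDs are all illustrative assumptions, and the parameter order should be verified against the command reference:

A:7750>config>filter# info
----------------------------------------------
        ip-filter 270 name "270" create
            entry 10 create
                match
                    dst-ip 203.0.113.0/24
                exit
                action forward vprn-target bgp-nh 10.0.0.2 router 500 lsp "to-pe2"
            exit
        exit
----------------------------------------------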

4.1.2.8. Destination MAC Rewrite When Deploying Policy-Based Forwarding

For Layer 2 Policy-Based Forwarding (PBF) redirect actions, a far-end router may discard redirected packets when the PBF changes the destination IP interface the packet arrives on. This happens when a far-end IP interface uses a different MAC address than the IP interface reachable via normal forwarding (for example, one of the routers does not support a configurable MAC address per IP interface). To avoid the discards, operators can deploy egress destination MAC rewrite functionality for VPLS SAPs (config>service>vpls>sap>egress>dest-mac-rewrite). Figure 23 shows a sample deployment.
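A sketch of enabling the rewrite on the VPLS SAP facing the far-end router follows; the service ID, customer ID, SAP, and MAC address are illustrative:

A:7750>config>service# info
----------------------------------------------
        vpls 100 customer 1 create
            sap 1/1/3:100 create
                egress
                    dest-mac-rewrite 00:00:5e:00:53:10
                exit
            exit
        exit
----------------------------------------------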

Figure 23:  Layer 2 Policy-Based Forwarding (PBF) redirect action 

When enabled, all unicast packets have their destination MAC rewritten to the operator-configured value on a Layer 2 switch VPLS SAP. Multicast and broadcast packets are unaffected. The feature:

  1. Is supported for regular and split-horizon group Ethernet SAPs in a regular VPLS Service
  2. Is expected to be deployed on a SAP that faces the far-end IP interface (either a SAP that is the target of the PBF action, as shown in Figure 23, or a VPLS SAP of a downstream Layer 2 switch that is connected to a far-end router; not shown).
  3. Applies to any unicast egress traffic including LI and mirror.

Restrictions:

  1. Is mutually exclusive with SAP MAC ingress and egress loopback feature: tools perform service id service-id loopback eth sap sap-id {ingress | egress} mac-swap ieee-address

4.1.2.9. Network-port VPRN Filter Policy

The network-port Layer 3 service-aware filter feature allows operators to deploy VPRN service-aware ingress filtering on network ports. A single ingress filter of scope template can be defined for IPv4 and another for IPv6 against a VPRN service. The filter applies to all unicast traffic arriving on auto-bind and explicit-spoke network interfaces for that service. The network interface can be either Inter-AS or Intra-AS. The filter does not apply to traffic arriving on access interfaces (SAP, spoke-SDP) or on network ingress (CsC, R-VPLS, EVPN).

The same filter can be used on access interfaces of the specific VPRN, can embed other filters (including OpenFlow), can be chained to a system filter, and can be used by other Layer 2 or Layer 3 services.

The filter is deployed on all line cards (chassis network mode D is required). There are no limitations related to filter match/action criteria or embedding. The filter is programmed on line cards against ILM entries for this service. All label-types are supported. If an ILM entry has a filter index programmed, that filter is used when the ILM is used in packet forwarding; otherwise, no filter is used on the service traffic.

Restrictions:

  1. Network port Layer 3 service-aware filters do not support FlowSpec and LI (cannot use filter inside LI infrastructure nor have LI sources within the VPRN filter).

4.1.2.10. ISID MAC Filters

ISID filters are a type of MAC filter that allows filtering based on ISID values rather than the Layer 2 criteria used by MAC filters of type normal or vid. ISID filters can be deployed on iVPLS PBB SAPs and Epipe PBB SAPs in the following scenarios:

The MMRP usage of the MRP policy automatically ensures that traffic using the Group B-MAC is not flooded between domains. However, there could be small transitory periods when traffic originated from a PBB BEB with a unicast B-MAC destination may be flooded in the BVPLS context as unknown unicast, for both IVPLS and PBB Epipe. To restrict distribution of this traffic for local PBB services, ISID filters can be deployed. The MAC filter configured with the ISID match criterion can be applied to the same interconnect endpoints (BVPLS SAP or PW) as the MRP policy to restrict the egress transmission of any type of frames that contain a local ISID. The ISID filters are applied as required on a per-B-SAP or per-B-PW basis, only in the egress direction.

The ISID match criterion is exclusive with any other criteria under mac-filter. A new mac-filter type attribute is defined to control the use of ISID match criteria and must be set to ISID to allow the use of the ISID match criterion.
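A hedged sketch of an ISID filter follows; the filter ID, ISID value, and the exact isid match keyword (and whether range forms are available) are assumptions:

A:7750>config>filter# info
----------------------------------------------
        mac-filter 50 create
            type isid
            entry 10 create
                match
                    isid 100
                exit
                action drop
            exit
        exit
----------------------------------------------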

4.1.2.11. VID MAC Filters

VID filters are a type of MAC filter that extends the capability of current Ethernet ports with null or default SAP tag configuration to match and take action on VID tags. Service delimiting tags (for example, QinQ 1/1/1:10.20 or dot1q 1/1/1:10, where outer tag 10 and inner tag 20 are service delimiting) allow fine-granularity control of frame operations based on the VID tag. Service delimiting tags are exact match and are stripped from the frame as shown in Figure 24. Exact match or service delimiting tags do not require VID filters. VID filters can only be used to match on frame tags that come after the service delimiting tags.

With VID filters, operators can choose to match VID tags for up to two tags on ingress, egress, or both.

  1. The outer tag is the first tag in the packet that is carried transparently through the service.
  2. The inner tag is the second tag in the packet that is carried transparently through the service.

VID filters add the capability to perform VID value filter policies on default tags (1/1/1:*, 1/1/1:x.*, or 1/1/1:*.0) or null tags (1/1/1, 1/1/1:0, or 1/1/1:x.0). The matching is based on the port configuration and the SAP configuration.

At ingress, the system looks for the two outer-most tags in the frame. If present, any service delimiting tags are removed and not visible to VID MAC filtering. For example:

  1. 1/1/1:x.y SAP has no tag left for VID MAC filter to match on (outer-tag and inner-tag = 0)
  2. 1/1/1:x.* SAP has potentially one tag in the * position for VID MAC filter to match on
  3. SAP such as 1/1/1, 1/1/1:*, or 1/1/1:*.* can have as many as two tags for VID MAC filter to match on
  4. For the remaining tags, the left-most (outer-most) tag is used as the outer tag in the MAC VID filter, and the following tag is used as the inner tag. If either of these positions does not have a tag, a value of 0 is used in the filter.

At egress, the VID MAC filter is applied to the frame prior to adding the additional service tags.

In the industry, the QinQ tags are often referred to as the C-VID (customer VID) and S-VID (service VID). The terms outer tag and inner tag allow flexibility without having to refer to C-TAG and S-TAG explicitly. The position of inner and outer tags is relative to the port configuration and SAP configuration. Matching of tags is allowed for up to the first two tags on a frame because service delimiting tags may be 0, 1, or 2 tags.

The meaning of inner and outer has been designed to be consistent for egress and ingress when the number of non-service delimiting tags is consistent. Service 1 in Figure 24 shows a conversion from QinQ to a single dot1q example, where there is one non-service delimiting tag on ingress and egress. Service 2 shows a symmetric example with two non-service delimiting tags (plus an additional tag for illustration) to two non-service delimiting tags on egress. Service 3 shows a single non-service delimiting tag on ingress and two tags, with one non-service delimiting tag, on egress.

SAP-ingress QoS setting allows for MAC-criteria type VID, which uses the VID filter matching capabilities of QoS and VID Filters (see the 7450 ESS, 7750 SR, 7950 XRS, and VSR Quality of Service Guide).

A VID filter entry can also be used as a debug or lawful intercept mirror source entry.

Figure 24:  VID Filtering Examples 

VID filters are available on Ethernet SAPs for Epipe, VPLS, or I-VPLS including eth-tunnel and eth-ring services.

4.1.2.11.1. Arbitrary Bit Matching of VID Filters

In addition to matching an exact value, a VID filter mask allows masking any set of bits. The masking operation is ((value AND vid-mask) == (tag AND vid-mask)). For example, a value of 6 and a mask of 7 matches all VIDs with the lower 3 bits set to 6. VID filters allow explicit matching of VIDs and matching of any bit pattern within the VID tag.
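Continuing the value-6/mask-7 example above, the following entry drops frames whose outer VID has its lower 3 bits equal to 6, that is VIDs 6, 14, 22, and so on; the filter and entry IDs are illustrative:

mac-filter 6 create
    type vid
    entry 1 create
        match frame-type ethernet_II
            outer-tag 6 7
        exit
        action drop
    exit
exit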

When using VID filters on SAPs, only VID filters are allowed on this SAP. Filters of type normal and ISID are not allowed.

An additional check for the “0” VID tag may be required when using certain wild card operations. For example, frames with no tags on null encapsulated ports match a value of 0 in outer tag and inner tag because there are no tags in the frame for matching. If a zero tag is possible but not wanted, it can be explicitly filtered using exact match on “0” prior to testing other bits for “0”.

config>system>ethernet>new-qinq-untagged-sap is a special QinQ function for single tagged QinQ frames with a null second tag. Using this in combination with VID filters is not recommended. The outer tag is the only tag available for filtering on egress for frames arriving from MPLS SDPs or from PBB services, even though additional tags may be carried transparently.

4.1.2.11.2. Port Group Configuration Example

Figure 25:  Port Groups 

Figure 25 shows a customer use example where some VLANs are prevented from ingressing or egressing certain ports. In the example, port A sap 1/1/1:1.* would have a filter as shown below, while port A sap 1/1/1:2.* would not:

mac-filter 4 create
    default-action forward
    type vid
    entry 1 create
        match frame-type ethernet_II
            outer-tag 30 4095
        exit
        action drop
    exit
exit

4.1.2.12. IP Exception Filters

IP exception filters scan all outbound traffic entering an NGE domain and allow packets that match the exception filter criteria to transit the NGE domain unencrypted. For information on IP exception filters supported by NGE nodes, see Router Encryption Exceptions using ACLs.

The most basic IP exception filter policy must have the following:

  1. an exception filter policy ID
  2. scope, either exclusive or template
  3. at least one filter entry with a specified matching criteria

4.1.2.13. Redirect Policies

SR OS-based routers support configuration of IPv4 and IPv6 redirect policies. Redirect policies allow specifying multiple redirect target destinations and defining status-check test methods used to validate the ability of a destination to receive redirected traffic. This destination monitoring allows routers to react to target destination failures. To specify an IPv4 redirect policy, define all destinations to be IPv4; to specify an IPv6 redirect policy, define all destinations to be IPv6. IPv4 redirect policies can only be deployed in IPv4 filter policies, and IPv6 redirect policies can only be deployed in IPv6 filter policies.

Redirect policies support the following destination tests:

  1. ping test – with configurable interval, drop-count, and time-out
  2. unicast-rt-test – unicast routing reachability, supported only when router instance is configured for a specific redirect policy. The test yields true if the route to the specified destination exists in RTM for the configured router instance.

Each destination is assigned an initial or base priority describing this destination’s relative importance within the policy. The destination with the highest priority value is selected as most-preferred destination and programmed on line cards in filter policies using this redirect policy as an action. Only destinations that are not disabled by the programmed test (if configured) are considered when selecting the most-preferred destination.

In some deployments, it may not be necessary to switch from a currently active, most-preferred redirect-policy destination when a new more-preferred destination becomes available. To support such deployments, operators can enable the sticky destination functionality (config>filter>redirect-policy>sticky-dest). When enabled, the currently active destination remains active unless it goes down or an operator forces the switch using the tools>perform>filter>redirect-policy>activate-best-dest command. An optional sticky destination hold-time-up is available to delay programming the sticky destination in the redirect policy (transition from action forward to PBR action to the most-preferred destination). When the timer is enabled, the first destination that comes up is not programmed and instead the timer is started. Once the timer expires, the most-preferred destination at that time is programmed (which may be a different destination from the one that started the timer). Note the following:

  1. When the manual switchover to most-preferred destination is executed as described above, the hold-time-up is stopped.
  2. When the timer value is changed, the new value takes immediate effect and the timer is restarted with the new value (or expired if no-hold-time-up is configured).
Note:

The unicast-rt-test command fails when performed in the context of a VPRN routing instance when the destination is routable only through grt-leak functionality. ping-test is recommended in these cases.
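The destination priority, test, and sticky destination behavior described above can be sketched as follows; the policy name, addresses, priorities, and timer values are illustrative, and the exact ping-test parameter syntax (interval, drop-count, time-out) should be verified against the command reference:

A:7750>config>filter# info
----------------------------------------------
        redirect-policy "cache-redirect" create
            destination 10.10.10.1 create
                priority 100
                ping-test
                    interval 1
                    drop-count 3
                exit
                no shutdown
            exit
            destination 10.10.10.2 create
                priority 50
                ping-test
                    interval 1
                    drop-count 3
                exit
                no shutdown
            exit
            sticky-dest 60
            no shutdown
        exit
----------------------------------------------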

Feature restrictions:

  1. Redirect policy is supported for ingress IPv4 and IPv6 filter policies only.
  2. Different platforms support different scale for redirect policies. Contact your local Nokia representative to ensure the planned deployment does not exceed recommended scale.

4.1.2.13.1. Router Instance Support for Redirect Policies

There are two modes of deploying redirect policies on VPRN interfaces. The functionality supported depends on the configuration of the redirect-policy router option with config>filter>redirect-policy>router:

  1. Redirect policy with router option enabled (recommended):
    1. When a PBR destination is up, the PBR lookup is performed in the redirect policy's configured routing instance. When that instance differs from the incoming interface where the filter policy using the specific redirect policy is deployed, the PBR action is equivalent to forward next-hop router filter policy action.
    2. When all PBR destinations are down (or the hardware does not support the router action), action forward is programmed and the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.
    3. Any destination tests configured are executed in the routing context specified by the redirect policy.
    4. Changing router configuration for a redirect policy brings all destinations with a test configured down. The destinations are brought up once the test confirms reachability based on the new redirect policy router configuration.
  2. Redirect policy with router option disabled (no router) or with router option not supported (legacy):
    1. When a PBR destination is up, the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.
    2. When all PBR destinations are down, action forward is programmed and the PBR lookup is performed in the routing instance of the incoming interface where the filter policy using the specific redirect policy is deployed.
    3. Any destination tests configured are always executed in the Base router instance regardless of the router instance of the incoming interface where the filter policy using the specific redirect policy is deployed.

Restrictions:

  1. Only unicast-rt-test and ping-test are supported when the router option is enabled.

4.1.2.13.2. Binding Redirect Policies

Redirect policies can switch from a specific destination to a new destination in a coordinated manner as opposed to independently as a function of the reachability test results of their configured destinations. This is achieved by binding together destinations of redirect policies using the config>filter>redirect-policy-binding command. SR OS combines the reachability test results (either TRUE or FALSE) from each of the bound destinations and forms a master test result which prevails over each independent result. The combined result can be obtained by applying either an AND function or an OR function. For the AND function, all destinations must be UP (reachability test result equals TRUE) for each destination to be considered UP. Conversely, a single destination must be DOWN for each to be considered DOWN; for the OR case, a single destination needs to be UP for each destination to be considered UP. Apart from the master test, which overrides the test result of each destination forming a binding, redirect policies are unaltered. For stickiness capability, switching towards a more-preferred destination in a specified redirect policy does not occur until the timers (if any) of each of the associated destinations have expired.

There is no specific constraint regarding destinations that can be bound together. For example, it is possible to bind destinations of different address families (IPv4 or IPv6), destinations with no test, destinations with multiple tests, or destinations of redirect policies which are administratively down. However, some specific scenarios exist when binding redirect policies:

  1. A destination that is in the Administratively down state is considered DOWN (that is, as if its test result was negative, even if no test had been performed).
  2. An Administratively down redirect policy is equivalent to a policy with all destinations in an Administratively down state. The system performs a simple forward.
  3. A destination with no test is considered always UP.
  4. If a destination has multiple tests, all tests must be positive for the destination to be considered UP (logical AND between its own tests results).
  5. Destination tests are performed even if a redirect policy has not been applied (that is, not declared as an action of a filter which itself has been applied).
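The binding described above can be sketched as follows; the binding name, the operator keyword, and the per-destination syntax are assumptions, and the two referenced redirect policies are presumed to exist:

A:7750>config>filter# info
----------------------------------------------
        redirect-policy-binding "bind-1" create
            binding-operator and
            redirect-policy "rp-a" destination 10.10.10.1
            redirect-policy "rp-b" destination 10.10.10.2
        exit
----------------------------------------------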

4.1.2.14. HTTP-Redirect (Captive Portal)

SR OS routers support redirecting HTTP traffic by using the line card ingress IP and IPv6 filter policy action http-redirect. This capability is mainly used in a subscriber-management context to redirect a subscriber web session to a captive portal landing page. Examples of use cases include redirecting a subscriber after initial connection to a new network to accept the terms of service, or redirecting a subscriber that is out of quota.

Traffic matching the http-redirect filter entry is sent to the SF/CPM for HTTP redirection:

  1. The SF/CPM completes the TCP three-way handshake for new TCP sessions on behalf of the intended server, and responds to the HTTP GET request with a 302 redirect. As a result, the subscriber web session is redirected to the portal landing page configured in the http-redirect filter action.
  2. Non-TCP flows are ignored.
  3. TCP flows other than HTTP that match an http-redirect filter action are reset (TCP RST) after the three-way handshake. Therefore, it is recommended to configure the http-redirect filter entry to match only TCP port 80. HTTPS uses TLS as its underlying protocol and cannot be redirected to a landing page.

Additional subscriber information may be required by the captive portal. This information can be appended as variables in the http-redirect URL and automatically substituted with the relevant subscriber session data, as follows:

  1. $IP - subscriber host IP address
  2. $MAC - subscriber host MAC address
  3. $URL - original requested URL
  4. $SAP - subscriber SAP
  5. $SUB - subscriber identification string
  6. $CID - circuit-ID, or interface-ID of the subscriber host (hexadecimal format)
  7. $RID - remote-ID of the subscriber host (hexadecimal format)
  8. $SAPDESC - configured SAP description
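The variable substitution above can be sketched as follows. This is a simplified illustration, not SR OS code; the `expand_redirect_url` helper and the session dictionary are invented for the example, and only the $-variable names come from the list above.

```python
# Hypothetical sketch of http-redirect URL variable substitution.
# The $-variables are those listed above; the session dictionary and
# helper function are invented for illustration.
from urllib.parse import quote

def expand_redirect_url(template, session):
    # Substitute longer variable names first so that, for example,
    # $SAPDESC is not partially matched by $SAP.
    for var in sorted(session, key=len, reverse=True):
        template = template.replace("$" + var, quote(str(session[var]), safe=""))
    return template

session = {
    "IP": "192.0.2.10",
    "MAC": "00:11:22:33:44:55",
    "SUB": "sub-001",
    "SAP": "1/1/5:5",
}
url = expand_redirect_url(
    "http://portal.example.com/redirect.html?subscriber=$SUB&ip=$IP", session)
```

Percent-encoding the substituted values (as `quote` does here) keeps characters such as `/` in the SAP identifier from corrupting the query string.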

The recommended filter configuration to redirect HTTP traffic is described below using ingress ip-filter policy "10":

  1. entry 10: Allows DNS UDP port 53 to a list of allowed DNS servers. Allowing DNS is mandatory for a web client to resolve a URL in the first place. The UDP port directionality indicates a DNS request. The destination IP match criterion is optional; creating a list that includes the operator DNS servers and the most common open DNS servers provides the most security. Alternatively, allowing UDP dst-port 53 alone is another option.
  2. entry 20: Allows HTTP TCP port 80 traffic to the portal landing page defined as a prefix-list. The TCP port directionality indicates an HTTP request. Optionally, the operator can create an additional entry to allow TCP port 443 in case the landing page uses both HTTP and HTTPS.
  3. entry 30: Redirects all TCP port 80 traffic, other than entry 20, to the landing page URL http://www.mydomain/com/redirect.html?subscriber=$SUB&ipaddress=$IP&mac=$MAC&location=$SAP.
  4. entry 40: Explicitly drops any other IP flows, as in the following configuration example:
        ip-filter 10 name "10" create 
            entry 10 create
                description "Allow DNS Traffic to DNS servers"
                match protocol udp
                    dst-ip ip-prefix-list "dns-servers"
                    dst-port eq 53
                exit
                action
                    forward
                exit
            exit
            entry 20 create
                description "Allow HTTP traffic to redirect portal"
                match protocol tcp
                    dst-ip ip-prefix-list "portal-servers"
                    dst-port eq 80
                exit
                action
                    forward
                exit
            exit
            entry 30 create
                description "HTTP Redirect all other TCP 80 flows"
                match protocol tcp
                    dst-port eq 80
                exit
                action
                    http-redirect "http://www.mydomain/com/redirect.html?subscriber=$SUB&ipaddress=$IP&mac=$MAC&location=$SAP"
                exit 
            exit
            entry 40 create
                description "Drop anything else"
                action
                    drop
                exit
            exit
        exit

The router also supports two redirect scale modes, configurable at the system level. The optimized-mode increases the number of HTTP redirect sessions supported by the system compared to the default (non-optimized) mode, as follows:

A>config>system>cpm-http-redirect#
----------------------------------------------
 optimized-mode
----------------------------------------------

4.1.2.14.1. Traffic Flow

The following example provides a brief scenario of a subscriber connecting to a new network, where the subscriber is required to authenticate or accept the network terms of use before getting access to the Internet:

  1. The subscriber typically receives an IP address upon connecting to the network using DHCP, and is assigned a filter policy to redirect HTTP traffic to a web portal.
  2. The subscriber HTTP session TCP traffic is intercepted by the router. The CPM completes the TCP three-way handshake on behalf of the destination HTTP server, and responds to the HTTP request with an HTTP 302 “Moved Temporarily” response. This response contains the URL of the web portal configured in the filter policy.
  3. Upon receiving this redirect message, the subscriber web browser closes the original TCP session, and opens a new TCP session to the redirection portal.
  4. The subscriber can now authenticate or accept the terms of use. Once done, the subscriber filter policy is dynamically modified.
  5. The subscriber can now connect to the original Internet site.
Figure 26:  Web Redirect Traffic Flow 
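The 302 response sent by the CPM in step 2 has the general shape shown below. This is a generic HTTP example, not a capture from SR OS; the portal URL is hypothetical, and the exact headers of the actual response may differ.

```python
# Generic example of the HTTP 302 "Moved Temporarily" response used to
# redirect the client's browser. The portal URL is hypothetical; header
# details of the actual SR OS response may differ.
def build_302(location):
    return (
        "HTTP/1.1 302 Moved Temporarily\r\n"
        "Location: " + location + "\r\n"
        "Connection: close\r\n"     # prompts the client to close this session
        "Content-Length: 0\r\n"
        "\r\n"
    )

response = build_302("http://portal.example.com/landing.html")
```

The Location header carries the portal URL from the filter policy; on receiving it, the browser closes the original TCP session and opens a new one to the portal, as described in step 3.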

4.1.2.15. Filter Policy-based ESM Service Chaining

In some deployments, operators may choose to redirect ESM subscribers to Value Added Services (VAS). Various deployment models can be used, but often subscribers are assigned to a specific residential tier of service, which also defines the VAS available to subscribers of that tier. The subscribers are redirected to VAS based on tier-of-service rules, but such an approach can be hard to manage when many VAS services or tiers of service are desired. Often the only way to identify a subscriber's traffic with a specific tier of service is to preallocate IP/IPv6 address pools to a specific service tier and use those addresses in VAS PBR match criteria. This creates an application-services to network-infrastructure dependency that can be hard to overcome, especially if fast and flexible application service delivery is desired.

Filter policy-based ESM service chaining removes ESM VAS steering to network infrastructure inter-dependency. An operator can configure per tier of service or per individual VAS service upstream and downstream service chaining rules without a need to define subscriber or tier-of-service match conditions. Figure 27 shows a possible ACL model (embedded filters are used for VAS service chaining rules).

On the left in Figure 27, the per-tier-of-service ACL model is depicted. Each tier of service (Gold or Silver) has a dedicated embedded VAS filter (“Gold VAS”, “Silver VAS”) that contains all steering rules for all service chains applicable to the specific tier. Each VAS filter is then embedded by the ACL filter used by a specific tier. A subscriber is subject to VAS service chain rules based on the per-tier ACL assigned to that subscriber (for example, via RADIUS). If a new VAS rule needs to be added, an operator must program that rule in all applicable tiers. Upstream and downstream rules can be configured in a single filter (as shown) or can use dedicated ingress and egress filters.

On the right in Figure 27, the per-VAS-service ACL model is depicted. Each VAS has a dedicated embedded filter (“VAS 1”, “VAS 2”, “VAS 3”) that contains all steering rules for all service chains applicable to that VAS service. A tier of service is then created by embedding multiple VAS-specific filters: Gold: VAS 1, VAS 2, VAS 3; Silver: VAS 1 and VAS 3. A subscriber is subject to VAS service chain rules based on the per-tier ACL assigned to that subscriber. If a new VAS rule needs to be added, an operator needs to program that rule in a single VAS-specific filter only. Again, upstream and downstream rules can be configured in a single filter (as shown) or can use dedicated ingress and egress filters.

Figure 27:  ACL filter modeling for ESM Service Chaining 

Figure 28 shows upstream VAS service chaining steering using filter policies. Upstream subscriber traffic entering Res-GW is subject to the subscriber's ingress ACL filter assigned to that subscriber by a policy server. If the ACL contains VAS steering rules, the VAS-rule-matching subscriber traffic is steered for VAS processing over a dedicated to-from-access VAS interface in the same or a different routing instance. After the VAS processing, the upstream traffic can be returned to Res-GW by a to-from-network interface (shown in Figure 28) or can be injected to WAN to be routed toward the final destination (not shown).

Figure 28:  Upstream ESM ACL-policy based service chaining 

Figure 29 shows downstream VAS service chaining steering using filter policies. Downstream subscriber traffic entering Res-GW is forwarded to a subscriber-facing line card. On that card, the traffic is subject to the subscriber's egress ACL filter policy processing assigned to that subscriber by a policy server. If the ACL contains VAS steering rules, the VAS rule-matching subscriber's traffic is steered for VAS processing over a dedicated to-from-network VAS interface (in the same or a different routing instance). After the VAS processing, the downstream traffic must be returned to Res-GW via a “to-from-network” interface (shown in Figure 29) to ensure the traffic is not redirected to VAS again when the subscriber-facing line card processes that traffic.

Figure 29:  Downstream ESM ACL-policy based service chaining 

Ensuring the correct settings for the VAS interface type, for upstream and downstream traffic redirected to a VAS and returned after VAS processing, is critical for achieving loop-free network connectivity for VAS services. The available configuration options (config>service>vprn>if>vas-if-type, config>service>ies>if>vas-if-type and config>router>if>vas-if-type) are described below:

  1. deployments that use two separate interfaces for VAS connectivity (recommended, and required if local subscriber-to-subscriber VAS traffic support is required)
    1. to-from-access
      1. upstream traffic arriving from subscribers over access interfaces must be redirected to a VAS PBR target reachable over this interface for upstream VAS processing
      2. downstream traffic destined for subscribers after VAS processing must arrive on this interface, so that the traffic is subject to regular routing but is not subject to Application Assurance diversion, nor to egress subscriber PBR
      3. the interface must not be used for downstream pre-VAS traffic; otherwise, routing loops occur
    2. to-from-network
      1. downstream traffic destined for subscribers arriving from network interfaces must be redirected to a VAS PBR target reachable over this interface for downstream VAS processing
      2. upstream traffic after VAS processing, if returned to the router, must arrive on this interface so that regular routing can be applied
  2. deployments that use a single interface for VAS connectivity (optional, no local subscriber-to-subscriber VAS traffic support)
    1. to-from-both
      1. both upstream traffic arriving from access interfaces and downstream traffic arriving from the network are redirected to a PBR target reachable over this interface for upstream/downstream VAS processing
      2. after VAS processing, traffic must arrive on this interface (optional for upstream), so that the traffic is subject to regular routing but is not subject to AA diversion, nor to egress subscriber PBR
      3. the interface must be used for downstream pre-VAS traffic, otherwise, routing loops occur

The ESM filter policy-based service chaining allows operators to do the following:

  1. Steer upstream and downstream traffic per-subscriber with full ACL-flow-defined granularity without the need to specify match conditions that identify subscriber or tier-of-service
  2. Steer both upstream and downstream traffic on a single Res-GW
  3. Flexibly assign subscribers to tier-of-service by changing the ACL filter policy a specific subscriber uses
  4. Flexibly add new services to a subscriber or tier-of-service by adding the subscriber-independent filter rules required to achieve steering
  5. Achieve isolation of VAS steering from other ACL functions like security through the use of embedded filters
  6. Deploy integrated Application Assurance (AA) as part of a VAS service chain—both upstream and downstream traffic is processed by AA before a VAS redirect
  7. Select whether to use IP-Src/IP-Dst address hash or IP-Src/IP-Dst address plus TCP/UDP port hash when LAG/ECMP connectivity to DC is used. Layer 4 inputs are not used in hash with IPv6 packets with extension headers present.
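The two hash options in the last point can be sketched as follows. This is a simplified illustration: `zlib.crc32` stands in for the real hardware hash, and the field extraction is a placeholder, not the actual SR OS implementation.

```python
# Simplified illustration of the two LAG/ECMP hash options described
# above: IP-Src/IP-Dst only, or IP-Src/IP-Dst plus TCP/UDP ports.
# zlib.crc32 stands in for the real hardware hash.
import zlib

def vas_hash(src_ip, dst_ip, src_port=None, dst_port=None,
             use_l4=False, ipv6_ext_headers=False):
    key = src_ip + "|" + dst_ip
    # Layer 4 inputs are skipped for IPv6 packets carrying extension
    # headers, as noted above.
    if use_l4 and not ipv6_ext_headers:
        key += "|%s|%s" % (src_port, dst_port)
    return zlib.crc32(key.encode())

def pick_member(hash_value, num_links):
    # Map the hash onto one of the LAG/ECMP member links.
    return hash_value % num_links
```

Note how an IPv6 flow with extension headers falls back to the address-only key even when Layer 4 hashing is selected, so all such flows with the same address pair land on the same member link.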

ESM filter policy-based traffic steering supports the following:

  1. IPv4 and IPv6 steering of unicast traffic using IPv4 and IPv6 ACLs
  2. action forward redirect-policy or action forward next-hop router for IP steering with TCAM-based load-balancing, fail-to-wire, and sticky destination
  3. action forward esi sf-ip vas-interface router for an integrated service chaining solution

Operational notes:

  1. Downstream traffic steered toward a VAS on the subscriber-facing IOM is reclassified (FC and profile) based on the subscriber egress QoS policy, and is queued toward the VAS based on the network egress QoS configuration. Packets sent toward VAS do not have DSCP remarked (since they are not yet forwarded to a subscriber). DSCP remarking based on subscriber's egress QoS profile only applies to traffic ultimately forwarded to the subscriber (after VAS or not subject to VAS).
  2. If mirroring of subscriber traffic is configured using ACL entry/subscriber/SAP/port mirror, the mirroring applies to traffic ultimately forwarded to subscriber (after VAS or not subject to VAS). Traffic that is being redirected to VAS cannot be mirrored using an ACL filter implementing PBR action (the same egress ACL filter entry being a mirror source and specifying egress PBR action is not supported).
  3. Use dedicated ingress and egress filter policies to prevent accidental match of an ingress PBR entry on egress, and vice-versa, that results in forwarding/dropping of traffic matching the entry (based on the filter's default action configuration).

Restrictions:

  1. This feature is not supported with HSMDAs on subscriber ingress.
  2. This feature is not supported when the traffic is subject to non-AA ISA on Res-GW.
  3. Traffic that matches an egress filter entry with an egress PBR action cannot be mirrored, cannot be sampled using cflowd, and cannot be logged using filter logging while being redirected to VAS on a sub-facing line card.
  4. This feature is not supported with LAC/LNS ESM (PPPoE subscriber traffic encapsulated into or de-encapsulated from L2TP tunnels).
  5. This feature is not supported for system filter policies.

4.1.2.16. Policy-Based Forwarding for Deep Packet Inspection in VPLS

The purpose of policy-based forwarding is to capture traffic from a customer, perform deep packet inspection (DPI), and forward the traffic if allowed by the DPI.

In the following example, split horizon groups are used to prevent flooding of traffic. Traffic from customers enters at SAP 1/1/5:5. Due to mac-filter 100, which is applied on ingress, all traffic with dot1p 7 marking is forwarded to SAP 1/1/22:1, which is the DPI.

DPI performs packet inspection/modification and either drops the traffic or forwards the traffic back into the box through SAP 1/1/21:1. Traffic is then sent to spoke-sdp 3:5.

SAP 1/1/23:5 is configured to verify whether the VPLS service floods all the traffic. If the router performed flooding, traffic would also be sent to SAP 1/1/23:5 (which it should not be).

Figure 30 shows an example of configuring policy-based forwarding for deep packet inspection on a VPLS service. For information about configuring services, refer to the 7450 ESS, 7750 SR, 7950 XRS, and VSR Layer 2 Services and EVPN Guide: VLL, VPLS, PBB, and EVPN.

Figure 30:  Policy-Based Forwarding for Deep Packet Inspection  

The following displays a VPLS service configuration with DPI example:

*A:ALA-48>config>service# info
----------------------------------------------
...
        vpls 10 customer 1 create
            service-mtu 1400
            split-horizon-group "dpi" residential-group create
            exit
            split-horizon-group "split" create
            exit
            stp
                shutdown
            exit
            sap 1/1/21:1 split-horizon-group "split" create
                disable-learning
                static-mac 00:00:00:31:11:01 create
            exit
            sap 1/1/22:1 split-horizon-group "dpi" create
                disable-learning
                static-mac 00:00:00:31:12:01 create
            exit
            sap 1/1/23:5 create
                static-mac 00:00:00:31:13:05 create
            exit
            no shutdown
        exit
...
----------------------------------------------
*A:ALA-48>config>service#

The following displays a MAC filter configuration example:

*A:ALA-48>config>filter# info
----------------------------------------------
...
        mac-filter 100 create
            default-action forward
            entry 10 create
                match
                    dot1p 7 7
                exit
                log 101
                action forward sap 1/1/22:1
            exit
        exit
...
----------------------------------------------
*A:ALA-48>config>filter#

The following displays the MAC filter added to the VPLS service configuration:

*A:ALA-48>config>service# info
----------------------------------------------
...
        vpls 10 customer 1 create
            service-mtu 1400
            split-horizon-group "dpi" residential-group create
            exit
            split-horizon-group "split" create
            exit
            stp
                shutdown
            exit
            sap 1/1/5:5 split-horizon-group "split" create
                ingress
                    filter mac 100
                exit
                static-mac 00:00:00:31:15:05 create
            exit
            sap 1/1/21:1 split-horizon-group "split" create
                disable-learning
                static-mac 00:00:00:31:11:01 create
            exit
            sap 1/1/22:1 split-horizon-group "dpi" create
                disable-learning
                static-mac 00:00:00:31:12:01 create
            exit
            sap 1/1/23:5 create
                static-mac 00:00:00:31:13:05 create
            exit
            spoke-sdp 3:5 create
            exit
            no shutdown
        exit
....
----------------------------------------------
*A:ALA-48>config>service#
 

4.1.2.17. Storing Filter Entries

FP2-, FP3-, and FP4-based cards store filter policy entries in dedicated memory banks in hardware, also referred to as CAM tables:

  1. IP/MAC ingress
  2. IP/MAC egress
  3. IPv6 ingress
  4. IPv6 egress

Additional CAM tables for CPM filters are used on SR-1, SR-1s, and SR-2s line cards for MAC, IP, and IPv6.

4.1.2.17.1. FP4-based Cards

To optimize both scale and performance, policy entries configured by the operator are compressed by each FP4 line card prior to being installed in hardware.

This compression can result in an overload condition for a specified line card FP CAM, in an unexpected scenario typically only encountered in a lab environment. This overload condition can occur when applying a filter policy for the first time on a line card FP or when adding entries to a filter policy.

For a line card ACL filter, the system raises a trap if the utilization of a specified FP CAM exceeds 85%.

Applying a Filter Policy

A policy is installed for the first time on a line card FP if no router interface, service interface, SAP, spoke SDP, mesh SDP, or ESM subscriber host was using the policy on this FP.

A policy installed for the first time on a line card FP can lead to a compression failure resulting in an overload condition for this policy on this FP CAM. In this case, none of the entries for the affected filter policy are programmed and traffic is forwarded as if no filter was installed.

Adding Filter Entries

Adding an additional entry to a filter policy can lead to a compression failure resulting in an overload condition.

In this case, the newly added entry is not programmed on the affected FP CAM. Additional entries added to the same policy after the first overload condition are also not programmed on the affected FP CAM as the system attempts to install all outstanding additions in order.

A trap is raised when an overload condition occurs. After the first overload event is detected for a specified ACL FP CAM, the CPM interactively rejects the addition of filter policies or filter entries applied to the same FP CAM, thus providing an interactive error message to the operator or the dynamic provisioning interfaces such as RADIUS.

Note:

The filter resource management task on the CPM controls the maximum number of filter entries per FP. If the operator attempts to go over the scaling limit, the system returns an interactive error message. This mechanism is independent from the overload state of the FP CAM.

Removing Filter Entries

Removing filter entries from a filter policy is always accepted and is used to resolve the overload events.

Resolving Overload

The overload condition should be resolved by the network operator before adding new entries or policies in the affected FP CAM.

To identify the affected policy, the system logs the overload event providing slot number, FP number, and impacted CAM. Based on this information, the tools>dump>filter>overload command allows the operator to identify the affected policy and policy entries in the system that cannot be programmed on a given FP CAM.

To resolve the overload condition, the network operator can remove the newly added entries from the affected policy or assign a different policy.