3. IP Multicast

This chapter provides information on multicast for IPv4 and IPv6, including Internet Group Management Protocol (IGMP), Multicast Listener Discovery (MLD), Protocol Independent Multicast Source-Specific Multicast (PIM-SSM), Protocol Independent Multicast Sparse Mode (PIM-SM), and Multicast Source Discovery Protocol (MSDP).

Topics in this chapter include:

  1. Overview of IP Multicast
  2. IGMP
  3. MLD
  4. PIM
  5. IPv6 PIM Models
  6. IP Multicast Debugging Tools
  7. MSDP
  8. Unicast and Multicast Address Translation

3.1. Overview of IP Multicast

IP multicast is a method of sending a single stream of traffic (or a single copy of data) to multiple recipients. IP multicast provides an effective method of one-to-many and many-to-many communication.

In the case of unicast delivery, IP packets are sent from a single source to a single host receiver. The source inserts the address of the target receiver in the IP header destination field of an IP datagram, and intermediate routers (if present) forward the datagram towards the target in accordance with their respective routing tables.

However, some applications, such as audio or video streaming broadcasts, require the delivery of individual IP packets to multiple destinations. In such applications, IP multicast is used to distribute datagrams from one or more source hosts to a set of receivers that may be distributed over different (sub) networks.

Multicast sources can send a single copy of data using a single address for the entire group of recipients. The routers between the source and recipients route the data using the group address route. Multicast packets are delivered to a multicast group. A multicast group specifies a set of recipients who are interested in a particular data stream and is represented by an IP address from a specified range. Data addressed to the IP address is forwarded to the members of the group. A source host sends data to a multicast group by specifying the multicast group address in the datagram’s destination IP address. A source does not have to register with a Rendezvous Point (RP) to send data to a group, nor does it need to be a member of the group.

Routers and Layer 3 switches use the Internet Group Management Protocol (IGMP) to manage group membership for IPv4 multicast, and Multicast Listener Discovery (MLD) to manage group membership for IPv6 multicast. When a host receiver wants to receive one or more multicast sessions, it signals its local router by sending a join message for each multicast group it wants to join. When a host wants to leave a multicast group, it sends a Leave message. The local router forwards its group membership information to the core routers to inform them about which multicast flows are needed.

To extend multicast from the receivers’ local router to the Internet, the 7705 SAR uses Protocol Independent Multicast (PIM), which employs both unicast and multicast routing tables to keep track of route information. As more routers in the Internet become multicast-capable, using both unicast and multicast routing tables becomes more and more important.

The 7705 SAR also uses the Multicast Source Discovery Protocol (MSDP) as a mechanism to connect multiple IPv4 PIM-SM domains together. Each PIM-SM domain uses its own independent Rendezvous Point (RP) and does not have to depend on RPs in other domains.

To summarize, host receivers connect to a local router (7705 SAR) via IGMP or MLD access or network interfaces. The 7705 SAR connects to the network—and eventually to the multicast source device at the far end—via PIM network interfaces. At the multicast source end, the 7705 SAR connects to the source via a PIM access or network interface.

3.1.1. Multicast in IP-VPN Networks

Multicast can be deployed as part of IP-VPN networks using VPRN. For details, refer to the “Multicast VPN (MVPN)” section in the 7705 SAR Services Guide.

3.1.2. Mobile Backhaul IP Multicast Example

A typical mobile backhaul infrastructure designed for Multimedia Broadcast Multicast Service (MBMS) is shown in Figure 1.

Figure 1:  IP Multicast for MBMS 

The 7705 SAR listens for IGMP and MLD receiver requests from the eNB and relays the requested multicast stream back to the requester in one of two ways:

  1. via local replication, if the stream for the given group is already available (that is, one or more receivers have already joined the group), or
  2. via first relaying the request to the upstream router and then replicating the stream to the new receiver once the stream is available

Figure 1 illustrates this as follows.

  1. The 7705 SAR listens to IGMPv3 on an eNB IPv4 interface, or to MLDv2 on an eNB IPv6 interface.
  2. The 7705 SAR runs PIM-SSM upstream towards the 7750 SR at the MTSO to request and receive multicast traffic. Similarly, the 7750 SR runs PIM-SSM towards the 7705 SAR.
  3. The 7750 SR can then run a wide variety of multicast protocols (such as point-to-multipoint LSPs and multicast IP-VPN) towards the network core where the source (video head end) is located.

3.1.3. Multicast Models (ASM and SSM)

ASM provides network layer support for many-to-many delivery. SSM supports one-to-many delivery.

This section provides information on the following topics:

  1. Any-Source Multicast (ASM)
  2. Source-Specific Multicast (SSM)

3.1.3.1. Any-Source Multicast (ASM)

ASM is the IP multicast service model defined in RFC 1112, Host Extensions for IP Multicasting. An IP datagram is transmitted to a host group, which is a set of zero or more end-hosts identified by a single IP destination address (224.0.0.0 through 239.255.255.255 for IPv4). End-hosts can join and leave the group at any time and there is no restriction on their location or number. This model supports multicast groups with an arbitrary number of senders. Any end-host can transmit to a host group even if it is not a member of that group.

The ASM service model identifies a group by a (*,G) pair, where “*” (star) represents any source sending to group G.

3.1.3.2. Source-Specific Multicast (SSM)

The SSM service model defines a channel identified by an (S,G) pair, where S is a source address and G is an SSM destination address. In contrast to the ASM model, SSM only provides network-layer support for one-to-many delivery.

The SSM service model attempts to alleviate the following deployment problems that ASM has presented:

  1. address allocation — SSM defines channels on a per-source basis. For example, the channel (S1,G) is distinct from the channel (S2,G), where S1 and S2 are source addresses. This avoids the problem of global allocation of SSM destination addresses and makes each source independently responsible for resolving address collisions for the various channels it creates.
  2. access control — SSM provides an efficient solution to the access control problem. When a receiver subscribes to an (S,G) channel, it receives data sent only by the source, S. In contrast, any host can transmit to an ASM host group. At the same time, when a sender picks a channel (S,G) to transmit on, it is automatically ensured that no other sender will be transmitting on the same channel (except in the case of malicious acts such as address spoofing). This makes it harder to spam an SSM channel than it is to spam an ASM multicast group.
  3. handling of well-known sources — SSM requires only source-based forwarding trees. This eliminates the need for a shared tree infrastructure. Thus the complexity of the multicast routing infrastructure for SSM is low, making it viable for immediate deployment.

3.1.4. IGMP Snooping and MLD Snooping For VPLS and Routed VPLS

The 7705 SAR supports IGMP snooping and MLD snooping for VPLS and routed VPLS (r-VPLS) services. IGMP and MLD snooping are enabled or disabled at the VPLS and r-VPLS service level. Further configuration is available at that level, as well as at the SAP and SDP (spoke and mesh) level of the service.

For details on IGMP and MLD snooping, refer to the “Multicast for VPLS and Routed VPLS (IGMP and MLD Snooping)” section in the 7705 SAR Services Guide.

3.1.5. Multicast Over Layer 3 Spoke SDP

The 7705 SAR supports PIM on Layer 3 spoke SDP interfaces. PIM-SM and PIM-SSM are supported for IPv4 in VPRN and IES. PIM-SSM is supported for IPv6 in IES.

A Layer 3 spoke SDP interface can be configured in the same way as existing interfaces to be part of PIM IPv4 or PIM IPv6, which allows the interface to be used for multicast signaling and to transport multicast PDUs. GRE, MPLS, LDP, and SR transport tunnels are supported.

PIM signaling is tunneled multihop through the Layer 3 spoke SDP interface. Multicast PDUs are tunneled in the reverse direction through the same interface. In both cases, packets are encapsulated with the Layer 3 spoke SDP header.

For more information on PIM functionality, see PIM.

3.2. IGMP

Internet Group Management Protocol (IGMP) enables multicast applications over native IPv4 networks. This section contains information on the following topics:

  1. IGMP Overview
  2. IGMP Versions and Interoperability Requirements
  3. IGMP Version Transition
  4. Query Messages
  5. Source-Specific Multicast Groups (IPv4)

3.2.1. IGMP Overview

Internet Group Management Protocol (IGMP) is used by IPv4 hosts and routers to report their IP multicast group memberships to neighboring multicast routers. A multicast router keeps a list of multicast group memberships for each attached network, and a timer for each membership.

A group membership exists when at least one member of a multicast group is present on an attached network. In each of its attached networks, a multicast router can assume one of two roles, querier or non-querier. There is normally only one querier per physical network. The querier is the router elected to issue queries on behalf of all routers on a LAN segment. Non-queriers listen for reports.

The querier issues two types of queries, a general query and a group-specific query. General queries are issued to solicit membership information with regard to any multicast group. Group-specific queries are issued when a router receives a Leave message from the node it perceives as being the last remaining group member on that network segment. See Query Messages for additional information.

SSM translation allows an IGMPv1 or IGMPv2 device to join an SSM multicast network through the router that provides such a translation capability. SSM translation can be done at the node (IGMP protocol) level, or at the interface level, which offers the ability to have the same (*,G) mapped to two different (S,G)s on two different interfaces for added flexibility. Since PIM-SSM does not support (*,G), SSM translation is required to support IGMPv1 and IGMPv2.
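
The following is a minimal sketch of node-level SSM translation, mapping a (*,G) range to a specific source; the group range and source address are examples only, and an equivalent command set applies at the interface level:

Example:
config>router# igmp
config>router>igmp# ssm-translate
config>router>igmp>ssm-translate# grp-range 239.1.1.1 239.1.1.255
config>router>igmp>ssm-translate>grp-range# source 10.0.0.1

With this configuration, an IGMPv1 or IGMPv2 (*,G) join for any group in the range is translated into an (S,G) join with source 10.0.0.1.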

Hosts wanting to receive a multicast session issue a multicast group membership report. These reports must be sent to all multicast-enabled routers.

3.2.2. IGMP Versions and Interoperability Requirements

If routers run different versions of IGMP, they negotiate the lowest common version of IGMP supported on their subnet and operate in that version. The 7705 SAR supports three versions of IGMP:

  1. version 1 — specified in RFC 1112, Host Extensions for IP Multicasting, IGMPv1 was the first widely deployed version and the first version to become an Internet standard
  2. version 2 — specified in RFC 2236, Internet Group Management Protocol, Version 2, IGMPv2 added support for “low leave latency”; that is, a reduction in the time it takes for a multicast router to learn that there are no longer any members of a particular group present on an attached network
  3. version 3 — specified in RFC 3376, Internet Group Management Protocol, Version 3, IGMPv3 added support for source filtering; that is, the ability for a system to report interest in receiving packets only from specific source addresses (as required to support Source-Specific Multicast), or from all but specific source addresses, sent to a particular multicast address (see Multicast Models (ASM and SSM))

IGMPv3 must keep track of each group’s state for each attached network. The group state consists of a filter-mode, a list of sources, and various timers. For each attached network running IGMP, a multicast router records the desired reception state for that network.
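
As a brief sketch, the IGMP version used on an interface can be set explicitly; the interface name below is hypothetical:

Example:
config>router# igmp
config>router>igmp# interface "to-eNB"
config>router>igmp>if# version 3
config>router>igmp>if# no shutdown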

3.2.3. IGMP Version Transition

Nokia 7705 SAR routers are capable of interoperating with routers and hosts running IGMPv1, IGMPv2, and/or IGMPv3. RFC 5186, Internet Group Management Protocol version 3 (IGMPv3)/Multicast Listener Discovery version 2 (MLDv2) and Multicast Routing Protocol Interaction explores some of the interoperability issues and how they affect the various routing protocols.

IGMPv3 specifies that, if at any time a router receives an older IGMP version query message on an interface, it must immediately switch to a mode that is compatible with that earlier version. Since the previous versions of IGMP are not source-aware, should this occur and the interface switch to version 1 or version 2 compatibility mode, any previously learned group memberships with specific sources (learned via the IGMPv3-specific INCLUDE or EXCLUDE mechanisms) must be converted to non-source-specific group memberships. The routing protocol then treats this as if no EXCLUDE definition were present.

3.2.4. Query Messages

The IGMP query source address is configurable at two hierarchical levels: globally at the IGMP level of each router instance, and individually at the group interface level. The source IP address configured at the group interface level overrides the address configured at the router instance level.

By default, subscribers with IGMP policies send IGMP queries with an all-zero source IP address (0.0.0.0). However, some systems only accept and process IGMP query messages with non-zero source IP addresses. The query messages feature allows the Broadband Network Gateway (BNG) to interoperate with such systems.

3.2.5. Source-Specific Multicast Groups (IPv4)

IGMPv3 allows a receiver to join a group and specify that it only wants to receive traffic for a multicast group if that traffic comes from a particular source. If a receiver does this, and no other receiver on the LAN requires all the traffic for the group, the designated router (DR) can omit performing a (*,G) join to set up the shared tree, and instead issue a source-specific (S,G) join only.

The range of multicast addresses from 232.0.0.0 to 232.255.255.255 is currently set aside for source-specific multicast in IPv4. For groups in this range, receivers should only issue source-specific IGMPv3 joins. If a PIM router receives a non-source-specific join message for a group in this range, it should ignore the message.

A 7705 SAR PIM router must silently ignore a received (*,G) PIM join message when “G” is a multicast group address from the multicast address group range that has been explicitly configured for SSM. This occurrence should generate an event. If configured, the IGMPv2 request can be translated into IGMPv3. The 7705 SAR allows for the conversion of an IGMPv2 (*,G) request into an IGMPv3 (S,G) request based on manual entries. A maximum of 32 SSM ranges is supported.
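
The following sketch adds an explicit SSM group range under PIM; the range shown is an example, and groups in a configured range are then treated according to the SSM rules described above:

Example:
config>router# pim
config>router>pim# ssm-groups
config>router>pim>ssm-groups# group-range 239.5.0.0/16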

IGMPv3 also permits a receiver to join a group and specify that it only wants to receive traffic for a group if that traffic does not come from a specific source or sources. In this case, the designated router (DR) will perform a (*,G) join as normal, but can combine this with a prune for each of the sources the receiver does not wish to receive.

3.3. MLD

Multicast Listener Discovery (MLD) enables multicast applications over native IPv6 networks. This section contains information on the following topics:

  1. MLD Overview
  2. MLDv1
  3. MLDv2

3.3.1. MLD Overview

MLD is the IPv6 version of IGMP. The purpose of MLD is to allow each IPv6 router to discover the presence of multicast listeners on its directly attached links, and to discover specifically which multicast groups are of interest to those neighboring nodes.

MLD is a sub-protocol of ICMPv6. MLD message types are a subset of the set of ICMPv6 messages, and MLD messages are identified in IPv6 packets by a preceding Next Header value of 58. All MLD messages are sent with a link-local IPv6 source address, a Hop Limit of 1, and an IPv6 Router Alert option in the Hop-by-Hop Options header.

3.3.2. MLDv1

Similar to IGMPv2, MLDv1 reports include only the multicast group addresses that listeners are interested in, and do not include the source addresses. In order to work with the PIM-SSM model, an SSM translation function similar to that used for IGMPv1 and IGMPv2 is required when MLDv1 is used.

SSM translation allows an MLDv1 device to join an SSM multicast network through the router that provides such a translation capability. SSM translation can be done at the node (MLD protocol) level, or at the interface level, which offers the ability to have the same (*,G) mapped to two different (S,G)s on two different interfaces for added flexibility. Since PIM-SSM does not support (*,G), SSM translation is required to support MLDv1.

3.3.3. MLDv2

MLDv2 is backwards-compatible with MLDv1 and adds the ability for a node to report interest in listening to packets with a particular multicast group only from specific source addresses or from all sources except for specific source addresses.

3.4. PIM

The 7705 SAR supports PIM-SM according to RFC 4601, Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification (Revised), and PIM-SSM according to RFC 4607, Source-Specific Multicast for IP, as described in this section.

For information on PIM-SSM support for IPv4 and IPv6, refer to Source-Specific Multicast Groups (IPv4) and IPv6 PIM Models.

This section contains information on the following topics:

  1. PIM-SM Overview
  2. PIM-SM Functions
  3. Encapsulating Data Packets in the Register Tunnel
  4. PIM Bootstrap Router Mechanism
  5. PIM-SM Routing Policies
  6. Reverse Path Forwarding Checks
  7. Anycast RP for PIM-SM
  8. Multicast-only Fast Reroute (MoFRR)
  9. Automatic Discovery of Group-to-RP Mappings (Auto-RP)

3.4.1. PIM-SM Overview

PIM-SM leverages the unicast routing protocols that are used to create the unicast routing table: OSPF, IS-IS, BGP, and static routes. Because PIM uses this unicast routing information to perform the multicast forwarding function, it is effectively independent of any particular unicast routing protocol. Unlike the Distance Vector Multicast Routing Protocol (DVMRP), PIM does not send multicast routing table updates to its neighbors.

PIM-SM uses the unicast routing table to perform the Reverse Path Forwarding (RPF) check function instead of building up a completely independent multicast routing table.

PIM-SM only forwards data to network segments with active receivers that have explicitly requested the multicast group. Initially, PIM-SM in the ASM model uses a shared tree to distribute information about active sources. Depending on the configuration options, the traffic can remain on the shared tree or switch over to an optimized source distribution tree. As multicast traffic starts to flow down the shared tree, routers along the path determine if there is a better path to the source. If a more direct path exists, the router closest to the receiver sends a join message toward the source and reroutes the traffic along this path.
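
Whether traffic remains on the shared tree or switches to the source tree is typically controlled by a switchover threshold. The following is a hedged sketch, assuming the spt-switchover-threshold command and an example prefix; the infinity keyword keeps traffic for the range on the shared tree:

Example:
config>router# pim
config>router>pim# spt-switchover-threshold 239.0.0.0/8 infinity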

PIM-SM relies on an underlying topology-gathering protocol to populate a routing table with routes. This routing table is called the multicast routing information base (MRIB). The routes in this table can be taken directly from the unicast routing table or they can be different and provided by a separate routing protocol such as multicast BGP (MBGP). The primary role of the MRIB in the PIM-SM protocol is to provide the next-hop router along a multicast-capable path to each destination subnet. The MRIB is used to determine the next-hop neighbor to whom any PIM join/prune message is sent. Data flows along the reverse path of the join messages. Therefore, in contrast to the unicast RIB that specifies the next hop that a data packet would take to get to a subnet, the MRIB gives reverse-path information and indicates the path that a multicast data packet would take from its origin subnet to the router that has the MRIB.

Note:

  1. For proper functioning of the PIM protocol, multicast data packets must be received by the CSM CPU. Therefore, CSM filters and management access filters must be configured to allow forwarding of multicast data packets. For details on CSM filters and management access filters, refer to the “Security” chapter in the 7705 SAR System Management Guide.
  2. Although the Control and Switching module on the 7705 SAR is called a CSM, the CSM filters are referred to as CPM filters in the CLI in order to maintain consistency with other SR routers.

3.4.2. PIM-SM Functions

PIM-SM functions in three phases:

3.4.2.1. Phase One

In this phase, a multicast receiver expresses its interest in receiving traffic destined for a multicast group. Typically it does this using IGMP or MLD, but other mechanisms might also serve this purpose. One of the receiver’s local routers is elected as the designated router (DR) for that subnet. When the expression of interest is received, the DR sends a PIM join message towards the RP for that multicast group. This join message is known as a (*,G) join because it joins group G for all sources to that group. The (*,G) join travels hop-by-hop towards the RP for the group, and in each router it passes through, the multicast tree state for group G is instantiated. Eventually, the (*,G) join either reaches the RP or reaches a router that already has the (*,G) join state for that group. When many receivers join the group, their join messages converge on the RP and form a distribution tree for group G that is rooted at the RP. The distribution tree is called the RP tree or the shared tree (because it is shared by all sources sending to that group). Join messages are re-sent periodically as long as the receiver remains in the group. When all receivers on a leaf network leave the group, the DR will send a PIM (*,G) prune message towards the RP for that multicast group. However, if the prune message is not sent for any reason, the state will eventually time out.

A multicast data sender starts sending data destined for a multicast group. The sender’s local router (the DR) takes these data packets, unicast-encapsulates them, and sends them directly to the RP. The RP receives these encapsulated data packets, removes the encapsulation, and forwards them to the shared tree. The packets then follow the (*,G) multicast tree state in the routers on the RP tree, are replicated wherever the RP tree branches, and eventually reach all the receivers for that multicast group. The process of encapsulating data packets to the RP is called registering, and the encapsulation packets are known as PIM register packets.

At the end of phase one, multicast traffic is flowing encapsulated to the RP, and then natively over the RP tree to the multicast receivers.
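
Phase one assumes that every router can resolve the RP for a group. The following is a minimal sketch of a static group-to-RP mapping, with example addresses (dynamic mappings are covered in PIM Bootstrap Router Mechanism):

Example:
config>router# pim
config>router>pim# rp
config>router>pim>rp# static
config>router>pim>rp>static# address 10.10.10.10
config>router>pim>rp>static>address# group-prefix 239.0.0.0/8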

3.4.2.2. Phase Two

In this phase, data packets are still register-encapsulated and sent to the RP. However, register-encapsulation of data packets is inefficient for the following reasons.

  1. Encapsulation and de-encapsulation can be resource-intensive operations for a router to perform depending on whether the router has appropriate hardware for the tasks.
  2. Traveling to the RP and then back down the shared tree can cause the packets to travel a relatively long distance to reach receivers that are close to the sender. For some applications, increased latency is unwanted.

Although register-encapsulation can continue indefinitely, for the reasons above, the RP will normally switch to native forwarding. To do this, when the RP receives a register-encapsulated data packet from source S on group G, it will normally initiate an (S,G) source-specific join towards S. This join message travels hop-by-hop towards S, instantiating an (S,G) multicast tree state in the routers along the path. The (S,G) multicast tree state is used only to forward packets for group G if those packets come from source S. Eventually, the join message reaches S’s subnet or a router that already has the (S,G) multicast tree state, and packets from S then start to flow following the (S,G) tree state towards the RP. These data packets can also reach routers with a (*,G) state along the path towards the RP, and if this occurs, they take a shortcut to the RP tree at this point.

While the RP is in the process of joining the source-specific tree for S, the data packets continue being encapsulated to the RP. When packets from S also start to arrive natively at the RP, the RP receives two copies of each of these packets. At this point, the RP starts to discard the encapsulated copy of these packets and sends a register-stop message back to S’s DR to prevent the DR from unnecessarily encapsulating the packets. At the end of phase two, traffic is flowing natively from S along a source-specific tree to the RP and from there along the shared tree to the receivers. Where the two trees intersect, traffic can transfer from the shared RP tree to the shorter source tree.

Note:

A sender can start sending before or after a receiver joins the group, and thus phase two may occur before the shared tree to the receiver is built.

3.4.2.3. Phase Three

In this phase, the RP joins back towards the source using the shortest path tree (SPT). Although having the RP join back towards the source removes the encapsulation overhead, it does not completely optimize the forwarding paths. For many receivers, the route via the RP can involve a significant detour when compared with the shortest path from the source to the receiver.

To obtain lower latencies, a router on the receiver’s LAN, typically the DR, may optionally initiate a transfer from the shared tree to a source-specific SPT. To do this, it issues an (S,G) join towards S. This instantiates the (S,G) state in the routers along the path to S. Eventually, this join either reaches S’s subnet or reaches a router that already has the (S,G) state. When this happens, data packets from S start to flow following the (S,G) state until they reach the receiver.

At this point, the receiver (or a router upstream of the receiver) is receiving two copies of the data—one from the SPT and one from the RP tree. When the first traffic starts to arrive from the SPT, the DR or upstream router starts to drop the packets for G from S that arrive via the RP tree. In addition, it sends an (S,G) prune message towards the RP. The prune message travels hop-by-hop, instantiating an (S,G) state along the path towards the RP indicating that traffic from S for G should not be forwarded in this direction. The prune message is propagated until it reaches the RP or a router that still needs the traffic from S for other receivers.

By now, the receiver is receiving traffic from S along the SPT between the receiver and S. In addition, the RP is receiving the traffic from S, but this traffic is no longer reaching the receiver along the RP tree. As far as the receiver is concerned, this is the final distribution tree.

3.4.3. Encapsulating Data Packets in the Register Tunnel

Conceptually, the register tunnel is an interface with a smaller MTU than the underlying IP interface towards the RP. IP fragmentation on packets forwarded on the register tunnel is performed based on this smaller MTU. The encapsulating DR can perform path-MTU discovery to the RP to determine the effective MTU of the tunnel. This smaller MTU takes both the outer IP header and the PIM register header overhead into consideration.

3.4.4. PIM Bootstrap Router Mechanism

For proper operation, every PIM-SM router within a PIM domain must be able to map a particular global-scope multicast group address to the same RP. If this is not possible, black holes can appear; that is, some receivers in the domain cannot receive some groups. A domain in this context is a contiguous set of routers that all implement PIM and are configured to operate within a common boundary.

The Bootstrap Router (BSR) mechanism provides a way in which viable group-to-RP mappings can be created and distributed to all the PIM-SM routers in a domain. Each candidate BSR originates bootstrap messages (BSMs). Every BSM contains a BSR priority field. Routers within the domain flood the BSMs throughout the domain. A candidate BSR that hears about a higher-priority candidate BSR suppresses sending more BSMs for a period of time. The single remaining candidate BSR becomes the elected BSR and its BSMs inform the other routers in the domain that it is the elected BSR.

The PIM bootstrap router mechanism is adaptive, meaning that if an RP becomes unreachable, the event is detected and the mapping tables are modified so that the unreachable RP is no longer used, and the new tables are rapidly distributed throughout the domain.
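
The following sketch configures a router as both a candidate BSR and a candidate RP; the address, priority, and group range are illustrative only:

Example:
config>router# pim
config>router>pim# rp
config>router>pim>rp# bsr-candidate
config>router>pim>rp>bsr-candidate# address 10.10.10.1
config>router>pim>rp>bsr-candidate# priority 100
config>router>pim>rp>bsr-candidate# no shutdown
config>router>pim>rp>bsr-candidate# exit
config>router>pim>rp# rp-candidate
config>router>pim>rp>rp-candidate# address 10.10.10.1
config>router>pim>rp>rp-candidate# group-range 239.0.0.0/8
config>router>pim>rp>rp-candidate# no shutdown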

3.4.5. PIM-SM Routing Policies

Multicast traffic can be restricted from certain source addresses by creating routing policies. Join messages can be filtered using import filters. PIM join policies can be used to reduce denial of service attacks and subsequent PIM state explosion in the router and to remove unwanted multicast streams at the edge of the network before they are carried across the core. Route policies are created in the config>router>policy-options context. Join and register route policy match criteria for PIM-SM can specify the following:

  1. router interfaces specified by name or IP address
  2. neighbor address (the source address in the IP header of the join and prune message)
  3. multicast group address embedded in the join and prune message
  4. multicast source address embedded in the join and prune message

Join policies can be used to filter PIM join messages so that no (*,G) or (S,G) state is created on the router. Table 2 describes the match conditions.

Table 2:  Join Filter Policy Match Conditions

Match Condition    Matches
-----------------  ------------------------------------------------------
Interface          The router interface by name
Neighbor           The neighbor source address in the IP header
Group address      The multicast group address in the join/prune message
Source address     The source address in the join/prune message

PIM register messages are sent by the first-hop designated router that has a direct connection to the source. This serves a dual purpose:

  1. notifies the RP that a source has active data for the group
  2. delivers the multicast stream in register encapsulation to the RP and its potential receivers. If no routers have joined the group at the RP, the RP ignores the register requests.

In an environment where the sources to particular multicast groups are always known, register filters can be applied at the RP to prevent any unwanted sources from transmitting a multicast stream. These filters can also be applied at the edge so that register data does not travel unnecessarily over the network towards the RP. Table 3 describes the match conditions.

Table 3:  Register Filter Policy Match Conditions

Match Condition    Matches
-----------------  ------------------------------------------------------
Interface          The router interface by name
Group address      The multicast group address in the join/prune message
Source address     The source address in the join/prune message
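
Tying the two tables together, the following is a hedged sketch of a join filter that rejects joins for an unwanted group range and its application under PIM; the policy and prefix-list names and the range are hypothetical, and a register filter would be applied in the same way with the import register-policy command:

Example:
config>router# policy-options
config>router>policy-options# begin
config>router>policy-options# prefix-list "unwanted-groups"
config>router>policy-options>prefix-list# prefix 239.96.0.0/16 longer
config>router>policy-options>prefix-list# exit
config>router>policy-options# policy-statement "drop-unwanted-joins"
config>router>policy-options>policy-statement# entry 10
config>router>policy-options>policy-statement>entry# from
config>router>policy-options>policy-statement>entry>from# group-address "unwanted-groups"
config>router>policy-options>policy-statement>entry>from# exit
config>router>policy-options>policy-statement>entry# action reject
config>router>policy-options>policy-statement>entry# exit
config>router>policy-options>policy-statement# exit
config>router>policy-options# commit
config>router# pim
config>router>pim# import join-policy "drop-unwanted-joins"

Because no (*,G) or (S,G) state is created for matching joins, the filter also bounds the PIM state that a misbehaving receiver can create.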

3.4.6. Reverse Path Forwarding Checks

Multicast implements a reverse path forwarding (RPF) check. RPF checks the path that multicast packets take between their sources and destinations to prevent loops. Multicast requires that a packet's incoming interface be the interface that unicast routing uses to reach the source of the packet. RPF forwards a multicast packet only if it is received on the interface that the router uses to route to the source.

If the forwarding paths are modified due to routing topology changes, any dynamic filters that may have been applied must be re-evaluated. If filters are removed, the associated alarms are also cleared.

3.4.7. Anycast RP for PIM-SM

The implementation of anycast Rendezvous Point (RP) for PIM-SM environments enables fast convergence if a PIM RP router fails by allowing receivers and sources to rendezvous at the closest RP. It allows an arbitrary number of RPs per group in a single shared-tree PIM-SM domain. This is particularly important for triple play configurations that choose to distribute multicast traffic using PIM-SM rather than SSM. In this case, RP convergence must be fast enough to avoid the loss of multicast streams, which could cause loss of TV delivery to the end customer.

Anycast RP for PIM-SM environments is supported in the base routing PIM-SM instance of the service router. This feature is also supported in Layer 3 VPRN instances that are configured with PIM.

3.4.7.1. Implementation

The anycast RP for PIM-SM implementation is defined in RFC 4610, Anycast-RP Using Protocol Independent Multicast (PIM), and is similar to that described in RFC 3446, Anycast Rendevous Point (RP) mechanism using Protocol Independent Multicast (PIM) and Multicast Source Discovery Protocol (MSDP). The implementation extends the register mechanism in PIM so that anycast RP functionality can be retained without using MSDP. For details, refer to the “Multicast VPN (MVPN)” section in the 7705 SAR Services Guide.

The mechanism works as follows.

  1. An IP address is chosen as the RP address. This address is statically configured or distributed using a dynamic protocol to all PIM routers throughout the domain.
  2. A set of routers in the domain are chosen to act as RPs for this RP address. These routers are called the anycast-RP set.
  3. Each router in the anycast-RP set is configured with a loopback interface using the RP address.
  4. Each router in the anycast-RP set also needs a separate IP address to be used for communication between the RPs.
  5. The RP address, or a prefix that covers the RP address, is injected into the unicast routing system inside the domain.
  6. Each router in the anycast-RP set is configured with the addresses of all other routers in the anycast-RP set. This must be consistently configured for all RPs in the set.

Figure 2:  Anycast RP for PIM-SM Implementation

Figure 2 shows a scenario where all routers are connected, and where R1A, R1B, and R2 are receivers for a group, and S1 and S2 send to that group. In the example, RP1, RP2, and RP3 are all assigned the same IP address that is used as the anycast-RP address (RPA).

Note:

The address used for the RP address in the domain (the RPA address) must be different from the addresses used by the anycast-RP routers to communicate with each other.
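
A sketch of the anycast-RP configuration on RP1, assuming RPA 10.10.10.10 and peer addresses 10.1.1.2 and 10.1.1.3 for RP2 and RP3 (the RPA must also be advertised as the RP for the domain, statically or via BSR):

Example:
config>router# interface "rpa-loopback"
config>router>if# loopback
config>router>if# address 10.10.10.10/32
config>router>if# exit
config>router# pim
config>router>pim# rp
config>router>pim>rp# anycast 10.10.10.10
config>router>pim>rp>anycast# rp-set-peer 10.1.1.2
config>router>pim>rp>anycast# rp-set-peer 10.1.1.3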

The following procedure is used when S1 starts sourcing traffic:

  1. S1 sends a multicast packet.
  2. The DR directly attached to S1 forms a PIM register message to send to the RPA. The unicast routing system delivers the PIM register message to the nearest RP, in this case RP1.
  3. RP1 receives the PIM register message, de-encapsulates it, and sends the packet down the shared tree to receivers R1A and R1B.
  4. RP1 is configured with the IP addresses of RP2 and RP3. Because the register message did not come from one of the RPs in the anycast-RP set, RP1 assumes the packet came from a DR. If the register message is not addressed to the RPA, an error has occurred and should be logged, subject to rate limiting.
  5. RP1 sends a copy of the register message from S1’s DR to both RP2 and RP3. RP1 uses its own IP address as the source address for the PIM register message.
  6. RP1 may join back to the source tree by triggering an (S1,G) join message toward S1; however, RP1 must create an (S1,G) state.
  7. RP2 receives the register message from RP1, de-encapsulates it, and also sends the packet down the shared tree to receiver R2.
  8. RP2 sends a register-stop message back to RP1. RP2 may wait to send the register-stop message if it decides to join the source tree. RP2 should wait until it has received data from the source on the source tree before sending the register-stop message. If RP2 decides to wait, the register-stop message will be sent when the next register is received. If RP2 decides not to wait, the register-stop message is sent immediately.
  9. RP2 may join back to the source tree by triggering an (S1,G) join message toward S1; however, RP2 must create an (S1,G) state.
  10. RP3 receives the register message from RP1 and de-encapsulates it, but since there are no receivers joined for the group, it discards the packet.
  11. RP3 sends a register-stop message back to RP1.
  12. RP3 creates an (S1,G) state so when a receiver joins after S1 starts sending, RP3 can join quickly to the source tree for S1.
  13. RP1 processes the register-stop messages from RP2 and RP3. RP1 may cache, on a per-RP/per-(S,G) basis, the receipt of register-stop messages from the RPs in the anycast-RP set. This option is performed to increase the reliability of register message delivery to each RP. When this option is used, subsequent register messages received by RP1 are sent only to the RPs in the anycast-RP set that have not previously sent register-stop messages for the (S,G) entry.
  14. RP1 sends a register-stop message back to the DR the next time a register message is received from the DR and, when the option in step 13 is in use, if all RPs in the anycast-RP set have returned register-stop messages for a particular (S,G) route.

The procedure for S2 sending follows the same steps as above, but it is RP3 that sends a copy of the register originated by S2’s DR to RP1 and RP2. Therefore, this example shows how sources anywhere in the domain, associated with different RPs, can reach all receivers, also associated with different RPs, in the same domain.

3.4.8. Multicast-only Fast Reroute (MoFRR)

The 7705 SAR supports multicast-only fast reroute (MoFRR) in the context of GRT for mLDP, where the multicast traffic is duplicated on a primary mLDP multicast tree and on a secondary mLDP multicast tree. These are two separate mLDP LSPs, and therefore they are set up separately. The root node transmits multicast PDUs on both active and inactive LSPs. The PDUs are duplicated using the multicast tree and sent through the network to the leaf node on both the active and the inactive LSPs. The leaf listens only to the active LSP and drops PDUs from the secondary, inactive LSP.

The MoFRR functionality relies on detecting failures on the primary path and switching to forwarding the traffic on the standby path. The traffic failure can happen with or without physical links or nodes going down. Various mechanisms for link or node failure detection are supported. However, for best performance and resilience, enable MoFRR on every node in the network and use hop-by-hop BFD for fast detection of link failure or data plane failure on each upstream link. If BFD is not used, PIM adjacency loss or a route change can be used to detect traffic failure.
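
On SR OS platforms, mLDP MoFRR is typically enabled in the LDP context; the command below is an assumption and should be checked against the release documentation, and BFD is configured separately on the upstream interfaces:

Example:
config>router# ldp
config>router>ldp# mcast-upstream-frr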

Figure 3 and Figure 4 illustrate MoFRR with no failure and with a failure. MoFRR is enabled on P1, P2, P3, P4, PE2, and PE3. Primary streams (solid gray lines) are active; one stream is from PE1 to PE2 and another is from PE1 to PE3. Secondary streams (gray-black lines) are blocked (circles with “X” inside). In Figure 4, PE3 detects a link failure between P4 and PE3, and switches to the standby (secondary) stream from P2.

Figure 3:  MoFRR in Steady State with No Failure 
Figure 4:  MoFRR in Failure State 

3.4.9. Automatic Discovery of Group-to-RP Mappings (Auto-RP)

Auto-RP is a proprietary group discovery and mapping mechanism for IPv4 PIM that is described in cisco-ipmulticast/pim-autorp-spec, Auto-RP: Automatic discovery of Group-to-RP mappings for IP multicast. The functionality is similar to the IETF standard BSR mechanism that is described in RFC 5059, Bootstrap Router (BSR) Mechanism for Protocol Independent Multicast (PIM), to dynamically learn about the availability of RPs in a network.

When a router is configured as an RP mapping agent with the pim>rp>auto-rp-discovery command, it listens to the CISCO-RP-ANNOUNCE (224.0.1.39) group and caches the announced mappings. The RP mapping agent then periodically sends out RP mapping packets to the CISCO-RP-DISCOVERY (224.0.1.40) group. PIM dense mode (PIM-DM) as described in RFC 3973, Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised), is used for the auto-RP groups to support multihoming and redundancy. The RP mapping agent supports announcing, mapping, and discovery functions; candidate RP functionality is not supported.

Auto-RP is supported for IPv4 in multicast VPNs and in the global routing instance. Either BSR or auto-RP for IPv4 can be configured; the two mechanisms cannot be enabled together. In a multicast VPN, auto-RP cannot be enabled together with sender-only or receiver-only multicast distribution trees (MDTs), or wildcard S-PMSI configurations that could block flooding.
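
Enabling the RP mapping agent is a single command under the PIM RP context, as named above:

Example:
config>router# pim
config>router>pim# rp
config>router>pim>rp# auto-rp-discovery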

3.5. IPv6 PIM Models

IPv6 multicast enables multicast applications over native IPv6 networks. There are two service models: Any-Source Multicast (ASM) and Source-Specific Multicast (SSM), which is supported through PIM-SSM and MLD. SSM does not require source discovery and only supports a single source for a specific multicast stream. As a result, SSM is easier to operate in a large-scale deployment that uses the one-to-many service model.

3.5.1. PIM-SSM

The IPv6 address family for the SSM model is supported. OSPFv3 and static routing have extensions to support submission of routes into the IPv6 multicast RTM.

3.5.2. PIM-ASM

IPv6 PIM-ASM is supported. All PIM-ASM related functions, such as the bootstrap router and RP, support the IPv6 address family.

3.6. IP Multicast Debugging Tools

This section describes multicast debugging tools for the 7705 SAR. The debugging tools for multicast consist of three elements, which are accessed from the CLI <global> level: mtrace, mstat, and mrinfo.

3.6.1. Mtrace

Assessing problems in the distribution of IP multicast traffic can be difficult. The multicast traceroute (mtrace) feature uses a tracing capability implemented in multicast routers that is accessed via an extension to the IGMP protocol. The mtrace feature is used to print the path from the source to a receiver; it does this by passing a trace query hop-by-hop along the reverse path from the receiver to the source. At each hop, information such as the hop address, routing error conditions, and packet statistics is gathered and returned to the requester.

Data added by each hop includes:

  1. query arrival time
  2. incoming interface
  3. outgoing interface
  4. previous hop router address
  5. input packet count
  6. output packet count
  7. total packets for this source/group
  8. routing protocol
  9. TTL threshold
  10. forwarding/error code

The information enables the network administrator to determine:

  1. the flow of the multicast stream
  2. where multicast flows stop

When the trace response packet reaches the first-hop router (the router that is directly connected to the source’s network interface), that router sends the completed response to the response destination (receiver) address specified in the trace query.

If a multicast router along the path does not implement the mtrace feature or if there is an outage, then no response is returned. To solve this problem, the trace query includes a maximum hop count field to limit the number of hops traced before the response is returned. This allows a partial path to be traced.

The reports inserted by each router contain not only the address of the hop, but also the TTL required to forward the packets and flags to indicate routing errors, plus counts of the total number of packets on the incoming and outgoing interfaces and those forwarded for the specified group. Examining the differences in these counts for two separate traces and comparing the output packet counts from one hop with the input packet counts of the next hop allows the calculation of packet rate and packet loss statistics for each hop to isolate congestion problems.

3.6.1.1. Finding the Last-hop Router

The trace query must be sent to the multicast router that is the last hop on the path from the source to the receiver. If the receiver is on the local subnet (as determined by the subnet mask), then the default method is to send the trace query to all-routers.mcast.net (224.0.0.2) with a TTL of 1. Otherwise, the trace query is sent to the group address since the last-hop router will be a member of that group if the receiver is. Therefore, it is necessary to specify a group that the intended receiver has joined. This multicast query is sent with a default TTL of 64, which may not be sufficient for all cases.

When tracing from a multihomed host or router, the default receiver address may not be the desired interface for the path from the source. In that case, the desired interface should be specified explicitly as the receiver.

3.6.1.2. Directing the Response

By default, mtrace first attempts to trace the full reverse path, unless the number of hops to trace is explicitly set with the hop option. If there is no response within a 3-second timeout interval, a “*” is displayed and the probing switches to hop-by-hop mode. Trace queries are issued starting with a maximum hop count of one and increasing by one until the full path is traced or no response is received. At each hop, multiple probes are sent. The first attempt is made with the unicast address of the host running mtrace as the destination for the response. Since the unicast route may be blocked, the remaining attempts request that the response be sent to mtrace.mcast.net (224.0.1.32) with the TTL set to 32 more than what is needed to pass the thresholds seen so far along the path to the receiver. For the last attempts, the TTL is increased by another 32.

Alternatively, the TTL may be set explicitly with the TTL option.

For each attempt, if no response is received within the timeout, a “*” is displayed. After the specified number of attempts have failed, mtrace will try to query the next-hop router with a DVMRP_ASK_NEIGHBORS2 request (as used by the mrinfo feature) to determine the router type.

The output of mtrace is a short listing of the hops in the order they are queried, that is, in the reverse of the order from the source to the receiver. For each hop, a line is displayed showing:

  1. the hop number (counted negatively to indicate that this is the reverse path)
  2. the multicast routing protocol
  3. the threshold required to forward data (to the previous hop in the listing as indicated by the up-arrow character)
  4. the cumulative delay for the query to reach that hop (valid only if the clocks are synchronized)

The response ends with a line showing the round-trip time which measures the interval from when the query is issued until the response is received, both derived from the local system clock.

Mtrace/mstat packets use special IGMP packets with IGMP type codes of 0x1E and 0x1F.

3.6.2. Mstat

The mstat feature adds the capability to show the multicast path in a limited graphic display and indicates drops, duplicates, TTLs, and delays at each node. This information is useful to the network operator because it identifies nodes with high drop and duplicate counts. Duplicate counts are shown as negative drops.

The output of mstat provides a limited pictorial view of the path in the forward direction with data flow indicated by arrows pointing downward and the query path indicated by arrows pointing upward. For each hop, both the entry and exit addresses of the router are shown if different, along with the initial TTL required on the packet in order to be forwarded at this hop and the propagation delay across the hop assuming that the routers at both ends have synchronized clocks. The output consists of two columns, one for the overall multicast packet rate that does not contain lost/sent packets and the other for the (S,G)-specific case. The (S,G) statistics also do not contain lost/sent packets.

3.6.3. Mrinfo

The mrinfo feature is a simple mechanism to display the configuration information from the target multicast router. The type of information displayed includes the multicast capabilities of the router, code version, metrics, TTL thresholds, protocols, and status. This information can be used by network operators, for example, to verify if bidirectional adjacencies exist. Once the specified multicast router responds, the configuration is displayed.
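
The following usage sketches show typical invocations of the three tools from the CLI <global> level; the addresses are examples and option keywords may vary by release:

Example:
mtrace source 10.10.16.9 group 239.255.0.1
mstat source 10.10.16.9 group 239.255.0.1
mrinfo 10.10.16.9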

3.7. MSDP

Multicast Source Discovery Protocol (MSDP) is a mechanism that allows rendezvous points (RPs) to share information about active sources. When RPs in remote domains hear about the active sources, they can pass on that information to the local receivers and multicast data can be forwarded between the domains. MSDP allows each domain to maintain an independent RP that does not rely on other domains, but it also enables RPs to forward traffic between domains. PIM-SM is used to forward the traffic between the multicast domains.

Using PIM-SM, multicast sources and receivers register with their local RP via the closest multicast router. The RP maintains information about the sources and receivers for any particular group. An RP in one domain has no knowledge of sources located in other domains.

MSDP-speaking routers in a PIM-SM domain have an MSDP peering relationship with MSDP peers in another PIM-SM domain. The peering relationship is made up of a TCP connection in which control information is exchanged. Each domain has one or more connections to this virtual topology.

When a PIM-SM RP learns about a new multicast source within its own domain from a standard PIM register mechanism, it encapsulates the first data packet in an MSDP source-active (SA) message and sends it to all MSDP peers.

After an RPF check, the SA message is flooded by each peer to its MSDP peers until the SA message reaches every MSDP router in the interconnected networks. If the receiving MSDP peer is an RP, and the RP has a (*,G) entry (receiver) for the group, the RP creates an (S,G) state for the source and joins the shortest path tree for the source. The encapsulated data is de-encapsulated and forwarded down the shared tree of that RP. When the packet is received by the last-hop router of the receiver, the last-hop router can also join the shortest path tree to the source.

The MSDP speaker periodically sends SA messages that include all sources.

This section contains information on the following topics:

  1. MSDP and Anycast RP
  2. MSDP Procedure
  3. MSDP Peer Groups
  4. MSDP Mesh Groups
  5. MSDP Routing Policies
  6. Auto-RP (Discovery Mode Only) in Multicast VPN

The 7705 SAR supports MSDP in the base router context and on MVPNs in the VPRN service context. For information about MSDP on MVPNs, refer to “Multicast Source Discovery Protocol” in the Multicast VPN (MVPN) section of the 7705 SAR Services Guide.
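
A minimal sketch of an MSDP peering configuration in the base router context; the peer and local addresses are examples:

Example:
config>router# msdp
config>router>msdp# peer 192.0.2.2
config>router>msdp>peer# local-address 192.0.2.1
config>router>msdp>peer# no shutdown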

3.7.1. MSDP and Anycast RP

MSDP is required to provide inter-domain multicast services using Any Source Multicast (ASM). Anycast RP for MSDP enables fast convergence when an MSDP PIM RP router fails by allowing receivers and sources to rendezvous at the closest RP.

3.7.2. MSDP Procedure

When an RP in a PIM-SM domain first learns of a new sender—for example, by PIM register messages—it constructs an SA message and sends it to its MSDP peers. The SA message contains the following fields:

  1. source address of the data source
  2. group address the data source sends to
  3. IP address of the RP

Note:

An RP that is not a designated router on a shared network does not originate SAs for directly connected sources on that shared network. An RP only originates SAs in response to receiving register messages from the designated router.

Each MSDP peer receives and then forwards the SA message away from the RP address in a peer-RPF flooding fashion. The multicast routing information base (MRIB) is examined to select the peer towards the originating RP of the SA message; this peer is called an RPF peer. The MSDP peer performs peer-RPF forwarding by comparing the RP address carried in the SA message against the MSDP peer from which the message was received.

If the MSDP peer receives the SA from a non-RPF peer towards the originating RP, it will drop the message. Otherwise, it forwards the message to all its MSDP peers (except the one from which it received the SA message).

When an MSDP peer that is also an RP for its own domain receives a new SA message, it determines if there are any group members within the domain interested in any group described by an (S,G) entry within the SA message. That is, the RP checks for a (*,G) entry with a non-empty outgoing interface list. This implies that some router in the domain is interested in the group. In this case, the RP triggers an (S,G) join event toward the data source as if a join/prune message was received addressed to the RP. This sets up a branch of the source tree to this domain. Subsequent data packets arrive at the RP by this tree branch and are forwarded down the shared tree inside the domain. If leaf routers choose to join the source tree, they have the option to do so according to existing PIM-SM conventions. If an RP in a domain receives a PIM join message for a new group G, the RP must trigger an (S,G) join event for each active (S,G) for that group in its cache.

This procedure is called flood-and-join because if any RP is not interested in the group, the SA message can be ignored; otherwise, the RPs join a distribution tree.

3.7.2.1. MSDP Peering Scenarios

The 7705 SAR conforms to draft-ietf-mboned-msdp-deploy-nn.txt, Multicast Source Discovery Protocol (MSDP) Deployment Scenarios, which describes how PIM-SM and MP-BGP work together to provide intra- and inter-domain ASM service.

The 7705 SAR supports the following intra-domain MSDP peering deployment options:

  1. peering between routers configured for both MSDP and MBGP
  2. MSDP peer is not a BGP peer (or no BGP peer)

The 7705 SAR supports the following inter-domain MSDP peering deployment options:

  1. peering between PIM border routers (single-hop peering)
  2. peering between non-border routers (multi-hop peering)
  3. MSDP peering without BGP
  4. MSDP peering between mesh groups
  5. MSDP peering at a multicast exchange

3.7.3. MSDP Peer Groups

MSDP peer groups are typically created when multiple peers have a set of common operational parameters. Group parameters that are not specifically configured are inherited from the global level.

3.7.4. MSDP Mesh Groups

MSDP mesh groups are used to reduce SA flooding primarily in intra-domain configurations. When multiple speakers in an MSDP domain are fully meshed, they can be configured as a mesh group. The originator of the SA message forwards the message to all members of the mesh group; therefore, forwarding the SA message between non-originating members of the mesh group is not necessary.
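
The following sketch places three fully meshed speakers in a mesh group, as configured on one of the routers; the group name and addresses are illustrative:

Example:
config>router# msdp
config>router>msdp# group "intra-domain-mesh"
config>router>msdp>group# mode mesh-group
config>router>msdp>group# local-address 192.0.2.1
config>router>msdp>group# peer 192.0.2.2
config>router>msdp>group# peer 192.0.2.3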

3.7.5. MSDP Routing Policies

MSDP routing policies allow for filtering of inbound and outbound SA messages. Policies can be configured at different levels:

  1. global level — applies to all peers
  2. group level — applies to all peers in a peer group
  3. neighbor level — applies only to a specified peer

The most specific level is used. If multiple policy names are specified, the policies are evaluated in the order they are specified. The first policy that matches is applied. If no policy is applied, SA messages are passed.

Match conditions include:

  1. neighbor — the policy matches on a neighbor address that is the source address in the IP header of the SA message
  2. route filter — the policy matches on a multicast group address embedded in the SA message
  3. source address filter — the policy matches on a multicast source address embedded in the SA message
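
As a hedged sketch, an SA filter policy (assumed to be defined under policy-options as "msdp-sa-filter") can be applied at each of the three levels; the most specific application takes effect:

Example:
config>router# msdp
config>router>msdp# import "msdp-sa-filter"
config>router>msdp# group "intra-domain-mesh"
config>router>msdp>group# import "msdp-sa-filter"
config>router>msdp>group# exit
config>router>msdp# peer 192.0.2.2
config>router>msdp>peer# import "msdp-sa-filter"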

3.7.6. Auto-RP (Discovery Mode Only) in Multicast VPN

Auto-RP is a vendor proprietary protocol used to dynamically learn about the availability of RPs in a network. The Auto-RP protocol consists of announcing, mapping, and discovery functions. The 7705 SAR supports the discovery mode of Auto-RP, which includes mapping and forwarding of RP-mapping messages and RP-candidate messages. Discovery mode also includes receiving RP-mapping messages locally in order to learn and maintain the RP-candidate database.

The Auto-RP protocol is supported for multicast VPN and in the global routing instance. Either BSR or Auto-RP can be configured per routing instance. Both mechanisms cannot be enabled together.

3.8. Unicast and Multicast Address Translation

The 7705 SAR supports unicast-to-multicast address translation and multicast-to-multicast address translation.

For unicast-to-multicast translation, the 7705 SAR translates the destination IP address of the unicast flow to a multicast group.

For multicast-to-multicast translation, the 7705 SAR acts as a host to upstream (S,G)s and performs address translation to the downstream (S,G).

Unicast and multicast address translation is supported on the following adapter cards and platforms:

  1. on the 7705 SAR-8 Shelf V2 and the 7705 SAR-18:
    1. 2-port 10GigE (Ethernet) Adapter card
    2. 6-port Ethernet 10Gbps Adapter card
    3. 8-port Gigabit Ethernet Adapter card, version 3
    4. 10-port 1GigE/1-port 10GigE X-Adapter card, version 2 (supported on the 7705 SAR-18 only)
  2. 7705 SAR-Ax
  3. 7705 SAR-H
  4. 7705 SAR-Hc
  5. 7705 SAR-Wx
  6. 7705 SAR-X

3.8.1. Unicast-to-Multicast Address Translation

With unicast-to-multicast address translation, unicast packets destined for a local loopback interface on the 7705 SAR are translated to a multicast (S,G).

Unicast-to-multicast translation is supported in the global routing table (GRT) and in VPRNs. Both IPv4 and IPv6 address families are supported for the GRT, while IPv4 addressing on SAP-to-SAP connections is supported for VPRNs.

For the 7705 SAR to perform unicast-to-multicast address translation, the following requirements apply:

  1. The unicast traffic must be destined for a loopback IP address on the translator router (7705 SAR).
  2. The multicast source must be a loopback IP address on the 7705 SAR that is also configured under a PIM, IGMP, or MLD interface.
  3. The 7705 SAR only forwards multicast traffic on the outgoing interfaces that receive the (S,G) join.
  4. The unicast domain must support resilience functionality such as LFA, ECMP, or LAG.
  5. All hosts in the multicast domain must join a loopback IP address on the 7705 SAR that is performing unicast-to-multicast translation.

Note:

  1. For IES and VPRN services, the normal operation of unicast-to-multicast translation does not require an ingress IES or VPRN physical interface to be added under the PIM context. However, if separate QoS treatment is required for the unicast traffic that is translated to multicast traffic, the multicast traffic must be mapped to a multicast queue in a SAP ingress QoS policy assigned to the SAP for the IES or VPRN ingress interface. In order for this multicast queue to be created, the physical IES or VPRN interface must also be added under the PIM context.
  2. To prevent PIM Hello messages from being sent from the IES or VPRN interface back to the unicast domain, the interface should be shut down under PIM, as shown in the sketch following this note.
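The sketch below illustrates the second note for an interface in the global routing instance; the interface name is hypothetical, and for a VPRN the equivalent configuration would be made under the PIM context of the service. Adding the interface under PIM allows the multicast queue to be created, and shutting it down prevents PIM Hello messages from being sent toward the unicast domain.

Example:
config>router# pim
config>router>pim# interface "ies-1-if"
config>router>pim>if# shutdown
config>router>pim>if# exit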

Figure 5 shows an example of the 7705 SAR acting as a translator router for unicast-to-multicast address translation.

Figure 5:  Unicast-to-Multicast Translation on the 7705 SAR 

In Figure 5, the host (leaf) nodes in a multicast group are connected to a local router via an interface configured for IGMP or MLD; however, the hosts can also be connected to the translator directly via an IGMP or MLD interface. The local router is connected to the translator router via a PIM interface. To receive a multicast session, the local router must send a PIM join message to the translator router. After translating the traffic from unicast to multicast, the 7705 SAR forwards multicast traffic on the outgoing interfaces that receive the (S,G) join.

The following CLI example shows the configuration to enable multicast translation on three loopback interfaces on the 7705 SAR translator router based on the scenario shown in Figure 5. The loopback interfaces are the destination for the unicast traffic.

Example:
config>router# interface loop1
config>router>if# multicast-translation
config>router>if# exit
config>router# interface loop2
config>router>if# multicast-translation
config>router>if# exit
config>router# interface loop3
config>router>if# multicast-translation
config>router>if# exit

The following CLI example shows the configuration to translate the unicast source addresses to a destination multicast group based on the scenario shown in Figure 5.

Example:
config>router# pim
config>router>pim# interface loop1
config>router>pim>if# unicast-to-multicast unicast-start 1.1.1.1 unicast-end 1.1.1.1 destination 100.1.1.1 to-multicast 230.0.0.1
config>router>pim>if# no shutdown
config>router>pim>if# exit
config>router>pim# interface loop2
config>router>pim>if# unicast-to-multicast unicast-start 1.1.1.3 unicast-end 1.1.1.3 destination 200.1.1.1 to-multicast 230.0.0.2
config>router>pim>if# no shutdown
config>router>pim>if# exit
config>router>pim# interface loop2
config>router>pim>if# unicast-to-multicast unicast-start 2.1.1.1 unicast-end 2.1.1.2 destination 200.1.1.1 to-multicast 230.1.0.1
config>router>pim>if# no shutdown
config>router>pim>if# exit

The outcome of the configuration is as follows:

  1. unicast source (1.1.1.1, 100.1.1.1) translates to multicast destination (100.1.1.1, 230.0.0.1) for interface loop1
  2. unicast source (1.1.1.3, 200.1.1.1) translates to multicast destination (200.1.1.1, 230.0.0.2) for interface loop2
  3. unicast source (2.1.1.1 to 2.1.1.2, 200.1.1.1) translates to multicast destination (200.1.1.1, 230.1.0.1 to 230.1.0.2) for interface loop2

The 7705 SAR supports both source-specific multicast (SSM) and any-source multicast (ASM) models for unicast-to-multicast address translation.

With SSM, when hosts join a loopback address on the 7705 SAR that is performing the translation, the IGP routes the PIM joins to this router. The PIM joins are routed to the 7705 SAR because the 7705 SAR translator router is configured as the source of the multicast traffic on the hosts. All multicast functionality is valid in the multicast domain, except that the multicast source is the 7705 SAR loopback IP address rather than a source that is connected to the 7705 SAR. Reverse Path Forwarding (RPF) is performed against the loopback address of the translated (S,G), so if the multicast traffic (S,G) arrives on a non-loopback interface, it is dropped.
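As a usage sketch, the translated (S,G) state and its incoming (RPF) interface can be checked with a show command such as the following; the exact output fields vary by release, and the group address is taken from the earlier example.

Example:
show router pim group 230.0.0.1 detail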

With ASM, if the 7705 SAR is both the Rendezvous Point (RP) and the unicast-to-multicast translator router, it receives packets from the unicast domain and translates their destination address to a multicast group address based on the configuration of the unicast-to-multicast command. The hosts (leaves) send a (*,G) join message to the RP (which is the translator router), and the translator router forwards (loopback,G) traffic to the leaves. The leaves then send a (loopback,G) join message back to the RP.

If the 7705 SAR is not the RP but is the unicast-to-multicast translator router, it must be configured with RP parameters under the PIM context. The 7705 SAR translates the unicast stream to (loopback,G), then encapsulates the (loopback,G) packets in a unicast packet and sends the unicast packet to the RP. This unicast packet is known as a “register” message in PIM. The RP removes the outer IP header and forwards the (loopback,G) packets to the leaves. The leaves then send a (loopback,G) join message back to the 7705 SAR translator router.
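When the translator router is not the RP, the RP address must therefore be known under PIM so that register messages can be sent to it. A minimal sketch using a static RP definition is shown below; the RP address is hypothetical, and the syntax assumes the typical SR OS static RP CLI.

Example:
config>router# pim
config>router>pim# rp
config>router>pim>rp# static
config>router>pim>rp>static# address 10.20.20.1
config>router>pim>rp>static>address# group-prefix 230.0.0.0/8
config>router>pim>rp>static>address# exit
config>router>pim>rp>static# exit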

For ASM, only IPv4 addresses are supported.

3.8.2. Multicast-to-Multicast Address Translation

With multicast-to-multicast address translation, the 7705 SAR acts as a host to upstream (S,G)s. (S,G) packets arriving on the 7705 SAR are translated to a new downstream (S,G). Multiple upstream sources can be translated to a single downstream source. When groups from the merged upstream sources overlap, they are mapped to a range of groups on the single downstream source.

Multicast-to-multicast configuration on the 7705 SAR is very similar to unicast-to-multicast configuration except that the 7705 SAR performing the address translation is configured with static IGMP join requests for interested (S,G)s toward the upstream source. For these (S,G)s, the 7705 SAR acts as the host and translates these streams based on the configuration of the multicast-to-multicast command. On the downstream (receiver domain), the 7705 SAR acts as the source of the translated streams to the hosts that want to join these (S,G)s.

Multicast-to-multicast translation is supported in the GRT and in VPRNs.

Figure 6 shows an example of the 7705 SAR acting as a multicast-to-multicast address translator.

Figure 6:  Multicast-to-Multicast Address Translation on the 7705 SAR 

In Figure 6, the 7705 SAR router with system IP address 100.0.0.1 is performing the translation. The translator router sends static IGMP join requests for the streams of interest in the source domain and acts as the host for these streams. The example below shows the configuration of the static IGMP join requests.

Example:
config>router# igmp
config>router>igmp# interface “to-source-domain”
config>router>igmp>if# static
config>router>igmp>if>static# group 230.0.0.1
config>router>igmp>if>static>group# source 1.1.1.1
config>router>igmp>if>static>group>source# exit
config>router>igmp>if>static# group 230.0.0.2
config>router>igmp>if>static>group# source 1.1.1.1
config>router>igmp>if>static>group>source# exit
config>router>igmp>if>static# group 230.0.0.100
config>router>igmp>if>static>group# source 1.1.1.1
config>router>igmp>if>static>group>source# exit
config>router>igmp>if>static# group 230.0.0.10
config>router>igmp>if>static>group# source 2.1.1.1
config>router>igmp>if>static>group>source# exit
config>router>igmp>if>static# group 230.0.0.1
config>router>igmp>if>static>group# source 2.1.1.2
config>router>igmp>if>static>group>source# exit
config>router>igmp>if>static# group 230.0.0.10
config>router>igmp>if>static>group# source 2.1.1.2
config>router>igmp>if>static>group>source# exit

To translate these source domain streams to receiver domain streams, the interface on the translator router that receives the upstream streams must be enabled for multicast translation. The example below shows the configuration.

Example:
config>router# interface “to-source-domain”
config>router>if# multicast-translation
config>router>if# exit

Under the loopback interface in the PIM context, the 7705 SAR creates a mapping between the upstream (S,G) and the downstream (S,G). The example below shows the configuration.

Example:
config>router# pim
config>router>pim# interface loop2
config>router>pim>if# multicast-to-multicast source 1.1.1.1 group-start 230.0.0.1 group-end 230.0.0.100 to-multicast 230.0.0.1
config>router>pim>if# multicast-to-multicast source 2.1.1.1 group-start 230.0.0.1 group-end 230.0.0.10 to-multicast 230.0.0.101
config>router>pim>if# no shutdown
config>router>pim>if# exit
config>router>pim# interface loop3
config>router>pim>if# multicast-to-multicast source 2.1.1.2 group-start 230.0.0.1 group-end 230.0.0.10 to-multicast 230.0.0.1
config>router>pim>if# no shutdown
config>router>pim>if# exit

The outcome of the configuration is as follows:

  1. multicast (1.1.1.1, 230.0.0.1) translates to multicast (200.1.1.1, 230.0.0.1)
  2. multicast (1.1.1.1, 230.0.0.100) translates to multicast (200.1.1.1, 230.0.0.100)
  3. multicast (2.1.1.1, 230.0.0.1) translates to multicast (200.1.1.1, 230.0.0.101)
  4. multicast (2.1.1.1, 230.0.0.10) translates to multicast (200.1.1.1, 230.0.0.110)
  5. multicast (2.1.1.2, 230.0.0.1) translates to multicast (210.1.1.1, 230.0.0.1)
  6. multicast (2.1.1.2, 230.0.0.10) translates to multicast (210.1.1.1, 230.0.0.10)

The configuration above also merges two different source domain sources, 1.1.1.1 and 2.1.1.1, into a single receiver domain source, 200.1.1.1. In addition, the overlapping groups are spread out into a group range of 230.0.0.1 to 230.0.0.110.

3.9. IP Multicast Configuration Process Overview

Figure 7 shows the process to configure multicast parameters.

Figure 7:  IP Multicast Configuration Process 

3.10. Configuration Notes

3.10.1. General

This section describes multicast configuration guidelines and caveats:

  1. a multicast stream is required by one or more multicast clients
  2. a multicast stream is offered by one or more multicast servers
  3. unlike on 7750 SR nodes, when the maximum number of groups per node is exceeded, the additional groups are not stored at the CSM layer and an alarm is raised immediately

3.10.2. Reference Sources

For information on supported IETF drafts and standards, as well as standard and proprietary MIBs, refer to Standards and Protocol Support.