This chapter provides information about the parameters required to create a virtual chassis and how to configure them. A virtual chassis is also known as a stack of nodes.
Topics in this chapter include provisioning, booting, and preprovisioning the virtual chassis; provisioning service entities; replacing and upgrading nodes; and virtual chassis boot scenarios and split scenarios.
The 7210 SAS supports the grouping together, or stacking, of a set of nodes to create a virtual chassis (VC). The following nodes can be stacked to form a VC:
A VC provides operational simplicity because it uses a single IP address and MAC address for the set of stacked nodes. The set of nodes can be provisioned and monitored as a single platform rather than as individual nodes. For example, users can provision services for all ports in the VC without having to log in and access each node individually. As well, the NSP NFM-P management system can manage and monitor the VC as a single platform rather than the individual nodes that comprise it.
In addition to operational simplicity, a VC increases service reliability by providing redundancy in the following ways.
Up to three nodes can be provisioned in a single VC. Nodes can be added to or removed from the VC to increase the total available ports or to replace a failed unit.
The nodes of the VC are connected through their stacking ports. The total capacity provided by the stacking ports is equivalent to the sum of the bandwidth provided by the individual ports. The stacking ports are used for forwarding service traffic and for exchanging management and control messages between the nodes in the VC. VC control traffic is prioritized over other traffic on the stacking ports.
The 7210 SAS supports VC in standalone mode.
Nodes in the VC can take on the role of active CPM, standby CPM, or line card (also known as an IMM). In a VC, two of the nodes can be designated as CPM nodes (each along with a built-in IMM) and the other node is designated as an IMM-only node. See Node Roles in the VC for more information. Each of the nodes in the VC is provisioned as a card, similar to the cards in a chassis.
A VC can support uplink over-subscription, where only a few uplinks from the end nodes of the stack are connected to the network, and all customer services are delivered through these uplink ports.
A VC can also support a configuration where the uplinks are not over-subscribed and operators can connect any number of ports from each node in the stack to the network core. In release 11.0, the VC software module keeps track of the shortest path from a specific source IMM to a destination IMM and uses it to forward packets through the stack using the stacking ports.
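The shortest-path selection described above can be sketched as a breadth-first search over the stacking-port links. This is an illustrative model only; the node IDs and the topology dictionary are hypothetical and do not represent the actual VC software module:

```python
from collections import deque

def shortest_path(links, src, dst):
    """Breadth-first search over stacking-port links.

    links: dict mapping each node to the nodes its stacking ports
    connect to. Returns the node sequence from src to dst, or None
    if dst is unreachable.
    """
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nbr in links.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None

# Hypothetical three-node VC with stacking ports cabled in a ring.
ring = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
```

In a ring, the search naturally picks the shorter of the two directions around the stack, which matches the behavior the text describes for forwarding between a source and destination IMM.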
A VC can operate as a Layer 2 device, where only VLANs are used for uplink connectivity, or as a full-fledged IP/MPLS platform with support for IP/MPLS-based Layer 2 and Layer 3 VPNs.
Note: The following functionality is currently not supported on VCs:
The nodes in a VC are modeled in software as either CPM nodes or IMM nodes. The CPMs and the IMMs are interconnected through the stacking ports.
In a VC, CPMs also act as IMMs. Because of their dual role, CPMs are referred to as CPM-IMM nodes. Up to three nodes are allowed in a VC. Two of the nodes can be configured as CPM-IMM. The two nodes configured as CPM-IMM nodes provide control plane redundancy. The remaining node in the VC must be an IMM node. The IMM node is referred to as an IMM-only node. By default, the nodes in a VC are IMM-only nodes. If more than two nodes are configured as CPM-IMM nodes, the software logs an error.
During the boot process, the system selects one of the CPM-IMM nodes to be the active CPM-IMM and the other to be the standby CPM-IMM. If the active CPM-IMM fails, the standby CPM-IMM takes over as the active CPM-IMM. It is possible to upgrade an existing IMM-only node to be a CPM-IMM node so that if an existing CPM-IMM node fails, another node can take on its role. It is also possible to add a new node to the VC as a CPM-IMM node in the event that an existing CPM-IMM node must be replaced; however, there are restrictions. See Permitted Platform Combinations in a VC for more information. For information about adding and removing nodes in different scenarios, see VC Boot Scenarios.
Not all combinations of 7210 SAS-Sx 1/10GE and 7210 SAS-S 1/10GE platform variants can be configured as CPM-IMM nodes and assume the role of the active and standby CPM in a VC. Only platforms with similar capabilities can be configured as CPM-IMM nodes.
This restriction does not apply to nodes acting as IMM-only nodes. Any 7210 SAS-Sx 1/10GE variant or 7210 SAS-S 1/10GE variant (PoE and non-PoE, fiber and copper) can be configured as an IMM-only node to operate with any of the supported combinations of CPM-IMM nodes.
Table 25 lists the supported combinations of 7210 SAS-Sx 1/10GE and 7210 SAS-S 1/10GE variants that can be configured as CPM-IMM nodes.
| CPM-A | CPM-B | Supported as CPM-IMM pair |
|---|---|---|
| 7210 SAS-Sx 1/10GE (any variant) | 7210 SAS-S 1/10GE (any variant) | No |
| 7210 SAS-S 1/10GE (any variant) | 7210 SAS-S 1/10GE (any variant) | Yes |
| 7210 SAS-Sx 1/10GE (any variant) | 7210 SAS-Sx 1/10GE (any variant) | Yes |
The 7210 SAS supports manual bootup for a VC.
To provision a node to belong to a VC, it must be configured with the chassis-role parameter set to standalone-vc. The chassis-role parameter is a BOF parameter in the boot loader context.
Of the two nodes configured as CPM-IMM, one becomes the active CPM and the other becomes the standby CPM, ready to move to the active role should the current active CPM fail. The node designated as the active CPM uses the configuration file available locally or remotely at boot up and distributes the configuration to all members of the VC. Setting the chassis-role parameter to standalone-vc ensures that the member nodes of the VC do not use any local configuration at boot up.
The IMM-only node configuration and the service configuration are always performed on the active CPM. The active CPM-IMM node distributes the configuration information to the other VC members when the user saves the TiMOS configuration. The IMM-only nodes receive the service configuration from the active CPM through VC management messages, while the standby CPM receives the service configuration through the high availability (HA) reconcile mechanism.
The active CPM-IMM also sends out VC management packets with VC configuration information to IMM-only member nodes. The IMM-only nodes use the management packets to obtain the VC configuration and join the VC.
The VC requires a single IP address and MAC address to be managed as a single entity. The current active CPM owns the IP address and responds to all the messages sent to this IP address and uses the MAC address as required (for example, for ARP).
The show>chassis command displays power, fan, and alarm statuses of the VC by consolidating the statuses of all cards provisioned in the VC. The show>card command displays power, fan, and alarm statuses for individual cards.
See Procedure to Boot in the Standalone-VC Mode for information about booting the 7210 SAS-Sx/S 1/10GE in standalone-VC mode, including the boot sequence, BOF prompts, and sample logs.
When the node is shipped from the factory, the chassis-role parameter in the boot loader file is set to factory-default. For the node to be part of a VC, the user must explicitly set the chassis-role parameter in the boot loader to standalone-vc.
The user must also configure the following boot loader parameters to operate the VC:
Note: Before these boot loader parameters are configured, verify that the chassis-role parameter is set to standalone-vc.
In addition, to operate the VC users must also configure the following commands in the TiMOS BOF context:
The CPM-IMM nodes are configured through the BOF and do not require further configuration to be part of the VC. Perform the following steps to manually boot and configure CPM-IMM nodes in a VC.
You can get the TiMOS image (both.tim) for the CPM-IMM nodes from the local SD card or through the network by specifying the primary/secondary/tertiary image locations appropriately.
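The primary/secondary/tertiary fallback order can be sketched as follows. The function and the example image locations are hypothetical, standing in for the real BOF processing, which tries each configured location in order until one can be read:

```python
def locate_image(primary, secondary=None, tertiary=None, available=()):
    """Return the first configured image location that is reachable.

    'available' stands in for locations that can actually be read
    (a local SD card path or a reachable network location); the real
    BOF processing tries primary, then secondary, then tertiary.
    """
    for location in (primary, secondary, tertiary):
        if location is not None and location in available:
            return location
    return None
```

For example, if the primary network location is unreachable, the node falls back to a copy of both.tim on the local SD card, if one is configured as the secondary location.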
The following output is an example of a CPM-IMM node configuration.
The IMM-only node can be booted independently of the CPM-IMM nodes. Perform the following steps to manually configure IMM-only nodes in a VC.
During bootup, the IMM uses the configured MAC address to identify whether the IMM is part of the stack and to determine its VC stack name and slot number before loading the TiMOS images.
The IMM-only node communicates with the active CPM node in the boot loader context. If the user has configured it to be part of the stack, the IMM-only node obtains its slot number and the TiMOS images required to boot up from the active CPM.
When the IMM-only node receives a VC discovery message containing the MAC address and slot number, the IMM-only node obtains the location of the TiMOS image (both.tim) and downloads it using the information received in the discovery message. If an IMM-only node does not receive the VC discovery message with its MAC address and slot number, it does not load the TiMOS image. Instead, the IMM-only node will wait for a specific amount of time defined by the system, reboot, and wait again to receive a VC discovery message with its MAC address. The IMM-only node repeats this process indefinitely or until the operator removes the node from the VC by disconnecting its stacking ports.
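The wait/reboot cycle described above can be modeled roughly as follows. The message fields and the cycle bound are illustrative assumptions, not the actual implementation; the real node repeats the cycle indefinitely until it is removed from the VC:

```python
def process_discovery(my_mac, messages, max_cycles):
    """Model of the IMM-only wait/reboot cycle.

    messages: per-cycle VC discovery messages, each a dict with
    'mac', 'slot', and 'image' (the both.tim location), or None when
    no message arrived that cycle. Returns (slot, image) when a
    message carries this node's MAC address, or None if every
    modeled cycle times out.
    """
    for cycle in range(max_cycles):
        msg = messages[cycle] if cycle < len(messages) else None
        if msg and msg["mac"] == my_mac:
            return msg["slot"], msg["image"]
        # No matching message this cycle: the node reboots and waits again.
    return None
```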
The following output is an example of a BOF configuration for an IMM-only node.
Users cannot use the console or Telnet to view the BOF or execute the BOF commands on an IMM-only node. To update the VC parameters associated with the IMM-only node, users should reboot the node, interrupt the boot process, and use the menu prompts displayed in the boot loader context.
To configure an IMM-only node in the VC, use the command config>system>vc-stack-node slot-number mac-address mac-address [create].
An IMM-only node in a VC is identified by its slot number and its MAC address. The slot number assigned to the node can be an arbitrary number from 1 to 8. It is used in service provisioning to identify ports on a VC member node, and service objects on those ports. The port on a VC member node is identified using the format slot-id/1/port-id:sap-id. For example, port 20 on the front panel of a VC member node in slot 4 with a SAP ID of 300 is identified as 4/1/20:300.
Slot numbers are unique within each VC (VCs are identified by their vc-stack-name) and can be reused across different VCs. The software ensures the uniqueness of the slot number in each VC by raising an error if two nodes in the same VC are assigned the same slot number.
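The slot-number rules above (values 1 to 8, unique per VC, reusable across different VCs) and the port naming format can be sketched as a small model. The class and function names are hypothetical, for illustration only:

```python
class VCStack:
    """Minimal model of per-VC slot bookkeeping (illustrative only)."""

    def __init__(self, vc_stack_name):
        self.name = vc_stack_name
        self.slots = {}  # slot number -> chassis MAC address

    def add_node(self, slot, mac):
        if not 1 <= slot <= 8:
            raise ValueError("slot number must be from 1 to 8")
        if slot in self.slots:
            # Mirrors the error the software raises for duplicate slots.
            raise ValueError(f"slot {slot} already in use in VC {self.name}")
        self.slots[slot] = mac

def sap_id(slot, port, sap):
    """Format a SAP on a VC member port: slot-id/1/port-id:sap-id."""
    return f"{slot}/1/{port}:{sap}"
```

Two separate VCStack instances can use the same slot number without conflict, matching the rule that slot numbers are reusable across VCs.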
When an IMM-only node boots up, it uses the MAC address and slot number received in the VC discovery messages sent by the CPM node to discover VC configuration information. The MAC address used for VC configuration is the chassis MAC address printed on the label of the node. If it does not match the MAC address retrieved from the EEPROM, the node will not be able to use the VC configuration information. For CPM-IMM nodes, this results in a boot failure. An IMM-only node waits indefinitely for a VC discovery message with the correct MAC address from the CPM-IMM node, or until the operator removes the node from the VC by disconnecting its stacking ports.
When the TiMOS image boots up on the CPM-IMM nodes and the IMM-only node, users must provision the card type on each node so that the software knows which hardware platforms are members of the VC and can determine the logical IMM type of each slot. Provisioning the card type is a mandatory step in the bootup process for all nodes of the VC; the nodes/cards are not fully functional until it is done. Use the card-type CLI command.
The card type can be preprovisioned and allows for preprovisioning of services. If the software-determined card type does not match the configured card type for the slot number, a provisioning mismatch error is logged. The software determines the card type by reading the information on the EEPROM of the platform.
Table 26 lists the supported card types associated with the 7210 SAS-Sx 1/10GE platform.
| Card Type | 7210 SAS-Sx 1/10GE Platform |
|---|---|
| sas-sx-24sfp-4sfpp | 7210 SAS-Sx 1/10GE 22F 2C 4SFP+ |
| sas-sx-48sfp-4sfpp | 7210 SAS-Sx 1/10GE 46F 2C 4SFP+ |
| sas-sx-24t-4sfpp | 7210 SAS-Sx 1/10GE 24T 4SFP+ |
| sas-sx-48t-4sfpp | 7210 SAS-Sx 1/10GE 48T 4SFP+ |
| sas-sx-24tp-4sfpp | 7210 SAS-Sx 1/10GE 24Tp 4SFP+ PoE |
| sas-sx-48tp-4sfpp | 7210 SAS-Sx 1/10GE 48Tp 4SFP+ PoE |
Table 27 lists the supported card types associated with the 7210 SAS-S 1/10GE platform.
| Card Type | 7210 SAS-S 1/10GE Platform |
|---|---|
| sas-s-24sfp-4sfpp | 7210 SAS-S 1/10GE 24F 4SFP+ |
| sas-s-48sfp-4sfpp | 7210 SAS-S 1/10GE 48F 4SFP+ |
| sas-s-24t-4sfpp | 7210 SAS-S 1/10GE 24T 4SFP+ |
| sas-s-48t-4sfpp | 7210 SAS-S 1/10GE 48T 4SFP+ |
| sas-s-24tp-4sfpp | 7210 SAS-S 1/10GE 24Tp 4SFP+ PoE |
| sas-s-48tp-4sfpp | 7210 SAS-S 1/10GE 48Tp 4SFP+ PoE |
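Taken together, Tables 26 and 27 can be expressed as a lookup that a provisioning-mismatch check (as described above for the card-type comparison against the EEPROM) might consult. This is an illustrative sketch, not device code:

```python
# Card types from Tables 26 and 27 mapped to their platform variants.
CARD_TYPES = {
    "sas-sx-24sfp-4sfpp": "7210 SAS-Sx 1/10GE 22F 2C 4SFP+",
    "sas-sx-48sfp-4sfpp": "7210 SAS-Sx 1/10GE 46F 2C 4SFP+",
    "sas-sx-24t-4sfpp": "7210 SAS-Sx 1/10GE 24T 4SFP+",
    "sas-sx-48t-4sfpp": "7210 SAS-Sx 1/10GE 48T 4SFP+",
    "sas-sx-24tp-4sfpp": "7210 SAS-Sx 1/10GE 24Tp 4SFP+ PoE",
    "sas-sx-48tp-4sfpp": "7210 SAS-Sx 1/10GE 48Tp 4SFP+ PoE",
    "sas-s-24sfp-4sfpp": "7210 SAS-S 1/10GE 24F 4SFP+",
    "sas-s-48sfp-4sfpp": "7210 SAS-S 1/10GE 48F 4SFP+",
    "sas-s-24t-4sfpp": "7210 SAS-S 1/10GE 24T 4SFP+",
    "sas-s-48t-4sfpp": "7210 SAS-S 1/10GE 48T 4SFP+",
    "sas-s-24tp-4sfpp": "7210 SAS-S 1/10GE 24Tp 4SFP+ PoE",
    "sas-s-48tp-4sfpp": "7210 SAS-S 1/10GE 48Tp 4SFP+ PoE",
}

def check_provisioning(configured, detected):
    """Compare the configured card type with the type read from EEPROM.

    Returns True when they match; False signals the provisioning
    mismatch condition that the software logs as an error.
    """
    if configured not in CARD_TYPES:
        raise ValueError(f"unknown card type: {configured}")
    return configured == detected
```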
The nodes in a VC are represented as the cards of a chassis, and each card is assigned a slot number. Using the slot number, services are provisioned on the card using the format slot-number/mda-number/port-number:sap-id, where slot-number is configured for every node/card, mda-number is always set to 1 for all 7210 SAS-Sx/S 1/10GE platforms supporting VC, and port-number corresponds to the front-panel Ethernet port number, ranging from 1 to 28 or from 1 to 52 depending on the 7210 SAS-Sx/S 1/10GE platform.
To preprovision a VC, users must configure the card-type for the slot they want to preprovision. When the card type is configured, the ports are automatically created for that slot, allowing configuration of the services using the ports of the slot.
The vc-stack-node command can be configured later and is independent of the card-type configuration. The vc-stack-node configuration identifies the slot and the MAC address of the node and is used only by the IMM-only node to become part of the VC. If the software detects a mismatch between the hardware type of the IMM-only node and the configured/provisioned card type, it logs an error and reboots the card, allowing the user to correct the value and fix the problem.
A node system resource profile is divided into two parts:
The per card/slot resource profile is useful for allocating resources to different features on each card to suit the application or service being delivered on those access ports.
The following output is an example of the per-node global system resource profile for a VC.
The following output shows an example of the per-card/slot resource profile for a VC.
This section describes the required configurations and the expected behaviors of different boot up scenarios.
The following steps outline how to manually bring up a VC using nodes shipped from the factory that are being booted up for the first time.
Users can boot up a single node or a set of nodes that are to be part of a VC. As well, users can power up one node, all the nodes, or a set of nodes; there is no restriction on the power sequence. The following steps describe the sequence when all nodes are booted up together; however, the steps here do not dictate the order of service provisioning and do not preclude preprovisioning of services on the active CPM. Service preprovisioning is allowed after the VC configuration, including the VC node member configuration, is complete.
Note: The CPM-IMM nodes are configured through the BOF and do not require further configuration to be part of the VC.
Note: The vc-stack-node-type parameter is stored in the boot flash (and not in the boot option file) and must be modified using the BOF menu prompts.
When the VC configuration information has been committed in the CLI, the active CPM node sends the VC configuration to all other nodes in the VC through VC management messages over the stacking ports.
When the IMM-only nodes receive the VC management messages from the active CPM, they confirm the VC configuration by matching their chassis MAC address to the MAC address contained in the VC message. They then retrieve the TiMOS image from the active CPM and use it to boot up.
While booting up with the TiMOS image, the standby CPM node undergoes another CPM election arbitration process, which results in the node detecting itself as standby. The node then executes the HA reconcile process and synchronizes its management and control plane state with the active CPM. It receives the current running configuration with the VC configuration and the current BOF configuration present on the active CPM.
The VC is now ready for use.
This scenario assumes that the BOF configured on each node during the first time boot up is saved locally and contains the VC configuration required for the node to boot up. It also assumes that the chassis-role parameter in the bootflash is set to standalone-vc for each node.
The following sequence occurs in the boot loader context and in the TiMOS context. There is no user intervention required.
If both CPM-IMM nodes come up as active (each node receives the other's VC management messages claiming the active role), the node with the higher slot number reboots itself.
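The dual-active tie-break can be sketched as follows. The function is a hypothetical model of the stated rule (the node with the higher slot number reboots), not the actual election code:

```python
def resolve_dual_active(slot_a, slot_b):
    """Resolve a dual-active condition between two CPM-IMM nodes.

    Returns (surviving_active_slot, rebooting_slot): the node with
    the higher slot number reboots itself and comes back as standby.
    """
    if slot_a == slot_b:
        raise ValueError("slot numbers must be unique within a VC")
    return min(slot_a, slot_b), max(slot_a, slot_b)
```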
This section describes how to:
When replacing a node, the SD card used to boot the replacement node must contain a version of the boot.tim and both.tim images that match that of the active CPM. If the versions do not match, the bootup will not be successful.
If a VC is configured with only a single CPM-IMM node and that node fails, the entire VC is down, including the IMM-only nodes. The IMM-only nodes reboot if they do not hear from the active CPM node after some time.
If the stacking ports are not connected together to form a ring connecting all the nodes of the VC, any failure in the VC (whether a link or a node failure) isolates some nodes from the VC, affecting all the services on those nodes. In addition, if the uplink is through one of the nodes that has been isolated or has failed, services using the uplink for the entire VC are affected. The procedure here assumes a ring of stacking ports is configured.
In the following scenario, it is assumed that a CPM-IMM node has failed and needs to be replaced. It is assumed that a redundant CPM node is configured and another CPM-IMM node is currently active. The replacement node might be a new node from the factory or a node that has been previously used as an IMM node.
Services are not interrupted during the preceding process because the active CPM (CPM-A) is present and has not been rebooted (assuming the active CPM has an uplink to the network). If one of the IMM-only nodes was converted to a CPM-IMM node, the services on that IMM are affected as the node is rebooted to effect the change to a CPM-IMM node.
In this scenario, it is assumed that the IMM-only node has failed and needs to be replaced. The replacement node can be a new node from the factory or a node that has been previously used as an IMM node in another VC. In the former case, the procedure is similar to adding a node received from factory; see First Time Manual Boot of Nodes in the Stack starting at Step 9.
The SD card used to boot the node must contain a version of the boot image (boot.tim) that matches that of the active CPM. If the versions do not match, the bootup will not be successful.
If a ring of stacking ports is not configured, any failure in the stack (either a link or a node failure) isolates some nodes from the stack, affecting all the services on those nodes. In addition, if the uplink is available through one of the nodes that is now isolated or failed, services using the uplink for the entire VC are affected.
The following procedure assumes a ring is configured and that the replacement node had been previously used as an IMM node in another VC.
Services are not interrupted, as the active CPM (CPM-A) is present and has not been rebooted (assuming the active CPM has an uplink to the network). If one of the IMM-only nodes is converted to a CPM-IMM node, the services on that IMM node are affected as the node is rebooted to effect the change to a CPM-IMM node.
If the active CPM node needs to be replaced and has not failed, switchover to the standby CPM can be forced before using the procedure outlined in Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node.
This scenario is similar to Replacing an IMM-only Node with Another Node except that instead of replacing an existing IMM-only node, a new node is being added. This requires the configuration file on the active CPM to be updated by adding the new IMM-only node VC configuration parameters using the config>system>vc-stack>vc-stack-node command, without deleting or modifying any existing VC configuration parameters.
If the node to be removed is the standby CPM-IMM node, see Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node and update the BOF on the active CPM node by removing the VC configuration parameters for the node being removed.
If the node to be removed is an IMM-only node, see Replacing an IMM-only Node with Another Node and update the configuration file on the active CPM node by removing the VC configuration parameters for the node being removed.
This scenario is similar to Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node, with the VC configuration information of the new CPM-IMM node being added to the BOF of the active CPM node.
The following configuration guidelines apply for upgrading, adding, or removing a VC node.
A VC split can occur as a result of the following failures in a VC:
In the event of single stacking link failure in a VC, all nodes including the active and standby CPM-IMM nodes can continue to communicate with each other. There is no impact on the control plane and no services are lost because an alternate path exists around the point of failure. Services on all the IMM-only nodes continue to forward traffic; however, there might be an impact to the switching throughput as the stacking port bandwidth reduces by half.
Similarly, in the event of a single node failure in a VC, the VC can continue to operate with the active CPM-IMM node (or the standby CPM-IMM node if the active failed). There is no impact to the control plane, and services on all the remaining IMMs continue to forward traffic. However, services provisioned on the failed node are lost, and there might be an impact to the switching throughput as the stacking port bandwidth reduces by half.
If a double failure occurs (two links fail, two nodes fail, or a link and a node fail), the VC is split into two islands of nodes/cards. One of these islands needs to own the VC. To decide which island of nodes owns the VC and continues normal functions, the following occurs:
CPM-IMM nodes store information to detect whether a reboot is occurring because of a VC split. If a split has occurred, the node attempts to connect to the current active CPM-IMM. If it can communicate with the active CPM-IMM and boots up successfully, it clears the stored VC split information. If the CPM-IMM node cannot reach the current active CPM node, it reboots and repeats the process until it can successfully reach an active CPM.
Users can clear the VC split information stored in the database and stop the reboot process by interrupting the boot process and providing a Yes response at the boot loader prompt to clear the VC split information.