5. Virtual Chassis

This chapter provides information about the parameters required to create a virtual chassis and how to configure them. A virtual chassis is also known as a stack of nodes.

Topics in this chapter include provisioning, booting, and preprovisioning the virtual chassis; provisioning service entities; replacing and upgrading nodes; and virtual chassis boot scenarios and split scenarios.

5.1. Overview

The 7210 SAS supports the grouping together, or stacking, of a set of nodes to create a virtual chassis (VC). The following nodes can be stacked to form a VC:

  1. the 7210 SAS-Sx 1/10GE and its variants
  2. the 7210 SAS-S 1/10GE and its variants

A VC provides operational simplicity because it uses a single IP address and MAC address for the set of stacked nodes. The set of nodes can be provisioned and monitored as a single platform rather than as individual nodes. For example, users can provision services for all ports in the VC without having to log in and access each node individually. As well, the NSP NFM-P management system can manage and monitor the VC as a single platform rather than the individual nodes that comprise it.

In addition to operational simplicity, a VC increases service reliability by providing redundancy in the following ways.

  1. If the node running control plane protocols in the VC fails, a standby node will take over the master node function, minimizing service loss.
  2. Access devices can be connected using a mechanism such as LAG or G.8032 so that the impact of a node failure in a VC is minimized for services on the access device.

Up to three nodes can be provisioned in a single VC. Nodes can be added to or removed from the VC to increase the total available ports or to replace a failed unit.

The nodes of the VC are connected through their stacking ports. The total capacity provided by the stacking ports is equivalent to the sum of the bandwidth provided by the individual ports. The stacking ports are used for forwarding service traffic and for exchanging management and control messages between the nodes in the VC. The VC control traffic is prioritized over other traffic on the stacking ports.

The 7210 SAS supports VC in standalone mode.

Nodes in the VC can take on the role of active CPM, standby CPM, or line card (also known as an IMM). In a VC, two of the nodes can be designated as CPM nodes (along with a built-in IMM) and the other node is designated as an IMM-only node. See Node Roles in the VC for more information. Each of the nodes in the VC is provisioned as a card, similar to the cards in a chassis.

A VC can support uplink over-subscription, where only a few uplinks from the end nodes of the stack are connected to the network, and all customer services are delivered through these uplink ports.

A VC can also support a configuration where the uplinks are not over-subscribed and operators can connect any number of ports from each node in the stack to the network core. In release 11.0, the VC software module keeps track of the shortest path from a specific source IMM to a destination IMM and uses it to forward packets through the stack using the stacking ports.

A VC can operate as a Layer 2 device, where only VLANs are used for uplink connectivity, or as a full-fledged IP/MPLS platform with support for IP/MPLS-based Layer 2 and Layer 3 VPNs.

Note:

The following functionality is currently not supported on VCs:

  1. Synchronous Ethernet network synchronization
  2. 1588v2 PTP network synchronization

5.1.1. Node Roles in the VC

The nodes in a VC are modeled in software as either CPM nodes or IMM nodes. The CPMs and the IMMs are interconnected through the stacking ports.

In a VC, CPMs also act as IMMs. Because of their dual role, CPMs are referred to as CPM-IMM nodes. Up to three nodes are allowed in a VC. Two of the nodes can be configured as CPM-IMM. The two nodes configured as CPM-IMM nodes provide control plane redundancy. The remaining node in the VC must be an IMM node. The IMM node is referred to as an IMM-only node. By default, the nodes in a VC are IMM-only nodes. If more than two nodes are configured as CPM-IMM nodes, the software logs an error.

During the boot process, the system selects one of the CPM-IMM nodes to be the active CPM-IMM and the other to be the standby CPM-IMM. If the active CPM-IMM fails, the standby CPM-IMM takes over as the active CPM-IMM. It is possible to upgrade an existing IMM-only node to be a CPM-IMM node so that if an existing CPM-IMM node fails, another node can take on its role. It is also possible to add a new node to the VC as a CPM-IMM node in the event that an existing CPM-IMM node must be replaced; however, there are restrictions. See Permitted Platform Combinations in a VC for more information. For information about adding and removing nodes in different scenarios, see VC Boot Scenarios.

5.1.2. Permitted Platform Combinations in a VC

Not all combinations of 7210 SAS-Sx 1/10GE and 7210 SAS-S 1/10GE platform variants can be configured as CPM-IMM nodes and assume the role of the active and standby CPM in a VC. Only platforms with similar capabilities can be configured as CPM-IMM nodes.

This restriction does not apply to nodes acting as IMM-only nodes. Any 7210 SAS-Sx 1/10GE variant or 7210 SAS-S 1/10GE variant (PoE and non-PoE, fiber and copper) can be configured as an IMM-only node to operate with any of the supported combinations of CPM-IMM nodes.

Table 25 lists the supported combinations of 7210 SAS-Sx 1/10GE and 7210 SAS-S 1/10GE variants that can be configured as CPM-IMM nodes.

Table 25:  Supported Node Combinations for CPM-IMM Configuration  

CPM-A                             | CPM-B                             | Supported as CPM-IMM
----------------------------------+-----------------------------------+---------------------
7210 SAS-Sx 1/10GE (any variant)  | 7210 SAS-S 1/10GE (any variant)   | No
7210 SAS-S 1/10GE (any variant)   | 7210 SAS-S 1/10GE (any variant)   | Yes
7210 SAS-Sx 1/10GE (any variant)  | 7210 SAS-Sx 1/10GE (any variant)  | Yes

5.2. Provisioning and Booting Up the VC in Standalone Mode

The 7210 SAS supports manual bootup for a VC.

To provision a node to belong to a VC, it must be configured with the chassis-role parameter set to standalone-vc. The chassis-role parameter is a BOF parameter in the boot loader context.
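As a minimal illustration, after this parameter is set, the boot loader BOF of a VC member node contains an entry of the following form (the exact display format may differ slightly):

```
chassis-role       standalone-vc
```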

Of the two nodes configured as CPM-IMM, one becomes the active CPM and the other becomes the standby CPM, ready to take over the active role should the current active CPM fail. The node designated as the active CPM uses the configuration file available locally or remotely at boot up and distributes the configuration to all members of the VC. Setting the chassis-role parameter to standalone-vc ensures that the member nodes of the VC do not use any local configuration at boot up.

The IMM-only node configuration and the service configuration are always done on the active CPM. The active CPM-IMM node distributes the configuration information to the other VC members when the user saves the TiMOS configuration. The IMM-only nodes receive the service configuration from the active CPM through VC management messages, while the standby CPM receives it through the high availability (HA) reconcile mechanism.

The active CPM-IMM also sends out VC management packets with VC configuration information to IMM-only member nodes. The IMM-only nodes use the management packets to obtain the VC configuration and join the VC.

The VC requires a single IP address and MAC address to be managed as a single entity. The current active CPM owns the IP address and responds to all the messages sent to this IP address and uses the MAC address as required (for example, for ARP).

The show>chassis command displays power, fan, and alarm statuses of the VC by consolidating the statuses of all cards provisioned in the VC. The show>card command displays power, fan, and alarm statuses for individual cards.

See Procedure to Boot in the Standalone-VC Mode for information about booting the 7210 SAS-Sx/S 1/10GE in standalone-VC mode, including the boot sequence, BOF prompts, and sample logs.

5.2.1. Required BOF Parameters

When the node is shipped from the factory, the chassis-role parameter in the boot loader file is set to factory-default. For the node to be part of a VC, the user must explicitly set the chassis-role parameter in the boot loader to standalone-vc.

The user must also configure the following boot loader parameters to operate the VC:

Note:

Before configuring these boot loader parameters, verify that the chassis-role parameter is set to standalone-vc.

  1. vc-stack-name
    Configure the boot loader parameter vc-stack-name on the active CPM-IMM node to identify the VC of which the nodes are members. A node configured with a vc-stack-name of “my-stack” is a member node of the VC “my-stack”. The system-name variable in the configure system name command can be reused as the boot loader vc-stack-name value. A node cannot be added to a VC if its vc-stack-name value does not match that of the current active CPM-IMM.
  2. vc-stack-node-type
    Specify the node role as either cpm-imm or imm-only using the boot loader BOF parameter vc-stack-node-type. The default setting is imm-only. Users need to modify the default only for nodes that can take on the role of CPM by changing the setting to cpm-imm. The BOF parameter does not need to be modified for nodes whose role is IMM. Users can configure two nodes in the stack to be cpm-imm nodes.

In addition, to operate the VC users must also configure the following commands in the TiMOS BOF context:

  1. vc-stack-node
    Use the command bof>vc-stack-node {cpmA | cpmB} slot-num slot-num mac-addr mac-address to configure the slot number and the MAC address of the CPM nodes. The VC stack node cpmA and cpmB parameters can also be configured through the boot loader BOF menu prompts.
    The slot number identifies a specific node in the VC (similar to the slot ID of a card in a chassis-based system) and is used in addressing service objects created during provisioning. The slot number also is used to light up the correct LED on the front-panel of the node, enabling operators to see which slot is configured with the VC node (the stack LED also denotes that the node is part of a VC and is fully functional). The slot number of the active and standby CPM-IMMs can be set to any number from 1 to 8, as long as it is not in use by another VC member node. An arbitration process during the boot phase ensures that one of the CPM nodes is the active CPM and the other is the standby CPM.
    The MAC address must match the base MAC address printed on the label of the chassis.
  2. eth-mgmt-address
    Configure the management IP address for the two CPM nodes using the command bof>eth-mgmt-address ip-prefix/ip-prefix-length [active | standby]. The Ethernet management IP address configured as active provides the IP address of the active node, which a management station can use to operate the stack. The management IP address configured as standby provides connectivity to the standby node.
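Combining the parameters described above, the VC-related portion of a CPM-IMM node's BOF takes the following form (the stack name, slot numbers, MAC addresses, and IP prefixes are illustrative):

```
vc-stack-name      VC-3
vc-stack-node-type cpm-imm
vc-stack-node cpmA slot-num 1 mac-addr d0:99:d5:91:1c:41
vc-stack-node cpmB slot-num 2 mac-addr d0:99:d5:90:1e:41
eth-mgmt-address   10.135.17.166/24 active
eth-mgmt-address   10.135.17.167/24 standby
```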

5.2.2. Manually Configuring Nodes to Boot as CPM-IMM in a VC

The CPM-IMM nodes are configured through the BOF and do not require further configuration to be part of the VC. Perform the following steps to manually boot and configure CPM-IMM nodes in a VC.

  1. Identify the two nodes that will take on the CPM role.
  2. Ensure there is console access to the CPM nodes to drive the boot process. See the following for information: Procedure to Connect to a Console, 7210 SAS-S 1/10GE Console Port, and 7210 SAS-Sx 10/100GE Console Port.
  3. Ensure the CPM nodes are powered on.
  4. Ensure that the SD card delivered with the software license, containing the boot loader (boot.tim) and TiMOS software, is installed in the SD card slot.
  5. Manually interrupt the boot process at the boot loader BOF prompt and configure the following:
    1. specify standalone-vc for the chassis-role parameter
    2. specify a name for the vc-stack-name parameter
    3. specify cpm-imm for the vc-stack-node-type parameter
    4. specify the slot number and MAC address for CPM A and CPM B in the vc-stack-node parameter
    5. specify the IP prefix information for the active and standby options in the eth-mgmt-address parameter
  6. Configure other BOF parameters such as image location, configuration file location, and route information, similar to a 7210 node operating in the standalone mode. See System Boot Options for 7210 SAS-Mxp, 7210 SAS-S 1/10GE, 7210 SAS-Sx 1/10GE, and 7210 SAS-Sx 10/100GE in Standalone Mode for information.

You can get the TiMOS image (both.tim) for the CPM-IMM nodes from the local SD card or through the network by specifying the primary/secondary/tertiary image locations appropriately.

The following output is an example of a CPM-IMM node configuration.

A:Dut-A# show bof
========================================================================
BOF (Memory)
========================================================================
    primary-image      ftp://*:*@135.250.127.36/./images/7xxx-hops/
    primary-config     cf1:\default.cfg
#eth-mgmt Port Settings:
    no  eth-mgmt-disabled
    eth-mgmt-address   10.135.17.166/24 active
    eth-mgmt-address   10.135.17.167/24 standby
    eth-mgmt-route     10.250.0.0/16 next-hop 10.135.17.1
    eth-mgmt-autoneg
    eth-mgmt-duplex    full
    eth-mgmt-speed     100
#System Settings:
    wait               3
    persist            off
    console-speed      115200
    no  console-disabled
    vc-stack-name      VC-3
    vc-stack-node-type cpm-imm
    vc-stack-node cpmA slot-num 1 mac-addr d0:99:d5:91:1c:41
    vc-stack-node cpmB slot-num 2 mac-addr d0:99:d5:90:1e:41
=======================================================================

5.2.3. Manually Booting a VC IMM-Only Node

The IMM-only node can be booted independently of the CPM-IMM nodes. Perform the following steps to manually configure IMM-only nodes in a VC.

  1. To boot the IMM-only node, use the SD card provided with the software license that contains the appropriate boot.tim image.
  2. Upon power up, manually interrupt the boot process at the BOF prompt and configure the following:
    1. Specify standalone-vc for the boot loader parameter chassis-role.
    2. Specify imm-only for the boot loader parameter vc-stack-node-type.
    3. Note the MAC address of the IMM-only node. The MAC address used for VC configuration is the chassis MAC address printed on the label of the node.
      You will need this information to configure the node in the TiMOS context using the config>system>vc-stack-node slot-number mac-address mac-address command on the active CPM, when the CPM is up.
    The slot number, image location, configuration location, IP addresses, and route information do not need to be configured on the IMM-only node.
  3. Save the file and allow the boot loader to proceed with the boot process.
  4. After the CPM has booted up, on the active CPM, follow the prompts in the TiMOS context to configure the IMM-only slot number and MAC address.

During bootup, the IMM uses the configured MAC address to identify whether the IMM is part of the stack and to determine its VC stack name and slot number before loading the TiMOS images.

The IMM-only node communicates with the active CPM node in the boot loader context. If the user has configured it to be part of the stack, the IMM-only node obtains its slot number and the TiMOS images required to boot up from the active CPM.

When the IMM-only node receives a VC discovery message containing the MAC address and slot number, the IMM-only node obtains the location of the TiMOS image (both.tim) and downloads it using the information received in the discovery message. If an IMM-only node does not receive the VC discovery message with its MAC address and slot number, it does not load the TiMOS image. Instead, the IMM-only node will wait for a specific amount of time defined by the system, reboot, and wait again to receive a VC discovery message with its MAC address. The IMM-only node repeats this process indefinitely or until the operator removes the node from the VC by disconnecting its stacking ports.

The following output is an example of a BOF configuration for an IMM-only node.

========================================================================
BOF (Memory)
========================================================================
#System Settings:
    wait               3
    persist            off
    console-speed      115200
    no console-disabled
    vc-stack-node-type imm-only
=======================================================================

Users cannot use the console or Telnet to view the BOF or execute the BOF commands on an IMM-only node. To update the VC parameters associated with the IMM-only node, users should reboot the node, interrupt the boot process, and use the menu prompts displayed in the boot loader context.

5.2.4. Configuring an IMM-Only Node in a VC

To configure an IMM-only node in the VC, use the command config>system>vc-stack-node slot-number mac-address mac-address [create].

An IMM-only node in a VC is identified by its slot-number and its MAC address. The slot number assigned to the node can be an arbitrary number from 1 to 8. It is used in service provisioning to identify ports on a VC member node, and service objects on those ports. The port on a VC member node is identified using the format slot-id/1/port-id:sap-id. For example, port 20 on the front panel of a VC member node in slot 4 with a SAP ID of 300 is identified as 4/1/20:300.

Slot numbers are unique within a VC (VCs are identified by their vc-stack-name) and can be reused across different VCs. The software ensures the uniqueness of the slot number in each VC by raising an error if two nodes in the same VC are assigned the same slot number.

When an IMM-only node boots up, it uses the MAC address and slot number received in the VC discovery messages sent by the CPM node to discover VC configuration information. The MAC address used for VC configuration is the chassis MAC address printed on the label of the node. If it does not match the MAC address retrieved from the EEPROM, the node will not be able to use the VC configuration information. For CPM-IMM nodes, this will result in boot failure. The IMM-only nodes will wait indefinitely for a VC discovery message with the correct MAC address from the CPM-IMM node, or until the operator removes the node from the VC by disconnecting its stacking ports.
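As an illustration of the command described in this section, the following adds an IMM-only node as slot 3 of the VC (the MAC address shown is a placeholder; use the chassis MAC address printed on the node label):

```
A:Dut-A# configure system vc-stack-node 3 mac-address d0:99:d5:92:1f:41 create
```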

5.2.5. Provisioning the Card Type for All Nodes in a VC

When the TiMOS image boots up on the CPM-IMM nodes and the IMM-only node, users must provision the card type on each node so that the software knows which hardware platforms are members of the VC. The software can then determine the logical IMM types, which are the slots of the VC. Provisioning the card type is a mandatory step in the bootup process for all nodes of the VC in order for the nodes/cards to be fully functional. Use the following CLI command:

A:Dut-A# configure card 1 card-type
  - card-type <card-type>
  - no card-type
 <card-type>          : sas-sx-24sfp-4sfpp | sas-sx-48sfp-4sfpp | sas-sx-24t-4sfpp |
                        sas-sx-48t-4sfpp | sas-s-24sfp-4sfpp | sas-s-48sfp-4sfpp |
                        sas-s-24t-4sfpp | sas-s-48t-4sfpp

The card type can be preprovisioned and allows for preprovisioning of services. If the software-determined card type does not match the configured card type for the slot number, a provisioning mismatch error is logged. The software determines the card type by reading the information on the EEPROM of the platform.
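For example, assuming a three-node VC with two 48-port fiber 7210 SAS-Sx 1/10GE variants in slots 1 and 2 and a 24-port copper 7210 SAS-S 1/10GE variant in slot 3 (an illustrative combination), the card types would be provisioned as follows:

```
A:Dut-A# configure card 1 card-type sas-sx-48sfp-4sfpp
A:Dut-A# configure card 2 card-type sas-sx-48sfp-4sfpp
A:Dut-A# configure card 3 card-type sas-s-24t-4sfpp
```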

Table 26 lists the supported card types associated with the 7210 SAS-Sx 1/10GE platform.

Table 26:  Card Types for 7210 SAS-Sx 1/10GE Platform 

Card Type           | 7210 SAS-Sx 1/10GE Platform
--------------------+-----------------------------------
sas-sx-24sfp-4sfpp  | 7210 SAS-Sx 1/10GE 22F 2C 4SFP+
sas-sx-48sfp-4sfpp  | 7210 SAS-Sx 1/10GE 46F 2C 4SFP+
sas-sx-24t-4sfpp    | 7210 SAS-Sx 1/10GE 24T 4SFP+
sas-sx-48t-4sfpp    | 7210 SAS-Sx 1/10GE 48T 4SFP+
sas-sx-24tp-4sfpp   | 7210 SAS-Sx 1/10GE 24Tp 4SFP+ PoE
sas-sx-48tp-4sfpp   | 7210 SAS-Sx 1/10GE 48Tp 4SFP+ PoE

Table 27 lists the supported card types associated with the 7210 SAS-S 1/10GE platform.

Table 27:  Card Types for 7210 SAS-S 1/10GE Platform 

Card Type           | 7210 SAS-S 1/10GE Platform
--------------------+-----------------------------------
sas-s-24sfp-4sfpp   | 7210 SAS-S 1/10GE 24F 4SFP+
sas-s-48sfp-4sfpp   | 7210 SAS-S 1/10GE 48F 4SFP+
sas-s-24t-4sfpp     | 7210 SAS-S 1/10GE 24T 4SFP+
sas-s-48t-4sfpp     | 7210 SAS-S 1/10GE 48T 4SFP+
sas-s-24tp-4sfpp    | 7210 SAS-S 1/10GE 24Tp 4SFP+ PoE
sas-s-48tp-4sfpp    | 7210 SAS-S 1/10GE 48Tp 4SFP+ PoE

5.3. Provisioning Service Entities

The nodes in a VC are represented as the cards of a chassis, and each card is assigned a slot number. Services are provisioned on a card using the format slot-number/mda-number/port-number:sap-id, where the slot-number is configured for every node/card, the mda-number is always 1 for all 7210 SAS-Sx/S 1/10GE platforms supporting VC, and the port-number corresponds to the front-panel Ethernet port number (1 to 28 or 1 to 52, depending on the 7210 SAS-Sx/S 1/10GE platform variant).
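For example, SAP 100 on front-panel port 10 of the node in slot 3 is referenced as 3/1/10:100. Under an Epipe service, such a SAP could be created as follows (a sketch only; the service and customer IDs are illustrative):

```
A:Dut-A# configure service epipe 50 customer 1 create
A:Dut-A>config>service>epipe# sap 3/1/10:100 create
```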

5.4. Preprovisioning a VC

To preprovision a VC, users must configure the card-type for the slot they want to preprovision. When the card type is configured, the ports are automatically created for that slot, allowing configuration of the services using the ports of the slot.

The vc-stack-node command can be configured later and is independent of the card-type configuration. The vc-stack-node configuration identifies the slot and the MAC address of the node and is used only by the IMM-only node to become part of the VC. If the software detects a mismatch between the hardware type of the IMM-only node and the configured/provisioned card type, it logs an error and reboots the card, allowing the user to correct the value and fix the problem.
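The resulting preprovisioning sequence can be sketched as follows: configure the card type first, provision services against the slot, and add the vc-stack-node mapping later once the node's chassis MAC address is known (all values shown are illustrative):

```
A:Dut-A# configure card 3 card-type sas-s-48t-4sfpp
# ports 3/1/1 through 3/1/52 now exist and services can be configured on them
A:Dut-A# configure system vc-stack-node 3 mac-address d0:99:d5:92:1f:41 create
```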

5.5. Configuring a System Resource Profile for a VC

A node system resource profile is divided into two parts:

  1. a global resource profile to manage the VC-wide resources for certain features
  2. a resource profile per card/slot to manage the resources on a specific slot

The per card/slot resource profile is useful for allocating resources to different features on each card to suit the application or service being delivered on those access ports.

The following output is an example of the per-node global system resource profile for a VC.

A:Dut-A# configure system global-res-profile
A:Dut-A>config>system>glob-res# info detail
----------------------------------------------
            router
                no ecmp
                no ldp-ecmp
                no max-ip-subnets 
                no max-ipv6-routes 
            exit
----------------------------------------------
A:Dut-A>config>system>glob-res#

The following output shows an example of the per-card/slot resource profile for a VC.

*A:Dut-A# configure system resource-profile 1 
*A:Dut-A>config>system>res-prof# info detail 
----------------------------------------------
            description "Default System Resource Profile Policy"
            no g8032-control-sap-tags
            ingress-internal-tcam 
                sap-aggregate-meter 2
                qos-sap-ingress-resource 4
                    mac-match-enable 6  
                    ipv4-match-enable 6  
                    ipv6-ipv4-match-enable 6   
                    ipv4-mac-match-enable 6    
                exit
                acl-sap-ingress 2  
                    ipv4-match-enable 5  
                    ipv6-64-only-match-enable 5  
                    ipv4-ipv6-128-match-enable 5   
                    mac-match-enable 5   
                exit
                eth-cfm 2 
                    no up-mep 
                    sap-down-mep 1  
                    sdp-down-mep-ing-mip 1  
                exit
            exit
            egress-internal-tcam 
                acl-sap-egress 2  
                    mac-ipv4-match-enable 2
                    no ipv6-128bit-match-enable  
                    no mac-ipv6-64bit-match-enable 
                    no mac-match-enable  
                exit
                no egress-sap-aggregate-meter 
            exit
----------------------------------------------
*A:Dut-A# show card 1 active-resource-profile 
=======================================================================
Active System Resource Profile Information
=======================================================================
-----------------------------------------------------------------------
IPv6 FIB
-----------------------------------------------------------------------
max-ipv6-routes            : disable    
-----------------------------------------------------------------------
Router
-----------------------------------------------------------------------
max-ip-subnets             : 2000       
-----------------------------------------------------------------------
-----------------------------------------------------------------------
Router
-----------------------------------------------------------------------
system-max-ecmp            : 1          
L3-max-ecmp-groups         : 1024       
Ldp-max-ecmp-groups        : disable    
Ldp-ecmp-percent-value     : disable    
-----------------------------------------------------------------------
Ingress Internal CAM       : 8          
-----------------------------------------------------------------------
Sap Ingress Qos resource   : 4          Sap  Aggregate Meter (#)   : 2
-----------------------------------------------------------------------
IPv4 Resource              : max        Mac  Resource              : max
IPv4-IPv6 Resource         : disable    IPv4-Mac Resource          : max
-------------------------------------------------------------------------------
Net Ingress Qos resource   : disable          
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Sap Ingress ACL resource   : 2          
-------------------------------------------------------------------------------
IPv4 Resource              : max        Mac  Resource              : max
IPv4-IPv6 128 bit Resource : disable    IPv6 64 bit Resource       : disable
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Eth CFM                    : 2          
-------------------------------------------------------------------------------
sap-down-mep                   : 1     up-mep                     : disable
sdp-down-mep-ing-mip           : 1
-------------------------------------------------------------------------------
Egress Internal CAM        : 2          
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Sap Egress ACL resource    : 2          
-------------------------------------------------------------------------------
Mac and IPv4 Resource      : 2          Mac-only Resource          : disable
IPv6 128 bit Resource      : disable    Mac and IPv6 64 bit Resour*: disable
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Egr sap agg-meter          : disable    
-------------------------------------------------------------------------------
MBS pool                   : Node       
-------------------------------------------------------------------------------
Decommissioned Ports       : Not Applicable
-------------------------------------------------------------------------------
===============================================================================
* indicates that the corresponding row element may have been truncated.
# indicates that the value will take effect only after reboot or clear card.

5.6. VC Boot Scenarios

This section describes the required configurations and the expected behaviors of different boot up scenarios.

5.6.1. First Time Manual Boot of Nodes in the Stack

The following steps outline how to manually bring up a VC using nodes shipped from the factory that are being booted up for the first time.

Users can boot up a single node or a set of nodes that are to be part of a VC. As well, users can power up one node, all the nodes, or a set of nodes; there is no restriction on the power sequence. The following steps describe the sequence when all nodes are booted up together; however, the steps here do not dictate the order of service provisioning and do not preclude preprovisioning of services on the active CPM. Service preprovisioning is allowed after the VC configuration, including the VC node member configuration, is complete.

Note:

The CPM-IMM nodes are configured through the BOF and do not require further configuration to be part of the VC.

  1. Identify the two nodes that will take on the CPM role.
  2. Ensure there is console access to each CPM node to drive the boot process. See the following for information: 7210 SAS-Sx 10/100GE Console Port, 7210 SAS-S 1/10GE Console Port, and Procedure to Connect to a Console.
  3. Ensure the CPM nodes are powered on.
  4. Ensure that the SD card containing the boot loader (boot.tim) and TiMOS (both.tim) software is installed in the SD card slot for each CPM node.
    In Release 11.0, the TiMOS image (both.tim) for the CPM-IMM nodes can be obtained from the local SD card or obtained through the network by specifying the primary/secondary/tertiary image locations appropriately.
  5. On the node that will be the active CPM for the first time boot, log in through the console port and manually interrupt the boot process.
    The boot prompt appears.
  6. Configure BOF parameters as follows:
    1. specify standalone-vc for the boot loader parameter chassis-role
    2. specify a name for the boot loader parameter vc-stack-name
    3. specify cpm-imm for the boot loader parameter vc-stack-node-type
    4. specify the slot number and MAC address of each CPM-IMM node in the bof>vc-stack-node command
    5. specify the management IP address for the active and standby CPM-IMM nodes in the bof>eth-mgmt-address command
    6. configure the TiMOS image location
    7. configure the configuration file location
    The following output shows an example of a BOF configuration for a CPM-IMM node.
    A:Dut-A# show bof
    ========================================================================
    BOF (Memory)
    ========================================================================
        primary-image      ftp://*:*@135.250.127.36/./images/7xxx-hops/
        primary-config     cf1:\default.cfg
    #eth-mgmt Port Settings:
        no  eth-mgmt-disabled
        eth-mgmt-address   10.135.17.166/24 active
        eth-mgmt-address   10.135.17.167/24 standby
        eth-mgmt-route     10.250.0.0/16 next-hop 10.135.17.1
        eth-mgmt-autoneg
        eth-mgmt-duplex    full
        eth-mgmt-speed     100
    #System Settings:
        wait               3
        persist            off
        console-speed      115200
        no  console-disabled
        vc-stack-name      VC-3
        vc-stack-node-type cpm-imm
        vc-stack-node cpmA slot-num 1 mac-addr d0:99:d5:91:1c:41
        vc-stack-node cpmB slot-num 2 mac-addr d0:99:d5:90:1e:41
    =======================================================================
  7. On the node that will be the standby CPM for the first time boot, log in through the console port and manually interrupt the boot process.
    The boot prompt appears.
  8. Configure BOF parameters as follows:
    1. specify standalone-vc for the boot loader parameter chassis-role
    2. specify a name for the boot loader parameter vc-stack-name
    3. specify cpm-imm for the boot loader parameter vc-stack-node-type
    4. specify the slot number and MAC address of each CPM-IMM node in the bof>vc-stack-node command
    5. configure the TiMOS image location
    The CPM-IMM nodes undergo an arbitration process in which the node assigned the lower slot number is chosen as the active CPM node.
    The node chosen as the active CPM node loads the TiMOS image and boots up as the active CPM-IMM node. The standby CPM also proceeds with boot up. However, it does not have the VC configuration on first-time boot and needs to wait for the active CPM to provide the VC configuration through VC management messages.
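    Based on the preceding steps, the BOF on the standby CPM-IMM node might look similar to the following sketch. The stack name, slot numbers, MAC addresses, and image location shown are the example values used earlier in this procedure and must match those configured on the active CPM.

```
========================================================================
BOF (Memory)
========================================================================
    primary-image      ftp://*:*@135.250.127.36/./images/7xxx-hops/
#System Settings:
    wait               3
    persist            off
    console-speed      115200
    no  console-disabled
    vc-stack-name      VC-3
    vc-stack-node-type cpm-imm
    vc-stack-node cpmA slot-num 1 mac-addr d0:99:d5:91:1c:41
    vc-stack-node cpmB slot-num 2 mac-addr d0:99:d5:90:1e:41
========================================================================
```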
  9. Identify the node that will take on the IMM role.
  10. Ensure there is console access to the IMM node to drive the boot process.
  11. Ensure that the SD card containing the appropriate boot loader (boot.tim) software is installed in the SD card slot.
  12. Power up the node that will be the IMM node, log in through the console port and manually interrupt the boot process.
    The boot prompt appears.
  13. Configure the boot loader BOF parameter chassis-role as standalone-vc.
    The boot loader parameter vc-stack-node-type retains its default value of imm-only.
    Note:

    The vc-stack-node-type parameter is stored in the boot flash (and not in the boot option file) and must be modified using the BOF menu prompts.

    The VC stack name, slot number, image location, configuration location, IP addresses, and route information do not need to be configured on the IMM-only node.
    The following output shows an example of a BOF configuration for an IMM-only node.
    ========================================================================
    BOF (Memory)
    ========================================================================
    #System Settings:
        wait               3
        persist            off
        console-speed      115200
        no console-disabled
        vc-stack-node-type imm-only
    =======================================================================
  14. After the IMM-only BOF parameters are configured, save the BOF by typing Yes when prompted, and let the boot loader proceed with the boot process.
    The IMM-only node waits to receive VC configuration information from the active CPM-IMM node.
  15. Take note of the MAC address of the node chosen as the IMM-only node. The MAC address is the chassis MAC address printed on the label of the node.
  16. At the TiMOS prompt on the active CPM node, proceed with the VC configuration by using the config>system>vc-stack>vc-stack-node command to configure the IMM-only node slot number and MAC address.

When the VC configuration information has been committed in the CLI, the active CPM node sends the VC configuration to all other nodes in the VC through VC management messages over the stacking ports.
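For example, assuming the IMM-only node is assigned slot 3 and has the hypothetical chassis MAC address d0:99:d5:92:1f:41, the configuration on the active CPM might look similar to the following sketch. The argument keywords shown mirror the BOF vc-stack-node syntax illustrated earlier; the exact form in the config context may differ, so consult the command reference.

```
A:Dut-A# configure system vc-stack
A:Dut-A>config>system>vc-stack# vc-stack-node slot-num 3 mac-addr d0:99:d5:92:1f:41
```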

When the IMM-only nodes receive the VC management messages from the active CPM, they confirm the VC configuration by matching their chassis MAC address to the MAC address contained in the VC message. They then retrieve the TiMOS image from the active CPM and use it to boot up.

While booting up with the TiMOS image, the standby CPM node undergoes another CPM election arbitration process, which results in the node detecting itself as standby. The node then executes the HA reconcile process and synchronizes its management and control plane state with the active CPM. It receives the current running configuration with the VC configuration and the current BOF configuration present on the active CPM.

The VC is now ready for use.

5.6.2. Subsequent Reboot of the Stack (with Correct BOF Present)

This scenario assumes that the BOF configured on each node during the first time boot up is saved locally and contains the VC configuration required for the node to boot up. It also assumes that the chassis-role parameter in the bootflash is set to standalone-vc for each node.

The following sequence occurs in the boot loader context and in the TiMOS context. There is no user intervention required.

  1. In the boot loader context, all the nodes in the VC read their bootflash and the BOFs that are present locally, and receive their VC configuration.
  2. In the boot loader context, the nodes configured as cpm-imm go through a CPM election process and the node with the lower slot number is chosen as the active CPM. Both nodes proceed to boot up using the TiMOS image (both.tim).
  3. In the TiMOS context, both the cpm-imm nodes again read the BOFs that are present locally and receive their VC configuration.
  4. In the boot loader context, the imm-only nodes wait to receive VC management messages from the active CPM, when the active CPM has booted up. When the imm-only nodes receive VC management messages that contain their MAC address and slot number, they proceed to boot up using the TiMOS image (both.tim). A local copy of the TiMOS image is used if it matches the version available on the active CPM; otherwise, the imm-only nodes retrieve the newer version from the active CPM. The imm-only nodes do not participate in the CPM election process.
  5. In the TiMOS context, the cpm-imm nodes go through the CPM election arbitration process to elect the active CPM node. The active CPM node then initializes the chassis manager and initiates a HA reconcile process with the standby CPM node.

If both the cpm-imm nodes come up as active, which is detected by both nodes receiving each other's VC management messages claiming themselves to be active, the node with the higher slot number reboots itself.

5.7. Replacing and Upgrading a Node in a VC

This section describes how to:

  1. replace a CPM-IMM node or IMM-only node
  2. upgrade an existing IMM-only node to a CPM-IMM node
  3. downgrade a CPM-IMM node to an IMM-only node

5.7.1. Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node

When replacing a node, the SD card used to boot the replacement node must contain versions of the boot.tim and both.tim images that match those on the active CPM. If the versions do not match, the bootup will not be successful.

If a VC is configured with only a single CPM-IMM node and that node fails, the entire VC is down, including the IMM-only nodes. The IMM-only nodes reboot if they do not hear from the active CPM node after some time.

If the stacking ports are not connected together to form a ring linking all the nodes of the VC, any failure in the VC (whether a link or a node failure) will isolate some nodes from the VC, affecting all the services on those nodes. In addition, if the uplink is through one of the nodes that has been isolated or has failed, services using that uplink for the entire VC are affected. The procedure here assumes a ring of stacking ports is configured.

In the following scenario, it is assumed that a CPM-IMM node has failed and needs to be replaced. It is assumed that a redundant CPM node is configured and another CPM-IMM node is currently active. The replacement node might be a new node from the factory or a node that has been previously used as an IMM node.

  1. Use the MAC address or slot number to determine whether the failed node is CPM-A or CPM-B. The node to be added as a CPM-IMM node must be configured based on which one failed. For the purposes of this example, it is assumed that CPM-B (the standby CPM) has failed.
  2. Disconnect the stacking port on the failed node and complete the stacking ring.
  3. On the active CPM, remove the configuration of the failed CPM-IMM node (CPM-B) from the BOF using the bof no vc-stack-node command.
  4. On the active CPM, add the configuration of the new CPM-IMM node as CPM-B to the BOF using the bof vc-stack-node command.
  5. Save the BOF for the configuration to take effect immediately.
  6. If the replacement node is an existing IMM-only node, follow these steps.
    1. Remove the IMM-only node configuration from the configuration file of the active CPM using the config system vc-stack no vc-stack-node command.
    2. Connect to the console of the IMM-only node.
    3. Reboot the node by toggling the power button.
    4. On boot up, interrupt the boot process and configure the node as CPM-IMM (CPM-B) by configuring the following BOF parameters:
      1. specify cpm-imm for vc-stack-node-type
      2. specify the slot number and MAC address of each CPM-IMM node in the bof>vc-stack-node command
  7. If the replacement node is a new node, follow these steps.
    1. Physically connect the new node into the VC ring (the stacking ports are connected to form a ring so that an alternate path always exists).
    2. Connect to the console of the new node.
    3. Power on the node.
    4. On boot up, interrupt the boot process and configure the node as a CPM-IMM node by configuring the following boot loader parameters:
      1. specify cpm-imm for vc-stack-node-type
      2. specify the slot number and MAC address of each CPM-IMM node in the bof vc-stack-node command
  8. Save the BOF and let the boot proceed.
    The node comes up as the standby CPM node and communicates with the active CPM to start the HA synchronization process. The master LED on the standby CPM blinks; the master LED on the active CPM glows steady.
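As a sketch of Steps 3 through 5 on the active CPM, assuming CPM-B failed and the replacement node's chassis MAC address is the hypothetical value d0:99:d5:93:2a:41 (the bof save step assumes the BOF is saved to its default location):

```
A:Dut-A# bof no vc-stack-node cpmB
A:Dut-A# bof vc-stack-node cpmB slot-num 2 mac-addr d0:99:d5:93:2a:41
A:Dut-A# bof save
```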

Services are not interrupted during the preceding process because the active CPM (CPM-A) is present and has not been rebooted (assuming the active CPM has an uplink to the network). If one of the IMM-only nodes was converted to a CPM-IMM node, the services on that IMM are affected as the node is rebooted to effect the change to a CPM-IMM node.

5.7.2. Replacing an IMM-only Node with Another Node

In this scenario, it is assumed that the IMM-only node has failed and needs to be replaced. The replacement node can be a new node from the factory or a node that was previously used as an IMM node in another VC. In the former case, the procedure is similar to adding a node received from the factory; see First Time Manual Boot of Nodes in the Stack starting at Step 9.

The SD card used to boot the node must contain a version of the boot loader image (boot.tim) that matches that of the active CPM. If the versions do not match, the boot will not be successful.

If a ring of stacking ports is not configured, any failure in the stack (either a link or a node failure) will isolate some nodes from the stack, affecting all the services on those nodes. In addition, if the uplink is available through one of the nodes that is now isolated or failed, services using the uplink for the entire VC are affected.

The following procedure assumes a ring is configured and that the replacement node had been previously used as an IMM node in another VC.

  1. Disconnect the stacking port of the failed IMM-only node.
  2. Connect to the console of the active CPM node.
  3. On the active CPM node, use the no vc-stack-node command to remove the failed IMM-only node configuration from the configuration file of the active CPM.
  4. On the active CPM node, add the configuration for the new IMM-only node using the config>system>vc-stack>vc-stack-node command, specifying the MAC address of the new IMM-only node and reusing the same slot number as the failed IMM-only node.
  5. Connect to the console of the replacement IMM node.
  6. Connect the new IMM node to the stacking ring.
  7. Reboot the IMM node by toggling the power to it.
  8. On boot up, interrupt the boot process and specify imm-only for the vc-stack-node-type boot loader parameter.
  9. Save the BOF configuration and let the boot proceed.
    The node will boot up as an IMM-only node. The appropriate stack LED on the front panel of the node will glow steady.
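As a sketch of Steps 3 and 4 on the active CPM, assuming the failed IMM-only node occupied slot 3 and the replacement node's chassis MAC address is the hypothetical value d0:99:d5:94:3b:41 (the argument keywords mirror the BOF vc-stack-node syntax shown earlier and may differ slightly in the config context):

```
A:Dut-A# configure system vc-stack
A:Dut-A>config>system>vc-stack# no vc-stack-node slot-num 3
A:Dut-A>config>system>vc-stack# vc-stack-node slot-num 3 mac-addr d0:99:d5:94:3b:41
```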

Services are not interrupted, as the active CPM (CPM-A) is present and has not been rebooted (assuming the active CPM has an uplink to the network). If one of the IMM-only nodes is converted to a CPM-IMM node, the services on that IMM node are affected as the node is rebooted to effect the change to a CPM-IMM node.

5.7.3. Replacing the Current Active CPM node with Another Node

If the active CPM node needs to be replaced and has not failed, switchover to the standby CPM can be forced before using the procedure outlined in Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node.

5.7.4. Expanding a VC by Adding a New IMM-only Node

This scenario is similar to Replacing an IMM-only Node with Another Node except that instead of replacing an existing IMM-only node, a new node is being added. This requires the configuration file on the active CPM to be updated by adding the new IMM-only node VC configuration parameters using the config>system>vc-stack>vc-stack-node command, without deleting or modifying any existing VC configuration parameters.

5.7.5. Removing a Node from a VC (Standby CPM or IMM)

If the node to be removed is the standby CPM-IMM node, see Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node and update the BOF on the active CPM node by removing the VC configuration parameters for the node being removed.

If the node to be removed is an IMM-only node, see Replacing an IMM-only Node with Another Node and update the configuration file on the active CPM node by removing the VC configuration parameters for the node being removed.

5.7.6. Adding a New Standby CPM Node Into an Existing VC

This scenario is similar to Replacing a Standby/Active CPM-IMM Node with Another CPM-IMM Node, with the VC configuration information of the new CPM-IMM node being added to the BOF of the active CPM node.

5.7.7. Configuration Guidelines for Upgrading, Adding, or Removing a VC Node

The following configuration guidelines apply for upgrading, adding, or removing a VC node.

  1. When upgrading an IMM-only node to a CPM-IMM node, ensure connectivity is available between the IMM and the current active CPM-IMM node over the stacking links.
  2. A power cycle may be required if the role is changed from a CPM-IMM node to an IMM-only node.

5.8. VC Split Scenarios

A VC split can occur as a result of the following failures in a VC:

  1. a single stacking link failure
  2. a single node failure
  3. a double failure consisting of two failed links, two failed nodes, or a failed link and a failed node

In the event of a single stacking link failure in a VC, all nodes, including the active and standby CPM-IMM nodes, can continue to communicate with each other. There is no impact on the control plane and no services are lost because an alternate path exists around the point of failure. Services on all the IMM-only nodes continue to forward traffic; however, there might be an impact to the switching throughput because the stacking port bandwidth is reduced by half.

Similarly, in the event of a single node failure in a VC, the VC can continue to operate with the active CPM-IMM node (or the standby CPM-IMM node if the active CPM-IMM failed). There is no impact to the control plane, and services on all the remaining IMM-only nodes continue to forward traffic. However, services provisioned on the failed node are lost, and there might be an impact to the switching throughput because the stacking port bandwidth is reduced by half.

If a double failure occurs (two links fail, two nodes fail, or a link and a node fail), the VC is split into two islands of nodes. One of these islands must take ownership of the VC. To decide which island owns the VC and continues normal operation, the following occurs:

  1. If an island has only IMM-only nodes, they all reboot.
  2. If one of the islands has both active and standby CPM-IMM nodes, nothing happens to the nodes in that island. They continue to work as normal and services configured on these nodes continue to operate in the VC without impact. The nodes on the second island reboot.
  3. If one island has the active CPM-IMM node and the other has the standby CPM-IMM node, the island with the greater number of nodes continues to function as the VC, and all the nodes in the other island reboot. If the island with the active CPM-IMM has the greater number of nodes, that island continues normal operation while all the nodes in the island with the standby CPM-IMM reboot (including the standby node). If the island with the standby CPM-IMM has the greater number of nodes, the standby node takes over the role of active CPM-IMM and that island continues operation; all the nodes in the other island reboot.
  4. If both islands have the same number of nodes, the node that was the standby CPM-IMM node before the failure becomes the active CPM-IMM. All the nodes in the island with the previously active CPM-IMM node (that is, the node that was active before the failure) reboot. While rebooting, if a CPM-IMM node is unable to contact the current active CPM node from the boot loader (boot.tim), it reboots again. The IMM-only nodes reboot and then wait to hear from the active CPM (that is, they wait for the operator to fix the problem and rejoin the islands).

CPM-IMM nodes store information to detect whether a reboot is occurring because of a VC split. If a split has occurred, the node attempts to connect to the current active CPM-IMM. When it reaches the active CPM-IMM and boots up successfully, it clears the stored VC split information. If the CPM-IMM node cannot reach the current active CPM node, it reboots and repeats the process until it can successfully reach an active CPM.

Users can clear the stored VC split information and stop the reboot cycle by interrupting the boot process and providing a Yes response at the boot loader prompt to clear the VC split information.