In this post I will talk about a feature provided by NX-OS when installed on the Nexus 7K, which is the Virtual Device Context or VDC. I will cover its operation, configuration and verification commands from a CLI point of view.
Virtual Device Context (VDC):
The Virtual Device Context or VDC is a feature provided by NX-OS when it is installed on the Nexus 7K. It is used to divide or partition the single physical chassis (I mean the single Nexus 7K chassis) into multiple logical/virtual devices, hence the name Virtual Device. It has a similar concept to creating multiple Virtual Routing and Forwarding instances or VRFs on a device: when we configure multiple VRFs, we divide the default routing and forwarding tables into multiple routing and forwarding tables, each belonging to a certain VRF (I know that the concept is not fully the same, but this simplifies the idea). When we configure the VDC feature on the Nexus 7K chassis, we divide this single physical chassis into multiple logical/virtual devices, each with its own resources from both a software and a hardware point of view. For sure, not all hardware or software resources can be unique to each individual VDC; for example, we can't assign a unique Supervisor Engine to each individual VDC, we can't assign unique Fabric modules to each individual VDC, and so on.
So what are the advantages of this division or separation?
- It separates the control plane of each VDC, meaning that the control plane protocols (STP instances, routing protocol processes, ….) running in each VDC are totally isolated from each other and will not communicate with each other, unless you provide a physical connection from one VDC to the other. In that case the control plane protocols running in each VDC can communicate with each other, so the separation is no longer provided from a physical point of view, but it is still provided internally (I mean within the Nexus 7K chassis itself) by NX-OS, which totally isolates the control plane protocols of each VDC from the other configured VDCs.
- It separates the data plane of each VDC, meaning that the data plane traffic (the user traffic carried over this chassis) of each VDC is totally isolated from the others. You may configure one VDC for customer A while a second VDC belongs to customer B; for sure we don't want traffic belonging to customer A to be seen inside customer B's network and vice versa. For this reason the two VDCs shouldn't be connected via an external physical connection, which keeps the traffic belonging to each customer totally secured and isolated from the other's traffic.
- It separates the management plane of each VDC, meaning that the management plane traffic of each VDC is totally isolated, so each VDC is managed only inside its separate management network and shouldn't be seen inside the management networks of the other VDCs, unless you provide a physical connection from one VDC to the other. In that case the management plane traffic of each VDC can be seen on the same management network, so a single management network can be shared by all the VDCs for management purposes.
- It helps contain any failure or bad change within the VDC where it occurred, so that this failure or change will not affect the operation of the other configured VDCs.
- It isolates the different resources of each individual VDC: if we configure VRFs, VLANs, addresses, …. then each VDC has its own VRFs, VLANs, addresses, …. and they can even have the same names, as they are totally isolated from each other and will not conflict.
As mentioned before, each VDC acts as a unique separate device, with its own resources used only by that VDC, so don't worry about anything that you think might conflict between the configured VDCs, as NX-OS takes care of this. Each VDC can also define a High Availability policy, which is used by the chassis admin to define the actions NX-OS takes when a failure happens to the configured VDC, so that once the failure happens, NX-OS takes the action defined by the admin and the VDC operation is restored automatically without manual intervention. The action depends on the implementation: if there is only one Supervisor engine installed on the Nexus 7K chassis, the admin can define an action to shut down or reload the VDC, or reload the Supervisor engine itself, while if there are two Supervisor engines installed on the chassis, the admin can define an action to shut down or reload the VDC, or perform a Stateful Switchover (SSO), so that the standby Supervisor engine becomes the new active one.
The following figure shows the physical and logical representation of the Nexus 7K chassis and its configured VDC:
All the hardware components that can be replaced, added or removed are shared between all the configured VDCs, as follows:
- Supervisor engine(s).
- Fabric Module(s).
- Fan tray(s).
- Power Supply(ies).
- A single kernel instance that runs all the instances and processes used by the different configured VDCs.
There are two types of VDCs:
- Default/Admin VDC:
It is the default VDC defined by NX-OS, which supports full functions and capabilities. It is also called the Admin VDC, and it has special tasks and functions that are supported only by this default/Admin VDC and can't be performed from any other user-defined VDC, such as:
a-Creating and Deleting user-defined VDCs.
b-Allocating resources to each VDC.
c-Performing NX-OS upgrades for the user-defined VDCs.
d-Configuring Ethanalyzer captures for both control and data plane traffic.
- Non-Default user-defined VDC:
It is a VDC that is defined by the user/admin of the chassis, and it can be configured from the default VDC. It also supports full functions and capabilities, but the tasks that can be performed only by the default VDC can't be performed by the non-default user-defined VDC. The following points describe what can be done on the non-default user-defined VDC:
a-The admin can make changes on this non-default user-defined VDC, but these changes will affect only this VDC's operation and have no effect at all on the other configured VDCs.
b-The protocols and processes defined on the VDC are unique, don't conflict with the other VDCs, and affect only this VDC's operation.
c-The configuration file is unique to this VDC.
d-If we define a checkpoint file for this VDC, it will also be unique to this VDC.
e-The VRFs, VLANs, addresses, …. defined for this VDC are unique to this VDC and don't conflict with the other VDCs.
As mentioned before, we can assign hardware resources to each VDC, such as physical ports and the memory allocated to the different data structures. The physical ports assigned to a VDC must belong to a certain switching module or I/O module, and there are different switching or I/O module types these ports can belong to. So before assigning physical ports to the VDC, we first need to define which switching or I/O module types can be used with this VDC; once we define that, we can determine which physical ports belonging to which I/O module can be assigned to this VDC. There are different I/O module types that can be used with a VDC (m1, m1xl, f1, f2 and m2xl), and each I/O module type supports certain features, functionalities and capabilities running at different throughput.
NX-OS also supports creating a non-default user-defined storage VDC that is used for SAN purposes. The type of I/O module that can be used with this storage VDC depends on the NX-OS release installed on the supervisor engine, as some NX-OS releases support only the f1 module with the storage VDC, while other releases support the f2 and f2e modules with the storage VDC.
As mentioned before, a physical port can be assigned to only one VDC and can't be shared between multiple VDCs, but this rule has an exception with the shared interface/port, which can be shared between a LAN and a SAN VDC. This means that the shared interface/port is used by both a single LAN VDC and a single SAN VDC. In this case, when the switch receives an Ethernet frame, it checks the EtherType field carried inside the Ethernet frame header to see whether the payload is a Fibre Channel frame or a normal payload belonging to the LAN implementation, and based on that, NX-OS determines which VDC (LAN or SAN) should process this Ethernet frame.
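As a minimal sketch of how a shared interface is allocated, the commands below assume a storage VDC named SAN-VDC (a hypothetical name) already exists and that Ethernet 3/1-4 sit on an F-series module supported by the installed NX-OS release:

```
! From the default/admin VDC, enter the storage VDC context
switch(config)# vdc SAN-VDC
! Allocate the ports as shared interfaces, so both the LAN VDC
! that owns them and this storage VDC can use them
switch(config-vdc)# allocate shared interface ethernet 3/1-4
```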
There are multiple steps for a full VDC configuration on the Nexus 7K chassis. As mentioned before, the VDCs are configured from the default/admin VDC, as this is one of the tasks that can be performed only by the default/admin VDC. There is a maximum number of VDCs that can be configured on the Nexus 7K chassis, and this maximum is based on the installed supervisor engine: Supervisor 1 supports only 4 VDCs (1 default/Admin VDC + 3 non-default VDCs), Supervisor 2 supports 5 VDCs (1 default/Admin VDC + 4 non-default VDCs), and Supervisor 2e supports 9 VDCs (1 default/Admin VDC + 8 non-default VDCs).
1-The first step is to create the VDC, which you can do using the following commands:
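A minimal sketch of VDC creation from the default/admin VDC; the VDC name Customer-A is hypothetical:

```
switch# configure terminal
! Creating the vdc context also creates the VDC itself
switch(config)# vdc Customer-A
switch(config-vdc)# exit
! From the default VDC you can then move your session into the new VDC
switch# switchto vdc Customer-A
```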
2-The second step is to define which switching or I/O module types can be used with this VDC; once we define that, we can determine which physical ports belonging to which I/O module can be assigned to this VDC. As mentioned before, there are different I/O module types that can be used with the VDC (m1, m1xl, f1, f2 and m2xl), and each I/O module type supports certain features, functionalities and capabilities running at different throughput. You can limit the I/O module types that can be used with the VDC using the following commands:
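A sketch of limiting the module types, again using a hypothetical VDC named Customer-A and restricting it to the M-series types as an example:

```
switch(config)# vdc Customer-A
! Only ports on m1, m1xl and m2xl modules may now be allocated to this VDC
switch(config-vdc)# limit-resource module-type m1 m1xl m2xl
```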
3-The third step is to allocate physical interfaces/ports to the VDC, which you can do using the following commands:
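A sketch of interface allocation for a hypothetical VDC named Customer-A; the module/port numbers are illustrative:

```
switch(config)# vdc Customer-A
! Allocate a range of ports from module 2 to this VDC
switch(config-vdc)# allocate interface ethernet 2/1-8
! Interfaces can also be allocated individually
switch(config-vdc)# allocate interface ethernet 2/9
```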
Based on the installed switching or I/O module, there is a certain limitation that forces you to allocate certain physical interfaces/ports to the same VDC. For example, there is a terminology called the "Port Group", which means that some ports are grouped together based on the ASIC design of the module; from a VDC point of view, all the interfaces/ports that belong to the same port group must be allocated to the same VDC because of this ASIC design limitation.
Here I will not mention all the available I/O modules; I will just mention some of them and whether they have certain interface/port allocation requirements or not. Some I/O modules require all the ports belonging to the same port group to be allocated to the same VDC. A port group may consist of 4 ports, and their locations may vary based on the ASIC design of the module: some port groups have the ports (1, 3, 5 and 7), meaning that the first 4 odd ports belong to the first odd port group, the second 4 odd ports (9, 11, 13 and 15) belong to the next odd port group, and so on; the same applies to the even ports, where the first 4 even ports (2, 4, 6 and 8) belong to the first even port group, the second 4 even ports (10, 12, 14 and 16) belong to the next even port group, and so on. Another design requires that the port group has the ports (1, 2, 3 and 4), which means that the ports in this design are in sequence: ports (1, 2, 3 and 4) belong to the first port group, ports (5, 6, 7 and 8) belong to the second port group, and so on. Some modules don't have any port allocation requirements, which means that any port can be allocated to any VDC without any limitations, and some modules require two ports to be allocated to the same VDC, meaning ports (1 and 2) should be assigned to the same VDC, ports (3 and 4) should be assigned to the same VDC, and so on. The following figures show different I/O modules with their interface/port allocation requirements:
- I/O module consisting of port groups, each of 4 ports (either odd or even):
- I/O module consisting of port groups, each of 4 ports in sequence (i.e. 1, 2, 3 and 4):
- I/O module consisting of port groups, each of 2 ports in sequence (i.e. 1 and 2):
- I/O module without port groups, where each port can be assigned individually to any VDC, hence no interface/port allocation requirements.
So before determining which ports should be allocated to which VDC, we first need to know which switching or I/O module we are dealing with, as your design may conflict with the interface/port allocation requirements; in that case you are forced to follow this limitation and modify your design.
4-The fourth step is to allocate physical resources to the VDC, where we can control the amount of memory assigned to the different data structures on this VDC, such as the IPv4 unicast routing table, IPv6 unicast routing table, IPv4 multicast routing table and IPv6 multicast routing table, as well as the number of VLANs, VRFs and port channels that can be configured under this VDC. You can control these numbers for the VDC using the following commands:
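A sketch of resource limits for a hypothetical VDC named Customer-A; all the minimum/maximum values below are illustrative (route memory is in MB), not recommendations:

```
switch(config)# vdc Customer-A
! Routing table memory: u4/u6 = IPv4/IPv6 unicast, m4/m6 = IPv4/IPv6 multicast
switch(config-vdc)# limit-resource u4route-mem minimum 32 maximum 32
switch(config-vdc)# limit-resource u6route-mem minimum 16 maximum 16
switch(config-vdc)# limit-resource m4route-mem minimum 16 maximum 16
switch(config-vdc)# limit-resource m6route-mem minimum 8 maximum 8
! Counts of VLANs, VRFs and port channels allowed in this VDC
switch(config-vdc)# limit-resource vlan minimum 32 maximum 4094
switch(config-vdc)# limit-resource vrf minimum 16 maximum 200
switch(config-vdc)# limit-resource port-channel minimum 0 maximum 768
```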
5-The fifth step is to define the action to be taken by the High Availability policy for the two supervisor implementations (either single supervisor or dual supervisor). You can configure the High Availability policy for the single and dual supervisor implementations using the following commands:
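A sketch of the HA policy for a hypothetical VDC named Customer-A: here the single-supervisor action reloads the supervisor, while the dual-supervisor action performs a switchover to the standby supervisor:

```
switch(config)# vdc Customer-A
switch(config-vdc)# ha-policy single-sup reload dual-sup switchover
```

Other actions (such as bringing the VDC down or restarting it) can be chosen instead, depending on how disruptive you want the recovery to be.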
You can verify the VDC configuration using the following command:
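For example (the VDC name Customer-A below is hypothetical):

```
! Summary of all configured VDCs
switch# show vdc
! Detailed state, HA policy and boot order of one VDC
switch# show vdc Customer-A detail
```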
You can verify which interfaces/ports are members of which VDC using the following command:
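For example:

```
! Lists every interface grouped under the VDC it belongs to
switch# show vdc membership
```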
You can verify the resources (IPv4 unicast routes, IPv6 unicast routes, ….. VLANs, VRFs and port channels) you assigned to the VDC using the following command:
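For example:

```
! Shows min/max, used, unused and avail figures for each resource per VDC
switch# show vdc resource
! The output can also be narrowed to a single resource, e.g. VLANs
switch# show vdc resource vlan
```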
Hope that the post is helpful.