Why would an enterprise deploy devices such as Nexus switches in its network infrastructure? The answer is simple: it depends on the nature of the services it provides, whether to its local users or its customers, and on the type and amount of traffic passing over its network. Such an enterprise needs the great capabilities offered by the Nexus switches, together with their operating system (NX-OS), beyond what other devices/switches provide. Simply put, it requires a more powerful device that can handle this amount of traffic, one capable of Gbps or even Tbps, and that calls for specific hardware and software designed for such a situation. From a hardware point of view it needs more physical ports, more advanced ASICs (Application-Specific Integrated Circuits) designed to work at high data rates and to support more advanced applications, and far more processing and memory resources, and this simply does not exist in the well-known access/distribution/core Catalyst switches.
Now, let’s give an overview of the operating system (NX-OS) running on the Cisco Nexus switches used in the data center environment. NX-OS is considered the next-generation operating system developed by Cisco for the data center. It is designed to provide high scalability, flexibility, modularity, and virtualization, and to ensure that services and functionality are always operational, because the data center environment is critical and requires the network to be 100% available; in other words, it ensures high availability as well.
NX-OS provides benefits and features that make it more powerful than the other IOSes (Classic IOS, IOS-XE) used on the well-known Catalyst switches, so let’s see what is offered by NX-OS:
- Virtual Device Context (VDC): This NX-OS feature creates multiple contexts from the same physical switch, allowing you to partition the physical switch (Nexus 7K platform only) into multiple virtual/logical switches. Each virtual/logical switch can be used as an independent switch with its own control plane, data plane, and management plane, and each one has its own allocated interfaces dedicated only to its usage. As an example, you can use VDCs for different purposes: partition the Nexus 7K into three VDCs, one for normal operation, one for the test phase, and the last one for experiments. This causes no problem because the three planes of the three VDCs are totally isolated from each other, so don’t worry. You must worry only about the physical connections, since you may connect the VDCs to each other using an external cable (connecting a port allocated to VDC1 to a port allocated to VDC2), so take care of your external connections; if everything is OK, then don’t worry.
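As a rough sketch of the partitioning described above (the VDC names and interface ranges here are made up purely for illustration), the configuration from the default VDC of a Nexus 7K might look like this:

```
! From the default (admin) VDC, create the contexts and give each
! its own dedicated interfaces (illustrative names and ranges):
N7K(config)# vdc Prod
N7K(config-vdc)# allocate interface Ethernet1/1-8
N7K(config-vdc)# exit
N7K(config)# vdc Test
N7K(config-vdc)# allocate interface Ethernet1/9-16
N7K(config-vdc)# exit
N7K(config)# vdc Lab
N7K(config-vdc)# allocate interface Ethernet1/17-24
! Enter a VDC and configure it as if it were an independent switch:
N7K# switchto vdc Prod
```

Once inside a VDC via `switchto vdc`, the CLI behaves like a standalone switch that only sees its own allocated interfaces.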
- Virtual Port Channel (vPC): This feature is similar to the Virtual Switching System (VSS) offered on the Cisco Catalyst 6500 series, but it is a little different because of additional features now offered by NX-OS. As a reminder, vPC is used to support EtherChannel, and not the normal EtherChannel but a more advanced one: Multi-Chassis EtherChannel (MCEC). In our case the EtherChannel is created between three chassis (two upstream switches and one downstream LACP-capable device). The two upstream switches are two Nexus switches, while the downstream device may be any LACP-capable device (a switch, router, server, …). The key point is that the downstream device sees the two upstream Nexus switches as only one switch (a logical switch), thanks to the vPC feature. The below figure shows exactly what I mean: in the “Physical View” we have two upstream switches and two independent downstream switches, while in the “Logical View” each independent downstream switch sees the two upstream switches as a single logical switch (of course, it doesn’t know there are actually two upstream switches).
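A minimal vPC sketch on one of the two upstream Nexus switches might look like the following (the IP addresses, domain ID, and port-channel numbers are hypothetical; a mirror of this configuration goes on the peer switch):

```
feature vpc
feature lacp
vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1
! Peer-link between the two upstream Nexus switches:
interface port-channel 1
  switchport mode trunk
  vpc peer-link
! Port-channel toward the downstream LACP-capable device:
interface port-channel 20
  switchport mode trunk
  vpc 20
interface Ethernet1/10
  channel-group 20 mode active
```

The matching `vpc 20` number on both peers is what makes the downstream device see one logical EtherChannel partner instead of two separate switches.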
- Overlay Transport Virtualization (OTV): This NX-OS feature provides Layer 2 connectivity across a Layer 3 network. It allows remote data center sites to be connected at Layer 2 (i.e., Layer 2 extension), extending the Layer 2 domain between two remote data center sites even though in reality they reach each other via a Layer 3 underlay network; thanks to OTV, they form a Layer 2 overlay network. The below figure represents a simple topology showing the purpose of OTV:
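To give a feel for it, an OTV edge device at one site could be sketched roughly as below (the site VLAN, site identifier, join interface, multicast groups, and extended VLAN range are illustrative values only):

```
feature otv
otv site-vlan 99
otv site-identifier 0x1
interface Overlay1
  otv join-interface Ethernet1/1    ! faces the Layer 3 underlay
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  otv extend-vlan 100-110           ! VLANs stretched between the sites
  no shutdown
```

The join interface carries the encapsulated traffic over the routed underlay, while the extended VLANs appear as one Layer 2 domain across the sites.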
- FabricPath: This NX-OS feature totally eliminates Spanning Tree from the Layer 2 switched network. The main reason to eliminate STP is its Active/Standby behavior, which results from the STP rules used to avoid the Layer 2 loops that may otherwise occur. When we use STP (as we must in a normal Layer 2 switched network) and we have two uplinks between each access switch and the distribution switches, one uplink is active (i.e., carrying traffic) while the other is standby, taking over only when a failure of the active one is detected by STP; hence we don’t make use of all the available bandwidth of our uplinks. With FabricPath we no longer use STP, and we make use of all the available uplink bandwidth because all the uplinks become active, so we get Active/Active behavior and can also perform load balancing. The below figure shows the difference in behavior between STP and FabricPath: there are 3 uplinks from each access switch to the three distribution switches, and with STP running only one access-distribution uplink is active per access switch, so only 4 of the 12 uplinks are active; when we implement FabricPath, all 12 uplinks are active.
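Enabling FabricPath is, at its core, just a few commands on each participating switch; here is a sketch (the switch-ID, VLAN, and interface ranges are arbitrary examples):

```
install feature-set fabricpath
feature-set fabricpath
fabricpath switch-id 11         ! optional; auto-assigned if omitted
vlan 100
  mode fabricpath               ! carry this VLAN over the fabric
interface Ethernet1/1-3
  switchport mode fabricpath    ! uplinks become FabricPath core ports
```

Ports in `switchport mode fabricpath` no longer run classic STP; forwarding over them is driven by FabricPath’s routing logic, which is what allows all uplinks to stay active.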
- 100% system availability: This NX-OS feature ensures that the system is always available and operational, not only because of the dual supervisor engines (one Active and the other Standby) but also because of a feature called In-Service Software Upgrade (ISSU). ISSU allows you to perform a software upgrade on one supervisor while the other remains Active and operational, handling the locally processed traffic (control-plane traffic included) without any issue. Once you finish the software upgrade, you can swap roles, make the upgraded supervisor the Active one, and upgrade the other. The result is a system that stays 100% operational, minimizing outages and your downtime.
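An ISSU upgrade is typically driven by `install all`; a sketch with placeholder image filenames (check the impact first, then run the upgrade):

```
! Preview what the upgrade will do and whether it is hitless:
show install all impact kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin
! Perform the upgrade; the supervisors swap roles along the way:
install all kickstart bootflash:kickstart-image.bin system bootflash:system-image.bin
! Afterwards, verify module status and the running version:
show module
show version
```

The `show install all impact` preview is worth the extra step, since it tells you up front whether the platform can do the upgrade non-disruptively.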
- Fabric Extender (FEX): This NX-OS feature increases the number of ports managed by a switch. We can connect another switch (the child switch) to a parent switch so that the child acts as a remote fabric, or remote line card, belonging to that parent even though it is not included in the same chassis. The child is still managed and configured from the parent switch, and by this action we have increased the number of ports on the parent. For better understanding, consider a parent switch (Nexus 7K or 5K) with 24 ports (just for simplification) and a child switch (Nexus 2K) with 24 ports, connected by two links between parent and child. Because of the FEX feature, the parent manages its 24 locally attached ports plus the 24 ports of the child. From a logical point of view we can connect 22 (excluding the two fabric ports) + 24 = 46 hosts to this parent switch, because those 46 ports are all configurable from the parent switch CLI and can be considered locally attached ports. The below figure shows the purpose of FEX from a topology point of view:
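On the parent switch, the two fabric links and the child’s host ports from the example above could be sketched like this (the FEX number 101 and the interface numbers are illustrative):

```
feature fex
! The two links toward the Nexus 2K child switch:
interface Ethernet1/1-2
  switchport mode fex-fabric
  fex associate 101
! Once the child is online, its ports appear on the parent CLI
! using the FEX number as the slot, e.g. Ethernet101/1/1:
interface Ethernet101/1/1
  switchport access vlan 10
```

The `Ethernet101/1/x` naming is what makes the child’s ports feel like just another line card of the parent.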
I hope this post is helpful.