VMware NSX with Cisco ACI Underlay

Let's look at the key technical concepts involved when building NSX on top of an ACI underlay.


NSX Overlay Features

  • Network virtualization
  • L3 routing in the hypervisor
  • Micro-segmentation
  • Services including load balancing, NAT, L2 VPN
  • Multi-site data center integration, service insertion for guest and network introspection
  • DR with SRM integration
  • Monitoring – Traffic and process visibility
  • Diagnostics – Embedded packet capture and flow visibility tools
  • Federation, Multi-Site Deployment
  • Multi-Tenancy, multi-VRF, VRF-Lite support


ACI Underlay Features

  • Underlay IP connectivity
  • Physical fabric management and troubleshooting tools
  • Physical provisioning
  • Endpoint groups as VLANs for VMkernel networking, transit edge routing, and bare metal hosts


NSX-V / NSX-T

  • NSX Data Center for vSphere uses a VTEP (VXLAN tunnel endpoint) for its VXLAN overlay.
  • NSX-T Data Center uses a TEP (tunnel endpoint) for its Geneve overlay.


NSX Data Center places two major requirements on the underlay, and they are essentially the same as for any other underlay network:

  • IP network – The NSX platform operates on any IP switch fabric.
  • MTU size for transport/overlay – NSX requires a minimum MTU of 1600 bytes to accommodate the overlay encapsulation overhead. For best application performance and operational ease, configure a jumbo-frame MTU of 9000 bytes (the vSphere VDS maximum) consistently across the entire fabric.
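To see why 1600 bytes comfortably carries a standard 1500-byte inner MTU, the encapsulation overhead can be sketched as simple arithmetic. The header sizes below assume IPv4 outer headers and an untagged inner frame, and the Geneve option length is an illustrative assumption:

```python
# Illustrative arithmetic for overlay encapsulation overhead.
INNER_ETH = 14   # inner Ethernet header carried inside the tunnel
OUTER_IP = 20    # outer IPv4 header
OUTER_UDP = 8    # outer UDP header
VXLAN_HDR = 8    # fixed VXLAN header (NSX for vSphere)
GENEVE_BASE = 8  # Geneve base header (NSX-T); options add 4-byte multiples

def required_underlay_mtu(inner_mtu: int, tunnel_hdr: int, options: int = 0) -> int:
    """Underlay link MTU needed to carry an inner IP packet of inner_mtu bytes."""
    inner_frame = inner_mtu + INNER_ETH
    return OUTER_IP + OUTER_UDP + tunnel_hdr + options + inner_frame

print(required_underlay_mtu(1500, VXLAN_HDR))              # 1550
print(required_underlay_mtu(1500, GENEVE_BASE, options=32))  # 1582
```

Both results fit within the 1600-byte minimum, which is why NSX states it as the floor; a fabric-wide 9000-byte MTU simply adds headroom for jumbo application frames.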



There are four infrastructure traffic types; one of them, vMotion, applies only to vSphere ESXi clusters. Each traffic type should run in its own VLAN, so four VLANs are defined to segregate the infrastructure traffic, with a corresponding VMkernel interface per traffic type on ESXi hosts.


  1. Management - ESXi and NSX management plane
  2. vMotion - (vSphere ESXi only) VM mobility
  3. IP Storage - application and infrastructure datastore connectivity
  4. Transport Zone (Overlay Network) - overlay VTEP traffic (NSX for vSphere) or TEP traffic (NSX-T)
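A minimal sketch of such a VLAN plan might look like the structure below. The VLAN IDs and MTU values are hypothetical assumptions for illustration, not values mandated by NSX or ACI:

```python
# Hypothetical VLAN plan for the four infrastructure traffic types.
INFRA_VLANS = {
    "management": {"vlan": 100, "mtu": 1500},  # ESXi and NSX management plane
    "vmotion":    {"vlan": 101, "mtu": 9000},  # vSphere ESXi only: VM mobility
    "ip_storage": {"vlan": 102, "mtu": 9000},  # datastore connectivity
    "overlay":    {"vlan": 103, "mtu": 9000},  # VTEP/TEP transport zone
}

def validate_plan(plan: dict) -> bool:
    """Each traffic type must get its own VLAN, and the overlay VLAN must
    carry at least the 1600-byte minimum MTU for encapsulated traffic."""
    vlan_ids = [entry["vlan"] for entry in plan.values()]
    return len(set(vlan_ids)) == len(vlan_ids) and plan["overlay"]["mtu"] >= 1600

print(validate_plan(INFRA_VLANS))  # True
```

In an ACI underlay, each of these VLANs would typically map to its own EPG, keeping the infrastructure groupings small and stable.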


Regardless of whether the fabric uses VLAN or VXLAN encapsulation, packet forwarding on an NSX platform is performed at line rate.

Traffic flow for an NSX overlay on a Cisco ACI fabric is treated the same as on any other switch fabric from an application workload viewpoint. The Cisco ACI fabric is not aware of the NSX overlay encapsulation and treats the traffic received from the tunnel endpoints as ordinary IP traffic. Within the fabric, ACI forwards inter-rack traffic using its own non-standard, proprietary VXLAN header, with the ACI span of control terminating at the top-of-rack switch, where traffic is handed off as VLAN-tagged frames.


The case for an NSX overlay is compelling. NSX overlays provide the foundation for a truly software-programmable network that leaves the hardware infrastructure untouched, simplifying its operation and maintenance while requiring only a small, stable set of VLANs for the infrastructure traffic.


There are three options for deploying the compute clusters:

  • Three separate functional compute clusters consisting of Management, Compute, and the Edge cluster
  • Collapsed Management and Edge cluster and a separate Compute cluster
  • Collapsed Management, Compute, and Edge deployment


For ACI, the NSX Edge nodes peer with the ACI border leaf switches using ECMP routing.
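As a rough illustration of what ECMP toward the edges means for traffic, the sketch below hashes a flow's 5-tuple onto one of several equal-cost paths, so all packets of one flow take the same NSX Edge and are never reordered. The edge addresses and the hash function are assumptions for illustration, not the actual hash computed by ACI hardware:

```python
import hashlib

# Hypothetical next-hop addresses for two NSX Edge uplinks (documentation range).
EDGE_NEXT_HOPS = ["192.0.2.1", "192.0.2.2"]

def ecmp_next_hop(src_ip, dst_ip, proto, src_port, dst_port, paths=EDGE_NEXT_HOPS):
    """Deterministically map a flow 5-tuple onto one of the equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(paths)
    return paths[index]

# The same flow always hashes to the same edge, so per-flow ordering is preserved:
hop = ecmp_next_hop("10.0.0.5", "10.1.0.9", "tcp", 49152, 443)
assert hop == ecmp_next_hop("10.0.0.5", "10.1.0.9", "tcp", 49152, 443)
print(hop in EDGE_NEXT_HOPS)  # True
```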


A Cisco ACI fabric, like any VXLAN-based underlay, appears as one large Layer 2 switch from an endpoint perspective. Each ToR in the fabric can serve as the distributed gateway for all subnets configured on the rack of hosts it serves, but the physical underlay's VXLAN fabric can also provide L2 adjacency to endpoints anywhere in the physical underlay. The NSX infrastructure design can leverage this to avoid messy multi-subnet designs for infrastructure communication, which in turn reduces the number of ACI tenant abstractions required, specifically the EPGs representing the groupings of NSX infrastructure endpoints.



Constructing the ACI underlay in two parts allows for modular design variability in terms of the following:

  • Scaling the switch fabric using minor adjustments to one or more of the constructed Fabric Access Policies
  • Choice of fabric connectivity using either independent Active/Active connectivity versus Cisco ACI vPC
  • Additional building of network-centric security groupings for inclusion of other virtual or physical VLAN-based workloads



Abstracting the underlay from the deployed workloads removes any dependency on the underlay for service provisioning, preserves the inherent security of NSX Data Center, and provides the agility to seamlessly manage and move workloads between sites and public clouds.


NSX brings significant benefits to customers, including secure micro-segmentation, end-to-end automation of networking and security services, and application continuity. Cisco ACI provides only a limited set of network and security virtualization functionalities out of the box. While NX-OS mode is the recommended option for Cisco Nexus environments, given the flexibility of topology and features supported across the Nexus switch lines (Nexus 3xxx, 56xx, 6xxx, 7xxx, and 9xxx), customers who choose ACI can still leverage the full benefits of NSX.

More articles by Victor Mahdal