Cisco ACI with VMware VDS Integration
This chapter covers the integration of Cisco Application Policy Infrastructure Controller (APIC) with VMware vSphere Distributed Switch (VDS).
Key sections include:
- Configuring Virtual Machine Networking Policies
- Creating a VMM Domain Profile
- Creating VDS Uplink Port Groups
- Creating a Trunk Port Group
- Using VMware vSphere vMotion
- Working with Blade Servers
- Troubleshooting the Cisco ACI and VMware VMM System Integration
- Additional Reference Sections
Configuring Virtual Machine Networking Policies
Cisco Application Policy Infrastructure Controller (APIC) integrates with third-party Virtual Machine Managers (VMMs), such as VMware vCenter, to extend Cisco Application Centric Infrastructure (ACI) benefits to the virtualized infrastructure. APIC allows administrators to use ACI policies within the VMM system.
Supported integration modes include:
- VMware VDS: When integrated with Cisco ACI, VMware vSphere Distributed Switch (VDS) enables VM networking configuration within the ACI fabric.
- Cisco ACI Virtual Edge: For installation and configuration details, refer to the Cisco ACI Virtual Edge Installation Guide and the Cisco ACI Virtual Edge Configuration Guide available on Cisco.com.
Note: Beginning with Cisco APIC Release 5.0(1), Cisco Application Virtual Switch (AVS) is no longer supported. Upgrading to this release with Cisco AVS may lead to fabric issues and faults for the Cisco AVS domain. It is recommended to migrate to Cisco ACI Virtual Edge. Refer to the Cisco ACI Virtual Edge Installation Guide, Release 3.0(x) on Cisco.com.
Cisco APIC Supported VMware VDS Versions
Note: When a Cisco APIC is connected to a VMware vCenter with numerous folders, a delay may occur when pushing new port groups from APIC to vCenter.
Different versions of VMware vSphere Distributed Switch (DVS) are compatible with different versions of Cisco Application Policy Infrastructure Controller (APIC). Consult the Cisco ACI Virtualization Compatibility Matrix for specific compatibility information between VMware components and Cisco APIC.
VMware vSphere: Refer to the ACI Virtualization Compatibility Matrix for supported release versions.
Adding ESXi Host Considerations
When adding VMware ESXi hosts to a Virtual Machine Manager (VMM) domain with VMware vSphere Distributed Switch (VDS), ensure the ESXi host version is compatible with the Distributed Virtual Switch (DVS) version already deployed in vCenter. Consult VMware documentation for detailed compatibility requirements. Incompatibility between ESXi host and DVS versions will prevent vCenter from adding the ESXi host to the DVS, resulting in an error. Modifying the DVS version setting via Cisco APIC is not possible; lowering the DVS version requires removing and reapplying the VMM domain configuration with a lower setting.
ESXi 6.5 Hosts with VIC Cards and UCS Servers
Important: For ESXi 6.5 hosts using UCS B-Series or C-Series servers with VIC cards, vmnics may go down during a port state event (for example, a link flap or ToR reload). To prevent this, do not use the default eNIC driver; instead, install the eNIC driver available for download from the VMware website.
VMware vCenter High Availability
VMware vCenter High Availability (VCHA), introduced in VMware vSphere 6.5, addresses the single point of failure for VMware vCenter. In case of an active node failure, the passive node takes over with the same IP address, credentials, and information. No new VMM configuration is required for VCHA. Cisco APIC will automatically reconnect once the passive node is reachable.
Guidelines for Upgrading VMware DVS from 5.x to 6.x and VMM Integration
This section outlines guidelines for upgrading VMware Distributed Virtual Switch (DVS) from version 5.x to 6.x and integrating it with VMM.
Guidelines for VMware VDS Integration
When integrating VMware vSphere Distributed Switch (VDS) into Cisco Application Centric Infrastructure (ACI), follow these guidelines:
- DVS versioning applies only to VMware DVS, not Cisco Application Virtual Switch (AVS). DVS upgrades are managed by VMware vCenter or the relevant orchestration tool, not ACI. The 'Upgrade Version' option is unavailable for AVS switches in vCenter.
- Upgrading DVS from 5.x to 6.x requires upgrading vCenter Server to version 6.0 and all connected hosts to ESXi 6.0. For detailed upgrade procedures, consult VMware's documentation. To upgrade DVS, use the Web Client: Home > Networking > DatacenterX > DVS-X > Actions Menu > Upgrade Distributed Switch.
- There is no functional impact on DVS features, capability, performance, or scale if the DVS version shown in vCenter differs from the VMM domain DVS version configured on APIC. The APIC and VMM Domain DVS Version are used solely for initial deployment.
- VMM integration for DVS mode facilitates the configuration of port-channels between leaf switch ports and ESXi hypervisor ports via APIC. LACP is supported in either enhanced or basic mode for port channels.
Table 1: LACP Support
LACP mode | ACI releases prior to 3.2(7) | ACI Release 3.2(7) and later | VMware DVS prior to 6.6 | VMware DVS 6.6 and later |
Basic LACP | Yes | Yes | Yes | No |
Enhanced LACP | No | Yes | Yes | Yes |
When upgrading VMware DVS to version 6.6 or higher, LACP must be reconfigured from Basic to Enhanced mode. If enhanced LACP (eLACP) was already configured with prior DVS versions (before 6.6), no reconfiguration is needed for eLACP when upgrading to DVS 6.6.
Note: Basic LACP is not supported starting with DVS version 6.6. Migrating LACP from basic to enhanced mode may cause traffic loss; perform this migration during a maintenance window.
For more details on eLACP and adding it to a VMM domain, see the Enhanced LACP Policy Support section later in this chapter.
Follow these guidelines when integrating VMware vSphere Distributed Switch (VDS) into Cisco Application Centric Infrastructure (ACI):
- Do not modify the following settings on a VMware VDS configured for VMM integration:
- VMware vCenter hostname (if using DNS).
- VMware vCenter IP address (if using IP).
- VMware vCenter credentials used by Cisco APIC.
- Data center name.
When configuring VMM integration, avoid changing the following in VMware vCenter:
- Folder, VDS, or portgroup names.
- Folder structure containing the VMware VDS (for example, do not place the folder within another folder).
- Uplink port-channel configuration, including LACP/port channel, LLDP, and CDP configuration.
- VLAN on a portgroup.
- Active uplinks for portgroups pushed by Cisco APIC.
- Security parameters (promiscuous mode, MAC address changes, forged transmits) for portgroups pushed by Cisco APIC.
Ensure you use supported versions of VMware vCenter/vSphere that are compatible with your Cisco ACI version. When adding or removing portgroups, use Cisco APIC or the Cisco ACI vCenter plug-in. Be aware that Cisco APIC may overwrite certain changes made in VMware vCenter, such as portgroup, port binding, promiscuous mode, and load-balancing settings.
Mapping Cisco ACI and VMware Constructs
Table 2: Mapping of Cisco Application Centric Infrastructure (ACI) and VMware Constructs
Cisco ACI Terms | VMware Terms |
Endpoint group (EPG) | Port group |
LACP Active | LACP enabled (Active mode) |
LACP Passive | LACP enabled (Passive mode) |
MAC Pinning | Route Based on Originating Virtual Port |
MAC Pinning-Physical-NIC-Load | Route Based on Physical NIC Load |
Static Channel - Mode ON | Route Based on IP Hash |
Virtual Machine Manager (VMM) Domain | vSphere Distributed Switch (VDS) |
VMware VDS Parameters Managed By APIC
The following tables detail the VMware VDS and VDS Port Group parameters managed by APIC.
VMware VDS Parameters Managed by APIC
VMware VDS | Default Value | Configurable Using Cisco APIC Policy? |
Name | VMM domain name | Yes (Derived from Domain) |
Description | APIC Virtual Switch | No |
Folder Name | VMM domain name | Yes (Derived from Domain) |
Version | Highest supported by vCenter | Yes |
Discovery Protocol | LLDP | Yes |
Uplink Ports and Uplink Names | 8 | Yes (From Cisco APIC Release 4.2(1)) |
Uplink Name Prefix | uplink | Yes (From Cisco APIC Release 4.2(1)) |
Maximum MTU | 9000 | Yes |
LACP policy | disabled | Yes |
Alarms | 2 alarms added at the folder level | No |
Note: Cisco APIC does not manage port mirroring; you can configure it directly from VMware vCenter. APIC raises a fault only when a setting that it manages is changed out of band; because port mirroring is unmanaged, changing it in vCenter raises no fault.
VDS Port Group Parameters Managed by APIC
VMware VDS Port Group | Default Value | Configurable using APIC Policy |
Name | &lt;Tenant name&gt;|&lt;Application profile name&gt;|&lt;EPG name&gt; | Yes (Derived from EPG) |
Port binding | Static binding | No |
VLAN | Picked from VLAN pool | Yes |
Load balancing algorithm | Derived based on port-channel policy on APIC | Yes |
Promiscuous mode | Disabled | Yes |
Forged transmit | Disabled | Yes |
Mac change | Disabled | Yes |
Block all ports | False | No |
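Given the defaults in the table above, the port group name that APIC derives from the tenant, application profile, and EPG names can be sketched with a small helper. The function name is illustrative; | is the default delimiter, and an alternate delimiter can be chosen when associating the EPG with the domain:

```python
def port_group_name(tenant, app_profile, epg, delimiter="|"):
    """Derive the default vCenter port group name for an EPG:
    tenant, application profile, and EPG names joined by a delimiter."""
    return delimiter.join([tenant, app_profile, epg])

print(port_group_name("Tenant1", "AP1", "WebEPG"))       # Tenant1|AP1|WebEPG
print(port_group_name("Tenant1", "AP1", "WebEPG", "~"))  # Tenant1~AP1~WebEPG
```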
Creating a VMM Domain Profile
VMM domain profiles define connectivity policies enabling virtual machine controllers to connect to the Cisco Application Centric Infrastructure (ACI) fabric. They group VM controllers with similar networking policy requirements, allowing them to share VLAN pools and application endpoint groups (EPGs). APIC communicates with the controller to publish network configurations, such as port groups, applied to virtual workloads. Refer to the Cisco Application Centric Infrastructure Fundamentals on Cisco.com for detailed information.
Note: In this section, vCenter domains are used as examples of VMM domains.
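For orientation, a vCenter domain profile can also be expressed as an APIC REST payload. The class names below (vmmDomP for the domain, vmmUsrAccP for credentials, vmmCtrlrP for the vCenter controller) come from the APIC management information model, but treat the exact attribute spellings, names, and addresses as illustrative assumptions to verify against your APIC release:

```python
# Sketch of a vCenter VMM domain as an APIC REST payload. Class names are
# from the APIC object model; the controller name, IP address, and data
# center name here are placeholders.
def build_vmm_domain(name, vcenter_ip, datacenter, user, pwd):
    return {
        "vmmDomP": {
            "attributes": {"dn": f"uni/vmmp-VMware/dom-{name}", "name": name},
            "children": [
                {"vmmUsrAccP": {"attributes": {"name": f"{name}-creds",
                                               "usr": user, "pwd": pwd}}},
                {"vmmCtrlrP": {"attributes": {"name": "vcenter1",
                                              "hostOrIp": vcenter_ip,
                                              "rootContName": datacenter}}},
            ],
        }
    }

payload = build_vmm_domain("mininet", "192.0.2.10", "DC1", "apic-user", "secret")
print(payload["vmmDomP"]["attributes"]["dn"])  # uni/vmmp-VMware/dom-mininet
```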
Pushing the VMM Domain After Deleting It
If a VMware Distributed Virtual Switch (DVS) created in Cisco APIC is accidentally deleted from VMware vCenter, the APIC policy will not be reapplied. To push the VMM domain again, disconnect and then reconnect the Cisco APIC VMware vCenter connectivity. This action ensures APIC reapplies the VMM domain and recreates the DVS in vCenter.
Read-Only VMM Domains
Introduced in Cisco APIC Release 3.1(1), read-only VMM domains allow viewing inventory information for a VDS in VMware vCenter that APIC does not manage. While you can associate EPGs and configure policies, these policies are not pushed to the VDS, and no faults are raised for read-only domains. The workflow and prerequisites are similar to creating other VMM domains.
Prerequisites for Creating a VMM Domain Profile
To configure a VMM domain profile, ensure the following:
- All fabric nodes are discovered and configured.
- Inband (inb) or out-of-band (oob) management has been configured on the APIC.
- A VMM (for example, VMware vCenter) is installed, configured, and reachable via the inband or out-of-band management network.
vCenter Domain Operational Workflow
The diagram illustrates a sequential workflow for vCenter domain operations:
- The APIC Administrator creates an Application Policy.
- The APIC connects to VMware vCenter.
- The VMware vCenter Server creates Port Groups.
- ESXi Hosts are attached to the Virtual Distributed Switch (VDS).
- APIC automatically maps Endpoint Groups (EPGs) to Port Groups.
- The APIC pushes the policy to the ACI Fabric, completing the Application Network Profile.
The APIC administrator configures vCenter domain policies in APIC, providing connectivity information such as:
- vCenter IP address, vCenter credentials, VMM domain policies, and VMM domain SPAN configuration.
- Policies including VLAN pools and domain types (e.g., VMware VDS, Cisco Nexus 1000V switch).
- Connectivity to physical leaf interfaces using Attach Entity Profiles (AEPs).
The APIC automatically connects to vCenter and creates a VDS (or uses an existing one) matching the VMM domain name.
Note: If using an existing VDS, it must reside within a folder of the same name.
Creating a vCenter Domain Profile Using the GUI
This section provides an overview of tasks for creating a vCenter Domain:
- Create or select a switch profile.
- Create or select an interface profile.
- Create or select an interface policy group.
- Create or select a VLAN pool.
- Create vCenter domain.
- Create vCenter credentials.
Procedure
- On the menu bar, navigate to Fabric > Access Policies.
- In the navigation pane, click Quick Start, then in the central pane, click Configure an interface, PC, and VPC.
- In the 'Configure an interface, PC, and VPC' dialog box:
- Expand Configured Switch Interfaces.
- Click the '+' icon.
- Ensure the 'Quick' radio button is selected.
- From the 'Switches' drop-down list, select the appropriate leaf ID. The 'Switch Profile Name' field will auto-populate.
- Click the '+' icon to configure switch interfaces.
- In the 'Interface Type' area, select the appropriate radio button.
- In the 'Interfaces' field, enter the desired interface range. The 'Interface Selector Name' field will auto-populate.
- In the 'Interface Policy Group' area, select the 'Create One' radio button.
- From the 'Link Level Policy' drop-down list, choose the desired link level policy.
- From the 'CDP Policy' drop-down list, choose the desired CDP policy.
- Similarly, choose desired interface policies from other available policy areas.
- In the 'Attached Device Type' area, select ESX Hosts.
- In the 'Domain' area, ensure 'Create One' is selected. Enter the domain name in the 'Domain Name' field.
- In the 'VLAN' area, ensure 'Create One' is selected. Enter the VLAN range in the 'VLAN Range' field.
- It is recommended to use a range of at least 200 VLAN numbers. Avoid including your manually assigned infra VLAN in this range to prevent potential faults, especially with OpFlex integration.
- In the 'vCenter Login Name' field, enter the login name.
- Optionally, from the 'Security Domains' drop-down list, choose the appropriate security domain.
- Enter the password in the 'Password' field and re-enter it in the 'Confirm Password' field.
- Expand 'vCenter'.
- In the 'Create vCenter Controller' dialog box, enter the required information and click OK.
- In the 'Configure Interface, PC, And VPC' dialog box, complete the following actions:
- If Port Channel Mode and vSwitch Policy areas are not specified, policies configured earlier will apply.
- From the 'Port Channel Mode' drop-down list, choose a mode.
- In the 'vSwitch Policy' area, select the radio button to enable CDP or LLDP.
- From the 'NetFlow Exporter Policy' drop-down list, choose or create a policy. A NetFlow exporter policy configures external collector reachability.
- Choose values from the 'Active Flow TimeOut', 'Idle Flow Timeout', and 'Sampling Rate' drop-down lists.
- Click SAVE twice, then click SUBMIT.
- Verify the new domain and profiles by navigating to Virtual Networking > Inventory, expanding VMM Domains > VMware > Domain_name > vCenter_name. In the work pane, check the 'Properties' for the VMM domain name and vCenter properties to confirm the controller is online and inventory is available.
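The verification step can also be scripted against the APIC REST API: authenticate through the aaaLogin endpoint, then run a class query for vmmDomP. The hostname and credentials below are placeholders, and the requests usage is shown only as a comment:

```python
import json

APIC = "https://apic.example.com"  # placeholder APIC hostname

def login_payload(user, pwd):
    # Body for POST {APIC}/api/aaaLogin.json
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def vmm_domain_query_url():
    # Class query listing the VMware VMM domains known to the APIC
    return f"{APIC}/api/node/class/vmmDomP.json"

print(vmm_domain_query_url())  # https://apic.example.com/api/node/class/vmmDomP.json

# With an HTTP session library such as requests (not executed here):
# s = requests.Session()
# s.post(f"{APIC}/api/aaaLogin.json", data=login_payload("admin", "password"))
# domains = s.get(vmm_domain_query_url()).json()["imdata"]
```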
Creating a Read-Only VMM Domain
Cisco APIC Release 3.1(1) introduced read-only VMM domains, allowing visibility into VMware vCenter inventory for VDSs not managed by APIC. You can view hypervisors, VMs, NIC status, and other inventory data. EPGs can be associated, and policies configured, but policies are not pushed to the VDS, and no faults are raised. Creation can be done via Cisco APIC GUI, NX-OS style CLI, or REST API.
Creating a Read-Only VMM Domain Using the Cisco APIC GUI
To create a read-only VMM domain, use the 'Create vCenter Domain' dialog box under the Virtual Networking tab. Do not use the procedure in 'Creating a vCenter Domain Profile Using the GUI' as it does not allow setting an access mode.
Before you begin:
- Fulfill prerequisites in the 'Prerequisites for Creating a VMM Domain Profile' section (page 6).
- In VMware vCenter, ensure the VDS under the Networking tab is contained within a folder named exactly the same as the read-only VMM domain you plan to create.
Procedure:
- Log in to Cisco APIC.
- Choose Virtual Networking > Inventory and expand the VMM Domains folder.
- Right-click the VMM Domains folder and choose Create vCenter Domain.
- In the 'Create vCenter Domain' dialog box:
- In the 'Virtual Switch Name' field, enter the domain name.
- The read-only domain name must match the VDS and folder name in VMware vCenter.
- In the 'Virtual Switch' area, select VMware vSphere Distributed Switch.
- In the 'Access Mode' area, select Read Only Mode.
- In the 'vCenter Credentials' area, click '+' to create vCenter credentials.
- In the 'vCenter' area, click '+' to add a vCenter controller.
- Click Submit.
What to do next: You can attach an EPG to the read-only VMM domain and configure policies, but these policies will not be pushed to the VDS in VMware vCenter.
Promoting a Read-Only VMM Domain to Read-Write
Cisco APIC Release 4.0(1) allows promoting an existing read-only VMM domain to a fully managed read-write domain. This enables APIC to manage the VDS and allows EPG association as Port Groups.
Refer to 'Creating a Read-Only VMM Domain' (page 10) for creating read-only domains and 'Promoting a Read-Only VMM Domain Caveats' (page 11) for guidelines and limitations.
Promotion can be done via Cisco APIC GUI, NX-OS style CLI, or REST API.
Promoting a Read-Only VMM Domain Caveats
- Promoting a read-only domain requires a specific network folder structure for the VDS on the vCenter server. If the VDS is not in a folder, create one with the same name as the VDS and move the VDS into it before promoting to read-write. Failure to do so may result in APIC creating a new VDS in a new folder.
- When creating port-groups for read-only VMM domains intended for promotion, use the format &lt;tenant-name&gt;|&lt;application-name&gt;|&lt;EPG-name&gt;.
Promoting a Read-Only VMM Domain Using the Cisco APIC GUI
When a VMM domain is promoted to fully managed and an EPG is associated, port-groups named in the standard format are automatically added to the EPG. If a different naming format was used, VMs must be manually reassigned from old port-groups to new APIC-created ones.
- Create an EPG and associate it with the VMM domain. A fault will occur if the port-group lacks an EPG policy.
- Remove VMs from the old port-group and attach them to the EPG.
Note: This process may cause traffic loss.
- After detaching VMs, delete the old port-group from vCenter.
- When migrating from read-only to read-write, use a unique VLAN range separate from the physical domain range to avoid VLAN exhaustion.
- To use the same EPG across multiple VMMs and vCenters, configure a Link Aggregation Group (LAG) policy with the same name as the domain. An EPG can only connect to one LAG policy. For different LAG policies, associate each with a different EPG. See 'Enhanced LACP Policy Support' (page 13).
Procedure:
- Log in to Cisco APIC.
- Associate an Access Entity Profile (AEP) with the read-only VMM domain: Navigate to Fabric > Access Policies > Policies > Global > Attachable Access Entity Profiles. Select an AEP and associate it with the read-only VMM domain.
- Promote the VMM domain: Navigate to Virtual Networking > Inventory, expand the VMM Domains > VMware folder, and select the read-only VMM Domain. Change the 'Access Mode' to Read Write Mode. Select a VLAN Pool and click Submit.
- Create a new Link Aggregation Group (LAG) policy if using vCenter 5.5 or later, as described in 'Create LAGs for DVS Uplink Port Groups Using the Cisco APIC GUI' (page 14).
- Associate the LAG policy with appropriate EPGs if using vCenter 5.5 or later, as described in 'Associate Application EPGs to VMware vCenter Domains with Enhanced LACP Policies Using the Cisco APIC GUI' (page 15).
What to do next: EPGs attached to the VMM domain and configured policies will now be pushed to the VDS in VMware vCenter.
Enhanced LACP Policy Support
Cisco APIC Release 3.2(7) enhances uplink load balancing by allowing different Link Aggregation Control Protocol (LACP) policies for distributed virtual switch (DVS) uplink port groups. APIC now supports VMware's Enhanced LACP feature (available for DVS 5.5+). Previously, a single LACP policy applied to all DVS uplink port groups, and VMware LAGs could not be managed by APIC.
Enabling Enhanced LACP policy on the ACI side pushes the configuration to DVS. Once enabled, enhanced LACP remains available on the DVS side even if the policy is removed from ACI, as it cannot be reverted.
Note: Enhanced LACP can be enabled on either the ACI or DVS side.
When creating a VMware vCenter VMM domain for Cisco ACI Virtual Edge or VMware VDS, you can choose from up to 20 load-balancing algorithms and apply different policies to uplink portgroups.
Enhanced LACP Limitations
You can configure up to eight DVS uplink portgroups, with at least two uplinks per policy, allowing up to four LACP policies per DVS. Enhanced LACP supports only active and passive LACP modes.
Note: For Cisco ACI Virtual Edge VXLAN mode, a UDP port-based load-balancing algorithm is mandatory. The 'Source and Destination TCP/UDP Port' algorithm is recommended. In VXLAN mode, traffic is always between one IP address pair (VTEP to FTEP IP), making the UDP port number the only differentiator.
Beginning with Release 5.2, Enhanced LACP policy is supported on L4-L7 service device interfaces used in service graphs. Refer to the Cisco APIC Layer 4 to Layer 7 Services Deployment Guide for details.
Be aware of the following limitations when using enhanced Link Aggregation Control Protocol (LACP) policies:
- You cannot revert to a previous LACP version after upgrading to enhanced LACP.
- Downgrading Cisco APIC to a version earlier than 3.2(7) requires removing the enhanced LACP configuration. See 'Remove the Enhanced LACP Configuration Before a Downgrade' (page 16).
- For Cisco Application Centric Infrastructure (ACI) Virtual Edge, VXLAN mode traffic uses the source IP address as the TEP IP address. For optimal load balancing, use the 'Source and Destination TCP/UDP Port' algorithm.
- If traffic is present for a Cisco ACI Virtual Edge domain over enhanced LACP, increasing or decreasing uplinks may cause 5-10 seconds of traffic loss.
- Traffic disruption occurs if an enhanced LACP LAG policy name conflicts with a previous enhanced LACP LAG policy uplink name. If an enhanced LACP LAG policy is named 'ELACP-DVS' for a DVS domain, its uplinks are named 'ELACP-DVS-1', 'ELACP-DVS-2', etc. Configuring another enhanced LAG policy with a conflicting name will cause traffic loss. To resolve this, delete the conflicting LAG policy and recreate it with a different name.
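The uplink-name collision described in the last bullet can be checked mechanically before a new enhanced LAG policy is created. A minimal sketch, assuming the <policy>-N uplink naming convention described above (function names are illustrative):

```python
def lag_uplink_names(policy_name, num_links=8):
    # APIC derives enhanced LAG uplink names as <policy>-1 ... <policy>-N
    return [f"{policy_name}-{i}" for i in range(1, num_links + 1)]

def has_conflict(new_policy, existing_policies, num_links=8):
    # A new policy conflicts when its name (or any uplink name it would
    # generate) collides with an uplink name of an existing policy.
    existing = {n for p in existing_policies for n in lag_uplink_names(p, num_links)}
    candidates = {new_policy, *lag_uplink_names(new_policy, num_links)}
    return bool(candidates & existing)

print(has_conflict("ELACP-DVS-1", ["ELACP-DVS"]))  # True: matches uplink ELACP-DVS-1
print(has_conflict("ELACP-DVS2", ["ELACP-DVS"]))   # False
```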
Create LAGs for DVS Uplink Port Groups Using the Cisco APIC GUI
Improve distributed virtual switch (DVS) uplink port group load balancing by grouping port groups into Link Aggregation Groups (LAGs) and associating them with specific load-balancing algorithms using the Cisco APIC GUI.
Before you begin:
- You must have created a VMware vCenter virtual machine manager (VMM) domain for VMware VDS or Cisco Application Centric Infrastructure (ACI) Virtual Edge.
- If a vSwitch policy container does not exist, create one.
Note: A port channel policy must be configured before creating an enhanced LAG policy. This can be done during vCenter domain profile creation.
Procedure:
- Log into the Cisco APIC.
- Navigate to Virtual Networking > Inventory > VMM Domains > VMware > domain.
- In the work pane, choose Policy > VSwitch Policy.
- If not already done, choose a policy in the 'Properties' area.
- In the 'Enhanced LAG Policy' area, click the '+' icon and configure:
- Name: Enter the LAG name.
- Mode: Choose LACP Active or LACP Passive.
- Load Balancing Mode: Choose a load-balancing method.
- Number of Links: Select the number of DVS uplink port groups (2 to 8) to include in the LAG.
- Click Update, then Submit.
- Repeat Step 5 to create additional LAGs for the DVS.
What to do next: For VMware VDS, associate endpoint groups (EPGs) to the domain with the enhanced LACP policy. For Cisco ACI Virtual Edge, associate internally created port groups with the enhanced LACP policy, then associate EPGs to the domain.
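The GUI fields in the LAG configuration step correspond to attributes of the lacpEnhancedLagPol class in the APIC object model. A hedged sketch of the equivalent REST payload; the "src-dst-ip" load-balancing string is an assumed value, and attribute spellings should be verified against your APIC release:

```python
# Sketch of an enhanced LAG policy payload (lacpEnhancedLagPol).
def enhanced_lag_policy(name, mode="active", lb_mode="src-dst-ip", num_links=2):
    assert mode in ("active", "passive"), "enhanced LACP supports active/passive only"
    assert 2 <= num_links <= 8, "a LAG uses 2 to 8 uplinks"
    return {"lacpEnhancedLagPol": {"attributes": {
        "name": name,
        "mode": mode,
        "lbmode": lb_mode,  # assumed enum string; check the APIC model
        "numLinks": str(num_links),
    }}}

attrs = enhanced_lag_policy("LAG1", num_links=4)["lacpEnhancedLagPol"]["attributes"]
print(attrs["numLinks"])  # 4
```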
Associate Application EPGs to VMware vCenter Domains with Enhanced LACP Policies Using the Cisco APIC GUI
Associate application endpoint groups (EPGs) with VMware vCenter domains using LAGs and a load-balancing algorithm via the Cisco APIC GUI.
Before you begin:
- You must have created Link Aggregation Groups (LAGs) for DVS uplink port groups and associated a load-balancing algorithm to the LAGs.
Note: This procedure assumes no application EPG has been associated with the VMware vCenter domain yet. If one has, edit the domain association.
Procedure:
- Log into Cisco APIC.
- Navigate to Tenants > tenant > Application Profiles > application_profile > Application EPGs > EPG > Domains (VMs and Bare-Metals).
- Right-click 'Domains (VMs and Bare-Metals)' and choose Add VMM Domain Association.
- In the 'Add VMM Domain Association' dialog box:
- From the 'VMM Domain Profile' drop-down list, choose the domain to associate the EPG with.
- From the 'Enhanced Lag Policy', choose the configured policy for the domain.
  - Optionally, in the 'Delimiter' field, enter one of the following symbols: |, ~, !, @, ^, +, or =. If no symbol is entered, the default | delimiter is used.
  - Add other desired values for domain association and click Submit.
- Repeat steps 2 through 4 for other application EPGs in the tenant.
Remove the Enhanced LACP Configuration Before a Downgrade
Before downgrading Cisco Application Policy Infrastructure Controller (APIC) to a release earlier than 3.2(7), the enhanced LACP configuration must be removed. Follow this procedure.
Procedure:
- Reassign uplinks on all ESXi hosts from link aggregation groups (LAGs) to normal uplinks.
- Remove LAG associations from all EPGs and interfaces of L4-L7 service devices used in service graphs, associated with the distributed virtual switch (DVS). Expect traffic loss during this step.
- Change port channel settings to static channel or MAC pinning to recover traffic once the port channel is up.
- Remove all LAG-related configuration from the virtual machine manager (VMM).
- Verify that all LAG-related policies are deleted from VMware vCenter.
What to do next: Downgrade to a Cisco APIC release earlier than 3.2(7).
Endpoint Retention Configuration
After creating a vCenter domain, you can configure endpoint retention to delay endpoint deletion, reducing the chance of dropped traffic. Endpoint retention is configured via the APIC GUI, NX-OS style CLI, or REST API.
Configuring Endpoint Retention Using the GUI
Before you begin: You must have created a vCenter domain.
Procedure:
- Log in to Cisco APIC.
- Choose Virtual Networking > Inventory.
- In the left navigation pane, expand the VMware folder and click the vCenter domain.
- In the central Domain work pane, ensure the 'Policy' and 'General' tabs are selected.
- In the 'End Point Retention Time (seconds)' counter, choose the retention duration (0 to 600 seconds; default is 0).
- Click Submit.
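The same setting maps to the epRetTime attribute on the vCenter controller object (vmmCtrlrP) in the APIC object model. A minimal sketch, assuming that attribute name and dn format; verify both against your APIC release:

```python
# epRetTime (0-600 seconds) delays endpoint deletion after a detach event,
# reducing the chance of dropped traffic during moves such as vMotion.
def endpoint_retention(domain, controller, seconds):
    assert 0 <= seconds <= 600, "retention time must be 0 to 600 seconds"
    return {"vmmCtrlrP": {"attributes": {
        "dn": f"uni/vmmp-VMware/dom-{domain}/ctrlr-{controller}",
        "epRetTime": str(seconds),
    }}}

print(endpoint_retention("mininet", "vcenter1", 60)["vmmCtrlrP"]["attributes"]["epRetTime"])  # 60
```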
Creating VDS Uplink Port Groups
Each VMM domain appears in vCenter as a vSphere Distributed Switch (VDS). The virtualization administrator associates hosts to the APIC-created VDS and selects the vmnics for the specific VDS. VDS uplink configuration is performed from the APIC controller by modifying the vSwitch configuration via the Attach Entity Profile (AEP) associated with the VMM domain. The AEP can be found in the APIC GUI under Fabric Access Policies.
Note: When integrating ACI and vSphere VMM, Link Aggregation Groups (LAGs) are not supported for creating interface teams on APIC-created distributed switches. APIC pushes interface teaming configuration based on Interface Policy Group and/or AEP vSwitch policy settings. Manual creation of interface teams in vCenter is not supported or required.
Creating a Trunk Port Group
A trunk port group aggregates traffic for endpoint groups (EPGs) for VMware virtual machine manager (VMM) domains.
Refer to the 'About Trunk Port Group' section for details and the following sections for procedures:
- Creating a Trunk Port Group Using the GUI (page 18)
- Creating a Trunk Port Group Using the NX-OS Style CLI
- Creating a Trunk Port Group Using the REST API
Creating a Trunk Port Group Using the GUI
This section details creating a trunk port group via the GUI.
Before you begin: Ensure the trunk port group is tenant independent.
Procedure:
- Log in to the APIC GUI.
- On the menu bar, choose Virtual Networking.
- In the navigation pane, choose VMM Domains > VMware > domain > Trunk Port Groups and right-click Create Trunk Port Group.
- In the 'Create Trunk Port Group' dialog box:
- Name: Enter the EPG name.
- Promiscuous Mode: Select 'Disabled' (default) or 'Enabled'. 'Enabled' allows VMs to receive unicast traffic not destined for their MAC addresses.
- Trunk Portgroup Immediacy: Select 'Immediate' or 'On Demand' (default). This determines when policies are resolved on leaf switches.
- MAC changes: Select 'Disabled' or 'Enabled' (default). 'Enabled' allows new MAC addresses for the VM network adapter.
- Forged transmits: Select 'Disabled' or 'Enabled' (default). 'Enabled' allows forged transmits, where a network adapter sends traffic identifying itself as another. This security policy ensures the virtual network adapter's effective address matches the source address in the Ethernet frame.
- Enhanced Lag Policy: Choose the uplink with the desired Link Aggregation Control Protocol (LACP) policy. This policy uses DVS uplink port groups configured in LAGs with a load-balancing algorithm. At least one uplink must have an LACP policy applied to improve load balancing. Refer to 'Enhanced LACP Policy Support' (page 13) for more information.
- VLAN Ranges: Click '+' and enter the VLAN range (e.g., vlan-100 vlan-200).
- If no VLAN Range is specified, the VLAN list is taken from the domain's VLAN namespace.
- Click Update.
- Click Submit.
Using VMware vSphere vMotion
VMware vSphere vMotion enables moving virtual machines (VMs) between physical hosts without service interruption.
Refer to the VMware website for vSphere vMotion information and documentation.
When moving a VM connected via a VMware distributed virtual switch (DVS), traffic interruption can occur for several seconds to minutes, potentially up to 15 minutes (the default local endpoint retention interval). Interruption occurs under these conditions:
- When virtual switches rely solely on Reverse Address Resolution Protocol (RARP) to indicate VM moves.
- When a bridge domain is associated with a First Hop Security (FHS) policy with IP Inspection enabled. To resolve this, disassociate the FHS policy from the bridge domain or disable IP inspection in the policy.
Working with Blade Servers
Guidelines for Cisco UCS B-Series Servers
When integrating blade server systems into Cisco ACI for VMM integration (e.g., Cisco Unified Computing System (UCS) blade servers), consider these guidelines:
Note: This example demonstrates configuring a port channel access policy for Cisco UCS blade servers. Similar steps apply for virtual port channels or individual link access policies based on uplink connection. If no port channel is explicitly configured on APIC for UCS blade server uplinks, the default behavior is MAC pinning.
- VM endpoint learning relies on Cisco Discovery Protocol (CDP) or Link Layer Discovery Protocol (LLDP). If supported, CDP must be enabled from the leaf switch port through blade switches to the blade adapters.
- Ensure the management address type, length, and value (TLV) is enabled on the blade switch (CDP or LLDP) and advertised to servers and fabric switches. Management TLV address configuration must be consistent across CDP and LLDP on the blade switch.
- Cisco APIC does not manage fabric interconnects or blade servers; UCS-specific policies like CDP or port channel must be configured via UCS Manager.
- VLANs defined in the VLAN pool used by the attachable access entity profile on APIC must also be manually created on UCS and allowed on appropriate uplinks connecting to the fabric, including the infrastructure VLAN if applicable. Refer to the Cisco UCS Manager GUI Configuration Guide.
- Both CDP and LLDP are supported with Cisco UCS B-Series servers beginning with Cisco UCS Manager Release 2.2(4b); LLDP is not supported with earlier firmware.
- CDP is disabled by default in Cisco UCS Manager. Enable CDP by creating a Network Control Policy.
- Do not enable fabric failover on adapters in UCS server service profiles. Cisco recommends allowing the hypervisor to handle failover at the virtual switch layer for proper load balancing.
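On the ACI side, the CDP and LLDP requirements above are expressed as access-layer interface policies. As an illustrative sketch (the policy names are placeholders; `cdpIfPol` and `lldpIfPol` are part of the APIC object model, but verify the attributes against the Management Information Model Reference for your release), the policies could be posted to the APIC REST API as:

```xml
<!-- Illustrative interface policies under the access policy tree (uni/infra).
     Policy names are placeholders. -->
<infraInfra>
  <!-- CDP enabled (the built-in "default" CDP policy is disabled). -->
  <cdpIfPol name="cdp-enabled" adminSt="enabled"/>
  <!-- LLDP enabled for both receive and transmit. -->
  <lldpIfPol name="lldp-enabled" adminRxSt="enabled" adminTxSt="enabled"/>
</infraInfra>
```

Note that these APIC policies configure only the leaf switch ports; the equivalent setting on the UCS side remains the Network Control Policy in Cisco UCS Manager, which APIC does not manage.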
Setting up an Access Policy for a Blade Server Using the GUI
Note: Symptom: Changes to management IP of unmanaged nodes (e.g., blade switch, fabric interconnect) update in VMware vCenter, but vCenter does not send events to Cisco APIC, causing APIC to be out of sync. Workaround: Trigger an inventory pull for the VMware vCenter controller managing ESX servers behind the unmanaged node.
Before you begin: To operate with Cisco APIC, Cisco UCS Fabric Interconnect must be version 2.2(1c) or later. All components (BIOS, CIMC, adapter) must also be version 2.2(1c) or later. Refer to the Cisco UCS Manager CLI Configuration Guide for details.
Procedure:
- On the menu bar, choose Fabric > Access Policies.
- In the navigation pane, click Quick Start.
- In the central pane, click Configure an interface, PC, and VPC.
- In the 'Configure Interface, PC, and VPC' dialog box, click the '+' icon to select switches.
- In the 'Switches' field, choose the desired switch IDs from the drop-down list.
- Click the '+' icon to configure switch interfaces.
- In the 'Interface Type' field, click the VPC radio button.
- In the 'Interfaces' field, enter the appropriate interface or interface range connected to the blade server.
- In the 'Interface Selector Name' field, enter a name.
- From the 'CDP Policy' drop-down list, choose 'default'. (CDP must be disabled between the leaf switch and the blade server.)
- From the 'LLDP Policy' drop-down list, choose 'default'. (LLDP must be enabled for receive and transmit states between the leaf switch and the blade server.)
- From the 'LACP Policy' drop-down list, choose Create LACP Policy. (LACP policy must be set to active between the leaf switch and the blade server.)
- In the 'Create LACP Policy' dialog box:
- Name: Enter a name for the policy.
- Mode: Select the 'Active' radio button.
- Keep default values and click Submit.
- From the 'Attached Device Type' field drop-down list, choose ESX Hosts.
- In the 'Domain Name' field, enter an appropriate name.
- In the 'VLAN Range' field, enter the range.
- In the 'vCenter Login Name' field, enter the login name.
- In the 'Password' and 'Confirm Password' fields, enter the password.
- Expand the 'vCenter' field, and in the 'Create vCenter Controller' dialog box, enter the content and click OK.
- In the 'vSwitch Policy' field, configure as follows: (Between blade server and ESX hypervisor: CDP enabled, LLDP disabled, LACP disabled; MAC Pinning must be set.)
- Check the 'MAC Pinning' box.
- Check the 'CDP' box.
- Leave the 'LLDP' box unchecked as LLDP must remain disabled.
- Click Save, then Save again. Click Submit. The access policy is set.
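The LACP active policy created in the GUI procedure above corresponds to a port-channel policy object on APIC. A hedged REST-payload sketch (the policy name is a placeholder; `lacpLagPol` is the port-channel policy class, but confirm attribute values against your release's object model):

```xml
<!-- Illustrative LACP policy (port channel, active mode) under uni/infra.
     The name is a placeholder. -->
<infraInfra>
  <lacpLagPol name="lacp-active" mode="active"/>
</infraInfra>
```

The same class covers the vSwitch-side behavior described in the procedure: MAC pinning between the blade server and the ESX hypervisor corresponds to `mode="mac-pin"` rather than `mode="active"`.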
Troubleshooting the Cisco ACI and VMware VMM System Integration
For troubleshooting information, refer to the following links:
- Cisco APIC Troubleshooting Guide
- ACI Troubleshooting Book
Additional Reference Sections
Custom User Account with Minimum VMware vCenter Privileges
Setting specific VMware vCenter privileges allows Cisco APIC to send API commands for DVS creation, publish port groups, and relay alerts. For APIC to configure VMware vCenter, your credentials must have the following minimum privileges:
- Alarms: APIC creates two alarms (DVS and port-group). Alarms are raised when EPG or Domain policy is deleted on APIC. Alarms for DVS or port-group cannot be deleted if VMs are attached.
- Distributed Switch
- dvPort Group
- Folder
- Network: APIC manages network settings like port-group addition/deletion, host/DVS MTU, LLDP/CDP, and LACP.
- Host:
- Host.Configuration.Advanced settings
- Host.Local operations.Reconfigure virtual machine
- Host.Configuration.Network configuration
- Virtual machine: If using Service Graph, the 'Virtual machine' privilege is needed for virtual appliances used in Service Graph.
- Virtual machine.Configuration.Modify device settings
- Virtual machine.Configuration.Settings
For deploying service VMs using the service VM orchestration feature, enable these additional privileges:
- Datastore:
- Allocate space
- Browse datastore
- Low level file operations
- Remove file
- Host:
- Local operations.Delete virtual machine
- Local operations.Reconfigure virtual machine
- Resource:
- Assign virtual machine to resource pool
- Virtual machine:
- Inventory.Create new
- Inventory.Create from existing
- Inventory.Remove
- Configuration.Add new disk
- Provisioning.Deploy template
Quarantine Port Groups
The quarantine port group feature provides a method to clear port group assignments. When a VMware vSphere Distributed Switch (VDS) is created in VMware vCenter, a quarantine port group is created by default, blocking all ports. As part of integration with Layer 4 to Layer 7 virtual service appliances (e.g., load balancers, firewalls), APIC creates service port groups in vCenter for service stitching and orchestrates virtual appliance placement. When a service graph is deleted, service VMs are automatically moved to the quarantine port group. This auto-move applies only to APIC-orchestrated service VMs.
You can further manage ports in the quarantine port group, such as migrating them to another port group (e.g., a VM network).
The quarantine port group mechanism does not apply to regular tenant endpoint groups (EPGs) or their associated port groups and VMs. If a tenant EPG is deleted, tenant VMs in its associated port group remain intact and are not moved to the quarantine port group.
On-Demand VMM Inventory Refresh
Triggered Inventory offers a manual option to refresh APIC inventory from the VMM controller. It is not required for normal operations and should be used judiciously when errors occur. APIC automatically pulls inventory during process restarts, leadership changes, or periodic audits to align VMM inventory with the controller's. If VMware vCenter APIs error out, APIC might not download the full inventory despite retries, indicated by a fault. Triggered inventory allows initiating an inventory pull from APIC to vCenter.
APIC does not synchronize VMM configuration with VMware vCenter VDS configuration. Direct changes to VDS settings in vCenter are not overwritten by APIC, except for PVLAN configuration.
Physically Migrating the ESXi Host
Complete the following tasks to physically migrate ESXi hosts:
- Put the host into maintenance mode or evacuate VM workloads by another method.
- Remove the ESXi host from the VMware VDS, Cisco Application Centric Infrastructure (ACI) Virtual Edge, or Cisco Application Virtual Switch.
- Physically recable the ESXi host to the new leaf switch or pair of leaf switches.
- Add the ESXi host back to the VMware VDS, Cisco Application Centric Infrastructure (ACI) Virtual Edge, or Cisco Application Virtual Switch.
Guidelines for Migrating a vCenter Hypervisor VMK0 to an ACI Inband VLAN
Follow these guidelines to migrate the default vCenter hypervisor VMK0's out-of-band connectivity to ACI inband ports. An ACI fabric infrastructure administrator configures APIC with necessary policies, and then the vCenter administrator migrates VMK0 to the appropriate ACI port group.
Create the Necessary Management EPG Policies in APIC
As an ACI fabric infrastructure administrator, use these guidelines when creating management tenant and VMM domain policies:
- Choose a VLAN for ESX management.
- Add the chosen VLAN to a range (or Encap Block) in the VLAN pool associated with the target VMM domain. This range must have static allocation mode.
- Create a management EPG in the ACI management tenant (mgmt).
- Verify that the bridge domain associated with the management EPG is also associated with the private network (inb).
- Associate the management EPG with the target VMM domain:
- Set the resolution immediacy to pre-provision.
- Specify the management VLAN in the 'Port Encap' field of the VMM domain profile association.
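Taken together, the guidelines above map onto a tenant-policy fragment similar to the following hedged sketch (the application profile, EPG, VMM domain, and VLAN values are placeholders; verify the class names against the APIC Management Information Model Reference for your release):

```xml
<!-- Illustrative management EPG in the mgmt tenant, tied to a bridge domain
     in the inb private network and to a VMware VMM domain. All names and
     the VLAN ID are placeholders. -->
<fvTenant name="mgmt">
  <fvAp name="inband-access">
    <fvAEPg name="vmk0-inband">
      <!-- The bridge domain must be associated with the inb private network. -->
      <fvRsBd tnFvBDName="inb"/>
      <!-- VMM domain association: pre-provision resolution immediacy and a
           static Port Encap VLAN taken from the domain's static VLAN range. -->
      <fvRsDomAtt tDn="uni/vmmp-VMware/dom-ExampleVMM"
                  resImedcy="pre-provision"
                  encap="vlan-10"/>
    </fvAEPg>
  </fvAp>
</fvTenant>
```

Because the Port Encap VLAN is specified explicitly, it must fall within a static-allocation range of the VLAN pool associated with the VMM domain, as noted in the guidelines.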
Migrate the VMK0 to the Inband ACI VLAN
By default, vCenter configures VMK0 on the hypervisor management interface. The APIC policies created above enable the vCenter administrator to migrate the default VMK0 to the APIC-created port group, freeing up the hypervisor management port.