Optimizing Your ASAv Deployment
First Published: 2020-04-14
Last Modified: 2020-04-30
The Cisco Adaptive Security Virtual Appliance (ASAv) provides comprehensive firewall functionality for virtualized environments, securing data center traffic and multitenant setups. Management and monitoring can be performed using ASDM or CLI.
Important: For optimal performance, ensure you are running ASA Version 9.13(1) or later. Refer to the Cisco Adaptive Security Virtual Appliance (ASAv) support page for the latest ASAv Getting Started Guide.
Licensing for the ASAv
The ASAv utilizes Cisco Smart Software Licensing. For detailed information, consult the Smart Software Licensing (ASAv, ASA on Firepower) documentation.
Note: A smart license must be installed on the ASAv. Without a license, throughput is limited to 100 Kbps, suitable for preliminary connectivity tests. A smart license is mandatory for regular operation.
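For example, registering the ASAv with the Cisco Smart Software Manager from the ASA CLI generally follows the pattern below. This is a minimal sketch: the tier, throughput level, and ID token are placeholders, and the exact syntax should be verified against your ASA software version.
configure terminal
license smart
feature tier standard
throughput level 2G
exit
exit
! register using the ID token generated in the Cisco Smart Software Manager
license smart register idtoken YOUR_ID_TOKEN
! confirm registration and entitlement status
show license all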
Starting with version 9.13(1), any ASAv license is compatible with any supported ASAv vCPU/memory configuration, allowing deployment across diverse VM resource footprints. Session limits for AnyConnect and TLS Proxy are determined by the ASAv platform entitlement, not a specific model type.
Refer to the following sections for details on ASAv licensing entitlements and resource specifications for private and public deployment targets.
About Smart License Entitlements
Any ASAv license can be used with any supported ASAv vCPU/memory configuration, offering flexibility in deploying the ASAv on various VM resource footprints and supporting a wider range of AWS and Azure instance types. When configuring the ASAv VM, the maximum supported vCPUs are 8, and the maximum supported memory is 64GB RAM.
- vCPUs: The ASAv supports 1 to 8 vCPUs.
- Memory: The ASAv supports 2 GB to 64 GB of RAM.
ASAv Private Cloud Entitlements (VMware, KVM, Hyper-V)
Important: Beginning with version 9.13(1), the minimum memory requirement for the ASAv is 2GB. If your current ASAv has less than 2GB of memory, you must increase the memory or redeploy a new ASAv VM with version 9.13(1) to upgrade. When deploying an ASAv with more than 1 vCPU, the minimum memory requirement is 4GB.
Session Limits for Licensed Features
Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced by a rate limiter. The following table outlines these limits:
Entitlement | AnyConnect Premium Peers | Total TLS Proxy Sessions | Rate Limiter |
---|---|---|---|
Standard Tier, 100M | 50 | 500 | 150 Mbps |
Standard Tier, 1G | 250 | 500 | 1 Gbps |
Standard Tier, 2G | 750 | 1000 | 2 Gbps |
Standard Tier, 10G | 10,000 | 10,000 | 10 Gbps |
Platform Limits for Licensed Features
Platform session limits are based on the provisioned memory for the ASAv. The maximum ASAv VM dimensions are 8 vCPUs and 64 GB of memory.
Provisioned Memory | AnyConnect Premium Peers | Total TLS Proxy Sessions |
---|---|---|
2 GB to 7.9 GB | 250 | 500 |
8 GB to 15.9 GB | 750 | 1000 |
16 GB to 64 GB | 10,000 | 10,000 |
The flexibility of using any ASAv license with any supported ASAv vCPU/memory configuration allows deployment in private cloud environments (VMware, KVM, Hyper-V). Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. The following table summarizes session limits and rate limiters based on entitlement tier for the ASAv deployed in private cloud environments.
RAM (GB), Min - Max | Standard Tier, 100M* | Standard Tier, 1G* | Standard Tier, 2G* | Standard Tier, 10G* | Standard Tier, 20G* |
---|---|---|---|---|---|
2 - 7.9 | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500/20G |
8 - 15.9 | 50/500/100M | 250/500/1G | 750/1000/2G | 750/1000/10G | 750/1000/20G |
16 - 31.9 | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 10K/10K/20G |
32 - 64 | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 20K/20K/20G |
*AnyConnect Sessions / TLS Proxy Sessions / Rate Limiter per entitlement/instance.
ASAv Public Cloud Entitlements (AWS)
The ASAv can be deployed on various AWS instance types. Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. The following table details session limits and rate limiters based on entitlement tier for AWS instance types. Refer to "About ASAv Deployment On the AWS Cloud" for AWS VM dimensions (vCPUs and memory).
Instance | BYOL*: Standard Tier, 100M | BYOL*: Standard Tier, 1G | BYOL*: Standard Tier, 2G | BYOL*: Standard Tier, 10G | PAYG** |
---|---|---|---|---|---|
c5.xlarge | 50/500/100M | 250/500/1G | 750/1000/2G | 750/1000/10G | 750/1000 |
c5.2xlarge | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 10K/10K |
c4.large | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500 |
c4.xlarge | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500 |
c4.2xlarge | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 750/1000 |
c3.large | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500 |
*AnyConnect Sessions / TLS Proxy Sessions / Rate Limiter per entitlement/instance.
**AnyConnect Sessions / TLS Proxy Sessions. Rate Limiter is not employed in PAYG mode.
ASAv Public Cloud Entitlements (Azure)
The ASAv can be deployed on various Azure instance types. Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. The following table summarizes session limits and rate limiters based on entitlement tier for Azure instance types. Refer to "About ASAv Deployment On the Microsoft Azure Cloud" for Azure VM dimensions (vCPUs and memory).
Instance | BYOL*: Standard Tier, 100M | BYOL*: Standard Tier, 1G | BYOL*: Standard Tier, 2G | BYOL*: Standard Tier, 10G | PAYG** |
---|---|---|---|---|---|
c3.xlarge | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500 |
c3.2xlarge | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 750/1000 |
m4.large | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 250/500 |
m4.xlarge | 50/500/100M | 250/500/1G | 250/500/2G | 250/500/10G | 10K/10K |
m4.2xlarge | 50/500/100M | 250/500/1G | 750/1000/2G | 10K/10K/10G | 10K/10K |
*AnyConnect Sessions / TLS Proxy Sessions / Rate Limiter per entitlement/instance.
**AnyConnect Sessions / TLS Proxy Sessions. Rate Limiter is not employed in PAYG mode.
Pay-As-You-Go (PAYG) Mode: The following table summarizes Smart Licensing entitlements for hourly billing (PAYG) mode, based on allocated memory.
RAM (GB) | Hourly Billing Mode Entitlement |
---|---|
2 GB to 7.9 GB | Standard Tier, 1G |
8 GB to 15.9 GB | Standard Tier, 2G |
16 GB to 64 GB | Standard Tier, 10G |
Note: Pay-As-You-Go (PAYG) Mode is currently not supported for the ASAv on Azure.
ASAv and SR-IOV Interface Provisioning
Single Root I/O Virtualization (SR-IOV) enables multiple VMs to share a single PCIe network adapter on a host server. SR-IOV facilitates direct data transfer between a VM and the network adapter, bypassing the hypervisor for improved network throughput and reduced server CPU load. Enhancements like Intel VT-d technology support direct memory transfers essential for SR-IOV.
The SR-IOV specification defines two device types:
- Physical Function (PF): A static NIC, which is a full PCIe device with SR-IOV capabilities. PFs are managed as standard PCIe devices and can provide management and configuration for a set of Virtual Functions (VFs).
- Virtual Function (VF): A dynamic vNIC, a lightweight virtual PCIe device primarily for data movement. VFs are derived from and managed by a PF. One or more VFs can be assigned to a VM.
Guidelines and Limitations for SR-IOV Interfaces
Guidelines for SR-IOV Interfaces
SR-IOV provisioning on the ASAv requires careful planning that spans the host operating system, hardware and CPU, adapter type, and adapter settings. See "Licensing for the ASAv" (page 1) for resource scenarios that comply with the license entitlements for the different ASAv platforms. SR-IOV Virtual Functions require specific system resources.
Host Operating System and Hypervisor Support
SR-IOV support and VF drivers are available for:
- Linux 2.6.30 kernel or later
The ASAv with SR-IOV interfaces is currently supported on the following hypervisors:
- VMware vSphere/ESXi
- QEMU/KVM
- AWS
Hardware Platform Support
Note: Deploy the ASAv on any server-class x86 CPU device capable of running the supported virtualization platforms.
This section provides hardware guidelines for SR-IOV interfaces. While these are guidelines, using hardware that does not meet them may lead to functionality issues or reduced performance.
A server supporting SR-IOV and equipped with an SR-IOV-capable PCIe adapter is required. Consider the following hardware aspects:
- SR-IOV NIC capabilities, including the number of available VFs, vary by vendor and device.
- Not all PCIe slots support SR-IOV.
- SR-IOV-capable PCIe slots may have different capabilities.
Note: Consult your manufacturer's documentation for SR-IOV support on your system.
- For VT-d enabled chipsets, motherboards, and CPUs, refer to the list of virtualization-capable IOMMU-supporting hardware. VT-d is a required BIOS setting for SR-IOV systems.
- For VMware, consult their online Compatibility Guide for SR-IOV support.
- For KVM, verify CPU compatibility. Note that for the ASAv on KVM, only x86 hardware is supported.
Note: Testing was performed with the Cisco UCS C-Series Rack Server. The Cisco UCS-B server does not support the ixgbe-vf vNIC.
Supported NICs for SR-IOV
- Intel Ethernet Server Adapter X520 - DA2
CPUs
- x86_64 multicore CPU
- Intel Sandy Bridge or later (Recommended)
Note: Testing was performed on an Intel Broadwell CPU (E5-2699-v4) at 2.3GHz.
Cores
- Minimum of 8 physical cores per CPU socket.
- The 8 cores must be on a single socket.
Note: CPU pinning is recommended for achieving full throughput rates on the ASAv50 and ASAv100. Refer to "Increasing Performance on ESXi Configurations" (page 11) and "Increasing Performance on KVM Configurations" (page 23) for more details.
BIOS Settings
SR-IOV requires support in both the BIOS and the operating system instance or hypervisor. Check your system BIOS for the following settings:
- SR-IOV is enabled
- VT-x (Virtualization Technology) is enabled
- VT-d is enabled
- (Optional) Hyperthreading is disabled
Note: Verify the process with vendor documentation, as systems differ in accessing and changing BIOS settings.
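As a quick sanity check from a Linux host (the commands are illustrative; output formats vary by platform), you can confirm that the virtualization and IOMMU extensions are visible to the operating system:
# confirm VT-x/AMD-V is exposed by the CPU (a non-zero count is expected)
egrep -c '(vmx|svm)' /proc/cpuinfo
# confirm the IOMMU (VT-d) was enabled at boot
dmesg | grep -e DMAR -e IOMMU
# list SR-IOV capability on the PCIe adapters
lspci -vvv | grep -i 'Single Root I/O Virtualization'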
ASAv on VMware Guidelines and Limitations
Limitations
Be aware of the following limitations when using ixgbe-vf interfaces:
- The guest VM cannot set the VF to promiscuous mode, preventing transparent mode operation.
- The guest VM cannot set the MAC address on the VF. Because the MAC address cannot be transferred during HA failover as it is on other ASA platforms, HA failover works by transferring the IP address from the active unit to the standby unit.
- The Cisco UCS-B server does not support the ixgbe-vf vNIC.
ASAv on VMware ESXi System Requirements
Multiple ASAv instances can be created and deployed on an ESXi server. Hardware requirements vary based on the number of instances and usage needs. Each virtual appliance requires minimum resource allocation for memory, CPUs, and disk space.
Review the following guidelines and limitations before ASAv deployment.
- Host CPU: Must be a server-class x86-based Intel or AMD CPU with virtualization extensions. For example, Cisco Unified Computing System™ (Cisco UCS®) C series M4 server with Intel® Xeon® CPU E5-2690v4 processors running at 2.6GHz is used for ASAv performance testing.
- ASAv Support: ESXi versions 6.0, 6.5, and 6.7 are supported.
Recommended vNICs
The following vNICs are recommended for optimum performance:
- i40e in PCI passthrough: Dedicates the server's physical NIC to the VM, transferring packet data via DMA (Direct Memory Access). No CPU cycles are needed for packet movement.
- i40evf/ixgbe-vf: Similar to i40e in PCI passthrough, these transfer packets via DMA but allow the NIC to be shared across multiple VMs. SR-IOV is generally preferred for its deployment flexibility. Refer to "Guidelines and Limitations" (page 16).
- vmxnet3: A para-virtualized network driver supporting 10Gbps operation, but it requires CPU cycles. This is the VMware default.
When using vmxnet3, disable Large Receive Offload (LRO) to prevent poor TCP performance.
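One way to disable LRO for vmxnet3 host-wide on ESXi is through the host's advanced settings, for example (a sketch; confirm the option names and whether a VM power cycle is needed against VMware's documentation for your ESXi version):
# disable hardware and software LRO for vmxnet3 adapters on the host
esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0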
Performance Optimizations
Adjustments to both the VM and the host can enhance ASAv performance. Refer to "Performance Tuning for the ASAv on VMware" (page 11) for more details.
- NUMA: Isolate guest VM CPU resources to a single non-uniform memory access (NUMA) node to improve performance. Refer to "NUMA Guidelines" (page 12) for more information.
- Receive Side Scaling (RSS): The ASAv supports RSS, a technology that distributes network receive traffic across multiple processor cores. Supported on Version 9.13(1) and later. Refer to "Multiple RX Queues for Receive Side Scaling (RSS)" (page 13) for more information.
- VPN Optimization: Refer to "VPN Optimization" (page 16) for considerations on optimizing VPN performance with the ASAv.
OVF File Guidelines
The choice between asav-vi.ovf or asav-esxi.ovf depends on the deployment target:
- asav-vi: For deployment on vCenter.
- asav-esxi: For deployment on ESXi (without vCenter).
- ASAv OVF deployment does not support localization. Ensure VMware vCenter and LDAP servers are in ASCII-compatible mode.
- Set your keyboard to United States English before installing the ASAv and for using the VM console.
- When the ASAv is deployed, two ISO images are mounted on the ESXi hypervisor:
- The first drive contains OVF environment variables generated by vSphere.
- The second drive contains day0.iso.
Attention: Both drives can be unmounted after the ASAv virtual machine boots. However, Drive 1 (with OVF environment variables) remains mounted after power off/on, even if "Connect at Power On" is unchecked.
Failover for High Availability Guidelines
For failover deployments, ensure the standby unit has the same license entitlement as the active unit (e.g., both units should have the 2Gbps entitlement).
Important: When creating a high availability pair using ASAv, data interfaces must be added to each ASAv in the same order. Mismatched interface order can cause errors and affect failover functionality.
ASAv on KVM Guidelines and Limitations
Deployment hardware for ASAv can vary based on the number of instances and usage requirements. Each virtual appliance requires minimum resource allocation for memory, CPUs, and disk space.
Review the following guidelines and limitations before ASAv deployment.
ASAv on KVM System Requirements
- Host CPU: Must be a server-class x86-based Intel or AMD CPU with virtualization extensions. For example, Cisco Unified Computing System™ (Cisco UCS®) C series M4 server with Intel® Xeon® CPU E5-2690v4 processors running at 2.6GHz is used for ASAv performance testing.
Recommended vNICs
- i40e in PCI passthrough: Dedicates the server's physical NIC to the VM, transferring packet data via DMA (Direct Memory Access). No CPU cycles are needed for packet movement.
- i40evf/ixgbe-vf: Similar to i40e in PCI passthrough, these transfer packets via DMA but allow the NIC to be shared across multiple VMs. SR-IOV is generally preferred for its deployment flexibility.
- virtio: A para-virtualized network driver supporting 10Gbps operation, but it requires CPU cycles.
Performance Optimizations
Adjustments to both the VM and the host can enhance ASAv performance. Refer to "Performance Tuning for the ASAv on KVM" (page 23) for more details.
- NUMA: Isolate guest VM CPU resources to a single non-uniform memory access (NUMA) node to improve performance. Refer to "NUMA Guidelines" (page 24) for more information.
- Receive Side Scaling (RSS): The ASAv supports RSS, a technology that distributes network receive traffic across multiple processor cores. Refer to "Multiple RX Queues for Receive Side Scaling (RSS)" (page 26) for more information.
- VPN Optimization: Refer to "VPN Optimization" (page 16) for considerations on optimizing VPN performance with the ASAv.
CPU Pinning
CPU pinning is required for the ASAv to function in a KVM environment. Refer to "Enable CPU Pinning" (page 23) for instructions.
Failover for High Availability Guidelines
For failover deployments, ensure the standby unit has the same license entitlement as the active unit (e.g., both units should have the 2Gbps entitlement).
Important: When creating a high availability pair using ASAv, data interfaces must be added to each ASAv in the same order. Mismatched interface order can cause errors and affect failover functionality.
Prerequisites for the ASAv and KVM
- Download the ASAv qcow2 file from Cisco.com and place it on your Linux host: http://www.cisco.com/go/asa-software
Note: A Cisco.com login and Cisco service contract are required.
For the sample deployment in this document, Ubuntu 18.04 LTS is assumed. Install the following packages on the Ubuntu 18.04 LTS host:
qemu-kvm
libvirt-bin
bridge-utils
virt-manager
virtinst
virsh tools
genisoimage
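On Ubuntu 18.04 LTS these packages can typically be installed in one step, for example (a sketch; on 18.04 the libvirt-bin package may instead be provided as libvirt-daemon-system and libvirt-clients, which also supply the virsh tools):
sudo apt-get update
sudo apt-get install qemu-kvm libvirt-bin bridge-utils virt-manager virtinst genisoimage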
Performance is influenced by the host and its configuration. Maximize ASAv throughput on KVM by tuning your host. For general host-tuning concepts, refer to "NFV Delivers Packet Processing Performance with Intel".
Useful optimizations for Ubuntu 18.04 include:
- macvtap: A high-performance alternative to a Linux bridge; specific settings must be configured to use macvtap instead of a Linux bridge.
- Transparent Huge Pages: Increases memory page size and is enabled by default in Ubuntu 18.04.
- Hyperthreading disabled: Reduces the two logical vCPUs per physical core to one.
- txqueuelength: Increases the default txqueuelength to 4000 packets, reducing the drop rate (see the example following this list).
- pinning: Pins qemu and vhost processes to specific CPU cores. Under certain conditions, pinning significantly boosts performance.
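For instance, the txqueuelength adjustment mentioned above can be applied to the host interface that carries ASAv traffic (the interface name is a placeholder; make the change persistent through your distribution's network configuration):
# raise the transmit queue length to 4000 packets on the host interface
sudo ip link set dev eth4 txqueuelen 4000
# verify; the value appears as "qlen 4000" in the output
ip link show dev eth4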
For optimizing a RHEL-based distribution, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide.
For ASA software and ASAv hypervisor compatibility, see Cisco ASA Compatibility.
Performance Tuning for the ASAv on KVM
Increasing Performance on KVM Configurations
Enhance ASAv performance in a KVM environment by adjusting KVM host settings. These settings are independent of the host server's configuration. This option is available in Red Hat Enterprise Linux 7.0 KVM.
Improve KVM configurations by enabling CPU pinning.
Enable CPU Pinning
ASAv requires KVM CPU affinity options for performance enhancement in KVM environments. Processor affinity, or CPU pinning, binds a process or thread to a specific CPU or range of CPUs, ensuring execution only on designated CPUs.
Configure host aggregates to deploy instances using CPU pinning on different hosts from those that do not, to prevent unpinned instances from consuming resources needed by pinned instances.
Attention: Do not deploy instances with NUMA topology on the same hosts as instances without NUMA topology.
To use this option, configure CPU pinning on the KVM host.
Procedure
- In the KVM host environment, verify the host topology to determine the vCPUs available for pinning: virsh nodeinfo
- Verify the available vCPU numbers: virsh capabilities
- Pin the vCPUs to sets of processor cores: virsh vcpupin <vm-name> <vcpu-number> <host-core-number>. This command must be executed for each vCPU on your ASAv.
Note: When configuring CPU pinning, take the host server's CPU topology into account; do not pin the vCPUs of a single ASAv across multiple sockets. The downside of the performance improvement in a KVM configuration is that it requires dedicated system resources.
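For example, pinning a 4-vCPU ASAv whose libvirt domain is named asav-vm (a hypothetical name) to four cores on the same socket might look like this:
virsh vcpupin asav-vm 0 8
virsh vcpupin asav-vm 1 9
virsh vcpupin asav-vm 2 10
virsh vcpupin asav-vm 3 11
# display the resulting vCPU-to-core affinity
virsh vcpupin asav-vm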
NUMA Guidelines
Non-Uniform Memory Access (NUMA) describes a shared memory architecture where main memory modules are placed relative to processors in a multiprocessor system. When a processor accesses memory outside its own node (remote memory), data transfer occurs over the NUMA connection at a slower rate than accessing local memory.
The x86 server architecture comprises multiple sockets and cores within each socket. Each CPU socket, along with its memory and I/O, forms a NUMA node. For efficient packet reading from memory, guest applications and associated peripherals (like the NIC) should reside within the same node.
For Optimum ASAv Performance:
- The ASAv VM must run on a single NUMA node. Deploying a single ASAv across two sockets will significantly degrade performance.
- An 8-core ASAv requires each socket on the host CPU to have a minimum of 8 cores per socket. Consider other VMs running on the server.
- A 16-core ASAv requires each socket on the host CPU to have a minimum of 16 cores per socket. Consider other VMs running on the server.
- The NIC should be on the same NUMA node as the ASAv VM.
The following figures illustrate server configurations for NUMA architecture examples:
NUMA Optimization
Optimally, the ASAv VM should run on the same NUMA node as the NICs. To achieve this:
- Determine the NUMA node of the NICs using "lstopo" to view the node diagram. Note the NICs and their attached nodes.
- At the KVM host, use virsh list to find the ASAv.
- Edit the VM using virsh edit <VM Number>.
- Align the ASAv on the chosen node. Examples for 18-core nodes are provided; a command-line sketch also follows this list.
- Save the XML changes and power cycle the ASAv VM.
- To ensure the VM runs on the desired node, use ps aux | grep <name of your ASAv VM> to get the process ID.
- Run sudo numastat -c <ASAv VM Process ID> to verify proper ASAv VM alignment.
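As an alternative to hand-editing the domain XML, recent libvirt versions can apply the memory binding directly from the shell. The following sketch assumes a domain named asav-vm and NUMA node 0; adjust both for your environment:
# show the host topology and note which node the NICs are attached to
lstopo
# bind the ASAv VM's memory allocation to NUMA node 0 (written to the domain XML)
virsh numatune asav-vm --mode strict --nodeset 0 --config
# power cycle the VM so the change takes effect
virsh shutdown asav-vm
virsh start asav-vm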
More information on NUMA tuning with KVM can be found in the Red Hat document "9.3. libvirt NUMA Tuning".
Multiple RX Queues for Receive Side Scaling (RSS)
The ASAv supports Receive Side Scaling (RSS), a technology that distributes network receive traffic across multiple processor cores. For maximum throughput, each vCPU (core) must have its own NIC RX queue. A typical RA VPN deployment might use a single inside/outside pair of interfaces.
Important: ASAv Version 9.13(1) or greater is required for multiple RX queues. For KVM, libvirt version 1.0.6 or higher is needed.
For an 8-core VM with an inside/outside pair of interfaces, each interface will have 4 RX queues, as shown in Figure 3: 8-Core ASAv RSS RX Queues (page 14).
For a 16-core VM with an inside/outside pair of interfaces, each interface will have 8 RX queues, as shown in Figure 4: 16-Core ASAv RSS RX Queues (page 14).
The following table presents ASAv's vNICs for VMware and the number of supported RX queues. Refer to "Recommended vNICs" (page 8) for descriptions of supported vNICs.
NIC Card | vNIC Driver | Driver Technology | Number of RX Queues | Performance |
---|---|---|---|---|
x710* | i40e | PCI Passthrough | 8 max | PCI Passthrough offers the highest performance of the NICs tested. In passthrough mode, the NIC is dedicated to the ASAv and is not an optimal choice for virtual environments. |
x710* | i40evf | SR-IOV | 4 | SR-IOV with the x710 NIC has lower throughput (~30%) than PCI Passthrough. i40evf on VMware has a maximum of 4 RX queues per i40evf. 8 RX queues are needed for maximum throughput on a 16-core VM. |
x520 | ixgbe-vf | SR-IOV | 4 | The ixgbe-vf driver (in SR-IOV mode) has performance issues that are under investigation. |
x520 | ixgbe | PCI Passthrough | 4 | The ixgbe driver (in PCI Passthrough mode) has 4 RX queues. Performance is on par with i40evf (SR-IOV). |
N/A | vmxnet3 | Para-virtualized | 8 max | Not recommended for ASAv100. |
N/A | e1000 | — | — | Not recommended by VMware. |
*The ASAv is not compatible with the 1.9.5 i40en host driver for the x710 NIC. Older or newer driver versions will work. See "Identify NIC Drivers and Firmware Versions" (page 15) for information on ESXCLI commands to identify or verify NIC driver and firmware versions.
Identify NIC Drivers and Firmware Versions
To identify or verify specific firmware and driver version information, use ESXCLI commands:
- To list the installed NICs, SSH to the host and run the esxcli network nic list command. This provides a record of devices and general information.
- After listing the installed NICs, pull detailed configuration information using esxcli network nic get -n <nic name>; an example follows below.
Note: General network adapter information can also be viewed from the VMware vSphere Client under Physical Adapters within the Configure tab.
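For example (vmnic0 is a placeholder for one of the NIC names returned by the list command):
# list the physical NICs installed on the ESXi host
esxcli network nic list
# show driver and firmware details for a specific NIC
esxcli network nic get -n vmnic0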
VPN Optimization
Considerations for optimizing VPN performance with the ASAv include:
- IPSec offers higher throughput than DTLS.
- Cipher GCM provides approximately twice the throughput of CBC.
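For example, an IKEv2 IPsec proposal that prefers AES-GCM can be configured on the ASAv along the following lines (a sketch; the proposal name is arbitrary and the surrounding tunnel configuration is omitted):
configure terminal
crypto ipsec ikev2 ipsec-proposal AES-GCM-SKETCH
! GCM provides both encryption and integrity, so null ESP integrity is used
protocol esp encryption aes-gcm-256
protocol esp integrity null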
SR-IOV Interface Provisioning
SR-IOV allows multiple VMs to share a single PCIe network adapter on a host server. SR-IOV defines these functions:
- Physical function (PF): Full PCIe functions that include SR-IOV capabilities, appearing as regular static NICs on the host server.
- Virtual function (VF): Lightweight PCIe functions for data transfer, derived from and managed by a PF.
VFs provide up to 10 Gbps connectivity to ASAv virtual machines in a virtualized operating system framework. This section explains VF configuration in a KVM environment. SR-IOV support on the ASAv is detailed in "ASAv and SR-IOV Interface Provisioning" (page 5).
Requirements for SR-IOV Interface Provisioning
To attach SR-IOV-enabled VFs or Virtual NICs (vNICs) to an ASAv instance, the physical NIC must support SR-IOV. SR-IOV also requires support in the BIOS and the operating system instance or hypervisor.
General guidelines for SR-IOV interface provisioning for the ASAv in a KVM environment:
- An SR-IOV-capable physical NIC is required in the host server; see "Guidelines and Limitations for SR-IOV Interfaces" (page 6).
- Virtualization must be enabled in the BIOS on your host server. Consult your vendor documentation for details.
- IOMMU global support for SR-IOV must be enabled in the BIOS on your host server. Consult your hardware vendor documentation for details.
Modify the KVM Host BIOS and Host OS
This section outlines setup and configuration steps for provisioning SR-IOV interfaces on a KVM system. The information is based on a lab environment using Ubuntu 14.04 on a Cisco UCS C Series server with an Intel Ethernet Server Adapter X520 - DA2.
Before you begin
- Ensure an SR-IOV-compatible network interface card (NIC) is installed.
- Ensure Intel Virtualization Technology (VT-x) and VT-d features are enabled.
Note: Some system manufacturers disable these extensions by default. Verify the process with vendor documentation, as systems differ in accessing and changing BIOS settings.
- Ensure all Linux KVM modules, libraries, user tools, and utilities are installed during OS installation; see "Prerequisites for the ASAv and KVM" (page 22).
- Ensure the physical interface is in the UP state. Verify with ifconfig <ethname>.
Procedure
- Log in to your system as the "root" user.
- Verify that Intel VT-d is enabled using the command dmesg | grep -e DMAR -e IOMMU. Output containing "DMAR: IOMMU enabled" confirms that VT-d is active.
- Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub configuration file. If using an AMD processor, append amd_iommu=on instead.
- Reboot the server for the IOMMU change to take effect: shutdown -r now
- Create VFs by writing an appropriate value to the sriov_numvfs parameter via the sysfs interface, using the format echo n > /sys/class/net/<device name>/device/sriov_numvfs. To ensure VFs are created upon server power-cycle, append this command to the rc.local file in /etc/rc.d/. The following example shows creating one VF per port (your interfaces may vary):
echo '1' > /sys/class/net/eth4/device/sriov_numvfs
echo '1' > /sys/class/net/eth5/device/sriov_numvfs
echo '1' > /sys/class/net/eth6/device/sriov_numvfs
echo '1' > /sys/class/net/eth7/device/sriov_numvfs
- Reboot the server: shutdown -r now
- Verify VF creation: lspci | grep -i "Virtual Function"
Assign PCI Devices to the ASAv
After creating VFs, add them to the ASAv like any other PCI device. The following example demonstrates adding an Ethernet VF controller to an ASAv using the graphical virt-manager tool.
Procedure
- Open the ASAv and click the Add Hardware button to add a new device to the virtual machine.
- From the Hardware list in the left pane, select PCI Host Device. The list of PCI devices, including VFs, appears in the center pane.
- Select one of the available Virtual Functions and click Finish. The PCI Device appears in the Hardware List, described as an Ethernet Controller Virtual Function.
What to do next:
- Use the show interface command from the ASAv command line to verify the newly configured interfaces.
- Use interface configuration mode on the ASAv to configure and enable each interface for transmitting and receiving traffic. Refer to the "Basic Interface Configuration" chapter of the Cisco ASA Series General Operations CLI Configuration Guide for more information; a brief sketch follows.
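A minimal interface bring-up from the ASAv CLI might look like the following (the interface ID, name, and addressing are placeholders for your environment):
configure terminal
interface GigabitEthernet0/0
nameif outside
security-level 0
ip address 203.0.113.10 255.255.255.0
no shutdown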
About ASAv Deployment On the AWS Cloud
The Cisco Adaptive Security Virtual Appliance (ASAv) offers the same proven security functionality as physical Cisco ASAs in a virtual form factor. The ASAv can be deployed in the public AWS cloud and configured to protect virtual and physical data center workloads that expand, contract, or shift location over time.
The ASAv supports the following AWS instance types:
Instance | vCPUs | Memory (GB) | Interfaces |
---|---|---|---|
c5.xlarge | 4 | 8 | 4 |
c5.2xlarge | 8 | 16 | 4 |
c4.large | 2 | 3.75 | 3 |
c4.xlarge | 4 | 7.5 | 4 |
c4.2xlarge | 8 | 15 | 4 |
c3.large | 2 | 3.75 | 3 |
c3.xlarge | 4 | 7.5 | 4 |
c3.2xlarge | 8 | 15 | 4 |
m4.large | 2 | 4 | 3 |
m4.xlarge | 4 | 16 | 4 |
m4.2xlarge | 8 | 32 | 4 |
Create an account on AWS, set up the ASAv using the AWS Wizard, and choose an Amazon Machine Image (AMI). The AMI is a template containing the software configuration needed to launch your instance.
Important: AMI images are not available for download outside the AWS environment.
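If you prefer to script the deployment rather than use the AWS console wizard, an equivalent launch with the AWS CLI looks roughly like the following (all IDs are placeholders; the AMI ID comes from the AWS Marketplace listing for the ASAv, and the user-data file carries an optional day 0 configuration):
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type c5.xlarge \
  --key-name my-keypair \
  --subnet-id subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --user-data file://day0-config.txt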
Performance Tuning for the ASAv on AWS
VPN Optimization
The AWS c5 instances offer significantly higher performance than older c3, c4, and m4 instances. Approximate RA VPN throughput (DTLS using 450B TCP traffic with AES-CBC encryption) on the c5 instance family should be:
- 0.5Gbps on c5.large
- 1Gbps on c5.xlarge
- 2Gbps on c5.2xlarge