Optimizing Your ASAv Deployment

First Published: 2020-04-14

Last Modified: 2020-04-30

The Cisco Adaptive Security Virtual Appliance (ASAv) provides comprehensive firewall functionality for virtualized environments, securing data center traffic and multitenant setups. Management and monitoring can be performed using ASDM or CLI.

Important: For optimal performance, ensure you are running ASA Version 9.13(1) or later. Refer to the Cisco Adaptive Security Virtual Appliance (ASAv) support page for the latest ASAv Getting Started Guide.

Licensing for the ASAv

The ASAv utilizes Cisco Smart Software Licensing. For detailed information, consult the Smart Software Licensing (ASAv, ASA on Firepower) documentation.

Note: A smart license must be installed on the ASAv. Without a license, throughput is limited to 100 Kbps, suitable for preliminary connectivity tests. A smart license is mandatory for regular operation.

Starting with version 9.13(1), any ASAv license is compatible with any supported ASAv vCPU/memory configuration, allowing deployment across diverse VM resource footprints. Session limits for AnyConnect and TLS Proxy are determined by the ASAv platform entitlement, not a specific model type.
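
For example, after the ASAv is registered with the Cisco Smart Software Manager, the entitlement tier and throughput level are typically selected from the ASAv CLI. The following is a minimal sketch; the ID token is a placeholder obtained from your Smart Account:

    ciscoasa(config)# license smart
    ciscoasa(config-smart-lic)# feature tier standard
    ciscoasa(config-smart-lic)# throughput level 2G
    ciscoasa(config-smart-lic)# exit
    ciscoasa(config)# exit
    ciscoasa# license smart register idtoken <ID-token-from-your-Smart-Account>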

Refer to the following sections for details on ASAv licensing entitlements and resource specifications for private and public deployment targets.

About Smart License Entitlements

Any ASAv license can be used with any supported ASAv vCPU/memory configuration, offering flexibility in deploying the ASAv on various VM resource footprints and supporting a wider range of AWS and Azure instance types. When configuring the ASAv VM, the maximum supported vCPUs are 8, and the maximum supported memory is 64GB RAM.

ASAv Private Cloud Entitlements (VMware, KVM, Hyper-V)

Important: Beginning with version 9.13(1), the minimum memory requirement for the ASAv is 2GB. If your current ASAv has less than 2GB of memory, you must increase the memory or redeploy a new ASAv VM with version 9.13(1) to upgrade. When deploying an ASAv with more than 1 vCPU, the minimum memory requirement is 4GB.

Session Limits for Licensed Features

Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced by a rate limiter. The following table outlines these limits:

Table 1: ASAv Licensed Feature Limits Based on Entitlement

Entitlement            AnyConnect Premium Peers    Total TLS Proxy Sessions    Rate Limiter
Standard Tier, 100M    50                          500                         150 Mbps
Standard Tier, 1G      250                         500                         1 Gbps
Standard Tier, 2G      750                         1,000                       2 Gbps
Standard Tier, 10G     10,000                      10,000                      10 Gbps

Platform Limits for Licensed Features

Platform session limits are based on the provisioned memory for the ASAv. The maximum ASAv VM dimensions are 8 vCPUs and 64 GB of memory.

Table 2: ASAv Licensed Feature Limits Based on Memory

Provisioned Memory    AnyConnect Premium Peers    Total TLS Proxy Sessions
2 GB to 8 GB          250                         500
8 GB to 16 GB         750                         1,000
16 GB to 64 GB        10,000                      10,000

The flexibility of using any ASAv license with any supported ASAv vCPU/memory configuration allows deployment in private cloud environments (VMware, KVM, Hyper-V). Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. The following table summarizes session limits and rate limiters based on entitlement tier for the ASAv deployed in private cloud environments.

Table 3: ASAv on VMware/KVM/Hyper-V Private Cloud - Licensed Feature Limits Based on Entitlement

RAM (GB)        Entitlement Support*
Min     Max     Standard Tier, 100M    Standard Tier, 1G    Standard Tier, 2G    Standard Tier, 10G    Standard Tier, 20G
2       7.9     50/500/100M            250/500/1G           250/500/2G           250/500/10G           250/500/20G
8       15.9    50/500/100M            250/500/1G           750/1000/2G          750/1000/10G          750/1000/20G
16      31.9    50/500/100M            250/500/1G           750/1000/2G          10K/10K/10G           10K/10K/20G
32      64      50/500/100M            250/500/1G           750/1000/2G          10K/10K/10G           20K/20K/20G

*AnyConnect Sessions / TLS Proxy Sessions / Rate Limiter per entitlement/instance.

ASAv Public Cloud Entitlements (AWS)

The ASAv can be deployed on various AWS instance types. Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. The following table details session limits and rate limiters based on entitlement tier for AWS instance types. Refer to "About ASAv Deployment On the AWS Cloud" for AWS VM dimensions (vCPUs and memory).

Table 4: ASAv on AWS - Licensed Feature Limits Based on Entitlement

Instance      BYOL Entitlement Support*                                                        PAYG**
              Standard Tier, 100M   Standard Tier, 1G   Standard Tier, 2G   Standard Tier, 10G
c5.xlarge     50/500/100M           250/500/1G          750/1000/2G         750/1000/10G        750/1000
c5.2xlarge    50/500/100M           250/500/1G          750/1000/2G         10K/10K/10G         10K/10K
c4.large      50/500/100M           250/500/1G          250/500/2G          250/500/10G         250/500
c4.xlarge     50/500/100M           250/500/1G          250/500/2G          250/500/10G         250/500
c4.2xlarge    50/500/100M           250/500/1G          750/1000/2G         10K/10K/10G         750/1000
c3.large      50/500/100M           250/500/1G          250/500/2G          250/500/10G         250/500
c3.xlarge     50/500/100M           250/500/1G          250/500/2G          250/500/10G         250/500
c3.2xlarge    50/500/100M           250/500/1G          750/1000/2G         10K/10K/10G         750/1000
m4.large      50/500/100M           250/500/1G          250/500/2G          250/500/10G         250/500
m4.xlarge     50/500/100M           250/500/1G          250/500/2G          250/500/10G         10K/10K
m4.2xlarge    50/500/100M           250/500/1G          750/1000/2G         10K/10K/10G         10K/10K

*AnyConnect Sessions / TLS Proxy Sessions / Rate Limiter per entitlement/instance.

**AnyConnect Sessions / TLS Proxy Sessions. Rate Limiter is not employed in PAYG mode.

Pay-As-You-Go (PAYG) Mode: The following table summarizes Smart Licensing entitlements for hourly billing (PAYG) mode, based on allocated memory.

Table 5: ASAv on AWS - Smart License Entitlements for PAYG

RAM (GB)          Hourly Billing Mode Entitlement
2 GB to 8 GB      Standard Tier, 1G
8 GB to 16 GB     Standard Tier, 2G
16 GB to 64 GB    Standard Tier, 10G

ASAv Public Cloud Entitlements (Azure)

The ASAv can be deployed on various Azure instance types. Session limits for AnyConnect and TLS Proxy are determined by the installed ASAv platform entitlement tier and enforced via a rate limiter. Refer to "About ASAv Deployment On the Microsoft Azure Cloud" for the supported Azure instance types and VM dimensions (vCPUs and memory).

Note: Pay-As-You-Go (PAYG) Mode is currently not supported for the ASAv on Azure.

ASAv and SR-IOV Interface Provisioning

Single Root I/O Virtualization (SR-IOV) enables multiple VMs to share a single PCIe network adapter on a host server. SR-IOV facilitates direct data transfer between a VM and the network adapter, bypassing the hypervisor for improved network throughput and reduced server CPU load. Enhancements like Intel VT-d technology support direct memory transfers essential for SR-IOV.

The SR-IOV specification defines two device types:

    • Physical Functions (PFs): full PCIe functions that include the SR-IOV capabilities. PFs appear as regular, static NICs on the host server.
    • Virtual Functions (VFs): lightweight PCIe functions that handle data movement only. Each VF is derived from a PF and shares the PF's physical resources.

Guidelines and Limitations for SR-IOV Interfaces

Guidelines for SR-IOV Interfaces

SR-IOV provisioning on the ASAv requires careful planning, starting with the appropriate operating system level, hardware and CPU, adapter types, and adapter settings. See Licensing for the ASAv (page 1) for the compliant resource scenarios that match license entitlements for the different ASAv platforms. SR-IOV Virtual Functions also require specific system resources.

Host Operating System and Hypervisor Support

SR-IOV support and VF drivers are available for:

The ASAv with SR-IOV interfaces is currently supported on the following hypervisors:

Hardware Platform Support

Note: Deploy the ASAv on any server-class x86 CPU device capable of running the supported virtualization platforms.

This section provides hardware guidelines for SR-IOV interfaces. While these are guidelines, using hardware that does not meet them may lead to functionality issues or reduced performance.

A server supporting SR-IOV and equipped with an SR-IOV-capable PCIe adapter is required. Consider the following hardware aspects:

Note: Consult your manufacturer's documentation for SR-IOV support on your system.

Note: Testing was performed with the Cisco UCS C-Series Rack Server. The Cisco UCS-B server does not support the ixgbe-vf vNIC.

Supported NICs for SR-IOV

CPUs

Note: Testing was performed on an Intel Broadwell CPU (E5-2699-v4) at 2.3GHz.

Cores

Note: CPU pinning is recommended for achieving full throughput rates on the ASAv50 and ASAv100. Refer to "Increasing Performance on ESXi Configurations" (page 11) and "Increasing Performance on KVM Configurations" (page 23) for more details.

BIOS Settings

SR-IOV requires support in both the BIOS and the operating system instance or hypervisor. Check your system BIOS for the following settings:

    • Virtualization enabled (Intel VT-x or AMD-V)
    • Directed I/O enabled (Intel VT-d or AMD IOMMU)
    • SR-IOV enabled, if it is exposed as a separate BIOS option

Note: Verify the process with vendor documentation, as systems differ in accessing and changing BIOS settings.

ASAv on VMware Guidelines and Limitations

Limitations

Be aware of the following limitations when using ixgbe-vf interfaces:

ASAv on VMware ESXi System Requirements

Multiple ASAv instances can be created and deployed on an ESXi server. Hardware requirements vary based on the number of instances and usage needs. Each virtual appliance requires minimum resource allocation for memory, CPUs, and disk space.

Review the following guidelines and limitations before ASAv deployment.

Recommended vNICs

The following vNICs are recommended for optimum performance:

When using vmxnet3, disable Large Receive Offload (LRO) to prevent poor TCP performance.
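
For example, LRO for vmxnet3 can be disabled globally on the ESXi host with the following advanced settings (one possible approach, shown as a sketch; LRO can also be disabled per-VM or from the vSphere Client, and affected VMs typically need to be power-cycled for the change to take effect):

    esxcli system settings advanced set -o /Net/Vmxnet3HwLRO -i 0
    esxcli system settings advanced set -o /Net/Vmxnet3SwLRO -i 0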

Performance Optimizations

Adjustments to both the VM and the host can enhance ASAv performance. Refer to "Performance Tuning for the ASAv on VMware" (page 11) for more details.

OVF File Guidelines

The choice between asav-vi.ovf or asav-esxi.ovf depends on the deployment target:

    • asav-vi.ovf: for deployment on vCenter
    • asav-esxi.ovf: for deployment on ESXi (no vCenter)

Attention: Both drives can be unmounted after the ASAv virtual machine boots. However, Drive 1 (with OVF environment variables) remains mounted after power off/on, even if "Connect at Power On" is unchecked.

Failover for High Availability Guidelines

For failover deployments, ensure the standby unit has the same license entitlement as the active unit (e.g., both units should have the 2Gbps entitlement).

Important: When creating a high availability pair using ASAv, data interfaces must be added to each ASAv in the same order. Mismatched interface order can cause errors and affect failover functionality.

ASAv on KVM Guidelines and Limitations

Deployment hardware for ASAv can vary based on the number of instances and usage requirements. Each virtual appliance requires minimum resource allocation for memory, CPUs, and disk space.

Review the following guidelines and limitations before ASAv deployment.

ASAv on KVM System Requirements

Recommended vNICs

Performance Optimizations

Adjustments to both the VM and the host can enhance ASAv performance. Refer to "Performance Tuning for the ASAv on KVM" (page 23) for more details.

CPU Pinning

CPU pinning is required for the ASAv to function in a KVM environment. Refer to "Enable CPU Pinning" (page 23) for instructions.

Failover for High Availability Guidelines

For failover deployments, ensure the standby unit has the same license entitlement as the active unit (e.g., both units should have the 2Gbps entitlement).

Important: When creating a high availability pair using ASAv, data interfaces must be added to each ASAv in the same order. Mismatched interface order can cause errors and affect failover functionality.

Prerequisites for the ASAv and KVM

Note: A Cisco.com login and Cisco service contract are required.

For the sample deployment in this document, Ubuntu 18.04 LTS is assumed. Install the following packages on the Ubuntu 18.04 LTS host:
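
The original package list is not reproduced here; as an assumption, a typical KVM/libvirt toolset on Ubuntu 18.04 LTS can be installed as follows (illustrative package names, not an authoritative list):

    sudo apt-get update
    sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients virtinst virt-manager bridge-utils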

Performance is influenced by the host and its configuration. Maximize ASAv throughput on KVM by tuning your host. For general host-tuning concepts, refer to "NFV Delivers Packet Processing Performance with Intel".

Useful optimizations for Ubuntu 18.04 include:

For optimizing a RHEL-based distribution, see the Red Hat Enterprise Linux 7 Virtualization Tuning and Optimization Guide.

For ASA software and ASAv hypervisor compatibility, see Cisco ASA Compatibility.

Performance Tuning for the ASAv on KVM

Increasing Performance on KVM Configurations

Enhance ASAv performance in a KVM environment by adjusting KVM host settings. These settings are independent of the host server's configuration. This option is available in Red Hat Enterprise Linux 7.0 KVM.

Improve KVM configurations by enabling CPU pinning.

Enable CPU Pinning

ASAv requires KVM CPU affinity options for performance enhancement in KVM environments. Processor affinity, or CPU pinning, binds a process or thread to a specific CPU or range of CPUs, ensuring execution only on designated CPUs.

Configure host aggregates to deploy instances using CPU pinning on different hosts from those that do not, to prevent unpinned instances from consuming resources needed by pinned instances.

Attention: Do not deploy instances with NUMA topology on the same hosts as instances without NUMA topology.

To use this option, configure CPU pinning on the KVM host.

Procedure

  1. In the KVM host environment, verify host topology to determine available vCPUs for pinning using the command: virsh nodeinfo
  2. Verify available vCPU numbers using the command: virsh capabilities
  3. Pin the vCPUs to sets of processor cores using the command: virsh vcpupin <vm-name> <vcpu-number> <host-core-number>. This command must be executed for each vCPU on your ASAv.
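
For illustration, the following sketch assumes a 4-vCPU ASAv VM named asav-kvm (a hypothetical name) pinned to host cores 8 through 11 on a single socket:

    virsh vcpupin asav-kvm 0 8
    virsh vcpupin asav-kvm 1 9
    virsh vcpupin asav-kvm 2 10
    virsh vcpupin asav-kvm 3 11
    virsh vcpupin asav-kvm          # verify the resulting vCPU-to-core mapping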

Note: When configuring CPU pinning, carefully consider the host server's CPU topology. On a multi-socket server, avoid pinning an ASAv's vCPUs across sockets; keep them on cores within a single socket. The trade-off for this performance improvement is that the pinned cores become dedicated system resources.

NUMA Guidelines

Non-Uniform Memory Access (NUMA) describes a shared memory architecture where main memory modules are placed relative to processors in a multiprocessor system. When a processor accesses memory outside its own node (remote memory), data transfer occurs over the NUMA connection at a slower rate than accessing local memory.

The x86 server architecture comprises multiple sockets and cores within each socket. Each CPU socket, along with its memory and I/O, forms a NUMA node. For efficient packet reading from memory, guest applications and associated peripherals (like the NIC) should reside within the same node.

For Optimum ASAv Performance:

The following figures illustrate server configurations for NUMA architecture examples:

Figure 1: 8-Core NUMA Architecture Example
Figure 2: 16-Core ASAv NUMA Architecture Example

NUMA Optimization

Optimally, the ASAv VM should run on the same NUMA node as the NICs. To achieve this:

  1. Determine the NUMA node of the NICs using "lstopo" to view the node diagram. Note the NICs and their attached nodes.
  2. At the KVM Host, use virsh list to find the ASAv.
  3. Edit the VM using virsh edit <VM Number>.
  4. Align the ASAv on the chosen node (a sketch of the XML changes follows this procedure).
  5. Save the XML changes and power cycle the ASAv VM.
  6. To ensure the VM runs on the desired node, use ps aux | grep <name of your ASAV VM> to get the process ID.
  7. Run sudo numastat -c <ASAv VM Process ID> to verify proper ASAv VM alignment.
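
As an illustration only (not the 18-core examples referenced in step 4), the following libvirt XML sketch confines an 8-vCPU ASAv's vCPUs and memory to NUMA node 0; adjust the cpuset and nodeset values to match your host topology and the node to which the NICs are attached:

    <vcpu placement='static' cpuset='2-9'>8</vcpu>
    <numatune>
      <memory mode='strict' nodeset='0'/>
    </numatune>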

More information on NUMA tuning with KVM can be found in the Red Hat document "9.3. libvirt NUMA Tuning".

Multiple RX Queues for Receive Side Scaling (RSS)

The ASAv supports Receive Side Scaling (RSS), a technology that distributes network receive traffic across multiple processor cores. For maximum throughput, each vCPU (core) must have its own NIC RX queue. A typical RA VPN deployment might use a single inside/outside pair of interfaces.

Important: ASAv Version 9.13(1) or greater is required for multiple RX queues. For KVM, libvirt version 1.0.6 or higher is needed.

For an 8-core VM with an inside/outside pair of interfaces, each interface will have 4 RX queues, as shown in Figure 3: 8-Core ASAv RSS RX Queues (page 14).

For a 16-core VM with an inside/outside pair of interfaces, each interface will have 8 RX queues, as shown in Figure 4: 16-Core ASAv RSS RX Queues (page 14).
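
On KVM, multiple RX queues are typically requested in the interface definition of the libvirt XML. The following is a hedged sketch for a 4-queue virtio interface (the bridge name is hypothetical; use one queue per ASAv vCPU, up to the limits of your vNIC):

    <interface type='bridge'>
      <source bridge='br-inside'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4'/>
    </interface>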

The following table presents ASAv's vNICs for VMware and the number of supported RX queues. Refer to "Recommended vNICs" (page 8) for descriptions of supported vNICs.

Table 7: VMware Recommended NICs/vNICs

NIC Card    vNIC Driver    Driver Technology    Number of RX Queues    Performance
x710*       i40e           PCI Passthrough      8 max                  PCI Passthrough offers the highest performance of the NICs tested. In passthrough mode, the NIC is dedicated to the ASAv and is not an optimal choice for virtual environments.
x710*       i40evf         SR-IOV               4                      SR-IOV with the x710 NIC has lower throughput (~30%) than PCI Passthrough. i40evf on VMware has a maximum of 4 RX queues per i40evf; 8 RX queues are needed for maximum throughput on a 16-core VM.
x520        ixgbe-vf       SR-IOV               4                      The ixgbe-vf driver (in SR-IOV mode) has performance issues that are under investigation.
x520        ixgbe          PCI Passthrough      4                      The ixgbe driver (in PCI Passthrough mode) has 4 RX queues. Performance is on par with i40evf (SR-IOV).
N/A         vmxnet3        Para-virtualized     8 max                  Not recommended for the ASAv100.
N/A         e1000          -                    -                      Not recommended by VMware.

*The ASAv is not compatible with the 1.9.5 i40en host driver for the x710 NIC. Older or newer driver versions will work. See "Identify NIC Drivers and Firmware Versions" (page 15) for information on ESXCLI commands to identify or verify NIC driver and firmware versions.

Identify NIC Drivers and Firmware Versions

To identify or verify specific firmware and driver version information, use ESXCLI commands:
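
For example (a sketch; vmnic numbering depends on your host), the following commands list the physical adapters and show the driver and firmware versions for a specific NIC:

    esxcli network nic list                    # list physical adapters and their drivers
    esxcli network nic get -n vmnic0           # show driver information, including the firmware version
    esxcli software vib list | grep i40en      # check the installed i40en driver package version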

Note: General network adapter information can also be viewed from the VMware vSphere Client under Physical Adapters within the Configure tab.

VPN Optimization

Considerations for optimizing VPN performance with the ASAv include:

SR-IOV Interface Provisioning

SR-IOV allows multiple VMs to share a single PCIe network adapter on a host server. SR-IOV defines two functions: Physical Functions (PFs), the full PCIe functions on the host that include the SR-IOV capabilities, and Virtual Functions (VFs), lightweight PCIe functions derived from a PF that handle data movement only.

VFs provide up to 10 Gbps connectivity to ASAv virtual machines in a virtualized operating system framework. This section explains VF configuration in a KVM environment. SR-IOV support on the ASAv is detailed in "ASAv and SR-IOV Interface Provisioning" (page 5).

Requirements for SR-IOV Interface Provisioning

To attach SR-IOV-enabled VFs or Virtual NICs (vNICs) to an ASAv instance, the physical NIC must support SR-IOV. SR-IOV also requires support in the BIOS and the operating system instance or hypervisor.

General guidelines for SR-IOV interface provisioning for the ASAv in a KVM environment:

Modify the KVM Host BIOS and Host OS

This section outlines setup and configuration steps for provisioning SR-IOV interfaces on a KVM system. The information is based on a lab environment using Ubuntu 14.04 on a Cisco UCS C Series server with an Intel Ethernet Server Adapter X520 - DA2.

Before you begin

Note: Some system manufacturers disable these extensions by default. Verify the process with vendor documentation, as systems differ in accessing and changing BIOS settings.

Procedure

  1. Log in to your system as the "root" user.
  2. Verify that Intel VT-d is enabled using the command: dmesg | grep -e DMAR -e IOMMU. The output indicating "DMAR: IOMMU enabled" confirms VT-d is active.
  3. Activate Intel VT-d in the kernel by appending the intel_iommu=on parameter to the GRUB_CMDLINE_LINUX entry in the /etc/default/grub configuration file. If using an AMD processor, append amd_iommu=on instead.
  4. Reboot the server for the iommu change to take effect using shutdown -r now.
  5. Create VFs by writing an appropriate value to the sriov_numvfs parameter through the sysfs interface, using the format: echo n > /sys/class/net/<device name>/device/sriov_numvfs. To ensure the VFs are created every time the server power-cycles, append this command to the rc.local file in /etc/rc.d/. The following example creates one VF per port (your interface names may vary):
    • echo '1' > /sys/class/net/eth4/device/sriov_numvfs
    • echo '1' > /sys/class/net/eth5/device/sriov_numvfs
    • echo '1' > /sys/class/net/eth6/device/sriov_numvfs
    • echo '1' > /sys/class/net/eth7/device/sriov_numvfs
  6. Reboot the server using shutdown -r now.
  7. Verify VF creation using lspci | grep -i "Virtual Function".
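
A hedged sketch of the changes from steps 3 and 5 on the Ubuntu host used in this example (file contents abbreviated; regenerate the GRUB configuration before rebooting):

    # /etc/default/grub (step 3): append intel_iommu=on to the kernel command line
    GRUB_CMDLINE_LINUX="... intel_iommu=on"

    # regenerate GRUB and reboot (Ubuntu)
    sudo update-grub
    sudo shutdown -r now

    # /etc/rc.d/rc.local (step 5): recreate one VF per port at every boot
    echo '1' > /sys/class/net/eth4/device/sriov_numvfs
    echo '1' > /sys/class/net/eth5/device/sriov_numvfs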

Assign PCI Devices to the ASAv

After creating VFs, add them to the ASAv like any other PCI device. The following example demonstrates adding an Ethernet VF controller to an ASAv using the graphical virt-manager tool.

Procedure

  1. Open the ASAv and click the Add Hardware button to add a new device to the virtual machine.
  2. From the Hardware list in the left pane, select PCI Host Device. The list of PCI devices, including VFs, appears in the center pane.
  3. Select one of the available Virtual Functions and click Finish. The PCI Device appears in the Hardware List, described as an Ethernet Controller Virtual Function.
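
As a command-line alternative to virt-manager (a sketch under assumptions: the PCI address shown comes from the lspci output in the previous procedure, and vf.xml and asav-kvm are hypothetical names), a VF can be attached as a hostdev with virsh:

    # vf.xml
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x10' function='0x0'/>
      </source>
    </hostdev>

    virsh attach-device asav-kvm vf.xml --config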

What to do next:

About ASAv Deployment On the AWS Cloud

The Cisco Adaptive Security Virtual Appliance (ASAv) offers the same proven security functionality as physical Cisco ASAs in a virtual form factor. The ASAv can be deployed in the public AWS cloud and configured to protect virtual and physical data center workloads that expand, contract, or shift location over time.

The ASAv supports the following AWS instance types:

Table 9: AWS Supported Instance Types

Instance      vCPUs    Memory (GB)    Interfaces
c5.xlarge     4        8              4
c5.2xlarge    8        16             4
c4.large      2        3.75           3
c4.xlarge     4        7.5            4
c4.2xlarge    8        15             4
c3.large      2        3.75           3
c3.xlarge     4        7.5            4
c3.2xlarge    8        15             4
m4.large      2        4              3
m4.xlarge     4        16             4
m4.2xlarge    8        32             4

Create an account on AWS, set up the ASAv using the AWS Wizard, and choose an Amazon Machine Image (AMI). The AMI is a template containing the software configuration needed to launch your instance.

Important: AMI images are not available for download outside the AWS environment.
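
The deployment itself is typically driven by the AWS console wizard; as an illustrative alternative (the AMI ID, key pair, subnet, and security group values are placeholders), an ASAv instance can also be launched with the AWS CLI:

    aws ec2 run-instances \
      --image-id ami-xxxxxxxx \
      --instance-type c5.xlarge \
      --key-name my-asav-key \
      --subnet-id subnet-xxxxxxxx \
      --security-group-ids sg-xxxxxxxx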

Performance Tuning for the ASAv on AWS

VPN Optimization

The AWS c5 instances offer significantly higher performance than older c3, c4, and m4 instances. Approximate RA VPN throughput (DTLS using 450B TCP traffic with AES-CBC encryption) on the c5 instance family should be:


Related Documents

Introduction to Cisco ASAv: Features, Licensing, and Guidelines
An overview of the Cisco Adaptive Security Virtual Appliance (ASAv), covering its features, hypervisor support, licensing models, guidelines, limitations, and virtual network interface configurations.

Cisco Secure Email and Web Virtual Appliance Installation Guide
This guide provides comprehensive instructions for installing Cisco Secure Email and Web Virtual Appliances. It covers system requirements, deployment procedures for platforms such as Microsoft Hyper-V, KVM, and VMware ESXi, as well as AWS EC2 deployments, and details license installation, configuration, and troubleshooting steps.

Cisco ASA Compatibility Guide: Software and Hardware Matrix
Comprehensive compatibility guide for Cisco ASA (Adaptive Security Appliance) software and hardware, including ASDM, FXOS, ASAv, Firepower, and various modules. Updated October 5, 2016.

Cisco ASA Series General Operations ASDM Configuration Guide
Comprehensive guide detailing the configuration of Cisco ASA Series devices using the Adaptive Security Device Manager (ASDM), covering general operations, setup, interfaces, security policies, VPNs, and more.

Cisco ACI Virtual Edge Installation Guide, Release 2.2(x)
This manual details the installation, configuration, upgrade, and uninstallation processes for Cisco ACI Virtual Edge, a virtual switch solution for Cisco's Application Centric Infrastructure (ACI). It covers integration with VMware vCenter and ESXi, deployment methods such as PowerCLI and Python, and network policy management in virtualized data center environments.

Cisco ASA Series General Operations ASDM Configuration Guide, Version 7.22
This guide provides comprehensive instructions for the general operations and configuration of Cisco ASA Series devices using the Adaptive Security Device Manager (ASDM). It covers essential topics such as initial setup, interface configuration, security policies, VPNs, and licensing.

Cisco Smart Software Licensing: ASAv and Firepower Configuration Guide
A comprehensive guide to understanding and implementing Cisco Smart Software Licensing for ASAv and ASA on Firepower devices, covering setup, license management, and troubleshooting.