About This Guide
This guide details the installation of the virtual Junos-switch (vJunos-switch). The vJunos-switch is a virtualized version of the Junos-based EX switching platform, operating within a Kernel-based Virtual Machine (KVM) environment. It is built upon Juniper Networks' vMX Virtual Router (vMX) nested architecture. This document also covers basic vJunos-switch configuration and management procedures. For advanced software configuration, consult the Junos OS documentation.
Related Documentation: Junos OS for EX Series Documentation
Chapter 1: Understand vJunos-switch
This chapter provides an overview of vJunos-switch, its architecture, key features, benefits, limitations, and use cases.
vJunos-switch Overview
The vJunos-switch is a virtual Juniper switch running Junos OS on an x86 server. It can be managed like a physical switch and is suitable for lab environments. It is based on the EX9214 platform, supporting a single Routing Engine and Flexible PIC Concentrator (FPC). The vJunos-switch offers up to 100 Mbps aggregated bandwidth across all interfaces without requiring a bandwidth license. It serves as a tool for testing network configurations and protocols.
Key Features Supported
- Supports up to 96 switch interfaces.
- Can simulate data center IP underlay and overlay topologies.
- Supports EVPN-VXLAN leaf functionality.
- Supports Edge-Routed Bridging (ERB).
- Supports EVPN LAG multihoming in EVPN-VXLAN (ESI-LAG).
Benefits and Uses
- Reduced capital expenditure (CapEx) on lab: Free to use for building test labs, reducing costs associated with physical switches.
- Reduced deployment time: Enables virtual topology testing without physical hardware, allowing for instant lab setup.
- Eliminate need and time for lab hardware: Available for instant download, removing procurement waiting times.
- Education and training: Facilitates building labs for learning and employee training.
- Proof of concept and validation testing: Allows validation of data center switching topologies and pre-build configurations.
Limitations
- Single Routing Engine and single FPC architecture.
- Does not support in-service software upgrade (ISSU).
- Interface attachment or detachment is not supported while the system is running.
- SR-IOV is not supported; it is not required for the intended vJunos-switch use cases and throughput.
- Due to its nested architecture, vJunos-switch cannot itself be launched from within a VM.
- Supports a maximum bandwidth of 100 Mbps over all interfaces.
- Junos OS cannot be upgraded on a running system; a new instance must be deployed.
- Multicast is not supported.
vJunos-switch Architecture
The vJunos-switch is a single, nested VM solution where the virtual forwarding plane (VFP) and Packet Forwarding Engine (PFE) reside in the outer VM. When started, the VFP launches a nested VM running the Junos Virtual Control Plane (VCP) image, utilizing the KVM hypervisor. The architecture is layered: vJunos-switch at the top, KVM hypervisor and related system software in the middle, and the x86 server at the bottom.
The vJunos-switch supports up to 100 Mbps of throughput using 4 cores and 5 GB of memory, which is sufficient for lab use cases. Any additional cores and memory beyond this minimum are allocated to the VCP.
Figure 1: vJunos-switch Architecture depicts a diagram showing the Linux Host, vJunos-Switch with VFP VM and VCP VM, KVM, and the x86 server components.
Chapter 2: Hardware and Software Requirements for vJunos-switch on KVM
Minimum Hardware and Software Requirements
This section outlines the hardware and software requirements for running a vJunos-switch instance.
Table 1: Minimum Hardware Requirements for vJunos-switch
| Description | Value |
|---|---|
| Sample system configuration | For lab simulation and low-performance (less than 100 Mbps) use cases, any Intel x86 processor with VT-x capability, Intel Ivy Bridge or later. Example: Intel Xeon E5-2667 v2 @ 3.30 GHz, 25 MB cache. |
| Number of cores | A minimum of four cores is required: three are allocated to the VFP and one to the VCP. Any additional cores are allocated to the VCP. |
| Memory | A minimum of 5 GB of memory is required: approximately 3 GB is allocated to the VFP and 2 GB to the VCP. If more than 6 GB of total memory is provided, VFP memory is capped at 4 GB and the remainder is allocated to the VCP. |
| Other requirements | |
Table 2: Software Requirements for Ubuntu
| Description | Value |
|---|---|
| Operating system | NOTE: Only English localization is supported. |
| Virtualization | QEMU-KVM. The default version shipped with each supported Ubuntu or Debian release is sufficient. |
| Required packages | NOTE: Use the `sudo apt-get install <pkg-name>` command to install a package. |
| Supported deployment environments | QEMU-KVM using libvirt. EVE-NG bare-metal deployment is also supported. NOTE: vJunos-switch is not supported on EVE-NG or any other environment that launches vJunos from within a VM, because the additional level of nesting exceeds what the architecture supports. |
| vJunos-switch images | Images can be accessed from the lab download area of juniper.net at: Test Drive Juniper |
Chapter 3: Install and Deploy vJunos-switch on KVM
Install vJunos-switch on KVM
This topic explains how to install vJunos-switch in the KVM environment.
Prepare the Linux Host Servers to Install vJunos-switch
This section applies to both Ubuntu and Debian host servers.
- Install the standard package versions for your Ubuntu or Debian host server to ensure minimum hardware and software requirements are met.
- Verify that Intel VT-x technology is enabled by running the `lscpu` command and checking the 'Virtualization' field in the output. If VT-x is not enabled, consult your server documentation for the relevant BIOS settings.
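As a quick alternative check, the CPU flags in /proc/cpuinfo also indicate hardware virtualization support. This is a minimal sketch for Linux hosts; `vmx` is the Intel VT-x flag (`svm` is its AMD equivalent):

```shell
#!/bin/sh
# Check for the Intel VT-x (vmx) or AMD-V (svm) CPU flag on a Linux host.
# If neither flag is present, enable virtualization in the server BIOS.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "hardware virtualization: enabled"
else
    echo "hardware virtualization: not found"
fi
```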
Deploy and Manage vJunos-switch on KVM
This topic covers deploying and managing the vJunos-switch instance after installation.
- Bring up the vJunos-switch on KVM servers using libvirt.
- Configure CPU and memory, set up necessary bridges for connectivity, and configure the serial port.
- Use relevant XML file sections for configurations and selections.
NOTE: Download the sample XML file and vJunos-switch image from the Juniper website.
Set Up the vJunos-switch Deployment on the Host Server
This topic describes setting up the vJunos-switch deployment on the host server.
NOTE: This topic highlights selected sections of the XML file used for deploying vJunos-switch via libvirt. The complete XML file (vjunos.xml) is available for download with the VM image and associated documentation.
Ensure packages from the Minimum Software Requirements section are installed. Refer to "Minimum Hardware and Software Requirements" on page 8.
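The XML stanzas referenced in the steps below look roughly like this. This is an abridged sketch, not the complete vjunos.xml shipped with the image; the file path, bridge name, and port number are taken from the examples in this guide:

```xml
<!-- Abridged sketch of vjunos-sw1.xml stanzas referenced in the steps below -->
<domain type="kvm">
  <name>vjunos-sw1</name>
  <memory unit="GiB">5</memory>       <!-- minimum memory -->
  <vcpu>4</vcpu>                      <!-- minimum cores -->
  <devices>
    <disk type="file" device="disk">
      <source file="/root/vjunos-sw1-live.qcow2"/>   <!-- live QCOW2 copy -->
    </disk>
    <interface type="bridge">
      <source bridge="ge-000"/>       <!-- maps to Junos ge-0/0/0 -->
    </interface>
    <serial type="tcp">
      <source host="127.0.0.1" mode="bind" service="8610"/>  <!-- console -->
    </serial>
  </devices>
</domain>
```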
- Create a Linux bridge for each Gigabit Ethernet interface of the vJunos-switch you plan to use. Example: `ip link add ge-000 type bridge` and `ip link add ge-001 type bridge` for `ge-0/0/0` and `ge-0/0/1`.
- Bring up each Linux bridge. Example: `ip link set ge-000 up`.
- Make a live disk copy of the provided QCOW2 vJunos image. Example: `cp vjunos-switch-23.1R1.8.qcow2 vjunos-sw1-live.qcow2`. Create a distinct copy for each vJunos deployment to avoid modifying the original image. The live image must be writable by the deploying user (typically root).
- Specify the number of cores for vJunos by modifying the CPU stanza in the XML file. The minimum is four cores, which is sufficient for lab use cases.
- Increase the memory if needed by modifying the memory stanza. The default memory is sufficient for most applications.
- Specify the name and location of your vJunos-switch image by modifying the XML file. Example: `<source file="/root/vjunos-sw1-live.qcow2"/>`. Each vJunos VM requires a uniquely named QCOW2 image.
- Create the configuration disk image using the script: `./make-config.sh <juniper.conf> <config.qcow2>`. This attaches a second disk containing the initial configuration to the VM; the XML file references this configuration drive. NOTE: If an initial configuration is not desired, remove the configuration drive stanza from the XML file.
- Set up the management Ethernet port using the provided XML interface stanza, which connects to the VCP 'fxp0' interface. A routable IP address is required for fxp0, configured either through DHCP or the CLI. The 'eth0' in the stanza refers to the host server's external connectivity interface.
- Enable SSH to the VCP management port with the `set system services ssh root-login allow` command.
- Create a Linux bridge for each port specified in the XML file. Example stanzas for `ge-000` and `ge-001` are provided. The bridge naming convention `ge-0xy` corresponds to the Junos interface ge-0/x/y, so `ge-000` and `ge-001` map to `ge-0/0/0` and `ge-0/0/1`.
- Provide a unique serial console port number for each vJunos-switch. Example: `<source host="127.0.0.1" mode="bind" service="8610"/>`.
- The smbios stanza identifies the VM as a vJunos-switch and should not be modified.
- Create the vjunos-sw1 VM using the `virsh create vjunos-sw1.xml` command. The 'sw1' suffix indicates the first VM; subsequent VMs can be named 'sw2', 'sw3', and so on.
- Check `/etc/libvirt/qemu.conf` and uncomment the user and group lines if they are commented out.
- Restart libvirtd: `systemctl restart libvirtd`.
- Safely shut down the vJunos-switch VM using `virsh shutdown vjunos-sw1`, which sends a graceful shutdown signal.
NOTE: Do not use the “virsh destroy” command as it can corrupt the vJunos-switch VM disk. If a VM stops booting after using “virsh destroy”, create a live QCOW2 disk copy from the original QCOW2 image.
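The host-side steps above can be sketched as a short script. This is a dry run that only prints the commands it would execute (remove the `echo` prefixes to run them; the image, VM, and bridge names are the assumed examples used in this guide, and the real commands require root privileges):

```shell
#!/bin/sh
# Dry-run sketch of the host-side deployment steps for one vJunos-switch VM.
# Names below (image file, VM name, bridges) follow the examples in this guide.
IMAGE=vjunos-switch-23.1R1.8.qcow2   # original QCOW2 image (keep pristine)
LIVE=vjunos-sw1-live.qcow2           # per-VM writable live copy
XML=vjunos-sw1.xml                   # libvirt domain definition

# Create and bring up a Linux bridge per vJunos interface.
for br in ge-000 ge-001; do
    echo ip link add "$br" type bridge
    echo ip link set "$br" up
done

# Make a live, writable copy of the image for this VM.
echo cp "$IMAGE" "$LIVE"

# Build the optional initial-configuration disk.
echo ./make-config.sh juniper.conf config.qcow2

# Define and start the VM with libvirt.
echo virsh create "$XML"
```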
Verify the vJunos-switch VM
This section describes how to verify if the vJunos-switch is running.
- Verify that the vJunos-switch is up and running using `virsh list`. The command displays the VM name and state (running, idle, paused, shutdown, crashed, or dying).
- Connect to the serial console of the VCP using `telnet localhost <portnum>`, where `<portnum>` is the port specified in the XML configuration file.
- Disable auto image upgrade by setting the root password and committing the change: `set system root-authentication plain-text-password`, then `delete chassis auto-image-upgrade`.
- Verify that the `ge` interfaces specified in the vJunos-switch XML file are up and available using the `show interfaces terse` command. For example, if `ge-000` and `ge-001` are specified, their corresponding interfaces `ge-0/0/0` and `ge-0/0/1` should be in the 'up' state.
- Verify that a vnet interface is configured under each corresponding `ge` bridge using the `brctl show <bridge_name>` command. Example: `brctl show ge-000`.
Chapter 4: Configure vJunos-switch on KVM
Connect to vJunos-switch
Connect to vJunos-switch by telnetting to the serial console number specified in the XML file. Refer to "Deploy and Manage vJunos-switch on KVM" on page 11 for details.
You can also SSH to the vJunos-switch VCP.
Configure Active Ports
Specify the number of active ports for vJunos-switch to match the number of NICs added to the VFP VM. The default is 10 ports, but you can set any value between 1 and 96. Use the command: `set chassis fpc 0 pic 0 number-of-ports 96`.
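For example, from the Junos CLI (an illustrative session; 24 is an arbitrary port count chosen for this sketch):

```
configure
set chassis fpc 0 pic 0 number-of-ports 24
commit
```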
Interface Naming
vJunos-switch supports only Gigabit Ethernet (ge) interfaces. Attempting to change interface names to 10-Gigabit Ethernet (xe) or 100-Gigabit Ethernet (et) will still result in them being displayed as "ge" in configuration and interface commands.
Configure the Media MTU
Configure the media maximum transmission unit (MTU) in the range 256 through 9192 bytes; values outside this range are rejected. Configure the MTU using the mtu statement at the [edit interfaces interface-name] hierarchy level. Example: `set interfaces ge-0/0/0 mtu 9192`. The maximum supported MTU value is 9192 bytes.
Troubleshoot
Troubleshoot vJunos-switch
This topic provides information for verifying vJunos-switch configuration and troubleshooting issues.
Verify That the VM is Running
Verify that the vJunos-switch is running after installation using the `virsh list` command.
You can stop and start VMs using the `virsh shutdown` and `virsh start` commands, respectively.
Verify CPU Information
Use the `lscpu` command on the host server to display CPU information, including the total number of CPUs, cores per socket, and CPU sockets.
View Log Files
View system logs using the `show log` command on the vJunos-switch instance. For example, to view chassis daemon logs, run `show log chassisd`.
Collect Core Dumps
Use the `show system core-dumps` command to view collected core files. These can be transferred to an external server for analysis via the fxp0 management interface.