
CISCO Linux KVM Nexus Dashboard


Details

  • libvirt version: 4.5.0-23.el7_7.1.x86_64
  • Nexus Dashboard version: 8.0.0

Prerequisites and Guidelines

Before you proceed with deploying the Nexus Dashboard cluster in Linux KVM, you must:

  • Ensure that the KVM form factor supports your scale and services requirements.
  • Scale and services support and co-hosting vary based on the cluster form factor. You can use the Nexus Dashboard Capacity Planning tool to verify that the virtual form factor satisfies your deployment requirements.
  • Review and complete the general prerequisites described in Prerequisites: Nexus Dashboard.
  • Review and complete any additional prerequisites described in the Release Notes for the services you plan to deploy.
  • Ensure that the CPU family used for the Nexus Dashboard VMs supports the AVX instruction set.
  • Ensure you have enough system resources:

Table 1: Deployment Requirements

Requirements

  • KVM deployments are supported for Nexus Dashboard Fabric Controller services only.
  • You must deploy on CentOS 7.9 or Red Hat Enterprise Linux 8.6.
  • You must have the supported versions of the kernel and KVM:
    • For CentOS 7.9, kernel version 3.10.0-957.el7.x86_64 and KVM version libvirt-4.5.0-23.el7_7.1.x86_64
    • For RHEL 8.6, kernel version 4.18.0-372.9.1.el8.x86_64 and KVM version libvirt 8.0.0
  • 16 vCPUs
  • 64 GB of RAM
  • 550 GB disk
  • Each node requires a dedicated disk partition
  • The disk must have I/O latency of 20ms or less.

To verify the I/O latency:

  1. Create a test directory.
    For example, test-data.
  2. Run the following command:
    # fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data --size=22m --bs=2300 --name=mytest
  3. After the command is executed, confirm that the 99.00th=[<value>] in the
    fsync/fdatasync/sync_file_range section is below 20ms.
    • We recommend that each Nexus Dashboard node is deployed in a different KVM hypervisor.
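The latency check above can be scripted. The sketch below parses the 99.00th percentile from the fsync section of fio's output and compares it against the 20 ms requirement. The sample line stands in for real fio output, and the parsing assumes fio's human-readable format with microsecond units in the brackets; adjust the threshold if your fio build reports a different unit.

```shell
#!/usr/bin/env bash
# Check fio's fsync 99.00th percentile against the 20 ms requirement.
# Real run (from the step above):
#   fio --rw=write --ioengine=sync --fdatasync=1 --directory=test-data \
#       --size=22m --bs=2300 --name=mytest
# The sample line below stands in for the fsync percentile line of that output.
sample='     | 99.00th=[ 1844], 99.50th=[ 2114], 99.90th=[ 2278],'

# Extract the bracketed 99.00th value; units are assumed to be usec,
# so the 20 ms threshold is 20000.
p99=$(printf '%s\n' "$sample" | sed -n 's/.*99\.00th=\[ *\([0-9]*\)\].*/\1/p')

if [ "$p99" -le 20000 ]; then
  echo "OK: 99.00th fsync latency ${p99}us is within 20ms"
else
  echo "FAIL: 99.00th fsync latency ${p99}us exceeds 20ms"
fi
```

To use it against a real run, pipe the fio output through the same sed expression instead of the sample line.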

Deploying Nexus Dashboard in Linux KVM

This section describes how to deploy the Cisco Nexus Dashboard cluster in Linux KVM.

Before you begin

Ensure that you meet the requirements and guidelines described in Prerequisites and Guidelines.

Procedure

Step 1
Download the Cisco Nexus Dashboard image.

Step 2
Copy the image to the Linux KVM servers where you will host the nodes.
You can use scp to copy the image, for example: # scp nd-dk9.<version>.qcow2 root@<kvm-host-ip>:/home/nd-base
The following steps assume that you copied the image into the /home/nd-base directory.
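Because each node is recommended to run on a different KVM hypervisor, the copy is often repeated per host. A dry-run sketch of that loop, assuming hypothetical hostnames kvm-host1 through kvm-host3 and leaving the image version as a placeholder:

```shell
#!/usr/bin/env bash
# Copy the base image to each KVM host. The hostnames and the <version>
# placeholder in the filename are assumptions; RUN=echo makes this a dry
# run -- clear it (RUN=) to actually copy.
IMAGE='nd-dk9.<version>.qcow2'
RUN=echo

for host in kvm-host1 kvm-host2 kvm-host3; do
  $RUN ssh root@"$host" mkdir -p /home/nd-base
  $RUN scp "$IMAGE" root@"$host":/home/nd-base/
done
```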

Step 3
Create the required disk images for the first node.
You will create a snapshot of the base qcow2 image you downloaded and use the snapshots as the disk images for the nodes’ VMs. You will also need to create a second disk image for each node.

  • Log in to your KVM host as the root user.
  • Create a directory for the node's snapshot.
    The following steps assume that you create the snapshot in the /home/nd-node1 directory.
    # mkdir -p /home/nd-node1/
    # cd /home/nd-node1
  • Create the snapshot.
    In the following command, replace /home/nd-base/nd-dk9.<version>.qcow2 with the location of the base image you copied in the previous step.
    # qemu-img create -f qcow2 -b /home/nd-base/nd-dk9.<version>.qcow2 /home/nd-node1/nd-node1-disk1.qcow2

Note
If you are deploying in RHEL 8.6, you may need to provide an additional parameter to define the destination snapshot’s format as well. In that case, update the above command to the following:
# qemu-img create -f qcow2 -b /home/nd-base/nd-dk9.2.1.1a.qcow2 /home/nd-node1/nd-node1-disk1.qcow2 -F qcow2

  • Create the additional disk image for the node.
    Each node requires two disks: a snapshot of the base Nexus Dashboard qcow2 image and a second 500GB disk.
    # qemu-img create -f qcow2 /home/nd-node1/nd-node1-disk2.qcow2 500G

Step 4
Repeat the previous step to create the disk images for the second and third nodes. Before you proceed to the next step, you should have the following:

  • For the first node, a /home/nd-node1/ directory with two disk images:
    • /home/nd-node1/nd-node1-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node1/nd-node1-disk2.qcow2, which is a new 500GB disk you created.
  • For the second node, a /home/nd-node2/ directory with two disk images:
    • /home/nd-node2/nd-node2-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node2/nd-node2-disk2.qcow2, which is a new 500GB disk you created.
  • For the third node, a /home/nd-node3/ directory with two disk images:
    • /home/nd-node3/nd-node3-disk1.qcow2, which is a snapshot of the base qcow2 image you downloaded in Step 1.
    • /home/nd-node3/nd-node3-disk2.qcow2, which is a new 500GB disk you created.
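Steps 3 and 4 can be collapsed into a single loop. A dry-run sketch, where the <version> placeholder in the base-image path is left as in the guide and RUN=echo only prints the commands:

```shell
#!/usr/bin/env bash
# Create the snapshot and data-disk images for all three nodes in one pass.
# RUN=echo makes this a dry run -- clear it (RUN=) on the KVM host to
# actually create the images.
BASE='/home/nd-base/nd-dk9.<version>.qcow2'
RUN=echo

for i in 1 2 3; do
  dir=/home/nd-node$i
  $RUN mkdir -p "$dir"
  # -F qcow2 declares the backing file's format (required on RHEL 8.6)
  $RUN qemu-img create -f qcow2 -b "$BASE" -F qcow2 "$dir/nd-node$i-disk1.qcow2"
  # second, empty 500GB data disk
  $RUN qemu-img create -f qcow2 "$dir/nd-node$i-disk2.qcow2" 500G
done
```

If each node lives on a different hypervisor, run a single iteration of the loop body on each host instead.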

Step 5
Create the first node’s VM.

  • Open the KVM console and click New Virtual Machine.
    You can open the KVM console from the command line using the virt-manager command.
    If your Linux KVM environment does not have a desktop GUI, run the following command instead and proceed to step 6.
    virt-install --import --name <node-name> --memory 65536 --vcpus 16 --os-type generic --disk path=/path/to/disk1/nd-node1-d1.qcow2,format=qcow2,bus=virtio --disk path=/path/to/disk2/nd-node1-d2.qcow2,format=qcow2,bus=virtio --network bridge=<mgmt-bridge-name>,model=virtio --network bridge=<data-bridge-name>,model=virtio --console pty,target_type=serial --noautoconsole --autostart
  • In the New VM screen, choose Import existing disk image option and click Forward.
  • In the Provide existing storage path field, click Browse and select the nd-node1-disk1.qcow2 file.
    We recommend keeping each node's disk image in its own disk partition.
  • Choose Generic for the OS type and Version, then click Forward.
  • Specify 64GB memory and 16 CPUs, then click Forward.
  • Enter the Name of the virtual machine, for example nd-node1, and check the Customize configuration before install option. Then click Finish.

Note
You must select the Customize configuration before install checkbox to be able to make the disk and network card customizations required for the node.
The VM details window opens.

In the VM details window, change the NIC’s device model:

  • Select NIC <mac>.
  • For Device model, choose e1000.
  • For Network Source, choose the bridge device and provide the name of the “mgmt” bridge.

Note
Creating the bridge devices is outside the scope of this guide and depends on the operating system distribution and version. Consult the operating system documentation, such as Red Hat's Configuring a network bridge, for more information.

In the VM details window, add the second NIC:

  • Click Add Hardware.
  • In the Add New Virtual Hardware screen, select Network.
  • For Network Source, choose the bridge device and provide the name of the created “data” bridge.
  • Leave the default Mac address value.
  • For Device model, choose e1000.

In the VM details window, add the second disk image:

  • Click Add Hardware.
  • In the Add New Virtual Hardware screen, select Storage.
  • For the disk’s bus driver, choose IDE.
  • Select Select or create custom storage, click Manage, and select the nd-node1-disk2.qcow2 file you created.
  • Click Finish to add the second disk.

Note
Ensure that you enable the Copy host CPU configuration option in the Virtual Machine Manager UI.
Finally, click Begin Installation to finish creating the node's VM.

Step 6
Repeat the previous steps to deploy the second and third nodes, then start all the VMs.
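For GUI-less hosts, the repetition can be sketched as a loop over the virt-install command from Step 5. The bridge names nd-mgmt and nd-data are placeholders for bridges you created, and RUN=echo keeps this a dry run; note the guide recommends one node per hypervisor, in which case you would run a single iteration per host.

```shell
#!/usr/bin/env bash
# Headless deployment of the three node VMs, mirroring the virt-install
# flags shown in Step 5. Bridge names are placeholders; RUN=echo makes
# this a dry run -- clear it (RUN=) to actually create the VMs.
RUN=echo

for i in 1 2 3; do
  $RUN virt-install --import --name nd-node$i \
    --memory 65536 --vcpus 16 --os-type generic \
    --disk path=/home/nd-node$i/nd-node$i-disk1.qcow2,format=qcow2,bus=virtio \
    --disk path=/home/nd-node$i/nd-node$i-disk2.qcow2,format=qcow2,bus=virtio \
    --network bridge=nd-mgmt,model=virtio \
    --network bridge=nd-data,model=virtio \
    --console pty,target_type=serial --noautoconsole --autostart
done
```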

Note
If you are deploying a single-node cluster, you can skip this step.

Step 7
Open one of the node’s console and configure the node’s basic information. If your Linux KVM environment does not have a desktop GUI, run the virsh console <node-name> command to access the console of the node.

  • Press any key to begin initial setup.
    • You will be prompted to run the first-time setup utility:
    • [ OK ] Started atomix-boot-setup.
    • Starting Initial cloud-init job (pre-networking)…
    • Starting logrotate…
    • Starting logwatch…
    • Starting keyhole…
    • [ OK ] Started keyhole.
    • [ OK ] Started logrotate.
    • [ OK ] Started logwatch.
    • Press any key to run first-boot setup on this console…
  • Enter and confirm the admin password.
    • This password will be used for the rescue-user SSH login as well as the initial GUI password.
      Note
      You must provide the same password for all nodes or the cluster creation will fail.
    • Admin Password:
    • Reenter Admin Password:
    • Enter the management network information.
    • Management Network:
    • IP Address/Mask: 192.168.9.172/24
    • Gateway: 192.168.9.1
  • For the first node only, designate it as the “Cluster Leader”.
    • You will log in to the cluster leader node to finish the configuration and complete the cluster creation.
    • Is this the cluster leader?: y
  • Review and confirm the entered information.
    • You will be asked if you want to change the entered information. If all the fields are correct, choose n to proceed.
    • If you want to change any of the entered information, enter y to re-start the basic configuration script.
    • Please review the config
    • Management network:
    • Gateway: 192.168.9.1
    • IP Address/Mask: 192.168.9.172/24
    • Cluster leader: yes
    • Re-enter config? (y/N): n

Step 8
Repeat the previous step to configure the initial information for the second and third nodes.
You do not need to wait for the first node's configuration to complete; you can begin configuring the other two nodes simultaneously.

Note
You must provide the same password for all nodes or the cluster creation will fail.
The steps for deploying the second and third nodes are identical, with the only exception being that you must indicate that they are not the Cluster Leader.

Step 9
Wait for the initial bootstrap process to complete on all nodes.
After you provide and confirm the management network information, the initial setup on the first node (Cluster Leader) configures the networking and brings up the UI, which you will use to add the other two nodes and complete the cluster deployment.
Please wait for system to boot: [#########################] 100%
System up, please wait for UI to be online.
System UI online, please login to https://192.168.9.172 to continue.

Step 10
Open your browser and navigate to https://<node-mgmt-ip> to open the GUI.
The rest of the configuration workflow takes place from one of the nodes' GUI. You can choose any of the nodes you deployed to begin the bootstrap process; you do not need to log in to or configure the other two nodes directly.
Enter the password you provided in the previous step and click Login.


Step 11
Provide the Cluster Details.
In the Cluster Details screen of the Cluster Bringup wizard, provide the following information:


  • Provide the Cluster Name for this Nexus Dashboard cluster.
    The cluster name must follow the RFC-1123 requirements.
  • (Optional) If you want to enable IPv6 functionality for the cluster, check the Enable IPv6 checkbox.
  • Click +Add DNS Provider to add one or more DNS servers.
    After you enter the information, click the checkmark icon to save it.
  • (Optional) Click +Add DNS Search Domain to add a search domain.

After you enter the information, click the checkmark icon to save it.

  • (Optional) If you want to enable NTP server authentication, enable the NTP Authentication checkbox and click Add NTP Key.
    In the additional fields, provide the following information:
    • NTP Key – a cryptographic key that is used to authenticate the NTP traffic between the Nexus Dashboard and the NTP server(s). You will define the NTP servers in the following step, and multiple NTP servers can use the same NTP key.
    • Key ID – each NTP key must be assigned a unique key ID, which is used to identify the appropriate key to use when verifying the NTP packet.
    • Auth Type – this release supports MD5, SHA, and AES128CMAC authentication types.
    • Choose whether this key is Trusted. Untrusted keys cannot be used for NTP authentication.

Note
After you enter the information, click the checkmark icon to save it.
For the complete list of NTP authentication requirements and guidelines, see Prerequisites and Guidelines.

  • Click +Add NTP Host Name/IP Address to add one or more NTP servers.
    In the additional fields, provide the following information:
  • NTP Host – you must provide an IP address; fully qualified domain names (FQDN) are not supported.
  • Key ID – if you want to enable NTP authentication for this server, provide the key ID of the NTP key you defined in the previous step.
    If NTP authentication is disabled, this field is grayed out.
  • Choose whether this NTP server is Preferred.
    After you enter the information, click the checkmark icon to save it.

Note
If the node into which you are logged in is configured with only an IPv4 address, but you have checked Enable IPv6 in a previous step and provided an IPv6 address for an NTP server, you will get a validation error.

This is because the node does not have an IPv6 address yet (you will provide it in the next step) and is unable to connect to an IPv6 address of the NTP server.
In this case, simply finish providing the other required information as described in the following steps and click Next to proceed to the next screen where you will provide IPv6 addresses for the nodes.
If you want to provide additional NTP servers, click +Add NTP Host again and repeat this substep.

  • Provide a Proxy Server, then click Validate.
    For clusters that do not have direct connectivity to the Cisco cloud, we recommend configuring a proxy server to establish the connectivity. This allows you to mitigate risk from exposure to non-conformant hardware and software in your fabrics.
  • You can also provide one or more IP addresses for which communication should bypass the proxy by clicking +Add Ignore Host.
    The proxy server must allow the following URLs:
  • If you want to skip proxy configuration, click Skip Proxy.
  • (Optional) If your proxy server requires authentication, enable Authentication required for Proxy, provide the login credentials, then click Validate.
  • (Optional) Expand the Advanced Settings category and change the settings if required.
    Under advanced settings, you can configure the following:
  • Provide custom App Network and Service Network.
    The application overlay network defines the address space used by the application services running in the Nexus Dashboard. The field is pre-populated with the default 172.17.0.1/16 value.
    The services network is an internal network used by the Nexus Dashboard and its processes. The field is pre-populated with the default 100.80.0.0/16 value.
    If you checked the Enable IPv6 option earlier, you can also define the IPv6 subnets for the App and Service networks.
    The Application and Service networks are described in the Prerequisites and Guidelines section earlier in this document.
  • Click Next to proceed.

Step 12
In the Node Details screen, update the first node's information.
You have defined the Management network and IP address for the node into which you are currently logged in during the initial node configuration in earlier steps, but you must also provide the Data network information for the node before you can proceed with adding the other primary nodes and creating the cluster.

  • Click the Edit button next to the first node.
    The node's Serial Number, Management Network information, and Type are automatically populated, but you must provide the other information.
  • Provide the Name for the node.
    The node’s Name will be set as its hostname, so it must follow the RFC-1123 requirements.
  • From the Type dropdown, select Primary.
    The first 3 nodes of the cluster must be set to Primary. You will add secondary nodes in a later step, if required, to enable cohosting of services and higher scale.
  • In the Data Network area, provide the node’s Data Network information.
    You must provide the data network IP address, netmask, and gateway. Optionally, you can also provide the VLAN ID for the network. For most deployments, you can leave the VLAN ID field blank.
    If you had enabled IPv6 functionality in a previous screen, you must also provide the IPv6 address, netmask, and gateway.
    Note
    If you want to provide IPv6 information, you must do it during cluster bootstrap process. To change IP configuration later, you would need to redeploy the cluster.
    All nodes in the cluster must be configured with either only IPv4, only IPv6, or dual stack IPv4/IPv6.
  • (Optional) If your cluster is deployed in L3 HA mode, Enable BGP for the data network.
    BGP configuration is required for the Persistent IPs feature used by some services, such as Insights and Fabric Controller. This feature is described in more detail in Prerequisites and Guidelines and the “Persistent IP Addresses” sections of the Cisco Nexus Dashboard User Guide.
    Note
    You can enable BGP at this time or in the Nexus Dashboard GUI after the cluster is deployed.
    If you choose to enable BGP, you must also provide the following information:
  • ASN (BGP Autonomous System Number) of this node.
    You can configure the same ASN for all nodes or a different ASN per node.
  • For pure IPv6, the Router ID of this node.
    The router ID must be an IPv4 address, for example 1.1.1.1.
  • BGP Peer Details, which includes the peer’s IPv4 or IPv6 address and peer’s ASN.
  • Click Save to save the changes.

Step 13
In the Node Details screen, click Add Node to add the second node to the cluster.
If you are deploying a single-node cluster, skip this step.

  • In the Deployment Details area, provide the Management IP Address and Password for the second node. You defined the management network information and the password during the initial node configuration steps.
  • Click Validate to verify connectivity to the node.
    The node’s Serial Number and the Management Network information are automatically populated after connectivity is validated.
  • Provide the Name for the node.
  • From the Type dropdown, select Primary.
    The first 3 nodes of the cluster must be set to Primary. You will add secondary nodes in a later step, if required, to enable cohosting of services and higher scale.
  • In the Data Network area, provide the node’s Data Network information.
    You must provide the data network IP address, netmask, and gateway. Optionally, you can also provide the VLAN ID for the network. For most deployments, you can leave the VLAN ID field blank.
    If you had enabled IPv6 functionality in a previous screen, you must also provide the IPv6 address, netmask, and gateway.
    Note
    If you want to provide IPv6 information, you must do it during cluster bootstrap process. To change IP configuration later, you would need to redeploy the cluster.
    All nodes in the cluster must be configured with either only IPv4, only IPv6, or dual stack IPv4/IPv6.
  • (Optional) If your cluster is deployed in L3 HA mode, Enable BGP for the data network.
    BGP configuration is required for the Persistent IPs feature used by some services, such as Insights and Fabric Controller. This feature is described in more detail in Prerequisites and Guidelines and the “Persistent IP Addresses” sections of the Cisco Nexus Dashboard User Guide.
    Note
    You can enable BGP at this time or in the Nexus Dashboard GUI after the cluster is deployed.
    If you choose to enable BGP, you must also provide the following information:
  • ASN (BGP Autonomous System Number) of this node.
    You can configure the same ASN for all nodes or a different ASN per node.
  • For pure IPv6, the Router ID of this node.
    The router ID must be an IPv4 address, for example 1.1.1.1.
  • BGP Peer Details, which includes the peer’s IPv4 or IPv6 address and peer’s ASN.
  • Click Save to save the changes.
  • Repeat this step for the final (third) primary node of the cluster.

Step 14
In the Node Details page, verify the provided information and click Next to continue.
Step 15
Choose the Deployment Mode for the cluster.

  • Choose the services you want to enable.
    Prior to release 3.1(1), you had to download and install individual services after the initial cluster deployment was completed. Now you can choose to enable the services during the initial installation.
    Note
    Depending on the number of nodes in the cluster, some services or cohosting scenarios may not be supported. If you are unable to choose the desired number of services, click Back and ensure that you have provided enough secondary nodes in the previous step.
  • Click Add Persistent Service IPs/Pools to provide one or more persistent IPs required by Insights or Fabric Controller services.
    For more information about persistent IPs, see the Prerequisites and Guidelines section.
  • Click Next to proceed.

Step 16
In the Summary screen, review and verify the configuration information, then click Save to build the cluster.
During the node bootstrap and cluster bring-up, the overall progress as well as each node's individual progress is displayed in the UI. If you do not see the bootstrap progress, manually refresh the page in your browser to update the status.
It may take up to 30 minutes for the cluster to form and all the services to start. When the cluster configuration is complete, reload the Nexus Dashboard GUI page.

Step 17
Verify that the cluster is healthy.
It may take up to 30 minutes for the cluster to form and all the services to start.
After the cluster becomes available, you can access it by browsing to any one of your nodes’ management IP addresses.
The default password for the admin user is the same as the rescue-user password you chose for the first node. During this time, the UI will display a banner at the top stating “Service Installation is in progress, Nexus Dashboard configuration tasks are currently disabled”.

After the whole cluster is deployed and all the services are started, you can check the Overview page to ensure the cluster is healthy:


Alternatively, you can log in to any node via SSH as the rescue-user, using the password you provided during node deployment, and use the acs health command to check the status:

  • While the cluster is converging, you may see the following outputs:
    $ acs health
    k8s install is in-progress
    $ acs health
    k8s services not in desired state - […]
    $ acs health
    k8s: Etcd cluster is not ready
  • When the cluster is up and running, the following output will be displayed:
    $ acs health
    All components are healthy
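The convergence check above can be polled in a loop. In this sketch the check() stub stands in for running acs health on a node; the commented-out SSH variant, its host address, and the retry interval are placeholders you would adapt to your deployment.

```shell
#!/usr/bin/env bash
# Poll cluster health until "All components are healthy". The check() stub
# stands in for running `acs health` on a node (directly as rescue-user, or
# over SSH as in the commented line); host and interval are assumptions.
check() { echo "All components are healthy"; }           # stub for this sketch
# check() { ssh rescue-user@192.168.9.172 acs health; }  # real check

for attempt in $(seq 1 60); do
  status=$(check)
  echo "attempt $attempt: $status"
  case "$status" in
    *"All components are healthy"*) break ;;
  esac
  sleep 30   # the guide notes bring-up can take up to 30 minutes
done
```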

Note
In some situations, you might power cycle a node (power it off and then back on) and find it stuck in this stage:
deploy base system services
This is due to an issue with etcd on the node after a reboot of the pND (Physical Nexus Dashboard) cluster. To resolve the issue, enter the acs reboot clean command on the affected node.

Step 18
After you have deployed your Nexus Dashboard and services, you can configure each service as described in its configuration and operations articles.

  • For Fabric Controller, see the NDFC persona configuration white paper and documentation library.
  • For Orchestrator, see the documentation page.
  • For Insights, see the documentation library.

FAQ

What are the deployment requirements for Nexus Dashboard in Linux KVM?

The deployment requires libvirt version 4.5.0-23.el7_7.1.x86_64 and Nexus Dashboard version 8.0.0.

How can I verify I/O latency for the deployment?

To verify I/O latency, create a test directory, run the specified command using fio, and confirm that the latency is below 20ms.

