Red Hat Enterprise Linux (RHEL) Driver Release Notes
RHEL 8.3

Table of Contents
1 Overview
   Supported HCAs Firmware Versions
   SR-IOV Support
   RoCE Support
   VXLAN Support
   DPDK Support
   Open vSwitch Hardware Offloads Support
2 Changes and New Features
3 Certifications
   RHEL NIC Qualification
4 Known Inbox-Related Issues

List of Tables
Table 1: Supported Uplinks to Servers
Table 2: Supported HCAs Firmware Versions
Table 3: SR-IOV Support
Table 4: RoCE Support
Table 5: VXLAN Support
Table 6: DPDK Support
Table 7: Open vSwitch Hardware Offloads Support
Table 8: Changes and New Features

1 Overview

These are the release notes for the Red Hat Enterprise Linux (RHEL) 8.3 inbox drivers. This document provides information on the drivers for Mellanox Technologies ConnectX®-based adapter cards in a Red Hat Enterprise Linux (RHEL) 8.3 Inbox Driver environment.
This version supports the following uplinks to servers:

Table 1: Supported Uplinks to Servers

HCAs | Uplink Speed | Supported Driver
ConnectX®-6 Dx | Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, 100GigE and 200GigE | mlx5_core (includes the ETH functionality as well), mlx5_ib
ConnectX®-6 | InfiniBand: SDR, EDR, HDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE and 100GigE | mlx5_core (includes the ETH functionality as well), mlx5_ib
BlueField® (a) | Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, and 100GigE | mlx5_core (includes the ETH functionality as well)
Innova™ IPsec EN | Ethernet: 10GigE, 40GigE | mlx5_core (includes the ETH functionality as well)
ConnectX®-5 | InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, 56GigE (b), and 100GigE | mlx5_core (includes the ETH functionality as well), mlx5_ib
ConnectX®-4 | InfiniBand: SDR, QDR, FDR, FDR10, EDR; Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, 50GigE, 56GigE (b), and 100GigE | mlx5_core (includes the ETH functionality as well), mlx5_ib
ConnectX®-4 Lx | Ethernet: 1GigE, 10GigE, 25GigE, 40GigE, and 50GigE | mlx5_core (includes the ETH functionality as well)
ConnectX®-3 / ConnectX®-3 Pro | InfiniBand: SDR, QDR, FDR10, FDR; Ethernet: 10GigE, 40GigE and 56GigE (b) | mlx4_core, mlx4_en, mlx4_ib
Connect-IB® | InfiniBand: SDR, QDR, FDR10, FDR | mlx5_core, mlx5_ib

a. BlueField is supported as a standard ConnectX-5 Ethernet NIC only.
b. 56GbE is a Mellanox proprietary link speed and can be achieved while connecting a Mellanox adapter card to a Mellanox SX10XX series switch, or while connecting one Mellanox adapter card to another.

Supported HCAs Firmware Versions

The Red Hat Enterprise Linux (RHEL) 8.3 driver supports the following Mellanox network adapter card firmware versions:

Table 2: Supported HCAs Firmware Versions

HCA | Recommended Firmware Rev.
ConnectX®-6 Dx | 22.28.2006
ConnectX®-6 | 20.28.2006
BlueField® (Technical Preview) | 18.28.2006
ConnectX®-5 | 16.28.2006
ConnectX®-4 Lx | 14.28.2006
ConnectX®-4 | 12.28.2006
ConnectX®-3 Pro | 2.42.5000
ConnectX®-3 | 2.42.5000
Innova™ IPsec EN | 14.22.1002
Connect-IB® | 10.16.1002

SR-IOV Support

Table 3: SR-IOV Support

Driver | Support
mlx4_core, mlx4_en, mlx4_ib | Eth; InfiniBand: Technical Preview
mlx5_core (includes ETH functionality), mlx5_ib | Eth; InfiniBand: Technical Preview

Note: Running InfiniBand (IB) SR-IOV requires IB virtualization support on the OpenSM (Subnet Manager). This capability is supported only on the OpenSM provided by Mellanox, which is not available inbox. It can be achieved by running the highest-priority OpenSM on a Mellanox switch in an IB fabric; the switch SM supports this feature once the virt flag is enabled (# ib sm virt enable). This capability is not tested in the inbox environment and is considered a Technical Preview.
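On the Ethernet side, VFs are created through the standard kernel sysfs interface (the same path referenced in the Known Inbox-Related Issues section below). A minimal sketch, assuming a hypothetical PF netdev named ens1f0:

# cat /sys/class/net/ens1f0/device/sriov_totalvfs     # query how many VFs the device can expose
# echo 4 > /sys/class/net/ens1f0/device/sriov_numvfs  # create 4 VFs
# echo 0 > /sys/class/net/ens1f0/device/sriov_numvfs  # remove all VFs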
RoCE Support

Table 4: RoCE Support

Driver | Support
mlx4 - RoCE v1/v2 | Yes
mlx5 - RoCE v1/v2 | Yes

VXLAN Support

Table 5: VXLAN Support

Driver | Support
mlx4 - VXLAN offload | Yes
mlx5 - VXLAN offload | Yes (without RSS)

DPDK Support

Table 6: DPDK Support

Driver | Support
mlx4 | Mellanox PMD is enabled by default.
mlx5 | Mellanox PMD is enabled by default.

Open vSwitch Hardware Offloads Support

Table 7: Open vSwitch Hardware Offloads Support

Driver | Support
mlx4 | No
mlx5 | Yes

2 Changes and New Features

Table 8: Changes and New Features

mlx5 | Connection Tracking Offload
   Added support for offloading TC filters containing connection tracking matches and actions (see the tc sketch following this table).
mlx5 | Kernel Software Steering
   Added kernel support for software-managed flow steering, in which steering rules are written by the driver rather than through firmware commands.
mlx5 | Remote Mirroring
   Added kernel support for OVS remote mirroring to allow hardware traffic mirroring to multiple ports.
mlx5 | Rx Reporter in Devlink Health
   Added support for monitoring and recovering from errors that occur on the Rx queue, such as CQE errors and timeouts (see the devlink health sketch following this table).
mlx5 | RoCE Disablement via devlink
   Added the option to disable RoCE traffic handling. Traffic over UDP port 4791 is handled as RoCE traffic when RoCE is enabled; when RoCE is disabled, there is no GID table, only the Raw Ethernet QP type is supported, and traffic over UDP port 4791 is forwarded as regular Ethernet traffic. Use the devlink utility to disable RoCE support (see the sketch following this table).
mlx5 | kTLS TX Support for ConnectX-6 Dx
   Added support for hardware offload encryption of kTLS TX traffic to improve performance (see the ethtool sketch following this table).
mlx5 | Devlink Health State Notifications
   Added support for receiving notifications on devlink health state changes when an error is reported or recovered by one of the reporters. These notifications can be seen using the userspace devlink monitor command.
mlx5 | Flow Autogroup Default via Devlink
   Added a devlink parameter to control the number of large groups in an auto-grouped flow table. The default value is 15, and the range is between 1 and 1024 (see the devlink sketch following this table).
mlx5 | General Driver Update
   Aligned the mlx5 driver with the Linux upstream kernel driver version 5.6.
mlx4 | General Driver Update
   Aligned the mlx4 driver with the Linux upstream kernel driver version 5.6.
rdma-core | RDMA user-space
   Updated the RDMA package to version 29.0-3.el8.
mstflint | mstflint user-space
   Updated the mstflint package to version 4.14.0-1.el8.
libvma | VMA
   Updated the VMA package to version 9.0.2-1.el8.
ucx | UCX
   Updated the UCX package to version 1.8.0-1.el8.
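To illustrate the connection tracking offload entry above: TC flower filters that match conntrack state and use the ct action can now be offloaded. A minimal sketch, assuming hypothetical representor netdevs enp8s0f0_0 and enp8s0f0_1 in switchdev mode; real deployments (for example, OVS) install equivalent rules with deployment-specific matches:

# tc filter add dev enp8s0f0_0 ingress protocol ip prio 1 chain 0 flower ct_state -trk action ct pipe action goto chain 1
# tc filter add dev enp8s0f0_0 ingress protocol ip prio 1 chain 1 flower ct_state +trk+est action mirred egress redirect dev enp8s0f0_1

The first rule sends untracked packets through conntrack and continues to chain 1; the second forwards packets belonging to established connections.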
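To illustrate the RoCE disablement entry: the generic devlink parameter enable_roce is set per device in driverinit configuration mode, so it takes effect after a driver re-initialization. A minimal sketch, assuming a hypothetical device at PCI address 0000:82:00.0:

# devlink dev param set pci/0000:82:00.0 name enable_roce value false cmode driverinit
# devlink dev reload pci/0000:82:00.0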
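To illustrate the kTLS TX entry: hardware TLS transmit offload is exposed as a netdev feature that can be toggled with ethtool. A minimal sketch, assuming a hypothetical ConnectX-6 Dx netdev named ens1f0; the application must also use kernel TLS on its sockets (setsockopt with SOL_TLS) for the offload to apply:

# ethtool -K ens1f0 tls-hw-tx-offload on
# ethtool -k ens1f0 | grep tls    # verify the resulting feature state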
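To illustrate the Rx reporter and health notification entries: reporters and their state can be inspected and watched with the devlink utility. A sketch, assuming a hypothetical device at 0000:82:00.0 and the upstream mlx5 reporter name "rx":

# devlink health show                                 # list all reporters and their status
# devlink health show pci/0000:82:00.0 reporter rx    # inspect the Rx reporter
# devlink monitor health                              # stream health state-change notifications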
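To illustrate the flow autogroup entry: in the upstream kernel this is exposed as the driver-specific devlink parameter fdb_large_groups in driverinit configuration mode (the parameter name comes from the upstream kernel devlink documentation, not from these release notes). A minimal sketch, assuming a hypothetical device at 0000:82:00.0:

# devlink dev param set pci/0000:82:00.0 name fdb_large_groups value 20 cmode driverinit
# devlink dev reload pci/0000:82:00.0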
3 Certifications

RHEL NIC Qualification

RHEL 8.0 NIC qualification passed successfully, as described in:
https://github.com/ctrautma/RHEL_NIC_QUALIFICATION/tree/8.0-Beta
Covering: ConnectX-4 Lx and ConnectX-5 adapter cards; OVS functional, OVS non-offload, OVS-offload, and OVS-DPDK.

4 Known Inbox-Related Issues

The following describes known issues in this release and possible workarounds.

Internal Ref. 2345747 | Bugzilla Ref. 1890261
Description: The RHEL installer fails to start when InfiniBand network interfaces are configured using installer boot options.
Workaround: Create new installation media that includes the updated Anaconda and NetworkManager packages, using the Lorax tool. For more information on how to do so, please see here.
Keywords: PXE, IPoIB, InfiniBand

Internal Ref. 1816660 | Bugzilla Ref. -
Description: When the NUM_OF_VFS parameter configured in the firmware (using the mstconfig tool) is higher than 64, VF LAG mode is not supported when deploying OVS offload.
Workaround: N/A
Keywords: ConnectX-5, VF LAG, ASAP2, SwitchDev

Internal Ref. 1816660 | Bugzilla Ref. -
Description: An internal firmware error occurs either when attempting to disable SR-IOV or when unbinding the PF using a function such as ifdown or ip link, under the following condition: being in VF LAG mode in an OVS offload deployment while at least one VF of any PF is still bound on the host or attached to a VM.
Workaround: Unbind or detach the VFs before performing these actions, as follows:
1. Shut down and detach any VMs.
2. Remove the VF LAG bond interface from OVS.
3. Unbind the VFs. For each configured VF, run:
   # echo <VF PCIe BDF> > /sys/bus/pci/drivers/mlx5_core/unbind
4. Disable SR-IOV. For each PF, run:
   # echo 0 > /sys/class/net/<PF>/device/sriov_numvfs
Keywords: ConnectX-5, VF LAG, ASAP2, SwitchDev

Internal Ref. 1284047 | Bugzilla Ref. -
Description: Bandwidth degradation due to PTI (Page Table Isolation), part of Intel's CPU security fixes.
Workaround: PTI can be disabled at runtime by writing 0 to /sys/kernel/debug/x86/pti_enabled. Another option is adding "nopti" or "pti=off" to the kernel command line in grub.conf.
Keywords: Performance

Internal Ref. 1610281 | Bugzilla Ref. -
Description: Setting the speed to 56Gb/s on ConnectX-4 causes a firmware syndrome (0x1a303e).
Workaround: N/A
Keywords: ConnectX-4, syndrome

Internal Ref. 1609804 | Bugzilla Ref. -
Description: Kernel panic during an MTU change under stress traffic.
Workaround: N/A
Keywords: Panic, MTU

Internal Ref. 1578022 | Bugzilla Ref. -
Description: OVS offload: fragmented traffic is not offloaded. When sending packets bigger than the MTU, traffic passes but is not offloaded.
Workaround: N/A
Keywords: OVS offload, fragmentation
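Related to the OVS offload issues above: one way to check whether flows are actually being offloaded (for example, when investigating the fragmentation limitation) is to dump datapath flows by type. These ovs-appctl commands are standard Open vSwitch tooling, shown here as a sketch:

# ovs-appctl dpctl/dump-flows type=offloaded    # flows handled in hardware
# ovs-appctl dpctl/dump-flows type=ovs          # flows handled in software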
Notice

This document is provided for information purposes only and shall not be regarded as a warranty of a certain functionality, condition, or quality of a product. NVIDIA Corporation ("NVIDIA") makes no representations or warranties, expressed or implied, as to the accuracy or completeness of the information contained in this document and assumes no responsibility for any errors contained herein. NVIDIA shall have no liability for the consequences or use of such information or for any infringement of patents or other rights of third parties that may result from its use. This document is not a commitment to develop, release, or deliver any Material (defined below), code, or functionality. NVIDIA reserves the right to make corrections, modifications, enhancements, improvements, and any other changes to this document, at any time without notice. Customer should obtain the latest relevant information before placing orders and should verify that such information is current and complete. NVIDIA products are sold subject to the NVIDIA standard terms and conditions of sale supplied at the time of order acknowledgement, unless otherwise agreed in an individual sales agreement signed by authorized representatives of NVIDIA and customer ("Terms of Sale"). NVIDIA hereby expressly objects to applying any customer general terms and conditions with regards to the purchase of the NVIDIA product referenced in this document. No contractual obligations are formed either directly or indirectly by this document.

NVIDIA products are not designed, authorized, or warranted to be suitable for use in medical, military, aircraft, space, or life support equipment, nor in applications where failure or malfunction of the NVIDIA product can reasonably be expected to result in personal injury, death, or property or environmental damage. NVIDIA accepts no liability for inclusion and/or use of NVIDIA products in such equipment or applications and therefore such inclusion and/or use is at customer's own risk.

NVIDIA makes no representation or warranty that products based on this document will be suitable for any specified use. Testing of all parameters of each product is not necessarily performed by NVIDIA. It is customer's sole responsibility to evaluate and determine the applicability of any information contained in this document, ensure the product is suitable and fit for the application planned by customer, and perform the necessary testing for the application in order to avoid a default of the application or the product. Weaknesses in customer's product designs may affect the quality and reliability of the NVIDIA product and may result in additional or different conditions and/or requirements beyond those contained in this document. NVIDIA accepts no liability related to any default, damage, costs, or problem which may be based on or attributable to: (i) the use of the NVIDIA product in any manner that is contrary to this document or (ii) customer product designs.

No license, either expressed or implied, is granted under any NVIDIA patent right, copyright, or other NVIDIA intellectual property right under this document. Information published by NVIDIA regarding third-party products or services does not constitute a license from NVIDIA to use such products or services or a warranty or endorsement thereof. Use of such information may require a license from a third party under the patents or other intellectual property rights of the third party, or a license from NVIDIA under the patents or other intellectual property rights of NVIDIA.

Reproduction of information in this document is permissible only if approved in advance by NVIDIA in writing, reproduced without alteration and in full compliance with all applicable export laws and regulations, and accompanied by all associated conditions, limitations, and notices.

Trademarks

NVIDIA, the NVIDIA logo, and Mellanox are trademarks and/or registered trademarks of NVIDIA Corporation in the U.S. and other countries. Other company and product names may be trademarks of the respective companies with which they are associated. For the complete and most updated list of Mellanox trademarks, visit http://www.mellanox.com/page/trademarks.

Copyright © 2020 NVIDIA Corporation. All rights reserved.

NVIDIA Corporation | 2788 San Tomas Expressway, Santa Clara, CA 95051
http://www.nvidia.com