Dell PowerVault MD Series Support Matrix
This document provides information on supported software and hardware for Dell PowerVault MD3400, MD3420, MD3800i, MD3820i, MD3800f, MD3820f, MD3460, MD3860i, and MD3860f storage arrays, as well as usage considerations, recommendations, and rules.
Note: This Support Matrix contains the latest compatibility and interoperability information. If you encounter inconsistencies between this information and other MD-series documentation, this document takes precedence.
Changes in Version A01
- Fibre Channel direct-attach configuration support
- Updated hard drive support
Introduction
Unless otherwise specified, all information in this document applies to the most current RAID controller firmware version available from support.dell.com.
Table 1. MD-Series Models and Data Protocols Supported
MD Array Model | Data Protocol |
MD3400 | 12 Gbps Direct Attached SAS storage array with 12 drives (3.5 inch) |
MD3420 | 12 Gbps Direct Attached SAS storage array with 24 drives (2.5 inch) |
MD3800i | 10 Gbps iSCSI network storage array with 12 drives (3.5 inch) |
MD3820i | 10 Gbps iSCSI network storage array with 24 drives (2.5 inch) |
MD3800f | 16 Gbps Fibre Channel network storage array with 12 drives (3.5 inch) |
MD3820f | 16 Gbps Fibre Channel network storage array with 24 drives (2.5 inch) |
MD3460 | 12 Gbps Direct Attached SAS dense storage array |
MD3860i | 10 Gbps iSCSI network dense storage array |
MD3860f | 16 Gbps Fibre Channel network dense storage array |
Notes:
- MD3400, MD3420, MD3800i, MD3820i, MD3800f, and MD3820f models support 120 physical disks/slots in the base configuration; with premium feature activation, 192 physical disks/slots are supported
- MD3460, MD3860i, and MD3860f dense arrays support 120 drives/slots by default (with a 20-drive minimum) and 180 drives/slots with a Premium Feature Key (PFK)
- The Premium Feature Key (PFK) is optional on all models
Dell PowerVault MD-Series Storage Array Rules
This section contains both general and model-specific connectivity rules and usage considerations for MD storage arrays. The rules shown in Table 2 apply to ALL storage array models; for rules that apply only to specific MD models, see Tables 3 and 4.
Table 2. MD-Series Storage Array Rules for All Models
Note: MD3460, MD3860i and MD3860f platforms are supported in dual-RAID controller (duplex) configurations only.
RULE | MD3400 series 12Gbps SAS | MD3800i series 10Gbps iSCSI | MD3800f series 16Gbps Fibre Channel |
Maximum number of host servers a single storage array can connect to with one RAID Controller Module installed | 4 | 64 | 64 |
Maximum number of host servers a single storage array can connect to with two RAID Controller Modules installed | 8 (4 if using high availability) | 64 | 64 |
Maximum number of Dell 12Gb SAS HBA cards supported in a single host server attached to single array. (It is recommended to use two Dell 12Gb SAS HBA cards for all redundant cabling configurations.) | 2 (each card has two ports) | N/A | N/A |
Unused ports on a Dell 12Gb SAS HBA card already connected to an MD3460 cannot be connected to another device (such as a tape drive or other model storage array). | ✓ | N/A | N/A |
Maximum number of MD Series Storage Arrays a host server may connect to: | 2 (HA) | 4 | 4 |
SAS and iSCSI storage arrays can be connected to the same host server. | ✓ | ✓ | I/O co-existence between Fibre Channel and any other protocol on the same host is not supported. |
A hot spare for a disk group must be a physical disk of equal or greater size than any of the member disks. | ✓ | ✓ | ✓ |
When using out-of-band management with SMcli and specifying the RAID controller management port IP addresses on the MD Storage Array, SMcli commands that change the attributes of a virtual disk, virtual disk copy, or snapshot virtual disk must have management access to the owning RAID Controller Module. Where applicable, it is best practice to specify both management port IP addresses in the SMcli invocation: SMcli 192.168.128.101 192.168.128.102 -c | ✓ | ✓ | ✓ |
On Linux systems, Device Mapper multipathing drivers are required for multipath support. | ✓ | ✓ | ✓ |
Co-existence of multiple Linux multipath drivers is not supported. When using an MD3400 or MD3800 series array with Linux host servers, only the Linux Device Mapper failover driver is supported.* | ✓ | ✓ | ✓ |
Virtual disks on MD Series Storage Arrays cannot be used for booting. | ✓ | ✓ | ✓ |
Disk Groups can be migrated between Dell PowerVault MD3460/MD3860i/MD3860f arrays by following the appropriate Disk Group migration procedure.*** | ✓ | ✓ | ✓ |
Disk pools cannot be migrated. | ✓ | ✓ | ✓ |
Disk Pooling with more than 180 drives is not currently supported. | ✓ | ✓ | ✓ |
Maximum capacity per array for dynamic disk pooling. | 1024 TB | 1024 TB | 1024 TB |
All iSCSI host ports on a controller must be set to the same port speed. | N/A | ✓ | N/A |
iSCSI host ports only auto-negotiate to the port speed set in MDSM. | N/A | ✓ | N/A |
If the iSCSI initiators are connected to an MD3800i series array through network switches, make sure that the switches support IEEE 802.3x flow control and that flow control is enabled for both sending and receiving on all switch ports and server NIC ports. If flow control is not enabled, the iSCSI storage array may experience degraded I/O performance. In addition to enabling IEEE 802.3x flow control, it is also recommended to disable unicast and broadcast storm control on the switch ports connected to the iSCSI initiators and target arrays, and to turn on the "PortFast" mode of the spanning tree protocol (STP) on those same ports; a sample switch configuration follows this table. Note that turning on "PortFast" mode is different from turning off STP on the switch: with "PortFast" on, STP remains enabled on the switch ports. Turning STP off entirely may affect the whole network and can leave it vulnerable to physical topology loops. | N/A | ✓ | N/A |
For optimal I/O performance, avoid having more than one iSCSI session originating from one host iSCSI port to the same controller. Ideally, the iSCSI host NIC should be connected to only one iSCSI target port on the storage subsystem. | N/A | ✓ | N/A |
For Dell-Oracle Tested and Validated solutions on the MD arrays, see http://en.community.dell.com/techcenter/enterprise-solutions/w/oracle_solutions/current-release.aspx | ✓ | ✓ | ✓ |
The number of VD copies is limited to a maximum of 511, with a maximum of 8 concurrent copies (applies to RAID controller firmware version 08.10.05.60). | ✓ | ✓ | ✓ |
Remote Replication is not supported in Simplex Mode. | N/A | ✓ | ✓ |
For Fibre Channel and iSCSI controllers, if the SAS host ports are used, they must be connected to a SAS HBA on a separate host. | N/A | ✓ | ✓ |
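The following is a minimal sketch of the switch-port settings described in the flow-control rule in Table 2, written for a Cisco IOS-based switch. Exact commands vary by switch vendor, model, and OS version; the interface name is an assumption, and on many switch models only receive-side 802.3x flow control is configurable per port. Consult your switch documentation for the equivalent settings.

    interface GigabitEthernet1/0/10
     ! Port connected to an iSCSI initiator or target array (example port)
     flowcontrol receive on
     ! Accept IEEE 802.3x pause frames from the attached device
     spanning-tree portfast
     ! PortFast: the port forwards immediately; STP itself remains enabled
     no storm-control unicast level
     no storm-control broadcast level
     ! Disable unicast and broadcast storm control on iSCSI-facing ports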
Table 3. MD-Series Storage Array Rules for Non-Dense, 2U Models Only (MD3400, MD3420, MD3800i, MD3820i, MD3800f, and MD3820f)
RULE | MD3400 series 12Gbps SAS | MD3800i series 10Gbps iSCSI | MD3800f series 16Gbps Fibre Channel |
Support for up to 120 physical slots (system default configuration). | ✓ | ✓ | ✓ |
Up to nine MD1200 and/or MD1220 series expansion enclosures can be attached to an MD storage array. Any mixture of MD1200 and MD1220 enclosures for a total of 120 physical slots is supported. | ✓ | ✓ | ✓ |
Support for up to 192 physical slots through a premium feature option. | ✓ | ✓ | ✓ |
Up to fifteen MD1200 and/or MD1220 series expansion enclosures can be attached to an MD storage array. Any mixture of MD1200 and MD1220 enclosures for a total of 192 physical slots is supported. | ✓ | ✓ | ✓ |
Maximum number of physical disks in a RAID 0 or RAID 1/10 disk group is 120. | ✓ | ✓ | ✓ |
Maximum number of physical disks in a RAID5 or RAID6 disk group is 30. | ✓ | ✓ | ✓ |
Attached MD1200 series expansion enclosures must be run in unified mode. | ✓ | ✓ | ✓ |
The number of Snapshots (Legacy) is limited to: maximum of 256 per array, maximum of 16 per VD | ✓ | ✓ | ✓ |
The number of Remote Replicas (Legacy) is limited to: maximum of 32 per array | N/A | ✓ | ✓ |
Table 4. MD-Series Storage Array Rules for Dense, 4U Models Only (MD3460, MD3860i, and MD3860f)
RULE | MD3460 series 12Gbps SAS | MD3860i series 10Gbps iSCSI | MD3860f series 16Gbps Fibre Channel |
Support for up to 180 physical slots with a premium feature activation. | ✓ | ✓ | ✓ |
Maximum number of physical disks in a RAID 0 or RAID 1/10 disk group is 120. | ✓ | ✓ | ✓ |
Up to two MD3060e series expansion enclosures can be attached to MD3460/MD3860i/MD3860f dense storage arrays for a total of 3 MD3x60 enclosures. | ✓ | ✓ | ✓ |
Support for up to 120 physical slots (system default configuration). | ✓ | ✓ | ✓ |
A minimum of 20 SAS or SSD drives is required in each MD3x60 enclosure (four in the front slots of each drawer). | ✓ | ✓ | ✓ |
Support for up to 25 SSD drives. | ✓ | ✓ | ✓ |
Default IPv4 settings for the Management Ports on the MD Series Storage Arrays
Note: No default gateway is set.
By default, the management ports on the storage array are set to DHCP. If DHCP fails, the following IPv4 settings will be used:
Table 5. Default IPv4 Management Port Addresses
Controller/Port | IPv4 address | Subnet Mask |
Controller 0, Port 0 | 192.168.128.101 | 255.255.255.0 |
Controller 1, Port 0 | 192.168.128.102 | 255.255.255.0 |
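If DHCP fails and the array falls back to these addresses, the management station needs an address on the same subnet to reach the management ports. A minimal sketch for a Linux management station (the interface name eth0 and the chosen host address are assumptions):

    # Assign a temporary address on the default management subnet
    ip addr add 192.168.128.100/24 dev eth0
    # Verify connectivity to Controller 0, Port 0
    ping -c 3 192.168.128.101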
Default IPv4 settings for the iSCSI Ports on MD38x0i array
Note: No default gateway is set.
By default, the iSCSI ports on the storage array are set to the following static IPv4 settings:
Table 6. Default iSCSI Port IPv4 Addresses on MD38x0i Storage Arrays
Controller/Port (MD38x0i) | IPv4 address | Subnet Mask | Port # |
Controller 0, Port 0 | 192.168.130.101 | 255.255.255.0 | 3260 |
Controller 0, Port 1 | 192.168.131.101 | 255.255.255.0 | 3260 |
Controller 1, Port 0 | 192.168.130.102 | 255.255.255.0 | 3260 |
Controller 1, Port 1 | 192.168.131.102 | 255.255.255.0 | 3260 |
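As an illustration, discovering and logging in to the default target portal on Controller 0, Port 0 with the Linux open-iscsi initiator might look like the following sketch (it assumes the host NIC is already addressed on the 192.168.130.x subnet):

    # Discover targets on Controller 0, Port 0 (default port 3260)
    iscsiadm -m discovery -t sendtargets -p 192.168.130.101:3260
    # Log in to the discovered target(s)
    iscsiadm -m node --login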
Supported RAID Controller Firmware and NVSRAM
Note: It is advisable to gather support information before performing any firmware upgrade.
Note: Only drivers and firmware released by Dell are supported. For the latest driver and firmware releases, see the Downloads section at support.dell.com.
Table 7. Latest RAID Controller Firmware and NVSRAM Versions
Software | Version |
RAID Controller Firmware | 08.10.05.60 |
RAID Controller NVSRAM | N2701-810890-002 |
Supported iSCSI Software Initiators
Table 8. Supported iSCSI Initiators
Operating System | SW Initiator Vendor | SW Initiator Version | Notes |
Windows Server OS | Microsoft | RTM or later | Included w/OS |
Red Hat Enterprise Linux | Red Hat | RTM or later | Included w/OS |
SUSE Linux Enterprise Server | SUSE | RTM or later | Included w/OS |
VMware ESX | VMware | RTM or later | Included w/OS |
Note: For detailed information on OS support, refer to the Supported Operating Systems section of this document.
Supported Protocol Offload (TOE / iSCSI) Adapters
Standard Gigabit and 10 Gigabit Ethernet adapters are supported when used with supported software iSCSI initiators. Hosts must have a standards-compliant iSCSI initiator to access MD Series storage. Initiator support is provided by the initiator/operating system vendor. Dell PowerVault does not support Converged Network Adapters (CNAs) in converged mode. Although PowerVault does not endorse or support initiators directly, this support matrix provides some useful configuration information for common initiators.
Dell PowerVault MD Series arrays work with any RFC 3720-compliant iSCSI initiator. The initiator MUST support all mandatory iSCSI features (IPsec is not required). This information is subject to change without notice. Dell is not responsible for any errors in this information. Hardware initiators are not expressly supported by Dell.
Also read the initiator documentation and release notes from the particular vendor, as well as the MD Series release notes, for up-to-date configuration recommendations.
Fibre Channel SFP+ Transceiver Support
Table 9. Supported Fibre Channel SFP+ Transceivers
Description | Manufacturer | Mfr. Part Number | Dell P/N |
16G SFP (FC) Short wave | Finisar | FTLF8529P3BCVA | TDTCP |
16G SFP (FC) Short wave | Avago | AFBR-57F5MZ-NA2 | TDTCP |
Supported Physical Disks
Only the physical disks listed in the table below are supported; physical disk drives purchased from the Dell Software and Peripheral store with any other part number are not supported.
Refer to the MD3400/MD3800i/MD3860i/MD3860f Drivers and Downloads section on support.dell.com for the latest available physical disk firmware.
Table 10. Supported Physical Disk Models
Form Factor | Model | Capacity | Speed | Vendor |
2.5" | HUC101212CSS600 | 1.2 TB | 10K | HGST |
2.5" | HUC156030CSS204 | 300GB | 15K | HGST |
2.5" | HUC156060CSS204 | 600GB | 15K | HGST |
3.5" | HUS724020ALS640 | 2TB | 7.2K | HGST |
3.5" | HUS724030ALS640 | 3TB | 7.2K | HGST |
3.5" | HUS724040ALS640 | 4TB | 7.2K | HGST |
2.5" | HUC103014CSS600 | 146GB | 10K | Hitachi |
2.5" | HUC103030CSS600 | 300GB | 10K | Hitachi |
2.5" | HUC106030CSS600 | 300GB | 10K | Hitachi |
2.5" | HUC106060CSS600 | 600GB | 10K | Hitachi |
2.5" | HUC109030CSS600 | 300GB | 1 | Hitachi |
2.5" | HUC109060CSS600 | 600GB | 10K | Hitachi |
2.5" | HUC109090CSS600 | 900GB | 10K | Hitachi |
2.5" | HUC151473CSS600 | 73GB | 15K | Hitachi |
2.5" | HUC151414CSS600 | 146GB | 15K | Hitachi |
3.5" | HUS156030VLS600 | 300GB | 15K* | Hitachi |
3.5" | HUS156045VLS600 | 450GB | 15K* | Hitachi |
3.5" | HUS156060VLS600 | 600GB | 15K* | Hitachi |
3.5" | HUS723020ALS640 | 2TB | 7.2K | Hitachi |
3.5" | HUS723030ALS640 | 3TB | 7.2K | Hitachi |
2.5" | LB206M | 200GB | SSD | Pliant (SanDisk) |
2.5" | LB406M | 400GB | SSD | Pliant (SanDisk) |
2.5" | LB806M | 800GB | SSD | Pliant (SanDisk) |
2.5" | LB406R | 400GB | SSD | Pliant (SanDisk) |
2.5" | LB806R | 800GB | SSD | Pliant (SanDisk) |
2.5" | LB1606R | 1.6TB | SSD | Pliant (SanDisk) |
2.5" | LB206S | 200GB | SSD | Pliant (SanDisk) |
2.5" | LB406S | 400GB | SSD | Pliant (SanDisk) |
2.5" | ST1200MM0007 | 1.2TB | 10K | Seagate |
2.5" | ST900MM0007 | 900GB | 10K | Seagate |
2.5" | ST1200MM0027 | 1.2TB | 10K | Seagate |
2.5" | ST9300605SS | 300GB | 10K | Seagate |
2.5" | ST9600205SS | 600GB | 10K | Seagate |
2.5" | ST9900805SS | 900GB | 10K | Seagate |
2.5" | ST9900605SS | 900GB | 10K | Seagate |
2.5" | ST9600204SS | 600GB | 10K | Seagate |
2.5" | ST9600104SS | 600GB | 10K | Seagate |
2.5" | ST9146803SS | 146GB | 10K | Seagate |
2.5" | ST9300603SS | 300GB | 10K | Seagate |
2.5" | ST9300503SS | 300GB | 10K | Seagate |
2.5" | ST300MM0006 | 300GB | 10K | Seagate |
2.5" | ST600MM0006 | 600GB | 10K | Seagate |
2.5" | ST900MM0006 | 900GB | 10K | Seagate |
2.5" | ST900MM0036 | 900GB | 10K | Seagate |
2.5" | ST973452SS | 73GB | 15K | Seagate |
2.5" | ST9146852SS | 146GB | 15K | Seagate |
2.5" | ST9146752SS | 146GB | 15K | Seagate |
2.5" | ST300MP0004 | 300GB | 15K | Seagate |
2.5" | ST9146853SS | 146GB | 15K | Seagate |
2.5" | ST9300653SS | 300GB | 15K | Seagate |
2.5" | ST9300453SS | 300GB | 15K | Seagate |
2.5" | ST9500620SS | 500GB | 7.2K | Seagate |
2.5" | ST91000640SS | 1TB | 7.2K | Seagate |
2.5" | ST91000642SS | 1TB | 7.2K | Seagate |
2.5" | ST9500430SS | 500GB | 7.2K | Seagate |
2.5" | ST9500431SS | 500GB | 7.2K | Seagate |
3.5" | ST3600002SS | 600GB | 10K* | Seagate |
3.5" | ST3300657SS | 300GB | 15K* | Seagate |
3.5" | ST3450857SS | 450GB | 15K* | Seagate |
3.5" | ST3600057SS | 600GB | 15K* | Seagate |
3.5" | ST3450757SS | 450GB | 15K* | Seagate |
3.5" | ST3600957SS | 600GB | 15K* | Seagate |
3.5" | ST1000NM0023 | 1TB | 7.2K | Seagate |
3.5" | ST2000NM0023 | 2TB | 7.2K | Seagate |
3.5" | ST3000NM0023 | 3TB | 7.2K | Seagate |
3.5" | ST4000NM0023 | 4TB | 7.2K | Seagate |
3.5" | ST4000NM0063 | 4TB | 7.2K | Seagate |
3.5" | ST3500414SS | 500GB | 7.2K | Seagate |
3.5" | ST31000424SS | 1TB | 7.2K | Seagate |
3.5" | ST32000444SS | 2TB | 7.2K | Seagate |
3.5" | ST31000425SS | 1TB | 7.2K | Seagate |
3.5" | ST32000445SS | 2TB | 7.2K | Seagate |
3.5" | ST500NM0001 | 500GB | 7.2K | Seagate |
3.5" | ST1000NM0001 | 1TB | 7.2K | Seagate |
3.5" | ST2000NM0001 | 2TB | 7.2K | Seagate |
3.5" | ST32000645SS | 2TB | 7.2K | Seagate |
3.5" | ST33000650SS | 3TB | 7.2K | Seagate |
3.5" | ST33000652SS | 3TB | 7.2K | Seagate |
2.5" | MBF2300RC | 300GB | 10K | Toshiba |
2.5" | MBF2600RC | 600GB | 10K | Toshiba |
2.5" | AL13SEB300 | 300GB | 10K | Toshiba |
2.5" | AL13SEB600 | 600GB | 10K | Toshiba |
2.5" | AL13SEB900 | 900GB | 10K | Toshiba |
2.5" | MK1401GRRB | 146GB | 15K | Toshiba |
2.5" | MK3001GRRB | 300GB | 15K | Toshiba |
2.5" | AL13SXB300N | 300GB | 15K | Toshiba |
2.5" | AL13SXB600N | 600GB | 15K | Toshiba |
2.5" | MK2001GRZB | 200GB | SSD | Toshiba |
2.5" | MK4001GRZB | 400GB | SSD | Toshiba |
3.5" | MK1001TRKB | 1TB | 7.2K* | Toshiba |
3.5" | MK2001TRKB | 2TB | 7.2K* | Toshiba |
3.5" | MG03SCA100 | 1TB | 7.2K | Toshiba |
3.5" | MG03SCA200 | 2TB | 7.2K | Toshiba |
3.5" | MG03SCA300 | 3TB | 7.2K | Toshiba |
3.5" | MG03SCA400 | 4TB | 7.2K | Toshiba |
2.5" | WD3000BKHG | 300GB | 10K | Western Digital |
2.5" | WD6000BKHG | 600GB | 10K | Western Digital |
2.5" | WD3001BKHG | 300GB | 10K | Western Digital |
2.5" | WD6001BKHG | 600GB | 10K | Western Digital |
2.5" | WD9001BKHG | 900GB | 10K | Western Digital |
2.5" | WD3002BKTG | 300GB | 10K | Western Digital |
2.5" | WD6002BKTG | 600GB | 10K | Western Digital |
2.5" | WD9002BKTG | 900GB | 10K | Western Digital |
3.5" | WD1000FYYG | 1TB | 7.2K | Western Digital |
3.5" | WD2000FYYG | 2TB | 7.2K | Western Digital |
3.5" | WD1001FYYG | 1TB | 7.2K | Western Digital |
3.5" | WD2001FYYG | 2TB | 7.2K | Western Digital |
3.5" | WD3001FYYG | 3TB | 7.2K | Western Digital |
3.5" | WD4001FYYG | 4TB | 7.2K | Western Digital |
Supported Expansion Enclosures
MD3x60 series dense storage arrays support a maximum of 180 physical disk slots (with premium feature activation). The additional slots can only be provided by up to two MD3060e expansion enclosures. For a system without premium feature activation, the limit is 120 physical disk slots.
Table 11. Expansion Enclosures Supported on Dense (4U) Storage Arrays
Enclosure Model | Minimum Firmware Version |
MD3060e | 03.95 |
MD34xx/38xx series storage arrays support a maximum of 192 physical disk slots (with premium feature activation). The additional slots can be provided by up to fifteen MD1200 expansion enclosures, seven MD1220 expansion enclosures, or a combination of both. When a combination of expansion enclosures is used, the total number of disk drive slots in the system cannot exceed 192. For a system without premium feature activation, the limit is 120 physical disk slots.
Table 12. Expansion Enclosures Supported on Non-Dense (2U) Storage Arrays
Enclosure Model | Minimum Firmware Version |
MD1200 | 1.01 |
MD1220 | 1.01 |
Note: Attaching a 4U (dense) expansion enclosure to a 2U (non-dense) RAID storage array is not supported; likewise, a 2U expansion enclosure cannot be attached to a 4U RAID storage array. All EMMs in an expansion stack must be at the same firmware level.
Supported Management Software
The MD Storage Software is composed of the Modular Disk Storage Manager (MDSM) and the Modular Disk Configuration Utility (MDCU). These management utilities are available on the Resource DVD provided with your system and online at dell.com/support. MD storage software is supported on all operating systems and guest operating systems listed in the "Supported Operating Systems" section. The management station must meet the following minimum requirements:
- 2GB of free hard drive space
- For MDSM and MDCU, a graphical user interface is required
The MD-Series Resource DVD and other supported management software details are shown in the following tables.
Table 13. Supported Management Software (Windows)
Software Component | Version | Notes |
MD Series Dense Storage Arrays Resource DVD | 5.0.0.70 | |
Modular Disk Storage Manager | 11.10.0A06.0005 | |
Modular Disk Configuration Utility | 2.1.0.47 | Supported on iSCSI storage arrays only |
MD34XX/38XX series Hardware Provider VDS/VSS Providers* | 11.10.0G06.0001 | Supported on: Windows Server® 2008 R2 SP1 (64-bit only), Windows Server 2012, Windows Server 2012 R2 |
MD Storage Array vCenter Plug-in | see the vCenter Plug-in Support table below | |
MD Storage Array VASA Provider (iSCSI and Fibre Channel only) | see the VASA Provider Support table below | |
MD Storage Array Storage Replication Adapter (SRA) (Fibre Channel only) | see the Storage Replication Adapter Support table below |
* The maximum number of concurrent backups supported while using the hardware VSS provider with Cluster Shared Volumes is 2.
Table 14. Supported Management Software (Linux)
Software Component | Version | Notes |
MD Series Dense Storage Arrays Resource DVD | 5.0.0.70 | |
Modular Disk Storage Manager | 11.10.0A06.0005 | |
Modular Disk Configuration Utility | 2.1.0.47 | Supported on iSCSI storage arrays only |
Table 15. Supported Management Software (VMware vCenter Plug-in)
vCenter Plug-in Version | VMware version supported | Notes |
2.7 | All protocols | Compatible with firmware 08.10.05.60 only |
2.5 | All protocols | Compatible with firmware 08.10.05.60 only |
Table 16. Supported Management Software (VASA)
VASA Provider Support (supported on Fibre Channel and iSCSI arrays only)
VASA Version | VMware version supported | Notes |
5.1 | vSphere™ Client 5.0/5.1/5.5; vCenter Server 5.0/5.1/5.5 | Supported on 08.10.05.60 firmware only |
Table 17. Supported Management Software (Storage Replication Adapter)
Storage Replication Adapter Support
SRA Version | VMware version supported | Notes |
5.1 | vSphere™ Client 5.0/5.1/5.5; vCenter Server 5.0/5.1/5.5 | Supported on 08.10.05.60 firmware only |
Supported Operating Systems
Where clustering is supported by the operating system, it is also supported on the MD34xx, MD38xxi, and MD38xxf series storage arrays, subject to the limitations noted in the following table:
Table 18. MD-Series Operating System Support
Operating System | Management Station | SAS Host Server | iSCSI Host Server | Fibre Channel Host Server | Notes & Required Hotfixes |
Windows Server 2012 R2* | |||||
Standard Server and Core | ✓ | ✓ | ✓ | ✓ | |
Datacenter Server and Core | ✓ | ✓ | ✓ | ✓ | |
Foundation Server and Core | ✓ | ✓ | ✓ | ✓ | |
Windows Server 2012 * | |||||
Standard Server and Core | ✓ | ✓ | ✓ | ✓ | KB2822241 |
Datacenter Server and Core | ✓ | ✓ | ✓ | ✓ | KB2822241 |
Essentials Server and Core | ✓ | ✓ | ✓ | ✓ | KB2822241 |
Windows Server 2008 R2 SP1* | |||||
Windows 2008 R2 SP1 Standard and Core | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 R2 SP1 Enterprise and Core | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 R2 SP1 Data Center and Core | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 R2 SP1 Foundation | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 R2 SP1 Web and Core | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 Storage Server R2 SP1 all editions | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Windows 2008 R2 SP1 HPC Server | ✓ | ✓ | ✓ | ✓ | KB2522766 |
Red Hat Enterprise Linux (RHEL) | |||||
Red Hat Enterprise Linux 6.5 (x64 only) | ✓ | ✓ | ✓ | ✓ | Basic Server install (Minimum) |
Red Hat Enterprise Linux 6.4 (x64 only) | ✓ | ✓ | ✓ | ✓ | Basic Server install (Minimum) |
SUSE Linux Enterprise Server (SLES) | |||||
SUSE Linux Enterprise Server 11.3 (x64 only) | ✓ | ✓ | ✓ | ✓ | |
SUSE Linux Enterprise Server 11.2 (x64 only) | ✓ | ✓ | ✓ | ✓ | |
Virtualization Hosts / Hypervisors | |||||
VMware ESXi 5.5 | ✓ | ✓ | ✓ | For supported array FW versions, see the VMware HCL. Supported path policies: MRU, RR |
VMware ESXi 5.1 U1 | ✓ | ✓ | ✓ | For supported array FW versions, see the VMware HCL. Supported path policies: MRU, RR |
VMware ESXi 5.0 U2 | ✓ | ✓ | ✓ | For supported array FW versions, see the VMware HCL. Hardware iSCSI initiators are not supported. Supported path policies: MRU, RR |
Windows Server 2012 w/Hyper-V | ✓ | ✓ | ✓ | ✓ | |
Hyper-V Server 2008 R2 SP1 | ✓ | ✓ | ✓ | ✓ | |
Windows Server 2008 R2 SP1 with Hyper-V | ✓ | ✓ | ✓ | ✓ | |
Windows Desktop Operating Systems | |||||
Windows 8 (x64 only) | ✓ |
Windows 7 (x86, x64) | ✓ |
*NOTE: Core editions of Windows Server can only manage storage arrays via the SMcli client.
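For example, checking array health from a Core edition with SMcli, specifying both RAID controller management port IP addresses as recommended in Table 2, might look like the following sketch (the default management addresses from Table 5 and the command string are illustrative):

    SMcli 192.168.128.101 192.168.128.102 -c "show storageArray healthStatus;"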
ALUA Support on Supported Host Operating System
The following operating systems supported by your MD Series storage arrays support ALUA natively. No configuration steps are required to enable ALUA on these operating systems.
- Microsoft Windows 2008 R2 and newer.
- Red Hat Enterprise Linux 6.4 and newer.
- SUSE Linux Enterprise Server 11 SP2 and newer.
VMware ESXi is not natively configured to support ALUA on the MD Series storage arrays; to enable ALUA, you must configure it manually. Configuration details are provided in the MD Series Administrator's documents on support.dell.com.
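As orientation only, enabling ALUA on ESXi generally involves adding a VMW_SATP_ALUA claim rule of the following form. This is a hedged sketch: the vendor and model strings below are assumptions and must be replaced with the values given in the MD Series Administrator's documents for your array.

    # Add an ALUA SATP claim rule for the MD array (vendor/model strings are placeholders)
    esxcli storage nmp satp rule add --satp VMW_SATP_ALUA --vendor DELL --model "MD38xxf" --claim-option tpgs_on --description "Dell MD Series ALUA"
    # Verify the rule was added
    esxcli storage nmp satp rule list | grep DELL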
Table 19. Supported Device Mapper Software
Operating System | Component | Supported Version |
SUSE Linux Enterprise Server 11.3 | Native | Native * |
SUSE Linux Enterprise Server 11.2 | Native | Native * |
Red Hat Enterprise Linux 6.5 | Native | Native * |
Red Hat Enterprise Linux 6.4 | Native | Native * |
Red Hat Enterprise Linux 5.9 | Native | Native * |
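On the distributions above, the native Device Mapper stack is enabled with the operating system's own tools; no separate failover driver is installed. A minimal sketch for Red Hat Enterprise Linux 6 (mpathconf is RHEL-specific; on SLES, edit /etc/multipath.conf directly):

    # Load the RDAC device handler used by MD Series arrays
    modprobe scsi_dh_rdac
    # Create a default /etc/multipath.conf and enable multipathing (RHEL 6)
    mpathconf --enable
    service multipathd start
    # List multipath devices and their path states
    multipath -ll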
Supported SAS Host Bus Adapters
Please go to support.dell.com to download the latest supported version of the 12Gbps SAS HBA firmware and drivers for your specific server hardware platform.
Table 20. Supported 12Gbps SAS HBAs
Vendor | Model | Dell P/N |
LSI | 9300-8e | 156NC |
Supported Fibre Channel Host Bus Adapters
Table 21. Supported Fibre Channel HBAs
Host Bus Adapter Name | Direct-attach Configuration** | Fabric Configuration | Dell P/N | Available from |
Qlogic* | ||||
QLE2670 | ✓ | ✓ | | www.qlogic.com |
QLE2672 | ✓ | ✓ | | www.qlogic.com |
QLE2660 | ✓ | ✓ | 0187V TC40H (low-profile) | Dell |
QLE2662 | ✓ | ✓ | 9J1RG 7JKH4 (low-profile) | Dell |
Emulex* | ||||
LPe16000 | ✓ | ✓ | W12YJ | www.emulex.com |
LPe16002 | | ✓ | 4G6WF | www.emulex.com |
* See Required Timeout Settings for Fibre Channel Host Bus Adapters for required timeout settings by manufacturer.
**Only certain operating systems support direct-attach configurations. See Supported Fibre Channel Direct-Attach Configuration Operating Systems for a detailed list.
Supported Fibre Channel Direct-Attach Configuration Operating Systems
- Windows Server 2012 R2
- Windows Server 2012
- Windows Server 2008 R2 SP1
- Red Hat Enterprise Linux 6.4
Required Timeout Settings for Fibre Channel Host Bus Adapters
This table shows required timeout settings for all Dell-supported Fibre Channel (FC) HBAs, by manufacturer and operating system. Make sure that any FC HBA connected to your MD Series Fibre Channel storage array has these timeout values set as shown.
Use one of these manufacturer utilities to set these values on your HBA:
- Emulex® HBAnyware® or OneCommand™ Manager
- QLogic SANsurfer FC HBA Manager
Table 22. Fibre Channel HBA Timeout Values (by Manufacturer)
HBA Manufacturer | Timeout Parameter | Required Value (in seconds) |
Qlogic | Link Down Timeout | 10
Windows Server 2008 R2 SP1 | Port Down Retry Count | 10
Linux only | qlport_down_retry | 10 |
Emulex | |
Windows only | Link Timeout | 10
Windows only | Node Timeout | 10
Linux only | lpfc_devloss_tmo | 10
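On Linux, the qla2xxx and lpfc parameters above are typically set through a modprobe configuration file and take effect after the driver (or the initramfs) is reloaded. A minimal sketch; the file name is a placeholder:

    # /etc/modprobe.d/md-fc-timeouts.conf (hypothetical file name)
    # QLogic: retry window after a port goes down, in seconds
    options qla2xxx qlport_down_retry=10
    # Emulex: seconds before devices on a lost remote port are failed
    options lpfc lpfc_devloss_tmo=10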
Supported Fibre Channel Switches
Supported only on Fibre Channel storage arrays running the most current RAID firmware versions.
Table 23. Supported Fibre Channel Switches
Switches | Description |
Brocade | |
300 | 8Gbps 24 port FC switch |
5100 | 8Gbps 40 port FC switch |
5300 | 8Gbps 80 port FC switch |
8000 | 8Gbps 8 port FC switch |
6505 | 16Gbps 24 port FC switch |
6510 | 16Gbps 48 port FC switch |
6520 | 16Gbps 96 port FC switch |
DCX | Director class switch chassis |
DCX-4S | Director class switch chassis |
48000 | Director class switch chassis |
DCX8510-x | Director class switch chassis |
Qlogic SANbox | |
3800 | 8Gbps 8 port FC switch |
5800 | 8Gbps scalable up to 120 ports FC switch stack |
9000 | 8Gbps FC blade chassis switch |
Cisco | |
9148 | 8Gbps 48 port FC switch |
9506 | 8Gbps 192 port FC switch |
9509 | 8Gbps 336 port FC switch |
9513 | 8Gbps 528 port FC switch |