Generation 6 Installation Guide
Install new Generation 6 hardware
June 2019
Drive types
This procedure applies to nodes that contain any of the following drive types: self-encrypting drives (SEDs), hard disk drives (HDDs), and solid state drives (SSDs).
CAUTION Only install the drives that were shipped with the node. Do not mix drives of different capacities in your node. If you remove drive sleds from the chassis during installation, make sure to label the sleds clearly. You must replace the drive sleds in the same sled bay you removed them from. If drive sleds are mixed between nodes, even prior to configuration, the system will be inoperable.
If you are performing this procedure with a node containing SEDs, the node might take up to two hours longer to join the cluster than a node with standard drives. Do not power down the node during the join process.
Unpack and verify components
Before you install any equipment, inspect it to make sure that no damage occurred during transit.
Procedure
- Remove all components from the shipping package and inspect the components for any sign of damage. If the components appear damaged in any way, notify Isilon Technical Support. Do not use a damaged component.
DANGER To avoid personal injury or damage to the hardware, always use multiple people to lift and move heavy equipment.
Installation types
You may be able to skip certain sections of this procedure based on the type of installation you are performing.
New cluster
If you are installing a new cluster, follow every step in this procedure. Repeat the procedure for each chassis you install. If you are installing a new cluster with more than 22 nodes, or growing an existing cluster to include more than 22 nodes, follow the instructions in Install a new cluster using Leaf-Spine configuration. See the Isilon Site Preparation and Planning Guide for more information about the Leaf-Spine networking topology.
New chassis
If you are adding a new Generation 6 chassis to an existing cluster, follow every step in this procedure.
New node pair
If you are installing a new node pair in an existing chassis, you can skip the steps in this procedure that describe how to install rails and a chassis.
Install the chassis rails
About this task
You can install your chassis in standard ANSI/EIA RS310D 19-inch rack systems, including all Dell EMC racks. The rail kit is compatible with rack cabinets with the following hole types:
- 3/8 inch square holes
- 9/32 inch round holes
- 10-32, 12-24, M5X.8, or M6X1 pre-threaded holes
The rails adjust in length from 24 inches to 36 inches to accommodate a variety of cabinet depths. The rails are not left-specific or right-specific and can be installed on either side of the rack. The two rails are packaged separately inside the chassis shipping container.
Procedure
- Separate a rail into front and back pieces. Pull up on the locking tab and slide the two sections of the rail apart.
Diagram showing a rail separated into front and back pieces by pulling a locking tab.
- Remove the mounting screws from the back section of the rail. The back section is the thinner of the two rail sections. There are three mounting screws attached to the back bracket. There are also two smaller alignment screws. Do not remove the alignment screws.
- Attach the back section of the rail to the rack with the three mounting screws. Make sure that the locking tab is on the outside of the rail.
Diagram illustrating the attachment of the back section of a rail to a rack using mounting screws.
- Remove the mounting screws from the front section of the rail. The front section is the wider of the two rail sections. There are three mounting screws attached to the front bracket. There are also two smaller alignment screws. Do not remove the alignment screws.
- Slide the front section of the rail onto the back section that is secured to the rack.
Diagram showing the front section of the rail being slid onto the back section.
- Adjust the rail until you can insert the alignment screws on the front bracket into the rack.
- Attach the front section of the rail to the rack with only two of the mounting screws. Attach the mounting screws in the holes between the top and bottom alignment screws. You will install mounting screws in the top and bottom holes after the chassis is installed, to secure the chassis to the rack.
Diagram showing the front section of a rail being attached to the rack.
- Repeat these steps to install the second rail in the rack.
Install the chassis
Slide the chassis onto the installed rails and secure the chassis to the rack.
Before you begin
DANGER A chassis that contains drives and nodes can weigh up to 285 pounds. We recommend that you attach the chassis to a lift to install it in a rack. If a lift is not available, you must remove all drive sleds and nodes from the chassis before you attempt to lift it. Even when the chassis is empty, only attempt to lift and install the chassis with multiple people.
CAUTION If you remove drive sleds from the chassis during installation, make sure to label the sleds clearly. You must replace the drive sleds in the same sled bay you removed them from. If drive sleds are mixed between nodes, even prior to configuration, the system will be inoperable.
Procedure
- Align the chassis with the rails that are attached to the rack.
- Slide the first few inches of the back of the chassis onto the supporting ledge of the rails.
- Release the lift casters and carefully slide the chassis into the cabinet as far as the lift will allow.
- Secure the lift casters on the floor.
- Carefully push the chassis off the lift arms and into the rack.
CAUTION Make sure to leave the lift under the chassis until the chassis is safely balanced and secured within the cabinet.
- Install two mounting screws at the top and bottom of each rail to secure the chassis to the rack.
Diagram illustrating the chassis being slid onto the installed rails and into the cabinet.
- If you removed the drives and nodes prior to installing the chassis, re-install them now.
Install compute modules and drive sleds
Follow the steps in this section if you are installing a new node pair into an existing chassis, or if you needed to remove compute modules and drive sleds to safely install the chassis in the rack.
About this task
CAUTION Remember, you must install drive sleds with the compute module they were packaged with on arrival to the site. If you removed the compute nodes and drive sleds to rack the chassis, you must replace the drive sleds and compute modules in the same bays you removed them from. If drive sleds are mixed between nodes, even prior to configuration, the system will be inoperable. If all compute nodes and drive sleds are already installed in the chassis, you can skip this section.
Procedure
- At the back of the chassis, locate the empty node bay where you will install the node.
- Pull the release lever away from the node. Keep the lever in the open position until the node is pushed all the way into the node bay.
- Slide the node into the node bay. Note: Support the compute node with both hands until it is fully inserted in the node bay.
Diagram showing a compute node being inserted into an empty node bay, with a release lever.
- Push the release lever in against the node back panel. You can feel the lever pull the node into place in the bay. If you do not feel the lever pull the node into the bay, pull the lever back into the open position, make sure that the node is pushed all the way into the node bay, then push the lever in against the node again.
- Tighten the thumbscrew on the release lever to secure the lever in place.
- At the front of the chassis, locate the empty drive sled bays where you will install the drive sleds that correspond to the compute module you just installed.
- Make sure the drive sled handle is open before inserting the drive sled.
- With two hands, slide the drive sled into the sled bay.
- Push the drive sled handle back into the face of the sled to secure the drive sled in the bay.
Back panel
The back panel provides connections for power, network access, and serial communication, as well as access to the power supplies and cache SSDs.
Callout | Component | Callout | Component |
---|---|---|---|
1 | 1Gb management port | 6 | Multi-function button |
2 | Internal network ports | 7 | Power supply |
3 | External network ports | 8 | Cache SSDs |
4 | Console connector | 9 | USB connector |
5 | Do Not Remove LED | 10 | HDMI debugging port |
Diagram of the back panel of a node, with numbered ports and components.
CAUTION Only trained Dell EMC Isilon personnel should connect to the node with the USB or HDMI debugging ports. For direct access to the node, connect to the console connector.
CAUTION Do not connect mobile devices to the USB connector for charging.
Multi-function button
You can perform two different functions with the multi-function button. With a short press of the button, you can begin a stack dump. With a long press of the button, you can force the node to power down.
Note: We recommend powering down nodes from the OneFS command line. Only power down a node with the multi-function button if the node doesn't respond to the OneFS command.
Rail installation for switches
Switches ship with the proper rails or tray to install the switch in the rack.
The following internal network switches ship with Dell EMC rails to install the switch. The switch rails are adjustable to fit NEMA front rail to rear rail spacing ranging from 22 in to 34 in.
Note: The Celestica Ethernet rails are designed to overhang the rear NEMA rails to align the switch with the Generation 6 chassis at the rear of the rack. The rails require a minimum clearance of 36 in from the front NEMA rail to the rear of the rack to ensure that the rack door can be closed.
Table 1 InfiniBand switches
Switch | Ports | Network |
---|---|---|
Mellanox Neptune MSX6790 | 36-port | QDR InfiniBand |
Mellanox SX6018 | 18-port | |
Mellanox QDR | 8-port | |
Mellanox MIS5022Q-1BFR | 8-port | |
Mellanox MSX6506-NR | 108-port | FDR chassis switch |
Mellanox MSX6512-NR | 216-port | FDR chassis switch |
Table 2 Dell Z9100-ON Ethernet switch
Switch | Ports | Network |
---|---|---|
Dell Z9100-ON | 32-port | 32x100GbE, 32x40GbE, 128x10GbE (with breakout cables) |
Note: In OneFS 8.2.1, the Dell Z9100-ON switch is required if you plan to implement Leaf-Spine networking for large clusters.
Table 3 Celestica Ethernet switches
Switch | Ports | Network |
---|---|---|
Celestica D2024 | 24-port | 10GbE |
Celestica D4040 | 32-port | 40GbE |
Celestica D2060 | 48-port | 10GbE |
Note: There is no breakout cable support for Arista switches. However, you can add a 10GbE or 40GbE line card depending on the Arista switch model. Details are included in the following table.
Table 4 Arista Ethernet switches
Switch | Nodes |
---|---|
Arista DCS-7304 | Ships with 2 line cards, each with 48 10GbE ports, supporting a maximum of 144 nodes. You can add either of the following line cards: |
Arista DCS-7308 | Ships with 2 line cards, each with 32 40GbE ports, supporting a maximum of 144 nodes. You can add either of the following line cards: |
Table 5 Arista switch requirements
Switch | Rack enclosure | PDU |
---|---|---|
Arista DCS-7304 | 8U | C19 |
Arista DCS-7308 | 13U | |
Installing the switch
About this task
The switches ship with the proper rails or tray to install the switch in the rack.
Note: If the installation instructions in this section do not apply to the switch you are using, follow the procedures provided by your switch manufacturer.
CAUTION If the switch you are installing features power connectors on the front of the switch, it is important to leave space between appliances to run power cables to the back of your rack. There is no 0U cable management option available at this time.
Procedure
- Remove rails and hardware from packaging.
- Verify that all components are included.
- Locate the inner and outer rails and secure the inner rail to the outer rail.
- Attach the rail assembly to the rack using the eight screws, as illustrated in the following figure. Note: The rail assembly is adjustable for NEMA front-to-rear spacing from 22 in to 34 in.
Diagram showing the assembly of inner and outer rail sections.
- Attach the switch rails to the switch by placing the larger side of the mounting holes on the inner rail over the shoulder studs on the switch. Press the rail evenly against the switch. Note: The rail tabs for the front NEMA rail are located on the power supply side of the switch.
Diagram illustrating the attachment of switch rails to a switch.
- Slide the inner rail toward the rear of the switch so that the shoulder studs slide into the smaller side of each of the mounting holes on the inner rail. Ensure that the inner rail is firmly in place.
- Secure the switch to the rack by fastening the bezel clip and the switch to the rack with the two screws, as illustrated in the following figure.
Diagram showing the switch secured to the rail with screws.
- Snap the bezel in place.
Attaching network and power cables
Attach network and power cables so that there are redundant power and network connections, and dress the cables to allow for easy maintenance in the future.
The following image shows you how to attach your internal network and power cables for a node pair. Both node pairs in a chassis must be cabled in the same way.
Diagram illustrating the connection of internal network cables (to switches) and power cables (to PDUs) for a node pair.
1. To internal network switch 2 | 2. To internal network switch 1 |
3. To PDU 1 | 4. To PDU 2 |
Work with the site manager to determine external network connections, and bundle the additional network cables together with the internal network cables from the same node pair.
It is important to keep future maintenance in mind as you dress the network and power cables. Cables must be dressed loosely enough to allow you to:
- remove any of the four compute nodes from the back of the Generation 6 chassis.
- remove power supplies from the back of compute nodes.
In order to avoid dense bundles of cables, you can dress the cables from the node pairs to either side of the rack. For example, dress the cables from nodes 1 and 2 toward the lower right corner of the chassis, and dress the cables from nodes 3 and 4 toward the lower left corner of the chassis.
Wrap network cables and power cables into two separate bundles to avoid EMI (electromagnetic interference) issues, but make sure that both bundles easily shift together away from components that need to be removed during maintenance, such as compute nodes and power supplies.
Cable management
Organize cables to protect the cable connections, maintain proper airflow around the cluster, and ensure fault-free maintenance of the Isilon nodes.
Protect cables
Damage to the InfiniBand or Ethernet cables (copper or optical fiber) can affect the Isilon cluster performance. Consider the following to protect cables and cluster integrity:
- Never bend cables beyond the recommended bend radius. The recommended bend radius for any cable is at least 10–12 times the diameter of the cable. For example, if a cable is 1.6 inches in diameter, round up to 2 inches and multiply by 10 for an acceptable bend radius of 20 inches (see the sketch after this list). Cables differ, so follow the recommendations of the cable manufacturer.
- As illustrated in the following figure, the most important design attribute for bend radius consideration is the minimum mated cable clearance (Mmcc). Mmcc is the distance from the bulkhead of the chassis through the mated connectors/strain relief including the depth of the associated 90 degree bend. Multimode fiber has many modes of light (fiber optic) traveling through the core. As each of these modes moves closer to the edge of the core, light and the signal are more likely to be reduced, especially if the cable is bent. In a traditional multimode cable, as the bend radius is decreased, the amount of light that leaks out of the core increases, and the signal decreases.
Diagram illustrating cable bend radius, showing minimum mated cable clearance (Mmcc), mated connector length, raw cable bend radius, raw cable diameter, strain relief, connector body, and chassis bulkhead.
- Keep cables away from sharp edges or metal corners.
- Never bundle network cables with power cables. If network and power cables are not bundled separately, electromagnetic interference (EMI) can affect the data stream.
- When bundling cables, do not pinch or constrict the cables.
- Avoid using zip ties to bundle cables; instead, use Velcro hook-and-loop ties that do not have hard edges and can be removed without cutting. Fastening cables with Velcro ties also reduces the impact of gravity on the bend radius. Note: Gravity decreases the bend radius and results in the loss of light (fiber optic), signal power, and quality.
- For overhead cable supports:
- Ensure that the supports are anchored adequately to withstand the significant weight of bundled cables. Anchor cables to the overhead supports, then again to the rack to add a second point of support.
- Do not let cables sag through gaps in the supports. Gravity can stretch and damage cables over time. You can anchor cables to the rack with velcro ties at the mid-point of the cables to protect your cable bundles from sagging.
- Place drop points in the supports that allow cables to reach racks without bending or pulling.
- If the cable is running from overhead supports or from underneath a raised floor, be sure to include vertical distances when calculating necessary cable lengths.
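The bend-radius rule of thumb above is simple arithmetic, and a short worked sketch can help when planning cable runs. The following Python snippet only restates the rule described in this list (round the cable diameter up to the nearest inch, then multiply by 10–12); the function name and sample values are illustrative, and the result never replaces the cable manufacturer's own specification.

```python
import math

def minimum_bend_radius(cable_diameter_in, multiplier=10):
    """Return a conservative minimum bend radius in inches.

    Follows the rule of thumb in this section: round the cable diameter
    up to the nearest whole inch, then multiply by 10-12. Always defer
    to the cable manufacturer's recommendations.
    """
    rounded_diameter = math.ceil(cable_diameter_in)
    return rounded_diameter * multiplier

# Example from the text: a 1.6 inch cable rounds up to 2 inches,
# giving a 20 inch minimum bend radius at the 10x multiplier.
print(minimum_bend_radius(1.6))        # 20
print(minimum_bend_radius(1.6, 12))    # 24
```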
Ensure airflow
Bundled cables can obstruct the movement of conditioned air around the cluster.
- Secure cables away from fans.
- To keep conditioned air from escaping through cable holes, employ flooring seals or grommets.
Prepare for maintenance
Design the cable infrastructure to accommodate future work on the cluster. Think ahead to required tasks on the cluster, such as locating specific pathways or connections, isolating a network fault, or adding and removing nodes and switches.
- Label both ends of every cable to denote the node or switch to which it should connect.
- Leave a service loop of cable behind nodes. Service technicians should be able to slide a node out of the rack without pulling on power or network connections. In the case of Generation 6 nodes, you should be able to slide any of the four nodes out of the chassis without disconnecting any cables from the other three nodes. WARNING If adequate service loops are not included during installation, downtime might be required to add service loops later.
- Allow for future expansion without the need for tearing down portions of the cluster.
Connect the internal cluster network
Internal network cables connect nodes to the cluster's internal network so the nodes can communicate with each other.
Procedure
- Depending on the type of internal network card in the node, connect a network cable between the int-a port, the bottom of the two ports, and the network switch for the Internal A network.
- For the secondary internal network, connect the int-b port, the top port, to a separate network switch for the int-b network.
Diagram showing the connection of internal network cables from node ports (int-a and int-b) to network switches.
Connect the external client network
Ethernet cables connect the node to the cluster's external network so the node can communicate with external clients.
About this task
Complete the following steps to connect the node with the switch for the external network.
Procedure
- With an Ethernet cable, connect the ext-1 port on the node, the bottom of the two ports, to the switch for the external network. For additional connections, use the ext-2 port, the top port.
Diagram showing the connection of external network cables from node ports (ext-1 and ext-2) to an external network switch.
Install transformers
This step only applies if you are installing F800 or H600 nodes into a low-line power environment (100v-120v). If you are not installing F800 or H600 nodes, or if you are not installing into a low-line power environment, you can skip this section.
About this task
CAUTION If you are installing F800 or H600 nodes into a low-line power environment (100v-120v), you must install transformers that will regulate power to the requirements for these nodes (200v-240v). To make sure that there are redundant power sources, you will install two transformers, then connect the node power supplies from peer nodes, one to each transformer. The transformers will connect to the power source.
Reference the Isilon Generation 6 Transformer Installation Guide for complete instructions on installing a transformer and connecting your power supplies.
Connect the power supplies
Connect a power cable to each node in the back of the chassis.
About this task
Perform these steps for every node installed in the back of the chassis. Nodes will automatically power up when they are connected to power.
Procedure
- Connect the power cord to your power source. CAUTION In order for node pairs to provide redundant power to one another, you must not connect both nodes in a node pair to the same power source. Nodes in a node pair must be connected to separate power sources. From the back of a Generation 6 chassis, nodes in bays 1 and 2 are a node pair and nodes in bays 3 and 4 are a node pair.
- Connect the power cord to the power supply.
Diagram showing a power cord connected to a power supply unit.
- Rotate the metal bale down over the power cord to hold the cord in place.
Diagram showing a power cord secured in place by a metal bale.
Configure the node
Before using the node, you must either create a new cluster or add the node to an existing cluster.
Federal installations
You can configure nodes to comply with United States federal regulations. If you are installing an EMC Isilon cluster for a United States federal agency, you can configure the cluster's external network with IPv6 addresses. To comply with federal requirements, if the OneFS cluster is configured for IPv6, you must enable link-local addresses.
As part of the installation procedure, you can configure the external cluster for IPv6 addresses in the Isilon configuration wizard after you power up a node. After you install the cluster, you can enable link-local addresses by following the instructions in the KB article How to enable link-local addresses for IPv6.
SmartLock compliance mode
You can configure nodes to operate in SmartLock compliance mode. You should only choose to run your cluster in SmartLock compliance mode if your data environment must comply with SEC Rule 17a-4(f).
Compliance mode controls how SmartLock directories function and limits access to the cluster in alignment with SEC Rule 17a-4(f). A valid SmartLock license is required to configure a node in compliance mode.
CAUTION After you choose to run a node in SmartLock compliance mode, you cannot leave compliance mode without reformatting the node.
SmartLock compliance mode is incompatible with Isilon for vCenter, VMware vSphere API for Storage Awareness (VASA), and the VMware vSphere API for Array Integration (VAAI) NAS Plug-In for Isilon.
Connect to the node using a serial cable
You can use a null modem serial cable to provide a direct connection to a node.
Before you begin
If no serial ports are available, you can use a USB-to-serial converter.
Procedure
- Connect a null modem serial cable to the serial port of a computer, such as a laptop.
- Connect the other end of the serial cable to the serial port on the back panel of the node.
- Start a serial communication utility such as Minicom (UNIX) or PuTTY (Windows).
- Configure the connection utility to use the following port settings:
Setting | Value |
---|---|
Transfer rate | 115,200 bps |
Data bits | 8 |
Parity | None |
Stop bits | 1 |
Flow control (RTS/CTS) | Hardware |
- Open a connection to the node.
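If you prefer to script the console connection rather than use Minicom or PuTTY, the port settings in the table above can be applied with a serial library. The sketch below assumes the third-party pyserial package and a hypothetical device path (/dev/ttyUSB0 on UNIX; use the appropriate COM port on Windows). It only demonstrates opening the port with the documented settings and reading the initial output; it is not an Isilon-provided tool.

```python
import serial  # third-party pyserial package (pip install pyserial)

# Port settings from the table above: 115,200 bps, 8 data bits,
# no parity, 1 stop bit, hardware (RTS/CTS) flow control.
console = serial.Serial(
    port="/dev/ttyUSB0",      # hypothetical; use COM3 or similar on Windows
    baudrate=115200,
    bytesize=serial.EIGHTBITS,
    parity=serial.PARITY_NONE,
    stopbits=serial.STOPBITS_ONE,
    rtscts=True,              # hardware flow control
    timeout=5,
)

# Send a newline to wake the console, then print whatever the node returns.
console.write(b"\r\n")
print(console.read(1024).decode(errors="replace"))
console.close()
```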
Run the configuration wizard
The Isilon configuration wizard starts automatically when a new node is powered on. The wizard provides step-by-step guidance for configuring a new cluster or adding a new node to an existing cluster.
About this task
The following procedure assumes that there is an open serial connection to a new node.
Note: You can type back at most prompts to return to the previous step in the wizard.
Procedure
- Depending on whether you are creating a new cluster, joining a node to an existing cluster, or preparing a node to run in SmartLock compliance mode, choose one of the following options:
- To create a new cluster, type 1.
- To join the node to an existing cluster, type 2.
- To exit the wizard and configure the node manually, type 3.
- To restart the node in SmartLock compliance mode, type 4.
CAUTION If you choose to restart the node in SmartLock compliance mode, the node restarts and returns to this step. Selection 4 changes to enable you to disable SmartLock compliance mode. This is the last opportunity to back out of compliance mode without reformatting the node.
For new clusters, the following table lists the information necessary to configure the cluster. To make sure the installation process is not interrupted, we recommend that you collect this information prior to installation.
Setting | Description |
---|---|
SmartLock compliance license | A valid SmartLock license. For clusters in compliance mode only. |
Root password | The password for the root user. Clusters in compliance mode do not allow a root user and request a compliance administrator (compadmin) password in place of a root user. |
Admin password | The password for the administrator user. |
Cluster name | The name used to identify the cluster. Note: Cluster names must begin with a letter and can contain only numbers, letters, and hyphens. |
Character encoding | The character encoding for the cluster. The default character encoding is UTF-8. |
int-a network settings | The network settings used by the int-a network. The int-a network is used for communication between nodes. The int-a network must be configured with IPv4. The int-a network must be on a separate subnet from an int-b/failover network. |
int-b/failover network settings | The network settings used by the optional int-b/failover network. The int-b network is used for communication between nodes and provides redundancy with the int-a network. The int-b network must be configured with IPv4. The int-a and int-b networks must be on separate subnets. The failover IP range is a virtual IP range that is resolved to either one of the active ports during failover. |
ext-1 network settings | The network settings used by the ext-1 network. The ext-1 network is used by clients to access the cluster. The default ext-1 network can be configured with IPv4 or IPv6 addresses. You can configure the external network with IPv6 addresses by entering an integer less than 128 for the netmask value. The standard external netmask value for IPv6 addresses is 64. If you enter a netmask value with dot-decimal notation, you must use IPv4 addresses for your IP range. |
Default gateway | The IP address of the optional gateway server through which the cluster communicates with clients outside the subnet. Enter an IPv4 or IPv6 address, depending on how you configured the ext-1 network settings. |
SmartConnect settings | SmartConnect balances client connections across nodes in a cluster. For information about configuring SmartConnect, see the OneFS Administration Guide. |
DNS settings | The DNS settings for the cluster. Enter a comma-separated list to specify multiple DNS servers or search domains. Enter IPv4 or IPv6 addresses, depending on how you configured the ext-1 network settings. |
Date and time settings | The day and time settings for the cluster. |
Cluster join mode | The method that the cluster uses to add new nodes. Choose one of the following options: |
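Two of the entries in the table above are format rules that are easy to check before you sit down at the console: the ext-1 netmask rule (an integer below 128 selects an IPv6 prefix length, while dot-decimal notation implies IPv4) and the cluster-name rule. The following Python sketch only illustrates those two rules with hypothetical values and function names; it is not part of the Isilon configuration wizard.

```python
import ipaddress
import re

def netmask_family(netmask: str) -> str:
    """Classify a netmask entry according to the rule in the table above.

    Dot-decimal notation (for example 255.255.255.0) implies IPv4;
    a bare integer below 128 is treated as an IPv6 prefix length.
    """
    if "." in netmask:
        # Raises ValueError if the dot-decimal mask is not a valid IPv4 netmask.
        ipaddress.IPv4Network(f"0.0.0.0/{netmask}")
        return "IPv4"
    prefix = int(netmask)
    if 0 <= prefix < 128:
        return "IPv6"
    raise ValueError("an IPv6 netmask entry must be an integer less than 128")

def is_valid_cluster_name(name: str) -> bool:
    """Check the cluster-name rule from the table: begins with a letter and
    contains only letters, numbers, and hyphens."""
    return re.fullmatch(r"[A-Za-z][A-Za-z0-9-]*", name) is not None

print(netmask_family("255.255.255.0"))    # IPv4
print(netmask_family("64"))               # IPv6 (the standard external value)
print(is_valid_cluster_name("isilon-1"))  # True
```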
Updating node firmware
To make sure that the most recent firmware is installed on a node, update the node firmware.
Follow instructions in the Isilon Node Firmware Package Release Notes to update your node to the most recent Isilon Node Firmware Package.
Licensing and remote support
After you configure new hardware, update your OneFS license and configure the new hardware for remote support.
For instructions on updating your OneFS license and configuring remote support (ESRS), refer to the OneFS CLI Administration Guide or OneFS Web Administration Guide.
Front panel LCD menu
You can perform certain actions and check a node's status from the LCD menu on the front panel of the node.
LCD Interface
The LCD interface is located on the node front panel. The interface consists of the LCD screen, a round button labeled ENTER for making selections, and four arrow buttons for navigating menus. There are also four LEDs across the bottom of the interface that indicate which node you are communicating with. You can change which node you are communicating with by using the arrow buttons. The LCD screen is dark until you activate it. To activate the LCD screen and view the menu, press the ENTER button. You can press the right arrow button to move to the next level of a menu.
Attach menu
The Attach menu contains the following sub-menu:
- Drive: Adds a drive to the node. After you select this command, you can select the drive bay that contains the drive you would like to add.
Status menu
The Status menu contains the following sub-menus:
- Alerts: Displays the number of critical, warning, and informational alerts that are active on the cluster.
Cluster
The Cluster menu contains the following sub-menus:
- Details: Displays the cluster name, the version of OneFS installed on the cluster, the health status of the cluster, and the number of nodes in the cluster.
- Capacity: Displays the total capacity of the cluster and the percentage of used and available space on the cluster.
- Throughput: Displays throughput numbers for the cluster as <in> | <out> | <total>.
Node
The Node menu contains the following sub-menus:
- Details: Displays the node ID, the node serial number, the health status of the node, and the node uptime as <days>, <hours>:<minutes>:<seconds>
- Capacity: Displays the total capacity of the node and the percentage of used and available space on the node.
- Network: Displays the IP and MAC addresses for the node.
- Throughput: Displays throughput numbers for the node as <in> | <out> | <total>.
- Disk/CPU: Displays the current access status of the node, either Read-Write or Read-Only. Also displays the current CPU throttling status, either Unthrottled or Throttled.
- Drives: Displays the status of each drive bay in the node. You can browse through all the drives in the node with the right and left navigation buttons. You can view the drives in other nodes in the cluster with the up and down navigation buttons. The node you are viewing will display above the drive grid as Drives on node:<node number>.
- Hardware: Displays the current hardware status of the node as <cluster name>-<node number>:<status>. Also displays the Statistics menu.
- Statistics: Displays a list of hardware components. Select one of the hardware components to view statistics related to that component.
Update menu
The Update menu allows you to update OneFS on the node. Press the selection button to confirm that you would like to update the node. You can press the left navigation button to back out of this menu without updating.
Service menu
The Service menu contains the following sub-menus:
- Throttle: Displays the percentage at which the CPU is currently running. Press the selection button to throttle the CPU speed.
- Unthrottle: Displays the percentage at which the CPU is currently running. Press the selection button to set the CPU speed to 100%.
- Read-Only: Press the selection button to set node access to read-only.
- Read-Write: Press the selection button to set node access to read-write.
- UnitLED On: Press the selection button to turn on the unit LED.
- UnitLED Off: Press the selection button to turn off the unit LED.
Shutdown menu
The Shutdown menu allows you to shut down or reboot the node. This menu also allows you to shut down or reboot the entire cluster. Press the up or down navigation button to cycle through the four shut down and reboot options, or to cancel out of the menu. Press the selection button to confirm the command. You can press the left navigation button to back out of this menu without shutting down or rebooting.
Update the install database
After all work is complete, update the install database.
Procedure
- Browse to the Business Services portal.
- Select the Product Registration and Install Base Maintenance option.
- To open the form, select the IB Status Change option.
- Complete the form with the applicable information.
- To submit the form, click Submit.
Where to go for support
This topic contains resources for getting answers to questions about Isilon products.
Online support
- Live Chat
- Create a Service Request
For questions about accessing online support, send an email to support@emc.com.
Telephone support
- United States: 1-800-SVC-4EMC (1-800-782-4362)
- Canada: 1-800-543-4782
- Worldwide: 1-508-497-7901
- Local phone numbers for a specific country are available at Dell EMC Customer Support Centers.
Isilon Community Network
The Isilon Community Network connects you to a central hub of information and experts to help you maximize your current storage solution. From this site, you can demonstrate Isilon products, ask questions, view technical videos, and get the latest Isilon product documentation.
Isilon Info Hubs
For the list of Isilon info hubs, see the Isilon Info Hubs page on the Isilon Community Network. Use these info hubs to find product documentation, troubleshooting guides, videos, blogs, and other information resources about the Isilon products and features you're interested in.