DS4000 Implementation Cookbook v1.4

DS4000 Implementation Cookbook

This cookbook is primarily for new DS4000 implementations and is intended as a personal productivity tool. It is not intended to be comprehensive, and is provided for guidance only, on an 'as is' basis without warranty of any kind. Please be aware that its contents have not been certified by IBM. Business Partners are responsible for assuring their own technical solutions to customers.

Jodi Noack, Field Technical Sales Support
email: [email protected]

Version 1.4 dated 6/09/2006


Prior to Installation

TOOLS: Please have the following tools: #2 Phillips screwdriver, label maker, null modem serial cable (this is a serial crossover cable), Ethernet cables (one for each controller in a DS4000 unit).
Field Tip: It can be very handy to have a small 8-port Ethernet switch. This will allow you to connect your laptop to both DS4000 controllers and multiple DS4000 units when doing the configuration.

Review the IBM TotalStorage FAStT Storage Manager Version 9 Installation and Support Guide from http://www-03.ibm.com/servers/storage/support/disk/ds4500/stormgr1.html
Review the IBM TotalStorage FAStT Best Practices Guide from www.ibm.com/redbooks

Power – Each DS4000 and EXP unit will require two power sources. The power cord is 125 V, 10 A, 2.8 m.

Rack Mounting - Review the documentation that comes with your rack cabinet.
• Maintain 15 cm (6 in.) of clearance around your controller unit for air circulation.
• Ensure that the room air temperature is below 35°C (95°F).
• Plan the controller unit installation starting from the bottom of the rack.
• Remove the rack doors and side panels to provide easier access during installation.
• Position the template to the rack so that the edges of the template do not overlap any other devices.
• Connect all power cords to electrical outlets that are properly wired and grounded.
• Take precautions to prevent overloading the electrical outlets when you install multiple devices in a rack.

Software - Make sure you download the latest Firmware, NVSRAM and Drive Code for the DS4000 Storage System! IMPORTANT: To be notified of important product updates, you must first register at the IBM Support and Download Web site: http://www-03.ibm.com/servers/storage/support/ Perform the following steps to register at the IBM Support and Download Web site:

1. Click on My Support in the Additional Support box on the right side of the DS4000 Support web page.


2. The Sign In window displays. Either enter your IBM ID and Password in the sign in area to sign in and proceed to step 6 or, if you are not currently registered with the site, click Register Now.

3. The My IBM Registration window Step 1 of 2 opens. Enter the appropriate information and click Continue to get to the second My IBM Registration window.

4. In the My IBM Registration window Step 2 of 2, enter the appropriate information and click Submit to register.
5. The My IBM Registration window opens with the following message: "Thank you for registering with ibm.com. Continue to explore ibm.com as a registered user." Click Edit Profile in the My IBM box on the right side of the window.
6. The My Support window opens. Click Add Products to add products to your profile.
7. Use the pull-down menus to choose the appropriate DS4000 storage server and expansion enclosures that you want to add to your profile.
8. To add the product to your profile, select the appropriate box or boxes next to the product names and click Add Product.
9. Once the product or products are added to your profile, click the Subscribe to Email folder tab.
10. Select Storage in the pull-down menu. Select Please send these documents by weekly email, and select Downloads and drivers and Flashes to receive important information about product updates. Click Updates.
11. Click Sign Out to log out of My Support.

IP Requirement - Obtain 2 IP addresses for each DS4000 storage subsystem:
IP ________________________ IP ___________________
Obtain 1 IP address for each SAN switch:
IP: _______________________________ IP: _________________________

Host Information - Please review the README.txt files for any particular OS limitations. Please understand these limitations before going any further in this document.

Verify OS, SAN and HBA interoperability at: http://www-03.ibm.com/servers/storage/disk/ds4000/pdf/interop-matrix.pdf
Be careful not to put all the high-speed adapters on a single system bus; otherwise the computer bus becomes the performance bottleneck.

OS: ________________ HBA ________________________ OS: ________________ HBA ________________________ OS: ________________ HBA ________________________ OS: ________________ HBA ________________________

BladeCenter Fabric Support: Details for SAN attachment are in the IBM eServer BladeCenter interoperability guide. McData – Be sure to acquire the OPM (Optical Pass-thru Module) on the BladeCenter in order to attach to McData; otherwise no support is available for the ED5000 and ES1000, as they cannot operate in "open" mode.

Boot from SAN -- At present, you can only boot from the Windows NT, Windows 2000, Windows 2003, Red Hat Enterprise Linux, AIX and Solaris operating systems. Boot support for NetWare and HP-UX is not available at this time.

STORAGE MANAGER CLIENT


** NEW in 9.16.35.08 **

Support for attachment of the DS4000 EXP810 Storage Expansion Enclosure to DS4800 storage subsystems with controller firmware 06.16.xx.xx installed. Supports automatic ESM firmware synchronization with the new EXP810 storage expansion enclosure. When you install a new ESM into an existing EXP810 that is attached to a DS4000 storage subsystem that supports automatic ESM firmware synchronization, this feature resolves firmware mismatch conditions by automatically synchronizing the firmware in the new ESM with the firmware in the existing ESM.

In Storage Manager 9.16, all of the host software packages (SMruntime, SMclient, SMutil and SMagent) are included in a single DS4000 Storage Manager host software installer wizard. During the execution of the wizard, you will have a choice to install all or only certain software packages depending on the need for a given server. (RDAC is NOT included.) A graphics adapter is required, so please check the readme.txt file for the OS you are using for the specifics.

Controller firmware 06.16.xx.xx does not support EXP100 storage expansion enclosures. Do not download it into DS4000 storage subsystems that have EXP100 enclosures attached.

** NEW in 9.12.16 ** released 12/19/2005

Support for the new 400 GB SATA drive options and CRUs (P/Ns 39M4570 and 39M4575, respectively), in addition to the new 250 GB SATA drive CRU (P/N 40K6837). This new 250 GB SATA drive CRU is functionally equivalent to the previously released CRU (P/N 90P1349).

Support for the DS4100 storage subsystem with the Single Controller option. With controller firmware version 06.12.16.00 or higher installed, the DS4100 now supports the DS4000 Copy Services Remote Mirroring and VolumeCopy premium features.

Please refer to the New Features section of the IBM DS4000 Storage Manager version 9 Installation and Support Guide for the appropriate host server operating system environment for additional information about the IBM DS4000 Storage Manager version 9 enhancements.

DS4100 and EXP100 NOTES

DO NOT use the tens digit (x10) setting on the EXP enclosures. Use only the ones digit (x1) setting to set unique server IDs or enclosure IDs.

EXP100 >> IBM recommends 1 hot spare per 2 EXP100 drive expansion enclosures. I try to use 1 per drawer if you can afford it.
DS4100 >> When configuring a DS4100, it is highly recommended to configure the 14 disks that are in the base frame to use the same controller. This will prevent I/O shipping and a possible degradation in performance.
AIX version 4.3.3 and earlier are not supported with Storage Manager version 8.41 and EXP100 drawers.
Booting from a DS4000 subsystem utilizing SATA drives for the boot image is supported but not recommended due to performance reasons.


With firmware version 06.12.16.00 or higher, the DS4100 with the standard option now supports the Copy Services Remote Mirroring and VolumeCopy premium features.

Storage Controller Firmware versions for SATA:

- DS4100 Standard: SNAP_282X_06121600
- DS4100 SCU : SNAP_282X_06121600

Storage Controller NVSRAM versions for SATA:

- DS4100 Standard: N1724F100R912V05
- DS4100 SCU : N1724F10LR912V01

ESM: Version 9563

SATA NOTE: The ideal configuration for SATA drives is one drive in each EXP per array, one logical drive per array and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives. As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured, but not all of them used simultaneously, some of the randomness can be avoided. SATA drives are best used for long sequential reads and writes.


DS4000 Hardware Setup

Drive-side Fibre Channel cabling -

DS4400 and DS4500 Drive Side Cabling Differences, published December 20, 2005: http://www.redbooks.ibm.com/abstracts/tips0593.html
For the DS4400, you should pair mini-hubs 1 and 2 for the first drive loop pair, and pair mini-hubs 3 and 4 for the second drive loop pair.
Note: The in and out ports of the drive side mini-hubs on the DS4000 do not matter. In other words, you can connect to the top or bottom port. Just connect to only one (top or bottom).
DS4500

1. Redundant Drive Loop A&B – Channels 1 & 3
2. Redundant Drive Loop C&D – Channels 2 & 4
3. Wire drive loop 1 to channels 1 & 3
4. Wire drive loop 2 to channels 2 & 4

1. Connect the Out port of drive-side mini hub 4 to the OUT port on the left ESM board of the last storage expansion enclosure in Group 2.

2. Connect the In port of drive-side mini hub 2 to the IN port on the right ESM board of the first storage expansion enclosure in Group 2.

3. Repeat Steps 1 & 2 but use mini hubs 1 & 3 to create Group 1

4. Ensure that each storage expansion enclosure has a unique ID (switch setting) and that the left and right ESM board switch settings on each storage expansion enclosure are identical.

When instructed to remove and reinsert or to replace a hard drive, wait at least 70 seconds before removing the failed drive from the drive slot or inserting a new or existing drive. Similarly, wait at least 70 seconds before reinserting either the removed existing ESM module or the new ESM module into the empty ESM slot in the EXP drive expansion enclosure. There is no work-around.

DS4100 (100), 200 and 600 Dual expansion unit Fibre Channel cabling


1. If you are cabling two expansion units to the storage server, use a fibre channel cable to connect the In port on the left ESM board of the first expansion unit to the Out port on the left ESM board of the second expansion unit. Connect the In port on the right ESM board of the first expansion unit to the Out port on the right ESM board of the second expansion unit.
2. Connect the SFP Expansion port on Controller A to the In port on the left ESM board of the second expansion unit. Connect the SFP Expansion port on Controller B to the Out port on the right ESM board of the first expansion unit.
3. Ensure that each expansion unit has a unique ID (switch setting).
The two host ports in each controller are independent. They are not connected in the controller module as they would be in a hub configuration. So, there are a total of 4 host ports in the DS4300 (600), 2 in the FAStT200.
When instructed to remove and reinsert or to replace a hard drive, wait at least 70 seconds before inserting either the removed existing drive or the new drive into the drive slot. Similarly, wait at least 70 seconds before reinserting either the removed existing ESM module or the new ESM module into the empty ESM slot in the EXP drive expansion enclosure. There is no work-around.


DS4800 with EXP710 Cabling
There are two "redundant drive loops" made up of two channels each; not four drive loops. This is different from the DS4400 and DS4500 configuration only in that we added the second set of channels to the loops. Even though there are 4 loops on the back end of the DS4800 controller, as far as enclosure IDs are concerned there are only two. Drive loops 1 and 2 share the same "address space" and must have unique IDs (the last digit must be unique). The same is true for loops 3 and 4. So, a loop cannot have more than eight expansions, and the ID rules for a loop apply across the redundant loop as well. You must not repeat the ones digit in multiple IDs on the loop.

"The tray IDs should never be numbered 10,20,30,40... or 13,23,33,43 or any other combination where the 1's digit is the same. This causes the loop IDs to double up, and in certain situations you end up with two devices on the same loop with the same ID. This causes confusion, depending on which way the packet is going, as to which device is supposed to pick up the packet."

The correct numbering would be:

10 24 30 44
11 25 31 45
12 26 32 46
13 27 33 47

Height: 6.9 in (17.5 cm)
Depth: 24.8 in (63.0 cm)
Width: 19.0 in (48.3 cm)
Unit height: 4U
Weight: 80.5 lb (36.5 kg)

Power:
U.S.: 115 V, 15 A, NEMA 5-15
International: 230 V, 10 A

Configure DS4800 with drive trays in multiples of four. Distribute the drives equally between the drive trays.


[Diagram: DS4800 drive-side cabling. Controller A and Controller B each provide four dual-ported drive channels (Ch 1-4, ports P1-P4), cabled through the In/Out ports of the drive tray stacks.]


For each disk array controller, use four fibre channel loops if you have more than 4 expansion units. Based on the recommended cabling from above:

– Dedicate the drives in tray stacks 2 & 4 to disk array controller "B"
– Dedicate the drives in tray stacks 1 & 3 to disk array controller "A"
– I/O paths to each controller should be established for full redundancy and failover protection

Refer to the Installation, User's and Maintenance Guide (GC26-7748-00) for other configurations that have only 1 or 2 expansion drawers.

EXP810 Cabling
Please read all notes!

Currently only the DS4700 & DS4800 subsystems support the EXP810.

In the initial EXP810 release, you can only intermix the EXP810 and EXP710. DS4800 configurations with EXP100 and EXP810 are not supported at this time.

IMPORTANT: If you are connecting an EXP810 to a DS4800 storage subsystem that is already connected to EXP710 drive enclosures only or has a combination of EXP100s (or EXP700) drive enclosures and EXP710 drive enclosures, you will need to remove Controller A to upgrade the firmware. If you do not remove Controller A from the DS4000 storage subsystem, the new firmware may not be downloaded correctly on both controllers and the controllers will not recognize the EXP810 storage expansion enclosure. Refer to the steps in “Upgrading controller firmware for an existing DS4800 EXP710 configuration to support EXP810” on page 108 to upgrade the controller firmware to 06.16.xx.xx. ftp://ftp.software.ibm.com/pc/pccbbs/pc_servers_pdf/gc26779800.pdf

You can intermix the EXP810 and EXP710 in the same drive loop, but the Link Rate switch for all storage expansion enclosures in the intermixed drive loop must be set to 2 Gbps.

It's recommended you cable like devices (EXP710 and EXP810) on the same drive loops on the DS4000 (see the diagram on the next page).

It is recommended to use all drive ports when connecting EXP810 enclosures to the controller. In other words, if you have 4 EXP810s, spread them out so you use all of the DS4000 drive ports (shown in the diagram on this page).


Important Rule: The DS4000 controller drive port must always be connected to the EXP810 port labeled 1B. Because the left and right EXP810 ESMs (ESMs A and B) are inserted in the ESM bays in different orientation, please make sure that the port is labeled 1B before making the Fibre Channel connections. This rule applies to the EXP810 only.

There are maximum limits to the number of fibre channel hard drives that you can configure in redundant drive channel pairs. The maximum number of drives dictates a fixed number of supported EXP710 and EXP810 drive enclosure combinations. The table above lists the numbers of EXP710 and EXP810 storage expansion enclosures that you can combine for use in a single redundant drive channel/loop pair. This table assumes that a DS4000 storage subsystem is generally capable of addressing the maximum of 112 fibre channel hard drives for each redundant drive channel/loop pair.

DS4700 Cabling and Link Rate Switch


Each controller has one channel for expansion but has two ports. Having a dual ported channel provides an additional level of redundancy, reliability and performance.

To best use the dual-ported channel, you will need to connect your expansion units in two sets instead of one long set of expansion units per channel.

The Link Rate switch is located on the front right side, and it determines the enclosure speed of the backend.
It should be set to 2 Gbps if 2 Gbps FC drives are present; 4 Gbps drives will then run at 2 Gbps. If set to 4 Gbps, 2 Gbps drives will show up as bypassed drives in the Storage Manager software.
There is no support for both speeds concurrently. The DS4800 can support 4 Gbps loops and 2 Gbps loops.


[Diagram: DS4700 cabling — one controller enclosure connected to six expansion enclosures, split into two sets as described above.]


Ensure the drive enclosures have different IDs (EXP100, EXP700, EXP710). There's a switch/dial on the back of each EXP500/700 that identifies the enclosure. Each enclosure must have a different ID, otherwise you will receive conflict errors. DS4100 and DS4300 enclosures are always ID 0.
BEST PRACTICE > Change the drive enclosures to something other than the default of '00'. New EXP drawers always come with an ID of '00', and changing it will prevent errors in the event you forget to set it before adding the drawer to the DS4000 subsystem.
Within a subsystem, each drive enclosure must have a unique ID. Within a loop, the enclosure IDs should be unique in the ones column. All drive trays on any given loop should have completely unique IDs assigned to them. Example for a maximum config DS4000 900: trays on one loop should be assigned IDs 10-17 and trays on the second loop assigned IDs 20-27. Tray IDs 00 ~ 09 should not be used, and tray IDs with the same ones digit, such as 11 and 21, should not be used on the same drive loop.

When connecting the EXP100 enclosures, DO NOT use the tens digit (x10) setting. Use only the ones digit (x1) setting to set unique server IDs or enclosure IDs. This is to prevent the possibility that the controller blade has the same ALPA as one of the drives in the EXP100 enclosures under certain DS4000 controller reboot scenarios.

EXP810: The dual seven-segment enclosure ID is located on the back of each ESM next to the indicator lights. It provides a unique identifier for each module in the storage subsystem. The storage management software automatically sets the enclosure ID number. You can change the setting through the storage management software if necessary. Both ESM enclosure ID numbers are identical under normal operating conditions. The allowable range for enclosure ID settings is 0-99. However, IBM recommends that you do not set the enclosure ID to 00 or any number greater than 80.

The power down procedure includes the following steps:

1. Turn off servers
2. Turn off DS4000 controller
3. Turn off switches
4. Turn off drives

The power up procedure includes the following steps:

1. Turn on drives and wait one minute
2. Turn on switches
3. Turn on DS4000 controller
4. Turn on servers

NOTE: With firmware 05.30 and above, the controllers have a built in pause/delay to wait for the drives to stabilize but it is still a good practice to follow the proper power up sequences to prevent any loss of data.

Verify the status of the box by checking the status lights. Verify the 2 Gb light and the Conflict light.

The hub provides two status LEDs for each port. Use these LEDs to help you quickly diagnose and recover from problems.

Green LED  Amber LED  Port State
Off        Off        No GBIC Installed
On         Off        Operational GBIC; Valid Signal
Off        On         Faulty GBIC; Port Bypassed
On         On         Operational GBIC; No Valid Signal; Port Bypassed

IMPORTANT NOTE: When powering up a new system for the first time, it is recommended that you power up one EXP unit at a time and then add only 2 drives at a time! This means that you should pull out every drive in a new system and slowly add them into the system (2 at a time) until recognized. There have been problems with the controller discovering large configurations all at once, which can result in loss of drives, ESMs, GBICs, etc.
Field Tip: My recommendation is to:
1. Power up the controller and only 1 EXP unit with 1 drive installed.
2. Install the Storage Manager client on a workstation and connect to the DS4000 >> Setting up the Network & DS4000 Storage Manager Setup
3. Once you have Storage Manager connected to the DS, continue adding drives (2 at a time) and EXP units. Verify with Storage Manager that the DS4000 sees the drives before you continue to add units/drives.
4. Continue with the rest of the setup "Collect your Storage System Profile"

Connect Ethernet cables between the RAID controllers and the network switch. There is a RETAIN tip regarding a sensing problem. If you have problems locating the unit over Ethernet, try the following:

1. Make sure the Ethernet switch is set to auto sense. It does not work well with ports hard set at 100Mb.
2. If that doesn't work, hard set the ports to 10Mb.
3. The DS4000 controller sometimes won't work well with 100Mb or auto-sensing.

Setting up the Network - Logging into the Controllers

If the storage subsystem controllers have firmware version 05.30 or later, the DS4000 will have default IP settings only if NO DHCP/BOOTP server is found.

Controller  IP address       Subnet mask
A           192.168.128.101  255.255.255.0
B           192.168.128.102  255.255.255.0
A2          192.168.129.101  255.255.255.0  (DS4800 only)
B2          192.168.129.102  255.255.255.0  (DS4800 only)

The DS4800 has four Ethernet ports: two on each controller blade (A, B, A2 and B2).

In 5.30 code and above, you can change the IP via the Storage Manager GUI, but you'll have to change the TCP/IP settings on your laptop or workstation to an address on the same subnet, something like 192.168.128.10 with mask 255.255.255.0. I use a Linksys switch to connect my laptop to both controllers to do this. First, discover the DS4000 and then right click on each controller: Change > IP. If you can't do this via the Ethernet, you'll have to do it through the serial port, which requires that null modem serial cable.
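As a hedged sketch of the laptop-side change on Windows (the connection name "Local Area Connection" and the address are assumptions; adjust for your machine):

rem Put the laptop on the DS4000 default subnet
netsh interface ip set address name="Local Area Connection" source=static addr=192.168.128.10 mask=255.255.255.0
rem Return the adapter to DHCP when finished
netsh interface ip set address name="Local Area Connection" source=dhcp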

Field TIP: The newer laptops from IBM, like the T and R models, do not come with a serial port, which is an obvious issue when you need to log into the controllers. Some have successfully used the IOGEAR GUC232A USB-to-serial 9-pin adapter with a standard null modem cable. Several people report that the Belkin product did not work. Just a tip that may save some time when trying to figure out how to get the storage system configured.

When making serial connections to the DS4000 storage controller, the baud rate should be set to either 38400 or 57600.


How to Connect to the DS4000 system using PuTTY from Windows

1. Launch PuTTY.exe
2. Type in the Controller IP address
3. Select the "Rlogin" protocol
4. Click Open
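As a small convenience sketch (assuming putty.exe is on your PATH and the controller still has its default address), the same session can be started from a command prompt. Note that rlogin access only works while Network Init Flag bit 5 is unset; see the Network Init Flags discussion later in this document:

rem Open an rlogin session to Controller A
putty.exe -rlogin 192.168.128.101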

How to Connect to the DS4000 system using Hyperterm

1. Attach the null modem serial cable to Controller A or Controller B. Launch HyperTerm. Try to set the HyperTerm baud rate to 38400 or 57600, 8, 1, None.
>> Using flow control other than "None" can cause HyperTerm lockups when connected to DS4000 controllers.

Note: Sending a single Serial-Break tells the controller someone is requesting shell access and to print "Press within 5 seconds: <ESC> for SHELL, <BREAK> for baud rate." Two consecutive Serial-Breaks cause the controller to cycle one step through its baud rate table. A pause of at least one second between Serial-Breaks is a good idea.

Initial Shell Access Procedure:
2. Press <CTL> and <BREAK> (you may have to keep doing this until you get a response).
3. When you see "Press Space Bar within 5 Seconds To Set Baud rate," press the Space Bar. Wait until you see that the baud rate reports being set before going to step 4.
4. Press <CTL> and <BREAK> once.
5. When you see "Press Space Bar within 5 Seconds To Set Baud rate, or Press <ESC> To Exit To Shell", press <ESC>.
6. You will now be prompted for the password: infiniti

Field Tip: If you cannot break into the controller, the culprit is usually the cable. Please ensure you have a 'null modem serial cable'.

Troubleshooting: If you cannot break into the controller and you’re sure you have a null-modem serial cable, I find that re-booting my laptop resolves the problem. Sometimes the COM1 port on your laptop gets hung. This especially happens if you’ve been connecting and re-connecting to various serial ports.

Please follow the procedure outlined here exactly, because some commands that can be issued from the serial console can cause data loss. You will need one IP address for each controller. Ctlr A __________________________ Ctlr B ___________________________

1. Log into Controller A
2. Type netCfgSet.

a. This command will display each line, one line at a time. When each line is displayed, the cursor will be placed to the right of the current value waiting for user input. Entering a "Carriage Return" at the cursor will cause the current line to be skipped with no changes being made to its value. Entering a "." (period) at the prompt will cause the value of the current line to be reset to the factory default. To change the value of a field, simply enter the new value at the cursor. Make sure to include the "." in IP addresses and to precede the Network Init Flags with "0x" if this value is to be changed. As a minimum requirement, the following fields must have non-default values in order to grant a particular host access to a controller:
i. "My IP Address"
ii. "Gateway IP Address"
iii. "Subnet Mask"

3. Repeat the process for Controller B
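For orientation, here is a minimal sketch of a netCfgSet session assigning a static address (the addresses, host name and prompts are illustrative and vary slightly by firmware level); the value typed at the cursor is shown after the current value:

-> netCfgSet
==== NETWORK CONFIGURATION ====
My Host Name        : DS_a
My IP Address       : 192.168.128.101 192.168.1.50    <- new static address typed at the cursor
Server Host Name    : host
Server IP Address   : 0.0.0.0
Gateway IP Address  : 0.0.0.0 192.168.1.1             <- new gateway typed at the cursor
Subnet Mask         : 255.255.255.0                   <- Carriage Return keeps the current value
Network Init Flags  : 0x00 0x01                       <- 0x01 = static IP, do not use BOOTP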

Notes: It is strongly recommended that during an initial configuration all values be set to their default values using a ".", except for the three required settings. Change the Network Init Flag to 0x01 if using a static IP address (keep it at 0x00 if using a DHCP or BOOTP server). In some HW versions of the DS4300, the default value is not 0x00, it's 0x80, and in that case it should be set to 0x81; the last digit is the key. By request, here is everything you wanted to know about the Network Init Flag:

The Network Init Flags are used to control the initialization of the network interfaces of a controller. The one-byte field displayed next to the Network Init Flags title when the netCfgSet command is run can be used to modify these flags. Each bit in this one-byte field corresponds to a flag. The function of each of these flags is listed below:

bit 0: =1: Do not use BOOTP for any reason  =0: Use BOOTP as needed
bit 1: =1: Use BOOTP unconditionally  =0: Use BOOTP only as necessary
bit 2: =0: Start NFS services  =1: Do not start NFS services
bit 3: =0: Use "0.0.0.0" default route  =1: Do not use "0.0.0.0" default route
bit 4: =0: Do not mount all NFS volumes  =1: Mount all NFS volumes
bit 5: =0: Allow remote login to shell  =1: Disable remote login to shell
bit 6: =0: Use remote access authorization  =1: Do not require authorization

The preceding bits can be hex-added in order to enable more than one flag. For example, a Network Init Flags value of 0x21 would set bits 5 and 0, disabling remote login to the shell and causing the controller software to not broadcast to a BOOTP server. While all of the flags available in the network software are listed in the above table, the only flags which the end user should ever need to modify are 0x01, 0x02, and 0x20. If the controller network interface is being manually configured via the controller shell, then bit 0 should be set to 1. Changes to any of these flags can lead to network connection problems. The 0x20 flag can be used for added protection if network security is an issue. If this flag is not set, then anyone can access the controller shell if they know the IP address of a controller, and the controller shell password. On the other hand, leaving this flag unset can be useful for debugging purposes since it enables access to a controller shell via rlogin.


Verify Network IP Settings - Check settings using the netCfgShow command via serial connection and/or ping the IP address from a command prompt. netCfgShow will dump the following:

-> netCfgShow
==== NETWORK CONFIGURATION ====
Interface Name : dse0
My Host Name : DS_a
My IP Address : 100.100.100.236
Server Host Name : host
Server IP Address : 0.0.0.0
Gateway IP Address : 100.100.100.9
Subnet Mask : 255.255.255.0
Network Init Flags : 0x01
Network Mgmt Timeout : 30
Shell Password : ************
User Name : guest
User Password : ************
NFS Root Path : (null)
NFS Group ID Number : 0
NFS User ID Number : 0
n.value = 27 = 0x1b

DACStore

DACStore is a region on each drive that is reserved for the use of the DS4000 Controller Unit. It stores information about the drive state/status, logical drive state/status and other information needed by the controller. The DACStore contains the following information:

1. Failed drive information
2. Global Hot Spare state/status
3. Storage array password
4. Media scan rate
5. Cache configuration of the storage array
6. Storage user label
7. MEL logs
8. Logical drive/LUN mappings, host types, etc.
9. Controller NVSRAM

DACStore region extends 512MB – DS4500 (900) (firmware level 5.3)

DACStore Cleaning Methods
If you don't want to keep any of the data – use Storage Manager > Configuration > Reset, or sysWipe/sysReboot issued to both controllers from the serial port shell > THIS WIPES OUT ALL DATA!
To preserve data on the DS4000 when you want to clean one drive:

1. Assign a disk as a Global Hot Spare.
2. Fail the GHS, then remove it.
3. Put the suspected dirty DACStore disk into the GHS slot. A new DACStore will be written.
4. Remove the now clean disk and put it into the empty slot.

Adding EXP Units to an existing DS4000 configuration
Tips:

• Call Support First (1-800-IBM-SERV)
• When adding drives to an expansion unit, do not add more than two drives at a time.
• Subsystem should be in an optimal state
• Make sure drive tray IDs are set - set to 2Gb
• Connect one fibre cable at a time; wait until that loop sees the new tray, then repeat the process on the other loop
• Always save a copy of the profile and configuration before doing any additions

a. Run DisableDriveMigration.scr script on DS4000 from DS4000 Storage Manager Enterprise Management Window. This ensures that if you have a drive with dirty DACStore, it will not propagate to the DS. You can clean the DACStore by following the procedure above.

b. Only put one (1) HDD in the newly connected EXP unit.
c. Check the DS4000 Storage Manager. You should be able to see the new HDD in the new EXP unit.
d. Add two (2) new HDDs at a time, verifying each time that each drive appears in the DS4000 Storage Manager.
e. Run the EnableDriveMigration.scr script on the DS4000 from the DS4000 Storage Manager Enterprise Management Window.

EnableDriveMigration.scr: The enable drive migration script makes the controllers scan drives for configurations when they are added to a storage system, and attempt to use any configurations found on the newly added drives. The NVSRAM default enables drive migration.
DisableDriveMigration.scr: The disable drive migration script allows the addition of disk drives while the system is running, and identifies the drives as unused capacity regardless of past drive configuration. The NVSRAM default enables drive migration.
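These scripts can also be run outside the GUI with the SMcli command-line interface; a sketch, assuming the default controller addresses and that the script files sit in C:\scripts (both are assumptions):

rem Disable drive migration before cabling the new EXP unit
SMcli 192.168.128.101 192.168.128.102 -f "C:\scripts\DisableDriveMigration.scr"
rem Re-enable drive migration once all new drives are recognized
SMcli 192.168.128.101 192.168.128.102 -f "C:\scripts\EnableDriveMigration.scr"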


Host Server Preparation Tasks

Install Server HBA's and update Driver

Check http://knowledge.storage.ibm.com/HBA/HBASearch for the latest supported host adapters, driver levels, bios, and updated readme.

Be careful not to put all the high-speed adapters on a single system bus; otherwise the computer bus becomes the performance bottleneck. Make a note of which slot in each server holds each HBA, and record the WWPN of each HBA along with its slot.

INTEL: Ensure HBAs are in a higher priority slot than ServeRAID adapters. If not booting from the host HBAs, it doesn't matter whether or not they are a higher PCI scan priority.
Install and update the driver for the IBM FAStT Host Adapter.

1. Install the hardware by using the instructions that come with the adapter.
2. Install the IBM FAStT Host Adapter driver by using the instructions provided in the readme.txt file located in the Host Adapter directory on the installation CD.

A configuration in which a server with only one FC HBA connects directly to any DS4000 storage subsystem with dual controllers is not supported. The supported configuration is the one in which the server with only one FC host bus adapter connects to both controller ports of any DS4000 storage subsystem with dual controllers via a SAN switch.

DS4300 (600) Note for Windows, Novell NetWare or x86 Linux: Direct connection of the DS4300 (600) to a host system running Microsoft Windows, Novell NetWare or x86 Linux is only supported using the IBM FAStT FC2-133 Host Bus Adapter (HBA), IBM feature code 2104, where the last 6 digits of the 13-digit serial number of the HBA are H21160 or higher. For example, if the serial number is FFC0308H21161, the last 6 digits are H21161, which indicates that this HBA meets the DS4300 (600) direct connect prerequisite.

Install RDAC onto the hosts according to the installation manual. RDAC is recommended regardless of whether or not there are multiple HBAs.
AIX - The AIX "fcp.array" driver suite files (RDAC) are not included on the DS4000 installation CD. Either install them from the AIX Operating Systems CD, if the correct version is included, or download them from the following Web site: techsupport.services.ibm.com/server/fixes
Updating RDAC on SAN BOOT
Before updating the RDAC driver in a server that is set up as SAN boot, one of the two Fibre Channel (FC) cables that are connected to the server must be disconnected. The server will bluescreen during the reboot after the uninstallation of the RDAC driver if one of the FC cables was not removed. (258127 (82146))


DS4000 Storage Manager Setup

The following components are mandatory for all DS4000 environments:

RDAC (regardless of whether or not there are multiple paths) for Windows NT, Windows 2000 and Solaris
QLRemote = FAStT MSJ; it is only needed when you don't use Linux RDAC (regardless of whether or not there are multiple paths)
Client - somewhere to be able to configure the solution

The following components are optional based on the needs of the customer:

Agent (all operating systems) - This is only needed if you wish to configure the DS4000 through a direct Fibre Channel connection. If you only want to manage the DS4000 unit over the network, it's not necessary.
IBMSAN.CDM - This is the multi-path software for NetWare 5.1. It's only needed if you have multiple paths.
SMxUtil - These utilities are not required, but RECOMMENDED because they add additional functionality for troubleshooting and hot-adding devices to the OS. NOTE: In SM 8.0, they will be required for FlashCopy functionality.
If you plan to use SM Client through a firewall, note that SM Client uses TCP port 2463.
FAStT MSJ - Not required but RECOMMENDED because it adds Fibre Path diagnostic capability to the system. It is recommended that customers always install this software and leave it on the system.
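On a management station running the Windows XP SP2/2003 built-in firewall, that port can be opened from a command prompt; a sketch (the rule name is arbitrary):

rem Allow the Storage Manager client port through the Windows firewall
netsh firewall add portopening TCP 2463 "DS4000 SMclient"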

Be sure you install the host-bus adapter and driver before you install the storage management software.

For in-band management, you must install the software on the host in the following order:

1. Microsoft Virtual Machine (Windows NT 4.0 and Windows Server 2003 only)
2. SMclient
3. RDAC
4. SMagent
5. SMutil

For out-band management, you must install storage management software on a management station in the following order:

1. Microsoft Virtual Machine (Windows NT 4.0 and Windows Server 2003 only)
2. SMclient

Install Storage Manager from CD or download. You must use the 9.15 Storage Manager Client to manage some DS4000 systems (firmware 8.34, for example). If you are upgrading a system to 9.15 and have previously installed the Event Monitor, you must make sure all of the Event Monitor services are stopped before you can upgrade.

Determine if the client or agent needs to be installed (Client for out-band, Agent AND Client for in-band). Execute the SETUP.EXE file.


During the installation, it will ask you if you want to install the event monitor. Only install Event Monitor if you are at a workstation that will be responsible for monitoring the DS. Otherwise, skip it.

Launch Storage Manager

To perform an initial automatic discovery of storage subsystems, perform the following steps:
1. Click Start → Programs.
2. Click IBM DS4000 Storage Manager Client v09.1.G5.xx. The client software starts and displays the Enterprise Management window and the Confirm Initial Automatic Discovery window.
Note: The Enterprise Management window can take several minutes to open. No wait cursor (such as an hourglass) is displayed.
3. Click Yes to begin an initial automatic discovery of hosts and storage subsystems attached to the local subnetwork. After the initial automatic discovery is complete, the Enterprise Management window displays all hosts and storage subsystems attached to the local subnetwork.
Note: The Enterprise Management window can take up to a minute to refresh after an initial automatic discovery.

Direct Management: If the Automatic Discovery doesn't work:

Go to Edit > Add Device. Enter the IP address of controller A. Click Add. Enter the IP address of controller B. Click Add. Click Done.
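The same devices can be added from the command line with SMcli; a sketch, assuming the default controller addresses:

rem Add both controllers to the Enterprise Management view
SMcli -A 192.168.128.101 192.168.128.102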

The storage controllers should appear. It is likely that they will show "Needs Attention"; this is common since the battery will be charging. Power cycle the DS4000 controller if it doesn't appear.

Manage the DS4000

Once you have discovered the DS4000 systems, you will start to manage them individually. Go through these items for each DS4000 controller. Double click on the DS4000 system and launch the manager.

Rename the DS4000 Controller

Click Storage Subsystem → Rename. The Rename Storage Subsystem window opens. Type the name of the storage subsystem. Then click OK.

If you have multiple controllers, it is helpful to enter the IP addresses or some other unique identifier for each subsystem controller.
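A command-line sketch of the rename (the label shown is an example; SMcli is installed with the client package):

rem Rename the storage subsystem from the command line
SMcli 192.168.128.101 192.168.128.102 -c "set storageSubsystem userLabel=\"DS4500_Rack1\";"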

Change Enclosure Order


The EXP enclosures will likely show up in the GUI differently from how you have them installed in the rack. You can change the GUI to look like how the enclosures are installed. You do this by going under File > Change > Enclosure Order and moving the enclosures up or down to correctly reflect how they are installed in the rack.

Check that each Controller is Online and Active

Right click each controller and place it Online, and change it to Active (if applicable).

Collect your DS4000 Storage System Profile

1) Firmware and NVSRAM. I find the easiest way to do this is to go to Storage System > View Profile, click on the "Controllers" tab, and look for something like this:

Firmware version: 06.10.16.00
Appware version: 06.10.16.00
Bootware version: 06.10.01.00
NVSRAM version: N1722F433R910V01

2) Drive Code. Click on the Drive tab and note the Product ID and Firmware Version:

TRAY, SLOT  STATUS   CAPACITY  CURRENT DATA RATE  PRODUCT ID  FIRMWARE VERSION
1, 1        Optimal  36.72 GB  2 Gbps             B337        F454

3) ESM Code: Click on the Enclosures tab and find the firmware version (it's a little difficult to locate), and check EACH ESM!

ESM card Status: Optimal
Firmware version: 9140
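To capture the same information for your records from the command line, a sketch (the output file name is arbitrary):

rem Save the full subsystem profile to a text file
SMcli 192.168.128.101 192.168.128.102 -c "show storageSubsystem profile;" -o ds4000_profile.txt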

If you see that you need to update any of these components, the next step is to download the firmware. I find the best way to do this is to follow this process (these links are subject to change):

1) Go to http://www-03.ibm.com/servers/storage/disk/
2) Click on the DS4000 storage unit you're working on.
3) Click on Product support.
4) Click on the tab labeled Download.
5) Click on Storage Manager, firmware, HBA and tools (including readmes).
6) Click on the current firmware and readmes listed, for example: 06 June 2005 v06.12.03.xx. Click on the ESM & HDD tab to get that too.
7) You'll want the .zip file AND the .txt file. Make sure to go through the readme.txt file. It's loaded with information!
8) Unzip the package and put it in a folder where you can point to it later. It will have all firmware for every DS4000 box, but don't worry, Storage Manager will know which one to use when you go through the update process.

Update Microcode

Field Tip: Do not upgrade the Firmware/NVS/ESM or HDD code if the DS4000 is in anything but ‘optimal’ state!! Doing so may cause disastrous results.

Always check the README files (especially the Dependencies section) that are packaged together with the firmware files for any required minimum firmware level requirements and the firmware download sequence for the DS4000 drive expansion enclosure ESM, the DS4000 storage server controller and the hard drive firmware.

NOTE: When you have a direct connection (no switch) to the DS4000 controllers, do not download firmware and NVSRAM files to the controllers with I/Os being generated. There is no I/O restriction in a SAN fabric environment, i.e., when the HBA ports are connected to the DS4000 controller ports via Fibre Channel switches.

If you are upgrading the NVSRAM with Storage Partitions, you may have to change the default host type to match the host system OS. After upgrading the NVSRAM, the default host type is reset to Windows 2000/Server 2003 non-clustered for DS4000 storage subsystem with controller firmware version 06.14.xx.xx or later. For DS4000 storage server with controller firmware version 06.12.xx.xx or earlier, it is reset to Windows non-clustered (SP5 or higher), instead.

AIX Concurrent download: Online concurrent firmware and NVSRAM upgrades of FC arrays are only supported when upgrading from 06.10.06.XX to another version of 06.1X.XX.XX.
APAR_aix_51 = IY64463
APAR_aix_52 = IY64585
APAR_aix_53 = IY64475
You cannot use concurrent firmware download if you change the default setting of the Object Data Manager (ODM) attribute switch_retries. The default is 5.

IMPORTANT NOTE: You cannot do a concurrent firmware download on the DS4800 from 6.14.xx to 6.15.xx (ALL I/Os must be stopped during the upgrade of the DS4800 controller firmware and NVSRAM).

Fibre code levels at this time (5/25/2006) are as follows. This is for 9.12.27 & 9.15.20.55 (DS4800 only):


1. Storage Controller Firmware versions:
a. DS4800: FW_DS4800_06152055
b. DS4500: FW_06122700_06100700
c. DS4400: FW_06122700_06100700
d. DS4300 Standard: SNAP_288X_06122700
e. DS4300 Turbo: SNAP_288X_06122700
f. DS4100 Standard: SNAP_282X_06122700
g. DS4100 SCU: SNAP_282X_0612270

2. Storage Controller NVSRAM versions:

ATTENTION: The DS4000 storage subsystem controller firmware version 06.12.xx.xx uses the FC/SATA intermix premium key file to enable the FC/SATA intermix functionality. It does not rely on a certain version of the NVSRAM file like the controller firmware version 06.10.xx.xx. Do not apply this version 06.12.xx.xx of the firmware until a FC/SATA intermix premium feature key file is generated and available.

DS4800: N1815D480R915V05
DS4500: N1742F900R912V08
DS4400: N1742F700R912V07
DS4300 Standard: N1722F600R912V07 (for EXP700/EXP710 ONLY attachment or when the FC/SATA intermix premium feature is enabled)
DS4300 Turbo: N1722F600R912V07 (for EXP700/EXP710 ONLY attachment or when the FC/SATA intermix premium feature is enabled)
DS4100 Standard: N1724F100R912V07
DS4100 SCU: N1724F10LR912V02
DS4300 Standard: N1722F600R28enc4 (for EXP100 ONLY attachment)
DS4300 Turbo: N1722F600R28enc4 (for EXP100 ONLY attachment)

o DO NOT LOAD THE NVSRAM FILE N1722F600R18enc4 ONTO ANY OF THE DS4300 STORAGE SUBSYSTEMS THAT HAVE DRIVES IN THE DS4300 CHASSIS DRIVE SLOTS. THIS WILL CAUSE THE DS4300 STORAGE SUBSYSTEM TO NOT RECOGNIZE ANY OF THE DRIVES IN THE DS4300 CHASSIS.

Important: You must install the firmware update prior to downloading the NVSRAM update.
1. Controller firmware (5 to 15 minutes)
2. NVSRAM (2-5 minutes)
3. ESM firmware (10 to 15 minutes) NOTE: EXP500 firmware 9166 before EXP700 firmware 9324
4. Hard drive firmware (1 minute per drive, but you can do many concurrently)

Updating the Firmware and NVSRAM

If you click Stop while a firmware download is in progress, the current download will finish before the operation stops. The Status field for the remaining enclosures changes to Canceled.


Online firmware and NVSRAM upgrades are only supported when upgrading from 05.40.06.XX to a newer version of 05.40.XX.XX. It is highly recommended that online FW upgrades be scheduled during periods of low I/O load.

Online Firmware and NVSRAM upgrades of SATA arrays are only supported when upgrading from firmware 05.41.56.XX to a higher firmware version of 05.41.5x.xx.

AIX Notes: Online upgrades are not supported on AIX 4.3.3. I/O must be quiesced prior to performing the upgrade.
Using concurrent download: Depending on your system's current firmware and AIX device driver levels, you might be able to use concurrent download.
Attention:
1. You cannot use concurrent firmware download if you change the default setting of the Object Data Manager (ODM) attribute switch_retries. The default is 5.
2. If you do not have the correct firmware versions to use concurrent download, you must ensure that all I/O to the controllers is stopped before you upgrade the firmware or NVSRAM.

The upgrade procedure needs two independent connections to the DS4000 Storage Server, one for each controller. It is not possible to perform a microcode update with only one controller connected. Therefore, both controllers must be accessible either via Fibre Channel or Ethernet. Both controllers must also be in the active state.

Update the firmware first and then the NVSRAM. Have the firmware and NVSRAM downloaded from http://www-1.ibm.com/servers/storage/disk/ before you begin.

To download firmware, do the following:

Open the Subsystem Management window. Click Advanced → Maintenance → Download → Controller Firmware (follow the online instructions).

To download NVSRAM, do the following:

Open the Subsystem Management window. Click Advanced → Maintenance → Download → Controller NVSRAM (follow the online instructions).

After updating the NVSRAM, the system resets all settings stored in the NVSRAM to their defaults, so if you made any changes manually using a script, you will have to reapply them.

Updating ESM (not common for new installs). Note: The latest ESM firmware versions for the EXP100, EXP700, and EXP710 drive expansion enclosures are 9563, 9329, and 9631, respectively.

Allow approximately 5-10 minutes per ESM to complete the firmware update.


With Storage Manager 9.1 and controller firmware 05.4x.xx.xx or higher, it is possible to update the ESM firmware during host I/O to the logical drives. However, you must suspend all I/O during the ESM firmware download if you select multiple enclosures for download. If you select only one enclosure at a time, you can download ESM firmware while I/O continues. Even so, IBM recommends that you suspend all I/O activity while performing firmware upgrades.

The ESM firmware version must be the same in all EXP drive enclosures of the same type in a given DS4000 subsystem. Updating ESM firmware requires downtime if the controller firmware level is 05.21 or lower.

Perform the following steps to download the ESM card firmware:

1. From the Subsystem Management window, select Advanced → Download → Environmental (ESM) Card Firmware.
2. In the Select Enclosures field, highlight each enclosure to which you want to download firmware, or click Select All to highlight all drive enclosures in the storage subsystem. Each drive enclosure that you select should have the same product ID.
3. Enter the firmware file to download.
4. Click Start. Note: The Start button will be unavailable until both a drive enclosure and a firmware file are selected.
5. Confirm your selections, and then click Yes to continue with the firmware download.

Update Drive Code – not common for new installs

Note: Drive firmware download is an offline management event. You must schedule downtime for the download because no IOs to the storage server are allowed during the drive firmware download process.

Storage Manager 9.1 adds support for parallel hard drive firmware download. Up to four different drive firmware packages can be downloaded simultaneously to multiple drives of four different drive types.

If you have both SATA (EXP100) and Fibre Channel (EXP700/EXP710) drives behind the same DS4000 storage server, do not download drive firmware to both SATA and Fibre Channel drives at the same time. Download drive firmware to drives of a single drive technology (either SATA or FC) at a time.

Do not pull or insert drives during the drive firmware download. In addition, ALL IOs must also be stopped during the drive firmware download. Otherwise, drives may be shown as missing, unavailable or failed.

Download the latest HDD code if you have not already, then update the drive code using SM 9.1 → Advanced → Update Drive Code.

Set Storage Subsystem Clock

Since the DS4000 Storage Server stores its own event log, synchronize the controller clocks with the time of the host system. This simplifies error determination when you start comparing the different event logs. Be sure that your local system is using the correct time, then click Storage Subsystem → Set Controller Clock.
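The clock can also be set by script; a sketch, assuming out-of-band management (the command syncs the controller clocks to the clock of the station running SMcli):

SMcli 192.168.128.101 192.168.128.102 -c "set storageSubsystem time;"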

Ensure Storage Partitioning is Enabled.

The DS4300 (600) does not come with Storage Partitioning enabled. An activation kit comes with the storage system that points you to a website for key activation. You can also get the key from support or Level 2 (see below).

The DS4500 comes with 16 storage partitions installed. An upgrade from 16 to 64 partitions can be purchased as a premium feature option.

Storage partitioning allows you to connect multiple host systems to the same storage server. It is a way of assigning logical drives to specific host systems or groups of hosts, known as LUN masking. Logical drives in a storage partition are visible and accessible only to their assigned host group or individual hosts.

Without storage partitioning, all logical drives appear within what is called the Default Host Group, and they can be accessed by any fibre channel initiator that has access to the DS4000 host ports. When homogeneous host servers are directly attached to the DS4000 storage server, access to all logical drives may be satisfactory; when attached to a SAN, zoning within the fabric can be used to limit access to the DS4000 host ports to a specific set of hosts.

Go to Storage Subsystem → Premium Features → List. You should see "Storage Partitioning Enabled".

o If it is not enabled, you will have to get the feature key and enable it. Make note of the 32-digit feature key number. Customers: call 800-IBM-SERV, enter the 4-digit machine type, and tell the help desk that you need a feature key generated. An FTSS can go to http://ssgtech1.sanjose.ibm.com/Raleigh/FastT%20PE%20Support.nsf/Home?OpenPage > Feature Key Generator > enter the customer name, machine type, serial number, and SM version.

1. Download the key file to the Storage Manager client or agent.
2. To enable the feature, go to Storage Subsystem → Premium Features → Enable.
3. Point to the key file.

Create Hot Spares

One hot spare per drive tray is optimal, but the right number depends on your capacity requirements. I recommend no less than 1 spare for every 20-30 drives. Also keep rebuild times in mind, depending on the size of the drives installed.
EXP100: IBM recommends 1 hot spare per 2 EXP100 drive expansion enclosures. I try to use 1 per drawer if you can afford it, one in an even slot and the other in an odd slot.
Ensure that spare drives are placed on different disk drive channels (odd and even slots) to reduce the risk of a single disk channel causing loss of access to all spare drives in the subsystem.
In a DS4500 configuration with two redundant drive loops, it is recommended to put half of the hot spares in one redundant drive loop and the rest on the other.
The DS4800 has 4 drive loops, so try to put at least one spare in each drive loop.

Note: A total of 15 hot spares can be defined per DS4000 storage server configuration.
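Hot spares can be assigned by script as well as through the GUI; a minimal sketch, assuming the drive in enclosure 1, slot 14 should become a spare:

SMcli 192.168.128.101 192.168.128.102 -c "set drive [1,14] hotSpare=TRUE;"

Repeat for each spare, alternating odd and even slots across enclosures as described above.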


Performance and Configuration Notes

Tips:
More physical disks for the same overall capacity give you better performance. By doubling the number of physical drives, you can expect up to a 50% increase in throughput performance.

Cable and configure the DS4000 storage controller to use all drive loops if you have more than 2/4 EXP710s (not applicable to the DS4300 (600)); you will get better performance.

Spread your arrays across multiple enclosures and drive channels rather than one enclosure, so that the failure of a single enclosure does not take a whole array offline. Another benefit is increased performance, since the I/O requests are processed by multiple ESM boards along multiple paths (loop side).

Recommended drives/LUN varies:
o Capacity/LUN may impose restrictions
o Application queue depth/LUN may impose restrictions
o Up to 10 drives/LUN is reasonable

10K drives: "typical" environments yield between 130-190 IOPS per drive max.
15K drives: "typical" environments yield between 150-220 IOPS per drive max.
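As a rough sizing example using these figures: a 4+P RAID 5 array of 15K drives has 5 spindles, so for random reads you might expect on the order of 5 x 150-220, roughly 750-1100 IOPS from the array, before any RAID write penalty is considered.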

Filesystem Partitioning: Never create a filesystem and its log on the same RAID logical disk; this can cause thrashing of the disks. Creating a single disk partition encompassing all usable space is recommended. Creating more than 3 concurrently active disk partitions will also cause disk thrashing, degrading performance.

What type of I/O is best for which RAID level:

Raw I/O is best suited for small transaction based I/O applications such as databases. Buffered filesystem I/O could be used here but better performance is generally found using raw I/O.

Large sequential I/O is best suited for direct I/O with a filesystem. You can achieve near raw performance and gain the benefits of having a filesystem.

RAID 1, 1/0, and 5 benefit from concurrent I/Os. I/O rates plateau around 14 drives.

SVC – If you are configuring a DS4000 for an SVC, the recommendation has been to use either 4+P or 8+P RAID 5 configurations. It is also recommended that you configure 2 LUNs per array for optimal performance with the SVC.


Selecting RAID Parameters

Number of disks to use in a RAID array:
o High-bandwidth applications: RAID 5, 8+1 drive configurations
o High I/O rate applications: for large I/O environments, consider 4+1 or 8+1
o Aim to fit a single I/O on a single data stripe (segment size X number of drives)
o RAID 1/0: striped in multiples of 2 drives
o More spindles will provide higher I/O rates
o Capacity requirements will limit the number of spindles

Important Configuration Note: DS4000 SM will automatically configure a RAID 1/0 array when the array has more than 4 drives (pairs of mirrored drives). If you want RAID 1 (not RAID 1/0), you must keep your array no larger than 4 drives; if you want a RAID 1/0 array, you must have more than 4 drives.

Important Note for RAID 5: I recommend no more than 12+P as the maximum RAID 5 array size. 8-12 disks per array is the sweet spot if you don't know what the workload is. An array that contains more than 12 drives results in:

Substantial increase in the time needed to rebuild a failed drive onto a spare drive, increasing the probability of data loss due to a second drive failure in the same array during the rebuild process

Elongation of the time required to perform a CHKDSK operation against the logical drive to multiple hours, even days.


Most in the field recommend 4+P, 6+P, 8+P, or 12+P configurations. This has to do with the segment size working out evenly across the physical disks (number of data disks x segment size = size of a data stripe).
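For example, a 4+P RAID 5 array with a 64 KB segment size gives a 256 KB data stripe (4 data disks x 64 KB); an application issuing 256 KB I/Os would then fit each I/O exactly on one stripe.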

Write-back caching and Write-through

Write-through means that data is always written directly to the disk drives, bypassing cache. This is good for freeing up cache for reads. With write-back caching, data is not written straight to the disk drives; it is first written only to cache.

o Enable write-back caching with cache mirroring:
  - Allows immediate write-complete acknowledgement to the host
  - Data is flushed to disk when the cache fills (demand) or ages (10 seconds)
o Write-back mode appears faster than write-through mode, but if the workload is high, the cache will become inefficient.

Maximizing Performance for IOPS
Monitor these settings with the Performance Monitor.
o Disable array prefetch for random I/O (set on a per-LUN basis)
o Enable prefetch if there is some sequential host I/O

Segment Size: The segment is the amount of data that the controller writes on a single drive in a logical drive before writing data on the next drive. If you have a large I/O environment, try to fit a single I/O on a single data stripe (segment size X number of drives). If the I/O size is larger than the segment size, consider increasing the segment size.


The most common segment-size misconception is that a small segment size is needed for small I/O sizes. That may be true for other vendors' products, but not for the DS4000.

DS4000 Storage Manager 9.1 offers 512 KB segments for larger full-stripe widths, which can enhance performance for large database applications.

Cache Block Size
o Set the array cache block size to 4K.
o This is the minimum memory allocation unit.
o Two cache block sizes are available: 4K (default) and 16K.
o Efficiency is highest when I/O is aligned and fills a full cache block.

Set the Cache Read Ahead Multiplier to 4 (the default is 1). This is good for most installations. If you have a very heavy sequential workload, you can increase it, but monitor it with the Performance Monitor; you should see a higher cache hit ratio if it is working. In SM 9.1, a pre-fetch multiplier of zero disables automatic pre-fetch, and any other value enables automatic pre-fetch. This method still uses a multiplier, but the value is strictly internal and is automatically adjusted up or down depending on cache hits and misses of the sequential host read requests.

Determining I/O Bottlenecks
The Performance Monitor can be used to help determine a number of items:
Actual I/O size being received by the storage subsystem
– Max. throughput / max. number of IOPS = I/O size
How many IOPS are being processed by a given RAID set
– The number of IOPS divided by the number of data drives in the RAID set gives the average number of IOPS per disk in the set
– Continuously high values could indicate a disk bottleneck
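For example, if the Performance Monitor shows a maximum of 40 MB/s at 10,000 IOPS, the average I/O size is about 4 KB (40 MB / 10,000). If those 10,000 IOPS land on a 4+1 RAID 5 set, each of the 5 disks is averaging around 2,000 IOPS, far above the 130-220 IOPS per-drive guideline given earlier, which would point to a disk bottleneck.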

Drive Failure on EXP700 – When a drive fails on the EXP700, I recommend that you make sure to pull the bad or failed drive after the drive has been successfully spared out. (The drive's electronics are actively in use whether or not the drive has failed; as a result, it is possible for a failed drive to continue causing problems.)


How long does it take media scan to complete? (created by Hans Hudelson)

The 30-day media scan setting is the maximum time in which the scan will complete on all LUNs that have media scan (MS) activated. It runs continuously in the background, using spare cycles to complete its work. It reads a stripe into cache memory and, if all blocks can be read, discards them and proceeds to the next stripe. If a block cannot be read, it retries 3 times to make sure the block really is unreadable. When the read fails, the data for that block is reconstructed, and the controller issues a write-with-verify command. Write-with-verify tells the drive to write the data to the block, read it back, and tell the controller when it has successfully written the data and read it back. In the case of a bad block, the write will either refresh data that was weakened or fail verification; if the write to that block fails, the drive will reallocate other blocks until the data has been written and verified.

This continues, LUN by LUN, until all LUNs on the controller have been verified; then it starts over. If completion takes less time than the setting, it starts again immediately and schedules the next completion for 30 days out. Along the way, it calculates how much more scanning it has to do and raises the scan's priority when the calculations show that it will not complete on schedule. As the set completion time (30 days by default) approaches, it keeps increasing priority so that the end time will be met.

For example, suppose scanning of all the LUNs on a system is set for 30 days. There are plenty of spare processor and cache cycles, so it completes the scan of all the LUNs in 21 days. It starts over right away and schedules the next completion 30 days from the time this scan started. During these 30 days, the customer uses the controller a lot more, and spare processor cycles decrease substantially. At 10 days, the controller calculates it will take 21 days to finish, so the priority of the MS increases until the scheduled end date is 20 days away. At 15 days, the controller calculates it will take 16 days to finish, so the priority of the MS application when assigning processor cycles increases again. As we get closer to the end date, the priority of MS may get high enough that production is slowed, because the controller's priority is to finish the MS in the time allotted, 30 days. The same process holds for whatever duration you set.

We have seen no effect on I/O with a 30 day setting unless the processor is utilized in excess of 95%. The length of time that it will take to scan the LUNs depends on the capacity of all the LUNs on the system and the utilization of the controller.

Create arrays and logical drives

IMPORTANT: Previous versions of this checklist said to let the DS4000 create your array; I no longer recommend this. When you select the drives for an array, make sure to select a drive from each EXP700/710 and to alternate drive slots (odd and even). This ensures maximum availability and performance. Try to avoid creating an array where more than 1 drive is in 1 tray; this may not be feasible for many installations.

DS4100: When configuring a DS4100, it is highly recommended to configure the 14 disks in the base frame to use the same controller. This prevents I/O shipping and possible performance degradation.

SATA NOTE: The ideal configuration for SATA drives is one drive in each EXP per array, one logical drive per array, and one OS disk partition per logical drive. This configuration minimizes the random head movements that increase stress on the SATA drives. As the number of drive locations to which the heads have to move increases, application performance and drive reliability may be impacted. If more logical drives are configured but not all of them are used simultaneously, some of the randomness can be avoided.


SATA drives are best used for long, sequential reads and writes. A good practice is to make the logical LUN size the size of the array; this may not be feasible or recommended in some configurations (you may have only 10 drives and want multiple RAID 1/0 LUNs).

You are limited to having no more than 30 disks in an array.

Most hosts will be able to have 256 LUNs mapped per storage partition. Windows NT, Solaris with RDAC, NetWare 5.1, and HP-UX 11.0 are restricted to 32 LUNs. If you try to map a logical drive to a LUN that is greater than 32 on these operating systems, the host will be unable to access it. Solaris will require use of Veritas DMP for failover for 256 LUNs.

1. In the Logical (left) panel of the Subsystem Management window, highlight either Free Capacity (from an existing array) or Unconfigured Capacity (create a new array).

2. Click Logical Drive —> Create. The Introduction (Create Logical Drive) window opens showing the amount of free or unconfigured capacity available and provides a link for tips and examples on allocating capacity. Click Next to continue

3. Follow the wizard carefully. I have found that most people create arrays and LUNs without paying much attention to the details.

Continue setting up drives for either this same array or a different array. Depending on the size of the storage system, formatting may take many hours to complete. Start this process and come back the following day!

Verifying and Defining the Default Host Type

Homogeneous Host Attachment
The host type determines how the DS4000 works with each connected host. If all host computers connected to the same storage subsystem are running the same operating system, and you do not want to define partitioning, you can define a default host type.

Click Storage subsystem → Change → Default host-type. The Default Host-type window opens. From the pull-down list, select the host type. Click OK.

Heterogeneous Host Attachment
Important: To use the heterogeneous host feature, you must meet the following conditions:
o You must enable storage partitioning.
o During host-port definition, you must set each host type to the appropriate operating system so that the firmware on each controller can respond correctly to the host.


SAN Configuration Guide

Configure SAN Switch –

Follow the SAN switch installation and user's guide for setting the IP address of the switch(es).
Set the switch name.
Set the Domain ID.
Verify the firmware of the switch and upgrade if necessary, following the installation guide.

BROCADE Setup
These are Telnet commands to configure the Brocade switch. Note that the Brocade needs a straight-through serial cable, unlike the DS4000.

- To set the IP address ( ipaddrset )

- To set the Domain ID ( configure ); press CTRL+D to accept the rest of the defaults and commit the changes

- To set the Core PID setting, which should be set to '1' ( configure ); press CTRL+D to accept the rest of the defaults and commit the changes

- Set telnet timeout value ( timeout 10 ) [optional, but recommended]
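Putting those steps together, a first-time setup session looks something like the following sketch (prompts and exact dialogs vary by Fabric OS level; the switch name is an example):

switch:admin> ipaddrset                 (prompts for Ethernet IP address, subnet mask, and gateway)
switch:admin> switchname "SANA_SW1"
switch:admin> switchdisable
switch:admin> configure                 (set the Domain ID and Core PID = 1, then CTRL+D to commit)
switch:admin> switchenable
switch:admin> timeout 10
switch:admin> version                   (verify the firmware level)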

Fabric Topology - It is a best practice that all Fibre Channel (FC) switches be zoned so that a single FC host HBA can access only one controller per storage array. This zoning requirement ensures that the maximum number of host connections can be seen and can log into the DS controller FC host ports. If an FC HBA port is seen by both controller A and controller B host ports, it is counted as two host connections to the storage subsystem: one for the controller A port and one for the controller B port.

Zoning –

Most errors and problems with setting up DS4000 storage systems are made in the zoning configuration. To avoid possible problems at the host level, all Fibre Channel switches should be zoned so that a single host bus adapter can access only one controller per storage array. I get a lot of feedback about different ways to zone, but this is the recommended method and will not cause strange behaviors.

Understanding the DS4000 storage controllers will help you understand how zoning should be configured. This is an example of a more complicated configuration: 2 dual-HBA hosts that want to access data from 2 DS4000 storage systems. Notice how each DS4000 has both ports of controller A going to one switch (call it SAN A) and both ports of controller B going to SAN B. You would then create zones where:
HBA1/Server1 is zoned with ControllerA/Port1 (in SAN A)
HBA2/Server1 is zoned with ControllerB/Port1 (in SAN B)
HBA1/Server2 is zoned with ControllerA/Port2 (in SAN A)
HBA2/Server2 is zoned with ControllerB/Port2 (in SAN B)
Then repeat the process to add the other DS4000 storage system.


AIX: AIX hosts can support multiple host bus adapters (HBAs) and DS4000 devices. However, there is a restriction on the number of HBAs that can be connected to a single DS4000 storage subsystem. You can configure up to two HBAs per partition and up to two partitions per DS4000 storage subsystem. Additional HBAs can be added for additional DS4000 storage subsystems and other SAN devices, up to the limits of your specific subsystem platform. The maximum number of partitions per AIX host, per DS4000 storage subsystem, is two. Exception: With DS4800 storage subsystems, the maximum number of partitions per AIX host, per DS4800 storage subsystem, is four.

You can attach up to 4 DS4000 subsystems to a single pSeries server. You will need two HBAs in the server and 10 ports on the SAN (two from the HBAs to the switch, and two each from the switch to each DS4000). Four connections from the switch to each DS4000 would be even better.

Make sure you check your zoning so the following rules apply:
o Every adapter in the AIX system can see only one controller (these are AIX-specific zoning restrictions, not HACMP-specific).
o Other storage devices, such as tape devices or other disk storage, must be connected through separate HBAs and SAN zones. (AIX-specific)
o Multiple HBAs in the same server cannot "see" the same DS4000 controller port (each goes in its own separate zone).
o The HBAs are isolated from each other (zoned) if they are connected to the same switch that is connected to the same DS4000 controller port.
o Each HBA and controller port must be in its own fabric zone, if they are connecting through a single switch.

Connect the SAN switches to the mini-hub host ports of the DS4000 controllers (indicate port per switch). DS4300 (600) NOTE: The two host ports in each controller are independent; they are not connected in the controller module as they would be in a hub configuration. So there are a total of 4 host ports on the DS4300 (600).

I recommend creating an alias for each host HBA and each DS4000 controller port to easily identify the WWPNs. Zone the fibre switches so that each HBA is zoned to one controller, per the rules above.
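On a Brocade fabric, that alias-and-zone scheme can be scripted from the CLI. A sketch for one server HBA and one controller port on SAN A (the alias names and WWPNs are placeholders; substitute the real WWPNs from your fabric):

alicreate "Server1_HBA1", "10:00:00:00:c9:12:34:56"
alicreate "DS4500_CtrlA_P1", "20:02:00:a0:b8:12:34:56"
zonecreate "Server1_HBA1__CtrlA_P1", "Server1_HBA1; DS4500_CtrlA_P1"
cfgcreate "SANA_cfg", "Server1_HBA1__CtrlA_P1"
cfgsave
cfgenable "SANA_cfg"

Repeat the alicreate/zonecreate pair for each HBA/controller-port pairing, adding each new zone to the configuration with cfgadd before enabling it.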


NOTE: The DS4000 controller host ports or the Fibre Channel HBA ports cannot be connected to Cisco FC switch ports with "trunking" enabled. You might encounter failover and failback problems if you do not change the Cisco FC switch ports to "non-trunking" using the following procedure:
a. Launch the Cisco FC switch Device Manager GUI.
b. Select one or more ports with a single click.
c. Right-click the port(s) and select Configure; a new window pops up.
d. Select the "Trunk Config" tab from this window; a new window opens.
e. In this window, under Admin, select the "non-trunk" radio button (it is set to auto by default).
f. Refresh the entire fabric.

Enhanced Remote Volume Mirroring Note Using Cisco:

If you have FCIP on a Cisco switch, make certain you configure the Cisco with "in-order delivery guaranteed" enabled (the normal setting is disabled). Cisco will say it is not needed, but that is not true for the DS4000: it must have in-order packets or it throws them all away. FCIP allows packets to get out of order.


Define Host Groups and Host-to-LUN Mapping

For a new installation, after creating new arrays and logical drives, define new host groups, hosts, and HBA ports:

1. In the Topology section of the Mappings window, highlight the name of the storage subsystem, and click Mappings → Define → Host Group. Note: Make sure that the storage subsystem is highlighted in the left panel of the Subsystem Management window. Do not highlight Undefined Mappings.

2. Type the name of the new host group (for example, AIX). Click Add, and then click Close.
3. In the Topology section of the Mappings view of the Subsystem Management window, right-click the new host group and select Define Host. The Introduction (Define Host) window opens.
4. Click Next. The Specify Host Name/HBA Attribute (Define Host) window opens.
5. Type the host name in the Specify Host Name/HBA Attribute (Define Host) window. In the left panel, select the correct WWPN of the HBA host port. Click Add.
6. You must now provide an alias name for the host port. Click Edit, and then type an alias name (for example, Port1).
7. On configurations with two or more HBAs, repeat steps 5 and 6 for each host port that you need to define, then proceed to step 8.
8. Click Next. The Specify Host Type window opens.
9. Select the correct host type from the drop-down menu and click Next. The Review window opens. Failure to change the host type from the default to the correct host type may cause undesired results.
10. Review the information for accuracy, and make any necessary changes. Then click Next.
11. After Storage Manager finishes defining the host and host ports, a dialog window opens. If you need to define another host, select Define another host. To finish, click Exit. The wizard closes.

Create the AIX host group at the storage subsystem level. Do not create host groups at the default group level. Exception: If you are running a DS4300 configuration without partitioning enabled, you can use the default host group.

Mapping LUNs
1. In the Topology section of the Mappings view of the Subsystem Management window, right-click the host and select Define storage partitioning. The Storage Partitioning wizard opens.
2. Click Next. The Select Host or Host Group window opens.
3. Right-click to select a host or host group in the list. The Select Logical Drive/LUN window opens. Note: Selecting a host group allows all the hosts in it to share LUNs. In most cases, you should not select a host group.
4. Right-click the name of a logical drive that you want to add to the partition, and select the LUN number from the resulting menu.
5. Click Add.
6. Repeat for each logical drive that you want to add to the host partition.
7. When you finish selecting logical drives, click Finish. A progress window displays as Storage Manager adds the logical drives to the new partition.


Repeat these steps for all Host Server storage assignments

If you have a single server in a host group that has one or more LUNs assigned to it, it is recommended to assign the mapping to the host and not the host group. All servers having the same host type (for example, all Windows NT servers) can be in the same group if you want, but by mapping at the host level you can define which specific server accesses which specific LUN.

If you have a cluster, it is good practice to assign the LUNs to the host group, so that all of the servers in the host group have access to the LUNs. In a normal partition, assign the LUNs to the host or host port.

To make the logical drives available to the host systems without rebooting, the DS4000 Utilities package provides the hot_add command line tool for some operating systems. You simply run hot_add, and all host bus adapters are re-scanned for new devices, and the devices are assigned within the operating system.
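A minimal sketch of that flow from the host side (hot_add takes no arguments; SMdevices is the companion utility in the same package that lists which DS4000 logical drive each OS device maps to):

hot_add        (rescan all HBAs for newly mapped logical drives)
SMdevices      (verify the new LUNs and their logical drive names)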

Delete the Access Logical Volume (LUN 31)

The DS4000 storage system automatically creates a LUN 31 for each host attached. It is used for in-band management, so if you do not plan to manage the DS4000 storage subsystem from that host, you can delete LUN 31, which gives you one more LUN to use per host. If you attach AIX 4.3 to the DS4000 Storage Server, you need to delete the mapping of the access LUN.

Right-click LUN 31 and remove the mapping.

Save the configuration:

1. From the Subsystem Management window, click Configure → Save Configuration.

Reset MEL log and RLS Statistics

Save Profile

1. Click Storage Subsystem → View Profile in the Subsystem Management window, and click Save As.
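The configuration save can also be scripted. A sketch, assuming out-of-band management; the allConfig parameter name is my recollection of the option, so verify it against your SMcli version's help before relying on it:

SMcli 192.168.128.101 192.168.128.102 -c "save storageSubsystem configuration file=\"ds4000-config.cfg\" allConfig;"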


Windows/Intel

To run the Storage Manager client software on a Windows XP Professional OS, use the software package for Microsoft Windows Server 2003 Intel Architecture 32-bit (IA32). The version of the host-software installer wizard for this release is SMIA-WS64-09.16.35.08.exe. This wizard installs the following versions of the host-software packages:

1. IBM DS4000 Storage Manager Client version: 09.16.G5.08 2. IBM DS4000 Storage Manager RDAC version: 09.01.35.20 3. IBM DS4000 Storage Manager Agent version: 09.16.35.00 (in band management) 4. IBM DS4000 Storage Manager Utilities version: 09.16.35.00

Occasionally, after mapping logical drives to a Windows Server 2003 host, the host can take longer to boot than usual. The amount of time is based on how many logical drives are mapped to the host. There is no workaround; this is a known issue with the Plug-and-Play (PnP) Manager in the Windows Server 2003 operating system. Regardless of the long boot time, this cycle must complete. (added 2/10/2006)

For Microsoft Windows servers in an MSCS configuration, the IBM FC2-133 HBA adapter "Port Down Retry Count" BIOS parameter must be changed from the default of 30 to 70. You can use either the FAStT Management Suite Java (FAStT MSJ) program or the FC2-133 HBA Fast!UTIL program to change the setting.

A Windows "signature" needs to be configured for each host in order to see the drives assigned to it.

NOTE (dual pathing in Win2K):

1) The paths must be zoned so that the LUN is not seen on both paths at the same time.
2) The LUN and both HBAs must be in the same partition.
3) On each host server, go to Start - Programs - Administrative Tools - Computer Management. Then click Disk Management.
4) The Signature Upgrade Disk Wizard should start.
5) Click to select disks to write a signature to.
6) At the Upgrade to Dynamic Disks screen, deselect all disks and click OK.
7) Right-click unallocated space on the first disk and click Create Partition. The Create Partition Wizard begins.
8) If the Upgrade Disk Wizard doesn't start, right-click DiskX and choose Upgrade to Dynamic Disk.
9) Confirm Primary Partition is selected.
10) Confirm the maximum amount of disk space is selected.
11) Assign a drive letter to this drive.
12) On the Format Partition screen, leave all defaults and perform a Quick Format.
13) Click Finish.
14) Repeat the same process with each drive.
15) Repeat the same process for each host.


NOTE: For DS4000 configurations with Windows using Veritas Volume Manager (DMP) as the multipath driver, you have to obtain the correct ASL file for the DS4000 storage subsystem from the following Veritas support URL: http://seer.support.veritas.com/docs/273458.htm

FastT200, 500 > Veritas Volume Manager (DMP) 3.1 is supported with Windows 2000. Refer to document http://www-1.ibm.com/support/docview.wss?uid=psg1MIGR-57485&rs=555 for details.

Veritas DMP 4.1 is not supported at this time in Windows operating environment. Veritas Volume Manager (DMP) 4.2 is supported with Windows Server 2003 and Windows 2000.

Limitations of Booting from Windows 2000

After the server is configured with DS4000 storage access and path redundancy, the following limitations have been observed:
1. You cannot boot from a DS4000 Storage Server and use it as a clustering device. This is a Microsoft physical limitation.
2. If there is a path failure and the host is generating I/O, the boot drive will move to the other path. However, while this transition is occurring, the system will appear to freeze for up to 30 seconds.
3. If you have two adapters and reboot the system while the primary path is failed, you must manually go into the QLogic BIOS for both adapters, disable the BIOS on the first adapter, and enable the BIOS on the secondary adapter.
4. You cannot enable the BIOS for both adapters at the same time. If you do and there is a path failure on the primary adapter (and the adapter is still active), the system will trap with an INACCESSIBLE_BOOT_DEVICE error on reboot.
5. If the boot device (LUN 0) is not on the same path as the bootable HBA port, you will receive an INACCESSIBLE_BOOT_DEVICE error message.
6. If you suffer major path problems (LIPs) or controller panics, the server can hang indefinitely while RDAC tries to find a stable path.
7. By booting from the DS4000 storage device, most of the online diagnostic strategies are effectively canceled, and path problem determination (PD) must be done from the Ctrl+Q diagnostics panel instead of DS4000 MSJ.
8. The IDE disk devices should not be re-enabled.

Install FAStT MSJ

1) Locate and execute the FAStT MSJ SETUP.EXE.
2) Select GUI or Agent (or both). Install the GUI and Agent on the management workstation and the Agent on each host server.
3) The password to update any advanced features of FAStT MSJ is "config".
4) Run through the features of the FAStT MSJ utility.

Ensure Java Plug-in is enabled on management machine

1) Go to Start > Settings > Control Panel.
2) Double-click Internet Options.
3) Click the Advanced tab.
4) Scroll down to Microsoft VM and place a check in the Java Console Enabled box. Click OK.


5) Restart this machine.

BladeCenter attached to a Brocade Switch – (This only applies if you don't have a Brocade module in the BladeCenter)

The Brocade switch must be in interoperability mode to be FC-SW2 compliant. Interoperability mode cannot be set using Brocade's Web Tools; use the Brocade CLI.
ATTENTION: This procedure requires a reboot of the switch.
Login: admin
Password: xxxxxxxx
Brocade3800:admin> switchdisable
Brocade3800:admin> interopmode 1
(Run this command without the 1 to see the current setting.)
Brocade3800:admin> fastboot

Notes:
o The RDAC driver must be digitally signed by Microsoft in order for it to work correctly. Always use the IBM-provided signed RDAC driver package.
o RDAC for Windows supports round-robin load balancing.
o You must always uninstall IBM DS4000 Storage Manager RDAC before you uninstall the host bus adapter driver. Failure to do so may result in a system hang or blue-screen condition.
o If you define a large number of arrays, you may not be able to right-click a logical drive and get a pop-up menu in the Physical View of the Subsystem Management window. The workaround is to use the Logical Drive pull-down menu to select the logical drive options.


Linux RDAC

IMPORTANT: There are two separate IBM DS4000 Storage Manager host software version 9.16 packages: one for the 32-bit x86 and 64-bit x86_64 environments and one for the 64-bit IA64 Linux operating system environment. Please download the correct one for your operating system environment.

Make sure you read the README.TXT files for V9.16 Linux RDAC, HBA, and Storage Manager for Linux.
o When using Linux RDAC as the multipathing driver, the "LNXCL" host type must be used, and there is no requirement that the UTM (Access LUN) be removed from the LNXCL storage partition.
o When using the Linux RDAC driver as the multipath failover/failback driver, the non-failover version of the Linux FC HBA device driver must be installed instead of the failover version. Properly build and install the non-failover HBA device driver on the host server prior to Linux RDAC driver installation. The Linux RDAC driver cannot co-exist with an HBA-level multipath failover/failback driver such as the 6.06.63-fo driver. You might have to modify the driver make file for it to be compiled in the non-failover mode.
o This Linux RDAC release does not support auto-volume transfer/auto-disk transfer (AVT/ADT) mode. AVT/ADT is automatically enabled in the Linux storage partitioning host type. Disable it by using the script that is bundled in the IBM Linux RDAC Web package or in the \Scripts directory of the DS4000 Storage Manager Version 9 Linux CD; the script file is named DisableAVT_Linux.scr. Use this script to disable the AVT/ADT mode in your Linux host type partition.

When the RDAC driver detects that all paths to a DS4000 Storage Server are failed, it will immediately report I/O failure. This behavior differs from that of the IBM fibre channel HBA failover driver, which will wait for a certain time period before reporting an I/O failure to the host application. There is no work-around for this behavioral difference.

Although the Host server can have different FC HBAs from multiple vendors or different FC HBA models from the same vendors, only one model of FC HBAs can be connected to the IBM DS4000 Storage Servers.

If a host server has multiple HBA ports and each HBA port sees both controllers (via an un-zoned switch), the Linux RDAC driver may return I/O errors during controller failover.

The Linux SCSI layer does not support skipped (sparse) LUNs. If the mapped LUNs are not contiguous, the Linux kernel will not scan the remaining LUNs, so LUNs following the skipped LUN will not be available to the host server. Always map LUNs using consecutive LUN numbers, and use the Storage Manager hot_add utility program.

The RDAC driver stack, which includes the mpp_Upper and mpp_Vhba modules and the low-level HBA driver, should not be loaded or unloaded using the modprobe kernel module utility. Any use of modprobe on the RDAC driver stack is entirely the responsibility of the user; IBM does not support the use of modprobe with the RDAC driver stack. Use of rmmod to remove all the drivers in the RDAC driver stack, in order, is also not supported. Recommendation: Reboot the system whenever you need to unload the driver stack.


AIX Setup

Requirements

Make sure you have these fileset versions or later. PTF/APARs can be downloaded from: http://techsupport.services.ibm.com/server/aix.fdc

AIX 5.1 (Required Maintenance Level 5100-9):
devices.fcp.disk.array.rte - 5.1.0.67
devices.pci.df1000f7.com - 5.1.0.75
devices.pci.df1000f7.rte - 5.1.0.75
devices.pci.df1000f9.rte - 5.1.0.75
devices.pci.df1000fa.rte - not supported

AIX 5.2 (Required Maintenance Level 5200-7):
devices.fcp.disk.array.rte - 5.2.0.86
devices.pci.df1000f7.com - 5.2.0.75
devices.pci.df1000f7.rte - 5.2.0.75
devices.pci.df1000f9.rte - 5.2.0.75
devices.pci.df1000fa.rte - 5.2.0.75

AIX 5.3 (Required Maintenance Level 5300-3):
devices.fcp.disk.array.rte - 5.2.0.41
devices.pci.df1000f7.com - 5.3.0.31
devices.pci.df1000f7.rte - 5.3.0.30
devices.pci.df1000f9.rte - 5.3.0.30
devices.pci.df1000fa.rte - 5.3.0.30

* AIX 4.3 contains no support for features beyond Storage Manager 8.3.

Host Bus Adapter(s): IBM Feature Code 6227, 6228, 6239 or 5716

HBA Firmware Levels: FC 6227 - 3.30X1


FC 6228 - 3.91A1
FC 6239 - 1.81X1
FC 5716 - 1.90A4

For booting from the DS4000, the following HBA firmware levels are required:
FC 6227 - 3.22A1 or above
FC 6228 - 3.82A01 or above
FC 6239 - 1.00X5 or above

EXP100 Limitations

Booting from a DS4000 subsystem that uses SATA drives for the boot image is supported, but it is not recommended for performance reasons.

AIX Configuration and Usage Notes

1. Veritas Volume Manager 4.0 is supported with AIX 5.1, 5.2, and 5.3; RDAC multipathing is required for multipath configurations. Veritas Cluster Server 4.0 and Storage Foundation 4.0 for Oracle RAC are supported with AIX 5.2 and 5.3.
2. Dynamic Volume Expansion (DVE) is only supported on AIX 5.2 and 5.3. AIX 5.3 must have PTF U499974 installed before using DVE.
3. In-band storage management is supported on AIX 5.1 and 5.2.
4. Booting from a DS4000 device is supported only on AIX 5.1 and 5.2.
5. SAN (switch) ports should be configured as "F" ports (some switches/directors default ports to type 'Gx'). Use the SAN switch/director management console to force the port to "F".
6. Online concurrent firmware and NVSRAM upgrades of FC arrays are only supported when upgrading from 06.10.06.XX to another version of 06.10.XX.XX. There is an exception for DS4800s for 9.12 to 9.15.
APAR_aix_51 = IY64463
APAR_aix_52 = IY64585
APAR_aix_53 = IY64475
It is highly recommended that online FW upgrades be scheduled during periods of low I/O load. Upgrading firmware from 05.xx.xx.xx to version 06.xx.xx.xx must be performed with no I/Os; there is no work-around.
7. Interoperability with IBM 2105 and SDD software is supported on separate HBA and switch zones.
8. Interoperability with tape devices is supported on separate HBA and switch zones.
9. When using FlashCopy, the repository volume failure policy must be set to "Fail FlashCopy logical drive", which is the default setting. The "Fail writes to base logical drive" policy is not supported on AIX, as data could be lost to the base logical drive.
10. It is important to set the queue depth to a correct size for AIX hosts. Having too large a queue depth can result in lost filesystems and host panics.
11. F-RAID Manager is not supported.
12. For most installations, AIX hosts attach to the DS4000 with pairs of HBAs. For each adapter pair, one HBA must be configured to connect to controller "A" and the other to controller "B". An AIX host with 4 HBAs will require you to configure 2 DS partitions (or host groups).
13. Each AIX host (server) can support 1 or 2 DS4000 partitions (or host groups), each with a maximum of 256 logical drives (AIX 5.1 or 5.2 and SM 8.4).


a. AIX 4.3.3 is restricted to 32 logical drives on each partition.

14. Single-switch configurations are allowed, but each HBA and DS4000 controller combination must be in a separate SAN zone.
15. Single-HBA configurations are allowed, but each single-HBA configuration requires that both controllers in the DS4000 be connected to the host.
16. In a switch environment, both controllers must be connected to the switch within the same SAN zone as the HBA.
17. In direct-attach configurations, both controllers must be "daisy-chained" together. This can only be done on the DS4400/4500 (FAStT700/900).
18. When you boot from a DS4000 device, both paths to the boot device must be up and operational.
19. Path failover is not supported during the AIX boot process. Once the AIX host has started, failover operates normally.

SAN ZONING Notes:
1. Multiple HBAs within the same server cannot "see" the same DS4000 controller port.
2. The HBAs are isolated from each other (zoned) if they are connected to the same switch that is connected to the same DS4000 controller port.
3. Each fibre-channel HBA and controller port must be in its own fabric zone, if they are connecting through a single fibre-channel switch, such as a 2109-F16.

Recommendation: The DS4000 should be configured with at least 1 LUN assigned to the AIX server before the AIX server is allowed to see the DS4000. This prevents problems with the auto-generated dac/dar relationship.

If you see something like this:
hdisk11 dac NONE 0 <------ there is no LUN 0
newer firmware handles LUN 0 differently. Go back to Storage Manager, change one of the LUNs to LUN 0, rmdev all of the devices, and rerun cfgmgr!
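The standard fcp.array utilities make this easy to check from AIX; a sketch (hdisk11 is a placeholder):

fget_config -Av          (show the dar/dac relationship and which controller owns each LUN)
lsdev -Cc disk           (list the hdisks and their states)
rmdev -dl hdisk11        (remove a stale device after fixing the LUN numbering)
cfgmgr                   (rescan and rebuild the device entries)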

For most Direct-attach applications, the connections (FC-AL) of DS4000 storage arrays on AIX should be configured with two HBAs for complete path availability. As such, dual-path configurations would be restricted to the following:

DS4300 (600) - one or two server configurations only (2 or 4 HBAs)
o Each HBA pair must be connected to both A and B host-side controller ports.

DS4400 (700)/900 - one or two server configurations only (2 or 4 HBAs)
o Each HBA pair must be connected to both A and B controllers. Only 1 connection on each host-side mini-hub can be used.

Single HBA configurations are allowed, but each single HBA configuration requires that both controllers in the DS4000 be connected to the host.

In a switch environment, both controllers must be connected to the switch within the same SAN zone as the HBA. In direct-attach configurations, both controllers must be "daisy-chained" together.

AIX Restrictions

Restrictions when booting up your system

If you create more than 32 LUNs on a partition, you cannot use the release CD to install AIX on a DS4000 device on that partition. Therefore, if your system is booted from a DS4000 device, do not create more than 32 LUNs on the partition that you are booting from.


When you boot your system from a DS4000 device, both paths to the DS4000 storage server must be up and running. The system cannot use path failover during the AIX boot process. Once the AIX host has started, failover operates normally.
You cannot boot your system from any device that has one or more EXP100 SATA expansion units attached.

Partitioning restrictions
The maximum number of partitions per AIX host, per DS4000 storage server, is two. All logical drives that are configured for AIX must be mapped to an AIX host group. For more information, see "Storage Partitioning: Defining an AIX host group". On each controller, you must configure at least one LUN with an ID between 0 and 31 that is not a UTM or access logical drive.

AIX RDAC (FCP.ARRAY filesets)

All AIX hosts in your storage subsystem must have the RDAC multipath driver installed. In a single-server environment, AIX allows load sharing (also called load balancing): you can set the load_balancing parameter to yes. Under heavy workload on one path, the driver will move LUNs to the controller with less workload and, when the workload decreases, move them back to the preferred controller. A problem that can occur is disk thrashing: the driver moves LUNs back and forth from one controller to the other, so the controllers become more occupied with moving disks around than with servicing I/O. The recommendation is to NOT load balance on an AIX system; the performance increase is minimal (and performance could actually get worse).

The RDAC (fcp.array) filesets for AIX support round-robin load balancing.

Setting the attributes of the RDAC driver for AIX
The AIX RDAC driver files are not included on the DS4000 installation CD. Either install them from the AIX Operating System CD, if the correct version is included, or download them from the following Web site:
techsupport.services.ibm.com/server/fixes

You must change some of these parameters for AIX to operate properly, while others can be changed to enhance the operability of your configuration.

Attribute settings for dar devices:
• For multi-initiator configurations, the autorecovery attribute must be set to no.
• On single-host systems, the load_balancing attribute can be set to yes (I usually recommend no).
• On multihost systems, the load_balancing attribute must be set to no.
• Setting the queue_depth attribute to the appropriate value is important for system performance. For large, multihost configurations, always set the attribute to less than 10. Use the following formula to determine the maximum queue depth for your system (a worked example follows below):

512 / (number-of-hosts * LUNs-per-host)

Attention: If you do not set the queue depth to the proper level, you might experience loss of filesystems and system panics.
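As a worked example with hypothetical numbers: four hosts each mapping 16 LUNs gives 512 / (4 * 16) = 8, so queue_depth should be set to 8 or less. Below is a minimal sketch of checking and setting the attributes with standard AIX commands; dar0, hdisk3, and the values are illustrative, and it assumes load_balancing and autorecovery are set on the dar while queue_depth is applied per hdisk (confirm with lsattr output on your own system first):

lsattr -El dar0 > list the current dar attributes
chdev -l dar0 -a load_balancing=no -a autorecovery=no
chdev -l hdisk3 -a queue_depth=8 -P > -P writes the change to the ODM only; it takes effect after a reboot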

fast_fail - Enables fast I/O failure, which can be useful in multipath configurations. It can decrease the I/O failure times caused by link loss between the storage device and the switch, and can allow faster failover to alternate paths:

chdev -l fscsi0 -a fc_err_recov=fast_fail (do this for every HBA)

delayed_fail is the default setting and should be used in a single-path environment.
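To confirm the setting afterwards, list the fscsi attributes on each adapter instance (fscsi0 here is just an example instance name):

lsattr -El fscsi0 | grep fc_err_recov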


Notes:
1. The fast_fail attribute only affects failover that occurs between the switch and the DS4000 storage server. It does not affect failover that occurs between the host and the switch.
2. Set the fast_fail attribute on each HBA that is configured to the DS4000 storage server.
3. You can use fast I/O failure only in a SAN environment. You cannot use it in a direct-attach environment.

HACMP NOTES

Configuration limitations from 1/4/2006 at ftp://ftp.software.ibm.com/storage/fastt/fastt500/HACMP_config_info.pdf
The following limitations apply to HACMP configurations:

Switched fabric connections between the host nodes and the DS4000 storage system are recommended. However, direct attachment from the host nodes to the DS4000 storage system in an HACMP environment is supported if, and only if, all of the following restrictions and limitations are adhered to:
• Only dual-controller versions of the DS4000 storage systems are supported for direct attachment in a high-availability (HA) configuration.
• The level of the AIX operating system is Version 5.2 or later.
• The level of the HACMP clustering software is Version 5.1 or later.
• All host nodes directly attaching to the DS4000 storage system must be part of the same HACMP cluster.
• All logical drives (LUNs) surfaced by the DS4000 storage system are part of one or more enhanced concurrent mode volume groups.
• Only the host node owning the HACMP non-concurrent resource group (which contains the enhanced concurrent mode volume group or groups) will have the volume group varied on in the active state. All other host nodes in the HACMP cluster will have the enhanced concurrent mode volume group varied on in the passive state.
• No direct operations may be performed on the volumes in the enhanced concurrent mode volume groups (from any host node in the HACMP cluster) if that operation bypasses the Logical Volume Manager (LVM) layer of the AIX operating system, for example, using a dd command while logged in as the root user.
• Each host node in the HACMP cluster must have redundant (two) fibre-channel connections to the DS4000 storage system. One direct fibre-channel connection must be to controller A in the DS4000 storage system, and the other direct fibre-channel connection must be to controller B.
• The maximum number of host nodes in an HACMP cluster that may be directly attached to a dual-controller version of the DS4100 or the dual-controller version of the DS4300 is two.
• The maximum number of host nodes in an HACMP cluster that may be directly attached to the DS4400/4500 is two. Each host node must have redundant (two) direct fibre-channel connections to the storage system, and the two direct fibre-channel connections from each host node must be to independent mini-hubs. Therefore, this configuration requires that four host mini-hubs (feature code 3507) be installed in the DS4400/4500 storage system, two host mini-hubs for each host node in the HACMP cluster.

HACMP C-SPOC cannot be used to add a DS4000 disk to AIX via the 'Add a Disk to the Cluster' facility.


Single HBA configurations are allowed, but each single HBA configuration requires that both controllers in the DS4000 be connected to a switch within the same SAN zone as the HBA.
Attention: While single HBA configurations are supported, they are not recommended for HACMP environments because they introduce a single point of failure in the storage I/O path.

Usage notes specific to HACMP environments
• HACMP clusters can support 2 - 32 servers per DS4000 partition. In this environment, be sure to read and understand the AIX device driver queue depth settings, as documented in the IBM DS4000 Storage Manager Installation and Support Guide for UNIX and AIX environments publication.
• Non-clustered AIX hosts can be connected to the same DS4000 that is attached to an HACMP cluster, but they must be configured on separate DS4000 host partitions.
• All disk types and DS4000 expansion drawers are supported in HACMP clustered environments.
• HACMP is supported in heterogeneous server environments. For more information regarding a particular operating system environment, refer to the specific Installation and Support Guide. Also check the DS4000 Storage Interoperability matrix at http://www-1.ibm.com/servers/storage/disk/ds4000/interopmatrix.html for information about platform, OS, HBA, and application support with DS4000.

Every adapter in the AIX system can see only one controller (this is an AIX-specific zoning restriction).

Discover Disks

After the DS4000 storage subsystem has been set up, volumes have been assigned to the host, and the RDAC driver has been installed, run:

cfgmgr -v

To verify that the device driver recognizes the DS4000, run:

lsdev -Cc disk
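For illustration only, DS4000 LUNs normally appear alongside any internal disks with a disk array device description; the device names, location codes, and product strings below are invented and will vary by model and driver level:

lsdev -Cc disk
hdisk0 Available 10-60-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 91-08-01 1742 (700) Disk Array Device
hdisk3 Available 11-08-01 1742 (700) Disk Array Device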

Connectivity

Determine how many DS4000s are seen by the AIX server: there is one dar for each DS4000. If you see more than one dar per DS4000, check your zoning, rmdev all your devices, and rerun cfgmgr.

lsdev -C | grep dar
dar0 Available 1742 Disk Array Router

ATTENTION!!! If you see something like this when listing your hdisks…

hdisk11 dacNONE 0 <------ There is no LUN 0

If there is no LUN 0, the subsystem reports zeros for everything, including the IEEE identifier (WWN) of the hdisk, and AIX is smart enough to not accept all zeros as the WWN. An hdisk gets built for LUN 0 even though it is not there, and the dars do not get built correctly. Newer firmware handles LUN 0 differently. Change the LUN ID of one of the LUNs to 0 in Storage Manager; this presents a valid IEEE identifier (WWN) to AIX. Then rmdev all of the devices and rerun cfgmgr! A software (AIX) / firmware (DS4800) fix is expected.

Determine how many DS4000 Controllers are seen by AIX server:


lsdev -C | grep dac
dac0 Available 91-08-01 1742 Disk Array Controller
dac1 Available 11-08-01 1742 Disk Array Controller

Using the fget_config command

The fget_config command displays the current owner of each hdisk. It is a quick way to determine which LUN (hdisk) is actively owned by which controller. In the example below, both controllers (dac0 and dac1) are in the Active state, which is normal when the DS4000 storage subsystem is configured correctly. Other possible states are:
NONE > The controller is not defined or is offline.
RESET > The controller is in the reset state.

fget_config -A | -l | -v dar_name
-A Displays output for all the configured dars in the subsystem. If you use this parameter, do not specify a dar name.
-l Displays output only for the dar that you specify.
-v Displays more information about the dar or dars, such as the user array names that were created when you configured the DS4000 subsystem.
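A sketch of the kind of output fget_config -Av produces; the array name, hdisk numbers, and ownership shown here are invented, and the exact layout varies with the fcp.array fileset level:

fget_config -Av
---dar0---
User array name = 'DS4500_1'
dac0 ACTIVE dac1 ACTIVE
hdisk1 dac0
hdisk2 dac1
hdisk3 dac0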

Check the attribute settings for a disk array router (dar0):
lsattr -El dar0

If you want or need to change the preferred ownership of LUNs from one controller to the other (for example, from controller A to controller B), you will have to rmdev all of the hdisks from the AIX system; otherwise your changes on the DS4000 will revert back. Make sure your SAN zoning matches!

rmdev -dl hdiskx > removes the hdisk; run for each hdisk (or use a loop, as in the sketch below)
rmdev -dl dac0 > removes DS4000 controller A
rmdev -dl dac1 > removes DS4000 controller B
rmdev -dl dar0 > removes the DS4000 (dar)
rmdev -Rdl fcsx > removes the fibre adapter and other child devices
cfgmgr -l <hba> > optional; done to make sure the HBAs are there before bringing the disks in
cfgmgr -v
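Removing each hdisk by hand is tedious on large configurations. This is a minimal sketch of a loop that deletes every DS4000 hdisk, assuming the LUNs can be identified by the "Disk Array Device" string in the lsdev description (verify the grep match on your own system before running it):

for d in $(lsdev -Cc disk | grep "Disk Array Device" | awk '{print $1}')
do
  rmdev -dl $d
done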

To map DS4000 LUNs to the AIX system, run:

lsattr -El hdiskx
lscfg -vl hdiskx

(Maybe only the output of one of the commands is needed.) The LUN number is in the output of either lsattr or lscfg, and you can compare it to the DS4000 profile file.
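For example, the lun_id attribute in the lsattr output carries the LUN number; hdisk3 and the value here are illustrative (0x0001000000000000 corresponds to LUN 1):

lsattr -El hdisk3 | grep lun_id
lun_id 0x0001000000000000 Logical Unit Number ID False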


Data collection on AIX host_________________________________________________________________________________

Run "snap -gfiLGc" and re-direct output to a file.Valid file formats for output file are: snap.tar.Z, snap.tar, snap.pax.Z and snap.pax

Upload the file to the Austin Datacase server at http://fieldsupport.austin.ibm.com/cgi-bin/pfe/pfe.pl


Solaris Setup – No DMP

If you are using Veritas DMP, please refer to the Storage Manager Installation Guide for Solaris. You must have the Veritas ASL file to properly support the DS4000 model; otherwise you will encounter the controllers continually swapping the LUNs.

DS4700 - Solaris 8, 9 and 10
Veritas Volume Manager (DMP) 4.0/4.1 are supported with Solaris 8 and 9; 4.1 is supported with Solaris 10. DMP is required for greater than 32 logical drives per storage partition and is the only supported multipathing driver with Solaris 10. Support for Sun's MPXIO multipathing driver may be requested via the RPQ process. Veritas Cluster Server 4.0/4.1 and Storage Foundation 4.0/4.1 for Oracle RAC are supported with Solaris 8 and 9; 4.1 with Solaris 10. VCS 4.1 with fencing is not supported.

DS4800
Veritas Volume Manager (DMP) 4.0/4.1 are supported with Solaris 8 and 9; 4.1 is supported with Solaris 10. DMP is required for greater than 32 logical drives per storage partition and is the only supported multipathing driver with Solaris 10. Support for Sun's MPXIO multipathing driver may be requested via the RPQ process. Veritas Cluster Server 4.0/4.1 and Storage Foundation 4.0/4.1 for Oracle RAC are supported with Solaris 8 and 9; 4.1 with Solaris 10. VCS 4.1 with fencing is not supported.

DS4300 - Solaris 7, 8, 9 and 10
Veritas Volume Manager (DMP) 3.5 is supported with Solaris 7, 8, and 9; 4.0/4.1 are supported with Solaris 8 and 9; 4.1 is supported with Solaris 10. DMP is required for greater than 32 logical drives per storage partition and is the only supported multipathing driver with Solaris 10. Support for Sun's MPXIO multipathing driver may be requested via the RPQ process. Veritas Cluster Server 4.0/4.1 and Storage Foundation 4.0/4.1 for Oracle RAC are supported with Solaris 8 and 9; 4.1 with Solaris 10. VCS 4.1 with fencing is not supported. For DS4300 Single Controller firmware level 05.3x and prior, please refer to http://www-1.ibm.com/support/docview.wss?uid=psg1MIGR-57485&rs=555 for details.

DS4500 & DS4400 - Solaris 7, 8, 9 and 10
Veritas Volume Manager (DMP) 3.5 is supported with Solaris 7, 8, and 9; 4.0/4.1 are supported with Solaris 8 and 9; 4.1 is supported with Solaris 10. DMP is required for greater than 32 logical drives per storage partition and is the only supported multipathing driver with Solaris 10. Support for Sun's MPXIO multipathing driver may be requested via the RPQ process. Veritas Cluster Server 4.0/4.1 and Storage Foundation 4.0/4.1 for Oracle RAC are supported with Solaris 8 and 9; 4.1 with Solaris 10. VCS 4.1 with fencing is not supported.

The DS4000 Storage Manager version 9.14 and higher host software installer wizard requires a graphics adapter in the Sun Solaris server in order to run. For Sun Solaris servers without a graphics adapter, individual host software installation packages are provided on the DS4000 Storage Manager Version 9.1x (where x is an appropriate release version, such as 9.14, 9.15, ...) Support for Sun Solaris OSes CD, under the directory named "/Solaris/Individual packages".

1. IBM DS4000 Storage Manager Client version: 09.15.G5.01
2. IBM DS4000 Storage Manager RDAC version: 09.10.05.01
3. IBM DS4000 Storage Manager Agent version: 09.14.05.03
4. IBM DS4000 Storage Manager Utilities version: 09.14.05.01
5. IBM DS4000 Storage Manager RunTime version: 09.14.05.01

The patches listed in this document can be superseded by more recent versions.


1. Solaris 7 with the following patches (minimum versions; a showrev check example follows this list): 106541-23, 108376-42 (or later), 107544-03
2. Solaris 8 with the following patches (minimum versions): 06 Jumbo Patch, 108528-18, 111293-04, 111310-01, 111111-03, 108987-12
3. Solaris 9.0 > Request the patch associated with Sun BugID #4630273. Install 113454-14.
4. RDAC and dynamic multipathing (DMP) coexistence is not supported.
5. Dynamic multipathing (DMP) requires the installation of VERITAS Volume Manager.
6. Verify interoperability at http://www.storage.ibm.com/disk/FAStT/pdf/interop-matrix.pdf
7. Multiple HBAs within the same server must be unable to "see" the same DS4000 controller port.
8. The JNI HBAs must be isolated from each other if they are connected to the same switch that is connected to the same DS4000 controller port.
9. Each HBA and controller port must be in its own fabric zone, if they are connecting through a single switch, such as a 2109-F16.
10. When configuring DS4000 logical volumes to be used with Veritas Volume Manager on Solaris, the maximum capacity per disk is limited to 999 GB. For 1024 GB disk group support using Veritas Volume Manager, it is recommended to use multiple disks of sizes less than 999 GB.
11. Auto LUN/Array Transfer is not supported. If a controller fails over arrays to the alternate controller and the failed controller is replaced and brought back online, the arrays do not automatically transfer back to the preferred controller. This must be done manually by redistributing arrays.
12. DVE is not supported by Sun Solaris or HP-UX operating systems.
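To confirm whether a given patch from the list above is installed, query the Solaris patch database (108528 is just an example ID):

showrev -p | grep 108528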

Note: Before RDAC is installed, Solaris will see every LUN multiplied by the number of HBAs in the host; so if you have one LUN and two HBAs, it will see two LUNs, one for each target (HBA).

DO NOT INSTALL RDAC BEFORE YOU CONFIGURE AND EDIT YOUR HOST BUS ADAPTER SETTINGS AND SD.CONF

ZONE:

• Attach the JNI HBAs to the SAN switch.
• Attach the DS4000 controllers to the SAN switch.
• Create an alias for each device connected to the switch and give it a name that tells you what it is, for example: DS_controllerA_port1.
• Create a zone that separates each HBA and controller. DO NOT PUT BOTH DS4000 CONTROLLERS INTO THE SAME ZONE! (An illustrative set of switch commands follows this list.)
• Reboot Solaris and look in the /var/adm/messages file.
   o Make a note of what target ID was assigned to each HBA >>> jnic146x0, jnic146x1, jnic146x2, jnic146x3, etc.
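As a hedged sketch of the zoning steps on a Brocade-class switch such as the 2109-F16 (the alias names and WWPNs are invented, and other vendors' CLIs differ):

alicreate "HBA0", "10:00:00:01:73:00:aa:bb"
alicreate "DS_controllerA_port1", "20:02:00:a0:b8:0f:47:8e"
zonecreate "z_HBA0_ctlA", "HBA0; DS_controllerA_port1"
cfgcreate "ds4000_cfg", "z_HBA0_ctlA"
cfgenable "ds4000_cfg"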

Configuring the HBA and Persistent Binding – Do not begin unless you’ve attached to the SAN and completed the zoning

1. Download and install the most current adapter driver package. Refer to the IBM website for the correct HBA driver package: http://knowledge.storage.ibm.com/HBA/HBASearch

The RDAC package includes shell scripts that you can use to configure JNI cards for use with connections through fibre-channel switches:
a. For FCI-1063 cards, /etc/raid/bin/genfcaconf
b. For all jnic.conf cards, /etc/raid/bin/genjniconf
c. For FC64-1063 and all other cards, /etc/raid/bin/genscsiconf, which calls other scripts as appropriate

2. Open and edit the JNI configuration file.
3. Loop settings: applies to FCC-6460, FCE2-1473, or FCE-21473


i. If you have a Brocade 2 Gb switch, you need to force the HBA to be a public loop device:
FcLoopEnabled=1
FcFabricEnabled=1

ii. If you have a Cisco, McData, or Brocade 1 Gb switch, the loop setting will be a private loop device:
FcLoopEnabled=0
FcFabricEnabled=1

This is an example of the JNI loop settings. Notice that there are two jnic146x entries, target 0 and target 1, because there are two HBAs. Make sure you set these parameters for each HBA installed on the server.

jnic146x0-FcLoopEnabled=1;
jnic146x0-FcFabricEnabled=0;
jnic146x1-FcLoopEnabled=1;
jnic146x1-FcFabricEnabled=0;

You can determine what target was assigned to each HBA by looking in the /var/adm/messages file. You should have a Solaris target ID for each HBA:

Hba: JNI,FCR Model: FCE-6460-N
May 6 10:02:06 solar unix: jnic146x0: FCode: Version 3.8.9 [ba79]
Hba: JNI,FCR Model: FCE-6460-N
May 6 10:02:06 solar unix: jnic146x1: FCode: Version 3.8.9 [ba79]
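A quick way to pull these adapter entries out of the log:

grep jnic146x /var/adm/messages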

Persistent Binding: This function allows a subset of discovered targets to be bound to an HBA. Solaris does not guarantee that devices will always be allocated the same SCSI target and LUN IDs after a reboot. Once a configuration has been set, it will survive reboots and any hardware configuration changes. Binding can be implemented by WWNN or WWPN.

1. Make note of the WWPN of the DS4000 controller that each JNI HBA has seen. You can see this in the messages file too, by looking for something like this:

May 6 10:02:11 solar unix: jnic146x0: Port 011000 (WWN 200200a0b80f478e:200300a0b80f478f) online.

So the WWPN 200300a0b80f478f is the WWPN of the DS4000 controller, and it will be bound to target 0:

target0_hba = "jnic146x0";
target1_hba = "jnic146x1";
target0_wwpn = "200300a0b80f478f";
target1_wwpn = "200400a0b80f478f";


2. Uncomment and change the other HBA settings according to the Installation Guide for Solaris (FailoverDelay, JniCreationDelay, etc.).
3. Edit the sd.conf file to reflect your bindings (see the sketch below).
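A minimal sketch of matching /kernel/drv/sd.conf entries for the two bound targets above; the LUN count is hypothetical, so add one line per target/LUN pair the host should probe:

name="sd" class="scsi" target=0 lun=0;
name="sd" class="scsi" target=0 lun=1;
name="sd" class="scsi" target=1 lun=0;
name="sd" class="scsi" target=1 lun=1;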

Complete the installation and verify…

1. Install RDAC according to the installation guide.
2. Verify with SMdevices.

FAILOVER:
Auto Volume Transfer (AVT) is disabled on Solaris hosts. Therefore, if a controller failover occurs, you must manually redistribute logical drives to their preferred paths once the failed controller is repaired. Complete the following steps to redistribute logical drives to their preferred paths:

1. Repair or replace any faulty components.
2. Redistribute volumes to their preferred paths by clicking Advanced > Recovery > Redistribute Logical Drives.

Setting the parameters for the Sun host system

The sd_max_throttle parameter specifies the maximum number of commands that the system can queue to the HBA device driver. This value is global, affecting each sd (disk) device recognized by the driver, and it assigns the default value used to limit the number of outstanding commands per sd device. The default (and maximum) value is 256, but you must set the parameter to a value less than or equal to the maximum queue depth for each connected LUN. Determine the value by using the following formula:

256 ÷ LUNs per adapter = sd_max_throttle value

where LUNs per adapter is the largest number of LUNs assigned to a single adapter. To set the sd_max_throttle parameter for the DS4000 LUNs, add the following line to the /etc/system file (round the calculated value down to a whole number):

set sd:sd_max_throttle=<calculated value>

For example, consider a server with two HBAs installed, 10 LUNs defined to HBA1, and 16 LUNs defined to HBA2:
HBA1 = 256 / 10 = 25.6 and HBA2 = 256 / 16 = 16
Rounding down yields 25 for HBA1 and 16 for HBA2. In this example, the correct sd_max_throttle setting is the lowest value obtained, or 16.
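The resulting /etc/system entry for this example would therefore be (a reboot is required for /etc/system changes to take effect):

set sd:sd_max_throttle=16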

Solaris SAN

Please follow the installation and planning guide for Solaris. The following diagram was provided by Murray Finch to help you understand the persistent binding requirement for Solaris.
