Dell-Compellent AIX Best Practices


Transcript of Dell-Compellent AIX Best Practices

Page 1: Dell-Compellent AIX Best Practices

Compellent Storage Center

AIX 5.2, 5.3 and 6.1

Best Practices

Compellent Corporate Office Compellent Technologies 7625 Smetana Lane Eden Prairie, Minnesota 55344

www.compellent.com

Page 2: Dell-Compellent AIX Best Practices


Contents

Contents .............................................................................................................................. 2

Preface ................................................................................................................................ 3

Customer Support ........................................................................................................... 3

Disclaimers ...................................................................................................................... 3

General Syntax ................................................................................................................ 3

Document Revision ......................................................................................................... 3

AIX 5.2, 5.3, 6.1 and 7.1 Best Practices ............................................................................. 4

IBM AIX Overview ........................................................................................................... 4

IBM AIX and Compellent Storage Center ....................................................................... 4

Fiber Channel Switches .................................................................................................. 5

Fiber Channel Connectivity ............................................................................................. 5

Dynamic Tracking with IBM Fiber Channel Cards .......................................................... 5

Compellent Storage Center Legacy vs. Virtual Ports and AIX ........................................ 6

Fiber Channel Boot from Storage Center ........................................................................ 6

MPIO Multipath (MPIO debuted in AIX BOS 5L 5200-01) ............................................. 8

AIX with Storage Center Volumes without a Compellent ODM/PCM ............................. 8

AIX with Storage Center Volumes with the Compellent ODM/PCM ............................... 9

iSCSI Software Initiator Connectivity............................................................................. 10

Add iSCSI file sets for AIX 6.1 .......................................................................... 10

Logical Volume Management ........................................................................................ 15

Migration Options .......................................................................................................... 16

Create a fiber channel SAN boot disk via migratepv..................................................... 17

AIX alt_disk_copy .......................................................................................................... 17

Clean off source server H/W information to create a Gold Copy .................................. 18

Making Storage Center Volumes Visible to AIX on the fly ............................................ 19

Mirror Creation and usage via AIX ................................................................................ 20

Migratepv usage in AIX ................................................................................................. 21

Growing File Systems online ......................................................................................... 22

Discovering the new space on a SAN volume which has been expanded from the Storage Center .............................................................................................................. 22

Replay Creation and Mapping a Local Recovery back to the same server. ................. 23

Replay Creation and Mapping a Local Recovery to a different server.......................... 23

Advanced POWER Virtualization (APV) Virtual Input Output Server (VIOS) ............... 24

Page 3: Dell-Compellent AIX Best Practices


Preface

Customer Support

Compellent provides live support 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, and 365 days a year. For additional support, email Compellent at [email protected]. Compellent responds to emails during normal business hours.

Disclaimers

Information in this document is subject to change without notice. © 2011 Compellent Technologies. All rights reserved. Reproduction in any manner without the express written permission of Compellent Technologies is strictly prohibited. Trademarks used in this text are property of Compellent Technologies, or their respective owners.

General Syntax

Item Convention

Menu items, dialog box titles, field names, keys Bold

Mouse click required Click:

User Input Monospace Font

User typing required Type:

Website addresses http://www.compellent.com

Email addresses [email protected]

Document Revision

Date Revision Description

March 2011 4.0 Fiber Channel and iSCSI Updates

Page 4: Dell-Compellent AIX Best Practices


AIX 5.2, 5.3, 6.1 and 7.1 Best Practices

IBM AIX Overview

IBM provides the P-Series of RISC (Reduced Instruction Set Computer) CPU-equipped servers, along with the AIX operating system, to provide robust and resilient server environments in a wide variety of enterprise configurations.

AIX has a robust Logical Volume Manager built into the operating system which provides the capability to manipulate and modify the hard disks the server is connected to. These disks may be local SCSI, Fiber Channel or iSCSI volumes presented from a variety of sources. The Compellent Storage Center provides AIX-compatible disk volumes which appear as the familiar hdisk# when viewed from within AIX.

The full range of AIX-supplied utilities, such as mirroring, backup, multiple file system types, multipath, boot from SAN and disaster recovery, can be used with Compellent volumes.

IBM AIX and Compellent Storage Center

The Compellent Storage Center provides SCSI-3 compliant volumes to AIX that remove much of the complexity of allocating, using and protecting the mission-critical data found on most AIX-based servers.

A properly configured Storage Center can remove the need for cumbersome physical disk configuration exercises along with complex RAID configuration mathematics. The use of the Storage Center removes the need to stripe and mirror from the AIX system level since the Storage Center is already providing RAID 10 speed and reliability at the storage array level.

On the other hand, the application or operating system specific procedures recommended or required by the server can be used without modification when using Storage Center volumes. These site-specific requirements are configured using AIX-provided utilities, which removes the need for Compellent-specific command sets on the server, thus reducing the complexity and the chance for error in the configuration and usage of mission-critical servers.

Page 5: Dell-Compellent AIX Best Practices


Fiber Channel Switches

Persistent Fiber Channel ID support for Storage Center Front End Ports

Compellent Best Practice calls for the use of Persistent Fiber Channel IDs (PFCID) on the fabric switch ports which are connected to the Front End Primary (FEP) and Front End Reserved (FER) ports of the Compellent Storage Center.

This feature is present on Cisco switches and on 8Gb-capable Brocade fiber channel switches running FOS 6.4. It keeps the same Fiber Channel ID for the FEP/FER pairs regardless of a switch reboot, or when the FEP/FER pairs are moved from one Storage Center controller to another during a failover or maintenance event.

Static Domain IDs and Persistent FC IDs for Storage Center Ports

Cisco fabrics

Cisco fabric switches have Persistent FCID enabled by default and do not require any administrator intervention to use.

https://www.cisco.com/en/US/docs/storage/san_switches/mds9000/sw/san-os/quick/guide/qcg_ids.html

Brocade fabrics

8Gb-capable Brocade fabric switches do not have persistent WWN-based PID assignment enabled by default, and the minimum version to use is Brocade Fabric Operating System (FOS) 6.4.

Fiber Channel Connectivity

IBM provides a robust and feature-filled fiber channel and iSCSI architecture as part of the AIX 5.2, 5.3, 6.1 and 7.1 operating systems. IBM is in the perfect position to create, test and distribute the HBAs, drivers, multipath software and disk migration utilities which allow AIX administrators to manage the Storage Center.

Compellent Best Practice is to utilize the hardware and software provided by AIX without modification. The only addition to the AIX server will be the Compellent MPIO Object Database Manager (ODM) Path Control Module (PCM). This installp-compliant module notifies the AIX operating system that Compellent hdisks should be treated as MPIO-capable devices.

Dynamic Tracking with IBM Fiber Channel Cards

Previous releases of AIX required a user to unconfigure FC storage device and adapter device instances before making changes on the Storage Area Network (SAN) that might result in an N_Port ID (SCSI ID) change of any remote storage ports.

If dynamic tracking of FC devices is enabled, the FC adapter driver detects when the Fiber Channel N_Port ID of a device changes. The FC adapter driver then reroutes traffic destined for that device to the new address while the devices are still online.

Page 6: Dell-Compellent AIX Best Practices


Events that can cause an N_Port ID to change include moving a cable between a switch and a storage device from one switch port to another, connecting two separate switches using an inter-switch link (ISL), rebooting a switch, or a Storage Center failover event, whether for maintenance or due to a failure within the Storage Center.

The dynamic tracking feature has a hard coded fifteen second timeout value set on the AIX server.

Testing in the Compellent labs has shown that a heavily loaded Storage Center (more than 400 objects defined) can exceed the fifteen-second timeout value of dynamic tracking, resulting in lost connections between the AIX server and the Storage Center providing the volumes.
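Whether dynamic tracking is enabled is a per-adapter setting on the FC protocol devices and can be checked and changed with standard AIX commands. The following is a minimal sketch, assuming fscsi0 is the FC protocol device of interest; the -P flag defers the change to the next reboot, which is needed if the adapter is in use.

# lsattr -El fscsi0 -a dyntrk -a fc_err_recov
# chdev -l fscsi0 -a dyntrk=yes -a fc_err_recov=fast_fail -P

Repeat for each fscsi device on the server (lsdev -C | grep fscsi lists them).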

Compellent Storage Center Legacy vs. Virtual Ports and AIX

A Storage Center 5 must be configured in either Legacy or Virtual Port mode; the two modes are mutually exclusive. Storage Center Virtual Front End Ports provide multiple addresses for each of the Front End Ports on the Storage Center and remove the Legacy concept of Front End Primary and Front End Reserve ports.

The Compellent Storage Center Virtual Ports feature is not supported with AIX servers using the fiber channel protocol but is supported when using the iSCSI protocol.


Fiber Channel Boot from Storage Center

Multiple SAN boot options

IBM AIX provides the ability to install/boot from a single path to a Storage Center volume without the presence of the Compellent MPIO ODM Path Control Module.

This section assumes there is not an existing internal disk on the AIX server. If there is an existing local hard disk which the server is currently using to run AIX, there are multiple options to either change to a Storage Center volume as the only boot volume, to migrate from an existing internal hdisk or to mirror the existing boot disk with a volume from the Storage Center.

Please refer to the sections in this document on the migratepv, alt_disk_copy, and mirrorvg commands for AIX to understand some of the options available to an administrator.

AIX and Storage Center hdisk discovery

If there is not an existing hdisk on the AIX server and you want to install and boot from the Storage Center, the AIX server must be forced to scan the fiber channel bus so that the fiber channel card appears in the Create Server/ Select Server HBA screens on the Compellent Storage Center.

This can be accomplished by either rebooting the server via the installation media a second time or by scanning from the AIX SMIT menus. If the HBA does not appear in the active Storage Center Server object screen, please un-check the Show Only Active Connections check box and allow the Storage Center to display all HBAs that the Storage Center has seen, whether the connection is active or not.

Page 7: Dell-Compellent AIX Best Practices

Once the HBA has been mapped to the server object and an appropriately sized Storage Center volume, the AIX installation media will list the Storage Center volume as a valid installation target via the normal installation utilities.

Please note that at this point, the Storage Center is treated as a basic third party storage device. Do not map a second path to the Storage Center volume until after the operating system installation is completed successfully and the Compellent MPIO ODM Path Control Module is installed on the operating system boot disk.

Page 8: Dell-Compellent AIX Best Practices


MPIO Multipath (MPIO debuted in AIX BOS 5L 5200-01)

Compellent has made the technical decision to utilize the IBM/AIX provided MPIO multipathing module for multipath connectivity to the Storage Center volumes.

MPIO uses a round robin algorithm by default, which sends I/O down one path and then switches to the other path. There is also a failover algorithm available if there is a preference to use one path until it fails, whether due to protocol, speed or any other reason the customer wants to favor one path over another. To see the latest available MPIO file sets for your AIX server, please reference the IBM/AIX webpage:

http://www-01.ibm.com/support/docview.wss?uid=isg1fileset1200931818
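The path selection algorithm is a per-hdisk attribute and can be inspected or changed with the standard lsattr and chdev commands. A minimal sketch follows, assuming hdisk1 is a Compellent MPIO disk; if the disk is in use, add the -P flag so the change is applied at the next reboot.

# lsattr -El hdisk1 -a algorithm
# chdev -l hdisk1 -a algorithm=fail_over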

Compellent ODM/PCM location and installation

Compellent Technologies has developed and tested an Object Database Manager (ODM) Path Control Module (PCM) for AIX 5.2, 5.3, 6.1 and 7.1 which is in the form of a standard AIX installp file set.

The Compellent ODM/PCM file and installation instructions are available for customers on the Compellent Customer Portal:

http://customer.compellent.com/login.aspx?item=%2fdefault&user=extranet%5cAnonymous&site=customerportal

Compellent Best Practice calls for the installation of this ODM/PCM before the AIX server is exposed to the Storage Center for the first time.

In the case of a VIOS/VIOC environment, the Compellent MPIO ODM/PCM should be installed in the VIOS partition only using the $ oem_setup_env command to enter the super user shell of VIOS. Use the # installp command in the VIOS server super shell to install the Path Control Module as shown in the Compellent MPIO for AIX installation instructions.

Do not install the MPIO ODM/PCM in the VIO Client. VIOC partitions depend on the VIOS for all connections to actual devices. The default queue depth for VIOC virtual SCSI disks is 3.
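As a sketch of the installation flow on a VIOS, assuming the ODM/PCM fileset has been copied to a directory on the server (the /tmp/compellent location below is a placeholder; follow the installation instructions packaged with the Compellent ODM/PCM for the exact fileset name):

$ oem_setup_env
# installp -acgXd /tmp/compellent all
# lslpp -l | grep -i compellent
# exit

The same installp syntax applies on a standalone AIX server, run directly as root.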

AIX with Storage Center Volumes without a Compellent ODM/PCM

Without the Compellent ODM/PCM, Storage Center volumes are not treated as multipath capable by AIX; the queue depth of the Storage Center hdisk is locked at a value of one and the rw_timeout is set to 30 seconds.

Storage Center volumes are listed as “Other FC SCSI Drive” type on the AIX server. The volumes can be used as is for initial operating system installation, but there should only be one path to a Storage Center volume in this state.

This state is shown in the output of the lsdev -Cc disk command on non-virtualized AIX servers or the lsdev -type disk command on a VIOS.

$> lsdev -Cc disk
hdisk0 Available 01-08-00-1,0 SCSI Disk Drive
hdisk1 Available 01-10-01 Other FC SCSI Disk Drive
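The restricted settings described above can be confirmed by querying the per-disk attributes directly; a quick check, assuming hdisk1 is the Storage Center volume in question:

# lsattr -El hdisk1 -a queue_depth -a rw_timeout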

Page 9: Dell-Compellent AIX Best Practices


AIX with Storage Center Volumes with the Compellent ODM/PCM

The Compellent ODM/PCM enables and/or sets these features:

1. Increase the queue depth to 32
2. Increase the rw_timeout value to 60
3. Notify AIX that Compellent volumes are bootable under the Compellent name
4. Utilize the built-in AIX multipathing software called MPIO
5. Set the queue depth to 32 for all iSCSI volumes mapped from Storage Center

Once the ODM/PCM is initially installed on the server and the server is rebooted, the output of the same command will show the Storage Center volumes as Compellent FC SCSI Disk Drive(s).

#> lsdev -Cc disk
hdisk0 Available 01-08-00-1,0 SCSI Disk Drive
hdisk1 Available 01-10-01 Compellent FC SCSI Disk Drive

The attributes of any future Storage Center Volume(s) mapped to this particular server have the queue_depth value set to 32 and the rw_timeout value set to 60 to allow time for the Storage Center Front End Port movement during a Storage Center maintenance event or in the unlikely event of a Storage Center controller failure.

#> lsattr -HEl hdisk1
attribute   value                    description          user_settable
PCM         PCM/friend/compellent_sc Path Control Module  False
algorithm   round_robin              Algorithm            True
queue_depth 32                       Queue DEPTH          True
rw_timeout  60                       READ/WRITE time out  True

Page 10: Dell-Compellent AIX Best Practices


iSCSI Software Initiator Connectivity

Legacy Mode Storage Center Configuration

(Front End Primary and Front End Reserved Ports)

Beginning with AIX 5200-03, the iSCSI protocol driver is included as part of the AIX Base Operating System.

Beginning with AIX 5200-04 iSCSI disk devices are supported as MPIO devices.

Verify the Maintenance level the server is running and the presence of the iSCSI file sets on the server.

# oslevel -r
6100-06

Add iSCSI file sets for AIX 6.1
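If the iSCSI file sets are not already installed, they can be checked for and added from the AIX installation media. This is a minimal sketch; the fileset names shown are the usual AIX software initiator filesets and the install device (/dev/cd0) is an assumption, so verify both against your media:

# lslpp -l | grep -i iscsi
# installp -acgXd /dev/cd0 devices.iscsi_sw.rte devices.iscsi.disk.rte devices.common.IBM.iscsi.rte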

Verify the iSCSI target Node Name on the Storage Center by selecting Controllers, and then select the controller you want to use, select IO Cards, select iSCSI, select the iSCSI card you want to use, finally select General from the right window tab.

Add the IP Address of the Compellent iSCSI card, the default iSCSI port number (3260 for all Compellent Devices), and iSCSI Node Name of the Storage Center to the end of the /etc/iscsi/targets file on the AIX server. When finished write and close the file.

EXAMPLE:

172.31.32.103 3260 iqn.2002-03.com.compellent:5000d31000019301

Verify the iSCSI device is available in the kernel:

# lsdev -C | grep iscsi
iscsi0 Available  iSCSI Protocol Device

Verify the iSCSI Node Name of the AIX server:

# lsattr -El iscsi0
disc_filename  /etc/iscsi/targets  Configuration file                             False
disc_policy    file                Discovery Policy                               True
initiator_name iqn.hostid.0a3cc1dc iSCSI Initiator Name                           True
max_targets    16                  Maximum Targets Allowed                        True
num_cmd_elems  200                 Maximum number of commands to queue to driver  True

Scan via the iSCSI port using the new information from the /etc/iscsi/targets file.

# cfgmgr -l iscsi0

On the Storage Center, create a new server object and map the new iSCSI port of the AIX server to it.

NOTE: you may need to unselect the Only Show Active/UP Connections box to see the new iSCSI item.

Page 11: Dell-Compellent AIX Best Practices


Create a Volume on the Storage Center and map it to the new server. Rescan the AIX iSCSI subsystem for new devices:

# cfgmgr -l iscsi0

Verify that the iSCSI block devices have been created on the AIX server:

# lsdev -Cc disk
hdisk0 Available 10-60-00-4,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-5,0 16 Bit SCSI Disk Drive
hdisk2 Available Other iSCSI Disk Drive

Virtual Mode Storage Center Configuration

Beginning with AIX 5300-06, the iSCSI driver supports redirection to virtual iSCSI ports. When the Storage Center is configured in Virtual Ports mode, the concept of Front End Primary and Front End Reserve ports is replaced by Virtual Ports consisting of an iSCSI Control Port and an associated iSCSI Virtual Port.

Software Initiator

Discover the iSCSI Control Port IP Address and iSCSI Virtual Port iqn on the Storage Center by selecting Controllers, and then select the controller you want to use, select IO Cards, select iSCSI.

Record the IP address shown for the iSCSI Control Port and the iqn for the iSCSI Virtual Port.

Edit the /etc/iscsi/targets file on the server with the IP address of the New Domain 6 iSCSI Control Port, the default port of 3260, and the iqn name of the New Domain 6 iSCSI Virtual Port. Repeat for the second entry, which consists of the IP address of the New Domain 7 iSCSI Control Port, the default port of 3260, and the iqn name of the New Domain 7 iSCSI Virtual Port, as shown below.

NOTE: Your Domain numbers and iqn numbers will be different than this example. Do not use the iSCSI physical adapter address(es) for this purpose.

#New Domain 6 iSCSI Control Port address 172.17.32.4
#Port is 3260
#New Domain 6 iSCSI Virtual Port iqn iqn.2002-03.com.compellent:5000d310000199a8
#172.17.32.4 3260 iqn.2002-03.com.compellent:5000d310000199a8
#New Domain 7 iSCSI Control Port address 172.17.34.4
#Port is 3260
#New Domain 7 iSCSI Virtual Port iqn.2002-03.com.compellent:5000d310000199a9
# 172.17.32.4 3260 iqn.2002-03.com.compellent:5000d310000199ac
172.17.32.4 3260 iqn.2002-03.com.compellent:5000d310000199a8
172.17.34.4 3260 iqn.2002-03.com.compellent:5000d310000199ad
172.17.34.4 3260 iqn.2002-03.com.compellent:5000d310000199a9

Page 12: Dell-Compellent AIX Best Practices


When finished write and close the file.

Scan via the iSCSI port using the new information from the /etc/iscsi/targets file.

# lsdev | grep iscsi

iscsi0 Available 02-09-01 iSCSI Protocol Device

iscsi1 Available 03-09-01 iSCSI Protocol Device

# cfgmgr -l iscsi0

Method error (/usr/lib/methods/cfgqliscsi -l iscsi0 ):

0514-061 Cannot find a child device.

# cfgmgr -l iscsi1

Method error (/usr/lib/methods/cfgqliscsi -l iscsi1 ):

0514-061 Cannot find a child device.

On the Storage Center, create a new server and map the new iSCSI port of the AIX server to the new server.

NOTE: you may need to unselect the Only Show Active/UP Connections box to see the new iSCSI item.

Create a Volume on the Storage Center and map it to the new server. Rescan the AIX iSCSI subsystem for new devices:

# cfgmgr -l iscsi0

Verify that the iSCSI block devices have been created on the AIX server:

# lsdev -Cc disk
hdisk0 Available 10-60-00-4,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-5,0 16 Bit SCSI Disk Drive

Page 13: Dell-Compellent AIX Best Practices


IBM AIX iSCSI TOE Cards

To locate the name(s) of the iSCSI TOE cards in your server, enter:

# lsdev | grep isc
iscsi0 Available 02-09-01 iSCSI Protocol Device
iscsi1 Available 03-09-01 iSCSI Protocol Device

To see the iqn name and current settings of the iSCSI TOE cards:

# lsattr -HE -l ics0

# lsattr -HE -l ics1

Copy the /etc/iscsi/targetshw file to /etc/iscsi/targetshw0 to configure the first iSCSI TOE card, called iscsi0. Set the following:

Discovery Filename   /etc/iscsi/targetshw0
Discovery Policy     file
Adapter IP Address   Desired network address
Adapter Subnet Mask  Desired netmask address
Adapter Gateway      Desired gateway

Page 14: Dell-Compellent AIX Best Practices


Copy the /etc/iscsi/targetshw file to /etc/iscsi/targetshw1 to configure the second iSCSI TOE card, called iscsi1. Set the following:

Discovery Filename   /etc/iscsi/targetshw1
Discovery Policy     file
Adapter IP Address   Desired network address
Adapter Subnet Mask  Desired netmask address
Adapter Gateway      Desired gateway

Page 15: Dell-Compellent AIX Best Practices


Modify the /etc/iscsi/targetshw# file

# tail -10 /etc/iscsi/targetshw0
# "123ismysecretpassword.fc1b"
#iscsi virtual port on SN409 under ctrlprt 6
#New Domain 6 Control Port iSCSI Control Port IP Address is 172.17.32.4
#Port Number is 3260
#New Domain 6 iSCSI Virtual Port iqn is
#iqn.2002-03.com.compellent:5000d310000199a8
# 172.17.32.4 3260 iqn.2002-03.com.compellent:5000d310000199a8
#
#New Domain 6 Control Port iSCSI Control Port IP Address is 172.17.32.4
#Port Number is 3260
#New Domain 6 iSCSI Virtual Port iqn is
#iqn.2002-03.com.compellent:5000d310000199ac
172.17.32.4 3260 iqn.2002-03.com.compellent:5000d310000199ac

# tail -10 /etc/iscsi/targetshw1
#New Domain 7 Control Port iSCSI Control Port IP Address is 172.17.34.4
#Port Number is 3260
#New Domain 7 iSCSI Virtual Port iqn is iqn.2002-03.com.compellent:5000d310000199ad
172.17.34.4 3260 iqn.2002-03.com.compellent:5000d310000199a9
# iscsi virtual port on SN410 under control port 7
#New Domain 6 Control Port iSCSI Control Port IP Address is 172.17.34.4
#Port Number is 3260
#New Domain 6 iSCSI Virtual Port iqn is iqn.2002-03.com.compellent:5000d310000199ac
172.17.34.4 3260 iqn.2002-03.com.compellent:5000d310000199ad

Logical Volume Management

AIX provides both a robust command line based interface and a System Management Interface Tool (SMIT) which provides a menu-based alternative to the command line for managing and maintaining the AIX operating system.

SMIT can run in one of two modes: ASCII (non-graphical) or X Window (graphical). The ASCII mode of SMIT can run on either terminals or graphical displays. The graphical mode of SMIT (which supports a mouse and point-and-click operations) can be run only on a graphical display running an X Window manager. The ASCII mode is often the preferred method to run SMIT because it can be run from any machine.

Page 16: Dell-Compellent AIX Best Practices

Migration Options

UNIX cp command

The most basic way to move multiple smaller static data sets onto the Storage Center is with the AIX cp command. The files written to the Storage Center volumes are thin provisioned. The -R and -p options make the copy recursive and maintain file attributes. Please consult your AIX documentation for other cp options.

# cp -Rp /source /target

The procedure is to create a new target volume on the Storage Center, mount the new volume on the AIX server and then copy the data to the new volume.

Once the data has been copied, the source volume is unmounted from the server, the target volume is unmounted from the server and then the target is mounted into the original mount point of the source volume. The use of the original mount point for the new volume allows for the use of any existing path variables without modification. The original data can then be kept in reserve as a backup until the testing of the new data set is completed to the satisfaction of the administrator.

Be aware that the copy command is a single threaded process and may take a substantial amount of time to complete based on the amount of data to be copied.
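A minimal sketch of the full procedure follows, assuming the new Storage Center volume already carries a file system on the logical volume /dev/lv_newdata and the existing data lives in /data; the mount point and device names are placeholders:

# mkdir /newdata
# mount /dev/lv_newdata /newdata
# cp -Rp /data/. /newdata
# umount /data
# umount /newdata
# mount /dev/lv_newdata /data

Update /etc/filesystems so the new logical volume mounts at /data on subsequent reboots, and keep the original volume unmounted as a fallback until the new copy has been verified.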

AIX tar command

The UNIX tar command can also be used for the migration of data, but the target volume will need to be large enough to contain both the tar archive (i.e. tarball) created by the tar command and the data set which will be written once the archive is extracted. The tar archive and the extracted data will consume actual space and will count against the Dynamic Capacity values of the target file system on the Storage Center.
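A minimal sketch of the archive-and-extract approach described above, with placeholder paths (/source and /target):

# cd /source
# tar -cvf /target/migrate.tar .
# cd /target
# tar -xvf migrate.tar
# rm migrate.tar

Removing the archive after extraction frees the space it consumed; alternatively, the create and extract steps can be joined with a pipe so that no intermediate archive is written at all.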

AIX dd command

The use of an image level copy technique such as the dd command will consume the same amount of the disk space on the Storage Center target volume which was allocated in the source volume whether there is data in the space or not.

Therefore, to preserve the Dynamic Capacity advantages of the Storage Center, it is not recommended to use image level data transfer techniques when moving data from less advanced storage to the Storage Center.

Page 17: Dell-Compellent AIX Best Practices


Create a fiber channel SAN boot disk via migratepv

Use the AIX migratepv command to migrate the boot image from the existing internal disk (hdisk0) to an existing Storage Center volume (hdisk2). In this example, the Compellent ODM/PCM MPIO module has not been installed. The steps are the same in either case.

Create a volume on the Storage Center. Map it to one of the HBAs in the server. Run cfgmgr to see the new disk.

# lsdev -Cc disk
hdisk0 Available 10-60-00-4,0 16 Bit SCSI Disk Drive
hdisk1 Available 10-60-00-5,0 16 Bit SCSI Disk Drive
hdisk2 Available 20-58-01 Other FC SCSI Disk Drive

# lsvg -l rootvg
LV NAME TYPE LPs PPs PVs LV STATE     MOUNT POINT
hd5     boot 1   1   1   closed/syncd N/A

# migratepv -l hd5 hdisk0 hdisk2

0516-1246 migratepv: If hd5 is the boot logical volume, please run 'chpv -c hdisk0' as root user to clear the boot record and avoid a potential boot off an old boot image that may reside on the disk from which this logical volume is moved/removed.

# chpv -c hdisk0
# bosboot -ad /dev/hdisk2

bosboot: Boot image is 29924 512 byte blocks.

# migratepv hdisk0 hdisk2
# reducevg -d rootvg hdisk0
# bootlist -m normal hdisk2

Reboot server

AIX alt_disk_copy

Clone the local boot disk (hdisk#) to a Compellent Storage Center volume (hdisk1); the bootlist is changed so the SAN volume becomes the boot disk (about five minutes to complete in our test bed).

# alt_disk_copy -d hdisk1

Calling mkszfile to create new /image.data file.
Checking disk sizes.
Creating cloned rootvg volume group and associated logical volumes.
Creating logical volume alt_hd5.
Creating logical volume alt_hd6.
Creating logical volume alt_paging00.
Creating logical volume alt_hd8.
Creating logical volume alt_hd4.
Creating logical volume alt_hd2.
Creating logical volume alt_hd9var.
Creating logical volume alt_hd3.
Creating logical volume alt_hd1.

Page 18: Dell-Compellent AIX Best Practices


Creating logical volume alt_hd10opt.
Creating logical volume alt_lg_dumplv.
Creating /alt_inst/ file system.
/alt_inst filesystem not converted. Small inode extents are already enabled.
Creating /alt_inst/home file system.
/alt_inst/home filesystem not converted. Small inode extents are already enabled.
Creating /alt_inst/opt file system.
/alt_inst/opt filesystem not converted. Small inode extents are already enabled.
Creating /alt_inst/tmp file system.
/alt_inst/tmp filesystem not converted. Small inode extents are already enabled.
Creating /alt_inst/usr file system.
/alt_inst/usr filesystem not converted. Small inode extents are already enabled.
Creating /alt_inst/var file system.
/alt_inst/var filesystem not converted. Small inode extents are already enabled.
Generating a list of files for backup and restore into the alternate file system...
Backing-up the rootvg files and restoring them to the alternate file system...
Building boot image on cloned disk.
forced unmount of /alt_inst/var
forced unmount of /alt_inst/var
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/usr
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/tmp
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/opt
forced unmount of /alt_inst/home
forced unmount of /alt_inst/home
forced unmount of /alt_inst
forced unmount of /alt_inst
Changing logical volume names in volume group descriptor area.
Fixing LV control blocks...
Fixing file system superblocks...
Bootlist is set to the boot disk: hdisk1 blv=hd5

Reboot the server to use the Storage Center volume as the boot disk.

Clean off source server H/W information to create a Gold Copy

The alt_disk_copy command has various options; two in particular are especially useful with the Storage Center. Please note that the -d hdisk# portion of the command denotes the target disk. In the following examples, the target is a Storage Center volume, but it could be any hdisk that the AIX server can see.

The -O (as in Oscar) option performs a device reset on the target hdisk, which is given the volume group name altinst_rootvg. This causes the alternate disk install to not retain any user-defined device configurations such as host name, IP address or any physical slot designations. The target hdisk is perfect for use as a Gold Copy. Once the operation is complete, create a Replay of the target volume on the Storage Center; you can then generate a local volume recovery on the Storage Center for use as a bootable hdisk for other servers.

Page 19: Dell-Compellent AIX Best Practices

As always, make sure you have the proper licenses for your AIX operating system.

The -B (as in Bravo) option tells the AIX system to NOT change the current boot disk. This allows for the current boot disk to remain the active boot disk and to retain all of its hardware and user defined specifics.

To create a hardware neutral image on Storage Center volume hdisk7 and to NOT change the existing boot disk parameters, the following command can be run:

# alt_disk_copy -B -O -d hdisk7

Making Storage Center Volumes Visible to AIX on the fly

Once the initial Storage Center server object is configured and the initial volume is mapped, there is no need to reboot the AIX server to see subsequent new Storage Center volumes. Simply create any new volumes and scan from the AIX host to discover and configure the device files for the new hdisks. Create the volume and map it to the AIX server HBA, then run the # cfgmgr command: this command rescans the server and builds the device files for any new volume on the fiber channel or iSCSI adapters.

Run the # lspv command:

The first column lists the hdisk# name of the new volume. The second column is the unique Physical Volume ID (PVID) assigned by AIX. The third column is the name of the Volume Group that the hdisk belongs to. The fourth column shows if the hdisk is actively in use.

hdisk0 0040657ac59b71c2 rootvg active
hdisk1 none             None

Page 20: Dell-Compellent AIX Best Practices


Mirror Creation and usage via AIX

Using mirroring to migrate data

The mirrorvg command takes all the logical volumes on a given volume group and mirrors those logical volumes. This same functionality may also be accomplished manually if you execute the mklvcopy command for each individual logical volume in a volume group. As with mklvcopy, the target physical drives to be mirrored with data must already be members of the volume group. To add disks to a volume group, run the extendvg command.

By default, mirrorvg attempts to mirror the logical volumes onto any of the disks in a volume group. If you wish to control which drives are used for mirroring, you must include the list of disks in the input parameters, PhysicalVolume. Mirror strictness is enforced.

Additionally, mirrorvg mirrors the logical volumes, using the default settings of the logical volume being mirrored. If you wish to violate mirror strictness or affect the policy by which the mirror is created, you must execute the mirroring of all logical volumes manually with the mklvcopy command.

When mirrorvg is executed, the default behavior of the command requires that the synchronization of the mirrors must complete before the command returns to the user. If you wish to avoid the delay, use the -S or -s option. Additionally, the default value of 2 copies is always used. To specify a value other than 2, use the -c option.

Move a Volume Group named datavg from hdisk1 to hdisk2

Create a new Storage Center volume, which will be seen on the server as hdisk2. Add the new hdisk to the existing volume group, then mirror the existing volume group to the new hdisk:

# extendvg datavg hdisk2
# mirrorvg datavg hdisk2

Wait for the mirrors to synchronize. Un-mirror the original hdisk from the volume group, then remove the original hdisk from the volume group:

# unmirrorvg datavg hdisk1
# reducevg datavg hdisk1

Page 21: Dell-Compellent AIX Best Practices


Migratepv usage in AIX

Using the migratepv command

You can use the logical volume manager (LVM) migratepv command to migrate data that is associated with physical volumes; use the following information as a guide.

The following examples show how to use the migratepv command.

# migratepv hdisk1 hdisk2

In the example, all data migrates from hdisk1 to hdisk2.

The migratepv command updates all LVM references. From the time that the command completes, the LVM no longer uses hdisk1 to access data that was previously stored there.

Because the data is physically moved, the target physical volume must have enough free physical partitions to accommodate the data from the source physical volume. After this command completes, you can remove the source physical volume from the volume group.

The migratepv command migrates data by performing the following actions:

• Creating a mirror of the logical volumes that you are moving

• Synchronizing the logical volumes

• Removing the original logical volume

You can use the migratepv command to move data from one physical volume to another physical volume within the same volume group.

Note: You can specify more than one destination physical volume. First, identify the source disk from which you want to migrate the data. Then, identify the target disk to which you want to migrate the data. You can only migrate to disks that are already in the same volume group (rootvg in this example). To get a list of disks in the rootvg volume group, run the lsvg -p rootvg command.

# lsvg -p rootvg

rootvg:
PV_NAME  PV STATE  TOTAL PPs  FREE PPs  FREE DISTRIBUTION
hdisk0   active    515        116       57.00..00..00..59
hdisk1   active    515        515       00.00..00..00..00

Now, determine the space that is currently in use on the disk that you want to migrate. This is the total physical partitions (PPs) value minus the free PPs value for the desired disk. In the preceding example, refer to hdisk0, which is using (515 - 116) PPs or 399 physical partitions.

Next, find a disk or disks that have the available space. In this case, hdisk1 has 515 free physical partitions, which is more than the required space of 399 physical partitions.

Page 22: Dell-Compellent AIX Best Practices



Growing File Systems online

After AIX discovers a new hdisk, the administrator can choose to allocate all of the space in the new volume to the new file system created within the LVM structure, or only a portion of the space. This choice allows the administrator to limit the amount of space available to the end user.

This provides flexibility and control for space allocation. This is the default behavior for the AIX installation media, which then allows for the online growth of the critical operating system file systems when future events either unexpectedly fill up a critical file system or normal growth occurs over time.

AIX provides a powerful utility, chfs -a size=SOMENUMBER FileSystemName, which provides a wide variety of control features for use with file systems. This section highlights the size option of the chfs command.

-a size=NewSize

This option specifies the size of the Enhanced Journaled File System in 512-byte blocks, Megabytes or Gigabytes.

If Value has the M suffix, it is interpreted to be in Megabytes.

If Value has a G suffix, it is interpreted to be in Gigabytes.

If Value begins with a +, it is interpreted as a request to increase the file system size by the specified amount.

If Value begins with a -, it is interpreted as a request to reduce the file system size by the specified amount.

If the specified size does not begin with a + or -, but it is greater or smaller than the file system current size, it is also a request to increase or reduce the file system size.

AIX chfs -a size=+### /filesystem

If there is sufficient free space remaining in the volume group containing the file system, a file system which has filled up can be grown online by using the following command from AIX. The example grows the /usr file system online by one Gigabyte.

# chfs -a size=+1G /usr

Discovering the new space on a SAN volume which has been expanded from the Storage Center

# chvg -g volumegroup
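As a minimal sketch of the overall sequence (the volume group name datavg and the file system /data are placeholders): after expanding the volume on the Storage Center, have the volume group re-examine its disks, confirm the additional physical partitions, and optionally grow a file system into the new space.

# chvg -g datavg
# lsvg datavg
# chfs -a size=+10G /data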

Page 23: Dell-Compellent AIX Best Practices


Replay Creation and Mapping a Local Recovery back to the same server.

1. Create a Replay on the Storage Center.
2. Perform a local recovery of the Replay to create a new volume.
3. Map it to the AIX server.
4. Run the cfgmgr command to discover the new volume.
5. Run the lspv command to display the PVID of the new volume.
6. Note that the new volume has the same PVID as the original volume.
7. The new volume PVID must be changed.
8. For example, assume the new volume shows up as hdisk4.
9. Run the chdev -l hdisk4 -a pv=clear command to clear the PVID.
10. Run the chdev -l hdisk4 -a pv=yes command to auto-assign a new PVID.
11. Run the recreatevg -y newname hdisk4 command to recreate the Volume Group present on the Replay volume (see the consolidated sketch below).
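The steps above, collected into a single command sketch (hdisk4 and the volume group name recoveredvg are placeholders; note that recreatevg may rename logical volumes and adjust mount points to avoid conflicts with the originals on the same server):

# cfgmgr
# lspv
# chdev -l hdisk4 -a pv=clear
# chdev -l hdisk4 -a pv=yes
# recreatevg -y recoveredvg hdisk4
# lsvg -l recoveredvg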

Replay Creation and Mapping a Local Recovery to a different server.

#> lsdev -Cc disk
hdisk0 Available 04-08-00-3,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 00-09-01 Compellent FC SCSI Disk Drive
hdisk2 Available 00-08-02 Compellent FC SCSI Disk Drive

#> mkvg -S -y vg500GB hdisk2

0516-1254 mkvg: Changing the PVID in the ODM.
vg500GB

#> lspv hdisk2

PHYSICAL VOLUME:  hdisk2                   VOLUME GROUP:     vg500GB
PV IDENTIFIER:    0000093e9d100998         VG IDENTIFIER     0000093e0000d700000001259d100b35
PP SIZE:          256 megabyte(s)          LOGICAL VOLUMES:  0
TOTAL PPs:        1999 (511744 megabytes)  VG DESCRIPTORS:   2
FREE PPs:         1999 (511744 megabytes)  HOT SPARE:        no

#> mklv -t jfs2 -y lv500GB vg500GB 1999

lv500GB

#> mkdir /500gb

#> crfs -v jfs2 -a log=INLINE -d lv500GB -m /500gb

File system created successfully.
523485388 kilobytes total disk space.
New File System size is 1048051712

#> mount /dev/lv500GB /500gb

#> df -g

Filesystem    GB blocks    Free  %Used  Iused  %Iused  Mounted on
/dev/lv500GB     499.75  499.17     1%      4      1%  /500gb

Page 24: Dell-Compellent AIX Best Practices


#> varyonvg vg500GB

Create a replay of source volume (tyrant_500gb_Lun1) and create a local recovery using the default name (tyrant_500gb_Lun1 View1) on the Storage Center.

Map the local recovery to the second server and run cfgmgr on the second server to discover the new volume.

2ndServer#> cfgmgr

#> lsdev -Cc disk

hdisk0 Available 01-08-00-1,0 SCSI Disk Drive
hdisk1 Available 01-10-01 Compellent FC SCSI Disk Drive
hdisk2 Available 01-11-01 Compellent FC SCSI Disk Drive

# mkdir /500gbreplay

# importvg -y vg500GB hdisk2

#> mount /dev/lv500GB /500gb

Replaying log for /dev/lv500GB.

#> df -g /500gb

Filesystem    GB blocks    Free  %Used  Iused  %Iused  Mounted on
/dev/lv500GB     499.75  499.17     1%      5      1%  /500gb

Advanced POWER Virtualization (APV) Virtual Input Output Server (VIOS)

IBM P-Series servers have, since October 2001, allowed a machine to be divided into LPARs, with each LPAR running a different OS image -- effectively a server within a server. This is achieved by logically splitting up a large machine into smaller units with CPU, memory, and PCI adapter slot allocations.

POWER5, Power6 and Power7 machines can also run an LPAR with less than one whole CPU -- up to ten LPARs per CPU. So, for example, on a four CPU machine, 20 LPARs can easily be running. With each LPAR needing a minimum of one SCSI adapter for disk I/O and one Ethernet adapter for networking, the example of 20 LPARs would require the server to have at least 40 PCI adapters. This is where the VIO Server helps.

The VIO Server owns real PCI adapters (Ethernet, SCSI, or SAN), but allows other LPARs to share those physical resources using the built-in Hypervisor services. These other LPARs are called Virtual I/O client partitions (VIOC), and because they don't need real physical disks or real physical Ethernet adapters to run, they can be created quickly and cheaply.

The important point about VIOS/VIOC configurations is that the VIO Server owns all the physical hardware on the server. VIO Server can allocate Storage Center volumes to the various VIO Clients for use as boot and data disks. All connectivity to Storage Center volumes is controlled by the VIO Server!
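As a minimal sketch of how a Storage Center volume seen by the VIOS is handed to a VIO Client as a virtual SCSI disk (hdisk2, vhost0 and the virtual target device name are placeholders; run from the padmin restricted shell):

$ lsdev -type disk
$ lsmap -all
$ mkvdev -vdev hdisk2 -vadapter vhost0 -dev client1_rootvg

A subsequent cfgmgr on the client partition will then show the new disk as a Virtual SCSI Disk Drive.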

Page 25: Dell-Compellent AIX Best Practices


VIOS padmin vs. oem_setup_env MPIO module installation

The Compellent MPIO ODM/PCM is installed only in the VIO Server partition. The restricted shell provided by the padmin login does not allow the use of the installp command, which is needed to install the Compellent ODM/PCM module. The user must invoke the oem_setup_env command so that the Path Control Module can be installed, providing multipath functionality for the server and client partitions.

The VIO Server has a padmin login which provides a root-like environment for control of the VIO clients.

VIOS cfgdev vs. AIX cfgmgr device discovery commands

VIO Clients are by default set to a queue depth of 3 for all Virtual SCSI connections.

All subsequent VIO Server operations will utilize VIOS-specific commands for Storage Center volume discovery, multipath and usage; for example, the VIOS cfgdev command takes the place of the AIX cfgmgr command.
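A minimal sketch of the discovery commands on each side, plus a check of the client queue depth (hdisk0 is a placeholder for the client's virtual SCSI disk):

On the VIOS (padmin restricted shell):
$ cfgdev
$ lsdev -type disk

On the AIX VIO Client:
# cfgmgr
# lsattr -El hdisk0 -a queue_depth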

IVM (Integrated Virtualization Manager) vs. HMC (Hardware Management Console)

On smaller AIX systems, IBM provides a web-based interface called the IVM, or Integrated Virtualization Manager, which can be used to control a single VIOS server, while larger configurations are normally controlled by a separate server called an HMC, or Hardware Management Console, which provides the ability to control multiple servers.

Here is a quick reference guide showing some of the various tools available under the VIO Server software umbrella. http://www.tablespace.net/quicksheet/apv-quicksheet.pdf

Note that an HMC can control multiple servers, any one of which may have a logical partition created as a VIO Server partition. Additional logical partitions may contain VIOS, AIX or IBM Linux virtual servers.

Modify Default ODM Settings

By default, the Compellent-provided ODM sets the reserve policy of LUNs to PR_exclusive, meaning any LUNs created on the Compellent Storage Center and mapped to a server will be shown as PR_exclusive, which in turn causes SCSI reservations to be created on the SAN. In a configuration where IBM requires the LUNs to be in the no_reserve state, change the ODM by performing the steps below:

1. Create a text file called “reserve.odmadd” and put the following in the file:

PdAt: uniquetype = "disk/fcp/compellent_sc" attribute = "reserve_policy" deflt = "no_reserve" values = "PR_exclusive,no_reserve,single_path" width = "" type = "R" generic = "DU" rep = "sl" nls_index = 96

Page 26: Dell-Compellent AIX Best Practices


PdAt: uniquetype = "disk/iscsi/compellent_sc" attribute = "reserve_policy" deflt = "no_reserve" values = "PR_exclusive,no_reserve,single_path" width = "" type = "R" generic = "DU" rep = "sl" nls_index = 96

2. Delete the old entries in the ODM by executing the following commands:

# odmdelete -o PdAt -q "uniquetype = disk/fcp/compellent_sc and attribute = reserve_policy"
# odmdelete -o PdAt -q "uniquetype = disk/iscsi/compellent_sc and attribute = reserve_policy"

3. Create the new entries in the ODM by executing the following command:

# odmadd reserve.odmadd

4. Verify the new entries in the ODM by executing the following commands:

# odmget -q "uniquetype = disk/iscsi/compellent_sc and attribute = reserve_policy" PdAt
# odmget -q "uniquetype = disk/fcp/compellent_sc and attribute = reserve_policy" PdAt

You should see the deflt value in the PdAt output as "no_reserve". Note that after you make this change to the ODM, the default behavior changes from PR_exclusive to no_reserve for any new LUNs created and mapped to this server (a reboot is not required for the change to take effect). The settings of all LUNs that existed before the change will remain intact.
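If an existing LUN also needs to move to no_reserve, the per-disk attribute can be changed with chdev. A minimal sketch, assuming hdisk3 is the existing Compellent LUN; if the disk is currently in use, add the -P flag so the change is applied at the next reboot:

# lsattr -El hdisk3 -a reserve_policy
# chdev -l hdisk3 -a reserve_policy=no_reserve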