
ibm.com/redbooks

Front cover

Implementing the IBM System Storage SAN Volume Controller V5.1

Jon Tate

Pall Beck

Angelo Bernasconi

Werner Eggli

Install, use, and troubleshoot the SAN Volume Controller

Learn how to attach iSCSI hosts

Understand what solid-state drives have to offer


International Technical Support Organization

Implementing the IBM System Storage SAN Volume Controller V5.1

March 2010

SG24-6423-07


© Copyright International Business Machines Corporation 2010. All rights reserved.

Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

Eighth Edition (March 2010)

This edition applies to Version 5 Release 1 Modification 0 of the IBM System Storage SAN Volume Controller and is based on pre-GA versions of code.

Note: Before using this information and the product it supports, read the information in “Notices” on page xvii.

Note: This book is based on a pre-GA version of a product and might not apply when the product becomes generally available. We recommend that you consult the product documentation or follow-on versions of this book for more current information.


Contents

Notices . . . . . . . . . . . xvii
Trademarks . . . . . . . . . . . xviii

Summary of changes . . . . . . . . . . . xix
March 2010, Eighth Edition . . . . . . . . . . . xix

Preface . . . . . . . . . . . xxi
The team who wrote this book . . . . . . . . . . . xxi
Now you can become a published author, too! . . . . . . . . . . . xxiii
Comments welcome . . . . . . . . . . . xxiii
Stay connected to IBM Redbooks . . . . . . . . . . . xxiv

Chapter 1. Introduction to storage virtualization . . . . . . . . . . . 1
1.1 Storage virtualization . . . . . . . . . . . 2
1.2 User requirements that drive storage virtualization . . . . . . . . . . . 5
1.3 Conclusion . . . . . . . . . . . 6

Chapter 2. IBM System Storage SAN Volume Controller . . . . . . . . . . . 7
2.1 SVC history . . . . . . . . . . . 8
2.2 Architectural overview . . . . . . . . . . . 9

2.2.1 SVC virtualization concepts . . . . . . . . . . . 13
2.2.2 MDisk overview . . . . . . . . . . . 17
2.2.3 VDisk overview . . . . . . . . . . . 18
2.2.4 Image mode VDisk . . . . . . . . . . . 19
2.2.5 Managed mode VDisk . . . . . . . . . . . 19
2.2.6 Cache mode and cache-disabled VDisks . . . . . . . . . . . 20
2.2.7 Mirrored VDisk . . . . . . . . . . . 21
2.2.8 Space-Efficient VDisks . . . . . . . . . . . 23
2.2.9 VDisk I/O governing . . . . . . . . . . . 25
2.2.10 iSCSI overview . . . . . . . . . . . 26
2.2.11 Usage of IP addresses and Ethernet ports . . . . . . . . . . . 28
2.2.12 iSCSI VDisk discovery . . . . . . . . . . . 30
2.2.13 iSCSI authentication . . . . . . . . . . . 30
2.2.14 iSCSI multipathing . . . . . . . . . . . 31
2.2.15 Advanced Copy Services overview . . . . . . . . . . . 31
2.2.16 FlashCopy . . . . . . . . . . . 33

2.3 SVC cluster overview . . . . . . . . . . . 34
2.3.1 Quorum disks . . . . . . . . . . . 35
2.3.2 I/O Groups . . . . . . . . . . . 37
2.3.3 Cache . . . . . . . . . . . 37
2.3.4 Cluster management . . . . . . . . . . . 38
2.3.5 User authentication . . . . . . . . . . . 40
2.3.6 SVC roles and user groups . . . . . . . . . . . 41
2.3.7 SVC local authentication . . . . . . . . . . . 42
2.3.8 SVC remote authentication and single sign-on . . . . . . . . . . . 43

2.4 SVC hardware overview . . . . . . . . . . . 46
2.4.1 Fibre Channel interfaces . . . . . . . . . . . 47
2.4.2 LAN interfaces . . . . . . . . . . . 48

2.5 Solid-state drives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49



2.5.1 Storage bottleneck problem . . . . . . . . . . . 49
2.5.2 Solid-state drive solution . . . . . . . . . . . 50
2.5.3 Solid-state drive market . . . . . . . . . . . 50

2.6 Solid-state drives in the SVC . . . . . . . . . . . 51
2.6.1 Solid-state drive configuration rules . . . . . . . . . . . 52
2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels . . . . . . . . . . . 55
2.6.3 SVC 4.3.1 features . . . . . . . . . . . 56
2.6.4 New with SVC 5.1 . . . . . . . . . . . 56

2.7 Maximum supported configurations . . . . . . . . . . . 58
2.8 Useful SVC links . . . . . . . . . . . 59
2.9 Commonly encountered terms . . . . . . . . . . . 59

Chapter 3. Planning and configuration . . . . . . . . . . . 65
3.1 General planning rules . . . . . . . . . . . 66
3.2 Physical planning . . . . . . . . . . . 67

3.2.1 Preparing your uninterruptible power supply unit environment . . . . . . . . . . . 68
3.2.2 Physical rules . . . . . . . . . . . 69
3.2.3 Cable connections . . . . . . . . . . . 73

3.3 Logical planning . . . . . . . . . . . 74
3.3.1 Management IP addressing plan . . . . . . . . . . . 74
3.3.2 SAN zoning and SAN connections . . . . . . . . . . . 76
3.3.3 iSCSI IP addressing plan . . . . . . . . . . . 81
3.3.4 Back-end storage subsystem configuration . . . . . . . . . . . 84
3.3.5 SVC cluster configuration . . . . . . . . . . . 86
3.3.6 Managed Disk Group configuration . . . . . . . . . . . 88
3.3.7 Virtual disk configuration . . . . . . . . . . . 90
3.3.8 Host mapping (LUN masking) . . . . . . . . . . . 92
3.3.9 Advanced Copy Services . . . . . . . . . . . 93
3.3.10 SAN boot support . . . . . . . . . . . 99
3.3.11 Data migration from a non-virtualized storage subsystem . . . . . . . . . . . 99
3.3.12 SVC configuration backup procedure . . . . . . . . . . . 100

3.4 Performance considerations . . . . . . . . . . . 100
3.4.1 SAN . . . . . . . . . . . 101
3.4.2 Disk subsystems . . . . . . . . . . . 101
3.4.3 SVC . . . . . . . . . . . 102
3.4.4 Performance monitoring . . . . . . . . . . . 102

Chapter 4. SAN Volume Controller initial configuration . . . . . . . . . . . 103
4.1 Managing the cluster . . . . . . . . . . . 104

4.1.1 TCP/IP requirements for SAN Volume Controller . . . . . . . . . . . 104
4.2 System Storage Productivity Center overview . . . . . . . . . . . 107

4.2.1 IBM System Storage Productivity Center hardware . . . . . . . . . . . 108
4.2.2 SVC installation planning information for System Storage Productivity Center . . . . . . . . . . . 109
4.2.3 SVC installation planning information for the HMC . . . . . . . . . . . 110

4.3 Setting up the SVC cluster . . . . . . . . . . . 111
4.3.1 Creating the cluster (first time) using the service panel . . . . . . . . . . . 111
4.3.2 Prerequisites . . . . . . . . . . . 114
4.3.3 Initial configuration using the service panel . . . . . . . . . . . 115

4.4 Adding the cluster to the SSPC or the SVC HMC . . . . . . . . . . . 116
4.4.1 Configuring the GUI . . . . . . . . . . . 117

4.5 Secure Shell overview and CIM Agent . . . . . . . . . . . 125
4.5.1 Generating public and private SSH key pairs using PuTTY . . . . . . . . . . . 126
4.5.2 Uploading the SSH public key to the SVC cluster . . . . . . . . . . . 129



4.5.3 Configuring the PuTTY session for the CLI . . . . . . . . . . . 130
4.5.4 Starting the PuTTY CLI session . . . . . . . . . . . 134
4.5.5 Configuring SSH for AIX clients . . . . . . . . . . . 136

4.6 Using IPv6 . . . . . . . . . . . 136
4.6.1 Migrating a cluster from IPv4 to IPv6 . . . . . . . . . . . 137
4.6.2 Migrating a cluster from IPv6 to IPv4 . . . . . . . . . . . 141

4.7 Upgrading the SVC Console software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

Chapter 5. Host configuration . . . . . . . . . . . 153
5.1 SVC setup . . . . . . . . . . . 154

5.1.1 Fibre Channel and SAN setup overview . . . . . . . . . . . 154
5.1.2 Port mask . . . . . . . . . . . 157

5.2 iSCSI overview . . . . . . . . . . . 158
5.2.1 Initiators and targets . . . . . . . . . . . 158
5.2.2 Nodes . . . . . . . . . . . 158
5.2.3 IQN . . . . . . . . . . . 158

5.3 VDisk discovery . . . . . . . . . . . 159
5.4 Authentication . . . . . . . . . . . 160
5.5 AIX-specific information . . . . . . . . . . . 162

5.5.1 Configuring the AIX host . . . . . . . . . . . 162
5.5.2 Operating system versions and maintenance levels . . . . . . . . . . . 162
5.5.3 HBAs for IBM System p hosts . . . . . . . . . . . 162
5.5.4 Configuring for fast fail and dynamic tracking . . . . . . . . . . . 163
5.5.5 Subsystem Device Driver (SDD) Path Control Module (SDDPCM) . . . . . . . . . . . 165
5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3 . . . . . . . . . . . 167
5.5.7 Using SDD . . . . . . . . . . . 170
5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD . . . . . . . . . . . 172
5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM . . . . . . . . . . . 172
5.5.10 Using SDDPCM . . . . . . . . . . . 176
5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM . . . . . . . . . . . 177
5.5.12 Expanding an AIX volume . . . . . . . . . . . 177
5.5.13 Removing an SVC volume on AIX . . . . . . . . . . . 181
5.5.14 Running SVC commands from an AIX host system . . . . . . . . . . . 181

5.6 Windows-specific information . . . . . . . . . . . 182
5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows Server 2008 hosts . . . . . . . . . . . 182
5.6.2 Configuring Windows . . . . . . . . . . . 182
5.6.3 Hardware lists, device driver, HBAs, and firmware levels . . . . . . . . . . . 183
5.6.4 Host adapter installation and configuration . . . . . . . . . . . 183
5.6.5 Changing the disk timeout on Microsoft Windows Server . . . . . . . . . . . 185
5.6.6 Installing the SDD driver on Windows . . . . . . . . . . . 185
5.6.7 Installing the SDDDSM driver on Windows . . . . . . . . . . . 188

5.7 Discovering assigned VDisks in Windows Server 2000 and Windows 2003 Server . . . . . . . . . . . 190
5.7.1 Extending a Windows Server 2000 or Windows 2003 Server volume . . . . . . . . . . . 195

5.8 Example configuration of attaching an SVC to a Windows Server 2008 host . . . . . . . . . . . 200
5.8.1 Installing SDDDSM on a Windows Server 2008 host . . . . . . . . . . . 200
5.8.2 Installing SDDDSM . . . . . . . . . . . 203
5.8.3 Attaching SVC VDisks to Windows Server 2008 . . . . . . . . . . . 205
5.8.4 Extending a Windows Server 2008 volume . . . . . . . . . . . 211
5.8.5 Removing a disk on Windows . . . . . . . . . . . 211

5.9 Using the SVC CLI from a Windows host . . . . . . . . . . . 214
5.10 Microsoft Volume Shadow Copy . . . . . . . . . . . 215

5.10.1 Installation overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216



5.10.2 System requirements for the IBM System Storage hardware provider . . . . . . . . . . . 216
5.10.3 Installing the IBM System Storage hardware provider . . . . . . . . . . . 216
5.10.4 Verifying the installation . . . . . . . . . . . 220
5.10.5 Creating the free and reserved pools of volumes . . . . . . . . . . . 221
5.10.6 Changing the configuration parameters . . . . . . . . . . . 222

5.11 Specific Linux (on Intel) information . . . . . . . . . . . 225
5.11.1 Configuring the Linux host . . . . . . . . . . . 225
5.11.2 Configuration information . . . . . . . . . . . 225
5.11.3 Disabling automatic Linux system updates . . . . . . . . . . . 225
5.11.4 Setting queue depth with QLogic HBAs . . . . . . . . . . . 226
5.11.5 Multipathing in Linux . . . . . . . . . . . 226
5.11.6 Creating and preparing the SDD volumes for use . . . . . . . . . . . 231
5.11.7 Using the operating system MPIO . . . . . . . . . . . 233
5.11.8 Creating and preparing MPIO volumes for use . . . . . . . . . . . 233

5.12 VMware configuration information . . . . . . . . . . . 237
5.12.1 Configuring VMware hosts . . . . . . . . . . . 238
5.12.2 Operating system versions and maintenance levels . . . . . . . . . . . 238
5.12.3 Guest operating systems . . . . . . . . . . . 238
5.12.4 HBAs for hosts running VMware . . . . . . . . . . . 238
5.12.5 Multipath solutions supported . . . . . . . . . . . 239
5.12.6 VMware storage and zoning recommendations . . . . . . . . . . . 240
5.12.7 Setting the HBA timeout for failover in VMware . . . . . . . . . . . 241
5.12.8 Multipathing in ESX . . . . . . . . . . . 242
5.12.9 Attaching VMware to VDisks . . . . . . . . . . . 242
5.12.10 VDisk naming in VMware . . . . . . . . . . . 245
5.12.11 Setting the Microsoft guest operating system timeout . . . . . . . . . . . 246
5.12.12 Extending a VMFS volume . . . . . . . . . . . 246
5.12.13 Removing a datastore from an ESX host . . . . . . . . . . . 248

5.13 Sun Solaris support information . . . . . . . . . . . 249
5.13.1 Operating system versions and maintenance levels . . . . . . . . . . . 249
5.13.2 SDD dynamic pathing . . . . . . . . . . . 249

5.14 Hewlett-Packard UNIX configuration information . . . . . . . . . . . 250
5.14.1 Operating system versions and maintenance levels . . . . . . . . . . . 250
5.14.2 Multipath solutions supported . . . . . . . . . . . 250
5.14.3 Co-existence of SDD and PV Links . . . . . . . . . . . 250
5.14.4 Using an SVC VDisk as a cluster lock disk . . . . . . . . . . . 251
5.14.5 Support for HP-UX with greater than eight LUNs . . . . . . . . . . . 251

5.15 Using SDDDSM, SDDPCM, and SDD Web interface . . . . . . . . . . . 251
5.16 Calculating the queue depth . . . . . . . . . . . 252
5.17 Further sources of information . . . . . . . . . . . 253

5.17.1 Publications containing SVC storage subsystem attachment guidelines . . . . . 253

Chapter 6. Advanced Copy Services . . . . . . . . . . . 255
6.1 FlashCopy . . . . . . . . . . . 256

6.1.1 Business requirement . . . . . . . . . . . 256
6.1.2 Moving and migrating data . . . . . . . . . . . 256
6.1.3 Backup . . . . . . . . . . . 257
6.1.4 Restore . . . . . . . . . . . 257
6.1.5 Application testing . . . . . . . . . . . 257
6.1.6 SVC FlashCopy features . . . . . . . . . . . 257

6.2 Reverse FlashCopy . . . . . . . . . . . 258
6.2.1 FlashCopy and Tivoli Storage Manager . . . . . . . . . . . 259

6.3 How FlashCopy works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261



6.4 Implementing SVC FlashCopy . . . . . . . . . . . 262
6.4.1 FlashCopy mappings . . . . . . . . . . . 262
6.4.2 Multiple Target FlashCopy . . . . . . . . . . . 263
6.4.3 Consistency groups . . . . . . . . . . . 264
6.4.4 FlashCopy indirection layer . . . . . . . . . . . 266
6.4.5 Grains and the FlashCopy bitmap . . . . . . . . . . . 266
6.4.6 Interaction and dependency between Multiple Target FlashCopy mappings . . . . . . . . . . . 267
6.4.7 Summary of the FlashCopy indirection layer algorithm . . . . . . . . . . . 269
6.4.8 Interaction with the cache . . . . . . . . . . . 269
6.4.9 FlashCopy rules . . . . . . . . . . . 270
6.4.10 FlashCopy and image mode disks . . . . . . . . . . . 270
6.4.11 FlashCopy mapping events . . . . . . . . . . . 271
6.4.12 FlashCopy mapping states . . . . . . . . . . . 274
6.4.13 Space-efficient FlashCopy . . . . . . . . . . . 276
6.4.14 Background copy . . . . . . . . . . . 277
6.4.15 Synthesis . . . . . . . . . . . 278
6.4.16 Serialization of I/O by FlashCopy . . . . . . . . . . . 278
6.4.17 Error handling . . . . . . . . . . . 278
6.4.18 Asynchronous notifications . . . . . . . . . . . 280
6.4.19 Interoperation with Metro Mirror and Global Mirror . . . . . . . . . . . 280
6.4.20 Recovering data from FlashCopy . . . . . . . . . . . 281

6.5 Metro Mirror . . . . . . . . . . . 281
6.5.1 Metro Mirror overview . . . . . . . . . . . 281
6.5.2 Remote copy techniques . . . . . . . . . . . 282
6.5.3 SVC Metro Mirror features . . . . . . . . . . . 283
6.5.4 Multiple Cluster Mirroring . . . . . . . . . . . 284
6.5.5 Metro Mirror relationship . . . . . . . . . . . 287
6.5.6 Importance of write ordering . . . . . . . . . . . 288
6.5.7 How Metro Mirror works . . . . . . . . . . . 291
6.5.8 Metro Mirror process . . . . . . . . . . . 292
6.5.9 Methods of synchronization . . . . . . . . . . . 292
6.5.10 State overview . . . . . . . . . . . 295
6.5.11 Detailed states . . . . . . . . . . . 298
6.5.12 Practical use of Metro Mirror . . . . . . . . . . . 301
6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . . . . . . . 302
6.5.14 Metro Mirror configuration limits . . . . . . . . . . . 302

6.6 Metro Mirror commands . . . . . . . . . . . 303
6.6.1 Listing available SVC cluster partners . . . . . . . . . . . 303
6.6.2 Creating the SVC cluster partnership . . . . . . . . . . . 304
6.6.3 Creating a Metro Mirror consistency group . . . . . . . . . . . 304
6.6.4 Creating a Metro Mirror relationship . . . . . . . . . . . 305
6.6.5 Changing a Metro Mirror relationship . . . . . . . . . . . 305
6.6.6 Changing a Metro Mirror consistency group . . . . . . . . . . . 306
6.6.7 Starting a Metro Mirror relationship . . . . . . . . . . . 306
6.6.8 Stopping a Metro Mirror relationship . . . . . . . . . . . 306
6.6.9 Starting a Metro Mirror consistency group . . . . . . . . . . . 307
6.6.10 Stopping a Metro Mirror consistency group . . . . . . . . . . . 307
6.6.11 Deleting a Metro Mirror relationship . . . . . . . . . . . 307
6.6.12 Deleting a Metro Mirror consistency group . . . . . . . . . . . 308
6.6.13 Reversing a Metro Mirror relationship . . . . . . . . . . . 308
6.6.14 Reversing a Metro Mirror consistency group . . . . . . . . . . . 308
6.6.15 Background copy . . . . . . . . . . . 309

6.7 Global Mirror overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309



6.7.1 Intracluster Global Mirror . . . . . . . . . . . 309
6.7.2 Intercluster Global Mirror . . . . . . . . . . . 309

6.8 Remote copy techniques . . . . . . . . . . . 310
6.8.1 Asynchronous remote copy . . . . . . . . . . . 310
6.8.2 SVC Global Mirror features . . . . . . . . . . . 311

6.9 Global Mirror relationships . . . . . . . . . . . 313
6.9.1 Global Mirror relationship between primary and secondary VDisks . . . . . . . . . . . 313
6.9.2 Importance of write ordering . . . . . . . . . . . 313
6.9.3 Dependent writes that span multiple VDisks . . . . . . . . . . . 314
6.9.4 Global Mirror consistency groups . . . . . . . . . . . 315

6.10 Global Mirror . . . . . . . . . . 317
  6.10.1 Intercluster communication and zoning . . . . . . . . . . 317
  6.10.2 SVC cluster partnership . . . . . . . . . . 317
  6.10.3 Maintenance of the intercluster link . . . . . . . . . . 317
  6.10.4 Distribution of work among nodes . . . . . . . . . . 318
  6.10.5 Background copy performance . . . . . . . . . . 318
  6.10.6 Space-efficient background copy . . . . . . . . . . 319

6.11 Global Mirror process . . . . . . . . . . 319
  6.11.1 Methods of synchronization . . . . . . . . . . 319
  6.11.2 State overview . . . . . . . . . . 322
  6.11.3 Detailed states . . . . . . . . . . 324
  6.11.4 Practical use of Global Mirror . . . . . . . . . . 328
  6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions . . . . . . . . . . 329
  6.11.6 Global Mirror configuration limits . . . . . . . . . . 329

6.12 Global Mirror commands . . . . . . . . . . 329
  6.12.1 Listing the available SVC cluster partners . . . . . . . . . . 330
  6.12.2 Creating an SVC cluster partnership . . . . . . . . . . 333
  6.12.3 Creating a Global Mirror consistency group . . . . . . . . . . 334
  6.12.4 Creating a Global Mirror relationship . . . . . . . . . . 334
  6.12.5 Changing a Global Mirror relationship . . . . . . . . . . 334
  6.12.6 Changing a Global Mirror consistency group . . . . . . . . . . 335
  6.12.7 Starting a Global Mirror relationship . . . . . . . . . . 335
  6.12.8 Stopping a Global Mirror relationship . . . . . . . . . . 335
  6.12.9 Starting a Global Mirror consistency group . . . . . . . . . . 336
  6.12.10 Stopping a Global Mirror consistency group . . . . . . . . . . 336
  6.12.11 Deleting a Global Mirror relationship . . . . . . . . . . 336
  6.12.12 Deleting a Global Mirror consistency group . . . . . . . . . . 337
  6.12.13 Reversing a Global Mirror relationship . . . . . . . . . . 337
  6.12.14 Reversing a Global Mirror consistency group . . . . . . . . . . 337

Chapter 7. SAN Volume Controller operations using the command-line interface . . . . . . . . . . 339
7.1 Normal operations using CLI . . . . . . . . . . 340

  7.1.1 Command syntax and online help . . . . . . . . . . 340
7.2 Working with managed disks and disk controller systems . . . . . . . . . . 340

  7.2.1 Viewing disk controller details . . . . . . . . . . 340
  7.2.2 Renaming a controller . . . . . . . . . . 341
  7.2.3 Discovery status . . . . . . . . . . 342
  7.2.4 Discovering MDisks . . . . . . . . . . 342
  7.2.5 Viewing MDisk information . . . . . . . . . . 343
  7.2.6 Renaming an MDisk . . . . . . . . . . 344
  7.2.7 Including an MDisk . . . . . . . . . . 345
  7.2.8 Adding MDisks to a managed disk group . . . . . . . . . . 346
  7.2.9 Showing the Managed Disk Group . . . . . . . . . . 346


  7.2.10 Showing MDisks in a managed disk group . . . . . . . . . . 346
  7.2.11 Working with Managed Disk Groups . . . . . . . . . . 346
  7.2.12 Creating a managed disk group . . . . . . . . . . 347
  7.2.13 Viewing Managed Disk Group information . . . . . . . . . . 348
  7.2.14 Renaming a managed disk group . . . . . . . . . . 348
  7.2.15 Deleting a managed disk group . . . . . . . . . . 349
  7.2.16 Removing MDisks from a managed disk group . . . . . . . . . . 349

7.3 Working with hosts . . . . . . . . . . 350
  7.3.1 Creating a Fibre Channel-attached host . . . . . . . . . . 350
  7.3.2 Creating an iSCSI-attached host . . . . . . . . . . 351
  7.3.3 Modifying a host . . . . . . . . . . 353
  7.3.4 Deleting a host . . . . . . . . . . 354
  7.3.5 Adding ports to a defined host . . . . . . . . . . 354
  7.3.6 Deleting ports . . . . . . . . . . 355

7.4 Working with VDisks . . . . . . . . . . 356
  7.4.1 Creating a VDisk . . . . . . . . . . 356
  7.4.2 VDisk information . . . . . . . . . . 358
  7.4.3 Creating a Space-Efficient VDisk . . . . . . . . . . 358
  7.4.4 Creating a VDisk in image mode . . . . . . . . . . 359
  7.4.5 Adding a mirrored VDisk copy . . . . . . . . . . 360
  7.4.6 Splitting a VDisk copy . . . . . . . . . . 363
  7.4.7 Modifying a VDisk . . . . . . . . . . 364
  7.4.8 I/O governing . . . . . . . . . . 365
  7.4.9 Deleting a VDisk . . . . . . . . . . 367
  7.4.10 Expanding a VDisk . . . . . . . . . . 367
  7.4.11 Assigning a VDisk to a host . . . . . . . . . . 368
  7.4.12 Showing VDisk-to-host mapping . . . . . . . . . . 369
  7.4.13 Deleting a VDisk-to-host mapping . . . . . . . . . . 370
  7.4.14 Migrating a VDisk . . . . . . . . . . 370
  7.4.15 Migrating a VDisk to an image mode VDisk . . . . . . . . . . 371
  7.4.16 Shrinking a VDisk . . . . . . . . . . 372
  7.4.17 Showing a VDisk on an MDisk . . . . . . . . . . 373
  7.4.18 Showing VDisks using a managed disk group . . . . . . . . . . 373
  7.4.19 Showing which MDisks are used by a specific VDisk . . . . . . . . . . 374
  7.4.20 Showing from which Managed Disk Group a VDisk has its extents . . . . . . . . . . 374
  7.4.21 Showing the host to which the VDisk is mapped . . . . . . . . . . 375
  7.4.22 Showing the VDisk to which the host is mapped . . . . . . . . . . 376
  7.4.23 Tracing a VDisk from a host back to its physical disk . . . . . . . . . . 376

7.5 Scripting under the CLI for SVC task automation . . . . . . . . . . 378
7.6 SVC advanced operations using the CLI . . . . . . . . . . 378

  7.6.1 Command syntax . . . . . . . . . . 378
  7.6.2 Organizing on window content . . . . . . . . . . 379

7.7 Managing the cluster using the CLI . . . . . . . . . . 380
  7.7.1 Viewing cluster properties . . . . . . . . . . 380
  7.7.2 Changing cluster settings . . . . . . . . . . 381
  7.7.3 Cluster authentication . . . . . . . . . . 381
  7.7.4 iSCSI configuration . . . . . . . . . . 382
  7.7.5 Modifying IP addresses . . . . . . . . . . 383
  7.7.6 Supported IP address formats . . . . . . . . . . 383
  7.7.7 Setting the cluster time zone and time . . . . . . . . . . 384
  7.7.8 Starting statistics collection . . . . . . . . . . 385
  7.7.9 Stopping statistics collection . . . . . . . . . . 386
  7.7.10 Status of copy operation . . . . . . . . . . 386


  7.7.11 Shutting down a cluster . . . . . . . . . . 386
7.8 Nodes . . . . . . . . . . 387

  7.8.1 Viewing node details . . . . . . . . . . 388
  7.8.2 Adding a node . . . . . . . . . . 388
  7.8.3 Renaming a node . . . . . . . . . . 390
  7.8.4 Deleting a node . . . . . . . . . . 390
  7.8.5 Shutting down a node . . . . . . . . . . 390

7.9 I/O Groups . . . . . . . . . . 391
  7.9.1 Viewing I/O Group details . . . . . . . . . . 391
  7.9.2 Renaming an I/O Group . . . . . . . . . . 392
  7.9.3 Adding and removing hostiogrp . . . . . . . . . . 392
  7.9.4 Listing I/O Groups . . . . . . . . . . 393

7.10 Managing authentication . . . . . . . . . . 394
  7.10.1 Managing users using the CLI . . . . . . . . . . 394
  7.10.2 Managing user roles and groups . . . . . . . . . . 395
  7.10.3 Changing a user . . . . . . . . . . 396
  7.10.4 Audit log command . . . . . . . . . . 396

7.11 Managing Copy Services . . . . . . . . . . 397
  7.11.1 FlashCopy operations . . . . . . . . . . 397
  7.11.2 Setting up FlashCopy . . . . . . . . . . 398
  7.11.3 Creating a FlashCopy consistency group . . . . . . . . . . 398
  7.11.4 Creating a FlashCopy mapping . . . . . . . . . . 399
  7.11.5 Preparing (pre-triggering) the FlashCopy mapping . . . . . . . . . . 401
  7.11.6 Preparing (pre-triggering) the FlashCopy consistency group . . . . . . . . . . 402
  7.11.7 Starting (triggering) FlashCopy mappings . . . . . . . . . . 402
  7.11.8 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . 404
  7.11.9 Monitoring the FlashCopy progress . . . . . . . . . . 404
  7.11.10 Stopping the FlashCopy mapping . . . . . . . . . . 405
  7.11.11 Stopping the FlashCopy consistency group . . . . . . . . . . 406
  7.11.12 Deleting the FlashCopy mapping . . . . . . . . . . 406
  7.11.13 Deleting the FlashCopy consistency group . . . . . . . . . . 407
  7.11.14 Migrating a VDisk to a Space-Efficient VDisk . . . . . . . . . . 407
  7.11.15 Reverse FlashCopy . . . . . . . . . . 412
  7.11.16 Split-stopping of FlashCopy maps . . . . . . . . . . 412

7.12 Metro Mirror operation . . . . . . . . . . 413
  7.12.1 Setting up Metro Mirror . . . . . . . . . . 414
  7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . . . . . . 415
  7.12.3 Creating a Metro Mirror consistency group . . . . . . . . . . 416
  7.12.4 Creating the Metro Mirror relationships . . . . . . . . . . 417
  7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . . . . . . . . 418
  7.12.6 Starting Metro Mirror . . . . . . . . . . 419
  7.12.7 Starting a Metro Mirror consistency group . . . . . . . . . . 420
  7.12.8 Monitoring the background copy progress . . . . . . . . . . 420
  7.12.9 Stopping and restarting Metro Mirror . . . . . . . . . . 422
  7.12.10 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . 422
  7.12.11 Stopping a Metro Mirror consistency group . . . . . . . . . . 423
  7.12.12 Restarting a Metro Mirror relationship in the Idling state . . . . . . . . . . 424
  7.12.13 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . 424
  7.12.14 Changing copy direction for Metro Mirror . . . . . . . . . . 425
  7.12.15 Switching copy direction for a Metro Mirror relationship . . . . . . . . . . 425
  7.12.16 Switching copy direction for a Metro Mirror consistency group . . . . . . . . . . 426
  7.12.17 Creating an SVC partnership among many clusters . . . . . . . . . . 427
  7.12.18 Star configuration partnership . . . . . . . . . . 428


7.13 Global Mirror operation . . . . . . . . . . 434
  7.13.1 Setting up Global Mirror . . . . . . . . . . 435
  7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4 . . . . . . . . . . 436
  7.13.3 Changing link tolerance and cluster delay simulation . . . . . . . . . . 437
  7.13.4 Creating a Global Mirror consistency group . . . . . . . . . . 439
  7.13.5 Creating Global Mirror relationships . . . . . . . . . . 439
  7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . . . . . . . . 441
  7.13.7 Starting Global Mirror . . . . . . . . . . 441
  7.13.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . 441
  7.13.9 Starting a Global Mirror consistency group . . . . . . . . . . 442
  7.13.10 Monitoring background copy progress . . . . . . . . . . 443
  7.13.11 Stopping and restarting Global Mirror . . . . . . . . . . 444
  7.13.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . 444
  7.13.13 Stopping a Global Mirror consistency group . . . . . . . . . . 445
  7.13.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . 446
  7.13.15 Restarting a Global Mirror consistency group in the Idling state . . . . . . . . . . 446
  7.13.16 Changing direction for Global Mirror . . . . . . . . . . 447
  7.13.17 Switching copy direction for a Global Mirror relationship . . . . . . . . . . 447
  7.13.18 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . 448

7.14 Service and maintenance . . . . . . . . . . 449
  7.14.1 Upgrading software . . . . . . . . . . 450
  7.14.2 Running maintenance procedures . . . . . . . . . . 456
  7.14.3 Setting up SNMP notification . . . . . . . . . . 458
  7.14.4 Setting syslog event notification . . . . . . . . . . 458
  7.14.5 Configuring error notification using an e-mail server . . . . . . . . . . 459
  7.14.6 Analyzing the error log . . . . . . . . . . 460
  7.14.7 License settings . . . . . . . . . . 461
  7.14.8 Listing dumps . . . . . . . . . . 462
  7.14.9 Backing up the SVC cluster configuration . . . . . . . . . . 466
  7.14.10 Restoring the SVC cluster configuration . . . . . . . . . . 467
  7.14.11 Deleting configuration backup . . . . . . . . . . 468

7.15 SAN troubleshooting and data collection . . . . . . . . . . 468
7.16 T3 recovery process . . . . . . . . . . 468

Chapter 8. SAN Volume Controller operations using the GUI . . . . . . . . . . 469
8.1 SVC normal operations using the GUI . . . . . . . . . . 470

  8.1.1 Organizing on window content . . . . . . . . . . 470
  8.1.2 Documentation . . . . . . . . . . 475
  8.1.3 Help . . . . . . . . . . 475
  8.1.4 General housekeeping . . . . . . . . . . 476
  8.1.5 Viewing progress . . . . . . . . . . 476

8.2 Working with managed disks . . . . . . . . . . 477
  8.2.1 Viewing disk controller details . . . . . . . . . . 477
  8.2.2 Renaming a disk controller . . . . . . . . . . 478
  8.2.3 Discovery status . . . . . . . . . . 479
  8.2.4 Managed disks . . . . . . . . . . 479
  8.2.5 MDisk information . . . . . . . . . . 479
  8.2.6 Renaming an MDisk . . . . . . . . . . 480
  8.2.7 Discovering MDisks . . . . . . . . . . 481
  8.2.8 Including an MDisk . . . . . . . . . . 481
  8.2.9 Showing a VDisk using a certain MDisk . . . . . . . . . . 482

8.3 Working with Managed Disk Groups . . . . . . . . . . 483
  8.3.1 Viewing MDisk group information . . . . . . . . . . 483


  8.3.2 Creating MDGs . . . . . . . . . . 484
  8.3.3 Renaming a managed disk group . . . . . . . . . . 486
  8.3.4 Deleting a managed disk group . . . . . . . . . . 487
  8.3.5 Adding MDisks . . . . . . . . . . 488
  8.3.6 Removing MDisks . . . . . . . . . . 489
  8.3.7 Displaying MDisks . . . . . . . . . . 490
  8.3.8 Showing MDisks in this group . . . . . . . . . . 491
  8.3.9 Showing the VDisks that are associated with an MDisk group . . . . . . . . . . 492

8.4 Working with hosts . . . . . . . . . . 493
  8.4.1 Host information . . . . . . . . . . 494
  8.4.2 Creating a host . . . . . . . . . . 495
  8.4.3 Fibre Channel-attached hosts . . . . . . . . . . 495
  8.4.4 iSCSI-attached hosts . . . . . . . . . . 497
  8.4.5 Modifying a host . . . . . . . . . . 499
  8.4.6 Deleting a host . . . . . . . . . . 500
  8.4.7 Adding ports . . . . . . . . . . 501
  8.4.8 Deleting ports . . . . . . . . . . 502

8.5 Working with VDisks . . . . . . . . . . 504
  8.5.1 Using the Viewing VDisks using MDisk window . . . . . . . . . . 504
  8.5.2 VDisk information . . . . . . . . . . 505
  8.5.3 Creating a VDisk . . . . . . . . . . 505
  8.5.4 Creating a Space-Efficient VDisk with autoexpand . . . . . . . . . . 509
  8.5.5 Deleting a VDisk . . . . . . . . . . 513
  8.5.6 Deleting a VDisk-to-host mapping . . . . . . . . . . 514
  8.5.7 Expanding a VDisk . . . . . . . . . . 514
  8.5.8 Assigning a VDisk to a host . . . . . . . . . . 516
  8.5.9 Modifying a VDisk . . . . . . . . . . 517
  8.5.10 Migrating a VDisk . . . . . . . . . . 518
  8.5.11 Migrating a VDisk to an image mode VDisk . . . . . . . . . . 519
  8.5.12 Creating a VDisk Mirror from an existing VDisk . . . . . . . . . . 521
  8.5.13 Creating a mirrored VDisk . . . . . . . . . . 523
  8.5.14 Creating a VDisk in image mode . . . . . . . . . . 526
  8.5.15 Creating an image mode mirrored VDisk . . . . . . . . . . 529
  8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring . . . . . . . . . . 532
  8.5.17 Deleting a VDisk copy from a VDisk mirror . . . . . . . . . . 534
  8.5.18 Splitting a VDisk copy . . . . . . . . . . 535
  8.5.19 Shrinking a VDisk . . . . . . . . . . 536
  8.5.20 Showing the MDisks that are used by a VDisk . . . . . . . . . . 537
  8.5.21 Showing the MDG to which a VDisk belongs . . . . . . . . . . 538
  8.5.22 Showing the host to which the VDisk is mapped . . . . . . . . . . 538
  8.5.23 Showing capacity information . . . . . . . . . . 538
  8.5.24 Showing VDisks mapped to a particular host . . . . . . . . . . 539
  8.5.25 Deleting VDisks from a host . . . . . . . . . . 540

8.6 Working with solid-state drives . . . . . . . . . . 540
  8.6.1 Solid-state drive introduction . . . . . . . . . . 540

8.7 SVC advanced operations using the GUI . . . . . . . . . . 543
  8.7.1 Organizing on window content . . . . . . . . . . 543

8.8 Managing the cluster using the GUI . . . . . . . . . . 544
  8.8.1 Viewing cluster properties . . . . . . . . . . 544
  8.8.2 Modifying IP addresses . . . . . . . . . . 545
  8.8.3 Starting the statistics collection . . . . . . . . . . 547
  8.8.4 Stopping the statistics collection . . . . . . . . . . 548
  8.8.5 Metro Mirror and Global Mirror . . . . . . . . . . 549


  8.8.6 iSCSI . . . . . . . . . . 549
  8.8.7 Setting the cluster time and configuring the Network Time Protocol server . . . . . . . . . . 549
  8.8.8 Shutting down a cluster . . . . . . . . . . 550

8.9 Managing authentication . . . . . . . . . . 552
  8.9.1 Modifying the current user . . . . . . . . . . 553
  8.9.2 Creating a user . . . . . . . . . . 554
  8.9.3 Modifying a user role . . . . . . . . . . 556
  8.9.4 Deleting a user role . . . . . . . . . . 556
  8.9.5 User groups . . . . . . . . . . 557
  8.9.6 Cluster password . . . . . . . . . . 558
  8.9.7 Remote authentication . . . . . . . . . . 558

8.10 Working with nodes using the GUI . . . . . . . . . . 559
  8.10.1 I/O Groups . . . . . . . . . . 559
  8.10.2 Renaming an I/O Group . . . . . . . . . . 559
  8.10.3 Adding nodes to the cluster . . . . . . . . . . 560
  8.10.4 Configuring iSCSI ports . . . . . . . . . . 563

8.11 Managing Copy Services . . . . . . . . . . 566
8.12 FlashCopy operations using the GUI . . . . . . . . . . 566
8.13 Creating a FlashCopy consistency group . . . . . . . . . . 566

  8.13.1 Creating a FlashCopy mapping . . . . . . . . . . 568
  8.13.2 Preparing (pre-triggering) the FlashCopy . . . . . . . . . . 573
  8.13.3 Starting (triggering) FlashCopy mappings . . . . . . . . . . 574
  8.13.4 Starting (triggering) a FlashCopy consistency group . . . . . . . . . . 574
  8.13.5 Monitoring the FlashCopy progress . . . . . . . . . . 575
  8.13.6 Stopping the FlashCopy consistency group . . . . . . . . . . 576
  8.13.7 Deleting the FlashCopy mapping . . . . . . . . . . 578
  8.13.8 Deleting the FlashCopy consistency group . . . . . . . . . . 579
  8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk . . . . . . . . . . 580
  8.13.10 Reversing and splitting a FlashCopy mapping . . . . . . . . . . 580

8.14 Metro Mirror operations . . . . . . . . . . 582
8.14.1 Cluster partnership . . . . . . . . . . 582
8.14.2 Setting up Metro Mirror . . . . . . . . . . 584
8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . . . 585
8.14.4 Creating a Metro Mirror consistency group . . . . . . . . . . 587
8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri . . . . . . . . . . 590
8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri . . . . . . . . . . 594
8.14.7 Starting Metro Mirror . . . . . . . . . . 597
8.14.8 Starting a stand-alone Metro Mirror relationship . . . . . . . . . . 597
8.14.9 Starting a Metro Mirror consistency group . . . . . . . . . . 598
8.14.10 Monitoring background copy progress . . . . . . . . . . 599
8.14.11 Stopping and restarting Metro Mirror . . . . . . . . . . 599
8.14.12 Stopping a stand-alone Metro Mirror relationship . . . . . . . . . . 600
8.14.13 Stopping a Metro Mirror consistency group . . . . . . . . . . 600
8.14.14 Restarting a Metro Mirror relationship in the Idling state . . . . . . . . . . 602
8.14.15 Restarting a Metro Mirror consistency group in the Idling state . . . . . . . . . . 603
8.14.16 Changing copy direction for Metro Mirror . . . . . . . . . . 604
8.14.17 Switching copy direction for a Metro Mirror consistency group . . . . . . . . . . 605
8.14.18 Switching the copy direction for a Metro Mirror relationship . . . . . . . . . . 606

8.15 Global Mirror operations . . . . . . . . . . 607
8.15.1 Setting up Global Mirror . . . . . . . . . . 608
8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2 . . . . . . . . . . 609
8.15.3 Global Mirror link tolerance and delay simulations . . . . . . . . . . 612
8.15.4 Creating a Global Mirror consistency group . . . . . . . . . . 614

Contents xiii


8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri . . . . . . . . . . 617
8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri . . . . . . . . . . 620
8.15.7 Starting Global Mirror . . . . . . . . . . 624
8.15.8 Starting a stand-alone Global Mirror relationship . . . . . . . . . . 624
8.15.9 Starting a Global Mirror consistency group . . . . . . . . . . 625
8.15.10 Monitoring background copy progress . . . . . . . . . . 626
8.15.11 Stopping and restarting Global Mirror . . . . . . . . . . 627
8.15.12 Stopping a stand-alone Global Mirror relationship . . . . . . . . . . 627
8.15.13 Stopping a Global Mirror consistency group . . . . . . . . . . 628
8.15.14 Restarting a Global Mirror relationship in the Idling state . . . . . . . . . . 630
8.15.15 Restarting a Global Mirror consistency group in the Idling state . . . . . . . . . . 631
8.15.16 Changing copy direction for Global Mirror . . . . . . . . . . 632
8.15.17 Switching copy direction for a Global Mirror consistency group . . . . . . . . . . 634

8.16 Service and maintenance . . . . . . . . . . 635
8.17 Upgrading software . . . . . . . . . . 636

8.17.1 Package numbering and version . . . . . . . . . . 636
8.17.2 Upgrade status utility . . . . . . . . . . 636
8.17.3 Precautions before upgrade . . . . . . . . . . 637
8.17.4 SVC software upgrade test utility . . . . . . . . . . 638
8.17.5 Upgrade procedure . . . . . . . . . . 639
8.17.6 Running maintenance procedures . . . . . . . . . . 645
8.17.7 Setting up error notification . . . . . . . . . . 647
8.17.8 Setting syslog event notification . . . . . . . . . . 649
8.17.9 Set e-mail features . . . . . . . . . . 651
8.17.10 Analyzing the error log . . . . . . . . . . 655
8.17.11 License settings . . . . . . . . . . 659
8.17.12 Viewing the license settings log . . . . . . . . . . 662
8.17.13 Dumping the cluster configuration . . . . . . . . . . 663
8.17.14 Listing dumps . . . . . . . . . . 663
8.17.15 Setting up a quorum disk . . . . . . . . . . 666

8.18 Backing up the SVC configuration . . . . . . . . . . 668
8.18.1 Backup procedure . . . . . . . . . . 669
8.18.2 Saving the SVC configuration . . . . . . . . . . 670
8.18.3 Restoring the SVC configuration . . . . . . . . . . 672
8.18.4 Deleting the configuration backup files . . . . . . . . . . 672
8.18.5 Fabrics . . . . . . . . . . 672
8.18.6 Common Information Model object manager log configuration . . . . . . . . . . 673

Chapter 9. Data migration . . . . . . . . . . 675
9.1 Migration overview . . . . . . . . . . 676
9.2 Migration operations . . . . . . . . . . 676

9.2.1 Migrating multiple extents (within an MDG) . . . . . . . . . . 676
9.2.2 Migrating extents off of an MDisk that is being deleted . . . . . . . . . . 677
9.2.3 Migrating a VDisk between MDGs . . . . . . . . . . 678
9.2.4 Migrating the VDisk to image mode . . . . . . . . . . 680
9.2.5 Migrating a VDisk between I/O Groups . . . . . . . . . . 680
9.2.6 Monitoring the migration progress . . . . . . . . . . 681

9.3 Functional overview of migration . . . . . . . . . . 682
9.3.1 Parallelism . . . . . . . . . . 682
9.3.2 Error handling . . . . . . . . . . 683
9.3.3 Migration algorithm . . . . . . . . . . 683

9.4 Migrating data from an image mode VDisk . . . . . . . . . . 685
9.4.1 Image mode VDisk migration concept . . . . . . . . . . 685

xiv Implementing the IBM System Storage SAN Volume Controller V5.1


9.4.2 Migration tips . . . . . . . . . . 687
9.5 Data migration for Windows using the SVC GUI . . . . . . . . . . 687

9.5.1 Windows Server 2008 host system connected directly to the DS4700 . . . . . . . . . . 688
9.5.2 Adding the SVC between the host system and the DS4700 . . . . . . . . . . 690
9.5.3 Putting the migrated disks onto an online Windows Server 2008 host . . . . . . . . . . 698
9.5.4 Migrating the VDisk from image mode to managed mode . . . . . . . . . . 700
9.5.5 Migrating the VDisk from managed mode to image mode . . . . . . . . . . 702
9.5.6 Migrating the VDisk from image mode to image mode . . . . . . . . . . 705
9.5.7 Free the data from the SVC . . . . . . . . . . 709
9.5.8 Put the free disks online on Windows Server 2008 . . . . . . . . . . 711

9.6 Migrating Linux SAN disks to SVC disks . . . . . . . . . . 712
9.6.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 714
9.6.2 Preparing your SVC to virtualize disks . . . . . . . . . . 715
9.6.3 Move the LUNs to the SVC . . . . . . . . . . 719
9.6.4 Migrate the image mode VDisks to managed MDisks . . . . . . . . . . 722
9.6.5 Preparing to migrate from the SVC . . . . . . . . . . 725
9.6.6 Migrate the VDisks to image mode VDisks . . . . . . . . . . 728
9.6.7 Removing the LUNs from the SVC . . . . . . . . . . 729

9.7 Migrating ESX SAN disks to SVC disks . . . . . . . . . . 732
9.7.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 733
9.7.2 Preparing your SVC to virtualize disks . . . . . . . . . . 735
9.7.3 Move the LUNs to the SVC . . . . . . . . . . 739
9.7.4 Migrating the image mode VDisks . . . . . . . . . . 742
9.7.5 Preparing to migrate from the SVC . . . . . . . . . . 745
9.7.6 Migrating the managed VDisks to image mode VDisks . . . . . . . . . . 747
9.7.7 Remove the LUNs from the SVC . . . . . . . . . . 748

9.8 Migrating AIX SAN disks to SVC disks . . . . . . . . . . 751
9.8.1 Connecting the SVC to your SAN fabric . . . . . . . . . . 753
9.8.2 Preparing your SVC to virtualize disks . . . . . . . . . . 754
9.8.3 Moving the LUNs to the SVC . . . . . . . . . . 759
9.8.4 Migrating image mode VDisks to VDisks . . . . . . . . . . 761
9.8.5 Preparing to migrate from the SVC . . . . . . . . . . 763
9.8.6 Migrating the managed VDisks . . . . . . . . . . 766
9.8.7 Removing the LUNs from the SVC . . . . . . . . . . 767

9.9 Using SVC for storage migration . . . . . . . . . . 770
9.10 Using VDisk Mirroring and Space-Efficient VDisks together . . . . . . . . . . 771

9.10.1 Zero detect feature . . . . . . . . . . 771
9.10.2 VDisk Mirroring With Space-Efficient VDisks . . . . . . . . . . 773
9.10.3 Metro Mirror and Space-Efficient VDisk . . . . . . . . . . 779

Appendix A. Scripting . . . . . . . . . . 785
Scripting structure . . . . . . . . . . 786
Automated virtual disk creation . . . . . . . . . . 787
SVC tree . . . . . . . . . . 790
Scripting alternatives . . . . . . . . . . 797

Appendix B. Node replacement . . . . . . . . . . 799
Replacing nodes nondisruptively . . . . . . . . . . 800
Expanding an existing SVC cluster . . . . . . . . . . 804
Moving VDisks to a new I/O Group . . . . . . . . . . 806
Replacing nodes disruptively (rezoning the SAN) . . . . . . . . . . 807

Appendix C. Performance data and statistics gathering . . . . . . . . . . 809
SVC performance overview . . . . . . . . . . 810


Performance considerations . . . . . . . . . . 810
SVC . . . . . . . . . . 810

Performance monitoring . . . . . . . . . . 810
Collecting performance statistics . . . . . . . . . . 810
Performance data collection and TotalStorage Productivity Center for Disk . . . . . . . . . . 812

Related publications . . . . . . . . . . 815
IBM Redbooks publications . . . . . . . . . . 815
Other publications . . . . . . . . . . 815
Online resources . . . . . . . . . . 816
How to get IBM Redbooks publications . . . . . . . . . . 817
Help from IBM . . . . . . . . . . 817

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 819


Notices

This information was developed for products and services offered in the U.S.A.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not give you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.

The following paragraph does not apply to the United Kingdom or any other country where such provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM Web sites are provided for convenience only and do not in any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the materials for this IBM product and use of those Web sites is at your own risk.

IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring any obligation to you.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.

© Copyright IBM Corp. 2010. All rights reserved. xvii


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. These and other IBM trademarked terms are marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks of the International Business Machines Corporation in the United States, other countries, or both:

AIX 5L™, AIX®, developerWorks®, DS4000®, DS6000™, DS8000®, Enterprise Storage Server®, FlashCopy®, GPFS™, IBM Systems Director Active Energy Manager™, IBM®, Power Systems™, Redbooks®, Redbooks (logo)®, Solid®, System i®, System p®, System Storage™, System Storage DS®, System x®, System z®, Tivoli®, TotalStorage®, WebSphere®, XIV®, z/OS®

The following terms are trademarks of other companies:

Emulex, and the Emulex logo are trademarks or registered trademarks of Emulex Corporation.

Novell, SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and other countries.

QLogic, and the QLogic logo are registered trademarks of QLogic Corporation. SANblade is a registered trademark in the United States.

ACS, Red Hat, and the Shadowman logo are trademarks or registered trademarks of Red Hat, Inc. in the U.S. and other countries.

VMotion, VMware, the VMware "boxes" logo and design are registered trademarks or trademarks of VMware, Inc. in the United States and/or other jurisdictions.

Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.

Microsoft, Windows NT, Windows, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.

Intel Xeon, Intel, Pentium, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

UNIX is a registered trademark of The Open Group in the United States and other countries.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.


Summary of changes

This section describes the technical changes made in this edition of the book and in previous editions. This edition might also include minor corrections and editorial changes that are not identified.

Summary of Changes
for SG24-6423-07
for Implementing the IBM System Storage SAN Volume Controller V5.1
as created or updated on March 30, 2010.

March 2010, Eighth Edition

This revision reflects the addition, deletion, or modification of new and changed information described next.

New information
- Added iSCSI information
- Added Solid® State Drive information

Changed information
- Removed duplicate information
- Consolidated chapters
- Removed dated material


Preface

This IBM® Redbooks® publication is a detailed technical guide to the IBM System Storage™ SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes visible to hosts and applications to physical volumes on storage devices. Each server within the storage area network (SAN) has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. Therefore, volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves management of information at the “block” level in a network, enabling applications and servers to share storage devices on a network. This book is intended to allow you to implement the SVC at a 5.1.0 release level with a minimum of effort.

The team who wrote this book

This book was produced by a team of specialists from around the world working at Brocade Communications, San Jose, and the International Technical Support Organization, San Jose Center.

Jon Tate is a Project Manager for IBM System Storage SAN Solutions at the International Technical Support Organization, San Jose Center. Before joining the ITSO in 1999, he worked in the IBM Technical Support Center, providing Level 2 and 3 support for IBM storage products. Jon has 24 years of experience in storage software and management, services, and support, and is both an IBM Certified IT Specialist and an IBM SAN Certified Specialist. He is also the UK Chairman of the Storage Networking Industry Association.

Pall Beck is a SAN Technical Team Lead in IBM Nordic. He has 12 years of experience working with storage and joined the IBM ITD DK in 2005. Prior to working for IBM in Denmark, he worked as an IBM service representative performing hardware installations and repairs for IBM System i®, System p®, and System z® in Iceland. As a SAN Technical Team Lead for ITD DK, he led a team of administrators running several of the largest SAN installations in Europe. His current position involves the creation and implementation of operational standards and aligning best practices throughout the Nordics. Pall has a diploma as an Electronic Technician from Odense Tekniske Skole in Denmark and IR in Reykjavik, Iceland.

Angelo Bernasconi is a Certified ITS Senior Storage and SAN Software Specialist in IBM Italy. He has 24 years of experience in the delivery of maintenance and professional services for IBM Enterprise clients in z/OS® and open systems. He holds a degree in Electronics and his areas of expertise include storage hardware, SAN, storage virtualization, de-duplication, and disaster recovery solutions. He has written extensively about SAN and virtualization products in three IBM Redbooks publications, and he is the Technical Leader of the Italian Open System Storage Professional Services Community.

Werner Eggli is a Senior IT Specialist with IBM Switzerland. He has more than 25 years of experience in Software Development, Project Management, and Consulting concentrating in the Networking and Telecommunication Segment. Werner joined IBM in 2001 and works in pre-sales as a Storage Systems Engineer for Open Systems. His expertise is the design and implementation of IBM Storage Solutions. He holds a degree in Dipl.Informatiker (FH) from Fachhochschule Konstanz, Germany.

We extend our thanks to the following people for their contributions to this project.


There are many people who contributed to this book. In particular, we thank the development and PFE teams in Hursley. Matt Smith was also instrumental in moving any issues along and ensuring that they maintained a high profile.

In particular, we thank the previous authors of this book:

Matt Amanat
Angelo Bernasconi
Steve Cody
Sean Crawford
Sameer Dhulekar
Katja Gebuhr
Deon George
Amarnath Hiriyannappa
Thorsten Hoss
Juerg Hossli
Philippe Jachimczyk
Kamalakkannan J Jayaraman
Dan Koeck
Bent Lerager
Craig McKenna
Andy McManus
Joao Marcos Leite
Barry Mellish
Suad Musovich
Massimo Rosati
Fred Scholten
Robert Symons
Marcus Thordal
Xiao Peng Zhao

We also want to thank the following people for their contributions to previous editions and to those people who contributed to this edition:

John Agombar
Alex Ainscow
Trevor Boardman
Chris Canto
Peter Eccles
Carlos Fuente
Alex Howell
Colin Jewell
Paul Mason
Paul Merrison
Jon Parkes
Steve Randle
Lucy Raw
Bill Scales
Dave Sinclair
Matt Smith
Steve White
Barry Whyte
IBM Hursley

Bill Wiegand
IBM Advanced Technical Support


Dorothy Faurot
IBM Raleigh

Sharon Wang
IBM Chicago

Chris Saul
IBM San Jose

Sangam Racherla
IBM ITSO

A special mention must go to Brocade for their unparalleled support of this residency in terms of equipment and support in many areas throughout. Namely:

Jim Baldyga
Yong Choi
Silviano Gaona
Brian Steffler
Steven Tong
Brocade Communications Systems

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author - all at the same time! Join an ITSO residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base. Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us.

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

- Use the online Contact us review IBM Redbooks form found at:

ibm.com/redbooks

- Send your comments in an e-mail to:

[email protected]

- Mail your comments to:

IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400

Preface xxiii


Stay connected to IBM Redbooks

- Find us on Facebook:

http://www.facebook.com/pages/IBM-Redbooks/178023492563?ref=ts

- Follow us on Twitter:

http://twitter.com/ibmredbooks

- Look for us on LinkedIn:

http://www.linkedin.com/groups?home=&gid=2130806

- Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:

https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

- Stay current on recent Redbooks publications with RSS Feeds:

http://www.redbooks.ibm.com/rss.html


Chapter 1. Introduction to storage virtualization

This chapter defines storage virtualization. It gives a short overview of today’s most critical storage issues and explains how storage virtualization can help you solve these issues.


1.1 Storage virtualization

Storage virtualization is an overused term. Often, people use it as a buzzword to claim that a product is virtualized. Almost every storage hardware and software product can technically claim to provide a form of block-level virtualization. So, what counts as actual storage virtualization? Does the fact that a mobile computer has logical volumes that are created from a single physical drive mean that the computer is virtual? Not really.

So, what is storage virtualization? The IBM explanation of storage virtualization is clear:

- Storage virtualization is a technology that makes one set of resources look and feel like another set of resources, preferably with more desirable characteristics.

- It is a logical representation of resources not constrained by physical limitations:

  – Hides part of the complexity
  – Adds or integrates new function with existing services
  – Can be nested or applied to multiple layers of a system

When discussing storage virtualization, it is important to understand that virtualization can be implemented on separate layers in the I/O stack. We have to clearly distinguish between virtualization on the file system layer and virtualization on the block, that is, the disk layer.

The focus of this book is block-level virtualization, that is, the block aggregation layer. File system virtualization is out of the intended scope of this book.

If you are interested in file system virtualization, refer to IBM General Parallel File System (GPFS™) or IBM scale out file services, which is based on GPFS. For more information and an overview of the IBM General Parallel File System (GPFS) Version 3, Release 2 for AIX®, Linux®, and Windows®, go to this Web site:

http://www-03.ibm.com/systems/clusters/software/whitepapers/gpfs_intro.html

For the IBM scale out file services, go to this Web site:

http://www-935.ibm.com/services/us/its/html/sofs-landing.html

The Storage Networking Industry Association’s (SNIA) block aggregation model (Figure 1-1 on page 3) provides a good overview of the storage domain and its layers.

Figure 1-1 on page 3 shows the three layers of a storage domain: the file, the block aggregation, and the block subsystem layers. The model splits the block aggregation layer into three sublayers. Block aggregation can be realized within hosts (servers), in the storage network (storage routers and storage controllers), or in storage devices (intelligent disk arrays).

The IBM implementation of a block aggregation solution is the IBM System Storage SAN Volume Controller (SVC). The SVC is implemented as a clustered appliance in the storage network layer. Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 provides a more in-depth discussion of why IBM has chosen to implement its IBM System Storage SAN Volume Controller in the storage network layer.

2 Implementing the IBM System Storage SAN Volume Controller V5.1


Figure 1-1 SNIA block aggregation model

The key concept of virtualization is to decouple the storage (which is delivered by commodity two-way Redundant Array of Independent Disks (RAID) controllers attaching physical disk drives) from the storage functions that are expected from servers in today’s storage area network (SAN) environment.

Decoupling is abstracting the physical location of data from the logical representation that an application on a server uses to access data. The virtualization engine presents logical entities, which are called volumes, to the user and internally manages the process of mapping the volume to the actual physical location. The realization of this mapping depends on the specific implementation. Another implementation-specific issue is the granularity of the mapping, which can range from a small fraction of a physical disk up to the full capacity of a single physical disk. A single block of information in this environment is identified by its logical unit number (LUN), which identifies the physical disk, and an offset within that LUN, which is known as a logical block address (LBA).

Be aware that the term physical disk that is used in this context describes a piece of storage that might be carved out of a RAID array in the underlying disk subsystem.

The address space is mapped between the logical entity, which is usually referred to as a virtual disk (VDisk), and the physical disks, which are identified by their LUNs. We refer to these LUNs, which are provided by the storage controllers to the virtualization layer, as managed disks (MDisks) throughout this book.
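The address-space mapping described above can be illustrated with a small sketch. This is not SVC code; the extent table, disk names, and sizes are hypothetical, and SVC's real metadata structures are internal to the product. It only shows the principle of translating a virtual-disk LBA into a location on a managed disk through a per-extent lookup:

```python
# Illustrative sketch (not SVC code): mapping a VDisk LBA to a managed-disk
# location through an extent table. Names and sizes are hypothetical.

EXTENT_SIZE = 16 * 1024 * 1024  # 16 MB, the smallest SVC extent size
BLOCK_SIZE = 512                # bytes per logical block

def map_lba(extent_table, vdisk_lba):
    """Translate a VDisk LBA into (mdisk_id, mdisk_lba)."""
    byte_offset = vdisk_lba * BLOCK_SIZE
    extent_index = byte_offset // EXTENT_SIZE
    offset_in_extent = byte_offset % EXTENT_SIZE
    # Each table entry says which MDisk extent backs this virtual extent.
    mdisk_id, mdisk_extent = extent_table[extent_index]
    mdisk_byte = mdisk_extent * EXTENT_SIZE + offset_in_extent
    return mdisk_id, mdisk_byte // BLOCK_SIZE

# A VDisk whose first two extents live on different MDisks:
table = {0: ("mdisk0", 5), 1: ("mdisk1", 0)}
print(map_lba(table, 0))      # ('mdisk0', 163840): extent 5 on mdisk0
print(map_lba(table, 32768))  # ('mdisk1', 0): 16 MB in, first block of extent 1
```

The server only ever sees the virtual LBA; where each 16 MB extent physically lives is entirely the virtualization layer's business, which is what makes transparent migration possible.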

Figure 1-2 on page 4 shows an overview of block-level virtualization.

Chapter 1. Introduction to storage virtualization 3


Figure 1-2 Block level virtualization overview

The server and the application only know about logical entities and access these logical entities via a consistent interface that is provided by the virtualization layer. Each logical entity owns a common and well defined set of functionality that is independent of where the physical representation is located.

The functionality of a VDisk that is presented to a server, such as expanding or reducing the size of a VDisk, mirroring a VDisk to a secondary site, creating a FlashCopy/Snapshot, thin provisioning/over-allocating, and so on, is implemented in the virtualization layer and does not rely in any way on the functionality that is provided by the disk subsystems that deliver the MDisks. Data that is stored in a virtualized environment is stored in a location-independent way, which allows a user to move or migrate its data, or parts of it, to another place or storage pool, that is, the place where the data really belongs.

The logical entity can be resized, moved, replaced, replicated, over-allocated, mirrored, migrated, and so on without any disruption to the server and the application. After you have an abstraction layer in the SAN, you can perform almost any task.

The following capabilities are the cornerstones of block-level storage virtualization, that is, the core advantages that a product, such as the SVC, can provide over traditional directly attached SAN storage:

• The SVC provides online volume migration while applications are running, which is possibly the greatest advantage of storage virtualization. With online migration, you can put your data where it belongs and, if the requirements change over time, move it to the right place or storage pool without impacting your server or application. Implementing a tiered storage environment can provide various storage classes for information life cycle management (ILM), can balance I/O across controllers, and can allow you to add, upgrade, and retire storage; in essence, it allows you to put your data where it really belongs.

• The SVC simplifies storage management by providing a single image for multiple controllers and a consistent user interface for provisioning heterogeneous storage (after the initial array setup).

• The SVC provides enterprise-level copy services for existing storage. You can license a function one time and use it everywhere. You can purchase new storage as low-cost RAID “bricks.” The source and target of a copy relationship can be on separate controllers.

• You can increase storage utilization by pooling storage across the SAN.

• You have the potential to increase system performance by reducing hot spots, striping disks across many arrays and controllers, and in certain implementations, providing additional caching.

The ability to deliver these functions in a homogeneous way on a scalable and highly available platform, over any attached storage and to every attached server, is the key challenge for every block-level virtualization solution.

1.2 User requirements that drive storage virtualization

In today’s environment, with its emphasis on a smarter planet and a dynamic infrastructure, you need a storage environment that is as flexible as the applications and servers it supports. Business demands change quickly.

These key client concerns drive storage virtualization:

• Growth in data center costs
• Inability of IT organizations to respond quickly to business demands
• Poor asset utilization
• Poor availability or service levels
• Lack of skilled staff for storage administration

You can see the importance of addressing the complexity of managing storage networks by applying the total cost of ownership (TCO) metric to storage networks. Industry analyses show that storage acquisition costs are only about 20% of the TCO. Most of the remaining costs are related to managing the storage system.

Consider how much simpler it is to manage a single entity than to manage multiple systems, each with its own interface. In a non-virtualized storage environment, every system is an island. Even a large system that claims to virtualize is an island that you will need to replace in the future.

With the SVC, you can ideally reduce the number of separate environments that you need to manage to one. Even if you cannot get all the way to one, with tens or hundreds of systems in place, any reduction in that number is a step in the right direction.

The SVC provides a single interface for storage management. Of course, there is an initial effort for the setup of the disk subsystems; however, all of the day-to-day storage management can be performed on the SVC. For example, you can use the data migration functionality of the SVC as disk subsystems are phased out. The SVC can move the data online, without any impact on your servers.

Also, the virtualization layer offers advanced functions, such as data mirroring or FlashCopy®, so there is no need to purchase them again for each new disk subsystem.


Today, it is typical for open systems to run at significantly less than 50% of the usable capacity that the RAID disk subsystems provide. Measured against the installed raw capacity, utilization is, depending on the RAID level that is used, often less than 35%. A block-level virtualization solution, such as the SVC, can help you increase that utilization to approximately 75 - 80%.

With the SVC, you do not need to keep and manage free space in each disk subsystem. You do not need to worry whether there is sufficient free space on the right storage tier, or in a single system.

Even if there is enough free space in one system, it might not be accessible in a non-virtualized environment for a specific server or application due to multipath driver issues. The SVC is able to handle the storage resources that it manages as a single storage pool. Disk space allocation from this pool is a matter of minutes for every server connected to the SVC, because you provision the capacity as needed, without disrupting applications.

1.3 Conclusion

Storage virtualization is no longer merely a concept or an unproven technology. All major storage vendors offer storage virtualization products. Making use of storage virtualization as the foundation for a flexible and reliable storage solution helps a company better align business and IT by optimizing the storage infrastructure and storage management to meet business demands.

The IBM System Storage SAN Volume Controller is a mature, fifth-generation virtualization solution that uses open standards and is consistent with the Storage Networking Industry Association (SNIA) storage model. The SVC is an appliance-based, in-band block virtualization solution in which intelligence, including advanced storage functions, is moved from individual storage devices into the storage network.

We expect that using the SVC will improve the utilization of your storage resources, simplify storage management, and improve the availability of your applications.


Chapter 2. IBM System Storage SAN Volume Controller

This chapter describes the major concepts of the IBM System Storage SAN Volume Controller (SVC). It not only covers the hardware architecture but also the software concepts. We provide a brief history of the product, and we describe the additional functionalities that will be available with the newest release.


2.1 SVC history

The IBM implementation of block-level storage virtualization, the IBM System Storage SAN Volume Controller (SVC), is based on an IBM project that was initiated in the second half of 1999 at the IBM Almaden Research Center. The project was called COMPASS (COMmodity PArts Storage System). One of its goals was to build a system almost exclusively from off-the-shelf standard parts. Like any enterprise-level storage control system, it had to deliver performance and availability comparable to the highly optimized storage controllers of previous generations. The idea of building a storage control system on a scalable cluster of lower-performance, Pentium-based servers, instead of a monolithic two-node architecture, remains compelling today.

COMPASS also had to address a major challenge for the heterogeneous open systems environment, namely to reduce the complexity of managing storage on block devices.

The first publications covering this project were released to the public in 2003 in the form of the IBM SYSTEMS JOURNAL, VOL 42, NO 2, 2003, “The architecture of a SAN storage control system”, by J. S. Glider, C. F. Fuente, and W. J. Scales, which you can read at this Web site:

http://domino.research.ibm.com/tchjr/journalindex.nsf/e90fc5d047e64ebf85256bc80066919c/b97a551f7e510eff85256d660078a12e?OpenDocument

The results of the COMPASS project defined the fundamentals for the product architecture. The announcement of the first release of the IBM System Storage SAN Volume Controller took place in July 2003.

The following releases brought new, more powerful hardware nodes, which approximately doubled the I/O performance and throughput of their predecessors, provided new functionality, and offered additional interoperability with new elements in host environments, disk subsystems, and the storage area network (SAN).

Major steps in the product’s evolution were:

• SVC Release 2, February 2005

• SVC Release 3, October 2005

  New 8F2 node hardware (based on IBM X336, 8 GB cache, 4 x 2 Gb Fibre Channel (FC) ports)

• SVC Release 4.1, May 2006

  New 8F4 node hardware (based on IBM X336, 8 GB cache, 4 x 4 Gb FC ports)

• SVC Release 4.2, May 2007:

  – New 8A4 entry-level node hardware (based on IBM X3250, 8 GB cache, 4 x 4 Gb FC ports)

  – New 8G4 node hardware (based on IBM X3550, 8 GB cache, 4 x 4 Gb FC ports)

• SVC Release 4.3, May 2008

In 2008, the 15,000th SVC engine was shipped by IBM. More than 5,000 SVC systems worldwide are in operation.

With the new release of SVC that is introduced in this book, we get a new generation of hardware nodes. This hardware, which approximately doubles the performance of its predecessors, also provides solid-state drive (SSD) support. New software features are iSCSI support (which is available on all hardware nodes that support the new firmware) and multiple SVC cluster partnerships, which support data replication between the members of a group of up to four SVC clusters.

2.2 Architectural overview

The IBM System Storage SAN Volume Controller is a SAN block aggregation appliance that is designed for attachment to a variety of host computer systems.

Three major approaches are in use today for implementing block-level aggregation:

• Network-based: Appliance

The device is a SAN appliance that sits in the data path, and all I/O flows through it. This kind of implementation is also referred to as symmetric virtualization, or in-band. The device is both target and initiator: it is the target of I/O requests from the host perspective and the initiator of I/O requests from the storage perspective. The redirection is performed by issuing new I/O requests to the storage.

• Switch-based: Split-path

The device is usually an intelligent SAN switch that intercepts I/O requests on the fabric and redirects the frames to the correct storage location; the I/O requests themselves are redirected. This kind of implementation is also referred to as asymmetric virtualization, or out-of-band. The data path and the control path are separated, and a specific (preferably highly available and disaster-tolerant) controller outside of the switch holds the metadata and the configuration to manage the split data paths.

• Controller-based

The device is a storage controller that provides an internal switch for external storage attachment. In this approach, the storage controller intercepts and redirects I/O requests to the external storage as it does for internal storage.

Figure 2-1 on page 10 shows the three approaches.


Figure 2-1 Overview of the block-level aggregation architectures

While all of these approaches provide, in essence, the same cornerstones of virtualization, several have interesting side effects.

All three approaches can provide the required functionality. However, the implementation, especially in the switch-based split-path architecture, can make parts of that functionality more difficult to deliver.

This challenge is especially true for FlashCopy services. Taking a point-in-time clone of a device in a split I/O architecture means that all of the data has to be copied from the source to the target first.

The drawback is that the target copy cannot be brought online until the entire copy has completed, that is, minutes or hours later. Consider trying to use this approach to implement a sparse FlashCopy, that is, a copy without a background copy, where the target disk is only populated with the blocks or extents that are modified after the point in time at which the copy was taken (or an incremental series of cascaded copies).
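The sparse, no-background-copy behavior described above can be sketched with a small copy-on-write model. This is a hypothetical simplification, not SVC's FlashCopy implementation: the grain map, class name, and string payloads are invented for the example. The point is that the target only ever stores grains that changed on the source after the point in time of the copy:

```python
# Hypothetical sketch of a point-in-time copy without a background copy:
# the target is populated only with grains that are modified on the
# source after the copy was started (copy-on-write).

class SparseFlashCopy:
    def __init__(self, source):
        self.source = source  # dict: grain index -> data
        self.target = {}      # only modified grains get copied here

    def write_source(self, grain, data):
        # Preserve the old grain on the target before overwriting it.
        if grain not in self.target:
            self.target[grain] = self.source.get(grain)
        self.source[grain] = data

    def read_target(self, grain):
        # Unmodified grains are still read through from the source.
        return self.target.get(grain, self.source.get(grain))

fc = SparseFlashCopy({0: "A", 1: "B"})
fc.write_source(0, "A2")
print(fc.read_target(0))  # "A": the point-in-time content is preserved
print(fc.read_target(1))  # "B": never copied, read from the source
```

Because reads of unmodified grains are redirected to the source, the engine performing the redirection must sit in the I/O path and hold the grain metadata, which is exactly what is hard to do at wire speed in a split-path switch design.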

Scalability is another issue, because it is difficult to scale out to n-way clusters of intelligent line cards. A multiway switch design is also difficult to code and implement because of the challenge of keeping metadata synchronized across all processing blades; the updates must occur at wire speed, or that claim is lost.

For the same reason, space-efficient copies and replication are also difficult to implement. Both synchronous and asynchronous replication require a level of buffering of I/O requests - while switches have buffering built in, the number of additional buffers is huge and grows as the link distance increases. Most of today’s intelligent line cards do not provide anywhere near this level of local storage. The most common solution is to use an external system to provide the replication services, which means another system to manage and maintain, which conflicts with the concept of virtualization.


Also, remember that when you choose a split-path architecture, your virtualization implementation is tied to the specific switch type and hardware that you use, which makes any future change difficult.

The controller-based approach has high functionality, but it fails in terms of scalability and upgradability. Because of the nature of its design, there is no true decoupling with this approach, which becomes an issue at the end of the controller’s life cycle. You will then face data migration issues and questions, such as how to reconnect the servers to the new controller, and how to reconnect them online without any impact on your applications.

Be aware that you not only replace a controller in this scenario, but also, implicitly, replace your entire virtualization solution. You not only have to replace your hardware, but you also must update or repurchase the licenses for the virtualization feature, advanced copy functions, and so on.

With a network-based appliance solution that is based on a scale-out cluster architecture, life cycle management tasks, such as adding or replacing new disk subsystems or migrating data between them, are extremely simple. Servers and applications remain online, data migration takes place transparently on the virtualization platform, and licenses for virtualization and copy services require no update, that is, cause no additional costs when disk subsystems have to be replaced. Only the network-based appliance solution provides you with an independent and scalable virtualization platform that can provide enterprise-class copy services, is open for future interfaces and protocols, lets you choose the disk subsystems that best fit your requirements, and does not lock you into specific SAN hardware.

For these reasons, IBM has chosen the network-based appliance approach for the implementation of the IBM System Storage SAN Volume Controller.

The SVC has these key characteristics:

• Highly scalable: An easy growth path from two nodes to eight nodes (nodes are added in pairs)

• SAN interface-independent: Currently supports FC and iSCSI, but open for future enhancements, such as InfiniBand

• Host-independent: For fixed-block-based open systems environments

• Storage (RAID controller)-independent: An ongoing plan to qualify additional types of Redundant Array of Independent Disks (RAID) controllers

• Able to utilize commodity RAID controllers: Also known as “low complexity RAID bricks”

• Able to utilize node-internal disks (solid-state drives)

On the SAN storage that is provided by the disk subsystems, the SVC can offer the following services:

• The ability to create and manage a single pool of storage attached to the SAN

• Block-level virtualization (logical unit virtualization)

• Advanced functions for the entire SAN, such as:

  – Large scalable cache

  – Advanced Copy Services:

    • FlashCopy (point-in-time copy)

    • Metro Mirror and Global Mirror (synchronous and asynchronous remote copy)

  – Data migration


This feature list will grow in future releases. The additional layer can provide future features, such as policy-based space management that maps your storage resources based on desired performance characteristics, or the dynamic reallocation of entire virtual disks (VDisks), or parts of a VDisk, according to user-definable performance policies. Extensive functionality becomes possible as soon as the decoupling is set up properly, that is, an additional layer is installed between the servers and the storage.

You can configure SAN-based storage infrastructures using the SVC with two or more SVC nodes, which are arranged in a cluster. These nodes are attached to the SAN fabric, along with RAID controllers and host systems. The SAN fabric is zoned to allow the SVC to “see” the RAID controllers and the hosts to “see” the SVC. The hosts are not usually able to directly “see” or operate on the RAID controllers unless a “split controller” configuration is in use. You can use the zoning capabilities of the SAN switch to create these distinct zones. The assumptions that are made about the SAN fabric are limited, to make it possible to support a number of separate SAN fabric types with minimum development effort. Anticipated fabrics include FC and iSCSI over Gigabit Ethernet; other types might follow in the future.

Figure 2-2 shows a conceptual diagram of a storage system utilizing the SVC. It shows a number of hosts that are connected to a SAN fabric or LAN. In practical implementations that have high availability requirements (the majority of the target clients for SVC), the SAN fabric “cloud” represents a redundant SAN. A redundant SAN is composed of a fault-tolerant arrangement of two or more counterpart SANs, therefore providing alternate paths for each SAN-attached device.

Both scenarios (using a single network and using two physically separate networks) are supported for iSCSI-based/LAN-based access networks to the SVC. Redundant paths to VDisks can be provided for both scenarios.

Figure 2-2 SVC conceptual overview

A cluster of SVC nodes is connected to the same fabric and presents VDisks to the hosts. These VDisks are created from MDisks that are presented by the RAID controllers. There are two distinct zones in the fabric: a host zone, in which the hosts can see and address the SVC nodes, and a storage zone, in which the SVC nodes can see and address the MDisks/logical unit numbers (LUNs) that are presented by the RAID controllers. Hosts are not permitted to operate on the RAID LUNs directly; all data transfer happens through the SVC nodes. This design is commonly described as symmetric virtualization. Figure 2-3 shows the SVC logical topology.

Figure 2-3 SVC topology overview

For simplicity, Figure 2-3 shows only one SAN fabric and two types of zones. In an actual environment, we recommend using two redundant SAN fabrics. The SVC can be connected to up to four fabrics. You set up zoning for each host, disk subsystem, and fabric. Learn about zoning details in 3.3.2, “SAN zoning and SAN connections” on page 76.

For iSCSI-based access, using two networks and separating iSCSI traffic within the networks by using a dedicated virtual local area network (VLAN) path for storage traffic will prevent any IP interface, switch, or target port failure from compromising the host server’s access to the VDisk LUNs.
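The two-zone design described above can be modeled with simple membership sets. This is an illustrative sketch only; the port names are invented, and real zoning is configured on the SAN switches, not in application code. It shows the key property: hosts and RAID controllers never share a zone, so all data must flow through the SVC nodes:

```python
# Illustrative model of the host zone and storage zone: two ports can
# "see" each other only if they share a zone. Port names are hypothetical.

host_zone = {"host1_hba", "host2_hba", "svc_node1_p1", "svc_node2_p1"}
storage_zone = {"svc_node1_p2", "svc_node2_p2", "raid_ctrl_a", "raid_ctrl_b"}
zones = [host_zone, storage_zone]

def can_see(port_a, port_b, zones):
    """True if both ports are members of at least one common zone."""
    return any(port_a in z and port_b in z for z in zones)

print(can_see("host1_hba", "svc_node1_p1", zones))   # True: host -> SVC
print(can_see("svc_node1_p2", "raid_ctrl_a", zones)) # True: SVC -> storage
print(can_see("host1_hba", "raid_ctrl_a", zones))    # False: no direct path
```

Because only the SVC node ports appear in both zones, the SVC is the sole bridge between hosts and back-end storage, which is the defining trait of symmetric (in-band) virtualization.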

2.2.1 SVC virtualization concepts

The SVC product provides block-level aggregation and volume management for disk storage within the SAN. In simpler terms, the SVC manages a number of back-end storage controllers and maps the physical storage within those controllers into logical disk images that can be seen by application servers and workstations in the SAN.

The SAN is zoned so that the application servers cannot see the back-end physical storage, which prevents any possible conflict between the SVC and the application servers both trying to manage the back-end storage. The SVC is based on the following virtualization concepts, which are discussed more throughout this chapter.

A node is a single SVC engine, which provides virtualization, cache, and copy services to the SAN. SVC nodes are deployed in pairs to make up a cluster. A cluster can have between one and four SVC node pairs, which is a product limit, not an architectural limit.


Each pair of SVC nodes is also referred to as an I/O Group. An SVC cluster can have between one and four I/O Groups. A specific virtual disk (VDisk) is always presented to a host server by a single I/O Group of the cluster.

When a host server performs I/O to one of its VDisks, all the I/Os for a specific VDisk are directed to one specific I/O Group in the cluster. During normal operating conditions, the I/Os for a specific VDisk are always processed by the same node of the I/O Group. This node is referred to as the preferred node for this specific VDisk.

Both nodes of an I/O Group act as the preferred node for its specific subset of the total number of VDisks that the I/O Group presents to the host servers. But, both nodes also act as failover nodes for their specific partner node in the I/O Group. A node will take over the I/O handling from its partner node, if required.

In an SVC-based environment, the I/O handling for a VDisk can switch between the two nodes of an I/O Group. Therefore, it is mandatory for servers that are connected through FC to use multipath drivers to be able to handle these failover situations.
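The preferred-node and failover behavior described above might be sketched as follows. This is a deliberate simplification with invented names; the real failover logic involves the multipath driver, path states, and cache handover, none of which is modeled here:

```python
# Simplified sketch of preferred-node I/O routing in an I/O Group: a
# VDisk's I/O normally goes to its preferred node; the partner node
# takes over if the preferred node is offline.

def serving_node(io_group, preferred, online):
    """Return the node that handles I/O for a VDisk, or None."""
    partner = io_group[0] if preferred == io_group[1] else io_group[1]
    if preferred in online:
        return preferred
    if partner in online:
        return partner
    return None  # whole I/O Group offline: the VDisk is inaccessible

group = ("node1", "node2")
print(serving_node(group, "node1", {"node1", "node2"}))  # node1 (normal)
print(serving_node(group, "node1", {"node2"}))           # node2 (failover)
```

This also makes clear why FC-attached servers need a multipath driver: the path over which a VDisk answers can change from one node to the other at any time.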

SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all communications with back-end storage subsystems, and with other SVC clusters, are still through FC. With iSCSI, node failover can be handled without a multipath driver installed on the server: an iSCSI-attached server can simply reconnect after a node failover to the original target IP address, which is then presented by the partner node. However, to protect the server against link failures in the network or against host bus adapter (HBA) failures, a multipath driver is still required.

The SVC I/O Groups are connected to the SAN so that all application servers accessing VDisks from this I/O Group have access to this group. Up to 256 host server objects can be defined per I/O Group; these host server objects can consume VDisks that are provided by this specific I/O Group.

If required, host servers can be mapped to more than one I/O Group of an SVC cluster; therefore, they can access VDisks from separate I/O Groups. You can move VDisks between I/O Groups to redistribute the load. With the current release of SVC, I/Os to the VDisk that is being moved must be quiesced for the duration of the move.

The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks, known as managed disks (MDisks). Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks (VDisks), which are presented by the SVC I/O Groups through the SAN (FC) or LAN (iSCSI) to the servers. A VDisk is storage that is provisioned out of one Managed Disk Group (MDG) or, if it is a mirrored VDisk, out of two MDGs.

An MDG is a collection of up to 128 MDisks, which forms a storage pool out of which VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these pools can be changed (expanded or shrunk) at run time without taking the MDG or the VDisks that it provides offline. At any point in time, an MDisk can only be a member of one MDG, with one exception (the image mode VDisk), which is explained later in this chapter.

MDisks that are used in a specific MDG must have the following characteristics:

• They must have the same hardware characteristics, for example, the same RAID type, RAID array size, disk type, and disk revolutions per minute (RPM). Be aware that it is always the weakest element (MDisk) in a chain of elements that defines the maximum strength of the chain (the MDG).

• The disk subsystems providing the MDisks must have similar characteristics, for example, maximum input/output operations per second (IOPS), response time, cache, and throughput.

• We recommend that you use MDisks of the same size, which provide the same number of extents; remember this when adding MDisks to an existing MDG. If that is not feasible, check the distribution of the VDisks’ extents in that MDG.

For further details, refer to SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open
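As a simple illustration of the homogeneity guidance above, a hypothetical pre-check before adding an MDisk to an MDG might look like this. The attribute names are invented for the example; the SVC itself does not enforce such a check, so this kind of validation is an administrative discipline:

```python
# Hypothetical pre-check: an MDisk candidate should match the hardware
# characteristics of the MDG's existing members. Attribute names are
# invented for illustration.

KEYS = ("raid_type", "disk_type", "rpm", "size_gb")

def matches_mdg(mdg_members, candidate, keys=KEYS):
    """True if the candidate matches every existing member on all keys."""
    return all(all(m[k] == candidate[k] for k in keys) for m in mdg_members)

mdg = [{"raid_type": "RAID5", "disk_type": "FC", "rpm": 15000, "size_gb": 512}]
ok = {"raid_type": "RAID5", "disk_type": "FC", "rpm": 15000, "size_gb": 512}
slow = {"raid_type": "RAID5", "disk_type": "SATA", "rpm": 7200, "size_gb": 512}

print(matches_mdg(mdg, ok))    # True
print(matches_mdg(mdg, slow))  # False: would weaken the whole MDG
```

Admitting the slower MDisk would make it the "weakest link" that bounds the performance of every VDisk striped across the pool.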

VDisks can be mapped to a host to allow access for a specific server to a set of VDisks. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally represented by “fake” WWPNs that are generated by the SVC. VDisks can be mapped to multiple hosts, for example, a VDisk that is accessed by multiple hosts of a server cluster.

Figure 2-4 shows the relationships of these entities to each other.

Figure 2-4 SVC I/O Group overview

An MDisk can be provided by a SAN disk subsystem or by the solid-state drives in the SVC nodes themselves. Each MDisk is divided into a number of extents. The extent size is selected by the user when an MDG is created and ranges from 16 MB (the default) up to 2 GB.

We recommend that you use the same extent size for all MDGs in a cluster, which is a prerequisite for supporting VDisk migration between two MDGs. If the extent sizes do not match, you must use VDisk Mirroring (see 2.2.7, “Mirrored VDisk” on page 21) as a workaround. For copying (not migrating) the data to a new VDisk in another MDG, you can use SVC Advanced Copy Services.

Figure 2-5 shows the two most popular ways to provision VDisks out of an MDG. Striped mode is the recommended method for most cases. Sequential extent allocation mode might slightly increase the sequential performance for certain workloads.

Figure 2-5 MDG overview

You can allocate the extents for a VDisk in many ways. The process is under full user control at VDisk creation time and can be changed at any time by migrating single extents of a VDisk to another MDisk within the MDG. You can obtain details of how to create VDisks and migrate extents via GUI or CLI in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339, Chapter 8, “SAN Volume Controller operations using the GUI” on page 469, and Chapter 9, “Data migration” on page 675.
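The two allocation policies shown in Figure 2-5 can be sketched as follows. This is an illustrative simplification, not SVC code: real allocation also tracks free extents per MDisk, while this sketch just shows the placement pattern of striped (round-robin across the MDG's MDisks) versus sequential (all extents from one MDisk):

```python
# Sketch of the two VDisk extent-allocation policies: "striped"
# round-robins extents across the MDG's MDisks, "sequential" takes them
# from a single MDisk. Extent bookkeeping is deliberately simplified.

def allocate(mdisks, n_extents, mode="striped"):
    """Return a list of (mdisk, extent_number) pairs for a new VDisk."""
    if mode == "striped":
        return [(mdisks[i % len(mdisks)], i // len(mdisks))
                for i in range(n_extents)]
    return [(mdisks[0], i) for i in range(n_extents)]  # sequential

mdg = ["mdisk0", "mdisk1", "mdisk2"]
print(allocate(mdg, 4, "striped"))
# [('mdisk0', 0), ('mdisk1', 0), ('mdisk2', 0), ('mdisk0', 1)]
print(allocate(mdg, 4, "sequential"))
# [('mdisk0', 0), ('mdisk0', 1), ('mdisk0', 2), ('mdisk0', 3)]
```

Striping spreads the I/O of one VDisk across all back-end arrays in the pool, which is why it is the recommended default; sequential placement only helps particular sequential workloads.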

The SVC limits the number of extents in a cluster. The limit is currently 2^22 (approximately 4 million) extents, although this number might change in future releases. Because the number of addressable extents is limited, the total capacity of an SVC cluster depends on the extent size that is chosen by the user. The capacity numbers that are specified in Table 2-1 for an SVC cluster assume that all defined MDGs were created with the same extent size.

Table 2-1 Extent size to addressability matrix

For most clusters, a capacity of 1 - 2 PB is sufficient. We therefore recommend that you use 256 MB or, for larger clusters, 512 MB as the standard extent size.

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1024 MB        4 PB
2048 MB        8 PB
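These capacities follow directly from the extent limit: maximum cluster capacity equals extent size multiplied by 2^22 extents. A quick Python check (our illustration, not SVC code) reproduces the table:

```python
# Cluster capacity = extent size x maximum number of extents (2^22).
MAX_EXTENTS = 2 ** 22  # about 4 million extents per cluster

def cluster_capacity_tib(extent_size_mib):
    """Maximum cluster capacity in TB for a given extent size in MB."""
    return extent_size_mib * MAX_EXTENTS // (1024 * 1024)

for size in (16, 32, 64, 128, 256, 512, 1024, 2048):
    print(f"{size:>4} MB extent -> {cluster_capacity_tib(size)} TB")
```

For example, a 256 MB extent size yields 256 MB x 4,194,304 = 1 PB, matching Table 2-1.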

16 Implementing the IBM System Storage SAN Volume Controller V5.1


2.2.2 MDisk overview

The maximum size of an MDisk is 2 TB. An SVC cluster supports up to 4,096 MDisks. At any point in time, an MDisk is in one of the following three modes:

• Unmanaged MDisk

An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisk and has no metadata stored on it. SVC does not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes. SVC can see the resource, but it is not assigned to a pool, that is, an MDG.

• Managed MDisk

Managed mode MDisks are always members of an MDG and contribute extents to the pool of extents available in the MDG. Zero or more VDisks (if not operated in image mode, which we discuss next) can use these extents. MDisks operating in managed mode might have metadata extents allocated from them and can be used as quorum disks.

• Image mode MDisk

Image mode provides a direct block-for-block translation from the MDisk to the VDisk by using virtualization. This mode is provided to satisfy three major usage scenarios:

– Image mode allows virtualization of MDisks that already contain data that was written directly, not through an SVC. It allows a client to insert the SVC into the data path of an existing storage configuration with minimal downtime. Chapter 9, “Data migration” on page 675 provides details of the data migration process.

– Image mode allows a VDisk that is managed by the SVC to be used with the copy services that are provided by the underlying RAID controller. In order to avoid the loss of data integrity when the SVC is used in this way, it is important that you disable the SVC cache for the VDisk.

– SVC provides the ability to migrate to image mode, which allows data to be exported from the SVC so that a server can access the VDisk directly, without going through the SVC.

An image mode MDisk is associated with exactly one VDisk. If the size of the (image mode) MDisk is not a multiple of the MDG extent size, the last extent is partial (see Figure 2-6 on page 18). An image mode VDisk is a pass-through one-to-one map of its MDisk. It cannot be a quorum disk and does not have any SVC metadata extents allocated on it. Managed or image mode MDisks are always members of an MDG.


Figure 2-6 Image mode MDisk overview

If you work with image mode MDisks, it is a best practice to put them in a dedicated MDG with a distinctive name (for example, MDG_IMG_xxx). Remember that the extent size chosen for this MDG must match the extent size of the MDG into which you plan to migrate the data. All SVC copy services can be applied to image mode disks.

2.2.3 VDisk overview

The maximum size of a VDisk is 256 TB. An SVC cluster supports up to 4,096 VDisks.

VDisks support the following services:

• You can create and delete a VDisk.

• You can change the size of a VDisk (expand or shrink).

• VDisks can be migrated (fully or partially) at run time to another MDisk or storage pool (MDG).

• VDisks can be created as fully allocated or Space-Efficient VDisks. A conversion from a fully allocated to a Space-Efficient VDisk and vice versa can be done at run time.

• VDisks can be mirrored across MDGs to make them resistant to disk subsystem failures or to improve read performance.

• VDisks can be mirrored synchronously for distances up to 100 km or asynchronously for longer distances. An SVC cluster can maintain active data mirrors to a maximum of three other SVC clusters.

• You can use FlashCopy on VDisks. Multiple snapshots and quick restore from snapshots (reverse FlashCopy) are supported.

VDisks have two modes: image mode and managed mode. The following state diagram in Figure 2-7 on page 19 shows the state transitions.


Figure 2-7 VDisk state transitions

Managed mode VDisks have two policies: the sequential policy and the striped policy. Policies define how the extents of a VDisk are carved out of an MDG.

2.2.4 Image mode VDisk

Image mode provides a one-to-one mapping between the logical block addresses (LBAs) of a VDisk and its MDisk. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is mapped to one, and only one, image mode VDisk. The VDisk capacity that is specified must be less than or equal to the size of the image mode MDisk. When you create an image mode VDisk, the specified MDisk must be in unmanaged mode and must not be a member of an MDG. The MDisk is made a member of the specified MDG (MDG_IMG_xxx) as a result of the creation of the image mode VDisk. The SVC also supports the reverse process, in which a managed mode VDisk can be migrated to an image mode VDisk. If a VDisk is migrated to another MDisk, it is represented as being in managed mode during the migration and is only represented as an image mode VDisk after it has reached the state where it is a straight-through mapping.

2.2.5 Managed mode VDisk

VDisks operating in managed mode provide a full set of virtualization functions. Within an MDG, SVC supports an arbitrary relationship between extents on (managed mode) VDisks and extents on MDisks. Subject to the constraint that each MDisk extent is contained in, at most, one VDisk, each VDisk extent maps to exactly one MDisk extent.

Figure 2-8 on page 20 represents this diagrammatically. It shows VDisk V, which is made up of a number of extents. Each of these extents is mapped to an extent on one of the MDisks: A, B, or C. The mapping table stores the details of this indirection. Note that several of the MDisk extents are unused; no VDisk extent maps to them. These unused extents are available for use in creating new VDisks, migration, expansion, and so on.

(Figure 2-7 shows four states: Doesn’t exist, Image mode, Managed mode, and Managed mode migrating. The transitions are: create image mode VDisk, create managed mode VDisk, delete VDisk, migrate to image mode, and complete migrate.)


Figure 2-8 Simple view of block virtualization

A managed mode VDisk can have a size of zero blocks, in which case it occupies zero extents. This type of VDisk cannot be mapped to a host or take part in any Advanced Copy Services functions.

The allocation of a specific number of extents from a specific set of MDisks is performed by the following algorithm: If the set of MDisks from which to allocate extents contains more than one disk, extents are allocated from MDisks in a round-robin fashion. If an MDisk has no free extents when its turn arrives, its turn is missed and the round-robin moves to the next MDisk in the set that has a free extent.

Beginning with SVC 5.1, when creating a new VDisk, the first MDisk from which to allocate an extent is chosen in a pseudo random way rather than simply choosing the next disk in a round-robin fashion. The pseudo random algorithm avoids the situation whereby the “striping effect” inherent in a round-robin algorithm places the first extent for a large number of VDisks on the same MDisk. Placing the first extent of a number of VDisks on the same MDisk might lead to poor performance for workloads that place a large I/O load on the first extent of each VDisk or that create multiple sequential streams.
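The extent allocation behavior described above can be modeled as follows. This is our illustrative sketch of a round-robin allocator with a pseudo-randomly chosen starting MDisk; the actual SVC implementation is internal and certainly differs in detail:

```python
import random

def allocate_extents(mdisk_free, count, seed=None):
    """Allocate `count` extents round-robin across the MDisks of an MDG.

    mdisk_free maps an MDisk name to its number of free extents.
    The starting MDisk is chosen pseudo-randomly (the SVC 5.1 behavior
    described above), so the first extent of many VDisks does not always
    land on the same MDisk.
    """
    names = list(mdisk_free)
    rng = random.Random(seed)
    i = rng.randrange(len(names))      # pseudo-random starting MDisk
    allocation = []
    while len(allocation) < count:
        if all(v == 0 for v in mdisk_free.values()):
            raise RuntimeError("MDG is out of free extents")
        name = names[i % len(names)]
        if mdisk_free[name] > 0:       # an exhausted MDisk misses its turn
            mdisk_free[name] -= 1
            allocation.append(name)
        i += 1
    return allocation

print(allocate_extents({"mdisk0": 4, "mdisk1": 4, "mdisk2": 1}, 6, seed=7))
```

The MDisk names and free-extent counts are made up; the sketch only shows the round-robin and skip-when-empty behavior.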

2.2.6 Cache mode and cache-disabled VDisks

Prior to SVC V3.1, enabling any copy services function in a RAID array controller for a LUN that was being virtualized by SVC was not supported, because the behavior of the write-back cache in the SVC led to data corruption. With the advent of cache-disabled VDisks, it becomes possible to enable copy services in the underlying RAID array controller for LUNs that are virtualized by the SVC.

Wherever possible, we recommend using SVC copy services in preference to the underlying controller copy services.


2.2.7 Mirrored VDisk

Starting with SVC 4.3, the mirrored VDisk feature provides a simple RAID-1 function, which allows a VDisk to remain accessible even when an MDisk on which it depends has become inaccessible.

This function is achieved using two copies of the VDisk, which are typically allocated from separate MDGs or using image-mode copies. The VDisk is the entity that participates in FlashCopy and a Remote Copy relationship, is served by an I/O Group, and has a preferred node. The copy now has the virtualization attributes, such as MDG and policy (striped, sequential, or image).

A copy is not a separate object and cannot be created or manipulated except in the context of the VDisk. Copies are identified via the configuration interface with a copy ID of their parent VDisk. This copy ID can be either 0 or 1. Depending on the configuration history, a single copy can have an ID of either 0 or 1.

The feature does provide a “point-in-time” copy functionality that is achieved by “splitting” a copy from the VDisk. The feature does not address other forms of mirroring based on Remote Copy (sometimes called “Hyperswap”), which mirrors VDisks across I/O Groups or clusters, nor is it intended to manage mirroring or remote copy functions in back-end controllers.

Figure 2-9 gives an overview of VDisk Mirroring.

Figure 2-9 VDisk Mirroring overview

A copy can be added to a VDisk that has only one copy, or removed from a VDisk that has two copies. Checks prevent the accidental removal of the sole copy of a VDisk. A newly created, unformatted VDisk with two copies initially has its copies out of synchronization. The primary copy is defined as “fresh” and the secondary copy as “stale”. The synchronization process updates the secondary copy until it is synchronized, either at the default “synchronization rate” or at a rate defined when creating the VDisk or subsequently modifying it.


If a two-copy mirrored VDisk is created with the format parameter, both copies are formatted in parallel and the VDisk comes online when both operations are complete with the copies in sync.

If mirrored VDisks get expanded or shrunk, all of their copies also get expanded or shrunk.

If it is known that MDisk space, which will be used for creating copies, is already formatted, or if the user does not require read stability, a “no synchronization” option can be selected which declares the copies as “synchronized” (even when they are not).

The time for a copy, which has become unsynchronized, to resynchronize is minimized by copying only those 256 KB grains that have been written to since synchronization was lost. This approach is known as an “incremental synchronization”. Only those changed grains need be copied to restore synchronization.

Where there are two copies of a VDisk, one copy is known as the primary copy. If the primary is available and synchronized, reads from the VDisk are directed to it. The user can select the primary when creating the VDisk or can change it later. Selecting the copy allocated on the higher-performance controller will maximize the read performance of the VDisk. The write performance will be constrained by the lower-performance controller, because writes must complete to both copies before the VDisk is considered to have been successfully written. Remember that writes to both copies must complete to be considered successfully written when VDisk Mirroring creates one copy in a solid-state drive MDG and the second copy in an MDG populated with resources from a disk subsystem.

A VDisk with copies can be checked to see whether all of the copies are identical. If a medium error is encountered while reading from any copy, it will be repaired using data from another fresh copy. This process can be asynchronous but will give up if the copy with the error goes offline.

Mirrored VDisks consume bitmap space at a rate of 1 bit per 256 KB grain, which translates to 1 MB of bitmap space supporting 2 TB worth of mirrored VDisks. The default allocation of bitmap space is 20 MB, which supports 40 TB of mirrored VDisks. If all 512 MB of variable bitmap space is allocated to mirrored VDisks, 1 PB of mirrored VDisks can be supported.
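The bitmap arithmetic can be verified in a few lines (our check, not SVC code): one bit of bitmap covers one 256 KB grain, so each megabyte of bitmap covers 8,388,608 grains.

```python
GRAIN_BYTES = 256 * 1024          # one bit of bitmap covers one 256 KB grain

def mirrored_capacity_tb(bitmap_mb):
    """Mirrored VDisk capacity (in TB) covered by bitmap_mb of bitmap space."""
    bits = bitmap_mb * 1024 * 1024 * 8
    return bits * GRAIN_BYTES // 2 ** 40

print(mirrored_capacity_tb(1))    # 1 MB of bitmap -> 2 TB of mirrored VDisks
print(mirrored_capacity_tb(20))   # default allocation -> 40 TB
print(mirrored_capacity_tb(512))  # maximum allocation -> 1024 TB (1 PB)
```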

The advent of the mirrored VDisk feature will inevitably lead clients to think about two-site solutions for cluster and VDisk availability.

Generally, the advice is not to split a cluster, that is, the single I/O Groups, across sites. But certain configurations will be effective. Be careful that you prevent a situation that is referred to as a “split brain” scenario (caused, for example, by a power outage on the SAN switches; the SVC nodes are protected by their own uninterruptible power supply unit). In this scenario, the connectivity between components is lost and a contest for the SVC cluster quorum disk occurs. Which set of nodes wins is effectively arbitrary. If the set of nodes that won the quorum disk then experiences a permanent power loss, the cluster is lost. The way to prevent this split brain scenario is to use a configuration that provides effective redundancy because of the exact placement of system components in “fault domains”. You can obtain the details of this configuration and the required prerequisites in Chapter 3, “Planning and configuration” on page 65.

Important: An unmirrored VDisk can be migrated from a source to a destination by adding a copy at the desired destination, waiting for the two copies to synchronize, and then removing the original copy. This operation can be stopped at any time. The two copies can be in separate MDGs with separate extent sizes.

Note: SVC does not prevent you from creating the two copies in one or more solid-state drive MDGs of the same node. However, doing so means that you lose redundancy and might therefore lose access to your VDisk if the node fails or restarts.

2.2.8 Space-Efficient VDisks

Starting with SVC 4.3, VDisks can be configured to be either “Space-Efficient” or “Fully Allocated”. A Space-Efficient VDisk (SE VDisk) behaves with respect to application reads and writes as though it were fully allocated, including the requirements of read stability and write atomicity. When an SE VDisk is created, the user specifies two capacities: the real capacity of the VDisk and its virtual capacity.

The real capacity will determine the quantity of MDisk extents that will be allocated for the VDisk. The virtual capacity will be the capacity of the VDisk reported to other SVC components (for example, FlashCopy, Cache, and Remote Copy) and to the host servers.

The real capacity will be used to store both the user data and the metadata for the SE VDisk. The real capacity can be specified as an absolute value or a percentage of the virtual capacity.

The Space-Efficient VDisk feature can be used on its own to create over-allocated or late-allocation VDisks, or it can be used in conjunction with FlashCopy to implement Space-Efficient FlashCopy. SE VDisk can be used in conjunction with the mirrored VDisks feature, as well, which we refer to as Space-Efficient Copies of VDisks.

When an SE VDisk is initially created, a small amount of the real capacity is used for initial metadata. Write I/Os to grains of the SE VDisk that have not previously been written to cause grains of the real capacity to be used to store metadata and user data. Write I/Os to grains that have previously been written to update the grain where data was previously written. The grain size is defined when the VDisk is created and can be 32 KB, 64 KB, 128 KB, or 256 KB.
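Conceptually, an SE VDisk is a sparse map from grain number to real storage, with grains allocated (and zeroed) on first write. The following toy model is ours; it ignores metadata and the on-disk layout, and only illustrates the late-allocation behavior:

```python
class SpaceEfficientVDisk:
    """Toy model of late allocation: a grain is allocated on first write."""

    def __init__(self, virtual_capacity, grain_size=32 * 1024):
        self.virtual_capacity = virtual_capacity   # capacity reported to hosts
        self.grain_size = grain_size               # 32, 64, 128, or 256 KB
        self.grains = {}                           # grain number -> bytearray

    def write(self, offset, data):
        for i, byte in enumerate(data):
            grain, pos = divmod(offset + i, self.grain_size)
            # First write to a grain allocates it (zeroed) from real capacity.
            self.grains.setdefault(grain, bytearray(self.grain_size))[pos] = byte

    def read(self, offset, length):
        out = bytearray()
        for i in range(length):
            grain, pos = divmod(offset + i, self.grain_size)
            # Reads from space that was never written return zeroes.
            out.append(self.grains[grain][pos] if grain in self.grains else 0)
        return bytes(out)

    @property
    def real_capacity_used(self):
        return len(self.grains) * self.grain_size

vdisk = SpaceEfficientVDisk(virtual_capacity=1 << 30)   # 1 GB virtual capacity
vdisk.write(0, b"hello")
print(vdisk.real_capacity_used)   # one 32 KB grain allocated on first write
```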

Figure 2-10 on page 24 provides an overview.


Figure 2-10 Overview SE VDisk

SE VDisks store both user data and metadata. Each grain requires metadata. The overhead will never be greater than 0.1% of the user data. The overhead is independent of the virtual capacity of the SE VDisk. If you are using SE VDisks in a FlashCopy map, use the same grain size as the map grain size for the best performance. If you are using the Space-Efficient VDisk directly with a host system, use a small grain size.

The real capacity of an SE VDisk can be changed provided that the VDisk is not in image mode. Increasing the real capacity allows a larger amount of data and metadata to be stored on the VDisk. SE VDisks use the real capacity in ascending order as new data is written to the VDisk. Consequently, if the user initially assigns too much real capacity to an SE VDisk, the real capacity can be reduced to free up storage for other uses. It is not possible to reduce the real capacity of an SE VDisk below the capacity that is currently in use, other than by deleting the VDisk.

An SE VDisk can be configured to autoexpand, which causes SVC to automatically expand the real capacity of an SE VDisk as its real capacity is used. Autoexpand attempts to maintain a fixed amount of unused real capacity on the VDisk. This amount is known as the “contingency capacity”.

SE VDisk format: SE VDisks do not need formatting. A read I/O, which requests data from unallocated data space, will return zeroes. When a write I/O causes space to be allocated, the grain will be zeroed prior to use. Consequently, an SE VDisk will always be formatted regardless of whether the format flag is specified when the VDisk is created. The formatting flag will be ignored when an SE VDisk is created or when the real capacity is expanded; the virtualization component will never format the real capacity for an SE VDisk.


The contingency capacity is initially set to the real capacity that is assigned when the VDisk is created. If the user modifies the real capacity, the contingency capacity is reset to be the difference between the used capacity and real capacity.

A VDisk that is created with a zero contingency capacity goes offline as soon as it needs to expand, whereas a VDisk with a non-zero contingency capacity stays online until the contingency capacity has been used up.

Autoexpand will not cause space to be assigned to the VDisk that can never be used. Autoexpand will not cause the real capacity to grow much beyond the virtual capacity. The real capacity can be manually expanded to more than the maximum that is required by the current virtual capacity, and the contingency capacity will be recalculated.

To support the autoexpansion of SE VDisks, the MDGs from which they are allocated have a configurable warning capacity. When the used capacity of the group exceeds the warning capacity, a warning is logged. To allow for capacity used by quorum disks and partial extents of image mode VDisks, the calculation is based on the free capacity. For example, if a warning level of 80% has been specified, the warning is logged when only 20% of the free capacity remains.
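As we understand the rule, the warning can be expressed as a simple threshold check on the remaining free capacity. This is our sketch; the exact SVC accounting, including the quorum disk and image mode adjustments, may differ:

```python
def warning_needed(free_capacity, initial_free_capacity, warning_pct):
    """Warn once less than (100 - warning_pct)% of the free capacity remains.

    With an 80% warning level, the warning fires when under 20% of the
    free capacity is left.
    """
    return free_capacity < initial_free_capacity * (100 - warning_pct) / 100

# Illustrative values: 100 units of free capacity at the start.
print(warning_needed(free_capacity=30, initial_free_capacity=100, warning_pct=80))
print(warning_needed(free_capacity=19, initial_free_capacity=100, warning_pct=80))
```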

An SE VDisk can be converted to a fully allocated VDisk using VDisk Mirroring.

SVC 5.1.0 introduces the ability to convert a fully allocated VDisk to an SE VDisk by using the following procedure:

1. Start with a VDisk that has one fully allocated copy.

2. Add a Space-Efficient copy to the VDisk.

3. Allow VDisk Mirroring to synchronize the copies.

4. Remove the fully allocated copy.

This procedure uses a zero-detection algorithm. Note that as of 5.1.0, this algorithm is used only for I/O that is generated by the synchronization of mirrored VDisks; I/O from other components (for example, FlashCopy) is written using normal procedures.
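To illustrate the effect of zero detection, the sketch below (our simplification, using a fixed illustrative grain size) copies only non-zero grains into the space-efficient copy, so grains that contain only zeroes never consume real capacity:

```python
GRAIN = 32 * 1024  # illustrative grain size

def sync_to_space_efficient(fully_allocated):
    """Synchronize a fully allocated copy into a space-efficient copy,
    skipping grains that contain only zeroes (zero detection)."""
    se_copy = {}
    for start in range(0, len(fully_allocated), GRAIN):
        chunk = fully_allocated[start:start + GRAIN]
        if any(chunk):                      # zero detection: skip empty grains
            se_copy[start // GRAIN] = chunk
    return se_copy

disk = bytes(GRAIN) + b"x" * GRAIN + bytes(GRAIN)   # zero, data, zero
copy = sync_to_space_efficient(disk)
print(len(copy))  # only 1 of the 3 grains is allocated
```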

SE VDisks: SE VDisks require additional I/O operations to read and write metadata on the back-end storage and generate additional load on the SVC nodes. We therefore do not recommend the use of SE VDisks for high-performance applications.

Note: Consider SE VDisks as targets in FlashCopy relationships. Using them as targets in Metro Mirror or Global Mirror relationships makes no sense, because during the initial synchronization, the target becomes fully allocated.

2.2.9 VDisk I/O governing

It is possible to constrain I/O operations so that a host is limited in the amount of I/O that it can perform to a VDisk in a period of time. You can use this governing to satisfy a quality of service constraint or a contractual obligation (for example, a client agrees to pay for I/Os performed, but will not pay for I/Os beyond a certain rate). Only commands that access the medium (Read (6/10), Write (6/10), or Write and Verify) are subject to I/O governing.

An I/O budget is expressed as a number of I/Os, or a number of MBs, over a minute. The budget is evenly divided between all SVC nodes that service that VDisk, that is, between the nodes that form the I/O Group of which that VDisk is a member.

The algorithm operates two levels of policing. While a VDisk on each SVC node has been receiving I/O at a rate lower than the governed level, no governing is performed. A check is made every minute that the VDisk on each node is continuing to receive I/O at a rate lower than the threshold level. Where this check shows that the host has exceeded its limit on one or more nodes, policing begins for new I/Os.

The following conditions exist while policing is in force:

• A budget allowance is calculated for a 1-second period.

• I/Os are counted over each 1-second period.

• If I/Os are received in excess of the 1-second budget on any node in the I/O Group, those I/Os and later I/Os are pended.

• When the second expires, a new budget is established, and any pended I/Os are redriven under the new budget.

This algorithm might cause I/O to backlog in the front end, which might eventually cause “Queue Full Condition” to be reported to hosts that continue to flood the system with I/O. If a host stays within its 1 second budget on all nodes in the I/O Group for a period of 1 minute, the policing is relaxed, and monitoring takes place over the 1 minute period as before.
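The policing scheme resembles a per-second budget counter. The following simplified, single-object sketch is ours; the real algorithm coordinates across all nodes of the I/O Group and tracks wall-clock time, which we replace with an explicit tick() call:

```python
class IOGovernor:
    """Per-second I/O budget: excess I/Os are pended until the next second."""

    def __init__(self, iops_per_minute, nodes_in_io_group=2):
        # The per-minute budget is divided evenly between the nodes that
        # serve the VDisk, then broken into 1-second allowances.
        self.budget_per_second = iops_per_minute // nodes_in_io_group // 60
        self.used_this_second = 0
        self.pended = []

    def submit(self, io_id):
        if self.used_this_second < self.budget_per_second:
            self.used_this_second += 1
            return "dispatched"
        self.pended.append(io_id)   # over budget: pend until the next second
        return "pended"

    def tick(self):
        """A new second begins: reset the budget and redrive pended I/Os."""
        self.used_this_second = 0
        redriven = []
        while self.pended and self.used_this_second < self.budget_per_second:
            self.used_this_second += 1
            redriven.append(self.pended.pop(0))
        return redriven

gov = IOGovernor(iops_per_minute=240)     # 2 I/Os per second per node
print([gov.submit(i) for i in range(3)])  # the third I/O exceeds the budget
```

A persistently over-budget host accumulates pended I/Os, which models the front-end backlog (and eventual Queue Full condition) described above.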

2.2.10 iSCSI overview

SVC 4.3.1 and earlier support Fibre Channel (FC) as the sole transport protocol for communicating with hosts, storage, and other SVC clusters. SVC 5.1.0 introduces iSCSI as an alternative means of attaching hosts. However, all communications with back-end storage subsystems, and with other SVC clusters, still occur via FC.

In the simplest terms, iSCSI allows the transport of SCSI commands and data over a TCP/IP network, based on IP routers and Ethernet switches. iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and thereby leverages an existing IP network, instead of requiring expensive FC HBAs and a SAN fabric infrastructure.

A pure SCSI architecture is based on the client/server model. A client (for example, a server or workstation) initiates read or write requests for data from a target server (for example, a data storage system). Commands, which are sent by the client and processed by the server, are put into the Command Descriptor Block (CDB). The server executes a command, and completion is indicated by a special signal alert.

I/O governing: I/O governing is applied to remote copy secondaries, as well as primaries. If an I/O governing rate has been set on a VDisk that is a remote copy secondary, this governing rate is also applied to the primary. If governing is in use on both the primary and the secondary VDisks, each governed quantity is limited to the lower of the two specified values. Governing has no effect on FlashCopy or data migration I/O.

New iSCSI feature: The new iSCSI feature is a software feature that is provided by the new SVC 5.1 code. This feature is available on any SVC hardware node that supports SVC 5.1 code. It is not restricted to the new 2145-CF8 nodes.

The major functions of iSCSI include encapsulation and the reliable delivery of CDB transactions between initiators and targets through the TCP/IP network, especially over a potentially unreliable IP network.

The concepts of names and addresses have been carefully separated in iSCSI:

• An iSCSI name is a location-independent, permanent identifier for an iSCSI node. An iSCSI node has one iSCSI name, which stays constant for the life of the node. The terms “initiator name” and “target name” also refer to an iSCSI name.

• An iSCSI address specifies not only the iSCSI name of an iSCSI node, but also a location of that node. The address consists of a host name or IP address, a TCP port number (for the target), and the iSCSI name of the node. An iSCSI node can have any number of addresses, which can change at any time, particularly if they are assigned by way of Dynamic Host Configuration Protocol (DHCP). An SVC node represents an iSCSI node and provides statically allocated IP addresses.

Each iSCSI node, that is, an initiator or target, has a unique iSCSI Qualified Name (IQN), which can have a size of up to 255 bytes. The IQN is formed according to the rules adopted for Internet nodes.

The iSCSI qualified name format is defined in RFC 3720 and contains (in order) these elements:

• The string “iqn”.

• A date code specifying the year and month in which the organization registered the domain or subdomain name used as the naming authority string.

• The organizational naming authority string, which consists of a valid, reversed domain or subdomain name.

• Optionally, a colon (:), followed by a string of the assigning organization’s choosing, which must make each assigned iSCSI name unique.

For SVC, the IQN for its iSCSI target is specified as:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

On a Windows server, the IQN, that is, the name for the iSCSI Initiator, can be defined as:

iqn.1991-05.com.microsoft:<computer name>
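Because the layout is fixed, the SVC target IQN can be assembled mechanically. In this short sketch (ours; the cluster and node names are made up), note the 255-byte limit from RFC 3720:

```python
def svc_target_iqn(cluster_name, node_name):
    """Build the SVC iSCSI target IQN: iqn.<date>.<reversed-domain>:<string>."""
    return f"iqn.1986-03.com.ibm:2145.{cluster_name}.{node_name}"

iqn = svc_target_iqn("ITSOCL1", "node1")
print(iqn)

# The part before the colon identifies the naming authority;
# the optional part after it makes the assigned name unique.
authority, _, unique = iqn.partition(":")
assert authority == "iqn.1986-03.com.ibm"
assert len(iqn.encode()) <= 255   # IQN size limit per RFC 3720
```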

You can abbreviate IQNs by a descriptive name, known as an alias. An alias can be assigned to an initiator or a target. The alias is independent of the name and does not have to be unique. Because it is not unique, the alias must be used in a purely informational way. It cannot be used to specify a target at login or used during authentication. Both targets and initiators can have aliases.

An iSCSI name provides the correct identification of an iSCSI device irrespective of its physical location. Remember, the IQN is an identifier, not an address.

Be careful: Before changing cluster or node names for an SVC cluster that has servers connected to it by way of iSCSI, be aware that because the cluster and node names are part of the SVC’s IQN, you can lose access to your data by changing these names. The SVC GUI displays a specific warning; the CLI does not.


The iSCSI session, which consists of a login phase and a full feature phase, is completed with a special command.

The iSCSI login phase is identical to the FC port login process (PLOGI). It is used to adjust various parameters between two network entities and to confirm the access rights of an initiator.

If the iSCSI login phase is completed successfully, the target confirms the login for the initiator; otherwise, the login is not confirmed and the TCP connection breaks.

As soon as the login is confirmed, the iSCSI session enters the full feature phase. If more than one TCP connection was established, iSCSI requires that each command/response pair go through one TCP connection. Thus, each separate read or write command is carried out over a single TCP connection, without the need to trace requests across separate flows. However, separate transactions can be delivered through separate TCP connections within one session.

Figure 2-11 shows an overview of the various block-level storage protocols and where the iSCSI layer is positioned.

Figure 2-11 Overview of block-level protocol stacks

2.2.11 Usage of IP addresses and Ethernet ports

The addition of iSCSI changes the manner in which you configure Ethernet access to an SVC cluster. The SVC 5.1 releases of the GUI and the command-line interface (CLI) reflect these changes.

The existing SVC node hardware has two Ethernet ports. Until now, only one Ethernet port has been used for cluster configuration. With the introduction of iSCSI, you can now use the second port. The configuration details of the two Ethernet ports can be displayed by the GUI or CLI, and they are also displayed on the node’s panel.

There are now two kinds of IP addresses:

• A cluster management IP address is used for access to the SVC CLI, as well as to the Common Information Model Object Manager (CIMOM) that runs on the SVC configuration node. As before, only a single configuration node presents a cluster management IP address at any one time, and failover of the configuration node is unchanged. However, there can now be two cluster management IP addresses, one for each of the two Ethernet ports.


• A port IP address is used to perform iSCSI I/O to the cluster. Each node can have a port IP address for each of its ports.

In the case of an upgrade to the SVC 5.1 code, the original cluster IP address will be retained and will always be found on the eth0 interface on the configuration node. A second, new cluster IP address can be optionally configured in SVC 5.1. This second cluster IP address will always be on the eth1 interface on the configuration node. When the configuration node fails, both configuration IP addresses will move to the new configuration node.

Figure 2-12 shows an overview of the new IP addresses on an SVC node port and the rules regarding how these IP addresses are moved between the nodes of an I/O Group.

The management IP addresses and the iSCSI target IP addresses fail over to the partner node N2 if node N1 restarts (and vice versa). The iSCSI target IP addresses fail back to their corresponding ports on node N1 when node N1 is up and running again.

Figure 2-12 SVC 5.1 IP address overview

In an SVC cluster running 5.1 code, an eight-node cluster with full iSCSI coverage (the maximum configuration) therefore has the following IP addresses:

• Two IPv4 configuration addresses (one configuration address is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other configuration address goes with eth1:0).

• One IPv4 service mode fixed address (although a DHCP-assigned address can also be used). This address is always associated with the eth0:0 alias for the eth0 interface of the configuration node.

• Two IPv6 configuration addresses (one address is always associated with the eth0:0 alias for the eth0 interface of the configuration node, and the other address goes with eth1:0).

• One IPv6 service mode fixed address (although a DHCP-assigned address can also be used). This address is always associated with the eth0:0 alias for the eth0 interface of the configuration node.


• Sixteen IPv4 addresses for iSCSI access (two per node, associated with the eth0:1 or eth1:1 alias for the eth0 or eth1 interface on each node).

• Sixteen IPv6 addresses for iSCSI access (two per node, associated with the eth0 and eth1 interfaces on each node).

We show the configuration of the SVC ports in great detail in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339 and in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.

2.2.12 iSCSI VDisk discovery

The iSCSI target implementation on the SVC nodes makes use of the off-load features that are provided by the node's hardware. This implementation results in minimal impact on the node's CPU load for handling iSCSI traffic and simultaneously delivers excellent throughput (up to 95 MBps of user data) on each of the two 1 Gbps LAN ports. The plan is to support jumbo frames (maximum transmission unit (MTU) sizes greater than 1,500 bytes) in future SVC releases.

Hosts can discover VDisks through one of the following three mechanisms:

- Internet Storage Name Service (iSNS): SVC can register itself with an iSNS name server; you set the IP address of this server by using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.

- Service Location Protocol (SLP): The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node, such as the CIMOM service that runs on the configuration node; the iSCSI I/O service can now also be reported.

- iSCSI Send Target request: The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260).

2.2.13 iSCSI authentication

Authentication of the host server toward the SVC cluster is optional and is disabled by default.

The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the SVC cluster and the host. After the successful completion of the link establishment phase, the SVC as authenticator sends a challenge message to the specific server (peer). The server responds with a value that is calculated by using a one-way hash function on the index/secret/challenge, such as an MD5 checksum hash.

The response is checked by the SVC against its own calculation of the expected hash value. If there is a match, the SVC acknowledges the authentication. If not, the SVC will terminate the connection and will not allow any I/O to VDisks. At random intervals, the SVC might send new challenges to the peer to recheck the authentication.
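The challenge/response exchange described above follows the standard CHAP scheme (RFC 1994). The sketch below illustrates the general MD5 computation only; it is not the SVC implementation, and the secret and identifier values are invented for the example:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """One-way hash over identifier/secret/challenge, as in RFC 1994 CHAP."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Authenticator (SVC) side: issue a random challenge.
secret = b"shared-chap-secret"          # invented example value
ident, challenge = 1, os.urandom(16)

# Peer (host) side: compute and return the response.
response = chap_response(ident, secret, challenge)

# Authenticator side: recompute and compare; on a mismatch, the
# connection would be terminated and no VDisk I/O allowed.
assert response == chap_response(ident, secret, challenge)
```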

You can assign a CHAP secret to each SVC host object. The host must then use CHAP authentication in order to begin a communications session with a node in the cluster. You can also assign a CHAP secret to the cluster if two-way authentication is required. When you create an iSCSI host within an SVC cluster, you specify the initiator's IQN, for example, for a Windows server:

iqn.1991-05.com.microsoft:ITSO_W2008

In addition, you can specify an optional CHAP secret.

30 Implementing the IBM System Storage SAN Volume Controller V5.1


You add a VDisk to a host, or perform LUN masking, in the same way that you connect hosts by way of FC to the SVC.

Because you can use iSCSI in networks where data can be accessed illegally, the specification allows separate security methods. You can set up security, for example, via a method, such as IPSec, which is transparent for higher levels, such as iSCSI, because it is implemented at the IP level. You can obtain details about securing iSCSI in RFC3723, Securing Block Storage Protocols over IP, which is available at this Web site:

http://tools.ietf.org/html/rfc3723

2.2.14 iSCSI multipathing

A multipathing driver allows the host to send commands down multiple paths to the SVC for the same VDisk. A fundamental multipathing difference exists between FC and iSCSI environments.

For FC-attached hosts, if the FC target and its VDisks go offline, for example, due to a problem in the target node, its ports, or the network, the host has to use a separate SAN path to continue I/O. A multipathing driver is therefore always required on the host.

iSCSI-attached hosts see a pause in I/O when a (target) node is reset, but (and this behavior is the key difference) the host is reconnected to the same IP target when it reappears after a short period of time, and its VDisks continue to be available for I/O.

A host multipathing driver for iSCSI is required if you want these capabilities:

- To protect a server from network link failures

- To protect a server from network failures, if the server is connected via two HBAs to two separate networks

- To protect a server from a server HBA failure (if two HBAs are in use)

- To provide load balancing on the server's HBAs and the network links

2.2.15 Advanced Copy Services overview

The SVC supports the following copy services:

- Synchronous remote copy
- Asynchronous remote copy
- FlashCopy with a full target
- Block virtualization and data migration

Copy services are implemented between VDisks within a single SVC cluster or between multiple SVC clusters. They are therefore independent of the functionalities of the underlying disk subsystems that are used to provide storage resources to an SVC cluster.

Synchronous/Asynchronous remote copy

The general application of remote copy seeks to maintain two copies of a data set. Often, the two copies are separated by distance, but not necessarily.

Be aware: With the iSCSI implementation in SVC, an IP address failover/failback between partner nodes of an I/O Group will only take place in cases of a planned or unplanned node restart. In the case of a problem in the network link (switches, ports, or links), no such failover takes place.

Chapter 2. IBM System Storage SAN Volume Controller 31

Page 58: San

The remote copy can be maintained in one of two modes: synchronous or asynchronous. The definition of an asynchronous remote copy needs to be supplemented by describing the maximum degree of asynchronicity.

With the SVC, Metro Mirror and Global Mirror are the IBM branded terms for the synchronous remote copy and asynchronous remote copy functions, respectively.

Synchronous remote copy ensures that updates are committed at both the primary and the secondary before the application considers the updates complete; therefore, the secondary is fully up-to-date if it is needed in a failover. However, the application is fully exposed to the latency and bandwidth limitations of the communication link to the secondary. In a truly remote situation, this extra latency can have a significant adverse effect on application performance.
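As a rough illustration of the latency exposure, every synchronous write waits for the round trip to the secondary, so inter-site distance adds directly to host write latency. The figures below (roughly 5 microseconds per kilometer of fiber, each way) are generic rules of thumb, not SVC measurements:

```python
# Back-of-envelope latency model for synchronous remote copy: the
# inter-site round-trip time (RTT) is added to every host write.
US_PER_KM_ONE_WAY = 5.0     # rough light-in-fiber propagation delay

def rtt_ms(distance_km: float) -> float:
    return 2 * distance_km * US_PER_KM_ONE_WAY / 1000.0

def sync_write_latency_ms(local_ms: float, rtt: float) -> float:
    return local_ms + rtt

# 100 km of fiber adds roughly 1 ms of round trip to each write.
print(sync_write_latency_ms(1.0, rtt_ms(100)))   # 2.0
```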

SVC assumes that the FC fabric to which it is attached contains hardware that achieves the long distance requirement for the application. This hardware makes distant storage accessible as though it were local storage. Specifically, it enables a group of up to four SVC clusters to connect (FC login) to each other and establish communications in the same way as though they were located nearby on the same fabric. The only differences are in the expected latency of that communication, the bandwidth capability of the links, and the availability of the links as compared with the local fabric. Special configuration guidelines exist for SAN fabrics that are used for data replication. Issues to consider are the distance and the bandwidth of the site interconnections.

In asynchronous remote copy, the application considers an update complete before that update has necessarily been committed at the secondary. Hence, on a failover, certain updates might be missing at the secondary. The application must have an external mechanism for recovering the missing updates and reapplying them. This mechanism can involve user intervention. Asynchronous remote copy provides comparable functionality to a continuous backup process that is missing the last few updates. Recovery on the secondary site involves bringing up the application on this recent “backup” and, then, reapplying the most recent updates to bring the secondary up-to-date.

The asynchronous remote copy must present at the secondary a view to the application that might not contain the latest updates, but is always consistent. If consistency has to be guaranteed at the secondary, applying updates in an arbitrary order is not an option. At the primary side, the application enforces an ordering implicitly by not scheduling an I/O until a previous dependent I/O has completed. We do not know the actual ordering constraints of the application; the best approach is to choose an ordering that the application might have seen if I/O at the primary had been stopped at a suitable point. One example is to apply I/Os at the secondary in the order in which they were completed at the primary. Thus, the secondary always reflects a state that could have been seen at the primary if we had frozen I/O there.

The SVC Global Mirror protocol operates to identify small groups of I/Os, which are known to be active concurrently in the primary cluster. The process to identify these groups of I/Os does not significantly contribute to the latency of these I/Os when they execute at the primary. These groups are applied at the secondary in the order in which they were executed at the primary. By identifying groups of I/Os that can be applied concurrently at the secondary, the protocol maintains good throughput as the system size grows.
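A toy model of this ordering scheme follows. It is not the actual Global Mirror protocol; it only illustrates the idea that groups of concurrently active writes are replayed at the secondary in primary completion order, while writes within one group can be applied in any order:

```python
# Toy sketch (not the actual Global Mirror protocol): writes are batched
# into groups that were active concurrently at the primary; the groups
# are replayed at the secondary strictly in primary completion order.
def replay_at_secondary(groups, volume):
    for group in groups:            # groups ordered as completed at primary
        for lba, data in group:     # order within a group is unconstrained
            volume[lba] = data
    return volume

secondary = {}
groups = [[(0, "a"), (8, "b")],     # group 1: concurrent at the primary
          [(0, "c")]]               # group 2: depends on group 1
replay_at_secondary(groups, secondary)
print(secondary)                    # {0: 'c', 8: 'b'}
```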

The relationship between the two copies is not symmetrical. One copy of the data set is considered the primary copy, which is sometimes also known as the source. This copy provides the reference for normal runtime operation. Updates to this copy are shadowed to a secondary copy, which is sometimes known as the destination or even the target. The secondary copy is not normally referenced for performing I/O. If the primary copy fails, the secondary copy can be enabled for I/O operation. A typical use of this function might involve two sites where the first site provides service during normal operations and the second site is only activated when a failure of the first site is detected.

The secondary copy is not accessible for application I/O other than the I/Os that are performed for the remote copy process. The SVC allows read-only access to the secondary storage when it contains a consistent image. This capability is only intended to allow boot time operating system discovery to complete without error so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required. For instance, many operating systems need to read logical block address (LBA) 0 to configure a logical unit.

“Enabling” the secondary copy for active operation will require SVC, operating system, and possibly application-specific work, which needs to be performed as part of the entire failover process. The SVC software at the secondary must be instructed to stop the relationship, which makes the secondary logical unit accessible for normal I/O access. The operating system might need to mount file systems, or similar work, which can typically only happen when the logical unit is accessible for writes. The application might have a log of work to recover.

Note that this property of remote copy, the requirement to enable the secondary copy, differentiates it from RAID-1 mirroring. The latter aims to emulate a single, reliable disk, regardless of what system accesses it. Remote copy retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained.

The underlying storage at the primary or secondary of a remote copy will normally be RAID storage, but it can be any storage that can be managed by the SVC.

Making use of a secondary copy involves a conscious policy decision by a user that a failover is required. The application work involved in establishing operation on the secondary copy is substantial. The goal is to make this rapid but not seamless. Rapid is still much faster compared to recovering from a backup copy.

Most clients will aim to automate this remote copy through failover management software. SVC provides Simple Network Management Protocol (SNMP) traps and interfaces to enable this automation. IBM Support for automation is provided by IBM Tivoli® Storage Productivity Center for Replication.

You can access the documentation online at the IBM Tivoli Storage Productivity Center information center:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

2.2.16 FlashCopy

FlashCopy makes a copy of a source VDisk to a target VDisk. The original content of the target VDisk is lost. After the copy operation has started, the target VDisk has the contents of the source VDisk as it existed at a single point in time. Although the copy operation takes time, the resulting data at the target appears as though the copy was made instantaneously.

You can run FlashCopy on multiple source and target VDisks. FlashCopy permits the management operations to be coordinated so that a common single point in time is chosen for copying target VDisks from their respective source VDisks. This capability allows a consistent copy of data, which spans multiple VDisks.

SVC also permits multiple target VDisks to be FlashCopied from each source VDisk. You can use this capability to create images from separate points in time for each source VDisk; you can also create multiple images from a source VDisk at a common point in time. Source and target VDisks can be SE VDisks.

Starting with SVC 5.1, Reverse FlashCopy is supported. It enables target VDisks to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. SVC supports multiple targets and thus multiple rollback points.

FlashCopy is sometimes described as an instance of a Time-Zero copy (T0) or a Point in Time (PiT) copy technology. Although the FlashCopy operation takes a finite time, this time is several orders of magnitude less than the time that is required to copy the data using conventional techniques.
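The general mechanism behind such point-in-time copies can be sketched as a copy-on-write scheme: before a source grain is overwritten, its T0 content is preserved for the target. The grain-level class below is purely illustrative and does not reflect SVC internals such as grain size or the background copy process:

```python
# Hedged sketch of the general copy-on-write idea behind point-in-time
# (T0) copies; the class and grain model are illustrative, not SVC's.
class PointInTimeCopy:
    def __init__(self, source):
        self.source = source
        self.target = {}            # grains preserved so far
        self.copied = set()

    def read_target(self, grain):
        # Uncopied grains are still served from the T0 source image.
        return self.target[grain] if grain in self.copied else self.source[grain]

    def write_source(self, grain, data):
        # Preserve the T0 content before overwriting the source grain.
        if grain not in self.copied:
            self.target[grain] = self.source[grain]
            self.copied.add(grain)
        self.source[grain] = data

src = {0: "old0", 1: "old1"}
pit = PointInTimeCopy(src)
pit.write_source(0, "new0")
print(pit.read_target(0), pit.read_target(1))   # old0 old1
```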

Most clients aim to integrate the FlashCopy feature for point in time copies and quick recovery of their applications and databases. IBM Support is provided by Tivoli Storage FlashCopy Manager:

http://www-01.ibm.com/software/tivoli/products/storage-flashcopy-mgr/

You can read a detailed description of Data Mirroring and FlashCopy copy services in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339. We discuss data migration in Chapter 6, “Advanced Copy Services” on page 255.

2.3 SVC cluster overview

In simple terms, a cluster is a collection of servers that, together, provide a set of resources to a client. The key point is that the client has no knowledge of the underlying physical hardware of the cluster. The client is isolated and protected from changes to the physical hardware, which offers many benefits, most significantly, high availability.

Resources on clustered servers act as highly available versions of unclustered resources. If a node (an individual computer) in the cluster is unavailable, or too busy to respond to a request for a resource, the request is transparently passed to another node capable of processing it, so that clients are unaware of the exact locations of the resources they are using.

For example, a client can request the use of an application without being concerned about either where the application resides or which physical server is processing the request. The user simply gains access to the application in a timely and reliable manner. Another benefit is scalability. If you need to add users or applications to your system and want performance to be maintained at existing levels, additional systems can be incorporated into the cluster.

The SVC is a collection of up to eight cluster nodes, which are added in pairs. In future releases, the cluster size might be increased to permit further performance scalability. These nodes are managed as a set (cluster) and present a single point of control to the administrator for configuration and service activity.

The current eight node limit within an SVC cluster is a limitation of the current product, not an architectural one. Larger clusters are possible without changing the underlying architecture.

SVC demonstrated its ability to scale during a recently run project:

http://www-03.ibm.com/press/us/en/pressrelease/24996.wss

Based on a 14-node cluster, coupled with solid-state drive controllers, the project achieved a data rate of over one million IOPS with a response time of under 1 millisecond (ms).


Although the SVC code is based on a purpose-optimized Linux kernel, the clustering feature is not based on Linux clustering code. The cluster software used within SVC, that is, the event manager cluster framework, is based on the outcome of the COMPASS research project. It is the key element that isolates the SVC application from the underlying hardware nodes. The cluster software makes the code portable and provides the means to keep the single instances of the SVC code that run on separate cluster nodes in sync. Node restarts (during a code upgrade), the addition of new nodes, the removal of old nodes, and node failures therefore cannot impact the SVC's availability.

It is key for all active nodes of a cluster to know that they are members of the cluster. Especially in situations, such as the split-brain scenario, where single nodes lose contact with the other nodes and cannot determine whether those nodes can still be reached, it is essential to have a solid mechanism to decide which nodes form the active cluster. A worst case scenario is a cluster that splits into two separate clusters.

Within an SVC cluster, the voting set and an optional quorum disk are responsible for the integrity of the cluster. If nodes are added to a cluster, they get added to the voting set; if nodes are removed, they will also quickly be removed from the voting set. Over time, the voting set, and hence the nodes in the cluster, can completely change so that the cluster has migrated onto a completely separate set of nodes from the set on which it started.

Within an SVC cluster, the quorum is defined in one of these ways:

- More than half the nodes in the voting set

- Exactly half of the nodes in the voting set and the quorum disk from the voting set

- When there is no quorum disk in the voting set, exactly half of the nodes in the voting set, if that half includes the node that appears first in the voting set (a node is entered into the voting set in the first available free slot)

These rules guarantee that there is only ever at most one group of nodes able to operate as the cluster, so the cluster never splits into two. The SVC cluster implements a dynamic quorum. Following a loss of nodes, if the cluster can continue operation, the cluster will adjust the quorum requirement, so that further node failure can be tolerated.
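The three quorum rules above can be expressed directly in code. This sketch is illustrative only; the function name and parameters are invented for the example:

```python
# Illustrative encoding of the three quorum rules described above.
def has_quorum(active, voting_set, quorum_disk_reachable, has_quorum_disk):
    """active: nodes still in contact; voting_set: ordered member list."""
    n = len(voting_set)
    if len(active) * 2 > n:                   # more than half the nodes
        return True
    if len(active) * 2 == n:
        if has_quorum_disk:                   # exactly half + quorum disk
            return quorum_disk_reachable
        return voting_set[0] in active        # half including the first slot
    return False

voting = ["n1", "n2", "n3", "n4"]
print(has_quorum({"n3", "n4"}, voting, True, True))    # won the tiebreak
print(has_quorum({"n3", "n4"}, voting, False, True))   # lost the tiebreak
```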

The lowest Node Unique ID in a cluster becomes the boss node for the group of nodes and proceeds to determine (from the quorum rules) whether the nodes can operate as the cluster. This node also presents the maximum two cluster IP addresses on one or both of its node’s Ethernet ports to allow access for cluster management.

2.3.1 Quorum disks

The cluster uses the quorum disk for two purposes: as a tie breaker in the event of a SAN fault, when exactly half of the nodes that were previously members of the cluster are present, and to hold a copy of important cluster configuration data. Just over 256 MB is reserved for this purpose on each quorum disk candidate. There is only one active quorum disk in a cluster; however, the cluster uses three MDisks as quorum disk candidates. The cluster automatically selects the actual active quorum disk from the pool of assigned quorum disk candidates.

If a tiebreaker condition occurs, the one half of the cluster nodes, which is able to reserve the quorum disk after the split has occurred, locks the disk and continues to operate. The other half stops its operation. This design prevents both sides from becoming inconsistent with each other.

When MDisks are added to the SVC cluster, the SVC cluster checks each MDisk to see if it can be used as a quorum disk. If the MDisks fulfill the requirements, the SVC will assign the first three MDisks that are added to the cluster as quorum candidates. One of them is selected as the active quorum disk.

If possible, the SVC will place the quorum candidates on separate disk subsystems. After the quorum disk has been selected, however, no attempt is made to ensure that the other quorum candidates are presented through separate disk subsystems.

With SVC 5.1, you can list the quorum disk candidates and the active quorum disk in a cluster by using the svcinfo lsquorum command. After the set of quorum disk candidates has been chosen, it is fixed.

A new quorum disk candidate will only be chosen in one of these conditions:

- The administrator requests that a specific MDisk becomes a quorum disk by using the svctask setquorum command.

- An MDisk that is a quorum disk is deleted from an MDG.

- An MDisk that is a quorum disk changes to image mode.

An offline MDisk will not be replaced as a quorum disk candidate.

A cluster needs to be regarded as a single entity for disaster recovery purposes. The cluster and the quorum disk need to be colocated.

There are special considerations concerning the placement of the active quorum disk for a stretched cluster and stretched I/O Group configurations. Details are available at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003311

During the normal operation of the cluster, the nodes communicate with each other. If a node is idle for a few seconds, a heartbeat signal is sent to ensure connectivity with the cluster. If a node fails for any reason, the workload that is intended for it is taken over by another node until the failed node has been restarted and readmitted to the cluster (which happens automatically). In the event that the microcode on a node becomes corrupted, resulting in a failure, the workload is transferred to another node. The code on the failed node is repaired, and the node is readmitted to the cluster (again, all automatically).

Note: To be considered eligible as a quorum disk, a LUN must meet the following criteria:

- It must be presented by a disk subsystem that is supported to provide SVC quorum disks.

- It cannot be allocated on one of the node's internal flash disks.

- It must have been manually allowed to be a quorum disk candidate by using the svctask chcontroller -allow_quorum yes command.

- It must be in managed mode (no image mode disks).

- It must have sufficient free extents to hold the cluster state information, plus the stored configuration metadata.

- It must be visible to all of the nodes in the cluster.

Important: Running an SVC cluster without a quorum disk can seriously affect your operation. A lack of available quorum disks for storing metadata will prevent any migration operation (including a forced MDisk delete). Mirrored VDisks might be taken offline if there is no quorum disk available. This behavior occurs, because synchronization status for mirrored VDisks is recorded on the quorum disk.


2.3.2 I/O Groups

For I/O purposes, the SVC nodes within the cluster are grouped into pairs, called I/O Groups, with a single pair being responsible for serving I/O on a given VDisk. One node within the I/O Group represents the preferred path for I/O to a given VDisk. The other node provides the failover path. This preference alternates between nodes as each VDisk is created within an I/O Group, which is an approach to balance the workload evenly between the two nodes.

2.3.3 Cache

The primary benefit of storage cache is to improve I/O response time. Reads and writes to a magnetic disk drive suffer from both seek and latency time at the drive level, which can result in 1 ms to 10 ms of response time (for an enterprise-class disk).

The new 2145-CF8 nodes combined with SVC 5.1 provide 24 GB memory per node, or 48 GB per I/O Group, or 192 GB per SVC cluster. The SVC provides a flexible cache model, and the node’s memory can be used as read or write cache. The size of the write cache is limited to a maximum of 12 GB of the node’s memory. Dependent on the current I/O situation on a node, the free part of the memory (maximum 24 GB) can be fully used as read cache.

Cache is allocated in 4 KB pages. A page belongs to one track. A track is the unit of locking and destage granularity in the cache. It is 32 KB in size (eight pages). A track might only be partially populated with valid pages. The SVC coalesces writes up to the 32 KB track size if the writes reside in the same track prior to destage; for example, 4 KB might be written into a track, and another 4 KB might be written to another location in the same track. Therefore, the blocks written from the SVC to the disk subsystem can be any size between 512 bytes and 32 KB.
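The page and track arithmetic described above can be illustrated as follows (sizes from the text; the helper names are invented for the example):

```python
# Illustrative page/track arithmetic for the cache layout described
# above: 4 KB pages, 32 KB tracks (eight pages per track).
PAGE = 4 * 1024
TRACK = 32 * 1024

def track_of(byte_offset):
    return byte_offset // TRACK

def page_in_track(byte_offset):
    return (byte_offset % TRACK) // PAGE

# Two 4 KB writes into the same 32 KB track can be coalesced and
# destaged together; a write into the next track cannot.
a, b, c = 0, 12 * 1024, 36 * 1024
print(track_of(a) == track_of(b))   # True  -> coalesce before destage
print(track_of(a) == track_of(c))   # False -> separate destage
```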

When data is written by the host, the preferred node within the I/O Group saves the data in its cache. Before the cache returns completion to the host, the write must be mirrored, that is, copied into the cache of its partner node, for availability reasons. After a copy of the written data exists on the partner node, the cache returns completion to the host.

Write data that is held in cache has not yet been destaged to disk; therefore, if only one copy of the data were kept, you would risk losing data. Write cache entries without updates during the last two minutes are automatically destaged to disk.

If one node of an I/O Group is missing, due to a restart or a hardware failure, the remaining node empties all of its write cache and proceeds in an operation mode that is referred to as write-through mode. A node operating in write-through mode writes data directly to the disk subsystem before sending an "I/O complete" status message back to the host. Running in this mode can degrade the performance of the specific I/O Group.
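The mirrored write path and the write-through fallback can be modeled with a toy class. This is illustrative only, not SVC code; the class and method names are invented:

```python
# Toy model of the write path described above: completion is returned to
# the host only after the write is copied into the partner node's cache;
# with the partner down, writes go straight to disk (write-through mode).
class IOGroup:
    def __init__(self):
        self.cache = [{}, {}]       # per-node write cache
        self.disk = {}
        self.partner_up = True

    def write(self, node, lba, data):
        if self.partner_up:
            self.cache[node][lba] = data
            self.cache[1 - node][lba] = data   # mirror before completing
        else:
            self.disk[lba] = data              # write-through mode
        return "complete"

grp = IOGroup()
grp.write(0, 10, "x")
print(grp.cache[1][10])     # mirrored copy on the partner node
```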

Starting with SVC Version 4.2.1, write cache partitioning was introduced to the SVC. This feature restricts the maximum amount of write cache that a single MDG can allocate in a cluster. Table 2-2 shows the upper limit of write cache data that a single MDG in a cluster can occupy.

Table 2-2 Upper limit of write cache per MDG

- One MDG: 100%
- Two MDGs: 66%
- Three MDGs: 40%
- Four MDGs: 33%
- More than four MDGs: 25%

Preferred node: The preferred node does not signify absolute ownership. The data can still be accessed by the partner node in the I/O Group in the event of a failure.
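Table 2-2 can be expressed as a simple lookup; this sketch assumes, as the table states, that the 25% limit applies to any cluster with more than four MDGs:

```python
# Table 2-2 as a lookup (sketch): upper limit of write cache that a
# single MDG may occupy, by the number of MDGs in the cluster.
def write_cache_limit_percent(mdg_count: int) -> int:
    limits = {1: 100, 2: 66, 3: 40, 4: 33}
    return limits.get(mdg_count, 25)    # more than four MDGs: 25%

print(write_cache_limit_percent(3))     # 40
```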


For in-depth information about SVC cache partitioning, we strongly recommend IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, which is available at this Web site:

http://www.redbooks.ibm.com/abstracts/redp4426.html?Open

An SVC node can treat part or all of its physical memory as non-volatile. Non-volatile means that its contents are preserved across power losses and resets. Besides the bitmaps for FlashCopy and Remote Mirroring relationships, the Virtualization Table and the Write Cache are the most important items in the non-volatile memory. The actual amount that can be treated as non-volatile depends on the hardware.

In the event of a disruption or external power loss, the physical memory is copied to a file in the file system on the node’s internal disk drive, so that the contents can be recovered when external power is restored. The uninterruptible power supply units, which are delivered with each node’s hardware, ensure that there is sufficient internal power to keep a node operational to perform this dump when external power is removed. After dumping the content of the non-volatile part of the memory to disk, the SVC node shuts down.

2.3.4 Cluster management

The SVC can be managed by one of the following three interfaces:

- A textual command-line interface (CLI) accessed through a Secure Shell (SSH) connection.

- A Web browser-based graphical user interface (GUI) written as a CIM client (ICAT) using the SVC CIMOM. It supports flexible and rapid access to storage management information.

- A CIMOM, which can be used to write alternative CIM clients (such as IBM System Storage Productivity Center).

Starting with SVC release 4.3.1, the SVC Console (ICAT) can use the CIM Agent that is embedded in the SVC cluster. With release 5.1 of the code, using the embedded CIMOM is mandatory. This CIMOM will support the Storage Management Initiative Specification (SMI-S) Version 1.3 standard.

User account migration

During the upgrade from SAN Volume Controller Console Version 4.3.1 to Version 5.1, the installation program attempts to migrate user accounts that are currently defined to the CIMOM on the cluster. If the migration of those accounts fails with the installation program, you can manually migrate the user accounts with the help of a script. You can obtain details in the SVC Software Installation and Configuration Guide, SC23-6628-04.

Hardware Management Console

The management console for SVC is referred to as the IBM System Storage Productivity Center. IBM System Storage Productivity Center is a hardware and software solution that includes a suite of storage infrastructure management software that can centralize, automate, and simplify the management of complex and heterogeneous storage environments.

IBM System Storage Productivity Center

IBM System Storage Productivity Center is based on server hardware (IBM System x®-based) and a set of pre-installed and optional software modules. Several of these pre-installed modules provide base functionality only, or are not activated. You can activate these modules, or the enhanced functionalities, by adding separate licenses.

IBM System Storage Productivity Center contains these functions:


- Tivoli Integrated Portal: IBM Tivoli Integrated Portal is a standards-based architecture for Web administration. The installation of Tivoli Integrated Portal is required to enable single sign-on (SSO) for Tivoli Storage Productivity Center. Tivoli Storage Productivity Center now installs Tivoli Integrated Portal along with Tivoli Storage Productivity Center.

- Tivoli Storage Productivity Center: IBM Tivoli Storage Productivity Center Basic Edition 4.1.0 is pre-installed on the IBM System Storage Productivity Center server. There are several other commercially available products of Tivoli Storage Productivity Center that provide additional functionality beyond Tivoli Storage Productivity Center Basic Edition. You can activate these packages by adding the specific licenses to the pre-installed Basic Edition:

  – Tivoli Storage Productivity Center for Disk allows you to monitor storage systems for performance.

  – Tivoli Storage Productivity Center for Data allows you to collect and monitor file systems and databases.

  – Tivoli Storage Productivity Center Standard Edition is a bundle that includes all of the other packages, along with SAN planning tools that make use of information that is collected from the Tivoli Storage Productivity Center components.

- Tivoli Storage Productivity Center for Replication: The functions of Tivoli Storage Productivity Center for Replication provide the management of the IBM FlashCopy, Metro Mirror, and Global Mirror capabilities for the IBM Enterprise Storage Server® Model 800, IBM DS6000™, DS8000®, and IBM SAN Volume Controller. You can activate this package by adding the specific licenses.

- SVC GUI (ICAT)

- SSH client (PuTTY)

- Windows Server 2008 Enterprise Edition

- Several base software packages that are required for Tivoli Storage Productivity Center

- Optional software packages, such as anti-virus software or DS3000/4000/5000 Storage Manager, which the client can install on the IBM System Storage Productivity Center server

Figure 2-13 on page 40 provides an overview of the SVC management components. We describe the details in Chapter 4, “SAN Volume Controller initial configuration” on page 103. You can obtain details about the IBM System Storage Productivity Center in IBM System Storage Productivity Center User’s Guide Version 1 Release 4, SC27-2336-03.

Chapter 2. IBM System Storage SAN Volume Controller 39


Figure 2-13 SVC management overview

2.3.5 User authentication

With SVC 5.1, several changes to user authentication for an SVC cluster have been introduced to make user authentication simpler.

Earlier SVC releases authenticated all users locally. SVC 5.1 has two authentication methods:

� Local authentication: Local authentication is similar to the existing method and will be described next.

� Remote authentication: Remote authentication supports the use of a remote authentication server, which for SVC is the Tivoli Embedded Security Services, to validate passwords. The Tivoli Embedded Security Services is part of the Tivoli Integrated Portal, one of the three components (Tivoli Productivity Center, Tivoli Productivity Center for Replication, and Tivoli Integrated Portal) that come pre-installed on the IBM System Storage Productivity Center 1.4, which is the management console for SVC 5.1 clusters.

Each SVC cluster can have multiple users defined. The cluster maintains an audit log of successfully executed commands, indicating which users performed which actions at what times.

User names can contain only printable ASCII characters:

� Forbidden characters are single quotation mark (‘), colon (:), percent symbol (%), asterisk (*), comma (,), and double quotation marks (“).

� A user name cannot begin or end with a blank.

Passwords for local users do not have any forbidden characters, but passwords cannot begin or end with blanks.
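As an illustrative sketch, the naming rules above can be expressed as two small validation functions. The function names are hypothetical and are not part of any SVC interface:

```python
# Sketch of the SVC user name and local-password rules described above.
# These helper names are illustrative only, not part of the SVC product.
FORBIDDEN = set("':%*,\"")   # single quotation mark, colon, percent,
                             # asterisk, comma, double quotation mark

def valid_user_name(name: str) -> bool:
    """Printable ASCII only, no forbidden characters, no leading/trailing blank."""
    if not name or name != name.strip():
        return False
    if any(ch in FORBIDDEN for ch in name):
        return False
    return all(32 <= ord(ch) <= 126 for ch in name)

def valid_local_password(password: str) -> bool:
    """Passwords have no forbidden characters, but must not begin or end with a blank."""
    return bool(password) and password == password.strip()
```

For example, `valid_user_name("bad:name")` is rejected because of the colon, while `valid_local_password("p@ss word")` is accepted because embedded blanks are allowed.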


SVC superuser

There is a special local user, called the superuser, that always exists on every cluster and cannot be deleted. Its password is set by the user during cluster initialization. The superuser password can be reset from the node’s front panel; this reset function can be disabled, although doing so makes the cluster inaccessible if all of the users forget their passwords or lose their SSH keys. The superuser’s password supersedes the cluster administrator password that was present in previous software releases.

To register an SSH key for the superuser to provide command-line access, you use the GUI, usually at the end of the cluster initialization process. But, you can also add it later.

The superuser is always a member of user group 0, which has the most privileged role within the SVC.

2.3.6 SVC roles and user groups

Each user group is associated with a single role. The role for a user group cannot be changed, but additional user groups (with one of the defined roles) can be created.

User groups are used for local and remote authentication. Because SVC defines five roles, there are, by default, five user groups defined in an SVC cluster (see Table 2-3).

Table 2-3 User groups

  User group ID   User group      Role
  0               SecurityAdmin   SecurityAdmin
  1               Administrator   Administrator
  2               CopyOperator    CopyOperator
  3               Service         Service
  4               Monitor         Monitor

The access rights for a user belonging to a specific user group are defined by the role that is assigned to the user group. It is the role that defines what a user can do (or cannot do) on an SVC cluster.

Table 2-4 on page 42 shows the roles, ordered from the least privileged Monitor role at the top down to the most privileged SecurityAdmin role.


Table 2-4 Commands permitted for each role

  Monitor:        All svcinfo commands, plus svctask: finderr, dumperrlog, dumpinternallog, chcurrentuser; and svcconfig: backup
  Service:        All commands allowed for the Monitor role, plus svctask: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, settime
  CopyOperator:   All commands allowed for the Monitor role, plus svctask: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, chpartnership
  Administrator:  All commands, except svctask: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, setpwdreset
  SecurityAdmin:  All commands

2.3.7 SVC local authentication

Local users are those users managed entirely on the cluster without the intervention of a remote authentication service. Local users must have a password, an SSH public key, or both. The password is used for GUI authentication, and the SSH key is used for command-line or file transfer (SecureCopy) access. Therefore, a local user can access the SVC cluster via the GUI only if a password is specified.

A local user always belongs to exactly one user group.

Figure 2-14 on page 43 shows an overview of local authentication within the SVC.

Local users: Be aware that local users are defined per SVC cluster. Each user has a name, which must be unique across all users of that cluster. If you want to allow access for a user on multiple clusters, you must define the user in each cluster with the same name and the same privileges.
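The cumulative structure of the roles in Table 2-4 amounts to a simple permission check. The following sketch is illustrative only, with abridged command sets; it is not the SVC implementation:

```python
# Illustrative sketch of the role hierarchy in Table 2-4 (command sets abridged).
MONITOR_CMDS  = {"finderr", "dumperrlog", "dumpinternallog", "chcurrentuser"}
SERVICE_EXTRA = {"applysoftware", "addnode", "rmnode", "settime"}
COPYOP_EXTRA  = {"startfcmap", "stopfcmap", "chrcrelationship"}
SECURITY_ONLY = {"chauthservice", "mkuser", "rmuser", "chuser",
                 "mkusergrp", "rmusergrp", "chusergrp", "setpwdreset"}

def allowed(role: str, cmd: str) -> bool:
    """Return whether a role may run a given svctask command."""
    if role == "SecurityAdmin":
        return True                              # all commands
    if role == "Administrator":
        return cmd not in SECURITY_ONLY          # all except user management
    if role == "Service":
        return cmd in MONITOR_CMDS | SERVICE_EXTRA
    if role == "CopyOperator":
        return cmd in MONITOR_CMDS | COPYOP_EXTRA
    if role == "Monitor":
        return cmd in MONITOR_CMDS
    return False
```

Note how Service and CopyOperator both extend the Monitor role, while only SecurityAdmin can manage users and user groups.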


Figure 2-14 Simplified overview of SVC local authentication

2.3.8 SVC remote authentication and single sign-on

You can configure an SVC cluster to use a remote authentication service. Remote users are those users that are managed by the remote authentication service and require command-line or file-transfer access.

Remote users only have to be defined in the SVC if command-line access is required. In that case, the remote authentication flag has to be set, and an SSH key and its password have to be defined for this user. Remember that for users requiring CLI access with remote authentication, defining the password locally for this user is mandatory.

Remote users do not belong to any SVC user group, because the remote authentication service, for example, a Lightweight Directory Access Protocol (LDAP) directory server, such as IBM Tivoli Directory Server or Microsoft® Active Directory, delivers the user group information.

The upgrade from SVC 4.3.1 is seamless. Existing users and roles are migrated without interruption. Remote authentication can be enabled after the upgrade is complete.

Figure 2-15 on page 44 gives an overview of SVC remote authentication.


Figure 2-15 Simplified overview of SVC 5.1 remote authentication

The authentication service supported by SVC is the Tivoli Embedded Security Services server component level 6.2.

The Tivoli Embedded Security Services server provides the following two key features:

� Tivoli Embedded Security Services isolates the SVC from the actual directory protocol in use, which means that the SVC communicates only with Tivoli Embedded Security Services to get its authentication information. The type of protocol that is used to access the central directory or the kind of the directory system that is used is transparent to SVC.

� Tivoli Embedded Security Services provides a secure token facility that is used to enable single sign-on (SSO). SSO means that users do not have to log in multiple times when using what appears to them to be a single system. It is used within Tivoli Productivity Center. When the SVC Console is launched from within Tivoli Productivity Center, the user will not have to log on to the SVC Console, because the user has already logged in to Tivoli Productivity Center.

With reference to Figure 2-16 on page 45, the user starts application A with a user name and password (1), which are authenticated using the Tivoli Embedded Security Services server (2). The server returns a token (3), which is an opaque string that can only be interpreted by the Tivoli Embedded Security Services server. The server also supplies the user’s groups and an expiry time stamp for the token. The client device (SVC in our case) is responsible for mapping a Tivoli Embedded Security Services user group to roles.

Application A needs to launch application B. Instead of getting the user to enter a new password to authenticate to application B, A passes B the Tivoli Embedded Security Services token (4). Application B passes the Tivoli Embedded Security Services token to the Tivoli Embedded Security Services server (5), which decodes the token and returns the user’s ID and groups to application B (6) along with an expiry time stamp.


Figure 2-16 SSO with Tivoli Embedded Security Services

The token expiry time stamp is advice to the Tivoli Embedded Security Services client applications A and B about credential caching. The applications are permitted to cache and use a token or user name-password combination until the expiry time stamp that was returned by the server.

So, in our example, application B can cache the fact that a particular token maps to a particular user ID and groups, which is a performance boost, because it saves the latency of querying the Tivoli Embedded Security Services server on each interaction between A and B. After the lifetime of the token has expired, application A must query the server again and obtain a new time stamp to rejuvenate the token (or, alternatively, discover that the credentials are now invalid).
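The caching behavior described above can be sketched as follows. The class and method names are hypothetical illustrations; the real Tivoli Embedded Security Services client API differs:

```python
import time

# Sketch of client-side credential caching against a token expiry time stamp,
# as described for Tivoli Embedded Security Services clients (names hypothetical).
class TokenCache:
    def __init__(self):
        self._cache = {}                      # token -> (user, groups, expiry)

    def put(self, token, user, groups, expiry_ts):
        """Cache the identity information returned with a token."""
        self._cache[token] = (user, groups, expiry_ts)

    def lookup(self, token, now=None):
        """Return (user, groups) while the token is still valid, else None."""
        now = time.time() if now is None else now
        entry = self._cache.get(token)
        if entry is None:
            return None
        user, groups, expiry = entry
        if now >= expiry:                     # expired: caller must re-authenticate
            del self._cache[token]
            return None
        return user, groups
```

After expiry, `lookup` returns `None` and the application must go back to the authentication server, mirroring the rejuvenation step in the text.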

The Tivoli Embedded Security Services server administrator can configure the length of time that is used to set expiry timestamps. This system is only effective if the Tivoli Embedded Security Services server and the applications have synchronized clocks.

Using a remote authentication service

Follow these steps to use SVC with a remote authentication service:

1. Configure the cluster with the location of the remote authentication server. You can change the settings with this command:

svctask chauthservice.......

You can view settings with this command:

svcinfo lscluster.......

SVC supports either an HTTP or HTTPS connection to the Tivoli Embedded Security Services server. If the HTTP option is used, the user and password information is transmitted in clear text over the IP network.

2. Configure user groups on the cluster matching those user groups that are used by the authentication service. For each group of interest that is known to the authentication service, there must be an SVC user group with the same name and the remote setting enabled.

For example, you can have a group called sysadmins, whose members require the SVC Administrator role. Configure this group by using the command:

svctask mkusergrp -name sysadmins -remote -role Administrator

If none of a user’s groups match any of the SVC user groups, the user is not permitted to access the cluster.
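A minimal sketch of this matching logic, assuming a simple in-memory table of SVC user groups (all names and the data layout are illustrative):

```python
# Sketch of step 2: map the groups returned by the remote authentication
# service onto SVC user groups that have the remote setting enabled.
SVC_USER_GROUPS = {
    # group name -> (role, remote_enabled); illustrative contents
    "sysadmins": ("Administrator", True),
    "Monitor":   ("Monitor", False),     # remote flag off: not usable remotely
}

def resolve_role(remote_groups):
    """Return the role of the first matching remote-enabled group, else None."""
    for group in remote_groups:
        entry = SVC_USER_GROUPS.get(group)
        if entry and entry[1]:               # name matches and remote is enabled
            return entry[0]
    return None                              # no match: access is denied
```

A user whose directory groups include `sysadmins` resolves to the Administrator role; a user with no matching remote-enabled group resolves to `None` and is denied access, as the text describes.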

3. Configure users that do not require SSH access. Any SVC users that are to be used with the remote authentication service and do not require SSH access need to be deleted from the system. The superuser cannot be deleted; it is a local user and cannot use the remote authentication service.

4. Configure users that do require SSH access. Any SVC users that are to be used with the remote authentication service and do require SSH access must have their remote setting enabled and the same password set on the cluster and the authentication service. The remote setting instructs SVC to consult the authentication service for group information after the SSH key authentication step to determine the user’s role. The need to configure the user’s password on the cluster in addition to the authentication service is due to a limitation in the Tivoli Embedded Security Services server software.

5. Configure the system time. For correct operation, both the SVC cluster and the system running the Tivoli Embedded Security Services server must have the exact same view of the current time; the easiest way is to have them both use the same Network Time Protocol (NTP) server.

Failure to follow this step can lead to poor interactive performance of the SVC user interface or incorrect user-role assignments.

Also, Tivoli Productivity Center 4.1 leverages the Tivoli Integrated Portal infrastructure and its underlying WebSphere® Application Server capabilities to make use of an LDAP registry and enable single sign-on (SSO).

You can obtain more information about implementing SSO within Tivoli Productivity Center 4.1 in Chapter 6 (LDAP authentication support and single sign-on) of the IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725, at this Web site:

http://www.redbooks.ibm.com/redpieces/abstracts/sg247725.html?Open

2.4 SVC hardware overview

The SVC 5.1 release also provides new, more powerful hardware nodes. These new nodes are, as defined in the underlying COMPASS architecture, based on Intel® processors with standard PCI-X adapters to interface with the SAN and the LAN.

The new SVC 2145-CF8 Storage Engine has the following key hardware features:

� New SVC engine based on Intel Core i7 2.4 GHz quad-core processor

� 24 GB memory, with future growth possibilities

� Four 8 Gbps FC ports

� Up to four solid-state drives, enabling scale-out high performance solid-state drive support with SVC

� Two power supplies

� Double bandwidth compared to its predecessor node (2145-8G4)


� Up to double IOPS compared to its predecessor node (2145-8G4)

� A 19-inch rack-mounted enclosure

� IBM Systems Director Active Energy Manager™-enabled

The new nodes can be integrated smoothly into existing SVC clusters, intermixed in pairs with existing node types. Mixing engine types in a cluster results in VDisk throughput characteristics of the engine type in that I/O Group. The cluster nondisruptive upgrade capability can be used to replace older engines with new 2145-CF8 engines.

The nodes are 1U high, fit into 19-inch racks, and use the same uninterruptible power supply unit models as previous models. Integration into existing clusters requires that the cluster runs SVC 5.1 code. The only node type that does not support SVC 5.1 code is the 2145-4F2. An upgrade scenario for SVC clusters based on, or containing, these first-generation nodes will be available later this year. Figure 2-17 shows the front view of the new SVC 2145-CF8 node.

Figure 2-17 The SVC 2145-CF8 storage engine

Remember that several of the new features in the new SVC 5.1 release, such as iSCSI, are software features and are therefore available on all nodes supporting this release.

2.4.1 Fibre Channel interfaces

The IBM SAN Volume Controller provides the following FC interfaces on the node types:

� Supported link speed of 2/4/8 Gbps on SVC 2145-CF8 nodes

� Supported link speed of 1/2/4 Gbps on SVC 2145-8G4, SVC 2145-8A4, and SVC 2145-8F4 nodes

The nodes come with a 4-port HBA. The FC ports on these node types autonegotiate the link speed that is used with the FC switch. The ports normally operate at the maximum speed that is supported by both the SVC port and the switch. However, if a large number of link errors occur, the ports might operate at a lower speed than what is supported.

The actual port speed for each of the four ports can be displayed via the GUI, the CLI, the node’s front panel, and also by light-emitting diodes (LEDs) that are placed at the rear of the node. For details, consult the node-specific SVC hardware installation guides:

� IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356

� IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219


� IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220

� IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221

The SVC imposes no limit on the FC optical distance between SVC nodes and host servers. FC standards, along with small form-factor pluggable optics (SFP) capabilities and cable type, dictate the maximum FC distances that are supported.

If you use longwave SFPs in the SVC node itself, the longest supported FC link between the SVC and switch is 10 km (6.21 miles).

Table 2-5 shows the actual cable length that is supported with shortwave SFPs.

Table 2-5 Overview of supported cable length

  FC speed             OM1 (M6) standard     OM2 (M5) standard     OM3 (M5E) optimized
                       62.5/125 micron       50/125 micron         50/125 micron
  2 Gbps FC            150 m                 300 m                 500 m
  4 Gbps FC            70 m                  150 m                 380 m
  8 Gbps FC limiting   21 m                  50 m                  150 m

Table 2-6 shows the rules that apply with respect to the number of inter-switch link (ISL) hops allowed in a SAN fabric between SVC nodes or the cluster.

Table 2-6 Number of supported ISL hops

  Between nodes in an I/O Group:          0 (connect to the same switch)
  Between nodes in separate I/O Groups:   1 (recommended: 0, connect to the same switch)
  Between nodes and the disk subsystem:   1 (recommended: 0, connect to the same switch)
  Between nodes and the host server:      Maximum 3

2.4.2 LAN interfaces

The 2145-CF8 node supports (as did its predecessor nodes) two 1 Gbps LAN ports. In SVC 4.3.1 and earlier, the SVC cluster presented a single IP interface, which was used by the SVC configuration interfaces (CLI and CIMOM). Although multiple physical nodes were present in the SVC cluster, only a single node (the configuration node) was active on the IP network. This configuration IP address was presented from the eth0 port of the configuration node.

If the configuration node failed, another node in the cluster took over the duties of the configuration node, and the cluster IP address was then presented at the eth0 port of that new configuration node. From SVC 4.3 onward, the configuration node supported concurrent access on the IPv4 and IPv6 configuration addresses on the eth0 port.

Starting with SVC 5.1, the cluster configuration node can be accessed on either eth0 or eth1. The cluster can have two IPv4 and two IPv6 addresses that are used for configuration purposes (CLI or CIMOM access). The cluster can therefore be managed by SSH clients or GUIs on System Storage Productivity Centers on separate physical IP networks. This capability provides redundancy in the event of a failure of one of these IP networks.


Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each SVC node port; these IP addresses are independent of the cluster configuration IP addresses.

Figure 2-12 on page 29 shows an overview.

2.5 Solid-state drives

You can use solid-state drives, or more specifically, single-level cell (SLC) or multilevel cell (MLC) NAND Flash-based disks (for the sake of simplicity, we call them solid-state drives in the following chapters), to overcome a growing problem that is known as the memory/storage bottleneck.

2.5.1 Storage bottleneck problem

The memory/storage bottleneck describes the steadily growing gap between the time required for a CPU to access data located in its cache/memory (typically nanoseconds) and data located on external storage (typically milliseconds).

While CPUs and cache/memory devices continually improve their performance, the same is not generally true for the mechanical disks that are used as external storage.

Figure 2-18 shows these access time differences.

Figure 2-18 The memory/storage bottleneck

The individual access times that are shown matter less than the differences between accessing data that is located in cache and data that is located on external disk.

We have added a second scale to Figure 2-18, which gives you an idea of how long it takes to access the data in a scenario where a single CPU cycle takes 1 second. This scale illustrates how important it is for future storage technologies to close, or at least reduce, the gap between access times for data stored in cache/memory and access times for data stored on an external medium.
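As a back-of-the-envelope version of that second scale: the cycle and memory times below are assumptions chosen for illustration; only the approximately 7 ms disk latency comes from this chapter.

```python
# Rescale access times so that one CPU cycle appears to take one second,
# in the spirit of the second scale in Figure 2-18. Assumed figures.
CPU_CYCLE_S = 1e-9        # ~1 ns per CPU cycle (assumption)
MEMORY_S    = 100e-9      # ~100 ns to main memory (assumption)
DISK_IO_S   = 7e-3        # ~7 ms per rotating-disk I/O (from this chapter)

def scaled(seconds: float) -> float:
    """Rescale so that one CPU cycle corresponds to one 'second'."""
    return seconds / CPU_CYCLE_S

memory_scaled_s  = scaled(MEMORY_S)             # about 100 "seconds"
disk_scaled_days = scaled(DISK_IO_S) / 86_400   # roughly 81 "days"
```

On this scale, a memory access takes on the order of minutes while a single disk I/O takes months, which is exactly the gap the figure illustrates.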


Since magnetic disks were first introduced by IBM in 1956 (RAMAC), they have shown a remarkable performance regarding capacity growth, form factor/size reduction, price decrease ($/GB), and reliability.

However, the number of I/Os that a disk can handle and the response time that it takes to process a single I/O have not improved at the same rate, although they have certainly improved. In real environments, we can expect up to 200 IOPS from today’s enterprise-class FC or serial-attached SCSI (SAS) disk, with an average response time (latency) of approximately 7 ms per I/O.

To simplify: rotating disks are becoming, and will continue to become, bigger in capacity (several TB), smaller in form factor/footprint (3.5 inches, 2.5 inches, and 1.8 inches), and less expensive ($/GB), but not necessarily faster.

The limiting factor is the number of revolutions per minute (rpm) that a disk can perform (currently 15,000). The rotational speed defines the time that is required to access a specific data block on a rotating device. There might be small improvements in the future, but a big step, such as doubling the rotational speed, if technically possible at all, inevitably means a massive increase in power consumption and price.
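The rotational limit can be quantified with a simple worked calculation (not SVC-specific): on average, the target sector is half a revolution away, so the average rotational latency is fixed by the spindle speed alone.

```python
# Average rotational latency of a disk: on average the target sector is
# half a revolution away, so latency depends only on the spindle speed.
def avg_rotational_latency_ms(rpm: float) -> float:
    seconds_per_revolution = 60.0 / rpm
    return (seconds_per_revolution / 2.0) * 1000.0

# At today's maximum of 15,000 rpm this is 2 ms; even doubling the spindle
# speed to 30,000 rpm would only bring it down to 1 ms.
```

This is why capacity and price keep improving while per-disk latency stays roughly constant: no amount of areal-density progress changes the 2 ms rotational floor.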

2.5.2 Solid-state drive solution

Solid-state drives can provide a solution for this dilemma. No rotating parts means improved robustness and lower power consumption. A remarkable improvement in I/O performance and a massive reduction in the average I/O response time (latency) are the compelling reasons to use solid-state drives in today’s storage subsystems.

Enterprise-class solid-state drives typically deliver 50,000 read and 20,000 write IOPS, with latencies of typically 50 µs for reads and 800 µs for writes. Their form factors (2.5 inches/3.5 inches) and their interfaces (FC/SAS/Serial Advanced Technology Attachment (SATA)) make them easy to integrate into existing disk shelves.

2.5.3 Solid-state drive market

The solid-state drive storage market is rapidly evolving. The key differentiator among the solid-state drive products that are available on the market today is not the storage medium, but the logic in the disks’ internal controllers. The top priorities in today’s controller development are optimally handling wear leveling, which determines the controller’s capability to ensure a device’s durability, and closing the remarkable gap between read and write I/O performance.

Today’s solid-state drive technology is only a first step into the world of high-performance persistent semiconductor storage. A group of the approximately 10 most promising technologies is collectively referred to as Storage Class Memory (SCM).

Storage Class Memory

SCM promises a massive improvement in performance (IOPS), areal density, cost, and energy efficiency compared with today’s solid-state drive technology. IBM Research is actively engaged in these new technologies.

Adding solid-state drives: Specific performance problems might be solved by carefully adding solid-state drives to an existing disk subsystem. But be aware that adding an excessive number of solid-state drives to an existing disk subsystem will inevitably create performance bottlenecks on the underlying RAID controllers.


You can obtain details of nanoscale devices at this Web site:

http://www.almaden.ibm.com/st/nanoscale_st/nano_devices/

You can obtain details of Storage Class Memory at this Web site:

http://tinyurl.com/plk7as

You can read a comprehensive and worthwhile overview of the solid-state drive technology in a subset of the well known Spring 2009 SNIA Technical Tutorials, which are available on the SNIA Web site:

http://www.snia.org/education/tutorials/2009/spring/solid

When these technologies become a reality, it will fundamentally change the architecture of today’s storage infrastructures.

The next topic describes integrating the first releases of this new technology into the SVC.

2.6 Solid-state drives in the SVC

The solid-state drives in the new 2145-CF8 nodes provide a new ultra-high-performance storage option. They are available in the 2145-CF8 nodes only. Solid-state drives can be pre-installed in the new nodes or installed later, as a field hardware upgrade on a per-disk basis, without interrupting service.

Solid-state drives include the following features:

� Up to four solid-state drives can be installed on each SVC 2145-CF8 node.

� An IBM PCIe SAS HBA is required on each node that contains a solid-state drive.

� Each solid-state drive is a 2.5-inch Serial Attached SCSI (SAS) drive.

� Each solid-state drive provides up to 140 GB of capacity.

� Solid-state drives are hot-pluggable and hot-swappable.

Up to four solid-state drives are supported per node, which will provide up to 560 GB of usable solid-state drive capacity per node. Always install the same amount of solid-state drive capacity in both nodes of an I/O Group.

In a cluster running 5.1 code, node pairs with solid-state drives can be mixed with older node pairs, either with or without local solid-state drives installed.

This scalable architecture enables clients to take advantage of the throughput capabilities of the solid-state drive. The following performance exists per I/O Group (from solid-state drives only):

� IOPS: 200 K reads, 80 K writes, and 56 K 70/30 mix

� MBps: 800 MBps reads and 400 MBps writes

SSDs are local drives in an SVC node and are presented as MDisks to the SVC cluster. They belong to an SVC internal controller. These controller objects will have the worldwide node name (WWNN) of the node in question, but they will be reported as standard controller objects that can be renamed by the user. SVC reserves eight of these controller objects for the internal SSD controllers.


MDisks based on SSD can be identified by showing their attributes via GUI/CLI. For these MDisks, the attributes Node ID and Node Name are set. In all other MDisk views, these attributes are blank.

2.6.1 Solid-state drive configuration rules

You must follow the SVC solid-state drive configuration rules for nodes, I/O Groups, and clusters:

� Nodes that contain solid-state drives can coexist in a single SVC cluster with any other supported nodes.

� Do not combine nodes that contain solid-state drives and nodes that do not contain solid-state drives in a single I/O Group. It is acceptable to temporarily mix node types in an I/O Group while upgrading SVC node hardware from an older model to the 2145-CF8.

� Nodes that contain solid-state drives in a single I/O Group must share the same solid-state drive capacities.

� Quorum functionality is not supported on solid-state drives within SVC nodes.

You must follow the SVC solid-state drive configuration rules for MDisks and MDisk groups:

� Each solid-state drive is recognized by the cluster as a single MDisk.

� For each node that contains solid-state drives, create a single MDisk group that includes only the solid-state drives that are installed in that node.

When you add a new solid-state drive to an MDisk group (move it from unmanaged to managed mode), the solid-state drive is automatically formatted and set to a block size of 512 bytes.

You must follow these configuration rules for VDisks using storage from solid-state drives within SVC nodes:

� VDisks using SVC solid-state drive storage must be created in the I/O Group where the solid-state drives physically reside.

� VDisks using SVC solid-state drive storage must be mirrored to another MDG to provide fault tolerance. There are two supported mirroring configurations:

– For the highest performance, the two VDisk copies must be created in the two MDGs that correspond to the SVC solid-state drive storage in two nodes in the same I/O Group. The recommended solid-state drive configuration for highest performance is shown in Figure 2-19 on page 54.

– For the best utilization of the solid-state drive capacity, the primary VDisk copy must be placed on SVC solid-state drive storage and the secondary copy can be placed on Tier 1 storage, such as an IBM DS8000. Under certain failure scenarios, the performance of the VDisk will degrade to the performance of the non-solid-state drive storage. All read I/Os are sent to the primary copy of a mirrored VDisk; therefore, reads will experience solid-state drive performance. Write I/Os are mirrored to both locations, so performance will match the speed of the slowest copy. The recommended solid-state drive configuration for the best solid-state drive capacity utilization is shown in Figure 2-20 on page 55.

Terminology: An MDG using solid-state drives contained within an SVC node will be referenced as SVC solid-state drive storage throughout this book. The configuration rules given in this book apply to SVC solid-state drive storage. Do not confuse this term with solid-state drive storage that is contained in SAN-attached storage controllers, such as the IBM DS8000 or DS5000.

� To balance the read workload, evenly split the primary and secondary VDisk copies on each node that contains solid-state drives.

� The preferred node of the VDisk must be the same node that contains the solid-state drives that are used by the primary VDisk copy.

Remember that VDisks that are based on SVC solid-state drive storage must always be presented by the I/O Group and, during normal operation, by the node to which the solid-state drive belongs. These rules are designed to direct all host I/O to the node containing the relevant solid-state drives.
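The placement rules above can be sketched as a configuration check. The data layout and function below are hypothetical illustrations, not an SVC interface:

```python
# Sketch of the VDisk placement rules for SVC solid-state drive storage:
# SSD-backed VDisks must be mirrored, and the preferred node must be the
# node that holds the solid-state drives of the primary copy.
def valid_ssd_vdisk(preferred_node, copies):
    """copies: list of dicts like {"node": "node1", "ssd": True}."""
    if not any(c["ssd"] for c in copies):
        return True                  # rules apply only to SSD-backed VDisks
    if len(copies) < 2:
        return False                 # VDisk Mirroring is mandatory
    primary = copies[0]              # by convention here, copy 0 is the primary
    if primary["ssd"] and preferred_node != primary["node"]:
        return False                 # preferred node must hold the primary SSDs
    return True
```

For example, a single unmirrored SSD-backed copy fails the check, as does an SSD primary whose node is not the VDisk's preferred node; both of the recommended mirroring configurations pass.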

Existing VDisks can be migrated while online to SVC solid-state drive storage. It might be necessary to move the VDisk into the correct I/O Group first, which requires quiescing I/O to this VDisk during the move.

Figure 2-19 on page 54 shows the recommended solid-state drive configuration for the highest performance.

Important: For VDisks that are provisioned out of SVC solid-state drive storage, VDisk Mirroring is mandatory to maintain access to the data that is stored on solid-state drives if one of the nodes in the I/O Group is being serviced or fails.


Figure 2-19 Solid-state drive configuration for highest performance

For a read-intensive application, mirrored VDisks can keep their secondary copy on a SAN-based MDG, such as an IBM DS8000 providing Tier 1 storage resources to an SVC cluster.

Because all read I/Os are sent to the primary copy (which is placed on solid-state drive storage), performance remains reasonable as long as the Tier 1 storage can sustain the write I/O rate. Performance will decrease if the primary copy fails. Ensure that the node on which the primary VDisk copy resides is also the preferred node for the VDisk. Figure 2-20 on page 55 shows the recommended solid-state drive configuration for the best capacity utilization.

54 Implementing the IBM System Storage SAN Volume Controller V5.1


Figure 2-20 Recommended solid-state drive configuration for best solid-state drive capacity utilization

Remember these considerations when using SVC solid-state drive storage:

� I/O requests to solid-state drives that are in other nodes are automatically forwarded. However, this forwarding introduces additional delays. Try to avoid these configurations by following the configuration rules.

� Be careful migrating image mode VDisks to SVC solid-state drive storage or deleting a copy of a mirrored VDisk based on SVC solid-state drive storage. In all of the scenarios where your data is stored in one single solid-state drive-based MDG, your data is not protected against node or disk failures any longer.

� If you delete or replace nodes that contain local solid-state drives, remember that the data stored on those solid-state drives might have to be decommissioned.

� If you shut down a node whose SVC solid-state drive storage contains VDisks without mirrors on another node or storage system, you will lose access to any VDisks that are associated with that solid-state drive storage. To prevent an unintended loss of access, the shutdown requires a force option in this situation.

SVC 5.1 provides the functionality to upgrade the firmware of the solid-state drives.

For details, see IBM System Storage SAN Volume Controller Software Installation and Configuration Guide, SC23-6628.

2.6.2 SVC 5.1 supported hardware list, device driver, and firmware levels

With the SVC 5.1 release, as in every release, IBM offers functional enhancements and new hardware that can be integrated into existing or new SVC clusters, as well as interoperability enhancements and new support for servers, SAN switches, and disk subsystems. See the most current information at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277

2.6.3 SVC 4.3.1 features

Before we introduce the new features of SVC 5.1, we review the features that were added with Release 4.3.1:

� New node type 2145-8A4: The Entry Edition hardware comes with functionality identical to the 2145-8G4 nodes: 8 GB of memory and four 4 Gbps FC interfaces. The 2145-8A4 nodes provide approximately 60% of the performance of the 2145-8G4 nodes. The 2145-8A4 is an ideal choice for entry-level solutions with reduced performance requirements, but without any functional restrictions. It uses physical disk-based licensing.

� Embedded CIMOM

The CIMOM, and the associated SVC CIM Agent, is the software component that provides the industry-standard CIM protocol as a management interface to SVC. Up to SVC 4.3.0, the CIMOM ran on the SVC Master Console, which was replaced in SVC 4.2.0 by the System Storage Productivity Center-based management console. The System Storage Productivity Center is an integrated package of hardware and software that provides all of the management software (SVC CIMOM and SVC GUI) that is required to manage the SVC, as well as components for managing other storage systems.

Clients can continue to use either the Master Console or IBM System Storage Productivity Center to manage SVC 4.3.1. In addition, the software components required to manage the SVC (SVC CIMOM and SVC GUI) are provided by IBM in software form, allowing clients that have a suitable hardware platform to build their own Master Console.

� Windows Server 2008 support for the SVC GUI and Master Console

� IBM System Storage Productivity Center 1.3 support

� NTP synchronization

The SVC cluster time operates in one of two exclusive modes:

– Default mode in which the cluster uses the configuration node’s system clock

– NTP mode in which the cluster uses an NTP time server as its time source and adjusts the configuration node’s system clock according to time values obtained from the NTP server. When operating in NTP mode, the SVC cluster will log an error if an NTP server is unavailable.

� Performance enhancement for overlapped Global Mirror writes

2.6.4 New with SVC 5.1

We have already described most of the new features that are available with SVC Release 5.1.

This list summarizes the new features:

� New hardware nodes (CF8)

Note: With SVC 5.1, the usage of the embedded CIMOM is mandatory. We therefore recommend, when upgrading, that you switch the existing configurations from the Master Console/IBM System Storage Productivity Center-based CIMOM to the embedded CIMOM (remember to update the Tivoli Productivity Center configuration if it is in use). Then, upgrade the Master Console/IBM System Storage Productivity Center, and finally, upgrade the SVC cluster.


SVC 5.1 offers a new SVC engine that is based on the IBM System x3550 M2 server with an Intel Core i7 2.4 GHz quad-core processor. It provides 24 GB of cache (with future growth possibilities) and four 8 Gbps FC ports.

It provides support for solid-state drives (up to four per SVC node) enabling scale-out high performance solid-state drive support with SVC. The new nodes can be intermixed in pairs with other engines in SVC clusters. We describe the details in 2.4, “SVC hardware overview” on page 46.

� 64-bit kernel in Model 8F2 and later

The SVC software kernel has been upgraded to take advantage of the 64-bit hardware on SVC nodes. Model 4F2 is not supported with SVC 5.1 software, but it is supported with SVC 4.3.x software. The 2145-8A4 is an effective replacement for the 4F2, and it doubles the performance of the 4F2.

Going to 64-bit mode will improve performance capability. It allows for a cache increase (24 GB) in the 2145-CF8 and will be used in future SVC releases for cache increases and other expansion options.

� Solid-state disk support

Optional solid-state drives in SVC engines provide a new ultra-high-performance storage option. Up to four solid-state drives (140 GB each, larger in the future) can be added to a node. This capability provides up to 540 GB of usable solid-state drive capacity per I/O Group, or more than 2 TB in an 8-node SVC cluster. The SVC's scalable architecture enables clients to take advantage of the throughput capabilities of the solid-state drives. The solid-state drives are fully integrated into the SVC architecture. VDisks can be migrated to and from solid-state drive VDisks without application disruption. FlashCopy can be used for backup or to copy data to solid-state drive VDisks.

We describe details in 2.5, “Solid-state drives” on page 49.

� iSCSI support

SVC 5.1 provides native attachment to SVC for host systems using the iSCSI protocol. This iSCSI support is a software feature. It will be supported on older SVC nodes that support SVC 5.1. iSCSI is not used for storage attachment, for SVC cluster-to-cluster communication, or for communication between the SVC engines in a cluster. These functions will still be performed via FC.

We describe the details in 2.2.10, “iSCSI overview” on page 26.

� Multiple relationships for synchronous data mirroring (Metro Mirror)

Multiple cluster mirroring enables Metro Mirror (MM) and Global Mirror (GM) relationships to exist between a maximum of four SVC clusters. Remember that a VDisk can be in only one MM/GM relationship.

The creation of up to 8,192 Metro Mirror and Global Mirror relationships is supported. The single relationships are individually controllable (create/delete and start/stop).

We describe the details in “Synchronous/Asynchronous remote copy” on page 31.

� Enhancements to FlashCopy and support for reverse FlashCopy

SVC 5.1 enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. Multiple targets and thus multiple rollback points are supported.

We describe the details in 2.2.16, “FlashCopy” on page 33.

� Zero detection

Zero detection provides the means to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk using VDisk Mirroring. To migrate from a fully allocated to a Space-Efficient VDisk, add the target space-efficient copy, wait for synchronization to complete, and then remove the source fully allocated copy.

We describe the details in 2.2.7, “Mirrored VDisk” on page 21.
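The zero-detection idea can be illustrated with a short sketch. This is conceptual Python, not SVC code; the 32 KB grain size is simply one of the allowed Space-Efficient grain sizes:

```python
# Conceptual sketch of zero detection (not SVC's implementation): while
# synchronizing a space-efficient copy, grains that contain only zeros are
# skipped, so no real capacity is allocated for them.

GRAIN = 32 * 1024  # one of the allowed Space-Efficient grain sizes, in bytes

def copy_with_zero_detect(source: bytes, grain: int = GRAIN) -> dict:
    """Return a sparse map of grain index -> data, omitting all-zero grains."""
    allocated = {}
    for offset in range(0, len(source), grain):
        chunk = source[offset:offset + grain]
        if any(chunk):  # at least one nonzero byte: this grain needs real space
            allocated[offset // grain] = chunk
    return allocated

# A four-grain fully allocated disk where only grain 2 holds data:
disk = bytes(2 * GRAIN) + b"data".ljust(GRAIN, b"\x00") + bytes(GRAIN)
sparse = copy_with_zero_detect(disk)
print(sorted(sparse))       # [2]
print(len(sparse) * GRAIN)  # real capacity consumed: 32768 bytes
```

Only one of the four grains is allocated in the space-efficient copy; the three all-zero grains consume no real capacity.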

� User authentication changes

SVC 5.1 will support remote authentication and SSO by using an external service running on the IBM System Storage Productivity Center. The external service will be the Tivoli Embedded Security Services installed on the IBM System Storage Productivity Center. Current local authentication methods will still be supported.

We describe the details in 2.3.5, “User authentication” on page 40.

� Reliability, availability, and serviceability (RAS) enhancements

In addition to the existing SVC e-mail and SNMP trap facilities, SVC 5.1 adds syslog error event logging for those clients that already use syslog in their configurations. This feature enables optional transmission of Error Event Log entries over a syslog interface to a remote syslog daemon. The format and content of messages sent to a syslog server are identical to the format and content of messages that are transmitted in an SNMP trap message.

2.7 Maximum supported configurations

For a list of the maximum supported configurations, visit the SVC support site at this Web site:

http://www.ibm.com/storage/support/2145

Several limits have been removed with SVC 5.1, but not all of them. The following list gives an overview of the most important limits. For details, always consult the SVC support site:

� iSCSI support

All host iSCSI names are converted to an internally generated WWPN (one per iSCSI name per I/O Group). Each iSCSI name in an I/O Group consumes one WWPN that otherwise is available for a “real” FC WWPN.

So, the limits for ports per I/O Group/cluster/host object remain the same, but these limits are now shared between FC WWPNs and iSCSI names.

� The limit on cluster partnerships has been lifted from one to a maximum of three, which means that a single SVC cluster can have partnerships with up to three other clusters at the same time.

� Remote Copy (RC):

– The number of RC relationships has increased from 1,024 to 8,192. Remember that a single VDisk at a single point in time can be a member of exactly one RC relationship.

– The number of RC relationships per RC consistency group has also increased to 8,192.

� VDisk

A VDisk can contain a maximum of 2^17 (131,072) extents. With an extent size of 2 GB, the maximum VDisk size is 256 TB.
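The arithmetic behind this limit is straightforward: the maximum VDisk size is the extent count limit multiplied by the extent size of the Managed Disk Group. A quick sketch (the smaller extent sizes shown are illustrative values in MB):

```python
# Maximum VDisk size = maximum extent count x extent size.
MAX_EXTENTS = 2 ** 17  # 131,072 extents per VDisk

for extent_mb in (16, 256, 2048):  # extent size in MB; 2048 MB = 2 GB
    max_tb = MAX_EXTENTS * extent_mb // (1024 * 1024)
    print(f"extent size {extent_mb:4d} MB -> max VDisk size {max_tb} TB")
# The 2 GB extent size gives the 256 TB maximum quoted above.
```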


2.8 Useful SVC links

The SVC Support Page is at this Web site:

http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1

SVC online documentation is at this Web site:

http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

You can see the IBM Redbooks publications about SVC at this Web site:

http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

2.9 Commonly encountered terms

Channel extender
A channel extender is a device for long-distance communication that connects other SAN fabric components. Generally, channel extenders can involve protocol conversion to asynchronous transfer mode (ATM), Internet Protocol (IP), or another long-distance communication protocol.

Cluster
A cluster is a group of 2145 nodes that presents a single configuration and service interface to the user.

Consistency group
A consistency group is a group of VDisks with copy relationships that need to be managed as a single entity.

Copied
Copied is a FlashCopy state that indicates that a copy has been triggered after the copy relationship was created. The copy process is complete, and the target disk has no further dependence on the source disk. The time of the last trigger event is normally displayed with this status.

Configuration node
While the cluster is operational, a single node in the cluster is appointed to provide configuration and service functions over the network interface. This node is termed the configuration node. It manages a cache of the configuration information that describes the cluster configuration and provides a focal point for configuration commands. If the configuration node fails, another node in the cluster will assume the role.

Counterpart SAN
A counterpart SAN is a non-redundant portion of a redundant SAN. A counterpart SAN provides all of the connectivity of the redundant SAN, but without the 100% redundancy. An SVC node is typically connected to a redundant SAN made up of two counterpart SANs. A counterpart SAN is often called a SAN fabric.

Error code
An error code is a value used to identify an error condition to a user. This value might map to one or more error IDs or to values that are presented on the service panel. This value is used to report error conditions to IBM and to provide an entry point into the service guide.


Error ID
An error ID is a value that is used to identify a unique error condition detected by the 2145 cluster. An error ID is used internally in the cluster to identify the error.

Excluded
Excluded is a status condition that describes an MDisk that the 2145 cluster has decided is no longer sufficiently reliable to be managed by the cluster. The user must issue a command to include the MDisk in the cluster-managed storage.

Extent
An extent is a fixed-size unit of data that is used to manage the mapping of data between MDisks and VDisks.

FC port logins
FC port logins is the number of hosts that can see any one SVC node port. Certain disk subsystems, such as the IBM DS8000, recommend limiting the number of hosts that use each port to prevent excessive queuing at that port. If the port fails, or the path to that port fails, the host might fail over to another port, and the fan-in criteria might be exceeded in this degraded mode.

Front end and back end
The SVC takes MDisks and presents these MDisks to application servers (hosts). The MDisks are looked after by the "back-end" application of the SVC. The VDisks presented to hosts are looked after by the "front-end" application in the SVC.

Field replaceable units
Field replaceable units (FRUs) are individual parts that are held as spares by the service organization.

Grain
A grain is the unit of data that is represented by a single bit in a FlashCopy bitmap (64 KB or 256 KB) in the SVC. It is also the unit by which the real size of a Space-Efficient VDisk is extended (32, 64, 128, or 256 KB).
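Because each grain maps to one bit, the bitmap footprint of a FlashCopy mapping can be estimated with a rough sketch. This calculation ignores the SVC's internal bookkeeping overhead and is for illustration only:

```python
# One bit per grain: bitmap bytes ~= ceil(vdisk_size / grain_size) / 8.

def flashcopy_bitmap_bytes(vdisk_bytes: int, grain_bytes: int) -> int:
    grains = -(-vdisk_bytes // grain_bytes)  # ceiling division
    return -(-grains // 8)                   # 8 grains tracked per byte

GiB = 1024 ** 3
# A 1 TiB VDisk with 256 KB grains needs 4 Mi bits, that is, 512 KiB of bitmap:
print(flashcopy_bitmap_bytes(1024 * GiB, 256 * 1024))  # 524288
```

Halving the grain size to the 64 KB option quadruples the bitmap footprint, which is the usual trade-off between copy granularity and bitmap memory.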

Host bus adapter
A host bus adapter (HBA) is an interface card that connects between a host bus, such as a Peripheral Component Interconnect (PCI) bus, and the SAN.

Host ID
A host ID is a numeric identifier assigned to a group of host FC ports or iSCSI host names for the purposes of LUN mapping. For each host ID, there is a separate mapping of SCSI IDs to VDisks. The intent is to have a one-to-one relationship between hosts and host IDs, although this relationship cannot be policed.

IQN (iSCSI qualified name)
iSCSI uses special names to refer to both initiators and targets; the IQN is one of the three name formats that iSCSI provides. The format is iqn.yyyy-mm.{reversed domain name}; for example, the default for an SVC node is: iqn.1986-03.com.ibm:2145.<clustername>.<nodename>
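As a small illustration, the default IQN can be assembled from the cluster and node names. The names below are made-up placeholders, and lower-casing the names is an assumption about how the SVC normalizes them:

```python
# Build the default SVC node IQN from the format shown above.
# Assumption: the cluster and node names are lowercased in the IQN.

def default_svc_iqn(cluster_name: str, node_name: str) -> str:
    # iqn.yyyy-mm.{reversed domain name} with IBM's 1986-03 registration date
    return f"iqn.1986-03.com.ibm:2145.{cluster_name.lower()}.{node_name.lower()}"

print(default_svc_iqn("ITSOCL1", "Node1"))
# iqn.1986-03.com.ibm:2145.itsocl1.node1
```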


iSNS (Internet storage name service)
The Internet storage name service (iSNS) protocol allows automated discovery, management, and configuration of iSCSI and FC devices. It has been defined in RFC 4171.

Image mode
Image mode is a configuration mode similar to the router mode but with the addition of cache and copy functions. SCSI commands are not forwarded directly to the MDisk.

I/O Group
An I/O Group is a collection of VDisk and node relationships, that is, an SVC node pair that presents a common interface to host systems. Each SVC node is associated with exactly one I/O Group. The two nodes in the I/O Group provide access to the VDisks in the I/O Group.

ISL hop
An inter-switch link (ISL) is a connection between two switches and is counted as an "ISL hop." The number of "hops" is always counted on the shortest route between two N-ports (device connections). In an SVC environment, the number of ISL hops is counted on the shortest route between the pair of nodes farthest apart. It measures distance only in terms of ISLs in the fabric.

Local fabric
Because the SVC supports remote copy, there might be significant distances between the components in the local cluster and those components in the remote cluster. The local fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the local cluster together.

Local and remote fabric interconnect
The local fabric interconnect and the remote fabric interconnect are the SAN components that are used to connect the local and remote fabrics. They can be single-mode optical fibers that are driven by high-power gigabit interface converters (GBICs) or SFPs, or more sophisticated components, such as channel extenders or special SFP modules that are used to extend the distance between SAN components.

LU and LUN
LUN is formally defined by the SCSI standards as a logical unit number. It is used as an abbreviation for an entity that exhibits disk-like behavior, for example, a VDisk or an MDisk.

Managed disk (MDisk)
An MDisk is a SCSI disk that is presented by a RAID controller and that is managed by the cluster. The MDisk is not visible to host systems on the SAN.

Managed Disk Group (MDiskgrp or MDG)
A Managed Disk Group is a collection of MDisks that jointly contains all of the data for a specified set of VDisks.

Managed space mode
The managed space mode is a configuration mode that is similar to image mode but with the addition of space management functions.


Master Console (MC)
The Master Console is the platform on which the software used to manage the SVC runs. With Version 4.3, it is being replaced by the System Storage Productivity Center. However, V4.3 GUI console code is supported on existing Master Consoles.

Node
A node is a single processing unit, which provides virtualization, cache, and copy services for the SAN. SVC nodes are deployed in pairs called I/O Groups. One node in the cluster is designated the configuration node.

Oversubscription
Oversubscription is the ratio of the sum of the traffic on the initiator N-port connection or connections to the traffic on the most heavily loaded ISL, where more than one connection is used between switches. Oversubscription assumes a symmetrical network and a specific workload applied evenly from all initiators and directed evenly to all targets. A symmetrical network means that all of the initiators are connected at the same level, and all of the controllers are connected at the same level.
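The ratio itself is simple arithmetic; the following sketch uses made-up link speeds and traffic figures, not values from any particular fabric:

```python
# Oversubscription as defined above: total initiator traffic divided by the
# traffic capacity of the most heavily loaded ISL. All figures are made up.

def oversubscription(initiator_gbps: list, isl_gbps: float) -> float:
    return sum(initiator_gbps) / isl_gbps

# Eight hosts at 4 Gbps each funneled over one 8 Gbps ISL:
print(oversubscription([4.0] * 8, 8.0))  # 4.0 (a 4:1 oversubscription ratio)
```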

Prepare
Prepare is a configuration command that is used to cause cached data to be flushed in preparation for a copy trigger operation.

RAS
RAS stands for reliability, availability, and serviceability.

RAID
RAID stands for a redundant array of independent disks.

Redundant SAN
A redundant SAN is a SAN configuration in which there is no single point of failure (SPoF), so no matter what component fails, data traffic will continue. Connectivity between the devices within the SAN is maintained, although possibly with degraded performance, when an error has occurred. A redundant SAN design is normally achieved by splitting the SAN into two independent counterpart SANs (two SAN fabrics), so that if one counterpart SAN is destroyed, the other counterpart SAN keeps functioning.

Remote fabric
Because the SVC supports remote copy, there might be significant distances between the components in the local cluster and those components in the remote cluster. The remote fabric is composed of those SAN components (switches, cables, and so on) that connect the components (nodes, hosts, and switches) of the remote cluster together.

SAN
SAN stands for storage area network.

SAN Volume Controller
The IBM System Storage SAN Volume Controller is a SAN-based appliance designed for attachment to a variety of host computer systems, which carries out block-level virtualization of disk storage.

SCSI
SCSI stands for Small Computer Systems Interface.


Service Location Protocol
The Service Location Protocol (SLP) is a service discovery protocol that allows computers and other devices to find services in a local area network without prior configuration. It has been defined in RFC 2608.

IBM System Storage Productivity Center
IBM System Storage Productivity Center replaces the Master Console for new installations of SAN Volume Controller Version 4.3.0. For IBM System Storage Productivity Center planning, installation, and configuration information, see the following Web site:

http://publib.boulder.ibm.com/infocenter/tivihelp/v4r1/index.jsp

Virtual disk (VDisk)
A virtual disk (VDisk) is an SVC device that appears to host systems attached to the SAN as a SCSI disk. Each VDisk is associated with exactly one I/O Group.


Chapter 3. Planning and configuration

In this chapter, we describe the steps that are required when planning the installation of an IBM System Storage SAN Volume Controller (SVC) in your storage network. We look at the implications for your storage network and discuss performance considerations.


© Copyright IBM Corp. 2010. All rights reserved. 65


3.1 General planning rules

To achieve the most benefit from the SVC, pre-installation planning must include several important steps. These steps ensure that the SVC provides the best possible performance, reliability, and ease of management for your application needs. Proper configuration also helps minimize downtime by avoiding changes to the SVC and the storage area network (SAN) environment to meet future growth needs.

Planning the SVC requires that you follow these steps:

1. Collect and document the number of hosts (application servers) to attach to the SVC, the traffic profile activity (read or write, sequential or random), and the performance requirements (I/O per second (IOPS)).

2. Collect and document the storage requirements and capacities:

– The total back-end storage already present in the environment to be provisioned on the SVC

– The total back-end new storage to be provisioned on the SVC

– The required virtual storage capacity that is used as a fully managed virtual disk (VDisk) and used as a Space-Efficient VDisk

– The required storage capacity for local mirror copy (VDisk Mirroring)

– The required storage capacity for point-in-time copy (FlashCopy)

– The required storage capacity for remote copy (Metro and Global Mirror)

– Per host: Storage capacity, the host logical unit number (LUN) quantity, and sizes

3. Define the local and remote SAN fabrics and clusters, if a remote copy or a secondary site is needed.

4. Define the number of clusters and the number of pairs of nodes (between 1 and 4) for each cluster. Each pair of nodes (an I/O Group) is the container for the VDisks. The number of necessary I/O Groups depends on the overall performance requirements.

5. Design the SAN according to the requirement for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC, the SVC and the disk subsystem, between the SVC nodes, and for the inter-switch link (ISL) between the local and remote fabric.

6. Design the iSCSI network according to the requirements for high availability and best performance. Consider the total number of ports and the bandwidth needed between the host and the SVC.

7. Determine the IP addresses for the SVC service interface and for the IBM System Storage Productivity Center (SVC Console).

8. Determine the IP addresses for the SVC cluster and for the host that is connected via iSCSI connections.

9. Define a naming convention for the SVC nodes, the host, and the storage subsystem.

Tip: The IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551, contains comprehensive information that goes into greater depth regarding the topics that we discuss here.

We also go into much more depth about these topics in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, which is available at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open


10. Define the managed disks (MDisks) in the disk subsystem.

11. Define the Managed Disk Groups (MDGs). The MDGs depend on the disk subsystem in place and the data migration needs.

12. Plan the logical configuration of the VDisks between the I/O Groups and the MDGs in such a way as to optimize the I/O load between the hosts and the SVC. You can set up an equal repartition of all of the VDisks between the nodes or a repartition that takes into account the expected load from the hosts.

13. Plan for the physical location of the equipment in the rack.

SVC planning can be categorized into two types:

� Physical planning
� Logical planning

3.2 Physical planning

There are several key factors to consider when performing the physical planning of an SVC installation. The physical site must have the following characteristics:

� Power, cooling, and location requirements are present for the SVC and the uninterruptible power supply units.

� SVC nodes and their uninterruptible power supply units must be in the same rack.

� We suggest that you place SVC nodes belonging to the same I/O Group in separate racks.

� Plan for two separate power sources if you have ordered a redundant AC power switch (available as an optional feature).

� An SVC node is one Electronic Industries Association (EIA) unit high.

� Each uninterruptible power supply unit that comes with SVC V5.1 is one EIA unit high. The uninterruptible power supply unit shipped with the earlier version of the SVC is two EIA units high.

� The IBM System Storage Productivity Center (SVC Console) is two EIA units high: one unit for the server and one unit for the keyboard and monitor.

� Other hardware devices can be in the same SVC rack, such as IBM System Storage DS4000®, IBM System Storage DS6000, SAN switches, Ethernet switch, and other devices.

� Consider the maximum power rating of the rack; it must not be exceeded.


In Figure 3-1, we show two 2145-CF8 SVC nodes.

Figure 3-1 2145-CF8 SVC nodes

3.2.1 Preparing your uninterruptible power supply unit environment

Ensure that your physical site meets the installation requirements for the uninterruptible power supply unit.

2145 UPS-1U

The 2145 Uninterruptible Power Supply-1U (2145 UPS-1U) is one EIA unit high and is shipped with, and can operate only with, the following node types:

� SAN Volume Controller 2145-CF8
� SAN Volume Controller 2145-8A4
� SAN Volume Controller 2145-8G4
� SAN Volume Controller 2145-8F2
� SAN Volume Controller 2145-8F4

It was also shipped and will operate with the SVC 2145-4F2.

When configuring the 2145 UPS-1U, the voltage that is supplied to it must be 200 – 240 V, single phase.

Uninterruptible power supply unit: The 2145 UPS-1U is a Powerware 5115.

Tip: The 2145 UPS-1U has an integrated circuit breaker and does not require external protection.


3.2.2 Physical rules

The SVC must be installed in pairs to provide high availability, and each node in the cluster must be connected to a separate uninterruptible power supply unit. Figure 3-2 shows an example of power connections for the 2145-8G4.

Figure 3-2 Node uninterruptible power supply unit setup

Be aware of these considerations:

� Each SVC node of an I/O Group must be connected to a separate uninterruptible power supply unit.

� Each uninterruptible power supply unit pair that supports a pair of nodes must be connected to a separate power domain (if possible) to reduce the chances of input power loss.

� The uninterruptible power supply units, for safety reasons, must be installed in the lowest positions in the rack. If necessary, move lighter units toward the top of the rack to make way for the uninterruptible power supply units.

� The power and serial connection from a node must be connected to the same uninterruptible power supply unit; otherwise, the node will not start.

� The 2145-CF8, 2145-8A4, 2145-8G4, 2145-8F2, and 2145-8F4 hardware models must be connected to a 5115 uninterruptible power supply unit. They will not start with a 5125 uninterruptible power supply unit.

Figure 3-3 on page 70 shows ports for the 2145-CF8.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.


Figure 3-3 Ports for the 2145-CF8

Figure 3-4 on page 71 shows a power cabling example for the 2145-CF8.


Figure 3-4 2145-CF8 power cabling

There are guidelines to follow for Fibre Channel (FC) cable connections. Occasionally, the introduction of a new SVC hardware model means that there are internal changes. One example is the worldwide port name (WWPN) to physical port mapping. The 2145-8G4 and 2145-CF8 have the same mapping.

Figure 3-5 on page 72 shows the WWPN mapping.


Figure 3-5 WWPN mapping

Figure 3-6 on page 73 shows a sample layout within a separate rack.


Figure 3-6 Sample rack layout

We suggest that you place the racks in separate rooms, if possible, in order to gain protection against critical events (fire, water, power loss, and so on) that might affect one room only. Remember the maximum distance that is supported between the nodes in one I/O Group: 100 m (328 ft.). You can extend this distance by submitting a formal SCORE request to increase the limit, following the rules that are specified in any SCORE approval.

3.2.3 Cable connections

Create a cable connection table or documentation following your environment's documentation procedure to track all of the connections that are required for the setup:

� Nodes
� Uninterruptible power supply unit
� Ethernet
� iSCSI connections
� FC ports
� IBM System Storage Productivity Center (SVC Console)


3.3 Logical planning

For logical planning, we intend to cover these topics:

� Management IP addressing plan
� SAN zoning and SAN connections
� iSCSI IP addressing plan
� Back-end storage subsystem configuration
� SVC cluster configuration
� MDG configuration
� VDisk configuration
� Host mapping (LUN masking)
� Advanced copy functions
� SAN start-up support
� Data migration from non-virtualized storage subsystems
� SVC configuration backup procedure

3.3.1 Management IP addressing plan

For management, remember these rules:

� In addition to an FC connection, each node has an Ethernet connection for configuration and error reporting.

� Each SVC cluster needs at least two IP addresses.

The first IP address is used for management, and the second IP address is used for service. The service IP address will become usable only when the SVC cluster is in service mode, and remember that service mode is a disruptive operation. Both IP addresses must be in the same IP subnet.

Example 3-1 Management IP address sample

management IP add. 10.11.12.120
service IP add.    10.11.12.121

� Each node in an SVC cluster needs to have at least one Ethernet connection.

� IBM supports multiple console access, using the traditional SVC hardware management console (HMC) or the IBM System Storage Productivity Center console. Multiple Master Consoles or IBM System Storage Productivity Center consoles can access a single cluster, but when multiple Master Consoles access one cluster, you cannot concurrently perform configuration and service tasks.

� The Master Console can be supplied either pre-installed on hardware or as software that is supplied to, and subsequently installed by, the user.

With SVC 5.1, the cluster configuration node can now be accessed on both Ethernet ports, and this capability means that the cluster can have two IPv4 addresses and two IPv6 addresses that are used for configuration purposes.

Figure 3-7 on page 75 shows the IP configuration possibilities.


Figure 3-7 IP configuration possibilities

The cluster can therefore be managed by IBM System Storage Productivity Centers on separate networks, which provides redundancy in the event of a failure of one of these networks.

Support for iSCSI introduces one additional IPv4 and one additional IPv6 address for each Ethernet port on every node; these IP addresses are independent of the cluster configuration IP addresses. The command-line interface (CLI) commands for managing the cluster IP addresses have therefore been moved from svctask chcluster to svctask chclusterip in SVC 5.1, and new commands have been introduced to manage the iSCSI IP addresses.
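As an illustration, the new cluster IP commands might be used as follows. This is a sketch only; the addresses are examples, and you should verify the exact parameters against the SVC 5.1 CLI reference:

```shell
# Sketch: manage the cluster configuration IP addresses in SVC 5.1
# (addresses are examples; run from an SSH session to the cluster)

# Set the cluster IPv4 address on Ethernet port 1
svctask chclusterip -clusterip 10.11.12.120 -gw 10.11.12.1 -mask 255.255.255.0 -port 1

# Set a second cluster IPv4 address on Ethernet port 2 (new in SVC 5.1)
svctask chclusterip -clusterip 10.11.13.120 -gw 10.11.13.1 -mask 255.255.255.0 -port 2

# List the configured cluster IP addresses
svcinfo lsclusterip
```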

When connecting to the SVC with Secure Shell (SSH), choose one of the available IP addresses to connect to. There is no automatic failover capability, so if one network is down, use the other IP address.

Clients might be able to use intelligence in domain name servers (DNS) to provide partial failover.

When using the GUI, clients can add the cluster to the SVC Console multiple times (one time per IP address). Failover is achieved by using the functional IP address when launching the SVC Console interface.


3.3.2 SAN zoning and SAN connections
SAN storage systems using the SVC can be configured with two, or up to eight, SVC nodes, arranged in an SVC cluster. These SVC nodes are attached to the SAN fabric, along with disk subsystems and host systems. The SAN fabric is zoned to allow the SVCs to “see” each other’s nodes and the disk subsystems, and for the hosts to “see” the SVCs. The hosts are not able to directly “see” or operate LUNs on the disk subsystems that are assigned to the SVC cluster. The SVC nodes within an SVC cluster must be able to see each other and all of the storage that is assigned to the SVC cluster.

The zoning capabilities of the SAN switch are used to create these distinct zones. SVC 5.1 supports a 2 Gbps, 4 Gbps, or 8 Gbps FC fabric, depending on the hardware platform and on the switch to which the SVC is connected.

In an environment where you have a fabric with switches of multiple speeds, we recommend connecting the SVC and the disk subsystems to the switches operating at the highest speed.

All SVC nodes in the SVC cluster are connected to the same SANs, and they present VDisks to the hosts. These VDisks are created from MDGs that are composed of MDisks presented by the disk subsystems. There must be three distinct zones in the fabric:

� SVC cluster zone: Create one zone per fabric with all of the SVC ports cabled to this fabric to allow SVC intracluster node communication.

� Host zones: Create an SVC host zone for each server that receives storage from the SVC cluster.

� Storage zone: Create one SVC storage zone for each storage subsystem that is virtualized by the SVC.

Zoning considerations for Metro Mirror and Global Mirror
Ensure that you are familiar with the constraints for zoning a switch to support the Metro Mirror and Global Mirror feature.

SAN configurations that use intracluster Metro Mirror and Global Mirror relationships do not require additional switch zones.

SAN configurations that use intercluster Metro Mirror and Global Mirror relationships require the following additional switch zoning considerations:

� A cluster can be configured so that it can detect all of the nodes in all of the remote clusters. Alternatively, a cluster can be configured so that it detects only a subset of the nodes in the remote clusters.

� Use of inter-switch link (ISL) trunking in a switched fabric.

� Use of redundant fabrics.

For intercluster Metro Mirror and Global Mirror relationships, you must perform the following steps to create the additional required zones:

1. Configure your SAN so that FC traffic can be passed between the two clusters. To configure the SAN this way, you can connect the clusters to the same SAN, merge the SANs, or use routing technologies.

2. (Optional) Configure zoning to allow all of the nodes in the local fabric to communicate with all of the nodes in the remote fabric.

McData Eclipse routers: If you use McData Eclipse routers, Model 1620, only 64 port pairs are supported, regardless of the number of iFCP links that is used.


3. (Optional) As an alternative to Step 2, choose a subset of nodes in the local cluster to be zoned to the nodes in the remote cluster. Minimally, you must ensure that one whole I/O Group in the local cluster has connectivity to one whole I/O Group in the remote cluster. I/O between the nodes in each cluster is then routed to find a path that is permitted by the configured zoning.

Reducing the number of nodes that are zoned together can reduce the complexity of the intercluster zoning and might reduce the cost of the routing hardware that is required for large installations. Reducing the number of nodes also means that I/O must make extra hops between the nodes in the system, which increases the load on the intermediate nodes and can increase the performance impact, in particular, for Metro Mirror.

4. Optionally, modify the zoning so that the hosts that are visible to the local cluster can recognize the remote cluster. This capability allows a host to examine data in both the local and remote clusters.

5. Verify that cluster A cannot recognize any of the back-end storage that is owned by cluster B. A cluster cannot access logical units (LUs) that a host or another cluster can also access.

Figure 3-8 shows the SVC zoning topology.

Figure 3-8 SVC zoning topology

Figure 3-9 on page 78 shows an example of SVC, host, and storage subsystem connections.


Figure 3-9 Example of SVC, host, and storage subsystem connections

You must also apply the following guidelines:

� Hosts are not permitted to operate on the disk subsystem LUNs directly if the LUNs are assigned to the SVC. All data transfer happens through the SVC nodes. Under certain circumstances, a disk subsystem can present LUNs to both the SVC (as MDisks, which it then virtualizes to hosts) and to other hosts in the SAN.

� Mixed speeds are permitted within the fabric, but not for intracluster communication. You can use lower speeds to extend the distance.

� Uniform SVC port speed for 2145-4F2 and 2145-8F2 nodes: The optical fiber connections between FC switches and all 2145-4F2 or 2145-8F2 SVC nodes in a cluster must run at one speed, either 1 Gbps or 2 Gbps. The 2145-4F2 or 2145-8F2 nodes with other speeds running on the node to switch connections in a single cluster is an unsupported configuration (and is impossible to configure anyway). This rule does not apply to 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8 nodes, because the FC ports on these nodes auto-negotiate their speed independently of one another and can run at 2 Gbps, 4 Gbps, or 8 Gbps.

� Each of the local or remote fabrics must not contain more than three ISL hops within each fabric. An operation with more ISLs is unsupported. When a local and a remote fabric are connected together for remote copy purposes, there must only be one ISL hop between the two SVC clusters. Therefore, certain ISLs can be used in a cascaded switch link between local and remote clusters, provided that the local and remote cluster internal ISL count is fewer than three. This approach gives a maximum of seven ISL hops in an SVC environment with both local and remote fabrics.

� The switch configuration in an SVC fabric must comply with the switch manufacturer’s configuration rules, which can impose restrictions on the switch configuration. For example, a switch manufacturer might limit the number of supported switches in a SAN. Operation outside of the switch manufacturer’s rules is not supported.


� The SAN contains only supported switches; operation with other switches is unsupported.

� Host bus adapters (HBAs) in dissimilar hosts or dissimilar HBAs in the same host need to be in separate zones. For example, if you have AIX and Microsoft hosts, they need to be in separate zones. Here, “dissimilar” means that the hosts are running separate operating systems or are using separate hardware platforms. Therefore, various levels of the same operating system are regarded as similar. This requirement is a SAN interoperability issue rather than an SVC requirement.

� We recommend that the host zones contain only one initiator (HBA) each, and as many SVC node ports as you need, depending on the high availability and performance that you want to have from your configuration.
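As an illustration of the single-initiator host zone recommendation, assuming a Brocade FOS switch (the alias, zone, and configuration names and the WWPNs are all hypothetical):

```shell
# Sketch: create a single-initiator host zone on a Brocade FOS switch
# (names and WWPNs are hypothetical examples)

# Alias for the host HBA port (one initiator per zone)
alicreate "AIX_HOST1_FCS0", "10:00:00:00:c9:12:34:56"

# Aliases for two SVC node ports in this fabric
alicreate "SVC_N1_P1", "50:05:07:68:01:10:ab:cd"
alicreate "SVC_N2_P1", "50:05:07:68:01:10:ab:ce"

# Zone containing the single host initiator and the SVC node ports
zonecreate "Z_AIX_HOST1_SVC", "AIX_HOST1_FCS0; SVC_N1_P1; SVC_N2_P1"

# Add the zone to the active configuration and enable it
cfgadd "SAN_A_CFG", "Z_AIX_HOST1_SVC"
cfgenable "SAN_A_CFG"
```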

Zoning examples
Figure 3-10 shows an SVC cluster zoning example.

Figure 3-10 SVC cluster zoning example

Figure 3-11 on page 80 shows a storage subsystem zoning example.

Note: In SVC Version 3.1 and later, the command svcinfo lsfabric generates a report that displays the connectivity between nodes and other controllers and hosts. This report is particularly helpful in diagnosing SAN problems.
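For example, the report can be generated from the cluster CLI as follows (a sketch; the -delim parameter produces delimiter-separated output that is easier to parse in scripts):

```shell
# Sketch: report node-to-controller and node-to-host connectivity
svcinfo lsfabric

# The same report with colon-separated columns, easier to parse in scripts
svcinfo lsfabric -delim :
```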


Figure 3-11 Storage subsystem zoning example

Figure 3-12 shows a host zoning example.

Figure 3-12 Host zoning example


3.3.3 iSCSI IP addressing plan
SVC 5.1 supports host access via iSCSI (as an alternative to FC), and the following considerations apply:

� SVC uses the built-in Ethernet ports for iSCSI traffic.

� All node types, which can run SVC 5.1, can use the iSCSI feature.

� SVC supports the Challenge Handshake Authentication Protocol (CHAP) authentication method for iSCSI.

� iSCSI IP addresses can fail over to the partner node in the I/O Group if a node fails. This design reduces the need for multipathing support in the iSCSI host.

� iSCSI IP addresses can be configured for one or more nodes.

� iSCSI Simple Name Server (iSNS) addresses can be configured in the SVC.

� The iSCSI qualified name (IQN) for an SVC node will be: iqn.1986-03.com.ibm:2145.<cluster_name>.<node_name>. Because the IQN contains the cluster name and the node name, it is important not to change these names after iSCSI is deployed.

� Each node can be given an iSCSI alias, as an alternative to the IQN.

� The IQN of the host is added to an SVC host object in the same way that you add FC WWPNs.

� Host objects can have both WWPNs and IQNs.

� Standard iSCSI host connection procedures can be used to discover and configure SVC as an iSCSI target.
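As an illustration, a Linux host using the open-iscsi initiator might discover and log in to the SVC as follows. The IP address and the IQN are hypothetical examples following the format described above:

```shell
# Sketch: discover and log in to SVC iSCSI targets from a Linux host
# (10.11.12.130 is a hypothetical SVC node iSCSI address)

# Discover the SVC target IQNs advertised at the node's iSCSI IP address
iscsiadm -m discovery -t sendtargets -p 10.11.12.130

# Log in to the discovered target (hypothetical cluster and node names)
iscsiadm -m node -T iqn.1986-03.com.ibm:2145.itsocl1.node1 -p 10.11.12.130 --login

# Verify the active session
iscsiadm -m session
```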

Next, we show several ways that SVC 5.1 can be configured.

Figure 3-13 shows the use of IPv4 management and iSCSI addresses in the same subnet.

Figure 3-13 Use of IPv4 addresses

You can set up the equivalent configuration with only IPv6 addresses.


Figure 3-14 shows the use of IPv4 management and iSCSI addresses in two separate subnets.

Figure 3-14 IPv4 address plan with two subnets

Figure 3-15 shows the use of redundant networks.

Figure 3-15 Redundant networks

Figure 3-16 on page 83 shows the use of a redundant network and a third subnet for management.


Figure 3-16 Redundant network with third subnet for management

Figure 3-17 shows the use of a redundant network for both iSCSI data and management.

Figure 3-17 Redundant network for iSCSI and management

Be aware of these considerations:

� All of the examples are valid using IPv4 and IPv6 addresses.
� It is valid to use IPv4 addresses on one port and IPv6 addresses on the other port.
� It is valid to have separate subnet configurations for IPv4 and IPv6 addresses.


3.3.4 Back-end storage subsystem configuration
Back-end storage subsystem configuration planning must be applied to all of the storage that will supply disk space to an SVC cluster. See the following Web site for the currently supported storage subsystems:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

Apply the following general guidelines for back-end storage subsystem configuration planning:

� In the SAN, disk subsystems that are used by the SVC cluster are always connected to SAN switches and nothing else.

� Other disk subsystem connections out of the SAN are possible.

� Multiple connections are allowed from the redundant controllers in the disk subsystem to improve data bandwidth performance. It is not mandatory to have a connection from each redundant controller in the disk subsystem to each counterpart SAN, but it is recommended. Therefore, controller A in the DS4000 can be connected to SAN A only, or to SAN A and SAN B, and controller B in the DS4000 can be connected to SAN B only, or to SAN B and SAN A.

� Split controller configurations are supported with certain rules and configuration guidelines. See IBM System Storage SAN Volume Controller Planning Guide, GA32-0551, for more information.

� All SVC nodes in an SVC cluster must be able to see the same set of disk subsystem ports on each disk subsystem controller. Operation in a mode where two nodes see a separate set of ports on the same controller becomes degraded. This degradation can occur if inappropriate zoning was applied to the fabric. It can also occur if inappropriate LUN masking is used. This guideline has important implications for a disk subsystem, such as DS3000, DS4000, or DS5000, which imposes exclusivity rules on which HBA worldwide names (WWNs) a storage partition can be mapped to.

In general, configure disk subsystems as though there is no SVC; however, we recommend the following specific guidelines:

� Disk drives:

– Be careful with large disk drives so that you do not have too few spindles to handle the load.

– RAID-5 is suggested, but RAID-10 is viable and useful.

� Array sizes:

– 8+P or 4+P is recommended for the DS4000 and DS5000 families, if possible.

– Use the DS4000 segment size of 128 KB or larger to help the sequential performance.

– Avoid Serial Advanced Technology Attachment (SATA) disks unless you are running SVC 4.2.1.x or later.

– Upgrade to EXP810 drawers, if possible.

– Create LUN sizes that are equal to the RAID array/rank if it does not exceed 2 TB.

– Create a minimum of one LUN per FC port on a disk controller zoned with the SVC.

– When adding more disks to a subsystem, consider adding the new MDisks to existing MDGs versus creating additional small MDGs.

– Use a Perl script to restripe VDisk extents evenly across all MDisks in the MDG. A suitable tool is available at http://www.ibm.com/alphaworks (search for “svctools”).


� Maximum of 64 worldwide node names (WWNNs):

– EMC DMX/SYMM, All HDS, and SUN/HP HDS clones use one WWNN per port; each WWNN appears as a separate controller to the SVC.

– Upgrade to SVC 4.2.1 or later so that you can map LUNs through up to 16 FC ports, which results in 16 WWNNs/WWPNs used out of the maximum of 64.

– IBM, EMC Clariion, and HP use one WWNN per subsystem; each WWNN appears as a single controller with multiple ports/WWPNs, for a maximum of 16 ports/WWPNs per WWNN using one out of the maximum of 64.

� DS8000 using four or eight 4-port host adapter (HA) cards:

– Use port 1 and 3 or 2 and 4 on each card.

– This setup provides 8 or 16 ports for SVC use.

– Use 8 ports minimum up to 40 ranks.

– Use 16 ports, which is the maximum, for 40 or more ranks.

� Upgrade to SVC 4.2.1.9 or later to drive more workload to the DS8000; this level increases the queue depth for the DS4000, DS5000, DS6000, DS8000, and EMC DMX.

� DS4000/DS5000 – EMC Clariion/CX:

– Both systems have the preferred controller architecture, and SVC supports this configuration.

– Use a minimum of 4 ports, and preferably 8 or more ports up to maximum of 16 ports, so that more ports equate to more concurrent I/O that is driven by the SVC.

– Mapping controller A ports to Fabric A and controller B ports to Fabric B is supported, as is cross-connecting ports from both controllers to both fabrics. The latter approach is preferred, because it avoids AVT/Trespass occurring if a fabric or all paths to a fabric fail.

– Upgrade to SVC 4.3.1 or later for an SVC queue depth change for CX models, because it drives more I/O per port per MDisk.

� DS3400:

– Use a minimum of 4 ports.

– Upgrade to SVC 4.3.x or later for better resiliency if the DS3400 controllers reset.

� XIV® requirements and restrictions:

– The SVC cluster must be running Version 4.3.0.1 or later to support the XIV.

– The use of certain XIV functions on LUNs presented to the SVC is not supported.

– You cannot perform snaps, thin provisioning, synchronous replication, or LUN expansion on XIV MDisks.

– A maximum of 511 LUNs from one XIV system can be mapped to an SVC cluster.

� Full 15 module XIV recommendations – 79 TB usable:

– Use two interface host ports from each of the six interface modules.

– Use ports 1 and 3 from each interface module and zone these 12 ports with all SVC node ports.

– Create 48 LUNs of equal size, each a multiple of 17 GB (approximately 1,632 GB each if using the entire full-frame XIV with the SVC).

– Map LUNs to the SVC as 48 MDisks, and add all of them to the one XIV MDG so that the SVC will drive the I/O to four MDisks/LUNs for each of the 12 XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.


� Six module XIV recommendations – 27 TB usable:

– Use two interface host ports from each of the two active interface modules.

– Use ports 1 and 3 from interface modules 4 and 5 (interface module 6 is inactive), and zone these four ports with all SVC node ports.

– Create 16 LUNs of equal size, each a multiple of 17 GB (approximately 1,632 GB each if using the entire XIV with the SVC).

– Map LUNs to the SVC as 16 MDisks, and add all of them to the one XIV MDG so that the SVC will drive I/O to four MDisks/LUNs per each of the four XIV FC ports. This design provides a good queue depth on the SVC to drive XIV adequately.

� Nine module XIV recommendations – 43 TB usable:

– Use two interface host ports from each of the four active interface modules.

– Use ports 1 and 3 from interface modules 4, 5, 7, and 8 (interface modules 6 and 9 are inactive), and zone these eight ports with all of the SVC node ports.

– Create 26 LUNs of equal size, each a multiple of 17 GB (approximately 1,632 GB each if using the entire XIV with the SVC).

– Map LUNs to the SVC as 26 MDisks, and add all of them to the one XIV MDG, so that the SVC will drive I/O to three MDisks/LUNs on each of six ports and four MDisks/LUNs on the other two XIV FC ports. This design provides a good queue depth on the SVC to drive the XIV adequately.

� Configure XIV host connectivity for the SVC cluster:

– Create one host definition on XIV, and include all SVC node WWPNs.

– You can create clustered host definitions (one per I/O Group), but the preceding method is easier.

– Map all LUNs to all SVC node WWPNs.
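The XIV LUN sizing above can be checked with simple arithmetic. In this sketch, the 1,632 GB LUN size is 96 times the 17 GB XIV allocation unit, and the per-configuration totals approximate the usable capacities stated above (the capacity figures are approximate):

```shell
# Sketch: check that equal 1,632 GB LUNs (96 x 17 GB) fill each XIV configuration
lun_gb=$(( 96 * 17 ))          # 1632 GB, a multiple of the XIV 17 GB allocation unit
for cfg in "15-module 48" "9-module 26" "6-module 16"; do
  set -- $cfg
  echo "$1: $2 LUNs x ${lun_gb} GB = $(( $2 * lun_gb )) GB total"
done
```

The totals (78,336 GB, 42,432 GB, and 26,112 GB) line up with the approximately 79 TB, 43 TB, and 27 TB usable figures for the three configurations.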

3.3.5 SVC cluster configuration
To ensure high availability in SVC installations, consider the following guidelines when you design a SAN with the SVC:

� The 2145-4F2 and 2145-8F2 SVC nodes contain two HBAs, each of which has two FC ports. If an HBA fails, this configuration remains valid, and the node operates in degraded mode. If an HBA is physically removed from an SVC node, the configuration is unsupported. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 models have one HBA with four ports.

� All nodes in a cluster must be in the same LAN segment, because any node in the cluster must be able to assume the cluster or service IP address. Make sure that the network configuration allows any of the nodes to use these IP addresses. Note that if you plan to use the second Ethernet port on each node, it is possible to have two LAN segments. However, port 1 of every node must be in one LAN segment, and port 2 of every node must be in the other LAN segment.

� To maintain application uptime in the unlikely event of an individual SVC node failing, SVC nodes are always deployed in pairs (I/O Groups). If a node fails or is removed from the configuration, the remaining node operates in a degraded mode, but it is still a valid configuration. The remaining node operates in write-through mode, meaning that the data is written directly to the disk subsystem (the cache is disabled for the write).

� The uninterruptible power supply unit must be in the same rack as the node to which it provides power, and each uninterruptible power supply unit can only have one node connected.


� The FC SAN connections between the SVC node and the switches are optical fiber. These connections can run at either 2 Gbps, 4 Gbps, or 8 Gbps, depending on your SVC and switch hardware. The 2145-CF8, 2145-8A4, 2145-8G4, and 2145-8F4 SVC nodes auto-negotiate the connection speed with the switch. The 2145-4F2 and 2145-8F2 nodes are capable of a maximum of 2 Gbps, which is determined by the cluster speed.

� The SVC node ports must be connected to the FC fabric only. Direct connections between the SVC and the host, or the disk subsystem, are unsupported.

� Two SVC clusters cannot share the same LUNs in a subsystem. Sharing the same LUNs between two SVC clusters can result in data loss: if the same MDisk becomes visible on two separate SVC clusters, this error can cause data corruption.

� The two nodes within an I/O Group can be co-located (within the same set of racks) or can be located in separate racks and separate rooms to deploy a simple business continuity solution.

If a split node cluster (split I/O Group) solution is implemented, observe the maximum allowed distance (100 m (328 ft.)) between the nodes in an I/O Group. Otherwise, you will require a SCORE request in order to be supported for longer distances. Ask your IBM service representative for more detailed information about the SCORE process.

If a split node cluster (split I/O Group) solution is implemented, we recommend using a business continuity solution for the storage subsystem using the VDisk Mirroring option. Note the SVC cluster quorum disk location, as shown in Figure 3-18 on page 88, where the quorum disk is located separately in a third site or room.

� The SVC uses three MDisks as quorum disks for the cluster. For redundancy purposes, we recommend that you locate, if possible, the three MDisks in three separate storage subsystems.

If a split node cluster (split I/O Group) solution is implemented, two of the three quorum disks can be co-located in the same room where the SVC nodes are located, but the active quorum disk (as displayed in the lsquorum output) must be in a separate room.

Figure 3-18 on page 88 shows a schematic split I/O Group solution.


Figure 3-18 Split I/O Group solution

3.3.6 Managed Disk Group configuration
The Managed Disk Group (MDG) is at the center of the many-to-many relationship between the MDisks and the VDisks. It acts as a container into which managed disks contribute chunks of disk blocks, which are known as extents, and from which VDisks consume these extents of storage.

MDisks in the SVC are LUNs assigned from the underlying disk subsystems to the SVC and can be either managed or unmanaged. A managed MDisk is an MDisk that is assigned to an MDG:

� MDGs are collections of MDisks. An MDisk is contained within exactly one MDG.

� An SVC supports up to 128 MDGs.

� There is no limit to the number of VDisks that can be in an MDG other than the limit per cluster.

� MDGs are collections of VDisks. Under normal circumstances, a VDisk is associated with exactly one MDG. The exception to this rule is when a VDisk is migrated, or mirrored, between MDGs.

SVC supports extent sizes of 16, 32, 64, 128, 256, 512, 1,024, and 2,048 MB. The extent size is a property of the MDG, which is set when the MDG is created. It cannot be changed, and all MDisks, which are contained in the MDG, have the same extent size, so all VDisks that are associated with the MDG must also have the same extent size.
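The relationship between the extent size and the maximum cluster capacity can be sketched numerically. This sketch assumes a fixed per-cluster extent count of 2^22 (about 4 million extents), which is consistent with a 16 MB extent yielding a 64 TB maximum:

```shell
# Sketch: maximum cluster capacity doubles with the extent size
# (capacity = extent size x 2^22 extents, so ext_mb MB x 4M = ext_mb x 4 TB)
for ext_mb in 16 32 64 128 256 512 1024 2048; do
  echo "${ext_mb} MB extent -> $(( ext_mb * 4 )) TB maximum cluster capacity"
done
```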

Table 3-1 on page 89 shows all of the extent sizes that are available in an SVC.


Table 3-1 Extent size and maximum cluster capacities

Extent size    Maximum cluster capacity
16 MB          64 TB
32 MB          128 TB
64 MB          256 TB
128 MB         512 TB
256 MB         1 PB
512 MB         2 PB
1,024 MB       4 PB
2,048 MB       8 PB

There are several additional MDG considerations:

� Maximum cluster capacity is related to the extent size:

– A 16 MB extent gives a 64 TB maximum cluster capacity, and the capacity doubles for each increment in extent size; for example, 32 MB = 128 TB. We strongly recommend a minimum of 128 MB or 256 MB. The Storage Performance Council (SPC) benchmarks used a 256 MB extent.

– Pick one extent size and use that size for all MDGs.

– You cannot migrate VDisks between MDGs with different extent sizes.

� MDG reliability, availability, and serviceability (RAS) considerations:

– It might make sense to create multiple MDGs if you ensure that a host only gets its VDisks built from one of the MDGs. If the MDG goes offline, it impacts only a subset of all of the hosts using the SVC; however, creating multiple MDGs can cause a high number of MDGs, approaching the SVC limits.

– If you do not isolate hosts to MDGs, create one large MDG. Creating one large MDG assumes that the physical disks are all the same size, speed, and RAID level.

– The MDG goes offline if an MDisk is unavailable, even if the MDisk has no data on it. Do not put MDisks into an MDG until they are needed.

– Create at least one separate MDG for all of the image mode VDisks.

– Make sure that the LUNs that are given to the SVC have any host persistent reserves removed.

� MDG performance considerations

It might make sense to create multiple MDGs if you are attempting to isolate workloads to separate disk spindles. MDGs with too few MDisks cause an MDisk overload, so it is better to have more spindle counts in an MDG to meet workload requirements.

� The MDG and SVC cache relationship

SVC 4.2.1 first introduced cache partitioning to the SVC code base. The decision was made to provide flexible partitioning, rather than hard-coding a specific number of partitions. This flexibility is provided on an MDG boundary. That is, the cache automatically partitions the available resources on a per MDG basis. Most users create a single MDG from the LUNs provided by a single disk controller, or a subset of a controller/collection of the same controllers, based on the characteristics of the LUNs themselves. Characteristics are, for example, RAID-5 versus RAID-10, 10,000 revolutions per minute (RPM) versus 15,000 RPM, and so on. The overall strategy is to protect against individual controller overloading or faults. If many controllers (or, in this case, MDGs) are overloaded, the overreached controllers can still suffer.

Table 3-2 shows the limit of the write cache data.

Table 3-2 Limit of the cache data

Number of MDGs    Upper limit
1                 100%
2                 66%
3                 40%
4                 30%
5 or more         25%

Think of the rule as follows: no single partition can occupy more than its upper limit of cache capacity with write data. These limits are upper limits, and they are the points at which the SVC cache starts to limit incoming I/O rates for VDisks created from the MDG. If a particular partition reaches this upper limit, the net result is the same as a global cache resource that is full. That is, the host writes are serviced on a one-out-one-in basis as the cache destages writes to the back-end disks. However, only writes targeted at the full partition are limited; all I/O destined for other (non-limited) MDGs continues as normal. Read I/O requests for the limited partition also continue as normal. However, because the SVC is destaging write data at a rate that is obviously greater than the controller can actually sustain (otherwise, the partition does not reach the upper limit), reads are likely to be serviced equally slowly.

3.3.7 Virtual disk configuration
An individual virtual disk (VDisk) is a member of one MDG and one I/O Group. When you want to create a VDisk, you first have to know the purpose for which this VDisk will be created. Based on that information, you can decide which MDG to select to fit your requirements in terms of cost, performance, and availability:

� The MDG defines which MDisks provided by the disk subsystem make up the VDisk.

� The I/O Group (two nodes make an I/O Group) defines which SVC nodes provide I/O access to the VDisk.

Note: There is no fixed relationship between I/O Groups and MDGs.

Therefore, you can define the VDisks using the following considerations:

� Optimize the performance between the hosts and the SVC by distributing the VDisks between the various nodes of the SVC cluster, which means spreading the load equally on the nodes in the SVC cluster.

� Get the level of performance, reliability, and capacity you require by using the MDG that corresponds to your needs (you can access any MDG from any node), that is, choose the MDG that fulfils the demands for your VDisk with respect to performance, reliability, and capacity.


� I/O Group considerations:

– When you create a VDisk, it is associated with one node of an I/O Group. By default, every time that you create a new VDisk, it is associated with the next node using a round-robin algorithm. You can specify a preferred access node, which is the node through which you send I/O to the VDisk instead of using the round-robin algorithm. A VDisk is defined for an I/O Group.

– Even if you have eight paths for each VDisk, all I/O traffic flows only toward one node (the preferred node). Therefore, only four paths are really used by the IBM Subsystem Device Driver (SDD). The other four paths are used only in the case of a failure of the preferred node or when concurrent code upgrade is running.

� Creating image mode VDisks:

– Use image mode VDisks when an MDisk already has data on it, from a non-virtualized disk subsystem. When an image mode VDisk is created, it directly corresponds to the MDisk from which it is created. Therefore, VDisk logical block address (LBA) x = MDisk LBA x. The capacity of image mode VDisks defaults to the capacity of the supplied MDisk.

– When you create an image mode disk, the MDisk must have a mode of unmanaged and therefore does not belong to any MDG. A capacity of 0 is not allowed. Image mode VDisks can be created in sizes with a minimum granularity of 512 bytes, and they must be at least one block (512 bytes) in size.

• Creating managed mode VDisks with sequential or striped policy

When creating a managed mode VDisk with sequential or striped policy, you must use MDisks containing free extents whose total size is equal to or greater than the size of the VDisk that you want to create. There might be sufficient extents available on the MDisk, but there might not be a contiguous block large enough to satisfy the request.

• Space-Efficient VDisk considerations:

– When creating a space-efficient volume, you must understand the utilization patterns of the applications or groups of users accessing this volume. Take into consideration items such as the actual size of the data, the rate at which new data is created, and the rate at which existing data is modified or deleted.

– There are two operating modes for Space-Efficient VDisks. Autoexpand VDisks allocate storage from an MDG on demand with minimal user intervention required, but a misbehaving application can cause a VDisk to expand until it has consumed all of the storage in an MDG. Non-autoexpand VDisks have a fixed amount of storage assigned; in this case, the user must monitor the VDisk and assign additional capacity if or when required, but a misbehaving application can only cause the VDisk that it is using to fill up.

– Depending on the initial size for the real capacity, the grain size and a warning level can be set. If a disk goes offline, either through a lack of available physical storage on an autoexpand VDisk, or because a non-autoexpand VDisk has not been expanded, there is a danger of data being left in the cache until storage is made available. This situation is not a data integrity or data loss issue, but you must not rely on the SVC cache as a backup storage mechanism.

Recommendations:

• We highly recommend that you keep a warning level on the used capacity so that it provides adequate time to provision more physical storage.

• An administrator must not ignore warnings.

• Use the autoexpand feature of Space-Efficient VDisks.

Chapter 3. Planning and configuration 91


– The grain size allocation unit for the real capacity in the VDisk can be set as 32 KB, 64 KB, 128 KB, or 256 KB. A smaller grain size utilizes space more effectively, but it results in a larger directory map, which can reduce performance.

– Space-Efficient VDisks require more I/Os because of directory accesses. For truly random workloads with 70% read and 30% write, a Space-Efficient VDisk will require approximately one directory I/O for every user I/O so performance can be up to 50% less than that of a normal VDisk.

– The directory is two-way write-back-cached (just like the SVC fastwrite cache), so certain applications will perform better.

– Space-Efficient VDisks require more CPU processing, so the performance per I/O Group will be poorer.

– Starting with SVC 5.1, Space-Efficient VDisks support zero detect. This feature enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk (SEV) by using VDisk Mirroring.
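The grain-size tradeoff above can be illustrated with a small calculation. This is a hypothetical sketch, not SVC internals: it simply counts how many grain-tracking entries a directory map needs to cover a given real capacity at each supported grain size, showing why smaller grains use space more effectively but produce a larger directory.

```python
# Hypothetical sketch (not SVC internals): count the grain-tracking entries a
# directory map needs for a given real capacity at each supported grain size.

GRAIN_SIZES_KB = (32, 64, 128, 256)   # grain sizes listed in the text

def directory_entries(real_capacity_gb: float, grain_kb: int) -> int:
    """Number of grains (directory entries) covering the real capacity."""
    return int(real_capacity_gb * 1024 * 1024 // grain_kb)

for grain in GRAIN_SIZES_KB:
    # A 100 GB real capacity needs 8x more entries at 32 KB than at 256 KB.
    print(f"{grain:3d} KB grain -> {directory_entries(100, grain):,} entries")
```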

• VDisk Mirroring. If you are planning to use the VDisk Mirroring option, apply the following guidelines:

– Create or identify two separate MDGs to allocate space for your mirrored VDisk.

– If possible, use MDGs with MDisks that share the same characteristics; otherwise, the VDisk performance can be affected by the poorer performing MDisk.

3.3.8 Host mapping (LUN masking)

For the host and application servers, the following guidelines apply:

• Each SVC node presents a VDisk to the SAN through four paths. Because two nodes are used in normal operations to provide redundant paths to the same storage, a host with two HBAs can see eight paths to each LUN that is presented by the SVC. Use zoning to limit the pathing from a minimum of two paths to the maximum available of eight paths, depending on the kind of high availability and performance that you want in your configuration.

We recommend using zoning to limit the pathing to four paths. The host must run a multipathing device driver to resolve these paths back to a single device. The multipathing driver supported and delivered by the SVC is the IBM Subsystem Device Driver (SDD). Native multipath I/O (MPIO) drivers on selected hosts are supported. For operating system-specific information about MPIO support, see this Web site:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

• The number of paths to a VDisk from a host to the nodes in the I/O Group that owns the VDisk must not exceed eight, even if eight is not the maximum number of paths supported by the multipath driver (SDD supports up to 32). To restrict the number of paths to a host VDisk, zone the fabrics so that each host FC port is zoned with one port from each SVC node in the I/O Group that owns the VDisk.

• If a host has multiple HBA ports, each port must be zoned to a separate set of SVC ports to maximize high availability and performance.

• To configure more than 256 hosts, you must configure the host to I/O Group mappings on the SVC. Each I/O Group can contain a maximum of 256 hosts, so it is possible to create 1,024 host objects on an eight-node SVC cluster. VDisks can only be mapped to a host that is associated with the I/O Group to which the VDisk belongs.

VDisk paths: The recommended number of VDisk paths is four.
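The host-object arithmetic above can be sketched in a few lines, assuming (as the text states) that two nodes form one I/O Group and each I/O Group holds up to 256 host objects:

```python
# Sketch of the host-object limits described above: two nodes form one
# I/O Group, and each I/O Group can contain up to 256 host objects.

MAX_HOSTS_PER_IOGRP = 256

def max_host_objects(node_count: int) -> int:
    """Maximum host objects on an SVC cluster with the given node count."""
    iogrps = node_count // 2          # two nodes per I/O Group
    return iogrps * MAX_HOSTS_PER_IOGRP

print(max_host_objects(8))   # eight-node cluster -> 1024 host objects
```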


• Port masking. You can use a port mask to control the node target ports that a host can access, which satisfies two requirements:

– As part of a security policy, to limit the set of WWPNs that can obtain access to any VDisks through a given SVC port

– As part of a scheme to limit the number of logins with mapped VDisks visible to a host multipathing driver (such as SDD), and thus limit the number of host objects configured, without resorting to switch zoning

• The port mask is an optional parameter of the svctask mkhost and chhost commands. The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). For example, a mask of 0011 enables port 1 and port 2. The default value is 1111 (all ports enabled).
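The mask semantics above can be sketched in a short helper. This is an illustrative function, not SVC code; it assumes, per the 0011 example, that the rightmost binary digit corresponds to node port 1.

```python
# Illustrative helper (not SVC code): expand a 4-bit port mask into the node
# port numbers it enables. Per the 0011 example, the rightmost bit is port 1.

def enabled_ports(mask: str) -> list:
    """Return the node port numbers enabled by a 4-digit binary port mask."""
    if len(mask) != 4 or set(mask) - {"0", "1"}:
        raise ValueError("mask must be four binary digits, for example '0011'")
    return [i for i, bit in enumerate(reversed(mask), start=1) if bit == "1"]

print(enabled_ports("0011"))   # [1, 2] -- matches the example in the text
print(enabled_ports("1111"))   # [1, 2, 3, 4] -- the default, all ports
```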

• The SVC supports connection to the Cisco MDS family and Brocade family. See the following Web site for the latest support information:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

3.3.9 Advanced Copy Services

The SVC offers these Advanced Copy Services:

• FlashCopy
• Metro Mirror
• Global Mirror

Apply the following guidelines to the SVC Advanced Copy Services.

FlashCopy guidelines
Consider these FlashCopy guidelines:

• Identify each application that must have a FlashCopy function implemented for its VDisk.

• FlashCopy is a relationship between VDisks. Those VDisks can belong to separate MDGs and separate storage subsystems.

• You can use FlashCopy for backup purposes by interacting with the Tivoli Storage Manager Agent, or for cloning a particular environment.

• Define which FlashCopy type best fits your requirements: No copy, Full copy, Space-Efficient, or Incremental.

• Define which FlashCopy rate best fits your requirements in terms of performance and the time to complete the FlashCopy. The relationship of the background copy rate value to the attempted number of grains to be split per second is shown in Table 3-3 on page 94.

• Define the grain size that you want to use. Larger grain sizes can cause a longer FlashCopy elapsed time and higher space usage in the FlashCopy target VDisk; smaller grain sizes can have the opposite effect. Remember that the data structure and the source data location can modify those effects. In an actual environment, check the results of your FlashCopy procedure in terms of the data copied at every run and the elapsed time, compare them to the new SVC FlashCopy results, and, if necessary, adapt the grains per second and copy rate parameters to fit your environment's requirements.


Table 3-3 Grain splits per second

User percentage   Data copied per second   256 KB grains per second   64 KB grains per second
1 - 10            128 KB                   0.5                        2
11 - 20           256 KB                   1                          4
21 - 30           512 KB                   2                          8
31 - 40           1 MB                     4                          16
41 - 50           2 MB                     8                          32
51 - 60           4 MB                     16                         64
61 - 70           8 MB                     32                         128
71 - 80           16 MB                    64                         256
81 - 90           32 MB                    128                        512
91 - 100          64 MB                    256                        1,024

Metro Mirror and Global Mirror guidelines

SVC supports both intracluster and intercluster Metro Mirror and Global Mirror. From the intracluster point of view, any single cluster is a reasonable candidate for a Metro Mirror or Global Mirror operation. Intercluster operation, however, needs at least two clusters, which are separated by a number of moderately high bandwidth links.

Figure 3-19 shows a schematic of Metro Mirror connections.

Figure 3-19 Metro Mirror connections

Figure 3-19 contains two redundant fabrics. Part of each fabric exists at the local cluster and at the remote cluster. There is no direct connection between the two fabrics.
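Table 3-3 can be expressed as a simple lookup, which is convenient when scripting a check of what a given background copy rate setting attempts per second. This sketch only restates the table's bands and values:

```python
# Table 3-3 as a lookup: background copy rate percentage -> data copied per
# second (in KB) and attempted 256 KB / 64 KB grain splits per second.

COPY_RATE_TABLE = [
    # (low %, high %, KB/s, 256 KB grains/s, 64 KB grains/s)
    (1, 10, 128, 0.5, 2),
    (11, 20, 256, 1, 4),
    (21, 30, 512, 2, 8),
    (31, 40, 1024, 4, 16),
    (41, 50, 2048, 8, 32),
    (51, 60, 4096, 16, 64),
    (61, 70, 8192, 32, 128),
    (71, 80, 16384, 64, 256),
    (81, 90, 32768, 128, 512),
    (91, 100, 65536, 256, 1024),
]

def copy_rate_row(rate_pct: int):
    """Return (KB/s, 256 KB grains/s, 64 KB grains/s) for a copy rate."""
    for low, high, kb_s, g256, g64 in COPY_RATE_TABLE:
        if low <= rate_pct <= high:
            return kb_s, g256, g64
    raise ValueError("copy rate must be between 1 and 100")

print(copy_rate_row(50))   # (2048, 8, 32): 2 MB/s in the 41-50 band
```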


Technologies for extending the distance between two SVC clusters can be broadly divided into two categories:

• FC extenders
• SAN multiprotocol routers

Due to the more complex interactions involved, IBM explicitly tests products of this class for interoperability with the SVC. The current list of supported SAN routers can be found in the supported hardware list on the SVC support Web site:

http://www.ibm.com/storage/support/2145

IBM has tested a number of FC extenders and SAN router technologies with the SVC, which must be planned, installed, and tested so that the following requirements are met:

• For SVC 4.1.0.x, the round-trip latency between sites must not exceed 68 ms (34 ms one way) for FC extenders, or 20 ms (10 ms one way) for SAN routers.

• For SVC 4.1.1.x and later, the round-trip latency between sites must not exceed 80 ms (40 ms one way). For Global Mirror, this limit allows a distance between the primary and secondary sites of up to 8,000 km (4,970.96 miles) using a planning assumption of 100 km (62.13 miles) per 1 ms of round-trip link latency.

• The latency of long distance links depends upon the technology that is used to implement them. A point-to-point dark fiber-based link will typically provide a round-trip latency of 1 ms per 100 km (62.13 miles) or better. Other technologies will provide longer round-trip latencies, which will affect the maximum supported distance.
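The distance planning rule above can be sketched as a one-line conversion (a sketch of the stated planning assumption only: 1 ms of round-trip latency per 100 km of dark fiber, against the 80 ms round-trip limit for SVC 4.1.1.x and later):

```python
# Sketch of the distance planning rule above: 1 ms of round-trip latency per
# 100 km of dark fiber, against the 80 ms round-trip limit (SVC 4.1.1.x+).

KM_PER_MS_RTT = 100.0     # planning assumption stated in the text
MAX_RTT_MS = 80.0         # SVC 4.1.1.x and later

def max_distance_km(rtt_budget_ms: float = MAX_RTT_MS) -> float:
    """Maximum site separation that fits within a round-trip latency budget."""
    return rtt_budget_ms * KM_PER_MS_RTT

print(max_distance_km())       # 8000.0 km, as stated for Global Mirror
print(max_distance_km(20))     # 2000.0 km within the 20 ms SAN-router limit
```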

• The configuration must be tested with the expected peak workloads.

• When Metro Mirror or Global Mirror is used, a certain amount of bandwidth is required for SVC intercluster heartbeat traffic. The amount of traffic depends on how many nodes are in each of the two clusters.

Figure 3-20 shows the amount of heartbeat traffic, in megabits per second, that is generated by various sizes of clusters.

Figure 3-20 Amount of heartbeat traffic

• These numbers represent the total traffic between the two clusters when no I/O is taking place to mirrored VDisks. Half of the data is sent by one cluster, and half of the data is sent by the other cluster. The traffic is divided evenly over all available intercluster links; therefore, if you have two redundant links, half of this traffic is sent over each link during fault-free operation.

• The bandwidth between sites must be, at the least, sized to meet the peak workload requirements while maintaining the maximum latency specified previously. The peak workload requirement must be evaluated by considering the average write workload over a period of one minute or less, plus the required synchronization copy bandwidth. With no synchronization copies active and no write I/O disks in Metro Mirror or Global Mirror relationships, the SVC protocols will operate with the bandwidth indicated in Figure 3-20,


but the true bandwidth required for the link can only be determined by considering the peak write bandwidth to VDisks participating in Metro Mirror or Global Mirror relationships and adding to it the peak synchronization copy bandwidth.

• If the link between the sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency statements continue to be true even during single failure conditions.

• The configuration must be tested to simulate the failure of the primary site (to test the recovery capabilities and procedures), including eventual failback to the primary site from the secondary site.

• The configuration must be tested to confirm that any failover mechanisms in the intercluster links interoperate satisfactorily with the SVC.

• The FC extender must be treated as a normal link.

• The bandwidth and latency measurements must be made by, or on behalf of, the client, and they are not part of the standard installation of the SVC by IBM. IBM recommends that these measurements are made during installation and that records are kept. Testing must be repeated following any significant changes to the equipment providing the intercluster link.

Global Mirror guidelines

Consider these guidelines:

• When using SVC Global Mirror, all components in the SAN must be capable of sustaining the workload generated by application hosts, as well as the Global Mirror background copy workload. If they are not, Global Mirror can automatically stop your relationships to protect your application hosts from increased response times. Therefore, it is important to configure each component correctly.

• In addition, use a SAN performance monitoring tool, such as IBM System Storage Productivity Center, which allows you to continuously monitor the SAN components for error conditions and performance problems. This tool helps you detect potential issues before they impact your disaster recovery solution.

• The long-distance link between the two clusters must be provisioned to allow for the peak application write workload to the Global Mirror source VDisks, plus the client-defined level of background copy.

• The peak application write workload ideally must be determined by analyzing the SVC performance statistics.

• Statistics must be gathered over a typical application I/O workload cycle, which might be days, weeks, or months, depending on the environment in which the SVC is used. These statistics must be used to find the peak write workload that the link must be able to support.

• Characteristics of the link can change with use; for example, the latency might increase as the link is used to carry an increased bandwidth. The user must be aware of the link's behavior in such situations and ensure that the link remains within the specified limits. If the characteristics are not known, testing must be performed to gain confidence in the link's suitability.

• Users of Global Mirror must consider how to optimize the performance of the long-distance link, which depends upon the technology that is used to implement the link. For example, when transmitting FC traffic over an IP link, it might be desirable to enable jumbo frames to improve efficiency.

• Using Global Mirror and Metro Mirror between the same two clusters is supported.

• Cache-disabled VDisks are not supported in a Global Mirror relationship.


• The gmlinktolerance parameter of the remote copy partnership must be set to an appropriate value. The default value is 300 seconds (5 minutes), which is appropriate for most clients.

• During SAN maintenance, the user must choose one of these options: reduce the application I/O workload for the duration of the maintenance (so that the degraded SAN components are capable of the new workload), disable the gmlinktolerance feature, increase the gmlinktolerance value (meaning that application hosts might see extended response times from Global Mirror VDisks), or stop the Global Mirror relationships. If the gmlinktolerance value is increased for maintenance lasting x minutes, it must only be reset to the normal value x minutes after the end of the maintenance activity. If gmlinktolerance is disabled for the duration of the maintenance, it must be re-enabled after the maintenance is complete.

• Global Mirror VDisks must have their preferred nodes evenly distributed between the nodes of the clusters. Each VDisk within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group.

Figure 3-21 shows the correct relationship between VDisks in a Metro Mirror or Global Mirror solution.

Figure 3-21 Correct VDisk relationship

• The capabilities of the storage controllers at the secondary cluster must be provisioned to allow for the peak application workload to the Global Mirror VDisks, plus the client-defined level of background copy, plus any other I/O being performed at the secondary site. Otherwise, the performance of applications at the primary cluster can be limited by the performance of the back-end storage controllers at the secondary cluster, which reduces the amount of I/O that applications can perform to Global Mirror VDisks.

• We do not recommend using SATA for Metro Mirror or Global Mirror secondary VDisks without a complete review. Be careful using a slower disk subsystem for the secondary VDisks of high performance primary VDisks, because the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.

• Global Mirror VDisks at the secondary cluster must be in dedicated MDisk groups (which contain no non-Global Mirror VDisks).


• Storage controllers must be configured to support the Global Mirror workload that is required of them. Either dedicate storage controllers to only Global Mirror VDisks, configure the controller to guarantee sufficient quality of service for the disks being used by Global Mirror, or ensure that physical disks are not shared between Global Mirror VDisks and other I/O (for example, by not splitting an individual RAID array).

• MDisks within a Global Mirror MDisk group must be similar in their characteristics (for example, RAID level, physical disk count, and disk speed). This requirement is true of all MDisk groups, but it is particularly important to maintain performance when using Global Mirror.

• When a consistent relationship is stopped, for example, by a persistent I/O error on the intercluster link, the relationship enters the consistent_stopped state. I/O at the primary site continues, but the updates are not mirrored to the secondary site. Restarting the relationship begins the process of synchronizing new data to the secondary disk. While this synchronization is in progress, the relationship is in the inconsistent_copying state, and the Global Mirror secondary VDisk is not in a usable state until the copy has completed and the relationship has returned to a consistent state. Therefore, it is highly advisable to create a FlashCopy of the secondary VDisk before restarting the relationship. When started, the FlashCopy provides a consistent copy of the data, even while the Global Mirror relationship is copying. If the Global Mirror relationship does not reach the synchronized state (if, for example, the intercluster link experiences further persistent I/O errors), the FlashCopy target can be used at the secondary site for disaster recovery purposes.

• If you are planning to use an FCIP intercluster link, it is extremely important to design and size the pipe correctly.

Example 3-2 shows a best-guess bandwidth sizing formula.

Example 3-2 WAN link calculation example

Amount of write data within 24 hours times 4 to allow for peaks
Translate into MB/s to determine WAN link needed

Example:
250 GB a day
250 GB * 4 = 1 TB
24 hours * 3600 secs/hr = 86,400 secs
1,000,000,000,000 / 86,400 = approximately 12 MB/s
Which means OC3 or higher is needed (155 Mbps or higher)
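The back-of-the-envelope arithmetic in Example 3-2 can be captured in a short helper. This is a sketch of that calculation only; the 4x peak factor and decimal (10^9 bytes per GB) units follow the example:

```python
# Sketch of the Example 3-2 calculation: daily write volume, multiplied by a
# peak factor of 4, spread over 86,400 seconds, gives the WAN link in MB/s.

SECONDS_PER_DAY = 24 * 3600          # 86,400

def wan_link_mb_s(gb_written_per_day: float, peak_factor: float = 4.0) -> float:
    """Required sustained WAN throughput in MB/s (decimal units, per text)."""
    bytes_per_day = gb_written_per_day * 1e9 * peak_factor
    return bytes_per_day / SECONDS_PER_DAY / 1e6

print(round(wan_link_mb_s(250)))     # ~12 MB/s, matching the example
```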

• If compression is available on routers or WAN communication devices, smaller pipelines might be adequate. Note that the workload is probably not evenly spread across 24 hours. If there are extended periods of high data change rates, you might want to consider suspending Global Mirror during that time frame.

• If the network bandwidth is too small to handle the traffic, application write I/O response times might be elongated. For the SVC, Global Mirror must support short-term "Peak Write" bandwidth requirements. Remember that SVC Global Mirror is much more sensitive to a lack of bandwidth than the DS8000.

• You also need to consider the initial sync and resync workload. The Global Mirror partnership's background copy rate must be set to a value that is appropriate to the link and the secondary back-end storage. Remember, the more bandwidth that you give to the sync and resync operations, the less workload can be delivered by the SVC for the regular data traffic.

• The Metro Mirror or Global Mirror background copy rate is predefined: the per VDisk limit is 25 MBps, and the maximum per I/O Group is roughly 250 MBps.


• Be careful using space-efficient secondary VDisks at the disaster recovery site, because a Space-Efficient VDisk can have up to 50% lower performance than a normal VDisk and can affect the performance of the VDisks at the primary site.

• Do not propose Global Mirror if the data change rate will exceed the communication bandwidth or if the round-trip latency exceeds 80 - 120 ms. Greater than 80 ms round-trip latency requires SCORE/RPQ submission.

3.3.10 SAN boot support

The SVC supports SAN boot or startup for AIX, Windows 2003 Server, and other operating systems. SAN boot support can change from time to time, so we recommend regularly checking the following Web site:

http://www.ibm.com/systems/storage/software/virtualization/svc/interop.html

3.3.11 Data migration from a non-virtualized storage subsystem

Data migration is an extremely important part of an SVC implementation, so a data migration plan must be prepared carefully. You might need to migrate your data for one of these reasons:

• Redistributing workload within a cluster across the disk subsystem
• Moving workload onto newly installed storage
• Moving workload off old or failing storage, ahead of decommissioning it
• Moving workload to rebalance a changed workload
• Migrating data from an older disk subsystem to SVC-managed storage
• Migrating data from one disk subsystem to another disk subsystem

Because there are multiple data migration methods, we suggest that you choose the data migration method that best fits your environment, your operating system platform, your type of data, and your application's service level agreement.

We can define data migration as belonging to three groups:

• Based on operating system Logical Volume Manager (LVM) or commands
• Based on special data migration software
• Based on the SVC data migration feature

With data migration, we recommend that you apply the following guidelines:

• Choose which data migration method best fits your operating system platform, your type of data, and your service level agreement.

• Check the interoperability matrix for the storage subsystem to which your data is being migrated:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

• Choose where you want to place your data after migration in terms of the MDG that relates to a specific storage subsystem tier.

• Check that a sufficient amount of free space or free extents is available in the target MDG.

• Decide if your data is critical and must be protected by the VDisk Mirroring option, or if it must be replicated to a remote site for disaster recovery.

• Prepare in advance all of the zoning and LUN masking/host mappings that you might need, to minimize downtime during the migration.

• Prepare a detailed operation plan so that you do not overlook anything at data migration time.


• Execute a data backup before you start any data migration. Data backup must be part of the regular data management process.

• You might want to use the SVC as a data mover to migrate data from one non-virtualized storage subsystem to another non-virtualized storage subsystem. In this case, you might have to add additional checks that are related to the specific storage subsystem to which you want to migrate. Be careful using slower disk subsystems for the secondary VDisks of high performance primary VDisks, because the SVC cache might not be able to buffer all the writes, and flushing cache writes to SATA might slow I/O at the production site.

3.3.12 SVC configuration backup procedure

We recommend that you save the configuration externally when changes, such as adding new nodes and disk subsystems, have been performed on the cluster. Saving the configuration is a crucial part of SVC management, and various methods can be applied to back up your SVC configuration. We suggest that you implement an automatic configuration backup by using the configuration backup command. We describe this command for the CLI and the GUI in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339 and in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.

3.4 Performance considerations

While storage virtualization with the SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC’s caching capability and its ability to stripe VDisks across multiple disk arrays are the reasons why the performance improvement is significant when the SVC is implemented with midrange disk subsystems, because this technology is often only provided with high-end enterprise disk subsystems.

To ensure the desired performance and capacity of your storage infrastructure, we recommend that you do a performance and capacity analysis to reveal the business requirements of your storage environment. When this is done, you can use the guidelines in this chapter to design a solution that meets the business requirements.

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby the limiting factor of a given system. At the same time, remember that the component that limits one workload might not be the component that limits another workload.

When designing a storage infrastructure using SVC, or implementing SVC in an existing storage infrastructure, you must therefore take into consideration the performance and capacity of the SAN, the disk subsystems, the SVC, and the known/expected workload.

Tip: Technically, almost all storage controllers provide both striping (RAID-1 or RAID-10) and a form of caching. The real advantage is the degree to which you can stripe the data, that is, across all MDisks in a group and therefore have the maximum number of spindles active at one time. The caching is secondary. The SVC provides additional caching to what midrange controllers provide (usually a couple of GB), whereas enterprise systems have much larger caches.


3.4.1 SAN

The SVC now has many models: 2145-4F2, 2145-8F2, 2145-8F4, 2145-8G4, 2145-8A4, and 2145-CF8. All of them can connect to 2 Gbps, 4 Gbps, or 8 Gbps switches. From a performance point of view, it is better to connect the SVC to 8 Gbps switches.

Correct zoning on the SAN switch will bring security and performance together. We recommend that you implement a dual HBA approach at the host to access the SVC.

3.4.2 Disk subsystems

From a performance perspective, there are a few guidelines for connecting to an SVC:

• Connect all storage ports to the switch, and zone them to all of the SVC ports. Zone all ports on the disk back-end storage to all ports on the SVC nodes in a cluster, and also make sure to configure the storage subsystem LUN masking settings to map all LUNs to all of the SVC WWPNs in the cluster. The SVC is designed to handle large quantities of multiple paths from the back-end storage.

• Using as many 15,000 RPM disks as possible will improve performance considerably.

• Creating one LUN per array will help in a sequential workload environment.

In most cases, the SVC will be able to improve the performance, especially on midrange to low-end disk subsystems, older disk subsystems with slow controllers, or uncached disk systems, for these reasons:

• The SVC has the capability to stripe across disk arrays, and it can do so across the entire set of supported physical disk resources.

• The SVC has a 4 GB, 8 GB, or 24 GB cache (in the latest 2145-CF8 model), and it has an advanced caching mechanism.

The SVC’s large cache and advanced cache management algorithms also allow it to improve upon the performance of many types of underlying disk technologies. The SVC’s capability to manage, in the background, the destaging operations incurred by writes (while still supporting full data integrity) has the potential to be particularly important in achieving good database performance.

Depending upon the size, age, and technology level of the disk storage system, the total cache available in the SVC can be larger, smaller, or about the same as that associated with the disk storage. Because hits to the cache can occur in either the upper (SVC) or the lower (disk controller) level of the overall system, the system as a whole can take advantage of the larger amount of cache wherever it is located. Thus, if the storage control level of cache has the greater capacity, expect hits to this cache to occur, in addition to hits in the SVC cache.

Also, regardless of their relative capacities, both levels of cache will tend to play an important role in allowing sequentially organized data to flow smoothly through the system. The SVC cannot increase the throughput potential of the underlying disks in all cases. Its ability to do so depends upon both the underlying storage technology, as well as the degree to which the workload exhibits “hot spots” or sensitivity to cache size or cache algorithms.

IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426, shows the SVC’s cache partitioning capability:

http://www.redbooks.ibm.com/abstracts/redp4426.html?Open


3.4.3 SVC

The SVC cluster is scalable up to eight nodes, and the performance is almost linear when adding more nodes into an SVC cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity of having a SAN and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many MDisks as possible, therefore creating a greater level of concurrent I/O to the back end without overloading a single disk or array.

Assuming that there are no bottlenecks in the SAN or on the disk subsystem, remember that specific guidelines must be followed when you are performing these tasks:

• Creating an MDG
• Creating VDisks
• Connecting or configuring hosts that must receive disk space from an SVC cluster

You can obtain more detailed information about performance and best practices for the SVC in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

3.4.4 Performance monitoring

Performance monitoring must be an integral part of the overall IT environment. For the SVC, just as for the other IBM storage subsystems, the official IBM tool to collect performance statistics and supply a performance report is the TotalStorage® Productivity Center.

You can obtain more information about using the TotalStorage Productivity Center to monitor your storage subsystem in Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364:

http://www.redbooks.ibm.com/abstracts/sg247364.html?Open

See Chapter 8, “SAN Volume Controller operations using the GUI” on page 469 for detailed information about collecting performance statistics.

102 Implementing the IBM System Storage SAN Volume Controller V5.1

Chapter 4. SAN Volume Controller initial configuration

In this chapter, we discuss these topics:

- Managing the cluster
- System Storage Productivity Center overview
- SAN Volume Controller (SVC) Hardware Management Console
- SVC initial configuration steps
- SVC ICA application upgrade

© Copyright IBM Corp. 2010. All rights reserved. 103

Page 130: San

4.1 Managing the cluster

There are three ways to manage the SVC:

- Using the System Storage Productivity Center (SSPC)
- Using an SVC Management Console
- Using a PuTTY-based SVC command-line interface

Figure 4-1 shows the three ways to manage an SVC cluster.

Figure 4-1 SVC cluster management

You still have full management control of the SVC no matter which method you choose. IBM System Storage Productivity Center is supplied by default when you purchase your SVC cluster.

If you already have a previously installed SVC cluster in your environment, it is possible that you are using the SVC Console (Hardware Management Console (HMC)). You can still use it together with IBM System Storage Productivity Center, but you can only log in to your SVC from one of them at a time.

If you decide to manage your SVC cluster with the SVC CLI, it does not matter whether you are using the SVC Console or IBM System Storage Productivity Center, because the SVC CLI is located on the cluster and is accessed through Secure Shell (SSH); the SSH client can be installed anywhere.
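As a reference for CLI access, PuTTY's command-line client, plink, can run a single SVC CLI command over SSH. The sketch below only assembles such an invocation; the cluster address, user name, and key path are placeholder values for illustration, not taken from a real environment.

```python
# Sketch: assemble a plink command line that runs one SVC CLI command
# over SSH. The address, user, and key path below are placeholders.
def build_svc_cli_command(cluster_ip, private_key, svc_command, user="admin"):
    return ["plink", "-i", private_key,
            "{0}@{1}".format(user, cluster_ip)] + svc_command.split()

cmd = build_svc_cli_command("9.43.86.117", r"C:\keys\icat.ppk",
                            "svcinfo lscluster")
print(" ".join(cmd))
```

The resulting argument list can then be handed to a process launcher such as subprocess.run on the management workstation.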

4.1.1 TCP/IP requirements for SAN Volume Controller

To plan your installation, consider the TCP/IP address requirements of the SVC cluster and the requirements for the SVC to access other services. You must also plan the address allocation and the Ethernet router, gateway, and firewall configuration to provide the required access and network security.

Figure 4-2 shows the TCP/IP ports and services that are used by the SVC.

(Figure 4-1 labels: SSPC: icat, http, PuTTY client, TPC-SE; HMC: icat, http, PuTTY client; OEM desktop: http, PuTTY client)

Figure 4-2 TCP/IP ports

For more information about TCP/IP prerequisites, see Chapter 3, “Planning and configuration” on page 65 and also the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824.

Figure 4-3 shows a common flowchart for starting an SVC initial configuration; it covers all of the types of management.

Chapter 4. SAN Volume Controller initial configuration 105

Figure 4-3 SVC initial configuration flowchart

In the next sections, we describe each of the steps shown in Figure 4-3.

4.2 System Storage Productivity Center overview

The System Storage Productivity Center (SSPC) is an integrated hardware and software solution that provides a single management console for managing IBM SVC, IBM DS8000, and other components of your data storage infrastructure.

The current release of System Storage Productivity Center consists of the following components:

- IBM Tivoli Storage Productivity Center Basic Edition 4.1.1

IBM Tivoli Storage Productivity Center Basic Edition 4.1.1 is preinstalled on the System Storage Productivity Center server.

- Tivoli Storage Productivity Center for Replication is preinstalled. An additional license is required.

- IBM SAN Volume Controller Console 5.1.0

IBM SAN Volume Controller Console 5.1.0 is preinstalled on the System Storage Productivity Center server. Because this level of the console no longer requires a Common Information Model (CIM) agent to communicate with the SVC, a CIM Agent is not installed with the console. Instead, you can use the CIM Agent that is embedded in the SVC hardware. To manage prior levels of the SVC, install the corresponding CIM Agent on the IBM System Storage Productivity Center server. PuTTY remains installed on the System Storage Productivity Center and is available for key generation.

- IBM System Storage DS® Storage Manager 10.60 is available for you to optionally install on the System Storage Productivity Center server, or on a remote server. The DS Storage Manager 10.60 can manage the IBM DS3000, IBM DS4000, and IBM DS5000. With DS Storage Manager 10.60, when you use Tivoli Storage Productivity Center to add and discover a DS CIM Agent, you can launch the DS Storage Manager from the topology viewer, the Configuration Utility, or the Disk Manager of the Tivoli Storage Productivity Center.

- IBM Java™ 1.5 is preinstalled and supports DS Storage Manager 10.60. You do not need to download Java from Sun Microsystems.

- DS CIM Agent management commands: The DS CIM Agent management commands (DSCIMCLI) for 5.4.3 are preinstalled on the System Storage Productivity Center.

Figure 4-4 shows the product stack in the IBM System Storage Productivity Center Console 1.4.

Figure 4-4 IBM System Storage Productivity Center 1.4 product stack

The IBM System Storage Productivity Center Console replaces the functionality of the SVC Master Console (MC), which was a dedicated management console for the SVC. The Master Console is still supported and will run the latest code levels of the SVC Console software components.

IBM System Storage Productivity Center has all of the software components preinstalled and tested on a System x™ machine, model 2805-MC4, with Windows installed on it.

All the software components installed on the IBM System Storage Productivity Center can be ordered and installed on hardware that meets or exceeds minimum requirements. The SVC Console software components are also available on the Web.

When using the IBM System Storage Productivity Center with the SVC, you have to install it and configure it before configuring the SVC. For a detailed guide to the IBM System Storage Productivity Center, we recommend that you refer to the IBM System Storage Productivity Center Software Installation and User’s Guide, SC23-8823.

For information pertaining to physical connectivity to the SVC, see Chapter 3, “Planning and configuration” on page 65.

4.2.1 IBM System Storage Productivity Center hardware

The hardware used by the IBM System Storage Productivity Center solution is the IBM System Storage Productivity Center 2805-MC4. It is a 1U rack-mounted server. It has the following initial configuration:

- One Intel Xeon® quad-core central processing unit, with speed of 2.4 GHz, cache of 8 MB, and power consumption of 80 W

- 8 GB of RAM (eight 1-inch dual inline memory modules of double-data-rate 3 (DDR3) memory, with a data rate of 1,333 MHz)

- Two 146 GB hard disk drives, each with a speed of 15,000 RPM

- One Broadcom 6708 Ethernet card

- One CD/DVD bay with read and write capability

- Microsoft Windows 2008 Enterprise Edition

It is designed to perform System Storage Productivity Center functions. If you plan to upgrade System Storage Productivity Center for more functions, you can purchase the Performance Upgrade Kit to add more capacity to your hardware.

4.2.2 SVC installation planning information for System Storage Productivity Center

Consider the following steps when planning the System Storage Productivity Center installation:

- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the System Storage Productivity Center is to be installed.
- Verify that the System Storage Productivity Center will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.
- Determine the cabling required.
- Determine the network IP address.
- Determine the System Storage Productivity Center host name.

For detailed installation guidance, see the IBM System Storage Productivity Center: Introduction and Planning Guide, SC23-8824:

https://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5356448

Also, see the IBM Tivoli Storage Productivity Center and IBM Tivoli Storage Productivity Center for Replication Installation and Configuration Guide, SC27-2337:

http://www-01.ibm.com/support/docview.wss?rs=1181&uid=ssg1S7002597

Figure 4-5 shows the front view of the System Storage Productivity Center Console based on the 2805-MC4 hardware.

Figure 4-5 System Storage Productivity Center 2805-MC4 front view

Figure 4-6 shows a rear view of System Storage Productivity Center Console based on the 2805-MC4 hardware.

Figure 4-6 System Storage Productivity Center 2805-MC4 rear view

4.2.3 SVC installation planning information for the HMC

Consider the following steps when planning for the HMC installation:

- Verify that the hardware and software prerequisites have been met.
- Determine the location of the rack where the HMC is to be installed.
- Verify that the HMC will be installed in line of sight to the SVC nodes.
- Verify that you have a keyboard, mouse, and monitor available to use.
- Determine the cabling required.
- Determine the network IP address.
- Determine the HMC host name.

For detailed installation guidance, see the IBM System Storage SAN Volume Controller: Master Console Guide, SC27-2223:

http://www-01.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=DA400&q1=english&q2=-Japanese&uid=ssg1S7002609&loc=en_US&cs=utf-8&lang=en

4.3 Setting up the SVC cluster

This section provides step-by-step instructions for building the SVC cluster initially.

4.3.1 Creating the cluster (first time) using the service panel

This section provides the step-by-step instructions that are needed to create the cluster for the first time using the service panel.

Use Figure 4-7 as a reference for the SVC 2145-8F2 and 2145-8F4 node model buttons to be pushed in the steps that follow. Use Figure 4-8 for the SVC 2145-8G4 and 2145-8A4 node models, and use Figure 4-9 as a reference for the SVC 2145-CF8 node model.

Figure 4-7 SVC 8F2 node and SVC 8F4 node front and operator panel

Figure 4-8 SVC 8G4 node front and operator panel

Figure 4-9 shows the CF8 model front panel.

Figure 4-9 CF8 front panel

4.3.2 Prerequisites

Ensure that the SVC nodes are physically installed. Prior to configuring the cluster, ensure that the following information is available:

- License: The license indicates whether the client is permitted to use FlashCopy, Metro Mirror, or both. It also indicates how much capacity the client is licensed to virtualize.

- For IPv4 addressing:

  – Cluster IPv4 addresses: These addresses include one address for the cluster and another address for the service address.

  – IPv4 subnet mask.

  – Gateway IPv4 address.

- For IPv6 addressing:

  – Cluster IPv6 addresses: These addresses include one address for the cluster and another address for the service address.

  – IPv6 prefix.

  – Gateway IPv6 address.

4.3.3 Initial configuration using the service panel

After the hardware is physically installed into racks, complete the following steps to initially configure the cluster through the service panel:

1. Choose any node that is to become a member of the cluster being created.

2. At the service panel of that node, press and release the up or down navigation button continuously until Node: is displayed.

3. Press and release the left or right navigation button continuously until Create Cluster? is displayed. Press the select button.

4. If IPv4 Address: is displayed on line 1 of the service display, go to step 5. If Delete Cluster? is displayed on line 1 of the service display, this node is already a member of a cluster. Either the wrong node was selected, or this node was already used in a previous cluster. The ID of this existing cluster is displayed on line 2 of the service display:

a. If the wrong node was selected, this procedure can be exited by pressing the left, right, up, or down button (it cancels automatically after 60 seconds).

b. If you are certain that the existing cluster is not required, follow these steps:

i. Press and hold the up button.

ii. Press and release the select button. Then, release the up button, which deletes the cluster information from the node. Go back to step 1 and start again.

5. If you are creating the cluster with IPv4, press the select button. For IPv6, press the down button to display IPv6 Address:, and then press the select button.

6. Use the up or down navigation buttons to change the value of the first field of the IP address to the value that has been chosen.

7. Use the right navigation button to move to the next field. Use the up or down navigation buttons to change the value of this field.

8. Repeat step 7 for each of the remaining fields of the IP address.

9. When the last field of the IP address has been changed, press the select button.

10. Press the right arrow button:

a. For IPv4, IPv4 Subnet: is displayed.

b. For IPv6, IPv6 Prefix: is displayed.

11. Press the select button.

Important: If a time-out occurs when entering the input for the fields during these steps, you must begin again from step 2. All of the changes are lost, so be sure to have all of the information available before beginning again.

Important: When a cluster is deleted, all of the client data that is contained in that cluster is lost.

Note: For IPv4, pressing and holding the up or down buttons will increment or decrement the IP address field by units of 10. The field value rotates from 0 to 255 with the down button, and from 255 to 0 with the up button.

For IPv6, you do the same steps except that it is a 4-digit hexadecimal field, and the individual characters will increment.
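The wrap-around behavior that this note describes for an IPv4 octet can be modeled as modular arithmetic. A minimal sketch; the function name is illustrative:

```python
# Model of the service-panel behavior above: an IPv4 octet wraps within
# 0-255, and a held button press moves the value in steps of 10.
def bump_octet(value, presses, step=1):
    return (value + presses * step) % 256

print(bump_octet(255, 1))      # up from 255 wraps to 0
print(bump_octet(0, -1))       # down from 0 wraps to 255
print(bump_octet(120, 3, 10))  # three held steps of 10: 150
```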

12. Change the fields for IPv4 Subnet in the same way that the IPv4 IP address fields were changed. There is only a single field for IPv6 Prefix.

13. When the last field of the IPv4 Subnet/IPv6 Prefix has been changed, press the select button.

14. Press the right navigation button:

a. For IPv4, IPv4 Gateway: is displayed.

b. For IPv6, IPv6 Gateway: is displayed.

15. Press the select button.

16. Change the fields for the appropriate Gateway in the same way that the IPv4/IPv6 address fields were changed.

17. When the changes to all of the Gateway fields have been made, press the select button.

18. Press the right navigation button:

a. For IPv4, IPv4 Create Now? is displayed.

b. For IPv6, IPv6 Create Now? is displayed.

19. When the settings have all been verified as accurate, press the select button.

To review the settings before creating the cluster, use the right and left buttons. Make any necessary changes, return to Create Now?, and press the select button.

If the cluster is created successfully, Password: is displayed on line 1 of the service display panel. Line 2 contains a randomly generated password, which is used to complete the cluster configuration in the next section.

20. When Cluster: is displayed on line 1 of the service display and the Password: display has timed out, the cluster was created successfully. Also, the cluster IP address is displayed on line 2 when the initial creation of the cluster is completed.

If the cluster is not created, Create Failed: is displayed on line 1 of the service display. Line 2 contains an error code. Refer to the error codes that are documented in IBM System Storage SAN Volume Controller: Service Guide, GC26-7901, to identify the reason why the cluster creation failed and the corrective action to take.

4.4 Adding the cluster to the SSPC or the SVC HMC

After you have performed the activities in 4.3, “Setting up the SVC cluster” on page 111, complete the cluster setup using the SVC Console. Follow 4.4.1, “Configuring the GUI” on page 117 to create the cluster and complete the configuration.

Important: Make a note of this password now. It is case sensitive. The password is displayed only for approximately 60 seconds. If the password is not recorded, the cluster configuration procedure must be started again from the beginning.

Important: At this time, do not repeat this procedure to add other nodes to the cluster. Adding nodes to the cluster is accomplished in 7.8.2, “Adding a node” on page 388 and in 8.10.3, “Adding nodes to the cluster” on page 560.

Important: Make sure that the SVC cluster IP address (svcclusterip) can be reached successfully with a ping command from the SVC Console.

4.4.1 Configuring the GUI

If this is the first time that the SVC administration GUI is being used, you must configure it:

1. Open the GUI using one of the following methods:

– Double-click the icon marked SAN Volume Controller Console on the SVC Console’s desktop.

– Open a Web browser on the SVC Console and point to this address:

http://localhost:9080/ica (We accessed the SVC Console using this method.)

– Open a Web browser on a separate workstation and point to this address:

http://svcconsoleipaddress:9080/ica

Figure 4-10 shows the SVC 5.1 Welcome window.

Figure 4-10 Welcome window

2. Click Add SAN Volume Controller Cluster, and you will be presented with the window that is shown in Figure 4-11.

Figure 4-11 Adding the SVC cluster IP address

Figure 4-12 shows the CMMVC5753E error.

Figure 4-12 CMMVC5753E error

3. Click OK and a pop-up window opens and prompts for the user ID and the password of the SVC cluster, as shown in Figure 4-13. Enter the user ID admin and the cluster admin password that was set earlier in 4.3.1, “Creating the cluster (first time) using the service panel” on page 111, and click OK.

Figure 4-13 SVC cluster user ID and password sign-on window

4. The browser accesses the SVC and displays the Create New Cluster wizard window, as shown in Figure 4-14. Click Continue.

Figure 4-14 Create New Cluster wizard

Important: Do not forget to select Create Initialize Cluster. Without this flag, you will not be able to initialize the cluster and you will get the error message CMMVC5753E.

5. At the Create New Cluster window (Figure 4-15), fill in the following details:

– A new superuser password to replace the random one that the cluster generated: The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number and has a minimum of one character and a maximum of 15 characters.

– A service password to access the cluster for service operation: The password is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number and has a minimum of one character and a maximum of 15 characters.

– A cluster name: The cluster name is case sensitive and can consist of A to Z, a to z, 0 to 9, and the underscore. It cannot start with a number and has a minimum of one character and a maximum of 15 characters.

– A service IP address to access the cluster for service operations. Choose between an automatically assigned IP address from Dynamic Host Configuration Protocol (DHCP) or a static IP address.

– The fabric speed of the FC network.

– The Administrator Password Policy check box, if selected, enables a user to reset the password from the service panel (this reset is helpful, for example, if the password is forgotten). This check box is optional.
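The naming rule repeated in these bullets (letters, digits, and underscore; not starting with a digit; 1 to 15 characters) can be expressed as a single regular expression. A sketch; the function and pattern names are illustrative:

```python
import re

# One pattern for the rule above: first character a letter or underscore,
# then up to 14 more letters, digits, or underscores (15 total maximum).
_SVC_NAME_RULE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]{0,14}$")

def is_valid_svc_name(value):
    return bool(_SVC_NAME_RULE.match(value))

print(is_valid_svc_name("ITSO_CLS3"))  # True
print(is_valid_svc_name("1cluster"))   # False: starts with a digit
print(is_valid_svc_name("a" * 16))     # False: longer than 15 characters
```

The same check applies to the superuser password, the service password, and the cluster name, because the text states the identical rule for all three.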

6. After you have filled in the details, click Create New Cluster (Figure 4-15).

Users: The admin user that was used previously is no longer needed. It is replaced by the superuser user that is created at cluster initialization time. Starting with SVC 5.1, the CIM Agent has been moved inside the SVC cluster.

Tip: The service IP address differs from the cluster IP address. However, because the service IP address is configured for the cluster, it must be on the same IP subnet.
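The tip's same-subnet requirement can be checked programmatically. A sketch using Python's standard ipaddress module; the addresses and mask are examples only:

```python
import ipaddress

# Check the rule in the tip above: the service IP must fall inside the
# subnet defined by the cluster IP and the subnet mask (examples only).
def same_subnet(cluster_ip, service_ip, netmask):
    net = ipaddress.ip_network("{0}/{1}".format(cluster_ip, netmask),
                               strict=False)
    return ipaddress.ip_address(service_ip) in net

print(same_subnet("10.64.210.121", "10.64.210.122", "255.255.255.0"))  # True
print(same_subnet("10.64.210.121", "10.64.211.10", "255.255.255.0"))   # False
```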

Important: The SVC must be in a secure room if this function is enabled, because anyone who knows the correct key sequence can reset the admin password.

Use this key sequence:

a. From the Cluster: menu item displayed on the service panel, press the left or right button until Recover Cluster? is displayed.

b. Press the select button. Service Access? is displayed.

c. Press and hold the up button, and then press and release the select button. This step generates a new random password. Write it down.

Be careful: pressing and holding the down button, and then pressing and releasing the select button, places the node in service mode instead.

Figure 4-15 Cluster details

7. A Creating New Cluster window opens, as shown in Figure 4-16. Click Continue each time when prompted.

Important: Make sure that you confirm the Administrator and Service passwords and retain them in a safe place for future use.

Figure 4-16 Creating New Cluster

8. A Created New Cluster window opens, as shown in Figure 4-17. Click Continue.

Figure 4-17 Created New Cluster

9. A Password Changed window will confirm that the password has been modified, as shown in Figure 4-18. Click Continue.

Figure 4-18 Password Changed

10. Then, you are redirected to the License Settings window, as shown in Figure 4-19. Choose the type of license that is appropriate for your purchase, and click Go to continue.

Note: By this time, the service panel display on the front of the configured node displays the cluster name that was entered previously (for example, ITSO-CLS3).

Figure 4-19 License Settings

11. Next, the Capacity Licensing Settings window is displayed, as shown in Figure 4-20. To continue, fill out the fields for Virtualization Limit, FlashCopy Limit, and Global and Metro Mirror Limit with the number of terabytes that are licensed. If you do not have a license for any of these features, leave the value at 0. Click Set License Settings.

Figure 4-20 Capacity Licensing Settings

12. A confirmation window confirms the settings for the features, as shown in Figure 4-21. Click Continue.

Figure 4-21 Capacity Licensing Settings confirmation

13. A window opens confirming that you have successfully created the initial settings for the cluster, as shown in Figure 4-22.

Figure 4-22 Cluster successfully created

14. Closing the previous task window by clicking the X in the upper-right corner redirects you to the Viewing Clusters window (the cluster appears as unauthenticated). After selecting your cluster and clicking Go, you are asked to authenticate by entering your predefined superuser user ID and password.

Figure 4-23 shows the Viewing Clusters window.

Figure 4-23 Viewing Clusters window

15. Perform the following steps to complete the SVC cluster configuration:

a. Add an additional node to the cluster.

b. Configure SSH keys for the command line user, as shown in 4.5, “Secure Shell overview and CIM Agent” on page 125.

c. Configure user authentication and authorization.

d. Set up the call home options.

e. Set up event notifications and inventory reporting.

f. Create the MDGs.

g. Add an MDisk to the MDG.

h. Identify and create VDisks.

i. Create host objects and map VDisks to them.

j. Identify and configure FlashCopy mappings and Metro Mirror relationships.

k. Back up configuration data.

We describe all of these steps in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339, and in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.

4.5 Secure Shell overview and CIM Agent

Prior to SVC Version 5.1, Secure Shell (SSH) was used to secure data flow between the SVC cluster configuration node (SSH server) and a client, either a command-line client through the command-line interface (CLI) or the Common Information Model object manager (CIMOM). The connection is secured by means of a private key and public key pair:

1. A public key and a private key are generated together as a pair.

2. A public key is uploaded to the SSH server.

3. A private key identifies the client and is checked against the public key during the connection. The private key must be protected.

4. The SSH server must also identify itself with a specific host key.

5. If the client does not have that host key yet, it is added to a list of known hosts.
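To make the private and public key roles in these steps concrete, here is a toy, textbook-RSA illustration of the principle only: the server holds the public key and verifies that a signature could only have come from the matching private key. The tiny numbers and unpadded arithmetic are for illustration; real SSH uses full-strength cryptography.

```python
# Toy illustration of the key-pair principle (NOT real SSH): the client
# signs with the private exponent d; the server verifies with the public
# exponent e. n = 61 * 53; e * d = 1 (mod (61-1)*(53-1)).
n, e, d = 3233, 17, 2753

def sign(message_hash):
    return pow(message_hash, d, n)               # client side: private key

def verify(message_hash, signature):
    return pow(signature, e, n) == message_hash  # server side: public key

h = 1234                                          # stand-in for a session hash
sig = sign(h)
print(verify(h, sig))       # True: client holds the matching private key
print(verify(h + 1, sig))   # False: signature does not match
```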

Secure Shell is the communication vehicle between the management system (usually the System Storage Productivity Center) and the SVC cluster.

SSH is a client/server network application. The SVC cluster acts as the SSH server in this relationship. The SSH client provides a secure environment from which to connect to a remote machine. It uses the principles of public and private keys for authentication.

The communication interfaces prior to SVC version 5.1 are shown in Figure 4-24.

Figure 4-24 Communication interfaces

SSH keys are generated by the SSH client software. The SSH keys include a public key, which is uploaded and maintained by the cluster, and a private key that is kept private to the workstation that is running the SSH client. These keys authorize specific users to access the administration and service functions on the cluster. Each key pair is associated with a user-defined ID string that can consist of up to 40 characters. Up to 100 keys can be stored on the cluster. New IDs and keys can be added, and unwanted IDs and keys can be deleted.
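The limits in this paragraph (key IDs of up to 40 characters, at most 100 stored keys, with add and delete operations) can be sketched as a small registry. The class and method names below are illustrative, not an actual SVC interface.

```python
# Illustrative registry enforcing the stated limits: at most 100 stored
# keys, each with a user-defined ID of up to 40 characters.
class SshKeyRegistry:
    MAX_KEYS = 100
    MAX_ID_LENGTH = 40

    def __init__(self):
        self._keys = {}

    def add(self, key_id, public_key):
        if len(key_id) > self.MAX_ID_LENGTH:
            raise ValueError("key ID longer than 40 characters")
        if key_id not in self._keys and len(self._keys) >= self.MAX_KEYS:
            raise ValueError("cluster already stores 100 keys")
        self._keys[key_id] = public_key

    def delete(self, key_id):
        self._keys.pop(key_id, None)

registry = SshKeyRegistry()
registry.add("admin", "ssh-rsa AAAAB3...")
print(len(registry._keys))  # 1
```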

To use the CLI (or, prior to SVC 5.1, the SVC graphical user interface (GUI)), an SSH client must be installed on that system, the SSH key pair must be generated on the client system, and the client’s SSH public key must be stored on the SVC cluster or clusters.

The System Storage Productivity Center and the HMC must have the freeware implementation of SSH-2 for Windows called PuTTY preinstalled. This software provides the SSH client function for users logged into the SVC Console who want to invoke the CLI or GUI to manage the SVC cluster.

Starting with SVC 5.1, the management design has been changed, and the CIM Agent has been moved into the SVC cluster.

With SVC 5.1, SSH key authentication is no longer needed for the GUI; it is needed only for the SVC command-line interface.

Figure 4-25 shows the SVC management design.

Figure 4-25 SVC management design

4.5.1 Generating public and private SSH key pairs using PuTTY

Perform the following steps to generate SSH keys on the SSH client system:

1. Start the PuTTY Key Generator to generate public and private SSH keys. From the client desktop, select Start → Programs → PuTTY → PuTTYgen.

2. On the PuTTY Key Generator GUI window (Figure 4-26), generate the keys:

a. Select SSH-2 RSA.
b. Leave the number of bits in a generated key value at 1024.
c. Click Generate.

Note: These keys will be used in the step documented in 4.6, “Using IPv6” on page 136.

Figure 4-26 PuTTY key generator GUI

3. Move the cursor on the blank area in order to generate the keys.

4. After the keys are generated, save them for later use:

a. Click Save public key, as shown in Figure 4-27.

To generate keys: The blank area indicated by the message is the large blank rectangle inside the section of the GUI labeled Key. Continue to move the mouse pointer over the blank area until the progress bar reaches the far right. This action generates random characters to create a unique key pair.

Figure 4-27 Saving the public key

b. You are prompted for a name (for example, pubkey) and a location for the public key (for example, C:\Support Utils\PuTTY). Click Save.

If another name or location is chosen, ensure that a record of the name or location is kept, because the name and location of this SSH public key must be specified in the steps that are documented in 4.5.2, “Uploading the SSH public key to the SVC cluster” on page 129.

c. In the PuTTY Key Generator window, click Save private key.

d. You are prompted with a warning message, as shown in Figure 4-28. Click Yes to save the private key without a passphrase.

Figure 4-28 Saving the private key without a passphrase

Tip: The PuTTY Key Generator saves the public key with no extension, by default. We recommend that you use the string “pub” in naming the public key, for example, “pubkey”, to easily differentiate the SSH public key from the SSH private key.

e. When prompted, enter a name (for example, icat) and location for the private key (for example, C:\Support Utils\PuTTY). Click Save.

If you choose another name or location, ensure that you keep a record of it, because the name and location of the SSH private key must be specified when the PuTTY session is configured in the steps that are documented in 4.6, “Using IPv6” on page 136.

We suggest that you use the default name icat.ppk, because, in SVC clusters running on versions prior to SVC 5.1, this key has been used for icat application authentication and must have this default name.

5. Close the PuTTY Key Generator GUI.

6. Navigate to the directory where the private key was saved (for example, C:\Support Utils\PuTTY).

7. Copy the private key file (for example, icat.ppk) to the C:\Program Files\IBM\svcconsole\cimom directory.

4.5.2 Uploading the SSH public key to the SVC cluster

After you have created your SSH key pair, you need to upload your SSH public key to the SVC cluster:

1. From your browser:

http://svcconsoleipaddress:9080/ica

Select Users, and then on the next window, select Create a User from the list, as shown in Figure 4-29, and click Go.

Figure 4-29 Create a user

2. From the Create a User window, enter the user ID that you want to create and the password. At the bottom of the window, select the access level that you want to assign to the user (remember that Security Administrator is the maximum level) and choose the SSH public key file that you created for this user to upload, as shown in Figure 4-30. Click OK.

Private key extension: The PuTTY Key Generator saves the private key with the PPK extension.

Important: If the private key was named something other than icat.ppk, make sure that you rename it to the icat.ppk file in the C:\Program Files\IBM\svcconsole\cimom folder. The GUI (which will be used later) expects the file to be called icat.ppk and for it to be in this location. This key is no longer used in SVC 5.1, but it is still valid for the previous version.
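The rename-and-copy step in this note can be scripted. A sketch, with the paths from the text as defaults; adjust them for your installation:

```python
import os
import shutil

# Sketch of the step above: place the saved private key into the CIMOM
# folder under the fixed name icat.ppk, which the GUI expects. The
# default path follows the text; override it for your installation.
def install_icat_key(saved_key,
                     cimom_dir=r"C:\Program Files\IBM\svcconsole\cimom"):
    target = os.path.join(cimom_dir, "icat.ppk")
    shutil.copyfile(saved_key, target)
    return target
```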

Chapter 4. SAN Volume Controller initial configuration 129

Figure 4-30 Create user and password

3. You have completed the user creation process and uploaded the user’s SSH public key, which will be paired later with the user’s private .ppk key, as described in 4.5.3, “Configuring the PuTTY session for the CLI” on page 130. Figure 4-31 shows the successful upload of the SSH admin key.

Figure 4-31 Adding the SSH admin key successfully

4. You have now completed the basic setup requirements for the SVC cluster using the SVC cluster Web interface.

4.5.3 Configuring the PuTTY session for the CLI

Before the CLI can be used, the PuTTY session must be configured using the SSH keys that were generated earlier in 4.5.1, “Generating public and private SSH key pairs using PuTTY” on page 126.

Perform these steps to configure the PuTTY session on the SSH client system:

1. From the System Storage Productivity Center Windows desktop, select Start → Programs → PuTTY → PuTTY to open the PuTTY Configuration GUI window.

2. In the PuTTY Configuration window (Figure 4-32), from the Category pane on the left, click Session, if it is not selected.

Figure 4-32 PuTTY Configuration window

3. In the right pane, under the “Specify the destination you want to connect to” section, select SSH. Under the “Close window on exit” section, select Only on clean exit, which ensures that if there are any connection errors, they are displayed in the user’s window.

4. From the Category pane on the left side of the PuTTY Configuration window, click Connection → SSH to display the PuTTY SSH Configuration window, as shown in Figure 4-33.

Tip: The items selected in the Category pane affect the content that appears in the right pane.

Figure 4-33 PuTTY SSH connection configuration window

5. In the right pane, in the “Preferred SSH protocol version” section, select 2.

6. From the Category pane on the left side of the PuTTY Configuration window, select Connection → SSH → Auth.

7. In the right pane, in the “Private key file for authentication:” field under the Authentication parameters section, either browse to or type the fully qualified directory path and file name of the SSH client private key file that was created earlier (for example, C:\Support Utils\PuTTY\icat.PPK), as shown in Figure 4-34.

Figure 4-34 PuTTY Configuration: Private key location

8. From the Category pane on the left side of the PuTTY Configuration window, click Session.

9. In the right pane, follow these steps, as shown in Figure 4-35:

a. Under the “Load, save, or delete a stored session” section, select Default Settings, and click Save.

b. For the Host Name (or IP address), type the IP address of the SVC cluster.

c. In the Saved Sessions field, type a name (for example, SVC) to associate with this session.

d. Click Save.

Figure 4-35 PuTTY Configuration: Saving a session

You can now either close the PuTTY Configuration window or leave it open to continue.

4.5.4 Starting the PuTTY CLI session

The PuTTY application is required for all CLI tasks. If it was closed for any reason, restart the session as detailed here:

1. From the SVC Console desktop, open the PuTTY application by selecting Start → Programs → PuTTY.

2. On the PuTTY Configuration window (Figure 4-36), select the session saved earlier (in our example, ITSO-SVC1), and click Load.

3. Click Open.

Tip: Normally, output that comes from the SVC is wider than the default PuTTY window size. We recommend that you change your PuTTY window appearance to use a font with a character size of 8. To change, click the Appearance item in the Category tree, as shown in Figure 4-35, and then, click Font. Choose a font with a character size of 8.

Figure 4-36 Open PuTTY command-line session

4. If this is the first time that the PuTTY application has been used since generating and uploading the SSH key pair, a PuTTY Security Alert window opens, warning that the server’s host key is not cached in the registry, as shown in Figure 4-37. Click Yes to cache the host key, which invokes the CLI.

Figure 4-37 PuTTY Security Alert

5. At the Login as: prompt, type admin and press Enter (the user ID is case sensitive). As shown in Example 4-1, the private key used in this PuTTY session is now authenticated against the public key that was uploaded to the SVC cluster.

Example 4-1 Authenticating

login as: admin
Authenticating with public key "rsa-key-20080617"
Last login: Wed Aug 18 03:30:21 2009 from 10.64.210.240
IBM_2145:ITSO-CL1:admin>

You have now completed the tasks that are required to configure the CLI for SVC administration from the SVC Console. You can close the PuTTY session.

4.5.5 Configuring SSH for AIX clients

To configure SSH for AIX clients, follow these steps:

1. The SVC cluster IP address must be reachable, using the ping command, from the AIX workstation from which cluster access is desired.

2. OpenSSL must be installed for OpenSSH to work. Install OpenSSH on the AIX client:

a. The installation images can be found at this Web site:

https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixbp
http://sourceforge.net/projects/openssh-aix

b. Follow the instructions carefully, because OpenSSL must be installed before using SSH.

3. Generate an SSH key pair:

a. Run the cd command to go to the /.ssh directory.

b. Run the ssh-keygen -t rsa command.

c. The following message is displayed:

Generating public/private rsa key pair. Enter file in which to save the key (//.ssh/id_rsa)

d. Pressing Enter will use the default file that is shown in parentheses; otherwise, enter a file name (for example, aixkey), and press Enter.

e. The following prompt is displayed:

Enter a passphrase (empty for no passphrase)

We recommend entering a passphrase when the CLI will be used interactively, because there is no other authentication when connecting through the CLI. After typing in the passphrase, press Enter.

f. The following prompt is displayed:

Enter same passphrase again:

Type the passphrase again, and then, press Enter again.

g. A message is displayed indicating that the key pair has been created. The private key file will have the name entered previously (for example, aixkey). The public key file will have the name entered previously with an extension of .pub (for example, aixkey.pub).

4.6 Using IPv6

SVC V4.3 introduced IPv6 functionality to the console and clusters. You can use IPv4, IPv6, or both in a dual-stack configuration. Migrating to (or from) IPv6 can be done remotely and is nondisruptive, except that you need to remove and redefine the cluster in the SVC Console.

Using a passphrase: If you are generating an SSH key pair so that you can use the CLI interactively, we recommend that you use a passphrase, so that you must authenticate every time that you connect to the cluster. It is possible to use a passphrase-protected key for scripted usage, but you will have to use the expect command or a similar command to have the passphrase passed to the ssh command.

4.6.1 Migrating a cluster from IPv4 to IPv6

As a prerequisite, have IPv6 already enabled and configured on the System Storage Productivity Center/Windows server running the SVC Console. We have configured an interface with IPv4 and IPv6 addresses on the System Storage Productivity Center, as shown in Example 4-2.

Example 4-2 Output of ipconfig on System Storage Productivity Center

C:\Documents and Settings\Administrator>ipconfig

Windows IP Configuration

Ethernet adapter IPv6:

Connection-specific DNS Suffix . :
IP Address. . . . . . . . . . . . : 10.0.1.115
Subnet Mask . . . . . . . . . . . : 255.255.255.0
IP Address. . . . . . . . . . . . : 2001:610::115
IP Address. . . . . . . . . . . . : fe80::214:5eff:fecd:9352%5
Default Gateway . . . . . . . . . :

To migrate a cluster, follow these steps:

1. Select Manage Cluster → Modify IP Addresses, as shown in Figure 4-38.

Figure 4-38 Modify IP Addresses window

Using IPv6: To remotely access the SVC Console and clusters running IPv6, you are required to run Internet Explorer 7 and have IPv6 configured on your local workstation.

2. In the IPv6 section that is shown in Figure 4-38, select an IPv6 interface, and click Modify.

3. Then, in the window that is shown in Figure 4-39:

a. Type an IPv6 prefix in the IPv6 Network Prefix field. The Prefix field can have a value of 0 to 127.

b. Type an IPv6 address in the Cluster IP field.

c. Type an IPv6 address in the Service IP address field.

d. Type an IPv6 gateway in the Gateway field.

e. Click Modify Settings.

Figure 4-39 Modify IP Addresses: Adding IPv6 addresses
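Before typing the values into this window, it can help to sanity-check them. The following is a minimal sketch using Python’s standard ipaddress module; the function name is ours, and the addresses shown are the example values from this section:

```python
import ipaddress

def check_ipv6_settings(prefix_len, cluster_ip, service_ip, gateway):
    """Validate the IPv6 values before entering them in the GUI (sketch)."""
    if not 0 <= prefix_len <= 127:  # range accepted by the Prefix field, per the text
        raise ValueError("prefix length must be between 0 and 127")
    for addr in (cluster_ip, service_ip, gateway):
        ipaddress.IPv6Address(addr)  # raises ValueError on a malformed address

# Example values; substitute your own addresses.
check_ipv6_settings(64, "2001:610::119", "2001:610::118", "2001:610::1")
print("IPv6 settings look valid")
```

A typo in any field raises a ValueError before you commit the change in the GUI.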

4. A confirmation window displays (Figure 4-40). Click X in the upper-right corner to close this tab.

Figure 4-40 Modify IP Addresses window

5. Before you remove the cluster from the SVC Console, test the IPv6 connectivity using the ping command from a cmd.exe session on the System Storage Productivity Center (as shown in Example 4-3 on page 139).

Example 4-3 Testing IPv6 connectivity to the SVC cluster

C:\Documents and Settings\Administrator>ping 2001:0610:0000:0000:0000:0000:0000:119

Pinging 2001:610::119 from 2001:610::115 with 32 bytes of data:

Reply from 2001:610::119: time=3ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms
Reply from 2001:610::119: time<1ms

Ping statistics for 2001:610::119:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 3ms, Average = 0ms

6. In the Viewing Clusters pane, in the GUI Welcome window, select the cluster that you want to remove. Select Remove a Cluster from the list, and click Go.

7. The Viewing Clusters window reopens, without the cluster that you have removed. Select Add a Cluster from the list, and click OK (Figure 4-41).

Figure 4-41 Adding a cluster

8. The Adding a Cluster window opens. Enter your IPv6 address, as shown in Figure 4-42, and click OK.

Figure 4-42 IPv6 address

9. You will be asked to enter your CIM user ID (superuser) and your password (default: passw0rd), as shown in Figure 4-43.

Figure 4-43 Insert CIM user ID and password

10. The Viewing Clusters window reopens with the cluster displaying an IPv6 address, as shown in Figure 4-44. Click Launch the SAN Volume Controller Console for the cluster, and go back to modifying the IP addresses, as you did in step 1.

Figure 4-44 Viewing Clusters window: Displaying the new cluster using the IPv6 address

11. In the Modify IP Addresses window, select the IPv4 address port, select Clear Port Settings, and click Go, as shown in Figure 4-45.

Figure 4-45 Clear Port Settings

12. A confirmation message appears, as shown in Figure 4-46. Click OK.

Figure 4-46 Confirmation of IP address change

13. A second window (Figure 4-47) opens, confirming that the IPv4 stack has been disabled and the associated addresses have been removed. Click Return.

Figure 4-47 IPv4 stack has been removed

4.6.2 Migrating a cluster from IPv6 to IPv4

The process of migrating a cluster from IPv6 to IPv4 is identical to the process described in 4.6.1, “Migrating a cluster from IPv4 to IPv6” on page 137, except that you add IPv4 addresses and remove the IPv6 addresses.

4.7 Upgrading the SVC Console software

This section takes you through the steps to upgrade your existing SVC Console GUI. You can also use these steps to install a new SVC Console on another server.

Follow these steps:

1. Download the latest available version of the ICA application and check for compatibility with your running version from the following Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1002888

2. Save your account definitions, documenting all defined users, passwords, and SSH keys, because you might need to reuse these users, passwords, and keys if you encounter any problems during the GUI upgrade process.

Example 4-4 shows you how to list the defined accounts using the CLI.

Example 4-4 Accounts list

IBM_2145:ITSO-CLS3:admin>svcinfo lsuser
id name      password ssh_key remote usergrp_id usergrp_name
0  superuser yes      no      no     0          SecurityAdmin
1  admin     yes      yes     no     0          SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 0
id 0
name superuser
password yes
ssh_key no
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>svcinfo lsuser 1
id 1
name admin
password yes
ssh_key yes
remote no
usergrp_id 0
usergrp_name SecurityAdmin
IBM_2145:ITSO-CLS3:admin>
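If you want to record these definitions in a script, the per-user detail output (one field name and value per line, as in the example above) is straightforward to parse. The following is a minimal sketch; the function name is ours:

```python
def parse_lsuser_detail(output):
    """Parse 'svcinfo lsuser <id>' detail output into a dictionary.

    Each line of the detail view is a field name followed by its value.
    """
    fields = {}
    for line in output.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value.strip()
    return fields

# Detail output taken from Example 4-4.
detail = """id 1
name admin
password yes
ssh_key yes
remote no
usergrp_id 0
usergrp_name SecurityAdmin"""

user = parse_lsuser_detail(detail)
print(user["name"], user["ssh_key"])  # → admin yes
```

Capturing this output for every user before the upgrade gives you a record to compare against afterward.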

3. Execute the setup.exe file from the location where you have saved and unzipped the latest SVC Console file.

Figure 4-48 shows the location of the setup.exe file on our system.

Figure 4-48 Location of the setup.exe file

4. The installation wizard starts. The first window asks you to shut down any running Windows programs, stop all SVC services, and review the readme file.

5. Figure 4-49 shows how to stop the SVC services.

Figure 4-49 Stop CIMOM service

6. Figure 4-50 shows the wizard Welcome window.

Figure 4-50 Wizard welcome window

After you have reviewed the installation instructions and the readme file, click Next.

7. The installation wizard asks you to read and accept the terms of the license agreement, as shown in Figure 4-51. Click Next.

Figure 4-51 License agreement window

8. The installation detects an existing SVC Console installation (if you are upgrading). If it detects one, it asks you to perform these steps:

– Select Preserve Configuration if you want to keep your existing configuration. (You must make sure that this option is checked.)

– Manually shut down the SVC Console services:

• IBM System Storage SAN Volume Controller Pegasus Server
• Service Location Protocol
• IBM WebSphere Application Server V6 - SVC

There might be differences in the existing services, depending on which version you are upgrading from. Follow the instructions in the wizard dialog for which services to shut down, as shown in Figure 4-52. Click Next.

Figure 4-52 Product Installation Check

9. The installation wizard then checks that the appropriate services are shut down, removes the previous version, and shows the Installation Confirmation window, as shown in Figure 4-53. If the wizard detects any problems, it first shows you a page detailing the possible problems, giving you time to fix them before proceeding.

Important: If you want to keep your SVC configuration, make sure that you select Preserve Configuration. If you omit this selection, you will lose your entire SVC Console setup, and you will have to reconfigure your console as though it were a new installation.

Figure 4-53 Installation Confirmation

10. Figure 4-54 shows the progress of the installation. In our environment, it took approximately 10 minutes to complete.

Figure 4-54 Installation Progress

11. The installation process now starts the migration of the cluster user accounts. Starting with SVC 5.1, the CIMOM has been moved into the cluster, and it is no longer present on the SVC Console or System Storage Productivity Center. The CIMOM authentication login process is performed in the ICA application when we launch the SVC management application.

As part of the migration input, Figure 4-55 shows where to enter the admin password for each of the clusters that you already own.

This password was generated when the SVC cluster was first created and must be saved carefully.

Figure 4-55 Migration Input

12. At the end of the user accounts migration process, you might get the error that is shown in Figure 4-56.

Figure 4-56 SVC cluster user account migration error

This message is normal behavior, because, in our environment, we have implemented only the superuser ID. The GUI upgrade wizard migrates only ordinary user accounts; it is not intended to migrate the superuser account.

If you get this error, when you try to access your SVC cluster using the GUI, you will be required to enter the default CIMOM user ID (superuser) and password (passw0rd), because the superuser account has not been migrated and you must use the default in the meantime.

13. Click Next. The wizard either restarts all of the appropriate SVC Console processes or informs you that you need to reboot, and then gives you a summary of the installation. In this case, we were told that we need to reboot, as shown in Figure 4-57.

Figure 4-57 Installation summary

14. The wizard requires us to restart our computer (Figure 4-58).

Figure 4-58 Installation finished: Requesting reboot

15. Finally, to see the new interface, launch the SVC Console by using the icon on the desktop. Log in and confirm that the upgrade was successful by noting the Console Version number on the right side of the window under the graphic. See Figure 4-59.

Figure 4-59 Launching the upgraded SVC Console

You have completed the upgrade of your SVC Console.

To access the SVC, you must click Clusters in the left pane. You are redirected to the Viewing Clusters window, as shown in Figure 4-60.

Figure 4-60 Viewing Clusters

As you can see, the cluster’s availability status is “Unauthenticated”, which is to be expected. Select the cluster, click Go, and launch the SAN Volume Controller Application. You will be required to enter your CIMOM user ID (superuser) and your password (passw0rd), as shown in Figure 4-61.

Figure 4-61 Sign on to cluster

Finally, you can manage your SVC cluster, as shown in Figure 4-62.

Figure 4-62 Cluster management window

Chapter 5. Host configuration

In this chapter, we describe the basic host configuration procedures that are required to connect supported hosts to the IBM System Storage SAN Volume Controller (SVC).

5.1 SVC setup

Traditionally in IBM SAN Volume Controller (SVC) environments, hosts are connected to an SVC via a storage area network (SAN). In implementations that have high availability requirements (the majority of the target clients for SVC), the SAN is implemented as two separate fabrics, providing a fault-tolerant arrangement of two or more counterpart SANs. For the hosts, each SAN provides alternate paths to the resources (virtual disks (VDisks)) that are provided by the SVC.

Starting with SVC 5.1, iSCSI is introduced as an alternative protocol for attaching hosts to the SVC via a LAN. However, within the SVC, all communications with back-end storage subsystems, and with other SVC clusters, take place via Fibre Channel (FC).

For iSCSI/LAN-based access to the SVC, using a single network or two physically separated networks is supported. The iSCSI feature is a software feature that is provided by the SVC 5.1 code. It is available on the new CF8 nodes and also on the existing nodes that support the SVC 5.1 release. The existing SVC node hardware has multiple 1 Gbps Ethernet ports. Until now, only one 1 Gbps Ethernet port has been used, and only for cluster configuration. With the introduction of iSCSI, both ports can now be used.

Redundant paths to VDisks can be provided for the SAN, as well as for the iSCSI environment.

Figure 5-1 shows the attachments that are supported with the SVC 5.1 release.

Figure 5-1 SVC host attachment overview

5.1.1 Fibre Channel and SAN setup overview

Hosts using Fibre Channel (FC) as the connection to an SVC are always connected to a SAN switch. For SVC configurations, we strongly recommend the use of two redundant SAN fabrics. Therefore, each server is equipped with a minimum of two host bus adapters (HBAs), with each of the HBAs connected to a SAN switch in one of the two fabrics (assuming one port per HBA).

SVC imposes no special limit on the FC optical distance between the SVC nodes and the host servers. A server can therefore be attached to an edge switch in a core-edge configuration while the SVC cluster is at the core. SVC supports up to three inter-switch link (ISL) hops in the fabric. Therefore, the server and the SVC node can be separated by up to five actual FC links, four of which can be 10 km (6.2 miles) long if longwave small form-factor pluggables (SFPs) are used. For high performance servers, the rule is to avoid ISL hops, that is, connect the servers to the same switch to which the SVC is connected, if possible.

Remember these limits when connecting host servers to an SVC:

� Up to 256 hosts per I/O Group, which results in a total of 1,024 hosts per cluster. Note that if the same host is connected to multiple I/O Groups of a cluster, it counts as a host in each of these groups.

� A total of 512 distinct configured host worldwide port names (WWPNs) are supported per I/O Group. This limit is the sum of the FC host ports and the host iSCSI names (an internal WWPN is generated for each iSCSI name) that are associated with all of the hosts that are associated with a single I/O Group.

The access from a server to an SVC cluster via the SAN fabrics is defined by the use of zoning. Consider these rules for host zoning with the SVC:

� For configurations of fewer than 64 hosts per cluster, the SVC supports a simple set of zoning rules that enables the creation of a small set of host zones for various environments. Switch zones containing HBAs must contain no more than 40 initiators in total, including the SVC ports that act as initiators. Thus, a valid zone is 32 host ports plus eight SVC ports. This restriction exists because the order N² scaling of the number of registered state change notification (RSCN) messages with the number of initiators per zone (N) can cause problems. We recommend that you zone using single HBA port zoning, as described in the next paragraph.

� For configurations of more than 64 hosts per cluster, the SVC supports a more restrictive set of host zoning rules. Each HBA port must be placed in a separate zone. Also included in this zone is exactly one port from each SVC node in the I/O Groups that are associated with this host. We recommend that hosts are zoned this way in smaller configurations, too, but it is not mandatory.

� Switch zones containing HBAs must contain HBAs from similar hosts or similar HBAs in the same host. For example, AIX and Windows NT® hosts must be in separate zones, and QLogic and Emulex adapters must be in separate zones.

� To obtain the best performance from a host with multiple FC ports, ensure that each FC port of a host is zoned with a separate group of SVC ports.

� To obtain the best overall performance of the subsystem and to prevent overloading, the workload to each SVC port must be equal, typically by zoning approximately the same number of host FC ports to each SVC FC port.

� For any given VDisk, the number of paths through the SAN from the SVC nodes to a host must not exceed eight. For most configurations, four paths to an I/O Group (four paths to each VDisk that is provided by this I/O Group) are sufficient.

Figure 5-2 on page 156 shows an overview for a setup with servers that have two single port HBAs each. Follow this method to connect them:

� Try to distribute the hosts equally between two logical sets per I/O Group. Always connect hosts from the same set to the same group of SVC ports. This “port group” includes exactly one port from each SVC node in the I/O Group. The zoning defines the correct connections.

� The “port groups” are defined this way:

– Hosts in host set one of an I/O Group are always zoned to the P1 and P4 ports on both nodes, for example, N1/N2 of I/O Group zero.

– Hosts in host set two of an I/O Group are always zoned to the P2 and P3 ports on both nodes of an I/O Group.

� You can create aliases for these “port groups” (per I/O Group):

– Fabric A: IOGRP0_PG1 N1_P1;N2_P1,IOGRP0_PG2 N1_P3;N2_P3

– Fabric B: IOGRP0_PG1 N1_P4;N2_P4,IOGRP0_PG2 N1_P2;N2_P2

� Create host zones by always using the host port WWPN, plus the PG1 alias for hosts in the first host set. Always use the host port WWPN, plus the PG2 alias for hosts from the second host set. If a host has to be zoned to multiple I/O Groups, simply add the PG1 or PG2 aliases from the specific I/O Groups to the host zone.

Using this schema provides four paths to one I/O Group for each host. It helps to maintain an equal distribution of host connections on the SVC ports. Figure 5-2 shows an overview of this host zoning schema.

Figure 5-2 Overview of four path host zoning

Whenever possible, we recommend using the minimum number of paths that are necessary to achieve sufficient redundancy in the SAN environment; for SVC environments, this means no more than four paths per I/O Group or VDisk.

Remember that all paths have to be managed by the multipath driver on the host side. If we assume a server is connected via four ports to the SVC, each VDisk is seen via eight paths. With 125 VDisks mapped to this server, the multipath driver has to support handling up to 1,000 active paths (8 x 125). You can obtain details and current limitations for the IBM Subsystem Device Driver (SDD) in Storage Multipath Subsystem Device Driver User’s Guide, GC52-1309-01, at this Web site:

http://www-01.ibm.com/support/docview.wss?uid=ssg1S7000303&aid=1
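The path arithmetic in the paragraph above is worth checking against your own configuration before you find out the hard way that the multipath driver limit has been exceeded. The following is a small sketch using the numbers from the text; the function name is ours:

```python
def active_paths(paths_per_vdisk, vdisks):
    """Total active paths that the host multipath driver must manage."""
    return paths_per_vdisk * vdisks

# Example from the text: a server connected via four ports sees each
# VDisk over eight paths; 125 VDisks are mapped to this server.
total = active_paths(8, 125)
print(total)  # → 1000
```

With the recommended four-path zoning instead, the same 125 VDisks produce only 500 active paths, which is why minimizing paths per I/O Group matters at scale.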

For hosts using four HBAs/ports with eight connections to an I/O Group, use the zoning schema that is shown in Figure 5-3. You can combine this schema with the previous four path zoning schema.

Figure 5-3 Overview of eight path host zoning

5.1.2 Port mask

SVC V4.1 added the concept of a port mask. With prior releases, any particular host saw the same set of SCSI logical unit numbers (LUNs) from each of the four FC ports in each node in a particular I/O Group.

The port mask is associated with a host object. The port mask controls which SVC (target) ports any particular host can access. The port mask applies to logins from any of the host (initiator) ports associated with the host object in the configuration model. The port mask consists of four binary bits, represented in the command-line interface (CLI) as 0 or 1. The rightmost bit is associated with FC port 1 on each node. The leftmost bit is associated with port 4. A 1 in any particular bit position allows access to that port and a zero denies access. The default port mask is 1111, preserving the behavior of the product prior to the introduction of this feature.
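The bit layout described above can be sketched as a small helper. This is an illustration of the documented semantics, not SVC code; the function name is ours:

```python
def port_allowed(port_mask, fc_port):
    """Return True if a 4-bit port mask grants access to FC port 1-4.

    Per the text: the rightmost bit corresponds to FC port 1, the
    leftmost bit to FC port 4; 1 allows access and 0 denies it.
    """
    if len(port_mask) != 4 or any(c not in "01" for c in port_mask):
        raise ValueError("port mask must be four binary digits")
    if not 1 <= fc_port <= 4:
        raise ValueError("FC port number must be 1-4")
    # Index from the right: port 1 -> last character, port 4 -> first.
    return port_mask[-fc_port] == "1"

# The default mask 1111 preserves the pre-V4.1 behavior: all ports allowed.
assert all(port_allowed("1111", p) for p in (1, 2, 3, 4))
# Mask 0011 allows only FC ports 1 and 2.
assert port_allowed("0011", 1) and not port_allowed("0011", 4)
```

Reading the mask right to left, rather than left to right, is the detail that most often trips people up when setting this value in the CLI.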

For each login between an HBA port and an SVC node port, SVC decides whether to allow access or to deny access by examining the port mask that is associated with the host object to which the HBA belongs. If access is denied, SVC responds to SCSI commands as though the HBA port is unknown to the SVC.

5.2 iSCSI overview

iSCSI is a block-level protocol that encapsulates SCSI commands into TCP/IP packets and, thereby, leverages an existing IP network instead of requiring FC HBAs and a SAN fabric infrastructure.

5.2.1 Initiators and targets

An iSCSI client, which is known as an (iSCSI) initiator, sends SCSI commands over an IP network to an iSCSI target. We refer to a single iSCSI initiator or iSCSI target as an iSCSI node. An iSCSI target refers to a storage resource that is located on an iSCSI server or, to be more precise, to one of potentially many instances of iSCSI nodes running on that server as a “target.”

5.2.2 Nodes

There are one or more iSCSI nodes within a network entity. The iSCSI node is accessible via one or more network portals. A network portal is a component of a network entity that has a TCP/IP network address and that can be used by an iSCSI node.

An iSCSI node is identified by its unique iSCSI name, which is referred to as an iSCSI qualified name (IQN). Remember that this name serves only for the identification of the node; it is not the node’s address, and in iSCSI, the name is separated from the addresses. This separation allows multiple iSCSI nodes to use the same addresses or, as it is implemented in the SVC, the same iSCSI node to use multiple addresses.

5.2.3 IQN

An SVC cluster can provide up to eight iSCSI targets, one per node. Each SVC node has its own IQN, which, by default, is in this form:

iqn.1986-03.com.ibm:2145.<clustername>.<nodename>

An iSCSI host in SVC is defined by specifying its iSCSI initiator names. For example, here is an IQN of a Windows server:

iqn.1991-05.com.microsoft:itsoserver01

During the configuration of an iSCSI host in the SVC, you must specify the host’s initiator IQNs. You can read about host creation in detail in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339, and in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.
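To make the default target-name format concrete, here is a one-line helper that assembles it from the cluster and node names. The lowercasing of the names is an assumption of this sketch (iSCSI names are conventionally lowercase); verify against your cluster’s actual IQNs:

```python
def svc_target_iqn(cluster_name, node_name):
    """Assemble the default SVC node target IQN, per the format above."""
    return "iqn.1986-03.com.ibm:2145.%s.%s" % (
        cluster_name.lower(), node_name.lower())

print(svc_target_iqn("ITSO-CL1", "node1"))
# → iqn.1986-03.com.ibm:2145.itso-cl1.node1
```

Being able to predict a node’s target IQN this way is handy when preparing host-side discovery and multipath configuration before the cluster is reachable.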

An alias string can also be associated with an iSCSI node. The alias allows an organization to associate a user friendly string with the iSCSI name. However, the alias string is not a substitute for the iSCSI name.

Figure 5-4 on page 159 shows an overview of iSCSI implementation in the SVC.

Figure 5-4 SVC iSCSI overview

A host that is using iSCSI as the communication protocol to access its VDisks on an SVC cluster uses its single or multiple Ethernet adapters to connect to an IP LAN. The nodes of the SVC cluster are connected to the LAN by the existing 1 Gbps Ethernet ports on the node. For iSCSI, both ports can be used.

Note that Ethernet link aggregation (port trunking) or “channel bonding” for the SVC nodes’ Ethernet ports is not supported for the 1 Gbps ports in this release. The support for Jumbo Frames, that is, support for MTU sizes greater than 1,500 bytes, is planned for future SVC releases.

For each SVC node, that is, for each instance of an iSCSI target node in the SVC node, two IPv4 and two IPv6 addresses or iSCSI network portals can be defined. Figure 2-12 on page 29 shows one IPv4 and one IPv6 address per Ethernet port.

5.3 VDisk discovery

Hosts can discover VDisks through one of the following three mechanisms:

� Internet Storage Name Service (iSNS)

SVC can register itself with an iSNS name server; the IP address of this server is set using the svctask chcluster command. A host can then query the iSNS server for available iSCSI targets.

- Service Location Protocol (SLP)

The SVC node runs an SLP daemon, which responds to host requests. This daemon reports the available services on the node. One service is the CIMOM, which runs on the configuration node; iSCSI I/O service can now also be reported.

Chapter 5. Host configuration 159

- SCSI Send Target request

The host can also send a Send Target request using the iSCSI protocol to the iSCSI TCP/IP port (port 3260). You must define the network portal IP addresses of the iSCSI targets before a discovery can be started.
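On a Linux host with the open-iscsi tools, a Send Targets discovery against an SVC portal might look like the following sketch. The portal address and the target IQN shown here are placeholders, not values from a real cluster:

```shell
# Ask the portal for its targets (iSCSI TCP port 3260).
iscsiadm -m discovery -t sendtargets -p 10.0.1.10:3260

# Log in to one of the discovered targets (IQN is illustrative).
iscsiadm -m node -T iqn.1986-03.com.ibm:2145.itsocluster.node1 \
         -p 10.0.1.10:3260 --login
```

The discovery step only returns target names and portals; the login step establishes the session over which SCSI commands flow.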

5.4 Authentication

Authentication of hosts is optional; by default, it is disabled. The user can choose to enable Challenge Handshake Authentication Protocol (CHAP) authentication, which involves sharing a CHAP secret between the cluster and the host. If the host does not supply the correct secret, the SVC does not allow it to perform I/O to VDisks. The cluster can also be assigned a CHAP secret.
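As background, CHAP (RFC 1994) proves knowledge of the secret without sending it: the responder returns MD5(identifier || secret || challenge). The following sketch computes such a response with standard tools; the identifier byte, secret, and challenge bytes are made up for illustration and are not from a real SVC exchange:

```shell
# CHAP response = MD5(one-byte identifier || shared secret || challenge).
# Identifier 0x01 and challenge bytes 0xDE 0xAD 0xBE 0xEF are written as
# octal escapes; all values are illustrative.
response=$(printf '\001%s\336\255\276\357' 'my-chap-secret' | md5sum | cut -d' ' -f1)
echo "$response"    # 32 hex digits; the peer computes the same value and compares
```

Because the challenge changes on every exchange, a captured response cannot be replayed.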

A new feature with iSCSI is that the IP addresses, which are used to address an iSCSI target on an SVC node, can move between the nodes of an I/O Group. IP addresses are only moved from one node to its partner node if a node goes through a planned or unplanned restart. If the Ethernet link to the SVC cluster fails due to a cause outside of the SVC (such as a disconnected cable or a failed Ethernet router), the SVC makes no attempt to fail over an IP address to restore IP access to the cluster. To enable validation of the Ethernet access to the nodes, each node responds to ping at the standard one-per-second rate without frame loss.

The SVC 5.1 release introduced a new concept, which is used for handling the iSCSI IP address failover, that is called a “clustered Ethernet port”. A clustered Ethernet port consists of one physical Ethernet port on each node in the cluster and contains configuration settings that are shared by all of these ports. These clustered ports are referred to as Port 1 and Port 2 in the CLI or GUI on each node of an SVC cluster. Clustered Ethernet ports can be used for iSCSI or management ports.

Figure 5-5 on page 161 shows an example of an iSCSI target node failover. It gives a simplified overview of what happens during a planned or unplanned node restart in an SVC I/O Group:

1. During normal operation, one iSCSI target node instance is running on each SVC node. All of the IP addresses (IPv4/IPv6) belonging to this iSCSI target, including the management addresses if the node acts as the configuration node, are presented on the two ports (P1/P2) of a node.

2. During a restart of an SVC node (N1), the iSCSI target node, including all of its network portal (IPv4/IPv6) IP addresses defined on Port1/Port2 and the management (IPv4/IPv6) IP addresses (if N1 acted as the configuration node), fails over to Port1/Port2 of the partner node within the I/O Group, that is, node N2. An iSCSI initiator running on a server reconnects to its iSCSI target, that is, the same IP addresses that are now presented by the other node of the SVC cluster.

3. As soon as the node (N1) has finished its restart, the iSCSI target node (including its IP addresses) running on N2 fails back to N1. Again, the iSCSI initiator running on a server reconnects to its iSCSI target. The management addresses do not fail back; N2 remains in the role of the configuration node for this cluster.

Figure 5-5 iSCSI node failover scenario

From the server’s point of view, a multipathing driver (MPIO) is not required to handle an SVC node failover. In the case of a node restart, the server simply reconnects to the IP addresses of the iSCSI target node, which reappear after several seconds on the ports of the partner node.

A host multipathing driver for iSCSI is required in these situations:

- To protect a server from network link failures, including port failures on the SVC nodes

- To protect a server from a server HBA failure (if two HBAs are in use)

- To protect a server from network failures, if the server is connected via two HBAs to two separate networks

- To provide load balancing on the server’s HBAs and the network links

The commands for the configuration of the iSCSI IP addresses have been separated from the configuration of the cluster IP addresses.

The following commands are new commands for managing iSCSI IP addresses:

- The svcinfo lsportip command lists the iSCSI IP addresses assigned for each port on each node in the cluster.

- The svctask cfgportip command assigns an IP address to each node’s Ethernet port for iSCSI I/O.

The following commands are new commands for managing the cluster IP addresses:

- The svcinfo lsclusterip command returns a list of the cluster management IP addresses configured for each port.

- The svctask chclusterip command modifies the IP configuration parameters for the cluster.

You can obtain a detailed description of how to use these commands in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339.
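As a sketch of their use (the node ID, addresses, and port number here are placeholders; check the command reference for the exact syntax of your release), assigning an iSCSI portal address to Ethernet port 1 of node 1 and then listing the port configuration might look like:

```shell
# Assign an IPv4 portal address to port 1 of node 1 (values are placeholders).
svctask cfgportip -node 1 -ip 10.0.1.10 -mask 255.255.255.0 -gw 10.0.1.1 1

# List the resulting iSCSI IP configuration for port 1.
svcinfo lsportip 1
```

Keeping the iSCSI portal addresses separate from the cluster management addresses means that reconfiguring storage access never disturbs management connectivity.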

The parameters for remote services (ssh and Web services) will remain associated with the cluster object. During a software upgrade from 4.3.1, the configuration settings for the cluster will be used to configure clustered Ethernet Port1.

For iSCSI-based access, use two separate networks and separate the iSCSI traffic within the networks by using a dedicated VLAN path for storage traffic. This design prevents any single IP interface, switch, or target port failure from compromising the host server’s access to the VDisk LUNs.

5.5 AIX-specific information

The following section details specific information that relates to the connection of AIX-based hosts into an SVC environment.

5.5.1 Configuring the AIX host

To configure the AIX host, follow these steps:

1. Install the HBAs in the AIX host system.

2. Ensure that you have installed the correct operating systems and version levels on your host, including any updates and Authorized Program Analysis Reports (APARs) for the operating system.

3. Connect the AIX host system to the FC switches.

4. Configure the FC switches (zoning) if needed.

5. Install and configure the 2145 and IBM Subsystem Device Driver (SDD) drivers.

6. Configure the host, VDisks, and host mapping on the SAN Volume Controller.

7. Run the cfgmgr command to discover the VDisks created on the SVC.

The following sections detail the current support information. It is vital that you regularly check the Web sites that are listed for any updates.

5.5.2 Operating system versions and maintenance levels

At the time of writing, the following AIX levels are supported:

- AIX V4.3.3
- AIX 5L™ V5.1
- AIX 5L V5.2
- AIX 5L V5.3
- AIX V6.1.3

For the latest information, and device driver support, always refer to this site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX

5.5.3 HBAs for IBM System p hosts

Ensure that your IBM System p AIX hosts use the correct host bus adapters (HBAs).

AIX-specific information: In this section, the IBM System p information applies to all AIX hosts that are listed on the SVC interoperability support site, including IBM System i partitions and IBM JS blades.

The following IBM Web site provides current interoperability information about supported HBAs and firmware:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_pSeries

Installing the host attachment script on IBM System p hosts

To attach an IBM System p AIX host, you must install the AIX host attachment script.

Perform the following steps to install the host attachment scripts:

1. Access the following Web site:

http://www.ibm.com/servers/storage/support/software/sdd/downloading.html

2. Select Host Attachment Scripts for AIX.

3. Select either Host Attachment Script for SDDPCM or Host Attachment Scripts for SDD from the options, depending on your multipath device driver.

4. Download the AIX host attachment script for your multipath device driver.

5. Follow the instructions that are provided on the Web site or any readme files to install the script.

5.5.4 Configuring for fast fail and dynamic tracking

For host systems that run AIX 5L V5.2 or later, you can achieve the best results by using the fast fail and dynamic tracking attributes.

Perform the following steps to configure your host system to use the fast fail and dynamic tracking attributes:

1. Issue the following command to set the fast fail error recovery policy on the FC SCSI I/O Controller Protocol Device of each adapter:

chdev -l fscsi0 -a fc_err_recov=fast_fail

The previous command was for adapter fscsi0. Example 5-1 shows the command for both adapters on our test system running AIX 5L V5.3.

Example 5-1 Enable fast fail

#chdev -l fscsi0 -a fc_err_recov=fast_fail
fscsi0 changed
#chdev -l fscsi1 -a fc_err_recov=fast_fail
fscsi1 changed

2. Issue the following command to enable dynamic tracking for each FC device:

chdev -l fscsi0 -a dyntrk=yes

The previous example command was for adapter fscsi0. Example 5-2 on page 164 shows the command for both adapters on our test system running AIX 5L V5.3.

Note: The maximum number of FC ports that are supported in a single host (or logical partition) is four. These ports can be four single-port adapters or two dual-port adapters or a combination, as long as the maximum number of ports that are attached to the SAN Volume Controller does not exceed four.

Example 5-2 Enable dynamic tracking

#chdev -l fscsi0 -a dyntrk=yes
fscsi0 changed
#chdev -l fscsi1 -a dyntrk=yes
fscsi1 changed
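After setting both attributes, you can verify them with the standard AIX lsattr command. This is a sketch (the adapter names match our test system; it only runs on AIX):

```shell
# Show the error recovery and dynamic tracking settings for each adapter.
for a in fscsi0 fscsi1; do
    lsattr -El "$a" -a fc_err_recov -a dyntrk
done
```

Expect fc_err_recov to read fast_fail and dyntrk to read yes after the chdev commands above.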

Host adapter configuration settings

You can check the availability of the FC host adapters by using the command shown in Example 5-3.

Example 5-3 FC host adapter availability

#lsdev -Cc adapter | grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter

You can find the worldwide port name (WWPN) of your FC host adapter and check the firmware level, as shown in Example 5-4. The Network Address field in the output is the WWPN of the FC adapter.

Example 5-4 FC host adapter settings and WWPN

#lscfg -vpl fcs0
  fcs0             U0.1-P2-I4/Q1  FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A68D
        Manufacturer................001E
        Device Specific.(CC)........2765
        FRU Number.................. 00P4495
        Network Address.............10000000C932A7FB
        ROS Level and ID............02C03951
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401210
        Device Specific.(Z5)........02C03951
        Device Specific.(Z6)........06433951
        Device Specific.(Z7)........07433951
        Device Specific.(Z8)........20000000C932A7FB
        Device Specific.(Z9)........CS3.91A1
        Device Specific.(ZA)........C1D3.91A1
        Device Specific.(ZB)........C2D3.91A1
        Device Specific.(YL)........U0.1-P2-I4/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9002
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location: U0.1-P2-I4/Q1

5.5.5 Subsystem Device Driver (SDD) Path Control Module (SDDPCM)

SDD is a pseudo device driver that is designed to support the multipath configuration environments within IBM products. It resides on a host system along with the native disk device driver and provides the following functions:

- Enhanced data availability
- Dynamic I/O load balancing across multiple paths
- Automatic path failover protection
- Concurrent download of licensed internal code

SDD works by grouping each physical path to an SVC logical unit number (LUN), represented by an individual hdisk device within AIX, into a vpath device. For example, if you have four physical paths to an SVC LUN, this design produces four new hdisk devices within AIX. From this point forward, AIX uses the vpath device to route I/O to the SVC LUN. Therefore, when making a Logical Volume Manager (LVM) Volume Group by using mkvg, we specify the vpath device as the destination, not the hdisk device.
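The grouping rule itself is simple: physical paths (hdisks) that report the same SVC LUN serial belong to the same vpath. The following toy shell sketch (a model of the rule, not SDD itself) demonstrates the grouping with awk, using hdisk/serial pairs taken from Example 5-15:

```shell
# One line per physical path: "<hdisk> <LUN serial>".
paths="hdisk3 60050768018301BF2800000000000015
hdisk7 60050768018301BF2800000000000015
hdisk4 60050768018301BF2800000000000016
hdisk8 60050768018301BF2800000000000016"

# Fold paths with the same serial into one group, as SDD folds them
# into one vpath device.
groups=$(printf '%s\n' "$paths" |
  awk '{ g[$2] = g[$2] " " $1 } END { for (s in g) print s ":" g[s] }' |
  sort)
printf '%s\n' "$groups"
```

Each output line corresponds to one vpath: the LUN serial followed by the hdisks that reach it.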

The SDD support matrix for AIX is available at this Web site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_AIX

SDD/SDDPCM installation

After downloading the appropriate version of SDD, install it using the standard AIX installation procedure. The currently supported SDD levels are available at:

http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329528&taskind=2

Check the driver readme file and make sure that your AIX system fulfills all of the prerequisites.

SDD installation

In Example 5-5, we show the appropriate version of SDD downloaded into the /tmp/sdd directory. From here, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDD. Finally, we initiate the installp command, which installs SDD onto this AIX host.

Example 5-5 Installing SDD on AIX

#ls -l
total 3032
-rw-r-----   1 root     system   1546240 Jun 24 15:29 devices.sdd.53.rte.tar
#tar -tvf devices.sdd.53.rte.tar
-rw-r-----   0 0        1536000 Oct 06 11:37:13 2006 devices.sdd.53.rte
#tar -xvf devices.sdd.53.rte.tar
x devices.sdd.53.rte, 1536000 bytes, 3000 media blocks.
# inutoc .
#ls -l
total 6032
-rw-r--r--   1 root     system       476 Jun 24 15:33 .toc
-rw-r-----   1 root     system   1536000 Oct 06 2006  devices.sdd.53.rte
-rw-r-----   1 root     system   1546240 Jun 24 15:29 devices.sdd.53.rte.tar

# installp -ac -d . all

Example 5-6 checks the installation of SDD.

Example 5-6 Checking SDD device driver

#lslpp -l | grep -i sdd
  devices.sdd.53.rte         1.7.0.0  COMMITTED  IBM Subsystem Device Driver
  devices.sdd.53.rte         1.7.0.0  COMMITTED  IBM Subsystem Device Driver

We can also check that the SDD server is operational, as shown in Example 5-7.

Example 5-7 SDD server is operational

#lssrc -s sddsrv
Subsystem         Group            PID     Status
 sddsrv                            168430  active

#ps -aef | grep sdd
    root 135174  41454   0 15:38:20  pts/1  0:00 grep sdd
    root 168430 127292   0 15:10:27      -  0:00 /usr/sbin/sddsrv

Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM, and SDD Web interface” on page 251.

SDDPCM installation

In Example 5-8, we show the appropriate version of SDDPCM downloaded into the /tmp/sddpcm directory. From here, we extract it and initiate the inutoc command, which generates a dot.toc (.toc) file that is needed by the installp command prior to installing SDDPCM. Finally, we initiate the installp command, which installs SDDPCM onto this AIX host.

Example 5-8 Installing SDDPCM on AIX

# ls -l
total 3232
-rw-r-----   1 root     system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# tar -tvf devices.sddpcm.61.rte.tar
-rw-r-----   271001 449628 1638400 Oct 31 12:16:23 2007 devices.sddpcm.61.rte
# tar -xvf devices.sddpcm.61.rte.tar
x devices.sddpcm.61.rte, 1638400 bytes, 3200 media blocks.
# inutoc .
# ls -l
total 6432
-rw-r--r--   1 root     system       531 Jul 15 13:25 .toc
-rw-r-----   1 271001   449628   1638400 Oct 31 2007  devices.sddpcm.61.rte
-rw-r-----   1 root     system   1648640 Jul 15 13:24 devices.sddpcm.61.rte.tar
# installp -ac -d . all

The 2145 devices.fcp file: A specific “2145” devices.fcp file no longer exists. The standard devices.fcp file now has combined support for SVC/Enterprise Storage Server/DS8000/DS6000.

Example 5-9 checks the installation of SDDPCM.

Example 5-9 Checking SDDPCM device driver

# lslpp -l | grep sddpcm
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61
  devices.sddpcm.61.rte      2.2.0.0  COMMITTED  IBM SDD PCM for AIX V61

Enabling the SDD or SDDPCM Web interface is shown in 5.15, “Using SDDDSM, SDDPCM, and SDD Web interface” on page 251.

5.5.6 Discovering the assigned VDisk using SDD and AIX 5L V5.3

Before adding a new volume from the SVC, the AIX host system Kanaga had a simple, typical configuration, as shown in Example 5-10.

Example 5-10 Status of AIX host system Kanaga

#lspv
hdisk0          0009cddaea97bf61        rootvg          active
hdisk1          0009cdda43c9dfd5        rootvg          active
hdisk2          0009cddabaef1d99        rootvg          active
#lsvg
rootvg

In Example 5-11, we show SVC configuration information relating to our AIX host, specifically, the host definition, the VDisks created for this host, and the VDisk-to-host mappings for this configuration.

Using the SVC CLI, we can check that the host WWPNs, which are listed in Example 5-4 on page 164, are logged in to the SVC for the host definition Kanaga, by entering:

svcinfo lshost Kanaga

We can also find the serial numbers of the VDisks by using the following command:

svcinfo lshostvdiskmap Kanaga

Example 5-11 SVC definitions for host system Kanaga

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Kanaga
id 2
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 10000000C932A7FB
node_logged_in_count 2
state active
WWPN 10000000C932A800
node_logged_in_count 2
state active

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Kanaga
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  Kanaga 0       13       Kanaga0001 10000000C932A7FB 60050768018301BF2800000000000015
2  Kanaga 1       14       Kanaga0002 10000000C932A7FB 60050768018301BF2800000000000016
2  Kanaga 2       15       Kanaga0003 10000000C932A7FB 60050768018301BF2800000000000017
2  Kanaga 3       16       Kanaga0004 10000000C932A7FB 60050768018301BF2800000000000018

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0001
id 13
name Kanaga0001
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000015
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap Kanaga0001
id name       SCSI_id host_id host_name wwpn             vdisk_UID
13 Kanaga0001 0       2       Kanaga    10000000C932A7FB 60050768018301BF2800000000000015
13 Kanaga0001 0       2       Kanaga    10000000C932A800 60050768018301BF2800000000000015

We need to run cfgmgr on the AIX host to discover the new disks and enable us to start the vpath configuration; if we run the config manager (cfgmgr) on each FC adapter, it will not create the vpaths, only the new hdisks. To configure the vpaths, we need to run the cfallvpath command after issuing the cfgmgr command on each of the FC adapters:

# cfgmgr -l fcs0
# cfgmgr -l fcs1
# cfallvpath

Alternatively, use the cfgmgr -vS command to check the complete system. This command will probe the devices sequentially across all FC adapters and attached disks; however, it is extremely time intensive:

# cfgmgr -vS

The raw SVC disk configuration of the AIX host system now appears, as shown in Example 5-12. We can see the multiple hdisk devices, representing the multiple routes to the same SVC LUN, and we can see the vpath devices available for configuration.

Example 5-12 VDisks from SVC added with multiple separate paths for each VDisk

#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02       SAN Volume Controller Device
hdisk4  Available 1Z-08-02       SAN Volume Controller Device
hdisk5  Available 1Z-08-02       SAN Volume Controller Device
hdisk6  Available 1Z-08-02       SAN Volume Controller Device
hdisk7  Available 1D-08-02       SAN Volume Controller Device
hdisk8  Available 1D-08-02       SAN Volume Controller Device
hdisk9  Available 1D-08-02       SAN Volume Controller Device
hdisk10 Available 1D-08-02       SAN Volume Controller Device
hdisk11 Available 1Z-08-02       SAN Volume Controller Device
hdisk12 Available 1Z-08-02       SAN Volume Controller Device
hdisk13 Available 1Z-08-02       SAN Volume Controller Device
hdisk14 Available 1Z-08-02       SAN Volume Controller Device
hdisk15 Available 1D-08-02       SAN Volume Controller Device
hdisk16 Available 1D-08-02       SAN Volume Controller Device
hdisk17 Available 1D-08-02       SAN Volume Controller Device
hdisk18 Available 1D-08-02       SAN Volume Controller Device
vpath0  Available                Data Path Optimizer Pseudo Device Driver
vpath1  Available                Data Path Optimizer Pseudo Device Driver
vpath2  Available                Data Path Optimizer Pseudo Device Driver
vpath3  Available                Data Path Optimizer Pseudo Device Driver

To make a Volume Group (for example, itsoaixvg) to host the vpath1 device, we use the mkvg command passing the vpath device as a parameter instead of the hdisk device, which is shown in Example 5-13 on page 170.

Example 5-13 Running the mkvg command

#mkvg -y itsoaixvg vpath1
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg

Now, by running the lspv command, we can see that vpath1 has been assigned into the itsoaixvg Volume Group, as shown in Example 5-14.

Example 5-14 Showing the vpath assignment into the Volume Group

#lspv
hdisk0          0009cddaea97bf61        rootvg          active
hdisk1          0009cdda43c9dfd5        rootvg          active
hdisk2          0009cddabaef1d99        rootvg          active
vpath1          0009cddabce27ba5        itsoaixvg       active

The lsvpcfg command also displays the new relationship between vpath1 and the itsoaixvg Volume Group, but it also shows each hdisk that is associated with vpath1, as shown in Example 5-15.

Example 5-15 Displaying the vpath to hdisk to Volume Group relationship

#lsvpcfg
vpath0 (Avail ) 60050768018301BF2800000000000015 = hdisk3 (Avail ) hdisk7 (Avail )
vpath1 (Avail pv itsoaixvg) 60050768018301BF2800000000000016 = hdisk4 (Avail ) hdisk8 (Avail )
vpath2 (Avail ) 60050768018301BF2800000000000017 = hdisk5 (Avail ) hdisk9 (Avail )
vpath3 (Avail ) 60050768018301BF2800000000000018 = hdisk6 (Avail ) hdisk10 (Avail )

In Example 5-16, running the lspv vpath1 command shows a more verbose output for vpath1.

Example 5-16 Verbose details of vpath1

#lspv vpath1
PHYSICAL VOLUME:    vpath1                   VOLUME GROUP:     itsoaixvg
PV IDENTIFIER:      0009cddabce27ba5         VG IDENTIFIER     0009cdda00004c000000011abce27c89
PV STATE:           active
STALE PARTITIONS:   0                        ALLOCATABLE:      yes
PP SIZE:            8 megabyte(s)            LOGICAL VOLUMES:  0
TOTAL PPs:          639 (5112 megabytes)     VG DESCRIPTORS:   2
FREE PPs:           639 (5112 megabytes)     HOT SPARE:        no
USED PPs:           0 (0 megabytes)          MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION:  128..128..127..128..128
USED DISTRIBUTION:  00..00..00..00..00

5.5.7 Using SDD

With SDD, we can check the status of the adapters and devices that are now under SDD control by using the datapath command set. In Example 5-17 on page 171, we can see the status of both HBA cards as NORMAL and ACTIVE.

Example 5-17 SDD commands used to check the availability of the adapters

#datapath query adapter

Active Adapters :2

Adpt#           Name    State     Mode        Select    Errors   Paths  Active
    0         fscsi0    NORMAL    ACTIVE           0         0       4       1
    1         fscsi1    NORMAL    ACTIVE          56         0       4       1

In Example 5-18, we see detailed information about each vpath device. Initially, we see that vpath1 is the only vpath device in an open status. It is open, because it is the only vpath that is currently assigned to a Volume Group. Additionally, for vpath1, we see that only path 1 and path 3 have been selected (used) by SDD. These paths are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths within this vpath device are only accessed in a failover scenario.
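The same observation can be made mechanically: any path whose Select counter is greater than zero has carried I/O. The sketch below (a toy parser, not an SDD tool) filters the vpath1 path rows from Example 5-18 down to the two active paths:

```shell
# Path rows for vpath1: "Path# Adapter/Hard-Disk State Mode Select Errors".
listing="0 fscsi0/hdisk4 OPEN NORMAL 0 0
1 fscsi1/hdisk8 OPEN NORMAL 28 0
2 fscsi0/hdisk12 OPEN NORMAL 32 0
3 fscsi1/hdisk16 OPEN NORMAL 0 0"

# Keep only paths that have a nonzero Select count (column 5).
active=$(printf '%s\n' "$listing" | awk '$5 > 0 { print $2 }')
printf '%s\n' "$active"
```

The two surviving rows are exactly the paths to the preferred node; the other two only carry I/O after a failover.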

Example 5-18 SDD commands that are used to check the availability of the devices

#datapath query device

Total Devices : 4

DEV#:   0  DEVICE NAME: vpath0  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000015
==========================================================================
Path#      Adapter/Hard Disk   State     Mode      Select    Errors
    0          fscsi0/hdisk3   CLOSE     NORMAL         0         0
    1          fscsi1/hdisk7   CLOSE     NORMAL         0         0
    2         fscsi0/hdisk11   CLOSE     NORMAL         0         0
    3         fscsi1/hdisk15   CLOSE     NORMAL         0         0

DEV#:   1  DEVICE NAME: vpath1  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000016
==========================================================================
Path#      Adapter/Hard Disk   State     Mode      Select    Errors
    0          fscsi0/hdisk4   OPEN      NORMAL         0         0
    1          fscsi1/hdisk8   OPEN      NORMAL        28         0
    2         fscsi0/hdisk12   OPEN      NORMAL        32         0
    3         fscsi1/hdisk16   OPEN      NORMAL         0         0

DEV#:   2  DEVICE NAME: vpath2  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000017
==========================================================================
Path#      Adapter/Hard Disk   State     Mode      Select    Errors
    0          fscsi0/hdisk5   CLOSE     NORMAL         0         0
    1          fscsi1/hdisk9   CLOSE     NORMAL         0         0
    2         fscsi0/hdisk13   CLOSE     NORMAL         0         0
    3         fscsi1/hdisk17   CLOSE     NORMAL         0         0

DEV#:   3  DEVICE NAME: vpath3  TYPE: 2145  POLICY: Optimized
SERIAL: 60050768018301BF2800000000000018
==========================================================================
Path#      Adapter/Hard Disk   State     Mode      Select    Errors
    0          fscsi0/hdisk6   CLOSE     NORMAL         0         0
    1         fscsi1/hdisk10   CLOSE     NORMAL         0         0
    2         fscsi0/hdisk14   CLOSE     NORMAL         0         0
    3         fscsi1/hdisk18   CLOSE     NORMAL         0         0

5.5.8 Creating and preparing volumes for use with AIX 5L V5.3 and SDD

The itsoaixvg Volume Group is created using vpath1. A logical volume is created using the Volume Group. Then, the teslv1 file system is created and mounted on the /teslv1 mount point, as shown in Example 5-19.

Example 5-19 Host system new Volume Group and file system configuration

#lsvg -o
itsoaixvg
rootvg
#lsvg -l itsoaixvg
itsoaixvg:
LV NAME     TYPE     LPs  PPs  PVs  LV STATE     MOUNT POINT
loglv01     jfs2log  1    1    1    open/syncd   N/A
fslv00      jfs2     128  128  1    open/syncd   /teslv1
fslv01      jfs2     128  128  1    open/syncd   /teslv2
#df -g
Filesystem    GB blocks   Free  %Used  Iused  %Iused  Mounted on
/dev/hd4           0.03   0.01    62%   1357     31%  /
/dev/hd2           9.06   4.32    53%  17341      2%  /usr
/dev/hd9var        0.03   0.03    10%    137      3%  /var
/dev/hd3           0.12   0.12     7%     31      1%  /tmp
/dev/hd1           0.03   0.03     2%     11      1%  /home
/proc                 -      -      -      -       -  /proc
/dev/hd10opt       0.09   0.01    86%   1947     38%  /opt
/dev/lv00          0.41   0.39     4%     19      1%  /usr/sys/inst.images
/dev/fslv00        2.00   2.00     1%      4      1%  /teslv1
/dev/fslv01        2.00   2.00     1%      4      1%  /teslv2
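The configuration in Example 5-19 can be built with the standard AIX LVM commands. The following sketch shows the general shape only; the file system size is illustrative, and these are not the exact commands that we ran:

```shell
# Volume Group on the vpath device (not on the underlying hdisks).
mkvg -y itsoaixvg vpath1

# JFS2 file system in that Volume Group; crfs creates the logical
# volume for us and adds an /etc/filesystems entry (-A yes = mount at boot).
crfs -v jfs2 -g itsoaixvg -m /teslv1 -a size=2G -A yes
mount /teslv1
```

Only runnable on AIX against a configured vpath device.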

5.5.9 Discovering the assigned VDisk using AIX V6.1 and SDDPCM

Before adding a new volume from the SVC, the AIX host system Atlantic had a simple, typical configuration, as shown in Example 5-20.

Example 5-20 Status of AIX host system Atlantic

# lspv
hdisk0          0009cdcaeb48d3a3        rootvg          active
hdisk1          0009cdcac26dbb7c        rootvg          active
hdisk2          0009cdcab5657239        rootvg          active
# lsvg
rootvg

In Example 5-22 on page 174, we show the SVC configuration information relating to our AIX host, specifically the host definition, the VDisks that were created for this host, and the VDisk-to-host mappings for this configuration.

Our example host is named Atlantic. Example 5-21 shows the HBA information for our example host.

Example 5-21 Example of HBA information for the host Atlantic

# lsdev -Cc adapter | grep fcs
fcs1 Available 1H-08 FC Adapter
fcs2 Available 1D-08 FC Adapter
# lscfg -vpl fcs1
  fcs1             U0.1-P2-I4/Q1  FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A644
        Manufacturer................001E
        Customer Card ID Number.....2765
        FRU Number.................. 00P4495
        Network Address.............10000000C932A865
        ROS Level and ID............02C039D0
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401411
        Device Specific.(Z5)........02C039D0
        Device Specific.(Z6)........064339D0
        Device Specific.(Z7)........074339D0
        Device Specific.(Z8)........20000000C932A865
        Device Specific.(Z9)........CS3.93A0
        Device Specific.(ZA)........C1D3.93A0
        Device Specific.(ZB)........C2D3.93A0
        Device Specific.(ZC)........00000000
        Hardware Location Code......U0.1-P2-I4/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9002
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location: U0.1-P2-I4/Q1

# lscfg -vpl fcs2
  fcs2             U0.1-P2-I5/Q1  FC Adapter

        Part Number.................80P4383
        EC Level....................A
        Serial Number...............1F5350CD42
        Manufacturer................001F
        Customer Card ID Number.....2765
        FRU Number.................. 80P4384
        Network Address.............10000000C94C8C1C
        ROS Level and ID............02C03951
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401210
        Device Specific.(Z5)........02C03951
        Device Specific.(Z6)........06433951
        Device Specific.(Z7)........07433951
        Device Specific.(Z8)........20000000C94C8C1C
        Device Specific.(Z9)........CS3.91A1
        Device Specific.(ZA)........C1D3.91A1
        Device Specific.(ZB)........C2D3.91A1
        Device Specific.(ZC)........00000000
        Hardware Location Code......U0.1-P2-I5/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9002
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location: U0.1-P2-I5/Q1
#

Using the SVC CLI, we can check that the host WWPNs, as listed in Example 5-22, are logged into the SVC for the host definition Atlantic, by entering this command:

svcinfo lshost Atlantic

We can also discover the serial numbers of the VDisks by using the following command:

svcinfo lshostvdiskmap Atlantic

Example 5-22 SVC definitions for host system Atlantic

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Atlantic
id 8
name Atlantic
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C94C8C1C
node_logged_in_count 2
state active
WWPN 10000000C932A865
node_logged_in_count 2
state active
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Atlantic
id name     SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
8  Atlantic 0       14       Atlantic0001 10000000C94C8C1C 6005076801A180E90800000000000060
8  Atlantic 1       22       Atlantic0002 10000000C94C8C1C 6005076801A180E90800000000000061
8  Atlantic 2       23       Atlantic0003 10000000C94C8C1C 6005076801A180E90800000000000062
IBM_2145:ITSO-CLS2:admin>

We need to run the cfgmgr command on the AIX host to discover the new disks and to enable us to use the disks:

# cfgmgr -l fcs1
# cfgmgr -l fcs2

Alternatively, use the cfgmgr -vS command to check the complete system. This command will probe the devices sequentially across all FC adapters and attached disks; however, it is extremely time intensive:

# cfgmgr -vS

The raw SVC disk configuration of the AIX host system now appears, as shown in Example 5-23. We can see the MPIO FC 2145 devices, each representing one SVC LUN.

Example 5-23 VDisks from SVC added with multiple paths for each VDisk

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0   16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0   16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0  16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02       MPIO FC 2145
hdisk4 Available 1D-08-02       MPIO FC 2145
hdisk5 Available 1D-08-02       MPIO FC 2145

To make a Volume Group (for example, itsoaixvg) to host the LUNs, we use the mkvg command passing the device as a parameter. This action is shown in Example 5-24.

Example 5-24 Running the mkvg command

# mkvg -y itsoaixvg hdisk3
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg
# mkvg -y itsoaixvg1 hdisk4
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg1
# mkvg -y itsoaixvg2 hdisk5
0516-1254 mkvg: Changing the PVID in the ODM.
itsoaixvg2

Now, by running the lspv command, we can see the disks and the assigned Volume Groups, as shown in Example 5-25.

Example 5-25 Showing the hdisk assignment into the Volume Groups

# lspv
hdisk0          0009cdcaeb48d3a3        rootvg          active
hdisk1          0009cdcac26dbb7c        rootvg          active
hdisk2          0009cdcab5657239        rootvg          active
hdisk3          0009cdca28b589f5        itsoaixvg       active
hdisk4          0009cdca28b87866        itsoaixvg1      active
hdisk5          0009cdca28b8ad5b        itsoaixvg2      active

In Example 5-26 on page 176, we show that running the lspv hdisk3 command shows a more verbose output for one of the SVC LUNs.

Chapter 5. Host configuration 175


Example 5-26 Verbose details of hdisk3

# lspv hdisk3
PHYSICAL VOLUME: hdisk3           VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cdca28b589f5  VG IDENTIFIER 0009cdca00004c000000011b28b58ae2
PV STATE: active
STALE PARTITIONS: 0               ALLOCATABLE: yes
PP SIZE: 8 megabyte(s)            LOGICAL VOLUMES: 0
TOTAL PPs: 511 (4088 megabytes)   VG DESCRIPTORS: 2
FREE PPs: 511 (4088 megabytes)    HOT SPARE: no
USED PPs: 0 (0 megabytes)         MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 103..102..102..102..102
USED DISTRIBUTION: 00..00..00..00..00
#

5.5.10 Using SDDPCM
With SDDPCM, we can check the status of the adapters and devices that are now under SDDPCM control by using the pcmpath command set. In Example 5-27, we can see that the state and mode of both HBA cards are NORMAL and ACTIVE.

Example 5-27 SDDPCM commands that are used to check the availability of the adapters

# pcmpath query adapter

Active Adapters :2

Adpt#  Name    State   Mode    Select  Errors  Paths  Active
    0  fscsi1  NORMAL  ACTIVE     407       0      6       6
    1  fscsi2  NORMAL  ACTIVE     425       0      6       6

From Example 5-28, we see detailed information about each MPIO device. The asterisk (*) next to the path numbers shows which paths have been selected (used) by SDDPCM. These paths are the two physical paths that connect to the preferred node of the I/O Group of this SVC cluster. The remaining two paths within this MPIO device are only accessed in a failover scenario.

Example 5-28 SDDPCM commands that are used to check the availability of the devices

# pcmpath query device

Total Devices : 3

DEV#: 3 DEVICE NAME: hdisk3 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000060
==========================================================================
Path#   Adapter/Path Name  State  Mode    Select  Errors
   0    fscsi1/path0       OPEN   NORMAL     152       0
   1*   fscsi1/path1       OPEN   NORMAL      48       0
   2*   fscsi2/path2       OPEN   NORMAL      48       0
   3    fscsi2/path3       OPEN   NORMAL     160       0

DEV#: 4 DEVICE NAME: hdisk4 TYPE: 2145 ALGORITHM: Load Balance


SERIAL: 6005076801A180E90800000000000061
==========================================================================
Path#   Adapter/Path Name  State  Mode    Select  Errors
   0*   fscsi1/path0       OPEN   NORMAL      37       0
   1    fscsi1/path1       OPEN   NORMAL      66       0
   2    fscsi2/path2       OPEN   NORMAL      71       0
   3*   fscsi2/path3       OPEN   NORMAL      38       0

DEV#: 5 DEVICE NAME: hdisk5 TYPE: 2145 ALGORITHM: Load Balance
SERIAL: 6005076801A180E90800000000000062
==========================================================================
Path#   Adapter/Path Name  State  Mode    Select  Errors
   0    fscsi1/path0       OPEN   NORMAL      66       0
   1*   fscsi1/path1       OPEN   NORMAL      38       0
   2*   fscsi2/path2       OPEN   NORMAL      38       0
   3    fscsi2/path3       OPEN   NORMAL      70       0
#

5.5.11 Creating and preparing volumes for use with AIX V6.1 and SDDPCM
The itsoaixvg Volume Group is created using hdisk3, and a logical volume is created in the Volume Group. Then, a file system is created and mounted on the /itsoaixvg mount point, as shown in Example 5-29.

Example 5-29 Host system new Volume Group and file system configuration

# lsvg -o
itsoaixvg2
itsoaixvg1
itsoaixvg
rootvg
# crfs -v jfs2 -g itsoaixvg -a size=3G -m /itsoaixvg -p rw -a agblksize=4096
File system created successfully.
3145428 kilobytes total disk space.
New File System size is 6291456
# lsvg -l itsoaixvg
itsoaixvg:
LV NAME  TYPE     LPs  PPs  PVs  LV STATE      MOUNT POINT
loglv00  jfs2log  1    1    1    closed/syncd  N/A
fslv00   jfs2     384  384  1    closed/syncd  /itsoaixvg
#

5.5.12 Expanding an AIX volume
It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as AIX 5L Version 5.2 and later, can handle a volume being expanded even while the host has applications running. In the following examples, we show the procedure with AIX 5L V5.3 and SDD, but the procedure is the same when using AIX V6 and SDDPCM. The Volume Group to which the VDisk is assigned, if it is assigned to any Volume Group, must not be a concurrent accessible Volume Group. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror on that VDisk must be stopped before it is possible to expand the VDisk.


The following steps show how to expand a volume on an AIX host, where the volume is a VDisk from the SVC:

1. To list a VDisk size, use the svcinfo lsvdisk <VDisk_name> command. Example 5-30 shows the Kanaga0002 VDisk that we have allocated to our AIX server before we expand it. Here, the capacity is 5 GB, and the vdisk_UID is 60050768018301BF2800000000000016.

Example 5-30 Expanding a VDisk on AIX

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 5.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0
preferred_node_id 2
fast_write_state not_empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 5.00GB
real_capacity 5.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize


2. To identify to which vpath this VDisk is associated on the AIX host, we use the datapath query device SDD command, as shown in Example 5-19 on page 172. Here, we can see that the VDisk with vdisk_UID 60050768018301BF2800000000000016 is associated with vpath1, because the vdisk_UID matches the SERIAL field on the AIX host.

3. To see the size of the volume on the AIX host, we use the lspv command, as shown in Example 5-31. This command shows that the volume size is 5,112 MB, equal to 5 GB, as shown in Example 5-30 on page 178.

Example 5-31 Finding the size of the volume in AIX

#lspv vpath1
PHYSICAL VOLUME: vpath1           VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cddabce27ba5  VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE: active
STALE PARTITIONS: 0               ALLOCATABLE: yes
PP SIZE: 8 megabyte(s)            LOGICAL VOLUMES: 2
TOTAL PPs: 639 (5112 megabytes)   VG DESCRIPTORS: 2
FREE PPs: 0 (0 megabytes)         HOT SPARE: no
USED PPs: 639 (5112 megabytes)    MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..00
USED DISTRIBUTION: 128..128..127..128..128

4. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity on the VDisk. In Example 5-32, we expand the VDisk by 1 GB.

Example 5-32 Expanding a VDisk

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 1 -unit gb Kanaga0002

5. To check that the VDisk has been expanded, use the svcinfo lsvdisk command. Here, we can see that the Kanaga0002 VDisk has been expanded to a capacity of 6 GB (Example 5-33).

Example 5-33 Verifying that the VDisk has been expanded

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Kanaga0002
id 14
name Kanaga0002
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 6.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000016
throttling 0


preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 6.00GB
real_capacity 6.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

6. AIX has not yet recognized the change in the capacity of the vpath1 volume, because the operating system has no dynamic mechanism to be notified of the configuration change. Therefore, to make AIX recognize the extra capacity on the volume without stopping any applications, we use the chvg -g fc_source_vg command, where fc_source_vg is the name of the Volume Group to which vpath1 belongs.

If AIX does not return any messages, the command was successful, and the volume changes in this Volume Group have been saved. If AIX cannot see any changes in the volumes, it will return an explanatory message.

7. To verify that the size of vpath1 has changed, we use the lspv command again, as shown in Example 5-34.

Example 5-34 Verify that AIX can see the newly expanded VDisk

#lspv vpath1
PHYSICAL VOLUME: vpath1           VOLUME GROUP: itsoaixvg
PV IDENTIFIER: 0009cddabce27ba5  VG IDENTIFIER 0009cdda00004c000000011abce27c89
PV STATE: active
STALE PARTITIONS: 0               ALLOCATABLE: yes
PP SIZE: 8 megabyte(s)            LOGICAL VOLUMES: 2
TOTAL PPs: 767 (6136 megabytes)   VG DESCRIPTORS: 2
FREE PPs: 128 (1024 megabytes)    HOT SPARE: no
USED PPs: 639 (5112 megabytes)    MAX REQUEST: 256 kilobytes
FREE DISTRIBUTION: 00..00..00..00..128
USED DISTRIBUTION: 154..153..153..153..26


Here, we can see that the volume now has a size of 6,136 MB, equal to 6 GB. Now, we can expand the file systems in this Volume Group to use the new capacity.
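The whole expand procedure can be condensed into a short script. This is a sketch only: the cluster address (svc), the admin user, and the VDisk, Volume Group, and file system names are assumptions, and because every command runs only on an AIX host with SSH access to the SVC, the sequence is wrapped in a function rather than executed directly.

```shell
# Hypothetical wrapper around the expand procedure; all names are examples.
expand_svc_vdisk() {
    vdisk=$1   # for example, Kanaga0002
    vg=$2      # for example, itsoaixvg
    fs=$3      # file system to grow into the new space

    # Step 4: grow the VDisk by 1 GB on the SVC.
    ssh -l admin svc "svctask expandvdisksize -size 1 -unit gb $vdisk"

    # Step 6: make AIX re-read the volume sizes in the Volume Group.
    chvg -g "$vg"

    # Finally, grow the file system into the newly available partitions.
    chfs -a size=+1G "$fs"
}
```

Call it as, for example, `expand_svc_vdisk Kanaga0002 itsoaixvg /itsoaixvg` on the AIX host.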

5.5.13 Removing an SVC volume on AIX
Before we remove a VDisk that is assigned to an AIX host, we have to make sure that there is no data on it and that no applications depend on the volume. This procedure is a standard AIX procedure: we move all data off the volume, remove the volume from the Volume Group, and delete the vpath and the hdisks that are associated with the vpath. Next, we remove the vdiskhostmap on the SVC for that volume. If the VDisk is no longer needed, we then delete it so that its extents become available when we create a new VDisk on the SVC.
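The removal sequence might be sketched as follows. The Volume Group, device, host, and VDisk names are assumptions, and because every command here is AIX- or SVC-specific (and destructive), the sketch is wrapped in a function rather than executed.

```shell
# Hypothetical removal sequence; adjust all names to your environment.
remove_svc_vdisk() {
    vg=$1      # Volume Group built on the VDisk
    host=$2    # host object name on the SVC
    vdisk=$3   # VDisk name on the SVC

    # 1. With the data moved away, take the Volume Group offline and export it.
    varyoffvg "$vg"
    exportvg "$vg"

    # 2. Delete the vpath and its associated hdisk definitions (repeat the
    #    rmdev for each hdisk that belongs to the vpath).
    rmdev -dl vpath1
    rmdev -dl hdisk3

    # 3. Remove the host mapping and, if the VDisk is no longer needed,
    #    delete it so that its extents become available again.
    ssh -l admin svc "svctask rmvdiskhostmap -host $host $vdisk"
    ssh -l admin svc "svctask rmvdisk $vdisk"
}
```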

5.5.14 Running SVC commands from an AIX host system
To issue CLI commands, you must install and prepare the SSH client system on the AIX host system. For AIX 5L V5.1 and later, you can get OpenSSH from the Bonus Packs. You also need its prerequisite, OpenSSL, from the AIX toolbox for Linux applications for Power Systems™. For AIX V4.3.3, the software is available from the AIX toolbox for Linux applications.

The AIX installation images from IBM developerWorks® are available at this Web site:

http://sourceforge.net/projects/openssh-aix

Perform the following steps:

1. To generate the key files on AIX, issue the following command:

ssh-keygen -t rsa -f filename

The -t parameter specifies the type of key to generate: rsa1, rsa2, or dsa. To generate an rsa2 key, specify rsa as the type; for an rsa1 key, the type must be rsa1. When creating the key for the SVC, use an rsa2 key. The -f parameter specifies the file name of the private key on the AIX server; the public key is stored in a file with the same name and the extension .pub.

2. Next, you have to install the public key on the SVC, which can be done by using the Master Console. Copy the public key to the Master Console, and install the key to the SVC, as described in Chapter 4, “SAN Volume Controller initial configuration” on page 103.

3. On the AIX server, make sure that the private key and the public key are in the .ssh directory in the home directory of the user.

4. To connect to the SVC and use a CLI session from the AIX host, issue the following command:

ssh -l admin -i filename svc

5. You can also issue the commands directly on the AIX host, which is useful when making scripts. To do this, add the SVC commands to the previous command. For example, to list the hosts that are defined on the SVC, enter the following command:

ssh -l admin -i filename svc svcinfo lshost

In this command, -l admin is the user on the SVC to which we connect, -i filename is the file name of the private key that was generated, and svc is the name or IP address of the SVC.
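Steps 1 to 5 can be put together as follows. The key generation really runs on any host with OpenSSH; the cluster address (svc) and the key file name are placeholders, and the cluster commands are shown as comments because they need a reachable SVC with the public key installed.

```shell
# Step 1: generate an rsa2 key pair (type rsa) with no passphrase; the
# public key lands in ${keyfile}.pub.
keyfile=/tmp/svc_key
rm -f "$keyfile" "$keyfile.pub"
ssh-keygen -t rsa -N "" -q -f "$keyfile"
ls -l "$keyfile" "$keyfile.pub"

# Steps 4 and 5, after installing ${keyfile}.pub on the SVC:
#   ssh -l admin -i /tmp/svc_key svc
#   ssh -l admin -i /tmp/svc_key svc svcinfo lshost
```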


5.6 Windows-specific information
In the following sections, we detail specific information about the connection of Windows-based hosts to the SVC environment.

5.6.1 Configuring Windows Server 2000, Windows 2003 Server, and Windows Server 2008 hosts

This section provides an overview of the requirements for attaching the SVC to a host running Windows Server 2000, Windows 2003 Server, or Windows Server 2008.

Before you attach the SVC to your host, make sure that all of the following requirements are fulfilled:

• For the Windows Server 2003 x64 Edition operating system, you must install the Hotfix from KB 908980. If you do not install it before operation, preferred pathing is not available. You can find the Hotfix at this Web site:

http://support.microsoft.com/kb/908980

• Check the LUN limitations for your host system. Ensure that enough FC adapters are installed in the server to handle the total number of LUNs that you want to attach.

5.6.2 Configuring Windows
To configure the Windows hosts, follow these steps:

1. Make sure that the latest OS Hotfixes are applied to your Microsoft server.

2. Use the latest firmware and driver levels on your host system.

3. Install the HBA or HBAs on the Windows server, as shown in 5.6.4, “Host adapter installation and configuration” on page 183.

4. Connect the Windows 2000/2003/2008 server FC host adapters to the switches.

5. Configure the switches (zoning).

6. Install the FC host adapter driver, as described in 5.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 183.

7. Configure the HBA for hosts running Windows, as described in 5.6.4, “Host adapter installation and configuration” on page 183.

8. Check the HBA driver readme file for the required Windows registry settings, as described in 5.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 183.

9. Check the disk timeout on Microsoft Windows Server, as described in 5.6.5, “Changing the disk timeout on Microsoft Windows Server” on page 185.

10.Install and configure SDD/Subsystem Device Driver Device Specific Module (SDDDSM).

11.Restart the Windows 2000/2003/2008 host system.

12.Configure the host, VDisks, and host mapping in the SVC.

13.Use Rescan disk in Computer Management of the Windows server to discover the VDisks that were created on the SAN Volume Controller.


5.6.3 Hardware lists, device driver, HBAs, and firmware levels
The latest information about supported hardware, device driver, and firmware is available at this Web site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_Windows

At this Web site, you will also find the hardware list for supported HBAs and the driver levels for Windows. Check the supported firmware and driver level for your HBA and follow the manufacturer’s instructions to upgrade the firmware and driver levels for each type of HBA. In most manufacturers’ driver readme files, you will find instructions for the Windows registry parameters that have to be set for the HBA driver:

• For the Emulex HBA driver, SDD requires the port driver, not the miniport driver.
• For the QLogic HBA driver, SDDDSM requires the storport version of the miniport driver.
• For the QLogic HBA driver, SDD requires the scsiport version of the miniport driver.

5.6.4 Host adapter installation and configuration
Install the host adapters into your system. Refer to the manufacturer’s instructions for installation and configuration of the HBAs.

In IBM System x servers, the HBA must always be installed in the first slots. If you install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2, and the network cards can be installed in the remaining slots.

Configure the QLogic HBA for hosts running Windows
After you have installed the HBA in the server, and have applied the HBA firmware and device driver, you have to configure the HBA. Perform the following steps:

1. Restart the server.

2. When you see the QLogic banner, press the Ctrl+Q keys to open the FAST!UTIL menu panel.

3. From the Select Host Adapter menu, select the Adapter Type QLA2xxx.

4. From the Fast!UTIL Options menu, select Configuration Settings.

5. From the Configuration Settings menu, click Host Adapter Settings.

6. From the Host Adapter Settings menu, select the following values:

a. Host Adapter BIOS: Disabled

b. Frame size: 2048

c. Loop Reset Delay: 5 (minimum)

d. Adapter Hard Loop ID: Disabled

e. Hard Loop ID: 0

f. Spinup Delay: Disabled

g. Connection Options: 1 - point to point only

h. Fibre Channel Tape Support: Disabled

i. Data Rate: 2

7. Press the Esc key to return to the Configuration Settings menu.

8. From the Configuration Settings menu, select Advanced Adapter Settings.

9. From the Advanced Adapter Settings menu, set the following parameters:


a. Execution throttle: 100

b. Luns per Target: 0

c. Enable LIP Reset: No

d. Enable LIP Full Login: Yes

e. Enable Target Reset: No

Note: If you are using a subsystem device driver (SDD) version lower than 1.6, set Enable Target Reset to Yes.

f. Login Retry Count: 30

g. Port Down Retry Count: 15

h. Link Down Timeout: 30

i. Extended error logging: Disabled (might be enabled for debugging)

j. RIO Operation Mode: 0

k. Interrupt Delay Timer: 0

10.Press Esc to return to the Configuration Settings menu.

11.Press Esc.

12.From the Configuration settings modified window, select Save changes.

13.From the Fast!UTIL Options menu, select Select Host Adapter if more than one QLogic adapter is installed in your system.

14.Select the other host adapter and repeat steps 4 through 12.

15.Repeat this process for all of the installed QLogic adapters in your system. When you are done, press Esc to exit the QLogic BIOS and restart the server.

Configuring the Emulex HBA for hosts running Windows
After you have installed the Emulex HBA and driver, you must configure your HBA.

For the Emulex HBA StorPort driver, accept the default settings and set the topology to 1 (1 = F Port Fabric). For the Emulex HBA FC Port driver, use the default settings and change the parameters to the parameters that are provided in Table 5-1.

Table 5-1 FC port driver changes

Parameters                                                Recommended settings
Query name server for all N-ports (BrokenRSCN)            Enabled
LUN mapping (MapLuns)                                     Enabled (1)
Automatic LUN mapping (MapLuns)                           Enabled (1)
Allow multiple paths to SCSI target (MultipleSCSIClaims)  Enabled
Scan in device ID order (ScanDeviceIDOrder)               Disabled
Translate queue full to busy (TranslateQueueFull)         Enabled
Retry timer (RetryTimer)                                  2000 milliseconds
Maximum number of LUNs (MaximumLun)                       Equal to or greater than the number
                                                          of SVC LUNs available to the HBA


5.6.5 Changing the disk timeout on Microsoft Windows Server
This section describes how to change the disk I/O timeout value on Windows Server 2000, Windows 2003 Server, and Windows Server 2008 operating systems.

On your Windows server hosts, change the disk I/O timeout value to 60 in the Windows registry:

1. In Windows, click Start, and select Run.

2. In the dialog text box, type regedit and press Enter.

3. In the registry browsing tool, locate the HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Disk\TimeOutValue key.

4. Confirm that the value for the key is 60 (decimal value), and, if necessary, change the value to 60, as shown in Figure 5-6.

Figure 5-6 Regedit
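The same registry change can be made from a command prompt with reg.exe instead of regedit. This is a sketch for a Windows host; it is wrapped in a shell function here only so that the command itself is not run on a non-Windows system.

```shell
# Hypothetical wrapper; run the reg command inside it in a Windows
# command prompt. /t REG_DWORD sets the type, /d 60 the decimal value,
# and /f overwrites an existing value without prompting.
set_disk_timeout() {
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\Disk" \
        /v TimeOutValue /t REG_DWORD /d 60 /f
}
```

You can confirm the setting afterward with `reg query "HKLM\SYSTEM\CurrentControlSet\Services\Disk" /v TimeOutValue`.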

5.6.6 Installing the SDD driver on Windows
At the time of writing, the SDD levels in Table 5-2 are supported.

Table 5-2 Currently supported SDD levels

See the following Web site for the latest information about SDD for Windows:

http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en

Note: The parameters that are shown in Table 5-1 correspond to the parameters in HBAnyware.

Windows operating system                                                    SDD level
NT 4                                                                        1.5.1.1
Windows 2000 Server and Windows 2003 Server SP2 (32-bit)/2003 SP2 (IA-64)   1.6.3.0-2
Windows 2000 Server with Microsoft Cluster Server (MSCS) and Veritas
Volume Manager/Windows 2003 Server SP2 (32-bit) with MSCS and Veritas
Volume Manager                                                              Not available


Before installing the SDD driver, the HBA driver has to be installed on your system. SDD requires the HBA SCSI port driver.

After downloading the appropriate version of SDD from the Web site, extract the file and run setup.exe to install SDD. A command line will appear. Answer Y (Figure 5-7) to install the driver.

Figure 5-7 Confirm SDD installation

After the setup has completed, answer Y again to reboot your system (Figure 5-8).

Figure 5-8 Reboot system after installation

To check if your SDD installation is complete, open the Windows Device Manager, expand SCSI and RAID Controllers, right-click Subsystem Device Driver Management, and click Properties (see Figure 5-9 on page 187).

SDD: We recommend that you use SDD only on existing systems where you do not want to change from SDD to SDDDSM. New operating systems will only be supported with SDDDSM.


Figure 5-9 Subsystem Device Driver Management


The Subsystem Device Driver Management Properties window opens. Select the Driver tab, and make sure that you have installed the correct driver version (see Figure 5-10).

Figure 5-10 Subsystem Device Driver Management Properties Driver tab

5.6.7 Installing the SDDDSM driver on Windows
The following sections show how to install the SDDDSM driver on Windows.

Windows 2003 Server, Windows Server 2008, and MPIO
Microsoft Multi Path Input Output (MPIO) solutions are designed to work in conjunction with device-specific modules (DSMs) written by vendors, but the MPIO driver package does not, by itself, form a complete solution. This joint solution allows the storage vendors to design device-specific solutions that are tightly integrated with the Windows operating system.

MPIO is not shipped with the Windows operating system; storage vendors must pack the MPIO drivers with their own DSM. IBM Subsystem Device Driver DSM (SDDDSM) is the IBM multipath I/O solution that is based on Microsoft MPIO technology; it is a device-specific module specifically designed to support IBM storage devices on Windows 2003 Server and Windows Server 2008 servers.

MPIO is intended to provide better integration of a multipath storage solution with the operating system, and it allows the use of multiple paths in the SAN infrastructure during the boot process for SAN boot hosts.

Subsystem Device Driver Device Specific Module (SDDDSM) for SVC
Subsystem Device Driver Device Specific Module (SDDDSM) is an installation package for the SVC device for the Windows 2003 Server and Windows Server 2008 operating systems.


SDDDSM is the IBM multipath I/O solution that is based on Microsoft MPIO technology, and it is a device-specific module that is specifically designed to support IBM storage devices. Together with MPIO, it is designed to support the multipath configuration environments in the IBM System Storage SAN Volume Controller. It resides in a host system with the native disk device driver and provides the following functions:

• Enhanced data availability
• Dynamic I/O load balancing across multiple paths
• Automatic path failover protection
• Concurrent download of licensed internal code
• Path-selection policies for the host system

Note that there is no SDDDSM support for Windows Server 2000, and that for the HBA driver, SDDDSM requires the StorPort version of the HBA miniport driver.

Table 5-3 shows, at the time of writing, the supported SDDDSM driver levels.

Table 5-3 Currently supported SDDDSM driver levels

To check which levels are available, go to the Web site:

http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7001350&loc=en_US&cs=utf-8&lang=en#WindowsSDDDSM

To download SDDDSM, go to the Web site:

http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=D430&uid=ssg1S4000350&loc=en_US&cs=utf-8&lang=en

The installation procedures for SDDDSM and SDD are the same, but remember that you have to use the StorPort HBA driver instead of the SCSI port driver. We describe the SDD installation in 5.6.6, “Installing the SDD driver on Windows” on page 185. After completing the installation, you will see the Microsoft MPIO entries in Device Manager (Figure 5-11 on page 190).

Windows operating system                                          SDDDSM level
Windows 2003 Server SP2 (32-bit)/Windows 2003 Server SP2 (x64)    2.2.0.0-11
Windows Server 2008 (32-bit)/Windows Server 2008 (x64)            2.2.0.0-11


Figure 5-11 Windows Device Manager: MPIO

We describe the SDDDSM installation for Windows Server 2008 in 5.8, “Example configuration of attaching an SVC to a Windows Server 2008 host” on page 200.

5.7 Discovering assigned VDisks in Windows Server 2000 and Windows 2003 Server

In this section, we describe how to discover assigned VDisks in Windows Server 2000 and Windows 2003 Server. The screen captures show a Windows 2003 Server host with SDDDSM installed. Discovering the disks in Windows Server 2000 or with SDD is the same procedure.

Before adding a new volume from the SVC, the Windows 2003 Server host system had the configuration that is shown in Figure 5-12 on page 191, with only local disks.


Figure 5-12 Windows 2003 Server host system before adding a new volume from SVC

We can check that the WWPN is logged into the SVC for the host named Senegal by entering the following command (Example 5-35):

svcinfo lshost Senegal

Example 5-35 Host information for Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshost Senegal
id 1
name Senegal
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89B9C0
node_logged_in_count 2
state active
WWPN 210000E08B89CCC2
node_logged_in_count 2
state active

The configuration of the Senegal host, the Senegal_bas0001 VDisk, and the mapping between the host and the VDisk are defined in the SVC, as described in Example 5-36. In our example, the Senegal_bas0002 and Senegal_bas0003 VDisks have the same configuration as the Senegal_bas0001 VDisk.

Example 5-36 VDisk mapping: Senegal

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010


1 Senegal 2 9 Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 10.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00GB
real_capacity 10.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

We can also obtain the serial number of the VDisks by entering the following command (Example 5-37):

svcinfo lsvdiskhostmap Senegal_bas0001

Example 5-37 VDisk serial number: Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdiskhostmap Senegal_bas0001


id name            SCSI_id host_id host_name wwpn             vdisk_UID
7  Senegal_bas0001 0       1       Senegal   210000E08B89B9C0 6005076801A180E9080000000000000F
7  Senegal_bas0001 0       1       Senegal   210000E08B89CCC2 6005076801A180E9080000000000000F

After the necessary drivers are installed and the rescan disks operation completes, the new disks are found in the Computer Management window, as shown in Figure 5-13.

Figure 5-13 Windows 2003 Server host system with three new volumes from SVC

In Windows Device Manager, the disks are shown as IBM 2145 SCSI Disk Device (Figure 5-14 on page 194). The number of IBM 2145 SCSI Disk Devices that you see is equal to:

(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs)

The IBM 2145 Multi-Path Disk Devices are the devices that are created by the multipath driver (Figure 5-14 on page 194). The number of these devices is equal to the number of VDisks that are presented to the host.


Figure 5-14 Windows 2003 Server Device Manager with assigned VDisks

When following the SAN zoning recommendation, this calculation gives us, for one VDisk and a host with two HBAs:

(number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = 4 paths
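The same calculation can be written as shell arithmetic, with the counts from this example configuration (one VDisk, two paths per I/O Group per HBA, two HBAs):

```shell
# Path-count calculation from the text; the numbers match the example.
vdisks=1
paths_per_iogrp_per_hba=2
hbas=2
total_paths=$((vdisks * paths_per_iogrp_per_hba * hbas))
echo "$total_paths paths per VDisk"
```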

You can check whether all of the paths are available by selecting Start → All Programs → Subsystem Device Driver (DSM) → Subsystem Device Driver (DSM). The SDD (DSM) command-line interface appears. Enter the following command to see which paths are available to your system (Example 5-38).

Example 5-38 Datapath query device

Microsoft Windows [Version 5.2.3790]
(C) Copyright 1985-2003 Microsoft Corp.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0 DEVICE NAME: Disk1 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#   Adapter/Hard Disk            State  Mode    Select  Errors
   0    Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL      47       0
   1    Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
   2    Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL       0       0
   3    Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL      28       0

DEV#: 1 DEVICE NAME: Disk2 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#   Adapter/Hard Disk            State  Mode    Select  Errors
   0    Scsi Port2 Bus0/Disk2 Part0  OPEN   NORMAL       0       0


   1    Scsi Port2 Bus0/Disk2 Part0  OPEN   NORMAL     162       0
   2    Scsi Port3 Bus0/Disk2 Part0  OPEN   NORMAL     155       0
   3    Scsi Port3 Bus0/Disk2 Part0  OPEN   NORMAL       0       0

DEV#: 2 DEVICE NAME: Disk3 Part0 TYPE: 2145 POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#   Adapter/Hard Disk            State  Mode    Select  Errors
   0    Scsi Port2 Bus0/Disk3 Part0  OPEN   NORMAL      51       0
   1    Scsi Port2 Bus0/Disk3 Part0  OPEN   NORMAL       0       0
   2    Scsi Port3 Bus0/Disk3 Part0  OPEN   NORMAL       0       0
   3    Scsi Port3 Bus0/Disk3 Part0  OPEN   NORMAL      25       0

C:\Program Files\IBM\SDDDSM>

5.7.1 Extending a Windows Server 2000 or Windows 2003 Server volume
It is possible to expand a VDisk in the SVC cluster, even if it is mapped to a host. Certain operating systems, such as Windows Server 2000 and Windows 2003 Server, can handle a volume being expanded even while the host has applications running. A VDisk that is defined in a FlashCopy, Metro Mirror, or Global Mirror mapping on the SVC cannot be expanded unless the mapping is removed, which means that the FlashCopy, Metro Mirror, or Global Mirror on that VDisk must be stopped before it is possible to expand the VDisk.

If the volume is part of a Microsoft Cluster (MSCS), Microsoft recommends that you shut down all nodes except one node, and that applications in the resource that use the volume that is going to be expanded are stopped before expanding the volume. Applications running in other resources can continue. After expanding the volume, start the application and the resource, and then restart the other nodes in the MSCS.

Note: All path states have to be OPEN. The path state can be OPEN or CLOSE. If one path state is CLOSE, it means that the system is missing a path that it saw during startup. If you restart your system, the CLOSE paths are removed from this view.
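The path-state check described in this note can also be scripted. The following Python sketch (the function name and sample text are ours for illustration, not part of SDD) scans saved `datapath query device` output and reports any paths whose State is CLOSE:

```python
import re

def closed_paths(datapath_output):
    # Each path line in the SDD output looks like:
    #     0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL    47    0
    # Collect (path number, adapter/disk) pairs whose State column is CLOSE.
    closed = []
    for line in datapath_output.splitlines():
        match = re.match(
            r"\s*(\d+)\s+(Scsi Port\d+ Bus\d+/\S+ Part\d+)\s+(OPEN|CLOSE)\b", line)
        if match and match.group(3) == "CLOSE":
            closed.append((int(match.group(1)), match.group(2)))
    return closed

sample = """\
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL     1471       0
    1    Scsi Port3 Bus0/Disk1 Part0    CLOSE   NORMAL     1324       0
"""
print(closed_paths(sample))  # -> [(1, 'Scsi Port3 Bus0/Disk1 Part0')]
```

An empty result means all paths are OPEN, which is the state you want to see.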

Important:

- For VDisk expansion to work on Windows Server 2000, apply Windows Server 2000 Hotfix Q327020, which is available from the Microsoft Knowledge Base at this Web site:

http://support.microsoft.com/kb/327020

- If you want to expand a logical drive in an extended partition in Windows 2003 Server, apply the Hotfix from KB 841650, which is available from the Microsoft Knowledge Base at this Web site:

http://support.microsoft.com/kb/841650/en-us

- Use the updated Diskpart version for Windows 2003 Server, which is available from the Microsoft Knowledge Base at this Web site:

http://support.microsoft.com/kb/923076/en-us

Chapter 5. Host configuration 195


To expand a volume in use on Windows Server 2000 and Windows 2003 Server, we used Diskpart. The Diskpart tool is part of Windows 2003 Server; for other Windows versions, you can download it free of charge from Microsoft. Microsoft developed Diskpart to ease the administration of storage. It is a command-line interface through which you can manage disks, partitions, and volumes by using scripts or direct input on the command line. You can list disks and volumes, select one, display detailed information about it, create partitions, extend volumes, and more. For more information, see the Microsoft Web site:

http://www.microsoft.com

or

http://support.microsoft.com/default.aspx?scid=kb;en-us;304736&sd=tech
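Because Diskpart accepts scripted input, the interactive extend procedure shown later in this section can also be run unattended. The following is a sketch of such a script (the file name and volume number are illustrative); it would be run with `diskpart /s extendvol.txt`:

```
rem extendvol.txt - extend the selected volume into its unallocated space
select volume 1
detail volume
extend
```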

An example of how to expand a volume on a Windows 2003 Server host, where the volume is a VDisk from the SVC, is shown in the following discussion.

To list a VDisk size, use the svcinfo lsvdisk <VDisk_name> command. For Senegal_bas0001, this command gives the information shown in Example 5-36 on page 191 before the VDisk is expanded.

Here, we can see that the capacity is 10 GB, and also what the vdisk_UID is. To find which vpath this VDisk corresponds to on the Windows 2003 Server host, we use the datapath query device SDD command on the Windows host (Figure 5-15).

We can see that the serial 6005076801A180E9080000000000000F of Disk1 on the Windows host (Figure 5-15) matches the vdisk_UID of Senegal_bas0001 (Example 5-36 on page 191).

To see the size of the volume on the Windows host, we use Disk Manager, as shown in Figure 5-15.

Figure 5-15 Windows 2003 Server: Disk Management


This window shows that the volume size is 10 GB. To expand the volume on the SVC, we use the svctask expandvdisksize command to increase the capacity on the VDisk. In this example, we expand the VDisk by 1 GB (Example 5-39).

Example 5-39 svctask expandvdisksize command

IBM_2145:ITSO-CLS2:admin>svctask expandvdisksize -size 1 -unit gb Senegal_bas0001

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk Senegal_bas0001
id 7
name Senegal_bas0001
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
capacity 11.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801A180E9080000000000000F
throttling 0
preferred_node_id 3
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_0_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 11.00GB
real_capacity 11.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

To check that the VDisk has been expanded, we use the svcinfo lsvdisk command. In Example 5-39, we can see that the Senegal_bas0001 VDisk has been expanded to 11 GB in capacity.


After performing a “Disk Rescan” in Windows, you will see the new unallocated space in Windows Disk Management, as shown in Figure 5-16.

Figure 5-16 Expanded volume in Disk Manager

This window shows that Disk1 now has 1 GB unallocated new capacity. To make this capacity available for the file system, use the following commands, as shown in Example 5-40:

diskpart Starts DiskPart in a DOS prompt

list volume Shows you all available volumes

select volume Selects the volume to expand

detail volume Displays details for the selected volume, including the unallocated capacity

extend Extends the volume to the available unallocated space

Example 5-40 Using Diskpart

C:\>diskpart

Microsoft DiskPart version 5.2.3790.3959
Copyright (C) 1999-2001 Microsoft Corporation.
On computer: SENEGAL

DISKPART> list volume

Volume ###  Ltr  Label        Fs     Type       Size     Status   Info
----------  ---  -----------  -----  ---------  -------  -------  --------
Volume 0     C                NTFS   Partition    75 GB  Healthy  System
Volume 1     S   SVC_Senegal  NTFS   Partition    10 GB  Healthy
Volume 2     D                       DVD-ROM        0 B  Healthy

DISKPART> select volume 1

Volume 1 is the selected volume.

DISKPART> detail volume


  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 1    Online        11 GB  1020 MB

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

DISKPART> extend

DiskPart successfully extended the volume.

DISKPART> detail volume

  Disk ###  Status      Size     Free     Dyn  Gpt
  --------  ----------  -------  -------  ---  ---
* Disk 1    Online        11 GB     0 B

Readonly                : No
Hidden                  : No
No Default Drive Letter : No
Shadow Copy             : No

After extending the volume, the detail volume command shows that there is no free capacity on the volume anymore. The list volume command shows the file system size. The Disk Management window also shows the new disk size, as shown in Figure 5-17.

Figure 5-17 Disk Management after extending disk

This example uses a Windows Basic Disk. Dynamic disks can also be expanded by expanding the underlying SVC VDisk. The new space will appear as unallocated space at the end of the disk.


In this case, you do not need to use the DiskPart tool; you can use Windows Disk Management functions to allocate the new space. Expansion works irrespective of the volume type (simple, spanned, mirrored, and so on) on the disk. Dynamic disks can be expanded without stopping I/O in most cases.

5.8 Example configuration of attaching an SVC to a Windows Server 2008 host

This section describes an example configuration that shows the attachment of a Windows Server 2008 host system to the SVC. We discuss more details about Windows Server 2008 and the SVC in 5.6, “Windows-specific information” on page 182.

5.8.1 Installing SDDDSM on a Windows Server 2008 host

Download the HBA driver and the SDDDSM package and copy them to your host system. We describe information about the recommended SDDDSM package in 5.6.7, “Installing the SDDDSM driver on Windows” on page 188. We list the HBA driver details in 5.6.3, “Hardware lists, device driver, HBAs, and firmware levels” on page 183. We perform the steps that are described in 5.6.2, “Configuring Windows” on page 182 to achieve this task.

As a prerequisite for this example, we have already performed steps 1 to 5 for the hardware installation, SAN configuration is done, and the hotfixes are applied. The Disk timeout value is set to 60 seconds (see 5.6.5, “Changing the disk timeout on Microsoft Windows Server” on page 185), and we will start with the driver installation.

Installing the HBA driver

Perform these steps to install the HBA driver:

1. Extract the QLogic driver package to your hard drive.

2. Select Start → Run.

3. Enter the devmgmt.msc command, click OK, and the Device Manager will appear.

4. Expand Storage Controllers.

Important: Never try to upgrade your Basic Disk to Dynamic Disk or vice versa without backing up your data, because this operation is disruptive for the data, due to a change in the position of the logical block address (LBA) on the disks.


5. Right-click the HBA, and select Update Driver Software (Figure 5-18).

Figure 5-18 Windows Server 2008 driver update

6. Click Browse my computer for driver software (Figure 5-19).

Figure 5-19 Windows Server 2008 driver update

7. Enter the path to the extracted QLogic driver, and click Next (Figure 5-20 on page 202).


Figure 5-20 Windows Server 2008 driver update

8. Windows installs the driver (Figure 5-21).

Figure 5-21 Windows Server 2008 driver installation


9. When the driver update is complete, click Close to exit the wizard (Figure 5-22).

Figure 5-22 Windows Server 2008 driver installation

10. Repeat steps 1 to 8 for all of the HBAs that are installed in the system.

5.8.2 Installing SDDDSM

To install the SDDDSM driver on your system, perform the following steps:

1. Extract the SDDDSM driver package to a folder on your hard drive.

2. Open the folder with the extracted files.

3. Run the setup.exe command, and a DOS command prompt will appear.

4. Type Y and press Enter to install SDDDSM (Figure 5-23).

Figure 5-23 Installing SDDDSM

5. After the SDDDSM Setup is finished, type Y and press Enter to restart your system.

After the reboot, the SDDDSM installation is complete. You can verify the installation completion in Device Manager, because the SDDDSM device will appear (Figure 5-24 on page 204), and the SDDDSM tools will have been installed (Figure 5-25 on page 204).


Figure 5-24 SDDDSM installation

Figure 5-25 SDDDSM installation


5.8.3 Attaching SVC VDisks to Windows Server 2008

Create the VDisks on the SVC and map them to the Windows Server 2008 host.

In this example, we have mapped three SVC disks to the Windows Server 2008 host named Diomede, as shown in Example 5-41.

Example 5-41 SVC host mapping to host Diomede

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Diomede
id name    SCSI_id vdisk_id vdisk_name   wwpn             vdisk_UID
0  Diomede 0       20       Diomede_0001 210000E08B0541BC 6005076801A180E9080000000000002B
0  Diomede 1       21       Diomede_0002 210000E08B0541BC 6005076801A180E9080000000000002C
0  Diomede 2       22       Diomede_0003 210000E08B0541BC 6005076801A180E9080000000000002D

Perform the following steps to use the devices on your Windows Server 2008 host:

1. Click Start, and click Run.

2. Enter the diskmgmt.msc command, and click OK. The Disk Management window opens.

3. Select Action, and click Rescan Disks (Figure 5-26).

Figure 5-26 Windows Server 2008: Rescan disks

4. The SVC disks will now appear in the Disk Management window (Figure 5-27 on page 206).


Figure 5-27 Windows Server 2008 Disk Management window

After you have assigned the SVC disks, they are also available in Device Manager. The three assigned drives are represented by SDDDSM/MPIO as IBM-2145 Multipath disk devices in the Device Manager (Figure 5-28).

Figure 5-28 Windows Server 2008 Device Manager


5. To check that the disks are available, select Start → All Programs → Subsystem Device Driver DSM, and click Subsystem Device Driver DSM (Figure 5-29). The SDDDSM Command Line Utility will appear.

Figure 5-29 Windows Server 2008 Subsystem Device Driver DSM utility

6. Enter the datapath query device command and press Enter (Example 5-42). This command will display all of the disks and the available paths, including their states.

Example 5-42 Windows Server 2008 SDDDSM command-line utility

Microsoft Windows [Version 6.0.6001]
Copyright (c) 2006 Microsoft Corporation. All rights reserved.

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002B
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL     1429       0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL     1456       0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL        0       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002C
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL     1520       0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL     1517       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002D
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL       27       0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL     1396       0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL     1459       0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0

C:\Program Files\IBM\SDDDSM>

7. Right-click the disk in Disk Management, and select Online to place the disk online (Figure 5-30).

Figure 5-30 Windows Server 2008: Place disk online

8. Repeat step 7 for all of your attached SVC disks.

9. Right-click one disk again, and select Initialize Disk (Figure 5-31).

Figure 5-31 Windows Server 2008: Initialize Disk

SAN zoning recommendation: Following the SAN zoning recommendation, with one VDisk and a host with two HBAs, we get (number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs) = 1 x 2 x 2 = four paths.
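The path arithmetic in this recommendation can be written out as a small helper (a sketch; the function name is ours, not part of SDD or the SVC CLI):

```python
def expected_paths(num_vdisks, paths_per_io_group_per_hba, num_hbas):
    # Expected number of SDD paths per the SAN zoning recommendation:
    # (number of VDisks) x (number of paths per I/O Group per HBA) x (number of HBAs)
    return num_vdisks * paths_per_io_group_per_hba * num_hbas

# One VDisk, two paths per I/O Group per HBA, and two HBAs give four paths,
# which matches the four Path# entries per device in Example 5-42.
print(expected_paths(1, 2, 2))  # -> 4
```

If datapath query device shows more or fewer paths per device than this product, revisit the zoning.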


10. Mark all of the disks that you want to initialize, and click OK (Figure 5-32).

Figure 5-32 Windows Server 2008: Initialize Disk

11. Right-click the unallocated disk space, and select New Simple Volume (Figure 5-33).

Figure 5-33 Windows Server 2008: New Simple Volume

12. The New Simple Volume Wizard window opens. Click Next.

13. Enter a disk size, and click Next (Figure 5-34).

Figure 5-34 Windows Server 2008: New Simple Volume

14. Assign a drive letter, and click Next (Figure 5-35 on page 210).


Figure 5-35 Windows Server 2008: New Simple Volume

15. Enter a volume label, and click Next (Figure 5-36).

Figure 5-36 Windows Server 2008: New Simple Volume


16. Click Finish, and repeat this step for every SVC disk on your host system (Figure 5-37).

Figure 5-37 Windows Server 2008: Disk Management

5.8.4 Extending a Windows Server 2008 volume

Using SVC and Windows Server 2008 gives you the ability to extend volumes while they are in use. We describe the steps to extend a volume in 5.7.1, “Extending a Windows Server 2000 or Windows 2003 Server volume” on page 195.

Windows Server 2008 also uses the DiskPart utility to extend volumes. To start it, select Start → Run, and enter DiskPart. The DiskPart utility will appear. The procedure is exactly the same as the procedure in Windows 2003 Server. Follow the Windows 2003 Server description to extend your volume.

5.8.5 Removing a disk on Windows

When we want to remove a disk from Windows, and the disk is an SVC VDisk, we follow the standard Windows procedure to make sure that there is no data on the disk that we want to preserve, that no applications are using the disk, and that no I/O is going to the disk. After completing this procedure, we remove the VDisk mapping on the SVC. We must make sure that we are removing the correct VDisk. To verify, we use SDD to find the serial number for the disk, and on the SVC, we use lshostvdiskmap to find the VDisk name and number. We also check that the SDD serial number on the host matches the UID on the SVC for the VDisk.
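The serial-to-UID cross-check described here is simple but critical; a minimal sketch of it (the function name is ours) is:

```python
def is_same_vdisk(sdd_serial, svc_vdisk_uid):
    # The SDD SERIAL on the host and the vdisk_UID on the SVC are the same
    # hexadecimal identifier; compare them case-insensitively before unmapping.
    return sdd_serial.strip().lower() == svc_vdisk_uid.strip().lower()

# Values as reported for Disk1 / Senegal_bas0001 in Example 5-43 and Example 5-44:
print(is_same_vdisk("6005076801A180E9080000000000000F",
                    "6005076801A180E9080000000000000F"))  # -> True
```

Only when this check returns True for the disk you intend to remove should the mapping be deleted.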

When the VDisk mapping is removed, we perform a rescan for the disk, Disk Management on the server removes the disk, and the vpath goes into CLOSE status on the server. We can verify these actions by using the datapath query device SDD command, but the closed vpath will only be removed after a reboot of the server.

In the following sequence of examples, we show how we can remove an SVC VDisk from a Windows server. We show it on a Windows 2003 Server operating system, but the steps also apply to Windows Server 2000 and Windows Server 2008.


Figure 5-15 on page 196 shows the Disk Manager before removing the disk.

We will remove Disk 1. To identify the correct VDisk, we look up its Serial/UID number using SDD (Example 5-43).

Example 5-43 Removing SVC disk from the Windows server

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0    OPEN    NORMAL     1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL       94       0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL       55       0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL        0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL      100       0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL       69       0

Knowing the Serial/UID of the VDisk and the host name Senegal, we find the VDisk mapping to remove by using the lshostvdiskmap command on the SVC, and then we remove the actual VDisk mapping (Example 5-44).

Example 5-44 Finding and removing the VDisk mapping

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 0       7        Senegal_bas0001 210000E08B89B9C0 6005076801A180E9080000000000000F
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Senegal Senegal_bas0001


IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Senegal
id name    SCSI_id vdisk_id vdisk_name      wwpn             vdisk_UID
1  Senegal 1       8        Senegal_bas0002 210000E08B89B9C0 6005076801A180E90800000000000010
1  Senegal 2       9        Senegal_bas0003 210000E08B89B9C0 6005076801A180E90800000000000011

Here, we can see that the VDisk mapping has been removed. On the server, we then perform a disk rescan in Disk Management, and we now see that the correct disk (Disk1) has been removed, as shown in Figure 5-38.

Figure 5-38 Disk Management: Disk has been removed

SDD also shows us that the status for all paths to Disk1 has changed to CLOSE, because the disk is not available (Example 5-45 on page 214).


Example 5-45 SDD: Closed path

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 3

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000000F
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0    CLOSE   NORMAL     1471       0
    1    Scsi Port2 Bus0/Disk1 Part0    CLOSE   NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0    CLOSE   NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0    CLOSE   NORMAL     1324       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL       20       0
    1    Scsi Port2 Bus0/Disk2 Part0    OPEN    NORMAL      124       0
    2    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL       72       0
    3    Scsi Port3 Bus0/Disk2 Part0    OPEN    NORMAL        0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000011
============================================================================
Path#    Adapter/Hard Disk              State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL      134       0
    1    Scsi Port2 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk3 Part0    OPEN    NORMAL       82       0

The disk (Disk1) is now removed from the server. However, to remove the SDD information for the disk, we need to reboot the server, which can wait until a more suitable time.

5.9 Using the SVC CLI from a Windows host

To issue CLI commands, we must install and prepare an SSH client on the Windows host system.

We can install the PuTTY SSH client software on a Windows host by using the PuTTY installation program. This program is in the SSHClient\PuTTY directory of the SAN Volume Controller Console CD-ROM, or you can download PuTTY from the following Web site:

http://www.chiark.greenend.org.uk/~sgtatham/putty/

The following Web site offers SSH client alternatives for Windows:

http://www.openssh.com/windows.html

Cygwin software has an option to install an OpenSSH client. You can download Cygwin from the following Web site:

http://www.cygwin.com/


We discuss more information about the CLI in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339.

5.10 Microsoft Volume Shadow Copy

The SVC provides support for the Microsoft Volume Shadow Copy Service, which can provide a point-in-time (shadow) copy of a Windows host volume while the volume is mounted and the files are in use.

In this section, we discuss how to install support for the Microsoft Volume Shadow Copy Service.

The following operating system versions are supported:

- Windows 2003 Server Standard Server Edition, 32-bit and 64-bit (x64) versions
- Windows 2003 Server Enterprise Edition, 32-bit and 64-bit (x64) versions
- Windows 2003 Server Standard Server R2 Edition, 32-bit and 64-bit (x64) versions
- Windows 2003 Server Enterprise R2 Edition, 32-bit and 64-bit (x64) versions
- Windows Server 2008 Standard
- Windows Server 2008 Enterprise

The following components are used to provide support for the service:

- SAN Volume Controller

- SAN Volume Controller Master Console

- IBM System Storage hardware provider, known as the IBM System Storage Support for Microsoft Volume Shadow Copy Service

- Microsoft Volume Shadow Copy Service

The IBM System Storage provider is installed on the Windows host.

To provide the point-in-time shadow copy, the components complete the following process:

1. A backup application on the Windows host initiates a snapshot backup.

2. The Volume Shadow Copy Service notifies the IBM System Storage hardware provider that a copy is needed.

3. The SAN Volume Controller prepares the volume for a snapshot.

4. The Volume Shadow Copy Service quiesces the software applications that are writing data on the host and flushes file system buffers to prepare for a copy.

5. The SAN Volume Controller creates the shadow copy using the FlashCopy Service.

6. The Volume Shadow Copy Service notifies the writing applications that I/O operations can resume and notifies the backup application that the backup was successful.

The Volume Shadow Copy Service maintains a free pool of VDisks for use as a FlashCopy target and a reserved pool of VDisks. These pools are implemented as virtual host systems on the SAN Volume Controller.


5.10.1 Installation overview

The steps for implementing the IBM System Storage Support for Microsoft Volume Shadow Copy Service must be completed in the correct sequence.

Before you begin, you must have experience with, or knowledge of, administering a Windows operating system. You must also have experience with, or knowledge of, administering a SAN Volume Controller.

You will need to complete the following tasks:

- Verify that the system requirements are met.

- Install the SAN Volume Controller Console if it is not already installed.

- Install the IBM System Storage hardware provider.

- Verify the installation.

- Create a free pool of volumes and a reserved pool of volumes on the SAN Volume Controller.

5.10.2 System requirements for the IBM System Storage hardware provider

Ensure that your system satisfies the following requirements before you install the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software on the Windows operating system:

- SAN Volume Controller and Master Console Version 2.1.0 or later with FlashCopy enabled. You must install the SAN Volume Controller Console before you install the IBM System Storage hardware provider.

- IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software Version 3.1 or later.

5.10.3 Installing the IBM System Storage hardware provider

This section includes the steps to install the IBM System Storage hardware provider on a Windows server. You must satisfy all of the system requirements before starting the installation.

During the installation, you will be prompted to enter information about the SAN Volume Controller Master Console, including the location of the truststore file. The truststore file is generated during the installation of the Master Console. You must copy this file to a location that is accessible to the IBM System Storage hardware provider on the Windows server.

When the installation is complete, the installation program might prompt you to restart the system. Complete the following steps to install the IBM System Storage hardware provider on the Windows server:

1. Download the installation program files from the IBM Web site, and place a copy on the Windows server where you will install the IBM System Storage hardware provider:

http://www-1.ibm.com/support/docview.wss?rs=591&context=STCCCXR&context=STCCCYH&dc=D400&uid=ssg1S4000663&loc=en_US&cs=utf-8&lang=en

2. Log on to the Windows server as an administrator, and navigate to the directory where the installation program is located.

3. Run the installation program by double-clicking IBMVSS.exe.


4. The Welcome window opens, as shown in Figure 5-39. Click Next to continue with the installation. You can click Cancel at any time to exit the installation. To move back to previous windows while using the wizard, click Back.

Figure 5-39 IBM System Storage Support for Microsoft Volume Shadow Copy installation

5. The License Agreement window opens (Figure 5-40). Read the license agreement information, select whether you accept the terms, and click Next. If you do not accept the terms, you cannot continue with the installation.

Figure 5-40 IBM System Storage Support for Microsoft Volume Shadow Copy installation


6. The Choose Destination Location window opens (Figure 5-41). Accept the default directory where the setup program will install the files, or click Change to select another directory, and click Next.

Figure 5-41 IBM System Storage Support for Microsoft Volume Shadow Copy installation

7. Click Install to begin the installation (Figure 5-42).

Figure 5-42 IBM System Storage Support for Microsoft Volume Shadow Copy installation


8. From the next window, select the required CIM server, or select “Enter the CIM Server address manually”, and click Next (Figure 5-43).

Figure 5-43 IBM System Storage Support for Microsoft Volume Shadow Copy installation

9. The Enter CIM Server Details window opens. Enter the following information in the fields (Figure 5-44):

a. In the CIM Server Address field, type the name of the server where the SAN Volume Controller Console is installed.

b. In the CIM User field, type the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the server where the SAN Volume Controller Console is installed.

c. In the CIM Password field, type the password for the user name that the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software will use to gain access to the SAN Volume Controller Console.

d. Click Next.

Figure 5-44 IBM System Storage Support for Microsoft Volume Shadow Copy installation

10. In the next window, click Finish. If necessary, the InstallShield Wizard prompts you to restart the system (Figure 5-45 on page 220).


Figure 5-45 IBM System Storage Support for Microsoft Volume Shadow Copy installation

5.10.4 Verifying the installation

Perform the following steps to verify the installation:

1. Select Start → All Programs → Administrative Tools → Services from the Windows server task bar.

2. Ensure that the service named “IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service” software appears, that Status is set to Started, and that Startup Type is set to Automatic.

3. Open a command prompt window, and issue the following command:

vssadmin list providers

Additional information:

- If these settings change after installation, you can use the ibmvcfg.exe tool to update the Microsoft Volume Shadow Copy and Virtual Disk Services software with the new settings.

- If you do not have the CIM Agent server, port, or user information, contact your CIM Agent administrator.


This command ensures that the service named IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software is listed as a provider (Example 5-46).

Example 5-46 Microsoft Software Shadow copy provider

C:\Documents and Settings\Administrator>vssadmin list providers
vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
(C) Copyright 2001 Microsoft Corp.

Provider name: 'Microsoft Software Shadow Copy provider 1.0'
   Provider type: System
   Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
   Version: 1.0.0.7

Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'
   Provider type: Hardware
   Provider Id: {d90dd826-87cf-42ce-a88d-b32caa82025b}
   Version: 3.1.0.1108

If you are able to successfully perform all of these verification tasks, the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software was successfully installed on the Windows server.
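The vssadmin check can also be scripted. The following Python sketch (the function name and sample text are ours for illustration) extracts the provider names from saved `vssadmin list providers` output and confirms that the IBM provider is present:

```python
import re

def provider_names(vssadmin_output):
    # Provider entries in the vssadmin output start with: Provider name: '<name>'
    return re.findall(r"Provider name: '([^']+)'", vssadmin_output)

sample = (
    "Provider name: 'Microsoft Software Shadow Copy provider 1.0'\n"
    "Provider name: 'IBM System Storage Volume Shadow Copy Service Hardware Provider'\n"
)
print(any(name.startswith("IBM") for name in provider_names(sample)))  # -> True
```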

5.10.5 Creating the free and reserved pools of volumes

The IBM System Storage hardware provider maintains a free pool of volumes and a reserved pool of volumes. Because these objects do not exist on the SAN Volume Controller, the free pool of volumes and the reserved pool of volumes are implemented as virtual host systems. You must define these two virtual host systems on the SAN Volume Controller.

When a shadow copy is created, the IBM System Storage hardware provider selects a volume in the free pool, assigns it to the reserved pool, and then removes it from the free pool. This process protects the volume from being overwritten by other Volume Shadow Copy Service users.

To successfully perform a Volume Shadow Copy Service operation, there must be enough VDisks mapped to the free pool. The VDisks must be the same size as the source VDisks.

Use the SAN Volume Controller Console or the SAN Volume Controller command-line interface (CLI) to perform the following steps:

1. Create a host for the free pool of VDisks. You can use the default name VSS_FREE or specify another name. Associate the host with the worldwide port name (WWPN) 5000000000000000 (15 zeroes) (Example 5-47).

Example 5-47 Creating an mkhost for the free pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_FREE -hbawwpn 5000000000000000 -force
Host, id [2], successfully created

2. Create a virtual host for the reserved pool of volumes. You can use the default name VSS_RESERVED or specify another name. Associate the host with the WWPN 5000000000000001 (14 zeroes) (Example 5-48 on page 222).

Chapter 5. Host configuration 221

Example 5-48 Creating an mkhost for the reserved pool

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name VSS_RESERVED -hbawwpn 5000000000000001 -force
Host, id [3], successfully created

3. Map the logical units (VDisks) to the free pool of volumes. The VDisks cannot be mapped to any other hosts. If you already have VDisks created for the free pool of volumes, you must assign the VDisks to the free pool.

4. Create VDisk-to-host mappings between the VDisks selected in step 3 and the VSS_FREE host to add the VDisks to the free pool. Alternatively, you can use the ibmvcfg add command to add VDisks to the free pool (Example 5-49).

Example 5-49 Host mappings

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0001
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host VSS_FREE msvc0002
Virtual Disk to Host map, id [1], successfully created

5. Verify that the VDisks have been mapped. If you do not use the default WWPNs 5000000000000000 and 5000000000000001, you must configure the IBM System Storage hardware provider with the WWPNs (Example 5-50).

Example 5-50 Verify hosts

IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap VSS_FREE
id name     SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
2  VSS_FREE 0       10       msvc0001   5000000000000000 6005076801A180E90800000000000012
2  VSS_FREE 1       11       msvc0002   5000000000000000 6005076801A180E90800000000000013

5.10.6 Changing the configuration parameters

You can change the parameters that you defined when you installed the IBM System Storage Support for Microsoft Volume Shadow Copy Service and Virtual Disk Service software. To do so, use the ibmvcfg.exe utility, a command-line utility that is located in the C:\Program Files\IBM\Hardware Provider for VSS-VDS directory (Example 5-51).

Example 5-51 Using ibmvcfg.exe utility help

C:\Program Files\IBM\Hardware Provider for VSS-VDS>ibmvcfg.exe

IBM System Storage VSS Provider Configuration Tool Commands
----------------------------------------
ibmvcfg.exe <command> <command arguments>

Commands:
/h | /help | -? | /?
showcfg
listvols <all|free|unassigned>
add <volume serial number list> (separated by spaces)
rem <volume serial number list> (separated by spaces)

Configuration:
set user <CIMOM user name>
set password <CIMOM password>
set trace [0-7]
set trustpassword <trustpassword>
set truststore <truststore location>
set usingSSL <YES | NO>
set vssFreeInitiator <WWPN>
set vssReservedInitiator <WWPN>
set FlashCopyVer <1 | 2> (only applies to ESS)
set cimomPort <PORTNUM>
set cimomHost <Hostname>
set namespace <Namespace>
set targetSVC <svc_cluster_ip>
set backgroundCopy <0-100>

Table 5-4 shows the available commands.

Table 5-4 Available ibmvcfg.exe commands

ibmvcfg showcfg
   Lists the current settings.
   Example: ibmvcfg showcfg

ibmvcfg set username <username>
   Sets the user name to access the SAN Volume Controller Console.
   Example: ibmvcfg set username Dan

ibmvcfg set password <password>
   Sets the password of the user name that will access the SAN Volume Controller Console.
   Example: ibmvcfg set password mypassword

ibmvcfg set targetSVC <ipaddress>
   Specifies the IP address of the SAN Volume Controller on which the VDisks are located when VDisks are moved to and from the free pool with the ibmvcfg add and ibmvcfg rem commands. The IP address is overridden if you use the -s flag with the ibmvcfg add and ibmvcfg rem commands.
   Example: set targetSVC 9.43.86.120

set backgroundCopy
   Sets the background copy rate for FlashCopy.
   Example: set backgroundCopy 80

ibmvcfg set usingSSL
   Specifies whether to use the Secure Sockets Layer protocol to connect to the SAN Volume Controller Console.
   Example: ibmvcfg set usingSSL yes

ibmvcfg set cimomPort <portnum>
   Specifies the SAN Volume Controller Console port number. The default value is 5999.
   Example: ibmvcfg set cimomPort 5999

ibmvcfg set cimomHost <server name>
   Sets the name of the server where the SAN Volume Controller Console is installed.
   Example: ibmvcfg set cimomHost cimomserver

ibmvcfg set namespace <namespace>
   Specifies the namespace value that the Master Console is using. The default value is \root\ibm.
   Example: ibmvcfg set namespace \root\ibm

ibmvcfg set vssFreeInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000000. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000000.
   Example: ibmvcfg set vssFreeInitiator 5000000000000000

ibmvcfg set vssReservedInitiator <WWPN>
   Specifies the WWPN of the host. The default value is 5000000000000001. Modify this value only if there is a host already in your environment with a WWPN of 5000000000000001.
   Example: ibmvcfg set vssReservedInitiator 5000000000000001

ibmvcfg listvols
   Lists all VDisks, including information about the size, location, and VDisk-to-host mappings.
   Example: ibmvcfg listvols

ibmvcfg listvols all
   Lists all VDisks, including information about the size, location, and VDisk-to-host mappings.
   Example: ibmvcfg listvols all

ibmvcfg listvols free
   Lists the volumes that are currently in the free pool.
   Example: ibmvcfg listvols free

ibmvcfg listvols unassigned
   Lists the volumes that are currently not mapped to any hosts.
   Example: ibmvcfg listvols unassigned

ibmvcfg add -s ipaddress
   Adds one or more volumes to the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg add vdisk12
   Example: ibmvcfg add 600507680187000350000000000000BA -s 66.150.210.141

ibmvcfg rem -s ipaddress
   Removes one or more volumes from the free pool of volumes. Use the -s parameter to specify the IP address of the SAN Volume Controller where the VDisks are located. The -s parameter overrides the default IP address that is set with the ibmvcfg set targetSVC command.
   Example: ibmvcfg rem vdisk12
   Example: ibmvcfg rem 600507680187000350000000000000BA -s 66.150.210.141

5.11 Specific Linux (on Intel) information

The following sections describe specific information pertaining to the connection of Linux on Intel-based hosts to the SVC environment.

5.11.1 Configuring the Linux host

Follow these steps to configure the Linux host:

1. Use the latest firmware levels on your host system.

2. Install the HBA or HBAs on the Linux server, as described in 5.6.4, “Host adapter installation and configuration” on page 183.

3. Install the supported HBA driver/firmware and upgrade the kernel if required, as described in 5.11.2, “Configuration information” on page 225.

4. Connect the Linux server FC host adapters to the switches.

5. Configure the switches (zoning) if needed.

6. Install SDD for Linux, as described in 5.11.5, “Multipathing in Linux” on page 226.

7. Configure the host, VDisks, and host mapping in the SAN Volume Controller.

8. Rescan for LUNs on the Linux server to discover the VDisks that were created on the SVC.
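
The rescan in step 8 can be requested without a reboot on 2.6 kernel hosts. The following is a hedged sketch that uses the sysfs SCSI host scan interface; the host numbers and exact paths vary by system, and this is an illustration rather than the book's own procedure:

```shell
# Request a SCSI rescan on each FC host adapter so that newly mapped
# SVC VDisks are discovered (2.6 kernel sysfs interface).
for host in /sys/class/scsi_host/host*; do
    [ -e "$host/scan" ] || continue
    # "- - -" is the wildcard for all channels, targets, and LUNs.
    echo "- - -" > "$host/scan" 2>/dev/null || true
done
echo "rescan requested"
```

After the rescan, cat /proc/scsi/scsi (shown later in Example 5-52) lists the newly discovered devices.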

5.11.2 Configuration information

The SAN Volume Controller supports hosts that run the following Linux distributions:

- Red Hat Enterprise Linux
- SUSE Linux Enterprise Server

For the latest information, always refer to this site:

http://www.ibm.com/storage/support/2145

For SVC Version 4.3, the following support information was available at the time of writing:

- Software supported levels:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278

- Hardware supported levels:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277

At this Web site, you will find the hardware list of supported HBAs and device driver levels for Linux. Check the supported firmware and driver level for your HBA, and follow the manufacturer's instructions to upgrade the firmware and driver levels for each type of HBA.

5.11.3 Disabling automatic Linux system updates

Many Linux distributions give you the ability to configure your systems for automatic system updates. Red Hat provides this ability in the form of a program called up2date, while Novell SUSE provides the YaST Online Update utility. These features periodically query for updates that are available for each host and can be configured to automatically install any new updates that they find.

Chapter 5. Host configuration 225

Page 252: San

Often, the automatic update process also upgrades the system to the latest kernel level. Hosts running SDD must turn off the automatic update of kernel levels, because certain drivers that are supplied by IBM, such as SDD, are dependent on a specific kernel and will cease to function on a new kernel. Similarly, HBA drivers need to be compiled against specific kernels in order to function optimally. By allowing automatic updates of the kernel, you risk affecting your host systems unexpectedly.
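
One way to keep automatic updates from replacing the running kernel is to pin the kernel packages. The sketch below is illustrative only: the exclude= syntax shown applies to yum-based systems (up2date and YaST Online Update have their own skip-list and package-lock mechanisms), and the YUM_CONF variable is a stand-in used here so the example is safe to run:

```shell
# Hedged sketch: exclude kernel packages from automatic updates so that
# SDD keeps matching the kernel it was built against.
# YUM_CONF is a hypothetical override; on a real host it is /etc/yum.conf.
conf="${YUM_CONF:-$(mktemp)}"
grep -q '^exclude=kernel\*' "$conf" 2>/dev/null || echo 'exclude=kernel*' >> "$conf"
echo "kernel packages excluded in $conf"
```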

5.11.4 Setting queue depth with QLogic HBAs

The queue depth is the number of I/O operations that can be run in parallel on a device. Configure your host running the Linux operating system by using the formula that is specified in 5.16, “Calculating the queue depth” on page 252.

Perform the following steps to set the maximum queue depth:

1. Add the following line to the /etc/modules.conf file:

– For the 2.4 kernel (SUSE Linux Enterprise Server 8 or Red Hat Enterprise Linux):

options qla2300 ql2xfailover=0 ql2xmaxqdepth=new_queue_depth

– For the 2.6 kernel (SUSE Linux Enterprise Server 9, or later, or Red Hat Enterprise Linux 4, or later):

options qla2xxx ql2xfailover=0 ql2xmaxqdepth=new_queue_depth

2. Rebuild the RAM disk that is associated with the kernel being used by using one of the following commands:

– If you are running on a SUSE Linux Enterprise Server operating system, run the mk_initrd command.

– If you are running on a Red Hat Enterprise Linux operating system, run the mkinitrd command, and then restart.
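
After the restart, you can confirm the queue depth that the driver actually loaded with. This is a hedged sketch for 2.6 kernels, where loaded module parameters are normally exposed under /sys/module; the exact path can differ by driver version:

```shell
# Read back the qla2xxx maximum queue depth from sysfs (2.6 kernel).
param=/sys/module/qla2xxx/parameters/ql2xmaxqdepth
if [ -r "$param" ]; then
    echo "ql2xmaxqdepth=$(cat "$param")"
else
    echo "qla2xxx module not loaded"
fi
```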

5.11.5 Multipathing in Linux

Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support in the operating system. On older systems, it is necessary to install the IBM SDD multipath driver.

Installing SDD

This section describes how to install SDD for older distributions. Before performing these steps, always check for the currently supported levels, as described in 5.11.2, “Configuration information” on page 225.

The cat /proc/scsi/scsi command in Example 5-52 shows the devices that the SCSI driver has probed. In our configuration, we have two HBAs installed in our server, and we configured the zoning to access our VDisk from four paths.

Example 5-52 cat /proc/scsi/scsi command example

[root@diomede sdd]# cat /proc/scsi/scsi
Attached devices:
Host: scsi4 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
Host: scsi5 Channel: 00 Id: 00 Lun: 00
  Vendor: IBM      Model: 2145             Rev: 0000
  Type:   Unknown                          ANSI SCSI revision: 04
[root@diomede sdd]#

The rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm command installs the package, as shown in Example 5-53.

Example 5-53 rpm command example

[root@Palau sdd]# rpm -ivh IBMsdd-1.6.3.0-5.i686.rhel4.rpm
Preparing...                ########################################### [100%]
   1:IBMsdd                 ########################################### [100%]
Added following line to /etc/inittab:
srv:345:respawn:/opt/IBMsdd/bin/sddsrv > /dev/null 2>&1
[root@Palau sdd]#

To manually load and configure SDD on Linux, use the service sdd start command (SUSE Linux users can use the sdd start command). If you are not running a supported kernel, you will get an error message.

If your kernel is supported, you see an OK success message, as shown in Example 5-54.

Example 5-54 Supported kernel for SDD

[root@Palau sdd]# sdd start
Starting IBMsdd driver load:                               [ OK ]
Issuing killall sddsrv to trigger respawn...
Starting IBMsdd configuration:                             [ OK ]

Issue the cfgvpath query command to view the name and serial number of the VDisk that is configured in the SAN Volume Controller, as shown in Example 5-55.

Example 5-55 cfgvpath query example

[root@Palau ~]# cfgvpath query
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sda df_ctlr=0
/dev/sda ( 8, 0) host=0 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdb df_ctlr=0

/dev/sdb ( 8, 16) host=0 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdc df_ctlr=0
/dev/sdc ( 8, 32) host=1 ch=0 id=0 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=0 df_ctlr=0
RTPG command = a3 0a 00 00 00 00 00 00 08 20 00 00
total datalen=52 datalen_str=0x00 00 00 30
RTPG succeeded: sd_name=/dev/sdd df_ctlr=0
/dev/sdd ( 8, 48) host=1 ch=0 id=1 lun=0 vid=IBM pid=2145 serial=60050768018201bee000000000000035 lun_id=60050768018201bee000000000000035 ctlr_flag=1 ctlr_nbr=1 df_ctlr=0
[root@Palau ~]#

The cfgvpath command configures the SDD vpath devices, as shown in Example 5-56.

Example 5-56 cfgvpath command example

[root@Palau ~]# cfgvpath
c--------- 1 root root 253, 0 Jun 5 09:04 /dev/IBMsdd
WARNING: vpatha path sda has already been configured.
WARNING: vpatha path sdb has already been configured.
WARNING: vpatha path sdc has already been configured.
WARNING: vpatha path sdd has already been configured.
Writing out new configuration to file /etc/vpath.conf
[root@Palau ~]#

The configuration information is saved by default in the /etc/vpath.conf file. You can save the configuration information to a specified file name by entering the following command:

cfgvpath -f file_name.cfg

Issue the chkconfig command to enable SDD to run at system startup:

chkconfig sdd on

To verify the setting, enter the following command:

chkconfig --list sdd

This verification is shown in Example 5-57.

Example 5-57 sdd run level example

[root@Palau sdd]# chkconfig --list sdd
sdd             0:off   1:off   2:on    3:on    4:on    5:on    6:off
[root@Palau sdd]#

If necessary, you can disable the startup option by entering this command:

chkconfig sdd off

Run the datapath query commands to display the online adapters and the paths to the adapters. Notice that the preferred paths are used from one of the nodes, that is, path 0 and path 2. Path 1 and path 3 connect to the other node and are used as alternate or backup paths for high availability, as shown in Example 5-58.

Example 5-58 datapath query command example

[root@Palau ~]# datapath query adapter

Active Adapters :2

Adpt#  Name           State   Mode    Select  Errors  Paths  Active
    0  Host0Channel0  NORMAL  ACTIVE       1       0      2       0
    1  Host1Channel0  NORMAL  ACTIVE       0       0      2       0
[root@Palau ~]#
[root@Palau ~]# datapath query device

Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================
Path#  Adapter/Hard Disk   State  Mode    Select  Errors
    0  Host0Channel0/sda   CLOSE  NORMAL       1       0
    1  Host0Channel0/sdb   CLOSE  NORMAL       0       0
    2  Host1Channel0/sdc   CLOSE  NORMAL       0       0
    3  Host1Channel0/sdd   CLOSE  NORMAL       0       0
[root@Palau ~]#

SDD has three path-selection policy algorithms:

- Failover only (fo): All I/O operations for the device are sent to the same (preferred) path unless the path fails because of I/O errors. Then, an alternate path is chosen for subsequent I/O operations.

- Load balancing (lb): The path to use for an I/O operation is chosen by estimating the load on the adapter to which each path is attached. The load is a function of the number of I/O operations currently in process. If multiple paths have the same load, a path is chosen at random from those paths. Load-balancing mode also incorporates failover protection. The load-balancing policy is also known as the optimized policy.

- Round robin (rr): The path to use for each I/O operation is chosen at random from paths that were not used for the last I/O operation. If a device has only two paths, SDD alternates between the two paths.

You can dynamically change the SDD path-selection policy algorithm by using the datapath set device policy command.

You can see the active SDD path-selection policy for a device when you use the datapath query device command. Example 5-58 shows that the active policy is Optimized Sequential.
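
As a hedged illustration of the policy change, the sketch below switches device 0 to round robin and then queries it again. The device number comes from datapath query device; the sketch is guarded so that it degrades gracefully on a host where the SDD datapath utility is not installed:

```shell
# Illustrative only: change the path-selection policy for SDD device 0.
if command -v datapath >/dev/null 2>&1; then
    datapath set device 0 policy rr   # rr = round robin; fo and lb also exist
    datapath query device             # confirm the new POLICY value
else
    echo "SDD datapath utility not installed"
fi
```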

Example 5-59 shows the VDisk information from the SVC command-line interface.

Example 5-59 svcinfo redhat1

IBM_2145:ITSOSVC42A:admin>svcinfo lshost linux2
id 6
name linux2
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 2
state active
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
IBM_2145:ITSOSVC42A:admin>

IBM_2145:ITSOSVC42A:admin>svcinfo lshostvdiskmap linux2
id name   SCSI_id vdisk_id vdisk_name wwpn             vdisk_UID
6  linux2 0       33       linux_vd1  210000E08B89C1CD 60050768018201BEE000000000000035
IBM_2145:ITSOSVC42A:admin>
IBM_2145:ITSOSVC42A:admin>svcinfo lsvdisk linux_vd1
id 33
name linux_vd1
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG0
capacity 1.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018201BEE000000000000035
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
IBM_2145:ITSOSVC42A:admin>

5.11.6 Creating and preparing the SDD volumes for use

Follow these steps to create and prepare the volumes:

1. Create a partition on the vpath device, as shown in Example 5-60.

Example 5-60 fdisk example

[root@Palau ~]# fdisk /dev/vpatha
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): m
Command action
   a   toggle a bootable flag
   b   edit bsd disklabel
   c   toggle the dos compatibility flag
   d   delete a partition
   l   list known partition types
   m   print this menu
   n   add a new partition
   o   create a new empty DOS partition table
   p   print the partition table
   q   quit without saving changes
   s   create a new empty Sun disklabel
   t   change a partition's system id
   u   change display/entry units
   v   verify the partition table
   w   write table to disk and exit
   x   extra functionality (experts only)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-1011, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-1011, default 1011):
Using default value 1011

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@Palau ~]#

2. Create a file system on the vpath, as shown in Example 5-61.

Example 5-61 mkfs command example

[root@Palau ~]# mkfs -t ext3 /dev/vpatha
mke2fs 1.35 (28-Feb-2004)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
131072 inodes, 262144 blocks
13107 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=268435456
8 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@Palau ~]#

3. Create the mount point, and mount the vpath drive, as shown in Example 5-62.

Example 5-62 Mount point

[root@Palau ~]# mkdir /itsosvc
[root@Palau ~]# mount -t ext3 /dev/vpatha /itsosvc

4. The drive is now ready for use. The df command shows us the mounted disk /itsosvc, and the datapath query command shows that four paths are available (Example 5-63).

Example 5-63 Display mounted drives

[root@Palau ~]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      74699952   2564388  68341032   4% /
/dev/hda1               101086     13472     82395  15% /boot
none                   1033136         0   1033136   0% /dev/shm
/dev/vpatha            1032088     34092    945568   4% /itsosvc
[root@Palau ~]#

[root@Palau ~]# datapath query device

Total Devices : 1

DEV#: 0  DEVICE NAME: vpatha  TYPE: 2145  POLICY: Optimized Sequential
SERIAL: 60050768018201bee000000000000035
============================================================================

Path#  Adapter/Hard Disk   State  Mode    Select  Errors
    0  Host0Channel0/sda   OPEN   NORMAL       1       0
    1  Host0Channel0/sdb   OPEN   NORMAL    6296       0
    2  Host1Channel0/sdc   OPEN   NORMAL    6178       0
    3  Host1Channel0/sdd   OPEN   NORMAL       0       0
[root@Palau ~]#

5.11.7 Using the operating system MPIO

Red Hat Enterprise Linux 5 and later and SUSE Linux Enterprise Server 10 and later provide their own multipath support in the operating system. Therefore, you do not have to install an additional device driver. Always check whether your operating system includes one of the supported multipath drivers.

You will find this information in the links that are provided in 5.11.2, “Configuration information” on page 225. In SLES10, the multipath drivers and tools are installed by default, but for RHEL5, the user has to explicitly choose the multipath components during the OS installation to install them.

Each of the attached SAN Volume Controller LUNs has a special device file in the Linux /dev directory.

Hosts that use 2.6 kernel Linux operating systems can have as many FC disks as the SVC allows. The following Web site provides the most current information about the maximum configuration for the SAN Volume Controller:

http://www.ibm.com/storage/support/2145

5.11.8 Creating and preparing MPIO volumes for use

First, you have to start the MPIO daemon on your system. Run the following commands on your host system:

1. Enable MPIO for SLES10 by running the following commands:

a. /etc/init.d/boot.multipath {start|stop}

b. /etc/init.d/multipathd {start|stop|status|try-restart|restart|force-reload|reload|probe}

2. Enable MPIO for RHEL5 by running the following commands:

a. modprobe dm-multipath

b. modprobe dm-round-robin

c. service multipathd start

d. chkconfig multipathd on

Example 5-64 on page 234 shows the commands issued on a Red Hat Enterprise Linux 5.1 operating system.

Tip: Run insserv boot.multipath multipathd to automatically load the multipath driver and multipathd daemon during startup.

Chapter 5. Host configuration 233

Page 260: San

Example 5-64 Starting MPIO daemon on Red Hat Enterprise Linux

[root@palau ~]# modprobe dm-round-robin
[root@palau ~]# multipathd start
[root@palau ~]# chkconfig multipathd on
[root@palau ~]#

3. Open the multipath.conf file, and follow the instructions to enable multipathing for IBM devices. The file is located in the /etc directory. Example 5-65 shows editing using vi.

Example 5-65 Editing the multipath.conf file

[root@palau etc]# vi multipath.conf

4. Add the following entry to the multipath.conf file:

device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
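
The stanza in step 4 can be appended to the configuration file from a script. In this hedged sketch, MPATH_CONF is a stand-in variable that defaults to a temporary file so the example is safe to run; on a real host, point it at /etc/multipath.conf before restarting the daemon in step 5:

```shell
# Append the SVC device stanza to a multipath.conf file.
# MPATH_CONF is hypothetical; use /etc/multipath.conf on a real host.
MPATH_CONF="${MPATH_CONF:-$(mktemp)}"
cat >> "$MPATH_CONF" <<'EOF'
device {
        vendor "IBM"
        product "2145"
        path_grouping_policy group_by_prio
        prio_callout "/sbin/mpath_prio_alua /dev/%n"
}
EOF
echo "stanza appended to $MPATH_CONF"
```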

5. Restart the multipath daemon (Example 5-66).

Example 5-66 Stopping and starting the multipath daemon

[root@palau ~]# service multipathd stop
Stopping multipathd daemon:                                [ OK ]
[root@palau ~]# service multipathd start
Starting multipathd daemon:                                [ OK ]

6. Type the multipath -dl command to see the MPIO configuration. You will see two groups with two paths each. All paths must have the state [active][ready], and one group will be [enabled].

7. Use the fdisk command to create a partition on the SVC disk, as shown in Example 5-67.

Example 5-67 fdisk

[root@palau scsi]# fdisk -l

Disk /dev/hda: 80.0 GB, 80032038912 bytes
255 heads, 63 sectors/track, 9730 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1   *           1          13      104391   83  Linux
/dev/hda2              14        9730    78051802+  8e  Linux LVM

Disk /dev/sda: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sda doesn't contain a valid partition table

Disk /dev/sdb: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdc: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdd: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sde doesn't contain a valid partition table

Disk /dev/sdf: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdf doesn't contain a valid partition table

Disk /dev/sdg: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdg doesn't contain a valid partition table

Disk /dev/sdh: 4244 MB, 4244635648 bytes
131 heads, 62 sectors/track, 1020 cylinders
Units = cylinders of 8122 * 512 = 4158464 bytes

Disk /dev/sdh doesn't contain a valid partition table

Disk /dev/dm-2: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-2 doesn't contain a valid partition table

Disk /dev/dm-3: 4244 MB, 4244635648 bytes
255 heads, 63 sectors/track, 516 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/dm-3 doesn't contain a valid partition table
[root@palau scsi]# fdisk /dev/dm-2
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4): 1
First cylinder (1-516, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-516, default 516):
Using default value 516

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 22: Invalid argument.
The kernel still uses the old table.
The new table will be used at the next reboot.
[root@palau scsi]# shutdown -r now

8. Create a file system using the mkfs command (Example 5-68).

Example 5-68 mkfs command

[root@palau ~]# mkfs -t ext3 /dev/dm-2
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
518144 inodes, 1036288 blocks
51814 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1061158912
32 block groups
32768 blocks per group, 32768 fragments per group
16192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
[root@palau ~]#

9. Create a mount point, and mount the drive, as shown in Example 5-69.

Example 5-69 Mount point

[root@palau ~]# mkdir /svcdisk_0
[root@palau ~]# cd /svcdisk_0/
[root@palau svcdisk_0]# mount -t ext3 /dev/dm-2 /svcdisk_0
[root@palau svcdisk_0]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      73608360   1970000  67838912   3% /
/dev/hda1               101086     15082     80785  16% /boot
tmpfs                   967984         0    967984   0% /dev/shm
/dev/dm-2              4080064     73696   3799112   2% /svcdisk_0

5.12 VMware configuration information

This section explains the requirements and additional information for attaching the SAN Volume Controller to a variety of guest host operating systems running on the VMware operating system.

5.12.1 Configuring VMware hosts

To configure the VMware hosts, follow these steps:

1. Install the HBAs in your host system, as described in 5.12.4, “HBAs for hosts running VMware” on page 238.

2. Connect the server FC host adapters to the switches.

3. Configure the switches (zoning), as described in 5.12.6, “VMware storage and zoning recommendations” on page 240.

4. Install the VMware operating system (if not already done) and check the HBA timeouts, as described in 5.12.7, “Setting the HBA timeout for failover in VMware” on page 241.

5. Configure the host, VDisks, and host mapping in the SVC, as described in 5.12.9, “Attaching VMware to VDisks” on page 242.

5.12.2 Operating system versions and maintenance levels

For the latest information about VMware support, refer to this Web site:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

At the time of writing, the following versions are supported:

- ESX V3.5
- ESX V3.51
- ESX V3.02
- ESX V2.5.3
- ESX V2.5.2
- ESX V2.1 with Virtual Machine File System (VMFS) disks

5.12.3 Guest operating systems

Also, make sure that you are using supported guest operating systems. The latest information is available at this Web site:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003278#_VMWare

5.12.4 HBAs for hosts running VMware

Ensure that your hosts that are running on VMware operating systems use the correct HBAs and firmware levels.

Install the host adapters in your system. Refer to the manufacturer’s instructions for installation and configuration of the HBAs.

In IBM System x servers, the HBA must always be installed in the first slots. Therefore, if you install, for example, two HBAs and two network cards, the HBAs must be installed in slot 1 and slot 2 and the network cards can be installed in the remaining slots.

For older ESX versions, you will find the supported HBAs at the IBM Web Site:

http://www.ibm.com/storage/support/2145

The interoperability matrixes for ESX V3.02, V3.5, and V3.51 are available at the VMware Web site (clicking this link opens or downloads the PDF):

Important: If you are running the VMware V3.01 build, you are required to move to a minimum VMware level of V3.02 for continued support.

238 Implementing the IBM System Storage SAN Volume Controller V5.1


� V3.02

http://www.vmware.com/pdf/vi3_io_guide.pdf

� V3.5

http://www.vmware.com/pdf/vi35_io_guide.pdf

The supported HBA device drivers are already included in the ESX server build.

After installing, load the default configuration of your FC HBAs. We recommend using the same model of HBA with the same firmware in one server. It is not supported to have Emulex and QLogic HBAs that access the same target in one server.

5.12.5 Multipath solutions supported

Only a single path is supported in ESX V2.1. Multipathing is supported in ESX V2.5.x.

The VMware operating system provides multipathing support, so installing multipathing software is not required.

VMware multipathing software dynamic pathing

VMware multipathing software does not support dynamic pathing. Preferred paths that are set in the SAN Volume Controller are ignored. The VMware multipathing software performs static load balancing for I/O, based upon a host setting that defines the preferred path for a given volume.

Multipathing configuration maximums

When you configure, remember the maximum configuration for the VMware multipathing software: the VMware software supports a maximum of 256 SCSI devices, and a maximum of four paths to each VDisk, which gives a total of 1,024 paths on a server.
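The maximums above multiply out as follows:

```shell
# Maximum paths per server: 256 SCSI devices x 4 paths per VDisk.
devices=256
paths_per_vdisk=4
echo $(( devices * paths_per_vdisk ))   # 1024
```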

Clustering support for hosts running VMware

The SVC provides cluster support on VMware guest operating systems. The following Web site provides the current interoperability information:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S1003277#_VMware

SAN boot support

SAN boot of any guest OS is supported under VMware. The very nature of VMware means that SAN boot is a requirement on any guest OS. The guest OS must reside on a SAN disk.

If you are unfamiliar with VMware environments and the advantages of storing virtual machines and application data on a SAN, we recommend that you get an overview of the VMware products before continuing.

VMware documentation is available at this Web site:

http://www.vmware.com/support/pubs/

Paths: Each path to a VDisk equates to a single SCSI device.


5.12.6 VMware storage and zoning recommendations

The VMware ESX server can use a Virtual Machine File System (VMFS), which is a file system that is optimized to run multiple virtual machines as one workload to minimize disk I/O. It is also able to handle concurrent access from multiple physical machines, because it enforces the appropriate access controls. Therefore, multiple ESX hosts can share the same set of LUNs (Figure 5-46).

Figure 5-46 VMware: SVC zoning example

Theoretically, you can run all of your virtual machines on one LUN, but for performance reasons, in more complex scenarios, it can be better to load balance virtual machines over separate HBAs, storage subsystems, or arrays.

For example, if you run an ESX host with several virtual machines, it makes sense to use one “slow” array for guest operating systems without high I/O demands, such as print or Active Directory services, and another, faster array for database guest operating systems.

Using fewer VDisks has the following advantages:

� More flexibility to create virtual machines without creating new space on the SVC
� More possibilities for taking VMware snapshots
� Fewer VDisks to manage

Using more and smaller VDisks has the following advantages:

� Separate I/O characteristics of the guest operating systems


� More flexibility (the multipathing policy and disk shares are set per VDisk)
� Microsoft Cluster Service requires its own VDisk for each cluster disk resource

More documentation about designing your VMware infrastructure is provided at one of these Web sites:

� http://www.vmware.com/vmtn/resources/

� http://www.vmware.com/resources/techresources/1059

5.12.7 Setting the HBA timeout for failover in VMware

The timeout for failover for ESX hosts must be set to 30 seconds:

� For QLogic HBAs, the timeout depends on the PortDownRetryCount parameter. The timeout value is 2 x PortDownRetryCount + 5 sec. It is recommended to set the qlport_down_retry parameter to 14.

� For Emulex HBAs, the lpfc_linkdown_tmo and the lpfc_nodev_tmo parameters must be set to 30 seconds.
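The QLogic formula above can be checked with a quick calculation (the variable name below is illustrative):

```shell
# Failover timeout in seconds: 2 x PortDownRetryCount + 5,
# with the recommended qlport_down_retry value of 14.
port_down_retry=14
echo $(( 2 * port_down_retry + 5 ))   # 33
```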

To make these changes on your system, perform the following steps (Example 5-70):

1. Back up the /etc/vmware/esx.conf file.
2. Open the /etc/vmware/esx.conf file for editing.
3. The file includes a section for every installed SCSI device.
4. Locate your SCSI adapters, and edit the previously described parameters.
5. Repeat this process for every installed HBA.

Example 5-70 Setting the HBA timeout

[root@nile svc]# cp /etc/vmware/esx.conf /etc/vmware/esx.confbackup
[root@nile svc]# vi /etc/vmware/esx.conf
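The same edit can be scripted. The sketch below works on a miniature stand-in for /etc/vmware/esx.conf, because the exact option syntax differs between ESX builds; treat the file contents and the sed pattern as assumptions:

```shell
cd "$(mktemp -d)"
# Miniature stand-in for /etc/vmware/esx.conf (real file is much larger):
cat > esx.conf <<'EOF'
/device/002:02.0/options = "qlport_down_retry=8"
EOF
cp esx.conf esx.conf.backup                                    # step 1: back up first
sed -i 's/qlport_down_retry=[0-9]*/qlport_down_retry=14/' esx.conf
grep qlport_down_retry esx.conf
```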

Guidelines:

� ESX Server hosts that use shared storage for virtual machine failover or load balancing must be in the same zone.

� You can have only one VMFS volume per VDisk.


5.12.8 Multipathing in ESX

The ESX Server performs multipathing. You do not need to install a multipathing driver, such as SDD, either on the ESX server or on the guest operating systems.

5.12.9 Attaching VMware to VDisks

First, we make sure that the VMware host is logged into the SAN Volume Controller. In our examples, we use the VMware ESX server V3.5 and the host name Nile.

Enter the following command to check the status of the host:

svcinfo lshost <hostname>

Example 5-71 shows that the host Nile is logged into the SVC with two HBAs.

Example 5-71 lshost Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2
type generic
mask 1111
iogrp_count 2
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active

Then, we have to set the SCSI Controller Type in VMware. By default, ESX Server disables the SCSI bus sharing and does not allow multiple virtual machines to access the same VMFS file at the same time (Figure 5-47 on page 243).

But in many configurations, such as those configurations for high availability, the virtual machines have to share the same VMFS file to share a disk.

To set the SCSI Controller Type in VMware:

1. Log on to your Infrastructure Client, shut down the virtual machine, right-click it, and select Edit settings.

2. Highlight the SCSI Controller, and select one of the three available settings, depending on your configuration:

– None: Disks cannot be shared by other virtual machines.
– Virtual: Disks can be shared by virtual machines on the same server.
– Physical: Disks can be shared by virtual machines on any server.

Click OK to apply the setting.


Figure 5-47 Changing SCSI bus settings

3. Create your VDisks on the SVC, and map them to the ESX hosts.

For this example configuration, we have created one VDisk and have mapped it to our ESX host, as shown in Example 5-72.

Example 5-72 Mapped VDisk to ESX host Nile

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Nile
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 12 VMW_pool 210000E08B892BCD 60050768018301BF2800000000000010

ESX does not automatically scan for SAN changes (except when rebooting the entire ESX server). If you have made any changes to your SVC or SAN configuration, perform the following steps:

1. Open your VMware Infrastructure Client.

2. Select the host.

3. In the Hardware window, choose Storage Adapters.

4. Click Rescan.

Tips:

� If you want to use features, such as VMotion, the VDisks that own the VMFS file have to be visible to every ESX host that will be able to host the virtual machine. In SVC, select Allow the virtual disks to be mapped even if they are already mapped to a host.

� The VDisk has to have the same SCSI ID on each ESX host.


To configure a storage device to use it in VMware, perform the following steps:

1. Open your VMware Infrastructure Client.

2. Select the host for which you want to see the assigned VDisks, and click the Configuration tab.

3. In the Hardware window on the left side, click Storage.

4. To create a new datastore, select click here to create a datastore or, if the yellow field does not appear, select Add storage (Figure 5-48).

Figure 5-48 VMWare add datastore

5. The Add storage wizard will appear.

6. Select Create Disk/LUN, and click Next.

7. Select the SVC VDisk that you want to use for the datastore, and click Next.

8. Review the disk layout and click Next.

9. Enter a datastore name and click Next.

10.Select a block size, enter the size of the new partition, and then, click Next.

11.Review your selections, and click Finish.

Now, the created VMFS datastore appears in the Storage window (Figure 5-49). You will see the details for the highlighted datastore. Check whether all of the paths are available and that the Path Selection is set to Most Recently Used.

Figure 5-49 VMWare storage configuration

If not all of the paths are available, check your SAN and storage configuration. After fixing the problem, select Refresh to perform a path rescan. The view will be updated to the new configuration.


The recommended Multipath Policy for SVC is Most Recently Used. If you have to edit this policy, perform the following steps:

1. Highlight the datastore.

2. Click Properties.

3. Click Managed Paths.

4. Click Change (see Figure 5-50).

5. Select Most Recently Used.

6. Click OK.

7. Click Close.

Now, your VMFS datastore has been created, and you can start using it for your guest operating systems.

5.12.10 VDisk naming in VMware

In the Virtual Infrastructure Client, a VDisk is displayed as a sequence of three or four numbers, separated by colons (Figure 5-50):

<SCSI HBA>:<SCSI target>:<SCSI VDisk>:<disk partition>

where:

� SCSI HBA

The number of the SCSI HBA (can change).

� SCSI target

The number of the SCSI target (can change).

� SCSI VDisk

The number of the VDisk (never changes).

� disk partition

The number of the disk partition (never changes). If the last number is not displayed, the name stands for the entire VDisk.
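Because only the last two numbers are stable, scripts should key on the VDisk and partition fields. A quick way to split the displayed name in a shell (the sample value vmhba1:0:12:1 is hypothetical):

```shell
name="vmhba1:0:12:1"   # hypothetical displayed name
IFS=:
set -- $name           # split on colons into $1..$4
unset IFS
echo "HBA=$1 target=$2 VDisk=$3 partition=$4"
```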

Figure 5-50 VDisk naming in VMware


5.12.11 Setting the Microsoft guest operating system timeout

For a Microsoft Windows 2000 Server or Windows 2003 Server installed as a VMware guest operating system, the disk timeout value must be set to 60 seconds.

We provide the instructions to perform this task in 5.6.5, “Changing the disk timeout on Microsoft Windows Server” on page 185.

5.12.12 Extending a VMFS volume

It is possible to extend VMFS volumes while virtual machines are running. First, extend the VDisk on the SVC; then, extend the VMFS volume. Before performing these steps, we recommend backing up your data.

Perform the following steps to extend a volume:

1. The VDisk can be expanded with the svctask expandvdisksize -size <size> -unit gb <VDisk_name> command. Example 5-73 expands the VDisk VMW_pool by 5 GB.

Example 5-73 Expanding a VDisk in SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 60.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 60.00GB
real_capacity 60.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb VMW_pool
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk VMW_pool
id 12
name VMW_pool
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 65.0GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000010
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 65.00GB
real_capacity 65.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS1:admin>


2. Open the Virtual Infrastructure Client.

3. Select the host.

4. Select Configuration.

5. Select Storage Adapters.

6. Click Rescan.

7. Make sure that the Scan for new Storage Devices check box is marked, and click OK. After the scan has completed, the new capacity is displayed in the Details section.

8. Click Storage.

9. Right-click the VMFS volume and click Properties.

10.Click Add Extent.

11.Select the new free space, and click Next.

12.Click Next.

13.Click Finish.

The VMFS volume has now been extended, and the new space is ready for use.

5.12.13 Removing a datastore from an ESX host

Before you remove a datastore from an ESX host, you have to migrate or delete all of the virtual machines that reside on this datastore.

To remove it, perform the following steps:

1. Back up the data.

2. Open the Virtual Infrastructure Client.

3. Select the host.

4. Select Configuration.

5. Select Storage.

6. Highlight the datastore that you want to remove.

7. Click Remove.

8. Read the warning, and if you are sure that you want to remove the datastore and delete all of the data on it, click Yes.

9. Remove the host mapping on the SVC, or delete the VDisk (as shown in Example 5-74).

10.In the VI Client, select Storage Adapters.

11.Click Rescan.

12.Make sure that the Scan for new Storage Devices check box is marked, and click OK.

13.After the scan completes, the disk disappears from the view.

Your datastore has been successfully removed from the system.

Example 5-74 Remove VDisk host mapping: Delete VDisk

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile VMW_pool
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk VMW_pool


5.13 Sun Solaris support information

For the latest information about supported software and driver levels, always refer to this site:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.13.1 Operating system versions and maintenance levels

At the time of writing, Sun Solaris 8, Sun Solaris 9, and Sun Solaris 10 are supported in 64-bit only.

5.13.2 SDD dynamic pathing

Solaris supports dynamic pathing when you either add more paths to an existing VDisk, or if you present a new VDisk to a host. No user intervention is required. SDD is aware of the preferred paths that SVC sets per VDisk.

SDD will use a round-robin algorithm when failing over paths, that is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk will go offline. Therefore, it can take time to perform path failover when multiple paths go offline.

SDD under Solaris performs load balancing across the preferred paths where appropriate.
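The failover order described above can be sketched as follows; the path names and the availability check are invented for illustration:

```shell
preferred="pref_a pref_b"
nonpreferred="alt_a alt_b"
path_up() { [ "$1" = "alt_b" ]; }   # pretend only alt_b survived the failure

# Try the preferred paths first, then the non-preferred paths, in order.
selected=""
for path in $preferred $nonpreferred; do
    if path_up "$path"; then selected=$path; break; fi
done
echo "${selected:-VDisk offline}"   # alt_b
```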

Veritas Volume Manager with dynamic multipathing

Veritas Volume Manager (VM) with dynamic multipathing (DMP) automatically selects the next available I/O path for I/O requests without action from the administrator. VM with DMP is also informed when you repair or restore a connection, and when you add or remove devices after the system has been fully booted (provided that the operating system recognizes the devices correctly). The new JNI drivers support the mapping of new VDisks without rebooting the Solaris host.

Note the following support characteristics:

� Veritas VM with DMP does not support preferred pathing with SVC.
� Veritas VM with DMP does support load balancing across multiple paths with SVC.

Co-existence with SDD and Veritas VM with DMP

Veritas Volume Manager with DMP will coexist in “pass-through” mode with SDD. DMP will use the vpath devices that are provided by SDD.

OS cluster support

Solaris with Symantec Cluster V4.1, Symantec SFHA and SFRAC V4.1/5.0, and Solaris with Sun Cluster V3.1/3.2 are supported at the time of writing.

SAN boot support

Note the following support characteristics:

� Boot from SAN is supported under Solaris 9 running Symantec Volume Manager.
� Boot from SAN is not supported when SDD is used as the multipathing software.


5.14 Hewlett-Packard UNIX configuration information

For the latest information about Hewlett-Packard UNIX® (HP-UX) support, refer to this Web site:

http://www-03.ibm.com/systems/storage/software/virtualization/svc/interop.html

5.14.1 Operating system versions and maintenance levels

At the time of writing, HP-UX V11.0 and V11i v1/v2/v3 are supported (64-bit only).

5.14.2 Multipath solutions supported

At the time of writing, SDD V1.6.3.0 for HP-UX is supported. The PVLinks multipathing software and the ServiceGuard cluster software V11.14/11.16/11.17/11.18 are also supported, but in a cluster environment, we recommend SDD.

SDD dynamic pathing

HP-UX supports dynamic pathing when you either add more paths to an existing VDisk or if you present a new VDisk to a host.

SDD is aware of the preferred paths that SVC sets per VDisk. SDD will use a round-robin algorithm when failing over paths, that is, it will try the next known preferred path. If this method fails and all preferred paths have been tried, it will use a round-robin algorithm on the non-preferred paths until it finds a path that is available. If all paths are unavailable, the VDisk will go offline. It can take time, therefore, to perform path failover when multiple paths go offline.

SDD under HP-UX performs load balancing across the preferred paths where appropriate.

Physical volume links (PVLinks) dynamic pathing

Unlike SDD, PVLinks does not load balance and is unaware of the preferred paths that SVC sets per VDisk. Therefore, we strongly recommend SDD, except when in a clustering environment or when using an SVC VDisk as your boot disk.

When creating a Volume Group, specify the primary path that you want HP-UX to use when accessing the Physical Volume (PV) that is presented by the SVC. This path, and only this path, will be used to access the PV as long as it is available, no matter which preferred path the SVC has set for that VDisk. Therefore, be careful when creating Volume Groups so that the primary links to the PVs (and thus the load) are balanced across HBAs, FC switches, SVC nodes, and so on.

When extending a Volume Group to add alternate paths to the PVs, the order in which you add these paths is HP-UX’s order of preference if the primary path becomes unavailable. Therefore, when extending a Volume Group, the first alternate path that you add must be from the same SVC node as the primary path, to avoid unnecessary node failover due to an HBA, FC link, or FC switch failure.
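As a sketch (not a complete procedure), the ordering rules above translate into the sequence of vgcreate and vgextend calls. The device files and Volume Group name below are hypothetical, and which device file sits behind which SVC node depends entirely on your SAN layout:

```
vgcreate /dev/vg01 /dev/dsk/c4t0d1    # primary link: SVC node 1, HBA 1
vgextend /dev/vg01 /dev/dsk/c6t0d1    # first alternate: same SVC node, other HBA/switch
vgextend /dev/vg01 /dev/dsk/c5t0d1    # later alternates: SVC node 2
```

Adding the same-node path first means that an HBA or switch failure does not force an unnecessary SVC node failover.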

5.14.3 Co-existence of SDD and PV Links

If you want to multipath a VDisk with PVLinks while SDD is installed, you need to make sure that SDD does not configure a vpath for that VDisk. To do this, put the serial number of any VDisks that you want SDD to ignore in the /etc/vpathmanualexcl.cfg file. In the case of SAN boot, if you are booting from an SVC VDisk, when you install SDD (from Version 1.6 onward), SDD will automatically ignore the boot VDisk.


SAN boot support

SAN boot is supported on HP-UX by using PVLinks as the multipathing software on the boot device. You can use PVLinks or SDD to provide the multipathing support for the other devices that are attached to the system.

5.14.4 Using an SVC VDisk as a cluster lock disk

ServiceGuard does not provide a way to specify alternate links to a cluster lock disk. When using an SVC VDisk as your lock disk, if the path to FIRST_CLUSTER_LOCK_PV becomes unavailable, the HP node will not be able to access the lock disk if a 50-50 split in quorum occurs.

To ensure redundancy, when editing your Cluster Configuration ASCII file, make sure that the variable FIRST_CLUSTER_LOCK_PV has a separate path to the lock disk for each HP node in your cluster. For example, when configuring a two-node HP cluster, make sure that FIRST_CLUSTER_LOCK_PV on HP server A is on a separate SVC node and through a separate FC switch than the FIRST_CLUSTER_LOCK_PV on HP server B.

5.14.5 Support for HP-UX with greater than eight LUNs

HP-UX will not recognize more than eight LUNs per port using the generic SCSI behavior.

To accommodate this behavior, SVC supports a “type” associated with a host. This type can be set using the svctask mkhost command and modified using the svctask chhost command. The default type is generic; for HP-UX hosts, set the type to hpux.
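For example, a sketch of changing an existing host definition (the host name HP_Host is hypothetical):

```
IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux HP_Host
```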

When an initiator port, which is a member of a host of type HP-UX, accesses an SVC, the SVC will behave in the following way:

� Flat Space Addressing mode is used rather than the Peripheral Device Addressing Mode.

� When an inquiry command for any page is sent to LUN 0 using Peripheral Device Addressing, it is reported as Peripheral Device Type 0Ch (controller).

� When any command other than an inquiry is sent to LUN 0 using Peripheral Device Addressing, SVC will respond as an unmapped LUN 0 normally responds.

� When an inquiry is sent to LUN 0 using Flat Space Addressing, it is reported as Peripheral Device Type 00h (Direct Access Device) if a LUN is mapped at LUN 0 or 1Fh Unknown Device Type.

� When an inquiry is sent to an unmapped LUN that is not LUN 0 using Peripheral Device Addressing, the Peripheral qualifier returned is 001b and the Peripheral Device type is 1Fh (unknown or no device type). This response is in contrast to the behavior for generic hosts, where peripheral Device Type 00h is returned.

5.15 Using SDDDSM, SDDPCM, and SDD Web interface

After installing the SDDDSM or SDD driver, there are specific commands available. To open a command window for SDDDSM or SDD, from the desktop, select Start → Programs → Subsystem Device Driver → Subsystem Device Driver Management.

The command documentation for the various operating systems is available in the Multipath Subsystem Device Driver User Guides:

http://www-1.ibm.com/support/docview.wss?rs=540&context=ST52G7&dc=DA400&uid=ssg1S7000303&loc=en_US&cs=utf-8&lang=en


It is also possible to configure the multipath driver so that it offers a Web interface to run the commands. Before this configuration can work, we need to configure the Web interface. Sddsrv does not bind to any TCP/IP port by default, but it allows port binding to be dynamically enabled or disabled.

For all platforms except Linux, the multipath driver package ships an sddsrv.conf template file named the sample_sddsrv.conf file. On all UNIX platforms except Linux, the sample_sddsrv.conf file is located in the /etc directory. On Windows platforms, the sample_sddsrv.conf file is in the directory where SDD is installed.

Create the sddsrv.conf file by copying the sample_sddsrv.conf file into the same directory and naming the copy sddsrv.conf. You can then dynamically change port binding by modifying the parameters in the sddsrv.conf file and changing the values of Enableport and Loopbackbind to True.
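The copy-and-edit procedure can be sketched in a shell. The parameter names come from the text above, but the exact layout of sample_sddsrv.conf is an assumption, so the miniature file below is only illustrative:

```shell
cd "$(mktemp -d)"
# Miniature stand-in for the shipped template file:
cat > sample_sddsrv.conf <<'EOF'
Enableport = False
Loopbackbind = False
EOF
cp sample_sddsrv.conf sddsrv.conf      # must live in the same directory as the template
sed -i -e 's/^Enableport = .*/Enableport = True/' \
       -e 's/^Loopbackbind = .*/Loopbackbind = True/' sddsrv.conf
cat sddsrv.conf
```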

Figure 5-51 shows the start window of the multipath driver Web interface.

Figure 5-51 SDD Web interface

5.16 Calculating the queue depth

The queue depth is the number of I/O operations that can be run in parallel on a device. It is usually possible to set a limit on the queue depth on the SDD paths (or equivalent) or the HBA. Ensure that you configure the servers to limit the queue depth on all of the paths to the SAN Volume Controller disks in configurations that contain a large number of servers or VDisks.

You might have a number of servers in the configuration that are idle, or do not initiate the calculated quantity of I/O operations. If so, you might not need to limit the queue depth.
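The SVC documentation gives a homogeneous queue depth formula of roughly q = (n × 7000) / (v × p × c), where n is the number of nodes in the cluster, v the number of VDisks, p the paths per VDisk per host, and c the number of hosts. Treat the formula and the sample numbers below as an illustration to verify against your own documentation level:

```shell
n=4    # SVC nodes in the cluster
v=32   # VDisks mapped to hosts
p=4    # paths per VDisk per host
c=8    # hosts
echo $(( (n * 7000) / (v * p * c) ))   # suggested per-device queue depth limit
```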


5.17 Further sources of information

For more information about host attachment and configuration to the SVC, refer to the IBM System Storage SAN Volume Controller: Host Attachment Guide, SC26-7563.

For more information about SDDDSM or SDD configuration, refer to the IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096.

When looking for information about certain storage subsystems, this link is usually helpful:

http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp

5.17.1 Publications containing SVC storage subsystem attachment guidelines

It is beyond the intended scope of this book to describe the attachment to each and every subsystem that the SVC supports. Here is a short list of what we found especially useful in the writing of this book, and in the field:

� SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, describes in detail how you can tune your back-end storage to maximize your performance on the SVC:

http://www.redbooks.ibm.com/redbooks/pdfs/sg247521.pdf

� Chapter 14 in DS8000 Performance Monitoring and Tuning, SG24-7146, describes the guidelines and procedures to make the most of the performance that is available from your DS8000 storage subsystem when attached to the IBM SAN Volume Controller:

http://www.redbooks.ibm.com/redbooks/pdfs/sg247146.pdf

� DS4000 Best Practices and Performance Tuning Guide, SG24-6363, explains how to connect and configure your storage for optimized performance on the SVC:

http://www.redbooks.ibm.com/redbooks/pdfs/sg246363.pdf

� IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659, discusses specific considerations for attaching the XIV Storage System to a SAN Volume Controller:

http://www.redbooks.ibm.com/redpieces/pdfs/sg247659.pdf


Chapter 6. Advanced Copy Services

In this chapter, we describe the IBM System Storage SAN Volume Controller (SVC) Advanced Copy Services: FlashCopy, Metro Mirror, and Global Mirror.

In Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339, we describe how to use the command-line interface and Advanced Copy Services.

In Chapter 8, “SAN Volume Controller operations using the GUI” on page 469, we describe how to use the GUI and Advanced Copy Services.


6.1 FlashCopy

The FlashCopy function of the IBM System Storage SAN Volume Controller (SVC) provides the capability to perform a point-in-time copy of one or more virtual disks (VDisks).

In the topics that follow, we describe how FlashCopy works on the SVC, and we present examples of configuring and utilizing FlashCopy.

FlashCopy is also known as point-in-time copy. You can use the FlashCopy technique to help solve the challenge of making a consistent copy of a data set that is constantly being updated. The FlashCopy source is frozen for a few seconds or less during the point-in-time copy process. It can accept I/O again as soon as the point-in-time copy bitmap is set up and the FlashCopy function is ready to intercept read/write requests in the I/O path. Although the background copy operation takes time, the resulting data at the target appears as though the copy were made instantaneously.

SVC’s FlashCopy service provides the capability to perform a point-in-time copy of one or more VDisks. Because the copy is performed at the block level, it operates underneath the operating system and application caches. The image that is presented is “crash-consistent”: that is to say, it is similar to an image that is seen in a crash event, such as an unexpected power failure.
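FlashCopy produces the point-in-time image with a copy-on-write scheme: before a grain of the source is overwritten, the time-zero data is preserved for the target, and reads of unchanged target grains pass through to the source. A toy model for a single grain, with invented names and values:

```shell
src=B        # time-zero contents of one grain on the source VDisk
tgt=""       # target grain: nothing copied yet
copied=0

# Host writes X to the source grain: FlashCopy first preserves the old data.
if [ "$copied" -eq 0 ]; then
    tgt=$src
    copied=1
fi
src=X

# A read of the target grain now returns the time-zero contents:
echo "target reads $tgt; source is now $src"
```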

6.1.1 Business requirement

The business applications for FlashCopy are many and various. An important use is facilitating consistent backups of constantly changing data, and, in these instances, a FlashCopy is created to capture a point-in-time copy. The resulting image can be backed up to tertiary storage, such as tape. After the copied data is on tape, the FlashCopy target is redundant.

Various tasks can benefit from the use of FlashCopy. In the following sections, we describe the most common situations.

6.1.2 Moving and migrating data

When you need to move a consistent data set from one host to another host, FlashCopy can facilitate this action with a minimum of downtime for the host application that is dependent on the source VDisk.

It might be beneficial to quiesce the application on the host and flush the application and OS buffers so that the new VDisk contains data that is “clean” to the application. Without this step, the newly created VDisk is still usable by the application, but recovery procedures (such as log replay) are required before it can be used. Quiescing the application ensures that the startup time against the mirrored copy is minimized.

The cache on the SVC is also flushed, using the FlashCopy prestartfcmap command, prior to performing the FlashCopy; see “Preparing” on page 275.

The data set that has been created on the FlashCopy target is immediately available, as is the source VDisk.


6.1.3 Backup

FlashCopy does not affect your backup time, but it allows you to create a point-in-time consistent data set (across VDisks), with a minimum of downtime for your source host. The FlashCopy target can then be mounted on another host (or the backup server) and backed up. Using this procedure, the backup speed becomes less important, because the backup time does not require downtime for the host that is dependent on the source VDisks.

6.1.4 Restore

You can keep periodically created FlashCopy targets online to provide extremely fast restore of specific files from the point-in-time consistent data set revealed on the FlashCopy targets. You simply copy the specific files to the source VDisk in case a restore is needed.

6.1.5 Application testing

You can test new applications and new operating system releases against a FlashCopy of your production data. The risk of data corruption is eliminated, and your application does not need to be taken offline for an extended period of time to perform the copy of the data.

Data mining is a good example of an area where FlashCopy can help you. Data mining can now extract data without affecting your application.

6.1.6 SVC FlashCopy features

The FlashCopy function in SVC supports these features:

� The target is the time-zero copy of the source (known as FlashCopy mapping targets).

� The source VDisk and target VDisk are available (almost) immediately.

� One source VDisk can have up to 256 target VDisks at the same or various points in time.

� Consistency groups are supported to enable FlashCopy across multiple VDisks.

� The target VDisk can be updated independently of the source VDisk.

� Bitmaps governing I/O redirection (I/O indirection layer) are maintained in both nodes of the SVC I/O Group to prevent a single point of failure.

� FlashCopy mapping can be automatically withdrawn after the completion of background copy.

� FlashCopy consistency groups can be automatically withdrawn after the completion of background copy.

� Multiple Target FlashCopy: FlashCopy now supports up to 256 target copies from a single source VDisk.

� Space-Efficient FlashCopy: Space-Efficient FlashCopy uses disk space only for changes between source and target data and not for the entire capacity of a VDisk copy.

� FlashCopy licensing: FlashCopy was previously licensed by both source and target virtual capacity. It is now licensed only by source virtual capacity.

� Incremental FlashCopy: A mapping created with the “incremental” flag copies only the data that has been changed on the source or the target since the previous copy completed. This incremental FlashCopy can substantially reduce the time that is required to recreate an independent image.

Chapter 6. Advanced Copy Services 257


� Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete.

� Cascaded FlashCopy: The target VDisk of a FlashCopy mapping can be the source VDisk in a future FlashCopy mapping.

6.2 Reverse FlashCopy

With SVC Version 5.1.x, Reverse FlashCopy support is available. Reverse FlashCopy enables FlashCopy targets to become restore points for the source without breaking the FlashCopy relationship and without having to wait for the original copy operation to complete. It supports multiple targets and thus multiple rollback points.

A key advantage of the SVC Multiple Target Reverse FlashCopy function is that the reverse FlashCopy does not destroy the original target. Thus, any process using the target, such as a tape backup process, is not disrupted, and multiple recovery points can be tested.

SVC is also unique in that an optional copy of the source VDisk can be made before starting the reverse copy operation in order to diagnose problems.

When a user suffers a disaster and needs to restore from an on-disk backup, the user follows this procedure:

1. (Optional) Create a new target VDisk (VDisk Z) and FlashCopy the production VDisk (VDisk X) onto the new target for later problem analysis.

2. Create a new FlashCopy map with the backup to be restored (VDisk Y) or (VDisk W) as the source VDisk and VDisk X as the target VDisk, if this map does not already exist.

3. Start the FlashCopy map (VDisk Y → VDisk X) with the new -restore option to copy the backup data onto the production disk.

4. The production disk is instantly available with the backup data.

Figure 6-1 on page 259 shows an example of Reverse FlashCopy.


Figure 6-1 Reverse FlashCopy

Regardless of whether the initial FlashCopy map (VDisk X → VDisk Y) is incremental, the reverse operation copies only the modified data.

Consistency groups are reversed by creating a set of new “reverse” FlashCopy maps and adding them to a new “reverse” consistency group. A consistency group cannot contain more than one FlashCopy map with the same target VDisk.

6.2.1 FlashCopy and Tivoli Storage Manager

The management of many large Reverse FlashCopy consistency groups is a complex task without a tool for assistance.

IBM Tivoli FlashCopy Manager V2.1 is a new product that improves the interlock between the SVC and Tivoli Storage Manager for Advanced Copy Services.

Figure 6-2 on page 260 shows the Tivoli Storage Manager for Advanced Copy Services features.


Figure 6-2 Tivoli Storage Manager for Advanced Copy Services features

Tivoli FlashCopy Manager provides many of the features of Tivoli Storage Manager for Advanced Copy Services without the requirement to use Tivoli Storage Manager. With Tivoli FlashCopy Manager, you can coordinate and automate host preparation steps before issuing FlashCopy start commands to ensure that a consistent backup of the application is made. You can put databases into hot backup mode and flush the file system cache before starting FlashCopy.

FlashCopy Manager also allows for easier management of on-disk backups using FlashCopy and provides a simple interface to the “reverse” operation.

Figure 6-3 on page 261 shows the FlashCopy Manager feature.


Figure 6-3 Tivoli Storage Manager FlashCopy Manager features

It is beyond the intended scope of this book to describe Tivoli Storage Manager FlashCopy Manager.

6.3 How FlashCopy works

FlashCopy works by defining a FlashCopy mapping that consists of one source VDisk together with one target VDisk. You can define multiple FlashCopy mappings, and point-in-time consistency can be observed across multiple FlashCopy mappings using consistency groups. See “Consistency group with Multiple Target FlashCopy” on page 265.

When FlashCopy is started, it makes a copy of a source VDisk to a target VDisk, and the original contents of the target VDisk are overwritten. When the FlashCopy operation is started, the target VDisk presents the contents of the source VDisk as they existed at the single point-in-time of FlashCopy starting. This operation is also referred to as a time-zero copy (T0).

When a FlashCopy is started, the source and target VDisks are instantaneously available. When FlashCopy starts, bitmaps are created to govern and redirect I/O to the source or target VDisk, depending on where the requested block is located, while the blocks are copied in the background from the source VDisk to the target VDisk.

For more details about background copy, see 6.4.5, “Grains and the FlashCopy bitmap” on page 266.

Figure 6-4 on page 262 illustrates the redirection of the host I/O toward the source VDisk and the target VDisk.


Figure 6-4 Redirection of host I/O

6.4 Implementing SVC FlashCopy

In the topics that follow, we describe how FlashCopy is implemented in the SVC.

6.4.1 FlashCopy mappings

In the SVC, FlashCopy occurs between a source VDisk and a target VDisk. The source and target VDisks must be the same size. The minimum granularity that SVC supports for FlashCopy is an entire VDisk; it is not possible to use FlashCopy to copy only part of a VDisk.

The source and target VDisks must both belong to the same SVC cluster, but they can be in separate I/O Groups within that cluster. SVC FlashCopy associates a source VDisk to a target VDisk in a FlashCopy mapping.

VDisks that are members of a FlashCopy mapping cannot have their size increased or decreased while they remain members of the mapping. The SVC supports the creation of enough FlashCopy mappings to allow every VDisk to be a member of a FlashCopy mapping.

A FlashCopy mapping is the relationship that is created between a source VDisk and a target VDisk. FlashCopy mappings can be either stand-alone or a member of a consistency group. You can perform the prepare, start, and stop actions on either a stand-alone mapping or a consistency group.

Figure 6-5 on page 263 illustrates the concept of FlashCopy mapping.

Rule: After a mapping is in a consistency group, you can only operate on the group, and you can no longer prepare, start, or stop the individual mapping.


Figure 6-5 FlashCopy mapping

6.4.2 Multiple Target FlashCopy

SVC supports copying up to 256 target VDisks from a single source VDisk. Each copy is managed by a unique mapping. In general, each mapping acts independently and is not affected by other mappings sharing the same source VDisk. Figure 6-6 illustrates a Multiple Target FlashCopy implementation.

Figure 6-6 Multiple Target FlashCopy implementation

Figure 6-6 shows four targets and mappings taken from a single source. It also shows that there is an ordering to the targets: Target 1 is the oldest (as measured from the time it was started) through to Target 4, which is the newest. The ordering is important because of the way in which data is copied when multiple target VDisks are defined and because of the dependency chain that results. A write to the source VDisk does not cause its data to be copied to all of the targets; instead, it is copied to the newest target VDisk only (Target 4 in Figure 6-6). The older targets will refer to new targets first before referring to the source.

From the point of view of an intermediate target disk (neither the oldest nor the newest), it treats the set of newer target VDisks and the true source VDisk as a type of composite source.

It treats all older VDisks as a kind of target (and behaves like a source to them). If the mapping for an intermediate target VDisk shows 100% progress, its target VDisk contains a complete set of data. In this case, mappings treat the set of newer target VDisks, up to and including the 100% progress target, as a form of composite source. A dependency relationship exists between a particular target and all newer targets (up to and including a target that shows 100% progress) that share the same source until all data has been copied to this target and all older targets.

You can read more information about Multiple Target FlashCopy in 6.4.6, “Interaction and dependency between Multiple Target FlashCopy mappings” on page 267.


6.4.3 Consistency groups

Consistency groups address the problem of preserving data consistency across multiple VDisks when applications have related data that spans those VDisks. A requirement for preserving the integrity of data being written is to ensure that “dependent writes” are executed in the application’s intended sequence. Because the SVC provides point-in-time semantics, a self-consistent data set is obtained.

FlashCopy mappings can be members of a consistency group, or they can be operated in a stand-alone manner, not as part of a consistency group.

FlashCopy commands can be issued to a FlashCopy consistency group, which affects all FlashCopy mappings in the consistency group, or to a single FlashCopy mapping if it is not part of a defined FlashCopy consistency group.

Figure 6-7 illustrates a consistency group consisting of two FlashCopy mappings.

Figure 6-7 FlashCopy consistency group

Dependent writes

To illustrate why it is crucial to use consistency groups when a data set spans multiple VDisks, consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be performed.

2. A second write is executed to update the database.

3. A third write is executed to update the database log, indicating that the database update has completed successfully.

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step. However, if the database log (updates 1 and 3) and the database itself (update 2) are on separate VDisks and a FlashCopy mapping is started during this update, you must exclude the possibility that the database itself is copied slightly before the database log. If that happens, the target VDisks see writes (1) and (3) but not (2), because the database was copied before the write completed.

In this case, if the database was restarted using the backup that was made from the FlashCopy target disks, the database log indicates that the transaction had completed successfully when, in fact, that is not the case, because the FlashCopy of the VDisk with the database file was started (bitmap was created) before the write was on the disk. Therefore, the transaction is lost, and the integrity of the database is in question.
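The hazard described above can be sketched in a few lines of Python. This is a minimal illustration of the ordering problem, not SVC code; all names and values are invented:

```python
# A minimal sketch (all names and values invented) of the dependent-write
# hazard described above: three ordered writes, with writes 1 and 3 going to
# the log volume and write 2 to the database volume.

writes = [
    ("log", "txn-1 pending"),    # write 1: update pending
    ("db",  "txn-1 new value"),  # write 2: the database update itself
    ("log", "txn-1 committed"),  # write 3: update completed successfully
]

def replay(upto):
    """State of both volumes after the first `upto` writes have landed."""
    state = {"log": [], "db": []}
    for vol, data in writes[:upto]:
        state[vol].append(data)
    return state

# Non-atomic copy: the db volume is captured after write 1, but the log
# volume is captured after write 3. The log claims the transaction committed,
# yet the data is missing from the database copy.
bad_snapshot = {"db": replay(1)["db"], "log": replay(3)["log"]}
assert "txn-1 committed" in bad_snapshot["log"] and bad_snapshot["db"] == []

# Atomic copy (what a consistency group provides): both volumes are captured
# at the same instant, so the log and the database always agree.
good_snapshot = replay(1)
assert "txn-1 committed" not in good_snapshot["log"]
```

The atomic capture is exactly what starting all mappings of a consistency group at the same instant provides.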


To overcome the issue of dependent writes across VDisks and to create a consistent image of the client data, it is necessary to perform a FlashCopy operation on multiple VDisks as an atomic operation. To achieve this condition, the SVC supports the concept of consistency groups.

A FlashCopy consistency group can contain up to 512 FlashCopy mappings (up to the maximum number of FlashCopy mappings supported by the SVC cluster). FlashCopy commands can then be issued to the FlashCopy consistency group and thereby simultaneously for all of the FlashCopy mappings that are defined in the consistency group. For example, when issuing a FlashCopy start command to the consistency group, all of the FlashCopy mappings in the consistency group are started at the same time, resulting in a point-in-time copy that is consistent across all of the FlashCopy mappings that are contained in the consistency group.

Consistency group with Multiple Target FlashCopy

It is important to note that a consistency group aggregates FlashCopy mappings, not VDisks. Thus, where a source VDisk has multiple FlashCopy mappings, they can be in the same or separate consistency groups. If a particular VDisk is the source VDisk for multiple FlashCopy mappings, you might want to create separate consistency groups to separate each mapping of the same source VDisk. If the source VDisk with multiple target VDisks is in the same consistency group, the result is that when the consistency group is started, multiple identical copies of the VDisk are created. However, this result might be what the user wants. For example, the user might want to run multiple simulations on the same set of source data. If so, this approach is one way of obtaining identical sets of source data.

Maximum configurations

Table 6-1 shows the FlashCopy properties and maximum configurations.

Table 6-1 FlashCopy properties and maximum configuration

FlashCopy targets per source: 256. This maximum is the number of FlashCopy mappings that can exist with the same source VDisk.

FlashCopy mappings per cluster: 4,096. The number of mappings is no longer limited by the number of VDisks in the cluster, so the FlashCopy component limit applies.

FlashCopy consistency groups per cluster: 127. This maximum is an arbitrary limit that is policed by the software.

FlashCopy VDisk capacity per I/O Group: 1,024 TB. This maximum is a limit on the quantity of FlashCopy mappings using bitmap space from this I/O Group. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no Metro and Global Mirror bitmap space. The default is 40 TB.

FlashCopy mappings per consistency group: 512. This limit is due to the time that is taken to prepare a consistency group with a large number of mappings.


6.4.4 FlashCopy indirection layer

The FlashCopy indirection layer governs the I/O to both the source and target VDisks when a FlashCopy mapping is started, which is done using a FlashCopy bitmap. The purpose of the FlashCopy indirection layer is to enable both the source and target VDisks for read and write I/O immediately after the FlashCopy has been started.

To illustrate how the FlashCopy indirection layer works, we look at what happens when a FlashCopy mapping is prepared and subsequently started.

When a FlashCopy mapping is prepared and started, the following sequence is applied:

1. Flush the write data in the cache onto the source VDisk or VDisks that are part of a consistency group.

2. Put cache into write-through on the source VDisks.

3. Discard cache for the target VDisks.

4. Establish a sync point on all of the source VDisks in the consistency group (creating the FlashCopy bitmap).

5. Ensure that the indirection layer governs all of the I/O to the source VDisks and target VDisks.

6. Enable cache on both the source VDisks and target VDisks.

FlashCopy provides the semantics of a point-in-time copy using the indirection layer, which intercepts I/Os that are targeted at either the source or target VDisks. The act of starting a FlashCopy mapping causes this indirection layer to become active in the I/O path, which occurs as an atomic command across all FlashCopy mappings in the consistency group. The indirection layer makes a decision about each I/O based on these factors:

� The VDisk and the logical block address (LBA) to which the I/O is addressed
� Its direction (read or write)
� The state of an internal data structure, the FlashCopy bitmap

The indirection layer either allows the I/O to go through the underlying storage, redirects the I/O from the target VDisk to the source VDisk, or stalls the I/O while it arranges for data to be copied from the source VDisk to the target VDisk. To explain in more detail which action is applied for each I/O, we first look at the FlashCopy bitmap.

6.4.5 Grains and the FlashCopy bitmap

When data is copied between VDisks by FlashCopy, either from source to target or from target to target, it is copied in units of address space known as grains. The grain size is 256 KB or 64 KB. The FlashCopy bitmap contains one bit for each grain. The bit records whether the associated grain has yet been split by copying the grain from the source to the target.
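With one bit per grain, the bitmap size for a given VDisk capacity is easy to estimate. The helper below is our own back-of-envelope sketch, not an SVC tool, and it ignores any internal rounding the SVC might apply:

```python
# One bit per grain makes bitmap sizing easy to estimate. This helper is ours
# (not an SVC tool) and ignores any internal rounding the SVC might apply.

def bitmap_bytes(vdisk_bytes, grain_bytes=256 * 1024):
    grains = -(-vdisk_bytes // grain_bytes)   # ceiling division
    return -(-grains // 8)                    # one bit per grain, packed 8/byte

one_tib = 1 << 40
print(bitmap_bytes(one_tib))              # 524288 bytes (512 KiB) at 256 KB grains
print(bitmap_bytes(one_tib, 64 * 1024))   # 2097152 bytes (2 MiB) at 64 KB grains
```

The smaller 64 KB grain costs four times the bitmap space of the 256 KB grain for the same capacity.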

Source reads

Reads of the source are always passed through to the underlying source disk.

Target reads

In order for FlashCopy to process a read from the target disk, FlashCopy must consult its bitmap. If the data being read has already been copied to the target, the read is sent to the target disk. If it has not, the read is sent to the source VDisk or possibly to another target VDisk if multiple FlashCopy mappings exist for the source VDisk. Clearly, this algorithm


requires that while this read is outstanding, no writes are allowed to execute that change the data being read. The SVC satisfies this requirement by using a cluster-wide locking scheme.

Writes to the source or target

When a write occurs to a source or target area (grain) that has not yet been copied, the write is usually stalled while a copy operation is performed to copy data from the source to the target, to maintain the illusion that the target contains its own copy. A specific optimization is performed when an entire grain is written to the target VDisk. In this case, the new grain contents are written to the target VDisk. If this write succeeds, the grain is marked as split in the FlashCopy bitmap without a copy from the source to the target having been performed. If the write fails, the grain is not marked as split.
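The behavior just described, copy-on-write for source writes, the full-grain optimization for target writes, and bitmap-driven target reads, can be modeled in a few lines. This is an illustrative sketch, not SVC code; the class, grain size, and block layout are invented:

```python
GRAIN = 4  # toy grain size, in blocks

class FlashCopyMap:
    """Toy model of a single source->target FlashCopy mapping (ours, not SVC)."""
    def __init__(self, source):
        self.source = source
        self.target = [None] * len(source)
        self.split = [False] * (len(source) // GRAIN)  # one bit per grain

    def _copy_grain(self, g):
        s = slice(g * GRAIN, (g + 1) * GRAIN)
        self.target[s] = self.source[s]
        self.split[g] = True

    def write_source(self, block, value):
        g = block // GRAIN
        if not self.split[g]:
            self._copy_grain(g)   # preserve the time-zero data on the target
        self.source[block] = value

    def write_target_grain(self, g, values):
        # Full-grain target write: mark split without copying from the source.
        assert len(values) == GRAIN
        self.target[g * GRAIN:(g + 1) * GRAIN] = values
        self.split[g] = True

    def read_target(self, block):
        g = block // GRAIN
        return self.target[block] if self.split[g] else self.source[block]

fc = FlashCopyMap(source=list(range(8)))
fc.write_source(0, 99)      # grain 0 is copied to the target first
print(fc.read_target(0))    # 0: the time-zero value is preserved
print(fc.read_target(5))    # 5: unsplit grain, read through to the source
```

Note how the full-grain target write never touches the source, which is exactly the optimization the text describes.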

The rate at which the grains are copied across from the source VDisk to the target VDisk is called the copy rate. By default, the copy rate is 50, although you can alter this rate. For more information about copy rates, see 6.4.13, “Space-efficient FlashCopy” on page 276.

The FlashCopy indirection layer algorithm

Imagine the FlashCopy indirection layer as the I/O traffic cop when a FlashCopy mapping is active. The I/O is intercepted and handled according to whether it is directed at the source VDisk or at the target VDisk, depending on the nature of the I/O (read or write) and the state of the grain (whether it has been copied).

In Figure 6-8, we illustrate how the background copy runs while I/Os are handled according to the indirection layer algorithm.

Figure 6-8 I/O processing with FlashCopy

6.4.6 Interaction and dependency between Multiple Target FlashCopy mappings

Figure 6-9 on page 268 represents a set of four FlashCopy mappings that share a common source. The FlashCopy mappings will target VDisks Target 0, Target 1, Target 2, and Target 3.


Figure 6-9 Interactions between MTFC mappings

Target 0 is not dependent on a source, because it has completed copying. Target 0 has two dependent mappings (Target 1 and Target 2).

Target 1 is dependent upon Target 0. It will remain dependent until all of Target 1 has been copied. Target 2 is dependent on it, because Target 2 is 20% copy complete. After all of Target 1 has been copied, it can then move to the idle_copied state.

Target 2 is dependent upon Target 0 and Target 1 and will remain dependent until all of Target 2 has been copied. No target is dependent on Target 2, so when all of the data has been copied to Target 2, it can move to the Idle_copied state.

Target 3 has actually completed copying, so it is not dependent on any other maps.

Write to target VDisk

A write to an intermediate or newest target VDisk must consider the state of the grain within its own mapping, as well as that of the grain of the next oldest mapping:

� If the grain of the next oldest mapping has not yet been copied, it must be copied before the write is allowed to proceed in order to preserve the contents of the next oldest mapping. The data written to the next oldest mapping comes from a target or source.

� If the grain in the target being written has not yet been copied, the grain is copied from the oldest already copied grain in the mappings that are newer than it, or the source if none are already copied. After this copy has been done, the write can be applied to the target.

Read to target VDisk

If the grain being read has been split, the read simply returns data from the target being read. If the read is to an uncopied grain on an intermediate target VDisk, each of the newer mappings is examined in turn to see whether the grain has been split. The read is surfaced from the first split grain found or from the source VDisk if none of the newer mappings has a split grain.
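This read-resolution rule can be sketched as a small lookup function. The model below is ours (not SVC code); targets are ordered newest first, and each carries its own split bitmap:

```python
# A toy model (ours, not SVC code) of the target-read rule described above:
# an uncopied grain on an intermediate target is served by the oldest newer
# target that holds a split copy of that grain, falling back to the source.

def read_grain(grain, idx, targets, source):
    """targets[0] is the newest mapping; each entry is (split_bits, data)."""
    split, data = targets[idx]
    if split[grain]:                        # grain already copied to this target
        return data[grain]
    for newer_split, newer_data in reversed(targets[:idx]):
        if newer_split[grain]:              # first split grain among newer maps
            return newer_data[grain]
    return source[grain]                    # no newer map has it: use the source

source = ["S0", "S1"]
t_new = ([True, False], ["N0", None])    # newest target; grain 0 already split
t_mid = ([False, False], [None, None])   # intermediate target; nothing copied
t_old = ([False, True], [None, "O1"])    # oldest target; grain 1 already split
targets = [t_new, t_mid, t_old]          # ordered newest to oldest

print(read_grain(0, 2, targets, source))  # N0: surfaced from the newest target
print(read_grain(1, 1, targets, source))  # S1: no newer split grain, so source
print(read_grain(1, 2, targets, source))  # O1: split in this target's own bitmap
```

Scanning `reversed(targets[:idx])` visits the oldest of the newer targets first, matching the “read from the oldest of these targets” rule in Table 6-2.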


Stopping the copy process

An important scenario arises when a stop command is delivered to a mapping for a target that has dependent mappings.

After a mapping is in the Stopped state, it can be deleted or restarted, which must not be allowed if there are still grains that hold data upon which other mappings depend. To avoid this situation, when a mapping receives a stopfcmap or stopfcconsistgrp command, rather than immediately moving to the Stopped state, it enters the Stopping state. An automatic copy process is driven that will find and copy all of the data that is uniquely held on the target VDisk of the mapping that is being stopped, to the next oldest mapping that is in the Copying state.

For example, if the mapping associated with Target 0 was issued a stopfcmap or stopfcconsistgrp command, Target 0 enters the Stopping state while a process copies the data of Target 0 to Target 1. After all of the data has been copied, Target 0 enters the Stopped state, and Target 1 is no longer dependent upon Target 0, but Target 1 remains dependent on Target 2.

6.4.7 Summary of the FlashCopy indirection layer algorithm

Table 6-2 summarizes the indirection layer algorithm.

Table 6-2 Summary table of the FlashCopy indirection layer algorithm

VDisk being accessed: Source; grain not yet split (copied):
  Read: Read from the source VDisk.
  Write: Copy the grain to the most recently started target for this source, then write to the source.

VDisk being accessed: Source; grain split:
  Read: Read from the source VDisk.
  Write: Write to the source VDisk.

VDisk being accessed: Target; grain not yet split:
  Read: If any newer targets exist for this source in which this grain has already been copied, read from the oldest of these targets. Otherwise, read from the source.
  Write: Hold the write. Check the dependency target VDisks to see whether the grain is split. If the grain is not already copied to the next oldest target for this source, copy the grain to the next oldest target. Then, write to the target.

VDisk being accessed: Target; grain split:
  Read: Read from the target VDisk.
  Write: Write to the target VDisk.

Stopping the copy process: The stopping copy process can be ongoing for several mappings sharing the same source at the same time. At the completion of this process, the mapping automatically makes an asynchronous state transition to the Stopped state, or to the idle_copied state if the mapping was in the Copying state with progress = 100%.

6.4.8 Interaction with the cache

This copy-on-write process can introduce significant latency into write operations. In order to isolate the active application from this latency, the FlashCopy indirection layer is logically placed beneath the cache.


Therefore, the copy latency is typically incurred only when data is destaged from the cache, rather than on write operations from an application, which might otherwise be blocked waiting for the copy operation to complete.

In Figure 6-10, we illustrate the logical placement of the FlashCopy indirection layer.

Figure 6-10 Logical placement of the FlashCopy indirection layer

6.4.9 FlashCopy rules

With SVC 5.1, the maximum number of supported FlashCopy mappings has been increased to 8,192 per SVC cluster. Consider the following rules when defining FlashCopy mappings:

� There is a one-to-one mapping of the source VDisk to the target VDisk.

� One source VDisk can have up to 256 target VDisks.

� The source VDisks and target VDisks can be in separate I/O Groups of the same cluster.

� The minimum FlashCopy granularity is the entire VDisk.

� The source and target must be exactly equal in size.

� The size of the source VDisk and the target VDisk cannot be altered (increased or decreased) after the FlashCopy mapping is created.

� There is a per I/O Group limit of 1,024 TB on the quantity of the source VDisk and target VDisk capacity that can participate in FlashCopy mappings.

6.4.10 FlashCopy and image mode disks

You can use FlashCopy with an image mode VDisk. Because the source and target VDisks must be exactly the same size when creating a FlashCopy mapping, you must create a VDisk with the exact same size as the image mode VDisk. To accomplish this task, use the svcinfo lsvdisk -bytes VDiskName command. The size in bytes is then used to create the VDisk to use in the FlashCopy mapping.

In Example 6-1 on page 271, we list the size of the Image_VDisk_A VDisk. Subsequently, the VDisk_A_copy VDisk is created, specifying the same size.


Example 6-1 Listing the size of a VDisk in bytes and creating a VDisk of equal size

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk Image_VDisk_A
id 8
name Image_VDisk_A
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 2
mdisk_grp_name MDG_Image
capacity 36.0GB
type image
...
autoexpand
warning
grainsize

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -size 36 -unit gb -name VDisk_A_copy -mdiskgrp MDG_DS47 -vtype striped -iogrp 1
Virtual Disk, id [19], successfully created
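The byte-exact sizing step can also be scripted. The helper below is our own sketch: it assumes the field layout shown in Example 6-1 and assumes the mkvdisk -unit b option to request an exact byte size; verify both against your SVC code level before relying on it:

```python
# Hypothetical helper (ours): extract the capacity line from
# `svcinfo lsvdisk -bytes` output and build a matching `svctask mkvdisk`
# command so that the target VDisk is byte-for-byte the same size.

def mkvdisk_command(lsvdisk_output, name, mdiskgrp, iogrp):
    for line in lsvdisk_output.splitlines():
        if line.startswith("capacity "):
            size_bytes = line.split()[1]
            return (f"svctask mkvdisk -size {size_bytes} -unit b "
                    f"-name {name} -mdiskgrp {mdiskgrp} "
                    f"-vtype striped -iogrp {iogrp}")
    raise ValueError("no capacity field found in lsvdisk output")

# 36 GB expressed in bytes, as `-bytes` output would show it (illustrative).
sample = "id 8\nname Image_VDisk_A\ncapacity 38654705664\ntype image"
print(mkvdisk_command(sample, "VDisk_A_copy", "MDG_DS47", 1))
```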

Tip: Alternatively, you can use the expandvdisksize and shrinkvdisksize VDisk commands to modify the size of the VDisk. See 7.4.10, “Expanding a VDisk” on page 367 and 7.4.16, “Shrinking a VDisk” on page 372 for more information.

You can use an image mode VDisk as either a FlashCopy source VDisk or target VDisk.

6.4.11 FlashCopy mapping events

In this section, we explain the series of events that modify the states of a FlashCopy mapping. The FlashCopy mapping state diagram in Figure 6-11 on page 272 shows an overview of the states that apply to a FlashCopy mapping. We describe the mapping events in Table 6-3 on page 272.

Overview of a FlashCopy sequence of events:

1. Associate the source data set with a target location (one or more source and target VDisks).

2. Create a FlashCopy mapping for each source VDisk to the corresponding target VDisk. The target VDisk must be equal in size to the source VDisk.

3. Discontinue access to the target (application dependent).

4. Prepare (pre-trigger) the FlashCopy:

a. Flush cache for the source.

b. Discard cache for the target.

5. Start (trigger) the FlashCopy:

a. Pause I/O (briefly) on the source.

b. Resume I/O on the source.

c. Start I/O on the target.
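The prepare/start sequence above, together with the mapping states and events that follow, can be condensed into a small state table. This is our own simplification for illustration; the real mapping has more states and error paths:

```python
# A simplified state table (ours, condensed from the events and states in
# the surrounding sections; the real mapping has more states and error paths).

TRANSITIONS = {
    ("idle_or_copied", "prepare"):       "preparing",
    ("preparing",      "flush_done"):    "prepared",
    ("preparing",      "flush_failed"):  "stopped",
    ("prepared",       "start"):         "copying",
    ("copying",        "copy_complete"): "idle_or_copied",
    ("copying",        "stop"):          "stopping",
    ("stopping",       "copy_complete"): "stopped",
    ("stopped",        "prepare"):       "preparing",
}

def advance(state, event):
    if (state, event) not in TRANSITIONS:
        raise ValueError(f"event {event!r} is not valid in state {state!r}")
    return TRANSITIONS[(state, event)]

state = "idle_or_copied"
for event in ("prepare", "flush_done", "start", "copy_complete"):
    state = advance(state, event)
print(state)  # back to idle_or_copied after a full background copy
```

Invalid sequences, such as starting a mapping that was never prepared, fall out of the table and raise an error, mirroring the fact that the SVC rejects such commands.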


Figure 6-11 FlashCopy mapping state diagram

Table 6-3 Mapping events

Mapping event Description

Create A new FlashCopy mapping is created between the specified source VDisk and the specified target VDisk. The operation fails if any of the following conditions is true:
� For SAN Volume Controller software Version 4.1.0 or earlier, the source or target VDisk is already a member of a FlashCopy mapping.
� For SAN Volume Controller software Version 4.2.0 or later, the source or target VDisk is already a target VDisk of a FlashCopy mapping.
� For SAN Volume Controller software Version 4.2.0 or later, the source VDisk is already a member of 16 FlashCopy mappings.
� For SAN Volume Controller software Version 4.3.0 or later, the source VDisk is already a member of 256 FlashCopy mappings.
� The node has insufficient bitmap memory.
� The source and target VDisk sizes differ.

Prepare The prestartfcmap or prestartfcconsistgrp command is directed to either a consistency group for FlashCopy mappings that are members of a normal consistency group or to the mapping name for FlashCopy mappings that are stand-alone mappings. The prestartfcmap or prestartfcconsistgrp command places the FlashCopy mapping into the Preparing state.

Important: The prestartfcmap or prestartfcconsistgrp command can corrupt any data that previously resided on the target VDisk because cached writes are discarded. Even if the FlashCopy mapping is never started, the data from the target might have logically changed during the act of preparing to start the FlashCopy mapping.


Flush done The FlashCopy mapping automatically moves from the Preparing state to the Prepared state after all cached data for the source is flushed and all cached data for the target is no longer valid.

Start When all of the FlashCopy mappings in a consistency group are in the Prepared state, the FlashCopy mappings can be started. To preserve the cross-volume consistency group, the start of all of the FlashCopy mappings in the consistency group must be synchronized correctly with respect to I/Os that are directed at the VDisks by using the startfcmap or startfcconsistgrp command. The following actions occur when the startfcmap or startfcconsistgrp command runs:
� New reads and writes to all source VDisks in the consistency group are paused in the cache layer until all ongoing reads and writes beneath the cache layer are completed.
� After all FlashCopy mappings in the consistency group are paused, the internal cluster state is set to allow FlashCopy operations.
� After the cluster state is set for all FlashCopy mappings in the consistency group, read and write operations continue on the source VDisks.
� The target VDisks are brought online.
As part of the startfcmap or startfcconsistgrp command, read and write caching is enabled for both the source and target VDisks.

Modify You can modify the following FlashCopy mapping properties:
� FlashCopy mapping name
� Clean rate
� Consistency group
� Copy rate (for background copy)
� Automatic deletion of the mapping when the background copy is complete

Stop There are two separate mechanisms by which a FlashCopy mapping can be stopped:
� You have issued a command.
� An I/O error has occurred.

Delete This command requests that the specified FlashCopy mapping be deleted. If the FlashCopy mapping is in the Stopped state, the force flag must be used.

Flush failed If the flush of data from the cache cannot be completed, the FlashCopy mapping enters the Stopped state.

Copy complete After all of the source data has been copied to the target and there are no dependent mappings, the state is set to Copied. If the option to automatically delete the mapping after the background copy completes is specified, the FlashCopy mapping is automatically deleted. If this option is not specified, the FlashCopy mapping is not automatically deleted and can be reactivated by preparing and starting again.

Bitmap online/offline The node has failed.

Mapping event Description

Chapter 6. Advanced Copy Services 273


6.4.12 FlashCopy mapping states
In this section, we explain the states of a FlashCopy mapping in more detail.

Idle_or_copied
Read and write caching is enabled for both the source and the target. A FlashCopy mapping exists between the source and target, but the source and the target behave as independent VDisks in this state.

Copying
The FlashCopy indirection layer governs all I/O to the source and target VDisks while the background copy is running.

Reads and writes are executed on the target as though the contents of the source were instantaneously copied to the target during the startfcmap or startfcconsistgrp command.

The source and target can be independently updated. Internally, the target depends on the source for certain tracks.

Read and write caching is enabled on the source and the target.

Stopped
The FlashCopy was stopped either by a user command or by an I/O error.

When a FlashCopy mapping is stopped, any useful data in the target VDisk is lost. Therefore, while the FlashCopy mapping is in this state, the target VDisk is in the Offline state. To regain access to the target, the mapping must be started again (the previous point-in-time will be lost) or the FlashCopy mapping must be deleted. The source VDisk is accessible, and read/write caching is enabled for the source. In the Stopped state, a mapping can be prepared again or it can be deleted.

Stopping
The mapping is in the process of transferring data to a dependent mapping. The behavior of the target VDisk depends on whether the background copy process had completed while the mapping was in the Copying state. If the copy process had completed, the target VDisk remains online while the stopping copy process completes. If the copy process had not completed, data in the cache is discarded for the target VDisk. The target VDisk is taken offline, and the stopping copy process runs. After the data has been copied, a stop complete asynchronous event notification is issued. The mapping moves to the Idle_or_copied state if the background copy has completed or to the Stopped state if the background copy has not completed.

The source VDisk remains accessible for I/O.

Suspended
The target has been “flashed” from the source and was in the Copying or Stopping state. Access to the metadata has been lost, and as a consequence, both the source and target VDisks are offline. The background copy process has been halted.

When the metadata becomes available again, the FlashCopy mapping will return to the Copying or Stopping state, the access to the source and target VDisks will be restored, and the background copy or stopping process will be resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache, consuming resources, until the FlashCopy mapping leaves the Suspended state.

274 Implementing the IBM System Storage SAN Volume Controller V5.1


Preparing
Because the FlashCopy function is placed logically beneath the cache to anticipate any write latency problem, it requires that no read or write data for the target, and no write data for the source, be in the cache at the time that the FlashCopy operation is started. This design ensures that the resulting copy is consistent.

Performing the necessary cache flush as part of the startfcmap or startfcconsistgrp command would unnecessarily delay the I/Os that are received after the command is executed, because these I/Os must wait for the cache flush to complete.

To overcome this problem, SVC FlashCopy supports the prestartfcmap or prestartfcconsistgrp command, which prepares for a FlashCopy start while still allowing I/Os to continue to the source VDisk.

In the Preparing state, the FlashCopy mapping is prepared by the following steps:

1. Flushing any modified write data associated with the source VDisk from the cache. Read data for the source will be left in the cache.

2. Placing the cache for the source VDisk into write-through mode, so that subsequent writes wait until data has been written to disk before completing the write command that is received from the host.

3. Discarding any read or write data that is associated with the target VDisk from the cache.

While in this state, writes to the source VDisk will experience additional latency, because the cache is operating in write-through mode.

While the FlashCopy mapping is in this state, the target VDisk is reported as online, but it will not perform reads or writes. These reads and writes are failed by the SCSI front end.

Before starting the FlashCopy mapping, it is important that any cache at the host level, for example, the buffers in the host OSs or applications, are also instructed to flush any outstanding writes to the source VDisk.
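The three Preparing steps can be sketched as a small model. This is purely illustrative: the cache here is a Python dictionary per VDisk, and the class and function names are invented for the example, not part of the SVC implementation.

```python
# Illustrative model of the three Preparing steps described above.
# The per-VDisk "cache" is a dict; this is not the real SVC cache.
class VDiskCache:
    def __init__(self):
        self.dirty = {}             # modified write data not yet on disk
        self.clean = {}             # cached read data
        self.write_through = False  # cache mode for subsequent writes
        self.disk = {}              # backing storage

    def flush(self):
        """Destage modified write data to disk; read data stays cached."""
        self.disk.update(self.dirty)
        self.dirty.clear()

def prepare(source, target):
    source.flush()                  # 1. flush modified source write data
    source.write_through = True     # 2. source cache to write-through mode
    target.dirty.clear()            # 3. discard any target cache contents
    target.clean.clear()

src, tgt = VDiskCache(), VDiskCache()
src.dirty[0] = b"app data"
tgt.clean[0] = b"stale"
prepare(src, tgt)
print(src.disk[0], src.write_through, tgt.clean)   # b'app data' True {}
```

The model shows why writes to the source pay extra latency in this state: with `write_through` set, every subsequent write must reach disk before it can be acknowledged.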

Prepared
When in the Prepared state, the FlashCopy mapping is ready to perform a start. While the FlashCopy mapping is in this state, the target VDisk is in the Offline state. In the Prepared state, writes to the source VDisk experience additional latency because the cache is operating in write-through mode.
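The state descriptions in this section can be captured as a small table-driven state machine. This is an illustrative sketch only; the state and event names are paraphrased from the text, not official SVC identifiers.

```python
# Sketch of the FlashCopy mapping state transitions described in the
# text.  State and event names are paraphrased, not SVC identifiers.
TRANSITIONS = {
    ("idle_or_copied", "prepare"):            "preparing",
    ("stopped",        "prepare"):            "preparing",
    ("preparing",      "flush_complete"):     "prepared",
    ("preparing",      "flush_failed"):       "stopped",
    ("prepared",       "start"):              "copying",
    ("copying",        "copy_complete"):      "idle_or_copied",
    ("copying",        "stop"):               "stopping",
    ("stopping",       "copy_was_complete"):  "idle_or_copied",
    ("stopping",       "copy_was_incomplete"): "stopped",
    ("copying",        "bitmap_offline"):     "suspended",
    ("suspended",      "bitmap_online"):      "copying",
}

def next_state(state, event):
    """Return the state that a FlashCopy mapping moves to for an event."""
    return TRANSITIONS[(state, event)]

# A full prepare/start/complete cycle:
state = "idle_or_copied"
for event in ("prepare", "flush_complete", "start", "copy_complete"):
    state = next_state(state, event)
print(state)   # idle_or_copied
```

Walking the table this way makes the two paths out of Stopping visible: back to Idle_or_copied when the background copy had completed, or to Stopped when it had not.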

Summary of FlashCopy mapping states
Table 6-4 on page 276 lists the various FlashCopy mapping states and the corresponding states of the source and target VDisks.


Table 6-4 FlashCopy mapping state summary

6.4.13 Space-efficient FlashCopy

You can have a mix of space-efficient and fully allocated VDisks in FlashCopy mappings. One common combination is a fully allocated source with a space-efficient target, which allows the target to consume a smaller amount of real storage than the source.

For the best performance, the grain size of the Space-Efficient VDisk must match the grain size of the FlashCopy mapping. However, if the grain sizes differ, the mapping still proceeds. Consider the following information when you create your FlashCopy mappings:

- If you are using a fully allocated source with a space-efficient target, disable the background copy and cleaning mode on the FlashCopy map by setting both the background copy rate and cleaning rate to zero. Otherwise, if these features are enabled, all of the source is copied onto the target VDisk, which causes the Space-Efficient VDisk to either go offline or to grow as large as the source.

- If you are using only a space-efficient source, only the space that is used on the source VDisk is copied to the target VDisk. For example, if the source VDisk has a virtual size of 800 GB and a real size of 100 GB, of which 50 GB has been used, only the used 50 GB is copied.

Multiple space-efficient targets for FlashCopy
The SVC implementation of Multiple Target FlashCopy ensures that when new data is written to a source or target, that data is copied to zero or one other targets. A consequence of this implementation is that Space-Efficient VDisks can be used in conjunction with Multiple Target FlashCopy without causing allocations to occur on multiple targets when data is written to the source.

Space-efficient incremental FlashCopy
The implementation of Space-Efficient VDisks does not preclude the use of incremental FlashCopy on the same VDisks. It does not make sense to have a fully allocated source VDisk and to use incremental FlashCopy to copy this fully allocated source VDisk to a space-efficient target VDisk; however, this combination is possible.

State           Source                          Target
                Online/Offline   Cache state    Online/Offline                 Cache state
Idling/Copied   Online           Write-back     Online                         Write-back
Copying         Online           Write-back     Online                         Write-back
Stopped         Online           Write-back     Offline                        N/A
Stopping        Online           Write-back     Online if copy complete;       N/A
                                                Offline if copy not complete
Suspended       Offline          Write-back     Offline                        N/A
Preparing       Online           Write-through  Online but not accessible      N/A
Prepared        Online           Write-through  Online but not accessible      N/A


Two more interesting combinations of incremental FlashCopy and Space-Efficient VDisks are:

- A space-efficient source VDisk can be incrementally copied using FlashCopy to a space-efficient target VDisk. Whenever the FlashCopy is retriggered, only data that has been modified is recopied to the target. Note that if space is allocated on the target because of I/O to the target VDisk, this space is not reclaimed when the FlashCopy is retriggered.

- A fully allocated source VDisk can be incrementally copied using FlashCopy to another fully allocated VDisk at the same time as being copied to multiple space-efficient targets (taken at separate points in time). This combination allows a single full backup to be kept for recovery purposes, separates the backup workload from the production workload, and at the same time allows older space-efficient backups to be retained.

Migration from and to a Space-Efficient VDisk
There are various scenarios for migrating a non-Space-Efficient VDisk to a Space-Efficient VDisk. We describe migration fully in Chapter 9, “Data migration” on page 675.

6.4.14 Background copy
The FlashCopy background copy feature enables you to copy all of the data in a source VDisk to the corresponding target VDisk. Without background copy, only the data that is changed on the source VDisk is copied to the target VDisk. The benefit of using a FlashCopy mapping with background copy enabled is that the target VDisk becomes a real clone of the FlashCopy mapping source VDisk, independent from the source VDisk.

The background copy rate is a property of a FlashCopy mapping that is expressed as a value between 0 and 100. It can be changed in any FlashCopy mapping state and can differ in the mappings of one consistency group. A value of 0 disables background copy.

The relationship of the background copy rate value to the attempted number of grains to be split (copied) per second is shown in Table 6-5.

Table 6-5 Background copy rate

Value      Data copied per second   Grains per second
1 - 10     128 KB                   0.5
11 - 20    256 KB                   1
21 - 30    512 KB                   2
31 - 40    1 MB                     4
41 - 50    2 MB                     8
51 - 60    4 MB                     16
61 - 70    8 MB                     32
71 - 80    16 MB                    64
81 - 90    32 MB                    128
91 - 100   64 MB                    256

The grains per second numbers represent the maximum number of grains that the SVC will copy per second, assuming that the bandwidth to the managed disks (MDisks) can accommodate this rate.
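The rate bands in Table 6-5 follow a simple doubling rule: each band of ten raises the data rate by a factor of two, starting at 128 KBps, and the grains-per-second column is the data rate divided by the 256 KB grain size implied by the table. A sketch of that rule (the function name is illustrative, not part of the SVC interface):

```python
def background_copy_rate(value):
    """Return (KB copied per second, grains per second) for a
    background copy rate value between 1 and 100, assuming the
    256 KB grain implied by Table 6-5.  A value of 0 disables
    background copy."""
    if value == 0:
        return (0, 0.0)
    if not 1 <= value <= 100:
        raise ValueError("copy rate must be between 0 and 100")
    # Each band of 10 doubles the rate: 1-10 -> 128 KB/s, ... 91-100 -> 64 MB/s.
    band = (value - 1) // 10                  # 0 for 1-10, 9 for 91-100
    kb_per_second = 128 * 2 ** band
    grains_per_second = kb_per_second / 256   # 256 KB grain
    return (kb_per_second, grains_per_second)

print(background_copy_rate(1))     # (128, 0.5)
print(background_copy_rate(50))    # (2048, 8.0)   i.e. 2 MB/s
print(background_copy_rate(100))   # (65536, 256.0) i.e. 64 MB/s
```

Because the copy rate is a per-mapping property, this calculation applies independently to each mapping, even within one consistency group.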


If the SVC is unable to achieve these copy rates because of insufficient bandwidth from the SVC nodes to the MDisks, background copy I/O contends for resources on an equal basis with the I/O that is arriving from the hosts. Both background copy I/O and I/O that is arriving from the hosts tend to see an increase in latency and a consequential reduction in throughput. Both background copy and foreground I/O continue to make forward progress, and do not stop, hang, or cause the node to fail. The background copy is performed by both nodes of the I/O Group in which the source VDisk resides.

6.4.15 Synthesis
The FlashCopy functionality in SVC simply creates copies of VDisks. All of the data in the source VDisk is copied to the destination VDisk, including operating system control information, as well as application data and metadata.

Certain operating systems are unable to use FlashCopy without an additional step, which is termed synthesis. In summary, synthesis performs a type of transformation on the operating system metadata in the target VDisk so that the operating system can use the disk.

6.4.16 Serialization of I/O by FlashCopy
In general, the FlashCopy function in the SVC introduces no explicit serialization into the I/O path. Therefore, many concurrent I/Os are allowed to the source and target VDisks.

However, there is a lock for each grain. The lock can be in shared or exclusive mode. For Multiple Target FlashCopy, a common lock is shared among all of the mappings that are derived from a particular source VDisk. The lock is used in the following modes under the following conditions:

- The lock is held in shared mode for the duration of a read from the target VDisk, which touches a grain that is not split.

- The lock is held in exclusive mode during a grain split, which happens prior to FlashCopy starting any destage (or write-through) from the cache to a grain that is going to be split (the destage waits for the grain to be split). The lock is held during the grain split and released before the destage is processed.

If the lock is held in shared mode, and another process wants to use the lock in shared mode, this request is granted unless a process is already waiting to use the lock in exclusive mode.

If the lock is held in shared mode and it is requested to be exclusive, the requesting process must wait until all holders of the shared lock free it.

Similarly, if the lock is held in exclusive mode, a process wanting to use the lock in either shared or exclusive mode must wait for it to be freed.
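The three rules above describe a writer-preferring reader-writer lock: new shared requests yield to a waiting exclusive request, and an exclusive request waits for all current holders. A minimal model, under that assumption (illustrative only; the real grain lock is internal to the SVC):

```python
import threading

class GrainLock:
    """Sketch of the per-grain lock semantics described in the text:
    shared reads are granted unless an exclusive (grain split)
    request is already waiting; an exclusive request waits for all
    current holders to release."""
    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0          # current shared holders
        self._exclusive = False   # exclusive holder present
        self._excl_waiting = 0    # exclusive requests queued

    def acquire_shared(self):
        with self._cond:
            # A new shared request yields to any waiting exclusive one.
            while self._exclusive or self._excl_waiting:
                self._cond.wait()
            self._shared += 1

    def release_shared(self):
        with self._cond:
            self._shared -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            self._excl_waiting += 1
            # Wait for all shared holders and any exclusive holder.
            while self._shared or self._exclusive:
                self._cond.wait()
            self._excl_waiting -= 1
            self._exclusive = True

    def release_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()
```

In FlashCopy terms, `acquire_shared` corresponds to a read of an unsplit grain on the target, and `acquire_exclusive` to the grain split that must complete before a destage proceeds.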

6.4.17 Error handling
When a FlashCopy mapping is not copying or stopping, the FlashCopy function does not affect the error handling or the reporting of errors in the I/O path. Error handling and reporting are only affected by FlashCopy when a FlashCopy mapping is copying or stopping.

We describe these scenarios in the following sections.

Node failure
Normally, two copies of the FlashCopy bitmaps are maintained; one copy of the FlashCopy bitmaps is on each of the two nodes making up the I/O Group of the source VDisk. When a node fails, one copy of the bitmaps, for all FlashCopy mappings whose source VDisk is a


member of the failing node’s I/O Group, will become inaccessible. FlashCopy will continue with a single copy of the FlashCopy bitmap being stored as non-volatile in the remaining node in the source I/O Group. The cluster metadata is updated to indicate that the missing node no longer holds up-to-date bitmap information.

When the failing node recovers, or a replacement node is added to the I/O Group, up-to-date bitmaps will be reestablished on the new node, and it will again provide a redundant location for the bitmaps:

- When the FlashCopy bitmap becomes available again (at least one of the SVC nodes in the I/O Group is accessible), the FlashCopy mapping will return to the Copying state, access to the source and target VDisks will be restored, and the background copy process will be resumed. Unflushed data that was written to the source or target before the FlashCopy was suspended is pinned in the cache until the FlashCopy mapping leaves the Suspended state.

- Normally, two copies of the FlashCopy bitmaps are maintained (in non-volatile memory), one copy on each of the two SVC nodes making up the I/O Group of the source VDisk. If only one of the SVC nodes in the I/O Group to which the source VDisk belongs goes offline, the FlashCopy mapping will continue in the Copying state, with a single copy of the FlashCopy bitmap. When the failed SVC node recovers, or a replacement SVC node is added to the I/O Group, up-to-date FlashCopy bitmaps will be reestablished on the resuming SVC node and again provide a redundant location for the FlashCopy bitmaps.

Path failure (Path Offline state)
In a fully functioning cluster, all of the nodes have a software representation of every VDisk in the cluster within their application hierarchy.

Because the storage area network (SAN) that links the SVC nodes to each other and to the MDisks is made up of many independent links, it is possible for a subset of the nodes to be temporarily isolated from several of the MDisks. When this situation happens, the managed disks are said to be Path Offline on certain nodes.

When an MDisk enters the Path Offline state on an SVC node, all of the VDisks that have any extents on the MDisk also become Path Offline. Again, this situation happens only on the affected nodes. When a VDisk is Path Offline on a particular SVC node, host access to that VDisk through the node fails with SCSI sense data indicating Offline.

Path Offline for the source VDisk
If a FlashCopy mapping is in the Copying state and the source VDisk goes Path Offline, this Path Offline state is propagated to all target VDisks up to, but not including, the target VDisk for the newest mapping that is 100% copied but remains in the Copying state. If no mappings are 100% copied, all of the target VDisks are taken offline. Again, note that Path Offline is a state that exists on a per-node basis. Other nodes might not be affected. If the source VDisk comes back Online, the target and source VDisks are brought back Online.

If both nodes in the I/O Group become unavailable: If both nodes in the I/O Group to which the target VDisk belongs become unavailable, the host cannot access the target VDisk.

Other nodes: Other nodes might see the managed disks as Online, because their connection to the managed disks is still functioning.


Path Offline for the target VDisk
If a target VDisk goes Path Offline, but the source VDisk is still Online, and if there are any dependent mappings, those target VDisks also go Path Offline. The source VDisk remains Online.

6.4.18 Asynchronous notifications
FlashCopy raises informational error logs when mappings or consistency groups make certain state transitions.

These state transitions occur as a result of configuration events that complete asynchronously, and the informational errors can be used to generate Simple Network Management Protocol (SNMP) traps to notify the user. Other configuration events complete synchronously, and no informational errors are logged as a result of these events:

- PREPARE_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Prepared state as a result of a user request to prepare. The user can now start (or stop) the mapping or consistency group.

- COPY_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group enters the Idle_or_copied state when it was previously in the Copying or Stopping state. This state transition indicates that the target disk now contains a complete copy and no longer depends on the source.

- STOP_COMPLETED: This state transition is logged when the FlashCopy mapping or consistency group has entered the Stopped state as a result of a user request to stop. It will be logged after the automatic copy process has completed. This state transition includes mappings where no copying needed to be performed. This state transition differs from the error that is logged when a mapping or group enters the Stopped state as a result of an I/O error.

6.4.19 Interoperation with Metro Mirror and Global Mirror
FlashCopy can work together with Metro Mirror and Global Mirror to provide better protection of the data. For example, we can perform a Metro Mirror copy to duplicate data from Site_A to Site_B and then perform a daily FlashCopy to copy the data elsewhere.

Table 6-6 lists which combinations of FlashCopy and Remote Copy are supported. In the table, remote copy refers to Metro Mirror and Global Mirror.

Table 6-6 FlashCopy and remote copy interaction

Component               Remote copy primary    Remote copy secondary
FlashCopy source        Supported              Supported (see latency note)
FlashCopy destination   Not supported          Not supported

Latency: When the FlashCopy relationship is in the Preparing and Prepared states, the cache at the remote copy secondary site operates in write-through mode. This process adds additional latency to the already latent remote copy relationship.


6.4.20 Recovering data from FlashCopy
You can use FlashCopy to recover data if corruption has occurred. For example, if a user deletes data by mistake, you can map the FlashCopy target VDisks to the application server, import the logical volume-level configuration, start the application, and restore the data back to a given point in time.

FlashCopy backup is a disk-based backup copy that can be used to restore service more quickly than other backup techniques. This application is further enhanced by the ability to maintain multiple backup targets, spread over a range of time, allowing the user to choose a backup from before the time of the corruption.

6.5 Metro Mirror
In the following topics, we describe the Metro Mirror copy service, which is a synchronous remote copy function. Metro Mirror in SVC is similar to Metro Mirror in the IBM System Storage DS family.

SVC provides a single point of control when enabling Metro Mirror in your SAN, regardless of the disk subsystems that are used.

The general application of Metro Mirror is to maintain two real-time synchronized copies of a disk. Often, two copies are geographically dispersed to two SVC clusters, although it is possible to use Metro Mirror in a single cluster (within an I/O Group). If the primary copy fails, you can enable a secondary copy for I/O operation.

A typical application of this function is to set up a dual-site solution using two SVC clusters. The first site is considered the primary or production site, and the second site is considered the backup site or failover site, which is activated when a failure at the first site is detected.

6.5.1 Metro Mirror overview
Metro Mirror works by establishing a Metro Mirror relationship between two VDisks of equal size. To maintain data integrity for dependent writes, you can use consistency groups to group a number of Metro Mirror relationships together, similar to FlashCopy consistency groups. SVC provides both intracluster and intercluster Metro Mirror.

Intracluster Metro Mirror
You can apply intracluster Metro Mirror within a single I/O Group.

Tip: It is better to map a FlashCopy target VDisk to a backup machine with the same application installed. We do not recommend that you map a FlashCopy target VDisk to the same application server to which the FlashCopy source VDisk is mapped, because the FlashCopy target and source VDisks have the same signature, pvid, vgda, and so on. Special steps are necessary to handle the conflict at the OS level. For example, you can use the recreatevg command in AIX to generate separate vg, lv, file system, and so on, names in order to avoid a naming conflict.

Tip: Intracluster Metro Mirror consumes more resources within a cluster than an intercluster Metro Mirror relationship. We recommend using intercluster Metro Mirror when possible.


Applying Metro Mirror across I/O Groups in the same SVC cluster is not supported, because intracluster Metro Mirror can only be performed between VDisks in the same I/O Group.

Intercluster Metro Mirror
Intercluster Metro Mirror operations require a pair of SVC clusters that are separated by a number of moderately high-bandwidth links. The two SVC clusters must be defined in an SVC partnership, which must be performed on both SVC clusters to establish a fully functional Metro Mirror partnership.

Using standard single-mode connections, the supported distance between two SVC clusters in a Metro Mirror partnership is 10 km (6.2 miles), although greater distances can be achieved by using extenders. For extended distance solutions, contact your IBM representative.

6.5.2 Remote copy techniques
Metro Mirror is a synchronous remote copy, which we briefly explain next. To illustrate the differences between synchronous and asynchronous remote copy, we also explain asynchronous remote copy.

Synchronous remote copy
Metro Mirror is a fully synchronous remote copy technique that ensures that, as long as writes to the secondary VDisks are possible, writes are committed at both the primary and secondary VDisks before the application is given an acknowledgement of the completion of a write.

Errors, such as a loss of connectivity between the two clusters, can mean that it is not possible to replicate data from the primary VDisk to the secondary VDisk. In this case, Metro Mirror operates to ensure that a consistent image is left at the secondary VDisk, and then continues to allow I/O to the primary VDisk, so as not to affect the operations at the production site.

Figure 6-12 on page 283 illustrates how a write to the master VDisk is mirrored to the cache of the auxiliary VDisk before an acknowledgement of the write is sent back to the host that issued the write. This process ensures that the secondary is synchronized in real time, in case it is needed in a failover situation.

However, this process also means that the application is fully exposed to the latency and bandwidth limitations (if any) of the communication link to the secondary site. This process might lead to unacceptable application performance, particularly when placed under peak load. Therefore, using Metro Mirror has distance limitations.
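A toy model of this write path shows why the host acknowledgement carries the full secondary-site round trip. The class and attribute names here are invented for illustration; they are not SVC interfaces, and the latency figure is an arbitrary example.

```python
class MetroMirrorModel:
    """Toy model of a synchronous Metro Mirror write: the host is
    acknowledged only after the data is in the cache of both the
    master and the auxiliary VDisk.  Names are illustrative."""
    def __init__(self, intersite_latency_ms=5.0):
        self.master_cache = {}
        self.auxiliary_cache = {}
        self.intersite_latency_ms = intersite_latency_ms

    def write(self, block, payload):
        self.master_cache[block] = payload       # commit at the primary
        # The mirror hop to the secondary happens before the ack, so
        # every host write pays the intersite round trip.
        elapsed_ms = 2 * self.intersite_latency_ms
        self.auxiliary_cache[block] = payload    # commit at the secondary
        return "ack", elapsed_ms                 # only now answer the host

rel = MetroMirrorModel(intersite_latency_ms=5.0)
status, elapsed = rel.write(0, b"data")
print(status, elapsed)   # ack 10.0
```

Doubling the intersite latency doubles the added write latency for the application, which is why synchronous mirroring is limited to metropolitan distances.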

Limit: When a local and a remote fabric are connected together for Metro Mirror purposes, the inter-switch link (ISL) hop count between a local node and a remote node cannot exceed seven.


Figure 6-12 Write on VDisk in Metro Mirror relationship

6.5.3 SVC Metro Mirror features
SVC Metro Mirror supports the following features:

- Synchronous remote copy of VDisks dispersed over metropolitan scale distances is supported.

- SVC implements Metro Mirror relationships between VDisk pairs, with each VDisk in a pair managed by an SVC cluster.

- SVC supports intracluster Metro Mirror, where both VDisks belong to the same cluster (and I/O Group).

- SVC supports intercluster Metro Mirror, where each VDisk belongs to a separate SVC cluster. You can configure a specific SVC cluster for partnership with another cluster. All intercluster Metro Mirror processing takes place between two SVC clusters that are configured in a partnership.

- Intercluster and intracluster Metro Mirror can be used concurrently within a cluster for separate relationships.

- SVC does not require that a control network or fabric is installed to manage Metro Mirror. For intercluster Metro Mirror, SVC maintains a control link between two clusters. This control link is used to control the state and coordinate updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Metro Mirror I/O.

- SVC implements a configuration model that maintains the Metro Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.


- SVC maintains and polices a strong concept of consistency and makes this concept available to guide configuration activity.

- SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have suffered write I/O to both disks and to resynchronize only those regions that are known to have changed.

6.5.4 Multiple Cluster Mirroring
With the introduction of Multiple Cluster Mirroring in SVC 5.1, you can configure a cluster with multiple partner clusters.

Multiple Cluster Mirroring enables Metro Mirror and Global Mirror relationships to exist between a maximum of four SVC clusters.

The SVC clusters can take advantage of the maximum number of remote mirror relationships because Multiple Cluster Mirroring enables clients to copy from several remote sites to a single SVC cluster at a disaster recovery (DR) site. It supports implementation of consolidated DR strategies and helps clients that are moving or consolidating data centers.

Figure 6-13 shows an example of a Multiple Cluster Mirroring configuration.

Figure 6-13 Multiple Cluster Mirroring configuration example

Supported Multiple Cluster Mirroring topologies
Prior to SVC 5.1, you used one of the two cluster topologies that were allowed:

- A (no partnership configured)

- A↔B (one partnership configured)


With Multiple Cluster Mirroring, there is a wider range of possible topologies. You can connect a maximum of four clusters, directly or indirectly. Therefore, a cluster can never have more than three partners.

For example, these topologies are allowed:

- A↔B, A↔C, and A↔D

Figure 6-14 shows a star topology.

Figure 6-14 SVC star topology

Figure 6-14 shows four clusters in a star topology, with cluster A at the center. Cluster A can be a central DR site for the three other locations.

Using a star topology, you can migrate separate applications at separate times by using a process, such as this example:

1. Suspend application at A.

2. Remove the A↔B relationship.

3. Create the A↔C relationship (or alternatively, the B↔C relationship).

4. Synchronize to cluster C, and ensure that A↔C is established.

These topologies are also allowed:

- A↔B, A↔C, A↔D, B↔C, B↔D, and C↔D

- A↔B, A↔C, and B↔C

Figure 6-15 on page 286 shows a triangle topology.


Figure 6-15 SVC triangle topology

There are three clusters in a triangle topology.

Figure 6-16 shows a fully connected topology.

Figure 6-16 SVC fully connected topology

Figure 6-16 is a fully connected mesh where every cluster has a partnership to each of the three other clusters. Therefore, VDisks can be replicated between any pair of clusters. Note that this topology is not required unless relationships are needed between every pair of clusters.

The other option is a daisy-chain topology between the four clusters:

A↔B, B↔C, and C↔D

This arrangement gives a cascading solution; however, a VDisk must be in only one relationship, such as A↔B, for example. At the time of writing, a three-site solution, such as DS8000 Metro Global Mirror, is not supported.

Figure 6-17 on page 287 shows a daisy-chain topology.


Figure 6-17 SVC daisy-chain topology

Unsupported topologyAs an illustration of what is not supported, we show this example:

A↔B, B↔C, C↔D, and D↔E

Figure 6-18 shows this unsupported topology.

Figure 6-18 SVC unsupported topology

This topology is unsupported, because five clusters are indirectly connected. If the cluster detects this topology at the time of the fourth mkpartnership command, the command is rejected.
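The four-cluster limit amounts to a connectivity check on the partnership graph. The sketch below illustrates that check; `mk_partnership` and `reachable` are invented names for this model, not the SVC mkpartnership command or its implementation.

```python
# Sketch of the topology rule: a partnership is rejected if it would
# connect, directly or indirectly, more than four clusters.
partnerships = set()

def reachable(cluster, edges):
    """Return every cluster connected to 'cluster', directly or not."""
    seen, stack = {cluster}, [cluster]
    while stack:
        node = stack.pop()
        for a, b in edges:
            for nxt in (b,) if a == node else (a,) if b == node else ():
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

def mk_partnership(a, b):
    """Model of the check: reject a partnership that would connect
    more than four clusters."""
    proposed = partnerships | {(a, b)}
    if len(reachable(a, proposed)) > 4:
        raise ValueError("topology would connect more than four clusters")
    partnerships.add((a, b))

for pair in [("A", "B"), ("B", "C"), ("C", "D")]:
    mk_partnership(*pair)          # daisy chain A-B-C-D: allowed
try:
    mk_partnership("D", "E")       # a fifth cluster: rejected
except ValueError as err:
    print(err)
```

The check walks the whole connected component, which is why the A↔B, B↔C, C↔D, D↔E chain above is rejected even though cluster E would have only one direct partner.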

6.5.5 Metro Mirror relationship
A Metro Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster (intracluster Metro Mirror), or they can be on separate SVC clusters that are defined as SVC partners (intercluster Metro Mirror).

Upgrade restrictions: The introduction of Multiple Cluster Mirroring necessitates upgrade restrictions:

- Concurrent code upgrade to 5.1.0 is supported from 4.3.1.x only.

- If the cluster is in a partnership, the partnered cluster must meet a minimum software level to allow concurrent I/O; the partnered cluster must be running 4.2.1 or higher.

Rules:

- A VDisk can only be part of one Metro Mirror relationship at a time.

- A VDisk that is a FlashCopy target cannot be part of a Metro Mirror relationship.


Figure 6-19 illustrates the Metro Mirror relationship.

Figure 6-19 Metro Mirror relationship

Metro Mirror relationship between primary and secondary VDisks
When creating a Metro Mirror relationship, you must define one VDisk as the master and the other VDisk as the auxiliary. The relationship between the two copies is symmetric. When a Metro Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as the target). The initial copy direction mirrors the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, you can change the copy direction, if appropriate.

In the most common applications of Metro Mirror, the master VDisk contains the production copy of the data and is used by the host application, while the auxiliary VDisk contains a mirrored copy of the data and is used for failover in DR scenarios. The terms master and auxiliary describe this use. However, if Metro Mirror is applied differently, the terms master VDisk and auxiliary VDisk need to be interpreted appropriately.

6.5.6 Importance of write ordering

Many applications that use block storage must survive failures, such as the loss of power or a software crash, and not lose the data that existed prior to the failure. Because many applications need to perform large numbers of update operations in parallel with storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.

An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine an application’s algorithms and can lead to problems, such as detected, or undetected, data corruption.

Dependent writes that span multiple VDisks

The following scenario illustrates a simple example of a sequence of dependent writes, and in particular, what can happen if they span multiple VDisks.

Consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update will be performed.

2. A second write is executed to update the database.

3. A third write is executed to update the database log, indicating that a database update has completed successfully.
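The three-step sequence above can be sketched in a few lines of illustrative code; the function and disk structures are hypothetical, but they show why the commit record (write 3) must never land before the database update (write 2):

```python
# Hypothetical sketch of dependent writes in a database update: each write
# completes before the next begins, so the log never claims success for an
# update that the data disk has not seen. Names are illustrative only.

def update_with_dependent_writes(log_disk, data_disk, key, value):
    log_disk.append(("begin", key))      # write 1: log the intent
    data_disk[key] = value               # write 2: update the database
    log_disk.append(("commit", key))     # write 3: log the completion

def recover(log_disk):
    """On restart, roll back any transaction that began but never committed."""
    begun = {k for op, k in log_disk if op == "begin"}
    committed = {k for op, k in log_disk if op == "commit"}
    return begun - committed             # keys that need rollback

log, data = [], {}
update_with_dependent_writes(log, data, "acct", 100)
print(recover(log))                      # set() -- nothing to roll back
```

If a secondary copy held writes 1 and 3 but not write 2 (the situation described next), recovery would see a committed transaction whose data never arrived, which is exactly the corruption that consistency groups exist to prevent.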

288 Implementing the IBM System Storage SAN Volume Controller V5.1


Figure 6-20 shows the write sequence.

Figure 6-20 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step.

But imagine if the database log and the database itself are on separate VDisks and a Metro Mirror relationship is stopped during this update. In this case, you need to consider the possibility that the Metro Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log. If this situation occurs, it is possible that the secondary VDisks see writes (1) and (3), but not (2).

Then, if the database is restarted using the data from the secondary disks, the database log indicates that the transaction completed successfully, when, in fact, it did not. In this scenario, the integrity of the database is in question.

Metro Mirror consistency groups

Metro Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Metro Mirrored VDisks. Consistency groups ensure a consistent data set when applications have related data that spans multiple VDisks.

Database logs: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.


A Metro Mirror consistency group can contain an arbitrary number of relationships up to the maximum number of Metro Mirror relationships that is supported by the SVC cluster. Metro Mirror commands can be issued to a Metro Mirror consistency group and, therefore, simultaneously for all Metro Mirror relationships defined within that consistency group, or to a single Metro Mirror relationship that is not part of a Metro Mirror consistency group. For example, when issuing a Metro Mirror startrcconsistgrp command to the consistency group, all of the Metro Mirror relationships in the consistency group are started at the same time.
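The "one entity" behavior can be sketched as follows; the classes, state names, and relationship names are illustrative only, not SVC code:

```python
# Hypothetical sketch of a consistency group acting as one entity: a command
# issued to the group is applied to every member relationship at the same
# time, while a stand-alone relationship is handled on its own.

class Relationship:
    def __init__(self, name):
        self.name, self.state = name, "InconsistentStopped"

    def start(self):
        self.state = "InconsistentCopying"

class ConsistencyGroup:
    def __init__(self, relationships):
        self.relationships = relationships

    def start(self):
        # One command (like startrcconsistgrp) starts every member.
        for rel in self.relationships:
            rel.start()

group = ConsistencyGroup([Relationship("MM_Rel_1"), Relationship("MM_Rel_2")])
standalone = Relationship("MM_Rel_3")
group.start()  # both members start together; MM_Rel_3 is untouched
print([r.state for r in group.relationships])
```

This mirrors the figure that follows: MM_Relationship 1 and 2 move together because they belong to the group, while MM_Relationship 3 is managed separately.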

Figure 6-21 illustrates the concept of Metro Mirror consistency groups.

Because the MM_Relationship 1 and 2 are part of the consistency group, they can be handled as one entity, while the stand-alone MM_Relationship 3 is handled separately.

Figure 6-21 Metro Mirror consistency group

Certain uses of Metro Mirror require manipulation of more than one relationship. Metro Mirror consistency groups can provide the ability to group relationships, so that they are manipulated in unison. Metro Mirror relationships within a consistency group can be in any form:

- Metro Mirror relationships can be part of a consistency group, or they can be stand-alone and therefore handled as single instances.

- A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

- All of the relationships in a consistency group must have matching master and auxiliary SVC clusters.


Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, this manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited. These configuration commands are not prohibited if the relationship is not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Metro Mirror rejects attempts to enable access to secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other application, Metro Mirror still refuses to grant access to its secondary VDisks even though it is safe in this case, because Metro Mirror policy is to refuse access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All of the relationships in a non-empty consistency group have the same state as the consistency group.

6.5.7 How Metro Mirror works

In the sections that follow, we describe how Metro Mirror works.

Intercluster communication and zoning

All intercluster communication is performed over the SAN. Prior to creating intercluster Metro Mirror relationships, you must create a partnership between the two clusters.

SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, you must define a zone in each fabric for intercluster communication (see Chapter 3, “Planning and configuration” on page 65).

SVC cluster partnership

Each SVC cluster can be in a partnership with up to three other SVC clusters. When an SVC cluster partnership has been defined on both clusters of a pair of clusters, further communication facilities between the nodes in each of the clusters are established:

- A single control channel, which is used to exchange and coordinate configuration information

- I/O channels between each of these nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links fail, and they are repaired to maintain operation where possible. If communication between SVC clusters is interrupted or lost, an error is logged (and consequently, Metro Mirror relationships will stop).

To handle error conditions, you can configure SVC to raise Simple Network Management Protocol (SNMP) traps to the enterprise monitoring system.

Maintenance of the intercluster link

All SVC nodes maintain a database of other devices that are visible on the fabric. This database is updated as devices appear and disappear.


Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement clustering and the functional protocols of SVC.

Nodes that are in separate clusters do not exchange messages after initial discovery is complete, unless they have been configured together to perform Metro Mirror.

The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes.

If the designated node fails (or all of its logins to the remote cluster fail), a new node is chosen to carry control traffic. This node change causes the I/O to pause, but it does not put the relationships in a Consistent Stopped state.

6.5.8 Metro Mirror process

Several major steps exist in the Metro Mirror process:

1. An SVC cluster partnership is created between two SVC clusters (for intercluster Metro Mirror).

2. A Metro Mirror relationship is created between two VDisks of the same size.

3. To manage multiple Metro Mirror relationships as one entity, relationships can be made part of a Metro Mirror consistency group, which ensures data consistency across multiple Metro Mirror relationships and provides ease of management.

4. When a Metro Mirror relationship is started, and when the background copy has completed, the relationship becomes consistent and synchronized.

5. After the relationship is synchronized, the secondary VDisk holds a copy of the production data at the primary, which can be used for DR.

6. To access the auxiliary VDisk, the Metro Mirror relationship must be stopped with the access option enabled before write I/O is submitted to the secondary.

7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.
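As a rough sketch, the steps above might be driven from the SVC CLI along the following lines. The object names (REMOTE_CLUSTER, MM_CG1, MM_REL1, and the VDisk names) are placeholders, and the exact parameters can vary by code level:

```shell
# Illustrative CLI sequence for the Metro Mirror process; placeholder names.

# Step 1: create the cluster partnership (repeat on the remote cluster).
svctask mkpartnership -bandwidth 50 REMOTE_CLUSTER

# Steps 2-3: create a relationship between two equally sized VDisks and
# place it in a consistency group for ease of management.
svctask mkrcconsistgrp -cluster REMOTE_CLUSTER -name MM_CG1
svctask mkrcrelationship -master MASTER_VDISK -aux AUX_VDISK \
    -cluster REMOTE_CLUSTER -consistgrp MM_CG1 -name MM_REL1

# Step 4: start the group; the background copy runs until the
# relationships become consistent and synchronized.
svctask startrcconsistgrp MM_CG1

# Step 6: before the remote host writes to the secondary, stop the
# group with access enabled.
svctask stoprcconsistgrp -access MM_CG1
```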

6.5.9 Methods of synchronization

This section describes three methods that can be used to establish a relationship.

Full synchronization after creation

The full synchronization after creation method is the default method. It is the simplest in that it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the available bandwidth can make this method unsuitable.

Use this command sequence for a single relationship:

1. Run mkrcrelationship without specifying the -sync option.

2. Run startrcrelationship without specifying the -clean option.
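As a hedged example with placeholder object names, the sequence might look like this (note the absence of -sync at creation and -clean at start):

```shell
# Full synchronization after creation; placeholder object names.
svctask mkrcrelationship -master MASTER_VDISK -aux AUX_VDISK \
    -cluster REMOTE_CLUSTER -name MM_REL1     # no -sync option
svctask startrcrelationship MM_REL1           # no -clean option
```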


Synchronized before creation

In this method, the administrator must ensure that the master and auxiliary VDisks contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary VDisks contain identical data:

- Both disks are created with the security delete feature so as to make all data zero.

- A complete tape image (or other method of moving data) is copied from one disk to the other disk.

In either technique, no write I/O must take place to either the master or the auxiliary before the relationship is established.

Then, the administrator must run these commands:

- Run mkrcrelationship with the -sync flag.
- Run startrcrelationship without the -clean flag.

If these steps are performed incorrectly, Metro Mirror reports the relationship as consistent when it is not, likely making any secondary disk useless. This method has an advantage over full synchronization, because it does not require all of the data to be copied over a constrained link. However, if data needs to be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after creation

In this method, the administrator must still copy data from the master to the auxiliary, but the administrator can use this method without stopping the application at the master. The administrator must ensure that these steps are taken:

- A mkrcrelationship command is issued with the -sync flag.

- A stoprcrelationship command is issued with the -access flag.

- A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.

After the copy is complete, the administrator must ensure that a startrcrelationship command is issued with the -clean flag.
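With placeholder object names, the command sequence might look like the following; the -primary master argument is included here because stopping with -access leaves the relationship in the Idling state, and a start from Idling must specify the copy direction:

```shell
# Quick synchronization after creation; placeholder object names.
svctask mkrcrelationship -master MASTER_VDISK -aux AUX_VDISK \
    -cluster REMOTE_CLUSTER -name MM_REL1 -sync
svctask stoprcrelationship -access MM_REL1
# ... copy the entire master disk to the auxiliary (tape image or similar) ...
svctask startrcrelationship -primary master -clean MM_REL1
```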

With this technique, only data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with “Synchronized before creation” on page 293, the copy step must be performed correctly or the auxiliary will be useless, although the copy operation will report it as being synchronized.

Metro Mirror states and events

In this section, we explain the various states of a Metro Mirror relationship and the series of events that modify these states.

In Figure 6-22 on page 294, the Metro Mirror relationship state diagram shows an overview of states that can apply to a Metro Mirror relationship in a connected state.


Figure 6-22 Metro Mirror mapping state diagram

When creating the Metro Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, in which case the background copy process is skipped. This capability is especially useful when creating Metro Mirror relationships for VDisks that have been created with the format option.

The numbered steps that follow correspond to the numbers in Figure 6-22. To create the relationship:

- Step 1:

a. The Metro Mirror relationship is created with the -sync option, and the Metro Mirror relationship enters the Consistent stopped state.

b. The Metro Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Metro Mirror relationship enters the Inconsistent stopped state.

- Step 2:

a. When starting a Metro Mirror relationship in the Consistent stopped state, the Metro Mirror relationship enters the Consistent synchronized state, provided that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state. Otherwise, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state, while the background copy is started.

b. When starting a Metro Mirror relationship in the Inconsistent stopped state, the Metro Mirror relationship enters the Inconsistent copying state, while the background copy is started.


- Step 3:

When the background copy completes, the Metro Mirror relationship transits from the Inconsistent copying state to the Consistent synchronized state.

- Step 4:

a. When stopping a Metro Mirror relationship in the Consistent synchronized state, specifying the -access option, which enables write I/O on the secondary VDisk, the Metro Mirror relationship enters the Idling state.

b. To enable write I/O on the secondary VDisk, when the Metro Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship specifying the -access option, and the Metro Mirror relationship enters the Idling state.

- Step 5:

a. When starting a Metro Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Given that no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Metro Mirror relationship enters the Consistent synchronized state.

b. If write I/O has been performed to either the master or the auxiliary VDisk, the -force option must be specified, and the Metro Mirror relationship then enters the Inconsistent copying state, while the background copy is started.
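The numbered transitions above can be modeled as a small state machine. This sketch is illustrative only (not SVC code) and covers just the connected states and the create, start, stop, and copy-complete events described in steps 1 through 4:

```python
# Hypothetical sketch of the state transitions in Figure 6-22; it models
# only the connected states discussed above and is not SVC code.

class MetroMirrorRelationship:
    def __init__(self, in_sync):
        # Step 1: created with -sync -> Consistent stopped;
        # otherwise -> Inconsistent stopped.
        self.state = "ConsistentStopped" if in_sync else "InconsistentStopped"

    def start(self, force=False, writes_since_stop=False):
        # Step 2: starting from Consistent stopped with no intervening
        # writes goes straight to Consistent synchronized; otherwise
        # -force is required and a background copy begins.
        if self.state == "ConsistentStopped" and not writes_since_stop:
            self.state = "ConsistentSynchronized"
        elif force or self.state == "InconsistentStopped":
            self.state = "InconsistentCopying"
        else:
            raise ValueError("start requires -force after write I/O")

    def background_copy_complete(self):
        # Step 3: copy completion moves Inconsistent copying to synchronized.
        if self.state == "InconsistentCopying":
            self.state = "ConsistentSynchronized"

    def stop(self, access=False):
        # Step 4: -access enables write I/O on the secondary (Idling).
        if access:
            self.state = "Idling"
        elif self.state == "ConsistentSynchronized":
            self.state = "ConsistentStopped"
        elif self.state == "InconsistentCopying":
            self.state = "InconsistentStopped"

rel = MetroMirrorRelationship(in_sync=False)
rel.start()
rel.background_copy_complete()
print(rel.state)  # ConsistentSynchronized
```

The sketch deliberately omits the Idling restart with -primary (step 5) and the disconnected states, which the following sections describe.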

Stop or Error: When a Metro Mirror relationship is stopped (either intentionally or due to an error), a state transition is applied:

- For example, the Metro Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and the Metro Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state.

- If the connection is broken between the SVC clusters in a partnership, all (intercluster) Metro Mirror relationships enter a Disconnected state. For further information, refer to “Connected versus disconnected” on page 295.

6.5.10 State overview

SVC-defined concepts of state are key to understanding configuration concepts. We explain them in more detail next.

Connected versus disconnected

This distinction can arise when a Metro Mirror relationship is created with the two VDisks in separate clusters.

Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

Common states: Stand-alone relationships and consistency groups share a common configuration and state model. All Metro Mirror relationships in a consistency group that is not empty have the same state as the consistency group.


In this scenario, each cluster is left with half of the relationship and has only a portion of the information that was available to it before. Limited configuration activity is possible and is a subset of what was possible before.

The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and what configuration commands are permitted.

When the clusters can communicate again, the relationships become connected again. Metro Mirror automatically reconciles the two state fragments, taking into account any configuration or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state.

Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent

Relationships that contain VDisks that are operating as secondaries can be described as being consistent or inconsistent. Consistency groups that contain relationships can also be described as being consistent or inconsistent. The consistent or inconsistent property describes the relationship of the data on the secondary VDisk to the data on the primary VDisk. It can be considered a property of the secondary VDisk itself.

A secondary is described as consistent if it contains data that might have been read by a host system from the primary if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:

� The secondary VDisk contains the data from all of the writes to the primary for which the host received successful completion and that data had not been overwritten by a subsequent write (before the recovery point).

� For writes for which the host did not receive a successful completion (that is, it received bad completion or no completion at all), and the host subsequently performed a read from the primary of that data and that read returned successful completion and no later write was sent (before the recovery point), the secondary contains the same data as that returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred).

If an application is designed to cope with unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:

- Write ordering
- Read stability for correct operation at the secondary

If a relationship, or set of relationships, is inconsistent and an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:

- The application might decide that the data is corrupt and crash or exit with an error code.

- The application might fail to detect that the data is corrupt and return erroneous data.

- The application might work without a problem.

Because of the risk of data corruption, and in particular undetected data corruption, Metro Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

Consistency as a concept can be applied to a single relationship or a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks accessed through multiple systems; therefore, consistency must operate across all those disks.

When deciding how to use consistency groups, the administrator must consider the scope of an application’s data, taking into account all of the interdependent systems that communicate and exchange information.

If two programs or systems communicate and store details as a result of the information exchanged, one of the following approaches must be taken:

- All of the data accessed by the group of systems must be placed into a single consistency group.

- The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized

A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary VDisks only differ in regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at a point in time in the past. Write I/O might have continued to a primary and not have been copied to the secondary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Metro Mirror tracks the changes that happen at the primary, but not the order of such changes, or the details of such changes (write data). When communication is restored, it is impossible to synchronize the secondary without sending write data to the secondary out-of-order and, therefore, losing consistency.

Two policies can be used to cope with this situation:

- Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster before consistency is achieved again, the point-in-time copy target provides a consistent, although out-of-date, image.

- Accept the loss of consistency and the loss of a useful secondary, while synchronizing the secondary.


6.5.11 Detailed states

The following sections detail the states that are portrayed to the user, for either consistency groups or relationships. They also detail the extra information that is available in each state. The major states are designed to provide guidance about the configuration commands that are available.

InconsistentStopped

InconsistentStopped is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O. A copy process needs to be started to make the secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.

InconsistentCopying

InconsistentCopying is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is not accessible for either read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship or a consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group.

In this state, a background copy process runs that copies data from the primary to the secondary VDisk.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress.

A persistent error or stop command places the relationship or consistency group into an InconsistentStopped state. A start command is accepted, but it has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped

ConsistentStopped is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary.

This state can arise when a relationship was in a Consistent Synchronized state and suffers an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to TRUE.


Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to false). In this case, to re-establish synchronization, consistency must be given up for a period. You must use a start command with the -force option to acknowledge this situation, and the relationship or consistency group transits to InconsistentCopying. Enter this command only after all of the outstanding errors are repaired.

In the unusual case where the primary and the secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, you can enter a switch command that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and the secondary.

If the relationship or consistency group becomes disconnected, the secondary transits to ConsistentDisconnected. The primary transitions to IdlingDisconnected.

An informational status log is generated every time that a relationship or consistency group enters the ConsistentStopped state with a status of Online. You can configure this situation to enable an SNMP trap and provide a trigger to automation software to consider issuing a start command following a loss of synchronization.

ConsistentSynchronized

ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for read and write I/O, and the secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks. Either successful completion must be received for both writes, the write must be failed to the host, or a state must transit out of the ConsistentSynchronized state before a write is completed to the host.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the primary and secondary roles.

A start command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling

Idling is a connected state. Both master and auxiliary disks operate in the primary role. Consequently, both master and auxiliary are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. Metro Mirror maintains a record of regions on each disk that received write I/O while idling. This record is used to determine what areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O, which is indicated by the Synchronized status. If the start command leads to loss of consistency, you must specify the -force parameter.


Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.

IdlingDisconnected

IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.

The major priority in this state is to recover the link and make the relationship or consistency group connected again.

No configuration activity is possible (except for deletes or stops) until the relationship becomes connected again. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on these factors:

- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (synchronized attribute transits from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised to notify you of this situation. This error log is the same error log that occurs when the same situation arises for ConsistentSynchronized.

InconsistentDisconnected

InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O.

No configuration activity, except for deletes, is permitted until the relationship becomes connected again.

When the relationship or consistency group becomes connected again, the relationship becomes InconsistentCopying automatically, unless either of the following conditions is true:

- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop command while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected

ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transits the relationship or consistency group to the IdlingDisconnected state. This state allows write I/O to be performed to the secondary VDisk and is used as part of a DR scenario.

When the relationship or consistency group becomes connected again, the relationship or consistency group becomes ConsistentSynchronized only if this action does not lead to a loss of consistency. These conditions must be true:

- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
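The reconnection rule above can be sketched as a small decision function. This is an illustrative toy model, not SVC code; the state names match the text, but the inputs are simplified:

```python
def state_after_reconnect(state_when_disconnected, writes_at_primary_while_apart):
    """Toy model of the reconnection rule for a relationship whose secondary
    side was ConsistentDisconnected: it becomes ConsistentSynchronized only
    if doing so cannot lose consistency."""
    if (state_when_disconnected == "ConsistentSynchronized"
            and not writes_at_primary_while_apart):
        return "ConsistentSynchronized"   # no loss of consistency
    return "ConsistentStopped"            # FreezeTime is retained

print(state_after_reconnect("ConsistentSynchronized", True))  # ConsistentStopped
```

For example, a relationship that was ConsistentSynchronized but received primary writes while apart must stop and resynchronize rather than silently resume.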

Empty

This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point, the state of the relationship becomes the state of the consistency group.

Background copy

Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between all of the nodes that are performing background copy for one of the eligible relationships. This allocation is made irrespective of the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy.

For intracluster relationships, each node is assigned a static quota of 25 MBps.
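The allocation scheme above can be sketched with a few lines of arithmetic. This is an illustrative model only; the quota figure and node names are made up:

```python
def background_copy_allocation(link_quota_mbps, relationships_per_node):
    """Sketch of the even division described above: the intercluster quota is
    split evenly across the nodes performing background copy, and each node
    then splits its share evenly across its copying relationships."""
    per_node = link_quota_mbps / len(relationships_per_node)
    return {node: per_node / count
            for node, count in relationships_per_node.items()}

# e.g. a 200 MBps quota, two nodes copying 2 and 4 relationships respectively
alloc = background_copy_allocation(200, {"node1": 2, "node2": 4})
# node1 relationships each get 50 MBps; node2 relationships each get 25 MBps
```

Note that the split is per node, irrespective of how many MDisks each node is responsible for, exactly as the text states.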

6.5.12 Practical use of Metro Mirror

The master VDisk is the production VDisk, and updates to this copy are mirrored in real time to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

While the Metro Mirror relationship is active, the secondary copy (VDisk) is not accessible for host application write I/O at any time. The SVC allows read-only access to the secondary VDisk when it contains a “consistent” image. This read-only access is intended only to allow boot-time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimum delay, if required.

Switching copy direction: The copy direction for a Metro Mirror relationship can be switched so the auxiliary VDisk becomes the primary, and the master VDisk becomes the secondary.

For example, many operating systems must read logical block address (LBA) zero to configure a logical unit. Although read access is allowed at the secondary in practice, the data on the secondary volumes cannot be read by a host, because most operating systems write a “dirty bit” to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the secondary and later write I/Os that are performed at the primary.

To enable access to the secondary VDisk for host operations, you must stop the Metro Mirror relationship by specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, the host must be instructed to mount the VDisk and perform related tasks before the application can be started, or it must be instructed to perform a recovery process.

This requirement to enable the secondary copy for access differentiates Metro Mirror from third-party mirroring software on the host, which aims to emulate a single, reliable disk regardless of what system is accessing it. Metro Mirror retains the property that there are two volumes in existence, but it suppresses one volume while the copy is being maintained.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks that must be performed on the host to establish operation on the secondary copy are substantial. The goal is to make this process rapid (much faster than recovering from a backup copy) but not seamless.

The failover process can be automated through failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.5.13 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions

Table 6-7 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a single VDisk.

Table 6-7   Valid VDisk combinations

                      Metro Mirror or            Metro Mirror or
                      Global Mirror primary      Global Mirror secondary
  FlashCopy source    Supported                  Supported
  FlashCopy target    Not supported              Not supported

6.5.14 Metro Mirror configuration limits

Table 6-8 lists the Metro Mirror configuration limits.

Table 6-8   Metro Mirror configuration limits

  Parameter                                                     Value
  Number of Metro Mirror consistency groups per cluster         256
  Number of Metro Mirror relationships per cluster              8,192
  Number of Metro Mirror relationships per consistency group    8,192
  Total VDisk size per I/O Group                                There is a per I/O Group limit of 1,024 TB on the quantity of primary and secondary VDisk address space that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no FlashCopy bitmap space.

6.6 Metro Mirror commands

For comprehensive details about Metro Mirror commands, refer to the IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide, SC26-7903.

The command set for Metro Mirror contains two broad groups:

- Commands to create, delete, and manipulate relationships and consistency groups
- Commands to cause state changes

Where a configuration command affects more than one cluster, Metro Mirror coordinates the configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected; they fail with no effect when the clusters are disconnected.

Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Metro Mirror when the clusters become connected again.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This design is significant for defining the context of the create relationship (mkrcrelationship) and create consistency group (mkrcconsistgrp) commands, in which case the cluster receiving the command is called the local cluster.

The exception is the command that sets clusters into a Metro Mirror partnership: the mkpartnership command must be issued to both the local and the remote cluster.

The commands here are described as an abstract command set and are implemented as either:

- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

6.6.1 Listing available SVC cluster partners

To create an SVC cluster partnership, use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate

The svcinfo lsclustercandidate command is used to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Metro Mirror relationships.

6.6.2 Creating the SVC cluster partnership

To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership

The svctask mkpartnership command is used to establish a one-way Metro Mirror partnership between the local cluster and a remote cluster.

To establish a fully functional Metro Mirror partnership, you must issue this command to both clusters. This step is a prerequisite to creating Metro Mirror relationships between VDisks on the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.

Background copy bandwidth effect on foreground I/O latency

The background copy bandwidth determines the rate at which the background copy for the SVC will be attempted. The background copy bandwidth can affect the foreground I/O latency in one of three ways:

- The following results can occur if the background copy bandwidth is set too high for the Metro Mirror intercluster link capacity:
  - The background copy I/Os can back up on the Metro Mirror intercluster link.
  - There is a delay in the synchronous secondary writes of foreground I/Os.
  - The foreground I/O latency will increase as perceived by applications.
- If the background copy bandwidth is set too high for the storage at the primary site, the background copy read I/Os overload the primary storage and delay foreground I/Os.
- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

In order to set the background copy bandwidth optimally, make sure that you consider all three resources (the primary storage, the intercluster link bandwidth, and the secondary storage). Provision the most restrictive of these three resources between the background copy bandwidth and the peak foreground I/O workload. This provisioning can be done by a calculation (as previously described) or alternatively by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then backing off to allow for peaks in workload and a safety margin.
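As a rough planning aid, the min-of-three-resources rule above can be expressed as a small calculation. All of the figures here are assumptions that you must replace with measured values for your own environment:

```python
def max_background_copy_mbps(primary_storage_mbps, link_mbps,
                             secondary_storage_mbps,
                             peak_foreground_mbps, safety_margin_mbps=0):
    """Planning sketch of the rule above: size the background copy from the
    most restrictive of the three resources (primary storage, intercluster
    link, secondary storage), leaving headroom for the peak foreground
    workload plus a safety margin."""
    bottleneck = min(primary_storage_mbps, link_mbps, secondary_storage_mbps)
    return max(0, bottleneck - peak_foreground_mbps - safety_margin_mbps)

# e.g. the link is the bottleneck at 100 MBps, peak foreground writes are
# 60 MBps, and we keep a 10 MBps margin -> allow roughly 30 MBps
print(max_background_copy_mbps(400, 100, 300, 60, 10))  # 30
```

The same sizing can also be approached experimentally, as the text notes, by raising the background copy rate until foreground latency suffers and then backing off.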

svctask chpartnership

If you need to change the bandwidth that is available for background copy in an SVC cluster partnership, you can use the svctask chpartnership command to specify the new bandwidth.

6.6.3 Creating a Metro Mirror consistency group

To create a Metro Mirror consistency group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp

The svctask mkrcconsistgrp command is used to create a new, empty Metro Mirror consistency group.

The Metro Mirror consistency group name must be unique across all of the consistency groups that are known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process.

The new consistency group does not contain any relationships and will be in the Empty state. Metro Mirror relationships can be added to the group either upon creation or afterward by using the svctask chrelationship command.

6.6.4 Creating a Metro Mirror relationship

To create a Metro Mirror relationship, use the svctask mkrcrelationship command.

svctask mkrcrelationship

The svctask mkrcrelationship command is used to create a new Metro Mirror relationship. This relationship persists until it is deleted.

The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if both VDisks are in the same cluster, they must both be in the same I/O Group. The master and auxiliary VDisk cannot be in an existing relationship and cannot be the target of a FlashCopy mapping. This command returns the new relationship (relationship_id) when successful.

When creating the Metro Mirror relationship, it can be added to an already existing consistency group, or it can be a stand-alone Metro Mirror relationship if no consistency group is specified.

To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Metro Mirror relationship, use the svcinfo lsrcrelationshipcandidate command.

svcinfo lsrcrelationshipcandidate

The svcinfo lsrcrelationshipcandidate command is used to list available VDisks that are eligible for a Metro Mirror relationship.

When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with prerequisites to create a Metro Mirror relationship. If the command is issued with no flags, all VDisks that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.
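As a rough illustration of the prerequisites described in this section, a pre-flight check might look like the following sketch. The dictionaries and their fields are hypothetical stand-ins for VDisk properties, not an SVC API:

```python
def check_mm_candidates(master, aux):
    """Illustrative pre-flight checks mirroring the mkrcrelationship
    prerequisites in the text: equal size, shared I/O Group if intracluster,
    not already in a relationship, and not a FlashCopy target."""
    errors = []
    if master["size"] != aux["size"]:
        errors.append("the auxiliary VDisk must be equal in size to the master")
    if master["cluster"] == aux["cluster"] and master["io_group"] != aux["io_group"]:
        errors.append("intracluster VDisks must be in the same I/O Group")
    for v in (master, aux):
        if v.get("in_relationship"):
            errors.append(f"{v['name']} is already in a relationship")
        if v.get("flashcopy_target"):
            errors.append(f"{v['name']} is the target of a FlashCopy mapping")
    return errors

master = {"name": "MM_Master", "size": 10, "cluster": "A", "io_group": 0}
aux = {"name": "MM_Aux", "size": 10, "cluster": "B", "io_group": 1}
print(check_mm_candidates(master, aux))  # [] -> eligible
```

On the real system, svcinfo lsrcrelationshipcandidate performs this filtering for you; the sketch only makes the rules explicit.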

6.6.5 Changing a Metro Mirror relationship

To modify the properties of a Metro Mirror relationship, use the svctask chrcrelationship command.

svctask chrcrelationship

The svctask chrcrelationship command is used to modify the following properties of a Metro Mirror relationship:

- Change the name of a Metro Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group by using the -force flag.

Adding a Metro Mirror relationship: When adding a Metro Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.

6.6.6 Changing a Metro Mirror consistency group

To change the name of a Metro Mirror consistency group, use the svctask chrcconsistgrp command.

svctask chrcconsistgrp

The svctask chrcconsistgrp command is used to change the name of a Metro Mirror consistency group.

6.6.7 Starting a Metro Mirror relationship

To start a stand-alone Metro Mirror relationship, use the svctask startrcrelationship command.

svctask startrcrelationship

The svctask startrcrelationship command is used to start the copy process of a Metro Mirror relationship.

When issuing the command, you can set the copy direction if it is undefined and, optionally, mark the secondary VDisk of the relationship as clean. The command fails if it is used to attempt to start a relationship that is part of a consistency group.

This command can only be issued to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force flag when restarting the relationship. This situation can arise if, for example, the relationship was stopped, and then, further writes were performed on the original primary of the relationship. The use of the -force flag here is a reminder that the data on the secondary will become inconsistent while resynchronization (background copying) occurs, and therefore, the data is not usable for DR purposes before the background copy has completed.

In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the -primary argument, but it must match the existing setting.
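The start rules above can be summarized in a toy model. This sketch is illustrative only and simplifies the real state machine:

```python
def start_relationship(state, synchronized, primary=None, force=False):
    """Toy model (not SVC code) of the startrcrelationship rules described
    above: an idling relationship needs -primary to assign the copy
    direction, and restarting a relationship that is no longer synchronized
    needs -force, because the secondary is inconsistent (and unusable for
    DR) while background copying catches up."""
    if state == "Idling" and primary is None:
        raise ValueError("Idling: specify -primary to assign the copy direction")
    if not synchronized and not force:
        raise ValueError("-force required: secondary is inconsistent during resync")
    return "ConsistentSynchronized" if synchronized else "InconsistentCopying"

print(start_relationship("ConsistentStopped", synchronized=False, force=True))
```

A relationship that is still synchronized restarts cleanly; one that received primary writes after a stop must be forced, as the text explains.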

6.6.8 Stopping a Metro Mirror relationship

To stop a stand-alone Metro Mirror relationship, use the svctask stoprcrelationship command.

svctask stoprcrelationship

The svctask stoprcrelationship command is used to stop the copy process for a relationship. It can also be used to enable write access to a consistent secondary VDisk by specifying the -access flag.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary.

If the relationship is in an Inconsistent state, any copy operation stops and does not resume until you issue a svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a consistency freeze.

When a relationship is in a Consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the stoprcrelationship command to enable write access to the secondary VDisk.

6.6.9 Starting a Metro Mirror consistency group

To start a Metro Mirror consistency group, use the svctask startrcconsistgrp command.

The svctask startrcconsistgrp command is used to start a Metro Mirror consistency group. This command can only be issued to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

6.6.10 Stopping a Metro Mirror consistency group

To stop a Metro Mirror consistency group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp

The svctask stoprcconsistgrp command is used to stop the copy process for a Metro Mirror consistency group. It can also be used to enable write access to the secondary VDisks in the group if the group is in a Consistent state.

If the consistency group is in an Inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks belonging to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a consistency freeze.

When a consistency group is in a Consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), the -access argument can be used with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.

6.6.11 Deleting a Metro Mirror relationship

To delete a Metro Mirror relationship, use the svctask rmrcrelationship command.

svctask rmrcrelationship

The svctask rmrcrelationship command is used to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, then the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters.

If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This situation is the one case in which Metro Mirror does not inhibit access to inconsistent data.

6.6.12 Deleting a Metro Mirror consistency group

To delete a Metro Mirror consistency group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp

The svctask rmrcconsistgrp command is used to delete the specified Metro Mirror consistency group. You can issue this command for any existing consistency group.

If the consistency group is disconnected at the time that the command is issued, the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters.

If the consistency group is not empty, the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.

6.6.13 Reversing a Metro Mirror relationship

To reverse a Metro Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship

The svctask switchrcrelationship command is used to reverse the roles of the primary and secondary VDisks when a stand-alone relationship is in a Consistent state. When issuing the command, the desired primary is specified.

6.6.14 Reversing a Metro Mirror consistency group

To reverse a Metro Mirror consistency group, use the svctask switchrcconsistgrp command.

svctask switchrcconsistgrp

The svctask switchrcconsistgrp command is used to reverse the roles of the primary and secondary VDisks when a consistency group is in a Consistent state. This change is applied to all of the relationships in the consistency group, and when issuing the command, the desired primary is specified.

6.6.15 Background copy

Metro Mirror paces the rate at which background copy is performed by the appropriate relationships. Background copy takes place on relationships that are in the InconsistentCopying state with a status of Online.

The quota of background copy (configured on the intercluster link) is divided evenly between the nodes that are performing background copy for one of the eligible relationships. This allocation is made without regard for the number of disks for which the node is responsible. Each node in turn divides its allocation evenly between the multiple relationships performing a background copy.

For intracluster relationships, each node is assigned a static quota of 25 MBps.

6.7 Global Mirror overview

In the following topics, we describe the Global Mirror copy service, which is an asynchronous remote copy service. It provides and maintains a consistent mirrored copy of a source VDisk to a target VDisk. Data is written from the source VDisk to the target VDisk asynchronously. This method was previously known as Asynchronous Peer-to-Peer Remote Copy.

Global Mirror works by defining a Global Mirror relationship between two VDisks of equal size and maintains the data consistency in an asynchronous manner. Therefore, when a host writes to a source VDisk, the data is copied from the source VDisk cache to the target VDisk cache. At the initiation of that data copy, the confirmation of I/O completion is transmitted back to the host.

SVC provides both intracluster and intercluster Global Mirror.

6.7.1 Intracluster Global Mirror

Although Global Mirror is available for intracluster use, it has no functional value for production use. Intracluster Metro Mirror provides the same capability with less overhead. However, leaving this functionality in place simplifies testing and allows for client experimentation and testing (for example, to validate server failover on a single test cluster).

6.7.2 Intercluster Global Mirror

Intercluster Global Mirror operations require a pair of SVC clusters that are commonly separated by a number of moderately high bandwidth links. The two SVC clusters must be defined in an SVC cluster partnership to establish a fully functional Global Mirror relationship.

Minimum firmware requirement: The minimum firmware requirement for Global Mirror functionality is V4.1.1. Any cluster or partner cluster that is not running this minimum level will not have Global Mirror functionality available. Even if you have a Global Mirror relationship running on a down-level partner cluster and you only want to use intracluster Global Mirror, the functionality will not be available to you.

Limit: When a local and a remote fabric are connected together for Global Mirror purposes, the ISL hop count between a local node and a remote node must not exceed seven hops.

6.8 Remote copy techniques

Global Mirror is an asynchronous remote copy, which we explain next. To illustrate the differences between synchronous and asynchronous remote copy, we also explain synchronous remote copy.

6.8.1 Asynchronous remote copy

Global Mirror is an asynchronous remote copy technique. In asynchronous remote copy, write operations are completed on the primary site and the write acknowledgement is sent to the host before it is received at the secondary site. An update of this write operation is sent to the secondary site at a later stage, which provides the capability to perform remote copy over distances exceeding the limitations of synchronous remote copy.

The Global Mirror function provides the same function as Metro Mirror Remote Copy, but over long distance links with higher latency, without requiring the hosts to wait for the full round-trip delay of the long distance link.
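A back-of-the-envelope model illustrates the difference: a synchronous (Metro Mirror style) write waits for the full round trip to the remote cluster, while an asynchronous (Global Mirror style) write is acknowledged after only the local work. The figures in the example are illustrative:

```python
def write_latency_ms(local_ms, round_trip_ms, mode):
    """Simplified latency model (illustrative only): synchronous replication
    adds the intercluster round trip to every host write; asynchronous
    replication acknowledges the write after the local service time."""
    return local_ms + round_trip_ms if mode == "sync" else local_ms

# e.g. 1 ms local service time over a link with a 20 ms round trip
print(write_latency_ms(1, 20, "sync"))   # 21
print(write_latency_ms(1, 20, "async"))  # 1
```

This is why Global Mirror suits long distance links with higher latency: the round trip drops out of the host's write path.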

Figure 6-23 shows that a write operation to the master VDisk is acknowledged back to the host issuing the write before the write operation is mirrored to the cache for the auxiliary VDisk.

Figure 6-23 Global Mirror write sequence

The Global Mirror algorithms maintain a consistent image at the secondary at all times. They achieve this consistent image by identifying sets of I/Os that are active concurrently at the primary, assigning an order to those sets, and applying those sets of I/Os in the assigned order at the secondary. As a result, Global Mirror maintains the features of Write Ordering and Read Stability that are described in this chapter.

The multiple I/Os within a single set are applied concurrently. The process that marshals the sequential sets of I/Os operates at the secondary cluster and, so, is not subject to the latency of the long distance link. These two elements of the protocol ensure that the throughput of the total cluster can be grown by increasing cluster size, while maintaining consistency across a growing data set.
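The ordering scheme can be sketched as follows: writes that were active concurrently at the primary form a numbered set, sets are applied at the secondary strictly in sequence order, and writes within one set may be applied in any order. This is an illustrative model, not the SVC implementation:

```python
def apply_in_order(batches, secondary):
    """Toy model of the consistency scheme above: `batches` maps a sequence
    number to the writes (lba, data) that were concurrent at the primary.
    Batches are applied strictly in sequence order; writes inside a batch
    may be applied in any order."""
    for seq in sorted(batches):
        for lba, data in batches[seq]:   # intra-batch order is free
            secondary[lba] = data
    return secondary

sec = apply_in_order({2: [(0, "B")], 1: [(0, "A"), (5, "X")]}, {})
# batch 1 applies before batch 2, so LBA 0 ends holding "B"
```

Because the secondary never applies a later set before an earlier one, its image always corresponds to some point in the primary's history, which is exactly the Write Ordering property the text describes.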

In a failover scenario, where the secondary site needs to become the primary source of data, certain updates might be missing at the secondary site. Therefore, any applications that will use this data must have an external mechanism for recovering the missing updates and reapplying them, such as a transaction log replay.

6.8.2 SVC Global Mirror features

SVC Global Mirror supports the following features:

- Asynchronous remote copy of VDisks dispersed over metropolitan scale distances is supported.
- SVC implements the Global Mirror relationship between a VDisk pair, with each VDisk in the pair being managed by an SVC cluster.
- SVC supports intracluster Global Mirror, where both VDisks belong to the same cluster (and I/O Group), although, as stated earlier, this capability is better suited to Metro Mirror.
- SVC supports intercluster Global Mirror, where each VDisk belongs to its own SVC cluster. A given SVC cluster can be configured for partnership with between one and three other clusters.
- Intercluster and intracluster Global Mirror can be used concurrently within a cluster for separate relationships.
- SVC does not require a control network or fabric to be installed to manage Global Mirror. For intercluster Global Mirror, the SVC maintains a control link between the two clusters. This control link is used to control the state and to coordinate the updates at either end. The control link is implemented on top of the same FC fabric connection that the SVC uses for Global Mirror I/O.
- SVC implements a configuration model that maintains the Global Mirror configuration and state through major events, such as failover, recovery, and resynchronization, to minimize user configuration action through these events.
- SVC maintains and polices a strong concept of consistency and makes this concept available to guide configuration activity.
- SVC implements flexible resynchronization support, enabling it to resynchronize VDisk pairs that have experienced write I/Os to both disks and to resynchronize only those regions that are known to have changed.
- Colliding writes are supported.
- An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary VDisks.
- SVC 5.1 introduces Multiple Cluster Mirroring.

Colliding writes

Prior to V4.3.1, the Global Mirror algorithm required that only a single write is active on any given 512 byte LBA of a VDisk. If a further write is received from a host while the secondary write is still active, even though the primary write might have completed, the new host write will be delayed until the secondary write is complete. This restriction is needed in case a series of writes to the secondary have to be retried (called “reconstruction”). Conceptually, the data for reconstruction comes from the primary VDisk.

If multiple writes are allowed to be applied to the primary for a given sector, only the most recent write will get the correct data during reconstruction, and if reconstruction is interrupted for any reason, the intermediate state of the secondary is Inconsistent.

Applications that deliver such write activity will not achieve the performance that Global Mirror is intended to support. A VDisk statistic is maintained about the frequency of these collisions. From V4.3.1 onward, an attempt is made to allow multiple writes to a single location to be outstanding in the Global Mirror algorithm. There is still a need for primary writes to be serialized, and the intermediate states of the primary data must be kept in a non-volatile journal while the writes are outstanding to maintain the correct write ordering during reconstruction. Reconstruction must never overwrite data on the secondary with an earlier version. The VDisk statistic monitoring colliding writes is now limited to those writes that are not affected by this change.
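The journalling idea can be illustrated with a toy model: each write to a busy sector is recorded with a version number in a journal, so reconstruction can replay the versions in order and never leave the secondary holding an older version on top of a newer one. This sketch is illustrative only, not SVC code:

```python
def reconstruct_from_journal(journal):
    """Toy model of the journalling idea above: `journal` holds
    (version, data) entries for one sector, logged as the writes arrived.
    Reconstruction replays them strictly in version order, so the sector
    always ends with the newest data regardless of arrival order."""
    sector = None
    for version, data in sorted(journal):   # replay in write order
        sector = data
    return sector

journal = [(2, "v2"), (1, "v1"), (3, "v3")]  # entries logged out of order
print(reconstruct_from_journal(journal))     # the newest version wins: v3
```

Ordered replay is what guarantees that an interrupted and retried reconstruction can never overwrite secondary data with an earlier version.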

Figure 6-24 shows a colliding write sequence example.

Figure 6-24 Colliding writes example

These numbers correspond to the numbers in Figure 6-24:

- (1) Original Global Mirror write in progress
- (2) Second write to the same sector; the in-flight write is logged to the journal file
- (3 and 4) Second write sent to the secondary cluster
- (5) Initial write completes

Delay simulation

An optional feature for Global Mirror permits a delay simulation to be applied on writes that are sent to secondary VDisks. This feature allows testing to be performed that detects colliding writes, and therefore, it can be used to test an application before the full deployment of the feature. The feature can be enabled separately for intracluster or intercluster Global Mirror. You specify the delay setting by using the chcluster command and view it by using the lscluster command. The gm_intra_delay_simulation field expresses the amount of time that intracluster secondary I/Os are delayed. The gm_inter_delay_simulation field expresses the amount of time that intercluster secondary I/Os are delayed. A value of zero disables the feature.

312 Implementing the IBM System Storage SAN Volume Controller V5.1

Multiple Cluster Mirroring
SVC 5.1 introduces Multiple Cluster Mirroring. The rules for a Global Mirror Multiple Cluster Mirroring environment are the same as the rules in a Metro Mirror environment. For more detailed information, see 6.5.4, “Multiple Cluster Mirroring” on page 284.

6.9 Global Mirror relationships
Global Mirror relationships are similar to FlashCopy mappings. They can be stand-alone or combined in consistency groups. You can issue the start and stop commands either against the stand-alone relationship or the consistency group.

Figure 6-25 illustrates the Global Mirror relationship.

Figure 6-25 Global Mirror relationship

A Global Mirror relationship is composed of two VDisks that are equal in size. The master VDisk and the auxiliary VDisk can be in the same I/O Group, within the same SVC cluster (intracluster Global Mirror), or can be on separate SVC clusters that are defined as SVC partners (intercluster Global Mirror).

6.9.1 Global Mirror relationship between primary and secondary VDisks
When creating a Global Mirror relationship, the master VDisk is initially assigned as the primary, and the auxiliary VDisk is initially assigned as the secondary. This design implies that the initial copy direction is mirroring the master VDisk to the auxiliary VDisk. After the initial synchronization is complete, the copy direction can be changed, if appropriate.

In the most common applications of Global Mirror, the master VDisk contains the production copy of the data and is used by the host application, while the auxiliary VDisk contains the mirrored copy of the data and is used for failover in DR scenarios. The terms master and auxiliary help explain this use. If Global Mirror is applied differently, the terms master and auxiliary VDisks need to be interpreted appropriately.

6.9.2 Importance of write ordering
Many applications that use block storage have a requirement to survive failures, such as a loss of power or a software crash, and to not lose data that existed prior to the failure. Because many applications must perform large numbers of update operations in parallel to that block storage, maintaining write ordering is key to ensuring the correct operation of applications following a disruption.

Rules:

• A VDisk can only be part of one Global Mirror relationship at a time.
• A VDisk that is a FlashCopy target cannot be part of a Global Mirror relationship.

An application that performs a high volume of database updates is usually designed with the concept of dependent writes. With dependent writes, it is important to ensure that an earlier write has completed before a later write is started. Reversing the order of dependent writes can undermine the application’s algorithms and can lead to problems, such as detected or undetected data corruption.

6.9.3 Dependent writes that span multiple VDisks
The following scenario illustrates a simple example of a sequence of dependent writes and, in particular, what can happen if they span multiple VDisks. Consider the following typical sequence of writes for a database update transaction:

1. A write is executed to update the database log, indicating that a database update is to be performed.

2. A second write is executed to update the database.

3. A third write is executed to update the database log, indicating that the database update has completed successfully.

Figure 6-26 illustrates the write sequence.

Figure 6-26 Dependent writes for a database

The database ensures the correct ordering of these writes by waiting for each step to complete before starting the next step.

Database logs: All databases have logs associated with them. These logs keep records of database changes. If a database needs to be restored to a point beyond the last full, offline backup, logs are required to roll the data forward to the point of failure.

But imagine if the database log and the database are on separate VDisks and a Global Mirror relationship is stopped during this update. In this case, you must consider the possibility that the Global Mirror relationship for the VDisk with the database file is stopped slightly before the VDisk containing the database log.

If this happens, it is possible that the secondary VDisks see writes (1) and (3) but not write (2). Then, if the database was restarted using the data available from the secondary disks, the database log indicates that the transaction had completed successfully, when it did not. In this scenario, the integrity of the database is in question.
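The scenario above can be sketched as a small simulation. This is illustrative only; the VDisk names and the helper function are invented for the example:

```python
# Sketch (illustrative): why dependent writes that span VDisks need a
# consistency group. The database log and the database live on separate
# mirrored VDisks; the relationship for the database VDisk stops slightly
# before the one for the log VDisk.

secondary = {"log": [], "db": []}

def replicate(vdisk, data, stopped):
    # A stopped relationship no longer carries writes to the secondary.
    if not stopped:
        secondary[vdisk].append(data)

# Writes 1-3 of the transaction; the db relationship stops before write (2)
# reaches the secondary, while the log relationship is still running.
replicate("log", "update pending", stopped=False)    # write (1)
replicate("db",  "new row",        stopped=True)     # write (2) is lost
replicate("log", "update complete", stopped=False)   # write (3)

# The secondary log claims success, but the data never arrived.
assert "update complete" in secondary["log"]
assert "new row" not in secondary["db"]   # database integrity is in question
```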

6.9.4 Global Mirror consistency groups
Global Mirror consistency groups address the issue of dependent writes across VDisks, where the objective is to preserve data consistency across multiple Global Mirrored VDisks. Consistency groups ensure a consistent data set for applications that have related data spanning multiple VDisks.

A Global Mirror consistency group can contain an arbitrary number of relationships, up to the maximum number of Global Mirror relationships that is supported by the SVC cluster. Global Mirror commands can be issued to a Global Mirror consistency group, and thereby simultaneously to all Global Mirror relationships that are defined within that consistency group, or to a single Global Mirror relationship that is not part of a consistency group.

For example, when issuing a Global Mirror start command to the consistency group, all of the Global Mirror relationships in the consistency group are started at the same time.
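The "one command acts on all members" behavior can be sketched as follows. This is an illustrative model; the class names are invented and do not reflect the SVC implementation:

```python
# Sketch (illustrative): a consistency group lets one command act on all of
# its relationships at once, so their secondaries start and stop together.

class Relationship:
    def __init__(self, name):
        self.name = name
        self.state = "ConsistentStopped"

    def start(self):
        self.state = "ConsistentSynchronized"

class ConsistencyGroup:
    def __init__(self, relationships):
        self.relationships = relationships

    def start(self):
        # One start command starts every member relationship together.
        for rel in self.relationships:
            rel.start()

group = ConsistencyGroup([Relationship("GM_Relationship1"),
                          Relationship("GM_Relationship2")])
group.start()
assert all(r.state == "ConsistentSynchronized" for r in group.relationships)
```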

Figure 6-27 on page 316 illustrates the concept of Global Mirror consistency groups. Because GM_Relationship 1 and GM_Relationship 2 are part of the consistency group, they can be handled as one entity, while the stand-alone GM_Relationship 3 is handled separately.

Figure 6-27 Global Mirror consistency group

Certain uses of Global Mirror require the manipulation of more than one relationship. Global Mirror consistency groups can provide the ability to group relationships so that they are manipulated in unison. Global Mirror relationships within a consistency group can be in any form:

• Global Mirror relationships can be part of a consistency group, or be stand-alone and therefore handled as single instances.

• A consistency group can contain zero or more relationships. An empty consistency group, with zero relationships in it, has little purpose until it is assigned its first relationship, except that it has a name.

• All of the relationships in a consistency group must have matching master and auxiliary SVC clusters.

Although it is possible to use consistency groups to manipulate sets of relationships that do not need to satisfy these strict rules, such manipulation can lead to undesired side effects. The rules behind a consistency group mean that certain configuration commands are prohibited. These specific configuration commands are not prohibited if the relationship is not part of a consistency group.

For example, consider the case of two applications that are completely independent, yet they are placed into a single consistency group. In the event of an error, there is a loss of synchronization, and a background copy process is required to recover synchronization. While this process is in progress, Global Mirror rejects attempts to enable access to the secondary VDisks of either application.

If one application finishes its background copy much more quickly than the other application, Global Mirror still refuses to grant access to its secondary VDisk. Even though it is safe in this case, Global Mirror policy refuses access to the entire consistency group if any part of it is inconsistent.

Stand-alone relationships and consistency groups share a common configuration and state model. All of the relationships in a consistency group that is not empty have the same state as the consistency group.

6.10 Global Mirror
This section discusses how Global Mirror works.

6.10.1 Intercluster communication and zoning
All intercluster communication is performed through the SAN. Prior to creating intercluster Global Mirror relationships, you must create a partnership between the two clusters.

SVC node ports on each SVC cluster must be able to access each other to facilitate the partnership creation. Therefore, you must define a zone in each fabric for intercluster communication; see Chapter 3, “Planning and configuration” on page 65 for more information.

6.10.2 SVC cluster partnership
When the SVC cluster partnership has been defined on both clusters, further communication facilities between the nodes in each of the clusters are established. The communication facilities consist of these components:

• A single control channel, which is used to exchange and coordinate configuration information

• I/O channels between each of the nodes in the clusters

These channels are maintained and updated as nodes appear and disappear and as links fail, and are repaired to maintain operation where possible. If communication between the SVC clusters is interrupted or lost, an error is logged (and, consequently, Global Mirror relationships will stop).

To handle error conditions, you can configure the SVC to raise SNMP traps or send e-mail. Or, if Tivoli Storage Productivity Center for Replication is in place, it can monitor the link’s status and issue an alert by using SNMP traps or e-mail, too.

6.10.3 Maintenance of the intercluster link
All SVC nodes maintain a database of the other devices that are visible on the fabric. This database is updated as devices appear and disappear.

Devices that advertise themselves as SVC nodes are categorized according to the SVC cluster to which they belong. SVC nodes that belong to the same cluster establish communication channels between themselves and begin to exchange messages to implement the clustering and functional protocols of SVC.

Nodes that are in separate clusters do not exchange messages after the initial discovery is complete unless they have been configured together to perform Global Mirror.

The intercluster link carries control traffic to coordinate activity between two clusters. It is formed between one node in each cluster. The traffic between the designated nodes is distributed among logins that exist between those nodes.

If the designated node fails (or if all of its logins to the remote cluster fail), a new node is chosen to carry control traffic. This event causes I/O to pause, but it does not cause relationships to become Consistent Stopped.

6.10.4 Distribution of work among nodes
Global Mirror VDisks must have their preferred nodes evenly distributed among the nodes of the clusters. Each VDisk within an I/O Group has a preferred node property that can be used to balance the I/O load between nodes in that group. Global Mirror also uses this property to route I/O between clusters.

Figure 6-28 shows the best relationship between VDisks and their preferred nodes in order to get the best performance.

Figure 6-28 Preferred VDisk Global Mirror relationship

6.10.5 Background copy performance
Background copy resources for intercluster remote copy are available within two nodes of an I/O Group to perform background copy at a maximum of 200 MBps (data read plus data written) in total. The background copy performance is subject to sufficient RAID controller bandwidth. Performance is also subject to other potential bottlenecks (such as the intercluster fabric) and possible contention from host I/O for the SVC bandwidth resources.

Background copy I/O will be scheduled to avoid bursts of activity that might have an adverse effect on system behavior. An entire grain of tracks on one VDisk will be processed at around the same time but not as a single I/O. Double buffering is used to try to take advantage of sequential performance within a grain. However, the next grain within the VDisk might not be scheduled for a while. Multiple grains might be copied simultaneously and might be enough to satisfy the requested rate, unless the available resources cannot sustain the requested rate.

Background copy proceeds from the low LBA to the high LBA in sequence to avoid convoy conflicts with FlashCopy, which operates in the opposite direction. Background copy is not expected to convoy conflict with sequential applications, because it tends to vary disks more often.
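The scheduling just described can be sketched as follows. This is illustrative only; the grain count, the in-flight window, and the function name are invented for the example:

```python
# Sketch (illustrative): background copy walks grains from low LBA to high
# LBA, keeping a few grains in flight at once (double buffering) but never
# regressing, so it cannot convoy with FlashCopy, which runs the other way.

def background_copy(num_grains, in_flight=2):
    copied = []
    window = []
    for grain in range(num_grains):        # low LBA -> high LBA, in sequence
        window.append(grain)
        if len(window) == in_flight:       # drain the double buffer
            copied.extend(window)
            window.clear()
    copied.extend(window)                  # flush any partial final window
    return copied

order = background_copy(8)
assert order == sorted(order)              # strictly ascending LBA order
```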

6.10.6 Space-efficient background copy
Prior to SVC 4.3.1, if a primary VDisk was space-efficient, the background copy process caused the secondary to become fully allocated. When both primary and secondary clusters are running SVC 4.3.1 or higher, Metro Mirror and Global Mirror relationships can preserve the space-efficiency of the primary.

Conceptually, the background copy process detects an unallocated region of the primary and sends a special “zero buffer” to the secondary. If the secondary VDisk is space-efficient, and the region is unallocated, the special buffer prevents a write (and, therefore, an allocation). If the secondary VDisk is not space-efficient, or the region in question is an allocated region of a Space-Efficient VDisk, a buffer of “real” zeros is synthesized on the secondary and written as normal.

If the secondary cluster is running code prior to SVC 4.3.1, this version of the code is detected by the primary cluster and a buffer of “real” zeros is transmitted and written on the secondary. The background copy rate controls the rate at which the virtual capacity is being copied.
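Conceptually, the zero-buffer optimization can be sketched like this. It is an illustrative model; the marker object and function names are invented and do not reflect the SVC wire protocol:

```python
# Sketch (illustrative): how the background copy can preserve the
# space-efficiency of the secondary. An unallocated primary region is sent
# as a special zero marker; an unallocated space-efficient secondary region
# then skips the write (and therefore the allocation), otherwise a buffer
# of real zeros is synthesized and written as normal.

ZERO_MARKER = object()   # stands in for the special "zero buffer"

def send_region(primary_allocated, data):
    return data if primary_allocated else ZERO_MARKER

def receive_region(secondary, region, payload, space_efficient=True):
    if payload is ZERO_MARKER:
        if space_efficient and region not in secondary:
            return secondary              # no write, no allocation
        payload = b"\x00" * 4             # synthesize real zeros
    secondary[region] = payload           # normal write (allocates)
    return secondary

secondary = {}
receive_region(secondary, 0, send_region(False, None))    # region stays empty
receive_region(secondary, 1, send_region(True, b"data"))  # region is allocated
assert 0 not in secondary and secondary[1] == b"data"
```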

6.11 Global Mirror process
There are several steps in the Global Mirror process:

1. An SVC cluster partnership is created between two SVC clusters (for intercluster Global Mirror).

2. A Global Mirror relationship is created between two VDisks of the same size.

3. To manage multiple Global Mirror relationships as one entity, the relationships can be made part of a Global Mirror consistency group to ensure data consistency across multiple Global Mirror relationships, or simply for ease of management.

4. The Global Mirror relationship is started, and when the background copy has completed, the relationship is consistent and synchronized.

5. When synchronized, the secondary VDisk holds a copy of the production data at the primary that can be used for DR.

6. To access the auxiliary VDisk, the Global Mirror relationship must be stopped with the access option enabled, before write I/O is submitted to the secondary.

7. The remote host server is mapped to the auxiliary VDisk, and the disk is available for I/O.

6.11.1 Methods of synchronization
This section describes three methods that can be used to establish a relationship.

Full synchronization after creation
Full synchronization after creation is the default method. It is the simplest method, and it requires no administrative activity apart from issuing the necessary commands. However, in certain environments, the bandwidth that is available makes this method unsuitable.

Use this sequence for a single relationship:

• A new relationship is created (mkrcrelationship is issued) without specifying the -sync flag.

• A new relationship is started (startrcrelationship is issued) without the -clean flag.

Synchronized before creation
In this method, the administrator must ensure that the master and auxiliary VDisks contain identical data before creating the relationship. There are two ways to ensure that the master and auxiliary VDisks contain identical data:

• Both disks are created with the security delete (-fmtdisk) feature to make all data zero.

• A complete tape image (or other method of moving data) is copied from one disk to the other disk.

In either technique, no write I/O must take place on either the master or the auxiliary before the relationship is established.

Then, the administrator must ensure that these commands are issued:

• A new relationship is created (mkrcrelationship is issued) with the -sync flag.
• A new relationship is started (startrcrelationship is issued) without the -clean flag.

If these steps are not performed correctly, the relationship is reported as being consistent, when it is not. This situation most likely makes any secondary disk useless. This method has an advantage over full synchronization: It does not require all of the data to be copied over a constrained link. However, if the data must be copied, the master and auxiliary disks cannot be used until the copy is complete, which might be unacceptable.

Quick synchronization after creation
In this method, the administrator must still copy data from the master to the auxiliary, but the data can be used without stopping the application at the master. The administrator must ensure that these commands are issued:

• A new relationship is created (mkrcrelationship is issued) with the -sync flag.

• The new relationship is stopped (stoprcrelationship is issued) with the -access flag.

• A tape image (or other method of transferring data) is used to copy the entire master disk to the auxiliary disk.

After the copy is complete, the administrator must ensure that a new relationship is started (startrcrelationship is issued) with the -clean flag.

With this technique, only the data that has changed since the relationship was created, including all regions that were incorrect in the tape image, is copied from the master to the auxiliary. As with “Synchronized before creation” on page 320, the copy step must be performed correctly, or else the auxiliary is useless, although the copy reports it as being synchronized.

Global Mirror states and events
In this section, we explain the states of a Global Mirror relationship and the series of events that modify these states.

Figure 6-29 on page 321 shows an overview of the states that apply to a Global Mirror relationship in the connected state.

Figure 6-29 Global Mirror state diagram

When creating the Global Mirror relationship, you can specify whether the auxiliary VDisk is already in sync with the master VDisk, and the background copy process is then skipped. This capability is especially useful when creating Global Mirror relationships for VDisks that have been created with the format option. The following steps explain the Global Mirror state diagram (these numbers correspond to the numbers in Figure 6-29):

• Step 1:

a. The Global Mirror relationship is created with the -sync option, and the Global Mirror relationship enters the Consistent stopped state.

b. The Global Mirror relationship is created without specifying that the master and auxiliary VDisks are in sync, and the Global Mirror relationship enters the Inconsistent stopped state.

• Step 2:

a. When starting a Global Mirror relationship in the Consistent stopped state, it enters the Consistent synchronized state. This state implies that no updates (write I/O) have been performed on the primary VDisk while in the Consistent stopped state. Otherwise, you must specify the -force option, and the Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started.

b. When starting a Global Mirror relationship in the Inconsistent stopped state, it enters the Inconsistent copying state, while the background copy is started.

• Step 3:

a. When the background copy completes, the Global Mirror relationship transits from the Inconsistent copying state to the Consistent synchronized state.

• Step 4:

a. When stopping a Global Mirror relationship in the Consistent synchronized state, where specifying the -access option enables write I/O on the secondary VDisk, the Global Mirror relationship enters the Idling state.

b. To enable write I/O on the secondary VDisk, when the Global Mirror relationship is in the Consistent stopped state, issue the command svctask stoprcrelationship, specifying the -access option, and the Global Mirror relationship enters the Idling state.

• Step 5:

a. When starting a Global Mirror relationship that is in the Idling state, you must specify the -primary argument to set the copy direction. Because no write I/O has been performed (to either the master or auxiliary VDisk) while in the Idling state, the Global Mirror relationship enters the Consistent synchronized state.

b. In case write I/O has been performed to either the master or the auxiliary VDisk, then you must specify the -force option. The Global Mirror relationship then enters the Inconsistent copying state, while the background copy is started.

If the Global Mirror relationship is intentionally stopped or experiences an error, a state transition is applied. For example, Global Mirror relationships in the Consistent synchronized state enter the Consistent stopped state, and Global Mirror relationships in the Inconsistent copying state enter the Inconsistent stopped state.

In a case where the connection is broken between the SVC clusters in a partnership, all of the (intercluster) Global Mirror relationships enter a Disconnected state. For further information, refer to “Connected versus disconnected” on page 322.
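The connected-state transitions walked through in Steps 1 through 5 can be collected into a small transition table. This is an illustrative sketch; event names such as start_force and stop_access are shorthand for the CLI commands and their options, not real SVC syntax:

```python
# Sketch (illustrative) of the connected-state transitions described above.
# The states follow the text; the event names are invented shorthand.

TRANSITIONS = {
    ("InconsistentStopped",    "start"):         "InconsistentCopying",
    ("InconsistentCopying",    "copy_done"):     "ConsistentSynchronized",
    ("InconsistentCopying",    "stop"):          "InconsistentStopped",
    ("ConsistentStopped",      "start"):         "ConsistentSynchronized",
    ("ConsistentStopped",      "start_force"):   "InconsistentCopying",
    ("ConsistentStopped",      "stop_access"):   "Idling",
    ("ConsistentSynchronized", "stop"):          "ConsistentStopped",
    ("ConsistentSynchronized", "stop_access"):   "Idling",
    ("ConsistentSynchronized", "error"):         "ConsistentStopped",
    ("Idling",                 "start_primary"): "ConsistentSynchronized",
    ("Idling",                 "start_force"):   "InconsistentCopying",
}

def step(state, event):
    return TRANSITIONS[(state, event)]

# Create without -sync, start, and let the background copy finish (Steps 1-3):
s = "InconsistentStopped"
s = step(s, "start")
s = step(s, "copy_done")
assert s == "ConsistentSynchronized"

# Stop with -access, then restart with -primary and no intervening writes
# (Steps 4-5):
s = step(s, "stop_access")
assert s == "Idling"
assert step(s, "start_primary") == "ConsistentSynchronized"
```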

6.11.2 State overview
The SVC-defined concepts of state are key to understanding the configuration concepts. We explain them in more detail next.

Connected versus disconnected
This distinction can arise when a Global Mirror relationship is created with the two VDisks in separate clusters.

Under certain error scenarios, communications between the two clusters might be lost. For example, power might fail, causing one complete cluster to disappear. Alternatively, the fabric connection between the two clusters might fail, leaving the two clusters running but unable to communicate with each other.

When the two clusters can communicate, the clusters and the relationships spanning them are described as connected. When they cannot communicate, the clusters and the relationships spanning them are described as disconnected.

In this scenario, each cluster is left with half of the relationship, and each cluster has only a portion of the information that was available to it before. Only a subset of the normal configuration activity is available.

Common configuration and state model: Stand-alone relationships and consistency groups share a common configuration and state model. All of the Global Mirror relationships in a consistency group that is not empty have the same state as the consistency group.

The disconnected relationships are portrayed as having a changed state. The new states describe what is known about the relationship and which configuration commands are permitted.

When the clusters can communicate again, the relationships become connected again. Global Mirror automatically reconciles the two state fragments, taking into account any configuration activity or other event that took place while the relationship was disconnected. As a result, the relationship can either return to the state that it was in when it became disconnected or it can enter another connected state.

Relationships that are configured between VDisks in the same SVC cluster (intracluster) will never be described as being in a disconnected state.

Consistent versus inconsistent
Relationships, or consistency groups that contain relationships, can be described as being consistent or inconsistent. The consistent or inconsistent property describes the state of the data on the secondary VDisk in relation to the data on the primary VDisk. Consider the consistent or inconsistent property to be a property of the secondary VDisk.

A secondary is described as consistent if it contains data that might have been read by a host system from the primary if power had failed at an imaginary point in time while I/O was in progress, and power was later restored. This imaginary point in time is defined as the recovery point. The requirements for consistency are expressed with respect to activity at the primary up to the recovery point:

• The secondary VDisk contains the data from all writes to the primary for which the host received successful completion and that data has not been overwritten by a subsequent write (before the recovery point).

• For writes for which the host did not receive successful completion (that is, the host received bad completion or no completion at all), if the host subsequently performed a read from the primary of that data, and that read returned successful completion, and no later write was sent (before the recovery point), the secondary contains the same data as the data that was returned by the read from the primary.

From the point of view of an application, consistency means that a secondary VDisk contains the same data as the primary VDisk at the recovery point (the time at which the imaginary power failure occurred).

If an application is designed to cope with an unexpected power failure, this guarantee of consistency means that the application will be able to use the secondary and begin operation just as though it had been restarted after the hypothetical power failure.

Again, the application is dependent on the key properties of consistency:

• Write ordering
• Read stability for correct operation at the secondary

If a relationship, or a set of relationships, is inconsistent and if an attempt is made to start an application using the data in the secondaries, a number of outcomes are possible:

• The application might decide that the data is corrupt and crash or exit with an error code.
• The application might fail to detect that the data is corrupt and return erroneous data.
• The application might work without a problem.

Because of the risk of data corruption, and, in particular, undetected data corruption, Global Mirror strongly enforces the concept of consistency and prohibits access to inconsistent data.

You can apply consistency as a concept to a single relationship or to a set of relationships in a consistency group. Write ordering is a concept that an application can maintain across a number of disks that are accessed through multiple systems, and therefore, consistency must operate across all of those disks.

When deciding how to use consistency groups, the administrator must consider the scope of an application’s data, taking into account all of the interdependent systems that communicate and exchange information.

If two programs or systems communicate and store details as a result of the information exchanged, either of the following actions might occur:

• All of the data that is accessed by the group of systems must be placed into a single consistency group.

• The systems must be recovered independently (each within its own consistency group). Then, each system must perform recovery with the other applications to become consistent with them.

Consistent versus synchronized
A copy that is consistent and up-to-date is described as synchronized. In a synchronized relationship, the primary and secondary VDisks differ only in the regions where writes are outstanding from the host.

Consistency does not mean that the data is up-to-date. A copy can be consistent and yet contain data that was frozen at an earlier point in time. Write I/O might have continued to a primary and not have been copied to the secondary. This state arises when it becomes impossible to keep up-to-date and maintain consistency. An example is a loss of communication between clusters when writing to the secondary.

When communication is lost for an extended period of time, Global Mirror tracks the changes that happen at the primary, but not the order of these changes, or the details of these changes (write data). When communication is restored, it is impossible to make the secondary synchronized without sending write data to the secondary out-of-order and, therefore, losing consistency.
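The effect of tracking only which regions changed, and not the order of the changes, can be sketched as follows (illustrative only; the region numbers are arbitrary):

```python
# Sketch (illustrative): during an extended link outage the primary tracks
# WHICH regions changed (a bitmap-like set), but neither the order of the
# changes nor the write data itself. Resynchronization therefore has to copy
# regions out of order, and the secondary is inconsistent until it finishes.

changed = set()

def primary_write(region):
    changed.add(region)        # only the region is remembered, not the order

for region in (7, 2, 7, 5):    # an ordered stream of writes during the outage...
    primary_write(region)

assert changed == {2, 5, 7}    # ...collapses into an unordered set of regions
# Replaying {2, 5, 7} cannot reproduce the original write order, which is
# why consistency is lost until resynchronization completes.
```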

You can use two policies to cope with this situation:

• Make a point-in-time copy of the consistent secondary before allowing the secondary to become inconsistent. In the event of a disaster, before consistency is achieved again, the point-in-time copy target provides a consistent, though out-of-date, image.

• Accept the loss of consistency, and the loss of a useful secondary, while making it synchronized.

6.11.3 Detailed states
The following sections detail the states that are portrayed to the user, for either consistency groups or relationships. They also detail the extra information that is available in each state. We describe the major states to provide guidance regarding the available configuration commands.

InconsistentStopped
InconsistentStopped is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is inaccessible for either read or write I/O. A copy process needs to be started to make the secondary consistent.

This state is entered when the relationship or consistency group was InconsistentCopying and has either suffered a persistent error or received a stop command that has caused the copy process to stop.

A start command causes the relationship or consistency group to move to the InconsistentCopying state. A stop command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transits to IdlingDisconnected.

InconsistentCopying
InconsistentCopying is a connected state. In this state, the primary is accessible for read and write I/O, but the secondary is inaccessible for either read or write I/O.

This state is entered after a start command is issued to an InconsistentStopped relationship or consistency group. It is also entered when a forced start is issued to an Idling or ConsistentStopped relationship or consistency group.

In this state, a background copy process runs, which copies data from the primary to the secondary VDisk.

In the absence of errors, an InconsistentCopying relationship is active, and the copy progress increases until the copy process completes. In certain error situations, the copy progress might freeze or even regress.

A persistent error or stop command places the relationship or consistency group into the InconsistentStopped state. A start command is accepted, but it has no effect.

If the background copy process completes on a stand-alone relationship, or on all relationships for a consistency group, the relationship or consistency group transits to the ConsistentSynchronized state.

If the relationship or consistency group becomes disconnected, the secondary side transits to InconsistentDisconnected. The primary side transitions to IdlingDisconnected.

ConsistentStopped
ConsistentStopped is a connected state. In this state, the secondary contains a consistent image, but it might be out-of-date with respect to the primary.

This state can arise when a relationship is in the Consistent Synchronized state and experiences an error that forces a Consistency Freeze. It can also arise when a relationship is created with a CreateConsistentFlag set to true.

Normally, following an I/O error, subsequent write activity causes updates to the primary, and the secondary is no longer synchronized (the synchronized attribute is set to false). In this case, to re-establish synchronization, consistency must be given up for a period. A start command with the -force option must be used to acknowledge this situation, and the relationship or consistency group transits to InconsistentCopying. Issue this command only after all of the outstanding errors are repaired.

In the unusual case where the primary and secondary are still synchronized (perhaps following a user stop, and no further write I/O was received), a start command takes the relationship to ConsistentSynchronized. No -force option is required. Also, in this unusual case, a switch command is permitted that moves the relationship or consistency group to ConsistentSynchronized and reverses the roles of the primary and the secondary.

Chapter 6. Advanced Copy Services 325


If the relationship or consistency group becomes disconnected, the secondary side transits to ConsistentDisconnected and the primary side transits to IdlingDisconnected.

An informational status log is generated every time a relationship or consistency group enters the ConsistentStopped state with a status of Online. You can configure this log to enable an SNMP trap and provide a trigger to automation software, which can then consider issuing a start command following a loss of synchronization.

ConsistentSynchronized

ConsistentSynchronized is a connected state. In this state, the primary VDisk is accessible for read and write I/O. The secondary VDisk is accessible for read-only I/O.

Writes that are sent to the primary VDisk are sent to both the primary and secondary VDisks. Before a write is completed to the host, it must either complete successfully on both VDisks, be failed to the host, or the relationship must transit out of the ConsistentSynchronized state.

A stop command takes the relationship to the ConsistentStopped state. A stop command with the -access parameter takes the relationship to the Idling state.

A switch command leaves the relationship in the ConsistentSynchronized state, but it reverses the primary and secondary roles.

A start command is accepted, but it has no effect.

If the relationship or consistency group becomes disconnected, the same transitions are made as for ConsistentStopped.

Idling

Idling is a connected state. Both master and auxiliary disks are operating in the primary role. Consequently, both master and auxiliary disks are accessible for write I/O.

In this state, the relationship or consistency group accepts a start command. Global Mirror maintains a record of regions on each disk that received write I/O while Idling. This record is used to determine what areas need to be copied following a start command.

The start command must specify the new copy direction. A start command can cause a loss of consistency if either VDisk in any relationship has received write I/O, which is indicated by the synchronized status. If the start command leads to loss of consistency, you must specify a -force parameter.

Following a start command, the relationship or consistency group transits to ConsistentSynchronized if there is no loss of consistency, or to InconsistentCopying if there is a loss of consistency.

Also, while in this state, the relationship or consistency group accepts a -clean option on the start command. If the relationship or consistency group becomes disconnected, both sides change their state to IdlingDisconnected.
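The start rules for the Idling state can be expressed as a small decision helper. This is only a sketch of the rules described above; the function and argument names are illustrative and are not part of the SVC CLI:

```shell
# Sketch of the Idling-state start rules: the outcome depends on whether
# starting would lose consistency, and -force is required when it would.
start_from_idling() {
  consistency_lost=$1   # yes|no: did either VDisk receive write I/O while Idling?
  force=$2              # yes|no: was -force specified on the start command?
  if [ "$consistency_lost" = no ]; then
    echo ConsistentSynchronized
  elif [ "$force" = yes ]; then
    echo InconsistentCopying
  else
    echo "rejected: -force required"
  fi
}

start_from_idling no no    # → ConsistentSynchronized
start_from_idling yes yes  # → InconsistentCopying
```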

IdlingDisconnected

IdlingDisconnected is a disconnected state. The VDisk or disks in this half of the relationship or consistency group are all in the primary role and accept read or write I/O.

The major priority in this state is to recover the link and reconnect the relationship or consistency group.

326 Implementing the IBM System Storage SAN Volume Controller V5.1


No configuration activity is possible (except for deletes or stops) until the relationship is reconnected. At that point, the relationship transits to a connected state. The exact connected state that is entered depends on the state of the other half of the relationship or consistency group, which depends on these factors:

- The state when it became disconnected
- The write activity since it was disconnected
- The configuration activity since it was disconnected

If both halves are IdlingDisconnected, the relationship becomes Idling when reconnected.

While IdlingDisconnected, if a write I/O is received that causes loss of synchronization (synchronized attribute transits from true to false) and the relationship was not already stopped (either through a user stop or a persistent error), an error log is raised. This error log is the same error log that is raised when the same situation arises in the ConsistentSynchronized state.

InconsistentDisconnected

InconsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and do not accept read or write I/O.

No configuration activity, except for deletes, is permitted until the relationship reconnects.

When the relationship or consistency group reconnects, the relationship becomes InconsistentCopying automatically unless either of these conditions exists:

- The relationship was InconsistentStopped when it became disconnected.
- The user issued a stop while disconnected.

In either case, the relationship or consistency group becomes InconsistentStopped.

ConsistentDisconnected

ConsistentDisconnected is a disconnected state. The VDisks in this half of the relationship or consistency group are all in the secondary role and accept read I/O but not write I/O.

This state is entered from ConsistentSynchronized or ConsistentStopped when the secondary side of a relationship becomes disconnected.

In this state, the relationship or consistency group displays an attribute of FreezeTime, which is the point in time that Consistency was frozen. When entered from ConsistentStopped, it retains the time that it had in that state. When entered from ConsistentSynchronized, the FreezeTime shows the last time at which the relationship or consistency group was known to be consistent. This time corresponds to the time of the last successful heartbeat to the other cluster.

A stop command with the -access flag set to true transits the relationship or consistency group to the IdlingDisconnected state. This state allows write I/O to be performed to the secondary VDisk and is used as part of a DR scenario.

When the relationship or consistency group reconnects, the relationship or consistency group becomes ConsistentSynchronized only if this state does not lead to a loss of consistency. This is the case provided that these conditions are true:

- The relationship was ConsistentSynchronized when it became disconnected.
- No writes received successful completion at the primary while disconnected.

Otherwise, the relationship becomes ConsistentStopped. The FreezeTime setting is retained.
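The reconnect rule above can be captured in a small helper. The function and argument names are illustrative only, not SVC code:

```shell
# Sketch: ConsistentDisconnected returns to ConsistentSynchronized on reconnect
# only if no consistency was lost; otherwise it becomes ConsistentStopped
# (with the FreezeTime setting retained).
reconnect_state() {
  was_consistent_synchronized=$1        # yes|no: state when it became disconnected
  primary_writes_while_disconnected=$2  # yes|no: writes completed at the primary
  if [ "$was_consistent_synchronized" = yes ] &&
     [ "$primary_writes_while_disconnected" = no ]; then
    echo ConsistentSynchronized
  else
    echo ConsistentStopped
  fi
}
```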


Empty

This state only applies to consistency groups. It is the state of a consistency group that has no relationships and no other state information to show.

It is entered when a consistency group is first created. It is exited when the first relationship is added to the consistency group, at which point, the state of the relationship becomes the state of the consistency group.
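Summarizing the state descriptions above, the disconnected state that each half enters on link loss depends only on the role it holds at that moment. A hedged sketch (not SVC code; names are illustrative):

```shell
# Which disconnected state each half of a relationship or consistency group
# enters on link loss, keyed by its role, per the state descriptions above.
on_disconnect() {
  role=$1    # primary|secondary
  state=$2   # connected state at the time of the disconnection
  case "$role:$state" in
    primary:*)               echo IdlingDisconnected ;;        # primaries keep serving I/O
    secondary:Inconsistent*) echo InconsistentDisconnected ;;
    secondary:Consistent*)   echo ConsistentDisconnected ;;
    *)                       echo unknown ;;
  esac
}
```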

6.11.4 Practical use of Global Mirror

To use Global Mirror, you must define a relationship between two VDisks.

When creating the Global Mirror relationship, one VDisk is defined as the master, and the other VDisk is defined as the auxiliary. The relationship between the two copies is asymmetric. When the Global Mirror relationship is created, the master VDisk is initially considered the primary copy (often referred to as the source), and the auxiliary VDisk is considered the secondary copy (often referred to as the target).

The master VDisk is the production VDisk, and updates to this copy are mirrored in real time to the auxiliary VDisk. The contents of the auxiliary VDisk that existed when the relationship was created are destroyed.

While the Global Mirror relationship is active, the secondary copy (VDisk) is inaccessible for host application write I/O at any time. The SVC allows read-only access to the secondary VDisk when it contains a “consistent” image. This read-only access is only intended to allow boot time operating system discovery to complete without error, so that any hosts at the secondary site can be ready to start up the applications with minimal delay, if required.

For example, many operating systems need to read logical block address (LBA) 0 (zero) to configure a logical unit. Although read access is allowed at the secondary, in practice the data on the secondary volumes cannot be read by a host, because most operating systems write a “dirty bit” to the file system when it is mounted. Because this write operation is not allowed on the secondary volume, the volume cannot be mounted.

This access is only provided where consistency can be guaranteed. However, there is no way in which coherency can be maintained between reads that are performed at the secondary and later write I/Os that are performed at the primary.

To enable access to the secondary VDisk for host operations, you must stop the Global Mirror relationship by specifying the -access parameter.

While access to the secondary VDisk for host operations is enabled, you must instruct the host to mount the VDisk and perform other related tasks before the application can be started or instructed to perform a recovery process.

Using a secondary copy demands a conscious policy decision by the administrator that a failover is required, and the tasks to be performed on the host that is involved in establishing operation on the secondary copy are substantial. The goal is to make this failover rapid (much faster than recovering from a backup copy), but it is not seamless.

Switching the copy direction: The copy direction for a Global Mirror relationship can be switched so the auxiliary VDisk becomes the primary and the master VDisk becomes the secondary.


You can automate the failover process by using failover management software. The SVC provides Simple Network Management Protocol (SNMP) traps and programming (or scripting) for the command-line interface (CLI) to enable this automation.

6.11.5 Valid combinations of FlashCopy and Metro Mirror or Global Mirror functions

Table 6-9 outlines the combinations of FlashCopy and Metro Mirror or Global Mirror functions that are valid for a VDisk.

Table 6-9 VDisk valid combinations

                    Metro Mirror or           Metro Mirror or
 FlashCopy          Global Mirror Primary     Global Mirror Secondary
 -----------------  ------------------------  ------------------------
 FlashCopy Source   Supported                 Supported
 FlashCopy Target   Not supported             Not supported

6.11.6 Global Mirror configuration limits

Table 6-10 lists the Global Mirror configuration limits.

Table 6-10 Global Mirror configuration limits

 Parameter                                                   Value
 ----------------------------------------------------------  -----
 Number of Metro Mirror consistency groups per cluster       256
 Number of Metro Mirror relationships per cluster            8,192
 Number of Metro Mirror relationships per consistency group  8,192
 Total VDisk size per I/O Group                              A per I/O Group limit of 1,024 TB exists on the quantity of Primary and Secondary VDisk address spaces that can participate in Metro Mirror and Global Mirror relationships. This maximum configuration consumes all 512 MB of bitmap space for the I/O Group and allows no FlashCopy bitmap space.

6.12 Global Mirror commands

Here, we summarize several of the most important Global Mirror commands. For complete details about all of the Global Mirror commands, see IBM System Storage SAN Volume Controller: Command-Line Interface User's Guide, SC26-7903.

The command set for Global Mirror contains two broad groups:

- Commands to create, delete, and manipulate relationships and consistency groups
- Commands that cause state changes

Where a configuration command affects more than one cluster, Global Mirror performs the work to coordinate configuration activity between the clusters. Certain configuration commands can only be performed when the clusters are connected, and those commands fail with no effect when the clusters are disconnected.


Other configuration commands are permitted even though the clusters are disconnected. The state is reconciled automatically by Global Mirror when the clusters are reconnected.

For any given command, with one exception, a single cluster actually receives the command from the administrator. This action is significant for defining the context for a CreateRelationship (mkrcrelationship) command or a CreateConsistencyGroup (mkrcconsistgrp) command, in which case, the cluster receiving the command is called the local cluster.

The exception is the command that sets clusters into a Global Mirror partnership. The administrator must issue the mkpartnership command to both the local and the remote cluster.

The commands are described here as an abstract command set. You can implement these commands in one of two ways:

- A command-line interface (CLI), which can be used for scripting and automation
- A graphical user interface (GUI), which can be used for one-off tasks

6.12.1 Listing the available SVC cluster partners

To create an SVC cluster partnership, we use the svcinfo lsclustercandidate command.

svcinfo lsclustercandidate

Use the svcinfo lsclustercandidate command to list the clusters that are available for setting up a two-cluster partnership. This command is a prerequisite for creating Global Mirror relationships.

To display the characteristics of the cluster, use the svcinfo lscluster command, specifying the name of the cluster.

svctask chcluster

There are three Global Mirror parameters on the svctask chcluster command:

- -gmlinktolerance link_tolerance

This parameter specifies the maximum period of time that the system will tolerate delay before stopping Global Mirror relationships. Specify values between 60 and 86400 seconds in increments of 10 seconds. The default value is 300. Do not change this value except under the direction of IBM Support.

- -gminterdelaysimulation inter_cluster_delay_simulation

This parameter specifies the number of milliseconds that I/O activity (intercluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intercluster Global Mirror relationship separately.

- -gmintradelaysimulation intra_cluster_delay_simulation

This parameter specifies the number of milliseconds that I/O activity (intracluster copying to a secondary VDisk) is delayed. This parameter permits you to test performance implications before deploying Global Mirror and obtaining a long distance link. Specify a value from 0 to 100 milliseconds in 1 millisecond increments. The default value is 0. Use this argument to test each intracluster Global Mirror relationship separately.

Use the svctask chcluster command to adjust these values:

svctask chcluster -gmlinktolerance 300
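The documented value ranges can be captured in a pre-flight check before calling svctask chcluster. This helper is hypothetical and not part of the SVC CLI; it only encodes the ranges stated above:

```shell
# Validate the three Global Mirror tuning values described above:
# gmlinktolerance is 0 (disabled) or 60..86400 seconds in steps of 10;
# both delay simulations are 0..100 milliseconds in 1 ms increments.
valid_gm_params() {
  tol=$1 inter=$2 intra=$3
  { [ "$tol" -eq 0 ] || { [ "$tol" -ge 60 ] && [ "$tol" -le 86400 ] &&
      [ $((tol % 10)) -eq 0 ]; }; } || return 1
  [ "$inter" -ge 0 ] && [ "$inter" -le 100 ] &&
  [ "$intra" -ge 0 ] && [ "$intra" -le 100 ]
}

valid_gm_params 300 0 0 && echo "values are valid"
```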


You can view all of these parameter values with the svcinfo lscluster <clustername> command.

gmlinktolerance

The gmlinktolerance parameter warrants a particularly detailed note.

If poor response extends past the specified tolerance, a 1920 error is logged and one or more Global Mirror relationships are automatically stopped, which protects the application hosts at the primary site. During normal operation, application hosts experience a minimal effect from the response times, because the Global Mirror feature uses asynchronous replication. However, if Global Mirror operations experience degraded response times from the secondary cluster for an extended period of time, I/O operations begin to queue at the primary cluster. This queue results in an extended response time to application hosts. In this situation, the gmlinktolerance feature stops Global Mirror relationships and the application host’s response time returns to normal. After a 1920 error has occurred, the Global Mirror auxiliary VDisks are no longer in the consistent_synchronized state until you fix the cause of the error and restart your Global Mirror relationships. For this reason, ensure that you monitor the cluster to track when this 1920 error occurs.

You can disable the gmlinktolerance feature by setting the gmlinktolerance value to 0 (zero). However, the gmlinktolerance feature cannot protect applications from extended response times if it is disabled. It might be appropriate to disable the gmlinktolerance feature in the following circumstances:

- During SAN maintenance windows where degraded performance is expected from SAN components and application hosts can withstand extended response times from Global Mirror VDisks.

- During periods when application hosts can tolerate extended response times and it is expected that the gmlinktolerance feature might stop the Global Mirror relationships. For example, if you test using an I/O generator, which is configured to stress the back-end storage, the gmlinktolerance feature might detect the high latency and stop the Global Mirror relationships. Disabling the gmlinktolerance feature prevents this result at the risk of exposing the test host to extended response times.

We suggest using a script to periodically monitor the Global Mirror status.

Example 6-2 shows an example of a script in ksh to check the Global Mirror status.

Example 6-2 Script example

[AIX1@root] /usr/GMC > cat checkSVCgm
#!/bin/sh
#
# Description
#
# GM_STATUS        GM status variable
# HOSTsvcNAME      SVC cluster IP address
# PARA_TEST        Consistent synchronized variable
# PARA_TESTSTOPIN  Inconsistent stopped variable
# PARA_TESTSTOP    Consistent stopped variable
# IDCONS           Consistency Group ID variable
#
# variable definition
HOSTsvcNAME="128.153.3.237"
IDCONS=255
PARA_TEST="consistent_synchronized"
PARA_TESTSTOP="consistent_stopped"
PARA_TESTSTOPIN="inconsistent_stopped"
FLOG="/usr/GMC/log/gmtest.log"
VAR=0

# Start program
if [[ $1 == "" ]]
then
    CICLI="true"
fi
while $CICLI
do
    GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
    echo "`date` Global Mirror STATUS <$GM_STATUS>" >> $FLOG
    if [[ $GM_STATUS = $PARA_TEST ]]
    then
        sleep 600
    else
        sleep 600
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
        if [[ $GM_STATUS = $PARA_TESTSTOP || $GM_STATUS = $PARA_TESTSTOPIN ]]
        then
            ssh -l admin $HOSTsvcNAME svctask startrcconsistgrp -force $IDCONS
            TESTEX=`echo $?`
            echo "`date` Global Mirror RESTARTED with RC=$TESTEX" >> $FLOG
        fi
        GM_STATUS=`ssh -l admin $HOSTsvcNAME svcinfo lsrcconsistgrp -delim : | awk -F: 'NR==2 {print $8 }'`
        if [[ $GM_STATUS != $PARA_TESTSTOP ]]
        then
            echo "`date` Global Mirror restarted <$GM_STATUS>"
        else
            echo "`date` ERROR Global Mirror restart failed <$GM_STATUS>"
        fi
        sleep 600
    fi
    ((VAR+=1))
done

The script in Example 6-2 on page 331 performs these functions:

- Check the Global Mirror status every 600 seconds.

- If the status is Consistent_Synchronized, wait another 600 seconds and test again.

- If the status is Consistent_Stopped or Inconsistent_Stopped, wait another 600 seconds and then try to restart Global Mirror. Either of these statuses probably indicates a 1920 error scenario, which means that we might have a performance problem. Waiting 600 seconds before restarting Global Mirror can give the SVC enough time to deliver the high workload that is requested by the server. Because Global Mirror has been stopped for 10 minutes (600 seconds), the secondary copy is now out-of-date by this amount of time and must be resynchronized.

Sample script: The script that is described in Example 6-2 on page 331 is supplied as-is.
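The awk extraction that the script relies on can be exercised offline against captured command output. The header and data row below are illustrative values only, not output from a live cluster, and the real column layout may differ between SVC code levels:

```shell
# Simulated `svcinfo lsrcconsistgrp -delim :` output (values are made up).
sample='id:name:master_cluster_id:master_cluster_name:aux_cluster_id:aux_cluster_name:primary:state:relationship_count:copy_type
255:GMCG1:0000020060C06FCA:ITSO_CL1:0000020061C06FCA:ITSO_CL2:master:consistent_synchronized:2:global'

# NR==2 skips the header line; $8 is the state column in this layout.
GM_STATUS=$(printf '%s\n' "$sample" | awk -F: 'NR==2 {print $8}')
echo "$GM_STATUS"   # → consistent_synchronized
```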


A 1920 error indicates that one or more of the SAN components are unable to provide the performance that is required by the application hosts. This situation can be temporary (for example, a result of a maintenance activity) or permanent (for example, a result of a hardware failure or an unexpected host I/O workload).

If you experience 1920 errors, we suggest that you install a SAN performance analysis tool, such as the IBM Tivoli Storage Productivity Center, and make sure that the tool is correctly configured and monitoring statistics to look for problems and to try to prevent them.

6.12.2 Creating an SVC cluster partnership

To create an SVC cluster partnership, use the svctask mkpartnership command.

svctask mkpartnership

Use the svctask mkpartnership command to establish a one-way Global Mirror partnership between the local cluster and a remote cluster.

To establish a fully functional Global Mirror partnership, you must issue this command on both clusters. This step is a prerequisite for creating Global Mirror relationships between VDisks on the SVC clusters.

When creating the partnership, you can specify the bandwidth to be used by the background copy process between the local and the remote SVC cluster, and if it is not specified, the bandwidth defaults to 50 MBps. The bandwidth must be set to a value that is less than or equal to the bandwidth that can be sustained by the intercluster link.

Background copy bandwidth effect on foreground I/O latency

The background copy bandwidth determines the rate at which the background copy will be attempted for Global Mirror. The background copy bandwidth can affect foreground I/O latency in one of three ways:

- The following result can occur if the background copy bandwidth is set too high compared to the Global Mirror intercluster link capacity:

  – The background copy I/Os can back up on the Global Mirror intercluster link.
  – There is a delay in the synchronous secondary writes of foreground I/Os.
  – The foreground I/O latency will increase as perceived by applications.

- If the background copy bandwidth is set too high for the storage at the primary site, background copy read I/Os overload the primary storage and delay foreground I/Os.

- If the background copy bandwidth is set too high for the storage at the secondary site, background copy writes at the secondary overload the secondary storage and again delay the synchronous secondary writes of foreground I/Os.

In order to set the background copy bandwidth optimally, make sure that you consider all three resources: the primary storage, the intercluster link bandwidth, and the secondary storage. The most restrictive of these three resources must be shared between the background copy bandwidth and the peak foreground I/O workload. Perform this provisioning by calculation or, alternatively, by determining experimentally how much background copy can be allowed before the foreground I/O latency becomes unacceptable, and then reduce the background copy to accommodate peaks in workload and an additional safety margin.
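The provisioning rule above can be sketched numerically. All figures are in MBps and the helper name is illustrative; real sizing must also account for workload peaks and a safety margin, as noted:

```shell
# Background copy budget = most restrictive resource minus peak foreground load.
bg_copy_budget() {
  link=$1 primary=$2 secondary=$3 peak_foreground=$4
  min=$link
  [ "$primary" -lt "$min" ] && min=$primary
  [ "$secondary" -lt "$min" ] && min=$secondary
  budget=$((min - peak_foreground))
  [ "$budget" -lt 0 ] && budget=0
  echo "$budget"
}

# A 100 MBps link, ample storage, and a 40 MBps foreground peak leave 60 MBps:
bg_copy_budget 100 200 150 40   # → 60
```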

svctask chpartnership

To change the bandwidth that is available for background copy in an SVC cluster partnership, use the svctask chpartnership command to specify the new bandwidth.


6.12.3 Creating a Global Mirror consistency group

To create a Global Mirror consistency group, use the svctask mkrcconsistgrp command.

svctask mkrcconsistgrp

Use the svctask mkrcconsistgrp command to create a new, empty Global Mirror consistency group.

The Global Mirror consistency group name must be unique across all consistency groups that are known to the clusters owning this consistency group. If the consistency group involves two clusters, the clusters must be in communication throughout the creation process.

The new consistency group does not contain any relationships and will be in the Empty state. You can add Global Mirror relationships to the group, either upon creation or afterward, by using the svctask chrcrelationship command.

6.12.4 Creating a Global Mirror relationship

To create a Global Mirror relationship, use the svctask mkrcrelationship command.

svctask mkrcrelationship

Use the svctask mkrcrelationship command to create a new Global Mirror relationship. This relationship persists until it is deleted.

The auxiliary VDisk must be equal in size to the master VDisk or the command will fail, and if both VDisks are in the same cluster, they must be in the same I/O Group. The master and auxiliary VDisks cannot be in an existing relationship, and they cannot be the target of a FlashCopy mapping. This command returns the new relationship identifier (relationship_id) when successful.
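The prerequisites above can be checked before the command is issued. This pre-flight helper is purely illustrative and not part of the SVC CLI:

```shell
# Mirror of the documented mkrcrelationship prerequisites (yes|no flags).
can_form_relationship() {
  master_size=$1 aux_size=$2        # VDisk sizes in the same units
  same_cluster=$3 same_io_group=$4  # intracluster placement
  in_relationship=$5 fc_target=$6   # existing-relationship / FlashCopy-target flags
  [ "$master_size" -eq "$aux_size" ] || { echo "VDisk sizes differ"; return 1; }
  [ "$in_relationship" = no ] || { echo "already in a relationship"; return 1; }
  [ "$fc_target" = no ] || { echo "FlashCopy target not allowed"; return 1; }
  if [ "$same_cluster" = yes ] && [ "$same_io_group" = no ]; then
    echo "intracluster VDisks must share an I/O Group"; return 1
  fi
  echo ok
}

can_form_relationship 1024 1024 no no no no   # → ok
```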

When creating the Global Mirror relationship, you can add it to a consistency group that already exists, or it can be a stand-alone Global Mirror relationship if no consistency group is specified.

To check whether the master or auxiliary VDisks comply with the prerequisites to participate in a Global Mirror relationship, use the svcinfo lsrcrelationshipcandidate command, as shown in “svcinfo lsrcrelationshipcandidate” on page 334.

svcinfo lsrcrelationshipcandidate

Use the svcinfo lsrcrelationshipcandidate command to list the available VDisks that are eligible to form a Global Mirror relationship.

When issuing the command, you can specify the master VDisk name and auxiliary cluster to list candidates that comply with the prerequisites to create a Global Mirror relationship. If the command is issued with no parameters, all VDisks that are not disallowed by another configuration state, such as being a FlashCopy target, are listed.

6.12.5 Changing a Global Mirror relationship

To modify the properties of a Global Mirror relationship, use the svctask chrcrelationship command.

Optional parameter: If you do not use the -global optional parameter, a Metro Mirror relationship will be created instead of a Global Mirror relationship.


svctask chrcrelationship

Use the svctask chrcrelationship command to modify the following properties of a Global Mirror relationship:

- Change the name of a Global Mirror relationship.
- Add a relationship to a group.
- Remove a relationship from a group using the -force flag.

6.12.6 Changing a Global Mirror consistency group

To change the name of a Global Mirror consistency group, use the following command.

svctask chrcconsistgrp

Use the svctask chrcconsistgrp command to change the name of a Global Mirror consistency group.

6.12.7 Starting a Global Mirror relationship

To start a stand-alone Global Mirror relationship, use the following command.

svctask startrcrelationship

Use the svctask startrcrelationship command to start the copy process of a Global Mirror relationship.

When issuing the command, you can set the copy direction if it is undefined, and, optionally, you can mark the secondary VDisk of the relationship as clean. The command fails if it is used as an attempt to start a relationship that is already a part of a consistency group.

You can only issue this command to a relationship that is connected. For a relationship that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

If the resumption of the copy process leads to a period when the relationship is inconsistent, you must specify the -force parameter when restarting the relationship. This situation can arise if, for example, the relationship was stopped and then further writes were performed on the original primary of the relationship. The use of the -force parameter here is a reminder that the data on the secondary will become inconsistent while resynchronization (background copying) takes place and, therefore, is unusable for DR purposes before the background copy has completed.

In the Idling state, you must specify the primary VDisk to indicate the copy direction. In other connected states, you can provide the primary argument, but it must match the existing setting.

6.12.8 Stopping a Global Mirror relationship

To stop a stand-alone Global Mirror relationship, use the svctask stoprcrelationship command.

Adding a Global Mirror relationship: When adding a Global Mirror relationship to a consistency group that is not empty, the relationship must have the same state and copy direction as the group in order to be added to it.


svctask stoprcrelationship

Use the svctask stoprcrelationship command to stop the copy process for a relationship. You can also use this command to enable write access to a consistent secondary VDisk by specifying the -access parameter.

This command applies to a stand-alone relationship. It is rejected if it is addressed to a relationship that is part of a consistency group. You can issue this command to stop a relationship that is copying from primary to secondary.

If the relationship is in an inconsistent state, any copy operation stops and does not resume until you issue an svctask startrcrelationship command. Write activity is no longer copied from the primary to the secondary VDisk. For a relationship in the ConsistentSynchronized state, this command causes a Consistency Freeze.

When a relationship is in a consistent state (that is, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcrelationship command to enable write access to the secondary VDisk.
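As a summary of when -access is honored, the consistent states can be checked with a sketch like this (the helper name is illustrative, not SVC code):

```shell
# -access (enable write access to the secondary) applies only when the
# relationship is in one of the consistent states listed above.
access_allowed() {
  case "$1" in
    ConsistentStopped|ConsistentSynchronized|ConsistentDisconnected) echo yes ;;
    *) echo no ;;
  esac
}

access_allowed ConsistentSynchronized   # → yes
access_allowed InconsistentCopying      # → no
```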

6.12.9 Starting a Global Mirror consistency group

To start a Global Mirror consistency group, use the svctask startrcconsistgrp command.

svctask startrcconsistgrp

Use the svctask startrcconsistgrp command to start a Global Mirror consistency group. You can only issue this command to a consistency group that is connected.

For a consistency group that is idling, this command assigns a copy direction (primary and secondary roles) and begins the copy process. Otherwise, this command restarts a previous copy process that was stopped either by a stop command or by an I/O error.

6.12.10 Stopping a Global Mirror consistency group

To stop a Global Mirror consistency group, use the svctask stoprcconsistgrp command.

svctask stoprcconsistgrp

Use the svctask stoprcconsistgrp command to stop the copy process for a Global Mirror consistency group. You can also use this command to enable write access to the secondary VDisks in the group if the group is in a consistent state.

If the consistency group is in an inconsistent state, any copy operation stops and does not resume until you issue the svctask startrcconsistgrp command. Write activity is no longer copied from the primary to the secondary VDisks, which belong to the relationships in the group. For a consistency group in the ConsistentSynchronized state, this command causes a Consistency Freeze.

When a consistency group is in a consistent state (for example, in the ConsistentStopped, ConsistentSynchronized, or ConsistentDisconnected state), you can use the -access parameter with the svctask stoprcconsistgrp command to enable write access to the secondary VDisks within that group.

6.12.11 Deleting a Global Mirror relationship

To delete a Global Mirror relationship, use the svctask rmrcrelationship command.


svctask rmrcrelationship

Use the svctask rmrcrelationship command to delete the relationship that is specified. Deleting a relationship only deletes the logical relationship between the two VDisks. It does not affect the VDisks themselves.

If the relationship is disconnected at the time that the command is issued, the relationship is only deleted on the cluster on which the command is being run. When the clusters reconnect, the relationship is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the relationship on both clusters, you can issue the rmrcrelationship command independently on both of the clusters.

A relationship cannot be deleted if it is part of a consistency group. You must first remove the relationship from the consistency group.

If you delete an inconsistent relationship, the secondary VDisk becomes accessible even though it is still inconsistent. This situation is the one case in which Global Mirror does not inhibit access to inconsistent data.

6.12.12 Deleting a Global Mirror consistency group

To delete a Global Mirror consistency group, use the svctask rmrcconsistgrp command.

svctask rmrcconsistgrp

Use the svctask rmrcconsistgrp command to delete the specified Global Mirror consistency group. You can issue this command for any existing consistency group.

If the consistency group is disconnected at the time that the command is issued, the consistency group is only deleted on the cluster on which the command is being run. When the clusters reconnect, the consistency group is automatically deleted on the other cluster.

Alternatively, if the clusters are disconnected, and you still want to remove the consistency group on both clusters, you can issue the svctask rmrcconsistgrp command separately on both of the clusters.

If the consistency group is not empty, the relationships within it are removed from the consistency group before the group is deleted. These relationships then become stand-alone relationships. The state of these relationships is not changed by the action of removing them from the consistency group.

6.12.13 Reversing a Global Mirror relationship

To reverse a Global Mirror relationship, use the svctask switchrcrelationship command.

svctask switchrcrelationship

Use the svctask switchrcrelationship command to reverse the roles of the primary VDisk and the secondary VDisk when a stand-alone relationship is in a consistent state. When you issue the command, you must specify the desired primary.

6.12.14 Reversing a Global Mirror consistency group

To reverse a Global Mirror consistency group, use the svctask switchrcconsistgrp command.

Chapter 6. Advanced Copy Services 337

svctask switchrcconsistgrp

Use the svctask switchrcconsistgrp command to reverse the roles of the primary VDisk and the secondary VDisk when a consistency group is in a consistent state. This change is applied to all of the relationships in the consistency group. When you issue the command, you must specify the desired primary.

Chapter 7. SAN Volume Controller operations using the command-line interface

In this chapter, we describe operational management. We use the command-line interface (CLI) to demonstrate both normal and advanced operations.

You can use either the CLI or GUI to manage IBM System Storage SAN Volume Controller (SVC) operations. We prefer to use the CLI in this chapter. You might want to script these operations, and we think it is easier to create the documentation for the scripts using the CLI.

This chapter assumes a fully functional SVC environment.

© Copyright IBM Corp. 2010. All rights reserved. 339

7.1 Normal operations using CLI

In the following topics, we describe those commands that best represent normal operational commands.

7.1.1 Command syntax and online help

Two major command sets are available:

- The svcinfo command set allows us to query the various components within the SVC environment.

- The svctask command set allows us to make changes to the various components within the SVC.

When the command syntax is shown, you will see certain parameters in square brackets, for example, [parameter], indicating that the parameter is optional in most, if not all, instances. Any information that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:

- svcinfo -?: Shows a complete list of information commands.
- svctask -?: Shows a complete list of task commands.
- svcinfo commandname -?: Shows the syntax of information commands.
- svctask commandname -?: Shows the syntax of task commands.
- svcinfo commandname -filtervalue?: Shows the filters that you can use to reduce the output of the information commands.

If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

7.2 Working with managed disks and disk controller systems

This section details the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment and the tasks that you can perform at a disk controller level.

7.2.1 Viewing disk controller details

Use the svcinfo lscontroller command to display summary information about all available back-end storage systems.

To display more detailed information about a specific controller, run the command again and append the controller name parameter, for example, controller id 0, as shown in Example 7-1 on page 341.

Help: You can also use -h instead of -?, for example, the svcinfo -h or svctask commandname -h command.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.

Example 7-1 svcinfo lscontroller command

IBM_2145:ITSO_SVC_4:admin>svcinfo lscontroller 0
id 0
controller_name ITSO_XIV_01
WWNN 50017380022C0000
mdisk_link_count 10
max_mdisk_link_count 10
degraded no
vendor_id IBM
product_id_low 2810XIV-
product_id_high LUN-0
product_revision 10.1
ctrl_s/n
allow_quorum yes
WWPN 50017380022C0170
path_count 2
max_path_count 4
WWPN 50017380022C0180
path_count 2
max_path_count 2
WWPN 50017380022C0190
path_count 4
max_path_count 6
WWPN 50017380022C0182
path_count 4
max_path_count 12
WWPN 50017380022C0192
path_count 4
max_path_count 6
WWPN 50017380022C0172
path_count 4
max_path_count 6

7.2.2 Renaming a controller

Use the svctask chcontroller command to change the name of a storage controller. To verify the change, run the svcinfo lscontroller command. Example 7-2 shows both of these commands.

Example 7-2 svctask chcontroller command

IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name DS4500 controller0
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller -delim ,
id,controller_name,ctrl_s/n,vendor_id,product_id_low,product_id_high
0,DS4500,,IBM ,1742-900,
1,DS4700,,IBM ,1814 , FAStT

This command renames the controller named controller0 to DS4500.

Choosing a new name: The chcontroller command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, the new name cannot start with a number, dash, or the word “controller” (because this prefix is reserved for SVC assignment only).
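The naming rule in the note above recurs throughout this chapter (for MDisks, MDGs, and hosts, with a different reserved prefix each time). The following Python sketch encodes the rule exactly as the note states it; it is our own client-side illustration, not the cluster's actual validation, which may differ in detail (for instance, Example 7-10 later renames an MDisk to mdisk_6 even though "mdisk" is described as reserved):

```python
import re

def valid_new_name(name, reserved_prefix="controller"):
    """Check a candidate SVC object name against the rules stated above:
    1 to 15 characters drawn from A-Z, a-z, 0-9, dash, and underscore;
    must not start with a number or a dash, and must not start with the
    reserved prefix (case handling here is our assumption)."""
    if not 1 <= len(name) <= 15:
        return False
    if not re.fullmatch(r"[A-Za-z0-9_-]+", name):
        return False
    if name[0].isdigit() or name[0] == "-":
        return False
    if name.lower().startswith(reserved_prefix):
        return False
    return True

assert valid_new_name("DS4500")
assert not valid_new_name("controller5")   # reserved prefix
assert not valid_new_name("4500-DS")       # starts with a number
```

Pre-checking names like this in a provisioning script avoids a round trip to the cluster just to receive a naming error.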

Chapter 7. SAN Volume Controller operations using the command-line interface 341

7.2.3 Discovery status

Use the svcinfo lsdiscoverystatus command, as shown in Example 7-3, to determine if a discovery operation is in progress. The output of this command is the status of active or inactive.

Example 7-3 lsdiscoverystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lsdiscoverystatus
status
inactive

7.2.4 Discovering MDisks

In general, the cluster detects the MDisks automatically when they appear in the network. However, certain Fibre Channel (FC) controllers do not send the required Small Computer System Interface (SCSI) primitives that are necessary to automatically discover the new MDisks.

If new storage has been attached and the cluster has not detected it, it might be necessary to run this command before the cluster can detect the new MDisks.

Use the svctask detectmdisk command to scan for newly added MDisks (Example 7-4).

Example 7-4 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

To check whether any newly added MDisks were successfully detected, run the svcinfo lsmdisk command and look for new unmanaged MDisks.

If the disks do not appear, check that the disk is appropriately assigned to the SVC in the disk subsystem, and that the zones are set up properly.

When all of the disks allocated to the SVC are seen from the SVC cluster, the following procedure is a good way to verify which MDisks are unmanaged and ready to be added to the Managed Disk Group (MDG).

Perform the following steps to display MDisks:

1. Enter the svcinfo lsmdiskcandidate command, as shown in Example 7-5. This command displays all detected MDisks that are not currently part of an MDG.

Example 7-5 svcinfo lsmdiskcandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskcandidate
id
0
1
2
..

Note: If you have assigned a large number of logical unit numbers (LUNs) to your SVC, the discovery process can take time. Check several times, using the svcinfo lsmdisk command, that all of the MDisks that you were expecting are present.

Alternatively, you can list all MDisks (managed or unmanaged) by issuing the svcinfo lsmdisk command, as shown in Example 7-6.

Example 7-6 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,unmanaged,,,36.0GB,0000000000000000,controller0,600a0b8000174431000000eb47139cca00000000000000000000000000000000
1,mdisk1,online,unmanaged,,,36.0GB,0000000000000001,controller0,600a0b8000174431000000ef47139e1c00000000000000000000000000000000
2,mdisk2,online,unmanaged,,,36.0GB,0000000000000002,controller0,600a0b8000174431000000f147139e7200000000000000000000000000000000
3,mdisk3,online,unmanaged,,,36.0GB,0000000000000003,controller0,600a0b8000174431000000e44713575400000000000000000000000000000000
4,mdisk4,online,unmanaged,,,36.0GB,0000000000000004,controller0,600a0b8000174431000000e64713576000000000000000000000000000000000
5,mdisk5,online,unmanaged,,,36.0GB,0000000000000000,controller1,600a0b800026b28200003ea34851577c00000000000000000000000000000000
6,mdisk6,online,unmanaged,,,36.0GB,0000000000000005,controller0,600a0b8000174431000000e747139cb600000000000000000000000000000000
7,mdisk7,online,unmanaged,,,36.0GB,0000000000000001,controller1,600a0b80002904de00004188485157a400000000000000000000000000000000
8,mdisk8,online,unmanaged,,,36.0GB,0000000000000006,controller0,600a0b8000174431000000ea47139cc400000000000000000000000000000000

From this output, you can see additional information about each MDisk (such as the current status). For the purpose of our current task, we are only interested in the unmanaged disks, because they are candidates for MDGs (all MDisks, in our case).

2. If not all of the MDisks that you expected are visible, rescan the available FC network by entering the svctask detectmdisk command, as shown in Example 7-7.

Example 7-7 svctask detectmdisk

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk

3. If you run the svcinfo lsmdiskcandidate command again and your MDisk or MDisks are still not visible, check that the LUNs from your subsystem have been properly assigned to the SVC and that appropriate zoning is in place (for example, the SVC can see the disk subsystem). See Chapter 3, “Planning and configuration” on page 65 for details about setting up your storage area network (SAN) fabric.
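When scripting this rescan procedure, a simple way to confirm that the new storage actually appeared is to compare the candidate list before and after svctask detectmdisk. The sketch below is our own illustration of that comparison; the sample IDs are hypothetical, and the helper is not an SVC command:

```python
def new_candidates(before, after):
    """Return the MDisk IDs present after a rescan (svctask detectmdisk)
    that were absent before, sorted numerically."""
    return sorted(set(after) - set(before), key=int)

# Hypothetical 'svcinfo lsmdiskcandidate' ID columns captured before
# and after rescanning the fabric:
before = ["0", "1", "2"]
after = ["0", "1", "2", "5", "7"]
assert new_candidates(before, after) == ["5", "7"]
```

An empty result after the rescan is the cue to go back and check LUN assignment and zoning, as described in step 3.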

7.2.5 Viewing MDisk information

When viewing information about the MDisks (managed or unmanaged), we can use the svcinfo lsmdisk command to display overall summary information about all available managed disks. To display more detailed information about a specific MDisk, run the command again and append the MDisk name or ID (for example, mdisk0).

The overview command is svcinfo lsmdisk -delim, as shown in Example 7-8 on page 344.

The summary for an individual MDisk is svcinfo lsmdisk (name/ID of the MDisk from which you want the information), as shown in Example 7-9 on page 344.

Tip: The -delim parameter collapses output instead of wrapping text over multiple lines.
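Because -delim produces one record per line with a fixed header row, the output is easy to consume from a script. The following Python sketch (our own illustration; the sample row is copied from Example 7-8) parses it with the standard csv module:

```python
import csv
import io

# A header and one data row of 'svcinfo lsmdisk -delim ,' output,
# as captured in Example 7-8.
raw = """id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
"""

# DictReader keys each field by the header row, so scripts stay readable.
rows = list(csv.DictReader(io.StringIO(raw)))
unmanaged = [r["name"] for r in rows if r["mode"] == "unmanaged"]
assert unmanaged == ["mdisk1"]
```

In a real script, the raw text would come from the captured output of an SSH session to the cluster rather than a literal string.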

Example 7-8 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS47,16.0GB,0000000000000002,controller0,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS47,16.0GB,0000000000000003,controller0,600a0b80004858a000000e154a895db000000000000000000000000000000000

Example 7-9 shows a summary for a single MDisk.

Example 7-9 Usage of the command svcinfo lsmdisk (ID)

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 2
id 2
name mdisk2
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 16.0GB
quorum_index 0
block_size 512
controller_name controller0
ctrl_type 4
ctrl_WWNN 200600A0B84858A0
controller_id 0
path_count 2
max_path_count 2
ctrl_LUN_# 0000000000000002
UID 600a0b80004858a000000e144a895d9400000000000000000000000000000000
preferred_WWPN 200600A0B84858A2
active_WWPN 200600A0B84858A2

7.2.6 Renaming an MDisk

Use the svctask chmdisk command to change the name of an MDisk. When using the command, be aware that the new name comes first and then the ID/name of the MDisk being renamed. Use this format: svctask chmdisk -name (new name) (current ID/name). Use the svcinfo lsmdisk command to verify the change. Example 7-10 shows both of these commands.

Example 7-10 svctask chmdisk command

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdisk_6 mdisk6
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
6,mdisk_6,online,managed,0,MDG_DS45,36.0GB,0000000000000005,DS4500,600a0b8000174431000000e747139cb600000000000000000000000000000000

This command renamed the MDisk named mdisk6 to mdisk_6.

7.2.7 Including an MDisk

If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a SAN problem, or poorly planned maintenance. If it is a hardware fault, you receive Simple Network Management Protocol (SNMP) alerts about the state of the disk subsystem (before the disk was excluded), and you can undertake preventive maintenance. If not, the hosts that were using virtual disks (VDisks) backed by the excluded MDisk now see I/O errors.

By running the svcinfo lsmdisk command, you can see that mdisk9 is excluded in Example 7-11.

Example 7-11 svcinfo lsmdisk command: Excluded MDisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431000000ea47139cc400000000000000000000000000000000
9,mdisk9,excluded,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000

After taking the necessary corrective action to repair the MDisk (for example, replace the failed disk, repair the SAN zones, and so on), we need to include the MDisk again by issuing the svctask includemdisk command (Example 7-12), because the SVC cluster does not include the MDisk automatically.

Example 7-12 svctask includemdisk

IBM_2145:ITSO-CLS1:admin>svctask includemdisk mdisk9

Running the svcinfo lsmdisk command again shows mdisk9 online again, as shown in Example 7-13.

Example 7-13 svcinfo lsmdisk command: Verifying that MDisk is included

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
8,mdisk8,online,managed,0,MDG_DS45,36.0GB,0000000000000006,DS4500,600a0b8000174431000000ea47139cc400000000000000000000000000000000
9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000

The chmdisk command: The chmdisk command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, the new name cannot start with a number, dash, or the word “mdisk” (because this prefix is reserved for SVC assignment only).

7.2.8 Adding MDisks to a managed disk group

If you created an empty MDG, or you simply want to assign additional MDisks to an already configured MDG, you can use the svctask addmdisk command to populate the MDG (Example 7-14).

Example 7-14 svctask addmdisk command

IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk mdisk6 MDG_DS45

You can only add unmanaged MDisks to an MDG. This command adds the MDisk named mdisk6 to the MDG named MDG_DS45.

7.2.9 Showing the Managed Disk Group

Use the svcinfo lsmdiskgrp command to display information about the MDGs that are defined on the SVC, as shown in Example 7-15.

Example 7-15 svcinfo lsmdiskgrp command

id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,4,468.0GB,512,355.0GB,140.00GB,100.00GB,112.00GB,29,0
1,MDG_DS47,online,8,3,288.0GB,512,217.5GB,120.00GB,20.00GB,70.00GB,41,0

7.2.10 Showing MDisks in a managed disk group

Use the svcinfo lsmdisk -filtervalue command, as shown in Example 7-16, to see which MDisks are part of a specific MDG. This command shows all of the MDisks that are part of the MDG named MDG2.

Example 7-16 svcinfo lsmdisk -filtervalue: Mdisks in MDG

IBM_2145:ITSOSVC42A:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG2 -delim :
id:name:status:mode:mdisk_grp_id:mdisk_grp_name:capacity:ctrl_LUN_#:controller_name:UID
6:mdisk6:online:managed:2:MDG2:3.0GB:0000000000000006:DS4000:600a0b800017423300000044465c0a2700000000000000000000000000000000
7:mdisk7:online:managed:2:MDG2:6.0GB:0000000000000007:DS4000:600a0b80001744310000006f465bf93200000000000000000000000000000000
21:mdisk21:online:image:2:MDG2:2.0GB:0000000000000015:DS4000:600a0b8000174431000000874664018600000000000000000000000000000000
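When -filtervalue does not offer the exact filter you need, the same selection is easy to do client side on parsed -delim output. The following Python sketch is our own illustration of that pattern (the column set is trimmed for brevity and the helper is not an SVC command):

```python
import csv
import io

# Trimmed-down 'svcinfo lsmdisk -delim :' output, loosely based on
# Example 7-16 above.
raw = """id:name:status:mode:mdisk_grp_name
6:mdisk6:online:managed:MDG2
7:mdisk7:online:managed:MDG2
21:mdisk21:online:image:MDG2
"""

def filtervalue(rows, **criteria):
    """Client-side equivalent of '-filtervalue key=value' filtering:
    keep rows whose fields equal every given criterion."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

rows = list(csv.DictReader(io.StringIO(raw), delimiter=":"))
image_mode = filtervalue(rows, mode="image")
assert [r["name"] for r in image_mode] == ["mdisk21"]
```

Filtering on the cluster with -filtervalue is still preferable for very large configurations, because it reduces the amount of output transferred.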

7.2.11 Working with Managed Disk Groups

Before we can create any volumes on the SVC cluster, we need to virtualize the allocated storage that is assigned to the SVC. After volumes have been assigned to the SVC as managed disks, we cannot start using them until they are members of an MDG. Therefore, one of our first operations is to create an MDG where we can place our MDisks.

Important: Do not add this MDisk to an MDG if you want to create an image mode VDisk from the MDisk that you are adding. As soon as you add an MDisk to an MDG, it becomes managed, and extent mapping is not necessarily one-to-one anymore.

This section describes the operations using MDisks and MDGs. It explains the tasks that we can perform at an MDG level.

7.2.12 Creating a managed disk group

After successfully logging in to the SVC CLI, we create the MDG.

Using the svctask mkmdiskgrp command, create an MDG, as shown in Example 7-17.

Example 7-17 svctask mkmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512
MDisk Group, id [0], successfully created

This command creates an MDG called MDG_DS47. The extent size that is used within this group is 512 MB, which is the most commonly used extent size.
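The extent size determines how each MDisk is carved up when it joins the group. The arithmetic is simple enough to sketch; the helper below is our own illustration (the floor rounding is our assumption; a real cluster may account for metadata differently):

```python
import math

def extents_for(capacity_gb, extent_mb=512):
    """Number of whole extents an MDisk of the given capacity contributes
    to an MDG with the given extent size. Simple arithmetic sketch only."""
    return math.floor(capacity_gb * 1024 / extent_mb)

# A 16 GB MDisk in the 512 MB MDG created above yields 32 extents:
assert extents_for(16.0) == 32
# A 36 GB MDisk yields 72:
assert extents_for(36.0, 512) == 72
```

A smaller extent size gives finer-grained allocation at the cost of more extents to track, which is why 512 MB is a common middle ground.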

We have not added any MDisks to the MDG yet, so it is an empty MDG.

There is a way to add unmanaged MDisks and create the MDG in the same command. Using the command svctask mkmdiskgrp with the -mdisk parameter and entering the IDs or names of the MDisks adds the MDisks immediately after the MDG is created.

So, prior to the creation of the MDG, enter the svcinfo lsmdisk command, as shown in Example 7-18, where we list all of the available MDisks that are seen by the SVC cluster.

Example 7-18 Listing available MDisks

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,unmanaged,,,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,unmanaged,,,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004858a000000e154a895db000000000000000000000000000000000

Using the same command as before (svctask mkmdiskgrp) and knowing the MDisk IDs that we are using, we can add multiple MDisks to the MDG at the same time. We now add the unmanaged MDisks, as shown in Example 7-18, to the MDG that we created, as shown in Example 7-19.

Example 7-19 Creating an MDG and adding available MDisks

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_DS47 -ext 512 -mdisk 0:1
MDisk Group, id [0], successfully created

This command creates an MDG called MDG_DS47. The extent size that is used within this group is 512 MB, and two MDisks (0 and 1) are added to the group.

By running the svcinfo lsmdisk command, you now see the MDisks as “managed” and as part of the MDG_DS47, as shown in Example 7-20.

Example 7-20 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
0,mdisk0,online,managed,0,MDG_DS47,16.0GB,0000000000000000,controller0,600a0b8000486a6600000ae94a89575900000000000000000000000000000000
1,mdisk1,online,managed,0,MDG_DS47,16.0GB,0000000000000001,controller0,600a0b80004858a000000e134a895d6e00000000000000000000000000000000
2,mdisk2,online,managed,0,MDG_DS83,16.0GB,0000000000000002,controller1,600a0b80004858a000000e144a895d9400000000000000000000000000000000
3,mdisk3,online,managed,0,MDG_DS83,16.0GB,0000000000000003,controller1,600a0b80004858a000000e154a895db000000000000000000000000000000000

You have completed the tasks that are required to create an MDG.

7.2.13 Viewing Managed Disk Group information

Use the svcinfo lsmdiskgrp command, as shown in Example 7-21, to display information about the MDGs that are defined in the SVC.

Example 7-21 svcinfo lsmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
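The overallocation column reports virtual capacity as a percentage of total MDG capacity. The figures in Example 7-21 are reproduced by truncating to a whole number, as the sketch below shows (the truncation behavior is our inference from the sample data, not a documented formula):

```python
def overallocation_pct(virtual_gb, capacity_gb):
    """Overallocation as it appears in lsmdiskgrp output: virtual capacity
    as a percentage of total MDG capacity, truncated to an integer
    (truncation matches the sample figures; our assumption)."""
    return int(virtual_gb / capacity_gb * 100)

# MDG_DS45: 150.00 GB virtual on 468.0 GB total -> 32
assert overallocation_pct(150.0, 468.0) == 32
# MDG_DS47: 110.00 GB virtual on 288.0 GB total -> 38
assert overallocation_pct(110.0, 288.0) == 38
```

Values over 100 indicate a thin-provisioned group where the warning threshold (the last column) becomes worth setting.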

7.2.14 Renaming a managed disk group

Use the svctask chmdiskgrp command to change the name of an MDG. To verify the change, run the svcinfo lsmdiskgrp command. Example 7-22 shows both of these commands.

Example 7-22 svctask chmdiskgrp command

IBM_2145:ITSO-CLS1:admin>svctask chmdiskgrp -name MDG_DS81 MDG_DS83
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0
2,MDG_DS81,online,0,0,0,512,0,0.00MB,0.00MB,0.00MB,0,85

MDG name: The -name and -mdisk parameters are optional. If you do not enter a -name, the default is MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally. If you do not enter the -mdisk parameter, an empty MDG is created.

If you want to provide a name, you can use letters A to Z, a to z, numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number or the word “mDiskgrp” (because this prefix is reserved for SVC assignment only).

This command renamed the MDG from MDG_DS83 to MDG_DS81.

7.2.15 Deleting a managed disk group

Use the svctask rmmdiskgrp command to remove an MDG from the SVC cluster configuration (Example 7-23).

Example 7-23 svctask rmmdiskgrp

IBM_2145:ITSO-CLS1:admin>svctask rmmdiskgrp MDG_DS81

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp -delim ,
id,name,status,mdisk_count,vdisk_count,capacity,extent_size,free_capacity,virtual_capacity,used_capacity,real_capacity,overallocation,warning
0,MDG_DS45,online,13,5,468.0GB,512,345.0GB,150.00GB,110.00GB,122.00GB,32,0
1,MDG_DS47,online,8,2,288.0GB,512,227.5GB,110.00GB,10.00GB,60.00GB,38,0

This command removes MDG_DS81 from the SVC cluster configuration.

7.2.16 Removing MDisks from a managed disk group

Use the svctask rmmdisk command to remove an MDisk from an MDG (Example 7-24).

Example 7-24 svctask rmmdisk command

IBM_2145:ITSO-CLS1:admin>svctask rmmdisk -mdisk 6 -force MDG_DS45

This command removes the MDisk called mdisk6 from the MDG named MDG_DS45. The -force flag is set because there are VDisks using this MDG.

Changing the MDG name: The chmdiskgrp command specifies the new name first. You can use letters A to Z, a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, the new name cannot start with a number, dash, or the word “mdiskgrp” (because this prefix is reserved for SVC assignment only).

Removing an MDG from the SVC cluster configuration: If there are MDisks within the MDG, you must use the -force flag to remove the MDG from the SVC cluster configuration, for example:

svctask rmmdiskgrp MDG_DS81 -force

Ensure that you definitely want to use this flag, because it destroys all mapping information and data held on the VDisks, which cannot be recovered.

Sufficient space: The removal only takes place if there is sufficient space to migrate the VDisk data to other extents on other MDisks that remain in the MDG. After you remove the MDisk, it takes time for its mode to change from managed to unmanaged.

7.3 Working with hosts

This section explains the tasks that can be performed at a host level.

When we create a host in our SVC cluster, we need to define the connection method. Starting with SVC 5.1, we can now define our host as iSCSI-attached or FC-attached, and we describe these connection methods in detail in Chapter 2, “IBM System Storage SAN Volume Controller” on page 7.

7.3.1 Creating a Fibre Channel-attached host

We show creating an FC-attached host under various circumstances in the following sections.

Host is powered on, connected, and zoned to the SVC

When you create your host on the SVC, it is good practice to check whether the host bus adapter (HBA) worldwide port names (WWPNs) of the server are visible to the SVC. By doing that, you ensure that zoning is done and that the correct WWPN will be used. Issue the svcinfo lshbaportcandidate command, as shown in Example 7-25.

Example 7-25 svcinfo lshbaportcandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA

After you know that the WWPNs that are displayed match your host (use host or SAN switch utilities to verify), use the svctask mkhost command to create your host.

The command to create a host is shown in Example 7-26.

Example 7-26 svctask mkhost

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B89C1CD:210000E08B054CAA
Host, id [0], successfully created

This command creates a host called Palau using WWPNs 21:00:00:E0:8B:89:C1:CD and 21:00:00:E0:8B:05:4C:AA.
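Notice that the CLI accepts WWPNs both with and without colon separators, and (as noted later in this section) treats them as case insensitive. When comparing WWPNs from switch output against svcinfo lshbaportcandidate output in a script, it helps to normalize them first. The following Python helper is our own illustration, not an SVC facility:

```python
def normalize_wwpn(wwpn):
    """Normalize a WWPN for comparison: strip colon separators and
    uppercase the hex digits. Raises ValueError for malformed input.
    Illustrative helper only; the SVC CLI itself accepts either form."""
    hexdigits = wwpn.replace(":", "").upper()
    if len(hexdigits) != 16 or any(c not in "0123456789ABCDEF" for c in hexdigits):
        raise ValueError("not a 16-digit WWPN: %r" % (wwpn,))
    return hexdigits

assert normalize_wwpn("21:00:00:E0:8B:89:C1:CD") == "210000E08B89C1CD"
assert normalize_wwpn("210000e08b054caa") == "210000E08B054CAA"
```

With both sides normalized, a simple set intersection tells you which candidate ports belong to the server you are defining.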

Name: If you do not provide the -name parameter, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally).

You can use the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “host” (because this prefix is reserved for SVC assignment only).

Ports: You can define from one up to eight ports per host, or you can use the addport command, which we show in 7.3.5, “Adding ports to a defined host” on page 354.

Host is not powered on or not connected to the SAN

If you want to create a host on the SVC without seeing your target WWPN by using the svcinfo lshbaportcandidate command, add the -force flag to your mkhost command, as shown in Example 7-27. This option is more prone to human error than choosing the WWPN from a list, but it is typically used when many host definitions are created at the same time, such as through a script.

In this case, you can type the WWPN of your HBA or HBAs and use the -force flag to create the host, regardless of whether they are connected, as shown in Example 7-27.

Example 7-27 mkhost -force

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Guinea -hbawwpn 210000E08B89C1DC -force
Host, id [4], successfully created

This command forces the creation of a host called Guinea using WWPN 210000E08B89C1DC.

If you run the svcinfo lshost command again, you now see your host named Guinea under host ID 4.

7.3.2 Creating an iSCSI-attached host

Now, we can create a host definition for a host that is not connected to the SAN but that has LAN access to our SVC nodes. Before we create the host definition, we configure our SVC clusters to use the new iSCSI connection method. We describe additional information about configuring your nodes to use iSCSI in 7.7.4, “iSCSI configuration” on page 382.

The iSCSI functionality allows the host to access volumes through the SVC without being attached to the SAN. Back-end storage and node-to-node communication still need the FC network to communicate, but the host does not necessarily need to be connected to the SAN.

When we create a host that is going to use iSCSI as a communication method, iSCSI initiator software must be installed on the host to initiate the communication between the SVC and the host. This installation creates an iSCSI qualified name (IQN) identifier that is needed before we create our host.

Before we start, we check our server’s IQN address. We are running Windows Server 2008. We select Start → Programs → Administrative Tools, and we select iSCSI Initiator. In our example, our IQN, as shown in Figure 7-1 on page 352, is:

iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
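An IQN follows the pattern iqn.YYYY-MM.reversed.domain, optionally followed by a colon and an installation-specific identifier. A quick sanity check like the one below can catch copy-and-paste mistakes before running mkhost; the regular expression is a simplified sketch of our own, not the full iSCSI naming grammar:

```python
import re

# Loose shape check for an iSCSI qualified name:
# iqn.<year>-<month>.<reversed domain>[:<identifier>]
# Simplified illustration only, not the complete RFC 3720 grammar.
IQN_RE = re.compile(r"iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?")

def looks_like_iqn(name):
    return IQN_RE.fullmatch(name) is not None

assert looks_like_iqn(
    "iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com")
assert not looks_like_iqn("210000E08B89C1CD")  # a WWPN, not an IQN
```

The check is deliberately permissive; the authoritative value is always the one reported by the host's own iSCSI initiator tool, as shown in Figure 7-1.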

Note: WWPNs are not case sensitive in the CLI.

Figure 7-1 IQN from the iSCSI initiator tool

We create the host by issuing the mkhost command, as shown in Example 7-28. When the command completes successfully, we display our newly created host.

It is important to know that when the host is initially configured, the default authentication method is set to no authentication and no Challenge Handshake Authentication Protocol (CHAP) secret is set. To set a CHAP secret for authenticating the iSCSI host with the SVC cluster, use the svctask chhost command with the chapsecret parameter.

Example 7-28 mkhost command

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Baldur -iogrp 0 -iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
Host, id [4], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 0
state offline

We have now created our host definition. We map a VDisk to our new iSCSI server, as shown in Example 7-29. We have already created the VDisk, as shown in 7.4.1, “Creating a VDisk” on page 356. In our scenario, our VDisk has ID 21 and the host name is Baldur. We map it to our iSCSI host.

Example 7-29 Mapping VDisk to iSCSI host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Baldur 21
Virtual Disk to Host map, id [0], successfully created

352 Implementing the IBM System Storage SAN Volume Controller V5.1

After the VDisk has been mapped to the host, we display the host information again, as shown in Example 7-30.

Example 7-30 svcinfo lshost

IBM_2145:ITSO-CLS1:admin>svcinfo lshost 4
id 4
name Baldur
port_count 1
type generic
mask 1111
iogrp_count 1
iscsiname iqn.1991-05.com.microsoft:st-2k8hv-004.englab.brocade.com
node_logged_in_count 1
state online

If you need to display a CHAP secret for an already defined server, use the svcinfo lsiscsiauth command.
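As a sketch of the commands involved (the secret value here is purely illustrative), setting a CHAP secret for the Baldur host and then listing the cluster’s iSCSI authentication settings might look like this:

IBM_2145:ITSO-CLS1:admin>svctask chhost -chapsecret mysecret01 Baldur
IBM_2145:ITSO-CLS1:admin>svcinfo lsiscsiauth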

7.3.3 Modifying a host

Use the svctask chhost command to change the name of a host. To verify the change, run the svcinfo lshost command. Example 7-31 shows both of these commands.

Example 7-31 svctask chhost command

IBM_2145:ITSO-CLS1:admin>svctask chhost -name Angola Guinea

IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 Nile 2 1
2 Kanaga 2 1
3 Siam 2 2
4 Angola 1 4

This command renamed the host from Guinea to Angola.

Note: FC hosts and iSCSI hosts are handled in the same way operationally after they have been created.

Note: The chhost command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The new name can be between one and 15 characters in length. However, it cannot start with a number, dash, or the word “host” (because this prefix is reserved for SVC assignment only).

Note: If you use Hewlett-Packard UNIX (HP-UX), you use the -type option. See the IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563, for more information about the hosts that require the -type parameter.
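For illustration, a command of the following form sets the host type for an HP-UX host (the host name Rigel is hypothetical); check the Host Attachment Guide for the valid -type values for your configuration:

IBM_2145:ITSO-CLS1:admin>svctask chhost -type hpux Rigel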

7.3.4 Deleting a host

Use the svctask rmhost command to delete a host from the SVC configuration. If your host is still mapped to VDisks and you use the -force flag, the host and all of the mappings with it are deleted. The VDisks are not deleted, only the mappings to them.

The command that is shown in Example 7-32 deletes the host called Angola from the SVC configuration.

Example 7-32 svctask rmhost Angola

IBM_2145:ITSO-CLS1:admin>svctask rmhost Angola

7.3.5 Adding ports to a defined host

If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can use the svctask addhostport command to add the new port definitions to your host configuration.

If your host is currently connected through SAN with FC and if the WWPN is already zoned to the SVC cluster, issue the svcinfo lshbaportcandidate command, as shown in Example 7-33, to compare with the information that you have from the server administrator.

Example 7-33 svcinfo lshbaportcandidate

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B054CAA

If the WWPN matches your information (use host or SAN switch utilities to verify), use the svctask addhostport command to add the port to the host.

Example 7-34 shows the command to add a host port.

Example 7-34 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA Palau

This command adds the WWPN of 210000E08B054CAA to the Palau host.

If the new HBA is not connected or zoned, the svcinfo lshbaportcandidate command does not display your WWPN. In this case, you can manually type the WWPN of your HBA or HBAs and use the -force flag to add the port to the host, as shown in Example 7-35.

Example 7-35 svctask addhostport

IBM_2145:ITSO-CLS1:admin>svctask addhostport -hbawwpn 210000E08B054CAA -force Palau

This command forces the addition of the WWPN 210000E08B054CAA to the host called Palau.

Deleting a host: If there are any VDisks assigned to the host, you must use the -force flag, for example: svctask rmhost -force Angola.

Adding multiple ports: You can add multiple ports at one time by using a colon (:) as the separator between WWPNs, for example:

svctask addhostport -hbawwpn 210000E08B054CAA:210000E08B89C1CD Palau

If you run the svcinfo lshost command again, you see your host with an updated port count of 2 in Example 7-36.

Example 7-36 svcinfo lshost command: Port count

IBM_2145:ITSO-CLS1:admin>svcinfo lshost
id name port_count iogrp_count
0 Palau 2 4
1 ITSO_W2008 1 4
2 Thor 3 1
3 Frigg 1 1
4 Baldur 1 1

If your host currently uses iSCSI as a connection method, you must have the new iSCSI IQN ID before you add the port. Unlike FC-attached hosts, you cannot check for available candidates with iSCSI.

After you have acquired the additional iSCSI IQN, use the svctask addhostport command, as shown in Example 7-37.

Example 7-37 Adding an iSCSI port to an already configured host

IBM_2145:ITSO-CLS1:admin>svctask addhostport -iscsiname iqn.1991-05.com.microsoft:baldur 4

7.3.6 Deleting ports

If you make a mistake when adding a port, or if you remove an HBA from a server that is already defined within the SVC, you can use the svctask rmhostport command to remove WWPN definitions from an existing host.

Before you remove the WWPN, be sure that it is the correct WWPN by issuing the svcinfo lshost command, as shown in Example 7-38.

Example 7-38 svcinfo lshost command

IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B054CAA
node_logged_in_count 2
state active
WWPN 210000E08B89C1CD
node_logged_in_count 2
state offline

WWPNs: WWPNs are not case sensitive within the CLI.

When you know the WWPN or iSCSI IQN, use the svctask rmhostport command to delete a host port, as shown in Example 7-39.

Example 7-39 svctask rmhostport

For removing a WWPN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -hbawwpn 210000E08B89C1CD Palau

For removing an iSCSI IQN:
IBM_2145:ITSO-CLS1:admin>svctask rmhostport -iscsiname iqn.1991-05.com.microsoft:baldur Baldur

This command removes the WWPN of 210000E08B89C1CD from the Palau host and the iSCSI IQN iqn.1991-05.com.microsoft:baldur from the Baldur host.

7.4 Working with VDisks

This section details the various configuration and administration tasks that can be performed on the VDisks within the SVC environment.

7.4.1 Creating a VDisk

The mkvdisk command creates sequential, striped, or image mode VDisk objects. When they are mapped to a host object, these objects are seen as disk drives with which the host can perform I/O operations.

When creating a VDisk, you must enter several parameters at the CLI. There are both mandatory and optional parameters.

See the full command string and detailed information in the Command-Line Interface User’s Guide, SC26-7903-05.

Before you start creating a VDisk, you must know the following information:

- The MDG in which the VDisk is going to have its extents
- The I/O Group from which the VDisk will be accessed
- The size of the VDisk
- The name of the VDisk

When you are ready to create your striped VDisk, you use the svctask mkvdisk command (we discuss sequential and image mode VDisks later). In Example 7-40 on page 357, this command creates a 10 GB striped VDisk with ID 0 within the MDG_DS47 MDG and assigns it to the io_grp0 I/O Group.

Removing multiple ports: You can remove multiple ports at one time by using a colon (:) as the separator between the port names, for example:

svctask rmhostport -hbawwpn 210000E08B054CAA:210000E08B892BCD Angola

Creating an image mode disk: If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

Example 7-40 svctask mkvdisk command

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS47 -iogrp io_grp0 -size 10 -unit gb -name Tiger
Virtual Disk, id [0], successfully created

To verify the results, you can use the svcinfo lsvdisk command, as shown in Example 7-41.

Example 7-41 svcinfo lsvdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 0
id 0
name Tiger
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 10.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F1000000000000000
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 10.00MB
real_capacity 10.00MB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

You have completed the required tasks to create a VDisk.

7.4.2 VDisk information

Use the svcinfo lsvdisk command to display summary information about all VDisks defined within the SVC environment. To display more detailed information about a specific VDisk, run the command again and append the VDisk name parameter (for example, vdisk_D). Example 7-42 shows both of these commands.

Example 7-42 svcinfo lsvdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk_A,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,,,60050768018301BF2800000000000008,0,1
1,vdisk_B,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk_C,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
3,vdisk_D,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0,1
4,MM_DBLog_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,4,MMREL2,60050768018301BF2800000000000004,0,1
5,MM_DB_Pri,0,io_grp0,online,0,MDG_DS45,10.0GB,striped,,,5,MMREL1,60050768018301BF2800000000000005,0,1
6,MM_App_Pri,1,io_grp1,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000006,0,1

7.4.3 Creating a Space-Efficient VDisk

Example 7-43 shows an example of creating a Space-Efficient VDisk. It is important to know that, in addition to the normal parameters, you must use the following parameters:

- -rsize: This parameter makes the VDisk space-efficient; otherwise, the VDisk is fully allocated.

- -autoexpand: This parameter specifies that space-efficient copies automatically expand their real capacities by allocating new extents from their MDG.

- -grainsize: This parameter sets the grain size (KB) for a Space-Efficient VDisk.

Example 7-43 Usage of the command svctask mkvdisk

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_DS45 -iogrp 1 -vtype striped -size 10 -unit gb -rsize 50% -autoexpand -grainsize 32
Virtual Disk, id [7], successfully created

This command creates a space-efficient 10 GB VDisk. The VDisk belongs to the MDG named MDG_DS45 and is owned by the io_grp1 I/O Group. The real capacity automatically expands until the VDisk size of 10 GB is reached. The grain size is set to 32 KB, which is the default.

7.4.4 Creating a VDisk in image mode

This virtualization type allows image mode VDisks to be created when an MDisk already has data on it, perhaps from a pre-virtualized subsystem. When an image mode VDisk is created, it directly corresponds to the previously unmanaged MDisk from which it was created. Therefore, with the exception of space-efficient image mode VDisks, VDisk logical block address (LBA) x equals MDisk LBA x.

You can use this command to bring a non-virtualized disk under the control of the cluster. After it is under the control of the cluster, you can migrate the VDisk from the single managed disk. When it is migrated, the VDisk is no longer an image mode VDisk. You can add image mode VDisks to an already populated MDG with other types of VDisks, such as a striped or sequential VDisk.

You must use the -mdisk parameter to specify an MDisk that has a mode of unmanaged. The -fmtdisk parameter cannot be used to create an image mode VDisk.

Use the svctask mkvdisk command to create an image mode VDisk, as shown in Example 7-44.

Example 7-44 svctask mkvdisk (image mode)

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Image -iogrp 0 -mdisk mdisk20 -vtype image -name Image_Vdisk_A
Virtual Disk, id [8], successfully created

This command creates an image mode VDisk called Image_Vdisk_A using the mdisk20 MDisk. The VDisk belongs to the MDG_Image MDG and is owned by the io_grp0 I/O Group.

Disk size: When using the -rsize parameter, you have the following options: disk_size, disk_size_percentage, and auto.

- Specify the disk_size_percentage value using an integer, or an integer immediately followed by the percent character (%).

- Specify the units for a disk_size integer using the -unit parameter; the default is MB. The -rsize value can be greater than, equal to, or less than the size of the VDisk.

- The auto option creates a VDisk copy that uses the entire size of the MDisk; if you specify the -rsize auto option, you must also specify the -vtype image option.

An entry of 1 GB uses 1,024 MB.

Size: An image mode VDisk must be at least 512 bytes (the capacity cannot be 0). That is, the minimum size that can be specified for an image mode VDisk must be the same as the MDisk group extent size to which it is added, with a minimum of 16 MB.

Capacity: If you create a mirrored VDisk from two image mode MDisks without specifying a -capacity value, the capacity of the resulting VDisk is the smaller of the two MDisks, and the remaining space on the larger MDisk is inaccessible.

If you do not specify the -size parameter when you create an image mode disk, the entire MDisk capacity is used.

If we run the svcinfo lsmdisk command again, notice that mdisk20 now has a mode of image, as shown in Example 7-45.

Example 7-45 svcinfo lsmdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b28200003f9f4851588700000000000000000000000000000000
20,mdisk20,online,image,2,MDG_Image,36.0GB,0000000000000007,DS4700,600a0b80002904de00004282485158aa00000000000000000000000000000000

7.4.5 Adding a mirrored VDisk copy

You can create a mirrored copy of a VDisk, which keeps a VDisk accessible even when the MDisk on which it depends has become unavailable. You can create a copy of a VDisk either on separate MDGs or by creating an image mode copy of the VDisk. Copies increase the availability of data; however, they are not separate objects. You can only create or change mirrored copies from the VDisk.

In addition, you can use VDisk Mirroring as an alternative method of migrating VDisks between MDGs.

For example, if you have a non-mirrored VDisk in one MDG and want to migrate that VDisk to another MDG, you can add a new copy of the VDisk and specify the second MDG. After the copies are synchronized, you can delete the copy on the first MDG. The VDisk is migrated to the second MDG while remaining online during the migration.

To create a mirrored copy of a VDisk, use the addvdiskcopy command. This command adds a copy of the chosen VDisk to the selected MDG, which changes a non-mirrored VDisk into a mirrored VDisk.

In the following scenario, we show creating a VDisk copy mirror from one MDG to another MDG.

As you can see in Example 7-46, the VDisk has a copy with copy_id 0.

Example 7-46 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
capacity 45.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32

In Example 7-47, we add the VDisk copy mirror by using the svctask addvdiskcopy command.

Example 7-47 svctask addvdiskcopy

IBM_2145:ITSO-CLS1:admin>svctask addvdiskcopy -mdiskgrp MDG_DS45 -vtype striped -rsize 20 -autoexpand -grainsize 64 -unit gb vdisk_C
Vdisk [2] copy [1] successfully created

During the synchronization process, you can see the status by using the svcinfo lsvdisksyncprogress command. As shown in Example 7-48, the first time that the status is checked, the synchronization progress is at 86%, and the estimated completion time is 19:16:54. The second time that the command is run, the progress status is at 100%, and the synchronization is complete.

Example 7-48 Synchronization

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk_C 1 86 080710191654

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisksyncprogress -copy 1 vdisk_C
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 vdisk_C 1 100

As you can see in Example 7-49, the new VDisk copy mirror (copy_id 1) has been added and can be seen by using the svcinfo lsvdisk command.

Example 7-49 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_C
id 2
name vdisk_C
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 45.0GB
type many
formatted no
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000002
virtual_disk_throttling (MB) 20
preferred_node_id 3
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 2

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 12.00GB
free_capacity 12.00GB
overallocation 375
autoexpand off
warning 23
grainsize 32

copy_id 1
status online
sync yes
primary no
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.44MB
real_capacity 20.02GB
free_capacity 20.02GB
overallocation 224
autoexpand on
warning 80
grainsize 64

Notice that the VDisk copy mirror (copy_id 1) does not have the same values as the original VDisk copy. When adding a VDisk copy mirror, you can define it with different parameters from the original copy. Therefore, you can define a Space-Efficient VDisk copy mirror for a non-Space-Efficient VDisk copy and vice versa, which is one way to migrate a non-Space-Efficient VDisk to a Space-Efficient VDisk.

7.4.6 Splitting a VDisk Copy

The splitvdiskcopy command creates a new VDisk in the specified I/O Group from a copy of the specified VDisk. If the copy that you are splitting is not synchronized, you must use the -force parameter. The command fails if you are attempting to remove the only synchronized copy. To avoid this failure, wait for the copy to synchronize, or split the unsynchronized copy from the VDisk by using the -force parameter. You can run the command when either VDisk copy is offline.

Example 7-50 shows the svctask splitvdiskcopy command, which is used to split a VDisk copy. It creates a new vdisk_N from the copy of vdisk_B.

Example 7-50 Split VDisk

IBM_2145:ITSO-CLS1:admin>svctask splitvdiskcopy -copy 1 -iogrp 0 -name vdisk_N vdisk_B
Virtual Disk, id [2], successfully created

As you can see in Example 7-51, the new VDisk, vdisk_N, has been created as an independent VDisk.

Example 7-51 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_N
id 2
name vdisk_N
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 100.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000002F
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 84.75MB
real_capacity 20.10GB
free_capacity 20.01GB
overallocation 497
autoexpand on
warning 80
grainsize 64

Note: To change the parameters of a VDisk copy mirror, you must delete the VDisk copy mirror and redefine it with the new values.

The vdisk_B VDisk has now lost its mirrored copy, which has become a new, independent VDisk.

7.4.7 Modifying a VDisk

Executing the svctask chvdisk command modifies a single property of a VDisk. Only one property can be modified at a time, so changing the name and modifying the I/O Group require two invocations of the command.

You can specify a new name or label. The new name can be used subsequently to reference the VDisk. The I/O Group with which this VDisk is associated can be changed. Note that this requires a flush of the cache within the nodes in the current I/O Group to ensure that all data is written to disk. I/O must be suspended at the host level before performing this operation.
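As an illustrative sketch (the VDisk names here are hypothetical), renaming a VDisk and then moving it to another I/O Group therefore takes two separate invocations:

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -name vdisk_E vdisk_D
IBM_2145:ITSO-CLS1:admin>svctask chvdisk -iogrp io_grp1 vdisk_E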

7.4.8 I/O governing

You can set a limit on the number of I/O transactions that are accepted for a VDisk, in terms of I/Os per second or MBs per second. By default, no I/O governing rate is set when a VDisk is created.

Base the choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications generally issue large amounts of I/O but transfer only a relatively small amount of data. In this case, setting an I/O governing throttle based on MBs per second does not achieve much. It is better to use an I/Os per second throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O, but transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much, so it is better to use an MB per second throttle.

An example of the chvdisk command is shown in Example 7-52.

Example 7-52 svctask chvdisk (rate/warning Space-Efficient VDisk)

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -rate 20 -unitmb vdisk7

IBM_2145:ITSO-CLS1:admin>svctask chvdisk -warning 85% vdisk7

The first command changes the VDisk throttling of vdisk7 to 20 MBps, while the second command changes the Space-Efficient VDisk warning to 85%.

Tips: If the VDisk has a mapping to any hosts, it is not possible to move the VDisk to an I/O Group that does not include any of those hosts.

This operation will fail if there is not enough space to allocate bitmaps for a mirrored VDisk in the target I/O Group.

If the -force parameter is used and the cluster is unable to destage all write data from the cache, the contents of the VDisk are corrupted by the loss of the cached data.

If the -force parameter is used to move a VDisk that has out-of-sync copies, a full resynchronization is required.

I/O governing rate: An I/O governing rate of 0 (displayed as throttling in the CLI output of the svcinfo lsvdisk command) does not mean that zero I/Os per second (or MBs per second) can be achieved. It means that no throttle is set.

New name: The chvdisk command specifies the new name first. The name can consist of letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). It can be between one and 15 characters in length. However, it cannot start with a number, the dash, or the word “vdisk” (because this prefix is reserved for SVC assignment only).

If you want to verify the changes, issue the svcinfo lsvdisk command, as shown in Example 7-53.

Example 7-53 svcinfo lsvdisk command: Verifying throttling

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk7
id 7
name vdisk7
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 10.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF280000000000000A
virtual_disk_throttling (MB) 20
preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 5.02GB
free_capacity 5.02GB
overallocation 199
autoexpand on
warning 85
grainsize 32

7.4.9 Deleting a VDisk

When you execute the svctask rmvdisk command on an existing managed mode VDisk, any data that remained on it is lost. The extents that made up the VDisk are returned to the pool of free extents in the MDG.

If any Remote Copy, FlashCopy, or host mappings still exist for this VDisk, the delete fails unless the -force flag is specified. This flag ensures the deletion of the VDisk and any VDisk to host mappings and copy mappings.

If the VDisk is currently the subject of a migrate to image mode, the delete fails unless the -force flag is specified. This flag halts the migration and then deletes the VDisk.

If the command succeeds (without the -force flag) for an image mode disk, the underlying back-end controller logical unit will be consistent with the data that a host might previously have read from the image mode VDisk. That is, all fast write data has been flushed to the underlying LUN. If the -force flag is used, there is no guarantee.

If there is any nondestaged data in the fast write cache for this VDisk, the deletion of the VDisk fails unless the -force flag is specified, in which case any nondestaged data in the fast write cache is discarded.

Use the svctask rmvdisk command to delete a VDisk from your SVC configuration, as shown in Example 7-54.

Example 7-54 svctask rmvdisk

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk vdisk_A

This command deletes the vdisk_A VDisk from the SVC configuration. If the VDisk is assigned to a host, you need to use the -force flag to delete the VDisk (Example 7-55).

Example 7-55 svctask rmvdisk (-force)

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk -force vdisk_A

7.4.10 Expanding a VDisk

Expanding a VDisk presents a larger capacity disk to your operating system. Although this expansion can be easily performed using the SVC, you must ensure that your operating system supports expansion before using this function.

Assuming that your operating system supports it, you can use the svctask expandvdisksize command to increase the capacity of a given VDisk.

Example 7-56 shows a sample of this command.

Example 7-56 svctask expandvdisksize

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -size 5 -unit gb vdisk_C

This command expands the vdisk_C VDisk, which was 35 GB before, by another 5 GB to give it a total size of 40 GB.

To expand a Space-Efficient VDisk, you can use the -rsize option, as shown in Example 7-57 on page 368. This command changes the real size of the vdisk_B VDisk to a real capacity of 55 GB. The capacity of the VDisk remains unchanged.

Example 7-57 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 50.00GB
free_capacity 50.00GB
overallocation 200
autoexpand off
warning 40
grainsize 32

IBM_2145:ITSO-CLS1:admin>svctask expandvdisksize -rsize 5 -unit gb vdisk_B

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_B
id 1
name vdisk_B
capacity 100.0GB
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 55.00GB
free_capacity 55.00GB
overallocation 181
autoexpand off
warning 40
grainsize 32

7.4.11 Assigning a VDisk to a host

Use the svctask mkvdiskhostmap command to map a VDisk to a host. When executed, this command creates a new mapping between the VDisk and the specified host, which essentially presents this VDisk to the host, as though the disk was directly attached to the host. It is only after this command is executed that the host can perform I/O to the VDisk. Optionally, a SCSI LUN ID can be assigned to the mapping.

When the HBA on the host scans for devices that are attached to it, it discovers all of the VDisks that are mapped to its FC ports. When the devices are found, each one is allocated an identifier (SCSI LUN ID).

For example, the first disk found is generally SCSI LUN 1, and so on. You can control the order in which the HBA discovers VDisks by assigning the SCSI LUN ID as required. If you do not specify a SCSI LUN ID, the cluster automatically assigns the next available SCSI LUN ID, given any mappings that already exist with that host.

Important: If a VDisk is expanded, its type will become striped even if it was previously sequential or in image mode. If there are not enough extents to expand your VDisk to the specified size, you receive the following error message:

CMMVC5860E Ic_failed_vg_insufficient_virtual_extents

Using the VDisk and host definition that we created in the previous sections, we assign VDisks to hosts that are ready for their use. We use the svctask mkvdiskhostmap command (see Example 7-58).

Example 7-58 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_B
Virtual Disk to Host map, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Tiger vdisk_C
Virtual Disk to Host map, id [1], successfully created

These commands assign vdisk_B and vdisk_C to the host Tiger, as shown in Example 7-59.

Example 7-59 svcinfo lshostvdiskmap -delim, command

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim ,
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
1,Tiger,2,1,vdisk_B,210000E08B892BCD,60050768018301BF2800000000000001
1,Tiger,1,2,vdisk_C,210000E08B892BCD,60050768018301BF2800000000000002

Be aware that certain HBA device drivers stop when they find a gap in the SCSI LUN IDs. For example:

- VDisk 1 is mapped to Host 1 with SCSI LUN ID 1.
- VDisk 2 is mapped to Host 1 with SCSI LUN ID 2.
- VDisk 3 is mapped to Host 1 with SCSI LUN ID 4.

When the device driver scans the HBA, it might stop after discovering VDisks 1 and 2, because there is no SCSI LUN mapped with ID 3. Be careful to ensure that the SCSI LUN ID allocation is contiguous.
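For example, to keep the allocation contiguous, you can assign the next ID explicitly with the optional -scsi parameter; the host and VDisk names here are hypothetical:

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Host1 -scsi 3 VDisk3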

It is not possible to map a VDisk to a host more than one time at separate LUNs (Example 7-60).

Example 7-60 svctask mkvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Siam vdisk_A
Virtual Disk to Host map, id [0], successfully created

This command maps the VDisk called vdisk_A to the host called Siam.

You have completed all of the tasks that are required to assign a VDisk to an attached host.

7.4.12 Showing VDisk-to-host mapping

Use the svcinfo lshostvdiskmap command to show which VDisks are assigned to a specific host (Example 7-61 on page 370).

Assigning a specific LUN ID to a VDisk: The optional -scsi scsi_num parameter can help assign a specific LUN ID to a VDisk that is to be associated with a given host. The default (if nothing is specified) is to increment based on what is already assigned to the host.

Chapter 7. SAN Volume Controller operations using the command-line interface 369


Example 7-61 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C

From this command, you can see that the host Siam has only one assigned VDisk, called vdisk_A. The SCSI LUN ID is also shown, which is the ID by which the VDisk is presented to the host. If no host is specified, all defined host-to-VDisk mappings are returned.
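Because most svcinfo listings can emit delimiter-separated output with a header row, they are easy to post-process in a script. A minimal sketch, assuming comma-delimited output (the sample data is taken from Example 7-61; the helper name is our own):

```python
def parse_delim_output(text, delim=","):
    """Parse 'svcinfo ... -delim ,' output into a list of dicts keyed by header."""
    lines = text.strip().splitlines()
    header = lines[0].split(delim)
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]

sample = """id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,0,vdisk_A,210000E08B18FF8A,60050768018301BF280000000000000C"""

rows = parse_delim_output(sample)
print(rows[0]["vdisk_name"], rows[0]["SCSI_id"])
```

The same helper works for any of the -delim listings in this chapter, which keeps report scripts short.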

7.4.13 Deleting a VDisk-to-host mapping

When deleting a VDisk mapping, you are not deleting the volume itself, only the connection from the host to the volume. If you mapped a VDisk to a host by mistake, or you simply want to reassign the volume to another host, use the svctask rmvdiskhostmap command to unmap a VDisk from a host (Example 7-62).

Example 7-62 svctask rmvdiskhostmap

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Tiger vdisk_D

This command unmaps the VDisk called vdisk_D from the host called Tiger.

7.4.14 Migrating a VDisk

From time to time, you might want to migrate VDisks from one set of MDisks to another set of MDisks to decommission an old disk subsystem, to have better balanced performance across your virtualized environment, or simply to migrate data into the SVC environment transparently using image mode.

You can obtain further information about migration in Chapter 9, “Data migration” on page 675.

As you can see from the parameters in Example 7-63, before you can migrate your VDisk, you must know the name of the VDisk you want to migrate and the name of the MDG to which you want to migrate. To discover the name, simply run the svcinfo lsvdisk and svcinfo lsmdiskgrp commands.

When you know these details, you can issue the migratevdisk command, as shown in Example 7-63.

Example 7-63 svctask migratevdisk

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS47 -vdisk vdisk_C

Specifying the flag before the host name: Although the -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, it returns the following message:

CMMVC6070E An invalid or duplicated parameter, unaccompanied argument, or incorrect argument sequence has been detected. Ensure that the input is as per the help.

Important: After migration is started, it continues until completion unless it is stopped or suspended by an error condition or unless the VDisk being migrated is deleted.

370 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 397: San

This command moves vdisk_C to MDG_DS47.

You can run the svcinfo lsmigrate command at any time to see the status of the migration process (Example 7-64).

Example 7-64 svcinfo lsmigrate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 16
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0
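The lsmigrate output is a set of key-value pairs, one blank-line-separated stanza per active migration, so a monitoring script can turn it into dictionaries and watch the progress field. A sketch under that assumption (field names as in Example 7-64; the parser name is our own):

```python
def parse_lsmigrate(text):
    """Split 'svcinfo lsmigrate' output into one dict per migration stanza."""
    stanzas = [s for s in text.strip().split("\n\n") if s.strip()]
    result = []
    for stanza in stanzas:
        # Each line is "<field> <value>"; split only on the first whitespace run
        fields = dict(line.split(None, 1) for line in stanza.splitlines())
        result.append(fields)
    return result

sample = """migrate_type MDisk_Group_Migration
progress 12
migrate_source_vdisk_index 2
migrate_target_mdisk_grp 1
max_thread_count 4
migrate_source_vdisk_copy_id 0"""

migrations = parse_lsmigrate(sample)
print(int(migrations[0]["progress"]))  # percent complete
```

A polling loop would rerun svcinfo lsmigrate and call this parser until no stanzas remain, which indicates that the migration has finished.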

7.4.15 Migrate a VDisk to an image mode VDisk

Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path, which might be useful where the SVC is used as a data mover appliance. You can use the svctask migratetoimage command.

To migrate a VDisk to an image mode VDisk, the following rules apply:

- The destination MDisk must be greater than or equal to the size of the VDisk.

- The MDisk that is specified as the target must be in an unmanaged state.

- Regardless of the mode in which the VDisk starts, it is reported as managed mode during the migration.

- Both of the MDisks involved are reported as being image mode during the migration.

- If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes.

Tips: If insufficient extents are available within your target MDG, you receive an error message. Make sure that the source and target MDisk groups have the same extent size.

The optional threads parameter allows you to assign a priority to the migration process. The default is 4, which is the highest priority setting. However, if you want the process to take a lower priority relative to other types of I/O, you can specify 3, 2, or 1.

Progress: The progress is given as a percentage. When svcinfo lsmigrate no longer returns output for the migration, the process has finished.


Example 7-65 shows an example of the command.

Example 7-65 svctask migratetoimage

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk vdisk_A -mdisk mdisk8 -mdiskgrp MDG_Image

In this example, you migrate the data from vdisk_A onto mdisk8, and the MDisk is placed into the MDG_Image MDG.

7.4.16 Shrinking a VDisk

The shrinkvdisksize command reduces the capacity that is allocated to the particular VDisk by the amount that you specify. You cannot shrink the real size of a space-efficient volume to less than its used size. All capacities, including changes, must be in multiples of 512 bytes. An entire extent is reserved even if it is only partially used. The default capacity units are MB.

The command can be used to shrink the physical capacity that is allocated to a particular VDisk by the specified amount. The command can also be used to shrink the virtual capacity of a Space-Efficient VDisk without altering the physical capacity assigned to the VDisk:

- For a non-Space-Efficient VDisk, use the -size parameter.
- For a Space-Efficient VDisk real capacity, use the -rsize parameter.
- For the Space-Efficient VDisk virtual capacity, use the -size parameter.

When the virtual capacity of a Space-Efficient VDisk is changed, the warning threshold is automatically scaled to match. The new threshold is stored as a percentage.

The cluster arbitrarily reduces the capacity of the VDisk by removing a partial extent, one extent, or multiple extents from those extents that are allocated to the VDisk. You cannot control which extents are removed, and so, you cannot assume that it is unused space that is removed.

Assuming your operating system supports it, you can use the svctask shrinkvdisksize command to decrease the capacity of a given VDisk.

Example 7-66 on page 373 shows an example of this command.

Reducing disk size: Image mode VDisks cannot be reduced in size. They must first be migrated to Managed Mode. To run the shrinkvdisksize command on a mirrored VDisk, all of the copies of the VDisk must be synchronized.

Important:

- If the VDisk contains data, do not shrink the disk.

- Certain operating systems or file systems use what they consider to be the outer edge of the disk for performance reasons. This command can shrink FlashCopy target VDisks to the same capacity as the source.

- Before you shrink a VDisk, validate that the VDisk is not mapped to any host objects. If the VDisk is mapped, data is displayed. You can determine the exact capacity of the source or master VDisk by issuing the svcinfo lsvdisk -bytes vdiskname command. Shrink the VDisk by the required amount by issuing the svctask shrinkvdisksize -size disk_size -unit b | kb | mb | gb | tb | pb vdisk_name | vdisk_id command.


Example 7-66 svctask shrinkvdisksize

IBM_2145:ITSO-CLS1:admin>svctask shrinkvdisksize -size 44 -unit gb vdisk_A

This command shrinks the volume called vdisk_A from a total size of 80 GB, by 44 GB, to a new total size of 36 GB.
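Note that -size specifies the amount to remove, not the new total size. A quick sanity check of the arithmetic in Example 7-66 (plain Python, not an SVC command; the variable names are our own):

```python
GB = 1024 ** 3  # capacities are binary multiples

current_gb = 80    # capacity of vdisk_A before the command
shrink_by_gb = 44  # value passed with '-size 44 -unit gb'

new_gb = current_gb - shrink_by_gb
print(new_gb)  # new total capacity in GB

# All capacity changes must be multiples of 512 bytes
assert (shrink_by_gb * GB) % 512 == 0
```

Working out the new total this way before issuing the command avoids shrinking by the wrong amount.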

7.4.17 Showing a VDisk on an MDisk

Use the svcinfo lsmdiskmember command to display information about the VDisks that use space on a specific MDisk, as shown in Example 7-67.

Example 7-67 svcinfo lsmdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskmember mdisk1
id copy_id
0 0
2 0
3 0
4 0
5 0

This command displays a list of all of the VDisk IDs that correspond to the VDisk copies that use mdisk1.

To correlate the IDs displayed in this output to VDisk names, we can run the svcinfo lsvdisk command, which we discuss in more detail in 7.4, “Working with VDisks” on page 356.

7.4.18 Showing VDisks using a managed disk group

Use the svcinfo lsvdisk -filtervalue command, as shown in Example 7-68, to see which VDisks are part of a specific MDG. This command shows all of the VDisks that are part of the MDG called MDG_DS47.

Example 7-68 svcinfo lsvdisk -filtervalue: VDisks in the MDG

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mdisk_grp_name=MDG_DS47 -delim ,
id,name,status,mode,mdisk_grp_id,mdisk_grp_name,capacity,ctrl_LUN_#,controller_name,UID
5,mdisk5,online,managed,1,MDG_DS47,36.0GB,0000000000000000,DS4700,600a0b800026b28200003ea34851577c00000000000000000000000000000000
7,mdisk7,online,managed,1,MDG_DS47,36.0GB,0000000000000001,DS4700,600a0b80002904de00004188485157a400000000000000000000000000000000
9,mdisk9,online,managed,1,MDG_DS47,36.0GB,0000000000000002,DS4700,600a0b800026b28200003ed6485157b600000000000000000000000000000000
12,mdisk12,online,managed,1,MDG_DS47,36.0GB,0000000000000003,DS4700,600a0b80002904de000041ba485157d000000000000000000000000000000000
14,mdisk14,online,managed,1,MDG_DS47,36.0GB,0000000000000004,DS4700,600a0b800026b28200003f6c4851585200000000000000000000000000000000
18,mdisk18,online,managed,1,MDG_DS47,36.0GB,0000000000000005,DS4700,600a0b80002904de000042504851586800000000000000000000000000000000
19,mdisk19,online,managed,1,MDG_DS47,36.0GB,0000000000000006,DS4700,600a0b800026b28200003f9f4851588700000000000000000000000000000000


20,mdisk20,online,managed,1,MDG_DS47,36.0GB,0000000000000007,DS4700,600a0b80002904de00004282485158aa00000000000000000000000000000000

7.4.19 Showing which MDisks are used by a specific VDisk

Use the svcinfo lsvdiskmember command, as shown in Example 7-69, to show which MDisks a specific VDisk’s extents are from.

Example 7-69 svcinfo lsvdiskmember command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember vdisk_D
id
0
1
2
3
4
6
10
11
13
15
16
17

If you want to know more about these MDisks, you can run the svcinfo lsmdisk command, as explained in 7.2, “Working with managed disks and disk controller systems” on page 340 (using the ID displayed in Example 7-69 rather than the name).

7.4.20 Showing from which Managed Disk Group a VDisk has its extents

Use the svcinfo lsvdisk command, as shown in Example 7-70, to show to which MDG a specific VDisk belongs.

Example 7-70 svcinfo lsvdisk command: MDG name

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk vdisk_D
id 3
name vdisk_D
IO_group_id 1
IO_group_name io_grp1
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 80.0GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018301BF2800000000000003
throttling 0


preferred_node_id 6
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 80.00GB
real_capacity 80.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

If you want to know more about these MDGs, you can run the svcinfo lsmdiskgrp command, as explained in 7.2.11, “Working with Managed Disk Groups” on page 346.

7.4.21 Showing the host to which the VDisk is mapped

To show the hosts to which a specific VDisk has been assigned, run the svcinfo lsvdiskhostmap command, as shown in Example 7-71.

Example 7-71 svcinfo lsvdiskhostmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskhostmap -delim , vdisk_B
id,name,SCSI_id,host_id,host_name,wwpn,vdisk_UID
1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001
1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001

This command shows the host or hosts to which the vdisk_B VDisk was mapped. It is normal to see duplicate entries, because there are multiple paths between the cluster and the host. To ensure that the operating system on the host sees the disk only one time, you must install and configure multipathing software, such as the IBM Subsystem Device Driver (SDD).
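A script that consumes this listing can collapse the per-path duplicates by keying on vdisk_UID, which is identical on every path. A sketch (sample rows from Example 7-71; the helper name is our own):

```python
def unique_vdisk_uids(rows):
    """Collapse duplicate mapping rows (one per path) down to unique VDisk UIDs."""
    seen = []
    for row in rows:
        uid = row.split(",")[-1]  # vdisk_UID is the last column
        if uid not in seen:
            seen.append(uid)
    return seen

rows = [
    "1,vdisk_B,2,1,Nile,210000E08B892BCD,60050768018301BF2800000000000001",
    "1,vdisk_B,2,1,Nile,210000E08B89B8C0,60050768018301BF2800000000000001",
]
print(unique_vdisk_uids(rows))  # one UID, despite two paths
```

A result list longer than the number of mapped VDisks would indicate an unexpected mapping rather than a path duplicate.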

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the VDisk name. Otherwise, the command does not return any data.


7.4.22 Showing the VDisk to which the host is mapped

To show the VDisk to which a specific host has been assigned, run the svcinfo lshostvdiskmap command, as shown in Example 7-72.

Example 7-72 lshostvdiskmap command example

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

This command shows which VDisks are mapped to the host called Siam.

7.4.23 Tracing a VDisk from a host back to its physical disk

In many cases, you must verify exactly which physical disk is presented to the host, for example, from which MDG a specific volume comes. From the host side, the server administrator cannot see through the GUI on which physical disks the volumes reside. You must enter the command (listed in Example 7-73) from your multipath command prompt.

1. On your host, run the datapath query device command. You see a long disk serial number for each vpath device, as shown in Example 7-73.

Example 7-73 datapath query device

DEV#: 0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000005
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL      20       0
    1    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL    2343       0

DEV#: 1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000004
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL    2335       0
    1    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL       0       0

DEV#: 2  DEVICE NAME: Disk3 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 60050768018301BF2800000000000006
============================================================================
Path#    Adapter/Hard Disk            State   Mode    Select  Errors
    0    Scsi Port2 Bus0/Disk3 Part0  OPEN    NORMAL    2331       0
    1    Scsi Port3 Bus0/Disk3 Part0  OPEN    NORMAL       0       0

Specifying the -delim flag: Although the optional -delim flag normally comes at the end of the command string, in this case, you must specify this flag before the host name. Otherwise, the command does not return any data.

State: In Example 7-73, the state of each path is OPEN. Sometimes, you will see the state CLOSED, which does not necessarily indicate a problem, because it might be a result of the path’s processing stage.


2. Run the svcinfo lshostvdiskmap command to return a list of all assigned VDisks (Example 7-74).

Example 7-74 svcinfo lshostvdiskmap

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap -delim , Siam
id,name,SCSI_id,vdisk_id,vdisk_name,wwpn,vdisk_UID
3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005
3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004
3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006

Look for the disk serial number that matches your datapath query device output. This host was defined in our SVC as Siam.
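Steps 1 and 2 can be joined programmatically: the SERIAL that datapath query device reports for a device is the same value as the vdisk_UID in the lshostvdiskmap output. A sketch of the matching (data from Examples 7-73 and 7-74; the helper name is our own):

```python
def match_serial_to_vdisk(serial, hostmap_rows):
    """Return the vdisk_name whose vdisk_UID matches a datapath SERIAL."""
    for row in hostmap_rows:
        fields = row.split(",")
        if fields[-1].upper() == serial.upper():
            return fields[4]  # vdisk_name column
    return None

hostmap = [
    "3,Siam,0,5,MM_DB_Pri,210000E08B18FF8A,60050768018301BF2800000000000005",
    "3,Siam,1,4,MM_DBLog_Pri,210000E08B18FF8A,60050768018301BF2800000000000004",
    "3,Siam,2,6,MM_App_Pri,210000E08B18FF8A,60050768018301BF2800000000000006",
]
print(match_serial_to_vdisk("60050768018301BF2800000000000005", hostmap))
```

A None result means the device seen by the host does not correspond to any mapping on this SVC cluster, which is itself useful diagnostic information.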

3. Run the svcinfo lsvdiskmember vdiskname command for a list of the MDisk or MDisks that make up the specified VDisk (Example 7-75).

Example 7-75 svcinfo lsvdiskmember

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember MM_DBLog_Pri
id
0
1
2
3
4
10
11
13
15
16
17

4. Query the MDisks with the svcinfo lsmdisk mdiskID command to find their controller and LUN number information, as shown in Example 7-76. The output displays the controller name and the controller LUN ID to help you (provided you gave your controller a unique name, such as a serial number) to track back to a LUN within the disk subsystem.

Example 7-76 svcinfo lsmdisk command

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk 3
id 3
name mdisk3
status online
mode managed
mdisk_grp_id 0
mdisk_grp_name MDG_DS45
capacity 36.0GB
quorum_index
block_size 512
controller_name DS4500
ctrl_type 4
ctrl_WWNN 200400A0B8174431
controller_id 0
path_count 4
max_path_count 4
ctrl_LUN_# 0000000000000003
UID 600a0b8000174431000000e44713575400000000000000000000000000000000
preferred_WWPN 200400A0B8174433


active_WWPN 200400A0B8174433

7.5 Scripting under the CLI for SVC task automation

Scripting lends itself well to the automation of regular operational jobs. You can use any available shell to develop scripts. To script against an SVC Console running Windows 2000 or later, you can either purchase licensed shell emulation software or download Cygwin from this Web site:

http://www.cygwin.com

Scripting enhances the productivity of SVC administrators and the integration of their storage virtualization environment.

We show an example of scripting in Appendix A, “Scripting” on page 785.

You can create your own customized scripts to automate a large number of tasks for completion at a variety of times and run them through the CLI.

In large SAN environments where scripting with svctask commands is used, we recommend that you keep the scripting as simple as possible, because fallback, documentation, and verification of a script prior to execution are harder to manage in a large SAN environment.

7.6 SVC advanced operations using the CLI

In the following topics, we describe the commands that we think best represent advanced operational commands.

7.6.1 Command syntax

Two major command sets are available:

- The svcinfo command set allows us to query the various components within the SVC environment.

- The svctask command set allows us to make changes to the various components within the SVC.

When the command syntax is shown, you see several parameters in square brackets, for example, [parameter], which indicates that the parameter is optional in most if not all instances. Any parameter that is not in square brackets is required information. You can view the syntax of a command by entering one of the following commands:

- svcinfo -?: Shows a complete list of information commands.
- svctask -?: Shows a complete list of task commands.
- svcinfo commandname -?: Shows the syntax of information commands.
- svctask commandname -?: Shows the syntax of task commands.
- svcinfo commandname -filtervalue?: Shows which filters you can use to reduce the output of the information commands.

Help: You can also use -h instead of -?, for example, svcinfo -h or svctask commandname -h.


If you look at the syntax of a command by typing svcinfo commandname -?, you often see -filter listed as a parameter. Be aware that the correct parameter is -filtervalue.

7.6.2 Organizing on window content

Sometimes the output of a command can be long and difficult to read in the window. In cases where you need information about a subset of the total number of available items, you can use filtering to reduce the output to a more manageable size.

Filtering

To reduce the output that is displayed by an svcinfo command, you can specify a number of filters, depending on which svcinfo command you are running. To see which filters are available, type the command followed by the -filtervalue? flag, as shown in Example 7-77.

Example 7-77 svcinfo lsvdisk -filtervalue? command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue?

Filters for this view are :
name
id
IO_group_id
IO_group_name
status
mdisk_grp_name
mdisk_grp_id
capacity
type
FC_id
FC_name
RC_id
RC_name
vdisk_name
vdisk_id
vdisk_UID
fc_map_count
copy_count

When you know the filters, you can be more selective in generating output:

- Multiple filters can be combined to create specific searches.
- You can use an asterisk (*) as a wildcard when using names.
- When capacity is used, the units must also be specified using -u b | kb | mb | gb | tb | pb.

For example, if we issue the svcinfo lsvdisk command with no filters, we see the output that is shown in Example 7-78 on page 380.

Tip: You can use the up and down arrow keys on your keyboard to recall commands that were recently issued. Then, you can use the left and right, backspace, and delete keys to edit commands before you resubmit them.


Example 7-78 svcinfo lsvdisk command: No filters

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1
2,vdisk2,1,io_grp1,online,0,MDG_DS45,40.0GB,striped,,,,,60050768018301BF2800000000000002,0,1
3,vdisk3,1,io_grp1,online,0,MDG_DS45,80.0GB,striped,,,,,60050768018301BF2800000000000003,0,1

If we now add a filter to our svcinfo command (such as mdisk_grp_name), we can reduce the output, as shown in Example 7-79.

Example 7-79 svcinfo lsvdisk command: With a filter

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue mdisk_grp_name=*7 -delim ,
id,name,IO_group_id,IO_group_name,status,mdisk_grp_id,mdisk_grp_name,capacity,type,FC_id,FC_name,RC_id,RC_name,vdisk_UID,fc_map_count,copy_count
0,vdisk0,0,io_grp0,online,1,MDG_DS47,10.0GB,striped,,,,,60050768018301BF2800000000000000,0,1
1,vdisk1,1,io_grp1,online,1,MDG_DS47,100.0GB,striped,,,,,60050768018301BF2800000000000001,0,1

This command shows all of the VDisks where the mdisk_grp_name ends with a 7. You can use the wildcard asterisk character (*) when names are used.
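The asterisk wildcard behaves like ordinary shell globbing, which you can emulate when post-processing listings with Python's fnmatch module. A sketch (the group names come from the examples above; the SVC itself applies -filtervalue server-side, so this is only an illustration of the matching rule):

```python
from fnmatch import fnmatch

mdisk_groups = ["MDG_DS45", "MDG_DS47", "MDG_Image"]

# Equivalent of: -filtervalue mdisk_grp_name=*7
matches = [name for name in mdisk_groups if fnmatch(name, "*7")]
print(matches)  # only the group whose name ends with 7
```

The same pattern syntax applies to any name-based filter in the list above.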

7.7 Managing the cluster using the CLI

In these sections, we show cluster administration.

7.7.1 Viewing cluster properties

Use the svcinfo lscluster command to display summary information about all of the clusters that are configured to the SVC, as shown in Example 7-80 on page 381.

Tip: The -delim parameter truncates the content in the window and separates data fields with the specified delimiter character instead of wrapping text over multiple lines. This parameter is normally used in cases where you need to generate reports during script execution.


Example 7-80 svcinfo lscluster command

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 20        0000020063E03A38
0000020061006FCA ITSO-CLS2 remote   fully_configured 50        0000020061006FCA

7.7.2 Changing cluster settings

Use the svctask chcluster command to change the settings of the cluster. This command modifies the specific features of a cluster. You can change multiple features by issuing a single command.

If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address. The service IP address is not used until a node is removed from the cluster. If this node cannot rejoin the cluster, you can bring the node up in service mode. In this mode, the node can be accessed as a stand-alone node using the service IP address.

All command parameters are optional; however, you must specify at least one parameter.

7.7.3 Cluster authentication

An important point with respect to authentication in SVC 5.1 is that the superuser password replaces the previous cluster admin password. The superuser is a member of the Security Admin user group. If this password is not known, you can reset it from the cluster front panel.

We describe the authentication method in detail in Chapter 2, “IBM System Storage SAN Volume Controller” on page 7.

Note: Only a user with administrator authority can change the password.

After the cluster IP address is changed, you lose the open shell connection to the cluster. You must reconnect with the newly specified IP address.

Important: Changing the speed on a running cluster breaks I/O service to the attached hosts. Before changing the fabric speed, stop I/O from the active hosts and force these hosts to flush any cached data by unmounting volumes (for UNIX host types) or by removing drive letters (for Windows host types). Specific hosts might need to be rebooted to detect the new fabric speed. The fabric speed setting applies only to the 4F2 and 8F2 model nodes in a cluster. The 8F4 nodes automatically negotiate the fabric speed on a per-port basis.

Tip: If you do not want the password to display as you enter it on the command line, omit the new password. The command line then prompts you to enter and confirm the password without the password being displayed.


The only authentication setting that can be changed with the chcluster command is the Service account user password, and to change it, you need administrative rights. The Service account user password is changed in Example 7-81.

Example 7-81 svctask chcluster -servicepwd (for the Service account)

IBM_2145:ITSO-CLS1:admin>svctask chcluster -servicepwd
Enter a value for -password :
Enter password:
Confirm password:
IBM_2145:ITSO-CLS1:admin>

See 7.10.1, “Managing users using the CLI” on page 394 for more information about managing users.

7.7.4 iSCSI configuration

Starting with SVC 5.1, iSCSI is introduced as a supported method of communication between the SVC and hosts. All back-end storage and intracluster communication still uses FC and the SAN, so iSCSI cannot be used for that communication.

In Chapter 2, “IBM System Storage SAN Volume Controller” on page 7, we described in detail how iSCSI works. In this section, we show how to configure our cluster for use with iSCSI.

We will configure our nodes to use both the primary and secondary Ethernet ports for iSCSI; these ports also carry the cluster IP. When we configure our nodes to be used with iSCSI, we do not affect our cluster IP. The cluster IP is changed as shown in 7.7.2, “Changing cluster settings” on page 381.

It is important to know that we can have more than a one-to-one relationship between IP addresses and physical connections. Each port on each node supports up to a four-to-one (4:1) relationship: two IPv4 addresses plus two IPv6 addresses (four in total) on one physical connection.

We describe this function in Chapter 2, “IBM System Storage SAN Volume Controller” on page 7.

iSCSI authentication (CHAP) can be performed in two ways: for the whole cluster or per host connection. Example 7-82 shows configuring CHAP for the whole cluster.

Example 7-82 Setting a CHAP secret for the entire cluster to “passw0rd”

IBM_2145:ITSO-CLS1:admin>svctask chcluster -iscsiauthmethod chap -chapsecret passw0rd
IBM_2145:ITSO-CLS1:admin>

In our scenario, we have our cluster IP of 9.64.210.64, which is not affected during our configuration of the node’s IP addresses.

We start by listing our ports using the svcinfo lsportip command. We can see that we have two ports per node with which to work. Both ports can have two IP addresses that can be used for iSCSI.

Tip: When reconfiguring IP ports, be aware that already configured iSCSI connections will need to reconnect if changes are made to the IP addresses of the nodes.


In our example, we configure the secondary port in both nodes in our I/O Group, which is shown in Example 7-83.

Example 7-83 Configuring secondary Ethernet port on SVC nodes

IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 1 -ip 9.8.7.1 -gw 9.0.0.1 -mask 255.255.255.0 2
IBM_2145:ITSO-CLS1:admin>svctask cfgportip -node 2 -ip 9.8.7.3 -gw 9.0.0.1 -mask 255.255.255.0 2

While both nodes are online, each node will be available to iSCSI hosts on the IP address that we have configured. Because iSCSI failover between nodes is enabled automatically, if a node goes offline for any reason, its partner node in the I/O group will become available on the failed node’s port IP address, ensuring that hosts will continue to be able to perform I/O. The svcinfo lsportip command will display which port IP addresses are currently active on each node.
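When many node ports must be configured, the cfgportip invocations are easy to generate from a small table of addresses. A sketch (addresses from Example 7-83; the command strings are only composed and printed here, not executed against a cluster):

```python
def build_cfgportip(node_id, ip, gw, mask, port):
    """Compose an svctask cfgportip command line for one node port."""
    return (f"svctask cfgportip -node {node_id} "
            f"-ip {ip} -gw {gw} -mask {mask} {port}")

# Hypothetical plan: (node id, port IP) pairs for port 2 of each node
plan = [(1, "9.8.7.1"), (2, "9.8.7.3")]
commands = [build_cfgportip(n, ip, "9.0.0.1", "255.255.255.0", 2)
            for n, ip in plan]
for cmd in commands:
    print(cmd)
```

Generating the command lines first lets you review the whole plan before pasting it into the CLI session.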

7.7.5 Modifying IP addresses

Starting with SVC 5.1, we can use both IP ports of the nodes. However, the first time that you configure a second port, all IP information is required, because port 1 on the cluster must always have one stack fully configured.

There are now two active cluster ports on the configuration node. If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address if connected through that port.

List the IP address of the cluster by issuing the svcinfo lsclusterip command. Modify the IP address by issuing the svctask chclusterip command. You can either specify a static IP address or have the system assign a dynamic IP address, as shown in Example 7-84.

Example 7-84 svctask chclusterip -clusterip

IBM_2145:ITSO-CLS1:admin>svctask chclusterip -clusterip 10.20.133.5 -gw 10.20.135.1 -mask 255.255.255.0 -port 1

This command changes the current IP address of the cluster to 10.20.133.5.

7.7.6 Supported IP address formats

Table 7-1 on page 384 shows the IP address formats.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the CLI is broken and the PuTTY application automatically closes. You must relaunch the PuTTY application and point to the new IP address, but your SSH key will still work.


Table 7-1 ip_address_list formats

We have completed the tasks that are required to change the IP addresses (cluster and service) of the SVC environment.

7.7.7 Setting the cluster time zone and timeUse the -timezone parameter to specify the numeric ID of the time zone that you want to set. Issue the svcinfo lstimezones command to list the time zones that are available on the cluster; this command displays a list of valid time zone settings.

Setting the cluster time zonePerform the following steps to set the cluster time zone and time:

1. Find out for which time zone your cluster is currently configured. Enter the svcinfo showtimezone command, as shown in Example 7-85.

Example 7-85 svcinfo showtimezone command

IBM_2145:ITSO-CLS1:admin>svcinfo showtimezoneid timezone522 UTC

2. To find the time zone code that is associated with your time zone, enter the svcinfo lstimezones command, as shown in Example 7-86. A truncated list is provided for this example. If this setting is correct (for example, 522 UTC), you can go to Step 4. If not, continue with Step 3.

Example 7-86 svcinfo lstimezones command

IBM_2145:ITSO-CLS1:admin>svcinfo lstimezonesid timezone..507 Turkey508 UCT509 Universal510 US/Alaska511 US/Aleutian512 US/Arizona

IP type                                           ip_address_list format
IPv4 (no port set, SVC uses default)              1.2.3.4
IPv4 with specific port                           1.2.3.4:22
Full IPv6, default port                           1234:1234:0001:0123:1234:1234:1234:1234
Full IPv6, default port, leading zeros suppressed 1234:1234:1:123:1234:1234:1234:1234
Full IPv6 with port                               [2002:914:fc12:848:209:6bff:fe8c:4ff6]:23
Zero-compressed IPv6, default port                2002::4ff6
Zero-compressed IPv6 with port                    [2002::4ff6]:23
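A script that accepts entries in the Table 7-1 formats has to distinguish a bare address from an address-with-port. The following Python sketch is our illustration (not an SVC utility); the default port of 22 is an assumption chosen for the example.

```python
# Illustrative parser for the ip_address_list formats in Table 7-1.
# DEFAULT_PORT is an assumption for this sketch; use the port your
# environment actually uses.
DEFAULT_PORT = 22

def parse_ip_address(entry, default_port=DEFAULT_PORT):
    """Split an ip_address_list entry into a (host, port) pair."""
    if entry.startswith("["):                  # bracketed IPv6 with a port
        host, _, port = entry[1:].partition("]:")
        return host, int(port)
    if entry.count(":") == 1:                  # IPv4 with a specific port
        host, _, port = entry.partition(":")
        return host, int(port)
    return entry, default_port                 # bare IPv4 or bare IPv6

print(parse_ip_address("1.2.3.4"))             # ('1.2.3.4', 22)
print(parse_ip_address("[2002::4ff6]:23"))     # ('2002::4ff6', 23)
```

The key observation is that a bare IPv6 address always contains at least two colons, so a single colon reliably signals the IPv4-with-port form.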

Tip: If you have changed the time zone, you must clear the error log dump directory before you can view the error log through the Web application.

384 Implementing the IBM System Storage SAN Volume Controller V5.1


513 US/Central
514 US/Eastern
515 US/East-Indiana
516 US/Hawaii
517 US/Indiana-Starke
518 US/Michigan
519 US/Mountain
520 US/Pacific
521 US/Samoa
522 UTC
..

3. Now that you know which time zone code is correct for you, set the time zone by issuing the svctask settimezone command (Example 7-87).

Example 7-87 svctask settimezone command

IBM_2145:ITSO-CLS1:admin>svctask settimezone -timezone 520

4. Set the cluster time by issuing the svctask setclustertime command (Example 7-88).

Example 7-88 svctask setclustertime command

IBM_2145:ITSO-CLS1:admin>svctask setclustertime -time 061718402008

The format of the time is MMDDHHmmYYYY.
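Because MMDDHHmmYYYY is easy to mistype, it can be worth validating the string before passing it to svctask setclustertime. The following sketch is our illustration (parse_svc_time is our helper name, not an SVC command); it relies on strptime consuming the fixed-width fields in order.

```python
# Illustrative validation of a setclustertime argument in MMDDHHmmYYYY form.
from datetime import datetime

def parse_svc_time(value):
    """Parse a setclustertime argument such as 061718402008."""
    return datetime.strptime(value, "%m%d%H%M%Y")

print(parse_svc_time("061718402008"))   # 2008-06-17 18:40:00
```

An invalid string (for example, a month of 13) raises ValueError, so the check can be made before the command is ever sent to the cluster.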

You have completed the necessary tasks to set the cluster time zone and time.

7.7.8 Starting statistics collection

Statistics are collected at the end of each sampling period (as specified by the -interval parameter). These statistics are written to a file. A new file is created at the end of each sampling period. Separate files are created for MDisks, VDisks, and node statistics.

Use the svctask startstats command to start the collection of statistics, as shown in Example 7-89.

Example 7-89 svctask startstats command

IBM_2145:ITSO-CLS1:admin>svctask startstats -interval 15

The interval that we specify (minimum 1, maximum 60) is in minutes. This command starts statistics collection and gathers data at 15-minute intervals.

To verify that statistics collection is set, display the cluster properties again, as shown in Example 7-90.

Example 7-90 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
statistics_status on
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --


We have completed the required tasks to start statistics collection on the cluster.

7.7.9 Stopping a statistics collection

Use the svctask stopstats command to stop the collection of statistics within the cluster (Example 7-91).

Example 7-91 svctask stopstats command

IBM_2145:ITSO-CLS1:admin>svctask stopstats

This command stops the statistics collection; it returns no confirmation message.

To verify that the statistics collection is stopped, display the cluster properties again, as shown in Example 7-92.

Example 7-92 Statistics collection status and frequency

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1

statistics_status off
statistics_frequency 15
-- Note that the output has been shortened for easier reading. --

Notice that the interval parameter is not changed, but the status is off. We have completed the required tasks to stop statistics collection on our cluster.

7.7.10 Status of a copy operation

Use the svcinfo lscopystatus command, as shown in Example 7-93, to determine whether a file copy operation is in progress. Only one file copy operation can be performed at a time. The output of this command is a status of active or inactive.

Example 7-93 lscopystatus command

IBM_2145:ITSO-CLS1:admin>svcinfo lscopystatus
status
inactive

7.7.11 Shutting down a cluster

If all input power to an SVC cluster is to be removed for more than a few minutes (for example, if the machine room power is to be shut down for maintenance), it is important to shut down the cluster before removing the power. If the input power is removed from the uninterruptible power supply units without first shutting down the cluster and the uninterruptible power supply units, the uninterruptible power supply units remain operational and eventually become drained of power.

When input power is restored to the uninterruptible power supply units, they start to recharge. However, the SVC does not permit any I/O activity to be performed to the VDisks until the uninterruptible power supply units are charged enough to enable all of the data on the SVC nodes to be destaged in the event of a subsequent unexpected power loss. Recharging the uninterruptible power supply can take as long as two hours.


Shutting down the cluster prior to removing input power to the uninterruptible power supply units prevents the battery power from being drained. It also makes it possible for I/O activity to be resumed as soon as input power is restored.

You can use the following procedure to shut down the cluster:

1. Use the svctask stopcluster command to shut down your SVC cluster (Example 7-94).

Example 7-94 svctask stopcluster

IBM_2145:ITSO-CLS1:admin>svctask stopcluster
Are you sure that you want to continue with the shut down?

This command shuts down the SVC cluster. All data is flushed to disk before the power is removed. At this point, you lose administrative contact with your cluster, and the PuTTY application automatically closes.

2. You will be presented with the following message:

Warning: Are you sure that you want to continue with the shut down?

Ensure that you have stopped all FlashCopy mappings, Metro Mirror (Remote Copy) relationships, data migration operations, and forced deletions before continuing. Entering y (or yes) executes the command; entering anything else causes the command not to execute. In either case, no feedback is displayed.

3. We have completed the tasks that are required to shut down the cluster. To shut down the uninterruptible power supply units, press the power on button on the front panel of each uninterruptible power supply unit.

7.8 Nodes

This section details the tasks that can be performed at an individual node level.

Important: Before shutting down a cluster, ensure that all I/O operations are stopped that are destined for this cluster, because you will lose all access to all VDisks being provided by this cluster. Failure to do so can result in failed I/O operations being reported to the host operating systems.

Begin the process of quiescing all I/O to the cluster by stopping the applications on the hosts that are using the VDisks that are provided by the cluster.

Restarting the cluster: To restart the cluster, you must first restart the uninterruptible power supply units by pressing the power button on their front panels. Then, press the power on button on the service panel of one of the nodes within the cluster. After the node is fully booted up (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted, you can reestablish administrative contact using PuTTY, and your cluster is fully operational again.


7.8.1 Viewing node details

Use the svcinfo lsnode command to view the summary information about the nodes that are defined within the SVC environment. To view more details about a specific node, append the node name (for example, SVCNode_1) to the command.

Example 7-95 shows both of these commands.

Example 7-95 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,node1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,node2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,node3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
4,node4,100066C108,50050768010027E2,online,1,io_grp1,no,20400001864C1008,8G4
IBM_2145:ITSO-CLS1:admin>svcinfo lsnode node1
id 1
name node1
UPS_serial_number 1000739007
WWNN 50050768010037E5
status online
IO_group_id 0
IO_group_name io_grp0
partner_node_id 2
partner_node_name node2
config_node yes
UPS_unique_id 20400001C3240007
port_id 50050768014037E5
port_status active
port_speed 4Gb
port_id 50050768013037E5
port_status active
port_speed 4Gb
port_id 50050768011037E5
port_status active
port_speed 4Gb
port_id 50050768012037E5
port_status active
port_speed 4Gb
hardware 8G4

7.8.2 Adding a node

After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and the cluster Web interface, only one node (the configuration node) is set up.

To have a fully functional SVC cluster, you must add a second node to the configuration.

Tip: The -delim parameter collapses each record onto a single line, separating the data fields with the specified character (a comma in our example), as opposed to wrapping the output over multiple lines.
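Delimited output such as this is convenient to post-process in a script (for example, when captured over SSH). The following sketch, using an abbreviated copy of the Example 7-95 output, is our illustration and not part of the SVC toolset.

```python
# Illustrative parsing of svcinfo ... -delim , output into dictionaries.
# The sample text is an abbreviated copy of the lsnode output above.
raw = """id,name,UPS_serial_number,status,IO_group_name,config_node
1,node1,1000739007,online,io_grp0,yes
2,node2,1000739004,online,io_grp0,no"""

def parse_delim_output(text, delim=","):
    """Turn header-plus-rows delimited output into a list of dicts."""
    lines = text.strip().splitlines()
    header = lines[0].split(delim)
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]

nodes = parse_delim_output(raw)
print(nodes[0]["name"], nodes[0]["config_node"])   # node1 yes
```

The same helper works for any of the list commands in this chapter, because they all share the header-row-plus-data-rows shape when -delim is used.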


To add a node to a cluster, gather the necessary information, as explained in these steps:

- Before you can add a node, you must know which unconfigured nodes you have as "candidates". Issue the svcinfo lsnodecandidate command (Example 7-96).

- You must specify to which I/O Group you are adding the node. If you enter the svcinfo lsnode command, you can easily identify the I/O Group ID of the group to which you are adding your node, as shown in Example 7-97.

Example 7-96 svcinfo lsnodecandidate command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodecandidate
id               panel_name UPS_serial_number UPS_unique_id    hardware
50050768010027E2 108283     100066C108        20400001864C1008 8G4
50050768010037DC 104603     1000739004        20400001C3240004 8G4

Example 7-97 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware,iscsi_name,iscsi_alias
1,ITSO_CLS1_0,100089J040,50050768010059E7,online,0,io_grp0,yes,2040000209680100,8G4,iqn.1986-03.com.ibm:2145.ITSO_CLS1_0.ITSO_CLS1_0_N0,

Now that we know the available nodes, we can use the svctask addnode command to add the node to the SVC cluster configuration.

Example 7-98 shows the command to add a node to the SVC cluster.

Example 7-98 svctask addnode (wwnodename) command

IBM_2145:ITSO-CLS1:admin>svctask addnode -wwnodename 50050768010027E2 -name Node2 -iogrp io_grp0
Node, id [2], successfully added

This command adds the candidate node with the wwnodename of 50050768010027E2 to the I/O Group called io_grp0.

We used the -wwnodename parameter (50050768010027E2). However, we can also use the -panelname parameter (108283) instead (Example 7-99). If you are standing in front of the node, it is easier to read the panel name than it is to get the WWNN.

Example 7-99 svctask addnode (panelname) command

IBM_2145:ITSO-CLS1:admin>svctask addnode -panelname 108283 -name Node2 -iogrp io_grp0

We also used the optional -name parameter (Node2). If you do not provide the -name parameter, the SVC automatically generates the name nodex (where x is the ID sequence number that is assigned internally by the SVC).

Tip: The node that you want to add must have a separate uninterruptible power supply unit serial number from the uninterruptible power supply unit on the first node.

Name: If you want to provide a name, you can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “node” (because this prefix is reserved for SVC assignment only).
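These naming rules can be expressed as a small validation sketch. This is our illustration of the rules as quoted above, not the actual SVC validation code; the same pattern applies to the I/O Group naming rules later in this chapter with a reserved prefix of "iogrp".

```python
# Illustrative check of the SVC object-naming rules quoted above:
# 1 to 15 characters from A-Z, a-z, 0-9, dash, and underscore; the name
# cannot start with a number, a dash, or the reserved prefix.
import re

def is_valid_name(name, reserved_prefix="node"):
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,15}", name):
        return False
    if name[0].isdigit() or name[0] == "-":
        return False
    return not name.startswith(reserved_prefix)

print(is_valid_name("ITSO_CLS1_Node1"))   # True
print(is_valid_name("node5"))             # False (reserved prefix)
```

Checking a name locally before issuing svctask addnode or chnode saves a round trip to the cluster for an error that is easy to catch up front.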


If the svctask addnode command returns no information even though your second node is powered on and the zones are correctly defined, preexisting cluster configuration data might be stored in the node. If you are sure that this node is not part of another active SVC cluster, use the service panel to delete the existing cluster information. After this action is complete, reissue the svcinfo lsnodecandidate command and you will see the node listed.

7.8.3 Renaming a node

Use the svctask chnode command to rename a node within the SVC cluster configuration.

Example 7-100 svctask chnode -name command

IBM_2145:ITSO-CLS1:admin>svctask chnode -name ITSO_CLS1_Node1 4

This command renames node ID 4 to ITSO_CLS1_Node1.

7.8.4 Deleting a node

Use the svctask rmnode command to remove a node from the SVC cluster configuration (Example 7-101).

Example 7-101 svctask rmnode command

IBM_2145:ITSO-CLS1:admin>svctask rmnode node4

This command removes node4 from the SVC cluster.

Because node4 was also the configuration node, the SVC transfers the configuration node responsibilities to a surviving node within the I/O Group. Unfortunately, the PuTTY session cannot be dynamically passed to the surviving node. Therefore, the PuTTY application loses communication and closes automatically.

We must restart the PuTTY application to establish a secure session with the new configuration node.

7.8.5 Shutting down a node

On occasion, it can be necessary to shut down a single node within the cluster to perform tasks, such as scheduled maintenance, while leaving the SVC environment up and running.

Use the svctask stopcluster -node command, as shown in Example 7-102 on page 391, to shut down a single node.

Name: The chnode command specifies the new name first. You can use letters A to Z and a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “node” (because this prefix is reserved for SVC assignment only).

Important: If this node is the last node in an I/O Group, and there are VDisks still assigned to the I/O Group, the node is not deleted from the cluster.

If this node is the last node in the cluster, and the I/O Group has no VDisks remaining, the cluster is destroyed and all virtualization information is lost. Any data that is still required must be backed up or migrated prior to destroying the cluster.


Example 7-102 svctask stopcluster -node command

IBM_2145:ITSO-CLS1:admin>svctask stopcluster -node n4
Are you sure that you want to continue with the shut down?

This command shuts down node n4 in a graceful manner. When this node has been shut down, the other node in the I/O Group will destage the contents of its cache and will go into write-through mode until the node is powered up and rejoins the cluster.

If this is the last node in an I/O Group, all access to the VDisks in the I/O Group will be lost, and you must specify the -force flag. Verify that you really want to shut down this node before executing the command.

By reissuing the svcinfo lsnode command (Example 7-103), we can see that the node is now offline.

Example 7-103 svcinfo lsnode command

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode -delim ,
id,name,UPS_serial_number,WWNN,status,IO_group_id,IO_group_name,config_node,UPS_unique_id,hardware
1,n1,1000739007,50050768010037E5,online,0,io_grp0,yes,20400001C3240007,8G4
2,n2,1000739004,50050768010037DC,online,0,io_grp0,no,20400001C3240004,8G4
3,n3,100066C107,5005076801001D1C,online,1,io_grp1,no,20400001864C1007,8G4
6,n4,100066C108,0000000000000000,offline,1,io_grp1,no,20400001864C1008,unknown

IBM_2145:ITSO-CLS1:admin>svcinfo lsnode n4
CMMVC5782E The object specified is offline.

We have completed the tasks that are required to view, add, delete, rename, and shut down a node within an SVC environment.

7.9 I/O Groups

This section explains the tasks that you can perform at an I/O Group level.

7.9.1 Viewing I/O Group details

Use the svcinfo lsiogrp command, as shown in Example 7-104 on page 392, to view information about the I/O Groups that are defined within the SVC environment.

Important: There is no need to stop FlashCopy mappings, Remote Copy relationships, and data migration operations. The other cluster will handle these activities, but be aware that this cluster is a single point of failure now.

Restart: To restart the node manually, press the power on button from the service panel of the node.


Example 7-104 I/O Group details

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrp
id name            node_count vdisk_count host_count
0  io_grp0         2          3           3
1  io_grp1         2          4           3
2  io_grp2         0          0           2
3  io_grp3         0          0           2
4  recovery_io_grp 0          0           0

As we can see, the SVC predefines five I/O Groups. In a four-node cluster (such as our example), only two I/O Groups are actually in use. The other I/O Groups (io_grp2 and io_grp3) are for a six-node or eight-node cluster.

The recovery I/O Group is a temporary home for VDisks when all nodes in the I/O Group that normally owns them have suffered multiple failures. This design allows us to move the VDisks to the recovery I/O Group and then into a working I/O Group. Of course, while the VDisks are temporarily assigned to the recovery I/O Group, I/O access is not possible.

7.9.2 Renaming an I/O Group

Use the svctask chiogrp command to rename an I/O Group (Example 7-105).

Example 7-105 svctask chiogrp command

IBM_2145:ITSO-CLS1:admin>svctask chiogrp -name io_grpA io_grp1

This command renames the I/O Group io_grp1 to io_grpA.

To verify that the renaming was successful, issue the svcinfo lsiogrp command again.

We have completed the tasks that are required to rename an I/O Group.

7.9.3 Adding and removing hostiogrp

To map or unmap a specific host object to a specific I/O Group (for example, to reach the maximum number of hosts supported by an SVC cluster), use the svctask addhostiogrp command to map a specific host to a specific I/O Group, as shown in Example 7-106 on page 393.

Name: The chiogrp command specifies the new name first.

If you want to provide a name, you can use letters A to Z, letters a to z, numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, dash, or the word “iogrp” (because this prefix is reserved for SVC assignment only).


Example 7-106 svctask addhostiogrp command

IBM_2145:ITSO-CLS1:admin>svctask addhostiogrp -iogrp 1 Kanaga

Parameters:

- -iogrp iogrp_list | -iogrpall

Specifies a list of one or more I/O Groups that must be mapped to the host. This parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all the I/O Groups must be mapped to the specified host. This parameter is mutually exclusive with -iogrp.

- -host host_id_or_name

Identifies, by ID or name, the host to which the I/O Groups must be mapped.

Use the svctask rmhostiogrp command to unmap a specific host from a specific I/O Group, as shown in Example 7-107.

Example 7-107 svctask rmhostiogrp command

IBM_2145:ITSO-CLS1:admin>svctask rmhostiogrp -iogrp 0 Kanaga

Parameters:

- -iogrp iogrp_list | -iogrpall

Specifies a list of one or more I/O Groups that must be unmapped from the host. This parameter is mutually exclusive with -iogrpall. The -iogrpall option specifies that all of the I/O Groups must be unmapped from the specified host. This parameter is mutually exclusive with -iogrp.

- -force

If the removal of a host to I/O Group mapping will result in the loss of VDisk to host mappings, the command fails if the -force flag is not used. The -force flag, however, overrides this behavior and forces the deletion of the host to I/O Group mapping.

- host_id_or_name

Identifies, by ID or name, the host from which the I/O Groups must be unmapped.

7.9.4 Listing I/O Groups

To list all of the I/O Groups that are mapped to the specified host and vice versa, use the svcinfo lshostiogrp command, specifying the host name Kanaga, as shown in Example 7-108.

Example 7-108 svcinfo lshostiogrp command

IBM_2145:ITSO-CLS1:admin>svcinfo lshostiogrp Kanaga
id name
1  io_grp1

To list all of the host objects that are mapped to the specified I/O Group, use the svcinfo lsiogrphost command, as shown in Example 7-109 on page 394.


Example 7-109 svcinfo lsiogrphost command

IBM_2145:ITSO-CLS1:admin>svcinfo lsiogrphost io_grp1
id name
1  Nile
2  Kanaga
3  Siam

In Example 7-109, io_grp1 is the I/O Group name.

7.10 Managing authentication

In the following topics, we show authentication administration.

7.10.1 Managing users using the CLI

In this section, we demonstrate operating and managing authentication using the CLI.

All users must now be a member of a predefined user group. You can list those groups by using the svcinfo lsusergrp command, as shown in Example 7-110.

Example 7-110 svcinfo lsusergrp command

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

Example 7-111 is a simple example of creating a user. User John is added to the user group Monitor with the password m0nitor.

Example 7-111 svctask mkuser called John with password m0nitor

IBM_2145:ITSO-CLS1:admin>svctask mkuser -name John -usergrp Monitor -password m0nitor
User, id [2], successfully created
IBM_2145:ITSO-CLS1:admin>

Local users are those users that are not authenticated by a remote authentication server. Remote users are those users that are authenticated by a remote central registry server.

The user groups already have a defined authority role, as shown in Table 7-2 on page 395.


Table 7-2 Authority roles

7.10.2 Managing user roles and groups

Role-based security commands are used to restrict the administrative abilities of a user. We cannot create new user roles, but we can create new user groups and assign a predefined role to our group.

To view the user groups and their roles on your cluster, use the svcinfo lsusergrp command, as shown in Example 7-112 on page 396.

User group: Security admin
Role: All commands
Users: Superusers

User group: Administrator
Role: All commands except these svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset
Users: Administrators that control the SVC

User group: Copy operator
Role: All svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership
Users: Those users that control all of the copy functionality of the cluster

User group: Service
Role: All svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime
Users: Those users that perform service maintenance and other hardware tasks on the cluster

User group: Monitor
Role: All svcinfo commands, the svctask commands finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig backup command
Users: Those users only needing view access


Example 7-112 svcinfo lsusergrp command

IBM_2145:ITSO-CLS2:admin>svcinfo lsusergrp
id name          role          remote
0  SecurityAdmin SecurityAdmin no
1  Administrator Administrator no
2  CopyOperator  CopyOperator  no
3  Service       Service       no
4  Monitor       Monitor       no

To view our currently defined users and the user groups to which they belong, we use the svcinfo lsuser command, as shown in Example 7-113.

Example 7-113 svcinfo lsuser command

IBM_2145:ITSO-CLS2:admin>svcinfo lsuser -delim ,
id,name,password,ssh_key,remote,usergrp_id,usergrp_name
0,superuser,yes,no,no,0,SecurityAdmin
1,admin,no,yes,no,0,SecurityAdmin
2,Pall,yes,no,no,1,Administrator

7.10.3 Changing a user

To change user passwords, issue the svctask chuser command. To change the Service account user password, see 7.7.3, “Cluster authentication” on page 381.

The chuser command allows you to modify a user that has already been created. You can rename a user, assign a new password (if you are logged on with administrative privileges), or move a user from one user group to another. Be aware that a user can be a member of only one group at a time.

7.10.4 Audit log command

The audit log can be extremely helpful to see which commands have been entered on our cluster.

Most action commands that are issued by the old or new CLI are recorded in the audit log:

- The native GUI performs actions by using the CLI programs.
- The SVC Console performs actions by issuing Common Information Model (CIM) commands to the CIM object manager (CIMOM), which then runs the CLI programs.

Actions performed by using both the native GUI and the SVC Console are recorded in the audit log.

Certain commands are not audited:

- svctask cpdumps
- svctask cleardumps
- svctask finderr
- svctask dumperrlog
- svctask dumpinternallog

The audit log contains approximately 1 MB of data, which can contain about 6,000 average length commands. When this log is full, the cluster copies it to a new file in the /dumps/audit directory on the config node and resets the in-memory audit log.

To display entries from the audit log, use the svcinfo catauditlog -first 5 command to return a list of five in-memory audit log entries, as shown in Example 7-114.

Example 7-114 catauditlog command

IBM_2145:ITSO-CLS1:admin>svcinfo catauditlog -first 5 -delim ,
291,090904200329,superuser,10.64.210.231,0,,svctask mkvdiskhostmap -host 1 21
292,090904201238,admin,10.64.210.231,0,,svctask chvdisk -name swiss_cheese 21
293,090904204314,superuser,10.64.210.231,0,,svctask chhost -name ITSO_W2008 1
294,090904204314,superuser,10.64.210.231,0,,svctask chhost -mask 15 1
295,090904204410,admin,10.64.210.231,0,,svctask chvdisk -name SwissCheese 21
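If you post-process this output in a script, each line splits cleanly on the delimiter because the command text is the final field. The field names in the following sketch are our interpretation of the output above (the fifth and sixth fields are assumed to be a result code and a result object ID); it is an illustration, not part of the SVC toolset.

```python
# Illustrative parsing of one catauditlog -delim , line. split(",", 6)
# keeps the trailing command text intact as a single field.
def parse_audit_entry(line):
    seq, timestamp, user, ip, res_code, res_obj, command = line.split(",", 6)
    return {"seq": int(seq), "timestamp": timestamp, "user": user,
            "ip": ip, "command": command}

entry = parse_audit_entry(
    "291,090904200329,superuser,10.64.210.231,0,,"
    "svctask mkvdiskhostmap -host 1 21")
print(entry["user"], entry["command"])   # superuser svctask mkvdiskhostmap -host 1 21
```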

If you need to dump the contents of the in-memory audit log to a file on the current configuration node, use the svctask dumpauditlog command. This command does not provide any feedback, only the prompt. To obtain a list of the audit log dumps, use the svcinfo lsauditlogdumps command, as described in Example 7-115.

Example 7-115 svctask dumpauditlog/svcinfo lsauditlogdumps command

IBM_2145:ITSO-CLS1:admin>svctask dumpauditlog
IBM_2145:ITSO-CLS1:admin>svcinfo lsauditlogdumps
id auditlog_filename
0  auditlog_0_80_20080619134139_0000020060c06fca

7.11 Managing Copy Services

In these topics, we show how to manage copy services.

7.11.1 FlashCopy operations

In this section, we use a scenario to illustrate how to use commands with PuTTY to perform FlashCopy. See the IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User’s Guide, SC26-7544, for more commands.

Scenario description

We use the following scenario in both the command-line section and the GUI section. In the following scenario, we want to FlashCopy the following VDisks:


DB_Source   Database files
Log_Source  Database log files
App_Source  Application files

We create consistency groups to handle the FlashCopy of DB_Source and Log_Source, because data integrity must be kept on DB_Source and Log_Source.

In our scenario, the application files are independent of the database, so we create a single FlashCopy mapping for App_Source. We will make two FlashCopy targets for DB_Source and Log_Source and, therefore, two consistency groups. Figure 7-2 shows the scenario.

Figure 7-2 FlashCopy scenario

7.11.2 Setting up FlashCopy

We have already created the source and target VDisks, and the source and target VDisks are identical in size, which is a requirement of the FlashCopy function:

- DB_Source, DB_Target1, and DB_Target2
- Log_Source, Log_Target1, and Log_Target2
- App_Source and App_Target1

To set up the FlashCopy, we performed the following steps:

1. Create two FlashCopy consistency groups:

- FCCG1
- FCCG2

2. Create FlashCopy mappings for Source VDisks:

- DB_Source FlashCopy to DB_Target1; the mapping name is DB_Map1
- DB_Source FlashCopy to DB_Target2; the mapping name is DB_Map2
- Log_Source FlashCopy to Log_Target1; the mapping name is Log_Map1
- Log_Source FlashCopy to Log_Target2; the mapping name is Log_Map2
- App_Source FlashCopy to App_Target1; the mapping name is App_Map1
- Copy rate 50 for all mappings
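The mapping plan in step 2 can also be scripted. The following sketch simply builds the svctask mkfcmap command strings; it is our illustration, and note that it passes -copyrate explicitly, whereas the examples later in this section rely on the default copy rate of 50.

```python
# Illustrative generation of mkfcmap commands from the mapping plan above.
plan = [
    ("DB_Source",  "DB_Target1",  "DB_Map1",  "FCCG1"),
    ("DB_Source",  "DB_Target2",  "DB_Map2",  "FCCG2"),
    ("Log_Source", "Log_Target1", "Log_Map1", "FCCG1"),
    ("Log_Source", "Log_Target2", "Log_Map2", "FCCG2"),
    ("App_Source", "App_Target1", "App_Map1", None),   # no consistency group
]

def mkfcmap_command(source, target, name, consistgrp, copyrate=50):
    cmd = f"svctask mkfcmap -source {source} -target {target} -name {name}"
    if consistgrp:
        cmd += f" -consistgrp {consistgrp}"
    return cmd + f" -copyrate {copyrate}"

for entry in plan:
    print(mkfcmap_command(*entry))
```

Keeping the plan as data makes it easy to review and repeat the setup consistently across test and production clusters.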

7.11.3 Creating a FlashCopy consistency group

To create a FlashCopy consistency group, we use the svctask mkfcconsistgrp command. The ID of the new group is returned. If you have created several FlashCopy mappings for a group of VDisks that contain elements of data for the same application, it might be convenient to assign these mappings to a single FlashCopy consistency group. Then, you can issue a single prepare or start command for the whole group so that, for example, all of the files for a particular database are copied at the same time.

In Example 7-116, the FCCG1 and FCCG2 consistency groups are created to hold the FlashCopy maps of DB and Log. This step is extremely important for FlashCopy on database applications. It helps to keep data integrity during FlashCopy.

Example 7-116 Creating two FlashCopy consistency groups

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG1
FlashCopy Consistency Group, id [1], successfully created

IBM_2145:ITSO-CLS1:admin>svctask mkfcconsistgrp -name FCCG2
FlashCopy Consistency Group, id [2], successfully created

In Example 7-117, we checked the status of consistency groups. Each consistency group has a status of empty.

Example 7-117 Checking the status

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 empty
2  FCCG2 empty

If you want to change the name of a consistency group, you can use the svctask chfcconsistgrp command. Type svctask chfcconsistgrp -h for help with this command.

7.11.4 Creating a FlashCopy mapping

To create a FlashCopy mapping, we use the svctask mkfcmap command. This command creates a new FlashCopy mapping, which maps a source VDisk to a target VDisk to prepare for subsequent copying.

When executed, this command creates a new FlashCopy mapping logical object. This mapping persists until it is deleted. The mapping specifies the source and destination VDisks. The destination must be identical in size to the source, or the mapping will fail. Issue the svcinfo lsvdisk -bytes command to find the exact size of the source VDisk for which you want to create a target disk of the same size.

In a single mapping, the source and destination cannot be the same VDisk. A mapping is triggered at the point in time when the copy is required. The mapping can optionally be given a name and assigned to a consistency group. These groups of mappings can be triggered at the same time, enabling multiple VDisks to be copied at the same time, which creates a consistent copy of multiple disks. A consistent copy of multiple disks is required for database products in which the database and log files reside on separate disks.

If no consistency group is defined, the mapping is assigned to the default group 0, which is a special group that cannot be started as a whole. Mappings in this group can only be started on an individual basis.

The background copy rate specifies the priority that must be given to completing the copy. If 0 is specified, the copy will not proceed in the background. The default is 50.
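For orientation, the copy rate value maps to a background copy throughput that doubles every 10 points, from 128 KBps (rates 1 to 10) up to 64 MBps (rates 91 to 100), per the SVC documentation of this era; verify this table against your release. A sketch of that mapping:

```python
# Illustrative mapping of the mkfcmap copy rate value to background copy
# throughput in KBps. The doubling-per-decade table is taken from the
# product documentation of this era; confirm it for your release.
def background_copy_kbps(copy_rate):
    if copy_rate == 0:
        return 0                       # no background copy
    step = (copy_rate - 1) // 10       # decade 0..9 for rates 1..100
    return 128 * (2 ** step)

print(background_copy_kbps(50))   # 2048 (the default rate of 50 -> 2 MBps)
```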

Chapter 7. SAN Volume Controller operations using the command-line interface 399


In Example 7-118, the first FlashCopy mapping for DB_Source and Log_Source is created.

Example 7-118 Create the first FlashCopy mapping for DB_Source, Log_Source, and App_Source

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target_1 -name DB_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target_1 -name Log_Map1 -consistgrp FCCG1
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Target_1 -name App_Map1
FlashCopy Mapping, id [2], successfully created

Example 7-119 shows the commands to create a second FlashCopy mapping for each of the DB_Source and Log_Source VDisks.

Example 7-119 Create additional FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source DB_Source -target DB_Target2 -name DB_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [3], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source Log_Source -target Log_Target2 -name Log_Map2 -consistgrp FCCG2
FlashCopy Mapping, id [4], successfully created

Example 7-120 shows the result of these FlashCopy mappings. The status of the mapping is idle_or_copied.

Example 7-120 Check the result of Multiple Target FlashCopy mappings

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 idle_or_copied 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 idle_or_copied 0 50 100 off no

Tip: There is a parameter to delete FlashCopy mappings automatically after completion of a background copy (when the mapping gets to the idle_or_copied state). Use the command:

svctask mkfcmap -autodelete

This command does not delete a mapping that is in a cascade with dependent mappings, because such a mapping cannot get to the idle_or_copied state in this situation.

400 Implementing the IBM System Storage SAN Volume Controller V5.1


3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 idle_or_copied 0 50 100 off no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 idle_or_copied
2  FCCG2 idle_or_copied

If you want to change the FlashCopy mapping, you can use the svctask chfcmap command. Type svctask chfcmap -h to get help with this command.

7.11.5 Preparing (pre-triggering) the FlashCopy mapping

At this point, the mapping has been created, but the cache still accepts data for the source VDisks. You can only trigger the mapping when the cache does not contain any data for FlashCopy source VDisks. You must issue an svctask prestartfcmap command to prepare a FlashCopy mapping to start. This command tells the SVC to flush the cache of any content for the source VDisk and to pass through any further write data for this VDisk.

When the svctask prestartfcmap command is executed, the mapping enters the Preparing state. After the preparation is complete, it changes to the Prepared state. At this point, the mapping is ready for triggering. Preparing and the subsequent triggering are usually performed on a consistency group basis. Only mappings belonging to consistency group 0 can be prepared on their own, because consistency group 0 is a special group, which contains the FlashCopy mappings that do not belong to any consistency group. A FlashCopy must be prepared before it can be triggered.
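The mapping states used in this and the following sections can be sketched as a small state machine. This is a simplification (the state names follow the lsfcmap output, but only the transitions discussed in these sections are modeled):

```python
# Simplified sketch of FlashCopy mapping states driven by the CLI commands
# discussed in this chapter. Only a subset of the real transitions appears.

TRANSITIONS = {
    ("idle_or_copied", "prestartfcmap"): "preparing",
    ("preparing", "flush_complete"): "prepared",    # cache flush finishes
    ("prepared", "startfcmap"): "copying",
    ("copying", "copy_complete"): "idle_or_copied",
    ("copying", "stopfcmap"): "stopped",
}

def next_state(state: str, event: str) -> str:
    """Return the state after an event, or raise if the event is invalid."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"{event!r} is not valid in state {state!r}")

# A full prepare/start/copy cycle ends back at idle_or_copied
state = "idle_or_copied"
for event in ("prestartfcmap", "flush_complete", "startfcmap", "copy_complete"):
    state = next_state(state, event)
assert state == "idle_or_copied"
```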

In our scenario, App_Map1 is not in a consistency group. In Example 7-121, we show how we initialize the preparation for App_Map1.

Another option is to add the -prep parameter to the svctask startfcmap command, which first prepares the mapping and then starts the FlashCopy.

In the example, we also show how to check the status of the current FlashCopy mapping. App_Map1’s status is prepared.

Example 7-121 Prepare a FlashCopy without a consistency group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status prepared
progress 0
copy_rate 50
start_time
dependent_mappings 0


autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.6 Preparing (pre-triggering) the FlashCopy consistency group

We use the svctask prestartfcconsistgrp command to prepare a FlashCopy consistency group. As with 7.11.5, “Preparing (pre-triggering) the FlashCopy mapping” on page 401, this command flushes the cache of any data that is destined for the source VDisks and forces the cache into the write-through mode until the mapping is started. The difference is that this command prepares a group of mappings (at a consistency group level) instead of one mapping.

When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group to prepare all of the mappings at one time.

Example 7-122 shows how we prepare the consistency groups for DB and Log and check the result. After the command has executed, all of our FlashCopy mappings are in the prepared status, and the consistency groups are in the prepared status, too. Now, we are ready to start the FlashCopy.

Example 7-122 Prepare a FlashCopy consistency group

IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask prestartfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status prepared
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 prepared
2  FCCG2 prepared

7.11.7 Starting (triggering) FlashCopy mappings

The svctask startfcmap command is used to start a single FlashCopy mapping. When invoked, a point-in-time copy of the source VDisk is created on the target VDisk.


When the FlashCopy mapping is triggered, it enters the Copying state. The way that the copy proceeds depends on the background copy rate attribute of the mapping. If the mapping is set to 0 (NOCOPY), only data that is subsequently updated on the source will be copied to the destination. We suggest that you use this scenario as a backup copy while the mapping exists in the Copying state. If the copy is stopped, the destination is unusable. If you want to end up with a duplicate copy of the source at the destination, set the background copy rate greater than 0. This way, the system copies all of the data (even unchanged data) to the destination and eventually reaches the idle_or_copied state. After this data is copied, you can delete the mapping and have a usable point-in-time copy of the source at the destination.

In Example 7-123, after the FlashCopy is started, App_Map1 changes to copying status.

Example 7-123 Start App_Map1

IBM_2145:ITSO-CLS1:admin>svctask startfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap
id name source_vdisk_id source_vdisk_name target_vdisk_id target_vdisk_name group_id group_name status progress copy_rate clean_progress incremental partner_FC_id partner_FC_name restoring
0 DB_Map1 0 DB_Source 6 DB_Target_1 1 FCCG1 prepared 0 50 100 off no
1 Log_Map1 1 Log_Source 4 Log_Target_1 1 FCCG1 prepared 0 50 100 off no
2 App_Map1 2 App_Source 3 App_Target_1 copying 0 50 100 off no
3 DB_Map2 0 DB_Source 7 DB_Target_2 2 FCCG2 prepared 0 50 100 off no
4 Log_Map2 1 Log_Source 5 Log_Target_2 2 FCCG2 prepared 0 50 100 off no
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status copying
progress 29
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0


IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.8 Starting (triggering) FlashCopy consistency group

We execute the svctask startfcconsistgrp command, as shown in Example 7-124, and afterward, the database can be resumed. We have created two point-in-time consistent copies of the DB and Log VDisks. After execution, the consistency group and the FlashCopy maps are all in the copying status.

Example 7-124 Start FlashCopy consistency group

IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG1
IBM_2145:ITSO-CLS1:admin>svctask startfcconsistgrp FCCG2
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp FCCG1
id 1
name FCCG1
status copying
autodelete off
FC_mapping_id 0
FC_mapping_name DB_Map1
FC_mapping_id 1
FC_mapping_name Log_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 copying
2  FCCG2 copying

7.11.9 Monitoring the FlashCopy progress

To monitor the background copy progress of the FlashCopy mappings, we issue the svcinfo lsfcmapprogress command for each FlashCopy mapping.

Alternatively, you can also query the copy progress by using the svcinfo lsfcmap command. As shown in Example 7-125, DB_Map1, Log_Map1, DB_Map2, and Log_Map2 each return information that the background copy is 23% completed, and App_Map1 returns information that the background copy is 53% completed.

Example 7-125 Monitoring background copy progress

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map1
id progress
0  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map1
id progress
1  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress Log_Map2
id progress
4  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress DB_Map2
id progress
3  23
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress App_Map1


id progress
2  53

When the background copy has completed, the FlashCopy mapping enters the idle_or_copied state, and when all FlashCopy mappings in a consistency group enter this status, the consistency group will be at idle_or_copied status.

When in this state, the FlashCopy mapping can be deleted, and the target disk can be used independently, if, for example, another target disk is to be used for the next FlashCopy of the particular source VDisk.
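A monitoring loop over svcinfo lsfcmapprogress can be sketched as follows. This is a simplification: the poll outputs here are canned strings standing in for real CLI output, which in practice you would fetch over SSH.

```python
import re

# Sketch of polling "svcinfo lsfcmapprogress <mapping>" until the background
# copy completes. Each string in poll_outputs stands in for one poll's
# output; real code would invoke the SVC CLI (for example, over SSH).

def parse_progress(output: str) -> int:
    """Extract the progress percentage from lsfcmapprogress output."""
    return int(re.search(r"^\d+\s+(\d+)\s*$", output, re.MULTILINE).group(1))

def wait_for_copy(poll_outputs) -> int:
    """Return the number of polls taken until progress reaches 100."""
    for polls, out in enumerate(poll_outputs, start=1):
        if parse_progress(out) == 100:   # mapping becomes idle_or_copied
            return polls
    raise TimeoutError("background copy did not complete")

# Two hypothetical polls: 63% complete, then finished
assert wait_for_copy(["id progress\n0 63", "id progress\n0 100"]) == 2
```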

7.11.10 Stopping the FlashCopy mapping

The svctask stopfcmap command is used to stop a FlashCopy mapping. This command allows you to stop an active (copying) or suspended mapping. When executed, this command stops a single FlashCopy mapping.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC. The FlashCopy mapping needs to be prepared again or retriggered to bring the target VDisk online again.

Example 7-126 shows how to stop the App_Map1 FlashCopy. The status of App_Map1 has changed to idle_or_copied.

Example 7-126 Stop APP_Map1 FlashCopy

IBM_2145:ITSO-CLS1:admin>svctask stopfcmap App_Map1
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap App_Map1
id 2
name App_Map1
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 3
target_vdisk_name App_Target_1
group_id
group_name
status idle_or_copied
progress 100
copy_rate 50
start_time 090826171647
dependent_mappings 0
autodelete off
clean_progress 100
clean_rate 50

Tip: In a Multiple Target FlashCopy environment, if you want to stop a mapping or group, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the force parameter, which will stop all of the dependent maps and negate the need for the stopping copy process to run.

Important: Only stop a FlashCopy mapping when the data on the target VDisk is not in use, or when you want to modify the FlashCopy mapping. When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC, unless the mapping is in the Copying state with progress=100.


incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

7.11.11 Stopping the FlashCopy consistency group

The svctask stopfcconsistgrp command is used to stop any active FlashCopy consistency group. It stops all mappings in a consistency group. When a FlashCopy consistency group is stopped, the target VDisks of all mappings that are not 100% copied become invalid and are set offline by the SVC. The FlashCopy consistency group needs to be prepared again and restarted to bring the target VDisks online again.

As shown in Example 7-127, we stop the FCCG1 and FCCG2 consistency groups. The status of the two consistency groups has changed to stopped. As you can see, all of the FlashCopy mappings had already completed the copy operation, so they are in the idle_or_copied status.

Example 7-127 Stop FCCG1 and FCCG2 consistency groups

IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG1

IBM_2145:ITSO-CLS1:admin>svctask stopfcconsistgrp FCCG2

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp
id name  status
1  FCCG1 stopped
2  FCCG2 stopped

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap -delim ,
id,name,source_vdisk_id,source_vdisk_name,target_vdisk_id,target_vdisk_name,group_id,group_name,status,progress,copy_rate,clean_progress,incremental,partner_FC_id,partner_FC_name,restoring
0,DB_Map1,0,DB_Source,6,DB_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
1,Log_Map1,1,Log_Source,4,Log_Target_1,1,FCCG1,idle_or_copied,100,50,100,off,,,no
2,App_Map1,2,App_Source,3,App_Target_1,,,idle_or_copied,100,50,100,off,,,no
3,DB_Map2,0,DB_Source,7,DB_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no
4,Log_Map2,1,Log_Source,5,Log_Target_2,2,FCCG2,idle_or_copied,100,50,100,off,,,no

7.11.12 Deleting the FlashCopy mapping

To delete a FlashCopy mapping, we use the svctask rmfcmap command. When the command is executed, it attempts to delete the specified FlashCopy mapping. If the FlashCopy mapping is stopped, the command fails unless the -force flag is specified. If the mapping is active (copying), it must first be stopped before it can be deleted.

Important: Only stop a FlashCopy consistency group when the data on the target VDisks is not in use, or when you want to modify the FlashCopy consistency group. When a consistency group is stopped, the target VDisks might become invalid and be set offline by the SVC, depending on the state of each mapping.


Deleting a mapping only deletes the logical relationship between the two VDisks. However, when issued on an active FlashCopy mapping using the -force flag, the delete renders the data on the FlashCopy mapping target VDisk as inconsistent.

As shown in Example 7-128, we delete App_Map1.

Example 7-128 Delete App_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap App_Map1

7.11.13 Deleting the FlashCopy consistency group

The svctask rmfcconsistgrp command is used to delete a FlashCopy consistency group. When executed, this command deletes the specified consistency group. If there are mappings that are members of the group, the command fails unless the -force flag is specified.

If you also want to delete all of the mappings in the consistency group, first delete the mappings and then delete the consistency group.

As shown in Example 7-129, we delete all of the maps and consistency groups, and then, we check the result.

Example 7-129 Remove fcmaps and fcconsistgrp

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap DB_Map2

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map1

IBM_2145:ITSO-CLS1:admin>svctask rmfcmap Log_Map2

IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG1

IBM_2145:ITSO-CLS1:admin>svctask rmfcconsistgrp FCCG2

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcconsistgrp

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap

IBM_2145:ITSO-CLS1:admin>

7.11.14 Migrating a VDisk to a Space-Efficient VDisk

Use the following scenario to migrate a VDisk to a Space-Efficient VDisk:

1. Create a space-efficient target VDisk with exactly the same size as the VDisk that you want to migrate.

Example 7-130 on page 408 shows the VDisk 8 details. It has been created as a Space-Efficient VDisk with the same size as the App_Source VDisk.

Tip: If you want to use the target VDisk as a normal VDisk, monitor the background copy progress until it is complete (100% copied) and, then, delete the FlashCopy mapping. Another option is to set the -autodelete option when creating the FlashCopy mapping.


Example 7-130 svcinfo lsvdisk 8 command

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk 8
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 221.17MB
free_capacity 220.77MB
overallocation 462
autoexpand on
warning 80
grainsize 32

2. Define a FlashCopy mapping in which the non-Space-Efficient VDisk is the source and the Space-Efficient VDisk is the target. Specify a copy rate as high as possible, and activate the -autodelete option for the mapping. See Example 7-131.

Example 7-131 svctask mkfcmap

IBM_2145:ITSO-CLS1:admin>svctask mkfcmap -source App_Source -target App_Source_SE -name MigrtoSEV -copyrate 100 -autodelete
FlashCopy Mapping, id [0], successfully created


IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap 0
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status idle_or_copied
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

3. Run the svctask prestartfcmap command and the svcinfo lsfcmap MigrtoSEV command, as shown in Example 7-132.

Example 7-132 svctask prestartfcmap

IBM_2145:ITSO-CLS1:admin>svctask prestartfcmap MigrtoSEV
IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status prepared
progress 0
copy_rate 100
start_time
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id


partner_FC_name
restoring no

4. Run the svctask startfcmap command, as shown in Example 7-133.

Example 7-133 svctask startfcmap command

IBM_2145:ITSO-CLS1:admin>svctask startfcmap MigrtoSEV

5. Monitor the copy process using the svcinfo lsfcmapprogress command, as shown in Example 7-134.

Example 7-134 svcinfo lsfcmapprogress command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmapprogress MigrtoSEV
id progress
0  63

6. The FlashCopy mapping has been deleted automatically, as shown in Example 7-135.

Example 7-135 svcinfo lsfcmap command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
id 0
name MigrtoSEV
source_vdisk_id 2
source_vdisk_name App_Source
target_vdisk_id 8
target_vdisk_name App_Source_SE
group_id
group_name
status copying
progress 73
copy_rate 100
start_time 090827095354
dependent_mappings 0
autodelete on
clean_progress 100
clean_rate 50
incremental off
difference 100
grain_size 256
IO_group_id 0
IO_group_name io_grp0
partner_FC_id
partner_FC_name
restoring no

IBM_2145:ITSO-CLS1:admin>svcinfo lsfcmap MigrtoSEV
CMMVC5754E The specified object does not exist, or the name supplied does not meet the naming rules.

An independent copy of the source VDisk (App_Source) has been created. The migration has completed, as shown in Example 7-136 on page 411.


Example 7-136 svcinfo lsvdisk

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk App_Source_SE
id 8
name App_Source_SE
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 1.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 6005076801AB813F100000000000000B
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 1.00GB
real_capacity 1.00GB
free_capacity 0.77MB
overallocation 99
autoexpand on
warning 80
grainsize 32

To migrate a Space-Efficient VDisk to a fully allocated VDisk, you can follow the same scenario.

Real size: Independent of the real size that you defined for the target Space-Efficient VDisk, after the migration its real size will be at least the capacity of the source VDisk.


7.11.15 Reverse FlashCopy

Starting with SVC 5.1, you can have a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.

In Example 7-137, FCMAP0 is the forward FlashCopy mapping, and FCMAP0_rev is a reverse FlashCopy mapping. Its source is FCMAP0’s target, and its target is FCMAP0’s source. When starting a reverse FlashCopy mapping, you must use the -restore option to indicate that the user wants to overwrite the data on the source disk of the forward mapping.

Example 7-137 Reverse FlashCopy

IBM_2145:ITSO-CLS1:admin> svctask mkfcmap -source vdsk0 -target vdsk1 -name FCMAP0
FlashCopy Mapping, id [0], successfully created

IBM_2145:ITSO-CLS1:admin> svctask startfcmap -prep FCMAP0

IBM_2145:ITSO-CLS1:admin> svctask mkfcmap -source vdsk1 -target vdsk0 -name FCMAP0_rev
FlashCopy Mapping, id [1], successfully created

IBM_2145:ITSO-CLS1:admin> svctask startfcmap -prep -restore FCMAP0_rev
IBM_2145:ITSO-CLS1:admin> svcinfo lsfcmap -delim :
id:name:source_vdisk_id:source_vdisk_name:target_vdisk_id:target_vdisk_name:group_id:group_name:status:progress:copy_rate:clean_progress:incremental:partner_FC_id:partner_FC_name:restoring
0:FCMAP0:75:vdsk0:76:vdsk1:::copying:0:10:99:off:1:FCMAP0_rev:no
1:FCMAP0_rev:76:vdsk1:75:vdsk0:::copying:99:50:100:off:0:FCMAP0:yes

FCMAP0_rev will show a restoring value of yes while the FlashCopy mapping is copying. After it has finished copying, the restoring value field will change to no.

7.11.16 Split-stopping of FlashCopy maps

The stopfcmap command now has a -split option. This option allows the source VDisk of a map that is 100% complete to be removed from the head of a cascade when the map is stopped.

For example, if we have four VDisks in a cascade (A → B → C → D), and the map A → B is 100% complete, using the stopfcmap -split mapAB command results in mapAB becoming idle_or_copied, and the remaining cascade becomes B → C → D.

Without the -split option, VDisk A remains at the head of the cascade (A → C → D). Consider this sequence of steps:

1. The user takes a backup using the mapping A → B. A is the production VDisk; B is a backup.

2. At a later point, the user experiences corruption on A and, therefore, reverses the mapping (B → A).

3. The user then takes another backup from the production disk A, resulting in the cascade B → A → C.

Stopping A → B without the -split option results in the cascade B → C. Note that the backup disk B is now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start the mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).


Stopping A → B with the -split option results in the cascade A → C. This action does not result in the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
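The cascade bookkeeping described in this section can be modeled in a few lines. This is only a sketch of the head-of-cascade behavior; the real dependency handling inside the SVC is more involved:

```python
# Sketch of how stopping a 100%-complete map at the head of a FlashCopy
# cascade behaves with and without -split. A cascade is modeled as an
# ordered list of VDisks; only head-of-cascade stops are modeled here.

def stop_head_map(cascade, split):
    """Stop the 100%-complete map between cascade[0] and cascade[1]."""
    if split:
        return cascade[1:]                 # -split: the source leaves the cascade
    return [cascade[0]] + cascade[2:]      # default: the target leaves the cascade

# The example from the text: cascade A -> B -> C -> D, map A -> B complete
assert stop_head_map(["A", "B", "C", "D"], split=True) == ["B", "C", "D"]
assert stop_head_map(["A", "B", "C", "D"], split=False) == ["A", "C", "D"]
```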

7.12 Metro Mirror operation

In the following scenario, we set up an intercluster Metro Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site. Table 7-3 shows the details of the VDisks.

Table 7-3 VDisk details

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a CG_W2K3_MM consistency group is created to handle the Metro Mirror relationships for them.

Because, in this scenario, application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 7-3 on page 414 illustrates the Metro Mirror setup.

Note: This example is for intercluster operations only. If you want to set up intracluster operations, we highlight those parts of the following procedure that you do not need to perform.

Content of VDisk VDisks at primary site VDisks at secondary site

Database files MM_DB_Pri MM_DB_Sec

Database log files MM_DBLog_Pri MM_DBLog_Sec

Application files MM_App_Pri MM_App_Sec


Figure 7-3 Metro Mirror scenario

7.12.1 Setting up Metro Mirror

In the following section, we assume that the source and target VDisks have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, perform the following steps:

1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS4, on both SVC clusters.

2. Create a Metro Mirror consistency group:

Name CG_W2K3_MM

3. Create the Metro Mirror relationship for MM_DB_Pri:

– Master MM_DB_Pri
– Auxiliary MM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL1
– Consistency group CG_W2K3_MM

4. Create the Metro Mirror relationship for MM_DBLog_Pri:

– Master MM_DBLog_Pri
– Auxiliary MM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL2
– Consistency group CG_W2K3_MM


5. Create the Metro Mirror relationship for MM_App_Pri:

– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name MMREL3

In the following section, we perform each step by using the CLI.

7.12.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4

We create the SVC partnership on both clusters.

Pre-verification

To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command.

As shown in Example 7-138, ITSO-CLS4 is an eligible SVC cluster candidate at ITSO-CLS1 for the SVC cluster partnership, and vice versa. Therefore, both clusters are communicating with each other.

Example 7-138 Listing the available SVC cluster for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
0000020063E03A38 no         ITSO-CLS4
0000020061006FCA no         ITSO-CLS2

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id               configured name
0000020069E03A42 no         ITSO-CLS3
000002006AE04FC4 no         ITSO-CLS1
0000020061006FCA no         ITSO-CLS2

Example 7-139 shows the output of the svcinfo lscluster command, before setting up the Metro Mirror relationship. We show it so that you can compare with the same relationship after setting up the Metro Mirror relationship.

Example 7-139 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                          000002006AE04FC4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership bandwidth id_alias

Intracluster Metro Mirror: If you are creating an intracluster Metro Mirror, do not perform the next step; instead, go to 7.12.3, “Creating a Metro Mirror consistency group” on page 416.


0000020063E03A38 ITSO-CLS4 local 0000020063E03A38

Partnership between clusters

In Example 7-140, a partnership is created between ITSO-CLS1 and ITSO-CLS4, specifying 50 MBps of bandwidth to be used for the background copy.

To check the status of the newly created partnership, issue the svcinfo lscluster command. Also, notice that a new partnership is only partially configured; it remains partially configured until the matching partnership is also created on the other cluster.

Example 7-140 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local                               000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote   fully_configured 50        0000020063E03A38

In Example 7-141, the partnership is created between ITSO-CLS4 back to ITSO-CLS1, specifying the bandwidth to be used for a background copy of 50 MBps.

After creating the partnership, verify that the partnership is fully configured on both clusters by reissuing the svcinfo lscluster command.

Example 7-141 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id               name      location partnership      bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local                               0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote   fully_configured 50        000002006AE04FC4
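The two-sided handshake can be sketched as follows. This is a simplification (real partnership states also cover stopped and excluded conditions), meant only to show why each cluster reports the partnership as incomplete until mkpartnership has run on both sides:

```python
# Sketch of why a cluster partnership is incomplete until mkpartnership has
# been issued on both clusters. The state names are simplified.

def partnership_state(configured_on, local, remote):
    """Return the partnership state as seen from the local cluster."""
    if local not in configured_on:
        return "not_configured"
    if remote not in configured_on:
        return "partially_configured"   # still waiting for the other cluster
    return "fully_configured"

done = {"ITSO-CLS1"}                     # mkpartnership run on ITSO-CLS1 only
assert partnership_state(done, "ITSO-CLS1", "ITSO-CLS4") == "partially_configured"

done.add("ITSO-CLS4")                    # mkpartnership run on ITSO-CLS4, too
assert partnership_state(done, "ITSO-CLS1", "ITSO-CLS4") == "fully_configured"
```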

7.12.3 Creating a Metro Mirror consistency group

In Example 7-142, we create the Metro Mirror consistency group using the svctask mkrcconsistgrp command. This consistency group will be used for the Metro Mirror relationships of the database VDisks named MM_DB_Pri and MM_DBLog_Pri. The consistency group is named CG_W2K3_MM.

Example 7-142 Creating the Metro Mirror consistency group CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_MM
RC Consistency Group, id [0], successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type


0 CG_W2K3_MM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

7.12.4 Creating the Metro Mirror relationships

In Example 7-143, we create the Metro Mirror relationships MMREL1 and MMREL2, for MM_DB_Pri and MM_DBLog_Pri. Also, we make them members of the Metro Mirror consistency group CG_W2K3_MM. We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster, and we then use the svcinfo lsrcrelationshipcandidate command to show the VDisks in the ITSO-CLS4 cluster.

By using this command, we check the possible candidates for MM_DB_Pri. After checking all of these conditions, use the svctask mkrcrelationship command to create the Metro Mirror relationship.

To verify the newly created Metro Mirror relationships, list them with the svcinfo lsrcrelationship command.

Example 7-143 Creating Metro Mirror relationships MMREL1 and MMREL2

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=MM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
13 MM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000010 0 1 empty
14 MM_Log_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000011 0 1 empty
15 MM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000012 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate
id vdisk_name
0 DB_Source
1 Log_Source
2 App_Source
3 App_Target_1
4 Log_Target_1
5 Log_Target_2
6 DB_Target_1
7 DB_Target_2
8 App_Source_SE
9 FC_A
13 MM_DB_Pri
14 MM_Log_Pri
15 MM_App_Pri

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master MM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec

Chapter 7. SAN Volume Controller operations using the command-line interface 417

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_DB_Pri -aux MM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL1
RC Relationship, id [13], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_Log_Pri -aux MM_Log_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_MM -name MMREL2
RC Relationship, id [14], successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
13 MMREL1 000002006AE04FC4 ITSO-CLS1 13 MM_DB_Pri 0000020063E03A38 ITSO-CLS4 0 MM_DB_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro
14 MMREL2 000002006AE04FC4 ITSO-CLS1 14 MM_Log_Pri 0000020063E03A38 ITSO-CLS4 1 MM_Log_Sec master 0 CG_W2K3_MM inconsistent_stopped 50 0 metro

7.12.5 Creating a stand-alone Metro Mirror relationship for MM_App_Pri

In Example 7-144, we create the stand-alone Metro Mirror relationship MMREL3 for MM_App_Pri. After it is created, we check the status of this Metro Mirror relationship.

Notice that the state of MMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk, so the initial background synchronization is skipped, even though the VDisks are not actually synchronized in this scenario. We use this option here to illustrate setting up a relationship whose master and auxiliary VDisks were pre-synchronized before the relationship was created.

MMREL2 and MMREL1 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.

Example 7-144 Creating a stand-alone relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master MM_App_Pri -aux MM_App_Sec -sync -cluster ITSO-CLS4 -name MMREL3
RC Relationship, id [15], successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship 15
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15

Tip: The -sync option is only used when the target VDisk has already mirrored all of the data from the source VDisk. By using this option, there is no initial background copy between the primary VDisk and the secondary VDisk.

master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_stopped
bg_copy_priority 50
progress 100
freeze_time
status online
sync in_sync
copy_type metro

7.12.6 Starting Metro Mirror

Now that the Metro Mirror consistency group and relationships are in place, we are ready to use the Metro Mirror relationships in our environment.

When implementing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy for a dataset if a failure occurs that affects the production site.

In the following section, we show how to stop and start stand-alone Metro Mirror relationships and consistency groups.

Starting a stand-alone Metro Mirror relationship

In Example 7-145, we start a stand-alone Metro Mirror relationship named MMREL3. Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.

Example 7-145 Starting the stand-alone Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship MMREL3

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50

progress
freeze_time
status online
sync
copy_type metro
IBM_2145:ITSO-CLS1:admin>

7.12.7 Starting a Metro Mirror consistency group

In Example 7-146, we start the Metro Mirror consistency group CG_W2K3_MM. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group.

Upon completion of the background copy, it enters the Consistent synchronized state.

Example 7-146 Starting the Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
IBM_2145:ITSO-CLS1:admin>

7.12.8 Monitoring the background copy progress

To monitor the background copy progress, we can use the svcinfo lsrcrelationship command. Used without any arguments, this command shows all of the defined Metro Mirror relationships. In the command output, the progress field indicates the current background copy progress.

Our Metro Mirror relationship is shown in Example 7-147 on page 421.
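Because the lsrcrelationship output is a simple field-value listing, the progress value can also be pulled out of captured output with a short script. This is a sketch of our own (get_progress is a hypothetical helper), with the sample text abbreviated from the listing in Example 7-147:

```shell
# A sketch of our own (get_progress is a hypothetical helper): extract the
# progress field from captured "svcinfo lsrcrelationship <name>" output.
# The sample text is abbreviated from the listing in Example 7-147.
sample_output='name MMREL1
state consistent_synchronized
bg_copy_priority 50
progress 35
copy_type metro'

get_progress() {
    # Each output line is "<field> <value>"; print the value of "progress".
    printf '%s\n' "$1" | awk '$1 == "progress" { print $2 }'
}

get_progress "$sample_output"   # prints 35
```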

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Metro Mirror consistency groups or relationships change state.

Example 7-147 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL1
id 13
name MMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 13
master_vdisk_name MM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 0
aux_vdisk_name MM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 35
freeze_time
status online
sync
copy_type metro

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL2
id 14
name MMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 14
master_vdisk_name MM_Log_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 1
aux_vdisk_name MM_Log_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_MM
state consistent_synchronized
bg_copy_priority 50
progress 37
freeze_time
status online
sync
copy_type metro

When all Metro Mirror relationships have completed the background copy, the consistency group enters the Consistent synchronized state, as shown in Example 7-148.

Example 7-148 Listing the Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1

aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.9 Stopping and restarting Metro Mirror

Now that the Metro Mirror consistency group and relationships are running, in this section and the following sections, we describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships, as well as the consistency group.

7.12.10 Stopping a stand-alone Metro Mirror relationship

Example 7-149 shows how to stop the stand-alone Metro Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisks. It also shows the relationship entering the Idling state.

Example 7-149 Stopping stand-alone Metro Mirror relationship and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary
consistency_group_id
consistency_group_name
state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type metro

7.12.11 Stopping a Metro Mirror consistency group

Example 7-150 shows how to stop the Metro Mirror consistency group without specifying the -access flag. The consistency group enters the Consistent stopped state.

Example 7-150 Stopping a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

If, afterward, we want to enable access (write I/O) to the secondary VDisks, we reissue the svctask stoprcconsistgrp command with the -access flag, and the consistency group transitions to the Idling state, as shown in Example 7-151.

Example 7-151 Stopping a Metro Mirror consistency group and enabling access to the secondary

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_MM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status
sync in_sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2
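The state field in a listing such as this one can also be checked programmatically from captured output, for example before deciding whether a stop with -access is still required. The following sketch is our own (cg_state is a hypothetical helper), with the sample abbreviated from Example 7-151:

```shell
# A sketch of our own (cg_state is a hypothetical helper): read the state
# field from captured "svcinfo lsrcconsistgrp <name>" output, abbreviated
# here from Example 7-151.
cg_output='name CG_W2K3_MM
primary
state idling
relationship_count 2'

cg_state() {
    # Each output line is "<field> <value>"; print the value of "state".
    printf '%s\n' "$1" | awk '$1 == "state" { print $2 }'
}

if [ "$(cg_state "$cg_output")" = "idling" ]; then
    echo "access already enabled on both sides of the relationships"
fi
```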

7.12.12 Restarting a Metro Mirror relationship in the Idling state

When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk, consistency will be compromised. Therefore, we must issue the command with the -force flag to restart a relationship, as shown in Example 7-152.

Example 7-152 Restarting a Metro Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

7.12.13 Restarting a Metro Mirror consistency group in the Idling state

When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, the consistency is compromised. Therefore, we must use the -force flag to start a relationship. If the -force flag is not used, the command fails.

In Example 7-153, we change the copy direction by specifying the auxiliary VDisks to become the primaries.

Example 7-153 Restarting a Metro Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -force -primary aux CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38

aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.14 Changing copy direction for Metro Mirror

In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationship and the consistency group.

7.12.15 Switching copy direction for a Metro Mirror relationship

When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying the primary VDisk.

If the specified VDisk is already the primary when you issue this command, the command has no effect.

In Example 7-154, we change the copy direction for the stand-alone Metro Mirror relationship by specifying the auxiliary VDisk to become the primary.

Example 7-154 Switching the copy direction for a Metro Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from the primary to the secondary, because all of the I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.

freeze_time
status online
sync
copy_type metro

IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux MMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship MMREL3
id 15
name MMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 15
master_vdisk_name MM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 2
aux_vdisk_name MM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type metro

7.12.16 Switching copy direction for a Metro Mirror consistency group

When a Metro Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command and specifying the primary VDisk.

If the specified VDisk is already a primary when you issue this command, the command has no effect.

In Example 7-155, we change the copy direction for the Metro Mirror consistency group by specifying the auxiliary VDisk to become the primary.

Example 7-155 Switching the copy direction for a Metro Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all of the I/O will be inhibited when that VDisk becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.

state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_MM

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_MM
id 0
name CG_W2K3_MM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type metro
RC_rel_id 13
RC_rel_name MMREL1
RC_rel_id 14
RC_rel_name MMREL2

7.12.17 Creating an SVC partnership among many clusters

Starting with SVC 5.1, you can have a cluster partnership among many SVC clusters. This capability allows you to create four configurations using a maximum of four connected clusters:

- Star configuration
- Triangle configuration
- Fully connected configuration
- Daisy-chain configuration

In this section, we describe how to configure the SVC cluster partnership for each configuration.

In our scenarios, we configure the SVC partnership by referring to the clusters as A, B, C, and D:

- ITSO-CLS1 = A
- ITSO-CLS2 = B

Important: In order to have a supported and working configuration, all of the SVC clusters must be at level 5.1 or higher.

- ITSO-CLS3 = C
- ITSO-CLS4 = D

Example 7-156 shows the available clusters for a partnership using the lsclustercandidate command on each cluster.

Example 7-156 Available clusters

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lsclustercandidate
id configured cluster_name
000002006AE04FC4 no ITSO-CLS1
0000020069E03A42 no ITSO-CLS3
0000020063E03A38 no ITSO-CLS4

IBM_2145:ITSO-CLS3:admin>svcinfo lsclustercandidate
id configured name
000002006AE04FC4 no ITSO-CLS1
0000020063E03A38 no ITSO-CLS4
0000020061006FCA no ITSO-CLS2

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate
id configured name
0000020069E03A42 no ITSO-CLS3
000002006AE04FC4 no ITSO-CLS1
0000020061006FCA no ITSO-CLS2

7.12.18 Star configuration partnership

Figure 7-4 shows the star configuration.

Figure 7-4 Star configuration

Example 7-157 shows the sequence of mkpartnership commands to execute to create a star configuration.

Example 7-157 Creating a star configuration using the mkpartnership command

From ITSO-CLS1 to multiple clusters

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS2 to ITSO-CLS1

IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS3 to ITSO-CLS1

IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS4 to ITSO-CLS1

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1

From ITSO-CLS1

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS3

IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.
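For a star topology, the partnership commands always pair the hub with each spoke in both directions, so a sequence such as Example 7-157 can be generated rather than typed. The following sketch is our own (star_partnership_cmds is a hypothetical helper that only prints each command and the cluster on which to run it):

```shell
# A sketch of our own (star_partnership_cmds is a hypothetical helper): print
# the mkpartnership sequence for a star topology, pairing the hub with each
# spoke in both directions, as in Example 7-157.
HUB="ITSO-CLS1"
SPOKES="ITSO-CLS2 ITSO-CLS3 ITSO-CLS4"
BW=50

star_partnership_cmds() {
    for spoke in $SPOKES; do
        # Hub-to-spoke, then the matching spoke-to-hub command.
        printf 'On %s: svctask mkpartnership -bandwidth %s %s\n' "$HUB" "$BW" "$spoke"
        printf 'On %s: svctask mkpartnership -bandwidth %s %s\n' "$spoke" "$BW" "$HUB"
    done
}

star_partnership_cmds
```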

Triangle configuration

Figure 7-5 shows the triangle configuration.

Figure 7-5 Triangle configuration

Example 7-158 shows the sequence of mkpartnership commands to execute to create a triangle configuration.

Example 7-158 Creating a triangle configuration

From ITSO-CLS1 to ITSO-CLS2 and ITSO-CLS3

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3

IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS1 and ITSO-CLS2

IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS1

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3

IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.

Fully connected configuration

Figure 7-6 shows the fully connected configuration.

Figure 7-6 Fully connected configuration

Example 7-159 shows the sequence of mkpartnership commands to execute to create a fully connected configuration.

Example 7-159 Creating a fully connected configuration

From ITSO-CLS1 to ITSO-CLS2, ITSO-CLS3 and ITSO-CLS4

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3
IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS2 to ITSO-CLS1, ITSO-CLS3 and ITSO-CLS4

IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS3 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS4

IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS1, ITSO-CLS2 and ITSO-CLS3

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS1

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS3

IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020063E03A38:ITSO-CLS4:remote:fully_configured:50:0000020063E03A38

From ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38
000002006AE04FC4:ITSO-CLS1:remote:fully_configured:50:000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.

Daisy-chain configuration

Figure 7-7 shows the daisy-chain configuration.

Figure 7-7 Daisy-chain configuration

Example 7-160 shows the sequence of mkpartnership commands to execute to create a daisy-chain configuration.

Example 7-160 Creating a daisy-chain configuration

From ITSO-CLS1 to ITSO-CLS2

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2

From ITSO-CLS2 to ITSO-CLS1 and ITSO-CLS3

IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS1
IBM_2145:ITSO-CLS2:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS3 to ITSO-CLS2 and ITSO-CLS4

IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS2
IBM_2145:ITSO-CLS3:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS4

From ITSO-CLS4 to ITSO-CLS3

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 50 ITSO-CLS3

From ITSO-CLS1

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
000002006AE04FC4:ITSO-CLS1:local:::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS2

IBM_2145:ITSO-CLS2:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020061006FCA:ITSO-CLS2:local:::0000020061006FCA
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020069E03A42:ITSO-CLS3:remote:fully_configured:50:0000020069E03A42

From ITSO-CLS3

IBM_2145:ITSO-CLS3:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020069E03A42:ITSO-CLS3:local:::0000020069E03A42
000002006AE04FC4:ITSO-CLS1:remote:fully_configured::000002006AE04FC4
0000020061006FCA:ITSO-CLS2:remote:fully_configured:50:0000020061006FCA

From ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :
id:name:location:partnership:bandwidth:id_alias
0000020063E03A38:ITSO-CLS4:local:::0000020063E03A38

After the SVC partnership has been configured, you can configure any rcrelationship or rcconsistgrp that you need. Make sure that a single VDisk is only in one relationship.

7.13 Global Mirror operation

In the following scenario, we set up an intercluster Global Mirror relationship between the SVC cluster ITSO-CLS1 at the primary site and the SVC cluster ITSO-CLS4 at the secondary site.

Table 7-4 shows the details of the VDisks.

Table 7-4 Details of VDisks for Global Mirror relationship scenario

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 7-8 on page 435 illustrates the Global Mirror relationship setup.

Note: This example is for an intercluster Global Mirror operation only. In case you want to set up an intracluster operation, we highlight those parts in the following procedure that you do not need to perform.

Content of VDisk       VDisks at primary site   VDisks at secondary site
Database files         GM_DB_Pri                GM_DB_Sec
Database log files     GM_DBLog_Pri             GM_DBLog_Sec
Application files      GM_App_Pri               GM_App_Sec

Figure 7-8 Global Mirror scenario

7.13.1 Setting up Global Mirror

In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up the Global Mirror, perform the following steps:

1. Create an SVC partnership between ITSO_CLS1 and ITSO_CLS4, on both SVC clusters:

Bandwidth 10 MBps

2. Create a Global Mirror consistency group:

Name CG_W2K3_GM

3. Create the Global Mirror relationship for GM_DB_Pri:

– Master GM_DB_Pri
– Auxiliary GM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL1
– Consistency group CG_W2K3_GM

4. Create the Global Mirror relationship for GM_DBLog_Pri:

– Master GM_DBLog_Pri
– Auxiliary GM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL2
– Consistency group CG_W2K3_GM


5. Create the Global Mirror relationship for GM_App_Pri:

– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS4
– Name GMREL3

In the following sections, we perform each step by using the CLI.
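As a preview, the steps above can be collected into the command sequence that we run from ITSO-CLS1. This is a sketch of our own (gm_setup_cmds is a hypothetical helper that only prints the commands); note the -global flag on mkrcrelationship, which makes the relationships Global Mirror rather than Metro Mirror, and that the matching mkpartnership must also be run on ITSO-CLS4:

```shell
# A sketch of our own (gm_setup_cmds is a hypothetical helper): print the
# Global Mirror setup sequence run from ITSO-CLS1. The matching mkpartnership
# must also be run on ITSO-CLS4. The -global flag creates Global Mirror
# (rather than Metro Mirror) relationships.
gm_setup_cmds() {
    cat <<'EOF'
svctask mkpartnership -bandwidth 10 ITSO-CLS4
svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM
svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -global -name GMREL1
svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -global -name GMREL2
svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -global -name GMREL3
EOF
}

gm_setup_cmds
```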

7.13.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS4

We create an SVC partnership between both clusters.

Pre-verification

To verify that both clusters can communicate with each other, use the svcinfo lsclustercandidate command. Example 7-161 confirms that the clusters are communicating, because ITSO-CLS4 is an eligible SVC partnership candidate at ITSO-CLS1, and vice versa.

Example 7-161 Listing the available SVC clusters for partnership

IBM_2145:ITSO-CLS1:admin>svcinfo lsclustercandidate

id configured cluster_name
0000020068603A42 no ITSO-CLS4

IBM_2145:ITSO-CLS4:admin>svcinfo lsclustercandidate

id configured cluster_name
0000020060C06FCA no ITSO-CLS1

In Example 7-162, we show the output of the svcinfo lscluster command before the SVC cluster partnership for Global Mirror is set up, for comparison with the output after the partnership has been created.

Example 7-162 Pre-verification of cluster configuration

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster -delim :

id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA

IBM_2145:ITSO-CLS4:admin>svcinfo lscluster -delim :

id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020063E03A38:ITSO-CLS4:local:::10.64.210.246:10.64.210.247:::0000020063E03A38
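The colon-delimited output of svcinfo lscluster is convenient for scripted checks. The following sketch runs plain POSIX shell and awk against the sample rows captured in Example 7-162 (a live check would pipe the real command output instead; the "not_configured" label is our own, not SVC output):

```shell
# Captured 'svcinfo lscluster -delim :' output (sample from Example 7-162).
output='id:name:location:partnership:bandwidth:cluster_IP_address:cluster_service_IP_address:cluster_IP_address_6:cluster_service_IP_address_6:id_alias
0000020060C06FCA:ITSO-CLS1:local:::10.64.210.240:10.64.210.241:::0000020060C06FCA'

# Skip the header row; print each cluster name (field 2) with its
# partnership state (field 4), labeling an empty field "not_configured".
printf '%s\n' "$output" |
  awk -F: 'NR > 1 { print $2, ($4 == "" ? "not_configured" : $4) }'
```

Because no partnership exists yet, the partnership field is empty and the sketch prints `ITSO-CLS1 not_configured`; after both mkpartnership commands it would print `fully_configured`.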

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 7.13.3, “Changing link tolerance and cluster delay simulation” on page 437.

436 Implementing the IBM System Storage SAN Volume Controller V5.1


Partnership between clusters

In Example 7-163, we create the partnership from ITSO-CLS1 to ITSO-CLS4, specifying a 10 MBps bandwidth to use for the background copy.

To verify the status of the newly created partnership, we issue the svcinfo lscluster command. Notice that the new partnership is only partially configured. It will remain partially configured until we run the mkpartnership command on the other cluster.

Example 7-163 Creating the partnership from ITSO-CLS1 to ITSO-CLS4 and verifying the partnership

IBM_2145:ITSO-CLS1:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS4
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote partially_configured_local 10 0000020063E03A38

In Example 7-164, we create the partnership from ITSO-CLS4 back to ITSO-CLS1, specifying a 10 MBps bandwidth to be used for the background copy.

After creating the partnership, verify that the partnership is fully configured by reissuing the svcinfo lscluster command.

Example 7-164 Creating the partnership from ITSO-CLS4 to ITSO-CLS1 and verifying the partnership

IBM_2145:ITSO-CLS4:admin>svctask mkpartnership -bandwidth 10 ITSO-CLS1
IBM_2145:ITSO-CLS4:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
0000020063E03A38 ITSO-CLS4 local 0000020063E03A38
000002006AE04FC4 ITSO-CLS1 remote fully_configured 10 000002006AE04FC4

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster
id name location partnership bandwidth id_alias
000002006AE04FC4 ITSO-CLS1 local 000002006AE04FC4
0000020063E03A38 ITSO-CLS4 remote fully_configured 10 0000020063E03A38

7.13.3 Changing link tolerance and cluster delay simulation

The gm_link_tolerance parameter defines the sensitivity of the SVC to inter-cluster link overload conditions. The value is the number of seconds of continuous link difficulty that is tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, use the following command:

svctask chcluster -gmlinktolerance link_tolerance

The link_tolerance value is between 60 and 86,400 seconds in increments of 10 seconds. The default value for the link tolerance is 300 seconds. A value of 0 disables link tolerance.
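The value constraints above can be checked locally before issuing the chcluster command. This is a sketch only; the helper function name is our own, and the real validation is of course performed by the SVC itself:

```shell
# Sketch: sanity-check a gmlinktolerance value before issuing
# 'svctask chcluster -gmlinktolerance <value>'.
# Valid values: 0 (disables link tolerance), or 60-86400 seconds
# in increments of 10 seconds.
valid_link_tolerance() {
  v=$1
  [ "$v" -eq 0 ] && return 0
  [ "$v" -ge 60 ] && [ "$v" -le 86400 ] && [ $((v % 10)) -eq 0 ]
}

valid_link_tolerance 300 && echo "300 is valid"    # the default
valid_link_tolerance 65  || echo "65 is rejected"  # not a multiple of 10
```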


Intercluster and intracluster delay simulation

This Global Mirror feature permits simulation of a delayed write to a remote VDisk. It allows testing that detects colliding writes, so you can use it to test an application before fully deploying Global Mirror. The delay simulation can be enabled separately for intracluster or intercluster Global Mirror. To enable this feature, run the appropriate command:

– For intercluster:

svctask chcluster -gminterdelaysimulation <inter_cluster_delay_simulation>

– For intracluster:

svctask chcluster -gmintradelaysimulation <intra_cluster_delay_simulation>

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values specify the number of milliseconds by which secondary I/Os (that is, the copying from a primary VDisk to a secondary VDisk) are delayed for intercluster and intracluster relationships, respectively. You can set a value from 0 to 100 milliseconds in 1 millisecond increments. A value of zero (0) disables the feature.

To check the current settings for the delay simulation, use the following command:

svcinfo lscluster <clustername>

In Example 7-165, we show the modification of the delay simulation value and a change of the Global Mirror link tolerance parameters. We also show the changed values of the Global Mirror link tolerance and delay simulation parameters.

Example 7-165 Delay simulation and link tolerance modification

IBM_2145:ITSO-CLS1:admin>svctask chcluster -gminterdelaysimulation 20
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmintradelaysimulation 40
IBM_2145:ITSO-CLS1:admin>svctask chcluster -gmlinktolerance 200
IBM_2145:ITSO-CLS1:admin>svcinfo lscluster 000002006AE04FC4
id 000002006AE04FC4
name ITSO-CLS1
location local
partnership
bandwidth
total_mdisk_capacity 160.0GB
space_in_mdisk_grps 160.0GB
space_allocated_to_vdisks 19.00GB
total_free_space 141.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
time_zone 520 US/Pacific
code_level 5.1.0.0 (build 17.1.0908110000)
FC_port_speed 2Gb
console_IP

Recommendation: We strongly recommend that you use the default value. If the link is overloaded for a period, which affects host I/O at the primary site, the relationships will be stopped to protect those hosts.


id_alias 000002006AE04FC4
gm_link_tolerance 200
gm_inter_cluster_delay_simulation 20
gm_intra_cluster_delay_simulation 40
email_reply
email_contact
email_contact_primary
email_contact_alternate
email_contact_location
email_state invalid
inventory_mail_interval 0
total_vdiskcopy_capacity 19.00GB
total_used_capacity 19.00GB
total_overallocation 11
total_vdisk_capacity 19.00GB
cluster_ntp_IP_address
cluster_isns_IP_address
iscsi_auth_method none
iscsi_chap_secret
auth_service_configured no
auth_service_enabled no
auth_service_url
auth_service_user_name
auth_service_pwd_set no
auth_service_cert_set no
relationship_bandwidth_limit 25

7.13.4 Creating a Global Mirror consistency group

In Example 7-166, we create the Global Mirror consistency group using the svctask mkrcconsistgrp command. We will use this consistency group for the Global Mirror relationships for the database VDisks. The consistency group is named CG_W2K3_GM.

Example 7-166 Creating the Global Mirror consistency group CG_W2K3_GM

IBM_2145:ITSO-CLS1:admin>svctask mkrcconsistgrp -cluster ITSO-CLS4 -name CG_W2K3_GM
RC Consistency Group, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp
id name master_cluster_id master_cluster_name aux_cluster_id aux_cluster_name primary state relationship_count copy_type
0 CG_W2K3_GM 000002006AE04FC4 ITSO-CLS1 0000020063E03A38 ITSO-CLS4 empty 0 empty_group

7.13.5 Creating Global Mirror relationships

In Example 7-167, we create the GMREL1 and GMREL2 Global Mirror relationships for the GM_DB_Pri and GM_DBLog_Pri VDisks. We also make them members of the CG_W2K3_GM Global Mirror consistency group.


We use the svcinfo lsvdisk command to list all of the VDisks in the ITSO-CLS1 cluster and, then, use the svcinfo lsrcrelationshipcandidate command to show the possible VDisk candidates for GM_DB_Pri in ITSO-CLS4.

After checking all of these conditions, use the svctask mkrcrelationship command to create the Global Mirror relationship.

To verify the newly created Global Mirror relationships, list them with the svcinfo lsrcrelationship command.

Example 7-167 Creating GMREL1 and GMREL2 Global Mirror relationships

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk -filtervalue name=GM*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
16 GM_App_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000013 0 1 empty
17 GM_DB_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000014 0 1 empty
18 GM_DBLog_Pri 0 io_grp0 online 0 MDG_DS47 1.00GB striped 6005076801AB813F1000000000000015 0 1 empty
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationshipcandidate -aux ITSO-CLS4 -master GM_DB_Pri
id vdisk_name
0 MM_DB_Sec
1 MM_Log_Sec
2 MM_App_Sec
3 GM_App_Sec
4 GM_DB_Sec
5 GM_DBLog_Sec
6 SEV


IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DB_Pri -aux GM_DB_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL1 -global
RC Relationship, id [17], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_DBLog_Pri -aux GM_DBLog_Sec -cluster ITSO-CLS4 -consistgrp CG_W2K3_GM -name GMREL2 -global
RC Relationship, id [18], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship
id name master_cluster_id master_cluster_name master_vdisk_id master_vdisk_name aux_cluster_id aux_cluster_name aux_vdisk_id aux_vdisk_name primary consistency_group_id consistency_group_name state bg_copy_priority progress copy_type
17 GMREL1 000002006AE04FC4 ITSO-CLS1 17 GM_DB_Pri 0000020063E03A38 ITSO-CLS4 4 GM_DB_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global
18 GMREL2 000002006AE04FC4 ITSO-CLS1 18 GM_DBLog_Pri 0000020063E03A38 ITSO-CLS4 5 GM_DBLog_Sec master 0 CG_W2K3_GM inconsistent_stopped 50 0 global


7.13.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri

In Example 7-168, we create the stand-alone Global Mirror relationship GMREL3 for GM_App_Pri. After it is created, we will check the status of each of our Global Mirror relationships.

Notice that the status of GMREL3 is consistent_stopped, because it was created with the -sync option. The -sync option indicates that the secondary (auxiliary) VDisk is already synchronized with the primary (master) VDisk. The initial background synchronization is skipped when this option is used.

GMREL1 and GMREL2 are in the inconsistent_stopped state, because they were not created with the -sync option, so their auxiliary VDisks need to be synchronized with their primary VDisks.

Example 7-168 Creating a stand-alone Global Mirror relationship and verifying it

IBM_2145:ITSO-CLS1:admin>svctask mkrcrelationship -master GM_App_Pri -aux GM_App_Sec -cluster ITSO-CLS4 -sync -name GMREL3 -global
RC Relationship, id [16], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship -delim :
id:name:master_cluster_id:master_cluster_name:master_vdisk_id:master_vdisk_name:aux_cluster_id:aux_cluster_name:aux_vdisk_id:aux_vdisk_name:primary:consistency_group_id:consistency_group_name:state:bg_copy_priority:progress:copy_type
16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consistent_stopped:50:100:global
17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global
18:GMREL2:000002006AE04FC4:ITSO-CLS1:18:GM_DBLog_Pri:0000020063E03A38:ITSO-CLS4:5:GM_DBLog_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global
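The -delim : form of svcinfo lsrcrelationship is handy for summarizing many relationships at once. The following sketch runs awk against two sample rows captured from Example 7-168 (on a live system you would pipe the real command output; the field positions assume the header shown in that example):

```shell
# Captured 'svcinfo lsrcrelationship -delim :' rows (from Example 7-168).
rows='16:GMREL3:000002006AE04FC4:ITSO-CLS1:16:GM_App_Pri:0000020063E03A38:ITSO-CLS4:3:GM_App_Sec:master:::consistent_stopped:50:100:global
17:GMREL1:000002006AE04FC4:ITSO-CLS1:17:GM_DB_Pri:0000020063E03A38:ITSO-CLS4:4:GM_DB_Sec:master:0:CG_W2K3_GM:inconsistent_stopped:50:0:global'

# name is field 2, state is field 14, progress is field 16.
printf '%s\n' "$rows" | awk -F: '{ printf "%s %s %s%%\n", $2, $14, $16 }'
```

For the sample rows, this prints one line per relationship, for example `GMREL3 consistent_stopped 100%`.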

7.13.7 Starting Global Mirror

Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment.

When implementing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site.

In this section, we show how to start the stand-alone Global Mirror relationships and the consistency group.

7.13.8 Starting a stand-alone Global Mirror relationship

In Example 7-169, we start the stand-alone Global Mirror relationship named GMREL3. Because the Global Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state.

Example 7-169 Starting the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1


master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.9 Starting a Global Mirror consistency group

In Example 7-170, we start the CG_W2K3_GM Global Mirror consistency group. Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group.

Upon completion of the background copy, the CG_W2K3_GM Global Mirror consistency group enters the Consistent synchronized state (see Example 7-170).

Example 7-170 Starting the Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state inconsistent_copying
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2


7.13.10 Monitoring background copy progress

To monitor the background copy progress, use the svcinfo lsrcrelationship command. Used without any parameters, this command shows all of the defined Global Mirror relationships. In the command output, progress indicates the current background copy progress. Example 7-171 shows our Global Mirror relationships.

Example 7-171 Monitoring background copy progress example

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL1
id 17
name GMREL1
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 17
master_vdisk_name GM_DB_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 4
aux_vdisk_name GM_DB_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 38
freeze_time
status online
sync
copy_type global
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL2
id 18
name GMREL2
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 18
master_vdisk_name GM_DBLog_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 5
aux_vdisk_name GM_DBLog_Sec
primary master
consistency_group_id 0
consistency_group_name CG_W2K3_GM
state inconsistent_copying
bg_copy_priority 50
progress 40
freeze_time
status online
sync
copy_type global
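The per-field report of svcinfo lsrcrelationship is equally easy to post-process, for example to extract the progress value for a polling script. This sketch runs against sample lines captured from Example 7-171 rather than a live cluster:

```shell
# Captured lines from 'svcinfo lsrcrelationship GMREL1' (Example 7-171).
report='name GMREL1
state inconsistent_copying
bg_copy_priority 50
progress 38'

# The report prints one "field value" pair per line; select the progress row.
printf '%s\n' "$report" | awk '$1 == "progress" { print $2 }'
```

On a live system, the same awk filter applied to the real command output could drive a loop that waits for the state field to reach consistent_synchronized.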

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.


When all of the Global Mirror relationships complete the background copy, the consistency group enters the Consistent synchronized state, as shown in Example 7-172.

Example 7-172 Listing the Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.11 Stopping and restarting Global Mirror

Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships, as well as the consistency group.

First, we show how to stop and restart the stand-alone Global Mirror relationships and the consistency group.

7.13.12 Stopping a stand-alone Global Mirror relationship

In Example 7-173, we stop the stand-alone Global Mirror relationship while enabling access (write I/O) to both the primary and the secondary VDisk. As a result, the relationship enters the Idling state.

Example 7-173 Stopping the stand-alone Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svctask stoprcrelationship -access GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary
consistency_group_id
consistency_group_name


state idling
bg_copy_priority 50
progress
freeze_time
status
sync in_sync
copy_type global

7.13.13 Stopping a Global Mirror consistency group

In Example 7-174, we stop the Global Mirror consistency group without specifying the -access parameter; therefore, the consistency group enters the Consistent stopped state.

Example 7-174 Stopping a Global Mirror consistency group without specifying -access

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_stopped
relationship_count 2
freeze_time
status
sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

If, afterwards, we want to enable access (write I/O) to the secondary VDisks, we can reissue the svctask stoprcconsistgrp command, specifying the -access parameter. The consistency group then transitions to the Idling state, as shown in Example 7-175.

Example 7-175 Stopping a Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svctask stoprcconsistgrp -access CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary
state idling
relationship_count 2
freeze_time
status


sync in_sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.14 Restarting a Global Mirror relationship in the Idling state

When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk, consistency will be compromised, and we must specify the -force parameter to restart the relationship; if the -force parameter is not used, the command fails. In Example 7-176, we restart the relationship with the -force parameter, specifying the master VDisk as the primary.

Example 7-176 Restarting a Global Mirror relationship after updates in the Idling state

IBM_2145:ITSO-CLS1:admin>svctask startrcrelationship -primary master -force GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.15 Restarting a Global Mirror consistency group in the Idling state

When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency will be compromised. Therefore, we must specify the -force parameter to restart the consistency group. If the -force parameter is not used, the command will fail.

In Example 7-177, we restart the consistency group and change the copy direction by specifying the auxiliary VDisks to become the primaries.


Example 7-177 Restarting a Global Mirror relationship while changing the copy direction

IBM_2145:ITSO-CLS1:admin>svctask startrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.13.16 Changing direction for Global Mirror

In this section, we show how to change the copy direction of the stand-alone Global Mirror relationships and the consistency group.

7.13.17 Switching copy direction for a Global Mirror relationship

When a Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship by using the svctask switchrcrelationship command and specifying the primary VDisk.

If the VDisk that is specified as the primary when issuing this command is already a primary, the command has no effect.

In Example 7-178, we change the copy direction for the stand-alone Global Mirror relationship, specifying the auxiliary VDisk to become the primary.

Example 7-178 Switching the copy direction for a Global Mirror relationship

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcrelationship command.


aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary master
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

IBM_2145:ITSO-CLS1:admin>svctask switchrcrelationship -primary aux GMREL3
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcrelationship GMREL3
id 16
name GMREL3
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
master_vdisk_id 16
master_vdisk_name GM_App_Pri
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
aux_vdisk_id 3
aux_vdisk_name GM_App_Sec
primary aux
consistency_group_id
consistency_group_name
state consistent_synchronized
bg_copy_priority 50
progress
freeze_time
status online
sync
copy_type global

7.13.18 Switching copy direction for a Global Mirror consistency group

When a Global Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the consistency group by using the svctask switchrcconsistgrp command and specifying which side is to become the primary.

If the side that is specified as the primary when issuing this command is already the primary, the command has no effect.

In Example 7-179, we change the copy direction for the Global Mirror consistency group, specifying the auxiliary to become the primary.

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transitions from primary to secondary, because all I/O will be inhibited when that VDisk becomes the secondary. Therefore, careful planning is required prior to using the svctask switchrcconsistgrp command.


Example 7-179 Switching the copy direction for a Global Mirror consistency group

IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary master
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2
IBM_2145:ITSO-CLS1:admin>svctask switchrcconsistgrp -primary aux CG_W2K3_GM
IBM_2145:ITSO-CLS1:admin>svcinfo lsrcconsistgrp CG_W2K3_GM
id 0
name CG_W2K3_GM
master_cluster_id 000002006AE04FC4
master_cluster_name ITSO-CLS1
aux_cluster_id 0000020063E03A38
aux_cluster_name ITSO-CLS4
primary aux
state consistent_synchronized
relationship_count 2
freeze_time
status
sync
copy_type global
RC_rel_id 17
RC_rel_name GMREL1
RC_rel_id 18
RC_rel_name GMREL2

7.14 Service and maintenance

This section details the various service and maintenance tasks that you can execute within the SVC environment.


7.14.1 Upgrading software

This section explains how to upgrade the SVC software.

Package numbering and version

The format for software upgrade package numbers is four positive integers that are separated by periods, for example, 5.1.0.0. Each software package is given a unique number.
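A script that needs to compare package levels can split the four integers apart with standard tools. This is a sketch only; the field names follow the version/release/modification convention used for this book's edition:

```shell
# Sketch: split a package version string such as 5.1.0.0 into its four
# integers, using IFS-style word splitting via tr.
ver=5.1.0.0
set -- $(printf '%s' "$ver" | tr '.' ' ')
echo "version=$1 release=$2 modification=$3 fix=$4"
```

For the sample value, this prints `version=5 release=1 modification=0 fix=0`, which can then be compared field by field against a required minimum level.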

Check the recommended software levels at this Web site:

http://www.ibm.com/storage/support/2145

SVC software upgrade test utility

The SAN Volume Controller Software Upgrade Test Utility, which resides on the Master Console, checks the software levels in the system against the recommended levels documented on the support Web site. You will be informed whether the software levels are up-to-date or whether you need to download and install newer levels. You can download the utility and installation instructions from this link:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

After the software file has been uploaded to the cluster (to the /home/admin/upgrade directory), you can select the software and apply it to the cluster by using either the GUI or the svctask applysoftware command. When a new code level is applied, it is automatically installed on all of the nodes within the cluster.

The underlying command-line tool runs the sw_preinstall script, which checks the validity of the upgrade file, and whether it can be applied over the current level. If the upgrade file is unsuitable, the pre-install script deletes the files, which prevents the buildup of invalid files on the cluster.

Precaution before upgrade

Software installation is normally considered to be a client's task. The SVC supports concurrent software upgrade: you can perform the software upgrade concurrently with user I/O operations and certain management activities. However, only limited CLI commands are operational from the time that the install command starts until the upgrade operation has either terminated successfully or been backed out. Certain commands fail with a message indicating that a software upgrade is in progress.

Before you upgrade the SVC software, ensure that all I/O paths between all hosts and SANs are working; otherwise, the applications might have I/O failures during the software upgrade. You can verify the I/O paths by using the Subsystem Device Driver (SDD) datapath query commands. Example 7-180 shows the output.

Example 7-180 Query adapter

#datapath query adapter
Active Adapters :2

Adpt# Name State Mode Select Errors Paths Active
 0 fscsi0 NORMAL ACTIVE 1445 0 4 4
 1 fscsi1 NORMAL ACTIVE 1888 0 4 4
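This adapter check can be automated: an adapter is healthy when its State is NORMAL, its Mode is ACTIVE, and its Active path count equals its Paths count. The sketch below applies that rule to the rows captured in Example 7-180 (pipe real `datapath query adapter` output on a live host; the "degraded"/"healthy" labels are ours):

```shell
# Captured 'datapath query adapter' rows (from Example 7-180).
rows='0 fscsi0 NORMAL ACTIVE 1445 0 4 4
1 fscsi1 NORMAL ACTIVE 1888 0 4 4'

# Columns: Adpt# Name State Mode Select Errors Paths Active.
# Flag any adapter that is not NORMAL/ACTIVE or has inactive paths.
printf '%s\n' "$rows" | awk '
  $3 != "NORMAL" || $4 != "ACTIVE" || $7 != $8 { print "adapter", $2, "degraded"; bad = 1 }
  END { if (!bad) print "all adapter paths healthy" }'
```

For the sample rows, both adapters pass and the sketch prints `all adapter paths healthy`.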

Requirement: Your cluster must be running SVC 4.3.1.7 cluster code before you upgrade to SVC 5.1.0.0 cluster code.


#datapath query device
Total Devices : 2

DEV#: 0 DEVICE NAME: vpath0 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000000
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
 0 fscsi0/hdisk3 OPEN NORMAL 0 0
 1 fscsi1/hdisk7 OPEN NORMAL 972 0

DEV#: 1 DEVICE NAME: vpath1 TYPE: 2145 POLICY: Optimized
SERIAL: 60050768018201BF2800000000000002
==========================================================================
Path# Adapter/Hard Disk State Mode Select Errors
 0 fscsi0/hdisk4 OPEN NORMAL 784 0
 1 fscsi1/hdisk8 OPEN NORMAL 0 0

Verify that your uninterruptible power supply unit configuration is also set up correctly (even if your cluster is running without problems). Specifically, make sure that the following conditions are true:

– Your uninterruptible power supply units are all getting their power from an external source, and they are not daisy chained. Make sure that each uninterruptible power supply unit is not supplying power to another node's uninterruptible power supply unit.

– The power cable and the serial cable that come from each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, while one node is shut down, another node might also be mistakenly shut down.

You must also ensure that all I/O paths are working for each host that runs I/O operations to the SAN during the software upgrade. You can check the I/O paths by using the datapath query commands.

You do not need to check for hosts that have no active I/O operations to the SAN during the software upgrade.

Procedure

To upgrade the SVC cluster software, perform the following steps:

1. Before starting the upgrade, you must back up the configuration (see 7.14.9, “Backing up the SVC cluster configuration” on page 466) and save the backup config file in a safe place.

2. Also, save the data collection for support diagnosis in case of problems, as shown in Example 7-181 on page 452.

Write-through mode: During a software upgrade, there are periods when not all of the nodes in the cluster are operational; as a result, the cache operates in write-through mode. Write-through mode affects the throughput, latency, and bandwidth aspects of performance.

Important: Do not share the SVC uninterruptible power supply unit with any other devices.


Example 7-181 svc_snap command

IBM_2145:ITSO-CLS1:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Copying files, please wait...
Listing files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.104643.080617.002427.tgz

3. List the dump that was generated by the previous command, as shown in Example 7-182.

Example 7-182 svcinfo ls2145dumps command

IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node3
1 svc.config.cron.bak_SVCNode_2
2 svc.config.cron.bak_node1
3 dump.104643.070803.015424
4 dump.104643.071010.232740
5 svc.config.backup.bak_ITSOCL1_N1
6 svc.config.backup.xml_ITSOCL1_N1
7 svc.config.backup.tmp.xml
8 svc.config.cron.bak_ITSOCL1_N1
9 dump.104643.080609.202741
10 104643.080610.154323.ups_log.tar.gz
11 104643.trc.old
12 dump.104643.080609.212626
13 104643.080612.221933.ups_log.tar.gz
14 svc.config.cron.bak_Node1
15 svc.config.cron.log_Node1
16 svc.config.cron.sh_Node1
17 svc.config.cron.xml_Node1
18 dump.104643.080616.203659
19 104643.trc
20 ups_log.a
21 snap.104643.080617.002427.tgz
22 ups_log.b

4. Save the generated dump in a safe place using the pscp command, as shown in Example 7-183.

Example 7-183 pscp -load command

C:\>pscp -load ITSOCL1 admin@9.43.86.117:/dumps/snap.104643.080617.002427.tgz c:\
snap.104643.080617.002427 | 597 kB | 597.7 kB/s | ETA: 00:00:00 | 100%

5. Upload the new software package using PuTTY Secure Copy. Enter the command, as shown in Example 7-184 on page 453.

452 Implementing the IBM System Storage SAN Volume Controller V5.1


Example 7-184 pscp -load command

C:\>pscp -load ITSOCL1 IBM2145_INSTALL_4.3.0.0 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_4.3.0.0-0 | 103079 kB | 9370.8 kB/s | ETA: 00:00:00 | 100%

6. Upload the SAN Volume Controller Software Upgrade Test Utility by using PuTTY Secure Copy. Enter the command, as shown in Example 7-185.

Example 7-185 Upload utility

C:\>pscp -load ITSOCL1 IBM2145_INSTALL_svcupgradetest_1.11 admin@9.43.86.117:/home/admin/upgrade
IBM2145_INSTALL_svcupgrad | 11 kB | 12.0 kB/s | ETA: 00:00:00 | 100%

7. Verify that the packages were successfully delivered through the PuTTY command-line application by entering the svcinfo lssoftwaredumps command, as shown in Example 7-186.

Example 7-186 svcinfo lssoftwaredumps command

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_4.3.0.0
1 IBM2145_INSTALL_svcupgradetest_1.11

8. Now that the packages are uploaded, first install the SAN Volume Controller Software Upgrade Test Utility, as shown in Example 7-187.

Example 7-187 svctask applysoftware command

IBM_2145:ITSO-CLS1:admin>svctask applysoftware -file IBM2145_INSTALL_svcupgradetest_1.11
CMMVC6227I The package installed successfully.

9. Using the following command, test the upgrade for known issues that might prevent a software upgrade from completing successfully, as shown in Example 7-188.

Example 7-188 svcupgradetest command

IBM_2145:ITSO-CLS1:admin>svcupgradetest
svcupgradetest version 1.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.
The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.

10.Now, use the svctask command set to apply the software upgrade, as shown in Example 7-189.

Example 7-189 Apply upgrade command example

IBM_2145:ITSOSVC42A:admin>svctask applysoftware -file IBM2145_INSTALL_4.3.0.0

Important: If the svcupgradetest command produces any errors, troubleshoot the errors using the maintenance procedures before continuing further.


While the upgrade runs, you can check the status, as shown in Example 7-190.

Example 7-190 Check update status

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwareupgradestatus
status
upgrading
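If you prefer to script this status check rather than run it by hand, the following Python sketch shows one way to wait for the upgrade to finish. It is illustrative only: the query_status callable (for example, a wrapper that runs svcinfo lssoftwareupgradestatus over SSH) and the interval and timeout values are our own assumptions, not part of the SVC CLI.

```python
import time

def wait_for_upgrade(query_status, poll_interval=60, timeout=7200):
    """Poll an SVC software upgrade until it leaves the 'upgrading' state.

    query_status is a caller-supplied function (for example, one that runs
    'svcinfo lssoftwareupgradestatus' over SSH) that returns the status
    string. Returns the final status, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = query_status().strip()
        if status != "upgrading":
            return status
        time.sleep(poll_interval)
    raise TimeoutError("software upgrade still running after %d seconds" % timeout)
```

In practice, a generous timeout matters: as noted in step 12, a complete upgrade can take 40 minutes or more.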

11.The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted one at a time. If a node does not restart automatically during the upgrade, you must repair it manually.

12.Eventually both nodes display Cluster: on line one on the SVC front panel and the name of your cluster on line two of the SVC front panel. Be prepared for a wait (in our case, we waited approximately 40 minutes).

13.To verify that the upgrade was successful, you can perform either of the following options:

– Run the svcinfo lscluster and svcinfo lsnodevpd commands, as shown in Example 7-191. We have truncated the lscluster and lsnodevpd information for this example.

Example 7-191 svcinfo lscluster and svcinfo lsnodevpd commands

IBM_2145:ITSO-CLS1:admin>svcinfo lscluster ITSO-CLS1
id 0000020060806FCA
name ITSO-CLS1
location local
partnership
bandwidth
cluster_IP_address 9.43.86.117
cluster_service_IP_address 9.43.86.118
total_mdisk_capacity 756.0GB
space_in_mdisk_grps 756.0GB
space_allocated_to_vdisks 156.00GB
total_free_space 600.0GB
statistics_status off
statistics_frequency 15
required_memory 8192
cluster_locale en_US
SNMP_setting none
SNMP_community
SNMP_server_IP_address 0.0.0.0
subnet_mask 255.255.252.0
default_gateway 9.43.85.1
time_zone 522 UTC
email_setting
email_id

Solid-state drives: If you use solid-state drives, the data of the solid-state drive within the restarted node will not be available during the reboot.

Performance: During this process, both the CLI and the GUI can vary from sluggish to unresponsive. The important point is that I/O to the hosts continues throughout this process.


code_level 4.3.0.0 (build 8.15.0806110000)
FC_port_speed 2Gb
console_IP 9.43.86.115:9080
id_alias 0000020060806FCA
gm_link_tolerance 300
gm_inter_cluster_delay_simulation 0
gm_intra_cluster_delay_simulation 0
email_server 127.0.0.1
email_server_port 25
email_reply [email protected]
email_contact ITSO User
email_contact_primary 555-1234
email_contact_alternate
email_contact_location ITSO
email_state running
email_user_count 1
inventory_mail_interval 0
cluster_IP_address_6
cluster_service_IP_address_6
prefix_6
default_gateway_6
total_vdiskcopy_capacity 156.00GB
total_used_capacity 156.00GB
total_overallocation 20
total_vdisk_capacity 156.00GB
IBM_2145:ITSO-CLS1:admin>

IBM_2145:ITSO-CLS1:admin>svcinfo lsnodevpd 1
id 1

system board: 24 fields
part_number 31P0906
system_serial_number 13DVT31
number_of_processors 4
number_of_memory_slots 8
number_of_fans 6
number_of_FC_cards 1
number_of_scsi/ide_devices 2
BIOS_manufacturer IBM
BIOS_version -[GFE136BUS-1.09]-
BIOS_release_date 02/08/2008
system_manufacturer IBM
system_product IBM System x3550 -[21458G4]-
..
software: 6 fields
code_level 4.3.0.0 (build 8.15.0806110000)
node_name Node1
ethernet_status 1
WWNN 0x50050768010037e5
id 1


– Copy the error log to your management workstation, as explained in 7.14.2, “Running maintenance procedures” on page 456. Open the error log in WordPad and search for Software Install completed.

You have now completed the required tasks to upgrade the SVC software.

7.14.2 Running maintenance procedures
Use the svctask finderr command to generate a list of any unfixed errors in the system. This command analyzes the last generated log that resides in the /dumps/elogs/ directory on the cluster.

If you want to generate a new log before analyzing unfixed errors, run the svctask dumperrlog command (Example 7-192).

Example 7-192 svctask dumperrlog command

IBM_2145:ITSO-CLS2:admin>svctask dumperrlog

This command generates an errlog_timestamp file, such as errlog_100048_080618_042419, where:

• errlog is part of the default prefix for all error log files.
• 100048 is the panel name of the current configuration node.
• 080618 is the date (YYMMDD).
• 042419 is the time (HHMMSS).
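Because the timestamped file name has a fixed shape, it lends itself to scripted handling. The following Python sketch is our own illustration (parse_errlog_name is not an SVC tool) and splits a dump name into its parts:

```python
from datetime import datetime

def parse_errlog_name(filename):
    """Split an error log dump name, such as errlog_100048_080618_042419,
    into its prefix, node panel name, and a datetime built from the
    YYMMDD and HHMMSS fields."""
    prefix, panel, date_part, time_part = filename.rsplit("_", 3)
    stamp = datetime.strptime(date_part + time_part, "%y%m%d%H%M%S")
    return prefix, panel, stamp
```

This also handles renamed dumps, such as svcerrlog_100048_080624_170257 produced with the -prefix parameter.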

You can add the -prefix parameter to your command to change the default prefix of errlog to something else (Example 7-193).

Example 7-193 svctask dumperrlog -prefix command

IBM_2145:ITSO-CLS2:admin>svctask dumperrlog -prefix svcerrlog

This command creates a file called svcerrlog_timestamp.

To see the file name, you must enter the following command (Example 7-194).

Example 7-194 svcinfo lserrlogdumps command

IBM_2145:ITSO-CLS2:admin>svcinfo lserrlogdumps
id filename
0 errlog_100048_080618_042049
1 errlog_100048_080618_042128
2 errlog_100048_080618_042355
3 errlog_100048_080618_042419
4 errlog_100048_080618_175652
5 errlog_100048_080618_175702
6 errlog_100048_080618_175724
7 errlog_100048_080619_205900
8 errlog_100048_080624_170214
9 svcerrlog_100048_080624_170257


After you generate your error log, you can issue the svctask finderr command to scan the error log for any unfixed errors, as shown in Example 7-195.

Example 7-195 svctask finderr command

IBM_2145:ITSO-CLS2:admin>svctask finderr
Highest priority unfixed error code is [1230]

As you can see, we have one unfixed error on our system. To analyze this error, download the error log onto your own PC.

To know more about this unfixed error, look at the error log in more detail. Use the PuTTY Secure Copy process to copy the file from the cluster to your local management workstation, as shown in Example 7-196.

Example 7-196 pscp command: Copy error logs off of the SVC

In W2K3: Start -> Run -> cmd

C:\Program Files\PuTTY>pscp -load SVC_CL2 [email protected]:/dumps/elogs/svcerrlog_100048_080624_170257 c:\temp\svcerrlog.txt
svcerrlog.txt | 6390 kB | 3195.1 kB/s | ETA: 00:00:00 | 100%

In order to use the Run option, you must know where your pscp.exe is located. In this case, it is in the C:\Program Files\PuTTY\ folder.

This command copies the file called svcerrlog_100048_080624_170257 to the C:\temp directory on our local workstation and calls the file svcerrlog.txt.

Open the file in WordPad (Notepad does not format the window as well). You will see information similar to what is shown in Example 7-197. We truncated this list for the purposes of this example.

Example 7-197 errlog in WordPad

Error Log Entry 400
Node Identifier : Node2
Object Type : device
Object ID : 0
Copy ID :
Sequence Number : 37404
Root Sequence Number : 37404
First Error Timestamp : Sat Jun 21 00:08:21 2008 : Epoch + 1214006901
Last Error Timestamp : Sat Jun 21 00:11:36 2008 : Epoch + 1214007096
Error Count : 2
Error ID : 10013 : Login Excluded

Maximum number of error log dump files: A maximum of ten error log dump files per node will be kept on the cluster. When the eleventh dump is made, the oldest existing dump file for that node will be overwritten. Note that the directory might also hold log files retrieved from other nodes. These files are not counted. The SVC will delete the oldest file (when necessary) for this node in order to maintain the maximum number of files. The SVC will not delete files from other nodes unless you issue the cleandumps command.
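To illustrate the per-node retention rule described in the note above, this small Python sketch (our own helper, not SVC code) picks the file that would be overwritten next. It relies on the YYMMDD_HHMMSS suffix, which sorts chronologically as a plain string:

```python
def file_to_delete(node_dumps, limit=10):
    """Given the error log dump names for one node (each ending in a
    YYMMDD_HHMMSS timestamp), return the oldest file that would be
    overwritten when the node already holds 'limit' dumps, else None."""
    if len(node_dumps) < limit:
        return None
    # The last 13 characters are YYMMDD_HHMMSS, which sorts by age.
    return min(node_dumps, key=lambda name: name[-13:])
```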


Error Code : 1230 : Login excluded
Status Flag : UNFIXED
Type Flag : TRANSIENT ERROR

03 00 00 00 03 00 00 00 31 44 17 B8 A0 00 04 20 33 44 17 B8 A0 00 05 20 00 11 01 00 00 00 01 00 33 00 33 00 05 00 0B 00 00 00 01 00 00 00 01 00 04 00 04 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

Scrolling through, or searching for, the term unfixed, you can find more detail about the problem. You might see more entries in the error log that have the status of unfixed.

After you take the necessary steps to rectify the problem, you can mark the error as fixed in the log by issuing the svctask cherrstate command against its sequence numbers (Example 7-198).

Example 7-198 svctask cherrstate command

IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37404

If you accidentally mark the wrong error as fixed, you can mark it as unfixed again by entering the same command and appending the -unfix flag to the end, as shown in Example 7-199.

Example 7-199 unfix flag

IBM_2145:ITSO-CLS2:admin>svctask cherrstate -sequencenumber 37406 -unfix
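If you have several errors to mark, you can generate the commands in advance from a list of sequence numbers. The helper below is our own illustration, not part of the SVC CLI:

```python
def cherrstate_commands(sequence_numbers, unfix=False):
    """Build the svctask cherrstate commands for a list of error log
    sequence numbers; set unfix=True to revert an accidental fix."""
    suffix = " -unfix" if unfix else ""
    return ["svctask cherrstate -sequencenumber %d%s" % (seq, suffix)
            for seq in sequence_numbers]
```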

7.14.3 Setting up SNMP notification
To set up error notification, use the svctask mksnmpserver command.

Example 7-200 shows an example of the mksnmpserver command.

Example 7-200 svctask mksnmpserver command

IBM_2145:ITSO-CLS2:admin>svctask mksnmpserver -error on -warning on -info on -ip 9.43.86.160 -community SVC
SNMP Server id [1] successfully created

This command sends all error, warning, and informational events to the SVC community on the SNMP manager with the IP address 9.43.86.160.

7.14.4 Set syslog event notification
Starting with SVC 5.1, you can save a syslog to a defined syslog server. The SVC now provides support for syslog in addition to e-mail and SNMP traps.

The syslog protocol is a client-server standard for forwarding log messages from a sender to a receiver on an IP network. You can use syslog to integrate log messages from various types of systems into a central repository. You can configure SVC 5.1 to send information to up to six syslog servers.
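As background on the protocol, each syslog message carries a priority value computed as facility * 8 + severity, which is how the facility column shown later in the svcinfo lssyslogserver output maps onto the messages that the server receives. The following Python sketch is our own simplified illustration (a complete RFC 3164 message also carries a timestamp):

```python
def syslog_pri(facility, severity):
    """RFC 3164 priority value: facility * 8 + severity."""
    return facility * 8 + severity

def format_syslog(facility, severity, hostname, message):
    """Format a minimal RFC 3164-style message (timestamp omitted
    for brevity)."""
    return "<%d>%s %s" % (syslog_pri(facility, severity), hostname, message)
```

For example, facility 4 with severity 6 (informational) yields a priority of 38.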


You use the svctask mksyslogserver command to configure the SVC using the CLI, as shown in Example 7-201.

Using this command with the -h parameter gives you information about all of the available options. In our example, we only configure the SVC to use the default values for our syslog server.

Example 7-201 Configuring the syslog

IBM_2145:ITSO-CLS2:admin>svctask mksyslogserver -ip 10.64.210.231 -name Syslogserv1
Syslog Server id [1] successfully created

When we have configured our syslog server, we can display the current syslog server configurations in our cluster, as shown in Example 7-202.

Example 7-202 svcinfo lssyslogserver command

IBM_2145:ITSO-CLS2:admin>svcinfo lssyslogserver
id name IP_address facility error warning info
0 Syslogsrv 10.64.210.230 4 on on on
1 Syslogserv1 10.64.210.231 0 on on on

7.14.5 Configuring error notification using an e-mail server
The SVC can use an e-mail server to send event notification and inventory e-mails to e-mail users. It can transmit any combination of error, warning, and informational notification types. The SVC supports up to six e-mail servers to provide redundant access to the external e-mail network. The SVC uses the e-mail servers in sequence until the e-mail is successfully sent from the SVC.

The attempt is successful when the SVC gets a positive acknowledgement from an e-mail server that the e-mail has been received by the server.

If no port is specified, port 25 is the default port, as shown in Example 7-203.

Example 7-203 The mkemailserver command syntax

IBM_2145:ITSO-CLS1:admin>svctask mkemailserver -ip 192.168.1.1
Email Server id [0] successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsemailserver 0
id 0
name emailserver0
IP_address 192.168.1.1
port 25

We can configure an e-mail user that will receive e-mail notifications from the SVC cluster. We can define up to 12 users to receive e-mails from our SVC.

Important: Before the SVC can start sending e-mails, we must run the svctask startemail command, which enables this service.


Using the svcinfo lsemailuser command, we can verify who is already registered and what type of information is sent to that user, as shown in Example 7-204.

Example 7-204 svcinfo lsemailuser command

IBM_2145:ITSO-CLS2:admin>svcinfo lsemailuser
id name address user_type error warning info inventory
0 IBM_Support_Center [email protected] support on off off on

We can also create a new user, for example, a SAN administrator, as shown in Example 7-205.

Example 7-205 svctask mkemailuser command

IBM_2145:ITSO-CLS2:admin>svctask mkemailuser -address [email protected] -error on -warning on -info on -inventory on
User, id [1], successfully created

7.14.6 Analyzing the error log
The following types of events and errors are logged in the error log:

• Events: State changes are detected by the cluster software and are logged for informational purposes. Events are recorded in the cluster error log.

• Errors: Hardware or software problems are detected by the cluster software and require repair. Errors are recorded in the cluster error log.

• Unfixed errors: Errors were detected and recorded in the cluster error log and have not yet been corrected or repaired.

• Fixed errors: Errors were detected and recorded in the cluster error log and have subsequently been corrected or repaired.

To display the error log, use the svcinfo lserrlog command or the svcinfo caterrlog command, as shown in Example 7-206 (the output is the same).

Example 7-206 svcinfo caterrlog command

IBM_2145:ITSOSVC42A:admin>svcinfo caterrlog -delim :
id:type:fixed:SNMP_trap_raised:error_type:node_name:sequence_number:root_sequence_number:first_timestamp:last_timestamp:number_of_errors:error_code
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
0:cluster:no:no:5:SVCNode_1:0:0:070606094909:070606094909:1:00990101
12:_grp:no:no:5:SVCNode_1:0:0:070606094858:070606094858:1:00990145
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094539:070606094539:1:00990173
0:internal:no:no:5:SVCNode_1:0:0:070606094507:070606094507:1:00990219
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094208:070606094208:1:00990148
12:mdisk_grp:no:no:5:SVCNode_1:0:0:070606094139:070606094139:1:00990145
.........

IBM_2145:ITSO-CLS1:admin>svcinfo caterrlog -delim ,
id,type,fixed,SNMP_trap_raised,error_type,node_name,sequence_number,root_sequence_number,first_timestamp,last_timestamp,number_of_errors,error_code,copy_id
0,cluster,no,yes,6,n4,171,170,080624115947,080624115947,1,00981001,
0,cluster,no,yes,6,n4,170,170,080624115932,080624115932,1,00981001,
0,cluster,no,no,5,n1,0,0,080624105428,080624105428,1,00990101,
0,internal,no,no,5,n1,0,0,080624095359,080624095359,1,00990219,
0,internal,no,no,5,n1,0,0,080624094301,080624094301,1,00990220,
0,internal,no,no,5,n1,0,0,080624093355,080624093355,1,00990220,
11,vdisk,no,no,5,n1,0,0,080623150020,080623150020,1,00990183,
4,vdisk,no,no,5,n1,0,0,080623145958,080623145958,1,00990183,
5,vdisk,no,no,5,n1,0,0,080623145934,080623145934,1,00990183,
11,vdisk,no,no,5,n1,0,0,080623145017,080623145017,1,00990182,
6,vdisk,no,no,5,n1,0,0,080623144153,080623144153,1,00990183,
.

This command displays the most recently generated error log. Use the method that is described in 7.14.2, “Running maintenance procedures” on page 456 to upload and analyze the error log in more detail.
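Because the -delim option produces regular delimited records under a header row, the output is easy to post-process. The following Python sketch is our own illustration; it assumes the prompt line has been removed and skips the trailing rows of dots:

```python
def parse_errlog_output(text, delim=":"):
    """Parse 'svcinfo caterrlog -delim <delim>' output into a list of
    dictionaries, keyed by the fields in the header row."""
    lines = [line for line in text.strip().splitlines()
             if line and not line.startswith(".")]
    header = lines[0].split(delim)
    # zip() pairs each field with its header name; short rows simply
    # omit the trailing fields.
    return [dict(zip(header, row.split(delim))) for row in lines[1:]]
```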

To clear the error log, you can issue the svctask clearerrlog command, as shown in Example 7-207.

Example 7-207 svctask clearerrlog command

IBM_2145:ITSO-CLS1:admin>svctask clearerrlog
Do you really want to clear the log? y

Using the -force flag will stop any confirmation requests from appearing.

When executed, this command will clear all of the entries from the error log. This process will proceed even if there are unfixed errors in the log. It also clears any status events that are in the log.

This command is a destructive command for the error log. Only use this command when you have either rebuilt the cluster, or when you have fixed a major problem that has caused many entries in the error log that you do not want to fix manually.

7.14.7 License settings
To change the licensing feature settings, use the svctask chlicense command.

Before you change the licensing, you can display the licenses that you already have by issuing the svcinfo lslicense command, as shown in Example 7-208.

Example 7-208 svcinfo lslicense command

IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 20
license_virtualization 80

The current license settings for the cluster are displayed in the viewing license settings log window. These settings show whether you are licensed to use the FlashCopy, Metro Mirror, Global Mirror, or Virtualization features. They also show the storage capacity that is licensed for virtualization. Typically, the license settings log contains entries, because feature options must be set as part of the Web-based cluster creation process.


Consider, for example, that you have purchased an additional 5 TB of licensing for the Metro Mirror and Global Mirror feature. Example 7-209 on page 462 shows the command that you enter.

Example 7-209 svctask chlicense command

IBM_2145:ITSO-CLS1:admin>svctask chlicense -remote 25

To turn a feature off, add 0 TB as the capacity for the feature that you want to disable.

To verify that the changes you have made are reflected in your SVC configuration, you can issue the svcinfo lslicense command as before (see Example 7-210).

Example 7-210 svcinfo lslicense command: Verifying changes

IBM_2145:ITSO-CLS1:admin>svcinfo lslicense
used_flash 0.00
used_remote 0.00
used_virtualization 0.74
license_flash 50
license_remote 25
license_virtualization 80

7.14.8 Listing dumps
Several commands are available for you to list the dumps that were generated over a period of time. You can use the lsxxxxdumps commands, where xxxx identifies the object type, to return a list of dumps in the appropriate directory.

These object dumps are available:

• lserrlogdumps
• lsfeaturedumps
• lsiotracedumps
• lsiostatsdumps
• lssoftwaredumps
• ls2145dumps

If no node is specified, the command lists the dumps that are available on the configuration node.

Error or event dump
The dumps that are contained in the /dumps/elogs directory are dumps of the contents of the error and event log at the time that the dump was taken. You create an error or event log dump by using the svctask dumperrlog command. This command dumps the contents of the error or event log to the /dumps/elogs directory. If you do not supply a file name prefix, the system uses the default errlog_ file name prefix. The full, default file name is errlog_NNNNNN_YYMMDD_HHMMSS. In this file name, NNNNNN is the node front panel name. If the command is used with the -prefix option, the value that is entered for the -prefix is used instead of errlog.

The svcinfo lserrlogdumps command lists all of the dumps in the /dumps/elogs directory (Example 7-211).

Example 7-211 svcinfo lserrlogdumps command

IBM_2145:ITSO-CLS1:admin>svcinfo lserrlogdumps
id filename
0 errlog_104643_080617_172859
1 errlog_104643_080618_163527
2 errlog_104643_080619_164929
3 errlog_104643_080619_165117
4 errlog_104643_080624_093355
5 svcerrlog_104643_080624_094301
6 errlog_104643_080624_120807
7 errlog_104643_080624_121102
8 errlog_104643_080624_122204
9 errlog_104643_080624_160522

Featurization log dump
The dumps that are contained in the /dumps/feature directory are dumps of the featurization log. A featurization log dump is created by using the svctask dumpinternallog command. This command dumps the contents of the featurization log to the /dumps/feature directory to a file called feature.txt. Only one of these files exists, so every time that the svctask dumpinternallog command is run, this file is overwritten.

The svcinfo lsfeaturedumps command lists all of the dumps in the /dumps/feature directory (Example 7-212).

Example 7-212 svcinfo lsfeaturedumps command

IBM_2145:ITSO-CLS1:admin>svcinfo lsfeaturedumps
id feature_filename
0 feature.txt

I/O trace dump
Dumps that are contained in the /dumps/iotrace directory are dumps of I/O trace data. The type of data that is traced depends on the options that are specified by the svctask settrace command. The collection of the I/O trace data is started by using the svctask starttrace command. The I/O trace data collection is stopped when the svctask stoptrace command is used. When the trace is stopped, the data is written to the file.

The file name is prefix_NNNNNN_YYMMDD_HHMMSS, where NNNNNN is the node front panel name, and prefix is the value that is entered by the user for the -filename parameter in the svctask settrace command.

The command to list all of the dumps in the /dumps/iotrace directory is the svcinfo lsiotracedumps command (Example 7-213).

Example 7-213 svcinfo lsiotracedumps command

IBM_2145:ITSO-CLS1:admin>svcinfo lsiotracedumps
id iotrace_filename
0 tracedump_104643_080624_172208
1 iotrace_104643_080624_172451

I/O statistics dump
The dumps that are contained in the /dumps/iostats directory are the dumps of the I/O statistics for the disks on the cluster. An I/O statistics dump is created by using the svctask startstats command. As part of this command, you can specify a time interval at which you want the statistics to be written to the file (the default is 15 minutes). Every time that the time interval is encountered, the I/O statistics that are collected up to this point are written to a file in the /dumps/iostats directory.

The file names that are used for storing I/O statistics dumps are m_stats_NNNNNN_YYMMDD_HHMMSS or v_stats_NNNNNN_YYMMDD_HHMMSS, depending on whether the statistics are for MDisks or VDisks. In these file names, NNNNNN is the node front panel name.
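When post-processing these dumps, you can tell the two kinds apart from the file name prefix. The sketch below is our own illustration; the handling of the leading N that appears in the listing in Example 7-214 is an assumption that it is a per-node listing marker:

```python
def iostats_kind(filename):
    """Classify an I/O statistics dump as MDisk or VDisk statistics
    from its m_stats / v_stats file name prefix."""
    # Assumption: some listings show a leading 'N' node marker.
    base = filename.lstrip("N")
    if base.startswith("m_stats_"):
        return "mdisk"
    if base.startswith("v_stats_"):
        return "vdisk"
    return "unknown"
```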

The command to list all of the dumps that are in the /dumps/iostats directory is the svcinfo lsiostatsdumps command (Example 7-214).

Example 7-214 svcinfo lsiostatsdumps command

IBM_2145:ITSO-CLS1:admin>svcinfo lsiostatsdumps
id iostat_filename
0 Nm_stats_104603_071115_020054
1 Nn_stats_104603_071115_020054
2 Nv_stats_104603_071115_020054
3 Nv_stats_104603_071115_022057
........

Software dump
The svcinfo lssoftwaredumps command lists the contents of the /home/admin/upgrade directory. Any files in this directory are copied there at the time that you perform a software upgrade. Example 7-215 shows the command.

Example 7-215 svcinfo lssoftwaredumps

IBM_2145:ITSO-CLS1:admin>svcinfo lssoftwaredumps
id software_filename
0 IBM2145_INSTALL_4.3.0.0

Other node dumps
All of the svcinfo lsxxxxdumps commands can accept a node identifier as input (for example, append the node name to the end of any of the node dump commands). If this identifier is not specified, the list of files on the current configuration node is displayed. If the node identifier is specified, the list of files on that node is displayed.

However, files can only be copied from the current configuration node (using PuTTY Secure Copy). Therefore, you must issue the svctask cpdumps command to copy the files from a non-configuration node to the current configuration node. Subsequently, you can copy them to the management workstation using PuTTY Secure Copy.

For example, you discover a dump file and want to copy it to your management workstation for further analysis. In this case, you must first copy the file to your current configuration node.

To copy dumps from other nodes to the configuration node, use the svctask cpdumps command.

In addition to the directory, you can specify a file filter. For example, if you specified /dumps/elogs/*.txt, all of the files in the /dumps/elogs directory that end in .txt are copied.
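The same one-wildcard rule that the CLI applies to these filters can be modeled with standard pattern matching. The following Python sketch is our own illustration (fnmatch also honors ? and [] patterns, which is broader than the single-asterisk behavior documented for the SVC CLI):

```python
import fnmatch

def match_dump_filter(filenames, pattern):
    """Apply an SVC-style file filter: at most one '*' wildcard is
    allowed, as on the CLI."""
    if pattern.count("*") > 1:
        raise ValueError("the CLI accepts a maximum of one wildcard")
    return [f for f in filenames if fnmatch.fnmatch(f, pattern)]
```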


Example 7-216 shows an example of the cpdumps command.

Example 7-216 svctask cpdumps command

IBM_2145:ITSO-CLS1:admin>svctask cpdumps -prefix /dumps/configs n4

Now that you have copied the configuration dump file from Node n4 to your configuration node, you can use PuTTY Secure Copy to copy the file to your management workstation for further analysis.

To clear the dumps, you can run the svctask cleardumps command. Again, you can append the node name if you want to clear dumps off of a node other than the current configuration node (the default for the svctask cleardumps command).

The commands in Example 7-217 clear all logs or dumps from the SVC Node n1.

Example 7-217 svctask cleardumps command

IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iostats n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/iotrace n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/feature n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/config n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /dumps/elog n1
IBM_2145:ITSO-CLS1:admin>svctask cleardumps -prefix /home/admin/upgrade n1

Application abends dump
The dumps that are contained in the /dumps directory are the dumps that result from application abends (abnormal ends). These dumps are written to the /dumps directory. The default file names are dump.NNNNNN.YYMMDD.HHMMSS, where NNNNNN is the node front panel name. In addition to the dump files, trace files can be written to this directory. These trace files are named NNNNNN.trc.

The command to list all of the dumps in the /dumps directory is the svcinfo ls2145dumps command (Example 7-218).

Example 7-218 svcinfo ls2145dumps command

IBM_2145:ITSO-CLS1:admin>svcinfo ls2145dumps
id 2145_filename
0 svc.config.cron.bak_node3
1 svc.config.cron.bak_SVCNode_2
2 dump.104643.070803.015424
3 dump.104643.071010.232740
4 svc.config.backup.bak_ITSOCL1_N1

Wildcards: The following rules apply to the use of wildcards with the SAN Volume Controller CLI:

• The wildcard character is an asterisk (*).

• The command can contain a maximum of one wildcard.

• When you use a wildcard, you must surround the filter entry with double quotation marks (""), for example:

>svctask cleardumps -prefix "/dumps/elogs/*.txt"


7.14.9 Backing up the SVC cluster configuration
You can back up your cluster configuration by using the Backing Up a Cluster Configuration window or the CLI svcconfig command. In this section, we describe the overall procedure for backing up your cluster configuration and the conditions that must be satisfied to perform a successful backup.

The backup command extracts configuration data from the cluster and saves it to the svc.config.backup.xml file in the /tmp directory. This process also produces an svc.config.backup.sh file. You can study this file to see what other commands were issued to extract information.

An svc.config.backup.log file is also produced. You can study this log for the details of what was done and when it was done. This log also includes information about the other commands that were issued.

Any pre-existing svc.config.backup.xml file is archived as the svc.config.backup.bak file. The system keeps only one archive. We recommend that you immediately move the .xml file and related .key files (see the following limitations) off of the cluster for archiving. Then, erase the files from the /tmp directory using the svcconfig clear -all command. We also recommend that you change all objects that have default names to non-default names. Otherwise, a warning is produced for each object with a default name, and the object is restored with its original name with “_r” appended. The underscore (_) prefix is reserved for backup and restore command usage; do not use this prefix in any object names.

Prerequisites
You must have the following prerequisites in place:

• All nodes must be online.
• No object name can begin with an underscore.
• All objects must have non-default names, that is, names that are not assigned by the SVC.

Although we recommend that objects have non-default names at the time that the backup is taken, this prerequisite is not mandatory. Objects with default names are renamed when they are restored.
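You can screen your object names before taking a backup. The sketch below is our own illustration; the default-name patterns are assumptions inferred from the CMMVC6112W warnings shown in Example 7-219 (io_grp0, mdisk18, vdisk7, and so on):

```python
import re

# Assumed default-name patterns, inferred from CMMVC6112W warnings.
DEFAULT_NAME = re.compile(r"^(io_grp|mdisk|mdiskgrp|vdisk|host|node)\d+$")

def objects_with_default_names(names):
    """Return the object names that a backup would warn about."""
    return [n for n in names if DEFAULT_NAME.match(n)]
```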

Example 7-219 shows an example of the svcconfig backup command.

Example 7-219 svcconfig backup command

IBM_2145:ITSO-CLS1:admin>svcconfig backup
......
CMMVC6130W Inter-cluster partnership fully_configured will not be restored
...................
CMMVC6112W io_grp io_grp0 has a default name
CMMVC6112W io_grp io_grp1 has a default name
CMMVC6112W mdisk mdisk18 has a default name
CMMVC6112W mdisk mdisk19 has a default name
CMMVC6112W mdisk mdisk20 has a default name

Important: The tool backs up logical configuration data only, not client data. It does not replace a traditional data backup and restore tool; rather, it supplements such a tool with a way to back up and restore the client’s configuration. To provide a complete backup and disaster recovery solution, you must back up both user (non-configuration) data and configuration (non-user) data. After the restoration of the SVC configuration, you must fully restore user (non-configuration) data to the cluster’s disks.

466 Implementing the IBM System Storage SAN Volume Controller V5.1

................
CMMVC6136W No SSH key file svc.config.admin.admin.key
CMMVC6136W No SSH key file svc.config.admincl1.admin.key
CMMVC6136W No SSH key file svc.config.ITSOSVCUser1.admin.key
.......................
CMMVC6112W vdisk vdisk7 has a default name
...................
CMMVC6155I SVCCONFIG processing completed successfully

Example 7-220 shows the pscp command.

Example 7-220 pscp command

C:\Program Files\PuTTY>pscp -load SVC_CL1 [email protected]:/tmp/svc.config.backup.xml c:\temp\clibackup.xml
clibackup.xml | 97 kB | 97.2 kB/s | ETA: 00:00:00 | 100%

The following scenario illustrates the value of configuration backup:

1. Use the svcconfig command to create a backup file on the cluster that contains details about the current cluster configuration.

2. Store the backup configuration on a form of tertiary storage. You must copy the backup file from the cluster or it becomes lost if the cluster crashes.

3. If a sufficiently severe failure occurs, the cluster might be lost. Both the configuration data (for example, the cluster definitions of hosts, I/O Groups, MDGs, and MDisks) and the application data on the virtualized disks are lost. In this scenario, it is assumed that the application data can be restored from normal client backup procedures. However, before you can perform this restoration, you must reinstate the cluster as it was configured at the time of the failure. Therefore, you restore the same MDGs, I/O Groups, host definitions, and VDisks that existed prior to the failure. Then, you can copy the application data back onto these VDisks and resume operations.

4. Recover the hardware: hosts, SVCs, disk controller systems, disks, and SAN fabric. The hardware and SAN fabric must physically be the same as the hardware and SAN fabric that were used before the failure.

5. Re-initialize the cluster with the configuration node; the other nodes will be recovered when restoring the configuration.

6. Restore your cluster configuration using the backup configuration file that was generated prior to the failure.

7. Restore the data on your VDisks using your preferred restoration solution or with help from IBM Service.

8. Resume normal operations.

7.14.10 Restoring the SVC cluster configuration

It is extremely important that you always consult IBM Support before you restore the SVC cluster configuration from the backup. IBM Support can assist you in analyzing the root cause of why the cluster configuration was lost.

After the svcconfig restore -execute command is started, consider any prior user data on the VDisks destroyed. The user data must be recovered through your usual application data backup and restore process.

Chapter 7. SAN Volume Controller operations using the command-line interface 467

See IBM TotalStorage Open Software Family SAN Volume Controller: Command-Line Interface User’s Guide, SC26-7544, for more information about this topic.

For a detailed description of the SVC configuration backup and restore functions, see IBM TotalStorage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543.

7.14.11 Deleting configuration backup

In this section, we describe in detail the tasks that you can perform to delete the configuration backup that is stored in the configuration file directory on the cluster. Never clear this configuration without having a backup of your configuration stored in a separate, secure place.

When you use the clear command, you erase the files in the /tmp directory. This command does not clear the running configuration or prevent the cluster from working; it only erases the configuration backup files that are stored in the /tmp directory (Example 7-221).

Example 7-221 svcconfig clear command

IBM_2145:ITSO-CLS1:admin>svcconfig clear -all.CMMVC6155I SVCCONFIG processing completed successfully

7.15 SAN troubleshooting and data collection

When we encounter a SAN issue, the SVC is often extremely helpful in troubleshooting the SAN, because the SVC is at the center of the environment through which the communication travels.

Chapter 14 in SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, contains a detailed description of how to troubleshoot and collect data from the SVC:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

7.16 T3 recovery process

A procedure called “T3 recovery” has been tested and used in select cases where the cluster has been completely destroyed. (One example is simultaneously pulling power cords from all nodes to their uninterruptible power supply units; in this case, all nodes boot up to node error 578 when the power is restored.)

This procedure, in certain circumstances, is able to recover most user data. However, this procedure is not to be used by the client or IBM service representative without direct involvement from IBM level 3 technical support. This procedure is not published, but we refer to it here only to indicate that the loss of a cluster can be recoverable without total data loss, but it requires a restoration of application data from the backup. It is an extremely sensitive procedure, which is only to be used as a last resort, and cannot recover any data that was unstaged from cache at the time of the total cluster failure.

Chapter 8. SAN Volume Controller operations using the GUI

In this chapter, we show IBM System Storage SAN Volume Controller (SVC) operational management by using the SVC GUI. We have divided this chapter into normal operations and advanced operations.

We describe the basic configuration procedures that are required to get your SVC environment up and running as quickly as possible using the Master Console and its associated GUI.

Chapter 2, “IBM System Storage SAN Volume Controller” on page 7 describes the features in greater depth. In this chapter, we focus on the operational aspects.

© Copyright IBM Corp. 2010. All rights reserved. 469

8.1 SVC normal operations using the GUI

In this topic, we discuss several of the operations that we have defined as normal, day-to-day activities.

It is possible for many users to be logged in to the GUI at any given time. However, no locking mechanism exists, so if two users change the same object at the same time, the last action entered from the GUI is the one that takes effect.

8.1.1 Organizing on window content

In the following sections, there are several windows within the SVC GUI where you can perform filtering (to minimize the amount of data that is shown on the window) and sorting (to organize the content on the window). This section provides a brief overview of these functions.

The SVC Welcome window (Figure 8-1) is an important window and will be referred to as the Welcome window throughout this chapter. We expect users to be able to locate this window without us having to show it each time.

Figure 8-1 The Welcome window

From the Welcome window, select Work with Virtual Disks, and select Virtual Disks.

Table filtering

When you are in the Viewing Virtual Disks list, you can use the table filter option to filter the visible list, which is useful if the list of entries is too large to work with. You can change the filtering here as many times as you like, to further reduce the lists or for separate views. Perform these steps to use table filtering:

1. Use the Show Filter Row icon, as shown in Figure 8-2 on page 471, or select Show Filter Row in the list, and click Go.

Important: Data entries made through the GUI are case sensitive.

Figure 8-2 Show Filter Row icon

2. This function enables you to filter based on the column names, as shown in Figure 8-3. The Filter under each column name shows that no filter is in effect for that column.

Figure 8-3 Show Filter Row

3. If you want to filter on a column, click the word Filter, which opens up a filter window, as shown in Figure 8-4 on page 472.

Figure 8-4 Filter option on Name

A list with virtual disks (VDisks) is displayed that contains names that include 01 somewhere in the name, as shown in Figure 8-5. (Notice the filter line under each column heading, showing that our filter is in place.) If you want, you can perform additional filtering on the other columns to further narrow your view.

Figure 8-5 Filtered on Name containing 01 in the name

4. The option to reset the filters is shown in Figure 8-6 on page 473. Use the Clear All Filters icon or use the Clear All Filters option in the list, and click Go.

Figure 8-6 Clear All Filter options
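The name filter applied in Figure 8-5 is a simple substring match: a row is kept if the filter text appears anywhere in the column value. Conceptually (an illustrative Python sketch with sample names, not SVC code):

```python
# Sample VDisk names standing in for the rows of the Viewing Virtual Disks list.
vdisk_names = ["vdisk7", "SAP_PROD_01", "W2K3_SRV_01", "TEST_02"]

# A column filter like the one in Figure 8-4 keeps rows whose Name
# contains the filter text ("01") anywhere in the string.
filtered = [name for name in vdisk_names if "01" in name]
print(filtered)  # ['SAP_PROD_01', 'W2K3_SRV_01']
```

Stacking filters on additional columns simply applies further conditions to the already-filtered rows, which is why each added filter can only narrow the view.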

Sorting

Regardless of whether you use the pre-filter or additional filter options, when you are in the Viewing Virtual Disks window, you can sort the displayed data by selecting Edit Sort from the list and clicking Go, or you can click the small Edit Sort icon highlighted by the mouse pointer in Figure 8-7.

Figure 8-7 Selecting Edit Sort icon

As shown in Figure 8-8 on page 474, you can sort based on up to three criteria, including Name, State, I/O Group, Managed Disk Group (MDisk Group), Capacity (MB), Space-Efficient, Type, Hosts, FlashCopy Pair, FlashCopy Map Count, Relationship Name, UID, and Copies.

Sort criteria: The actual sort criteria differ based on the information that you are sorting.

Figure 8-8 Sorting criteria

When you finish making your choices, click OK to regenerate the display based on your sorting criteria. Look at the icons next to each column name to see the sort criteria currently in use, as shown in Figure 8-9.

If you want to clear the sort, simply select Clear All Sorts from the list and click Go, or click the Clear All Sorts icon that is highlighted by the mouse pointer in Figure 8-9.

Figure 8-9 Selecting to clear all sorts

8.1.2 Documentation

If you need to access the online documentation, in the upper right corner of the window, click the information icon. This action opens the Help Assistant pane on the right side of the window, as shown in Figure 8-10.

Figure 8-10 Online help using the i icon

8.1.3 Help

If you need to access the online help, in the upper right corner of the window, click the question mark icon. This action opens a new window called the information center. Here, you can search on any item for which you want help (see Figure 8-11 on page 476).

Figure 8-11 Online help using the ? icon

8.1.4 General housekeeping

If, at any time, the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.

In addition, each time that you open a configuration or administration window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the banner graphic. As a general housekeeping task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.

8.1.5 Viewing progress

With this view, you can see the status of activities, such as VDisk Migration, MDisk Removal (Figure 8-12 on page 477), Image Mode Migration, Extend Migration, FlashCopy, Metro Mirror and Global Mirror, VDisk Formatting, Space Efficient copy repair, VDisk copy verification, and VDisk copy synchronization.

You can see detailed information about the item by clicking the underlined (progress) number in the Progress column.

Figure 8-12 Showing possible processes to view where the MDisk is being removed from the MDG

8.2 Working with managed disks

This section describes the various configuration and administration tasks that you can perform on the managed disks (MDisks) within the SVC environment.

This section details the tasks that you can perform at a disk controller level.

8.2.1 Viewing disk controller details

Perform the following steps to view information about a back-end disk controller in use by the SVC environment:

1. Select Work with Managed Disks, and then, select Disk Controller Systems.

2. The Viewing Disk Controller Systems window (Figure 8-13) opens. For more detailed information about a specific controller, click its ID (highlighted by the mouse cursor in Figure 8-13).

Figure 8-13 Disk controller systems

3. When you click the controller Name (Figure 8-13), the Viewing General Details for Name window (Figure 8-14 on page 478) opens for the controller (where Name is the controller that you selected). Review the details, and click Close to return to the previous window.

Figure 8-14 Viewing general details about a disk controller

8.2.2 Renaming a disk controller

Perform the following steps to rename a disk controller that is used by the SVC cluster:

1. Select the controller that you want to rename. Then, select Rename a Disk Controller System from the list, and click Go.

2. In the Renaming Disk Controller System controllername window (where controllername is the controller that you selected in the previous step), type the new name that you want to assign to the controller, and click OK. See Figure 8-15.

Figure 8-15 Renaming a controller

3. You return to the Disk Controller Systems window. You now see the new name of your controller displayed.

Controller name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the word “controller” (because this prefix is reserved for SVC assignment only).
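The naming rule in this note (and the similar rules for MDisks, MDGs, and hosts later in this chapter) can be expressed as a simple check. The sketch below encodes the rule exactly as stated here — allowed characters, 1 to 15 characters, no leading digit or dash, and a reserved prefix. The function and its reserved_prefix parameter are illustrative, not an SVC utility:

```python
import re

def is_valid_svc_name(name: str, reserved_prefix: str = "controller") -> bool:
    """Check an object name against the rule stated in this chapter:
    letters A-Z and a-z, digits 0-9, dash, and underscore; 1 to 15
    characters; must not start with a digit, a dash, or the reserved
    prefix (here "controller"; substitute "mdisk", "host", etc.)."""
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,15}", name):
        return False
    if name[0].isdigit() or name[0] == "-":
        return False
    if name.lower().startswith(reserved_prefix.lower()):
        return False
    return True

print(is_valid_svc_name("DS4500_site_A"))  # True
print(is_valid_svc_name("controller3"))    # False: reserved prefix
print(is_valid_svc_name("3PAR-array"))     # False: starts with a digit
```

Validating names this way before typing them into the GUI avoids a round trip through the error message when a rename is rejected.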

8.2.3 Discovery status

You can view the status of a managed disk (MDisk) discovery from the Viewing Discovery Status window. This status tells you whether an MDisk discovery is ongoing; a running MDisk discovery is displayed with a status of Active.

Perform the following steps to view the status of an MDisk discovery:

1. Select Work with Managed Disks, and then, select Discovery Status. The Viewing Discovery Status window is displayed, as shown in Figure 8-16.

Figure 8-16 Discovery status view

2. Click Close to close this window.

8.2.4 Managed disks

This section details the tasks that can be performed at an MDisk level. You perform each of the following tasks from the Viewing Managed Disks window (Figure 8-17). To access this window, from the SVC Welcome window, click Work with Managed Disks, and then, click Managed Disks.

Figure 8-17 Viewing Managed Disks window

8.2.5 MDisk information

To retrieve information about a specific MDisk, perform the following steps:

1. In the Viewing Managed Disks window (Figure 8-18 on page 480), click the underlined name of any MDisk in the list to reveal more detailed information about the specified MDisk.

Figure 8-18 Managed disk details

2. Review the details, and then, click Close to return to the previous window.

8.2.6 Renaming an MDisk

Perform the following steps to rename an MDisk that is controlled by the SVC cluster:

1. Select the MDisk that you want to rename in the window that is shown in Figure 8-17 on page 479. Select Rename an MDisk from the list, and click Go.

2. On the Renaming Managed Disk MDiskname window (where MDiskname is the MDisk that you selected in the previous step), type the new name that you want to assign to the MDisk, and click OK. See Figure 8-19 on page 481.

Tip: If, at any time, the content in the right side of the frame is abbreviated, you can minimize the My Work column by clicking the arrow to the right of the My Work heading at the top right of the column (highlighted with the mouse pointer in Figure 8-17 on page 479).

After you minimize the column, you see an arrow in the far left position in the same location where the My Work column formerly appeared.

Figure 8-19 Renaming an MDisk

8.2.7 Discovering MDisks

Perform the following steps to discover newly assigned MDisks:

1. Select Discover MDisks from the drop-down list that is shown in Figure 8-17 on page 479, and click Go.

2. Any newly assigned MDisks are displayed in the window that is shown in Figure 8-20.

Figure 8-20 Newly discovered managed disks

8.2.8 Including an MDisk

If a significant number of errors occur on an MDisk, the SVC automatically excludes it. These errors can result from a hardware problem, a storage area network (SAN) zoning problem, or poorly planned maintenance. If it is a hardware fault, you will receive Simple Network Management Protocol (SNMP) alerts about the state of the hardware (before the disk was excluded) and the preventive maintenance that has been undertaken. If not, the hosts with VDisks that used the excluded MDisk now have I/O errors.

MDisk name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash (-), and the underscore (_). The name can be between one and 15 characters in length. However, the name cannot start with a number, the dash, or the word “MDisk” (because this prefix is reserved for SVC assignment only).

After you take the necessary corrective action to repair the MDisk (for example, replace the failed disk and repair the SAN zones), you can tell the SVC to include the MDisk again.

8.2.9 Showing a VDisk using a certain MDisk

To display information about VDisks that reside on an MDisk, perform the following steps:

1. As shown in Figure 8-21, select the MDisk about which you want to obtain VDisk information. Select Show VDisks using this MDisk from the list, and click Go.

Figure 8-21 Show VDisk using an MDisk

2. You now see a subset (specific to the MDisk that you chose in the previous step) of the Viewing VDisks using MDisk window in Figure 8-22. We cover the Viewing VDisks window in more detail in 8.4, “Working with hosts” on page 493.

Figure 8-22 VDisk list from a selected MDisk

8.3 Working with Managed Disk Groups

In this section, we describe the tasks that can be performed with the Managed Disk Group (MDG). From the Welcome window that is shown in Figure 8-1 on page 470, select Work with Managed Disks.

8.3.1 Viewing MDisk group information

We perform each of the following tasks from the Viewing Managed Disk Groups window (Figure 8-23). To access this window, from the SVC Welcome window, click Work with Managed Disks, and then, click Managed Disk Groups.

Figure 8-23 Viewing Managed Disk Groups window

To retrieve information about a specific MDG, perform the following steps:

1. In the Viewing Managed Disk Groups window (Figure 8-23), click the underlined name of any MDG in the list.

2. In the View Managed Disk Group Details for MDGname window (where MDGname is the MDG that you selected in the previous step), as shown in Figure 8-24, you see more detailed information about the specified MDG. Here, you see information pertaining to the number of MDisks and VDisks, as well as the capacity (both total and free space) within the MDG. When you finish viewing the details, click Close to return to the previous window.

Figure 8-24 MDG details

8.3.2 Creating MDGs

Perform the following steps to create an MDG:

1. From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed Disks, and then, select Managed Disk Groups.

2. The Viewing Managed Disk Groups window opens (see Figure 8-25). Select Create an MDisk Group from the list, and click Go.

Figure 8-25 Selecting the option to create an MDisk group

3. In the Create a Managed Disk Group window, the wizard provides an overview of the steps that will be performed. Click Next.

4. While in the Name the group and select the managed disks window (Figure 8-26 on page 485), follow these steps:

a. Type a name for the MDG.

b. From the MDisk Candidates box, as shown in Figure 8-26 on page 485, one at a time, select the MDisks that you want to put into the MDG. Click Add to move them to the Selected MDisks box. More than one page of disks might exist; you can navigate between the windows (the MDisks that you have selected will be preserved).

c. You can specify a threshold to send a warning to the error log when the capacity is first exceeded. The threshold can either be a percentage or a specific amount.

d. Click Next.

MDG name: If you do not provide a name, the SVC automatically generates the name MDiskgrpx, where x is the ID sequence number that is assigned by the SVC internally.

If you want to provide a name (as we have done), you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore (_). The name can be between one and 15 characters in length and is case sensitive, but it cannot start with a number or the word “MDiskgrp” (because this prefix is reserved for SVC assignment only).

Figure 8-26 Name the group and select the managed disks window

5. From the list that is shown in Figure 8-27, select the extent size to use; the typical value is 512 (MB). When you select an extent size, the resulting total cluster size is shown in TB. Click Next.

Figure 8-27 Select Extent Size window

6. In the Verify Managed Disk Group window (Figure 8-28 on page 486), verify that the information that you have specified is correct. Click Finish.

Figure 8-28 Verify Managed Disk Group wizard

7. Return to the Viewing Managed Disk Groups window (Figure 8-29) where the new MDG is displayed.

Figure 8-29 A new MDG was added successfully

You have now completed the tasks that are required to create an MDG.
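The extent size chosen in step 5 matters because a cluster can address only a fixed number of extents, so the extent size caps the total capacity the cluster can manage. The figure of 4 194 304 (2^22) extents per cluster used below is an assumption commonly cited for the SVC; confirm it against the Configuration Guide for your release. A quick calculation shows why 512 MB is the typical choice:

```python
# Assumed SVC limit: 2**22 extents addressable per cluster.
MAX_EXTENTS_PER_CLUSTER = 4 * 1024 * 1024

def max_cluster_capacity_tb(extent_size_mb: int) -> float:
    """Maximum addressable cluster capacity in TB for a given extent size,
    under the assumed per-cluster extent limit above."""
    return MAX_EXTENTS_PER_CLUSTER * extent_size_mb / (1024 * 1024)

for size_mb in (16, 256, 512):
    print(f"{size_mb:>4} MB extents -> {max_cluster_capacity_tb(size_mb):.0f} TB max")
```

Under these assumptions, 16 MB extents cap the cluster at 64 TB, while 512 MB extents allow roughly 2 PB; larger extents trade finer-grained allocation for a higher capacity ceiling.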

8.3.3 Renaming a managed disk group

To rename an MDG, perform the following steps:

1. In the Viewing Managed Disk Groups window (Figure 8-30), select the MDG that you want to rename. Select Modify an MDisk Group from the list, and click Go.

Figure 8-30 Renaming an MDG

2. From the Modifying Managed Disk Group MDisk Group Name window (where MDisk Group Name is the MDG that you selected in the previous step), type the new name that you want to assign, and click OK (see Figure 8-31).

You can also set or change the usage threshold from this window.

Figure 8-31 Renaming an MDG

It is considered a best practice to enable the capacity warning for your MDGs. Decide on the threshold to be used during the planning phase of the SVC installation, although this setting can always be changed later without interruption.

8.3.4 Deleting a managed disk group

To delete an MDG, perform the following steps:

1. Select the MDG that you want to delete. Select Delete an MDisk Group from the list, and click Go.

2. In the Deleting a Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), click OK to confirm that you want to delete the MDG (see Figure 8-32).

Figure 8-32 Deleting an MDG

3. If there are MDisks and VDisks within the MDG that you are deleting, you are required to click Forced delete for the MDG (Figure 8-33 on page 488).

MDG name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, a dash (-), and the underscore (_). The new name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word “mdiskgrp” (because this prefix is reserved for SVC assignment only).

Figure 8-33 Confirming forced deletion of an MDG

8.3.5 Adding MDisks

If you created an empty MDG, or if you assign additional MDisks to your SVC environment later, you can add MDisks to existing MDGs by performing the following steps:

1. In Figure 8-34, select the MDG to which you want to add MDisks. Select Add MDisks from the list, and click Go.

Figure 8-34 Adding an MDisk to an existing MDG

2. From the Adding Managed Disks to Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), select the desired MDisk or MDisks from the MDisk Candidates list (Figure 8-35 on page 489). After you select all of the desired MDisks, click OK.

Important: If you delete an MDG with the Forced Delete option, and VDisks were associated with that MDG, you will lose the data on your VDisks, because they are deleted before the MDG. If you want to save your data, migrate or mirror the VDisks to another MDG before you delete the MDG previously assigned to the VDisks.

Note: You can only add unmanaged MDisks to an MDG.

Figure 8-35 Adding MDisks to an MDG

8.3.6 Removing MDisks

To remove an MDisk from an MDG, perform the following steps:

1. In Figure 8-36, select the MDG from which you want to remove an MDisk. Select Remove MDisks from the list, and click Go.

Figure 8-36 Viewing MDGs

2. From the Deleting Managed Disks from Managed Disk Group MDGname window (where MDGname is the MDG that you selected in the previous step), select the desired MDisk or MDisks from the list (Figure 8-37 on page 490). After you select all of the desired MDisks, click OK.

Figure 8-37 Removing MDisks from an MDG

3. If VDisks are using the MDisks that you are removing from the MDG, you are required to click Forced Delete to confirm the removal of the MDisk, as shown in Figure 8-38.

4. An error message is displayed if there is insufficient space to migrate the VDisk data to other extents on other MDisks in that MDG.

Figure 8-38 Confirming forced deletion of MDisks from an MDG

8.3.7 Displaying MDisks

If you want to view the MDisks that are configured on your system, perform the following steps to display MDisks.

From the SVC Welcome window (Figure 8-1 on page 470), select Work with Managed Disks, and then, select Managed Disks. In the Viewing Managed Disks window (Figure 8-39 on page 491), if your MDisks are not displayed, rescan the Fibre Channel (FC) network. Select Discover MDisks from the list, and click Go.

Figure 8-39 Discover MDisks

8.3.8 Showing MDisks in this group

To show a list of MDisks within an MDG, perform the following steps:

1. Select the MDG from which you want to retrieve MDisk information (Figure 8-40). Select Show MDisks in This Group from the list, and click Go.

Figure 8-40 Viewing Managed Disk Groups

2. You now see a subset (specific to the MDG that you chose in the previous step) of the Viewing Managed Disks window (Figure 8-41 on page 492) that was shown in 8.2.4, “Managed disks” on page 479.

Troubleshooting: If your MDisks are still not visible, check that the logical unit numbers (LUNs) from your subsystem are properly assigned to the SVC (for example, using storage partitioning with a DS4000) and that appropriate zoning is in place (for example, the SVC can see the disk subsystem).

Figure 8-41 Viewing MDisks in an MDG

8.3.9 Showing the VDisks that are associated with an MDisk group

To show a list of the VDisks that are associated with MDisks within an MDG, perform the following steps:

1. In Figure 8-42, select the MDG from which you want to retrieve VDisk information. Select Show VDisks using this group from the list, and click Go.

Figure 8-42 Viewing Managed Disk Groups

2. You see a subset (specific to the MDG that you chose in the previous step) of the Viewing Virtual Disks window in Figure 8-43 on page 493. We describe the Viewing Virtual Disks window in more detail in “VDisk information” on page 505.

Note: Remember, you can collapse the column entitled My Work at any time by clicking the arrow to the right of the My Work column heading.

Figure 8-43 VDisks belonging to selected MDG

You have now completed the required tasks to manage the disk controller systems, MDisks, and MDGs within the SVC environment.

8.4 Working with hosts

In this section, we describe the various configuration and administration tasks that you can perform on the host that is connected to your SVC.

For more details about connecting hosts to an SVC in a SAN environment, see IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05.

Starting with SVC 5.1, iSCSI is introduced as an additional method for connecting your host to the SVC. With this option, the host can now choose between FC or iSCSI as the connection method. After the connection type has been selected, all further work with the host is identical for the FC-attached host and the iSCSI-attached host.

To access the Viewing Hosts window from the SVC Welcome window on Figure 8-1 on page 470, click Work with Hosts, and then, click Hosts. The Viewing Hosts window opens, as shown in Figure 8-44. You perform each task that is shown in the following sections from the Viewing Hosts window.

Figure 8-44 Viewing Hosts window

8.4.1 Host information

To retrieve information about a specific host, perform the following steps:

1. In the Viewing Hosts window (see Figure 8-44 on page 493), click the underlined name of any host in the displayed list.

2. Next, you can obtain details for the host that you requested:

a. In the Viewing General Details window (Figure 8-45), you can see more detailed information about the specified host.

Figure 8-45 Host details

b. You can click Port Details (Figure 8-46) to see the attachment information, such as the worldwide port names (WWPNs) that are defined for this host or the iSCSI qualified name (IQN) that is defined for this host.

Figure 8-46 Host port details

c. You can click Mapped I/O Groups (Figure 8-47 on page 495) to see which I/O Groups this host can access.

Figure 8-47 Host mapped I/O Groups

d. A new feature in SVC 5.1 is the capability to create hosts that use either FC connections or iSCSI connections. If we select iSCSI for our host in this example, we do not see any iSCSI parameters (as shown in Figure 8-48), because this host is already configured with an FC port, as shown in Figure 8-46 on page 494.

Figure 8-48 iSCSI parameters

When you are finished viewing the details, click Close to return to the previous window.

8.4.2 Creating a host

Because we can choose between two connection methods for our host, FC or iSCSI, we show both methods.

8.4.3 Fibre Channel-attached hosts

To create a new host that uses the FC connection type, perform the following steps:

1. As shown in Figure 8-49 on page 496, select Create a Host from the list, and click Go.

Figure 8-49 Create a host

2. In the Creating Hosts window (Figure 8-50 on page 497), type a name for your host (Host Name).

3. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts, except Hewlett-Packard UNIX (HP-UX) or Sun hosts: select HP_UX to support more than eight LUNs for HP-UX machines, or TPGS for Sun hosts that use MPxIO.

4. The connection type is either Fibre Channel or iSCSI. If you select Fibre Channel, you are asked for the port mask and the WWPN of the server that you are creating. If you select iSCSI, you are asked for the iSCSI initiator, which is commonly called the IQN, and the Challenge Handshake Authentication Protocol (CHAP) authentication secret to ensure authentication of the target host and volume access.

5. You can use a port mask to control the node target ports that a host can access. The port mask applies to the logins from the host initiator port that are associated with the host object.

As shown in Figure 8-50 on page 497, our port mask is 1111; the HBA port can access all node ports. If, for example, a port mask is set to 0011, only port 1 and port 2 are enabled for this host access.

Host name: If you do not provide a name, the SVC automatically generates the name hostx (where x is the ID sequence number that is assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The host name can be between one and 15 characters in length. However, the name cannot start with a number or the word “host” (because this prefix is reserved for SVC assignment only). Although using an underscore might work in certain circumstances, it violates the Request for Comments (RFC) 2396 definition of Uniform Resource Identifiers (URIs) and can cause problems. Therefore, we recommend that you do not use the underscore in host names.
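The naming rules in this note can be expressed as a small validation sketch. This is our own illustration in Python, not part of the SVC (the function name `valid_host_name` is hypothetical; the SVC performs this check itself):

```python
import re

def valid_host_name(name: str) -> bool:
    """Check a proposed host name against the documented rules:
    letters A to Z and a to z, digits 0 to 9, and underscore;
    one to 15 characters; must not start with a digit or with
    "host" (that prefix is reserved for SVC-assigned names)."""
    if not 1 <= len(name) <= 15:
        return False
    # First character must not be a digit; underscore is legal
    # but discouraged (see the RFC 2396 caveat above).
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        return False
    return not name.startswith("host")

print(valid_host_name("Freyja"))   # True
print(valid_host_name("host5"))    # False: reserved prefix
```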

Note: For each login between a host bus adapter (HBA) port and a node port, the node examines the port mask that is associated with the host object for which the HBA is a member and determines if access is allowed or denied. If access is denied, the node responds to SCSI commands as though the HBA port is unknown.

The port mask is four binary bits. Valid mask values range from 0000 (no ports enabled) to 1111 (all ports enabled). The rightmost bit in the mask corresponds to the lowest numbered SVC port (1, not 4) on a node.
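The bit numbering that this note describes can be sketched in Python. This is a simplified illustration of the rule, not SVC code (`port_enabled` is our own name):

```python
def port_enabled(port_mask: str, port_number: int) -> bool:
    """Return True if the given SVC node port (1 to 4) is enabled
    by a four-bit port mask. The rightmost bit corresponds to
    port 1 and the leftmost bit to port 4."""
    if len(port_mask) != 4 or any(c not in "01" for c in port_mask):
        raise ValueError("port mask must be four binary digits")
    if not 1 <= port_number <= 4:
        raise ValueError("SVC node ports are numbered 1 to 4")
    # Index from the right: port 1 -> mask[-1], ..., port 4 -> mask[-4]
    return port_mask[-port_number] == "1"

# The 0011 example above enables only ports 1 and 2:
print([p for p in range(1, 5) if port_enabled("0011", p)])  # [1, 2]
```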

6. Select and add the WWPNs that correspond to your HBA or HBAs. Click OK.

In certain cases, your WWPNs might not be displayed, although you are sure that your adapter is functioning (for example, you see the WWPN in the switch name server) and your zones are correctly set up. In this case, you can manually type the WWPN of your HBA or HBAs into the Additional Ports field (type the WWPNs one per line) at the bottom of the window and select Do not validate WWPN before you click OK.

Figure 8-50 Creating a new FC-connected host

This action brings you back to the Viewing Hosts window (Figure 8-51) where you can see the newly added host.

Figure 8-51 Create host results

8.4.4 iSCSI-attached hosts

Now, we show you the steps to configure a host that is connected by using iSCSI.

Before starting to use iSCSI, we must configure our cluster to use the iSCSI option.

To create an iSCSI-attached host, from the Welcome window, select Work with Hosts, and then select Hosts. From the drop-down list, select Create a Host, as shown in Figure 8-52.

Figure 8-52 Creating an iSCSI host

In the Creating Hosts window (Figure 8-53 on page 499), type a name for your host (Host Name). Follow these steps:

1. Select the mode (Type) for the host. The default type is Generic. Use Generic for all hosts, except HP-UX or Sun hosts: select HP_UX to support more than eight LUNs for HP-UX machines, or TPGS for Sun hosts that use MPxIO.

2. The connection type is iSCSI.

3. The iSCSI initiator or IQN is iqn.1991-05.com.microsoft:freyja. This IQN is obtained from the server and generally has the same purpose as the WWPN.

4. The CHAP secret is used for authentication; it prevents other iSCSI hosts from using the same connection. You can set the CHAP secret for the whole cluster under the cluster properties or for each host definition. The CHAP secret must be identical on the server and on the cluster or host definition. You can create an iSCSI host definition without using a CHAP secret.

In Figure 8-53 on page 499, we set the parameters for our host called Freyja.
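The IQN in step 3 follows the iqn. naming format from the iSCSI standard (RFC 3720): `iqn.`, a year and month, a reversed domain name, and an optional colon-separated identifier. A rough structural check, as our own illustrative sketch (the regular expression is deliberately loose and is an assumption, not the SVC's validation):

```python
import re

# Loose sketch of the RFC 3720 iqn. format:
# iqn.<yyyy-mm>.<reversed-domain>[:<identifier>]
IQN_PATTERN = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9][a-z0-9.-]*(:.+)?$")

def looks_like_iqn(name: str) -> bool:
    """Rough structural check of an iSCSI qualified name (IQN)."""
    return IQN_PATTERN.match(name) is not None

print(looks_like_iqn("iqn.1991-05.com.microsoft:freyja"))  # True
```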

Figure 8-53 iSCSI parameters

The iSCSI host is now created.

8.4.5 Modifying a host

To modify a host, perform the following steps:

1. Select the host that you want to rename (Figure 8-54). Select Modify a Host from the list, and click Go.

Figure 8-54 Modifying a host

2. From the Modifying Host window (Figure 8-55 on page 500), type the new name that you want to assign or change the Type parameter, and click OK.

Name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length. However, it cannot start with a number or the word “host” (because this prefix is reserved for SVC assignment only). While using an underscore might work in certain circumstances, it violates the RFC 2396 definition of Uniform Resource Identifiers (URIs) and thus can cause problems. So, we recommend that you do not use the underscore in host names.

Figure 8-55 Modifying a host (choosing a new name)

8.4.6 Deleting a host

To delete a host, perform the following steps:

1. Select the host that you want to delete (Figure 8-56). Select Delete a Host from the list, and click Go.

Figure 8-56 Deleting a host

2. In the Deleting Host host name window (where host name is the host that you selected in the previous step), click OK if you are sure that you want to delete the host. See Figure 8-57.

Figure 8-57 Deleting a host

3. If you still have VDisks associated with the host, you will see a window (Figure 8-58) requesting confirmation for the forced deletion of the host. Click OK and all of the mappings between this host and its VDisks are deleted before the host is deleted.

Figure 8-58 Forcing a deletion

8.4.7 Adding ports

If you add an HBA or a network interface controller (NIC) to a server that is already defined within the SVC, you can simply add additional ports to your host definition by performing the following steps:

1. Select the host to which you want to add ports, as shown in Figure 8-59. Select Add Ports from the list, and click Go.

Figure 8-59 Adding ports to a host

2. From the Adding ports window, you can select whether to add an FC port (WWPN) or an iSCSI port (IQN initiator) for the connection type. Select either the desired WWPN from the Available Ports list and click Add, or enter the new IQN in the iSCSI window. After adding the WWPN or IQN, click OK. See Figure 8-60 on page 502.

If your WWPNs are not in the list of the Available Ports and you are sure that your adapter is functioning (for example, you see WWPN in the switch name server) and your zones are correctly set up, you can manually type the WWPN of your HBAs into the Add Additional Ports field at the bottom of the window before you click OK.

Note: A host definition can have either FC ports or iSCSI ports defined, but not both.

Figure 8-60 Adding WWPN ports to a host

Figure 8-61 shows where an IQN is added to our host called Thor.

Figure 8-61 Adding IQN port to a host

8.4.8 Deleting ports

To delete a port from a host, perform the following steps:

1. Select the host from which you want to delete a port (Figure 8-62). Select Delete Ports from the list, and click Go.

Figure 8-62 Delete ports from a host

2. On the Deleting Ports From host name window (where host name is the host that you selected in the previous step), start by selecting the connection type of the port that you want to delete. If you select Fibre Channel, select the ports that you want to delete from the Available Ports list, and click Add. When you have added all of the ports that you want to delete to the column on the right, click OK. If you select iSCSI, select the ports from the available iSCSI initiators, click Add, and then click OK. Figure 8-63 shows selecting a WWPN port to delete. Figure 8-64 shows selecting an iSCSI initiator to delete.

Figure 8-63 Deleting WWPN port from a host

Figure 8-64 Deleting iSCSI initiator from a host

3. If you have VDisks that are associated with the host, you receive a warning about deleting a host port. You need to confirm your action when prompted, as shown in Figure 8-65 on page 504. A similar warning message appears if you delete an iSCSI port.

Figure 8-65 Port deletion confirmation

8.5 Working with VDisks

In this section, we describe the tasks that you can perform at a VDisk level.

8.5.1 Using the Viewing VDisks using MDisk window

You perform each of the following tasks from the Viewing VDisks using MDisk window (Figure 8-66). To access this window, from the SVC Welcome window, click Work with Virtual Disks, and then, click Virtual Disks. The list contains all of the actions that you can perform in the Viewing VDisks window.

Figure 8-66 Viewing VDisks

8.5.2 VDisk information

To retrieve information about a specific VDisk, perform the following steps:

1. In the Viewing Virtual Disks window, click the underlined name of the desired VDisk in the list.

2. The next window (Figure 8-67) that opens shows detailed information. Review the information. When you are finished, click Close to return to the Viewing VDisks window.

Figure 8-67 VDisk details

8.5.3 Creating a VDisk

To create a new VDisk, perform the following steps:

1. Select Create a VDisk from the list (Figure 8-66 on page 504), and click Go.

2. The Create Virtual Disks wizard launches. Click Next.

3. The Choose an I/O Group and a Preferred Node window opens. Choose an I/O Group, and then, select a preferred node (see Figure 8-68 on page 506). In our case, we let the system choose. Click Next.

Figure 8-68 Creating a VDisk: Select Groups

4. The Set Attributes window opens (Figure 8-69):

a. Choose the type of VDisk that you want to create: striped or sequential.

b. Select the cache mode: Read/Write or None.

c. If you want, enter a unit device identifier.

d. Enter the number of VDisks that you want to create.

e. You can select the Space-efficient or Mirrored Disk check box, which will expand the respective sections with extra options.

f. Optionally, format the new VDisk by selecting the Format VDisk before use check box (write zeros to its MDisk extents).

g. Click Next.

Figure 8-69 Creating a VDisk: Set Attributes

5. Select the MDG from which you want the VDisk to be a member:

a. If you selected Striped, you will see the window that is shown in Figure 8-70 on page 507. You must select the MDisk group, and then, the Managed Disk Candidates window will appear. You can optionally add MDisks to be striped.

Figure 8-70 Selecting an MDG

b. If you selected Sequential mode, you see the window that is shown in Figure 8-71. You must select the MDisk group, and then, a list of managed disks appears. You must choose at least one MDisk as a managed disk.

Figure 8-71 Creating a VDisk wizard: Select attributes for sequential mode VDisks

c. Enter the size of the VDisk that you want to create and select the capacity measurement (MB or GB) from the list.

d. Click Next.

6. You can enter the VDisk name if you want to create a single VDisk, or you can enter the naming prefix if you want to create multiple VDisks. Click Next.

Capacity: An entry of 1 GB uses 1,024 MB.

VDisk naming: When you create more than one VDisk, the wizard does not ask you for a name for each VDisk to be created. Instead, the name that you use here will be a prefix and have a number, starting at zero, appended to it as each VDisk is created.
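The prefix numbering that this note describes can be illustrated as follows (our own sketch of the wizard's behavior; the function name and the `AIX_data` prefix are hypothetical examples):

```python
def generated_vdisk_names(prefix: str, count: int) -> list[str]:
    """Mimic the wizard's naming behavior: the supplied name acts
    as a prefix, and a number starting at zero is appended as
    each VDisk is created."""
    return [f"{prefix}{i}" for i in range(count)]

print(generated_vdisk_names("AIX_data", 3))
# ['AIX_data0', 'AIX_data1', 'AIX_data2']
```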

Figure 8-72 Creating a VDisk wizard: Name the VDisks

7. In the Verify Attributes window (see Figure 8-73 for striped mode and Figure 8-74 on page 509 for sequential mode), check whether you are satisfied with the information that is shown, and then, click Finish to complete the task. Otherwise, click Back to return to make any corrections.

Figure 8-73 Creating a VDisk wizard: Verify the VDisk striped type

Note: If you do not provide a name, the SVC automatically generates the name VDiskn (where n is the ID sequence number that is assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number or the word “VDisk” (because this prefix is reserved for SVC assignment only).

Figure 8-74 Creating a VDisk wizard: Verify the VDisk sequential type

8. Figure 8-75 shows the progress of the creation of your VDisks on the storage and the final results.

Figure 8-75 Creating a VDisk wizard: Final result

8.5.4 Creating a Space-Efficient VDisk with autoexpand

Using Space-Efficient VDisks allows you to commit the minimal amount of space while promising an allocation that might be larger than the available free storage.

In this section, we create a Space-Efficient VDisk step-by-step. This process allows you to create VDisks with a much higher capacity than is physically available (this approach is called thin provisioning).

As the host writes to this VDisk and utilization approaches the real allocation, the SVC can dynamically grow the real capacity (when you enable the autoexpand feature) until it reaches the virtual capacity limit or until the MDG physically runs out of free space. In the latter scenario, running out of space causes the growing VDisk to go offline, affecting the host that is using that VDisk. Therefore, enabling threshold warnings is important and recommended.

Perform the following steps to create a Space-Efficient VDisk with autoexpand:

1. Select Create a VDisk from the list (Figure 8-66 on page 504), and click Go.

2. The Create Virtual Disks wizard launches. Click Next.

3. The Choose an I/O Group and a Preferred Node window opens. Choose an I/O Group, and then, choose a preferred node (see Figure 8-76). In our case, we let the system choose. Click Next.

Figure 8-76 Creating a VDisk wizard: Select Groups

4. The Set Attributes window opens (Figure 8-69 on page 506). Perform these steps:

a. Choose the type of VDisk that you want to create: striped or sequential.

b. Select the cache mode: Read/Write or None.

c. Enter a unit device identifier (optional).

d. Enter the number of VDisks that you want to create.

e. Select Space-efficient, which expands this section with the following options:

i. Type the size of the VDisk Capacity (remember, this size is the virtual size).

ii. Type a percentage or select a specific size for the usage threshold warning.

iii. Select Auto expand, which allows the real disk size to grow as required.

iv. Select the Grain size (choose 32 KB normally, but match the FlashCopy grain size, which is 256 KB, if the VDisk will be used for FlashCopy).

f. Optionally, format the new VDisk by selecting Format VDisk before use (write zeros to its managed disk extents).

g. Click Next.

Figure 8-77 Creating a VDisk wizard: Set Attributes

5. On the Select MDisk(s) and Size for a <modetype>-Mode VDisk window, as shown in Figure 8-78, follow these steps:

a. Select the Managed Disk Group from the list.

b. Optionally, choose the MDisk Candidates upon which to create the VDisk. Click Add to move them to the Managed Disks Striped in this Order box.

c. Type the Real size that you want to allocate. This size is the amount of disk space that will actually be allocated. It can either be a percentage of the virtual size or a specific number.

Figure 8-78 Creating a VDisk wizard: Selecting MDisks and sizes
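The relationship between the virtual size, the real size percentage, and the usage threshold warning can be sketched as follows. This is our own illustration, assuming that both percentages are taken of the virtual size; the function names and example numbers are hypothetical:

```python
def real_allocation_mb(virtual_mb: int, real_pct: float) -> int:
    """Real capacity to allocate when the real size is given as a
    percentage of the virtual size (rounded down to whole MB)."""
    return int(virtual_mb * real_pct / 100)

def warning_threshold_mb(virtual_mb: int, warn_pct: float) -> int:
    """Used capacity at which a usage threshold warning is raised,
    assuming the warning is a percentage of the virtual size."""
    return int(virtual_mb * warn_pct / 100)

# Hypothetical example: a 100 GB virtual VDisk (1 GB uses 1,024 MB)
# with a 10% real size and an 80% usage warning:
virtual = 100 * 1024
print(real_allocation_mb(virtual, 10))    # 10240 MB allocated up front
print(warning_threshold_mb(virtual, 80))  # warning raised at 81920 MB
```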

6. In the Name the VDisk(s) window (Figure 8-79 on page 512), type a name for the VDisk that you are creating. In our case, we used vdisk_sev2. Click Next.

Figure 8-79 Name the VDisk(s) window

7. In the Verify Attributes window (Figure 8-80), verify the selections. We can select Back at any time to make changes.

Figure 8-80 Verifying Space-Efficient VDisk Attributes window

8. After selecting Finish, we are presented with a window (Figure 8-81 on page 513) that tells us the result of the action.

Figure 8-81 Space-Efficient VDisk creation success

8.5.5 Deleting a VDisk

To delete a VDisk, perform the following steps:

1. Select the VDisk that you want to delete (Figure 8-66 on page 504). Select Delete a VDisk from the list, and click Go.

2. In the Deleting Virtual Disk VDiskname window (where VDiskname is the VDisk that you just selected), click OK to confirm your desire to delete the VDisk. See Figure 8-82.

Figure 8-82 Deleting a VDisk

If the VDisk is currently assigned to a host, you receive a secondary message where you must click Forced Delete to confirm your decision. See Figure 8-83 on page 514. This action deletes the VDisk-to-host mapping before deleting the VDisk.

Important: Deleting a VDisk is a destructive action for user data residing in that VDisk.

Figure 8-83 Deleting a VDisk: Forcing a deletion

8.5.6 Deleting a VDisk-to-host mapping

To unmap (unassign) a VDisk from a host, perform the following steps:

1. Select the VDisk that you want to unmap. Select Delete a VDisk-to-host mapping from the list, and click Go.

2. In the Deleting a VDisk-to-host Mapping window (Figure 8-84), from the Host Name list, select the host from which to unassign the VDisk. Click OK.

Figure 8-84 Deleting a VDisk-to-host Mapping window

8.5.7 Expanding a VDisk

Expanding a VDisk presents a larger capacity disk to your operating system. Although you can expand a VDisk easily using the SVC, you must ensure that your operating system is prepared for it and supports the volume expansion before you use this function.

Dynamic expansion of a VDisk is only supported when the VDisk is in use by one of the following operating systems:

- AIX 5L V5.2 and higher

- Microsoft Windows 2000 Server and Windows Server 2003 for basic disks

- Microsoft Windows 2000 Server and Windows Server 2003 with a hot fix from Microsoft (Q327020) for dynamic disks

Tip: Make sure that the host is no longer using that disk. Unmapping a disk from a host does not destroy the disk’s contents.

Unmapping a disk has the same effect as powering off the computer without first performing a clean shutdown and, thus, might leave the data in an inconsistent state. Also, any running application that was using the disk will start to receive I/O errors.

Assuming that your operating system supports it, to expand a VDisk, perform the following steps:

1. Select the VDisk that you want to expand (Figure 8-66 on page 504). Select Expand a VDisk from the list, and click Go.

2. The Expanding Virtual Disks VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-85. Follow these steps:

a. Select the new size of the VDisk. This size is the increment to add. For example, if you have a 5 GB disk and you want it to become 10 GB, you specify 5 GB in this field.

b. Optionally, select the MDisk candidates from which to obtain the additional capacity. The default for a striped VDisk is to use equal capacity from each MDisk in the MDG.

c. Optionally, you can format the extra space with zeros by selecting the Format Additional Managed Disk Extents check box. This option does not format the entire VDisk, only the newly expanded space.

When you are finished, click OK.

Figure 8-85 Expanding a VDisk

VDisk expansion notes:

- With sequential VDisks, you must specify the MDisk from which you want to obtain space.

- No support exists for the expansion of image mode VDisks.

- If there are insufficient extents to expand your VDisk to the specified size, you receive an error message.

- If you use VDisk Mirroring, all copies must be synchronized before expanding.
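The increment semantics of step 2a can be sketched as follows (our own illustration, not SVC code):

```python
def expansion_increment_gb(current_gb: int, target_gb: int) -> int:
    """The size entered in the Expand a VDisk window is the amount
    to add, not the final size: to grow a 5 GB VDisk to 10 GB,
    specify 5 GB."""
    if target_gb <= current_gb:
        raise ValueError("expansion must grow the VDisk")
    return target_gb - current_gb

print(expansion_increment_gb(5, 10))  # 5
```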

8.5.8 Assigning a VDisk to a host

When we map a VDisk to a host, it does not matter whether the host is attached using either an iSCSI or FC connection type. The SVC treats the VDisk mapping in the same way for both connection types.

Perform the following steps to map a VDisk to a host:

1. From the SVC Welcome window (Figure 8-1 on page 470), select Work with Virtual Disks, and then, select Virtual Disks.

2. In the Viewing VDisks window (Figure 8-86), from the list, select Map VDisks to a host, and click Go.

Figure 8-86 Assigning VDisks to a host

3. In the Creating Virtual Disk-to-Host Mappings window (Figure 8-87), select the target host. We have the option to specify the SCSI LUN ID. (This field is optional. Use this field to specify an ID for the SCSI LUN. If you do not specify an ID, the next available SCSI LUN ID on the host adapter is automatically used.) Click OK.

Figure 8-87 Creating VDisk-to-Host Mappings window
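The automatic SCSI LUN ID assignment that step 3 mentions amounts to choosing the lowest ID that the host is not already using. A minimal sketch, assuming IDs start at 0 (the function name is our own):

```python
def next_available_scsi_id(ids_in_use: set[int]) -> int:
    """Return the lowest SCSI LUN ID (starting at 0) that is not
    already used by the host's existing VDisk mappings."""
    candidate = 0
    while candidate in ids_in_use:
        candidate += 1
    return candidate

print(next_available_scsi_id({0, 1, 3}))  # 2
```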

4. You are presented with an information window that displays the status, as shown in Figure 8-88 on page 517.

Figure 8-88 VDisk to host mapping successful

5. You now return to the Viewing Virtual Disks window (Figure 8-86 on page 516).

You have now completed all of the tasks that are required to assign a VDisk to an attached host, and the VDisk is ready for use by the host.

8.5.9 Modifying a VDisk

The Modifying Virtual Disk menu item allows you to rename the VDisk, reassign the VDisk to another I/O Group, and set throttling parameters.

To modify a VDisk, perform the following steps:

1. Select the VDisk that you want to modify (Figure 8-66 on page 504). Select Modify a VDisk from the list, and click Go.

2. The Modifying Virtual Disk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-89 on page 518. You can perform the following steps separately or in combination:

a. Type a new name for your VDisk.

b. Select an alternate I/O Group from the list to alter the I/O Group to which it is assigned.

c. Set performance throttling for a specific VDisk. In the I/O Governing field, type a number and select either I/O or MB from the list. Note the following items:

• I/O governing effectively throttles the amount of I/Os per second (or MBs per second) to and from a specific VDisk. You might want to use I/O governing if you have a VDisk that has an access pattern that adversely affects the performance of other VDisks on the same set of MDisks, for example, if it uses most of the available bandwidth.

• If this application is highly important, migrating the VDisk to another set of MDisks might be advisable. However, in certain cases, it is an issue with the I/O profile of the application rather than a measure of its use or importance.

• Base your choice between I/O and MB as the I/O governing throttle on the disk access profile of the application. Database applications generally issue large amounts of I/O, but they only transfer a relatively small amount of data. In this case,

New name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length. However, it cannot start with a number or the word “VDisk” (because this prefix is reserved for SVC assignment only).

setting an I/O governing throttle that is based on MBs per second does not achieve much. It is better for you to use an I/O per second throttle.

At the other extreme, a streaming video application generally issues a small amount of I/O, but it transfers large amounts of data. In contrast to the database example, setting an I/O governing throttle based on I/Os per second does not achieve much. Therefore, it is better for you to use an MB per second throttle.

• Additionally, you can specify a unit device identifier.

• The Primary Copy is used to select which VDisk copy is going to be used as the preferred copy for read operations.

• The Mirror Synchronization rate is the I/O governing rate in a percentage during the initial synchronization. A zero value disables synchronization.

• The Copy ID section is used for Space-Efficient VDisks. If you only have a single Space-Efficient VDisk, the Copy ID drop-down list will be grayed out and you can change the warning thresholds and whether the copy will autoexpand. If you have a VDisk mirror and one, or more, of the copies are space-efficient, you can select a copy, or all copies, and change the warning thresholds/autoexpand individually.

Click OK when you have finished making changes.

Figure 8-89 Modifying a VDisk
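The guidance above on choosing between an I/O per second throttle and an MB per second throttle can be expressed as a simple heuristic. This is our own sketch; the 64 KB boundary is an assumed illustrative value, not an SVC parameter:

```python
def suggested_throttle_unit(avg_transfer_kb: float) -> str:
    """Choose the I/O governing unit from the access profile:
    workloads with many small I/Os (database-like) are better
    throttled by I/Os per second; workloads with large transfers
    (streaming-like) by MB per second. 64 KB is an illustrative
    boundary, not an SVC value."""
    return "I/O per second" if avg_transfer_kb < 64 else "MB per second"

print(suggested_throttle_unit(8))     # database-like: I/O per second
print(suggested_throttle_unit(1024))  # streaming: MB per second
```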

8.5.10 Migrating a VDisk

To migrate a VDisk, perform the following steps:

1. Select the VDisk that you want to migrate (Figure 8-66 on page 504). Select Migrate a VDisk from the list, and click Go.

2. The Migrating Virtual Disk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens, as shown in Figure 8-90. From the MDisk Group Name list, perform these steps:

a. Select the MDG to which you want to reassign the VDisk. You will only be presented with a list of MDisk groups with the same extent size.

b. Specify the number of threads to devote to this process (a value from 1 to 4). The optional threads parameter allows you to assign a priority to the migration process. A setting of 4 is the highest priority setting. If you want the process to take a lower priority over other types of I/O, you can specify 3, 2, or 1.

When you have finished making your selections, click OK to begin the migration process.

3. The migration runs in the background. Manually refresh your browser, and return to the Viewing Virtual Disks window periodically, to see the MDisk Group Name column in the Viewing Virtual Disks window update to reflect the new MDG name.

Figure 8-90 Migrating a VDisk

8.5.11 Migrating a VDisk to an image mode VDisk

Migrating a VDisk to an image mode VDisk allows the SVC to be removed from the data path. This action might be useful where the SVC is used as a data mover appliance.

To migrate a VDisk to an image mode VDisk, the following rules apply:

- The destination MDisk must be greater than or equal to the size of the VDisk.

- The MDisk that is specified as the target must be in an unmanaged state.

- Regardless of the mode in which the VDisk starts, it is reported as being in managed mode during the migration.

- Both of the MDisks involved are reported as being in image mode during the migration.

- If the migration is interrupted by a cluster recovery or by a cache problem, the migration resumes after the recovery completes.
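The first two rules in this list can be written as a pre-flight check (our own illustrative sketch; the parameter names are hypothetical):

```python
def can_migrate_to_image_mode(vdisk_size_gb: int,
                              target_mdisk_size_gb: int,
                              target_mdisk_mode: str) -> bool:
    """Check the first two migration rules: the destination MDisk
    must be at least as large as the VDisk, and the target MDisk
    must be in an unmanaged state."""
    return (target_mdisk_size_gb >= vdisk_size_gb
            and target_mdisk_mode == "unmanaged")

print(can_migrate_to_image_mode(50, 60, "unmanaged"))  # True
print(can_migrate_to_image_mode(50, 40, "unmanaged"))  # False
```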

To accomplish the migration, perform the following steps:

1. Select a VDisk from the list, choose Migrate to an Image Mode VDisk from the drop-down list (Figure 8-91 on page 520), and click Go.

Important: After a migration starts, you cannot stop it manually. Migration continues until it is complete, unless it is suspended by an error condition or the VDisk that is being migrated is deleted.

2. The Migrate to Image Mode VDisk wizard launches (it is not shown here). Read the steps in this window, and click Next.

3. Select the MDisk to which the data will be migrated (Figure 8-91). Click Next.

Figure 8-91 Migrate to image mode VDisk wizard: Select the Target MDisk

4. Select the MDG that the MDisk will join (Figure 8-92). Click Next.

Figure 8-92 Migrate to image mode VDisk wizard: Select MDG

5. Select the priority of the migration by selecting the number of threads (Figure 8-93). Click Next.

Figure 8-93 Migrate to image mode VDisk wizard: Select the Threads

6. Verify that the information that you specified is correct (Figure 8-94). If you are satisfied, click Finish. If you want to change something, use the Back option.

Figure 8-94 Migrate to image mode VDisk wizard: Verify Migration Attributes

7. Figure 8-95 displays the details of the VDisk that you are migrating.

Figure 8-95 Migrate to image mode VDisk wizard: Progress of migration

8.5.12 Creating a VDisk Mirror from an existing VDisk

You can create a mirror from an existing VDisk, which gives you two copies of the underlying disk extents.

You can use a VDisk mirror for any operation for which you can use a VDisk. It is transparent to higher level operations, such as Metro Mirror, Global Mirror, or FlashCopy.

Creating a VDisk mirror from an existing VDisk is not restricted to the same MDG, so it makes an ideal method to protect your data from a disk system or an array failure. If one copy of the mirror fails, it provides continuous data access to the other copy. When the failed copy is repaired, the copies automatically resynchronize.

You can also use a VDisk mirror as an alternative migration tool, where you can synchronize the mirror before splitting off the original side of the mirror. The VDisk stays online, and it can be used normally, while the data is being synchronized. The copies can also be separate structures (that is, striped, image, sequential, or space-efficient) and separate extent sizes.

Tip: You can also create a new mirrored VDisk by selecting an option during the VDisk creation, as shown in Figure 8-69 on page 506.

To create a mirror copy from within a VDisk, perform the following steps:

1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.

2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-96. You can perform the following steps separately or in combination:

a. Choose the type of VDisk Copy that you want to create: striped or sequential.

b. Select the MDG in which you want to put the copy. We recommend that you choose a separate group to maintain higher availability.

c. Click Select MDisk(s) manually, which expands the section that has a list of MDisks that are available for adding.

d. Choose the Mirror synchronization rate, which is the I/O governing rate in a percentage during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but only use this option when the VDisk has never been used or is going to be formatted by the host.

e. You can make the copy space-efficient. This section will expand, giving you options to allocate the virtual size, warning thresholds, autoexpansion, and grain size. See 8.5.4, “Creating a Space-Efficient VDisk with autoexpand” on page 509 for more information.

f. Optionally, format the new VDisk by selecting the “Format the new VDisk copy and mark the VDisk synchronized” check box. Use this option with care, because if the primary copy goes offline, you might not have the data replicated on the other copy.

g. Click OK.

Figure 8-96 Add Copy to VDisk window

You can monitor the MDisk copy synchronization progress by selecting the Manage Progress menu option and, then, by selecting the View Progress link.
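If you prefer the command line, the GUI steps above correspond roughly to the following CLI sketch. The names VDisk1 and MDG2 are hypothetical placeholders, and the exact options available can vary by code level, so treat this as an illustration rather than a definitive procedure:

```shell
# Add a striped mirror copy of VDisk1 in a second MDisk group,
# synchronizing at 50% of the peak rate
svctask addvdiskcopy -mdiskgrp MDG2 -vtype striped -syncrate 50 VDisk1

# Monitor the copy synchronization progress
svcinfo lsvdisksyncprogress VDisk1
```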


8.5.13 Creating a mirrored VDisk

In this section, we create a mirrored VDisk step-by-step. This process creates a highly available VDisk.

Refer to 8.5.3, “Creating a VDisk” on page 505, perform steps 1 to 4, and, then, perform the following steps:

1. In the Set Attributes window (Figure 8-97), follow these steps:

a. Select the type of VDisk to create (striped or sequential) from the list.

b. Select the cache mode (read/write or none) from the list.

c. Select a Unit device identifier (a numeric value) for this VDisk.

d. Select the number of VDisks to create.

e. Select the Mirrored Disk check box. Certain mirror disk options will appear.

f. Type the Mirror Synchronization rate as a percentage. It is set to 50% by default.

g. Optionally, you can check the Synchronized check box. Select this option when MDisks are already formatted or when read stability to unwritten areas of the VDisk is not required.

h. Click Next.

Figure 8-97 Select the attributes for the VDisk

2. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window, as shown in Figure 8-98 on page 524, follow these steps:

a. Select the MDG from the list.

b. Type the capacity of the VDisk. Select the unit of capacity from the list.

c. Click Next.


Figure 8-98 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 0) window

3. In the Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window, as shown in Figure 8-99, select an MDG for Copy 1 of the mirror. You can define Copy 1 within the same MDG or on another MDG. Click Next.

Figure 8-99 Select MDisk(s) and Size for a Striped-Mode VDisk (Copy 1) window

4. In the Name the VDisk(s) window (Figure 8-100), type a name for the VDisk that you are creating. In this case, we used MirrorVDisk1. Click Next.

Figure 8-100 Name the VDisk(s) window


5. In the Verify Mirrored VDisk Attributes window (Figure 8-101), verify the selections. We can select the Back button at any time to make changes.

Figure 8-101 Verifying Mirrored VDisk Attributes window

6. After selecting Finish, we are presented with the window, which is shown in Figure 8-102, that informs us of the result of the action.

Figure 8-102 Mirrored VDisk creation success

We click Close again, and by clicking our newly created VDisk, we can see more detailed information about that VDisk, as shown in Figure 8-103 on page 526.
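The same mirrored VDisk can be created in a single CLI command. This is a sketch with placeholder names (MDG1, MDG2, MirrorVDisk1); the colon-separated MDG list assigns one MDisk group to each copy:

```shell
# Create a 10 GB VDisk with two copies, one in each MDisk group,
# synchronizing at 50% of the peak rate
svctask mkvdisk -iogrp 0 -mdiskgrp MDG1:MDG2 -copies 2 -syncrate 50 \
    -size 10 -unit gb -name MirrorVDisk1
```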


Figure 8-103 List of created mirrored VDisks

8.5.14 Creating a VDisk in image mode

An image mode disk is a VDisk that has an exact one-to-one (1:1) mapping of VDisk extents with the underlying MDisk. For example, extent 0 on the VDisk contains the same data as extent 0 on the MDisk, and so on. Without this 1:1 mapping (for example, if extent 0 on the VDisk mapped to extent 3 on the MDisk), there is little chance that the data on a newly introduced MDisk is still readable.

Image mode is intended for the purpose of migrating data from an environment without the SVC to an environment with the SVC. A LUN that was previously directly assigned to a SAN-attached host can now be reassigned to the SVC (during a short outage) and returned to the same host as an image mode VDisk, with the user’s data intact. During the same outage, the host, cables, and zones can be reconfigured to access the disk, now through the SVC.

After access is re-established, the host workload can resume while the SVC manages the transparent migration of the data to other SVC managed MDisks on the same or another disk subsystem.

We recommend that, during the migration phase of the SVC implementation, you add one image mode VDisk at a time to the SVC environment. This approach reduces the risk of error. It also means that the short outages that are required to reassign the LUNs from the subsystem or subsystems and to reconfigure the SAN and host can be staggered over a period of time to minimize the effect on the business.

As of SVC Version 4.3, you have the ability to create a VDisk mirror or a Space-Efficient VDisk while you are creating an image mode VDisk.

You can use the mirroring option, while making the image mode VDisk, as a storage array migration tool, because the Copy1 MDisk will also be in image mode.

To create a space-efficient image mode VDisk, you must have the same amount of real disk space as the original MDisk, because the SVC is unable to detect how much physical space a host utilizes on a LUN.


To create an image mode VDisk, perform the following steps:

1. From the My Work window on the left side of your GUI, select Work with virtual disks.

2. From the Work with Virtual Disks window, select Virtual Disks.

3. From the list, select Create Image Mode VDisk.

4. From the overview for the creation of an image mode VDisk, select Next.

5. The “Set the attributes for the image mode Virtual Disk you are creating” window opens (Figure 8-104), where you enter the name of the VDisk that you want to create.

Figure 8-104 Set attributes for the VDisk

6. You can also select whether you want to have read and write operations stored in cache by specifying a cache mode. Additionally, you can specify a unit device identifier. You can optionally choose to have a mirrored or Space-Efficient VDisk. Click Next to continue.

We describe the VDisk cache modes in Table 8-1.

Table 8-1 VDisk cache modes

Important: You can create an image mode VDisk only by using an unmanaged disk, that is, you must create an image mode VDisk before you add the MDisk that corresponds to your original logical volume to an MDG.

Cache mode: You must specify the cache mode when you create the VDisk. After the VDisk is created, you cannot change the cache mode.

Read/Write: All read and write I/O operations that are performed by the VDisk are stored in cache. Read/Write is the default cache mode for all VDisks.

None: No read or write I/O operations that are performed by the VDisk are stored in cache.


7. Next, choose the MDisk to use for your image mode VDisk, as shown in Figure 8-105.

Figure 8-105 Select your MDisk to use for your image mode VDisk

8. Select your I/O Group and preferred node to handle the I/O traffic for the VDisk that you are creating or have the system choose for you, as shown in Figure 8-106.

Figure 8-106 Select the I/O Group and preferred node

9. Figure 8-107 on page 529 shows you the characteristics of the new image VDisk. Click Finish to complete this task.

Note: If you do not provide a name, the SVC automatically generates the name VDiskn (where n is the ID sequence number that is assigned by the SVC internally).

If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, a dash, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number, a dash, or the word “VDisk” (because this prefix is reserved for SVC assignment only).


Figure 8-107 Verify image VDisk attributes

You can now map the newly created VDisk to your host.
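As a rough CLI equivalent of the steps above, an image mode VDisk is created from an unmanaged MDisk in one command. The names ImageMDG, mdisk10, and ImageVDisk1 are placeholders for this sketch:

```shell
# Create an image mode VDisk from the unmanaged MDisk mdisk10;
# the VDisk capacity is taken from the MDisk itself
svctask mkvdisk -iogrp 0 -mdiskgrp ImageMDG -vtype image -mdisk mdisk10 \
    -name ImageVDisk1
```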

8.5.15 Creating an image mode mirrored VDisk

This procedure adds a mirror copy to the image mode VDisk creation process. The second copy (Copy 1) will also be in image mode. You can use this mirror copy as a storage array migration tool, using the SVC as the data mover. Follow these steps:

1. From the My Work window on the left side of your GUI, select Work with Virtual Disks.

2. From the Work with Virtual Disks window, select Virtual Disks.

3. From the drop-down menu, select Create Image Mode VDisk.

4. After selecting Next on the overview window, you see the attribute selection window, as shown in Figure 8-108 on page 530. Follow these steps:

a. Enter the name of the VDisk that you want to create.

b. Select the Mirrored Disk check box and a subsection expands. The mirror synchronization rate is a percentage of the peak rate. The synchronized option is only available when the original disk is unused (or going to be otherwise formatted by the host).


Figure 8-108 Set attributes for the VDisk

5. Figure 8-109 enables you to choose on which of the available MDisks your Copy 0 and Copy 1 will be stored. Notice that we have selected a second MDisk that is larger than the original MDisk. Click Next to proceed.

Figure 8-109 Select MDisks

6. Now, you can optionally select an I/O Group and a preferred node, and you can select an MDG for each of the MDisk copies, as shown in Figure 8-110 on page 531. Click Next to proceed.


Figure 8-110 Choose an I/O Group and an MDG for each of the MDisk copies

7. Figure 8-111 shows you the characteristics of the new image mode VDisk. Click Finish to complete this task.

Figure 8-111 Verify image VDisk attributes

You can monitor the MDisk copy synchronization progress by selecting Manage Progress and then View Progress, as shown in Figure 8-112 on page 532.


Figure 8-112 VDisk copy synchronization status

Optionally, you can assign the VDisk to the host now, or wait until it is synchronized and, after deleting VDisk mirror Copy 1, map the remaining VDisk copy to the host.

8.5.16 Migrating to a Space-Efficient VDisk using VDisk Mirroring

In this scenario, we migrate from a fully allocated (or an image mode) VDisk to a Space-Efficient VDisk using VDisk Mirroring. We repeat the procedure as described and shown in 8.5.12, “Creating a VDisk Mirror from an existing VDisk” on page 521, but here we select the Space-Efficient VDisk as the mirrored copy. Follow these steps:

1. Select a VDisk from the list, choose Add a Mirrored VDisk Copy from the drop-down list (see Figure 8-66 on page 504), and click Go.

2. The Add Copy to VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens. See Figure 8-113 on page 533. You can perform the following steps separately or in combination:

a. Choose the type of VDisk copy that you want to create: striped or sequential.

b. Select the MDG in which you want to put the copy.

c. Click Select MDisk(s) manually, which will expand the section with a list of MDisks that are available for adding.

d. Specify a percentage for the Mirror synchronization rate, which is the I/O governing rate used during initial synchronization. A zero value disables synchronization. You can also select Synchronized, but only when the VDisk has never been used or is going to be formatted by the host.

e. Select Space-efficient. This section will expand. Perform these steps:

i. Type 100 in the % box for the real size to initially allocate. The SVC will see Copy 0 as 100% utilized, so Copy 1 must be defined as the same size.

ii. Clear the Warn when used capacity of VDisk reaches check box.

iii. Check Auto expand.

iv. Set the Grain size. See 8.5.4, “Creating a Space-Efficient VDisk with autoexpand” on page 509 for more information.

f. Click OK.


Figure 8-113 Add a space-efficient copy to VDisk

You can monitor the VDisk copy synchronization progress by selecting the Manage Progress menu option and, then, the View Progress link, as shown in Figure 8-114 on page 534.


Figure 8-114 Two ongoing VDisk copies in the system
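The GUI choices above map to options of a single CLI command, sketched below with placeholder names (VDisk1, MDG2). The real size, autoexpand, and grain size options correspond to the space-efficient settings chosen in the window:

```shell
# Add a space-efficient copy with 100% real capacity, autoexpand on,
# and a 32 KB grain size, in a second MDisk group
svctask addvdiskcopy -mdiskgrp MDG2 -rsize 100% -autoexpand -grainsize 32 VDisk1
```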

8.5.17 Deleting a VDisk copy from a VDisk mirror

After the VDisk copy has finished synchronizing, you can remove the original VDisk copy (Copy 0):

1. In the Viewing Virtual Disks window, select the mirrored VDisk from the list, choose Delete a Mirrored VDisk Copy from the drop-down list (Figure 8-115), and click Go.

Figure 8-115 Viewing Virtual Disks: Deleting a mirrored VDisk copy

2. Figure 8-116 on page 535 displays both copies of the VDisk mirror. Select the original copy (Copy ID 0), and click OK.


Figure 8-116 Deleting VDisk Copy 0

The VDisk is now a single Space-Efficient VDisk copy.

To migrate a Space-Efficient VDisk to a fully allocated VDisk, follow the same scenario, but add a normal (fully allocated) VDisk as the second copy.
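The deletion performed in the GUI above has a one-line CLI counterpart, shown here as a sketch with VDisk1 as a placeholder name:

```shell
# Remove the original, fully allocated copy (Copy ID 0) from the mirror
svctask rmvdiskcopy -copy 0 VDisk1
```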

8.5.18 Splitting a VDisk copy

To split off a synchronized VDisk copy to a new VDisk, perform the following steps:

1. Select a mirrored VDisk from the list, choose Split a VDisk Copy from the drop-down list (Figure 8-66 on page 504), and click Go.

2. The Split a Copy from VDisk VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens (See Figure 8-117 on page 536). Perform the following steps:

a. Select which copy you want to split.

b. Type a name for the new VDisk.

c. You can optionally force-split the copies even if the copy is not synchronized. However, the split copy will not be point-in-time consistent.

d. Choose an I/O Group and then a preferred node. In our case, we let the system choose.

e. Select the cache mode: Read/Write or None.

f. If you want, enter a unit device identifier.

g. Click OK.


Figure 8-117 Split a copy from a VDisk

This new VDisk is available to be mapped to a host.
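In CLI terms, the split shown above can be sketched as follows; VDisk1 and SplitVDisk1 are placeholder names for this example:

```shell
# Split copy 1 off into a new, independent VDisk named SplitVDisk1
svctask splitvdiskcopy -copy 1 -name SplitVDisk1 VDisk1
```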

8.5.19 Shrinking a VDisk

The method that the SVC uses to shrink a VDisk is to remove the required number of extents from the end of the VDisk. Depending on where the data actually resides on the VDisk, this action can be quite destructive. For example, you might have a VDisk that consists of 128 extents (0 to 127) of 16 MB (2 GB capacity), and you want to decrease the capacity to 64 extents (1 GB capacity). In this case, the SVC simply removes extents 64 to 127. Depending on the operating system, there is no easy way to ensure that your data resides entirely on extents 0 through 63, so be aware that you might lose data.

Although easily done using the SVC, you must ensure that your operating system supports shrinking, either natively or by using third-party tools, before using this function.

In addition, we recommend that you always have a good current backup before you execute this task.

Shrinking a VDisk is useful in certain circumstances, such as:

� Reducing the size of a candidate target VDisk of a copy relationship to make it the same size as the source

� Releasing space from VDisks to have free extents in the MDG, provided that you do not use that space any more and take precautions with the remaining data

Important: After you split a VDisk mirror, you cannot resynchronize or recombine them. You must create a VDisk copy from scratch.


Assuming your operating system supports it, perform the following steps to shrink a VDisk:

1. Perform any necessary steps on your host to ensure that you are not using the space that you are about to remove.

2. Select the VDisk that you want to shrink (Figure 8-66 on page 504). Select Shrink a VDisk from the list, and click Go.

3. The Shrinking Virtual Disks VDiskname window (where VDiskname is the VDisk that you selected in the previous step) opens, as shown in Figure 8-118. In the Reduce Capacity By field, enter the capacity by which you want to reduce. Select B, KB, MB, GB, TB, or PB. The final capacity of the VDisk is the Current Capacity minus the capacity that you specify.

When you are finished, click OK. The changes become visible on your host.

Figure 8-118 Shrinking a VDisk
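The shrink operation in the GUI corresponds roughly to the following CLI sketch, where VDisk1 is a placeholder name. As with the GUI, make sure your host no longer uses the space being removed:

```shell
# Reduce the capacity of VDisk1 by 1 GB
# (the SVC treats 1 GB as 1,024 MB when it calculates the new size)
svctask shrinkvdisksize -size 1 -unit gb VDisk1
```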

8.5.20 Showing the MDisks that are used by a VDisk

To show the MDisks that are used by a specific VDisk, perform the following steps:

1. Select the VDisk for which you want to view MDisk information (Figure 8-66 on page 504). Select Show MDisks This VDisk is Using from the list, and click Go.

2. You will see a subset (specific to the VDisk that you chose in the previous step) of the Viewing Managed Disks window (Figure 8-119).

Figure 8-119 Showing MDisks that are used by a VDisk

Capacity: Be careful with the capacity information. The Current Capacity field shows the capacity in MBs, while you can specify a capacity to reduce in GBs. SVC calculates 1 GB as 1,024 MB.


For information about what actions you can perform in this window, see 8.2.4, “Managed disks” on page 479.
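The same information is available from the CLI; this sketch uses VDisk1 as a placeholder name:

```shell
# List the MDisks that provide extents to VDisk1
svcinfo lsvdiskmember VDisk1
```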

8.5.21 Showing the MDG to which a VDisk belongs

To show the MDG to which a specific VDisk belongs, perform the following steps:

1. Select the VDisk for which you want to view MDG information (Figure 8-66 on page 504). Select Show MDisk Group This VDisk Belongs To from the list, and click Go.

2. You will see a subset (specific to the VDisk that you chose in the previous step) of the Viewing Managed Disk Groups Belonging to VDiskname window (Figure 8-120).

Figure 8-120 Showing an MDG for a VDisk

8.5.22 Showing the host to which the VDisk is mapped

To show the host to which a specific VDisk is mapped, select the VDisk for which you want to view host mapping information (Figure 8-66 on page 504). Select Show Hosts This VDisk is Mapped To from the list, and click Go, which shows you the host to which the VDisk is attached (Figure 8-121). Alternatively, you can use the procedure that is described in 8.5.24, “Showing VDisks mapped to a particular host” on page 539 to see all of the VDisk to host mappings.

Figure 8-121 Show host to VDisk mapping
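Both directions of the mapping query have CLI equivalents, sketched here with the placeholder names VDisk1 and Host1:

```shell
# Show the hosts to which VDisk1 is mapped
svcinfo lsvdiskhostmap VDisk1

# Conversely, show all VDisks that are mapped to a particular host
svcinfo lshostvdiskmap Host1
```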

8.5.23 Showing capacity information

To show the capacity information of the cluster, perform the following step.

From the VDisk overview drop-down list, select Show Capacity information, as shown in Figure 8-122 on page 539.


Figure 8-122 Selecting capacity information for a VDisk

Figure 8-123 shows you the total MDisk capacity, the space in the MDGs, the space allocated to the VDisks, and the total free space.

Figure 8-123 Show capacity information

8.5.24 Showing VDisks mapped to a particular host

To show the VDisks that are assigned to a specific host, perform the following steps:

1. From the SVC Welcome window, click Work with Virtual Disks and, then, Virtual Disk to Host Mappings (Figure 8-124).

Figure 8-124 VDisk to host mapping


2. Now you can see to which host that VDisk belongs. If this is a long list, you can use the Additional Filtering and Sort option from 8.7.1, “Organizing on window content” on page 543.

8.5.25 Deleting VDisks from a host

Perform these steps to delete a mapping:

1. From the same window where you can view VDisk to host mapping (Figure 8-124 on page 539), you can also delete a mapping. Select the host and VDisk combination that you want to delete. Ensure that Delete a Mapping is selected from the list. Click Go.

2. Confirm the selection that you made in Figure 8-125 by clicking Delete.

Figure 8-125 Deleting VDisk to Host mapping

3. Now you are back at the window that is shown in Figure 8-124 on page 539. Now, you can assign this VDisk to another host, as described in 8.5.8, “Assigning a VDisk to a host” on page 516.
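Deleting a mapping can also be done from the CLI, as in this sketch with the placeholder names Host1 and VDisk1:

```shell
# Remove the mapping between Host1 and VDisk1
# (the VDisk itself and its data are not deleted)
svctask rmvdiskhostmap -host Host1 VDisk1
```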

You have now completed the required tasks to manage VDisks within an SVC environment.

8.6 Working with solid-state drives

In SVC, solid-state drives are introduced as part of each SVC node. During our operational work on the SVC cluster, it is necessary to know how to identify where our solid-state drives are located and how they are configured. In this section, we describe the basic operational tasks related to the solid-state drives. Note that storing the quorum disk on solid-state drives is not supported.

More detailed information about solid-state drives and internal controllers is in 2.5, “Solid-state drives” on page 49.

8.6.1 Solid-state drive introduction

If you have solid-state drives installed in your node, they will appear as unmanaged MDisks, which are controlled by an internal controller. This controller is only used for the solid-state drives, and each controller is dedicated to a single node; therefore, we can have eight internal controllers in a single cluster configuration. Those controllers are automatically assigned as owners of the solid-state drives, and the controllers have the same worldwide node name (WWNN) as the node to which they belong. An internal controller is identified in Figure 8-126 on page 541.


Figure 8-126 SVC internal controller

The unmanaged MDisks (solid-state drives) are owned by the internal controllers. We recommend that you create a dedicated MDG for the solid-state drives. When those MDisks are added to an MDG, they become managed and are treated like any other MDisks in an MDG.

If we look closer at one of the selected controllers, as shown in Figure 8-127, we can verify the SVC node that owns this controller, and we can verify that this controller is an internal SVC controller.

Figure 8-127 Shows internal solid-state drive controller

We can now check what MDisks (sourced from our solid-state drives) are provisioned from that controller, as shown in Figure 8-128 on page 542.


Figure 8-128 Our solid-state drives

From this view, we can see all of the relevant information, such as the status, the MDG, and the size. To see more detailed information about a single MDisk (single solid-state drive), we click a single MDisk and we will see its information, as shown in Figure 8-129.

Figure 8-129 Showing details for a solid-state drive MDisk

Notice the controller type (6), which is an identifier for the internal controller type.

When you have your solid-state drives in full operation and you want to see the VDisks that use your solid-state drives, the easiest way is to locate the MDG that contains your solid-state drives as MDisks, and select Show VDisks Using This Group, as shown in Figure 8-130 on page 543.


Figure 8-130 Showing VDisks using our solid-state drives

This action displays the VDisks that use your solid-state drives.

8.7 SVC advanced operations using the GUI

In the following topics, we describe the more advanced activities.

8.7.1 Organizing on window content

Detailed information about filtering and sorting the content that is displayed in the GUI is available in 8.1.1, “Organizing on window content” on page 470.

If you need to access the online help, click the question mark icon in the upper right corner of the window. This icon opens an information center window where you can search on any item for which you want help (see Figure 8-131 on page 544).


Figure 8-131 Online help using the question mark icon

General maintenance

If, at any time, the content in the right side of the frame is abbreviated, you can collapse the My Work column by clicking the small arrow icon at the top of the My Work column. When collapsed, the small arrow changes from pointing to the left to pointing to the right. Clicking the small arrow that points right expands the My Work column back to its original size.

In addition, each time that you open a configuration or administrative window using the GUI in the following sections, it creates a link for that window along the top of your Web browser beneath the banner graphic. As a general maintenance task, we recommend that you close each window when you finish using it by clicking the close icon to the right of the window name. Be careful not to close the entire browser.

8.8 Managing the cluster using the GUI

This section explains the various configuration and administrative tasks that you can perform on the cluster.

8.8.1 Viewing cluster properties

Perform the following steps to display the cluster properties:

1. From the SVC Welcome window, select Manage Cluster and, then, View Cluster Properties.

2. The Viewing General Properties window (Figure 8-132 on page 545) opens. Click IP Addresses, Remote Authentication, Space, Statistics, Metro & Global Mirror, iSCSI, SNMP, Syslog, E-mail server, and E-mail user, and you will see additional information about your cluster’s configuration.


Figure 8-132 View Cluster Properties: General properties

8.8.2 Modifying IP addresses

SVC 5.1 introduces a function that enables the use of both IP ports of each node, so there are now two active cluster ports on each node. We describe the two active cluster ports on each node in further detail in 2.2.11, “Usage of IP addresses and Ethernet ports” on page 28.

If the cluster IP address is changed, the open command-line shell closes during the processing of the command. You must reconnect to the new IP address if the cluster is connected through that port.

In this section, we discuss the modification of IP addresses.

Perform the following steps to modify the cluster and service IP addresses of our SVC configuration:

1. From the SVC Welcome window, select Manage Cluster and, then, Modify IP Addresses.

2. The Modify IP Addresses window (Figure 8-133 on page 546) opens.

Important: If you specify a new cluster IP address, the existing communication with the cluster through the GUI is lost. You need to relaunch the SAN Volume Controller Application from the GUI Welcome window.

Modifying the IP address of the cluster, although quite simple, requires reconfiguration for other items within the SVC environments, including reconfiguring the central administration GUI by adding the cluster again with its new IP address.


Figure 8-133 Modify cluster IP address

Select the port that you want to modify, select Modify Port Setting, and click Go. Notice that you can configure both ports on the SVC node, as shown in Figure 8-134.

Figure 8-134 Modify cluster IP addresses

We enter the new information, as shown in Figure 8-135 on page 547.


Figure 8-135 Entering the new cluster IP address

3. You advance to the next window, which shows a message indicating that the IP addresses were updated.

You have now completed the required tasks to change the IP addresses (cluster, service, gateway, and Master Console) for your SVC environment.
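As a rough CLI counterpart to the port modification shown above, a command along the following lines changes the cluster address on one Ethernet port. The addresses are placeholders, and the exact command name and options should be checked against your code level before use:

```shell
# Change the cluster IP address on Ethernet port 1; the GUI or CLI session
# through that address is lost and must be reestablished at the new address
svctask chclusterip -port 1 -clusterip 10.20.30.40 \
    -gw 10.20.30.1 -mask 255.255.255.0
```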

8.8.3 Starting the statistics collection

Perform the following steps to start the statistics collection on your cluster:

1. From the SVC Welcome window, select Manage Cluster and Start Statistics Collection.

2. The Starting the Collection of Statistics window (Figure 8-136) opens. Make an interval change, if desired. The interval that you specify (minimum 1, maximum 60) is in minutes. Click OK.

Figure 8-136 Starting the Collection of Statistics

3. Although it does not state the current status, clicking OK turns on the statistics collection. To verify, click Cluster Properties, as you did in 8.8.1, “Viewing cluster properties” on page 544. Then, click Statistics. You see the interval as specified in Step 2 and the status of On, as shown in Figure 8-137 on page 548.


Figure 8-137 Verifying that statistics collection is on

You have now completed the required tasks to start statistics collection on your cluster.

8.8.4 Stopping the statistics collection

Perform the following steps to stop statistics collection on your cluster:

1. From the SVC Welcome window, select Manage Cluster and Stop Statistics Collection.

2. The Stopping the Collection of Statistics window (Figure 8-138) opens, and you see a message asking whether you are sure that you want to stop the statistics collection. Click Yes to stop the ongoing task.

Figure 8-138 Stopping the collection of statistics

3. The window closes. To verify that the collection has stopped, click Cluster Properties, as you did in 8.8.1, “Viewing cluster properties” on page 544. Then, click Statistics. Now, you see the status has changed to Off, as shown in Figure 8-139.

Figure 8-139 Verifying that statistics collection is off


You have now completed the required tasks to stop statistics collection on your cluster.
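Both statistics operations have simple CLI equivalents; the 15-minute interval below is just an example value:

```shell
# Start statistics collection with a 15-minute interval (1-60 minutes)
svctask startstats -interval 15

# Stop statistics collection
svctask stopstats
```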

8.8.5 Metro Mirror and Global Mirror

From the Manage Cluster window, we can see how Metro Mirror or Global Mirror is configured.

In Figure 8-140, we can see the overview of partnership properties and which clusters are currently in partnership with our cluster.

Figure 8-140 Metro Mirror and Global Mirror overview

8.8.6 iSCSI

From the View Cluster Properties window, we can select iSCSI to see the iSCSI overview. The iSCSI properties show whether the iSNS server and CHAP are configured and what type, if any, of authentication is supported (Figure 8-141).

Figure 8-141 iSCSI overview from cluster properties window

8.8.7 Setting the cluster time and configuring the Network Time Protocol server

Perform the following steps to configure time settings:

1. From the SVC Welcome window, select Manage Cluster and Set Cluster Time.

2. The Cluster Date and Time Settings window (Figure 8-142 on page 550) opens. At the top of the window, you can see the current settings.


3. If you are using a Network Time Protocol (NTP) server, enter the IP address of the NTP server and select Set NTP Server. From now on, the cluster will use that server’s settings as its time reference.

4. If it is necessary to change the cluster time, select Update Cluster Time.

Figure 8-142 Changing cluster date and time

You have now completed the necessary tasks to configure an NTP server and to set the cluster time zone and time.
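The same settings can also be made from the CLI. The following sketch uses the SVC 5.1 command set; the NTP parameter name, the time format, the addresses, and the time zone ID are illustrative assumptions, so verify them against your code level:

```
# List the available time zone IDs, then set one (ID is illustrative)
svcinfo lstimezones
svctask settimezone -timezone 520

# Point the cluster at an NTP server (address and parameter name assumed)
svctask chcluster -ntpip 9.64.210.10

# Or set the cluster time manually (MMDDHHmmYYYY format assumed)
svctask settime -time 030916452010
```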

8.8.8 Shutting down a cluster

If all input power to a SAN Volume Controller cluster is removed for more than a few minutes (for example, if the machine room power is shut down for maintenance), it is important that you shut down the cluster before you remove the power. Shutting down the cluster while it is still connected to the main power ensures that the uninterruptible power supply unit batteries are still fully charged (when power is restored).

If you remove the main power while the cluster is still running, the uninterruptible power supply unit will detect the loss of power and instruct the nodes to shut down. This shutdown can take several minutes to complete, and although the uninterruptible power supply unit has sufficient power to perform the shutdown, you will be unnecessarily draining the uninterruptible power supply unit batteries.

When power is restored, the SVC nodes will start; however, one of the first checks that the SVC nodes make is to ensure that the uninterruptible power supply unit’s batteries have sufficient power to survive another power failure, enabling the node to perform a clean shutdown. (We do not want the uninterruptible power supply unit to run out of power while the node’s shutdown activities have not yet completed.) If the uninterruptible power supply unit’s batteries are not sufficiently charged, the node will not start. It can take up to three hours to charge the batteries sufficiently for a node to start.

SVC uninterruptible power supply units are designed to survive at least two power failures in a short time, before nodes will refuse to start until the batteries have sufficient power (to survive another immediate power failure). If, during your maintenance activities, the uninterruptible power supply unit detected and lost input power multiple times (and thus the nodes started and shut down more than one time in a short time frame), you might find that you have unknowingly drained the uninterruptible power supply unit batteries. You must wait until they are sufficiently charged before the nodes will start.

Perform the following steps to shut down your cluster:

1. From the SVC Welcome window, select Manage Cluster and Shut Down Cluster.

2. The Shutting Down cluster window (Figure 8-143) opens. You will get a message asking you to confirm whether you want to shut down the cluster. Ensure that you have stopped all FlashCopy mappings, Remote Copy relationships, data migration operations, and forced deletions before continuing. Click Yes to begin the shutdown process.

Figure 8-143 Shutting down the cluster

You have now completed the required tasks to shut down the cluster. Now, you can shut down the uninterruptible power supply units by pressing the power buttons on their front panels.
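The shutdown can also be issued from the CLI with the stopcluster command, which appears in the Service role command list later in this chapter. A brief sketch; the single-node variant assumes that the -node parameter is supported at your code level:

```
# Shut down the entire cluster (quiesce host I/O first)
svctask stopcluster

# Shut down only one node; host I/O does not have to be quiesced
svctask stopcluster -node SVCNode2
```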

Note: When a node shuts down due to loss of power, the node will dump the cache to an internal hard drive so that the cached data can be retrieved when the cluster starts. With the 8F2/8G4 nodes, the cache is 8 GB and can take several minutes to dump to the internal drive.

Important: Before shutting down a cluster, quiesce all I/O operations that are destined for this cluster, because you will lose access to all of the VDisks that are provided by this cluster. Failure to do so might result in failed I/O operations being reported to your host operating systems.

There is no need to quiesce all I/O operations if you are only shutting down one SVC node.

Begin the process of quiescing all I/O to the cluster by stopping the applications on your hosts that are using the VDisks that are provided by the cluster.

If you are unsure which hosts are using the VDisks that are provided by the cluster, follow the procedure in 8.5.22, “Showing the host to which the VDisk is mapped” on page 538, and repeat this procedure for all VDisks.

Note: At this point, you will lose administrative contact with your cluster.

8.9 Manage authentication

Users are managed from within the Manage Authentication window in the SAN Volume Controller console GUI (see Figure 8-146 on page 554).

Each user account has a name, a role, and a password assigned to it, which differs from the Secure Shell (SSH) key-based role approach that is used by the CLI.

We describe authentication in detail in 2.3.5, “User authentication” on page 40.

The role-based security feature organizes the SVC administrative functions into groups, which are known as roles, so that permissions to execute the various functions can be granted differently to the separate administrative users. There are four major roles and one special role.

Table 8-2 on page 553 shows the user roles.

Tip: When you shut down the cluster, it will not automatically start. You must manually start the cluster.

If the cluster shuts down because the uninterruptible power supply unit has detected a loss of power, it will automatically restart when the uninterruptible power supply unit detects that the power has been restored (and the batteries have sufficient power to survive another immediate power failure).

Note: To restart the SVC cluster, you must first restart the uninterruptible power supply units by pressing the power buttons on their front panels. After they are on, go to the service panel of one of the nodes within your SVC cluster and press the power on button, releasing it quickly. After it is fully booted (for example, displaying Cluster: on line 1 and the cluster name on line 2 of the SVC front panel), you can start the other nodes in the same way.

As soon as all of the nodes are fully booted and you have re-established administrative contact using the GUI, your cluster is fully operational again.

Table 8-2 Authority roles

Security Admin
- Role: all commands.
- Users: superusers.

Administrator
- Role: all commands except these svctask commands: chauthservice, mkuser, rmuser, chuser, mkusergrp, rmusergrp, chusergrp, and setpwdreset.
- Users: administrators that control the SVC.

Copy Operator
- Role: all svcinfo commands and the following svctask commands: prestartfcconsistgrp, startfcconsistgrp, stopfcconsistgrp, chfcconsistgrp, prestartfcmap, startfcmap, stopfcmap, chfcmap, startrcconsistgrp, stoprcconsistgrp, switchrcconsistgrp, chrcconsistgrp, startrcrelationship, stoprcrelationship, switchrcrelationship, chrcrelationship, and chpartnership.
- Users: those users that control all copy functionality of the cluster.

Service
- Role: all svcinfo commands and the following svctask commands: applysoftware, setlocale, addnode, rmnode, cherrstate, writesernum, detectmdisk, includemdisk, clearerrlog, cleardumps, settimezone, stopcluster, startstats, stopstats, and settime.
- Users: those users that perform service maintenance and other hardware tasks on the cluster.

Monitor
- Role: all svcinfo commands, the following svctask commands: finderr, dumperrlog, dumpinternallog, and chcurrentuser, and the svcconfig command: backup.
- Users: those users only needing view access.

The superuser user is a built-in account that has the Security Admin user role permissions. You cannot change the permissions of, or delete, the superuser account; you can only change its password. You can also change this password manually on the front panels of the cluster nodes.

8.9.1 Modify current user

From the SVC Welcome window, select Manage authentication in the My Work pane, and select Modify Current User, as shown in Figure 8-144 on page 554.

Figure 8-144 Modifying current user

Toward the upper-left side of the window, you can see the name of the user that you are modifying. We enter our new password, as shown in Figure 8-145.

Figure 8-145 Changing password for the current user
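The CLI offers the corresponding chcurrentuser command (listed under the Monitor role in Table 8-2 on page 553). A minimal sketch; the -password parameter name is an assumption:

```
# Change the password of the user that you are currently logged in as
svctask chcurrentuser -password newPassw0rd
```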

8.9.2 Creating a user

Perform the following steps to view and create a user:

1. From the SVC Welcome window, select Users in the My Work pane, as shown in Figure 8-146.

Figure 8-146 Viewing users

2. Select Create a User from the list, as shown in Figure 8-147.

Figure 8-147 Create a user

3. Enter a name for your user and the desired password. Because we are not connected to a Lightweight Directory Access Protocol (LDAP) server, we select Local for the authentication type. Therefore, we can choose to which user group our user belongs. In our scenario, we are creating a user for SAN administrative purposes, and it is therefore appropriate to add this user to the Administrator group. We attach the SSH key, as well, so a CLI session can be opened. We view the attributes, as shown in Figure 8-148.

Figure 8-148 Creating attributes for new user called qwerty

And, we see the result of our creation in Figure 8-149.

Figure 8-149 Overview of users that we have created
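The same user can be created from the CLI with mkuser (one of the commands that Table 8-2 on page 553 reserves above the ordinary roles). A hedged sketch; the parameter names and the key file path are assumptions:

```
# Create a local user in the Administrator group with a password
# and an SSH public key for CLI access
svctask mkuser -name qwerty -usergrp Administrator \
        -password passw0rd -keyfile /tmp/qwerty_rsa.pub

# Verify the result
svcinfo lsuser
```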

8.9.3 Modifying a user role

Perform the following steps to modify a role:

1. Select the user, as shown in Figure 8-150, to change the assigned role. Select Modify a User from the list, and click Go.

Figure 8-150 Modify a user

2. You have the option of changing the password, assigning a new role, or changing the SSH key for the given user name. Click OK (Figure 8-151).

Figure 8-151 Modifying a user window

8.9.4 Deleting a user

Perform the following steps to delete a user:

1. Select the user that you want to delete. Select Delete Users from the drop-down list (Figure 8-152 on page 557), and click Go.

Figure 8-152 Delete a user

2. Click OK to confirm that you want to delete the user, as shown in Figure 8-153.

Figure 8-153 Confirming deleting a user
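On the CLI, the chuser and rmuser commands (see Table 8-2 on page 553) cover the modify and delete cases. A sketch with assumed parameter names:

```
# Move the user to another user group
svctask chuser -usergrp Service qwerty

# Delete the user
svctask rmuser qwerty
```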

8.9.5 User groups

We have several options to change and modify our user groups. There are five roles that can be assigned to user groups. The roles cannot be modified, but a new user group can be created and linked to an already configured role. In Figure 8-154, we select Create a Group.

Figure 8-154 Create a new user group

Here, we have several options for our user group, and we find detailed information about the available groups. In Figure 8-155 on page 558, we can see the options, which are the same options that are presented when we select Modify User Group.

Figure 8-155 Create user group or modify a user group
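A new user group can likewise be created from the CLI with mkusergrp, linking the group to one of the five fixed roles. The parameter names and the group name are illustrative assumptions:

```
# Create a user group that carries the Administrator role
svctask mkusergrp -name SANadmins -role Administrator

# List the configured user groups
svcinfo lsusergrp
```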

8.9.6 Cluster password

To change the cluster password, select Manage authentication from the Welcome window, and then select Cluster Passwords, as shown in Figure 8-156.

Figure 8-156 Change cluster password

8.9.7 Remote authentication

To enable remote authentication using LDAP, we configure our SVC cluster by selecting Manage Authentication from My Work and selecting Remote Authentication, as shown in Figure 8-157.

Figure 8-157 Configuring Remote Authentication Services window

We have now completed the tasks that are required to create, modify, and delete a user and user groups within the SVC cluster.

8.10 Working with nodes using the GUI

This section discusses the various configuration and administrative tasks that you can perform on the nodes within an SVC cluster.

8.10.1 I/O Groups

This section details the tasks that can be performed at an I/O Group level.

8.10.2 Renaming an I/O Group

Perform the following steps to rename an I/O Group:

1. From the SVC Welcome window, select Work with Nodes and I/O Groups.

2. The Viewing Input/Output Groups window (Figure 8-158) opens. Select the I/O Group that you want to rename. In this case, we select io_grp1. Ensure that Rename an I/O Group is selected from the drop-down list. Click Go.

Figure 8-158 Viewing I/O Groups

3. On the Renaming I/O Group I/O Group name window (where I/O Group name is the I/O Group that you selected in the previous step), type the New Name that you want to assign to the I/O Group. Click OK, as shown in Figure 8-159. Our new name is PROD_IO_GRP.

Figure 8-159 Renaming the I/O Group

We have now completed the required tasks to rename an I/O Group.
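The rename can also be done with a single CLI command, chiogrp, using the same names as our GUI example:

```
# Rename io_grp1 to PROD_IO_GRP
svctask chiogrp -name PROD_IO_GRP io_grp1
```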

8.10.3 Adding nodes to the cluster

After cluster creation is completed through the service panel (the front panel of one of the SVC nodes) and the cluster Web interface, only one node (the configuration node) is set up. To be a fully functional SVC cluster, you must add at least a second node to the configuration.

Perform the following steps to add nodes to the cluster:

1. Open the GUI using one of the following methods:

– Double-click the SAN Volume Controller Console icon on your System Storage Productivity Center desktop.

– Open a Web browser on the System Storage Productivity Center console and point to this address:

http://localhost:9080/ica

– Open a Web browser on a separate workstation and point to this address:

http://sspcconsoleipaddress:9080/ica

2. The GUI Welcome window opens, as shown in Figure 8-160, and we select Clusters from the My Work window. This window contains several useful links and information: My Work (top left), the GUI version and build level information (on the right, under the graphic), and a hypertext link to the SVC download page:

http://www.ibm.com/storage/support/2145

Figure 8-160 GUI Welcome window

I/O Group name: The name can consist of the letters A to Z and a to z, the numbers 0 to 9, the dash, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number, the dash, or the word “iogrp” (because this prefix is reserved for SVC assignment only).

SVC also uses “io_grp” as a reserved word prefix. An I/O Group name cannot, therefore, be changed to io_grpn, where n is numeric; however, io_grpny or io_grpyn, where y is any non-numeric character that is used in conjunction with n, is acceptable.

3. The Viewing Clusters window opens, as shown in Figure 8-161. On the Viewing Clusters window, select the cluster on which you want to perform actions (in our case, ITSO_CLS3). Click Go.

Figure 8-161 Launch the SAN Volume Controller application

4. The SAN Volume Controller Console Application launches in a separate browser window (Figure 8-162). In this window, as with the Welcome window, you can see several links under My Work (top left), a Recent Tasks list (bottom left), the SVC Console version and build level information (on the right, under the graphic), and a hypertext link that takes you to the SVC download page:

http://www.ibm.com/storage/support/2145

Under My Work, click Work with Nodes and, then, Nodes.

Figure 8-162 SVC Console Welcome window

5. The Viewing Nodes window (Figure 8-163 on page 562) opens. Note the input/output (I/O) group name (for example, io_grp0). Select the node that you want to add. Ensure that Add a node is selected from the drop-down list, and click Go.

Figure 8-163 Viewing Nodes

6. The next window (Figure 8-164) displays the available nodes. Select the node from the Available Candidate Nodes drop-down list. Associate it with an I/O Group and provide a name (for example, SVCNode2). Click OK.

Figure 8-164 Adding a Node to a Cluster window

In our case, we only have enough nodes to complete the formation of one I/O Group. Therefore, we added our new node to the I/O Group that node1 was already using, io_grp0 (you can rename the I/O Group from the default of iogrp0 using your own naming convention standards).

Node name: You can rename the existing node to your own naming convention standards (we show you how to rename the existing node later). In your window, it appears as node1, by default.

Note: If you do not provide a name, the SVC automatically generates the name noden, where n is the ID sequence number that is assigned by the SVC internally. If you want to provide a name, you can use the letters A to Z and a to z, the numbers 0 to 9, and the underscore. The name can be between one and 15 characters in length, but it cannot start with a number or the word “node” (because this prefix is reserved for SVC assignment only).

If this window does not display any available nodes (which is indicated by the message “CMMVC1100I There are no candidate nodes available”), check whether your second node is powered on and whether zones are appropriately configured in your switches. It is also possible that a pre-existing cluster’s configuration data is stored on the second node. If you are sure that this node is not part of another active SVC cluster, use the service window to delete the existing cluster information. When this action is complete, return to this window and you will see the node listed.

7. Return to the Viewing Nodes window (Figure 8-165). It shows the status change of the node from Adding to Online.

Figure 8-165 Node added and currently has status “adding”

We have completed the cluster configuration.

Now, you have a fully redundant SVC environment.
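The CLI equivalent first lists the candidate nodes and then adds one to an I/O Group. The WWNN shown is a placeholder; substitute the value reported by lsnodecandidate:

```
# Show nodes that are available to join the cluster
svcinfo lsnodecandidate

# Add the candidate to io_grp0 with our naming convention
svctask addnode -wwnodename 50050768010027E2 \
        -iogrp io_grp0 -name SVCNode2
```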

8.10.4 Configuring iSCSI ports

In this topic, we show how to configure a cluster for use with iSCSI.

We will configure our nodes to use the primary and secondary Ethernet ports for iSCSI, as well as to contain the cluster IP. While we are configuring our nodes to be used with iSCSI, we are not affecting our cluster IP. The cluster IP is changed, as shown in 8.8, “Managing the cluster using the GUI” on page 544.

It is important to know that you can have more than a one-to-one relationship between IP addresses and physical connections. Each physical port on each node can carry up to four addresses, two IPv4 addresses plus two IPv6 addresses, which is a four-to-one (4:1) relationship.

You can configure iSCSI authentication (CHAP) in either of two ways: for the whole cluster or per host connection. We show configuring CHAP for the entire cluster in 8.8.6, “iSCSI” on page 549.

In our scenario, we have a cluster IP of 9.64.210.64, as shown in Figure 8-166 on page 564. That cluster will not be impacted during our configuration of the nodes’ IP addresses.

Refresh: This window does not automatically refresh. Therefore, you continue to see the Adding status until you click Refresh.

Important: When reconfiguring IP ports, be aware that you must reconnect already configured iSCSI connections if changes are made on the IP addresses of the nodes.

Perform these steps:

Figure 8-166 Cluster IP address shown

1. We start by selecting Work with nodes from our Welcome window and by selecting Node Ethernet Ports, as shown in Figure 8-167.

Figure 8-167 Configuring node Ethernet ports

We can see that we have four (two per node) connections to use. They are all physically connected with a 100 Mb link, but they are not configured yet.

From the list, we select Configure a Node Ethernet Port and insert the IP address that we intend to use for iSCSI, as shown in Figure 8-168.

Figure 8-168 IP parameters for iSCSI

2. We can now see that one of our Ethernet ports is now configured and online, as shown in Figure 8-169 on page 565. We perform the same task to configure the three remaining IP addresses.

Figure 8-169 Ethernet port successfully configured and online

We configure the remaining ports and use a unique IP address for each port. When finished, all of our Ethernet ports are configured, as shown in Figure 8-170.

Figure 8-170 All Ethernet ports are online

Now, both physical ports on each node are configured for iSCSI.

We can see the iSCSI identifier (iSCSI name) for our SVC node by selecting Working with nodes from our Welcome window. Then, by selecting Nodes, under the column iSCSI Name, we see our iSCSI identifier, as shown in Figure 8-171.

Each node has a unique iSCSI name associated with two IP addresses. After the host has initiated the iSCSI connection to a target node, this IQN from the target node will be visible in the iSCSI configuration tool on the host.

Figure 8-171 iSCSI identifier for our nodes

You can also enter an iSCSI alias name for the iSCSI name on the node, as shown in Figure 8-172 on page 566.

Figure 8-172 Entering an iSCSI alias name

We change the name to a name that is easier to recognize, as shown in Figure 8-173.

Figure 8-173 Changing the iSCSI alias name

We have now finished configuring iSCSI for our SVC cluster.
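The port configuration can be scripted with the cfgportip command. A hedged sketch; the addresses match our scenario’s subnet, and the chnode alias parameter is an assumption:

```
# Assign an iSCSI IPv4 address to Ethernet port 1 of node1
svctask cfgportip -node node1 -ip 9.64.210.70 \
        -mask 255.255.255.0 -gw 9.64.210.1 1

# Repeat for the remaining ports, then set a readable iSCSI alias
svctask chnode -iscsialias itso_node1 node1
```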

8.11 Managing Copy Services

See Chapter 6, “Advanced Copy Services” on page 255 for more information about the functionality of Copy Services in the SVC environment.

8.12 FlashCopy operations using the GUI

Working with FlashCopy is often easier through the GUI, as long as you have a small number of mappings. When using many mappings, we recommend that you use the CLI to execute your commands.

8.13 Creating a FlashCopy consistency group

To create a FlashCopy consistency group in the SVC GUI, perform these steps:

1. Expand Manage Copy Services in the Task pane, and select FlashCopy Consistency Groups (Figure 8-174 on page 567).

Figure 8-174 Select FlashCopy Consistency Groups

2. Then, from the list, select Create a Consistency Group, and click Go (Figure 8-175).

Figure 8-175 Create a FlashCopy consistency group

3. Enter the desired FlashCopy consistency group name, and click OK, as shown in Figure 8-176.

Figure 8-176 Create consistency group

Autodelete: If you choose to use the Automatically Delete Consistency Group When Empty feature, you can only use this consistency group for mappings that are marked for autodeletion. The non-autodelete consistency group can contain both autodelete FlashCopy mappings and non-autodelete FlashCopy mappings.

4. Click Close when the new name has been entered. Figure 8-177 on page 568 shows the result.

Figure 8-177 View consistency group

Repeat the previous steps to create another FlashCopy consistency group (Figure 8-178). The FlashCopy consistency groups are now ready to use.

Figure 8-178 Viewing FlashCopy Consistency Groups
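From the CLI, the same groups are created with mkfcconsistgrp; the -autodelete flag name is our assumption for the GUI’s automatic deletion option:

```
# Create a FlashCopy consistency group, and a second one that
# deletes itself when it becomes empty
svctask mkfcconsistgrp -name FC_SIGNA
svctask mkfcconsistgrp -name FC_DONA -autodelete

# Verify the result
svcinfo lsfcconsistgrp
```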

8.13.1 Creating a FlashCopy mapping

In this section, we create the FlashCopy mappings for each of our VDisks for their respective targets. Follow these steps:

1. In the SVC GUI, expand Manage Copy Services in the Task pane, and select FlashCopy mappings.

2. When prompted for filtering, select Bypass Filter to show all of the defined FlashCopy mappings, if there were any FlashCopy mappings created previously.

3. As shown in Figure 8-179, select Create a Mapping from the list, and click Go to start the creation process of a FlashCopy mapping.

Figure 8-179 Create a FlashCopy mapping

4. We are then presented with the FlashCopy creation wizard overview of the creation process for a FlashCopy mapping, and we click Next to proceed.

5. We name the first FlashCopy mapping PROD_1, select the previously created consistency group FC_SIGNA, set the background copy priority to 50 and the Grain Size to 64, and click Next to proceed, as shown in Figure 8-180 on page 569.

Figure 8-180 Define FlashCopy mapping properties

6. The next step is to select the source VDisk. If there are many source VDisks that are not already defined in a FlashCopy mapping, we can filter the list here. In Figure 8-181, we define the filter * (the asterisk shows all of our VDisks) for the source VDisk, and click Next to proceed.

Figure 8-181 Filter source VDisk candidates

7. We select Galtarey_01 from the available VDisks as our source disk, and click Next to proceed.

8. The next step is to select our target VDisk. The FlashCopy mapping wizard only presents a list of the VDisks that are the same size as the source VDisk. These VDisks are not already in a FlashCopy mapping, and they are not already defined in a Metro Mirror relationship. In Figure 8-182 on page 570, we select the target Hrappsey_01 and click Next to proceed.

Figure 8-182 Select target VDisk

In the next step, we select an I/O Group for this mapping.

9. Finally, we verify our FlashCopy mapping (Figure 8-183) and click Finish to create it.

Figure 8-183 FlashCopy mapping verification

We check the result of this FlashCopy mapping, as shown in Figure 8-184.

Figure 8-184 View FlashCopy mapping
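The equivalent mkfcmap command creates the same mapping in one step, with the properties that we chose in the wizard:

```
# Map Galtarey_01 to Hrappsey_01 in consistency group FC_SIGNA,
# with a background copy rate of 50 and a grain size of 64 KB
svctask mkfcmap -name PROD_1 -source Galtarey_01 \
        -target Hrappsey_01 -consistgrp FC_SIGNA \
        -copyrate 50 -grainsize 64
```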

We repeat the procedure to create another FlashCopy mapping that uses the same source VDisk, Galtarey_01, with a second target:

1. We give this VDisk another FlashCopy mapping name and choose a separate FlashCopy consistency group, as shown in Figure 8-185 on page 571.

2. As you can see in this example, we changed the background copy rate to 30, which slows down the background copy process. The clearing rate of 60 extends the stopping process if we had to stop the mapping during a copy process. An incremental mapping copies only the parts of the source or target VDisk that have changed since the last FlashCopy process.

Figure 8-185 Creating a FlashCopy mapping type of incremental

In Figure 8-186 on page 572, you can see that Galtarey_01 is still available.

Note: Even if the type of the FlashCopy mapping is incremental, the first copy process copies all of the data from the source to the target VDisk.

Consistency groups: If no consistency group is defined, the mapping is a stand-alone mapping, and it can be prepared and started without affecting other mappings. All mappings in the same consistency group must have the same status to maintain the “consistency” of the group.

Figure 8-186 Viewing FlashCopy mapping

3. We select Heimaey_02 as the destination VDisk, as shown in Figure 8-187.

Figure 8-187 Select a second target VDisk

4. On the final page of the wizard, as shown in Figure 8-188 on page 573, we select Finish after verifying all the parameters.

Figure 8-188 Verification of FlashCopy mapping

The background copy rate specifies the priority that is given to completing the copy. If 0 is specified, the copy does not proceed in the background. The default value is 50.
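The copy rate value maps to an actual background copy bandwidth on a doubling scale: 0 disables background copy, 1 to 10 gives about 128 KBps, and each further band of ten doubles the rate, up to about 64 MBps at 91 to 100. This mapping, as we recall it from the SVC documentation (verify it against your code level), can be sketched as a small shell function:

```shell
#!/bin/sh
# Map an SVC background copy rate (0-100) to its approximate
# bandwidth in KBps: 1-10 -> 128 KBps, doubling for each band of ten.
rate_to_kbps() {
    rate=$1
    if [ "$rate" -eq 0 ]; then
        echo 0                        # 0 means no background copy
        return
    fi
    band=$(( (rate + 9) / 10 ))       # band 1..10
    kbps=128
    i=1
    while [ "$i" -lt "$band" ]; do
        kbps=$(( kbps * 2 ))          # double per band
        i=$(( i + 1 ))
    done
    echo "$kbps"
}

rate_to_kbps 50     # default rate: prints 2048 (2 MBps)
rate_to_kbps 100    # maximum: prints 65536 (64 MBps)
```

For example, the rate of 30 that we used for the incremental mapping corresponds to roughly 512 KBps.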

8.13.2 Preparing (pre-triggering) the FlashCopy

When performing the FlashCopy on the VDisks with the database, we want to be able to control the point in time when the FlashCopy is triggered to keep our quiesce time to a minimum and to preserve data integrity. We put the VDisks in a consistency group, and then we prepare the consistency group to flush the cache for all source VDisks.

If you only select one mapping to be prepared, the cluster will ask if you want all of the volumes in that consistency group to be prepared, as shown in Figure 8-189.

Figure 8-189 FlashCopy messages

When you have assigned several mappings to a FlashCopy consistency group, you only have to issue a single prepare command for the whole group, to prepare all of the mappings at one time.

We select the FlashCopy consistency group, select Prepare a consistency group from the list, and click Go. The status changes to Preparing and, then, finally to Prepared. Click Refresh several times until the FlashCopy consistency group is in the Prepared state.

Figure 8-190 on page 574 shows how we check the result. The status of the consistency group has changed to Prepared.

Tip: You can invoke FlashCopy from the SVC GUI, but using the SVC GUI might not make much sense if you plan to handle a large number of FlashCopy mappings or consistency groups periodically, or at varying times. In this case, creating a script by using the CLI might be more convenient.

Figure 8-190 View Prepared state of consistency groups
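The prepare step maps to the prestartfcconsistgrp command (listed under the Copy Operator role in Table 8-2 on page 553); lsfcconsistgrp shows when the group reaches the prepared state:

```
# Flush the cache for all source VDisks in the group
svctask prestartfcconsistgrp FC_SIGNA

# Poll until the group status shows prepared
svcinfo lsfcconsistgrp FC_SIGNA
```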

8.13.3 Starting (triggering) FlashCopy mappings

When the FlashCopy mapping enters the Prepared state, we can start the copy process. Only mappings that are not a member of a consistency group, or the only mapping in a consistency group, can be started individually. As shown in Figure 8-191, we select the FlashCopy that we want to start, select Start a Mapping from the menu, and click Go to proceed.

Figure 8-191 Start a FlashCopy mapping

Because we have already prepared the FlashCopy mapping, we are ready to start the mapping right away. Notice that this mapping is not a member of any consistency group. An overview message with information about the mapping that we are about to start is shown in Figure 8-192, and we select Start to start the FlashCopy mapping.

Figure 8-192 Starting a FlashCopy mapping

After we have selected Start, we are automatically shown the copy process view that shows the progress of our copy mappings.

8.13.4 Starting (triggering) a FlashCopy consistency group

As shown in Figure 8-193 on page 575, the FlashCopy consistency group enters the Prepared state. All of the mappings in this group will be brought to the same state. To start the FlashCopy consistency group, we select the consistency group, select Start a Consistency Group from the list, and click Go.

Figure 8-193 Start the consistency group

In Figure 8-194, we are prompted to confirm starting the FlashCopy consistency group. We now flush the database and OS buffers and quiesce the database. Then, we click OK to start the FlashCopy consistency group.

Figure 8-194 Start consistency group message

As shown in Figure 8-195, we verified that the consistency group is in the Copying state, and subsequently, we resume the database I/O.

Figure 8-195 Consistency group status
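On the CLI, the corresponding commands are startfcmap and startfcconsistgrp; the -prep flag combines the prepare and start steps when the mapping or group has not been prepared already:

```
# Start a prepared stand-alone mapping
svctask startfcmap PROD_1

# Start a consistency group, preparing it first if necessary
svctask startfcconsistgrp -prep FC_SIGNA
```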

8.13.5 Monitoring the FlashCopy progress

To monitor the copy progress, you can click Refresh; another option is to select Manage Progress and FlashCopy. Then, you can monitor the progress (Figure 8-196 on page 576).

Note: Because we have already prepared the FlashCopy consistency group, this option is grayed out when you are prompted to confirm starting the FlashCopy consistency group.

Figure 8-196 FlashCopy background copy progress

When the background copy is completed for all FlashCopy mappings in the consistency group, the status is changed to “Idle or Copied”.
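The copy progress can also be polled from the CLI; lsfcmapprogress reports the percentage complete per mapping:

```
# Show the background copy progress of one mapping
svcinfo lsfcmapprogress PROD_1

# Show the state of all mappings (copying, idle_or_copied, and so on)
svcinfo lsfcmap -delim :
```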

8.13.6 Stopping the FlashCopy consistency group

When a FlashCopy consistency group is stopped, the target VDisks become invalid and are set offline by the SVC. The FlashCopy mapping or consistency group must be prepared again or retriggered to bring the target VDisks online again.

As shown in Figure 8-197 on page 577, we stop the FC_DONA consistency group. All of the mappings belonging to that consistency group are now in the Copying state.

Tip: If you want to stop a mapping or group in a Multiple Target FlashCopy environment, consider whether you want to keep any of the dependent mappings. If not, issue the stop command with the force parameter, which stops all of the dependent maps too and negates the need for stopping the copy process.

Important: Only stop a FlashCopy mapping when the data on the target VDisk is useless, or if you want to modify the FlashCopy mapping.

When a FlashCopy mapping is stopped, the target VDisk becomes invalid and is set offline by the SVC.

576 Implementing the IBM System Storage SAN Volume Controller V5.1


Figure 8-197 Stop FlashCopy consistency group

Perform these steps:

1. We select the FC_DONA FlashCopy consistency group, and from the list, we select Stop a Consistency Group, as shown in Figure 8-198.

Figure 8-198 Stopping the FlashCopy consistency group

2. When selecting the method to use to stop the mapping, we have three options, as shown in Figure 8-199.

Figure 8-199 Stopping FlashCopy consistency group options

3. Because we want to stop the mapping immediately, we select Forced Stop. The status of the FlashCopy consistency group changes from Copying to Stopping to Stopped, as shown in Figure 8-200 on page 578.


Figure 8-200 FlashCopy consistency group status

8.13.7 Deleting the FlashCopy mapping
We have two options to delete a FlashCopy mapping: automatic deletion of a mapping or manual deletion.

When we initially create a mapping, we can select the “Automatically delete mapping when the background copy completes” function, as shown in Figure 8-201.

Figure 8-201 Selecting the function to automatically delete the mapping

Or, if the option has not been selected initially, you can delete the mapping manually, as shown in Figure 8-202 on page 579.


Figure 8-202 Manually deleting a FlashCopy mapping

8.13.8 Deleting the FlashCopy consistency group
If you delete a consistency group with active mappings in it, all of the mappings in that group become stand-alone mappings.

When deleting a consistency group, we start by selecting a group. From the list, select Delete a Consistency Group and click Go, as shown in Figure 8-203.

Figure 8-203 Deleting a FlashCopy consistency group

We can still delete a FlashCopy consistency group even if the consistency group has a status of Copying, as shown in Figure 8-204, by forcing the deletion.

Figure 8-204 Deleting a consistency group with a mapping in the Copying state

Because there is an active mapping with the state of Copying, we see a warning message, as shown in Figure 8-205 on page 580.

Tip: If you want to use the target VDisks in a consistency group as normal VDisks, you can monitor the background copy progress until it is complete (100% copied) and, then, delete the FlashCopy mapping.


Figure 8-205 Warning message

8.13.9 Migrating between a fully allocated VDisk and a Space-Efficient VDisk
If you want to migrate from a fully allocated VDisk to a Space-Efficient VDisk, follow the same procedure as described in 8.13.1, “Creating a FlashCopy mapping” on page 568, but make sure that you select a Space-Efficient VDisk that has already been created as your target volume. You can use this same method to migrate from a Space-Efficient VDisk to a fully allocated VDisk.

Create a FlashCopy mapping with the fully allocated VDisk as the source and the Space-Efficient VDisk as the target. We describe creating a Space-Efficient VDisk in 8.5.4, “Creating a Space-Efficient VDisk with autoexpand” on page 509 in detail.

8.13.10 Reversing and splitting a FlashCopy mapping
Starting with SVC 5.1, you can now perform a reverse FlashCopy mapping without having to remove the original FlashCopy mapping, and without restarting a FlashCopy mapping from the beginning.

You can start a FlashCopy mapping whose target is the source of another FlashCopy mapping. This capability enables you to reverse the direction of a FlashCopy map, without having to remove existing maps, and without losing the data from the target.

When you prepare either a stand-alone mapping or consistency group, you are prompted with a message, as shown in Figure 8-206.

Figure 8-206 FlashCopy restore option

Important: The copy process overwrites all of the data on the target VDisk. You must back up all of the data before you start the copy process.


Splitting a cascaded FlashCopy mapping allows the source VDisk of a map that is 100% complete to be removed from the head of the cascade when the map is stopped.

For example, if you have four VDisks in a cascade (A → B → C → D), and the map A → B is 100% complete, as shown in Figure 8-207, clicking Split Stop, as shown in Figure 8-208, results in FCMAP_AB becoming idle_copied, and the remaining cascade becomes B → C → D.

Figure 8-207 Stopping a FlashCopy mapping

Figure 8-208 Selecting the Split Stop option

Without the split option, VDisk A remains at the head of the cascade (A → C → D). Consider this sequence of steps:

- User takes a backup using the mapping A → B. A is the production VDisk; B is a backup.

- At a later point, the user experiences corruption on A and, therefore, reverses the mapping B → A.

- The user then takes another backup from the production disk A and, therefore, has the cascade B → A → C.

Stopping the mapping between A and B without using the Split Stop option will result in the cascade B → C. Note that the backup disk B is now at the head of this cascade.

When the user next wants to take a backup to B, the user can still start the mapping A → B (using the -restore flag), but the user cannot then reverse the mapping to A (B → A or C → A).

Stopping the mapping between A and B with the Split Stop option results in the cascade A → C. This option does not result in the same problem, because the production disk A is at the head of the cascade instead of the backup disk B.
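The bookkeeping behind the split option can be modeled in a few lines of Python. This is purely an illustration of the rule stated in the text (with split, the completed map's source leaves the head of the cascade; without split, the target is removed and the source stays at the head), not a model of SVC internals:

```python
def stop_complete_map(cascade: list[str], source: str, target: str,
                      split: bool) -> list[str]:
    """Return the cascade (head first) after stopping a 100%-complete
    source->target map. With the split option, the source disk leaves the
    head of the cascade; without it, the target disk is removed."""
    remaining = list(cascade)
    remaining.remove(source if split else target)
    return remaining

# The backup scenario from the text: cascade B -> A -> C,
# stopping the reversed map whose source is B and target is A.
print(stop_complete_map(["B", "A", "C"], "B", "A", split=False))  # ['B', 'C']
print(stop_complete_map(["B", "A", "C"], "B", "A", split=True))   # ['A', 'C']
```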


8.14 Metro Mirror operations
Next, we show how to set up Metro Mirror using the GUI.

8.14.1 Cluster partnership
Starting with SVC 5.1, you can now create more than a one-to-one cluster partnership.

Now, you can have a cluster partnership among multiple SVC clusters, which allows you to create four types of configurations, using a maximum of four connected clusters:

- Star configuration, as shown in Figure 8-209

Figure 8-209 Star configuration

- Triangle configuration, as shown in Figure 8-210

Figure 8-210 Triangle configuration

Note: This example is for intercluster Metro Mirror operations only. If you want to set up Metro Mirror intracluster operations, we highlight those parts of the following procedure that you do not need to perform.


- Fully connected configuration, as shown in Figure 8-211

Figure 8-211 Fully connected configuration

- Daisy-chain configuration, as shown in Figure 8-212

Figure 8-212 Daisy-chain configuration

In the following scenario, we set up an intercluster Metro Mirror relationship between the ITSO-CLS1 SVC cluster at the primary site and the ITSO-CLS2 SVC cluster at the secondary site. Table 8-3 shows the details of the VDisks.

Table 8-3 VDisk details

Because data consistency is needed across the MM_DB_Pri and MM_DBLog_Pri VDisks, a consistency group named CG_W2K3_MM is created to handle the Metro Mirror relationships for them. Because, in this scenario, the application files are independent of the database, a stand-alone Metro Mirror relationship is created for the MM_App_Pri VDisk. Figure 8-213 on page 584 illustrates the Metro Mirror setup.

Important: All SVC clusters must be at level 5.1 or higher.

Content of VDisk      VDisks at primary site    VDisks at secondary site
Database files        MM_DB_Pri                 MM_DB_Sec
Database log files    MM_DBLog_Pri              MM_DBLog_Sec
Application files     MM_App_Pri                MM_App_Sec


Figure 8-213 Metro Mirror scenario

8.14.2 Setting up Metro Mirror
In the following section, we assume that the source and target VDisks have already been created and that the inter-switch links (ISLs) and zoning are in place, enabling the SVC clusters to communicate.

To set up the Metro Mirror, perform the following steps:

1. Create the SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters.

2. Create a Metro Mirror consistency group:

Name CG_W2K3_MM

3. Create the Metro Mirror relationship for MM_DB_Pri:

– Master MM_DB_Pri
– Auxiliary MM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name MMREL1
– Consistency group CG_W2K3_MM

4. Create the Metro Mirror relationship for MM_DBLog_Pri:

– Master MM_DBLog_Pri
– Auxiliary MM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name MMREL2
– Consistency group CG_W2K3_MM


5. Create the Metro Mirror relationship for MM_App_Pri:

– Master MM_App_Pri
– Auxiliary MM_App_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name MMREL3

8.14.3 Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2
We perform this operation to create the partnership on both clusters.

To create a Metro Mirror partnership between the SVC clusters using the GUI, perform these steps:

1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services from the Welcome window and click Metro & Global Mirror Cluster Partnerships. The window opens, as shown in Figure 8-214.

Figure 8-214 Creating a cluster partnership

2. After we click Go to create a cluster partnership, as shown in Figure 8-214, the SVC cluster shows the available options to select a partner cluster, as shown in Figure 8-215 on page 586. We have multiple cluster candidates from which to choose. In our scenario, we choose ITSO-CLS2.

Select ITSO-CLS2, specify the available bandwidth for the background copy, in this case, 50 MBps, and then, click OK. Two options are available during creation:

– Intercluster Delay Simulation, which simulates the Global Mirror round-trip delay between the two clusters, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.

– Intracluster Delay Simulation, which simulates the Global Mirror round-trip delay within the cluster, in milliseconds. The default is 0, and the valid range is 0 to 100 milliseconds.

Note: If you are creating an intracluster Metro Mirror, do not perform this next step to create the SVC cluster Metro Mirror partnership. Instead, skip to 8.14.4, “Creating a Metro Mirror consistency group” on page 587.


Figure 8-215 Showing available cluster candidates

As shown in Figure 8-216, our partnership is in the Partially Configured state, because we have only performed the work on one side of the partnership so far.

Figure 8-216 Viewing cluster partnerships

3. To fully configure the Metro Mirror cluster partnership, we must perform the same steps on ITSO-CLS2 that we performed on ITSO-CLS1. For simplicity and brevity, only the two most significant windows are shown when the partnership is fully configured.

4. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Metro Mirror cluster partnership and specify the available bandwidth for the background copy, again 50 MBps, and then click OK, as shown in Figure 8-217 on page 587.


Figure 8-217 We select the cluster partner for the secondary partner

Now that both sides of the SVC cluster partnership are defined, the resulting window shown in Figure 8-218 confirms that our Metro Mirror cluster partnership is in the Fully Configured state.

Figure 8-218 Fully configured cluster partnership

The GUI for ITSO-CLS2 is no longer necessary. Close this GUI, and use the GUI for the ITSO-CLS1 cluster for all further steps.
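As a hedged CLI sketch, the same partnership can be created with one mkpartnership command per cluster (command syntax as we understand the 5.1 CLI); the helper below only builds the strings, using the cluster names and 50 MBps bandwidth from this scenario:

```python
def partnership_commands(local: str, remote: str,
                         bandwidth_mbps: int) -> dict[str, str]:
    """Build the mkpartnership command to run on each cluster. The
    partnership stays Partially Configured until both sides have run
    their command."""
    return {
        local: f"svctask mkpartnership -bandwidth {bandwidth_mbps} {remote}",
        remote: f"svctask mkpartnership -bandwidth {bandwidth_mbps} {local}",
    }

for cluster, cmd in partnership_commands("ITSO-CLS1", "ITSO-CLS2", 50).items():
    print(f"on {cluster}: {cmd}")
```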

8.14.4 Creating a Metro Mirror consistency group
To create the consistency group to use for the Metro Mirror relationships of VDisks with database and database log files, select Manage Copy Services and click Metro Mirror Consistency Groups from the Welcome window.


To create a Metro Mirror consistency group, perform the following steps:

1. Select Create a Consistency Group from the list, and click Go, as shown in Figure 8-219.

Figure 8-219 Creating a consistency group

2. A wizard opens to help create the Metro Mirror consistency group. First, the wizard introduces the steps that are involved in the creation of a Metro Mirror consistency group, as shown in Figure 8-220. Click Next to proceed.

Figure 8-220 Introduction to the Metro Mirror consistency group creation wizard

3. As shown in Figure 8-221, specify the name for the consistency group, and select the remote cluster, which we have already defined in 8.14.3, “Creating the SVC partnership between ITSO-CLS1 and ITSO-CLS2” on page 585. If you are planning to use this consistency group for internal mirroring, that is, mirroring within the same cluster, select the intracluster consistency group option. In our scenario, we selected Create an inter-cluster consistency group with the remote cluster ITSO-CLS2. Click Next.

Figure 8-221 Specifying the consistency group name and type


4. In Figure 8-222, we can see the Metro Mirror relationships that have already been created that can be included in our Metro Mirror consistency group. Because we do not have any existing relationships at this point to include in the Metro Mirror consistency group, we create a blank group by clicking Next to proceed.

Figure 8-222 Empty list

5. Verify the setting for the consistency group, and click Finish to create the Metro Mirror consistency group, as shown in Figure 8-223.

Figure 8-223 Verifying settings for Metro Mirror consistency group

After creating the consistency group, the GUI returns to the Viewing Metro & Global Mirror Consistency Groups window, as shown in Figure 8-224. This page lists the newly created consistency group. Notice that the newly created consistency group is “empty”, because no relationships have been added to the group.

Figure 8-224 Viewing the newly created consistency group
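The CLI equivalent of this wizard is a single command. A hedged sketch follows (the flag names reflect our reading of the 5.1 mkrcconsistgrp command, and the names are from this scenario):

```python
def mk_consistgrp_command(name: str, remote_cluster=None) -> str:
    """Build an mkrcconsistgrp command. Omitting -cluster creates an
    intracluster group; naming the remote cluster makes it intercluster."""
    cmd = f"svctask mkrcconsistgrp -name {name}"
    if remote_cluster:
        cmd += f" -cluster {remote_cluster}"
    return cmd

print(mk_consistgrp_command("CG_W2K3_MM", "ITSO-CLS2"))
```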


8.14.5 Creating Metro Mirror relationships for MM_DB_Pri and MM_DBLog_Pri
To create the Metro Mirror relationships for VDisks MM_DB_Pri and MM_DBLog_Pri, perform the following steps:

1. Select Manage Copy Services and click Metro Mirror Cluster Relationships from the SVC Welcome window.

2. To start the creation process, select Create a Relationship from the list, and click Go, as shown in Figure 8-225.

Figure 8-225 Create a relationship

3. We are presented with the wizard that will help us create the Metro Mirror relationship. First, the wizard introduces the steps that are involved in the creation of the Metro Mirror relationship, as shown in Figure 8-226. Click Next to proceed.

Figure 8-226 Introduction to the Metro Mirror relationship creation wizard

4. As shown in Figure 8-227 on page 591, we name the first Metro Mirror relationship MMREL1 and specify the type of cluster relationship (in this case, intercluster as per the scenario that is shown in Figure 8-213 on page 584). The wizard also gives us the option to select the type of copy service, which, in our case, is Metro Mirror Relationship.


Figure 8-227 Naming the Metro Mirror relationship and selecting the type of cluster relationship

5. Next, we select a master VDisk. Because the list of VDisks can be large, the Filtering Master VDisk Candidates window opens, which allows us to reduce the list of eligible VDisks based on a defined filter.

In Figure 8-228, you can use the asterisk character (*) filter to list all of the VDisks, and click Next.

Figure 8-228 Define filter for VDisk candidates

6. As shown in Figure 8-229 on page 592, we select MM_DB_Pri to be a master VDisk for this relationship, and click Next to proceed.

Tip: In our scenario, we use MM* as a filter to avoid listing all the VDisks.


Figure 8-229 Selecting the master VDisk

7. The next step requires us to select an auxiliary VDisk. The Metro Mirror relationship wizard will automatically filter this list, so that only eligible VDisks are shown. Eligible VDisks are VDisks that have the same size as the master VDisk and that are not already part of a Metro Mirror relationship.

As shown in Figure 8-230, we select MM_DB_Sec as the auxiliary VDisk for this relationship and click Next to proceed.

Figure 8-230 Selecting the auxiliary VDisk

8. As shown in Figure 8-231, we select the consistency group that we created, and now our relationship is immediately added to that group. Click Next to proceed.

Figure 8-231 Selecting the relationship to be a part of the consistency group


9. Finally, in Figure 8-232, we verify the attributes for our Metro Mirror relationship and click Finish to create it.

Figure 8-232 Verifying the Metro Mirror relationship

After the relationship is successfully created, we are returned to the Metro Mirror relationship list.

After the successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, as shown in Figure 8-233. This window lists the newly created relationship. Notice that we have not started the copy process; we have only established the connections between those two VDisks.

Figure 8-233 Viewing the Metro Mirror relationship

By following a similar process, we create the second Metro Mirror relationship, MMREL2, which is shown in Figure 8-234.

Figure 8-234 Viewing the second Metro Mirror relationship MMREL2
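Both relationships can also be created from the CLI. The helper below builds mkrcrelationship commands; Metro Mirror is the default copy type in our understanding of the 5.1 CLI, so treat the exact flags as assumptions to verify on your cluster:

```python
def mk_metro_mirror_relationship(master, aux, remote_cluster, name,
                                 consistgrp=None):
    """Build an mkrcrelationship command for a Metro Mirror relationship
    (Global Mirror would add a -global flag in our reading of the CLI)."""
    cmd = (f"svctask mkrcrelationship -master {master} -aux {aux} "
           f"-cluster {remote_cluster} -name {name}")
    if consistgrp:
        cmd += f" -consistgrp {consistgrp}"
    return cmd

for master, aux, name in (("MM_DB_Pri", "MM_DB_Sec", "MMREL1"),
                          ("MM_DBLog_Pri", "MM_DBLog_Sec", "MMREL2")):
    print(mk_metro_mirror_relationship(master, aux, "ITSO-CLS2", name,
                                       consistgrp="CG_W2K3_MM"))
```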


8.14.6 Creating a stand-alone Metro Mirror relationship for MM_App_Pri
To create a stand-alone Metro Mirror relationship, perform the following steps:

1. We start the creation process by selecting Create a Relationship from the menu, and click Go.

2. Next, we are presented with the wizard that shows the steps that are involved in the process of creating a relationship, and we click Next to proceed.

3. As shown in Figure 8-235, we name the relationship (MMREL3), specify that it is an intercluster relationship with ITSO-CLS2, and click Next.

Figure 8-235 Specifying the Metro Mirror relationship name and auxiliary cluster

4. As shown in Figure 8-236, we are prompted for a filter to use to present the master VDisk candidates. We enter the MM* filter and click Next.

Figure 8-236 Filtering VDisk candidates

5. As shown in Figure 8-237 on page 595, we select MM_App_Pri to be the master VDisk of the relationship, and we click Next to proceed.


Figure 8-237 Selecting the master VDisk

6. As shown in Figure 8-238, we select MM_App_Sec as the auxiliary VDisk of the relationship, and we click Next to proceed.

Figure 8-238 Selecting the auxiliary VDisk

7. As shown in Figure 8-239, we do not select a consistency group, because we are creating a stand-alone Metro Mirror relationship.

Figure 8-239 Selecting options for the Metro Mirror relationship

Note: To add a Metro Mirror relationship to a consistency group, it must be in the same state as the consistency group.


As shown in Figure 8-240, we cannot select a consistency group, because we selected our relationship as “synchronized”, which is not in the same state as the consistency group that we created earlier.

Figure 8-240 The consistency group must have the same state as the relationship

8. Finally, Figure 8-241 shows the actions that will be performed. We click Finish to create this new relationship.

Figure 8-241 Verifying the Metro Mirror relationship

After the successful creation, we are returned to the Metro Mirror relationship window. Figure 8-242 now shows all of our defined Metro Mirror relationships.

Figure 8-242 Viewing Metro Mirror relationships
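A hedged CLI sketch for the stand-alone case; the -sync flag is our understanding of the CLI counterpart of the GUI's "synchronized" option, so confirm it against your code level:

```python
def mk_standalone_synced(master, aux, remote_cluster, name):
    """Build a stand-alone mkrcrelationship command (no -consistgrp) for a
    relationship that is created as already synchronized."""
    return (f"svctask mkrcrelationship -master {master} -aux {aux} "
            f"-cluster {remote_cluster} -sync -name {name}")

print(mk_standalone_synced("MM_App_Pri", "MM_App_Sec", "ITSO-CLS2", "MMREL3"))
```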


8.14.7 Starting Metro Mirror
Now that we have created the Metro Mirror consistency group and relationships, we are ready to use Metro Mirror relationships in our environment.

When performing Metro Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy if a failure occurs that affects the SAN at the production site.

In the following section, we show how to stop and start a stand-alone Metro Mirror relationship and a consistency group.

8.14.8 Starting a stand-alone Metro Mirror relationship
In Figure 8-243, we select the MMREL3 stand-alone Metro Mirror relationship, and from the list, we select Start Copy Process and click Go.

Figure 8-243 Starting a stand-alone Metro Mirror relationship

In Figure 8-244, we do not need to change the Forced start, Mark as clean, or Copy direction parameter, because we are invoking this Metro Mirror relationship for the first time (and we have defined the relationship as already synchronized). We click OK to start the MMREL3 stand-alone Metro Mirror relationship.

Figure 8-244 Selecting options and starting the copy process

Because the Metro Mirror relationship was in the Consistent stopped state and no updates have been made to the primary VDisk, the relationship quickly enters the Consistent synchronized state, as shown in Figure 8-245 on page 598.


Figure 8-245 Viewing Metro Mirror relationships

8.14.9 Starting a Metro Mirror consistency group
To start the CG_W2K3_MM Metro Mirror consistency group, we select Manage Copy Services and click Metro Mirror Consistency Groups from our SVC Welcome window.

In Figure 8-246, we select the CG_W2K3_MM Metro Mirror consistency group, and from the list, we select Start Copy Process and click Go.

Figure 8-246 Starting copy process for the consistency group

As shown in Figure 8-247, we click OK to start the copy process. We cannot select the Forced start, Mark as clean, or Copy Direction option, because our consistency group is currently in the Inconsistent stopped state.

Figure 8-247 Selecting options and starting the copy process

As shown in Figure 8-248 on page 599, we are returned to the Metro Mirror consistency group list and the CG_W2K3_MM consistency group has changed to the Inconsistent copying state.


Figure 8-248 Viewing Metro Mirror consistency groups

Because the consistency group was in the Inconsistent stopped state, it enters the Inconsistent copying state until the background copy has completed for all of the relationships in the consistency group. Upon the completion of the background copy for all of the relationships in the consistency group, the consistency group enters the Consistent synchronized state.
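The state transitions described in this chapter can be summarized as a small lookup table. This sketch covers only the transitions mentioned in the text, not the full SVC state model:

```python
# Illustrative map of the Metro Mirror relationship/consistency-group
# state transitions described in the surrounding sections.
TRANSITIONS = {
    ("Consistent stopped", "start"): "Consistent synchronized",
    ("Inconsistent stopped", "start"): "Inconsistent copying",
    ("Inconsistent copying", "background copy complete"): "Consistent synchronized",
    ("Consistent synchronized", "stop with access"): "Idling",
    ("Consistent synchronized", "stop"): "Consistent stopped",
    ("Idling", "forced start"): "Consistent copying",
    ("Consistent copying", "background copy complete"): "Consistent synchronized",
}

def run(state, events):
    """Apply a sequence of events and return the resulting state."""
    for event in events:
        state = TRANSITIONS[(state, event)]
    return state

print(run("Inconsistent stopped", ["start", "background copy complete"]))
```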

8.14.10 Monitoring background copy progress
You can view the status of the background copy either in the last column of the Viewing Metro Mirror Relationships window, or by selecting Manage Progress in the My Work pane and clicking View progress. This option allows you to view the Metro Mirror progress, as shown in Figure 8-249.

Figure 8-249 Viewing background copy progress for Metro Mirror relationships

8.14.11 Stopping and restarting Metro Mirror
Now that the Metro Mirror consistency group and relationships are running, in this section and the following sections, we describe how to stop, restart, and change the direction of the stand-alone Metro Mirror relationships, as well as the consistency group.

In this section, we show how to stop and restart the stand-alone Metro Mirror relationship and the consistency group.

Note: Setting up SNMP traps for the SVC enables automatic notification when the Metro Mirror consistency group or relationships change state.


8.14.12 Stopping a stand-alone Metro Mirror relationship
To stop a Metro Mirror relationship, while enabling access (write I/O) to both the primary and secondary VDisk, we select the relationship, select Stop Copy Process from the list, and click Go, as shown in Figure 8-250.

Figure 8-250 Stopping a stand-alone Metro Mirror relationship

As shown in Figure 8-251, we select Enable write access to the secondary VDisk, if it is consistent with the primary VDisk and click OK to stop the Metro Mirror relationship.

Figure 8-251 Enable write access to the secondary VDisk while stopping the relationship

As shown in Figure 8-252, the Metro Mirror relationship transitions to the Idling state when it is stopped with access to the secondary VDisk enabled.

Figure 8-252 Viewing the Metro Mirror relationships
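A hedged sketch of the CLI counterpart: stoprcrelationship, where the -access flag matches the GUI's "Enable write access to the secondary VDisk" option in our reading of the 5.1 CLI:

```python
def stop_relationship(name, enable_access=False):
    """Build a stoprcrelationship command. With -access, the relationship
    is left in the Idling state and the secondary VDisk accepts write I/O."""
    flag = "-access " if enable_access else ""
    return f"svctask stoprcrelationship {flag}{name}"

print(stop_relationship("MMREL3", enable_access=True))
```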

8.14.13 Stopping a Metro Mirror consistency group
As shown in Figure 8-253 on page 601, we select the Metro Mirror consistency group and Stop Copy Process from the list and click Go.


Figure 8-253 Selecting the Metro Mirror consistency group to be stopped

As shown in Figure 8-254, we click OK without specifying “Enable write access to the secondary VDisks, if they are consistent with the primary VDisks”.

Figure 8-254 Stopping consistency group without enabling access to secondary VDisks

As shown in Figure 8-255, the consistency group enters the Consistent stopped state when stopped without enabling access to the secondary VDisks.

Figure 8-255 Viewing Metro Mirror consistency groups

Afterwards, if we want to enable write access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process and, this time, specify that we want to enable write access to the secondary VDisks.

In Figure 8-256 on page 602, we select the Metro Mirror consistency group, select Stop Copy Process from the list, and click Go.


Figure 8-256 Stopping the Metro Mirror consistency group

As shown in Figure 8-257, we check Enable write access to the secondary VDisks, if they are consistent with the primary VDisks and click OK.

Figure 8-257 Enabling access to secondary VDisks

When applying the “Enable write access to the secondary VDisks, if they are consistent with the primary VDisks” option, the consistency group transitions to the Idling state, as shown in Figure 8-258.

Figure 8-258 Viewing Metro Mirror consistency group in the Idling state

8.14.14 Restarting a Metro Mirror relationship in the Idling state
When restarting a Metro Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or auxiliary VDisks in the Metro Mirror relationship, consistency is compromised. In this situation, we must check the Force option to start the copy process; otherwise, the command fails.

As shown in Figure 8-259 on page 603, we select the Metro Mirror relationship and Start Copy Process from the list and click Go.


Figure 8-259 Starting a stand-alone Metro Mirror relationship in the Idling state

As shown in Figure 8-260, we check the Force option, because write I/O has been performed while in the Idling state, and we select the copy direction by defining the master VDisk as the primary and click OK.

Figure 8-260 Specifying options while starting copy process

The Metro Mirror relationship enters the Consistent copying state, and when the background copy is complete, the relationship transitions to the Consistent synchronized state, as shown in Figure 8-261.

Figure 8-261 Viewing Metro Mirror relationship
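The restart from the Idling state maps to startrcrelationship with an explicit copy direction. A hedged sketch (the -primary and -force flag names are our reading of the 5.1 CLI):

```python
def restart_from_idling(name, primary="master", force=True):
    """Build a startrcrelationship command for a relationship in the Idling
    state: the copy direction must be chosen via -primary, and -force is
    required if write I/O occurred while Idling."""
    parts = ["svctask startrcrelationship", f"-primary {primary}"]
    if force:
        parts.append("-force")
    parts.append(name)
    return " ".join(parts)

print(restart_from_idling("MMREL3"))
```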

8.14.15 Restarting a Metro Mirror consistency group in the Idling state
When restarting a Metro Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or auxiliary VDisk in any of the Metro Mirror relationships in the consistency group, consistency is compromised. In this situation, we must check the Force option to start the copy process; otherwise, the command fails.

As shown in Figure 8-262, we select the Metro Mirror consistency group and Start Copy Process from the list and click Go.

Figure 8-262 Starting the copy process for the consistency group

As shown in Figure 8-263, we check the Force option and set the copy direction by selecting the primary as the master.

Figure 8-263 Specifying the options while starting the copy process in the consistency group

When the background copy completes, the Metro Mirror consistency group enters the Consistent synchronized state, as shown in Figure 8-264.

Figure 8-264 Viewing Metro Mirror consistency groups

8.14.16 Changing copy direction for Metro Mirror
In this section, we show how to change the copy direction of the stand-alone Metro Mirror relationships and the consistency group.


8.14.17 Switching copy direction for a Metro Mirror consistency group
When a Metro Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the Metro Mirror consistency group.

In Figure 8-265, we select the CG_W2K3_MM consistency group, click Switch Copy Direction from the list, and click Go.

Figure 8-265 Selecting the consistency group for which the copy direction is to change

In Figure 8-266, we see that the current primary VDisks are the masters. So, to change the copy direction for the Metro Mirror consistency group, we specify the auxiliary VDisks to become the primaries, and click OK.

Figure 8-266 Selecting primary VDisk, as auxiliary, to switch the copy direction

The copy direction is now switched, and we are returned to the Metro Mirror consistency group list, where we see that the copy direction has switched, as shown in Figure 8-267 on page 606.

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisks that will change from primary to secondary, because all of the I/O will be inhibited when the VDisks become secondary. Therefore, careful planning is required prior to switching the copy direction.


Figure 8-267 Viewing Metro Mirror consistency group after changing the copy direction

In Figure 8-268, we show the new copy direction for individual relationships within that consistency group.

Figure 8-268 Viewing Metro Mirror relationship after changing the copy direction

8.14.18 Switching the copy direction for a Metro Mirror relationship
When a Metro Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship.

In Figure 8-269, we select the MMREL3 relationship, click Switch Copy Direction from the list, and click Go.

Figure 8-269 Selecting the relationship whose copy direction needs to be changed

Important: When the copy direction is switched, it is crucial that no outstanding I/O exists to the VDisk that transits from primary to secondary, because all of the I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Metro Mirror relationship.


In Figure 8-270, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Metro Mirror relationship, we specify the auxiliary VDisk to become the primary, and click OK.

Figure 8-270 Selecting the primary VDisk, as auxiliary, to switch copy direction

The copy direction is now switched. We are returned to the Metro Mirror relationship list, where we see that the copy direction has been switched and that the auxiliary VDisk has become the primary, as shown in Figure 8-271.

Figure 8-271 Viewing Metro Mirror relationships
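The rule that both Important boxes describe can be summarized in a small sketch (an illustrative model of our own, not SVC code; the state and field names are ours):

```python
# Minimal sketch of the switch-copy-direction rule described above: the
# relationship must be in the Consistent synchronized state, and after the
# switch the former secondary becomes the primary, while host I/O to the
# new secondary VDisk is inhibited. Illustrative only; not SVC code.
def switch_copy_direction(relationship):
    if relationship["state"] != "consistent_synchronized":
        raise ValueError("copy direction can only be switched when "
                         "consistent synchronized")
    # Flip the primary: if the master was primary, the auxiliary takes over.
    relationship["primary"] = ("aux" if relationship["primary"] == "master"
                               else "master")
    return relationship

rel = {"name": "MMREL3", "state": "consistent_synchronized",
       "primary": "master"}
switch_copy_direction(rel)
# rel["primary"] is now "aux"; writes are accepted only on the auxiliary VDisk
```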

8.15 Global Mirror operations

Next, we show how to set up Global Mirror.

Starting with 5.1, we can install multiple clusters in a partnership. We show this capability in 8.14.1, “Cluster partnership” on page 582, but in the following scenario, we set up an intercluster Global Mirror relationship between the ITSO-CLS1 SVC cluster at primary site and the ITSO-CLS2 SVC cluster at the secondary site. Table 8-4 on page 608 shows the details of the VDisks.

Note: This example is for intercluster Global Mirror operations only. If you want to set up intracluster Global Mirror operations, we highlight the parts of the following procedure that you do not need to perform.


Table 8-4 Details of VDisks for Global Mirror relationship

Content of VDisk       VDisks at primary site    VDisks at secondary site
Database files         GM_DB_Pri                 GM_DB_Sec
Database log files     GM_DBLog_Pri              GM_DBLog_Sec
Application files      GM_App_Pri                GM_App_Sec

Because data consistency is needed across GM_DB_Pri and GM_DBLog_Pri, we create a consistency group to handle the Global Mirror relationships for them. Because, in this scenario, the application files are independent of the database, we create a stand-alone Global Mirror relationship for GM_App_Pri. Figure 8-272 illustrates the Global Mirror setup.

Figure 8-272 Global Mirror scenario using the GUI (GM_Relationship 1: GM_DB_Pri to GM_DB_Sec and GM_Relationship 2: GM_DBLog_Pri to GM_DBLog_Sec in consistency group CG_W2K3_GM; GM_Relationship 3: GM_App_Pri to GM_App_Sec stand-alone; primary site is SVC cluster ITSO-CLS1, secondary site is SVC cluster ITSO-CLS2)

8.15.1 Setting up Global Mirror

In the following section, we assume that the source and target VDisks have already been created and that the ISLs and zoning are in place, enabling the SVC clusters to communicate.

To set up Global Mirror, you must perform the following steps:

1. Create an SVC partnership between ITSO-CLS1 and ITSO-CLS2, on both SVC clusters:

Bandwidth 10 MBps

2. Create a Global Mirror consistency group:

Name CG_W2K3_GM

3. Create the Global Mirror relationship for GM_DB_Pri:

– Master GM_DB_Pri
– Auxiliary GM_DB_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL1
– Consistency group CG_W2K3_GM

4. Create the Global Mirror relationship for GM_DBLog_Pri:

– Master GM_DBLog_Pri
– Auxiliary GM_DBLog_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL2
– Consistency group CG_W2K3_GM

5. Create the Global Mirror relationship for GM_App_Pri:

– Master GM_App_Pri
– Auxiliary GM_App_Sec
– Auxiliary SVC cluster ITSO-CLS2
– Name GMREL3
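The same setup can also be driven from the SVC command-line interface with svctask commands. The sketch below is a helper of our own that builds that command sequence as strings; the command names follow the SVC CLI, but verify the exact flag spellings against your code level before use:

```python
# Build the SVC CLI (svctask) equivalents of the GUI setup steps above.
# This helper is our own illustration; the command names follow the SVC
# command-line interface, but check the flags against your code level.
def gm_setup_commands(partner="ITSO-CLS2", bandwidth_mbps=10):
    cmds = [
        # Step 1: create the partnership (run on both clusters).
        f"svctask mkpartnership -bandwidth {bandwidth_mbps} {partner}",
        # Step 2: create the Global Mirror consistency group.
        f"svctask mkrcconsistgrp -cluster {partner} -name CG_W2K3_GM",
    ]
    # Steps 3 and 4: relationships that belong to the consistency group.
    for master, aux, name in [("GM_DB_Pri", "GM_DB_Sec", "GMREL1"),
                              ("GM_DBLog_Pri", "GM_DBLog_Sec", "GMREL2")]:
        cmds.append(f"svctask mkrcrelationship -master {master} -aux {aux} "
                    f"-cluster {partner} -consistgrp CG_W2K3_GM "
                    f"-global -name {name}")
    # Step 5: the stand-alone relationship for the application files.
    cmds.append(f"svctask mkrcrelationship -master GM_App_Pri "
                f"-aux GM_App_Sec -cluster {partner} -global -name GMREL3")
    return cmds
```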

8.15.2 Creating an SVC partnership between ITSO-CLS1 and ITSO-CLS2

In this section, we create the SVC partnership on both clusters.

To create a Global Mirror partnership between the SVC clusters using the GUI, perform these steps:

1. We launch the SVC GUI for ITSO-CLS1. Then, we select Manage Copy Services and click Metro & Global Mirror Cluster Partnerships, as shown in Figure 8-273.

Figure 8-273 Selecting Global Mirror Cluster Partnership on ITSO-CLS1

2. Figure 8-274 on page 610 shows the cluster partnerships that are defined for this cluster; notice that we already have another partnership running. Figure 8-274 also displays a warning stating that, for any type of copy relationship between VDisks in two separate clusters, a partnership must exist between those clusters. We click Go to continue creating our partnership.

Note: If you are creating an intracluster Global Mirror, do not perform the next step; instead, go to 8.15.4, “Creating a Global Mirror consistency group” on page 614.


Figure 8-274 Creating a new partnership

3. Figure 8-275 lists the available SVC cluster candidates. In our case, we select ITSO-CLS2 and specify the available bandwidth for the background copy; we enter 10 MBps and then click OK.

Figure 8-275 Selecting SVC cluster partner and specifying bandwidth for background copy

In the resulting window, which is shown in Figure 8-276 on page 611, the newly created Global Mirror cluster partnership is shown as Partially Configured.


Figure 8-276 Viewing the newly created Global Mirror partnership

To fully configure the Global Mirror cluster partnership, we must perform the same steps on ITSO-CLS2 that we performed on ITSO-CLS1. For simplicity, only the last two windows are shown in the following figures.

4. Launching the SVC GUI for ITSO-CLS2, we select ITSO-CLS1 for the Global Mirror cluster partnership, specify the available bandwidth for the background copy, which again is 10 MBps, and then, click OK, as shown in Figure 8-277.

Figure 8-277 Selecting SVC cluster partner and specifying bandwidth for background copy

5. Now that we have defined both sides of the SVC cluster partnership, the window that is shown in Figure 8-278 on page 612 confirms that our Global Mirror cluster partnership is in the Fully Configured state.


Figure 8-278 Global Mirror cluster partnership is fully configured

8.15.3 Global Mirror link tolerance and delay simulations

This section describes the Global Mirror link tolerance and the delay simulation parameters.

Global Mirror link tolerance

The gm_link_tolerance parameter defines the SVC’s sensitivity to overload conditions on the intercluster link. The value is the number of seconds of continuous link difficulties that will be tolerated before the SVC stops the remote copy relationships to prevent affecting host I/O at the primary site. To change the value, refer to “Changing link tolerance and delay simulation values for Global Mirror” on page 613.

The link tolerance values are between 60 and 86,400 seconds in increments of 10 seconds. The default value for the link tolerance is 300 seconds.

Global Mirror intercluster and intracluster delay simulation

This Global Mirror feature permits a simulation of a delayed write to a remote VDisk. This feature allows you to perform testing that detects colliding writes, so it can be used to test an application before the full deployment of the Global Mirror feature. You can enable delay simulation separately for either intracluster or intercluster Global Mirror. To enable the feature and change the appropriate value, refer to “Changing link tolerance and delay simulation values for Global Mirror” on page 613.

The inter_cluster_delay_simulation and intra_cluster_delay_simulation values express the amount of time by which secondary I/Os, that is, writes that copy the primary VDisk to the secondary VDisk, are delayed for intercluster and intracluster relationships. A value from 0 to 100 milliseconds, in 1 millisecond increments, can be set. A value of zero disables the feature.

Note: Link tolerance, intercluster delay simulation, and intracluster delay simulation are introduced with the use of the Global Mirror feature.

Recommendation: We strongly recommend using the default link tolerance value. If the link is overloaded for a period that affects host I/O at the primary site, the relationships will be stopped to protect those hosts.

To check the current settings for the delay simulation, refer to “Changing link tolerance and delay simulation values for Global Mirror” on page 613.
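The permitted ranges described above can be captured in a pair of checks (a local validation sketch of our own; the SVC itself enforces the real limits):

```python
# Range checks for the Global Mirror tuning parameters described above.
# Local validation sketch only; the actual limits are enforced by the SVC.
def validate_gm_link_tolerance(seconds):
    # 60 to 86,400 seconds, in increments of 10 seconds (default 300).
    return 60 <= seconds <= 86400 and seconds % 10 == 0

def validate_gm_delay_simulation(milliseconds):
    # 0 to 100 ms in 1 ms increments; 0 disables the simulation.
    return 0 <= milliseconds <= 100
```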

Changing link tolerance and delay simulation values for Global Mirror

Here, we show how to modify the Global Mirror link tolerance and the delay simulation values, and we show the resulting changed parameter settings.

Launching the SVC GUI for ITSO-CLS1, we select Global Mirror Cluster Partnership to view and to modify the parameters, as shown in Figure 8-279 and Figure 8-280.

Figure 8-279 View and modify Global Mirror link tolerance and delay simulation parameters

Figure 8-280 Set Global Mirror link tolerance and delay simulations parameters

After performing the steps, the GUI returns to the Global Mirror Partnership window and lists the new parameter settings, as shown in Figure 8-281 on page 614.


Figure 8-281 View modified parameters

8.15.4 Creating a Global Mirror consistency group

To create the consistency group for use by the Global Mirror relationships for the VDisks with the database and database log files, perform these steps:

1. We select Manage Copy Services and click Global Mirror Consistency Groups, as shown in Figure 8-282.

Figure 8-282 Selecting Global Mirror consistency groups

2. To start the creation process, we select Create Consistency Group from the list and click Go, as shown in Figure 8-283 on page 615. The list already contains the Metro Mirror consistency group that was created between ITSO-CLS1 and ITSO-CLS2; now, we are creating a new Global Mirror consistency group.


Figure 8-283 Creating a consistency group

3. We are presented with a wizard that helps us to create the Global Mirror consistency group. First, the wizard introduces the steps that are involved in the creation of the Global Mirror consistency group, as shown in Figure 8-284. Click Next to proceed.

Figure 8-284 Introduction to Global Mirror consistency group creation wizard

4. As shown in Figure 8-285, we specify the consistency group name and whether it will be used for intercluster or intracluster relationships. In our scenario, we select Create an inter-cluster consistency group, and then we need to select our remote cluster partner. In Figure 8-285, we select ITSO-CLS2, because it is our Global Mirror partner, and click Next.

Figure 8-285 Specifying the consistency group name and type


5. Figure 8-286 shows any existing Global Mirror relationships that can be included in the Global Mirror consistency group. Because we do not have any existing Global Mirror relationships at this time, we create an empty group by clicking Next to proceed.

Figure 8-286 Selecting the existing Global Mirror relationship

6. Verify the settings for the consistency group, and click Finish to create the Global Mirror consistency group, as shown in Figure 8-287.

Figure 8-287 Verifying the settings for the Global Mirror consistency group

When the Global Mirror consistency group is created, we are returned to the Viewing Metro & Global Mirror Consistency Groups window. It shows our newly created Global Mirror consistency group, as shown in Figure 8-288.

Figure 8-288 Viewing Global Mirror consistency groups


8.15.5 Creating Global Mirror relationships for GM_DB_Pri and GM_DBLog_Pri

To create the Global Mirror Relationships for GM_DB_Pri and GM_DBLog_Pri, perform these steps:

1. From the Welcome window, we select Manage Copy Services and click Global Mirror Cluster Relationships.

2. To start the creation process, we select Create a Relationship from the list and click Go, as shown in Figure 8-289.

Figure 8-289 Creating a relationship

3. We are presented with a wizard that helps us to create Global Mirror relationships. First, the wizard introduces the steps that are involved in the creation of the Global Mirror relationship, as shown in Figure 8-290. Click Next to proceed.

Figure 8-290 Introduction to the Global Mirror relationship creation wizard

4. As shown in Figure 8-291 on page 618, we name our first Global Mirror relationship GMREL1, click Global Mirror Relationship, and select the type of cluster relationship. In this case, it is an intercluster relationship toward ITSO-CLS2, as shown in Figure 8-272 on page 608.


Figure 8-291 Naming the Global Mirror relationship and selecting the type of the cluster relationship

5. The next step enables us to select a master VDisk. Because this list can be large, the Filtering Master VDisk Candidates window opens, which enables us to define a filter to reduce the list of eligible VDisks.

In Figure 8-292, we use the filter GM* (you can use the asterisk character (*) to list all VDisks) and click Next.

Figure 8-292 Defining the filter for master VDisk candidates

6. As shown in Figure 8-293, we select GM_DB_Pri to be the master VDisk of the relationship, and we click Next to proceed.

Figure 8-293 Selecting the master VDisk


The next step requires us to select an auxiliary VDisk. The Global Mirror relationship wizard automatically filters this list so that only eligible VDisks are shown. Eligible VDisks are those VDisks that have the same size as the master VDisk and that are not already part of a Global Mirror relationship.

7. As shown in Figure 8-294, we select GM_DB_Sec as the auxiliary VDisk for this relationship, and we click Next to proceed.

Figure 8-294 Selecting the auxiliary VDisk

8. As shown in Figure 8-295, we select the relationship to be part of the consistency group that we created, and we click Next to proceed.

Figure 8-295 Selecting the relationship to be part of a consistency group

9. Finally, in Figure 8-296 on page 620, we verify the Global Mirror Relationship attributes and click Finish to create it.

Consistency groups: It is not mandatory to make the relationship part of a consistency group at this stage. You can make the relationship part of a consistency group at a later stage after the creation of the relationship. You can add the relationship to the consistency group by modifying that relationship.


Figure 8-296 Verifying the Global Mirror relationship

After the successful creation of the relationship, the GUI returns to the Viewing Metro & Global Mirror Relationships window, as shown in Figure 8-297. This window lists the newly created relationship.

Using the same process, create the second Global Mirror relationship, GMREL2. Figure 8-297 shows both relationships.

Figure 8-297 Viewing Metro & Global Mirror relationships

8.15.6 Creating the stand-alone Global Mirror relationship for GM_App_Pri

To create the stand-alone Global Mirror relationship, perform these steps:

1. We start the creation process by selecting Create a Relationship from the list and by clicking Go, as shown in Figure 8-298.

Figure 8-298 Creating a Global Mirror relationship

2. Next, we are presented with the wizard that shows the steps that are involved in the process of creating a Global Mirror relationship, as shown in Figure 8-299 on page 621. Click Next to proceed.


Figure 8-299 Introduction to the Global Mirror relationship creation wizard

3. In Figure 8-300, we name the Global Mirror relationship GMREL3, specify that it is an intercluster relationship, and click Next.

Figure 8-300 Naming the Global Mirror relationship and selecting the type of cluster relationship

4. As shown in Figure 8-301, we are prompted for a filter prior to presenting the master VDisk candidates. We use the asterisk character (*) to list all of the candidates and click Next.

Figure 8-301 Filtering master VDisk candidates


5. As shown in Figure 8-302, we select GM_App_Pri to be the master VDisk for the relationship and click Next to proceed.

Figure 8-302 Selecting the master VDisk

6. As shown in Figure 8-303, we select GM_App_Sec as the auxiliary VDisk for the relationship and click Next to proceed.

Figure 8-303 Selecting auxiliary VDisk

As shown in Figure 8-304 on page 623, we did not select a consistency group, because we are creating a stand-alone Global Mirror relationship.


Figure 8-304 Selecting options for the Global Mirror relationship

7. We also specify that the master and auxiliary VDisks are already synchronized; for the purpose of this example, we assume that they contain identical data (Figure 8-305).

Figure 8-305 Selecting the synchronized option for the Global Mirror relationship

8. Finally, Figure 8-306 on page 624 prompts you to verify the relationship information. We click Finish to create this new relationship.

Note: To add a Global Mirror relationship to a consistency group, the Global Mirror relationship must be in the same state as the consistency group.

Even if we intend to make the GMREL3 Global Mirror relationship part of the CG_W2K3_GM consistency group, we are not offered the option, as shown in Figure 8-305, because the states differ. The state of the GMREL3 relationship is Consistent Stopped, because we selected the synchronized option. The state of the CG_W2K3_GM consistency group is currently Inconsistent Stopped.


Figure 8-306 Verifying the Global Mirror relationship

After the successful creation, we are returned to the Viewing Metro & Global Mirror Relationships window. Figure 8-307 now shows all of our defined Global Mirror relationships.

Figure 8-307 Viewing Global Mirror relationships
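The state-matching rule that governs consistency group membership (see the note in this section) can be sketched as follows (an illustrative model of our own, not SVC code):

```python
# Sketch of the membership rule described in the note above: a relationship
# can join a consistency group only when both are in the same state.
# Illustrative only; state names are ours, not SVC output.
def can_add_to_consistency_group(relationship_state, group_state):
    return relationship_state == group_state

# GMREL3 was created as synchronized (Consistent stopped), while the new
# group starts out Inconsistent stopped, so the add would be rejected:
assert not can_add_to_consistency_group("consistent_stopped",
                                        "inconsistent_stopped")
```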

8.15.7 Starting Global Mirror

Now that we have created the Global Mirror consistency group and relationships, we are ready to use the Global Mirror relationships in our environment.

When performing Global Mirror, the goal is to reach a consistent and synchronized state that can provide redundancy in case a hardware failure occurs that affects the SAN at the production site.

In this section, we show how to start the stand-alone Global Mirror relationship and the consistency group.

8.15.8 Starting a stand-alone Global Mirror relationship

Perform these steps to start a stand-alone Global Mirror relationship:

1. In Figure 8-308 on page 625, we select the stand-alone Global Mirror relationship GMREL3, and from the list, we select Start Copy Process and click Go.


Figure 8-308 Starting the stand-alone Global Mirror relationship

2. In Figure 8-309, we do not need to change the parameters Forced start, Mark as clean, or Copy Direction, because we are invoking this Global Mirror relationship for the first time (and we have already defined the relationship as being synchronized in Figure 8-305 on page 623). We click OK to start the stand-alone Global Mirror relationship GMREL3.

Figure 8-309 Selecting options and starting the copy process

3. Because the Global Mirror relationship was in the Consistent Stopped state and no updates have been made on the primary VDisk, the relationship quickly enters the Consistent Synchronized state, as shown in Figure 8-310.

Figure 8-310 Viewing Global Mirror relationship

8.15.9 Starting a Global Mirror consistency group

Perform these steps to start the CG_W2K3_GM Global Mirror consistency group:

1. Select Global Mirror Consistency Groups from the SVC Welcome window.

2. In Figure 8-311 on page 626, we select the Global Mirror consistency group CG_W2K3_GM, and from the list, we select Start Copy Process and click Go.


Figure 8-311 Selecting the Global Mirror consistency group and starting the copy process

3. As shown in Figure 8-312, we click OK to start the copy process. We cannot select the options Forced start, Mark as clean, or Copy Direction, because we are starting this Global Mirror consistency group for the first time.

Figure 8-312 Selecting options and starting the copy process

4. We are returned to the Viewing Metro & Global Mirror Consistency Groups window, where the CG_W2K3_GM consistency group has changed to the Inconsistent copying state. Because the consistency group was in the Inconsistent stopped state, it remains in the Inconsistent copying state until the background copy completes for all of the relationships in the consistency group. Upon completion of the background copy for all of those relationships, it enters the Consistent synchronized state, as shown in Figure 8-313.

Figure 8-313 Viewing Global Mirror consistency groups

8.15.10 Monitoring background copy progress

The status of the background copy can be monitored in the Viewing Global Mirror Relationships window, as shown in Figure 8-314 on page 627. Alternatively, in the Manage Progress section under My Work, select Viewing Global Mirror Progress, as shown in Figure 8-315 on page 627.


Figure 8-314 Monitoring background copy process for Global Mirror relationships

Figure 8-315 Monitoring background copy process for Global Mirror relationships

8.15.11 Stopping and restarting Global Mirror

Now that the Global Mirror consistency group and relationships are running, we describe how to stop, restart, and change the direction of the stand-alone Global Mirror relationships, as well as the consistency group.

8.15.12 Stopping a stand-alone Global Mirror relationship

Perform these steps to stop a Global Mirror relationship while enabling access (write I/O) to the secondary VDisk:

1. We select the relationship and click Stop Copy Process from the list and click Go, as shown in Figure 8-316 on page 628.

Using SNMP traps: Setting up SNMP traps for the SVC enables automatic notification when Global Mirror consistency groups or relationships change state.


Figure 8-316 Stopping a stand-alone Global Mirror relationship

2. As shown in Figure 8-317, we select Enable write access to the secondary VDisk, if it is consistent with the primary VDisk and click OK to stop the Global Mirror relationship.

Figure 8-317 Enable access to the secondary VDisk while stopping the relationship

3. As shown in Figure 8-318, the Global Mirror relationship transits to the Idling state when stopped, while enabling write access to the secondary VDisk.

Figure 8-318 Viewing Global Mirror relationships

8.15.13 Stopping a Global Mirror consistency group

Perform these steps to stop a Global Mirror consistency group:

1. As shown in Figure 8-319 on page 629, we select the Global Mirror consistency group, click Stop Copy Process from the list, and click Go.


Figure 8-319 Selecting the Global Mirror consistency group to be stopped

2. As shown in Figure 8-320, we click OK without specifying “Enable write access to the secondary VDisks, if they are consistent with the primary VDisks”.

Figure 8-320 Stopping the consistency group without enabling access to the secondary VDisks

The consistency group enters the Consistent stopped state when stopped.

Afterward, if we want to enable access (write I/O) to the secondary VDisks, we can reissue the Stop Copy Process and specify to enable access to the secondary VDisks.

3. In Figure 8-321, we select the Global Mirror consistency group, select Stop Copy Process from the list, and click Go.

Figure 8-321 Selecting the Global Mirror consistency group

4. As shown in Figure 8-322 on page 630, we select Enable write access to the secondary VDisks, if they are consistent with the primary VDisks and click OK.


Figure 8-322 Enabling access to the secondary VDisks

When applying the Enable write access to the secondary VDisks, if they are consistent with the primary VDisks option, the consistency group transits to the Idling state, as shown in Figure 8-323.

Figure 8-323 Viewing the Global Mirror consistency group after write access to the secondary VDisk
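The two stop behaviors shown in 8.15.12 and 8.15.13 reduce to a simple rule, sketched here (an illustrative model of our own; the state names are ours, not SVC output):

```python
# Sketch of the stop behavior described above: stopping with write access
# enabled moves the relationship or consistency group to Idling; stopping
# without it leaves the secondary read-only in the Consistent stopped state.
# Illustrative model only, not SVC code.
def stop_copy_process(enable_access):
    if enable_access:
        return "idling"          # write I/O allowed on the secondary VDisks
    return "consistent_stopped"  # secondary stays read-only and consistent
```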

8.15.14 Restarting a Global Mirror relationship in the Idling state

When restarting a Global Mirror relationship in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency is compromised. In this situation, we must check Force to start the copy process, or the command will fail.

Perform these steps to restart a Global Mirror relationship in the Idling state:

1. As shown in Figure 8-324, we select the Global Mirror relationship, click Start Copy Process from the list, and click Go.

Figure 8-324 Starting stand-alone Global Mirror relationship in the Idling state

2. As shown in Figure 8-325 on page 631, we check Force, because write I/O has been performed while in the Idling state. We select the copy direction by defining the master VDisk as the primary and click OK.


Figure 8-325 Restarting the copy process

The Global Mirror relationship enters the Consistent copying state. When the background copy is complete, the relationship transits to the Consistent synchronized state, as shown in Figure 8-326.

Figure 8-326 Viewing the Global Mirror relationship

8.15.15 Restarting a Global Mirror consistency group in the Idling state

When restarting a Global Mirror consistency group in the Idling state, we must specify the copy direction.

If any updates have been performed on either the master or the auxiliary VDisk in any of the Global Mirror relationships in the consistency group, consistency is compromised. In this situation, we must check Force to start the copy process, or the command will fail.

Perform these steps:

1. As shown in Figure 8-327, we select the Global Mirror consistency group, select Start Copy Process from the list, and click Go.

Figure 8-327 Starting the copy process for Global Mirror consistency group


2. As shown in Figure 8-328, we check Force and set the copy direction by selecting the auxiliary VDisks to become the primary. Click OK.

Figure 8-328 Restarting the copy process for the consistency group

3. When the background copy completes, the Global Mirror consistency group enters the Consistent synchronized state, as shown in Figure 8-329.

Figure 8-329 Viewing Global Mirror consistency groups

Figure 8-330 shows the individual relationships within that consistency group.

Figure 8-330 Viewing Global Mirror relationships
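The restart rules for the Idling state can be sketched as follows (an illustrative model of our own, not SVC code):

```python
# Sketch of the restart rule described above: starting from Idling requires
# choosing a primary, and if writes occurred on either side while Idling,
# Force must be set or the start fails. Illustrative only; not SVC code.
def start_from_idling(primary, writes_since_idle, force=False):
    if primary not in ("master", "aux"):
        raise ValueError("copy direction (primary) must be specified")
    if writes_since_idle and not force:
        raise ValueError("consistency compromised: Force is required")
    # The background copy resynchronizes the pair, after which the state
    # becomes Consistent synchronized.
    return "consistent_synchronized"
```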

8.15.16 Changing copy direction for Global Mirror

When a stand-alone Global Mirror relationship is in the Consistent synchronized state, we can change the copy direction for the relationship. Perform these steps:

1. In Figure 8-331 on page 633, we select the GMREL3 relationship, click Switch Copy Direction from the list, and click Go.


Figure 8-331 Selecting the relationship for which the copy direction is to be changed

2. In Figure 8-332, we see that the current primary VDisk is the master, so to change the copy direction for the stand-alone Global Mirror relationship, we specify the auxiliary VDisk to become the primary, and click OK.

Figure 8-332 Selecting the primary VDisk as auxiliary to switch the copy direction

3. The copy direction is now switched, and we are returned to the Viewing Global Mirror Relationships window, where we see that the copy direction has been switched, as shown in Figure 8-333.

Figure 8-333 Viewing Global Mirror relationship after changing the copy direction

Important: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisk that transits from primary to secondary, because all I/O will be inhibited to that VDisk when it becomes the secondary. Therefore, careful planning is required prior to switching the copy direction for a Global Mirror relationship.


8.15.17 Switching copy direction for a Global Mirror consistency group

When a Global Mirror consistency group is in the Consistent synchronized state, we can change the copy direction for the Global Mirror consistency group. Perform these steps:

1. In Figure 8-334, we select the CG_W2K3_GM consistency group, click Switch Copy Direction from the list, and click Go.

Figure 8-334 Selecting the consistency group for which the copy direction is to be changed

2. In Figure 8-335, we see that currently the primary VDisks are also the master. So, to change the copy direction for the Global Mirror consistency group, we specify the auxiliary VDisks to become the primary, and click OK.

Figure 8-335 Selecting the primary VDisk as auxiliary to switch the copy direction

The copy direction is now switched and we are returned to the Viewing Global Mirror Consistency Group window, where we see that the copy direction has been switched. Figure 8-336 on page 635 shows that the auxiliary is now the primary.

Note: When the copy direction is switched, it is crucial that there is no outstanding I/O to the VDisks that transit from primary to secondary, because all I/O will be inhibited when they become the secondary. Therefore, careful planning is required prior to switching the copy direction.



Figure 8-336 Viewing Global Mirror consistency groups after changing the copy direction

Figure 8-337 shows the new copy direction for individual relationships within that consistency group.

Figure 8-337 Viewing Global Mirror Relationships, after changing copy direction for consistency group

This completes the Global Mirror setup and operations.

8.16 Service and maintenance

This section discusses the various service and maintenance tasks that you can perform within the SVC environment. To perform the following activities, select the Service and Maintenance option in the SVC Welcome window (Figure 8-338 on page 636).

Note: You are prompted for a cluster user ID and password for several of the following tasks.


Figure 8-338 Service and Maintenance functions

8.17 Upgrading software

This section explains how to upgrade the SVC software.

8.17.1 Package numbering and version

The software upgrade package name ends with four positive integers that are separated by dots. For example, a software upgrade package might have the name IBM_2145_INSTALL_5.1.0.0.
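If you script around upgrade packages, the four-part version can be extracted from the package name. The helper below is our own illustration, not an SVC tool:

```python
import re

# Extract the four-part version from an upgrade package name such as
# IBM_2145_INSTALL_5.1.0.0. Helper of our own, for illustration only.
def package_version(package_name):
    match = re.search(r"(\d+)\.(\d+)\.(\d+)\.(\d+)$", package_name)
    if not match:
        raise ValueError("not a recognized upgrade package name")
    return tuple(int(part) for part in match.groups())

package_version("IBM_2145_INSTALL_5.1.0.0")
# returns (5, 1, 0, 0)
```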

8.17.2 Upgrade status utility

A function of the Master Console is to check the software levels in the system against the recommended levels that are documented on the support Web site. You are informed whether the software levels are up-to-date or whether you need to download and install newer levels. This information is provided after you log in to the SVC GUI. If new software is available, a message appears in the middle of the Welcome window. Use the link that is provided there to download the new software and to obtain more information about it.

Important: To use this feature, the System Storage Productivity Center/Master Console must be able to access the Internet.

If the System Storage Productivity Center cannot access the Internet because of restrictions, such as a local firewall, you will see the message “The update server cannot be reached at this time.” Use the Web link that is provided in the message for the latest software information.


8.17.3 Precautions before upgrade

In this section, we describe the precautions that you must take before attempting an upgrade.

During the upgrade, each node in your cluster will be automatically shut down and restarted by the upgrade process. Because each node in an I/O Group provides an alternate path to VDisks, use Subsystem Device Driver (SDD) to make sure that all I/O paths between all hosts and SANs are working.

If you have not performed this check, certain hosts might lose connectivity to their VDisk and experience I/O errors when the SVC node providing that access is shut down during the upgrade process (Example 8-1).

Example 8-1 Using datapath query commands to check that all paths are online

C:\Program Files\IBM\SDDDSM>datapath query adapter

Active Adapters :2

Adpt#  Name             State    Mode     Select  Errors  Paths  Active
    0  Scsi Port2 Bus0  NORMAL   ACTIVE      167       0      4       4
    1  Scsi Port3 Bus0  NORMAL   ACTIVE      137       0      4       4

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#:   0  DEVICE NAME: Disk1 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E9080000000000002A
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL       37       0
    1    Scsi Port2 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    2    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL        0       0
    3    Scsi Port3 Bus0/Disk1 Part0  OPEN    NORMAL       29       0

DEV#:   1  DEVICE NAME: Disk2 Part0  TYPE: 2145  POLICY: OPTIMIZED
SERIAL: 6005076801A180E90800000000000010
============================================================================
Path#    Adapter/Hard Disk            State   Mode     Select  Errors
    0    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL        0       0
    1    Scsi Port2 Bus0/Disk2 Part0  OPEN    NORMAL      130       0
    2    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL      108       0
    3    Scsi Port3 Bus0/Disk2 Part0  OPEN    NORMAL        0       0

You can check the I/O paths by using the datapath query commands, as shown in Example 8-1. You do not need to perform this check for hosts that have no active I/O operations to the SANs during the software upgrade.
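When many hosts are involved, you might script this path check against captured `datapath query device` output. The following Python sketch is a hypothetical illustration; the field layout it assumes is the one shown in Example 8-1:

```python
def all_paths_open(datapath_output):
    """Scan `datapath query device` output (layout as in Example 8-1)
    and return True only if every path line reports an OPEN state."""
    open_paths = total_paths = 0
    for line in datapath_output.splitlines():
        fields = line.split()
        # Path lines start with a numeric path index and contain an
        # Adapter/Hard Disk field such as "Scsi Port2 Bus0/Disk1 Part0".
        if fields and fields[0].isdigit() and "/" in line:
            total_paths += 1
            if "OPEN" in fields:
                open_paths += 1
    return total_paths > 0 and open_paths == total_paths

sample = """\
Path#  Adapter/Hard Disk            State  Mode    Select  Errors
    0  Scsi Port2 Bus0/Disk1 Part0  OPEN   NORMAL      37       0
    1  Scsi Port3 Bus0/Disk1 Part0  OPEN   NORMAL      29       0"""
print(all_paths_open(sample))  # True
```

A host that reports any path in a state other than OPEN deserves attention before you shut down an SVC node for upgrade.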

Important: Before attempting any SVC code update, read and understand the SVC concurrent compatibility and code cross-reference matrix. Go to the following site and click the link for Latest SAN Volume Controller code:

http://www-1.ibm.com/support/docview.wss?uid=ssg1S1001707


It is well worth double-checking that your uninterruptible power supply unit power configuration is also set up correctly (even if your cluster is running without problems). Specifically, double-check these areas:

- Ensure that your uninterruptible power supply units are all getting their power from an external source and that they are not daisy-chained. Make sure that each uninterruptible power supply unit is not supplying power to another node’s uninterruptible power supply unit.

- Ensure that the power cable and the serial cable coming from the back of each node go back to the same uninterruptible power supply unit. If the cables are crossed and go back to separate uninterruptible power supply units, then during the upgrade, as one node is shut down, another node might also be mistakenly shut down.

8.17.4 SVC software upgrade test utility

The SVC software upgrade test utility checks for known issues that can cause problems during an SVC software upgrade. You can run it on any SVC cluster running level 4.1.0.0 or higher. It is available from the following location:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

You can use the svcupgradetest utility to check for potential problems before upgrading from V4.1.0.0, or any later release, to the latest available level.

You can run the utility multiple times on the same cluster to perform a readiness check in preparation for a software upgrade. We strongly recommend running this utility a final time immediately prior to applying the SVC upgrade, after first checking that no newer release of the utility has become available since it was originally downloaded.

After you install the utility, you can obtain the version information for this utility by running the svcupgradetest -h command.

The installation and usage of this utility are nondisruptive and do not require restarting any SVC nodes, so there is no interruption to host I/O. The utility is only installed on the current configuration node.

System administrators must continue to check whether the version of code that they plan to install is the latest version. You can obtain the latest information at this Web site:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S1001707#_Latest_SAN_Volume_Controller%20Code

This utility is intended to supplement rather than duplicate the existing tests that are carried out by the SVC upgrade procedure (for example, checking for unfixed errors in the error log).

The upgrade test utility includes command-line parameters.

Prerequisites

You can install this utility only on clusters running SVC V4.1.0.0 or later.

Tip: See the Subsystem Device Driver User’s Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540, for more information about datapath query commands.


Installation instructions

To use the upgrade test utility, follow these steps:

1. Download the latest version of the upgrade test utility (IBM2145_INSTALL_svcupgradetest_V.R) using the download link:

http://www-01.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

2. You can install the utility package by using the standard SVC Console (GUI) or command-line interface (CLI) software upgrade procedures that are used to install any new software onto the cluster.

3. An example CLI command to install the package, after it has been uploaded to the cluster, is svcservicetask applysoftware -file IBM2145_INSTALL_svcupgradetest_n.nn.

4. Run the upgrade test utility by logging onto the SVC CLI and running svcupgradetest -v <V.R.M.F> where V.R.M.F is the version number of the SVC release being installed.

5. For example, if upgrading to SVC V5.1.0.0, the command is svcupgradetest -v 5.1.0.0.

6. The output from the command either states that no problems were found or directs you to details about any known issues that were discovered on this cluster.

Example 8-2 shows the command to test an upgrade.

Example 8-2 Run an upgrade test

IBM_2145:ITSO-CLS2:admin>svcupgradetest
svcupgradetest version 4.11. Please wait while the tool tests
for issues that may prevent a software upgrade from completing
successfully. The test will take approximately one minute to complete.

The test has not found any problems with the 2145 cluster.
Please proceed with the software upgrade.
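If you automate the readiness check, you can key off the success message shown in Example 8-2. The string match in the following Python sketch is an assumption based on that sample output, not a documented interface, and the function name is our own:

```python
def upgrade_test_passed(output):
    """Rough readiness check: assume (per Example 8-2) that
    svcupgradetest prints 'has not found any problems' when it is
    safe to proceed with the upgrade."""
    return "has not found any problems" in output

result = ("The test has not found any problems with the 2145 cluster.\n"
          "Please proceed with the software upgrade.")
print(upgrade_test_passed(result))  # True
```

In a real script, treat anything other than a clear success message as a failure and stop the upgrade workflow.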

8.17.5 Upgrade procedure

To upgrade the SVC cluster software, perform the following steps:

1. Use the Run Maintenance Procedure in the GUI and correct all open problems first, as described in 8.17.6, “Running maintenance procedures” on page 645.

2. Back up the SVC Config, as described in 8.18.1, “Backup procedure” on page 669.

3. Back up the support data in case a problem occurs during the upgrade that renders a node unusable. This information can assist IBM Support in determining why the upgrade failed and help with a resolution. Example 8-3 shows the command to run; it is available only in the CLI.

Example 8-3 Creating an SVC snapshot

IBM_2145:ITSO-CLS2:admin>svc_snap
Collecting system information...
Copying files, please wait...
Copying files, please wait...
Dumping error log...
Creating snap package...
Snap data collected in /dumps/snap.100047.080617.002334.tgz

Note: You can ignore the error message “No such file or directory”.
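The snap file name in Example 8-3 appears to encode a panel identifier, a date, and a time. The layout that the following Python sketch assumes (panel id, yymmdd date, hhmmss time) is inferred from that one sample and is not an officially documented format:

```python
import datetime

def parse_snap_name(filename):
    """Decode a snap file name such as snap.100047.080617.002334.tgz.
    Assumed layout (panel id, date as yymmdd, time as hhmmss) is
    inferred from the sample in Example 8-3."""
    _, panel, date_part, time_part = filename.split(".")[:4]
    collected = datetime.datetime.strptime(date_part + time_part,
                                           "%y%m%d%H%M%S")
    return panel, collected

panel, collected = parse_snap_name("snap.100047.080617.002334.tgz")
print(panel, collected.isoformat())  # 100047 2008-06-17T00:23:34
```

Decoding the timestamp can help you match an archived snap package with the configuration backup taken at the same time.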


4. Select Software Maintenance → List Dumps → Software Dumps, download the dump that was created in Example 8-3 on page 639, and store it in a safe place together with the SVC Config backup that you created previously (see Figure 8-339 and Figure 8-340).

Figure 8-339 Getting software dumps

Figure 8-340 Downloading software dumps

5. From the SVC Welcome window, click Service and Maintenance, and then, click the Upgrade Software link.

6. In the Software Upgrade window that is shown in Figure 8-341 on page 641, you can either upload a new software upgrade file or list the upgrade files. Click Upload to upload the latest SVC cluster code.


Figure 8-341 Software Upgrade window

7. In the Software Upgrade (file upload) window (Figure 8-342), type or browse to the directory on your management workstation (for example, Master Console) where you stored the latest code level, and click Upload.

Figure 8-342 Software upgrade (file upload)

8. The File Upload window (Figure 8-343) is displayed after the file is uploaded. Click Continue.

Figure 8-343 File Upload window

9. The Select Upgrade File window (Figure 8-344 on page 642) lists the available software packages. Make sure that the package that you want to apply is selected. Click Apply.


Figure 8-344 Select Upgrade File window

10.In the Confirm Upgrade File window (Figure 8-345), click Confirm.

Figure 8-345 Confirm Upgrade File window

11.After this confirmation, the SVC will check whether there are any outstanding errors. If there are no errors, click Continue, as shown in Figure 8-346, to proceed to the next upgrade step. Otherwise, the Run Maintenance button is displayed, which is used to check the errors. For more information about how to use the maintenance procedures, see 8.17.6, “Running maintenance procedures” on page 645.

Figure 8-346 Check Outstanding Errors window

12.The Check Node Status window shows the in-use nodes with their current status displayed, as shown in Figure 8-347 on page 643. Click Continue to proceed.


Figure 8-347 Check Node Status window

13.The Start Upgrade window opens. Click Start Software Upgrade to start the software upgrade, as shown in Figure 8-348.

Figure 8-348 Start Upgrade window

The upgrade starts by upgrading one node in each I/O Group.

14.The Software Upgrade Status window (Figure 8-349 on page 644) opens. Click Check Upgrade Status periodically. This process might take a while to complete. When the upgrade has completed, a completion message is displayed, and the code level of the cluster and nodes shows the newly applied software level.


Figure 8-349 Software Upgrade Status window

15.During the upgrade process, you can issue only informational commands. All task commands, such as creating, modifying, mapping, or deleting objects, are denied in both the GUI and the CLI. Figure 8-350 shows the denial of a VDisk creation attempt.

Figure 8-350 Denial of a task command during the software update

16.The new code is distributed and applied to each node in the SVC cluster. After installation, each node is automatically restarted in turn.

Although unlikely, the concurrent code load (CCL) can fail, for example, if one node fails to accept the new code level. In that case, the update on that one node is backed out, and the node reverts to the original code level.

From 4.1.0 onward, the update simply waits for user intervention. For example, if there are two nodes (A and B) in an I/O Group, node A has been upgraded successfully, and node B then experiences a hardware failure, the upgrade ends with an I/O Group that has a single node at the higher code level. When the hardware failure on node B is repaired, the CCL completes the code upgrade process.


17.If you run into an error, go to the Analyze Error Log window and search for Software Install completed. Select Sort by date with the newest first, and then click Perform so that the software entries are listed near the top. For more information about working with the Analyze Error Log window, see 8.17.10, “Analyzing the error log” on page 655.

It might also be worthwhile to capture information for IBM Support to help you diagnose what went wrong.

You have now completed the tasks that are required to upgrade the SVC software. Click the X icon in the upper-right corner of the display area to close the Software Upgrade window, taking care not to close the browser by mistake.

8.17.6 Running maintenance procedures

To run the maintenance procedures on the SVC cluster, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Run Maintenance Procedures.

2. Click Start Analysis, as shown in Figure 8-351, to analyze the cluster log and to guide you through the maintenance procedures.

Figure 8-351 Maintenance Procedures window

3. This action generates a new error log file in the /dumps/elogs/ directory and displays the list of errors, as shown in Figure 8-352 on page 646.

Tip: Be patient. After the software update is applied, the first SVC node in the cluster updates and installs the new SVC code version shortly afterward. If multiple I/O Groups (up to four are possible) exist in an SVC cluster, the second node of the second I/O Group loads the new SVC code and restarts approximately 10 minutes after the first node. A 30 minute delay between the update of the first node and the second node in an I/O Group ensures that all paths, from a multipathing point of view, are available again.

An SVC cluster update with one I/O Group takes approximately one hour.
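For planning a maintenance window, you can turn the figures in the tip above into a back-of-the-envelope estimate. The per-node time and the recovery delay in this Python sketch are illustrative assumptions chosen so that a one-I/O-Group cluster comes out at about an hour; they are not published service times:

```python
def estimated_upgrade_minutes(io_groups, node_update=15, path_recovery=30):
    """Very rough upgrade-duration estimate: the first node of every
    I/O Group updates, then a multipathing recovery delay passes, then
    the second node of every I/O Group updates. All timing parameters
    are illustrative assumptions, not published service times."""
    return io_groups * node_update + path_recovery + io_groups * node_update

print(estimated_upgrade_minutes(1))  # 60 (about one hour, matching the tip)
```

Always leave generous margin on top of any such estimate; actual times depend on hardware, load, and the code levels involved.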


Figure 8-352 Maintenance error log with unfixed errors

4. Click the error number in the Error Code column in Figure 8-352 to see the explanation for this error, as shown in Figure 8-353.

Figure 8-353 Maintenance: Error code description

5. To perform problem determination, click Continue. The details for the error appear and might provide options to diagnose and repair the problem. In this case, it asks you to check an external configuration and, then, to click Continue (Figure 8-354 on page 647).


Figure 8-354 Maintenance procedures: Fixing error

6. The SVC maintenance procedure has completed, and the error is fixed, as shown in Figure 8-355.

Figure 8-355 Maintenance procedure: Fixing error

7. The discovery reported no new errors, so the entry in the error log is now marked as fixed (as shown in Figure 8-356). Click OK.

Figure 8-356 Maintenance procedure: Fixed

8.17.7 Setting up error notification

To set up error notification, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, Set SNMP Error Notifications.


Figure 8-357 Setting SNMP error notification

2. Select Add a Server and click Go.

3. In Figure 8-358, add the Server Name, the IP address of your SNMP Manager, the Port (optional), and the Community string to use.

Figure 8-358 Set the SNMP settings

4. The next window now displays confirmation that it has updated the settings, as shown in Figure 8-359 on page 649.

IP address: Depending on which IP protocol addressing is configured, the window displays options for IPv4, IPv6, or both.


Figure 8-359 Error notification settings confirmation

5. The next window now displays the current status, as shown in Figure 8-360.

Figure 8-360 Current event notification settings

6. You can now click the X icon in the upper-right corner of the Set SNMP Event Notification window to close the window.

8.17.8 Setting syslog event notification

Starting with SVC 5.1, you can send syslog messages to a defined syslog server. The SVC provides support for syslog in addition to e-mail and SNMP traps.

Figure 8-361 on page 650, Figure 8-362 on page 650, and Figure 8-363 on page 651 show the sequence of windows to use to define a syslog server.


Figure 8-361 Adding a syslog server

Figure 8-362 shows the syslog server definition window.

Figure 8-362 Syslog server definition


Figure 8-363 Syslog server confirmation

The syslog messages can be sent in either compact message format or full message format.

Example 8-4 shows a compact format syslog message.

Example 8-4 Compact syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070
#Description=Node CPU fan failed #ClusterName=SVCCluster1
#Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1
#CopyID=0 #ErrorSequenceNumber=100

Example 8-5 shows a full format syslog message.

Example 8-5 Full format syslog message example

IBM2145 #NotificationType=Error #ErrorID=077001 #ErrorCode=1070
#Description=Node CPU fan failed #ClusterName=SVCCluster1
#Timestamp=Wed Jul 02 08:00:00 2008 BST #ObjectType=Node #ObjectName=Node1
#CopyID=0 #ErrorSequenceNumber=100 #ObjectID=2 #NodeID=2
#MachineType=21454F2 #SerialNumber=1234567
#SoftwareVersion=5.1.0.0 (build 8.14.0805280000)
#FRU=fan 24P1118, system board 24P1234
#AdditionalData(0->63)=0000000021000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
#AdditionalData(64-127)=00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
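Both message formats use the same `#Key=Value` field layout, which makes them easy to parse on a receiving syslog server. The following Python sketch is an illustration built from the examples above, not an IBM-supplied parser:

```python
def parse_svc_syslog(message):
    """Split the '#Key=Value' fields of an SVC syslog notification into
    a dictionary; both the compact and full formats use this layout."""
    fields = {}
    for chunk in message.split("#")[1:]:
        key, _, value = chunk.partition("=")
        fields[key.strip()] = value.strip()
    return fields

compact = ("IBM2145 #NotificationType=Error #ErrorID=077001 "
           "#ErrorCode=1070 #ClusterName=SVCCluster1 "
           "#ObjectType=Node #ObjectName=Node1 #ErrorSequenceNumber=100")
info = parse_svc_syslog(compact)
print(info["ErrorCode"], info["ObjectName"])  # 1070 Node1
```

A parser like this lets a monitoring script route notifications by ErrorCode or ClusterName instead of matching raw message text.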

8.17.9 Set e-mail features

The SVC GUI supports the SVC e-mail error notification service. The SVC uses the e-mail server to send event notification and inventory e-mails to e-mail users. It can transmit any combination of error, warning, and informational notification types.

To run the e-mail service for the first time, the Web pages guide us through the required steps:

- Set the e-mail server and contact details
- Test the e-mail service

Figure 8-364 on page 652 shows the set e-mail notification window.


Figure 8-364 Setting e-mail notification window

Figure 8-365 shows how to insert contact details.

Figure 8-365 Inserting contact details

Figure 8-366 shows the confirmation window for the e-mail contact details.

Figure 8-366 Contact details confirmation


Figure 8-367 shows how to configure the Simple Mail Transfer Protocol (SMTP) server in the SVC cluster.

Figure 8-367 SMTP server definition

Figure 8-368 shows the SMTP server definition confirmation.

Figure 8-368 SMTP definition confirmation

Figure 8-369 on page 654 shows how to define the support e-mail to which SVC notifications will be sent.


Figure 8-369 E-mail notification user

Figure 8-370 shows how to start the e-mail service.

Figure 8-370 Starting the e-mail service

Figure 8-371 shows how to start the test e-mail process.

Figure 8-371 Sending test e-mail

Figure 8-372 on page 655 shows how to send a test e-mail to all users.


Figure 8-372 Sending a test e-mail to all users

Figure 8-373 shows how to confirm the test e-mail notification.

Figure 8-373 Confirming the test e-mail notification

8.17.10 Analyzing the error log

The following types of events and errors are logged in the error log:

- Events: State changes that are detected by the cluster software and logged for informational purposes. Events are recorded in the cluster error log.

- Errors: Hardware or software problems that are detected by the cluster software and require repair. Errors are recorded in the cluster error log.

- Unfixed errors: Errors that were detected and recorded in the cluster error log and have not yet been corrected or repaired.

- Fixed errors: Errors that were detected and recorded in the cluster error log and were subsequently corrected or repaired.

To display the error log for analysis, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, click Analyze Error Log.

2. From the Error Log Analysis window (Figure 8-374 on page 656), you can choose either Process or Clear Log:


Figure 8-374 Analyzing the error log

a. Select the appropriate radio buttons and click Process to display the log for analysis. The Analysis Options and Display Options allow you to filter the results of your log inquiry to reduce the output.

b. You can display the whole log, or you can filter the log so that only errors, events, or unfixed errors are displayed. You can also sort the results by selecting the appropriate display options. For example, you can sort the errors by error priority (lowest number = most serious error) or by date. If you sort by date, you can specify whether the newest or oldest error displays at the top of the table. You can also specify the number of entries that you want to display on each page of the table.

Figure 8-375 on page 657 shows an example of the error logs listed.
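The same ordering logic that the display options offer can be applied when post-processing an exported error log. This Python sketch uses hypothetical field names (`priority`, `date`); the actual export format depends on how you extract the log:

```python
def sort_error_log(entries, by="priority", newest_first=True):
    """Order error-log entries the way the GUI display options describe:
    by error priority (lowest number = most serious error) or by date.
    The 'priority' and 'date' field names are illustrative assumptions."""
    if by == "priority":
        return sorted(entries, key=lambda entry: entry["priority"])
    return sorted(entries, key=lambda entry: entry["date"],
                  reverse=newest_first)

log = [{"priority": 3, "date": "2008-06-17"},
       {"priority": 1, "date": "2008-06-15"},
       {"priority": 2, "date": "2008-06-16"}]
print([entry["priority"] for entry in sort_error_log(log)])  # [1, 2, 3]
```

Sorting by priority first surfaces the most serious errors, which mirrors the recommended triage order.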


Figure 8-375 Analyzing Error Log: Process

c. Click an underlined sequence number to see the detailed log of this error (Figure 8-376 on page 658).


Figure 8-376 Analyzing Error Log: Detailed Error Analysis window

d. You can optionally display detailed sense code data by clicking Sense Expert, as shown in Figure 8-377 on page 659. Click Return to go back to the Detailed Error Analysis window.


Figure 8-377 Decoding Sense Data window

e. If the log entry is an error, you can optionally mark the error as fixed; marking it as fixed does not trigger any other checks or processes. We recommend that you instead perform this action as part of a maintenance procedure (see 8.17.6, “Running maintenance procedures” on page 645).

f. Click Clear Log at the bottom of the Error Log Analysis window (see Figure 8-374 on page 656) to clear the log. If the error log contains unfixed errors, a warning message is displayed when you click Clear Log.

3. You can now click the X icon in the upper-right corner of the Analyze Error Log window.

8.17.11 License settings

To change the license settings, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, License Settings, as shown in Figure 8-378 on page 660.


Figure 8-378 License setting

2. Now, you can choose between Capacity Licensing or Physical Disk Licensing.

Figure 8-379 shows the Physical Disk Licensing Settings window.

Figure 8-379 Physical Disk Licensing Settings window

Figure 8-380 on page 661 shows the Capacity Licensing Settings window.


Figure 8-380 Capacity License Setting window

3. Consult your license before you make changes in the License Settings window (Figure 8-381). If you have purchased additional features (for example, FlashCopy or Global Mirror) or if you have increased the capacity of your license, make the appropriate changes. Then, click Update License Settings.

Figure 8-381 License Settings window

4. You now see a license confirmation window, as shown in Figure 8-382 on page 662. Review this window and ensure that you are in compliance. If you are in compliance, click I Agree to make the requested changes take effect.


Figure 8-382 License agreement

5. You return to the License Settings window to review your changes (Figure 8-383). Make sure that your changes are reflected.

Figure 8-383 Feature settings update

6. You can now click the X icon in the upper-right corner of the License Settings window.

8.17.12 Viewing the license settings log

To view the feature log, which registers the events that are related to the SVC-licensed features, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, View License Settings Log.

2. The View License Settings Log window (Figure 8-384 on page 663) opens. It displays the current license settings and a log of when changes were made.


Figure 8-384 Feature log

3. You can now click the X icon in the upper-right corner of the View License Settings Log window.

8.17.13 Dumping the cluster configuration

To dump your cluster configuration, click Service and Maintenance and, then, Dump Configuration, as shown in Figure 8-385.

Figure 8-385 Dumping Cluster Configuration window

8.17.14 Listing dumps

To list the dumps that were generated, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and, then, List Dumps.

2. In the List Dumps window (Figure 8-386 on page 664), you see several dumps and log files that were generated over time on this node. They include the configuration dump that we generated in Example 8-3 on page 639. Click any of the available links (the underlined text in the table under the Dump Type heading) to go to another window that displays the available dumps. To see the dumps on the other node, you must click Check other nodes.

Figure 8-386 List Dumps

3. Figure 8-387 shows the list of dumps from the partner node. You can see a list of the dumps by clicking one of the Dump Types.

Figure 8-387 List Dumps from the partner node

4. To copy a file from this partner node to the config node, click the dump type and then click the file that you want to copy, as shown in Figure 8-388 on page 665.

Note: By default, the dump and log information that is displayed is available from the configuration node. In addition to these files, each node in the SVC cluster keeps a local software dump file. Occasionally, other dumps are stored on them. Click Check Other Nodes at the bottom of the List Dumps window (Figure 8-386) to see which dumps or logs exist on other nodes in your cluster.


Figure 8-388 Copy dump files

5. You will see a confirmation window that the dumps are being retrieved. You can either click Continue to continue working with the other node or click Cancel to go back to the original node (Figure 8-389).

Figure 8-389 Retrieve dump confirmation

6. After all of the necessary files are copied to the SVC config node, click Cancel to finish the copy operation, and click Cancel again to return to the SVC config node. Now, for example, if you click the Error Logs link, you see information similar to that shown in Figure 8-390 on page 666.


Figure 8-390 List Dumps: Error Logs

7. From this window, you can perform either of the following tasks:

– Click any of the available log file links (indicated by the underlined text) to display the log in complete detail.

– Delete one or all of the dump or log files. To delete all, click Delete All. To delete several error log files, select the check boxes to the right of the file, and click Delete.

8. You can now click the X icon in the upper-right corner of the List Dumps window.

8.17.15 Setting up a quorum disk

After the process of node discovery, the SVC cluster automatically chooses three MDisks as quorum disks. Each disk is assigned an index number of 0, 1, or 2.

If half of the nodes in a cluster are missing for any reason, the surviving half cannot simply assume that the missing nodes are “dead”; it might be that the cluster state information is not being successfully passed between nodes (because of a network failure, for example). For this reason, if half of the cluster disappears from the view of the other half, each surviving half attempts to lock the first quorum disk (index 0). If quorum disk index 0 is not available to any node, the next disk (index 1) becomes the quorum, and so on.

The half of the cluster that is successful in locking the quorum disk becomes the exclusive processor of I/O activity. It attempts to reform the cluster with any nodes that it can still see. The other half will stop processing I/O, which provides a tie-breaker solution and ensures that both halves of the cluster do not continue to operate.

If both halves can see the quorum disk, they use it to communicate with each other and decide which half becomes the exclusive processor of I/O activity.
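The tie-break procedure described above can be sketched as a small model. This Python illustration is a deliberate simplification of the real protocol (in particular, when both halves can reach a disk, only one actually wins the lock; the sketch arbitrarily lets half A win):

```python
def tie_break(half_a_can_lock, half_b_can_lock, quorum_indexes=(0, 1, 2)):
    """Simplified model of the quorum tie-break: each surviving half
    tries quorum disk index 0, then 1, then 2. The half that locks the
    first reachable disk keeps processing I/O; the other half stops."""
    for index in quorum_indexes:
        if half_a_can_lock(index):
            return "A"
        if half_b_can_lock(index):
            return "B"
    return None  # no quorum disk reachable; neither half continues I/O

# Half A cannot reach quorum disk 0, but half B can: B wins the tie-break.
winner = tie_break(lambda i: i >= 1, lambda i: i == 0)
print(winner)  # B
```

The model shows why the quorum disks act as a tie-breaker: exactly one half can hold the lock, so the two halves can never both continue to process I/O.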


If, for any reason, you want to set your own quorum disks (for example, if you have installed additional back-end storage and you want to move one or two quorum disks onto this newly installed back-end storage subsystem), complete the following tasks:

1. From the Welcome window, select Work with Managed Disks, then select Quorum Disks, which takes you to the window that is shown in Figure 8-391.

Figure 8-391 Selecting the quorum disks

2. We can now select our quorum disks and identify which disk will be the active quorum disk.

3. To change the active quorum disk, as shown in Figure 8-392, we start by selecting another MDisk to be the active quorum disk. We click Set Active Quorum Disk and click Go.

Figure 8-392 Selecting a new active quorum disk

4. We confirm that we want to change the active quorum disk by clicking Set Active Quorum Disk, as shown in Figure 8-393.

Figure 8-393 Confirming the change of the active quorum disk

5. After we have changed the active quorum disk, we can see that our previous active quorum disk is in the state of initializing, as shown in Figure 8-394 on page 668.


Figure 8-394 Quorum disk initializing

6. Shortly afterward, we have a successful change, as shown in Figure 8-395.

Figure 8-395 New quorum disk is now active

Quorum disks are created only if at least one MDisk is in managed mode (that is, it was formatted by the SVC with extents in it). Otherwise, a 1330 cluster error message is displayed on the SVC front panel. You can correct it only by placing MDisks in managed mode.

8.18 Backing up the SVC configuration

The SVC configuration data is stored on all the nodes in the cluster. It is specially hardened so that, in normal circumstances, the SVC never loses its configuration settings. However, in exceptional circumstances, this data can become corrupted or lost.

This section details the tasks that you can perform to save the configuration data from an SVC configuration node and restore it. The following configuration information is backed up:

• Storage subsystem
• Hosts
• Managed disks (MDisks)
• MDGs
• SVC nodes
• VDisks
• VDisk-to-host mappings
• FlashCopy mappings
• FlashCopy consistency groups
• Mirror relationships
• Mirror consistency groups


Backing up the cluster configuration enables you to restore your cluster configuration in the event that it is lost. However, only the data that describes the cluster configuration is backed up. To back up your application data, you must use the appropriate backup methods.

To begin the restore process, consult IBM Support to determine the cause or reason why you cannot access your original configuration data.

Perform or verify these prerequisites to have a successful backup:

• All nodes in the cluster must be online.

• No object name can begin with an underscore (_).

• Do not run any independent operations that might change the cluster configuration while the backup command runs.

• Do not make any changes to the fabric or cluster between backup and restore. If changes are made, back up your configuration again, or you might not be able to restore it later.

The svc.config.backup.xml file is stored in the /tmp folder on the configuration node and must be copied to an external and secure place for backup purposes.

8.18.1 Backup procedure

To back up the SVC configuration data, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and then Backup Configuration.

2. In the Backing up a Cluster Configuration window (Figure 8-396), click Backup.

Figure 8-396 Backing up Cluster Configuration data

3. After the configuration backup is successful, you see messages similar to the messages that are shown in Figure 8-397 on page 670. Make sure that you read, understand, act upon, and document the warning messages, because they can affect the restore procedure.

Important: We recommend that you make a backup of the SVC configuration data after each major change in the environment, such as defining or changing a VDisk, VDisk-to-host mappings, and so on.

Important: We strongly recommend that you change the default names of all objects to non-default names. For objects with a default name, a warning is issued, and the object is restored with its original name and “_r” appended to it.


Figure 8-397 Configuration backup successful messages and warnings

4. You can now click the X icon in the upper-right corner of the Backing up a Cluster Configuration window.

8.18.2 Saving the SVC configuration

To save the SVC configuration in a safe place, follow these steps:

• From the List Dump window, select Software Dumps, select the configuration dump that you want to save, and right-click to save it.

Figure 8-398 on page 671 shows saving a software dump on the Software Dumps window.

Change the default names: To avoid getting the CMMVC messages that are shown in Figure 8-397, you need to replace all the default names, for example, mdisk1, vdisk1, and so on.


Figure 8-398 Software Dumps list with options

After you have saved your configuration file, it is presented as an XML (.xml) file.

Figure 8-399 shows an SVC backup configuration file example.

Figure 8-399 SVC backup configuration file example


8.18.3 Restoring the SVC configuration

It is extremely important that you perform the configuration backup that is described in 8.18.1, “Backup procedure” on page 669 periodically, and every time after you change the configuration of your cluster.

Carry out the restore procedure only under the direction of IBM Level 3 support.

8.18.4 Deleting the configuration backup files

This section details the tasks that you can perform to delete the configuration backup files from the default folder on the SVC Master Console. Perform this task only if you have already copied the files to another external and secure place.

To delete the SVC configuration backup files, perform the following steps:

1. From the SVC Welcome window, click Service and Maintenance and then Delete Backup.

2. In the Deleting a Cluster Configuration window (Figure 8-400), click OK to confirm the deletion of the C:\Program Files\IBM\svcconsole\cimom\backup\SVCclustername folder (where SVCclustername is the SVC cluster name on which you are working) on the SVC Master Console and all its contents.

Figure 8-400 Deleting a Cluster Configuration window

3. Click Delete to confirm the deletion of the configuration backup data. See Figure 8-401.

Figure 8-401 Deleting a Cluster Configuration confirmation message

4. The cluster configuration is now deleted.

8.18.5 Fabrics

From the Fabrics link in the Service and Maintenance window, you can view the fabrics from the SVC’s point of view. This function can be useful when debugging a SAN problem.

Figure 8-402 on page 673 shows a Viewing Fabrics example.


Figure 8-402 Viewing Fabrics example

8.18.6 Common Information Model object manager log configuration

Because the Common Information Model object manager (CIMOM) has been moved from the Hardware Management Console (System Storage Productivity Center) to the SVC cluster starting with SVC 5.1, you can configure the SVC CIMOM log by using the GUI to set the detail logging level.

Figure 8-403 shows the Configuring CIMOM Log window.

Figure 8-403 CIMOM Configuration Log window

We have completed our discussion of the service and maintenance operational tasks.


Chapter 9. Data migration

In this chapter, we explain how to migrate from a conventional storage infrastructure to a virtualized storage infrastructure by applying the IBM System Storage SAN Volume Controller (SVC). We also explain how the SVC can be phased out of a virtualized storage infrastructure, for example, after a trial period, after using the SVC purely as a data mover because it best meets your data migration performance requirements, or after using it because it provides the best service level agreement (SLA) to your applications during the data migration.

Moreover, we show how to migrate from a fully allocated VDisk to a space-efficient virtual disk (VDisk) by using the VDisk Mirroring feature together with space-efficient VDisks.

We also show an example of using intracluster Metro Mirror to migrate data.


© Copyright IBM Corp. 2010. All rights reserved. 675


9.1 Migration overview

The SVC allows you to change the mapping of VDisk extents to managed disk (MDisk) extents without interrupting host access to the VDisk. This functionality is used when performing VDisk migrations, and it can be performed for any VDisk that is defined on the SVC.

This functionality can be used for these tasks:

• Redistributing VDisks, and therefore the workload, within an SVC cluster across back-end storage:

– Moving workload onto newly installed storage
– Moving workload off of old or failing storage, ahead of decommissioning it
– Moving workload to rebalance a changed workload

• Migrating data from older back-end storage to SVC-managed storage

• Migrating data from one back-end controller to another back-end controller using the SVC as a data block mover and afterward removing the SVC from the SAN

• Migrating data from managed mode back into image mode prior to removing the SVC from a SAN

9.2 Migration operations

You can perform migration at either the VDisk or the extent level, depending on the purpose of the migration. These migration activities are supported:

• Migrating extents within a Managed Disk Group (MDG), redistributing the extents of a given VDisk on the MDisks in the MDG

• Migrating extents off of an MDisk, which is removed from the MDG, to other MDisks in the MDG

• Migrating a VDisk from one MDG to another MDG

• Migrating a VDisk to change the virtualization type of the VDisk to image

• Migrating a VDisk between I/O Groups

9.2.1 Migrating multiple extents (within an MDG)

You can migrate a number of VDisk extents at one time by using the migrateexts command.

When executed, this command migrates a given number of extents from the source MDisk, where the extents of the specified VDisk reside, to a defined target MDisk that must be part of the same MDG.

You can specify a number of migration threads to be used in parallel (from 1 to 4).

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is migrated, while the MDisk access mode transitions from image to managed.

676 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 703: San

The syntax of the command-line interface (CLI) command is:

svctask migrateexts -source src_mdisk_id | src_mdisk_name -exts num_extents -target target_mdisk_id | target_mdisk_name [-threads number_of_threads] -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are defined this way:

• -vdisk: Specifies the VDisk ID or name to which the extents belong.

• -source: Specifies the source MDisk ID or name on which the extents currently reside.

• -exts: Specifies the number of extents to migrate.

• -target: Specifies the target MDisk ID or name onto which the extents are to be migrated.

• -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.
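To show how these parameters fit together, the helper below assembles a migrateexts command string and enforces the 1 to 4 thread limit described above. The function name and its validation are illustrative sketches of our own, not part of the SVC CLI.

```python
def build_migrateexts(vdisk, source, target, num_extents, threads=None):
    """Assemble an svctask migrateexts command line.

    vdisk, source, and target can be IDs or names; threads, if given,
    must be between 1 and 4, as the CLI requires.
    """
    if num_extents < 1:
        raise ValueError("must migrate at least one extent")
    parts = ["svctask", "migrateexts",
             "-source", source, "-exts", str(num_extents),
             "-target", target]
    if threads is not None:
        if not 1 <= threads <= 4:
            raise ValueError("threads must be between 1 and 4")
        parts += ["-threads", str(threads)]
    parts += ["-vdisk", vdisk]
    return " ".join(parts)
```

A wrapper like this makes it harder to issue a structurally invalid command in a migration script.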

9.2.2 Migrating extents off of an MDisk that is being deleted

When an MDisk is deleted from an MDG using the rmmdisk -force command, any occupied extents on the MDisk are migrated off of the MDisk (to other MDisks in the MDG) prior to its deletion.

In this case, the extents that need to be migrated are moved onto the set of MDisks that are not being deleted, and the extents are distributed among those MDisks. This rule also holds when multiple MDisks are being removed from the MDG at the same time: MDisks that are being removed are not candidates for supplying free extents to the extent allocation algorithm.

If a VDisk uses one or more extents that need to be moved as a result of an rmmdisk command, the virtualization type for that VDisk is set to striped (if it was previously sequential or image).

If the MDisk is operating in image mode, the MDisk transitions to managed mode while the extents are being migrated, and upon deletion, it transitions to unmanaged mode.

The syntax of the CLI command follows this format:

svctask rmmdisk -mdisk mdisk_id_list | mdisk_name_list [-force] mdisk_group_id | mdisk_group_name

The parameters for the CLI command are defined this way:

• -mdisk: Specifies one or more MDisk IDs or names to delete from the group.

• -force: Migrates any data that belongs to other VDisks before removing the MDisk.

Using the -force flag: If the -force flag is not supplied and if VDisks occupy extents on one or more of the MDisks that are specified, the command fails.

When the -force flag is supplied and when VDisks exist that are made from extents on one or more of the MDisks that are specified, all extents on the MDisks will be migrated to the other MDisks in the MDG, if there are enough free extents in the MDG. The deletion of the MDisks is postponed until all extents are migrated, which can take time. In the case where there are insufficient free extents in the MDG, the command fails.

When the -force flag is supplied, the command completes asynchronously.
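The free-extent requirement described above can be modelled with a simple check. This is an illustrative sketch of our own, not SVC code: it decides whether removing a set of MDisks from an MDG can succeed, on the basis that the MDisks being removed do not supply free extents.

```python
def can_remove_mdisks(mdg, to_remove):
    """Check whether the occupied extents on the MDisks being removed
    fit into the free extents of the MDisks that stay in the MDG.

    mdg maps an MDisk name to a (used_extents, free_extents) tuple.
    """
    needed = sum(mdg[name][0] for name in to_remove)
    available = sum(free for name, (_, free) in mdg.items()
                    if name not in to_remove)
    return needed <= available
```

Running this kind of check before issuing rmmdisk -force avoids starting a removal that the cluster must fail for lack of free extents.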

Chapter 9. Data migration 677


9.2.3 Migrating a VDisk between MDGs

An entire VDisk can be migrated from one MDG to another MDG using the migratevdisk command. A VDisk can be migrated between MDGs regardless of the virtualization type (image, striped, or sequential), although it transitions to the virtualization type of striped. The command varies depending on the type of migration, as shown in Table 9-1.

Table 9-1 Migration type

The syntax of the CLI command is this format:

svctask migratevdisk -mdiskgrp mdisk_group_id | mdisk_group_name [-threads number_of_threads] [-copy_id] -vdisk vdisk_id | vdisk_name

The parameters for the CLI command are defined this way:

• -vdisk: Specifies the VDisk ID or name to migrate into another MDG.

• -mdiskgrp: Specifies the target MDG ID or name.

• -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.

• -copy_id: Required if the specified VDisk has more than one copy.

For migrations to image mode, the syntax of the migratetoimage CLI command is:

svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

The parameters for the CLI command are:

• -vdisk: Specifies the name or ID of the source VDisk to be migrated.

• -copy_id: Required if the specified VDisk has more than one copy.

• -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)

• -mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration has completed.

• -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.

MDG to MDG type        Command
Managed to managed     migratevdisk
Image to managed       migratevdisk
Managed to image       migratetoimage
Image to image         migratetoimage
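Table 9-1 reduces to a simple rule: the command is determined by the target mode alone. The helper below captures that rule; it is an illustrative sketch of ours, not part of the SVC CLI.

```python
def migration_command(source_mode, target_mode):
    """Return the CLI command for an MDG-to-MDG migration, per Table 9-1:
    migrations to a managed target use migratevdisk, and migrations to an
    image mode target use migratetoimage, regardless of the source mode."""
    valid = ("managed", "image")
    if source_mode not in valid or target_mode not in valid:
        raise ValueError("mode must be 'managed' or 'image'")
    return "migratetoimage" if target_mode == "image" else "migratevdisk"
```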


In Figure 9-1, we illustrate the V3 VDisk migrating from MDG1 to MDG2.

Figure 9-1 Managed VDisk migration to another MDG

Extents are allocated to the migrating VDisk, from the set of MDisks in the target MDG, using the extent allocation algorithm.

The process can be prioritized by specifying the number of threads to use while migrating; using only one thread will put the least background load on the system. If a large number of extents are being migrated, you can specify the number of threads that will be used in parallel (from 1 to 4).

The offline rules apply to both MDGs; therefore, referring back to Figure 9-1, if any of the M4, M5, M6, or M7 MDisks go offline, the V3 VDisk goes offline. If the M4 MDisk goes offline, V3 and V5 go offline, but V1, V2, V4, and V6 remain online.

If the type of the VDisk is image, the VDisk type transitions to striped when the first extent is migrated while the MDisk access mode transitions from image to managed.

For the duration of the move, the VDisk is listed as being a member of the original MDG. For the purposes of configuration, the VDisk moves to the new MDG instantaneously at the end of the migration.

Rule: For the migration to be accepted, the source and destination MDGs must have the same extent size.

(Figure 9-1 labels: SVC1 nodes 1 and 2 in I/O Group 0; RAID Controller A presenting MDisks M1 through M4 in MDG 1; RAID Controller B presenting MDisks M5 through M7 in MDGs 2 and 3; VDisks V1 through V6, with V3 migrating from MDG 1 to MDG 2.)


9.2.4 Migrating the VDisk to image mode

The facility to migrate a VDisk to an image mode VDisk can be combined with the ability to migrate between MDGs. The source for the migration can be a managed mode or an image mode VDisk. This leads to four possibilities:

• Migrate image mode to image mode within an MDG.
• Migrate managed mode to image mode within an MDG.
• Migrate image mode to image mode between MDGs.
• Migrate managed mode to image mode between MDGs.

These conditions must apply to be able to migrate:

• The destination MDisk must be greater than or equal to the size of the VDisk.

• The MDisk that is specified as the target must be in an unmanaged state at the time that the command is run.

• If the migration is interrupted by a cluster recovery, the migration will resume after the recovery completes.

• If the migration involves moving between MDGs, the VDisk behaves as described in 9.2.3, “Migrating a VDisk between MDGs” on page 678.
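The two static preconditions in this list can be checked before issuing the command. The following is an illustrative sketch of our own, not an SVC API: it reports every violated precondition rather than stopping at the first one.

```python
def check_migrate_to_image(vdisk_size, target_mdisk_size, target_mdisk_mode):
    """Validate the static preconditions for migratetoimage: the target
    MDisk must be unmanaged and at least as large as the VDisk.

    Returns a list of problems; an empty list means the migration can
    be attempted.
    """
    problems = []
    if target_mdisk_mode != "unmanaged":
        problems.append("target MDisk must be unmanaged")
    if target_mdisk_size < vdisk_size:
        problems.append("target MDisk is smaller than the VDisk")
    return problems
```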

The syntax of the CLI command is this format:

svctask migratetoimage -copy_id -vdisk source_vdisk_id | name -mdisk unmanaged_target_mdisk_id | name -mdiskgrp managed_disk_group_id | name [-threads number_of_threads]

The parameters for the CLI command are defined this way:

• -copy_id: Required if the specified VDisk has more than one copy.

• -vdisk: Specifies the name or ID of the source VDisk to be migrated.

• -mdisk: Specifies the name of the MDisk to which the data must be migrated. (This MDisk must be unmanaged and large enough to contain the data of the disk being migrated.)

• -mdiskgrp: Specifies the MDG into which the MDisk must be placed after the migration has completed.

• -threads: An optional parameter that specifies the number of threads to use while migrating these extents, from 1 to 4.

Regardless of the mode in which the VDisk starts, it is reported as being in managed mode during the migration. Also, both of the MDisks that are involved are reported as being in image mode during the migration. Upon completion of the command, the VDisk is classified as an image mode VDisk.

9.2.5 Migrating a VDisk between I/O Groups

A VDisk can be migrated between I/O Groups by using the svctask chvdisk command. This command is only supported if the VDisk is not in a FlashCopy mapping or Remote Copy relationship.

To move a VDisk between I/O Groups, the cache must be flushed. The SVC attempts to destage all write data for the VDisk from the cache during the I/O Group move. This flush fails if data has been pinned in the cache for any reason (such as an MDG being offline). By default, this failed flush causes the migration between I/O Groups to fail, but this behavior can be overridden using the -force flag. If the -force flag is used and the SVC is unable to destage all write data from the cache, the contents of the VDisk are corrupted by the loss of the cached data. During the flush, the VDisk operates in cache write-through mode.

You must quiesce host I/O before the migration for two reasons:

• If there is significant data in the cache that takes a long time to destage, the command line will time out.

• Subsystem Device Driver (SDD) vpaths that are associated with the VDisk are deleted before the VDisk move takes place in order to avoid data corruption. Data corruption can therefore occur if I/O is still ongoing at a particular logical unit number (LUN) ID when it is reused for another VDisk.

When migrating a VDisk between I/O Groups, you do not have the ability to specify the preferred node. The preferred node is assigned by the SVC.

The syntax of the CLI command is:

svctask chvdisk [-name new_name_arg] [-iogrp io_group_id | io_group_name [-force]] [-node node_id | node_name] [-rate throttle_rate [-unitmb]] [-udid vdisk_udid] [-warning disk_size | disk_size_percentage] [-autoexpand on | off] [-copy copy_id] [-primary copy_id] [-syncrate percentage_arg] vdisk_name | vdisk_id [-unit b | kb | mb | gb | tb | pb]

For detailed information about the chvdisk command parameters, refer to the SVC command-line interface help by typing this command:

svctask chvdisk -h

Or, refer to the Command Line Interface User’s Guide, SC26-7903-05.

The chvdisk command modifies a single property of a VDisk. To change the VDisk name and to modify the I/O Group, for example, you must issue the command twice. A VDisk that is a member of a FlashCopy or Remote Copy relationship cannot be moved to another I/O Group, and you cannot override this restriction by using the -force flag.
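The eligibility rules for an I/O Group move can be summarized in a small decision function. This is an illustrative model of ours, not any SVC API: FlashCopy or Remote Copy membership always blocks the move, even with -force, while a failed cache flush blocks it only when -force is absent.

```python
def can_move_iogrp(in_flashcopy, in_remote_copy, cache_flush_ok, force=False):
    """Decide whether an I/O Group move would proceed, per the rules above.

    Note that using force with a failed cache flush lets the move proceed
    but, as the text warns, corrupts the VDisk contents.
    """
    if in_flashcopy or in_remote_copy:
        return False          # -force cannot override this restriction
    if not cache_flush_ok and not force:
        return False          # pinned cache data blocks the move
    return True
```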

9.2.6 Monitoring the migration progress

To monitor the progress of ongoing migrations, use the following CLI command:

svcinfo lsmigrate

To determine the extent allocation of MDisks and VDisks, use the following commands:

• To list the VDisk IDs and the corresponding number of extents that the VDisks occupy on the queried MDisk, use the following CLI command:

svcinfo lsmdiskextent <mdiskname | mdisk_id>

• To list the MDisk IDs and the corresponding number of extents that the queried VDisk occupies on those MDisks, use the following CLI command:

svcinfo lsvdiskextent <vdiskname | vdisk_id>

• To list the number of available free extents on an MDisk, use the following CLI command:

svcinfo lsfreeextents <mdiskname | mdisk_id>
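When these commands are run from a script, their output must be parsed. The sketch below assumes the output was captured in a machine-readable form with a colon delimiter (svcinfo commands accept a -delim option); the exact column names shown are illustrative, not a guaranteed output format.

```python
def parse_extent_listing(raw):
    """Parse lsmdiskextent/lsvdiskextent output captured with '-delim :'.

    Assumes a header line followed by 'id:number_of_extents' rows and
    returns a dict mapping each id to its extent count.
    """
    lines = [line.strip() for line in raw.splitlines() if line.strip()]
    result = {}
    for line in lines[1:]:        # skip the header row
        fields = line.split(":")
        result[fields[0]] = int(fields[1])
    return result
```

Summing the returned counts gives the total extents a VDisk holds across MDisks, which is useful when sizing a target MDG before a migration.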

Important: Do not move a VDisk to an offline I/O Group under any circumstance. You must ensure that the I/O Group is online before you move the VDisks to avoid any data loss.


9.3 Functional overview of migration

This section describes the functional view of data migration.

9.3.1 Parallelism

You can perform several of the following activities in parallel.

Per cluster

An SVC cluster supports up to 32 concurrently active instances of the following migration activities:

• Migrate multiple extents
• Migrate between MDGs
• Migrate off of a deleted MDisk
• Migrate to image mode

These high-level migration tasks operate by scheduling single extent migrations:

• Up to 256 single extent migrations can run concurrently. This number is made up of single extent migrates, which result from the operations previously listed.

• The Migrate Multiple Extents and Migrate Between MDGs commands support a flag that allows you to specify the number of “threads” to use, between 1 and 4. This parameter affects the number of extents that will be concurrently migrated for that migration operation. Thus, if the thread value is set to 4, up to four extents can be migrated concurrently for that operation, subject to other resource constraints.

Per MDisk

The SVC supports up to four concurrent single extent migrates per MDisk. This limit does not take into account whether the MDisk is the source or the destination. If more than four single extent migrates are scheduled for a particular MDisk, further migrations are queued pending the completion of one of the currently running migrations.
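The two limits above (256 concurrent single extent migrations cluster-wide, and four per MDisk whether source or destination) can be captured as an admission check. This is an illustrative model of our own, not the SVC scheduler itself:

```python
CLUSTER_EXTENT_LIMIT = 256   # concurrent single extent migrations per cluster
PER_MDISK_LIMIT = 4          # concurrent single extent migrates per MDisk


def admit_extent_migrate(active_total, active_per_mdisk, source, target):
    """Return True if a single extent migrate touching 'source' and
    'target' may start now, False if it must be queued.

    active_per_mdisk maps an MDisk name to its count of running single
    extent migrates; the per-MDisk limit applies to both ends of the move.
    """
    if active_total >= CLUSTER_EXTENT_LIMIT:
        return False
    return (active_per_mdisk.get(source, 0) < PER_MDISK_LIMIT and
            active_per_mdisk.get(target, 0) < PER_MDISK_LIMIT)
```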

Important: After a migration has been started, there is no way for you to stop the migration. The migration runs to completion unless it is stopped or suspended by an error condition, or if the VDisk being migrated is deleted.


9.3.2 Error handling

A migration is suspended or stopped if a medium error occurs on a read from the source while the destination’s medium error table is full, if repeated I/O errors occur on reads from the source, or if the MDisks repeatedly go offline.

The migration will be suspended if any of the following conditions exist; otherwise, it will be stopped:

• The migration is between MDGs and has progressed beyond the first extent. These migrations are always suspended rather than stopped, because stopping a migration in progress leaves a VDisk spanning MDGs, which is not a valid configuration other than during a migration.

• The migration is a Migrate to Image Mode (even if it is processing the first extent). These migrations are always suspended rather than stopped, because stopping a migration in progress leaves the VDisk in an inconsistent state.

• A migration is waiting for a metadata checkpoint that has failed.

If a migration is stopped, and if any migrations are queued awaiting the use of the MDisk for migration, these migrations are now considered. If, however, a migration is suspended, the migration continues to use resources, and so, another migration is not started.

The SVC attempts to resume the migration if the error log entry is marked as fixed by using the CLI or the GUI. If the error condition no longer exists, the migration proceeds. The migration might resume on a node other than the node that started it.

9.3.3 Migration algorithm

This section describes the effect of the migration algorithm.

Chunks

Regardless of the extent size for the MDG, data is migrated in units of 16 MB. In this description, this unit is referred to as a chunk.

We describe the algorithm that is used to migrate an extent:

1. Pause (pause means to queue all new I/O requests in the virtualization layer in SVC and to wait for all outstanding requests to complete) all I/O on the source MDisk on all nodes in the SVC cluster. The I/O to other extents is unaffected.

2. Unpause (resume) I/O on all of the source MDisk extents apart from writes to the specific chunk that is being migrated. Writes to the extent are mirrored to the source and destination.

3. On the node that is performing the migration, for each 256 KB section of the chunk:

– Synchronously read 256 KB from the source.– Synchronously write 256 KB to the target.

4. After the entire chunk has been copied to the destination, repeat the process for the next chunk within the extent.

5. After the entire extent has been migrated, pause all I/O to the extent being migrated, perform a checkpoint on the extent move to on-disk metadata, redirect all further reads to the destination, and stop mirroring writes (writes only to destination).

6. If the checkpoint fails, the I/O is unpaused.

Chapter 9. Data migration 683

Page 710: San

During the migration, the extent can be divided into three regions, as shown in Figure 9-2. Region B is the chunk that is being copied. Writes to Region B are queued (paused) in the virtualization layer waiting for the chunk to be copied. Reads to Region A are directed to the destination, because this data has already been copied. Writes to Region A are written to both the source and the destination extent in order to maintain the integrity of the source extent. Reads and writes to Region C are directed to the source, because this region has yet to be migrated.

The migration of a chunk requires 64 synchronous reads and 64 synchronous writes. During this time, all writes to the chunk from higher layers in the software stack (such as cache destages) are held back. If the back-end storage is operating with significant latency, it is possible that this operation might take minutes to complete, which can have an adverse effect on the overall performance of the SVC. To avoid this situation, if the migration of a particular chunk is still active after one minute, the migration is paused for 30 seconds. During this time, writes to the chunk are allowed to proceed. After 30 seconds, the migration of the chunk is resumed. This algorithm is repeated as many times as necessary to complete the migration of the chunk.
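The region behavior and the chunk arithmetic above can be sketched as a small routing function. This is an illustrative model, not SVC code; it assumes the chunk currently being copied (Region B) starts at a given offset within the extent.

```python
CHUNK = 16 * 1024 * 1024    # data is migrated in 16 MB chunks
SECTION = 256 * 1024        # each chunk is copied in 256 KB sections


def route_io(offset, chunk_start, kind):
    """Decide how an I/O to the extent being migrated is handled.

    Region B spans [chunk_start, chunk_start + CHUNK). Returns
    'paused' (Region B), 'destination' or 'both' (Region A, reads
    versus writes), or 'source' (Region C).
    """
    if chunk_start <= offset < chunk_start + CHUNK:
        return "paused"                  # Region B: queued until copied
    if offset < chunk_start:             # Region A: already copied
        return "destination" if kind == "read" else "both"
    return "source"                      # Region C: not yet copied
```

Dividing the chunk size by the section size confirms the figure of 64 synchronous reads and 64 synchronous writes per chunk.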

Figure 9-2 Migrating an extent

SVC guarantees read stability during data migrations even if the data migration is stopped by a node reset or a cluster shutdown. This read stability is possible, because SVC disallows writes on all nodes to the area being copied, and upon a failure, the extent migration is restarted from the beginning.

At the conclusion of the operation, we will have these results:

• Extents are migrated in 16 MB chunks, one chunk at a time.
• Chunks are either copied, in progress, or not copied.
• When the extent is finished, its new location is saved.

Figure 9-3 on page 685 shows the data migration and write operation relationship.

(Figure 9-2 shows managed disk extents N-1, N, and N+1, not to scale. Within the extent being migrated, the 16 MB chunk divides it into Region A (already copied): reads and writes go to the destination; Region B (copying): reads and writes are paused; and Region C (yet to be copied): reads and writes go to the source.)


Figure 9-3 Migration and write operation relationship

9.4 Migrating data from an image mode VDisk

This section describes migrating data from an image mode VDisk to a fully managed VDisk.

9.4.1 Image mode VDisk migration concept

First, we describe the concepts associated with this operation.

MDisk modes

There are three MDisk modes:

• Unmanaged MDisk:

An MDisk is reported as unmanaged when it is not a member of any MDG. An unmanaged MDisk is not associated with any VDisks and has no metadata stored on it. The SVC will not write to an MDisk that is in unmanaged mode, except when it attempts to change the mode of the MDisk to one of the other modes.

• Image mode MDisk:

Image mode provides a direct block-for-block translation from the MDisk to the VDisk with no virtualization. Image mode VDisks have a minimum size of one block (512 bytes) and always occupy at least one extent. An image mode MDisk is associated with exactly one VDisk.

• Managed mode MDisk:

Managed mode MDisks contribute extents to the pool of available extents in the MDG. Zero or more managed mode VDisks might use these extents.

Transitions between the modes

The following state transitions can occur to an MDisk (see Figure 9-4 on page 686):

• Unmanaged mode to managed mode:

This transition occurs when an MDisk is added to an MDG, which makes the MDisk eligible for the allocation of data and metadata extents.

• Managed mode to unmanaged mode:

This transition occurs when an MDisk is removed from an MDG.

• Unmanaged mode to image mode:

This transition occurs when an image mode VDisk is created on an MDisk that was previously unmanaged. It also occurs when an MDisk is used as the target for a migration to image mode.

• Image mode to unmanaged mode:

There are two distinct ways in which this transition can happen:

– When an image mode VDisk is deleted, the MDisk that supported the VDisk becomes unmanaged.

– When an image mode VDisk is migrated in image mode to another MDisk, the MDisk that is being migrated from remains in image mode until all data has been moved off of it. It then transitions to unmanaged mode.

• Image mode to managed mode:

This transition occurs when the image mode VDisk that is using the MDisk is migrated into managed mode.

• Managed mode to image mode is impossible:

There is no operation that takes an MDisk directly from managed mode to image mode. You can achieve this transition by performing operations that convert the MDisk to unmanaged mode and then to image mode.
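The transitions above form a small state machine, in which the absence of a direct managed-to-image edge is the key property. The following is an illustrative model of ours, not SVC code:

```python
# Valid direct MDisk mode transitions, modelled on Figure 9-4;
# there is deliberately no managed -> image entry.
TRANSITIONS = {
    ("unmanaged", "managed"): "add MDisk to an MDG",
    ("managed", "unmanaged"): "remove MDisk from an MDG",
    ("unmanaged", "image"): "create image mode VDisk, or use as migrate-to-image target",
    ("image", "unmanaged"): "delete image mode VDisk, or migrate all data off",
    ("image", "managed"): "migrate image mode VDisk to managed mode",
}


def transition_allowed(src, dst):
    """Return True if an MDisk can move directly from src to dst mode."""
    return (src, dst) in TRANSITIONS
```

Checking a planned sequence of operations against such a table makes it obvious when an intermediate step through unmanaged mode is required.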

Figure 9-4 Various states of a VDisk

Image mode VDisks have the special property that the last extent in the VDisk can be a partial extent. Managed mode disks do not have this property.

To perform any type of migration activity on an image mode VDisk, the image mode disk must first be converted into a managed mode disk. If the image mode disk has a partial last extent, this last extent in the image mode VDisk must be the first extent to be migrated. This migration is handled as a special case.

(Figure 9-4 is a state diagram with the states Not in group, Managed mode, Image mode, and Migrating to image mode, and the transitions add to group, remove from group, create image mode vdisk, delete image mode vdisk, start migrate to managed mode, start migrate to image mode, and complete migrate.)


After this special migration operation has occurred, the VDisk becomes a managed mode VDisk and is treated in the same way as any other managed mode VDisk. If the image mode disk does not have a partial last extent, no special processing is performed; the image mode VDisk is simply changed into a managed mode VDisk and is treated in the same way as any other managed mode VDisk.

After data is migrated off of a partial extent, there is no way to migrate data back onto the partial extent.

9.4.2 Migration tipsYou have several methods to migrate an image mode VDisk to a managed mode VDisk:

� If your image mode VDisk is in the same MDG as the MDisks on which you want to migrate the extents, you can perform one of these migrations:

– Migrate a single extent. You have to migrate the last extent of the image mode VDisk (number N-1).

– Migrate multiple extents.

– Migrate all of the in-use extents from an MDisk.

– Migrate extents off of an MDisk that is being deleted.

� If you have two MDGs, one MDG for the image mode VDisk, and one MDG for the managed mode VDisks, you can migrate a VDisk from one MDG to another MDG.

The recommended method is to have one MDG for all the image mode VDisks, and other MDGs for the managed mode VDisks, and to use the migrate VDisk facility.

Be sure to verify that enough extents are available in the target MDG.
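For reference, the same MDG-to-MDG move can also be performed from the CLI with the svctask migratevdisk command. This sketch borrows the VDisk and MDG names that are used later in this chapter (W2k8_Log and MDG_DS45); check the free_capacity of the target MDG first:

```
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MDG_DS45
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -mdiskgrp MDG_DS45 -threads 4 -vdisk W2k8_Log
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
```

The svcinfo lsmigrate command reports the progress of the active migrations.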

9.5 Data migration for Windows using the SVC GUI

In this section, we move the two LUNs from a Windows Server 2008 server that is currently attached to a DS4700 storage subsystem over to the SVC.

We then manage those LUNs with the SVC, migrate them from image mode VDisks to fully managed VDisks, migrate one of them back to an image mode VDisk, and finally move it to another image mode VDisk on another storage subsystem, so that those LUNs can then be masked and mapped back to the host directly. This approach, of course, also works if we move the LUN back to the same storage subsystem.

Using this example will help you perform any one of the following activities in your environment:

� Move a Microsoft server’s SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 9.5.2, “Adding the SVC between the host system and the DS4700” on page 690.

� Migrate your image mode VDisk to a VDisk while your host is still running and servicing your business application. You might perform this activity if you were removing a storage subsystem from your SAN environment, or wanting to move the data onto LUNs that are more appropriate for the type of data stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 9.5.4, “Migrating the VDisk from image mode to managed mode” on page 700.

Chapter 9. Data migration 687

Page 714: San

� Migrate your VDisk to an image mode VDisk. You might perform this activity if you were removing the SVC from your SAN environment after a trial period. We describe this step in detail in 9.5.5, “Migrating the VDisk from managed mode to image mode” on page 702.

� Move an image mode VDisk to another image mode VDisk. Use this procedure to migrate data from one storage subsystem to the other storage subsystem. We describe this step in detail in 9.6.6, “Migrate the VDisks to image mode VDisks” on page 728.

You can use these activities individually, or together, to migrate your server’s LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool.

The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

9.5.1 Windows Server 2008 host system connected directly to the DS4700

In our example configuration, we use a Windows Server 2008 host, a DS4700, and a DS4500. The host has two LUNs (drives X and Y), which are part of one DS4700 array. Before the migration, LUN masking is defined in the DS4700 to give the Windows Server 2008 host system access to the two volumes, labeled X and Y (see Figure 9-6 on page 689).

Figure 9-5 shows the starting zoning scenario.

Figure 9-5 Starting zoning scenario

Figure 9-6 on page 689 shows the two LUNs (drive X and Y).


Figure 9-6 Drives X and Y

Figure 9-7 shows the properties of one of the DS4700 disks using the Subsystem Device Driver DSM (SDDDSM). The disk appears as an IBM 1814 FAStT multipath device.

Figure 9-7 Disk properties


9.5.2 Adding the SVC between the host system and the DS4700

Figure 9-8 shows the new environment with the SVC and a second storage subsystem attached to the SAN. The second storage subsystem is not required to migrate to the SVC, but in the following examples, we show that it is possible to move data across storage subsystems without any host downtime.

Figure 9-8 Add SVC and second storage subsystem

To add the SVC between the host system and the DS4700 storage subsystem, perform the following steps:

1. Check that you have installed supported device drivers on your host system.

2. Check that your SAN environment fulfills the supported zoning configurations.

3. Shut down the host.

4. Change the LUN masking in the DS4700. Mask the LUNs to the SVC, and remove the masking for the host.

Figure 9-9 on page 691 shows the two LUNs with LUN IDs 12 and 13 remapped to SVC ITSO-CLS3.


Figure 9-9 LUNs remapped

5. Log on to your SVC Console, expand Work with Managed Disks, and click Managed Disks. Select Discover Managed Disks in the drop-down list, and click Go (Figure 9-10).

Figure 9-10 Discover managed disks

Figure 9-11 on page 692 shows the two LUNs discovered as Mdisk12 and Mdisk13.


Figure 9-11 Mdisk12 and Mdisk13 discovered

6. Now, we create one new empty MDG for each MDisk that we want to use to create an image mode VDisk later. Expand Work with Managed Disks, and click Managed Disk Groups. Select Create an MDisk Group in the drop-down list, and click Go.

Figure 9-12 shows the MDisk Group creation.

Figure 9-12 MDG creation

7. Click Next.

8. Type the MDG name, MDG_img_1. Do not select any MDisk, as shown in Figure 9-13 on page 693, and then, click Next.


Figure 9-13 MDG for image VDisk creation

9. Choose the extent size that you want to use, as shown in Figure 9-14, and then, click Next. Remember that the extent size that you choose must match the extent size of the MDG to which you will migrate your data later.

Figure 9-14 Extent size selection

10.Now, click Finish to complete the MDG creation.

Figure 9-15 shows the completion window.

Figure 9-15 Completion window


11.Now, we create the new VDisks named W2k8_Log and W2k8_Data by using the two newly discovered MDisks and the MDGs that we created in step 6.

12.Expand Work with Virtual Disks and click Virtual Disks. As shown in Figure 9-16, select Create an Imagemode VDisk from the drop-down list, and click Go.

Figure 9-16 Image VDisk creation

13.The Create Image Mode Virtual Disk window (Figure 9-17) opens. Click Next.

Figure 9-17 Create Image Mode Virtual Disk window

14.Type the name that you want to use for the VDisk, and select the attributes; in our case, the name is W2k8_Log. Click Next (Figure 9-18 on page 695).


Figure 9-18 Set the attributes for the image mode VDisk

15.Select the MDisk to use to create the image mode VDisk, and click Next (Figure 9-19).

Figure 9-19 Select the MDisk to use to create your image mode VDisk

16.Select an I/O Group, the Preferred Node, and the MDisk group that you previously created. Optionally, you can let the system choose these settings (Figure 9-20 on page 696). Click Next.


Figure 9-20 Select I/O Group and MDisk Group

17.Review the summary, and click Finish to create the image mode VDisk.

Figure 9-21 shows the image VDisk summary and attributes.

Figure 9-21 Verify Attributes window

18.Repeat steps 6 through 17 for each LUN that you want to migrate to the SVC.

19.In the Viewing Virtual Disk view, we see the two newly created VDisks, as shown in Figure 9-22 on page 697. In our example, they are named W2k8_log and W2k8_data.

Multiple nodes: If you have more than two nodes in the cluster, select the I/O Group of the nodes to evenly share the load.


Figure 9-22 Viewing Virtual Disks

20.In the Viewing Managed Disks window (Figure 9-23), we see the two new MDisks are now shown as Image Mode disks. In our example, they are named mdisk12 and mdisk13.

Figure 9-23 Viewing Managed Disks

21.Map the VDisks again to the Windows Server 2008 host system.

22.Expand Work with Virtual Disks, and click Virtual Disks. Select the VDisks, and select Map VDisks to a Host, and click Go (Figure 9-24).

Figure 9-24 Mapping VDisks to a host

23.Choose the host, and enter the Small Computer System Interface (SCSI) LUN IDs. Click OK (Figure 9-25 on page 698).


Figure 9-25 Creating Virtual Disk-to-Host Mappings window

9.5.3 Putting the migrated disks onto an online Windows Server 2008 host

Perform these steps:

1. Start the Windows Server 2008 host system again, and expand Computer Management to see the new disk properties changed to a 2145 Multi-Path Disk Device (Figure 9-26).

Figure 9-26 Disk Management


2. Figure 9-27 shows the Disk Management window.

Figure 9-27 Migrated disks are available

3. Select Start → All Programs → Subsystem Device Driver DSM → Subsystem Device Driver DSM to open the SDDDSM command-line utility (Figure 9-28).

Figure 9-28 Subsystem Device Driver DSM CLI


4. Enter the datapath query device command to check if all paths are available, as planned in your SAN environment (Example 9-1).

Example 9-1 datapath query device command

C:\Program Files\IBM\SDDDSM>datapath query device

Total Devices : 2

DEV#:   0  DEVICE NAME: Disk0 Part0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000007
============================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0  Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL       180          0
    1  Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL         0          0
    2  Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL       145          0
    3  Scsi Port2 Bus0/Disk0 Part0     OPEN    NORMAL         0          0

DEV#:   1  DEVICE NAME: Disk1 Part0   TYPE: 2145   POLICY: OPTIMIZED
SERIAL: 6005076801A680E90800000000000005
============================================================================
Path#      Adapter/Hard Disk          State     Mode     Select     Errors
    0  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL        25          0
    1  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       164          0
    2  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL         0          0
    3  Scsi Port2 Bus0/Disk1 Part0     OPEN    NORMAL       136          0

C:\Program Files\IBM\SDDDSM>
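Output like Example 9-1 can also be checked with a small script. This is a sketch, not an SDDDSM feature: it parses a saved sample of the output (the column layout is assumed to match the example above); in practice, you pipe the datapath query device output into it.

```shell
# Count paths whose State/Mode is not OPEN/NORMAL in datapath output.
# The sample lines mirror the path rows of Example 9-1.
sample='  0  Scsi Port2 Bus0/Disk0 Part0  OPEN  NORMAL  180  0
  1  Scsi Port2 Bus0/Disk0 Part0  OPEN  NORMAL  0  0'
bad=$(printf '%s\n' "$sample" | awk '$2 == "Scsi" && ($6 != "OPEN" || $7 != "NORMAL")' | wc -l)
echo "paths not OPEN/NORMAL: $bad"
```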

9.5.4 Migrating the VDisk from image mode to managed mode

Perform these steps to migrate the image mode VDisk to managed mode:

1. As shown in Figure 9-29 on page 701, select the VDisk. Then, select Migrate a VDisk from the drop-down list, and click Go.


Figure 9-29 Migrate a VDisk

2. Select the MDG to which to migrate the disk, and select the number of threads to use for this process, as shown in Figure 9-30. Click OK.

Figure 9-30 Migrating virtual disks

Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source MDG and the extent size of the target MDG must be equal.


3. The Viewing VDisk Migration Progress window opens and enables you to monitor the migration progress (Figure 9-31).

Figure 9-31 Viewing VDisk Migration Progress window

4. Click the percentage to show more detailed information about this VDisk. During the migration process, the VDisks are still in the old MDG. During the migration, your server is still accessing the data. After the migration is complete, the VDisk is in the new MDG_DS45 MDG and is a striped VDisk.

Figure 9-32 shows the migrated VDisk in the new MDG.

Figure 9-32 VDisk W2k8_log in the new MDG

9.5.5 Migrating the VDisk from managed mode to image mode

You can migrate the VDisk from managed mode to image mode. In this example, we migrate a managed VDisk to an image mode VDisk. Follow these steps:

1. Create an empty MDG, following the same procedure as shown previously, one time for each VDisk that you want to migrate to image mode. These MDGs will host the target MDisks that we will map to our server at the end of the migration.

2. Select the VDisk that you want to migrate, and select Migrate to an Image Mode VDisk from the list (Figure 9-33 on page 703). Click Go.


Figure 9-33 Migrate to an Image Mode VDisk

3. The Introduction window opens. Click Next (Figure 9-34).

Figure 9-34 Introduction to migrating to an image mode VDisk

4. Select the source VDisk copy, and click Next (Figure 9-35).

Figure 9-35 Migrating to an image mode VDisk


5. Select a target MDisk (Figure 9-36). Click Next.

Figure 9-36 Select the Target MDisk window

6. Select an MDG (Figure 9-37). Click Next.

Figure 9-37 Selecting the target MDG

Extent sizes: If you migrate the VDisks to another MDG, the extent size of the source MDG must equal the extent size of the target MDG.


7. Select the number of threads (1 to 4) to use for this migration process. The higher the number, the higher the priority (Figure 9-38). Click Next.

Figure 9-38 Selecting the number of threads

8. Verify the migration attributes (Figure 9-39), and click Finish.

Figure 9-39 Verify Migration Attributes window

9. The progress window opens.

10.Repeat these steps for every VDisk that you want to migrate to an image mode VDisk.

11.Free the data from the SVC by using the procedure that is described in 9.5.7, “Free the data from the SVC” on page 709.

9.5.6 Migrating the VDisk from image mode to image mode

Use this process to move image mode VDisks from one storage subsystem to another storage subsystem without going through fully managed mode. The data stays available for the applications during this migration. This procedure is nearly the same procedure as the procedure in 9.5.5, “Migrating the VDisk from managed mode to image mode” on page 702.

In this section, we describe how to migrate an image mode VDisk to another image mode VDisk. In our example, we migrate the W2k8_Log VDisk to another disk subsystem as an image mode VDisk. The second storage subsystem is a DS4500; a new LUN is configured on the storage and mapped to the SVC cluster. The LUN is available in the SVC as the unmanaged MDisk mdisk11.
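For reference, the GUI steps that follow correspond to the svctask migratetoimage CLI command. This sketch uses the names from our example (the W2k8_Log VDisk and the unmanaged mdisk11 on the DS4500); the target MDG name MDG_img_2 is hypothetical:

```
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk W2k8_Log -mdisk mdisk11 -mdiskgrp MDG_img_2 -threads 4
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
```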

Figure 9-40 shows mdisk11.


Figure 9-40 Unmanaged disk on a DS4500 storage subsystem

To migrate the image mode VDisk to another image mode VDisk, perform the following steps:

1. Check the VDisk to migrate, and select Migrate to an image mode VDisk from the list. Click Go.

Figure 9-41 Migrate to an image mode VDisk

2. The Introduction window opens, as shown in Figure 9-42 on page 707. Click Next.


Figure 9-42 Migrating data to an image mode VDisk

3. Select the VDisk source copy, and click Next (Figure 9-43).

Figure 9-43 Select copy

4. Select a target MDisk, as shown in Figure 9-44 on page 708. Click Next.


Figure 9-44 Select Target MDisk

5. Select a target MDG for the MDisk to join, as shown in Figure 9-45. Click Next.

Figure 9-45 Select MDisk Group window

6. Select the number of threads (1 to 4) to devote to this process, as shown in Figure 9-46. The higher the number, the higher the priority. Click Next.

Figure 9-46 Select the Threads window

7. Verify the migration attributes, as shown in Figure 9-47 on page 709, and click Finish.


Figure 9-47 Verify Migration Attributes window

8. Check the progress window (Figure 9-48), and click Close.

Figure 9-48 Progress window

9. Repeat these steps for all of the image mode VDisks that you want to migrate.

10.If you want to free the data from the SVC, use the procedure that is described in 9.5.7, “Free the data from the SVC” on page 709.

9.5.7 Free the data from the SVC

If your data resides in an image mode VDisk inside the SVC, you can free the data from the SVC. The procedures that are listed next show how to migrate data to an image mode VDisk. Depending on your environment, you might have to follow one of them before freeing the data from the SVC:

� 9.5.5, “Migrating the VDisk from managed mode to image mode” on page 702

� 9.5.6, “Migrating the VDisk from image mode to image mode” on page 705

To free the data from the SVC, we use the delete vdisk command.


If the command succeeds on an image mode VDisk, the underlying back-end storage controller will be consistent with the data that a host might previously have read from the image mode VDisk; that is, all fast write data will have been flushed to the underlying LUN. Deleting an image mode VDisk causes the MDisk that is associated with the VDisk to be ejected from the MDG. The mode of the MDisk will be returned to unmanaged.
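From the CLI, the same removal is performed with the svctask rmvdiskhostmap and svctask rmvdisk commands. This sketch uses our W2k8_Log VDisk; the host name W2k8 is hypothetical:

```
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host W2k8 W2k8_Log
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk W2k8_Log
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk -filtervalue mode=unmanaged
```

After the rmvdisk command completes, the backing MDisk is listed as unmanaged again.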

As shown in Example 9-1 on page 700, the SAN disks currently reside on the SVC 2145 device.

Check that you have installed the supported device drivers on your host system.

To switch back to the storage subsystem, perform the following steps:

1. Shut down your host system.

2. Edit the LUN masking on your storage subsystem. Remove the SVC from the LUN masking, and add the host to the masking.

3. Open the Viewing Virtual Disk-to-Host Mappings window in the SVC Console, mark your host, select Delete a Mapping, and click Go (Figure 9-49).

Figure 9-49 Delete a mapping

4. Confirm the task by clicking Delete (Figure 9-50).

Figure 9-50 Delete a mapping

5. The VDisk is removed from the SVC.

6. Repeat steps 3 and 4 for every disk that you want to free from the SVC.

7. Power on your host system.

Note: This situation only applies to image mode VDisks. If you delete a normal VDisk, all of the data will also be deleted.


9.5.8 Put the freed disks online on Windows Server 2008

Put the disks, which have been freed from the SVC, online on Windows Server 2008:

1. Using your DS4500 Storage Manager interface, now remap the two LUNs that were MDisks back to your Windows Server 2008 server.

2. Open your Computer Management window. Figure 9-51 shows that the LUNs are now back to an IBM 1814 type.

Figure 9-51 IBM 1814 type

3. Open your Disk Management window; you will see that the disks have appeared. You might need to reactivate each disk by using the right-click menu.


Figure 9-52 Windows Server 2008 Disk Management

9.6 Migrating Linux SAN disks to SVC disks

In this section, we move the two LUNs from a Linux server that is currently booting directly off of our DS4000 storage subsystem over to the SVC. We then manage those LUNs with the SVC, move them between other managed disks, and then, finally, move them back to image mode disks, so that those LUNs can be masked and mapped back to the Linux server directly.

Using this example can help you to perform any of the following activities in your environment:

� Move a Linux server’s SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC. Perform this activity first when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. We describe this step in detail in 9.6.2, “Preparing your SVC to virtualize disks” on page 715.

� Move data between storage subsystems while your Linux server is still running and servicing your business application. You might perform this activity if you are removing a storage subsystem from your SAN environment. Or, perform this activity if you want to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking availability, performance, and redundancy into account. We describe this step in 9.6.4, “Migrate the image mode VDisks to managed MDisks” on page 722.

� Move your Linux server’s LUNs back to image mode VDisks so that they can be remapped and remasked directly back to the Linux server. We describe this step in 9.6.5, “Preparing to migrate from the SVC” on page 725.

You can use these three activities individually, or together, to migrate your Linux server’s LUNs from one storage subsystem to another storage subsystem using the SVC as your migration tool. You can also use a subset of these activities to introduce the SVC into, or remove it from, your environment.


The only downtime required for these activities is the time that it takes to remask and remap the LUNs between the storage subsystems and your SVC.

In Figure 9-53, we show our Linux environment.

Figure 9-53 Linux SAN environment

Figure 9-53 shows our Linux server connected to our SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem:

� The LUN with SCSI ID 0 has the host operating system (our host is Red Hat Enterprise Linux V5.1), and this LUN is used to boot the system directly from the storage subsystem. The operating system identifies it as /dev/mapper/VolGroup00-LogVol00.

Linux sees this LUN as our /dev/sda disk.

� We have also mapped a second disk (SCSI LUN ID 1) to the host. It is 5 GB in size, and it is mounted as the /data directory on the /dev/dm-2 disk.

Example 9-2 shows our disks that are directly attached to the Linux hosts.

Example 9-2 Directly attached disks

[root@Palau data]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      10093752   1971344   7601400  21% /
/dev/sda1               101086     12054     83813  13% /boot

SCSI LUN ID 0: To successfully boot a host off of the SAN, you must have assigned the LUN as SCSI LUN ID 0.



tmpfs                  1033496         0   1033496   0% /dev/shm
/dev/dm-2              5160576    158160   4740272   4% /data
[root@Palau data]#

Our Linux server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 9-53 on page 713:

� The Linux server’s host bus adapter (HBA) cards are zoned so that they are in the Green zone with our storage subsystem.

� The two LUNs that have been defined on the storage subsystem, using LUN masking, are directly available to our Linux server.

9.6.1 Connecting the SVC to your SAN fabric

This section describes the basic steps that you take to introduce the SVC into your SAN environment. While this section only summarizes these activities, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network.

If you have an SVC that is already connected, skip to 9.6.2, “Preparing your SVC to virtualize disks” on page 715.

Connecting the SVC to your SAN fabric requires that you perform these tasks:

� Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN. We describe these tasks in much greater detail in Chapter 3, “Planning and configuration” on page 65.

� Create and configure your SVC cluster.

� Create these additional zones:

– An SVC node zone (our Black zone in Figure 9-54 on page 715). This zone only contains all of the ports (or worldwide names (WWN)) for each of the SVC nodes in your cluster. Our SVC is made up of a two node cluster, where each node has four ports. So, our Black zone has eight defined WWNs.

– A storage zone (our Red zone). This zone also has all of the ports/WWNs from the SVC node zone, as well as the ports/WWNs for all the storage subsystems that SVC will virtualize.

– A host zone (our Blue zone). This zone contains the ports/WWNs for each host that will access the VDisk, together with the ports that are defined in the SVC node zone.

We set our environment in this manner. Figure 9-54 on page 715 shows our environment.

Important: Do not put your storage subsystems in the host (Blue) zone. That configuration is unsupported and can lead to data loss.


Figure 9-54 SAN environment with SVC attached

9.6.2 Preparing your SVC to virtualize disks

This section describes the preparation tasks that we performed before taking our Linux server offline.

These activities are all nondisruptive. They do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group

When we move the two Linux LUNs to the SVC, we use them initially in image mode. Therefore, we need a Managed Disk Group (MDG) to hold those disks.

First, we need to create an empty MDG for each of the disks, using the commands in Example 9-3. As the output shows, we name the MDG that holds our boot LUN Palau_SANB, and we name the MDG that holds the data LUN Palau_Data.

Example 9-3 Create an empty MDG

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name Palau_Data -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
6  Palau_SANB online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0



7  Palau_Data online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS1:admin>

Creating your host definition

If you have prepared your zones correctly, the SVC can see the Linux server’s HBA adapters on the fabric (our host only had one HBA).

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 9-4 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates that we have a zone configuration problem.)

Example 9-4 Display HBA port candidates

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89C1CD
210000E08B054CAA
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>

If you do not know the WWN of your Linux server, you can look at which WWNs are currently configured on your storage subsystem for this host. Figure 9-55 shows our configured ports on an IBM DS4700 storage subsystem.

Figure 9-55 Display port WWNs


After verifying that the SVC can see our host (linux2), we create the host entry and assign the WWN to this entry. Example 9-5 shows these commands.

Example 9-5 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Palau -hbawwpn 210000E08B054CAA:210000E08B89C1CD
Host, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Palau
id 0
name Palau
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 210000E08B89C1CD
node_logged_in_count 4
state inactive
WWPN 210000E08B054CAA
node_logged_in_count 4
state inactive
IBM_2145:ITSO-CLS1:admin>

Verify that we can see our storage subsystem

If we set up our zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 9-6).

Example 9-6 Discover storage controller

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS1:admin>

You can rename the storage subsystem to a more meaningful name (if we had multiple storage subsystems that were connected to our SAN fabric, renaming them makes it considerably easier to identify them) with the svctask chcontroller -name command.
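As a sketch, renaming the DS4700 controller from Example 9-6 looks like the following commands (the new name, ITSO_DS4700, is arbitrary):

```
IBM_2145:ITSO-CLS1:admin>svctask chcontroller -name ITSO_DS4700 1
IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
```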

Get the disk serial numbers

To help avoid the risk of creating the wrong VDisks from all of the available, unmanaged MDisks (in case the SVC sees many available, unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive and choose Properties. Our serial numbers are shown in Figure 9-56 on page 718 and in Figure 9-57 on page 718.


Figure 9-56 Obtaining the disk serial number

Figure 9-57 Obtaining the disk serial number


Before we move the LUNs to the SVC, we must configure the host multipath configuration for the SVC. Edit your multipath.conf file and restart the multipathd daemon, as shown in Example 9-7, adding the content of Example 9-8 to the file.

Example 9-7 Edit the multipath.conf file

[root@Palau ~]# vi /etc/multipath.conf
[root@Palau ~]# service multipathd stop
Stopping multipathd daemon:                                [  OK  ]
[root@Palau ~]# service multipathd start
Starting multipathd daemon:                                [  OK  ]
[root@Palau ~]#

Example 9-8 Data to add to the multipath.conf file

# SVC
device {
        vendor "IBM"
        product "2145CF8"
        path_grouping_policy group_by_serial
}

We are now ready to move the ownership of the disks to the SVC, to discover them as MDisks, and to give them back to the host as VDisks.

9.6.3 Move the LUNs to the SVC

In this step, we move the LUNs that are assigned to the Linux server and reassign them to the SVC.

Our Linux server has two LUNs: One LUN is for our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host.

If we only wanted to move the LUN that holds our application and data files, we do not have to reboot the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity during the reassignment.

The following steps are required, because we intend to move both LUNs at the same time:

1. Confirm that the multipath.conf file is configured for SVC.

2. Shut down the host.

If you are only moving the LUNs that contain the application and data, follow this procedure instead:

a. Stop the applications that are using the LUNs.

b. Unmount those file systems with the umount MOUNT_POINT command.

c. If the file systems are a logical volume manager (LVM) volume, deactivate that Volume Group with the vgchange -a n VOLUMEGROUP_NAME.

d. If possible, also unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, we do not provide those details here.
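Steps b through d can be sketched as follows. All of the specific names here are assumptions for illustration: the /data mount point is from our example, but the volume group (VolGroup01) and the HBA driver module (qla2xxx) depend on your installation:

```
umount /data                 # step b: unmount the file system
vgchange -a n VolGroup01     # step c: deactivate the (hypothetical) LVM volume group
rmmod qla2xxx                # step d (optional): unload the (hypothetical) HBA driver module
```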


3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the Linux server and remap and remask the disks to the SVC.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available MDisk number (starting from 0). Example 9-9 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.

Example 9-9 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 mdisk26 online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 mdisk27 online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-10).

Example 9-10 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauS mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name md_palauD mdisk27
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online unmanaged 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online unmanaged 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

LUN IDs: Even though we are using boot from SAN, you can map the boot disk to the SVC with any LUN number. It does not have to be 0 until later, when we configure the VDisk-to-host mapping in the SVC.

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk task display) with the serial number that you recorded earlier (in Figure 9-56 and Figure 9-57 on page 718).

720 Implementing the IBM System Storage SAN Volume Controller V5.1


6. We create our image mode VDisks with the svctask mkvdisk command and the -vtype image option (Example 9-11). This command virtualizes the disks in the exact same layout as though they were not virtualized.

Example 9-11 Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_SANB -iogrp 0 -vtype image -mdisk md_palauS -name palau_SANB
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp Palau_Data -iogrp 0 -vtype image -mdisk md_palauD -name palau_Data
Virtual Disk, id [30], successfully created

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
29 palau_SANB 0 io_grp0 online 4 Palau_SANB 12.0GB image 60050768018301BF280000000000002B 0 1
30 palau_Data 0 io_grp0 online 4 Palau_Data 5.0GB image 60050768018301BF280000000000002C 0 1

7. Map the new image mode VDisks to the host (Example 9-12).

Example 9-12 Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 0 palau_SANB
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Palau -scsi 1 palau_Data
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
0 Palau 0 29 palau_SANB 210000E08B89C1CD 60050768018301BF280000000000002B
0 Palau 1 30 palau_Data 210000E08B89C1CD 60050768018301BF280000000000002C

Important: Make sure that you map the boot VDisk with SCSI ID 0 to your host. The host must be able to identify the boot volume during the boot process.


8. Power on your host server and enter your Fibre Channel (FC) HBA adapter BIOS before booting the operating system, and make sure that you change the boot configuration so that it points to the SVC. In our example, we performed the following steps on a QLogic HBA:

a. Press Ctrl+Q to enter the HBA BIOS.

b. Open Configuration Settings.

c. Open Selectable Boot Settings.

d. Change the entry from your storage subsystem to the SVC 2145 LUN with SCSI ID 0.

e. Exit the menu and save your changes.

9. Boot up your Linux operating system.

If you only moved the application LUN to the SVC and left your Linux server running, you only need to follow these steps to see the new VDisk:

a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new VDisks (these details are beyond the scope of this book).

b. Check your syslog, and verify that the kernel found the new VDisks. On Red Hat Enterprise Linux, the syslog is written to the /var/log/messages file.

c. If your application and data are on an LVM volume, run the vgscan command to rediscover the Volume Group, and then run the vgchange -a y VOLUME_GROUP command to activate the Volume Group.

10. Mount your file systems with the mount /MOUNT_POINT command (Example 9-13). The df output shows us that all of the disks are available again.

Example 9-13 Mount data disk

[root@Palau data]# mount /dev/dm-2 /data
[root@Palau data]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 10093752 1938056 7634688 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau data]#

11. You are now ready to start your application.
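Steps a through c, plus the mount in step 10, can be collected into one dry-run helper. Everything below is a hedged sketch: the driver name, Volume Group, device, and mount point are placeholders for your own values, and the function only prints the commands rather than executing them:

```shell
# Hedged sketch: print the reattach sequence (reload driver, rediscover and
# activate the Volume Group, mount the file system) as a dry run.
# All four arguments are placeholders, not values from the book.
reattach_cmds() {
    driver=$1; vg=$2; dev=$3; mnt=$4
    echo "modprobe $driver"      # reload the HBA driver
    echo "vgscan"                # rediscover volume groups
    echo "vgchange -a y $vg"     # activate the volume group
    echo "mount $dev $mnt"       # mount the file system
}

# Dry run with illustrative values:
reattach_cmds qla2xxx VolGroup01 /dev/dm-2 /data
```

Review the printed lines and run them as root on the host once they match your configuration.
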

9.6.4 Migrate the image mode VDisks to managed MDisks

While the Linux server is still running, and while our file systems are in use, we migrate the image mode VDisks onto striped VDisks, with the extents spread over the other three MDisks. In our example, the three new LUNs are located on a DS4500 storage subsystem, so we also move to another storage subsystem in this example.

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy process has completed before starting your application.


Preparing MDisks for striped mode VDisks

From our second storage subsystem, we have performed these tasks:

• Created and allocated three new LUNs to the SVC
• Discovered them as MDisks
• Renamed these LUNs to more meaningful names
• Created a new MDG
• Placed all of these MDisks into this MDG

You can see the output of our commands in Example 9-14.

Example 9-14 Create a new MDG

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MD_palauVD -ext 512
MDisk Group, id [8], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 mdisk28 online unmanaged 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 mdisk29 online unmanaged 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 mdisk30 online unmanaged 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md1 mdisk28
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md2 mdisk29
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name palau-md3 mdisk30
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md1 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md2 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk palau-md3 MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
26 md_palauS online image 6 Palau_SANB 12.0GB 0000000000000008 DS4700 600a0b800026b2820000428f48739bca00000000000000000000000000000000
27 md_palauD online image 7 Palau_Data 5.0GB 0000000000000009 DS4700 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000


29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrate the VDisks

We are now ready to migrate the image mode VDisks onto striped VDisks in the MD_palauVD MDG with the svctask migratevdisk command (Example 9-15).

While the migration is running, our Linux server is still running.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-15. Listing the MDG with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDGs is slowly increasing as those extents are moved to the new MDG.

Example 9-15 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_SANB -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk palau_Data -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 25
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 70
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>
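To watch the migration without retyping svcinfo lsmigrate, you can parse its progress fields. The helper below is a hedged sketch: it reads lsmigrate-style output (the field layout shown in Example 9-15) from stdin; on a live cluster you would pipe the real command output in and loop with a sleep until nothing is printed (no migrations remain):

```shell
# Hedged sketch: extract the per-migration "progress" values from
# svcinfo lsmigrate-style output supplied on stdin.
lsmigrate_progress() {
    awk '$1 == "progress" {print $2}'
}

# Simulated lsmigrate output with two active migrations:
printf 'migrate_type MDisk_Group_Migration\nprogress 25\nmigrate_type MDisk_Group_Migration\nprogress 70\n' | lsmigrate_progress
```

On the real system, something like `ssh admin@cluster svcinfo lsmigrate | lsmigrate_progress` inside a polling loop would give you a simple progress monitor.
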

After this task has completed, Example 9-16 shows that the VDisks are now spread over three MDisks.

Example 9-16 Migration complete

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp MD_palauVD
id 8
name MD_palauVD
status online
mdisk_count 3
vdisk_count 2
capacity 24.0GB
extent_size 512
free_capacity 7.0GB
virtual_capacity 17.00GB
used_capacity 17.00GB


real_capacity 17.00GB
overallocation 70
warning 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_SANB
id
28
29
30
IBM_2145:ITSO-CLS1:admin>svcinfo lsvdiskmember palau_Data
id
28
29
30
IBM_2145:ITSO-CLS1:admin>

Our migration to striped VDisks on another storage subsystem (DS4500) is now complete. The original MDisks (md_palauS and md_palauD) can now be removed from the SVC, and these LUNs can be removed from the storage subsystem.

If these LUNs are the last LUNs that were used on our DS4700 storage subsystem, we can remove it from our SAN fabric.

9.6.5 Preparing to migrate from the SVC

Before we move the Linux server's LUNs from being accessed by the SVC as VDisks to being directly accessed from the storage subsystem, we must convert the VDisks into image mode VDisks.

You might want to perform this activity for any one of these reasons:

• You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.

• You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.

• You want to ship a host, and its data, that is currently connected to the SVC to a site where there is no SVC.

• Changes to your environment no longer require this host to use the SVC.

There are also other preparation activities that we can perform before we have to shut down the host and reconfigure the LUN masking and mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that the storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 9-58 on page 726.


Figure 9-58 Environment with SVC

Making fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. You must add the new storage subsystem to the Red zone so that the SVC can talk to it directly.

We also need a Green zone for our host to use when we are ready for it to directly access the disk after it has been removed from the SVC.

It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC sees the new storage subsystem’s controller using the svcinfo lscontroller command, as shown in Figure 9-10 on page 691. It is also a good idea to rename the new storage subsystem’s controller to a more useful name, which can be done with the svctask chcontroller -name command.

Creating new LUNs

On our storage subsystem, we created two LUNs and masked the LUNs so that the SVC can see them. Eventually, we will give these two LUNs directly to the host, removing the VDisks that the host currently has. To check that the SVC can use these two LUNs, issue the svctask detectmdisk command, as shown in Example 9-17.

Example 9-17 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID



0 mdisk0 online managed 600a0b800026b282000042f84873c7e100000000000000000000000000000000
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdisk31 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdisk32 online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to more meaningful names, so that they do not get confused with other MDisks that are used by other activities. Also, we create the MDGs to hold our new MDisks, which is shown in Example 9-18.

Example 9-18 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name mdpalau_ivd mdisk32
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
MDisk Group, id [9], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Palauivd -ext 512
CMMVC5758E Object name already exists.
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
8 MD_palauVD online 3 2 24.0GB 512 7.0GB 17.00GB 17.00GB 17.00GB 70 0
9 MDG_Palauivd online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is now ready for the VDisk migration to image mode VDisks.


9.6.6 Migrate the VDisks to image mode VDisks

While our Linux server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is the svctask migratetoimage command, which is shown in Example 9-19.

Example 9-19 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_SANB -mdisk mdpalau_ivd -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk palau_Data -mdisk mdpalau_ivd1 -mdiskgrp MD_palauVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
28 palau-md1 online managed 8 MD_palauVD 8.0GB 0000000000000010 DS4500 600a0b8000174233000000b9487778ab00000000000000000000000000000000
29 palau-md2 online managed 8 MD_palauVD 8.0GB 0000000000000011 DS4500 600a0b80001744310000010f48776bae00000000000000000000000000000000
30 palau-md3 online managed 8 MD_palauVD 8.0GB 0000000000000012 DS4500 600a0b8000174233000000bb487778d900000000000000000000000000000000
31 mdpalau_ivd1 online image 8 MD_palauVD 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online image 8 MD_palauVD 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 4
migrate_source_vdisk_index 29
migrate_target_mdisk_index 32
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 30
migrate_source_vdisk_index 30
migrate_target_mdisk_index 31
migrate_target_mdisk_grp 8
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

During the migration, our Linux server is unaware that its data is being physically moved between storage subsystems.

After the migration has completed, the image mode VDisks are ready to be removed from the Linux server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem’s tool.


9.6.7 Removing the LUNs from the SVC

The next step requires downtime on the Linux server, because we will remap and remask the disks so that the host sees them directly through the Green zone, as shown in Figure 9-58 on page 726.

Our Linux server has two LUNs: one LUN is our boot disk and operating system file systems, and the other LUN holds our application and data files. Moving both LUNs at one time requires shutting down the host.

If we only want to move the LUN that holds our application and data files, we can move that LUN without rebooting the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure the data integrity during the reassignment.

Because we intend to move both LUNs at the same time, the following steps are required:

1. Confirm that your operating system is configured for the new storage.

2. Shut down the host.

If you are only moving the LUNs that contain the application and data, you can follow this procedure instead:

a. Stop the applications that are using the LUNs.

b. Unmount those file systems with the umount MOUNT_POINT command.

c. If the file systems are an LVM volume, deactivate that Volume Group with the vgchange -a n VOLUMEGROUP_NAME command.

d. If you can, unload your HBA driver using the rmmod DRIVER_MODULE command. This command removes the SCSI definitions from the kernel (we will reload this module and rediscover the disks later). It is possible to tell the Linux SCSI subsystem to rescan for new disks without requiring you to unload the HBA driver; however, we do not provide these details here.

3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-20). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the Linux server.

Example 9-20 Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Palau palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap Palau
IBM_2145:ITSO-CLS1:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command. This step makes them unmanaged, as seen in Example 9-21 on page 730.

Before you start: Moving LUNs to another storage subsystem might require an additional entry in the multipath.conf file. Check with the storage subsystem vendor to determine which content you must add to the file. You might be able to install and modify the file ahead of time.


Example 9-21 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_SANB
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk palau_Data
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
31 mdpalau_ivd1 online unmanaged 6.0GB 0000000000000013 DS4500 600a0b8000174233000000bd4877890f00000000000000000000000000000000
32 mdpalau_ivd online unmanaged 12.5GB 0000000000000014 DS4500 600a0b80001744310000011048777bda00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the Linux server.

6. Power on your host server and enter your FC HBA BIOS before booting the operating system. Make sure that you change the boot configuration so that it points directly to your storage subsystem again. In our example, we performed the following steps on a QLogic HBA:

a. Press Ctrl+Q to enter the HBA BIOS.

b. Open Configuration Settings.

c. Open Selectable Boot Settings.

Cached data: When you run the svctask rmvdisk command, the SVC will first double-check that there is no outstanding dirty cached data for the VDisk that is being removed. If there is still uncommitted cached data, the command fails with the following error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk.

The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage, and how busy the I/O subsystem is, determines how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:

empty No modified data exists in the cache.
not_empty Modified data might exist in the cache.
corrupt Modified data might have existed in the cache, but any data has been lost.
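This check can be scripted before issuing svctask rmvdisk. The sketch below is hedged: it parses lsvdisk-style text supplied on stdin (sample data here; on a real cluster you would pipe the live command output in) and only looks at the fast_write_state line:

```shell
# Hedged sketch: decide whether a VDisk still has uncommitted cached data by
# inspecting the fast_write_state attribute in svcinfo lsvdisk output.
# Input arrives on stdin; the sample below is illustrative.
cache_is_empty() {
    grep -q '^fast_write_state empty$'
}

if printf 'id 29\nname palau_SANB\nfast_write_state empty\n' | cache_is_empty; then
    echo "cache empty: safe to remove the VDisk"
else
    echo "cache not empty: wait for destage"
fi
```

On a live system you would replace the printf with something like `ssh admin@cluster svcinfo lsvdisk palau_SANB | cache_is_empty`.
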

Important: If one of the disks is used to boot your Linux server, you must make sure that it is presented back to the host as SCSI ID 0, so that the FC adapter BIOS finds that disk during its initialization.


d. Change the entry from the SVC to your storage subsystem LUN with SCSI ID 0.

e. Exit the menu and save your changes.

7. We are now ready to restart the Linux server.

If all of the zoning and LUN masking and mapping were done successfully, our Linux server boots as though nothing has happened.

If you only moved the application LUN away from the SVC and left your Linux server running, you must follow these steps to see the new disk:

a. Load your HBA driver with the modprobe DRIVER_NAME command. If you did not (and cannot) unload your HBA driver, you can issue commands to the kernel to rescan the SCSI bus to see the new disks (these details are beyond the scope of this book).

b. Check your syslog, and verify that the kernel found the new disks. On Red Hat Enterprise Linux, the syslog is written to the /var/log/messages file.

c. If your application and data are on an LVM volume, run the vgscan command to rediscover the Volume Group, and then, run the vgchange -a y VOLUME_GROUP command to activate the Volume Group.

8. Mount your file systems with the mount /MOUNT_POINT command (Example 9-22). The df output shows us that all of the disks are available again.

Example 9-22 File system after migration

[root@Palau ~]# mount /dev/dm-2 /data
[root@Palau ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00 10093752 1938124 7634620 21% /
/dev/sda1 101086 12054 83813 13% /boot
tmpfs 1033496 0 1033496 0% /dev/shm
/dev/dm-2 5160576 158160 4740272 4% /data
[root@Palau ~]#

9. You are ready to start your application.

10. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline and are then automatically removed when the SVC determines that no VDisks are associated with them.

Important: This is the last step that you can perform and still safely back out everything that you have done so far.

Up to this point, you can reverse all of the actions that you have performed so far to get the server back online without data loss:

• Remap and remask the LUNs back to the SVC.

• Run the svctask detectmdisk command to rediscover the MDisks.

• Recreate the VDisks with the svctask mkvdisk command.

• Remap the VDisks back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data loss.


9.7 Migrating ESX SAN disks to SVC disks

In this section, we move the two LUNs from our VMware ESX server to the SVC. The ESX operating system is installed locally on the host, but the two SAN disks are connected, and the virtual machines are stored on them.

We then manage those LUNs with the SVC, migrate them between other managed disks, and finally move them back to image mode disks so that they can be masked and mapped back to the VMware ESX server directly.

This example can help you perform any one of the following activities in your environment:

• Move your ESX server's data LUNs (the VMware VMFS file systems where your virtual machines are stored), which are directly accessed from a storage subsystem, to virtualized disks under the control of the SVC.

• Move LUNs between storage subsystems while your VMware virtual machines are still running. You might perform this activity to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking into account availability, performance, and redundancy. We describe this step in 9.7.4, "Migrating the image mode VDisks" on page 742.

• Move your VMware ESX server's LUNs back to image mode VDisks so that they can be remapped and remasked directly back to the server. This step starts in 9.7.5, "Preparing to migrate from the SVC" on page 745.

You can use these activities individually, or together, to migrate your VMware ESX server's LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. If you do not need all three activities, you can use only those that apply, for example, to introduce the SVC into your environment or to move data between your storage subsystems.

The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

In Figure 9-59 on page 733, we show our starting SAN environment.


Figure 9-59 ESX environment before migration

Figure 9-59 shows our ESX server connected to the SAN infrastructure. It has two LUNs that are masked directly to it from our storage subsystem.

Our ESX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 9-59:

• The ESX server's HBA cards are zoned so that they are in the Green zone with our storage subsystem.

• The two LUNs that are defined on the storage subsystem are, through LUN masking, directly available to our ESX server.

9.7.1 Connecting the SVC to your SAN fabric

This section describes the steps to take to introduce the SVC into your SAN environment. While we only summarize these activities here, you can introduce the SVC into your SAN environment without any downtime to any host or application that also uses your storage area network.

If you have an SVC already connected, skip to the instructions that are given in 9.7.2, “Preparing your SVC to virtualize disks” on page 735.


Be extremely careful connecting the SVC to your storage area network, because it requires you to connect cables to your SAN switches and to alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

Connecting the SVC to your SAN fabric requires you to perform these tasks:

• Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your storage area network.

• Create and configure your SVC cluster.

• Create these additional zones:

– An SVC node zone (the Black zone in our picture in Example 9-45 on page 757). This zone contains only the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two-node cluster where each node has four ports, so our Black zone has eight WWNs defined.

– A storage zone (our Red zone). This zone also has all of the ports or WWNs from the SVC node zone, as well as the ports/WWNs for all of the storage subsystems that the SVC will virtualize.

– A host zone (our Blue zone). This zone contains the ports or WWNs for each host that will access VDisks, together with the ports that are defined in the SVC node zone.

Figure 9-60 on page 735 shows the environment that we set up.

Important: Do not put your storage subsystems in the host (Blue) zone. This configuration is unsupported and can lead to data loss.
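The rule in the Important note can be verified mechanically if you keep your zone membership as WWPN lists. The sketch below is ours, not the book's: the WWPN values are illustrative placeholders, and the function simply fails if any storage port appears in the host (Blue) zone:

```shell
# Hedged sketch: fail if any storage-subsystem WWPN was accidentally placed
# in the host (Blue) zone. Both arguments are space-separated WWPN lists;
# the values used below are illustrative placeholders.
check_blue_zone() {
    blue=$1; storage=$2
    for wwpn in $storage; do
        case " $blue " in
            *" $wwpn "*)
                echo "ERROR: storage port $wwpn is in the host zone"
                return 1 ;;
        esac
    done
    echo "host zone OK"
}

# Host HBA ports in the Blue zone versus a hypothetical storage WWPN:
check_blue_zone "210000E08B89B8C0 210000E08B892BCD" "200400A0B8174233"
```
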


Figure 9-60 SAN environment with SVC attached

9.7.2 Preparing your SVC to virtualize disks

This section describes the preparatory tasks that we perform before taking our ESX server or virtual machines offline. These tasks are all nondisruptive activities, which do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group

When we move the two ESX LUNs to the SVC, they are first used in image mode, and therefore, we need an MDG to hold those disks.

We create an empty MDG, MDG_Nile_VM, to hold those disks by using the command in Example 9-23.

Example 9-23 Creating an empty MDG

IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_Nile_VM -ext 512
MDisk Group, id [3], successfully created

Creating the host definition

If you prepared the zones correctly, the SVC can see the ESX server's HBA adapters on the fabric (our host only had one HBA).


First, we get the WWNs for our ESX server's HBA, because we have many hosts connected to our SAN fabric and in the Blue zone. We want to make sure that we have the correct WWNs to reduce our ESX server's downtime.

Log in to your VMware management console as root, navigate to Configuration, and then select Storage Adapters. The storage adapters, with all of the necessary information, are shown on the right side of this window. Figure 9-61 shows our WWNs, which are 210000E08B89B8C0 and 210000E08B892BCD.

Figure 9-61 Obtain your WWN using the VMware Management Console

Use the svcinfo lshbaportcandidate command on the SVC to list all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 9-24 shows the output of the nodes that it found on our SAN fabric. (If the port did not show up, it indicates that we have a zone configuration problem.)

Example 9-24 Add the host to the SVC

IBM_2145:ITSO-CLS1:admin>svcinfo lshbaportcandidate
id
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
IBM_2145:ITSO-CLS1:admin>
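A quick way to pick your host's ports out of that list is to compare it against the WWNs that you recorded from the VMware console. This small shell sketch is illustrative only: the file names are made up, and on a live system you would capture the real svcinfo lshbaportcandidate output (for example, over ssh) instead of using the here-document.

```shell
# Save the candidate WWNs reported by 'svcinfo lshbaportcandidate'
# (values taken from Example 9-24; substitute your own capture).
cat > candidates.txt <<'EOF'
210000E08B89B8C0
210000E08B892BCD
210000E08B0548BC
210000E08B0541BC
210000E08B89CCC2
EOF

# The WWNs that the VMware console reported for our host (Figure 9-61).
printf '%s\n' 210000E08B89B8C0 210000E08B892BCD > host_wwns.txt

# grep -Fx matches whole lines literally, so a WWN that happens to be a
# substring of another WWN cannot produce a false positive.
grep -Fxf host_wwns.txt candidates.txt
```

If this prints fewer lines than the number of host ports you expect, a port is missing from the candidate list, which points to a zoning problem.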

After verifying that the SVC can see our host, we create the host entry and assign the WWN to this entry. Example 9-25 shows these commands.

Example 9-25 Create the host entry

IBM_2145:ITSO-CLS1:admin>svctask mkhost -name Nile -hbawwpn 210000E08B89B8C0:210000E08B892BCD
Host, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshost Nile
id 1
name Nile
port_count 2


type generic
mask 1111
iogrp_count 4
WWPN 210000E08B892BCD
node_logged_in_count 4
state active
WWPN 210000E08B89B8C0
node_logged_in_count 4
state active
IBM_2145:ITSO-CLS1:admin>

Verify that you can see your storage subsystem

If our zoning has been performed correctly, the SVC can also see the storage subsystem with the svcinfo lscontroller command (Example 9-26).

Example 9-26 Available storage controllers

IBM_2145:ITSO-CLS1:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0 DS4500 IBM 1742-900
1 DS4700 IBM 1814 FAStT

Get your disk serial numbers

To help avoid the risk of creating the wrong VDisks from all of the available unmanaged MDisks (in case the SVC sees many available unmanaged MDisks), we get the LUN serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager provides the LUN serial numbers. Right-click your logical drive, and choose Properties. Figure 9-62 on page 738 and Figure 9-63 on page 738 show our serial numbers.
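When you later compare these serial numbers with the MDisk UIDs, note that Storage Manager typically displays the logical drive ID as colon-separated hex, while the UID that svcinfo lsmdisk shows is the same value without colons, padded with trailing zeros. The following shell sketch of that comparison is illustrative, using values from this chapter; on your system, paste your own serial and UID.

```shell
# Serial as displayed by Storage Manager (colon-separated, illustrative)
# and the UID reported by 'svcinfo lsmdisk' for the same LUN.
serial='60:0a:0b:80:00:26:b2:82:00:00:41:ca:48:6d:14:a5'
uid='600a0b800026b282000041ca486d14a500000000000000000000000000000000'

# Strip the colons and force lowercase so both values use one notation.
normalized=$(printf '%s' "$serial" | tr -d ':' | tr 'A-F' 'a-f')

# The UID is the serial plus zero padding, so a prefix match is enough.
case "$uid" in
  "$normalized"*) echo "match: this MDisk is the expected LUN" ;;
  *)              echo "no match: do not use this MDisk" ;;
esac
```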


Figure 9-62 Obtaining the disk serial number

Figure 9-63 Obtaining the disk serial number

Now, we are ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.


9.7.3 Move the LUNs to the SVC

In this step, we move the LUNs that are assigned to the ESX server and reassign them to the SVC.

Our ESX server has two LUNs, as shown in Figure 9-64.

Figure 9-64 VMware LUNs

The virtual machines are located on these LUNs, so to move these LUNs under the control of the SVC, we do not need to reboot the entire ESX server, but we do have to stop or suspend all VMware guests that are using these LUNs.

Move VMware guest LUNs

To move the VMware LUNs to the SVC, perform the following steps:

1. Using Storage Manager, we identified the LUN number that was presented to the ESX server. Make sure that you record which LUN has which LUN number (Figure 9-65).

Figure 9-65 Identify LUN numbers in IBM DS4000 Storage Manager

2. Next, identify all of the VMware guests that are using this LUN and shut them down. One way to identify them is to highlight the virtual machine and open the Summary tab. The datastore that is used is displayed under Datastore. Figure 9-66 on page 740 shows a Linux virtual machine using the datastore named SLES_Costa_Rica.


Figure 9-66 Identify the LUNs that are used by virtual machines

3. If you have several ESX hosts, also check the other ESX hosts to make sure that there is no guest operating system that is running and using this datastore.

4. Repeat steps 1 to 3 for every datastore that you want to migrate.

5. After the guests are suspended, we use Storage Manager (our storage subsystem management tool) to unmap and unmask the disks from the ESX server and to remap and remask them to the SVC.

6. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named as mdiskN, where N is the next available MDisk number (starting from 0). Example 9-27 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.

Example 9-27 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 mdisk21 online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 mdisk22 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>


7. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-28).

Example 9-28 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_W2k3 mdisk22
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_SLES mdisk21
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk

21 ESX_SLES online unmanaged 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online unmanaged 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

8. We create our image mode VDisks with the svctask mkvdisk command (Example 9-29). The -vtype image parameter ensures that image mode VDisks are created, which means that the virtualized disks have the exact same layout as though they were not virtualized.

Example 9-29 Create the image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_W2k3 -name ESX_W2k3_IVD
Virtual Disk, id [29], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdisk -mdiskgrp MDG_Nile_VM -iogrp 0 -vtype image -mdisk ESX_SLES -name ESX_SLES_IVD
Virtual Disk, id [30], successfully created
IBM_2145:ITSO-CLS1:admin>

9. Finally, we can map the new image mode VDisks to the host. Use the same SCSI LUN IDs as on the storage subsystem for the mapping (Example 9-30).

Example 9-30 Map the VDisks to the host

IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 0 ESX_SLES_IVD
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS1:admin>svctask mkvdiskhostmap -host Nile -scsi 1 ESX_W2k3_IVD
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk command task display) with the serial number that you obtained earlier (in Figure 9-62 and Figure 9-63 on page 738).


10. Using the VMware management console, rescan to discover the new VDisks. Open the Configuration tab, select Storage Adapters, and click Rescan. During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your VDisks appear with new vmhba devices.

11. We are ready to restart the VMware guests.

You have migrated the VMware LUNs successfully to the SVC.

9.7.4 Migrating the image mode VDisks

While the VMware server and its virtual machines are still running, we migrate the image mode VDisks onto striped VDisks, with the extents being spread over three other MDisks.

Preparing MDisks for striped mode VDisks

In this example, we migrate the image mode VDisks to striped VDisks and move the data to another storage subsystem in one step.

Adding a new storage subsystem to SVC

If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, which is shown in Figure 9-67.

Figure 9-67 ESX SVC SAN environment

Make fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red zone so that the SVC can talk to it directly.


We also need a Green zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC.

We assume that you have created the necessary zones.

In our environment, we have performed these tasks:

- Created three LUNs on another storage subsystem and mapped them to the SVC
- Discovered them as MDisks
- Created a new MDG
- Renamed these MDisks to more meaningful names
- Put all of these MDisks into the new MDG

You can see the output of our commands in Example 9-31.

Example 9-31 Create a new MDisk group

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 mdisk23 online unmanaged 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 mdisk24 online unmanaged 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 mdisk25 online unmanaged 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_ESX_VD -ext 512
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD1 mdisk23
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD2 mdisk24
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name IBMESX-MD3 mdisk25
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD1 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD2 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask addmdisk -mdisk IBMESX-MD3 MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
21 ESX_SLES online image 3 MDG_Nile_VM 60.0GB 0000000000000008 DS4700 600a0b800026b282000041ca486d14a500000000000000000000000000000000
22 ESX_W2k3 online image 3 MDG_Nile_VM 70.0GB 0000000000000009 DS4700 600a0b80002904de0000447a486d14cd00000000000000000000000000000000
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000


24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

Migrating the VDisks

We are ready to migrate the image mode VDisks onto striped VDisks in the new MDG (MDG_ESX_VD) with the svctask migratevdisk command (Example 9-32).

While the migration is running, our VMware ESX server, as well as our VMware guests, will remain running.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-32. Listing the MDG with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDG is slowly increasing as those extents are moved to the new MDG.

Example 9-32 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_SLES_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svctask migratevdisk -vdisk ESX_W2k3_IVD -mdiskgrp MDG_ESX_VD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp


id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 2 130.0GB 512 1.0GB 130.00GB 130.00GB 130.00GB 100 0
4 MDG_ESX_VD online 3 0 165.0GB 512 35.0GB 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>
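If you want to watch the progress without rereading the raw output by hand, you can parse the svcinfo lsmigrate listing, which prints a progress line before each migrate_source_vdisk_index line. The following sketch runs against a captured copy of the output shown in Example 9-32; on a live cluster, you would pipe the command output (for example, over ssh) instead of using the here-document, and the file name here is illustrative.

```shell
# Captured 'svcinfo lsmigrate' output (second snapshot from Example 9-32).
cat > lsmigrate.out <<'EOF'
migrate_type MDisk_Group_Migration
progress 1
migrate_source_vdisk_index 30
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 29
migrate_target_mdisk_grp 4
max_thread_count 4
migrate_source_vdisk_copy_id 0
EOF

# Remember the most recent progress value, then print it when the
# matching vdisk index line appears.
awk '$1 == "progress" { p = $2 }
     $1 == "migrate_source_vdisk_index" { print "vdisk " $2 ": " p "% complete" }' lsmigrate.out
```

This prints one line per active migration, for example "vdisk 30: 1% complete"; wrapping the command in a loop with sleep gives a simple progress monitor.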

If you compare the svcinfo lsmdiskgrp output after the migration, as shown in Example 9-33, you can see that all of the virtual capacity has now been moved from the old MDG (MDG_Nile_VM) to the new MDG (MDG_ESX_VD). The mdisk_count column shows that the capacity is now spread over three MDisks.

Example 9-33 List MDisk group

IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3 MDG_Nile_VM online 2 0 130.0GB 512 130.0GB 0.00MB 0.00MB 0.00MB 0 0
4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
IBM_2145:ITSO-CLS1:admin>

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem.

If these LUNs are the last LUNs that were used on our storage subsystem, we can remove the storage subsystem from our SAN fabric.

9.7.5 Preparing to migrate from the SVC

Before we move the ESX server's LUNs from being accessed through the SVC as VDisks back to being accessed directly from the storage subsystem, we need to convert the VDisks into image mode VDisks.

You might want to perform this activity for any one of these reasons:

- You purchased a new storage subsystem, and you were using SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.

- You used SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk, and you no longer need that host connected to the SVC.

- You want to ship a host, and its data, that currently is connected to the SVC, to a site where there is no SVC.

- Changes to your environment no longer require this host to use the SVC.


There are also other preparatory activities that we can perform before we shut down the host and reconfigure the LUN masking and mapping. This section describes those activities. In our example, we will move VDisks that are located on a DS4500 to image mode VDisks that are located on a DS4700.

If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as described in “Adding a new storage subsystem to SVC” on page 742 and “Make fabric zone changes” on page 742.

Creating new LUNs

On our storage subsystem, we create two LUNs and mask the LUNs so that the SVC can see them. These two LUNs will eventually be given directly to the host, replacing the VDisks that it currently has. To check that the SVC can use them, issue the svctask detectmdisk command, as shown in Example 9-34.

Example 9-34 Discover the new MDisks

IBM_2145:ITSO-CLS1:admin>svctask detectmdisk
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID

23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 mdisk26 online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 mdisk27 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to more meaningful names so that they are not confused with other MDisks that are being used by other activities. We also create the MDG that holds our new MDisks. Example 9-35 shows these tasks.

Example 9-35 Rename the MDisks

IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_SLES mdisk26
IBM_2145:ITSO-CLS1:admin>svctask chmdisk -name ESX_IVD_W2K3 mdisk27
IBM_2145:ITSO-CLS1:admin>svctask mkmdiskgrp -name MDG_IVD_ESX -ext 512
MDisk Group, id [5], successfully created
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdiskgrp
id name status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning


4 MDG_ESX_VD online 3 2 165.0GB 512 35.0GB 130.00GB 130.00GB 130.00GB 78 0
5 MDG_IVD_ESX online 0 0 0 512 0 0.00MB 0.00MB 0.00MB 0 0
IBM_2145:ITSO-CLS1:admin>

Our SVC environment is ready for the VDisk migration to image mode VDisks.

9.7.6 Migrating the managed VDisks to image mode VDisks

While our ESX server is still running, we migrate the managed VDisks onto the new MDisks by using image mode VDisks. The command to perform this action is the svctask migratetoimage command, which is shown in Example 9-36.

Example 9-36 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_SLES_IVD -mdisk ESX_IVD_SLES -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svctask migratetoimage -vdisk ESX_W2k3_IVD -mdisk ESX_IVD_W2K3 -mdiskgrp MDG_IVD_ESX
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID
23 IBMESX-MD1 online managed 4 MDG_ESX_VD 55.0GB 000000000000000D DS4500 600a0b8000174233000000b4486d250300000000000000000000000000000000
24 IBMESX-MD2 online managed 4 MDG_ESX_VD 55.0GB 000000000000000E DS4500 600a0b800017443100000108486d182c00000000000000000000000000000000
25 IBMESX-MD3 online managed 4 MDG_ESX_VD 55.0GB 000000000000000F DS4500 600a0b8000174233000000b5486d255b00000000000000000000000000000000
26 ESX_IVD_SLES online image 5 MDG_IVD_ESX 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000
27 ESX_IVD_W2K3 online image 5 MDG_IVD_ESX 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

During the migration, our ESX server is unaware that its data is being physically moved between storage subsystems. The virtual machines that are running on the server continue to run and remain usable.

You can check the migration status with the svcinfo lsmigrate command, as shown in Example 9-37 on page 748.


Example 9-37 The svcinfo lsmigrate command and output

IBM_2145:ITSO-CLS1:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 2
migrate_source_vdisk_index 29
migrate_target_mdisk_index 27
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 12
migrate_source_vdisk_index 30
migrate_target_mdisk_index 26
migrate_target_mdisk_grp 5
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS1:admin>

After the migration has completed, the image mode VDisks are ready to be removed from the ESX server, and the real LUNs can be mapped and masked directly to the host using the storage subsystem’s tool.

9.7.7 Remove the LUNs from the SVC

Your ESX server's configuration determines the order in which your LUNs are removed from the control of the SVC, whether you need to reboot the ESX server, and whether you must suspend the VMware guests.

In our example, we have moved the virtual machine disks, so to remove these LUNs from the control of the SVC, we have to stop or suspend all of the VMware guests that are using these LUNs. Perform the following steps:

1. Check which SCSI LUN IDs are assigned to the migrated disks by using the svcinfo lshostvdiskmap command, as shown in Example 9-38. Compare the vdisk_UID values to match each VDisk name with its SCSI LUN ID.

Example 9-38 Note SCSI LUN IDs

IBM_2145:ITSO-CLS1:admin>svcinfo lshostvdiskmap
id name SCSI_id vdisk_id vdisk_name wwpn vdisk_UID
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029

IBM_2145:ITSO-CLS1:admin>svcinfo lsvdisk
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
0 vdisk_A 0 io_grp0 online 2 MDG_Image 36.0GB image
29 ESX_W2k3_IVD 0 io_grp0 online 4 MDG_ESX_VD 70.0GB striped 60050768018301BF2800000000000029 0 1


30 ESX_SLES_IVD 0 io_grp0 online 4 MDG_ESX_VD 60.0GB striped 60050768018301BF280000000000002A 0 1
IBM_2145:ITSO-CLS1:admin>
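Because the vdisk_UID is the common key between the two listings, you can record the SCSI LUN ID next to each VDisk name before unmapping anything. The following sketch works on a captured copy of the lshostvdiskmap rows shown in Example 9-38; the file name is illustrative, and on a live system you would capture the real command output instead.

```shell
# Captured data rows from 'svcinfo lshostvdiskmap' (header omitted).
cat > hostmap.txt <<'EOF'
1 Nile 0 30 ESX_SLES_IVD 210000E08B892BCD 60050768018301BF280000000000002A
1 Nile 1 29 ESX_W2k3_IVD 210000E08B892BCD 60050768018301BF2800000000000029
EOF

# Field 3 is SCSI_id, field 5 is vdisk_name, field 7 is vdisk_UID.
# Keep this listing; the same SCSI IDs must be reused on the storage
# subsystem when the LUNs are mapped back to the host.
awk '{ print $5 " -> SCSI LUN " $3 " (UID " $7 ")" }' hostmap.txt
```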

2. Shut down or suspend all of the guests that are using these LUNs. You can use the same method that is used in “Move VMware guest LUNs” on page 739 to identify the guests that are using each LUN.

3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-39). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which shows that these VDisks are no longer mapped to the ESX server.

Example 9-39 Remove the VDisks from the host

IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdiskhostmap -host Nile ESX_SLES_IVD

4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which makes the MDisks unmanaged, as shown in Example 9-40.

Example 9-40 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_W2k3_IVD
IBM_2145:ITSO-CLS1:admin>svctask rmvdisk ESX_SLES_IVD
IBM_2145:ITSO-CLS1:admin>svcinfo lsmdisk
id name status mode mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_# controller_name UID

26 ESX_IVD_SLES online unmanaged 120.0GB 000000000000000A DS4700 600a0b800026b282000041f0486e210100000000000000000000000000000000

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the VDisk that is being removed. If there is still uncommitted cached data, the command fails with this error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk.

The SVC automatically destages uncommitted cached data two minutes after the last write activity for the VDisk. The amount of data to destage and how busy the I/O subsystem is determine how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:

empty No modified data exists in the cache.

not_empty Modified data might exist in the cache.

corrupt Modified data might have existed in the cache, but the data has been lost.
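Before running svctask rmvdisk, you can script this check so that the removal only proceeds when the cache is empty. The following sketch parses a captured copy of the svcinfo lsvdisk output, relying on the attribute-name-then-value layout used throughout this chapter; the captured attribute list is abbreviated and illustrative.

```shell
# Abbreviated, illustrative capture of 'svcinfo lsvdisk ESX_SLES_IVD'.
cat > lsvdisk.out <<'EOF'
id 30
name ESX_SLES_IVD
fast_write_state empty
EOF

# Pull out the fast_write_state value and gate the removal on it.
state=$(awk '$1 == "fast_write_state" { print $2 }' lsvdisk.out)
if [ "$state" = "empty" ]; then
  echo "safe to remove: no modified data in cache"
else
  echo "wait: fast_write_state is $state"
fi
```

Wrapping this in a retry loop avoids hitting the CMMVC6212E error while the destage is still in progress.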


27 ESX_IVD_W2K3 online unmanaged 100.0GB 000000000000000B DS4700 600a0b800026b282000041e3486e20cf00000000000000000000000000000000
IBM_2145:ITSO-CLS1:admin>

5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC, and map and mask them back to the ESX server. Remember that we recorded the SCSI LUN IDs in Example 9-38 on page 748. When you map your LUNs on the storage subsystem, use the same SCSI LUN IDs that you used in the SVC.

6. Using the VMware management console, rescan to discover the new disks. Figure 9-68 shows the view before the rescan. Figure 9-69 on page 751 shows the view after the rescan. Note that the size of the LUN has changed, because we have moved to another LUN on another storage subsystem.

Figure 9-68 Before adapter rescan

Important: This is the last step that you can perform and still safely back out of everything you have done so far.

Up to this point, you can reverse all of the actions that you have performed so far to get the server back online without data loss:

- Remap and remask the LUNs back to the SVC.

- Run the svctask detectmdisk command to rediscover the MDisks.

- Recreate the VDisks with the svctask mkvdisk command.

- Remap the VDisks back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data loss.


Figure 9-69 After adapter rescan

During the rescan, you might receive geometry errors when ESX discovers that the old disk has disappeared. Your VDisk will appear with a new vmhba address, and VMware will recognize it as our VMWARE-GUESTS disk.

7. We are now ready to restart the VMware guests.

8. Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks are first discovered as offline and are then automatically removed when the SVC determines that there are no VDisks associated with them.

9.8 Migrating AIX SAN disks to SVC disks

In this section, we move two LUNs that are mapped directly from our DS4000 storage subsystem to an AIX server over to the SVC.

We then manage those LUNs with the SVC, move them between other managed disks, and then finally move them back to image mode disks, so that those LUNs can then be masked and mapped back to the AIX server directly.

If you use this example, it can help you perform any of the following activities in your environment:

- Move an AIX server’s SAN LUNs from a storage subsystem and virtualize those same LUNs through the SVC, which is the first activity that you perform when introducing the SVC into your environment. This section shows that your host downtime is only a few minutes while you remap and remask disks using your storage subsystem LUN management tool. This step starts in 9.8.2, “Preparing your SVC to virtualize disks” on page 754.

- Move data between storage subsystems while your AIX server is still running and servicing your business application. You might perform this activity if you are removing a storage subsystem from your SAN environment, or if you want to move the data onto LUNs that are more appropriate for the type of data that is stored on those LUNs, taking


into account availability, performance, and redundancy. We describe this step in 9.8.4, “Migrating image mode VDisks to VDisks” on page 761.

- Move your AIX server’s LUNs back to image mode VDisks, so that they can be remapped and remasked directly back to the AIX server. This step starts in 9.8.5, “Preparing to migrate from the SVC” on page 763.

Use these activities individually or together to migrate your AIX server’s LUNs from one storage subsystem to another storage subsystem, using the SVC as your migration tool. You can also use a subset of these activities to introduce the SVC into, or remove it from, your environment.

The only downtime that is required for these activities is the time that it takes you to remask and remap the LUNs between the storage subsystems and your SVC.

We show our AIX environment in Figure 9-70.

Figure 9-70 AIX SAN environment

Figure 9-70 shows our AIX server connected to our SAN infrastructure. It has two LUNs (hdisk3 and hdisk4) that are masked directly to it from our storage subsystem.

The hdisk3 disk makes up the itsoaixvg LVM group, and the hdisk4 disk makes up the itsoaixvg1 LVM group, as shown in Example 9-41 on page 753.



Example 9-41 AIX SAN configuration

#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1D-08-02 1814 DS4700 Disk Array Device
hdisk4 Available 1D-08-02 1814 DS4700 Disk Array Device
#lspv
hdisk0 0009cddaea97bf61 rootvg active
hdisk1 0009cdda43c9dfd5 rootvg active
hdisk2 0009cddabaef1d99 rootvg active
hdisk3 0009cdda0a4c0dd5 itsoaixvg active
hdisk4 0009cdda0a4d1a64 itsoaixvg1 active
#

Our AIX server represents a typical SAN environment with a host directly using LUNs that were created on a SAN storage subsystem, as shown in Figure 9-70 on page 752:

- The AIX server’s HBA cards are zoned so that they are in the Green (dotted line) zone with our storage subsystem.

- The two LUNs, hdisk3 and hdisk4, have been defined on the storage subsystem and, using LUN masking, are directly available to our AIX server.

9.8.1 Connecting the SVC to your SAN fabric

This section describes the steps to take to introduce the SVC into your SAN environment. Although this section only summarizes these activities, you can accomplish this task without any downtime to any host or application that also uses your storage area network.

If you have an SVC already connected, skip to 9.8.2, “Preparing your SVC to virtualize disks” on page 754.

Be extremely careful, because connecting the SVC into your storage area network requires you to connect cables to your SAN switches and alter your switch zone configuration. Performing these activities incorrectly can render your SAN inoperable, so make sure that you fully understand the effect of your actions.

Connecting the SVC to your SAN fabric will require you to perform these tasks:

- Assemble your SVC components (nodes, uninterruptible power supply unit, and Master Console), cable the SVC correctly, power the SVC on, and verify that the SVC is visible on your SAN.

- Create and configure your SVC cluster.

- Create these additional zones:

– An SVC node zone (our Black zone in Example 9-54 on page 763). This zone only contains all of the ports (or WWNs) for each of the SVC nodes in your cluster. Our SVC is made up of a two node cluster, where each node has four ports. So, our Black zone has eight defined WWNs.

– A storage zone (our Red zone). This zone also has all of the ports and WWNs from the SVC node zone, as well as the ports and WWNs for all of the storage subsystems that SVC will virtualize.


– A host zone (our Blue zone). This zone contains the ports and WWNs for each host that will access the VDisk, together with the ports that are defined in the SVC node zone.

Figure 9-71 shows our environment.

Figure 9-71 SAN environment with SVC attached

9.8.2 Preparing your SVC to virtualize disks

This section describes the preparatory tasks that we perform before taking our AIX server offline. These tasks are all nondisruptive activities and do not affect your SAN fabric or your existing SVC configuration (if you already have a production SVC in place).

Creating a managed disk group

When we move the two AIX LUNs to the SVC, they are first used in image mode; therefore, we must create a managed disk group (MDG) to hold those disks. We create an empty MDG, named aix_imgmdg, to hold our LUNs, using the commands in Example 9-42 on page 755.

Important: Do not put your storage subsystems in the host (Blue) zone, which is an unsupported configuration and can lead to data loss.
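The rule in this note can be checked mechanically before you commit a zone configuration. The following is a minimal sketch; the zone names and WWPNs in the sample data are illustrative only, not taken from our lab setup:

```python
def unsupported_zones(zones: dict, storage_ports: set, host_ports: set) -> list:
    """Return the names of zones that place storage subsystem ports and
    host ports together without the SVC in between (unsupported)."""
    return sorted(name for name, members in zones.items()
                  if members & storage_ports and members & host_ports)

# Illustrative WWPNs only
zones = {
    "Blue": {"10000000C932A7FB", "5005076801401234"},  # host + SVC ports: fine
    "Bad":  {"10000000C932A7FB", "200400A0B8174233"},  # host + storage: unsupported
}
storage_ports = {"200400A0B8174233"}
host_ports = {"10000000C932A7FB"}
print(unsupported_zones(zones, storage_ports, host_ports))  # ['Bad']
```

Running a check like this against the exported zoning database for each fabric catches the unsupported host-to-storage zoning before it can cause data loss.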


754 Implementing the IBM System Storage SAN Volume Controller V5.1


Example 9-42 Create empty mdiskgroup

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_imgmdg -ext 512
MDisk Group, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name       status mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
7  aix_imgmdg online 0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>

Creating our host definition

If you have prepared the zones correctly, the SVC can see the AIX server's HBA adapters on the fabric (our host has two HBAs, fcs0 and fcs1).

First, we get the WWNs for our AIX server's HBAs, because many hosts are connected to our SAN fabric and are in the Blue zone. We want to make sure that we have the correct WWNs to reduce our AIX server's downtime. Example 9-43 shows the commands to get the WWNs; our host has the WWNs 10000000C932A7FB and 10000000C932A800.

Example 9-43 Discover your WWN

#lsdev -Cc adapter|grep fcs
fcs0 Available 1Z-08 FC Adapter
fcs1 Available 1D-08 FC Adapter
#lscfg -vpl fcs0
  fcs0             U0.1-P2-I4/Q1  FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A68D
        Manufacturer................001E
        Device Specific.(CC)........2765
        FRU Number.................. 00P4495
        Network Address.............10000000C932A7FB
        ROS Level and ID............02C03951
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........03000909
        Device Specific.(Z4)........FF401210
        Device Specific.(Z5)........02C03951
        Device Specific.(Z6)........06433951
        Device Specific.(Z7)........07433951
        Device Specific.(Z8)........20000000C932A7FB
        Device Specific.(Z9)........CS3.91A1
        Device Specific.(ZA)........C1D3.91A1
        Device Specific.(ZB)........C2D3.91A1
        Device Specific.(YL)........U0.1-P2-I4/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9002
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location:  U0.1-P2-I4/Q1
#lscfg -vpl fcs1
  fcs1             U0.1-P2-I5/Q1  FC Adapter

        Part Number.................00P4494
        EC Level....................A
        Serial Number...............1E3120A67B
        Manufacturer................001E
        Device Specific.(CC)........2765
        FRU Number.................. 00P4495
        Network Address.............10000000C932A800
        ROS Level and ID............02C03891
        Device Specific.(Z0)........2002606D
        Device Specific.(Z1)........00000000
        Device Specific.(Z2)........00000000
        Device Specific.(Z3)........02000909
        Device Specific.(Z4)........FF401050
        Device Specific.(Z5)........02C03891
        Device Specific.(Z6)........06433891
        Device Specific.(Z7)........07433891
        Device Specific.(Z8)........20000000C932A800
        Device Specific.(Z9)........CS3.82A1
        Device Specific.(ZA)........C1D3.82A1
        Device Specific.(ZB)........C2D3.82A1
        Device Specific.(YL)........U0.1-P2-I5/Q1

  PLATFORM SPECIFIC

  Name:  fibre-channel
    Model:  LP9000
    Node:  fibre-channel@1
    Device Type:  fcp
    Physical Location:  U0.1-P2-I5/Q1
#
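When you are gathering WWNs from many hosts, it helps to pull the Network Address field out of the lscfg output programmatically rather than by eye. A minimal sketch (the sample string below is an abbreviated fragment of the output format shown in Example 9-43):

```python
import re

def extract_wwns(lscfg_output: str) -> list:
    """Pull the 16-hex-digit WWPNs from `lscfg -vpl fcsN` output by
    reading the Network Address field."""
    return re.findall(r"Network Address\.+([0-9A-Fa-f]{16})", lscfg_output)

sample = """\
        Network Address.............10000000C932A7FB
        ROS Level and ID............02C03951
"""
print(extract_wwns(sample))  # ['10000000C932A7FB']
```

The same pattern works on the combined output of lscfg for every fcs adapter on the host, returning one WWPN per adapter.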

The svcinfo lshbaportcandidate command on the SVC lists all of the WWNs, which have not yet been allocated to a host, that the SVC can see on the SAN fabric. Example 9-44 shows the output of the nodes that it found in our SAN fabric. (If the port did not show up, it indicates that we have a zone configuration problem.)

Example 9-44 Add the host to the SVC

IBM_2145:ITSO-CLS2:admin>svcinfo lshbaportcandidate
id
10000000C932A7FB
10000000C932A800
210000E08B89B8C0
IBM_2145:ITSO-CLS2:admin>


After verifying that the SVC can see our host (Kanaga), we create the host entry and assign the WWN to this entry, as shown with the commands in Example 9-45.

Example 9-45 Create the host entry

IBM_2145:ITSO-CLS2:admin>svctask mkhost -name Kanaga -hbawwpn 10000000C932A7FB:10000000C932A800
Host, id [5], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lshost Kanaga
id 5
name Kanaga
port_count 2
type generic
mask 1111
iogrp_count 4
WWPN 10000000C932A800
node_logged_in_count 2
state inactive
WWPN 10000000C932A7FB
node_logged_in_count 2
state inactive
IBM_2145:ITSO-CLS2:admin>

Verifying that we can see our storage subsystem

If we performed the zoning correctly, the SVC can see the storage subsystem with the svcinfo lscontroller command (Example 9-46).

Example 9-46 Discover the storage controller

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814
IBM_2145:ITSO-CLS2:admin>

Getting the disk serial numbers

To help avoid the risk of creating the wrong VDisks from all of the available unmanaged MDisks (in case the SVC sees many available unmanaged MDisks), we obtain the LUN serial numbers from our storage subsystem administration tool (Storage Manager).

When we discover these MDisks, we confirm that we have the correct serial numbers before we create the image mode VDisks.

If you also use a DS4000 family storage subsystem, Storage Manager will provide the LUN serial numbers. Right-click your logical drive, and choose Properties. Figure 9-72 on page 758 and Figure 9-73 on page 758 show our serial numbers.
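This serial-to-UID comparison can also be scripted when there are many LUNs to check. On our DS4000 LUNs, the logical drive identifier appears at the start of the MDisk UID string; the following sketch assumes that prefix relationship (the identifiers below are abbreviated and illustrative):

```python
def match_mdisks(mdisk_uids: dict, lun_serials: dict) -> dict:
    """Map MDisk name -> LUN label where the LUN's identifier is a prefix
    of the MDisk UID (assumed DS4000-style behavior; verify on your array)."""
    return {mdisk: lun
            for mdisk, uid in mdisk_uids.items()
            for lun, serial in lun_serials.items()
            if uid.lower().startswith(serial.lower())}

# UIDs from `svcinfo lsmdisk`, truncated for readability
mdisk_uids = {
    "mdisk24": "600a0b800026b282000043224874f419",
    "mdisk25": "600a0b800026b2820000432f4874f57c",
}
# Identifiers copied from Storage Manager logical drive properties
lun_serials = {
    "kanaga_lun0": "600a0b800026b28200004322",
    "kanaga_lun1": "600a0b800026b2820000432f",
}
print(match_mdisks(mdisk_uids, lun_serials))
```

Any MDisk that is missing from the result has no matching LUN serial and must not be turned into an image mode VDisk for this host.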

Names: The svctask chcontroller command enables you to change the discovered storage subsystem name in SVC. In complex SANs, we recommend that you rename your storage subsystem to a more meaningful name.


Figure 9-72 Obtaining disk serial number

Figure 9-73 Obtaining disk serial number

We are now ready to move the ownership of the disks to the SVC, discover them as MDisks, and give them back to the host as VDisks.


9.8.3 Moving the LUNs to the SVC

In this step, we move the LUNs that are assigned to the AIX server and reassign them to the SVC.

Because we only want to move the LUN that holds our application and data files, we move that LUN without rebooting the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity after the reassignment.

The following steps are required, because we intend to move both LUNs at the same time:

1. Confirm that the SDD is installed.

2. Unmount and vary off the Volume Groups:

a. Stop the applications that are using the LUNs.

b. Unmount those file systems with the umount MOUNT_POINT command.

c. If the file systems are an LVM volume, deactivate that Volume Group with the varyoffvg VOLUMEGROUP_NAME command.

Example 9-47 shows the commands that we ran on Kanaga.

Example 9-47 AIX command sequence

#varyoffvg itsoaixvg
#varyoffvg itsoaixvg1
#lsvg
rootvg
itsoaixvg
itsoaixvg1
#lsvg -o
rootvg

3. Using Storage Manager (our storage subsystem management tool), we can unmap and unmask the disks from the AIX server and remap and remask the disks to the SVC.

4. From the SVC, discover the new disks with the svctask detectmdisk command. The disks will be discovered and named mdiskN, where N is the next available mdisk number (starting from 0). Example 9-48 shows the commands that we used to discover our MDisks and to verify that we have the correct MDisks.

Example 9-48 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name    status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 mdisk24 online unmanaged                             5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 mdisk25 online unmanaged                             8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Before you start: Moving LUNs to the SVC requires that the Subsystem Device Driver (SDD) device driver is installed on the AIX server. You can install the SDD ahead of time; however, it might require an outage of your host to do so.

5. After we have verified that we have the correct MDisks, we rename them to avoid confusion in the future when we perform other MDisk-related tasks (Example 9-49).

Example 9-49 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX mdisk24
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name Kanaga_AIX1 mdisk25
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online unmanaged                             5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online unmanaged                             8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

6. We create our image mode VDisks with the svctask mkvdisk command and the option -vtype image (Example 9-50). This command virtualizes the disks in the exact same layout as though they were not virtualized.

Example 9-50 Create the image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX -name IVD_Kanaga
Virtual Disk, id [8], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp aix_imgmdg -iogrp 0 -vtype image -mdisk Kanaga_AIX1 -name IVD_Kanaga1
Virtual Disk, id [9], successfully created
IBM_2145:ITSO-CLS2:admin>

7. Finally, we can map the new image mode VDisks to the host (Example 9-51).

Example 9-51 Map the VDisks to the host

IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga
Virtual Disk to Host map, id [0], successfully created
IBM_2145:ITSO-CLS2:admin>svctask mkvdiskhostmap -host Kanaga IVD_Kanaga1
Virtual Disk to Host map, id [1], successfully created
IBM_2145:ITSO-CLS2:admin>

Important: Match your discovered MDisk serial numbers (UID on the svcinfo lsmdisk command task display) with the serial number that you discovered earlier (in Figure 9-72 and Figure 9-73 on page 758).

FlashCopy: While the application is in a quiescent state, you can choose to use FlashCopy to copy the new image VDisks onto other VDisks. You do not need to wait until the FlashCopy process has completed before starting your application.


Now, we are ready to perform the following steps to put the image mode VDisks online:

1. Remove the old disk definitions, if you have not done so already.

2. Run the cfgmgr -vs command to rediscover the available LUNs.

3. If your application and data are on an LVM volume, rediscover the Volume Group, and then, run the varyonvg VOLUME_GROUP command to activate the Volume Group.

4. Mount your file systems with the mount /MOUNT_POINT command.

5. You are ready to start your application.

9.8.4 Migrating image mode VDisks to VDisks

While the AIX server is still running and our file systems are in use, we migrate the image mode VDisks onto striped VDisks, with the extents spread over three other MDisks.

Preparing MDisks for striped mode VDisks

From our storage subsystem, we have performed these tasks:

� Created and allocated three LUNs to the SVC
� Discovered them as MDisks
� Renamed these LUNs to more meaningful names
� Created a new MDG
� Put all these MDisks into this MDG

You can see the output of our commands in Example 9-52.

Example 9-52 Create a new MDisk group

IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name aix_vd -ext 512
IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online image     7            aix_imgmdg     5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image     7            aix_imgmdg     8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 mdisk26     online unmanaged                             6.0GB    000000000000000A DS4700          600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 mdisk27     online unmanaged                             6.0GB    000000000000000B DS4700          600a0b800026b2820000438448751da900000000000000000000000000000000
28 mdisk28     online unmanaged                             6.0GB    000000000000000C DS4700          600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd0 mdisk26
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd1 mdisk27
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name aix_vd2 mdisk28
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd0 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd1 aix_vd
IBM_2145:ITSO-CLS2:admin>svctask addmdisk -mdisk aix_vd2 aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  online image   7            aix_imgmdg     5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 online image   7            aix_imgmdg     8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online managed 6            aix_vd         6.0GB    000000000000000A DS4700          600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online managed 6            aix_vd         6.0GB    000000000000000B DS4700          600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online managed 6            aix_vd         6.0GB    000000000000000C DS4700          600a0b800026b2820000439048751dc200000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Migrating the VDisks

We are ready to migrate the image mode VDisks onto striped VDisks with the svctask migratevdisk command (Example 9-15 on page 724).

While the migration is running, our AIX server is still running, and we can continue accessing the files.

To check the overall progress of the migration, we use the svcinfo lsmigrate command, as shown in Example 9-53. Listing the MDG with the svcinfo lsmdiskgrp command shows that the free capacity on the old MDG is slowly increasing while those extents are moved to the new MDG.

Example 9-53 Migrating image mode VDisks to striped VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svctask migratevdisk -vdisk IVD_Kanaga1 -mdiskgrp aix_vd
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9
migrate_target_mdisk_grp 6
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>
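If you poll svcinfo lsmigrate from a monitoring script, its flat key-value output is easy to split into one record per migration, because every record begins with a migrate_type line. A minimal parsing sketch (the sample text is an abbreviated fragment of the output format above):

```python
def parse_lsmigrate(text: str) -> list:
    """Split `svcinfo lsmigrate` output into one dict per migration;
    a new record starts at each migrate_type line."""
    records = []
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        if key == "migrate_type":
            records.append({})
        if records:
            records[-1][key] = value
    return records

sample = """migrate_type MDisk_Group_Migration
progress 10
migrate_source_vdisk_index 8
migrate_type MDisk_Group_Migration
progress 0
migrate_source_vdisk_index 9"""
for r in parse_lsmigrate(sample):
    print(f"VDisk {r['migrate_source_vdisk_index']}: {r['progress']}% complete")
```

A script can loop on this until lsmigrate returns no records, which indicates that all migrations have finished.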

After this task has completed, Example 9-54 on page 763 shows that the VDisks are spread over three MDisks in the aix_vd MDG. The old MDG is empty.


Example 9-54 Migration complete

IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_vd
id 6
name aix_vd
status online
mdisk_count 3
vdisk_count 2
capacity 18.0GB
extent_size 512
free_capacity 5.0GB
virtual_capacity 13.00GB
used_capacity 13.00GB
real_capacity 13.00GB
overallocation 72
warning 0
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp aix_imgmdg
id 7
name aix_imgmdg
status online
mdisk_count 2
vdisk_count 0
capacity 13.0GB
extent_size 512
free_capacity 13.0GB
virtual_capacity 0.00MB
used_capacity 0.00MB
real_capacity 0.00MB
overallocation 0
warning 0
IBM_2145:ITSO-CLS2:admin>
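The overallocation value that lsmdiskgrp reports appears to be simply the virtual capacity expressed as a truncated percentage of the MDG's total capacity, which is consistent with the 13 GB virtual over 18 GB total shown for aix_vd:

```python
def overallocation(virtual_capacity_gb: float, capacity_gb: float) -> int:
    """Virtual capacity as a truncated percentage of the MDG's total
    capacity, matching the overallocation column of svcinfo lsmdiskgrp."""
    return int(virtual_capacity_gb * 100 / capacity_gb)

print(overallocation(13.0, 18.0))  # 72, as reported for the aix_vd MDG
```

A value above 100 means the MDG promises more virtual capacity than it physically holds, which is only possible with Space-Efficient VDisks.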

Our migration to the SVC is complete. You can remove the original MDisks from the SVC, and you can remove these LUNs from the storage subsystem.

If these LUNs were the last LUNs in use on our storage subsystem, we can also remove the subsystem from our SAN fabric.

9.8.5 Preparing to migrate from the SVC

Before we change the AIX server's LUNs from being accessed by the SVC as VDisks to being directly accessed from the storage subsystem, we need to convert the VDisks into image mode VDisks.

You might want to perform this activity for one of these reasons:

� You purchased a new storage subsystem, and you were using the SVC as a tool to migrate from your old storage subsystem to this new storage subsystem.

� You used the SVC to FlashCopy or Metro Mirror a VDisk onto another VDisk and you no longer need that host connected to the SVC.

� You want to ship a host, and its data, that is currently connected to the SVC to a site where there is no SVC.

� Changes to your environment no longer require this host to use the SVC.


There are other preparatory activities before we shut down the host and reconfigure the LUN masking and mapping. This section covers those activities.

If you are moving the data to a new storage subsystem, it is assumed that this storage subsystem is connected to your SAN fabric, powered on, and visible from your SAN switches. Your environment must look similar to our environment, as shown in Figure 9-74.

Figure 9-74 Environment with SVC

Making fabric zone changes

The first step is to set up the SAN configuration so that all of the zones are created. Add the new storage subsystem to the Red zone, so that the SVC can communicate with it directly.

Create a Green zone for our host to use when we are ready for it to directly access the disk, after it has been removed from the SVC.

It is assumed that you have created the necessary zones.

After your zone configuration is set up correctly, the SVC sees the new storage subsystem’s controller by using the svcinfo lscontroller command, as shown in Example 9-55 on page 765. It is also a good idea to rename the controller to a more meaningful name. You can use the svctask chcontroller -name command.



Example 9-55 Discovering the new storage subsystem

IBM_2145:ITSO-CLS2:admin>svcinfo lscontroller
id controller_name ctrl_s/n vendor_id product_id_low product_id_high
0  DS4500                   IBM       1742-900
1  DS4700                   IBM       1814           FAStT
IBM_2145:ITSO-CLS2:admin>

Creating new LUNs

On our storage subsystem, we created two LUNs and masked them so that the SVC can see them. We will eventually give these LUNs directly to the host, removing the VDisks that it currently has. To check that the SVC can use the LUNs, issue the svctask detectmdisk command, as shown in Example 9-56. In our example, we use two 10 GB LUNs that are located on the DS4500 subsystem, so in this step, we migrate back to image mode VDisks and to another subsystem in one step. We have already deleted the old LUNs on the DS4700 storage subsystem, which is why they appear offline here.

Example 9-56 Discover the new MDisks

IBM_2145:ITSO-CLS2:admin>svctask detectmdisk
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed   7            aix_imgmdg     5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed   7            aix_imgmdg     8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed   6            aix_vd         6.0GB    000000000000000A DS4700          600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed   6            aix_vd         6.0GB    000000000000000B DS4700          600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed   6            aix_vd         6.0GB    000000000000000C DS4700          600a0b800026b2820000439048751dc200000000000000000000000000000000
29 mdisk29     online  unmanaged                             10.0GB   0000000000000010 DS4500          600a0b8000174233000000b84876512f00000000000000000000000000000000
30 mdisk30     online  unmanaged                             10.0GB   0000000000000011 DS4500          600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Even though the MDisks will not stay in the SVC for long, we still recommend that you rename them to more meaningful names so that they do not get confused with other MDisks that are used by other activities. Also, we create the MDGs to hold our new MDisks, as shown in Example 9-57.

Example 9-57 Rename the MDisks

IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG mdisk29
IBM_2145:ITSO-CLS2:admin>svctask chmdisk -name AIX_MIG1 mdisk30
IBM_2145:ITSO-CLS2:admin>svctask mkmdiskgrp -name KANAGA_AIXMIG -ext 512
MDisk Group, id [3], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdiskgrp
id name          status  mdisk_count vdisk_count capacity extent_size free_capacity virtual_capacity used_capacity real_capacity overallocation warning
3  KANAGA_AIXMIG online  0           0           0        512         0             0.00MB           0.00MB        0.00MB        0              0
6  aix_vd        online  3           2           18.0GB   512         5.0GB         13.00GB          13.00GB       13.00GB       72             0
7  aix_imgmdg    offline 2           0           13.0GB   512         13.0GB        0.00MB           0.00MB        0.00MB        0              0
IBM_2145:ITSO-CLS2:admin>

Our SVC environment is ready for the VDisk migration to image mode VDisks.

9.8.6 Migrating the managed VDisks

While our AIX server is still running, we migrate the managed VDisks onto the new MDisks using image mode VDisks. The command to perform this action is the svctask migratetoimage command, which is shown in Example 9-58.

Example 9-58 Migrate the VDisks to image mode VDisks

IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga -mdisk AIX_MIG -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svctask migratetoimage -vdisk IVD_Kanaga1 -mdisk AIX_MIG1 -mdiskgrp KANAGA_AIXMIG
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name        status  mode    mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
24 Kanaga_AIX  offline managed 7            aix_imgmdg     5.0GB    0000000000000008 DS4700          600a0b800026b282000043224874f41900000000000000000000000000000000
25 Kanaga_AIX1 offline managed 7            aix_imgmdg     8.0GB    0000000000000009 DS4700          600a0b800026b2820000432f4874f57c00000000000000000000000000000000
26 aix_vd0     online  managed 6            aix_vd         6.0GB    000000000000000A DS4700          600a0b800026b2820000439c48751ddc00000000000000000000000000000000
27 aix_vd1     online  managed 6            aix_vd         6.0GB    000000000000000B DS4700          600a0b800026b2820000438448751da900000000000000000000000000000000
28 aix_vd2     online  managed 6            aix_vd         6.0GB    000000000000000C DS4700          600a0b800026b2820000439048751dc200000000000000000000000000000000
29 AIX_MIG     online  image   3            KANAGA_AIXMIG  10.0GB   0000000000000010 DS4500          600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1    online  image   3            KANAGA_AIXMIG  10.0GB   0000000000000011 DS4500          600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>svcinfo lsmigrate
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 9
migrate_target_mdisk_index 30
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
migrate_type Migrate_to_Image
progress 50
migrate_source_vdisk_index 8
migrate_target_mdisk_index 29
migrate_target_mdisk_grp 3
max_thread_count 4
migrate_source_vdisk_copy_id 0
IBM_2145:ITSO-CLS2:admin>

During the migration, our AIX server is unaware that its data is being physically moved between storage subsystems.

After the migration is complete, the image mode VDisks are ready to be removed from the AIX server, and the real LUNs can be mapped and masked directly to the host by using the storage subsystem's management tool.

9.8.7 Removing the LUNs from the SVC

The next step will require downtime, while we remap and remask the disks so that the host sees them directly through the Green zone.

Because our LUNs only hold data files, and because we use a unique Volume Group, we can remap and remask the disks without rebooting the host. The only requirement is that we unmount the file system and vary off the Volume Group to ensure data integrity after the reassignment.

Follow these required steps to remove the SVC:

1. Confirm that the correct device driver for the new storage subsystem is loaded. Because we are moving to a DS4500, we can continue to use the SDD.

2. Shut down any applications and unmount the file systems:

a. Stop the applications that are using the LUNs.

b. Unmount those file systems with the umount MOUNT_POINT command.

c. If the file systems are an LVM volume, deactivate that Volume Group with the varyoffvg VOLUMEGROUP_NAME command.

Before you start: Moving LUNs to another storage subsystem might require a driver other than SDD. Check with the vendor of the storage subsystem to determine which driver you will need. You might be able to install this driver ahead of time.


3. Remove the VDisks from the host by using the svctask rmvdiskhostmap command (Example 9-59). To double-check that you have removed the VDisks, use the svcinfo lshostvdiskmap command, which shows that these disks are no longer mapped to the AIX server.

Example 9-59 Remove the VDisks from the host

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdiskhostmap -host Kanaga IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lshostvdiskmap Kanaga
IBM_2145:ITSO-CLS2:admin>

4. Remove the VDisks from the SVC by using the svctask rmvdisk command, which will make the MDisks unmanaged, as shown in Example 9-60.

Example 9-60 Remove the VDisks from the SVC

IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga
IBM_2145:ITSO-CLS2:admin>svctask rmvdisk IVD_Kanaga1
IBM_2145:ITSO-CLS2:admin>svcinfo lsmdisk
id name     status mode      mdisk_grp_id mdisk_grp_name capacity ctrl_LUN_#       controller_name UID
29 AIX_MIG  online unmanaged                             10.0GB   0000000000000010 DS4500          600a0b8000174233000000b84876512f00000000000000000000000000000000
30 AIX_MIG1 online unmanaged                             10.0GB   0000000000000011 DS4500          600a0b80001744310000010e4876444600000000000000000000000000000000
IBM_2145:ITSO-CLS2:admin>

Cached data: When you run the svctask rmvdisk command, the SVC first double-checks that there is no outstanding dirty cached data for the VDisk being removed. If uncommitted cached data still exists, the command fails with the following error message:

CMMVC6212E The command failed because data in the cache has not been committed to disk

You will have to wait for this cached data to be committed to the underlying storage subsystem before you can remove the VDisk.

The SVC will automatically destage uncommitted cached data two minutes after the last write activity for the VDisk. How much data there is to destage and how busy the I/O subsystem is will determine how long this command takes to complete.

You can check whether the VDisk has uncommitted data in the cache by using the svcinfo lsvdisk <VDISKNAME> command and checking the fast_write_state attribute. This attribute has the following meanings:

empty No modified data exists in the cache.

not_empty Modified data might exist in the cache.

corrupt Modified data might have existed in the cache, but any modified data has been lost.
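A script that drives svctask rmvdisk can use the fast_write_state values described above to decide whether to retry. The following is a host-side sketch; the lsvdisk output string in the sample is an abbreviated, illustrative fragment:

```python
def cache_state(lsvdisk_output: str):
    """Return the fast_write_state attribute from `svcinfo lsvdisk` output,
    or None if the attribute is not present."""
    for line in lsvdisk_output.splitlines():
        key, _, value = line.partition(" ")
        if key == "fast_write_state":
            return value
    return None

def safe_to_remove(lsvdisk_output: str) -> bool:
    """Only an empty fast-write cache guarantees that rmvdisk will not
    fail with CMMVC6212E."""
    return cache_state(lsvdisk_output) == "empty"

sample = "id 8\nname IVD_Kanaga\nfast_write_state not_empty"
print(safe_to_remove(sample))  # False: wait for destage before rmvdisk
```

In practice, the script would poll lsvdisk every few seconds and issue rmvdisk only once fast_write_state reports empty.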


5. Using Storage Manager (our storage subsystem management tool), unmap and unmask the disks from the SVC back to the AIX server.

We are ready to access the LUNs from the AIX server. If all of the zoning and LUN masking and mapping were done successfully, our AIX server will boot as though nothing has happened:

1. Run the cfgmgr -S command to discover the storage subsystem.

2. Use the lsdev -Ccdisk command to verify the discovery of the new disk.

3. Remove the references to all of the old disks. Example 9-61 shows the removal using SDD and Example 9-62 on page 770 shows the removal using SDDPCM.

Example 9-61 Remove references to old paths using SDD

#lsdev -Cc disk
hdisk0  Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1  Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2  Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4  Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk5  Defined   1Z-08-02      SAN Volume Controller Device
hdisk6  Defined   1Z-08-02      SAN Volume Controller Device
hdisk7  Defined   1D-08-02      SAN Volume Controller Device
hdisk8  Defined   1D-08-02      SAN Volume Controller Device
hdisk10 Defined   1Z-08-02      SAN Volume Controller Device
hdisk11 Defined   1Z-08-02      SAN Volume Controller Device
hdisk12 Defined   1D-08-02      SAN Volume Controller Device
hdisk13 Defined   1D-08-02      SAN Volume Controller Device
vpath0  Defined                 Data Path Optimizer Pseudo Device Driver
vpath1  Defined                 Data Path Optimizer Pseudo Device Driver
vpath2  Defined                 Data Path Optimizer Pseudo Device Driver
# for i in 5 6 7 8 10 11 12 13; do rmdev -dl hdisk$i -R;done
hdisk5 deleted
hdisk6 deleted
hdisk7 deleted
hdisk8 deleted
hdisk10 deleted
hdisk11 deleted
hdisk12 deleted
hdisk13 deleted
#for i in 0 1 2; do rmdev -dl vpath$i -R;done

Important: This step is the last step that you can perform and still safely back out of everything you have done so far.

Up to this point, you can reverse all of the actions that you have performed so far to get the server back online without data loss:

� Remap and remask the LUNs back to the SVC.

� Run the svctask detectmdisk command to rediscover the MDisks.

� Recreate the VDisks with the svctask mkvdisk command.

� Remap the VDisks back to the server with the svctask mkvdiskhostmap command.

After you start the next step, you might not be able to turn back without the risk of data loss.


vpath0 deleted
vpath1 deleted
vpath2 deleted
#lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Available 1Z-08-02      1742-900 (900) Disk Array Device
hdisk4 Available 1Z-08-02      1742-900 (900) Disk Array Device
#

Example 9-62 Remove references to old paths using SDDPCM

# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk3 Defined   1D-08-02      MPIO FC 2145
hdisk4 Defined   1D-08-02      MPIO FC 2145
hdisk5 Available 1D-08-02      MPIO FC 2145
# for i in 3 4; do rmdev -dl hdisk$i -R;done
hdisk3 deleted
hdisk4 deleted
# lsdev -Cc disk
hdisk0 Available 1S-08-00-8,0  16 Bit LVD SCSI Disk Drive
hdisk1 Available 1S-08-00-9,0  16 Bit LVD SCSI Disk Drive
hdisk2 Available 1S-08-00-10,0 16 Bit LVD SCSI Disk Drive
hdisk5 Available 1D-08-02      MPIO FC 2145

4. If your application and data are on an LVM volume, rediscover the Volume Group, and then, run the varyonvg VOLUME_GROUP command to activate the Volume Group.

5. Mount your file systems with the mount /MOUNT_POINT command.

6. You are ready to start your application.

Finally, to make sure that the MDisks are removed from the SVC, run the svctask detectmdisk command. The MDisks will first be discovered as offline, and then, they will automatically be removed after the SVC determines that there are no VDisks associated with these MDisks.

9.9 Using SVC for storage migration

The primary use of the SVC is not as a storage migration tool. However, the advanced capabilities of the SVC enable us to use the SVC as a "storage migration tool"; therefore, you can add the SVC temporarily to your SAN environment to copy the data from one storage subsystem to another storage subsystem. The SVC enables you to copy image mode VDisks directly from one subsystem to another subsystem while host I/O is running. The only downtime that is required is when the SVC is added to and removed from your SAN environment.

To use the SVC for migration purposes only, perform the following steps:

1. Add the SVC to your SAN environment.

2. Prepare the SVC.

770 Implementing the IBM System Storage SAN Volume Controller V5.1


3. Depending on your operating system, unmount the selected LUNs or shut down the host.

4. Add the SVC between your storage and the host.

5. Mount the LUNs or start the host again.

6. Start the migration.

7. After the migration process is complete, unmount the selected LUNs or shut down the host.

8. Remove the SVC from your SAN.

9. Mount the LUNs, or start the host again.

10. The migration is complete.

As you can see, extremely little downtime is required. If you prepare everything correctly, you can reduce your downtime to a few minutes. The copy process is handled entirely by the SVC, so host performance is not affected while the migration progresses.

To use the SVC for storage migrations, perform the steps that are described in the following sections:

- 9.5.2, “Adding the SVC between the host system and the DS4700” on page 690

- 9.5.6, “Migrating the VDisk from image mode to image mode” on page 705

- 9.5.7, “Free the data from the SVC” on page 709

9.10 Using VDisk Mirroring and Space-Efficient VDisks together

In this section, we show that you can use the VDisk Mirroring feature and Space-Efficient VDisks together to move data from a fully allocated VDisk to a Space-Efficient VDisk.

9.10.1 Zero detect feature

SVC 5.1 introduced the zero detect feature for Space-Efficient VDisks. This feature enables clients to reclaim unused allocated disk space (zeros) when converting a fully allocated VDisk to a Space-Efficient VDisk using VDisk Mirroring.

To migrate from a fully allocated VDisk to a Space-Efficient VDisk, perform these steps:

1. Add the target space-efficient copy.

2. Wait for synchronization to complete.

3. Remove the source fully allocated copy.

By using this feature, clients can easily free up managed disk space and make better use of their storage, without needing to purchase any additional function for the SVC.

VDisk Mirroring and Space-Efficient VDisk functions are included in the base virtualization license. Clients with thin-provisioned storage on an existing storage system can migrate their data under SVC management using Space-Efficient VDisks without having to allocate additional storage space.

Zero detect only works if the disk actually contains zeros; an uninitialized disk can contain anything, unless the disk has been formatted (for example, by using the -fmtdisk flag on the mkvdisk command).
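The effect of zero detect can be modeled in a few lines of Python (an illustrative sketch only, not SVC code; the function name, the grain bookkeeping, and the 32 KB grain size are our own assumptions, based on the -grainsize 32 value used later in this chapter):

```python
GRAIN_SIZE = 32 * 1024  # hypothetical 32 KB grain, matching -grainsize 32

def write_with_zero_detect(allocated_grains, offset, data, grain_size=GRAIN_SIZE):
    """Model of zero detect: a grain is backed by real capacity only
    when the data written to it contains at least one nonzero byte."""
    grain = offset // grain_size
    if any(b != 0 for b in data):
        allocated_grains.add(grain)  # nonzero data: allocate real capacity
    # an all-zero write to an unallocated grain consumes no real capacity
    return allocated_grains

allocated = set()
write_with_zero_detect(allocated, 0, bytes(4096))        # zeros: nothing allocated
write_with_zero_detect(allocated, 65536, b"\x00\x01")    # nonzero: grain 2 allocated
print(sorted(allocated))  # [2]
```

In this model, writing zeros to an unallocated grain consumes no real capacity, which is why synchronizing a formatted (all-zero) source VDisk leaves the space-efficient copy almost empty.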

Chapter 9. Data migration 771


Figure 9-75 shows the Space-Efficient VDisk zero detect concept.

Figure 9-75 Space-Efficient VDisk zero detect feature

Figure 9-76 on page 773 shows the Space-Efficient VDisk organization.


Figure 9-76 Space-Efficient VDisk organization

As shown in Figure 9-76, a Space-Efficient VDisk has these components:

- Used capacity: This term specifies the portion of real capacity that is being used to store data. For non-space-efficient copies, this value is the same as the VDisk capacity. If the VDisk copy is space-efficient, the value increases from zero to the real capacity value as more of the VDisk is written to.

- Real capacity: This capacity is the space that is actually allocated in the Managed Disk Group (MDG). In a Space-Efficient VDisk, this value can differ from the total capacity.

- Free capacity: This value specifies the difference between the real capacity and the used capacity. The SVC continuously tries to keep this amount of unused real capacity available as a contingency. If the free capacity is exhausted and the VDisk has been configured with the -autoexpand option, the SVC automatically expands the real capacity that is allocated to this VDisk to restore the contingency.

- Grains: This value is the smallest unit into which the allocated space can be divided.

- Metadata: This value is allocated in the real capacity, and it tracks the used capacity, real capacity, and free capacity.
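The relationships among these values can be verified with a small Python sketch (illustrative only; the numbers are invented to resemble the lsvdisk output in this chapter, and display rounding is ignored):

```python
def se_vdisk_accounting(capacity_mb, used_mb, real_mb):
    """Capacity accounting for a Space-Efficient VDisk copy (illustrative model):
    free capacity = real capacity - used capacity;
    overallocation = virtual capacity / real capacity, as a percentage."""
    free_mb = real_mb - used_mb
    overallocation_pct = round(100 * capacity_mb / real_mb)
    return free_mb, overallocation_pct

# Invented numbers, similar in shape to the lsvdisk output in this chapter:
# a 15 GB VDisk backed by 320 MB of real capacity, 0.5 MB used by metadata.
free, over = se_vdisk_accounting(15 * 1024, 0.5, 320.0)
print(free, over)  # 319.5 4800
```

A 4800% overallocation simply means that the virtual capacity presented to the host is 48 times larger than the real capacity currently allocated in the MDG.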

9.10.2 VDisk Mirroring with Space-Efficient VDisks

In this section, we show an example of using the VDisk Mirroring feature with Space-Efficient VDisks:

1. We create a fully allocated VDisk of 15 GB named VD_Full, as shown in Example 9-63.

Example 9-63 VD_Full creation example

IBM_2145:ITSO-CLS2:admin>svctask mkvdisk -mdiskgrp 0 -iogrp 0 -mdisk 0:1:2:3:4:5 -node 1 -vtype striped -size 15 -unit gb -fmtdisk -name VD_Full
Virtual Disk, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full


id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1

copy_id 0
status offline
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize

2. We then add a Space-Efficient VDisk copy with the VDisk Mirroring option by using the addvdiskcopy command and the autoexpand parameter, as shown in Example 9-64.

Example 9-64 addvdiskcopy command example

IBM_2145:ITSO-CLS2:admin>svctask addvdiskcopy -mdiskgrp 1 -mdisk 6:7:8:9 -vtype striped -rsize 2% -autoexpand -grainsize 32 -unit gb VD_Full
VDisk [2] copy [1] successfully created

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full


IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync no
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on


warning 80
grainsize 32

As you can see in Example 9-64 on page 774, the VD_Full has a copy_id 1 where the used_capacity is 0.41 MB, which is equal to the metadata, because only zeros exist in the disk. The real_capacity is 323.57 MB, which is equal to the -rsize 2% value that is specified in the addvdiskcopy command. The free capacity is 323.17 MB, which is equal to the real capacity minus the used capacity.

If zeros are written to the disk, the Space-Efficient VDisk does not consume space. Example 9-65 shows that the space-efficient copy still consumes no space even after the two copies are fully synchronized.

Example 9-65 Space-Efficient VDisk display

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisksyncprogress 2
vdisk_id vdisk_name copy_id progress estimated_completion_time
2 VD_Full 0 100
2 VD_Full 1 100
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id many
mdisk_grp_name many
capacity 15.00GB
type many
formatted yes
mdisk_id many
mdisk_name many
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 2
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name
fast_write_state empty


used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
copy_id 1
status online
sync yes
primary no
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

3. We can split the VDisk Mirror or remove one of the copies, keeping the space-efficient copy as our valid copy by using the splitvdiskcopy command or the rmvdiskcopy command:

– If you need the copy as a space-efficient clone, we suggest that you use the splitvdiskcopy command, because that command generates a new VDisk that you can map to any server that you want.

– If you are migrating from a fully allocated VDisk to a Space-Efficient VDisk without any effect on server operations, we suggest that you use the rmvdiskcopy command. In this case, the original VDisk name is kept, and the VDisk remains mapped to the same server.

Example 9-66 shows the splitvdiskcopy command.

Example 9-66 splitvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask splitvdiskcopy -copy 1 -name VD_SEV VD_Full
Virtual Disk, id [7], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 0 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000D 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7


name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000D
throttling 0
preferred_node_id 2
fast_write_state empty
cache readwrite
udid
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

Example 9-67 shows the rmvdiskcopy command.

Example 9-67 rmvdiskcopy command

IBM_2145:ITSO-CLS2:admin>svctask rmvdiskcopy -copy 0 VD_Full
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk 2


id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000B
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 1
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 323.57MB
free_capacity 323.17MB
overallocation 4746
autoexpand on
warning 80
grainsize 32

9.10.3 Metro Mirror and Space-Efficient VDisks

In this section, we show how to use Metro Mirror with a Space-Efficient VDisk as the target VDisk. Using Metro Mirror in an intracluster configuration is another way to migrate data.

Remember that VDisk Mirroring and VDisk migration are concurrent (nondisruptive) operations, whereas Metro Mirror must be considered disruptive to data access, because at the end of the migration, you must map the Metro Mirror target VDisk to the server.

With this example, we show how you can migrate data with intracluster Metro Mirror using a Space-Efficient VDisk as the target VDisk. We also show how the real capacity and the free capacity change as the used capacity changes during the Metro Mirror synchronization. Follow these steps:

1. We use a fully allocated VDisk named VD_Full, and we create a Metro Mirror relationship with a Space-Efficient VDisk named VD_SEV.

Example 9-68 shows the two VDisks and the rcrelationship creation.

Example 9-68 VDisks and rcrelationship

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk -filtervalue name=VD*
id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count fast_write_state
2 VD_Full 0 io_grp0 online 1 MDG_DS47 15.00GB striped 60050768018401BF280000000000000B 0 1 empty
7 VD_SEV 0 io_grp0 online 1 MDG_DS83 15.00GB striped 60050768018401BF280000000000000F 0 1 empty

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_Full
id 2
name VD_Full
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
capacity 15.00GB
type striped
formatted yes
mdisk_id
mdisk_name
FC_id
FC_name
RC_id 2
RC_name
vdisk_UID 60050768018401BF2800000000000010
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 0
mdisk_grp_name MDG_DS47
type striped
mdisk_id
mdisk_name


fast_write_state empty
used_capacity 15.00GB
real_capacity 15.00GB
free_capacity 0.00MB
overallocation 100
autoexpand
warning
grainsize
IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status online
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
mdisk_id
mdisk_name
FC_id
FC_name
RC_id
RC_name
vdisk_UID 60050768018401BF280000000000000F
throttling 0
preferred_node_id 1
fast_write_state empty
cache readwrite
udid 0
fc_map_count 0
sync_rate 50
copy_count 1
copy_id 0
status online
sync yes
primary yes
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 0.41MB
real_capacity 307.20MB
free_capacity 306.79MB
overallocation 5000
autoexpand off
warning 1
grainsize 32
IBM_2145:ITSO-CLS2:admin>svctask mkrcrelationship -cluster 0000020061006FCA -master VD_Full -aux VD_SEV -name MM_SEV_rel
RC Relationship, id [2], successfully created
IBM_2145:ITSO-CLS2:admin>svcinfo lsrcrelationship MM_SEV_rel


id 2
name MM_SEV_rel
master_cluster_id 0000020061006FCA
master_cluster_name ITSO-CLS2
master_vdisk_id 2
master_vdisk_name VD_Full
aux_cluster_id 0000020061006FCA
aux_cluster_name ITSO-CLS2
aux_vdisk_id 7
aux_vdisk_name VD_SEV
primary master
consistency_group_id
consistency_group_name
state inconsistent_stopped
bg_copy_priority 50
progress 0
freeze_time
status online
sync
copy_type metro

2. We start the rcrelationship and observe how the space that is allocated in the target VDisk changes until it reaches the used capacity of the source VDisk.

Example 9-69 shows how to start the rcrelationship and shows the space allocation changing.

Example 9-69 rcrelationship and space allocation

IBM_2145:ITSO-CLS2:admin>svctask startrcrelationship MM_SEV_rel

IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
IO_group_name io_grp0
status offline
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
capacity 15.00GB
type striped
formatted no
..

type striped
mdisk_id
mdisk_name
fast_write_state not_empty
used_capacity 3.64GB
real_capacity 3.95GB
free_capacity 312.89MB
overallocation 380
autoexpand on
warning 80
grainsize 32


IBM_2145:ITSO-CLS2:admin>svcinfo lsvdisk VD_SEV
id 7
name VD_SEV
IO_group_id 0
mdisk_grp_id 1
mdisk_grp_name MDG_DS83
type striped
mdisk_id
mdisk_name
fast_write_state empty
used_capacity 15.02GB
real_capacity 15.03GB
free_capacity 11.97MB
overallocation 99
autoexpand on
warning 80
grainsize 32

3. In conclusion, it is possible to use Metro Mirror to migrate data with a Space-Efficient VDisk as the target VDisk. However, this approach provides little benefit, because at the end of the initial data synchronization, the Space-Efficient VDisk will have allocated as much space as the source (in our case, VD_Full). If you want to use Metro Mirror to migrate your data, we suggest that you use fully allocated VDisks for both the source and the target.


Appendix A. Scripting

In this appendix, we present a high-level overview of how to automate various tasks by creating scripts using the IBM System Storage SAN Volume Controller (SVC) command-line interface (CLI).


© Copyright IBM Corp. 2010. All rights reserved. 785


Scripting structure

When creating scripts to automate tasks on the SVC, use the structure that is illustrated in Figure A-1.

Figure A-1 Scripting structure for SVC task automation

Creating a Secure Shell connection to the SVC

When creating a connection to the SVC, the user who runs the script must have access to a private key that corresponds to a public key that has been previously uploaded to the SVC. The private key is used to establish the Secure Shell (SSH) connection that is needed to use the CLI on the SVC. If the SSH key pair is generated without a passphrase, you can connect without special scripting to pass in the passphrase.

On UNIX systems, you can use the ssh command to create an SSH connection with the SVC. On Windows systems, you can use a utility called plink.exe, which is provided with the PuTTY tool, to create an SSH connection with the SVC. In the following examples, we use plink to create the SSH connection to the SVC.

Executing the commands

When using the CLI, you can use the examples in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339 for inspiration, or refer to the IBM System Storage SAN Volume Controller Command-Line Interface User’s Guide, which you can download from the SVC documentation page for each SVC code level at this Web site:

http://www-304.ibm.com/systems/support/supportsite.wss/supportresources?brandind=5000033&familyind=5329743&taskind=1

Performing logging

When using the CLI, not all commands provide a usable response to determine the status of the invoked command. Therefore, we recommend that you always create checks that can be logged for monitoring and troubleshooting purposes.
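As an illustrative sketch of this structure (create a connection, run the command, perform logging), the following Python fragment composes the remote CLI call and appends its output, with a timestamp, to a log file. The session name, command text, and log file name are assumptions, and Python's subprocess module stands in for invoking plink from a bat script:

```python
import datetime
import shlex
import subprocess

def build_svc_command(session, command, use_plink=True):
    """Compose the remote CLI invocation; 'session' is a predefined PuTTY
    session name (plink) or a user@host string (ssh)."""
    runner = "plink" if use_plink else "ssh"
    return [runner, session] + shlex.split(command)

def run_and_log(argv, logfile="svc_script.log"):
    """Run the command and append a timestamped record of its output,
    because not every CLI command returns a status that you can test."""
    result = subprocess.run(argv, capture_output=True, text=True)
    with open(logfile, "a") as log:
        log.write(f"{datetime.datetime.now().isoformat()} {' '.join(argv)}\n")
        log.write(result.stdout + result.stderr)
    return result.returncode

# Hypothetical session and command, mirroring the examples in this appendix:
cmd = build_svc_command("SVC1:cluster1", "svcinfo lsvdisk -filtervalue 'name=VD*'")
print(cmd[:2])  # ['plink', 'SVC1:cluster1']
```

Only the command composition is exercised here; run_and_log obviously requires a reachable SVC cluster and a configured plink session.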



Automated virtual disk creation

In the following example, we create a simple bat script to automate virtual disk (VDisk) creation and to illustrate how scripts are created. Creating scripts to automate SVC administrative tasks is not limited to bat scripting: you can, in principle, encapsulate the CLI commands in scripts using any programming language that you prefer, or you can use program applets to perform routine tasks.

Connecting to the SVC using a predefined SSH connection

The easiest way to create an SSH connection to the SVC is to have plink call a predefined PuTTY session, as shown in Figure A-2 on page 788.

Define a session, including this information:

- The auto-login user name, set to your SVC admin user name (for example, admin). This parameter is set under the Connection  Data category.

- The private key for authentication (for example, icat.ppk). This key is the private key that you have already created. This parameter is set under the Connection  Session Auth category.

- The IP address of the SVC cluster. This parameter is set under the Session category.

- A session name. Our example uses SVC1:cluster1.

Your version of PuTTY might have these parameters set in other categories.

Appendix A. Scripting 787


Figure A-2 Using a predefined SSH connection with plink

To use this predefined PuTTY session, use this syntax:

plink SVC1:cluster1

If a predefined PuTTY session is not used, use this syntax:

plink admin@<SVC cluster IP address> -i "C:\DirectoryPath\KeyName.PPK"

Using a CLI command to create VDisks

In our example, we decided that the following parameters are variables when creating the VDisks:

- VDisk size (in GB): %1

- VDisk name: %2

- Managed Disk Group (MDG): %3

Use the following command:

svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3


Listing created VDisks

To log the fact that our script created the VDisk that we defined when executing the script, we use the -filtervalue parameter:

svcinfo lsvdisk -filtervalue 'name=%2' >> C:\DirectoryPath\VDiskScript.log

Invoking the VDiskScript.bat sample script

Finally, putting it all together, we create our sample bat script for creating a VDisk, as shown in Figure A-3.

Figure A-3 VDiskScript.bat

Using the script, we create a VDisk with the following parameters:

- VDisk size (in GB): 4 (%1)

- VDisk name: Host1_E_Drive (%2)

- MDG: 1 (%3)

Example A-1 shows executing the script to create a VDisk.

Example: A-1 Executing the script to create the VDisk

E:\SVC_Jobs>VDiskScript 4 Host1_E_Drive 1

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size 4 -unit gb -name Host1_E_Drive -mdiskgrp 1
Virtual Disk, id [32], successfully created

E:\SVC_Jobs>plink SVC1:Cluster1 -l admin svcinfo lsvdisk -filtervalue 'name=Host1_E_Drive' 1>>E:\SVC_Jobs\VDiskScript.log

From the output of the log, as shown in Example A-2, we verify that the VDisk is created as intended.

Example: A-2 Log file output from VDiskScript.bat

id name IO_group_id IO_group_name status mdisk_grp_id mdisk_grp_name capacity type FC_id FC_name RC_id RC_name vdisk_UID fc_map_count copy_count
32 Host1_E_Drive 0 io_grp0 online 1 MDG_DS47 4.0GB striped 60050768018301BF280000000000002E 0 1

-------------------------------------VDiskScript.bat---------------------------

plink SVC1 -l admin svctask mkvdisk -iogrp 0 -vtype striped -size %1 -unit gb -name %2 -mdiskgrp %3

plink SVC1 -l admin svcinfo lsvdisk -filtervalue 'name=%2' >> E:\SVC_Jobs\VDiskScript.log
-------------------------------------------------------------------------------


SVC tree

We provide another example of using scripting to communicate with the SVC. This script displays a tree-like structure for the SVC, as shown in Example A-3.

We have written this script in Perl to work without modification using Perl on UNIX systems (such as AIX or Linux), Perl for Windows, or Perl in a Windows Cygwin environment.

Example: A-3 SVC tree script output

$ ./svctree.pl 10.0.1.119 admin /cygdrive/c/Keys/icat.ssh
+ ITSO-CLS2 (10.0.1.119)
 + CONTROLLERS
  + DS4500 (0)
   + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
   + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
   + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
   + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
   + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
   + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
  + DS4700 (1)
   + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
   + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
   + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
   + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
   + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
   + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
 + MDISK GROUPS
  + MDG_0_DS45 (ID: 0 CAP: 144.0GB FREE: 120.0GB)
   + mdisk0 (ID: 0 CAP: 36.0GB MODE: managed)
   + mdisk1 (ID: 1 CAP: 36.0GB MODE: managed)
   + mdisk2 (ID: 2 CAP: 36.0GB MODE: managed)
   + mdisk_3 (ID: 3 CAP: 36.0GB MODE: managed)
  + aix_imgmdg (ID: 7 CAP: 13.0GB FREE: 3.0GB)
   + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed)
   + Kanaga_AIX1 (ID: 25 CAP: 8.0GB MODE: managed)
 + iogrp0 (0)
  + NODES
   + Node2 (5)
   + Node1 (2)
  + HOSTS
   + W2k8 (0)
   + Senegal (1)
   + VSS_FREE (2)
    + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped STAT: online)
    + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped STAT: online)
   + VSS_RESERVED (3)
   + Kanaga (5)
    + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many STAT: online)
  + VDISKS
   + MDG_SE_VDisk3 (ID: 0 CAP: 10.2GB TYPE: many)
    + mdisk2 (ID: 10 CAP: 36.0GB MODE: managed CONT: DS4500)
    + mdisk_3 (ID: 12 CAP: 36.0GB MODE: managed CONT: DS4500)
   + A_Kanaga_VD_IM1 (ID: 9 CAP: 10.0GB TYPE: many)
    + Kanaga_AIX (ID: 24 CAP: 5.0GB MODE: managed CONT: DS4700)
    + Kanaga_AIX1 (ID: 24 CAP: 8.0GB MODE: managed CONT: DS4700)


   + msvc0001 (ID: 10 CAP: 12.0GB TYPE: striped)
    + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
    + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
   + msvc0002 (ID: 11 CAP: 12.0GB TYPE: striped)
    + mdisk0 (ID: 8 CAP: 36.0GB MODE: managed CONT: DS4500)
    + mdisk1 (ID: 9 CAP: 36.0GB MODE: managed CONT: DS4500)
 + iogrp1 (1)
  + NODES
  + HOSTS
  + VDISKS
 + iogrp2 (2)
  + NODES
  + HOSTS
  + VDISKS
 + iogrp3 (3)
  + NODES
  + HOSTS
  + VDISKS
 + recovery_io_grp (4)
  + NODES
  + HOSTS
  + VDISKS
 + recovery_io_grp (4)
  + NODES
  + HOSTS
   + itsosvc1 (2200642269468)
  + VDISKS

Example A-4 shows the coding for our script.

Example: A-4 svctree.pl

#!/usr/bin/perl

$SSHCLIENT = "ssh"; # (plink or ssh)

$HOST = $ARGV[0];
$USER = ($ARGV[1] ? $ARGV[1] : "admin");
$PRIVATEKEY = ($ARGV[2] ? $ARGV[2] : "/path/to/privatekey");
$DEBUG = 0;

die(sprintf("Please call script with cluster IP address. The syntax is: \n%s ipaddress loginname privatekey\n",$0))
  if (! $HOST);

sub TalkToSVC() {
  my $COMMAND = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my @info;

  if ($SSHCLIENT eq "plink" || $SSHCLIENT eq "ssh") {
    $SSH = sprintf('%s -i %s %s@%s ',$SSHCLIENT,$PRIVATEKEY,$USER,$HOST);
  } else {
    die ("ERROR: Unknown SSHCLIENT [$SSHCLIENT]\n");
  }

  if ($NODELIM) {
    $CMD = "$SSH svcinfo $COMMAND $ARGUMENT\n";
  } else {
    $CMD = "$SSH svcinfo $COMMAND -delim : $ARGUMENT\n";
  }
  print "Running $CMD" if ($DEBUG);

  open SVC,"$CMD|";
  while (<SVC>) {
    print "Got [$_]\n" if ($DEBUG);
    chomp;
    push(@info,$_);
  }
  close SVC;

  return @info;
}

sub DelimToHash() {
  my $COMMAND = shift;
  my $MULTILINE = shift;
  my $NODELIM = shift;
  my $ARGUMENT = shift;
  my %hash;

  @details = &TalkToSVC($COMMAND,$NODELIM,$ARGUMENT);
  print "$COMMAND: Got [",join('|',@details)."]\n" if ($DEBUG);

  my $linenum = 0;
  foreach (@details) {
    print "$linenum, $_" if ($DEBUG);

    if ($linenum == 0) {
      @heading = split(':',$_);
    } else {
      @line = split(':',$_);

      $counter = 0;
      foreach $id (@heading) {
        printf("$COMMAND: ID [%s], value [%s]\n",$id,$line[$counter]) if ($DEBUG);

        if ($MULTILINE) {
          $hash{$linenum,$id} = $line[$counter++];
        } else {
          $hash{$id} = $line[$counter++];
        }
      }
    }
    $linenum++;
  }

  return %hash;
}

sub TreeLine() {
  my $indent = shift;
  my $line = shift;
  my $last = shift;

  for ($tab=1;$tab<=$indent;$tab++) {
    print " ";
  }

  if (! $last) {
    print "+ $line\n";
  } else {
    print "| $line\n";
  }
}

sub TreeData() {
  my $indent = shift;
  my $printline = shift;
  *data = shift;
  *list = shift;
  *condition = shift;
  my $item;

  foreach $item (sort keys %data) {
    @show = ();
    ($numitem,$detail) = split($;,$item);
    next if ($numitem == $lastnumitem);
    $lastnumitem = $numitem;

    printf("CONDITION: SRC [%s], DST [%s], DSTVAL [%s]\n",$condition{"SRC"},$condition{"DST"},$data{$numitem,$condition{"DST"}}) if ($DEBUG);

    next if (($condition{"SRC"} && $condition{"DST"}) && ($condition{"SRC"} != $data{$numitem,$condition{"DST"}}));

    foreach (@list) {
      push(@show,$data{$numitem,$_});
    }

    &TreeLine($indent,sprintf($printline,@show),0);
  }
}

# Gather our cluster information.
%clusters = &DelimToHash('lscluster',1);
%iogrps = &DelimToHash('lsiogrp',1);
%nodes = &DelimToHash('lsnode',1);
%hosts = &DelimToHash('lshost',1);
%vdisks = &DelimToHash('lsvdisk',1);
%mdisks = &DelimToHash('lsmdisk',1);
%controllers = &DelimToHash('lscontroller',1);
%mdiskgrps = &DelimToHash('lsmdiskgrp',1);

# We are now ready to display it.
# CLUSTER
$indent = 0;
foreach $cluster (sort keys %clusters) {
  ($numcluster,$detail) = split($;,$cluster);
  next if ($numcluster == $lastnumcluster);
  $lastnumcluster = $numcluster;
  next if ("$clusters{$numcluster,'location'}" =~ /remote/);
  &TreeLine($indent,sprintf('%s (%s)',$clusters{$numcluster,'name'},$clusters{$numcluster,'cluster_IP_address'}),0);

  # CONTROLLERS
  &TreeLine($indent+1,'CONTROLLERS',0);
  $lastnumcontroller = "";
  foreach $controller (sort keys %controllers) {
    $indentcontroller = $indent+2;

    ($numcontroller,$detail) = split($;,$controller);
    next if ($numcontroller == $lastnumcontroller);
    $lastnumcontroller = $numcontroller;

    &TreeLine($indentcontroller,sprintf('%s (%s)',
      $controllers{$numcontroller,'controller_name'},$controllers{$numcontroller,'id'}),0);

    # MDISKS
    &TreeData($indentcontroller+1,
      '%s (ID: %s CAP: %s MODE: %s)',
      *mdisks,
      ['name','id','capacity','mode'],
      {"SRC"=>$controllers{$numcontroller,'controller_name'},"DST"=>"controller_name"});
  }

  # MDISKGRPS
  &TreeLine($indent+1,'MDISK GROUPS',0);
  $lastnummdiskgrp = "";
  foreach $mdiskgrp (sort keys %mdiskgrps) {
    $indentmdiskgrp = $indent+2;

    ($nummdiskgrp,$detail) = split($;,$mdiskgrp);
    next if ($nummdiskgrp == $lastnummdiskgrp);
    $lastnummdiskgrp = $nummdiskgrp;

    &TreeLine($indentmdiskgrp,sprintf('%s (ID: %s CAP: %s FREE: %s)',
      $mdiskgrps{$nummdiskgrp,'name'},$mdiskgrps{$nummdiskgrp,'id'},$mdiskgrps{$nummdiskgrp,'capacity'},
      $mdiskgrps{$nummdiskgrp,'free_capacity'}),0);

    # MDISKS
    &TreeData($indentmdiskgrp+1,
      '%s (ID: %s CAP: %s MODE: %s)',
      *mdisks,
      ['name','id','capacity','mode'],
      {"SRC"=>$mdiskgrps{$nummdiskgrp,'id'},"DST"=>"mdisk_grp_id"});
  }

  # IOGROUP
  $lastnumiogrp = "";
  foreach $iogrp (sort keys %iogrps) {
    $indentiogrp = $indent+1;

    ($numiogrp,$detail) = split($;,$iogrp);
    next if ($numiogrp == $lastnumiogrp);
    $lastnumiogrp = $numiogrp;

    &TreeLine($indentiogrp,sprintf('%s (%s)',$iogrps{$numiogrp,'name'},$iogrps{$numiogrp,'id'}),0);
    $indentiogrp++;

    # NODES
    &TreeLine($indentiogrp,'NODES',0);
    &TreeData($indentiogrp+1,
      '%s (%s)',
      *nodes,
      ['name','id'],
      {"SRC"=>$iogrps{$numiogrp,'id'},"DST"=>"IO_group_id"});

    # HOSTS
    &TreeLine($indentiogrp,'HOSTS',0);
    $lastnumhost = "";
    %iogrphosts = &DelimToHash('lsiogrphost',1,0,$iogrps{$numiogrp,'id'});
    foreach $host (sort keys %iogrphosts) {
      my $indenthost = $indentiogrp+1;

      ($numhost,$detail) = split($;,$host);
      next if ($numhost == $lastnumhost);
      $lastnumhost = $numhost;

      &TreeLine($indenthost,sprintf('%s (%s)',$iogrphosts{$numhost,'name'},$iogrphosts{$numhost,'id'}),0);

      # HOSTVDISKMAP
      %vdiskhostmap = &DelimToHash('lshostvdiskmap',1,0,$hosts{$numhost,'id'});
      $lastnumvdisk = "";
      foreach $vdiskhost (sort keys %vdiskhostmap) {
        ($numvdisk,$detail) = split($;,$vdiskhost);
        next if ($numvdisk == $lastnumvdisk);
        $lastnumvdisk = $numvdisk;
        next if ($vdisks{$numvdisk,'IO_group_id'} != $iogrps{$numiogrp,'id'});

        &TreeData($indenthost+1,
          '%s (ID: %s CAP: %s TYPE: %s STAT: %s)',
          *vdisks,
          ['name','id','capacity','type','status'],
          {"SRC"=>$vdiskhostmap{$numvdisk,'vdisk_id'},"DST"=>"id"});
      }
    }

    # VDISKS
    &TreeLine($indentiogrp,'VDISKS',0);
    $lastnumvdisk = "";
    foreach $vdisk (sort keys %vdisks) {
      my $indentvdisk = $indentiogrp+1;

      ($numvdisk,$detail) = split($;,$vdisk);
      next if ($numvdisk == $lastnumvdisk);
      $lastnumvdisk = $numvdisk;

      &TreeLine($indentvdisk,sprintf('%s (ID: %s CAP: %s TYPE: %s)',
        $vdisks{$numvdisk,'name'},$vdisks{$numvdisk,'id'},$vdisks{$numvdisk,'capacity'},$vdisks{$numvdisk,'type'}),0)
        if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'});

      # VDISKMEMBERS
      if ($iogrps{$numiogrp,'id'} == $vdisks{$numvdisk,'IO_group_id'}) {
        %vdiskmembers = &DelimToHash('lsvdiskmember',1,1,$vdisks{$numvdisk,'id'});
        foreach $vdiskmember (sort keys %vdiskmembers) {
          &TreeData($indentvdisk+1,
            '%s (ID: %s CAP: %s MODE: %s CONT: %s)',
            *mdisks,
            ['name','id','capacity','mode','controller_name'],
            {"SRC"=>$vdiskmembers{$vdiskmember},"DST"=>"id"});
        }
      }
    }
  }
}

796 Implementing the IBM System Storage SAN Volume Controller V5.1


Scripting alternatives

For an alternative to scripting, visit the Tivoli Storage Manager for Advanced Copy Services product page:

http://www.ibm.com/software/tivoli/products/storage-mgr-advanced-copy-services/

Additionally, IBM provides a suite of scripting tools that is based on Perl. You can download these scripting tools from this Web site:

http://www.alphaworks.ibm.com/tech/svctools

Appendix A. Scripting 797


Appendix B. Node replacement

In this appendix, we discuss the process to replace nodes. For the latest information about replacing a node, refer to the development page at one of the following Web sites:

� IBM employees:

http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437

� IBM Business Partners (login required):

http://partners.boulder.ibm.com/src/atsmastr.nsf/WebIndex/TD104437

� Clients:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/TD104437


© Copyright IBM Corp. 2010. All rights reserved. 799


Replacing nodes nondisruptively

You can replace the IBM System Storage SAN Volume Controller (SVC) 2145-4F2, SAN Volume Controller 2145-8F2, and SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes in an existing, active cluster without having an outage on the SVC or on your host applications. This procedure does not require that you change your storage area network (SAN) environment, because the replacement (new) node uses the same worldwide node name (WWNN) as the node that you replace. In fact, you can use this procedure to replace any model node with another model node.

This task assumes that the following conditions exist:

� The cluster software is at a level that supports the replacement node model; the 2145-8G4 nodes that are used in this procedure require that the cluster is running V4.2.0 or later.

� The new nodes that are configured are not powered on and not connected.

� All nodes that are configured in the cluster are present.

� All errors in the cluster error log are fixed.

� There are no virtual disks (VDisks), managed disks (MDisks), or controllers with a status of degraded or offline.

� The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.

� Download, install, and run the latest “SVC Software Upgrade Test Utility” from this Web site to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

� You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.

Perform the following steps to replace the nodes:

1. Perform the following steps to determine the node_name or node_id of the node that you want to replace, the iogroup_id or iogroup_name to which it belongs, and which of the nodes is the configuration node. If the configuration node is to be replaced, we recommend that you upgrade it last. If you can already identify which physical node equates to a node_name or node_id, the iogroup_id or iogroup_name to which it belongs, and which node is the configuration node, you can skip this step and proceed to step 2:

a. Issue the following command from the command-line interface (CLI):

svcinfo lsnode -delim :

b. Under the config node column, look for the status of yes and record the node_name or node_id of this node for later use.

c. Under the id and name columns, record the node_name or node_id of all of the other nodes in the cluster.

Recommendation: If you are planning to redeploy the old nodes in your environment to create a test cluster or to add to another cluster, you must ensure that each WWNN of these old nodes is set to a unique number on your SAN. We recommend that you document the factory WWNN of the new nodes that you use to replace the old nodes and, in effect, swap the WWNN so that each node still has a unique number. Failure to do so can lead to a duplicate WWNN and worldwide port name (WWPN), causing unpredictable SAN problems.


d. Under the IO_group_id and IO_group_name columns, record the iogroup_id or iogroup_name for all of the nodes in the cluster.

e. Issue the following command from the CLI for each node_name or node_id to determine the front_panel_id for each node and record the ID. This front_panel_id is physically located on the front of every node (it is not the serial number), and you can use this front_panel_id to determine which physical node equates to the node_name or node_id that you plan to replace:

svcinfo lsnodevpd node_name or node_id
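To illustrate step 1, the following Python sketch parses colon-delimited svcinfo lsnode output and picks out the configuration node. The sample listing is a simplified, hypothetical two-node example; real output contains more columns.

```python
# Sketch: find the configuration node in "svcinfo lsnode -delim :" output.
# The header and rows below are illustrative assumptions, not real output.
SAMPLE = """\
id:name:status:IO_group_id:IO_group_name:config_node
1:node1:online:0:io_grp0:yes
2:node2:online:0:io_grp0:no
"""

def parse_lsnode(text):
    # Split the colon-delimited listing into one dict per node.
    lines = text.strip().splitlines()
    header = lines[0].split(":")
    return [dict(zip(header, row.split(":"))) for row in lines[1:]]

def config_node(nodes):
    # The node whose config_node column reads "yes" is the configuration node.
    for node in nodes:
        if node["config_node"] == "yes":
            return node
    return None

nodes = parse_lsnode(SAMPLE)
print(config_node(nodes)["name"])
```

Recording the remaining ids and names for later use is then a matter of iterating over the parsed list.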

2. Perform the following steps to record the WWNN of the node that you want to replace:

a. Issue the following command from the CLI, where node_name or node_id is the name or ID of the node for which you want to determine the WWNN:

svcinfo lsnode -delim : node_name or node_id

b. Record the WWNN of the node that you want to replace.

3. Verify that all VDisks, MDisks, and disk controllers are online and that none are in a state of “Degraded”. If there are any VDisks, MDisks, or controllers in this state, resolve this issue before going forward, or the loss of access to data might occur when you perform step 4. This action is an especially important step if this node is the second node in the I/O Group to be replaced.

Issue the following commands from the CLI, where object_id or object_name is the ID or name of the controller that you want to view. Verify that each disk controller shows “no” in its degraded field:

svcinfo lsvdisk -filtervalue "status=degraded"
svcinfo lsmdisk -filtervalue "status=degraded"
svcinfo lscontroller object_id or object_name

4. Issue the following CLI command to shut down the node that will be replaced, where node_name or node_id is the name or ID of the node that you want to delete:

svctask stopcluster -node node_name or node_id

Issue the following CLI command to ensure that the node is shut down and that the status is “offline”, where node_name or node_id is the name or ID of the original node. The node status must be “offline”:

svcinfo lsnode node_name or node_id

5. Issue the following CLI command to delete this node from the cluster and the I/O Group, where node_name or node_id is the name or ID of the node that you want to delete:

svctask rmnode node_name or node_id

6. Issue the following CLI command to ensure that the node is no longer a member of the cluster, where node_name or node_id is the name or ID of the original node. The node must not be listed in the command output:

svcinfo lsnode node_name or node_id

Important:

� Do not power off the node through the front panel; use this command instead.

� Be careful that you do not issue the stopcluster command without the -node node_name or node_id parameter, because you will shut down the entire cluster if you do.


7. Perform the following steps to change the WWNN of the node that you just deleted to FFFFF:

a. Disconnect the four FC cables from this node before powering the node on in the next step.

b. Power on this node using the power button on the front panel and wait for it to boot up before going to the next step.

c. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.

d. Press and hold the down button, press and release the select button, and then release the down button. The WWNN of the node is displayed.

e. Press and hold the down button, press and release the select button, and then release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.

f. Press the up or down button to increment or decrement the character that is displayed.

g. Press the left navigation button to move to the next field or the right navigation button to return to the previous field and repeat step f for each field. At the end of this step, the characters that are displayed must be FFFFF.

h. Press the select button to retain the characters that you have updated and return to the WWNN window.

i. Press the select button again to apply the characters as the new WWNN for the node.

Important recommendation:

� Record and mark the Fibre Channel (FC) cables with the SVC node port number (1-4) before removing them from the back of the node that is being replaced. You must reconnect the cables on the new node exactly as they were connected on the old node. Looking at the back of the node, the FC ports on the SVC nodes are numbered 1-4 from left to right and must be reconnected in the same order, or the port IDs will change, which can affect the hosts’ access to VDisks or cause problems with adding the new node back into the cluster. The SVC Hardware Installation Guide for your model shows the port numbering of the various node models.

� Failure to disconnect the FC cables now will likely cause SAN devices and SAN management software to discover these new WWPNs that are generated when the WWNN is changed to FFFFF in the following steps. This discovery might cause ghost records to be seen after the node is powered down. These ghost records do not necessarily cause a problem, but you might have to reboot a SAN device to clear out the record.

� In addition, the ghost records might cause problems with AIX dynamic tracking functioning correctly, assuming that it is enabled, so we highly recommend disconnecting the node’s FC cables as instructed in the following step before continuing to any other steps.

Note: The characters wrap F to 0 or 0 to F.
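The wrapping behavior in this note amounts to modular arithmetic over the 16 hex characters. The following Python sketch only illustrates the front panel's increment and decrement wrapping; it is not SVC code.

```python
HEX_DIGITS = "0123456789ABCDEF"

def increment(ch):
    # Pressing "up": F wraps around to 0.
    return HEX_DIGITS[(HEX_DIGITS.index(ch) + 1) % 16]

def decrement(ch):
    # Pressing "down": 0 wraps around to F.
    return HEX_DIGITS[(HEX_DIGITS.index(ch) - 1) % 16]

print(increment("F"), decrement("0"))  # wraps in both directions
```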


8. Power off this node using the power button on the front panel and remove the node from the rack, if desired.

9. Install the replacement node and its uninterruptible power supply unit in the rack and connect the node to the uninterruptible power supply unit cables according to the SVC Hardware Installation Guide, which is available at this Web site:

http://www.ibm.com/storage/support/2145

10.Power on the replacement node from the front panel with the FC cables disconnected. After the node has booted, ensure that the node displays Cluster: on the front panel and nothing else. If a word other than Cluster: is displayed, contact IBM Support for assistance before continuing.

11.Record the WWNN of this new node, because you will need the WWNN if you plan to redeploy the old nodes that are being replaced. Perform the following steps to change the WWNN of the replacement node to match the WWNN that you recorded in step 2 on page 801:

a. From the front panel of the node, press the down button until the Node: panel is displayed, and then use the right and left navigation buttons to display the Status: panel.

b. Press and hold the down button, press and release the select button, and then, release the down button. The WWNN of the node is displayed. Record this number for use in the redeployment of the old nodes.

c. Press and hold the down button, press and release the select button, and then, release the down button to enter the WWNN edit mode. The first character of the WWNN is highlighted.

d. Press the up or down button to increment or decrement the character that is displayed.

e. Press the left navigation button to move to the next field or the right navigation button to return to the previous field and repeat step d for each field. At the end of this step, the characters that are displayed must be the same as the WWNN that you recorded in step 2 on page 801.

f. Press the select button to retain the characters that you have updated, and return to the WWNN panel.

g. Press the select button to apply the characters as the new WWNN for the node.

h. The node displays Cluster: on the front panel and is now ready to begin the process of adding the node to the cluster. If another word is displayed, contact IBM Support for assistance before continuing.

12.Connect the FC cables to the same port numbers on the new node that they were connected to originally on the old node. See step 7 on page 802.

Note: You must press the select button twice as steps h and i instruct you to do. After step h, it might appear that the WWNN has been changed, but step i actually applies the change.

Note: Do not connect the FC cables to the new node during this step.

Press select twice: You must press the select button twice as steps f and g instruct you to do. After step f, it might appear that the WWNN has been changed, but step g actually applies the change.


13.Issue the following CLI command to verify that the last five characters of the WWNN are correct:

svcinfo lsnodecandidate
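The verification in step 13 comes down to comparing the last five characters of the candidate node's WWNN with the WWNN that you recorded in step 2. The following Python sketch illustrates that comparison; the WWNN values are hypothetical examples, not real node data.

```python
def last_five(wwnn):
    # The SVC front panel displays and edits only the last five
    # characters of the WWNN, so that is the portion to compare.
    return wwnn[-5:].upper()

recorded = "50050768010027E2"   # hypothetical WWNN recorded in step 2
candidate = "50050768010027e2"  # hypothetical WWNN from lsnodecandidate

if last_five(candidate) != last_five(recorded):
    raise SystemExit("WWNN mismatch: repeat step 11 before adding the node")
print("WWNN suffix verified:", last_five(candidate))
```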

14.Add the node to the cluster, and ensure that it is added back to the same I/O Group as the original node. Use the following command, where wwnn_arg and iogroup_name or iogroup_id are the values that you recorded in steps 1 on page 800 and 2 on page 801:

svctask addnode -wwnodename wwnn_arg -iogrp iogroup_name or iogroup_id

15.Verify that all of the VDisks for this I/O Group are back online and are no longer degraded. If you perform the node replacement process disruptively, so that no I/O occurs to the I/O Group, you still must wait a certain period of time (we recommend 30 minutes in this case, too) to make sure that the new node is back online and available to take over before you replace the next node in the I/O Group. See step 3 on page 801.

Both nodes in the I/O Group cache data; however, the cache sizes are asymmetric if the remaining partner node in the I/O Group is a SAN Volume Controller 2145-4F2 node. The replacement node is limited by the cache size of the partner node in the I/O Group in this case. Therefore, the replacement node does not utilize the full 8 GB cache size until the other 2145-4F2 node in the I/O Group is replaced.

You do not have to reconfigure the host multipathing device drivers because the replacement node uses the same WWNN and WWPNs as the previous node. The multipathing device drivers detect the recovery of paths that are available to the replacement node.

The host multipathing device drivers take approximately 30 minutes to recover the paths. Therefore, do not upgrade the other node in the I/O Group for at least 30 minutes after successfully upgrading the first node in the I/O Group. If you have other nodes in other I/O Groups to upgrade, you can perform other upgrades while you wait the 30 minutes for the host multipathing device drivers to recover the paths.

16.Repeat steps 2 on page 801 to 15 for each node that you want to replace.

Expanding an existing SVC cluster

In this section, we describe how to expand an existing SVC cluster with new nodes. You can only expand an SVC cluster with node pairs, which means that you always have to add at least two nodes to your existing cluster. The maximum number of nodes is eight.

Important: Do not connect the new nodes to other ports at the switch or at the director, because using other ports will cause port IDs to change, which can affect the hosts’ access to VDisks or cause problems with adding the new node back into the cluster. The new nodes have 4 Gbps host bus adapters (HBAs) in them. The temptation is to move them to 4 Gbps switch or director ports at the same time, but we do not recommend moving them while performing the hardware node upgrade. Moving the node cables to faster ports on the switch or director is a separate process that needs to be planned independently of upgrading the nodes in the cluster.

Note: If the WWNN does not match the original node’s WWNN exactly as recorded in step 2 on page 801, you must repeat step 11 on page 803.


This task assumes the following situation:

� Your cluster contains six or fewer nodes.

� All nodes that are configured in the cluster are present.

� All errors in the cluster error log are fixed.

� All managed disks (MDisks) are online.

� You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.

� There are no VDisks, MDisks, or controllers with a status of degraded or offline.

� The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.

� Download, install, and run the latest “SVC Software Upgrade Test Utility” from this Web site to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure:

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

Perform the following steps to add nodes to an existing cluster:

1. Depending on the model of the node that is being added, it might be necessary to upgrade the existing SVC cluster software to a level that supports the hardware model:

– The model 2145-8G4 requires Version 4.2.x or later.

– The model 2145-8F4 requires Version 4.1.x or later.

– The model 2145-8F2 requires Version 3.1.x or later.

– The 2145-4F2 is the original model and thus is supported by Version 1 through Version 4. We highly recommend that you upgrade the existing cluster to the latest level of SVC software that is available; however, the minimum level of SVC cluster software that is recommended for the 4F2 is Version 3.1.0.5.

2. Install additional nodes and uninterruptible power supply units in a rack. Do not connect them to the SAN at this time.

3. Ensure that each node that is being added has a unique WWNN. Duplicate WWNNs can cause serious problems on a SAN and must be avoided. This example shows how this problem might occur:

The nodes came from cluster ABC where they were replaced by brand new nodes. The procedure to replace these nodes in cluster ABC required changing each brand new node’s WWNN to the old node’s WWNN. Adding these nodes now to the same SAN causes duplicate WWNNs to appear with unpredictable results. You will need to power up each node separately while it is disconnected from the SAN and use the front panel to view the current WWNN. If necessary, change the WWNN to a unique name on the SAN. If required, contact IBM Support for assistance before continuing.
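The uniqueness check that this example describes can be scripted before the new nodes are connected. The following Python sketch flags duplicates in a hand-maintained list of WWNNs; the values shown are hypothetical.

```python
from collections import Counter

def duplicate_wwnns(wwnns):
    # Return any WWNN that appears more than once on the SAN,
    # comparing case-insensitively.
    counts = Counter(w.upper() for w in wwnns)
    return sorted(w for w, n in counts.items() if n > 1)

# Hypothetical WWNNs: existing cluster nodes plus the nodes being added.
san_wwnns = ["50050768010027E2", "50050768010027E3",
             "50050768010027e2"]  # a redeployed node still carries an old WWNN

dups = duplicate_wwnns(san_wwnns)
if dups:
    print("Change these WWNNs before connecting the nodes:", dups)
```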

4. Power up additional uninterruptible power supply units and nodes. Do not connect them to the SAN at this time.

5. Ensure that each node displays Cluster: on the front panel and nothing else. If another word is displayed, contact IBM Support for assistance before continuing.

6. Connect additional nodes to the LAN.

7. Connect additional nodes to the SAN fabrics.


8. Zone additional node ports in the existing SVC-only zones. You must have an SVC zone in each fabric with nothing but the ports from the SVC nodes in it. These zones are necessary for the initial formation of the cluster, because nodes need to see each other to form a cluster. This zone might not exist, and the only way that the SVC nodes see each other is through a storage zone that includes all of the node ports. However, we highly recommend that you have a separate zone in each fabric with only the SVC node ports included to avoid the risk of the nodes losing communication with each other if the storage zones are changed or deleted.

9. Zone new node ports in the existing SVC/Storage zones. You must have an SVC/Storage zone in each fabric for each disk subsystem that is used with the SVC. Each zone must have all of the SVC ports in that fabric, along with all of the disk subsystem ports in that fabric that will be used by the SVC to access the physical disks.

10.On each disk subsystem that is seen by the SVC, use its management interface to map the LUNs that are currently used by the SVC to all of the new WWPNs of the new nodes that will be added to the SVC cluster. This step is a critical step, because the new nodes must see the same LUNs that the existing SVC cluster nodes see before adding the new nodes to the cluster; otherwise, problems might arise. Also, note that all of the SVC ports that are zoned with the back-end storage must see all of the LUNs that are presented to SVC through all of those same storage ports, or the SVC will mark the devices as degraded.

11.After all of these activities have been completed, you can add the additional nodes to the cluster by using the SVC GUI or CLI. The cluster does not mark any devices as degraded, because the new nodes will see the same cluster configuration, the same storage zoning, and the same LUNs as the existing nodes.

12.Check the status of the controllers and MDisks to ensure that there is nothing marked degraded. If a controller or MDisk is marked degraded, it is not configured properly, and you must fix the configuration immediately before performing any other action on the cluster. If you cannot determine fairly quickly what is wrong, remove the newly added nodes from the cluster until the problem is resolved. You can contact IBM Support for assistance.

Important: Do not add the additional nodes to the existing cluster before the zoning and masking steps above are completed, or the SVC will enter a degraded mode and log errors with unpredictable results.

Exceptions: There are exceptions when EMC DMX/Symmetrix or HDS storage is involved. For further information, review the SVC Software Installation and Configuration Guide, which is available at this Web site:

http://www.ibm.com/storage/support/2145

Moving VDisks to a new I/O Group

After the new nodes are added to a cluster, you might want to move VDisk ownership from one I/O Group to another I/O Group to balance the workload. This action is currently a disruptive process, and the host applications have to be quiesced during the process. The actual move of the VDisk in the SVC is simple and quick; however, certain host operating systems might need to have their file systems and Volume Groups varied off or removed, along with their disks and multiple paths to the VDisks deleted and rediscovered. In effect, it is the equivalent of discovering the VDisks again, as when they were initially brought under SVC control. This task is not a difficult process, but it can take time to complete, so you must plan accordingly.

This task assumes the following situation:

� All of the steps that are described in “Expanding an existing SVC cluster” on page 804 are completed.

� All nodes that are configured in the cluster are present.

� All errors in the cluster error log are fixed.

� All MDisks are online.

� There are no VDisks, MDisks, or controllers with a status of degraded or offline.

� The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.

Perform the following steps to move the VDisks:

1. Stop the host I/O.

2. Vary off your file system or shut down your host, depending on your operating system.

3. Move all of the VDisks from the I/O Group of the nodes that you are replacing to the new I/O Group.

4. If you had your host shut down, start it again.

5. From each host, issue a rescan of the multipathing software to discover the new paths to the VDisks.

6. See the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all paths have been recovered.

7. Vary on your file system.

8. Restart the host I/O.

9. Repeat steps 1 to 8 for each VDisk in the cluster that you want to move to the new I/O Group.

Replacing nodes disruptively (rezoning the SAN)

You can replace SAN Volume Controller 2145-4F2, SAN Volume Controller 2145-8F2, or SAN Volume Controller 2145-8F4 nodes with SAN Volume Controller 2145-8G4 nodes. This task disrupts your environment, because you must rezone your SAN, and the host multipathing device drivers must discover new paths. Access to VDisks is lost during this task. In fact, you can use this procedure to replace any model node with another model node.

This task assumes that the following conditions exist:

� The cluster software is at V4.2.0 or later.

� All nodes that are configured in the cluster are present.

� The new nodes that are configured are not powered on and not connected.

� All errors in the cluster error log are fixed.

� All MDisks are online.

� You have a 2145 uninterruptible power supply-1U (2145 UPS-1U) unit for each new SAN Volume Controller 2145-8G4 node.

� There are no VDisks, MDisks, or controllers with a status of degraded or offline.


� The SVC configuration has been backed up through the CLI or GUI, and the file has been saved to the Master Console.

� Download, install, and run the latest “SVC Software Upgrade Test Utility” from this Web site to verify that there are no known issues with the current cluster environment before beginning the node upgrade procedure.

http://www-1.ibm.com/support/docview.wss?rs=591&uid=ssg1S4000585

Perform the following steps to replace the nodes:

1. Quiesce all I/O from the hosts that access the I/O Group of the node that you are replacing.

2. Delete the node that you want to replace from the cluster and I/O Group. The node is not deleted until the SAN Volume Controller cache is destaged to disk. During this time, the partner node in the I/O Group transitions to write-through mode.

3. You can use the CLI or the SAN Volume Controller Console to verify that the deletion process has completed.

4. Ensure that the node is no longer a member of the cluster.

5. Power off the node, and remove it from the rack.

6. Install the replacement (new) node in the rack, and connect the uninterruptible power supply unit cables and the FC cables.

7. Power on the node.

8. Rezone your switch zones to remove the ports of the node that you are replacing from the host and storage zones. Replace these ports with the ports of the replacement node.

9. Add the replacement node to the cluster and the I/O Group.

10.From each host, issue a rescan of the multipathing software to discover the new paths to VDisks. If your system is inactive, you can perform this step after you have replaced all of the nodes in the cluster. The host multipathing device drivers take approximately 30 minutes to recover the paths.

11.Refer to the documentation that is provided with your multipathing device driver for information about how to query paths to ensure that all of the paths have been recovered before proceeding to the next step.

12.Repeat steps 1 to 10 for the partner node in the I/O Group.

13.Repeat steps 1 to 11 for each node in the cluster that you want to replace.

14.Resume host I/O.

Important: Both nodes in the I/O Group cache data; however, the cache sizes are asymmetric. The replacement node is limited by the cache size of the partner node in the I/O Group. Therefore, the replacement node does not utilize the full size of its cache.

Symmetric cache sizes: After you have upgraded both nodes in the I/O Group, the cache sizes are symmetric, and the full 8 GB of cache is utilized.


Appendix C. Performance data and statistics gathering

It is not the intent of this book to describe performance data and statistics gathering in depth. Instead, we show a method to process the statistics that we have gathered. For a more comprehensive look at the performance of the IBM System Storage SAN Volume Controller (SVC), we recommend SAN Volume Controller Best Practices and Performance Guidelines, SG24-7521, at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247521.html?Open

Although this book was written at an SVC V4.3.x level, many of the underlying principles remain applicable to SVC 5.1.


SVC performance overview

While storage virtualization with SVC improves flexibility and provides simpler management of a storage infrastructure, it can also provide a substantial performance advantage for a variety of workloads. The SVC’s caching capability and its ability to stripe VDisks across multiple disk arrays are the reasons why performance improvement is significant when implemented with midrange disk subsystems, because this technology is often only provided with high-end enterprise disk subsystems.

To ensure the desired performance and capacity of your storage infrastructure, from time to time, we recommend that you conduct a performance and capacity analysis to reveal the business requirements of your storage environment.

Performance considerations

When discussing performance for a system, it always comes down to identifying the bottleneck, and thereby, the limiting factor of a given system. Keep in mind that the component that limits one workload is not necessarily the same component that limits other workloads.

When designing a storage infrastructure using SVC, or using an SVC storage infrastructure, you must therefore take into consideration the performance and capacity of your infrastructure. Ensuring that your SVC is monitored is a key point to ensure that you obtain the desired performance.

SVC

The SVC cluster is scalable up to eight nodes, and the performance is almost linear when adding more nodes into an SVC cluster, until it becomes limited by other components in the storage infrastructure. While virtualization with the SVC provides a great deal of flexibility, it does not diminish the necessity to have a storage area network (SAN) and disk subsystems that can deliver the desired performance. Essentially, SVC performance improvements are gained by having as many managed disks (MDisks) as possible, therefore, creating a greater level of concurrent I/O to the back end without overloading a single disk or array.

In the following sections, we discuss the performance of the SVC and assume that there are no bottlenecks in the SAN or on the disk subsystem.

Performance monitoring

In this section, we discuss several performance monitoring techniques.

Collecting performance statistics

By default, performance statistics are not collected. You can start or stop performance collection by using the svctask startstats and svctask stopstats commands, as described in Chapter 7, “SAN Volume Controller operations using the command-line interface” on page 339. You can also start or stop performance collection by using the SVC GUI, as described in Chapter 8, “SAN Volume Controller operations using the GUI” on page 469.


Statistics gathering is enabled or disabled on a cluster basis. When gathering is enabled, all of the nodes in the cluster gather statistics.

SVC supports sampling periods of the gathering of statistics from 1 to 60 minutes in steps of one minute.

Previous versions of the SVC provided per cluster statistics. These statistics were later superseded by per node statistics, which provide a greater range of information. From SVC 5.1.0 onward, only per node statistics are generated; per cluster statistics are no longer available, so clients must use the per node statistics instead.

Statistics file naming

The files that are generated are written to the /dumps/iostats/ directory.

The file name is of the format:

- Nm_stats_<node_frontpanel_id>_<date>_<time> for MDisk statistics
- Nv_stats_<node_frontpanel_id>_<date>_<time> for VDisk statistics
- Nn_stats_<node_frontpanel_id>_<date>_<time> for node statistics

The node_frontpanel_id is the front panel ID of the node on which the statistics were collected.

The date is in the form <yymmdd> and the time is in the form <hhmmss>.

An example of an MDisk statistics file name is:

Nm_stats_1_020808_105224

Example 9-70 shows typical MDisk and VDisk statistics file names.

Example 9-70 Filename of per node statistics

IBM_2145:ITSO-CLS2:admin>svcinfo lsiostatsdumps
id iostat_filename
0  Nm_stats_110775_090904_064337
1  Nv_stats_110775_090904_064337
2  Nn_stats_110775_090904_064337
3  Nm_stats_110775_090904_064437
4  Nv_stats_110775_090904_064437
5  Nn_stats_110775_090904_064437
6  Nm_stats_110775_090904_064537
7  Nv_stats_110775_090904_064537
8  Nn_stats_110775_090904_064537
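Because the naming scheme is fixed, a dump listing can be decoded programmatically. The following Python sketch (an illustration written for this discussion, not an IBM tool) splits a statistics file name into its statistics type, node front panel ID, and collection timestamp:

```python
import re
from datetime import datetime

# N<kind>_stats_<node_frontpanel_id>_<yymmdd>_<hhmmss>
_STATS_NAME = re.compile(
    r"^N(?P<kind>[mvn])_stats_(?P<panel>[^_]+)_(?P<date>\d{6})_(?P<time>\d{6})$"
)
_KINDS = {"m": "MDisk", "v": "VDisk", "n": "node"}

def parse_stats_filename(name):
    """Split a per node statistics file name into (type, panel id, timestamp)."""
    match = _STATS_NAME.match(name)
    if match is None:
        raise ValueError(f"not a statistics dump file name: {name!r}")
    when = datetime.strptime(match["date"] + match["time"], "%y%m%d%H%M%S")
    return _KINDS[match["kind"]], match["panel"], when

print(parse_stats_filename("Nm_stats_110775_090904_064337"))
```

For instance, Nm_stats_110775_090904_064337 decodes as MDisk statistics from node front panel ID 110775, collected on 4 September 2009 at 06:43:37.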

Because the performance statistics data files are in .xml format, after you have saved them you can format and merge the data to get more detail about the performance in your SVC environment.

Tip: You can use pscp.exe, which is installed with PuTTY, from an MS-DOS command-line prompt to copy these files to local drives. You can use WordPad to open them, for example:

C:\Program Files\PuTTY>pscp -load ITSO-CLS1 [email protected]:/dumps/iostats/* c:\temp\iostats

Use the -load parameter to specify the session that is defined in PuTTY.

Appendix C. Performance data and statistics gathering 811

An example of an unsupported tool that is provided “as is” is svcmon, the SVC Performance Monitor; you can obtain its User’s Guide from this Web site:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS3177

You can also process your statistics data with a spreadsheet application to get the report that is shown in Figure C-1.

Figure C-1 Spreadsheet example

Performance data collection and TotalStorage Productivity Center for Disk

Even though the performance statistics data files are readable as standard .xml files, TotalStorage Productivity Center for Disk is the official and supported IBM tool that is used to collect and analyze statistics data and provide a performance report for storage subsystems.

TotalStorage Productivity Center for Disk comes preinstalled on your System Storage Productivity Center Console and can be made available by activating the specific licensing for TotalStorage Productivity Center for Disk.

By activating this license, you upgrade your running TotalStorage Productivity Center-Basic Edition to a TotalStorage Productivity Center for Disk edition.

You can obtain more information about using TotalStorage Productivity Center to monitor your storage subsystem in SAN Storage Performance Management Using TotalStorage Productivity Center, SG24-7364, at this Web site:

http://www.redbooks.ibm.com/abstracts/sg247364.html?Open

IBM TotalStorage Productivity Center Reporter for Disk (a utility for anyone running IBM TotalStorage Productivity Center) provides more information about creating a performance report. This utility is available at this Web site:

http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS2618

IBM is withdrawing TotalStorage Productivity Center Reporter for Disk for Tivoli Storage Productivity Center Version 4.1. The replacement function for this utility is packaged with Tivoli Storage Productivity Center Version 4.1 in Business Intelligence and Reporting Tools (BIRT).

Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks publications

For information about ordering these publications, see “How to get IBM Redbooks publications” on page 817. Note that several of the documents referenced here might be available in softcopy only.

- IBM System Storage SAN Volume Controller, SG24-6423-05
- Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
- IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
- IBM System Storage: Implementing an IBM SAN, SG24-6116
- Introduction to Storage Area Networks, SG24-5470
- SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
- SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
- Using the SVC for Business Continuity, SG24-7371
- IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
- IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548

Other publications

These publications are also relevant as further information sources:

- IBM System Storage Open Software Family SAN Volume Controller: Planning Guide, GA22-1052
- IBM System Storage Master Console: Installation and User’s Guide, GC30-4090
- Subsystem Device Driver User’s Guide for the IBM TotalStorage Enterprise Storage Server and the IBM System Storage SAN Volume Controller, SC26-7540
- IBM System Storage Open Software Family SAN Volume Controller: Installation Guide, SC26-7541
- IBM System Storage Open Software Family SAN Volume Controller: Service Guide, SC26-7542
- IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide, SC26-7543
- IBM System Storage Open Software Family SAN Volume Controller: Command-Line Interface User’s Guide, SC26-7544
- IBM System Storage Open Software Family SAN Volume Controller: CIM Agent Developers Reference, SC26-7545
- IBM TotalStorage Multipath Subsystem Device Driver User’s Guide, SC30-4096

- IBM System Storage Open Software Family SAN Volume Controller: Host Attachment Guide, SC26-7563
- IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation Guide, GC52-1356
- IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation Guide, GC27-2219
- IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation Guide, GC27-2220
- IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware Installation Guide, GC27-2221
- IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905-05
- Command Line Interface User’s Guide, SG26-7903-05

Online resources

These Web sites are also relevant as further information sources:

- IBM TotalStorage home page:
  http://www.storage.ibm.com
- SAN Volume Controller supported platform:
  http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
- Download site for Windows Secure Shell (SSH) freeware:
  http://www.chiark.greenend.org.uk/~sgtatham/putty
- IBM site to download SSH for AIX:
  http://oss.software.ibm.com/developerworks/projects/openssh
- Open source site for SSH for Windows and Mac:
  http://www.openssh.com/windows.html
- Cygwin Linux-like environment for Windows:
  http://www.cygwin.com
- IBM Tivoli Storage Area Network Manager site:
  http://www-306.ibm.com/software/sysmgmt/products/support/IBMTivoliStorageAreaNetworkManager.html
- Microsoft Knowledge Base Article 131658:
  http://support.microsoft.com/support/kb/articles/Q131/6/58.asp
- Microsoft Knowledge Base Article 149927:
  http://support.microsoft.com/support/kb/articles/Q149/9/27.asp
- Sysinternals home page:
  http://www.sysinternals.com
- Subsystem Device Driver download site:
  http://www-1.ibm.com/servers/storage/support/software/sdd/index.html

- IBM TotalStorage Virtualization home page:
  http://www-1.ibm.com/servers/storage/software/virtualization/index.html
- SVC support page:
  http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
- SVC online documentation:
  http://publib.boulder.ibm.com/infocenter/svcic/v3r1m0/index.jsp
- IBM Redbooks publications about SVC:
  http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC

How to get IBM Redbooks publications

You can search for, view, or download IBM Redbooks publications, Redpapers, Webdocs, draft publications and additional materials, as well as order hardcopy IBM Redbooks publications, at this Web site:

ibm.com/redbooks

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services

Index

Numerics64-bit kernel 57

Aabends 465abends dump 465access pattern 517active quorum disk 36active SVC cluster 563add a new volume 167, 172add a node 389add additional ports 501add an HBA 354Add SSH Public Key 129administration tasks 493, 559Advanced Copy Services 93AIX host system 181AIX specific information 162AIX toolbox 181AIX-based hosts 162alias 27alias string 158aliases 27analysis 100, 655, 810application server guidelines 92application testing 257assign VDisks 369assigned VDisk 167, 172asynchronous 309asynchronous notifications 280–281Asynchronous Peer-to-Peer Remote Copy 309asynchronous remote 310asynchronous remote copy 32, 283, 309, 311asynchronous replication 331asynchronously 309attributes 527audit log 40Authentication 160authentication 41, 58, 132authentication service 44Autoexpand 25automate tasks 786automatic Linux system 225automatic update process 226automatically discover 342automatically formatted 52automatically restarted 644automation 378auxiliary 320, 328, 425, 447auxiliary VDisk 310, 321, 328available managed disks 343

© Copyright IBM Corp. 2010. All rights reserved.

Bback-end application 60background copy 301, 309, 321, 328background copy bandwidth 333background copy progress 420, 443background copy rate 276–277backup 257

of data with minimal impact on production 262backup speed 257backup time 257bandwidth 66, 95, 319, 585, 610bandwidth impact 333basic setup requirements 130bat script 787bind 252bitmaps 261boot 99boss node 35bottleneck 49bottlenecks 100, 102, 810budget 26budget allowance 26business requirements 100, 810

Ccable connections 73cable length 48cache 37, 269, 310caching 101caching capability 100, 810candidate node 389capacity 90, 180capacity information 538capacity measurement 507CDB 27challenge message 30Challenge-Handshake Authentication Protocol 30, 160, 352, 496change the IP addresses 384Channel extender 59channel extender 62channels 317CHAP 30, 160, 352, 496CHAP authentication 30, 160CHAP secret 30, 160check software levels 636chpartnership 333chrcconsistgrp 335chrcrelationship 335chunks 88, 683CIM agent 38CIM Client 38CIMOM 28, 38, 125, 159CLI 125, 434

819

Page 846: San

commands 181scripting for SVC task automation 378

Cluster 59cluster 34

adding nodes 560creation 388, 560error log 460IP address 114shutting down 342, 386, 396, 550time zone 384viewing properties 380, 544

cluster error log 655Cluster management 38cluster nodes 34cluster overview 34cluster partnership 291, 317cluster properties 385clustered ethernet port 160clustered server resources 34clusters 66colliding rites 312Colliding writes 311Command Descriptor Block 27command syntax 378COMPASS architecture 46compression 98concepts 7concurrent instances 682concurrent software upgrade 450configurable warning capacity 25configuration 153

restoring 672configuration node 35, 48, 59, 160, 388, 560configuration rules 52configure AIX 162configure SDD 252configuring the GUI 117connected 295, 322connected state 298, 323, 325connectivity 36consistency 284, 311, 324consistency freeze 298, 307, 325Consistency Group 59consistency group 262, 264–265

limits 265consistent 32, 296–297, 323–324consistent data set 256Consistent Stopped state 294, 321Consistent Synchronized state 294, 321, 599, 626ConsistentDisconnected 300, 327ConsistentStopped 298, 325ConsistentSynchronized 299, 326constrained link 320container 88contingency capacity 25controller, renaming 341conventional storage 675cookie crumbs recovery 468cooling 67Copied 59

copy bandwidth 95, 333copy operation 33copy process 306, 335copy rate 267, 277copy rate parameter 93Copy Services

managing 397, 566COPY_COMPLETED 280copying state 403corruption 257Counterpart SAN 59counterpart SAN 59, 62, 102CPU cycle 49create a FlashCopy 399create a new VDisk 505create an SVC partnership 585, 609create mapping command 398–399, 566, 568create New Cluster 119create SVC partnership 415, 436creating a VDisk 356creating managed disk groups 484credential caching 45current cluster state 35Cygwin 214

Ddata

backup with minimal impact on production 262moving and migration 256

data change rates 98data consistency 309, 397data corruption 323data flow 77data migration 67, 682data migration and moving 256data mining 257data mover appliance 371database log 315database update 314degraded mode 86delete

a FlashCopy 406a host 354a host port 356a port 502a VDisk 367, 513, 540ports 355

Delete consistency group command 407, 579Delete mapping command 578dependent writes 264, 288–289, 314–315destaged 37destructive 461detect the new MDisks 342detected 342device specific modules 188differentiator 50directory protocol 44dirty bit 302, 328disconnected 295, 322disconnected state 323

820 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 847: San

discovering assigned VDisk 167, 172, 190discovering newly assigned MDisks 481disk access profile 365disk controller

renaming 478systems 340viewing details 340, 477

disk internal controllers 50disk timeout value 246disk zone 76Diskpart 196display summary information 343displaying managed disks 490distance 61, 282distance limitations 282documentation 66, 475DSMs 188dump

I/O statistics 463I/O trace 463listing 462, 663other nodes 464

durability 50dynamic pathing 249–250dynamic shrinking 536dynamic tracking 163

Eelapsed time 93empty MDG 346empty state 301, 328Enterprise Storage Server (ESS) 281entire VDisk 262error 298, 322, 325, 345, 460, 655Error Code 59, 646error handling 278Error ID 60error log 460, 655

analyzing 655file 645

error notification 458, 647error number 646error priority 656ESS 44ESS (Enterprise Storage Server) 281ESS server 44ESS to SVC 687ESS token 44eth0 48eth0 port 48eth1 48Ethernet 73Ethernet connection 74event 460, 655event log 462events 293, 320Excluded 60excludes 481Execute Metro Mirror 419, 441expand

a VDisk 177, 195, 367a volume 196

expand a space-efficient VDisk 367expiry timestamp 44expiry timestamps 45extended distance solutions 282Extent 60extent 88, 676extent level 676extent sizes 88

Ffabric

remote 102fabric interconnect 61factory WWNN 800failover 60, 249, 311failover only 229failover situation 282fan-in 60fast fail 163fast restore 257FAStT 281FC optical distance 48feature log 662feature, licensing 659features, licensing 461featurization log 463Featurization Settings 122Fibre Channel interfaces 47Fibre Channel port fan in 62, 102Fibre Channel Port Login 28Fibre Channel port logins 60Fibre Channel ports 73file system 232filtering 379, 470filters 379fixed error 460, 655FlashCopy 33, 256

bitmap 266how it works 257, 261image mode disk 270indirection layer 266mapping 257mapping events 271rules 270serialization of I/O 278synthesis 278

FlashCopy indirection layer 266FlashCopy mapping 262, 271FlashCopy mapping states 274

Copying 274Idling/Copied 274Prepared 275Preparing 275Stopped 274Suspended 274

FlashCopy mappings 265FlashCopy properties 265FlashCopy rate 93

Index 821

Page 848: San

flexibility 100, 810flush the cache 573forced deletion 501foreground I/O latency 333format 506, 510, 515, 522free extents 367front-end application 60FRU 60Full Feature Phase 28

Ggateway IP address 114GBICs 61general housekeeping 476, 544generating output 379generator 128geographically dispersed 281Global Mirror guidelines 96Global Mirror protocol 32Global Mirror relationship 313Global Mirror remote copy technique 310gminterdelaysimulation 330gmintradelaysimulation 330gmlinktolerance 330–331governing 26governing rate 26governing throttle 517graceful manner 391grain 60, 266, 278grain sizes 93grains 93, 277granularity 262GUI 117, 131

HHardware Management Console 38hardware nodes 46, 56hardware overview 46hash function 30HBA 60, 350HBA fails 86HBA ports 92heartbeat signal 36heartbeat traffic 95help 475, 543high availability 34, 66home directory 181host

and application server guidelines 92configuration 153creating 350deleting 500information 494showing 375systems 76

host adapter configuration settings 183host bus adapter 350Host ID 60host workload 526

housekeeping 476, 544HP-UX support information 249–250

II/O budget 26I/O Governing 26I/O governing 26, 365, 517I/O governing rate 365I/O Group 61I/O group 37, 61–62

name 473renaming 392, 559viewing details 391

I/O pair 69I/O per secs 66I/O statistics dump 463I/O trace dump 463ICAT 38–39identical data 320idling 299, 326idling state 306, 335IdlingDisconnected 300, 326Image Mode 61image mode 526, 685image mode disk 270image mode MDisk 685image mode to image mode 705image mode to managed mode 700image mode VDisk 680image mode virtual disks 91inappropriate zoning 84inconsistent 296, 323Inconsistent Copying state 294, 321Inconsistent Stopped state 294, 321, 598–599, 626InconsistentCopying 298, 325InconsistentDisconnected 300, 327InconsistentStopped 298, 324index number 666Index/Secret/Challenge 30indirection layer 266indirection layer algorithm 267informational error logs 280initiator 158initiator name 27input power 386install 65insufficient bandwidth 278integrity 264, 289, 315interaction with the cache 269intercluster communication and zoning 317intercluster link 291, 317intercluster link bandwidth 333intercluster link maintenance 291–292, 317intercluster Metro Mirror 282, 309intercluster zoning 291–292, 317Internet Storage Name Service 30, 61, 159interswitch link (ISL) 62interval 385intracluster Metro Mirror 281, 309IP address

822 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 849: San

modifying 383, 545IP addresses 66, 545IP subnet 74ipconfig 137IPv4 136ipv4 and 48IPv4 stack 141IPv6 136IPv6 address 140IPv6 addresses 137IQN 27, 60, 158iSCSI 26, 49, 66, 159iSCSI Address 27iSCSI client 158iSCSI IP address failover 160iSCSI Multipathing 31iSCSI Name 27iSCSI node 27iSCSI protocol 57iSCSI Qualified Name 27, 60iSCSI support 57–58iSCSI target node failover 160ISL (interswitch link) 62ISL hop count 282, 309iSNS 30, 61, 159issue CLI commands 214ivp6 48

JJumbo Frames 30

Kkernel level 226key 160key files on AIX 181

LLAN Interfaces 48last extent 686latency 32, 95LBA 302, 328license 114license feature 659licensing feature 461licensing feature settings 461, 659limiting factor 100, 810link errors 47Linux 181Linux kernel 35Linux on Intel 225list dump 462list of MDisks 491list of VDisks 492list the dumps 663listing dumps 462, 663Load balancing 229Local authentication 40local cluster 303, 330

Local fabric 61local fabric interconnect 61Local users 42log 315logged 460Logical Block Address 302, 328logical configuration data 466Login Phase 28logs 314lsrcrelationshipcandidate 334LU 61LUNs 61

Mmagnetic disks 50maintenance levels 183maintenance procedures 645maintenance tasks 449, 635Managed 61Managed disk 61managed disk 61, 479

displaying 490working with 477

managed disk group 347creating 484viewing 486

Managed Disks 61managed mode MDisk 685managed mode to image mode 702managed mode virtual disk 91management 100, 810map a VDisk 516map a VDisk to a host 368mapping 261mapping events 271mapping state 271Master 62master 320, 328master console 67master VDisk 321, 328maximum supported configurations 58MC 62MD5 checksum hash 30MDG 61MDG information 538MDG level 347MDGs 67MDisk 61, 67, 479, 490

adding 346, 488discovering 342, 481including 345, 481information 479modes 685name parameter 343removing 349, 489renaming 344, 480showing 374, 491, 537showing in group 346

MDisk groupcreating 348, 484

Index 823

Page 850: San

deleting 349, 487name 473renaming 348, 486showing 346, 374, 482, 538viewing information 348

MDiskgrp 61Metro Mirror 281Metro Mirror consistency group 304, 306–308, 334–337Metro Mirror features 283, 311Metro Mirror process 292, 319Metro Mirror relationship 305–306, 308, 313, 334–335, 337, 597, 624microcode 36Microsoft Active Directory 43Microsoft Cluster 195Microsoft Multi Path Input Output 188migrate 675migrate a VDisk 680migrate between MDGs 680migrate data 685migrate VDisks 370migrating multiple extents 676migration

algorithm 683functional overview 682operations 676overview 676tips 687

migration activities 676migration phase 526migration process 371migration progress 681migration threads 676mirrored 310mirrored copy 309mirrored VDisks 54mkpartnership 333mkrcconsistgrp 334mkrcrelationship 334MLC 49modify a host 353modifying a VDisk 364mount 232mount point 232moving and migrating data 256MPIO 92, 188MSCS 195MTU sizes 30, 159multi layer cell 49multipath configuration 165multipath I/O 92multipath storage solution 188multipathing device driver 92Multipathing drivers 31multiple disk arrays 100, 810multiple extents 676multiple paths 31multiple virtual machines 240

Nnetwork bandwidth 98Network Entity 158Network Portals 158new code 644new disks 169, 175new mapping 368Node 62node 35, 61, 387

adding 388adding to cluster 560deleting 390failure 278port 60renaming 390shutting down 390using the GUI 559viewing details 388

node details 388node discovery 666node dumps 464node level 387Node Unique ID 35nodes 66non-preferred path 249non-redundant 59non-zero contingency 25N-port 62

Ooffline rules 679offload features 30older disk systems 101on screen content 379, 470, 543online help 475, 543on-screen content 379OpenSSH 181OpenSSH client 214operating system versions 183ordering 32, 264organizing on-screen content 379other node dumps 464overall performance needs 66Oversubscription 62oversubscription 62overwritten 261, 457

Ppackage numbering and version 450, 636parallelism 682partial extents 25partial last extent 686partnership 291, 317, 330passphrase 128path failover 249path failure 279path offline 279path offline for source VDisk 279path offline for target VDisk 280

824 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 851: San

path offline state 279path-selection policy algorithms 229peak 333peak workload 95pended 26per cluster 682per managed disk 682performance 90performance advantage 100, 810performance boost 45performance considerations 810performance improvement 100, 810performance monitoring tool 96performance requirements 66performance scalability 34performance statistics 96performance throttling 517physical location 67physical planning 67physical rules 69physical site 67Physical Volume Links 250PiT 34PiT consistent data 257PiT copy 266PiT semantics 264planning rules 66plink 786PLOGI 28Point in Time 34point in time 33point-in-time copy 297, 324policy decision 302, 328port

adding 354, 501deleting 355, 502

port binding 252port mask 93Power Systems 181PPRC

background copy 301, 309, 328commands 303, 329configuration limits 329detailed states 298, 324

preferred access node 91preferred path 249pre-installation planning 66Prepare 62prepare (pre-trigger) FlashCopy mapping command 401PREPARE_COMPLETED 280preparing volumes 172, 177pre-trigger 401primary 311, 425, 447primary copy 328priority 371priority setting 371private key 125, 128, 181, 786production VDisk 328provisioning 333pseudo device driver 165

public key 125, 128, 181, 786PuTTY 39, 125, 130, 387

CLI session 134default location 128security alert 135

PuTTY application 134, 390PuTTY Installation 214PuTTY Key Generator 128–129PuTTY Key Generator GUI 126PuTTY Secure Copy 452PuTTY session 129, 135PuTTY SSH client software 214PVLinks 250

QQLogic HBAs 226Queue Full Condition 26quiesce 387quiesce time 573quiesced 806quorum 35quorum candidates 36Quorum Disk 35quorum disk 35, 666

setting 666quorum disk candidate 36quorum disks 25

RRAID 62RAID controller 76RAMAC 50RAS 62read workload 53real capacity 25real-time synchronized 281reassign the VDisk 370recall commands 340, 379recommended levels 636Redbooks Web site 817

Contact us xxiiiredundancy 48, 96redundant 59Redundant SAN 62redundant SAN 62redundant SVC 563relationship 262, 309, 319relationship state diagram 293, 320reliability 90Reliability Availability and Serviceability 62Remote 62Remote authentication 40remote cluster 61remote fabric 61, 102

interconnect 61Remote users 43remove a disk 211remove a VDisk 181remove an MDG 349

Index 825

Page 852: San

remove WWPN definitions 355rename a disk controller 478rename an MDG 486rename an MDisk 480renaming an I/O group 559repartitioning 90rescan disks 193restart the cluster 387restart the node 391restarting 424, 446restore points 258restore procedure 672Reverse FlashCopy 34, 258reverse FlashCopy 57RFC3720 27rmrcconsistgrp 337rmrcrelationship 337round robin 91, 229, 249

Ssample script 789SAN Boot Support 249, 251SAN definitions 102SAN fabric 76SAN planning 74SAN Volume Controller 62

documentation 475general housekeeping 476, 544help 475, 543virtualization 38

SAN Volume Controller (SVC) 62SAN zoning 125SATA 97scalable 102, 810scalable architecture 51SCM 50scripting 302, 329, 378scripts 196, 785SCSI 62SCSI Disk 61SCSI primitives 342SDD 91–92, 162, 165, 170, 176, 251SDD (Subsystem Device Driver) 170, 176, 226, 251, 689SDD Dynamic Pathing 249SDD installation 165SDD package version 165, 185SDDDSM 188secondary 311secondary copy 328secondary site 66secure data flow 125secure session 390Secure Shell (SSH) 125Secure Shell connection 38separate physical IP networks 48sequential 91, 356, 506, 510, 522, 532serial numbers 167, 174serialization 278serialization of I/O by FlashCopy 278Service Location Protocol 30, 63, 159

service, maintenance using the GUI 635set attributes 527set the cluster time zone 549set up Metro Mirror 413, 434, 583, 607SEV 365shells 378show the MDG 538show the MDisks 537shrink a VDisk 536shrinking 536shrinkvdisksize 372shut down 195shut down a single node 390shut down the cluster 386, 550Simple Network Management Protocol 302, 329, 345single layer cell 49single point of failure 62single sign on 58single sign-on 39, 44site 67SLC 49SLP 30, 63, 159SLP daemon 30SNIA 2SNMP 302, 329, 345SNMP alerts 481SNMP manager 458SNMP trap 280software upgrade 450, 636–637software upgrade packages 636Solid State Disk 57Solid State Drive 34Solid State Drives 46solution 100sort 473sort criteria 473sorting 473source 277, 328space-efficient 359Space-efficient background copy 319space-efficient VDisk 372, 526space-efficient VDisks 509Space-Efficient Virtual Disk 57space-efficient volume 372special migration 687split per second 93splitting the SAN 62SPoF 62spreading the load 90SSD 51SSD market 50SSD solution 50SSD storage 52SSH 38, 125, 786SSH (Secure Shell) 125SSH Client 39SSH client 181, 214SSH client software 125SSH key 41SSH keys 125, 130

826 Implementing the IBM System Storage SAN Volume Controller V5.1

Page 853: San

SSH server 125SSH-2 125SSO 44stack 684stand-alone Metro Mirror relationship 418, 441start (trigger) FlashCopy mapping command 402, 404, 574start a PPRC relationship command 306, 335startrcrelationship 335state 298, 324–325

connected 295, 322consistent 296–297, 323–324ConsistentDisconnected 300, 327ConsistentStopped 298, 325ConsistentSynchronized 299, 326disconnected 295, 322empty 301, 328idling 299, 326IdlingDisconnected 300, 326inconsistent 296, 323InconsistentCopying 298, 325InconsistentDisconnected 300, 327InconsistentStopped 298, 324overview 293, 322synchronized 297, 324

state fragments 296, 323state overview 295, 329state transitions 280, 322states 271, 277, 293, 320statistics 385statistics collection 547

starting 547stopping 386, 548

statistics dump 463stop 322stop FlashCopy consistency group 406, 576stop FlashCopy mapping command 405STOP_COMPLETED 280stoprcconsistgrp 336stoprcrelationship 335storage cache 37storage capacity 66Storage Class Memory 50stripe VDisks 100, 810striped 506, 510, 522, 532striped VDisk 356subnet mask IP address 114Subsystem Device Driver (SDD) 170, 176, 226, 251, 689Subsystem Device Driver DSM 188SUN Solaris support information 249superuser 381surviving node 390suspended mapping 405SVC

basic installation 111task automation 378

SVC cluster 560SVC cluster candidates 585, 610SVC cluster partnership 303, 330SVC cluster software 639

SVC configuration 66backing up 668deleting the backup 672restoring 672

SVC Console 38SVC device 63SVC GUI 39SVC installations 86SVC master console 125SVC node 37, 86SVC PPRC functions 283SVC setup 154SVC SSD storage 52SVC superuser 41svcinfo 340, 344, 378svcinfo lsfreeextents 681svcinfo lshbaportcandidate 354svcinfo lsmdiskextent 681svcinfo lsmigrate 681svcinfo lsVDisk 373svcinfo lsVDiskextent 681svcinfo lsVDiskmember 374svctask 340, 344, 378, 381svctask chlicense 461svctask finderr 456svctask mkfcmap 303–306, 330, 333–335, 398–399, 566, 568switching copy direction 425, 447, 606, 632switchrcconsistgrp 338switchrcrelationship 337symmetrical 1symmetrical network 62symmetrical virtualization 1synchronized 297, 320, 324synchronized clocks 45synchronizing 319synchronous data mirroring 57synchronous reads 684synchronous writes 684synthesis 278Syslog error event logging 58System Storage Productivity Center 63

TT0 34target 158, 328target name 27test new applications 257threads parameter 519threshold level 26throttles 517throttling parameters 517tie breaker 35tie-break situations 35tie-break solution 666tie-breaker 35time 384time zone 384timeout 246timestamp 44–45

Index 827

Page 854: San

Time-Zero copy 34Tivoli Directory Server 43Tivoli Embedded Security Services 40, 44Tivoli Integrated Portal 39Tivoli Storage Productivity Center 39Tivoli Storage Productivity Center for Data 39Tivoli Storage Productivity Center for Disk 39Tivoli Storage Productivity Center for Replication 39Tivoli Storage Productivity Center Standard Edition 39token 44–45token expiry timestamp 45token facility 44trace dump 463traffic 95traffic profile activity 66transitions 685trigger 402, 404, 574

Uunallocated capacity 198unallocated region 319unassign 514unconfigured nodes 389undetected data corruption 323unfixed error 460, 655uninterruptible power supply 73, 86, 386, 451unmanaged MDisk 685unmap a VDisk 370up2date 225updates 225upgrade 636–637upgrade precautions 450upgrading software 636use of Metro Mirror 301, 328used capacity 25used free capacity 25User account migration 38using SDD 170, 176, 226, 251

VVDisk 490

assigning 516assigning to host 368creating 356, 358, 505creating in image mode 359, 526deleting 367, 509, 513discovering assigned 167, 172, 190expanding 367I/O governing 364image mode migration concept 685information 358, 505mapped to this host 369migrating 92, 370, 518modifying 364, 517path offline for source 279path offline for target 280showing 492showing for MDisk 373, 482showing map to a host 539

showing using group 373shrinking 371, 519working with 356

VDisk discovery 159VDisk mirror 526VDisk Mirroring 53VDisk-to-host mapping 370

deleting 514Veritas Volume Manager 249View I/O Group details 391viewing managed disk groups 486virtual disk 262, 356, 468, 504Virtual Machine File System 238, 240virtualization 38VLUN 61VMFS 238, 240–242VMFS datastore 244volume group 177Voting Set 35voting set 35vpath configured 169, 175

Wwarning capacity 25warning threshold 372Web interface 252Windows 2000 based hosts 182Windows 2000 host configuration 182, 238Windows 2003 188Windows host system CLI 214Windows NT and 2000 specific information 182working with managed disks 477workload cycle 96worldwide node name 800worldwide port name 164Write data 37Write ordering 324write ordering 288, 313, 323write through mode 86write workload 96writes 314write-through mode 37WWNN 800WWPNs 164, 350, 355, 497

YYaST Online Update 225

Zzero buffer 319zero contingency 25Zero Detection 57zero-detection algorithm 25zone 76zoning capabilities 76zoning recommendation 194, 208


Page 858: San


SG24-6423-07 ISBN 0738434035

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, customers, and partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks


Implementing the IBM System Storage SAN Volume Controller V5.1

Install, use, and troubleshoot the SAN Volume Controller

Learn about iSCSI and how to attach iSCSI hosts

Understand what solid-state drives have to offer

This IBM Redbooks publication is a detailed technical guide to the IBM System Storage SAN Volume Controller (SVC), a virtualization appliance solution that maps virtualized volumes that are visible to hosts and applications to physical volumes on storage devices. Each server within the SAN has its own set of virtual storage addresses, which are mapped to physical addresses. If the physical addresses change, the server continues running using the same virtual addresses that it had before. This capability means that volumes or storage can be added or moved while the server is still running. The IBM virtualization technology improves the management of information at the “block” level in a network, enabling applications and servers to share storage devices on a network.
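The block-level indirection described above can be illustrated with a minimal sketch. This is not SVC code; the class and names below (VirtualVolume, mdisk1, and so on) are hypothetical, chosen only to show the core idea that a host keeps addressing the same virtual blocks while the controller moves the underlying physical extents:

```python
class VirtualVolume:
    """Toy model of a virtualized volume: fixed virtual block
    addresses mapped to movable physical locations."""

    def __init__(self, name, extents):
        self.name = name
        # virtual block number -> (physical device, physical block)
        self.mapping = dict(enumerate(extents))

    def read(self, vblock):
        """The host addresses vblock; the controller resolves
        the current physical location behind the scenes."""
        return self.mapping[vblock]

    def migrate(self, vblock, new_device, new_pblock):
        """Move the data to new physical storage; the virtual
        address the host uses never changes."""
        self.mapping[vblock] = (new_device, new_pblock)


# A host keeps using virtual block 0 before and after a migration:
vol = VirtualVolume("vdisk0", [("mdisk1", 100), ("mdisk1", 101)])
before = vol.read(0)           # resolves to ("mdisk1", 100)
vol.migrate(0, "mdisk2", 500)  # data relocated while the server stays online
after = vol.read(0)            # same virtual address, new physical home
```

In the real product this mapping is maintained per extent across the whole managed-disk pool, but the principle is the same: hosts see stable virtual addresses, so storage can be added or migrated nondisruptively.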

This book is intended to help you implement the SVC at the 5.1.0 release level with a minimum of effort.

Back cover