
Veritas Dynamic Multi-Pathing Administrator's Guide

Solaris

6.0 Platform Release 1

August 2012


Veritas Dynamic Multi-Pathing Administrator's Guide

The software described in this book is furnished under a license agreement and may be used only in accordance with the terms of the agreement.

    Product version: 6.0 PR1

    Document version: 6.0PR1.1

    Legal Notice

    Copyright 2012 Symantec Corporation. All rights reserved.

Symantec, the Symantec logo, Veritas, Veritas Storage Foundation, CommandCentral, NetBackup, Enterprise Vault, and LiveUpdate are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The product described in this document is distributed under licenses restricting its use, copying, distribution, and decompilation/reverse engineering. No part of this document may be reproduced in any form by any means without prior written authorization of Symantec Corporation and its licensors, if any.

THE DOCUMENTATION IS PROVIDED "AS IS" AND ALL EXPRESS OR IMPLIED CONDITIONS, REPRESENTATIONS AND WARRANTIES, INCLUDING ANY IMPLIED WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR NON-INFRINGEMENT, ARE DISCLAIMED, EXCEPT TO THE EXTENT THAT SUCH DISCLAIMERS ARE HELD TO BE LEGALLY INVALID. SYMANTEC CORPORATION SHALL NOT BE LIABLE FOR INCIDENTAL OR CONSEQUENTIAL DAMAGES IN CONNECTION WITH THE FURNISHING, PERFORMANCE, OR USE OF THIS DOCUMENTATION. THE INFORMATION CONTAINED IN THIS DOCUMENTATION IS SUBJECT TO CHANGE WITHOUT NOTICE.

The Licensed Software and Documentation are deemed to be commercial computer software as defined in FAR 12.212 and subject to restricted rights as defined in FAR Section 52.227-19 "Commercial Computer Software - Restricted Rights" and DFARS 227.7202, "Rights in Commercial Computer Software or Commercial Computer Software Documentation", as applicable, and any successor regulations. Any use, modification, reproduction release, performance, display or disclosure of the Licensed Software and Documentation by the U.S. Government shall be solely in accordance with the terms of this Agreement.


    Symantec Corporation

    350 Ellis Street

    Mountain View, CA 94043

    http://www.symantec.com


    Technical Support

Symantec Technical Support maintains support centers globally. Technical Support's primary role is to respond to specific queries about product features and functionality. The Technical Support group also creates content for our online Knowledge Base. The Technical Support group works collaboratively with the other functional areas within Symantec to answer your questions in a timely fashion. For example, the Technical Support group works with Product Engineering and Symantec Security Response to provide alerting services and virus definition updates.

Symantec's support offerings include the following:

• A range of support options that give you the flexibility to select the right amount of service for any size organization

• Telephone and/or Web-based support that provides rapid response and up-to-the-minute information

• Upgrade assurance that delivers software upgrades

• Global support purchased on a regional business hours or 24 hours a day, 7 days a week basis

• Premium service offerings that include Account Management Services

For information about Symantec's support offerings, you can visit our Web site

    at the following URL:

    www.symantec.com/business/support/index.jsp

    All support services will be delivered in accordance with your support agreement

    and the then-current enterprise technical support policy.

    Contacting Technical Support

    Customers with a current support agreement may access Technical Support

    information at the following URL:

    www.symantec.com/business/support/contact_techsupp_static.jsp

Before contacting Technical Support, make sure you have satisfied the system requirements that are listed in your product documentation. Also, you should be at the computer on which the problem occurred, in case it is necessary to replicate the problem.

When you contact Technical Support, please have the following information available:

• Product release level


• Hardware information

• Available memory, disk space, and NIC information

• Operating system

• Version and patch level

• Network topology

• Router, gateway, and IP address information

• Problem description:

  • Error messages and log files

  • Troubleshooting that was performed before contacting Symantec

  • Recent software configuration changes and network changes

    Licensing and registration

If your Symantec product requires registration or a license key, access our technical

    support Web page at the following URL:

    www.symantec.com/business/support/

    Customer service

    Customer service information is available at the following URL:

    www.symantec.com/business/support/

    Customer Service is available to assist with non-technical questions, such as the

following types of issues:

• Questions regarding product licensing or serialization

• Product registration updates, such as address or name changes

• General product information (features, language availability, local dealers)

• Latest information about product updates and upgrades

• Information about upgrade assurance and support contracts

• Information about the Symantec Buying Programs

• Advice about Symantec's technical support options

• Nontechnical presales questions

• Issues that are related to CD-ROMs or manuals


    Documentation

    Product guides are available on the media in PDF format. Make sure that you are

    using the current version of the documentation. The document version appears

on page 2 of each guide. The latest product documentation is available on the Symantec Web site.

    https://sort.symantec.com/documents

    Your feedback on product documentation is important to us. Send suggestions

    for improvements and reports on errors or omissions. Include the title and

    document version (located on the second page), and chapter and section titles of

    the text on which you are reporting. Send feedback to:

doc_feedback@symantec.com

    For information regarding the latest HOWTO articles, documentation updates,

    or to ask a question regarding product documentation, visit the Storage and

    Clustering Documentation forum on Symantec Connect.

https://www-secure.symantec.com/connect/storage-management/forums/storage-and-clustering-documentation

    About Symantec Connect

Symantec Connect is the peer-to-peer technical community site for Symantec's enterprise customers. Participants can connect and share information with other product users, including creating forum posts, articles, videos, downloads, and blogs, and suggesting ideas, as well as interact with Symantec product teams and

    Technical Support. Content is rated by the community, and members receive

    reward points for their contributions.

    http://www.symantec.com/connect/storage-management

    Support agreement resources

    If you want to contact Symantec regarding an existing support agreement, please

    contact the support agreement administration team for your region as follows:

    [email protected] and Japan

    [email protected], Middle-East, and Africa

    [email protected] America and Latin America

Contents

Technical Support .... 4

Chapter 1 Understanding DMP .... 11

    About Veritas Dynamic Multi-Pathing .... 11
    How DMP works .... 12
        How DMP monitors I/O on paths .... 15
        Load balancing .... 17
        Dynamic Reconfiguration .... 18
        DMP in a clustered environment .... 18
    Multiple paths to disk arrays .... 19
    Device discovery .... 19
    Disk devices .... 20
    Disk device naming in DMP .... 20
        About operating system-based naming .... 21
        About enclosure-based naming .... 21

Chapter 2 Setting up DMP to manage native devices .... 27

    About setting up DMP to manage native devices .... 27
    Migrating ZFS pools to DMP .... 28
    Migrating to DMP from EMC PowerPath .... 29
    Migrating to DMP from Hitachi Data Link Manager (HDLM) .... 30
    Migrating to DMP from Sun Multipath IO (MPxIO) .... 31
    Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM) .... 32
        Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM) .... 32
        Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks .... 33
        Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices .... 34
    Adding DMP devices to an existing ZFS pool or creating a new ZFS pool .... 37
    Displaying the DMP configuration for native ZFS support .... 38
    Removing DMP support for native devices .... 39

Chapter 3 Administering DMP .... 41

    About enabling and disabling I/O for controllers and storage processors .... 41
    About displaying Veritas Dynamic Multi-Pathing (DMP) information .... 42
    Displaying the paths to a disk .... 43
    Setting customized names for DMP nodes .... 45
    DMP coexistence with native multi-pathing .... 46
    Administering DMP using vxdmpadm .... 47
        Retrieving information about a DMP node .... 48
        Displaying consolidated information about the DMP nodes .... 49
        Displaying the members of a LUN group .... 51
        Displaying paths controlled by a DMP node, controller, enclosure, or array port .... 51
        Displaying information about controllers .... 54
        Displaying information about enclosures .... 55
        Displaying information about array ports .... 55
        Displaying DMP path information for devices under third-party driver control .... 56
        Displaying extended device attributes .... 57
        Suppressing or including devices from VxVM control .... 60
        Gathering and displaying DMP I/O statistics .... 60
        Setting the attributes of the paths to an enclosure .... 66
        Displaying the path redundancy level of a device or enclosure under DMP control .... 67
        Specifying the minimum number of active paths for a device or enclosure under DMP control .... 68
        Displaying the DMP I/O policy .... 69
        Specifying the DMP I/O policy .... 69
        Disabling I/O for paths, controllers or array ports .... 76
        Enabling I/O for paths, controllers or array ports .... 77
        Renaming an enclosure .... 78
        Configuring DMP's response to I/O failures .... 78
        Configuring the DMP I/O throttling mechanism .... 80
        Configuring DMP's Subpaths Failover Groups (SFG) .... 81
        Configuring DMP's Low Impact Path Probing .... 82
        Displaying recovery option values .... 82
        Configuring DMP path restoration policies .... 83
        Stopping the DMP path restoration thread .... 85
        Displaying the status of the DMP path restoration thread .... 85
        Configuring array policy modules .... 85

Chapter 4 Administering disks .... 87

    About disk management .... 87
    Discovering and configuring newly added disk devices .... 87
        Partial device discovery .... 88
        Discovering disks and dynamically adding disk arrays .... 89
        Third-party driver coexistence .... 92
        How to administer the Device Discovery Layer .... 93
    VxVM coexistence with ZFS .... 106
    Changing the disk device naming scheme .... 107
        Displaying the disk-naming scheme .... 109
        Regenerating persistent device names .... 110
        Changing device naming for TPD-controlled enclosures .... 110
        Simple or nopriv disks with enclosure-based naming .... 112
    Discovering the association between enclosure-based disk names and OS-based disk names .... 113

Chapter 5 Online dynamic reconfiguration .... 115

    About online dynamic reconfiguration .... 115
    Reconfiguring a LUN online that is under DMP control .... 115
        Removing LUNs dynamically from an existing target ID .... 116
        Adding new LUNs dynamically to a new target ID .... 118
        About detecting target ID reuse if the operating system device tree is not cleaned up .... 120
        Scanning an operating system device tree after adding or removing LUNs .... 121
        Cleaning up the operating system device tree after removing LUNs .... 121
    Upgrading the array controller firmware online .... 123
    Replacing a host bus adapter on an M5000 server managed by DMP .... 124

Chapter 6 Event monitoring .... 129

    About the Dynamic Multi-Pathing (DMP) event source daemon (vxesd) .... 129
    Fabric Monitoring and proactive error detection .... 130
    Dynamic Multi-Pathing (DMP) automated device discovery on Solaris .... 131
    Dynamic Multi-Pathing (DMP) discovery of iSCSI and SAN Fibre Channel topology .... 132
    DMP event logging .... 132
    Starting and stopping the Dynamic Multi-Pathing (DMP) event source daemon .... 133

Chapter 7 Performance monitoring and tuning .... 135

    About tuning Veritas Dynamic Multi-Pathing (DMP) with templates .... 135
    DMP tuning templates .... 136
    Example DMP tuning template .... 138
    Tuning a DMP host with a configuration attribute template .... 140
    Managing the DMP configuration files .... 142
    Resetting the DMP tunable parameters and attributes to the default values .... 142
    DMP tunable parameters and attributes that are supported for templates .... 142
    DMP tunable parameters .... 143

Glossary .... 153

Index .... 161


Chapter 1 Understanding DMP

    This chapter includes the following topics:

• About Veritas Dynamic Multi-Pathing

• How DMP works

• Multiple paths to disk arrays

• Device discovery

• Disk devices

• Disk device naming in DMP

About Veritas Dynamic Multi-Pathing

Veritas Dynamic Multi-Pathing (DMP) provides multi-pathing functionality for

    the operating system native devices configured on the system. DMP creates DMP

    metadevices (also known as DMP nodes) to represent all the device paths to the

    same physical LUN.

    DMP is available as a component of Storage Foundation. DMP supports Veritas

    Volume Manager (VxVM) volumes on DMP metadevices, and Veritas File System

    (VxFS) file systems on those volumes.

    DMP is also available as a stand-alone product, which extends DMP metadevices

    to support ZFS. You can create ZFS pools on DMP metadevices. DMP supports

    only non-root ZFS file systems.

Veritas Dynamic Multi-Pathing can be licensed separately from Storage Foundation

    products. Veritas Volume Manager and Veritas File System functionality is not

    provided with a DMP license.

    DMP functionality is available with a Storage Foundation Enterprise license, SF

    HA Enterprise license, and Standard license.


    Veritas Volume Manager (VxVM) volumes and disk groups can co-exist with ZFS

    pools, but each device can only support one of the types. If a disk has a VxVM

    label, then the disk is not available to ZFS. Similarly, if a disk is in use by ZFS, then

    the disk is not available to VxVM.

How DMP works

Veritas Dynamic Multi-Pathing (DMP) provides greater availability, reliability, and performance by using path failover and load balancing. This feature is available

    for multiported disk arrays from various vendors.

    Disk arrays can be connected to host systems through multiple paths. To detect

    the various paths to a disk, DMP uses a mechanism that is specific to each

    supported array. DMP can also differentiate between different enclosures of a

    supported array that are connected to the same host system.

    See Discovering and configuring newly added disk devices on page 87.

    The multi-pathing policy that is used by DMP depends on the characteristics of

    the disk array.

DMP supports the following standard array types:

Active/Active (A/A)
    Allows several paths to be used concurrently for I/O. Such arrays allow DMP to provide greater I/O throughput by balancing the I/O load uniformly across the multiple paths to the LUNs. In the event that one path fails, DMP automatically routes I/O over the other available paths.

Asymmetric Active/Active (A/A-A)
    A/A-A or Asymmetric Active/Active arrays can be accessed through secondary storage paths with little performance degradation. The behavior is similar to ALUA, except that it does not support those SCSI commands which an ALUA array supports.

Asymmetric Logical Unit Access (ALUA)
    DMP supports all variants of ALUA.

Active/Passive (A/P)
    Allows access to its LUNs (logical units; real disks or virtual disks created using hardware) via the primary (active) path on a single controller (also known as an access port or a storage processor) during normal operation.

    In implicit failover mode (or autotrespass mode), an A/P array automatically fails over by scheduling I/O to the secondary (passive) path on a separate controller if the primary path fails. This passive port is not used for I/O until the active port fails. In A/P arrays, path failover can occur for a single LUN if I/O fails on the primary path.

    This policy supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive in explicit failover mode or non-autotrespass mode (A/P-F)
    The appropriate command must be issued to the array to make the LUNs fail over to the secondary path.

    This policy supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

Active/Passive with LUN group failover (A/P-G)
    For Active/Passive arrays with LUN group failover (A/P-G arrays), a group of LUNs that are connected through a controller is treated as a single failover entity. Unlike A/P arrays, failover occurs at the controller level, and not for individual LUNs. The primary controller and the secondary controller are each connected to a separate group of LUNs. If a single LUN in the primary controller's LUN group fails, all LUNs in that group fail over to the secondary controller.

    This policy supports concurrent I/O and load balancing by having multiple primary paths into a controller. This functionality is provided by a controller with multiple ports, or by the insertion of a SAN switch between an array and a controller. Failover to the secondary (passive) path occurs only if all the active primary paths fail.

    An array policy module (APM) may define array types to DMP in addition to the

    standard types for the arrays that it supports.

    Veritas Dynamic Multi-Pathing uses DMP metanodes (DMP nodes) to access disk

    devices connected to the system. For each disk in a supported array, DMP maps

    one node to the set of paths that are connected to the disk. Additionally, DMP

    associates the appropriate multi-pathing policy for the disk array with the node.

For disks in an unsupported array, DMP maps a separate node to each path that is connected to a disk. The raw and block devices for the nodes are created in the

    directories /dev/vx/rdmp and /dev/vx/dmp respectively.

    Figure 1-1 shows how DMP sets up a node for a disk in a supported disk array.

    Figure 1-1 How DMP represents multiple physical paths to a disk as one node

[Figure: a host with two controllers, c1 and c2, provides multiple paths to a single disk; DMP maps the paths to a single DMP node, which VxVM accesses.]


    DMP implements a disk device naming scheme that allows you to recognize to

    which array a disk belongs.

    Figure 1-2 shows an example where two paths, c1t99d0 and c2t99d0, exist to a

single disk in the enclosure, but VxVM uses the single DMP node, enc0_0, to access it.

    Figure 1-2 Example of multi-pathing for a disk enclosure in a SAN environment

[Figure: the host controllers c1 and c2 connect through Fibre Channel switches to disk enclosure enc0; the disk is c1t99d0 or c2t99d0 depending on the path, and both paths are mapped by DMP to the single node enc0_0, which VxVM accesses.]

    See About enclosure-based naming on page 21.

    See Changing the disk device naming scheme on page 107.

    How DMP monitors I/O on paths

In VxVM prior to release 5.0, DMP had one kernel daemon (errord) that performed

    error processing, and another (restored) that performed path restoration

    activities.

From release 5.0, DMP maintains a pool of kernel threads that are used to perform such tasks as error processing, path restoration, statistics collection, and SCSI request callbacks. The vxdmpadm gettune command can be used to provide information about the threads. The name restored has been retained for backward compatibility.
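For example, the size of this thread pool is reported by the dmp_daemon_count tunable (shown here as an illustration; the tunable is among the DMP tunable parameters covered later in this guide):

    # vxdmpadm gettune dmp_daemon_count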

    One kernel thread responds to I/O failures on a path by initiating a probe of the

    host bus adapter (HBA) that corresponds to the path. Another thread then takes

    the appropriate action according to the response from the HBA. The action taken


    can be to retry the I/O request on the path, or to fail the path and reschedule the

    I/O on an alternate path.

    The restore kernel task is woken periodically (typically every 5 minutes) to check

the health of the paths, and to resume I/O on paths that have been restored. As some paths may suffer from intermittent failure, I/O is only resumed on a path

    if the path has remained healthy for a given period of time (by default, 5 minutes).

    DMP can be configured with different policies for checking the paths.

    See Configuring DMP path restoration policies on page 83.
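For instance, the probe interval and the checking policy are controlled by tunables (the values below are illustrative; check_disabled is one of the supported policies):

    # vxdmpadm settune dmp_restore_interval=400
    # vxdmpadm settune dmp_restore_policy=check_disabled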

    The statistics-gathering task records the start and end time of each I/O request,

    and the number of I/O failures and retries on each path. DMP can be configured

    to use this information to prevent the SCSI driver being flooded by I/O requests.

    This feature is known as I/O throttling.

If an I/O request relates to a mirrored volume, VxVM specifies the FAILFAST flag.

    In such cases, DMP does not retry failed I/O requests on the path, and instead

    marks the disks on that path as having failed.

    See Path failover mechanism on page 16.

    See I/O throttling on page 17.

    Path failover mechanism

    DMP enhances system availability when used with disk arrays having multiple

    paths. In the event of the loss of a path to a disk array, DMP automatically selects

    the next available path for I/O requests without intervention from the

    administrator.

    DMP is also informed when a connection is repaired or restored, and when you

    add or remove devices after the system has been fully booted (provided that the

    operating system recognizes the devices correctly).

If required, the response of DMP to I/O failure on a path can be tuned for the paths to individual arrays. DMP can be configured to time out an I/O request either after a given period of time has elapsed without the request succeeding, or after a given number of retries on a path have failed.

    See Configuring DMP's response to I/O failures on page 78.
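For example, both tuning styles can be applied per enclosure (the enclosure name enc0 and the values are placeholders):

    # vxdmpadm setattr enclosure enc0 recoveryoption=timebound iotimeout=300
    # vxdmpadm setattr enclosure enc0 recoveryoption=fixedretry retrycount=5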

    Subpaths Failover Group (SFG)

A subpaths failover group (SFG) represents a group of paths which could fail and restore together. When an I/O error is encountered on a path in an SFG, DMP does proactive path probing on the other paths of that SFG as well. This behavior adds greatly to the performance of path failover, thus improving I/O performance.


Currently, the criterion followed by DMP to form the subpaths failover groups is to bundle the paths with the same endpoints from the host to the array into one logical storage failover group.

    See Configuring DMP's Subpaths Failover Groups (SFG) on page 81.
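As an illustration, SFG behavior is controlled by the dmp_sfg_threshold tunable: a value of 0 disables the feature, and a nonzero value sets the number of path errors that triggers probing of the group:

    # vxdmpadm settune dmp_sfg_threshold=1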

    Low Impact Path Probing (LIPP)

The restore daemon in DMP probes the LUN paths periodically. This behavior helps DMP to keep the path states up-to-date even when there is no I/O activity on the paths. Low Impact Path Probing adds logic to the restore daemon to optimize the number of probes performed while the path status is being updated by the restore daemon. This optimization is achieved with the help of the logical subpaths failover groups. With LIPP logic in place, DMP probes only a limited number of paths within an SFG, instead of probing all the paths in an SFG. Based on these probe results, DMP determines the states of all the paths in that SFG.

    See Configuring DMP's Low Impact Path Probing on page 82.
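For example, the feature can be turned on or off with the dmp_low_impact_probe tunable (shown here for illustration):

    # vxdmpadm settune dmp_low_impact_probe=on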

    I/O throttling

    If I/O throttling is enabled, and the number of outstanding I/O requests builds up

    on a path that has become less responsive, DMP can be configured to prevent new

    I/O requests being sent on the path either when the number of outstanding I/O

    requests has reached a given value, or a given time has elapsed since the last

    successful I/O request on the path. While throttling is applied to a path, the newI/O requests on that path are scheduled on other available paths. The throttling

    is removed from the path if the HBA reports no error on the path, or if an

    outstanding I/O request on the path succeeds.

    See Configuring the DMP I/O throttling mechanism on page 80.
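For example, throttling can be enabled on an enclosure when the queue of outstanding requests on a path exceeds a given depth (the enclosure name and the queue depth are placeholders):

    # vxdmpadm setattr enclosure enc0 recoveryoption=throttle queuedepth=32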

    Load balancing

    By default, Veritas Dynamic Multi-Pathing (DMP) uses the Minimum Queue I/O

    policy for load balancing across paths for Active/Active (A/A), Active/Passive

(A/P), Active/Passive with explicit failover (A/P-F) and Active/Passive with group failover (A/P-G) disk arrays. Load balancing maximizes I/O throughput by using

    the total bandwidth of all available paths. I/O is sent down the path which has the

    minimum outstanding I/Os.

For A/P disk arrays, I/O is sent down the primary paths. If all of the primary paths

    fail, I/O is switched over to the available secondary paths. As the continuous

    transfer of ownership of LUNs from one controller to another results in severe


I/O slowdown, load balancing across primary and secondary paths is not performed

    for A/P disk arrays unless they support concurrent I/O.

    For A/P, A/P-F and A/P-G arrays, load balancing is performed across all the

    currently active paths as is done for A/A arrays.

    You can change the I/O policy for the paths to an enclosure or disk array.

    See Specifying the DMP I/O policy on page 69.
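For example, the current policy can be displayed and changed per enclosure (the enclosure name is a placeholder; minimumq is the default):

    # vxdmpadm getattr enclosure enc0 iopolicy
    # vxdmpadm setattr enclosure enc0 iopolicy=round-robin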

    Dynamic Reconfiguration

    Dynamic Reconfiguration (DR) is a feature that is available on some high-end

enterprise systems. It allows some components (such as CPUs, memory, and other

    controllers or I/O boards) to be reconfigured while the system is still running.

    The reconfigured component might be handling the disks controlled by VxVM.

    See About enabling and disabling I/O for controllers and storage processors

    on page 41.

    See About online dynamic reconfiguration on page 115.

    DMP in a clustered environment

Note: You need an additional license to use the cluster feature of VxVM. Clustering

    is only supported for VxVM.

    In a clustered environment where Active/Passive type disk arrays are shared by

    multiple hosts, all nodes in the cluster must access the disk via the same physical

    storage controller port. Accessing a disk via multiple paths simultaneously can

severely degrade I/O performance (sometimes referred to as the ping-pong effect).

Path failover on a single cluster node is also coordinated across the cluster so that

    all the nodes continue to share the same physical path.

    Prior to release 4.1 of VxVM, the clustering and DMP features could not handle

    automatic failback in A/P arrays when a path was restored, and did not support

    failback for explicit failover mode arrays. Failback could only be implemented

manually by running the vxdctl enable command on each cluster node after the path failure had been corrected. From release 4.1, failback is now an automatic cluster-wide operation that is coordinated by the master node. Automatic failback in explicit failover mode arrays is also handled by issuing the appropriate low-level command.


Note: Support for automatic failback of an A/P array requires that an appropriate

    ASL (and APM, if required) is installed on the system.

    See Discovering disks and dynamically adding disk arrays on page 89.

    For Active/Active type disk arrays, any disk can be simultaneously accessed

    through all available physical paths to it. In a clustered environment, the nodes

    do not need to access a disk via the same physical path.

    See How to administer the Device Discovery Layer on page 93.

    See Configuring array policy modules on page 85.

    About enabling or disabling controllers with shared disk groups

See How to administer the Device Discovery Layer on page 93.

    See DMP in a clustered environment on page 18.

    Prior to release 5.0, VxVM did not allow enabling or disabling of paths or

    controllers connected to a disk that is part of a shared Veritas Volume Manager

    disk group. From VxVM 5.0 onward, such operations are supported on shared

    DMP nodes in a cluster.
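For example, I/O through a controller can be disabled before maintenance and re-enabled afterwards (the controller name c1 is a placeholder):

    # vxdmpadm disable ctlr=c1
    # vxdmpadm enable ctlr=c1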

    Multiple paths to disk arrays

Some disk arrays provide multiple ports to access their disk devices. These ports, coupled with the host bus adaptor (HBA) controller and any data bus or I/O

    processor local to the array, make up multiple hardware paths to access the disk

    devices. Such disk arrays are called multipathed disk arrays. This type of disk

    array can be connected to host systems in many different configurations, (such

    as multiple ports connected to different controllers on a single host, chaining of

    the ports through a single controller on a host, or ports connected to different

    hosts simultaneously).

    See How DMP works on page 12.

Device discovery

Device discovery is the term used to describe the process of discovering the disks that are attached to a host. This feature is important for DMP because it needs to support a growing number of disk arrays from a number of vendors. In conjunction with the ability to discover the devices attached to a host, the Device Discovery service enables you to add support for new disk arrays. Device discovery uses

    a facility called the Device Discovery Layer (DDL).


    The DDL enables you to add support for new disk arrays without the need for a

    reboot.

This means that you can dynamically add a new disk array to a host, and run a command that scans the operating system's device tree for all the attached disk devices, and reconfigures DMP with the new device database.

    See How to administer the Device Discovery Layer on page 93.
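For example, after attaching a new array you can trigger discovery with either of the following commands (vxdisk scandisks new restricts the scan to newly added devices):

    # vxdctl enable
    # vxdisk scandisks new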

Disk devices

The device name (sometimes referred to as devname or disk access name) defines the name of a disk device as it is known to the operating system.

Such devices are usually, but not always, located in the /dev/[r]dsk directories. Devices that are specific to hardware from certain vendors may use their own path name conventions.

    Dynamic Multi-Pathing (DMP) uses the device name to create metadevices in the

    /dev/vx/[r]dmp directories. DMP uses the metadevices (or DMP nodes) to

    represent disks that can be accessed by one or more physical paths, perhaps via

    different controllers. The number of access paths that are available depends on

    whether the disk is a single disk, or is part of a multiported disk array that is

    connected to a system.

    You can use the vxdisk utility to display the paths that are subsumed by a DMP

metadevice, and to display the status of each path (for example, whether it is enabled or disabled).

    See How DMP works on page 12.

    Device names may also be remapped as enclosure-based names.

    See Disk device naming in DMP on page 20.

Disk device naming in DMP

Device names for disks are assigned according to the naming scheme which you specify to DMP. The format of the device name may vary for different categories of disks.

See Disk categories on page 90.

Device names can use one of the following naming schemes:

• Operating system-based naming

• Enclosure-based naming


    Devices with device names longer than 31 characters always use enclosure-based

    names.

    By default, DMP uses enclosure-based naming. You can change the disk device

    naming scheme if required.

    See Changing the disk device naming scheme on page 107.

    About operating system-based naming

    In the OS-based naming scheme, all disk devices are named using the c#t#d#s#

    format.

    The syntax of a device name is c#t#d#s#, where c# represents a controller on a

    host bus adapter, t# is the target controller ID, d# identifies a disk on the target

    controller, and s# represents a partition (or slice) on the disk.

    Note: For non-EFI disks, the slice s2 represents the entire disk. For both EFI and

    non-EFI disks, the entire disk is implied if the slice is omitted from the device

    name.

    DMP assigns the name of the DMP meta-device (disk access name) from the

    multiple paths to the disk. DMP sorts the names by controller, and selects the

    smallest controller number. For example, c1 rather than c2. If multiple paths are

    seen from the same controller, then DMP uses the path with the smallest target

name. This behavior makes it easier to correlate devices with the underlying storage.

    If a CVM cluster is symmetric, each node in the cluster accesses the same set of

    disks. This naming scheme makes the naming consistent across nodes in a

    symmetric cluster.

    The boot disk (which contains the root file system and is used when booting the

    system) is often identified to VxVM by the device name c0t0d0.

    By default, OS-based names are not persistent, and are regenerated if the system

    configuration changes the device name as recognized by the operating system. If

    you do not want the OS-based names to change after reboot, set the persistence

attribute for the naming scheme.

See Changing the disk device naming scheme on page 107.
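For example, OS-based naming with persistent names can be selected as follows (a brief illustration of the vxddladm syntax; see the referenced section for the full procedure):

    # vxddladm set namingscheme=osn persistence=yes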

    About enclosure-based naming

    Enclosure-based naming provides an alternative to operating system-based device

    naming. In a Storage Area Network (SAN) that uses Fibre Channel switches,

information about disk location provided by the operating system may not correctly


indicate the physical location of the disks. Enclosure-based naming allows DMP to access enclosures as separate physical entities. By configuring redundant copies of your data on separate enclosures, you can safeguard against failure of one or more enclosures.

Enclosure-based naming allows disk devices to be named for enclosures rather than for the controllers through which they are accessed. For example, c#t#d#s# naming assigns controller-based device names to disks in separate enclosures that are connected to the same host controller.

Figure 1-3 shows a typical SAN environment where host controllers are connected

    to multiple enclosures through a Fibre Channel switch.

Figure 1-3 Example configuration for disk enclosures connected via a Fibre Channel switch

[Figure: host controller c1 connects through a Fibre Channel switch to disk enclosures enc0, enc1, and enc2.]

    In such a configuration, enclosure-based naming can be used to refer to each disk

    within an enclosure. For example, the device names for the disks in enclosure

    enc0 are named enc0_0, enc0_1, and so on. The main benefit of this scheme is

that it allows you to quickly determine where a disk is physically located in a large

    SAN configuration.

In most disk arrays, you can use hardware-based storage management to represent

    several physical disks as one LUN to the operating system. In such cases, VxVM

    also sees a single logical disk device rather than its component disks. For this


    reason, when reference is made to a disk within an enclosure, this disk may be

    either a physical disk or a LUN.

    Another important benefit of enclosure-based naming is that it enables VxVM to

avoid placing redundant copies of data in the same enclosure. This is a good thing to avoid, as each enclosure can be considered to be a separate fault domain. For

    example, if a mirrored volume were configured only on the disks in enclosure

    enc1, the failure of the cable between the switch and the enclosure would make

    the entire volume unavailable.

    If required, you can replace the default name that DMP assigns to an enclosure

    with one that is more meaningful to your configuration.

    See Renaming an enclosure on page 78.
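For example (both the existing and the new enclosure names are placeholders):

    # vxdmpadm setattr enclosure enc0 name=GENESIS0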

    Figure 1-4 shows a High Availability (HA) configuration where redundant-loop

access to storage is implemented by connecting independent controllers on the host to separate switches with independent paths to the enclosures.

Figure 1-4 Example HA configuration using multiple switches to provide redundant loop access

[Figure: host controllers c1 and c2 connect through separate Fibre Channel switches to disk enclosures enc0, enc1, and enc2.]

    Such a configuration protects against the failure of one of the host controllers

    (c1 and c2), or of the cable between the host and one of the switches. In this

    example, each disk is known by the same name to VxVM for all of the paths over

    which it can be accessed.


For example, the disk device enc0_0 represents a single disk for which two different paths are known to the operating system, such as c1t99d0 and c2t99d0.

    See Disk device naming in DMP on page 20.

    See Changing the disk device naming scheme on page 107.

    To take account of fault domains when configuring data redundancy, you can

    control how mirrored volumes are laid out across enclosures.

    Enclosure-based naming

    By default, DMP uses enclosure-based naming.

    Enclosure-based naming operates as follows:

• All fabric or non-fabric disks in supported disk arrays are named using the enclosure_name_# format. For example, disks in the supported disk array enggdept are named enggdept_0, enggdept_1, enggdept_2, and so on.

  You can use the vxdmpadm command to administer enclosure names.

  See Renaming an enclosure on page 78.

  See the vxdmpadm(1M) manual page.

• Disks in the DISKS category (JBOD disks) are named using the Disk_# format.

• Disks in the OTHER_DISKS category (disks that are not multipathed by DMP) are named using the c#t#d#s# format.

• By default, enclosure-based names are persistent, so they do not change after reboot.

    If a CVM cluster is symmetric, each node in the cluster accesses the same set of

    disks. Enclosure-based names provide a consistent naming system so that the

    device names are the same on each node.

    To display the native OS device names of a DMP disk (such as mydg01), use the

    following command:

    # vxdisk path | grep diskname

    See Renaming an enclosure on page 78.

    See Disk categories on page 90.

Enclosure based naming with the Array Volume Identifier (AVID) attribute

    By default, DMP assigns enclosure-based names to DMP meta-devices using an

    array-specific attribute called the Array Volume ID (AVID). The AVID provides a

unique identifier for the LUN that is provided by the array. The ASL corresponding


    to the array provides the AVID property. Within an array enclosure, DMP uses

    the Array Volume Identifier (AVID) as an index in the DMP metanode name. The

    DMP metanode name is in the format enclosureID_AVID.

With the introduction of AVID to the EBN naming scheme, identifying storage devices becomes much easier. The array volume identifier (AVID) enables you to

    have consistent device naming across multiple nodes connected to the same

    storage. The disk access name never changes, because it is based on the name

    defined by the array itself.

    Note: DMP does not support AVID with PowerPath names.

If DMP does not have access to a device's AVID, it retrieves another unique LUN

    identifier called the LUN serial number. DMP sorts the devices based on the LUN

    Serial Number (LSN), and then assigns the index number. All hosts see the same

    set of devices, so all hosts will have the same sorted list, leading to consistent

    device indices across the cluster. In this case, the DMP metanode name is in the

    format enclosureID_index.

DMP also supports a scalable framework that allows you to fully customize the

    device names on a host by applying a device naming file that associates custom

    names with cabinet and LUN serial numbers.
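For example, a prepared naming file can be applied as follows (the file path is a placeholder):

    # vxddladm assign names file=/tmp/custom_names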

    If a CVM cluster is symmetric, each node in the cluster accesses the same set of

    disks. Enclosure-based names provide a consistent naming system so that the

device names are the same on each node.

The DMP utilities such as vxdisk list display the DMP metanode name, which

    includes the AVID property. Use the AVID to correlate the DMP metanode name

to the LUN displayed in the array management interface (GUI or CLI).

    For example, on an EMC CX array where the enclosure is emc_clariion0 and the

    array volume ID provided by the ASL is 91, the DMP metanode name is

emc_clariion0_91. The following sample output shows the DMP metanode names:

    $ vxdisk list

emc_clariion0_91  auto:cdsdisk  emc_clariion0_91  dg1  online shared
emc_clariion0_92  auto:cdsdisk  emc_clariion0_92  dg1  online shared
emc_clariion0_93  auto:cdsdisk  emc_clariion0_93  dg1  online shared
emc_clariion0_282 auto:cdsdisk  emc_clariion0_282 dg1  online shared
emc_clariion0_283 auto:cdsdisk  emc_clariion0_283 dg1  online shared
emc_clariion0_284 auto:cdsdisk  emc_clariion0_284 dg1  online shared

    # vxddladm get namingscheme

    NAMING_SCHEME PERSISTENCE LOWERCASE USE_AVID


    ======================================================

    Enclosure Based Yes Yes Yes


Chapter 2 Setting up DMP to manage native devices

    This chapter includes the following topics:

• About setting up DMP to manage native devices

• Migrating ZFS pools to DMP

• Migrating to DMP from EMC PowerPath

• Migrating to DMP from Hitachi Data Link Manager (HDLM)

• Migrating to DMP from Sun Multipath IO (MPxIO)

• Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)

• Adding DMP devices to an existing ZFS pool or creating a new ZFS pool

• Displaying the DMP configuration for native ZFS support

• Removing DMP support for native devices

About setting up DMP to manage native devices

You can use DMP instead of third-party drivers for advanced storage management.

    This section describes how to set up DMP to manage ZFS pools and any ZFS file

    systems that operate on those pools.

    After you install DMP, set up DMP for use with ZFS. To set up DMP for use with

    ZFS, turn on the dmp_native_support tunable. When this tunable is turned on,

    DMP enables support for ZFS on any device that does not have a VxVM label and

    is not in control of any third party multi-pathing (TPD) software. In addition,


    turning on the dmp_native_support tunable migrates any ZFS pools that are not

    in use onto DMP devices.

    The dmp_native_support tunable enables DMP support for ZFS, as follows:

ZFS pools: If the ZFS pools are not in use, turning on native support
migrates the devices to DMP devices. If the ZFS pools are in use,
perform the steps to turn off the devices and migrate the devices to DMP.

Veritas Volume Manager (VxVM) devices: Native support is not enabled
for any device that has a VxVM label. To make the device available for
ZFS, remove the VxVM label. VxVM devices can coexist with native
devices under DMP control.

Devices that are multi-pathed with third-party drivers (TPD): If a disk
is already multi-pathed with a third-party driver (TPD), DMP does not
manage the devices unless you remove TPD support. After removing TPD
support, turn on the dmp_native_support tunable to migrate the devices.
If ZFS pools are constructed over TPD devices, perform the steps to
migrate the ZFS pools onto DMP devices.
See Migrating ZFS pools to DMP on page 28.

    To turn on the dmp_native_support tunable, use the following command:

    # vxdmpadm settune dmp_native_support=on

    The first time this operation is performed, the command reports if a pool is in

    use, and does not migrate those devices. To migrate the pool onto DMP, stop the

    pool. Then execute the vxdmpadm settune command again to migrate the pool

    onto DMP.
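For example, a minimal sketch of migrating a pool that was reported as in use; the pool name mypool is illustrative:

# zpool export mypool
# vxdmpadm settune dmp_native_support=on
# zpool import mypool

After the import, the pool should come up on DMP devices.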

To verify the value of the dmp_native_support tunable, use the following command:

    # vxdmpadm gettune dmp_native_support

    Tunable Current Value Default Value

    --------------------------------------------------

    dmp_native_support on off

Migrating ZFS pools to DMP
You can use DMP instead of third-party drivers for advanced storage management.

This section describes how to set up DMP to manage ZFS pools and the file systems

    operating on them.


    To set up DMP, migrate the devices from the existing third-party device drivers

    to DMP.

    Table 2-1 shows the supported native solutions and migration paths.

Table 2-1    Supported migration paths

Operating system    Native solution                     Migration procedure
Solaris 10          EMC PowerPath                       See Migrating to DMP from EMC PowerPath on page 29.
Solaris 10          Hitachi Data Link Manager (HDLM)    See Migrating to DMP from Hitachi Data Link Manager (HDLM) on page 30.
Solaris 10          Sun Multipath IO (MPxIO)            See Migrating to DMP from Sun Multipath IO (MPxIO) on page 31.

Migrating to DMP from EMC PowerPath
This procedure describes removing devices from EMC PowerPath control and

    enabling DMP on the devices.

    Plan for system downtime for the following procedure.

    The migration steps involve system downtime on a host due to the following:

    Need to stop applications

    Need to stop the VCS services if using VCS

    To remove devices from EMC PowerPath control and enable DMP

    1 Turn on the DMP support for the ZFS pool.

    # vxdmpadm settune dmp_native_support=on

    2 Stop the applications that use the PowerPath meta-devices.

    In a VCS environment, stop the VCS service group of the application, which

    will stop the application.

3 Unmount any file systems that use the volume group on the PowerPath device.

    4 Export the ZFS pools that use the PowerPath device.

    # zpool export poolname


    5 Remove the disk access names for the PowerPath devices from VxVM.

    # vxdisk rm emcpowerXXXX

Where emcpowerXXXX is the name of the device.

    6 Take the device out of PowerPath control:

    # powermt unmanage dev=pp_device_name

    or

    # powermt unmanage class=array_class

7 Verify that the PowerPath device has been removed from PowerPath control.

    # powermt display dev=all

    8 Run a device scan to bring the devices under DMP control:

    # vxdisk scandisks

    9 Mount the file systems.

    10 Restart the applications.

Migrating to DMP from Hitachi Data Link Manager (HDLM)
This procedure describes removing devices from HDLM control and enabling DMP

    on the devices.

Note: DMP cannot co-exist with HDLM; HDLM must be removed from the system.

    Plan for system downtime for the following procedure.

    The migration steps involve system downtime on a host due to the following:

    Need to stop applications

    Need to stop the VCS services if using VCS

    The procedure involves one or more host reboots

    To remove devices from Hitachi Data Link Manager (HDLM) and enable DMP

1 Stop the applications using the HDLM meta-device.

    2 Unmount any file systems that use the volume group on the HDLM device.


    3 Export the ZFS pools that use the HDLM device.

    # zpool export poolname

    4 Uninstall the HDLM package.

    5 Turn on the DMP support for the ZFS pool.

    # vxdmpadm settune dmp_native_support=on

    6 Reboot the system.

7 After the reboot, DMP controls the devices. If there were any ZFS pools on HDLM devices, they are migrated onto DMP devices.

    8 Mount the file systems.

    9 Restart the applications.

Migrating to DMP from Sun Multipath IO (MPxIO)
This procedure describes removing devices from MPxIO control and enabling

    DMP on the devices.

    Plan for system downtime for the following procedure.

    The migration steps involve system downtime on a host due to the following:

    Need to stop applications

    Need to stop the VCS services if using VCS

    The procedure involves one or more host reboots

    To take devices out of MPxIO control and enable DMP on the devices

    1 Stop the applications that use MPxIO devices.

    2 Unmount all the file systems that use MPxIO devices.

    3 Deactivate the ZFS pools operating on MPxIO devices.

4 Turn on the DMP support for the ZFS pools.
# vxdmpadm settune dmp_native_support=on

    5 Disable MPxIO using the following command.

    # stmsboot -d


    6 Reboot the system.

    After the reboot, DMP controls the ZFS pools. Any ZFS pools are migrated

    onto DMP devices.

    7 Mount the file systems.

    8 Restart the applications.

Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)

    This release of DMP supports using DMP devices with Oracle Automatic Storage

    Management (ASM). DMP supports the following operations:

See Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM) on page 32.

See Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle

    Automatic Storage Management (ASM) disks on page 33.

    See Migrating Oracle Automatic Storage Management (ASM) disk groups on

operating system devices to Dynamic Multi-Pathing (DMP) devices on page 34.

Enabling Dynamic Multi-Pathing (DMP) devices for use with Oracle Automatic Storage Management (ASM)

    Enable DMP support for ASM to make DMP devices visible to ASM as available

    disks.


    To make DMP devices visible to ASM

1 From ASM, make sure ASM_DISKSTRING is set to the value /dev/vx/rdmp/*.

    SQL> show parameter ASM_DISKSTRING;

    NAME TYPE VALUE

    -------------------- ----------- ---------------

    asm_diskstring string /dev/vx/rdmp/*

    2 As root user, enable DMP devices for use with ASM.

# vxdmpraw enable username groupname [devicename ...]

    For example:

    # vxdmpraw enable oracle dba eva4k6k0_1

    3 From ASM, confirm that ASM can see these new devices.

    SQL> select name,path,header_status from v$asm_disk;

    NAME PATH HEADER_STATUS

    ---------------------------------------------

    ... ....... ....

    /dev/vx/rdmp/eva4k6k0_1 CANDIDATE

    ... ....... ....

Removing Dynamic Multi-Pathing (DMP) devices from the listing of Oracle Automatic Storage Management (ASM) disks

    To remove DMP devices from the listing of ASM disks, disable DMP support for

    ASM from the device. You cannot remove DMP support for ASM from a device

    that is in an ASM disk group.

    To remove the DMP device from the listing of ASM disks

1 If the device is part of any ASM disk group, remove the device from the ASM disk group.

    2 As root user, disable DMP devices for use with ASM.

    # vxdmpraw disable diskname

    For example:

    # vxdmpraw disable eva4k6k0_1


Migrating Oracle Automatic Storage Management (ASM) disk groups on operating system devices to Dynamic Multi-Pathing (DMP) devices

When an existing ASM disk group uses operating system native devices as disks, you can migrate these devices to Veritas Dynamic Multi-Pathing control. If the

    OS devices are controlled by other multi-pathing drivers, this operation requires

    system downtime to migrate the devices to DMP control.

    After this procedure, the ASM disk group uses the migrated DMP devices as its

    disks.

    "From ASM" indicates that you perform the step as the user running the ASM

    instance.

    "As root user" indicates that you perform the step as the root user.

    To migrate an ASM disk group from operating system devices to DMP devices

1 From ASM, identify the ASM disk group that you want to migrate, and identify the disks under its control.

    2 From ASM, dismount the ASM disk group.

3 If the devices are controlled by other multi-pathing drivers, migrate the devices to DMP control. Perform these steps as root user.

    Migrate from MPxIO or PowerPath.

    4 As root user, enable DMP support for the ASM disk group identified in step

    1.

    # vxdmpraw enable username groupname [devicename ...]

Where username represents the ASM user running the ASM instance, and
groupname represents the UNIX groupname of the specified user-id. If you

    specify one or more devicenames, DMP support for ASM is enabled for those

    devices. If you do not specify a devicename, DMP support is enabled for all

    devices in the system that have an ASM signature.

    5 From ASM, set ASM_DISKSTRING.

    Set ASM_DISKSTRING to /dev/vx/rdmp/*

    6 From ASM, confirm that the devices are available to ASM.

7 From ASM, mount the ASM disk groups. The disk groups are mounted on DMP devices.


    AIX, HP-UX, and Solaris example: To migrate an ASM disk group from operating

    system devices to DMP devices

1 From ASM, identify the ASM disk group that you want to migrate, and identify

    the disks under its control.

    SQL> select name, state from v$asm_diskgroup;

    NAME STATE

    ------------------------------ -----------

    ASM_DG1 MOUNTED

    SQL> select path,header_status from v$asm_disk;

    NAME PATH HEADER_STATUS

    ------------------------------------------------------------

    ASM_DG1_0001 /dev/rdsk/c2t5006016130206782d9s6 MEMBER

    ASM_DG1_0000 /dev/rdsk/c2t50001FE1500A8F08d1s6 MEMBER

    2 From ASM, dismount the ASM disk group.

    SQL> alter diskgroup ASM_DG1 dismount;

    Diskgroup altered.

    SQL> select name , state from v$asm_diskgroup;

    NAME STATE

    ------------------------------ -----------

    ASM_DG1 DISMOUNTED

3 If the devices are controlled by other multi-pathing drivers, migrate the devices to DMP control. Perform these steps as root user.

    Note: This step requires planned downtime of the system.

4 As root user, enable DMP support for the ASM disk group identified in step 1, in one of the following ways:

To migrate selected ASM disk groups, use the vxdmpadm command to

    determine the DMP nodes that correspond to the OS devices.

    # vxdmpadm getdmpnode nodename=c2t5d9

    NAME STATE ENCLR-TYPE PATHS ENBL DSBL ENCLR-NAME

    ========================================================

    EVA4K6K0_0 ENABLED EVA4K6K 4 4 0 EVA4K6K0

    Use the device name in the command below:


    # vxdmpraw enable oracle dba eva4k6k0_0 eva4k6k0_9 \

    emc_clariion0_208

    If you do not specify a devicename, DMP support is enabled for all devices

    in the disk group that have an ASM signature. For example:

    # vxdmpraw enable oracle dba

    5 From ASM, set ASM_DISKSTRING to the value /dev/vx/rdmp/*.

    SQL> alter system set ASM_DISKSTRING='/dev/vx/rdmp/*';

    System altered.

    SQL> show parameter ASM_DISKSTRING;

NAME TYPE VALUE
-------------------------- --------- -------------------

    asm_diskstring string /dev/vx/rdmp/*

    6 From ASM, confirm that the devices are available to ASM.

    SQL> select name,path,header_status from v$asm_disk where

    header_status='MEMBER';

    NAME PATH HEADER_STATUS

    --------------------------------------------------------

    /dev/vx/rdmp/EVA4K6K0_0s6 MEMBER

    /dev/vx/rdmp/EMC_CLARiiON0_208s6 MEMBER

7 From ASM, mount the ASM disk groups. The disk groups are mounted on DMP devices.

    SQL> alter diskgroup ASM_DG1 mount;

    Diskgroup altered.

    SQL> select name, state from v$asm_diskgroup;

    NAME STATE

    ------------------------------ -----------

    ASM_DG1 MOUNTED

    SQL> select name,path,header_status from v$asm_disk where

    header_status='MEMBER';

    NAME PATH HEADER_STATUS

    -----------------------------------------------------------

    ASM_DG1_0000 /dev/vx/rdmp/EVA4K6K0_0s6 MEMBER

    ASM_DG1_0001 /dev/vx/rdmp/EMC_CLARiiON0_208s6 MEMBER


Adding DMP devices to an existing ZFS pool or creating a new ZFS pool

When the dmp_native_support tunable is ON, you can create a new ZFS pool on an available

    DMP device. You can also add an available DMP device to an existing ZFS pool.

    After the ZFS pools are on DMP devices, you can use any of the ZFS commands to

    manage the pools.

    To create a new ZFS pool on a DMP device or add a DMP device to an existing ZFS

    pool

    1 Choose disks that are available for use by ZFS. The vxdisk list command

    displays disks that are not in use by VxVM with the TYPE auto:none and the

STATUS online invalid.

    # vxdisk list

    DEVICE TYPE DISK GROUP STATUS

    . . .

    tagmastore-usp0_0079 auto:none - - online invalid

    tagmastore-usp0_0080 auto:none - - online invalid

    2 Create a new ZFS pool on a DMP device.

    # zpool create newpool tagmastore-usp0_0079s2

    # zpool status newpool

    pool: newpool

    state: ONLINE

    scrub: none requested

    config:

    NAME STATE READ WRITE CKSUM

    newpool ONLINE 0 0 0

    tagmastore-usp0_0079s2 ONLINE 0 0 0


    3 Add a DMP device to an existing ZFS pool.

    # zpool add newpool tagmastore-usp0_0080s2

    # zpool status newpool

    pool: newpool

    state: ONLINE

    scrub: none requested

    config:

    NAME STATE READ WRITE CKSUM

    newpool ONLINE 0 0 0

    tagmastore-usp0_0079s2 ONLINE 0 0 0

    tagmastore-usp0_0080s2 ONLINE 0 0 0

    errors: No known data errors

    4 Run the following command to trigger DMP discovery of the devices:

    # vxdisk scandisks

    5 After the discovery completes, the disks are shown as in use by ZFS:

    # vxdisk list

    . . .

tagmastore-usp0_0079 auto:ZFS - - ZFS
tagmastore-usp0_0080 auto:ZFS - - ZFS

Displaying the DMP configuration for native ZFS support

When DMP is enabled for native devices, the dmp_native_support attribute displays

    as ON. When the tunable is ON, all DMP disks are available for native volumes

    except:

    Devices that have a VxVM label

    If you initialize a disk for VxVM use, then the native multi-pathing feature is

    automatically disabled for the disk. When the VxVM label is removed, the

    native multi-pathing is enabled.

    Devices that are multi-pathed with Third-party drivers

    If a disk is already multi-pathed with a third-party driver (TPD), DMP does not

    manage the devices unless TPD support is removed.


    To display whether DMP is enabled

    1 Display the attribute dmp_native_support.

    # vxdmpadm gettune dmp_native_support

2 When the dmp_native_support tunable is ON, use the vxdisk list command
to display available volumes. Volumes available to ZFS display with the TYPE

    auto:none. Volumes that are already in use by ZFS display with the TYPE

    auto:ZFS.
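For example, output of the following form distinguishes the two cases (the device names are illustrative):

# vxdisk list
DEVICE                 TYPE        DISK  GROUP  STATUS
tagmastore-usp0_0079   auto:ZFS    -     -      ZFS
tagmastore-usp0_0080   auto:none   -     -      online invalid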

Removing DMP support for native devices
The dmp_native_support tunable is persistent across reboots and package

    upgrades.

    You can remove an individual device from control by ZFS if you initialize it for

    VxVM, or if you set up TPD multi-pathing for that device.

    To remove support for native devices from all DMP devices, turn off the

    dmp_native_support tunable.

To turn off the dmp_native_support tunable:

    # vxdmpadm settune dmp_native_support=off

    To view the value of the dmp_native_support tunable:

    # vxdmpadm gettune dmp_native_support

    Tunable Current Value Default Value

    --------------------- ---------------- --------------

    dmp_native_support off off


    Administering DMP

    This chapter includes the following topics:

    About enabling and disabling I/O for controllers and storage processors

    About displaying Veritas Dynamic Multi-Pathing (DMP) information

    Displaying the paths to a disk

    Setting customized names for DMP nodes

    DMP coexistence with native multi-pathing

    Administering DMP using vxdmpadm

About enabling and disabling I/O for controllers and storage processors

DMP lets you turn off I/O through an HBA controller or the array port of a
storage processor so that you can perform administrative operations. This feature
can be used for maintenance of HBA controllers on the host, or array ports that
are attached to disk arrays supported by DMP. I/O operations to the HBA controller

    or the array port can be turned back on after the maintenance task is completed.

    You can accomplish these operations using the vxdmpadm command.

For Active/Active type disk arrays, when you disable the I/O through an HBA
controller or array port, the I/O continues on the remaining paths. For
Active/Passive type disk arrays, if disabling I/O through an HBA controller or
array port results in all primary paths being disabled, DMP fails over to
secondary paths, and I/O continues on them.

DMP does not support the operations to enable I/O or disable I/O for the controllers

    that use Third-Party Drivers (TPD) for multi-pathing.


After the administrative operation is over, use the vxdmpadm command to re-enable

    the paths through the HBA controllers.

    See Disabling I/O for paths, controllers or array ports on page 76.

    See Enabling I/O for paths, controllers or array ports on page 77.
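For example, a sketch of disabling and then re-enabling I/O through an HBA controller; the controller name c1 is illustrative, and the referenced sections describe the full set of options:

# vxdmpadm disable ctlr=c1
# vxdmpadm enable ctlr=c1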

    You can also perform certain reconfiguration operations dynamically online.

    See About online dynamic reconfiguration on page 115.

About displaying Veritas Dynamic Multi-Pathing (DMP) information

You can use the vxdmpadm command to list DMP database information and perform

    other administrative tasks. This command allows you to list all controllers that

    are connected to disks, and other related information that is stored in the DMP

    database. You can use this information to locate system hardware, and to help

    you decide which controllers need to be enabled or disabled.

The vxdmpadm command also provides useful information such as disk array serial

    numbers, which DMP devices (disks) are connected to the disk array, and which

    paths are connected to a particular controller, enclosure or array port.
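For example, to list all HBA controllers known to DMP along with their enclosures and states (the exact output columns may vary by release):

# vxdmpadm listctlr all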

    You can also use the vxdisk command to display the multi-pathing information

for a particular metadevice. The metadevice is a device representation of a physical
disk having multiple physical paths through the system's HBA controllers. In

    DMP, all the physical disks in the system are represented as metadevices with

    one or more physical paths.

    Displaying Veritas Dynamic Multi-Pathing (DMP) information includes the

    following topics:

    See Retrieving information about a DMP node on page 48.

    See Displaying consolidated information about the DMP nodes on page 49.

    See Displaying the members of a LUN group on page 51.

    See Displaying paths controlled by a DMP node, controller, enclosure, or array

    port on page 51.

    See Displaying information about controllers on page 54.

    See Displaying information about enclosures on page 55.

    See Displaying extended device attributes on page 57.

    See Displaying the paths to a disk on page 43.

    See Administering DMP using vxdmpadm on page 47.


Displaying the paths to a disk
See About displaying Veritas Dynamic Multi-Pathing (DMP) information on page 42.
The vxdisk command is used to display the multi-pathing information for a

    particular metadevice. The metadevice is a device representation of a physical

disk having multiple physical paths through the system's HBA controllers. In

    DMP, all the physical disks in the system are represented as metadevices with

    one or more physical paths.

    To display the multi-pathing information on a system

    Use the vxdisk path command to display the relationships between the

    device paths, disk access names, disk media names and disk groups on a

    system as shown here:

    # vxdisk path

    SUBPATH DANAME DMNAME GROUP STATE

    c1t0d0s2 c1t0d0s2 mydg01 mydg ENABLED

    c4t0d0s2 c1t0d0s2 mydg01 mydg ENABLED

    c1t1d0s2 c1t1d0s2 mydg02 mydg ENABLED

    c4t1d0s2 c1t1d0s2 mydg02 mydg ENABLED

    .

    .

    .

    This shows that two paths exist to each of the two disks, mydg01 and mydg02,

    and also indicates that each disk is in the ENABLED state.


    To view multi-pathing information for a particular metadevice

1 Use the following command to view multi-pathing information for the specified device:

    # vxdisk list devicename

    For example, the command output displays multi-pathing information for

    the device c2t0d0s2 as follows:

    # vxdisk list c2t0d0s2

    Device c2t0d0

    devicetag c2t0d0

    type sliced

    hostid system01

    .

    .

    .

    Multipathing information:

    numpaths: 2

    c2t0d0s2 state=enabled type=primary

    c1t0d0s2 state=disabled type=secondary

    In the sample output above, the numpaths line shows that there are 2 paths

    to the device. The next two lines in the "Multipathing information" section

show that one path is active (state=enabled) and that the other path has failed (state=disabled).

    The type field is shown for disks on Active/Passive type disk arrays such as

    the EMC CLARiiON, Hitachi HDS 9200 and 9500, Sun StorEdge 6xxx, and Sun

    StorEdge T3 array. This field indicates the primary and secondary paths to

    the disk.

    The type field is not displayed for disks on Active/Active type disk arrays

such as the EMC Symmetrix, Hitachi HDS 99xx and Sun StorEdge 99xx Series,

    and IBM ESS Series. Such arrays have no concept of primary and secondary

    paths.


2 Alternatively, you can use the following command to view multi-pathing information:

    # vxdmpadm getsubpaths dmpnodename=devicename

    For example, to view multi-pathing information for eva4k6k0_6, use the

    following command:

    # vxdmpadm getsubpaths dmpnodename=eva4k6k0_6

    NAME STATE[A] PATH-TYPE[M] CTLR-NAME ENCLR-TYPE ENCLR-NAME ATTRS

    ======================================================================================

    c0t50001FE1500A8F08d7s2 ENABLED(A) PRIMARY c0 EVA4K6K eva4k6k0 -

    c0t50001FE1500A8F09d7s2 ENABLED(A) PRIMARY c0 EVA4K6K eva4k6k0 -

    c0t50001FE1500A8F0Cd7s2 ENABLED SECONDARY c0 EVA4K6K eva4k6k0 -

    c0t50001FE1500A8F0Dd7s2 ENABLED SECONDARY c0 EVA4K6K eva4k6k0 -

Setting customized names for DMP nodes
The DMP node name is the metadevice name which represents the multiple paths

    to a disk. The DMP node name is generated from the device name according to

    the DMP naming scheme.

    See Disk device naming in DMP on page 20.

You can specify a customized name for a DMP node. User-specified names are persistent even if names persistence is turned off.

    You cannot assign a customized name that is already in use by a device. However,

    if you assign names that follow the same naming conventions as the names that

    the DDL generates, a name collision can potentially occur when a device is added.

If the user-defined name for a DMP device is the same as the DDL-generated name

    for another DMP device, the vxdisk list command output displays one of the

    devices as 'error'.

    To specify a custom name for a DMP node

    Use the following command:

    # vxdmpadm setattr dmpnode dmpnodename name=name
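For example, to give a DMP node a meaningful name (both names here are illustrative):

# vxdmpadm setattr dmpnode emc_clariion0_91 name=mydb_disk01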

    You can also assign names from an input file. This enables you to customize the

    DMP nodes on the system with meaningful names.


    To assign DMP nodes from a file

    1 Use the script vxgetdmpnames to get a sample file populated from the devices

    in your configuration. The sample file shows the format required and serves

    as a template to specify your customized names.

    2 To assign the names, use the following command:

    # vxddladm assign names file=pathname

    To clear custom names

    To clear the names, and use the default OSN or EBN names, use the following

    command:

    # vxddladm -c assign names

DMP coexistence with native multi-pathing
Dynamic Multi-Pathing (DMP) supports using multi-pathing with raw devices.

    The dmp_native_multipathing tunable controls the behavior. If the

dmp_native_multipathing tunable is set to on, DMP intercepts I/O requests and
operations such as open, close, and ioctl that are sent on the raw device path.

    If the dmp_native_multipathing tunable is set to off, these requests are sent

directly to the raw device. In A/PF arrays, the format command on the Solaris platform

    does not show the extra attributes (like vendor ID, product ID and geometry

    information) of the passive paths. To avoid this issue, enable the

    dmp_native_multipathing tunable. DMP intercepts the request and routes it on

    the primary path.

    For A/P arrays, turning on the dmp_native_multipathing feature enables the

    commands to succeed without trespassing. The feature has no benefit for A/A or

    A/A-A arrays.
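For example, following the settune and gettune syntax used for other DMP tunables in this guide, you can enable the feature and verify the setting as follows:

# vxdmpadm settune dmp_native_multipathing=on
# vxdmpadm gettune dmp_native_multipathing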

DMP native multi-pathing should not be enabled if one of the following tools is

    already managing multi-pathing:

    EMC PowerPath

    Sun StorEdge Traffic Manager (also called MPxIO)

If EMC PowerPath is installed first, the command to set dmp_native_multipathing

    to on fails. If VxVM is installed first, ensure that dmp_native_multipathing is

    set to off before installing EMC PowerPath.


Administering DMP using vxdmpadm
The vxdmpadm utility is a command line administrative interface to DMP.

    You can use the vxdmpadm utility to perform the following tasks:

    Retrieve the name of the DMP device corresponding to a particular path.

    See Retrieving information about a DMP node on page 48.

    Display consolidated information about the DMP nodes

    See Displaying consolidated information about the DMP nodes on page 49.

    Display the members of a LUN group.

    See Displaying the members of a LUN group on page 51.

    List all paths under a DMP device node, HBA controller, enclosure, or array

port.
See Displaying paths controlled by a DMP node, controller, enclosure, or array

    port on page 51.

    Display information about the HBA controllers on the host.

    See Displaying information about controllers on page 54.

    Display information about enclosures.

    See Displaying information about enclosures on page 55.

    Display information about array ports that are connected to the storage

    processors of enclosures.

See Displaying information about array ports on page 55.

    Display information about devices that are controlled by third-party

    multi-pathing drivers.

    See Displaying DMP path information for devices under third-party driver

    control on page 56.

Display extended device attributes.

    See Displaying extended device attributes on page 57.

Suppress or include devices from DMP control.
See Suppressing or including devices from VxVM control on page 60.

    Gather I/O statistics for a DMP node, enclosure, path or controller.