
Transcript of DRD Rehosting


Exploring DRD Rehosting in HP-UX 11i v2 and 11i v3

July 2010

Technical white paper

Table of Contents

Introduction
Basic rehosting
   Installing required software
   Creating the DRD clone
   Creating the system information file
      General layout of the system information file
      Using the SYSINFO_INTERACTIVE parameter
      Setting the SYSINFO_HOSTNAME parameter
      Setting other parameters for the entire system
      The SYSINFO_PROCESSED parameter
      Identifying a network interface
      Managing a network interface with DHCP
      Using static parameters to configure a network interface
      Additional static parameters for a network interface
      Sample system information files
   Using drd rehost to copy the system information file to the EFI partition
   Using drd status to check for EFI/HPUX/SYSINFO.TXT
   Using drd unrehost to review contents of EFI/HPUX/SYSINFO.TXT
   Booting the image with SYSINFO.TXT on the target system
   Limitations and recommendations for the initial release of drd rehost
Using drd rehost to provision a new BL870C Blade with HP-UX 11i v3
   Assumptions
   Steps for provisioning the new Blade
Using drd rehost to provision Integrity VMs
   Assumptions
   Special considerations for HP-UX 11i v3 prior to Update 3 (March 2008 or earlier)
   HP-UX 11i v2 - Special considerations
   Special considerations for avoiding reboots of Integrity VMs
   Steps for provisioning the new VM
Glossary
For more information
Call to action

Introduction

With the introduction of Dynamic Root Disk (DRD) revision B.1131.A.3.2.1949 for HP-UX 11i v3 (11.31) and the subsequent revision B.1123.A.3.3.221 for HP-UX 11i v2 (11.23), system administrators can boot a DRD clone on a system other than the one where it was created. This capability is referred to as rehosting and can be used on systems with an LVM-managed, Itanium®-based copy of HP-UX 11i v2 or 11i v3.

Rehosting enables a number of new uses for DRD clones. In this paper, we focus on the use of DRD clones to provision new systems, specifically Itanium blades running HP-UX 11i v3 and Integrity Virtual Machines running HP-UX 11i v2 or 11i v3. The basis for both provisioning scenarios is DRD rehosting. Before addressing each of them, we give an overview of the basic steps used in rehosting a system image. For details of commonly used terms in this white paper, see the Glossary.

Note: The path to DRD commands is: /opt/drd/bin
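If /opt/drd/bin is not already in the root search path, the DRD commands can be invoked by full path, or the directory can be appended to PATH for the current session (a minimal sketch, using sh/ksh syntax):

# PATH=$PATH:/opt/drd/bin
# export PATH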

Basic rehosting

The steps common to all rehosting scenarios are:

1. Creation of a system image using the drd clone command
2. Specification of system information (such as hostname, IP addresses, language, and timezone) for the target system where the clone image will be booted
3. Processing of the system information by the auto_parms(1M) utilities during boot of the image on the target system

To perform these steps, minimum revisions of Dynamic Root Disk and auto_parms(1M), delivered in the SystemAdmin.FIRST-BOOT fileset, are required.

Installing required software

To use the DRD rehosting functionality, three software components are required:

1. HP recommends that customers install the latest web release or latest media release of DRD (HP software product DynRootDisk). The most recent version of DRD is available (along with any dependencies) from the DRD downloads webpage, where download instructions are provided. Support for the drd rehost command was introduced for HP-UX 11i v3 (11.31) in version B.1131.A.3.2.1949 of DRD and for HP-UX 11i v2 in version B.1123.A.3.3.221 of DRD. Version B.1131.A.3.2.1949 is available on media with all the Operating Environments (OEs) released in September 2008 or later. Version B.1123.A.3.3.221 is available on the Application Releases media for March 2009 or later.

2. Support for auto_parms processing of system information was introduced for HP-UX 11i v3 (11.31) in the PHCO_36525 patch and for HP-UX 11i v2 (11.23) in the PHCO_38232 patch. The appropriate patch that enhances auto_parms is automatically downloaded and installed when DRD is downloaded and installed.

3. The format of the system information file is described in sysinfo(4), which is delivered for HP-UX 11i v3 (11.31) in the PHCO_39064 patch (or a superseding patch) to SystemAdmin.FIRST-BOOT, and for HP-UX 11i v2 (11.23) in the PHCO_39215 patch (or a superseding patch). For instructions on downloading these patches, see the DRD downloads webpage.


Creating the DRD clone

Perform the following steps to create the DRD clone:

1. Choose an LVM-managed, Itanium-based system that will be used to create the clone. This is the source system, and it must have the version of HP-UX that you want to use for the target system.

2. Identify a disk on the source system that can be moved to the target system, where the image will be booted. This disk is the target disk. Typically, the disk is a SAN LUN that can be unpresented from the source system (being cloned) and presented to the target system (to be booted) by changing the port or World Wide Port Name (WWPN) zoning. Ensure that the disk is available (that is, not currently in use by the source system) and large enough to hold a copy of the root volume group of the source system. For further guidance on choosing a target disk, see drd-clone(1M), the Dynamic Root Disk System Administrator's Guide (~1 MB PDF), or Dynamic Root Disk: Quick Start & Best Practices (~300 KB PDF).

3. Issue the drd clone command. If the target disk was previously used for an LVM volume group, a VxVM disk group, or as a boot disk, you need to set the overwrite option to true. In the following example, the target disk is /dev/disk/disk10.

# drd clone -v -x overwrite=true -t /dev/disk/disk10

Caution: The -x overwrite=true option should be used only if you are sure that the disk is not currently in use.

4. (Optional) Install additional software to the clone using "drd runcmd swinstall ..." or modify kernel tunables on the clone using "drd runcmd kctune ...".

Creating the system information file

The drd clone command above produced a disk image that can be booted on a new system. However, some system configuration information, such as hostname and IP address, must be changed so the image can be used on the target system. The system information file contains this data, which is needed to define the target system. The descriptions of variables specified in the file are provided in sysinfo(4).

After the appropriate DynRootDisk software and supporting patches have been installed, a sample system information file is available at /etc/opt/drd/default_sysinfo_file. As delivered, the file contains a single variable assignment that triggers interactive use of auto_parms(1M) when a rehosted disk is booted. The file can be modified by a system administrator as desired. In addition, the file may be renamed or copied to another location and modified, because the file path is supplied as a parameter to the drd rehost command. For security reasons, HP recommends that you restrict write access to system information files. The original, delivered version of the system information file is available at /usr/newconfig/etc/opt/drd/default_sysinfo_file.
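For example, the following commands copy the delivered template to a working location and restrict access to it, as recommended above (a sketch; the destination path is illustrative):

# cp /etc/opt/drd/default_sysinfo_file /var/opt/drd/tmp/my_sysinfo
# chmod 600 /var/opt/drd/tmp/my_sysinfo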


General layout of the system information file

Parameters may be entered in any order in the file. If a parameter is listed multiple times, the last occurrence is used. Each parameter must have the form:

<NAME>=<VALUE>

The parameter names, listed below, all begin with the text string "SYSINFO_". Some parameters are set once for the entire system. Array parameters (those ending in "[n]" for a non-negative integer n) are set for their corresponding network interfaces.

In general, the parameters listed below that are set for the entire system are not required; they default to the values already present on the system. The exception is SYSINFO_HOSTNAME, which must be specified whenever SYSINFO_INTERACTIVE is not set to ALWAYS. (See below for acceptable values of SYSINFO_HOSTNAME.) In contrast, the network interface parameters must be specified for each network interface that will be used on the target system. Pre-existing network interface information on the system is removed before the system information file is processed by auto_parms(1M). For example, the contents of /etc/rc.config.d/netconf and /etc/hosts are reverted to the content as originally delivered (that is, to the "newconfig" template for each of them), without any information specific to the original source system.

Using the SYSINFO_INTERACTIVE parameter

The single setting SYSINFO_INTERACTIVE=ALWAYS can be used to trigger the interactive FIRST-BOOT interface displayed by auto_parms(1M) when the image is booted. This is the only value defined in the delivered sample system information file. If the interactive interface is not desired, this variable must be removed from the file or set to ON_ERROR, its default value.

Setting the SYSINFO_HOSTNAME parameter

The hostname must be set in the system information file if SYSINFO_INTERACTIVE is not set to ALWAYS. The syntax for the hostname is:

SYSINFO_HOSTNAME=<hostname>

The value of the hostname must be a valid domain name. It must not end in a period. The initial period-delimited segment (or the entire name, if no periods are used) must conform to all of the following rules:

• It must consist of the characters a-z, A-Z, 0-9, "-", and "_"
• It must not begin with a digit
• It must not end with "-" or "_"
• It must not be longer than the HOST_NAME_MAX value returned by getconf(1)
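For example, the limit can be checked directly with getconf(1); the value returned depends on the system's configuration:

# getconf HOST_NAME_MAX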


Setting other parameters for the entire system

The remaining parameters for the entire system are optional. If a parameter is omitted, the value already on the system is used. The following is a list of all the system-wide parameters:

SYSINFO_DNS_DOMAIN=<DNS domain name>
  Default - The DNS domain setting on the system is not changed.
  Status - Optional
  Value - The name of the Domain Name Service domain

SYSINFO_DNS_SERVER=<DNS server IP address>
  Default - The DNS server setting on the system is not changed.
  Status - Optional
  Value - The Domain Name Service server, specified as an IP address in decimal-dot notation (for example, 192.1.2.3)

SYSINFO_LANGUAGE=<language>
  Default - The language on the system is not changed.
  Status - Optional
  Value - A value displayed by locale -a. The language must be installed in the operating system.

SYSINFO_NIS_SERVER=<NIS server IP address>
  Default - The NIS server on the system is not changed.
  Status - Optional
  Value - The Network Information Service server, specified as an IP address in decimal-dot notation (for example, 192.1.2.3)

SYSINFO_TIMEZONE=<timezone>
  Default - The timezone setting on the system is not changed.
  Status - Optional
  Value - A value listed in /usr/lib/tztab

Note: Due to a defect in the initial implementation of the drd rehost command, a timezone containing the colon character (":") cannot be specified.

The SYSINFO_PROCESSED parameter

The SYSINFO_PROCESSED parameter must not be included in the system information file. It is added by auto_parms(1M) to prevent processing of the file in subsequent reboots.

If you have acquired the system information file with the drd unrehost command (described later) on a system that has already been rehosted, you must remove this parameter before re-using the file.

Identifying a network interface

You must identify each network interface that you will use on the target system by MAC (media access control) address or by the hardware path of the interface. If both SYSINFO_MAC_ADDRESS and SYSINFO_LAN_HW_PATH are specified, SYSINFO_MAC_ADDRESS takes precedence and SYSINFO_LAN_HW_PATH is ignored. The number "n" used as the array index has a minimum value of 0 and a maximum value of 1023. All the parameters for a given interface must use the same index.

SYSINFO_MAC_ADDRESS[n]
  Default - None
  Status - Required for an interface if SYSINFO_LAN_HW_PATH is not specified for the interface.
  Value - The prefix "0x" followed by the value of the MAC address expressed as 12 hex digits (for example, 0x0017A451E718). In the initial rehosting support, the alphabetic characters must be in uppercase.

or

SYSINFO_LAN_HW_PATH[n]
  Default - None
  Status - Required for any interface for which SYSINFO_MAC_ADDRESS is not supplied.
  Value - A hardware path, specified by non-negative integers separated by forward slashes ("/")

Managing a network interface with DHCP

If the interface is to be managed by DHCP, the only other parameter required for the interface is SYSINFO_DHCP_ENABLE[n]=1:

SYSINFO_DHCP_ENABLE[n]
  Default - 0
  Status - For a given network interface, either SYSINFO_DHCP_ENABLE must be set to 1 or static network parameters must be specified.
  Value - 0 - DHCP client functionality is not enabled for this interface.
          1 - DHCP client functionality is enabled for this interface.

Using static parameters to configure a network interface

If the network interface will not be managed by DHCP, then SYSINFO_IP_ADDRESS[n] and SYSINFO_SUBNET_MASK[n] must be specified for the interface:

SYSINFO_IP_ADDRESS[n]=<decimal-dot IP address>
  Default - None
  Status - An IP address must be specified for any network interface for which SYSINFO_DHCP_ENABLE is not set to 1.
  Value - An IP address in decimal-dot notation (for example, 192.1.2.3)

SYSINFO_SUBNET_MASK[n]=<hexadecimal or decimal-dot mask>
  Default - None
  Status - SYSINFO_SUBNET_MASK must be set for any interface for which SYSINFO_DHCP_ENABLE is not set to 1.
  Value - Subnet mask in hexadecimal or decimal-dot notation (for example, 255.255.255.0)

Additional static parameters for a network interface

In addition to the required SYSINFO_IP_ADDRESS[n] and SYSINFO_SUBNET_MASK[n], the following parameters may be specified for a statically configured network interface:

SYSINFO_ROUTE_COUNT[n]=<0 or 1>
  Default - None
  Status - Optional for a network interface.
  Value - 0 - SYSINFO_ROUTE_GATEWAY is a local or loopback interface.
          1 - SYSINFO_ROUTE_GATEWAY is a remote interface.

SYSINFO_ROUTE_DESTINATION[n]
  Default - None
  Status - Optional for a network interface.
  Value - The value must be set to "default" for any interface for which DHCP is not enabled.

SYSINFO_ROUTE_GATEWAY[n]
  Default - None
  Status - Optional for a network interface.
  Value - Gateway hostname or IP address in decimal-dot notation (for example, 192.1.2.3). If the loopback interface, 127.0.0.1, is specified, SYSINFO_ROUTE_COUNT must be set to 0 for this interface.

Sample system information files

A system information (sysinfo) file that causes the FIRST-BOOT interactive interface to be displayed consists of the following single line:

SYSINFO_INTERACTIVE=ALWAYS

A sysinfo file that specifies the hostname and causes a DHCP server to be contacted for configuring a system with a single network interface with MAC address 0x0017A451E718 consists of the following lines:

SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=1

A sysinfo file that does NOT use DHCP for configuring a system with a single network interface with MAC address 0x0017A451E718, with additional network parameters specified, consists of the following lines:

SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=0
SYSINFO_IP_ADDRESS[0]=192.2.3.4
SYSINFO_SUBNET_MASK[0]=255.255.255.0
SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
SYSINFO_ROUTE_DESTINATION[0]=default
SYSINFO_ROUTE_COUNT[0]=1

A sysinfo file that specifies the hostname, causes a DHCP server to be contacted for configuring a system with a single network interface (with MAC address 0x0017A451E718), and sets the timezone to MST7MDT consists of the following lines:

SYSINFO_HOSTNAME=myhost
SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
SYSINFO_DHCP_ENABLE[0]=1
SYSINFO_TIMEZONE=MST7MDT
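As a further illustration (a sketch; the hardware path 0/0/3/0 is hypothetical), a sysinfo file that identifies the interface by hardware path rather than MAC address, and uses DHCP, consists of the following lines:

SYSINFO_HOSTNAME=myhost
SYSINFO_LAN_HW_PATH[0]=0/0/3/0
SYSINFO_DHCP_ENABLE[0]=1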

Using drd rehost to copy the system information file to the EFI partition

After the clone and system information file have been created, the drd rehost command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/info_for_newhost system information file:

# drd rehost -f /var/opt/drd/tmp/info_for_newhost

Note that in this example, the default target, which is the inactive image, is used. If you want to check only the syntax of the system information file, without copying it to the /EFI/HPUX/SYSINFO.TXT file, use the preview option of the drd rehost command:

# drd rehost -p -f /var/opt/drd/tmp/info_for_newhost

Using drd status to check for EFI/HPUX/SYSINFO.TXT

If you are not sure whether you have written a system information file to the Extensible Firmware Interface (EFI) partition of the inactive system image, you can use the drd status command to check for its existence. Here is sample output of the drd status command:

# drd status

======= 07/28/09 15:08:55 MDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=srcsys)
* Clone Disk:             /dev/disk/disk10
* Clone EFI Partition:    AUTO file present, Boot loader present
* Clone Rehost Status:    SYSINFO.TXT present
* Clone Creation Date:    07/01/09 15:54:35 MDT
* Clone Mirror Disk:      None
* Mirror EFI Partition:   None
* Mirror Rehost Status:   SYSINFO.TXT not present
* Original Disk:          /dev/disk/disk9
* Original EFI Partition: AUTO file present, Boot loader present
* Original Rehost Status: SYSINFO.TXT not present
* Booted Disk:            Original Disk (/dev/disk/disk9)
* Activated Disk:         Original Disk (/dev/disk/disk9)
======= 07/28/09 15:09:08 MDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=srcsys)

Using drd unrehost to review contents of EFI/HPUX/SYSINFO.TXT

If you want to review the contents of the system information file in the EFI partition, you can use the drd unrehost command to copy it to a file system location you specify and remove it from the EFI partition. When you are satisfied with the contents, execute the drd rehost command again to copy it back to the EFI partition.

The following command copies /EFI/HPUX/SYSINFO.TXT to /var/opt/drd/tmp/file_to_review and removes it from the EFI partition:

# drd unrehost -f /var/opt/drd/tmp/file_to_review

When you are satisfied with the file, rerun the drd rehost command to return the file to the EFI partition of the inactive image:

# drd rehost -f /var/opt/drd/tmp/file_to_review

Booting the image with SYSINFO.TXT on the target system

Use your SAN software to unpresent the SAN LUN from the source system and present it to the target system. The techniques for doing this are specific to the SAN software you are using. If the target system is already booted on another disk or SAN LUN, you can use setboot to set the primary boot path to the new LUN. If not, you can interrupt the boot process to choose the new LUN, identifying it by the presence of SYSINFO.TXT in the /EFI/HPUX directory. Alternatively, you can use one of the techniques mentioned below for Integrity VMs or Blades. During the boot, the auto_parms(1M) enhancements provided by PHCO_36525 or PHCO_38232 (or superseding patches) process /EFI/HPUX/SYSINFO.TXT.
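For example, if the target system is already running HP-UX 11i v3 and the presented LUN appears there as /dev/disk/disk12 (an illustrative device file), the primary boot path could be set and the system rebooted as follows (a minimal sketch):

# setboot -p /dev/disk/disk12
# shutdown -r -y 0

On HP-UX 11i v2, setboot requires the hardware path rather than the device special file, as shown in the VM scenario later in this paper.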

Limitations and recommendations for the initial release of drd rehost

The initial release of drd rehost has been tested on:

• Integrity Virtual Machines running HP-UX 11i v2 and 11i v3
• Integrity Blades running HP-UX 11i v3

In addition, preliminary testing shows that simple (single root volume group) standalone LVM-managed, Itanium-based systems running a September 2008 or later Operating Environment can be rehosted to another system with exactly the same hardware. The benefit of the September 2008 (or later) Operating Environment is the availability of "self healing of boot disk configuration", provided by LVM and described in the September 2008 release notes. For more information on this LVM feature, see the section on Boot Resiliency in HP-UX 11i v3 Version 3 Release Notes: HP 9000 and HP Integrity Servers and the Summary of Changes for Logical Volume Manager in HP-UX 11i Version 3 September 2008 Release Notes: Operating Environments Update Release (both documents are in the Getting Started documents).

Using drd rehost to provision a new BL870C Blade with HP-UX 11i v3

In this scenario, a new blade has been ordered. To deploy the blade as quickly as possible, the system administrator wants to perform as many setup steps as possible before receiving the blade.

Note: If the blade is configured to use Virtual Connect, a virtual WWPN can be allocated before the hardware arrives, and a SAN LUN can be presented to that port name. Aside from this efficiency, rehosting a DRD clone from one blade to another does not depend on Virtual Connect.

Assumptions

1. To simplify the discussion, the new blade will be installed in a pre-existing enclosure.

2. A pre-existing blade with an LVM root volume group is installed with the desired software. This blade will be cloned to provide a boot disk for the new blade, so it is known as the source. To make use of the LVM "Boot Disk Configuration Self-Healing" feature introduced in HP-UX 11i v3 Update 3 (September 2008), the Operating Environment installed on the source is HP-UX 11i v3 Update 3 or later.

3. A SAN LUN is used for the disk image that will be rehosted, and the SAN management software is used to make the LUN available to the pre-existing and new blades.

4. The required version of DRD and the required patches to FIRST-BOOT, as described in Installing required software, are installed on the source system.


Steps for provisioning the new Blade

1. If you are using:

• Pass-Through modules rather than Virtual Connect, you need the physical MAC address and WWPN (World Wide Port Name) from the system, so the actual hardware must already be present. Refer to the HP Integrity BL860C Server Blade HP Service Guide for information on determining the MAC address and WWPN.

• Virtual Connect to manage the blade, you need not wait for the hardware to be physically present. Create a Virtual Connect Profile for the new blade. The steps here are shown in Virtual Connect Enterprise Manager, which simplifies management of multiple blade enclosures.

From the initial Virtual Connect Enterprise Manager screen, select Define a Profile from Profile Management.


The Create Profile dialog then appears at the bottom of the screen.

Enter a name for the profile, the VC Domain Group, Network Names, FC SAN Name, SAN Boot Option, and VC Domain. These parameters have probably already been defined for other blades in the enclosure. More information about all the parameters can be found in the HP Virtual Connect Enterprise Manager User Guide. Next, assign the profile to the bay where you intend to locate the new blade, and click the OK button. To see the WWPN and MAC address that were assigned to the new profile, check the box next to the new profile and click the Edit button.


The MAC addresses and WWPNs are displayed. You will need these items in subsequent steps.

2. Communicate the new MAC address to the network administrator. If the network to which the new blade is connected is not managed by DHCP, obtain the IP address corresponding to the MAC address from the network administrator.

3. Use the source system to determine the size of the SAN LUN needed as a boot disk for the new blade. Request that the storage administrator:

a) Create a host entry for the new blade, using the WWPN determined in Step 1.
b) Assign a new LUN big enough for the boot disk, and present it with read/write access to both the source system and the new blade.

4. When the new SAN LUN is available, identify its device special file (DSF) on the source system. One way to do this is to obtain the WWID (World Wide ID) of the LUN from the storage administrator, then use the scsimgr command to identify the device file:

# scsimgr get_attr all_lun -a wwid | more
<snip>
SCSI ATTRIBUTES FOR LUN : /dev/rdisk/disk19

name = wwid
current = 0x600508b40001123c0000d00001f90000
default =
saved =
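If many LUNs are presented, the output can be narrowed to pairs of device files and WWIDs (a sketch, reusing the WWID from the example above):

# scsimgr get_attr all_lun -a wwid | grep -e "LUN :" -e "current"

Matching the WWID reported by the storage administrator against the "current" values identifies the corresponding /dev/rdisk entry.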

5. Issue the drd clone command on the source system with the new LUN as the target disk:

Caution: Only use the -x overwrite=true option if you are sure that the target disk is not currently in use and can be overwritten.

# drd clone -v -x overwrite=true -t /dev/disk/disk19

======= 09/19/08 16:54:51 MDT BEGIN Clone System Image (user=root) (jobid=drdbl2)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* The disk "/dev/disk/disk19" contains data which will be overwritten.
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
* System image: "sysimage_001" on disk "/dev/disk/disk19"
======= 09/19/08 17:08:39 MDT END Clone System Image succeeded. (user=root) (jobid=drdbl2)

6. Create a sysinfo file with information needed to boot the clone disk on the new blade.

It is convenient to start with a copy of the template /etc/opt/drd/default_sysinfo_file delivered with DRD. This file contains comments indicating the syntax of each variable.

a. # cp /etc/opt/drd/default_sysinfo_file \
      /var/opt/drd/tmp/drdblade.sysinfo

b. # vi /var/opt/drd/tmp/drdblade.sysinfo

c. If you want to supply all information to interactive screens displayed when the new blade is booted, no changes to the file are needed.

If you would prefer that the boot proceed without need for human interaction, comment out the line SYSINFO_INTERACTIVE=ALWAYS and add the needed information to the file. In this case, you must specify at least the hostname and the network information for the interfaces defined in the profile. For a given network interface specified as index "n", you must specify either SYSINFO_MAC_ADDRESS[n] or SYSINFO_LAN_HW_PATH[n] (with SYSINFO_MAC_ADDRESS[n] taking precedence), AND either SYSINFO_DHCP_ENABLE[n]=1 or SYSINFO_IP_ADDRESS[n] and SYSINFO_SUBNET_MASK[n]. In the latter case, it is usually helpful to also specify SYSINFO_ROUTE_GATEWAY[n]. For the first interface you want to configure, specify:


SYSINFO_MAC_ADDRESS[0]=<MAC from Virtual Connect profile in hex format>

and either

SYSINFO_DHCP_ENABLE[0]=1

or

SYSINFO_IP_ADDRESS[0]=<IP address from network administrator>
SYSINFO_SUBNET_MASK[0]=<subnet mask from network administrator>

In addition, you are likely to need:

SYSINFO_ROUTE_GATEWAY[0]=<gateway address>

Check sysinfo(4) and the default_sysinfo_file template for complete information on all the variables that are supported.

Note: The initial support for the sysinfo file requires that the letters A-F in the sysinfo file's MAC address be entered in uppercase. This restriction is removed on HP-UX 11i v3 by PHCO_38608, which supersedes PHCO_36525.

d. Here is a sample listing of the non-comment lines in a sysinfo file:

# grep -v "^#" /var/opt/drd/tmp/drdblade.sysinfo
SYSINFO_HOSTNAME=drdbl1
SYSINFO_MAC_ADDRESS[0]=0x0017A477000C
SYSINFO_IP_ADDRESS[0]=15.1.50.70
SYSINFO_SUBNET_MASK[0]=255.255.248.0
SYSINFO_ROUTE_GATEWAY[0]=15.1.48.1
SYSINFO_ROUTE_COUNT[0]=1

7. Issue the drd rehost command, using the sysinfo file just created. The inactive system image (the clone that was just created) is the default target. This command copies the sysinfo file to the EFI partition of the clone.

# drd rehost -v -f /var/opt/drd/tmp/drdblade.sysinfo

======= 09/19/08 17:22:41 MDT BEGIN Rehost System Image (user=root) (jobid=drdbl2)
* Checking System Restrictions
* Checking for Valid Inactive System Image
* Locating Inactive System Image
* Choosing image to rehost.
* Validating Target to Be Rehosted
* Validating New System Personality
* The messages from validation of the system info file "/var/opt/drd/tmp/drdblade.sysinfo" are given below.
* The value "drdbl1" for "SYSINFO_HOSTNAME" passes the syntax check.
* The value "0x0017A477000C" for "SYSINFO_MAC_ADDRESS[0]" passes the syntax check.
* The value 15.1.50.70 for SYSINFO_IP_ADDRESS[0] passes the syntax check.
* The value 1 for SYSINFO_ROUTE_COUNT[0] passes the syntax check.
* The value 255.255.248.0 for SYSINFO_SUBNET_MASK[0] passes the syntax check.
* End of messages from validation of the system info file "/var/opt/drd/tmp/drdblade.sysinfo".
* The file "/var/opt/drd/tmp/drdblade.sysinfo" passes the syntax check.
* Copying New System Personality
* The sysinfo file "/var/opt/drd/tmp/drdblade.sysinfo" has been successfully copied to the target "/dev/disk/disk19".
======= 09/19/08 17:22:52 MDT END Rehost System Image succeeded. (user=root) (jobid=drdbl2)

8. When the new blade arrives, it can be booted from the rehosted disk. If DHCP is used to manage the network, the Integrated Lights-Out Management Processor IP address is obtained from DHCP and displayed on the Front Display Panel of the new blade. Otherwise, a serial console can be connected. See Accessing the Integrated Lights-Out Management Processor for further information. From the EFI Boot Manager Menu, select the EFI shell. You will see a screen similar to the following.

The "fs<n>" entries indicate EFI file systems. The goal is to boot from the SAN file system containing the /EFI/HPUX/SYSINFO.TXT file. In the display, the first "fs<n>" entry representing a Fibre LUN is "fs3". We see the SYSINFO.TXT file in the /EFI/HPUX directory of "fs3".


Entering hpux.efi starts the HP-UX bootloader on the rehosted disk.

Note: By default, the EFI boot utilities do not scan for SAN LUNs. To extend the scan to the SAN, use the following steps:

a) From the EFI Boot Manager Menu, exit to the EFI shell.
b) To determine the driver number (in the DRV column) for the FC driver, issue the command: drivers -b
c) To determine the controller number, issue the command: drvcfg <driver_number>
d) To start the Fibre Channel Driver Configuration Utility, issue the command: drvcfg -s <driver_number> <controller_number>
e) Select Option 4: Edit Boot Settings.
f) Select Option 6: EFI Variable EFIFCScanLevel.
g) Enter y to create the variable.
h) Enter 1 to set the value of EFIFCScanLevel.
i) Enter 0 to go to the previous menu.
j) Enter 12 to quit.
k) To rescan the devices, issue the command: reconnect -r
l) To remap the devices, issue the command: map -r

The last map -r command shows "fs<n>" entries for SAN LUNs. Proceed as above to identify the disk containing /EFI/HPUX/SYSINFO.TXT.

The type command, which is available in the EFI shell, can also be used to display contents of the SYSINFO.TXT file.


More information on EFI commands can be found in Appendix D of HP Integrity BL860C Server Blade HP Service Guide.

9. You can use the EFI shell command bcfg, described in Chapter 4 of the HP Integrity BL860C Server Blade HP Service Guide, to set the primary boot path, but you may find it easier to use the setboot command after HP-UX is booted to set the booted disk as the primary boot path. The device file of the boot disk can be determined from vgdisplay of the root group.
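A minimal sketch, assuming the root group is vg00 and the boot disk turns out to be the clone LUN /dev/disk/disk19 from the earlier steps:

# vgdisplay -v vg00 | grep "PV Name"
# setboot -p /dev/disk/disk19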

10. After the new blade is booted on the clone LUN, the /var/opt/drd/registry/registry.xml file must be removed. (The requirement that this file be removed will be eliminated in the future.)
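A sketch of the removal:

# rm /var/opt/drd/registry/registry.xml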

11. After booting up the new blade, you might want to remove the /EFI/HPUX/SYSINFO.TXT file from the EFI partition. To do so, enter the command:

# drd unrehost -t <dsf_of_root_disk>

Note: A special variable has been set in /EFI/HPUX/SYSINFO.TXT to prevent processing of the file on subsequent events. Removal of the file clarifies to system administrators that the disk is no longer subject to rehosting.

12. If the release of HP-UX on the source system was earlier than September 2008, error messages might be issued when vgdisplay or lvlnboot is run. In this case, run the commands:

# vgscan -k -f /dev/vg00
# lvlnboot -R /dev/vg00

See the section on Boot Resiliency in LVM New Features in HP-UX 11i v3 (in the White Papers documents) for more details.

13. Only the root disk is presented to the new blade. However, if other disks were in use for other volume groups on the source system, they will still appear in /etc/lvmtab and /etc/fstab and might need to be removed. For example, the command vgdisplay -v might report errors such as the following:

vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vg01".
vgdisplay: Volume group not activated.
vgdisplay: Cannot display volume group "/dev/vgswap".

The following commands can be used to remove the non-root volume groups that were in use on the source system:

# mv -f /etc/lvmtab /etc/lvmtab.save3
# mv -f /etc/lvmtab_p /etc/lvmtab_p.save3
# vgscan -a

You might also want to import other volume groups from disks that have been presented to the new system.


The /etc/fstab file can be edited to remove entries not available on the new blade.

14. Check the contents of /stand/bootconf to ensure that the current device for the boot disk is recorded. The format of a line representing an LVM-managed disk is an "l" (ell) in column one, followed by a space, followed by the block device file of the HP-UX (second) partition of the boot disk. The boot disk can be determined from vgdisplay of the root group (usually vg00).
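For example, if the root group resides on the clone LUN /dev/disk/disk19 from the earlier steps, /stand/bootconf should contain a line such as the following (a sketch):

# cat /stand/bootconf
l /dev/disk/disk19_p2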

15. You might need to contact your network administrator to arrange for additional configuration of the new blade on your DNS, NIS, or DHCP servers.

16. If additional applications use configuration or licensing information specific to a particular host (such as hostname or IP address), they might need to be updated.

Using drd rehost to provision Integrity VMs

In this scenario, a new virtual machine (VM) is needed.

Assumptions

• There is an existing VM on the VM host where the new VM will be created. The existing VM has an LVM root volume group and the desired software and patches. The root group of the pre-existing VM will be cloned to provide a boot disk for the new VM, so the pre-existing VM will be known as the source.

• On the VM host, a virtual switch is already defined, sufficient memory is available to boot a new VM, and sufficient disk space is available to provision the boot disk for the new VM. The disk space may be a raw disk or an LVM logical volume.

• The required version of DRD, as described in Installing required software, is installed on the source VM.

Ideal configuration: Source VM is running HP-UX 11i v3 Update 3 (September 2008) or later.

Enhancements to LVM in HP-UX 11i v3 allow a system to boot from a disk or SAN LUN whose device special file does not match the device special file listed first in /etc/lvmtab on the disk. Further enhancements were made to this feature in the September 2008 release of LVM to remove the need to run any LVM "cleanup" commands after the boot completes. More information on this feature is available in the section on Boot Resiliency in HP-UX 11i v3 Version 3 Release Notes: HP 9000 and HP Integrity Servers (in the Getting Started documents) and in the Summary of Changes for Logical Volume Manager in LVM New Features in HP-UX 11i v3 (in the White Papers documents).

Special considerations for HP-UX 11i v3 prior to Update 3 (March 2008 or earlier)

Additional cleanup commands to be run after the target VM boots are noted below.


HP-UX 11i v2 - Special considerations

The Boot Resiliency feature for LVM is not available on HP-UX 11i v2 (11.23). Two approaches can be used to address the lack of Boot Resiliency in HP-UX 11i v2:

(a) Simple Generating VM: Use a source VM with a very simple I/O configuration when provisioning the target VM. This simple source VM may be one that is not actually used to run applications; rather, it is used as a generator of new VMs.

The simple source VM for HP-UX 11i v2 should be configured with an avio_stor disk that is the boot disk for the source, an avio_stor disk that is the clone target, and no additional disks. The two device files should be specified (using the hpvmcreate or hpvmmodify command) with the (bus, device, target) triples (0,1,0) for the root disk and (0,1,1) for the clone target. Details for moving the clone target to the target VM are provided below.

(b) Single-user-mode Repair: In this case, no restrictions are placed on the source VM. The target VM is initially booted into single-user mode to adjust the LVM metadata for the boot disk. Details on this approach are provided below.

Special considerations for avoiding reboots of Integrity VMs

In general, storage can be added to and deleted from Integrity VMs without a reboot. However, addition of a virtual storage controller does require that the VM guest be restarted. Virtual storage controllers are created automatically when a storage device needing that controller is added to the guest. Thus, reboots can be avoided by creating at least one disk of each storage type that will be needed before the VM is deployed in production. For example, if both SCSI disks and avio_stor disks will be used, create one of each before the VM is deployed in production. Note also that explicitly specifying the (bus, device, target) triple in the addition of a storage device results in creation of a new controller if one with that (bus, device) pair does not already exist, as shown in the sketch below. See hpvmresources(5) for further information on resource specification in the hpvmmodify(1M) command. In addition, if all devices using a given controller have been deleted, the controller itself is deleted upon the next restart of the virtual machine.
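A sketch of this behavior (the device file and the (0,2,0) triple are illustrative): the following command adds a disk with an explicit triple whose (bus, device) pair does not yet exist on the guest, which causes a new virtual storage controller to be created:

# hpvmmodify -P drdivm2 \
    -a disk:avio_stor:0,2,0:disk:/dev/rdisk/disk20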

Steps for provisioning the new VM

In the discussion below, the source VM is drdivm1 and the new VM being provisioned is drdivm2. The initial steps are performed on the VM host:

1. On the VM host, create a new VM with a network interface only. This provides the MAC address assigned by the VM host to the virtual Network Interface Card. The MAC address will be needed later in setting up the boot disk for the new VM.

The following command creates the VM "drdivm2", with one CPU, 2 GB of memory, and a virtual network interface to the switch "myvswtch":

# hpvmcreate -P drdivm2 -c 1 -r 2G \
    -a network:avio_lan::vswitch:myvswtch


The hpvmstatus command can be used to verify that the new VM was successfully created and to determine the virtual MAC address of the new VM:

# hpvmstatus -d -P drdivm2
[Virtual Machine Devices]
[Storage Interface Details]
[Network Interface Details]
network:avio_lan:0,0,0xB22171007484:vswitch:myvswtch
[Misc Interface Details]
serial:com1::tty:console

2. On the VM host, add a disk to the source VM, "drdivm1", that is large enough to contain all the logical volumes in the root volume group. The backing store can be a raw disk or an LVM or VxVM volume, and must be available; that is, not in use on the host or on any other VM.

# hpvmmodify -P drdivm1 -a disk:avio_stor::disk:/dev/rdisk/disk18

Special consideration for HP-UX 11i v2: If the Simple Generating VM approach is used, the clone target must be defined on the simple source VM with the hpvmresources triple. If it is added in this step, use the following hpvmmodify command (assuming the device file on the VM host is /dev/dsk/c5t0d0):

# hpvmmodify -P drdivm1 \
    -a disk:avio_stor:0,1,1:disk:/dev/dsk/c5t0d0

If the target disk has already been added to the system, the triple in use can be checked by issuing:

# hpvmstatus -P drdivm1 -d

The next actions take place on the source VM:

3. On the source VM, run the drd clone command to create a boot disk for the new VM. Before running drd clone, you need to determine the device file of the newly added disk.

On HP-UX 11i v3, run ioscan -fNC disk on the source VM. (Do not use the -k option, because the VM needs to discover the newly added disk.) The instance number displayed is the number to be appended to "disk" in the device file. For example, if the disk has instance number 3, the device file is /dev/disk/disk3.

On HP-UX 11i v2, run ioscan -fnC disk on the source VM. (Do not use the -k option, because the VM needs to discover the newly added disk.) The newly displayed device file can then be used for the clone target.

The following command on HP-UX 11i v3 creates a clone of the source VM to the disk /dev/disk/disk3, which is the device file of the disk with backing store /dev/disk/disk18 on the host. The -x overwrite=true option is used to ignore any LVM, VxVM, or boot records that might exist on the new disk.


Caution: Only use the -x overwrite=true option when you are sure the target disk is correct, the target disk is not currently in use, and the target disk can be overwritten.

# drd clone -v -x overwrite=true -t /dev/disk/disk3

======= 09/25/08 16:34:03 MDT BEGIN Clone System Image (user=root) (jobid=drdivm1)
* Reading Current System Information
* Selecting System Image To Clone
* Selecting Target Disk
* The disk "/dev/disk/disk3" contains data which will be overwritten.
* Selecting Volume Manager For New System Image
* Analyzing For System Image Cloning
* Creating New File Systems
* Copying File Systems To New System Image
* Making New System Image Bootable
* Unmounting New System Image Clone
* System image: "sysimage_001" on disk "/dev/disk/disk3"
======= 09/25/08 17:09:46 MDT END Clone System Image succeeded. (user=root) (jobid=drdivm1)

4. On the source VM, create a system information file for the new VM. You need at least the following information:

• The hostname of the new VM
• The MAC address of the new VM, determined in Step 1 above when the new VM was created
• The IP address of the new VM's network interface, unless it will be managed by DHCP
• The subnet mask of the new VM's network interface, unless it will be managed by DHCP

In addition, you will probably want to supply a gateway interface for the network interface if it is not managed by DHCP, as well as information about NIS and/or DNS servers.

Copy the template sysinfo file that is delivered by DRD, /etc/opt/drd/default_sysinfo_file, to a location where you will edit it. The following command copies the template to /var/opt/drd/tmp/drdivm2.sysinfo:

# cp /etc/opt/drd/default_sysinfo_file \
    /var/opt/drd/tmp/drdivm2.sysinfo

Using the comments in the file, the information supplied in the Creating the system information file section, or sysinfo(4), edit the copied file, commenting out the "SYSINFO_INTERACTIVE=ALWAYS" line and adding lines for the information in the bulleted list above.


Note: The initial support for the sysinfo file requires that the letters A-F in the sysinfo file's MAC address be entered in uppercase. This restriction is removed on HP-UX 11i v3 by PHCO_38608, which supersedes PHCO_36525.

After you have finished editing the file, the non-comment lines will be similar to those displayed below:

# grep -v "^#" /var/opt/drd/tmp/drdivm2.sysinfo
SYSINFO_HOSTNAME=drdivm2
SYSINFO_MAC_ADDRESS[0]=0x22431DF569E3
SYSINFO_IP_ADDRESS[0]=15.1.52.164
SYSINFO_SUBNET_MASK[0]=255.255.248.0
SYSINFO_ROUTE_GATEWAY[0]=15.1.48.1
SYSINFO_ROUTE_COUNT[0]=1
SYSINFO_ROUTE_DESTINATION[0]=default

Note that the MAC address in the sysinfo file must be specified in the format documented in sysinfo(4), which differs slightly from the format of the MAC address in the hpvmstatus output.

5. On the source VM, run the drd rehost command to copy the system information file created above to the EFI partition of the clone disk. This provides information that will be processed by the auto_parms utility when the new VM is booted.

# drd rehost -v -f /var/opt/drd/tmp/drdivm2.sysinfo

======= 09/25/08 21:02:10 MDT BEGIN Rehost System Image (user=root) (jobid=drdivm1)
* Checking System Restrictions
* Checking for Valid Inactive System Image
* Locating Inactive System Image
* Choosing image to rehost.
* Validating Target to Be Rehosted
* Validating New System Personality
* The messages from validation of the system info file "/var/opt/drd/tmp/drdivm2.sysinfo" are given below.
* The value "drdivm2" for "SYSINFO_HOSTNAME" passes the syntax check.
* The value "0x22431DF569E3" for "SYSINFO_MAC_ADDRESS[0]" passes the syntax check.
* The value 15.1.52.164 for SYSINFO_IP_ADDRESS[0] passes the syntax check.
* The value default for SYSINFO_ROUTE_DESTINATION[0] passes the syntax check.
* The value 15.1.48.1 for SYSINFO_ROUTE_GATEWAY[0] passes the syntax check.
* The value 1 for SYSINFO_ROUTE_COUNT[0] passes the syntax check.
* The value 255.255.248.0 for SYSINFO_SUBNET_MASK[0] passes the syntax check.
* End of messages from validation of the system info file "/var/opt/drd/tmp/drdivm2.sysinfo".
* The file "/var/opt/drd/tmp/drdivm2.sysinfo" passes the syntax check.
* Copying New System Personality
* The sysinfo file "/var/opt/drd/tmp/drdivm2.sysinfo" has been successfully copied to the target "/dev/disk/disk3".
======= 09/25/08 21:02:18 MDT END Rehost System Image succeeded. (user=root) (jobid=drdivm1)

6. You can run the drd status command on the source VM to verify that the system information file has been copied to SYSINFO.TXT on the clone.

# drd status

======= 07/28/09 15:08:55 MDT BEGIN Displaying DRD Clone Image Information (user=root) (jobid=drdivm1)
* Clone Disk:             /dev/disk/disk3
* Clone EFI Partition:    AUTO file present, Boot loader present
* Clone Rehost Status:    SYSINFO.TXT present
* Clone Creation Date:    07/01/09 15:54:35 MDT
* Clone Mirror Disk:      None
* Mirror EFI Partition:   None
* Mirror Rehost Status:   SYSINFO.TXT not present
* Original Disk:          /dev/disk/disk1
* Original EFI Partition: AUTO file present, Boot loader present
* Original Rehost Status: SYSINFO.TXT not present
* Booted Disk:            Original Disk (/dev/disk/disk1)
* Activated Disk:         Original Disk (/dev/disk/disk1)
======= 07/28/09 15:09:08 MDT END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=drdivm1)

Special consideration for HP-UX 11i v2: If you are using the Single-user-mode repair method, copy the mapfile created for the clone to the root file system of the clone itself:

# drd mount
# cp -p /var/opt/drd/mapfiles/drd00mapfile \
    /var/opt/drd/mnts/sysimage_001/
# drd umount

The next steps are executed on the VM host:

7. On the VM host, run the hpvmmodify command to move the clone disk from the source VM to the new VM:

# hpvmmodify -P drdivm1 -d disk:avio_stor::disk:/dev/rdisk/disk18

For HP-UX 11i v3, or if the Single-user-mode repair approach is used for HP-UX 11i v2, the clone can be moved to the target VM with a simple hpvmmodify command. The hpvmresources triple need not be specified:

# hpvmmodify -P drdivm2 -a disk:avio_stor::disk:/dev/rdisk/disk18

Special consideration for HP-UX 11i v2: The clone must be moved to the target VM, taking care to preserve the device file that identifies the disk.


If the Simple Generating VM approach has been used, the device file of the clone disk on the source was /dev/dsk/c0t0d1. Because no disks were defined on the target VM when it was created, there will be no conflict in using the triple (0,1,1) for the disk on the target VM. The following command adds the disk to the target VM:

# hpvmmodify -P drdivm2 \
    -a disk:avio_stor:0,1,1:disk:/dev/rdisk/disk18

8. On the VM host, start the new VM with the hpvmstart command, then run hpvmconsole, followed by CO, to connect to the EFI Boot Manager interface:

# hpvmstart -P drdivm2
# hpvmconsole -P drdivm2

o From the EFI Boot Manager, choose the EFI shell.
o Enter "fs0:" to choose Part1 of the boot disk.
o Enter "cd EFI\HPUX" to change to the HPUX directory.
o Enter "hpux.efi" to start the boot loader.

Special considerations for HP-UX 11i v2: If the Single-user-mode repair approach is used, interrupt the bootloader to get the HP-UX prompt, and then enter:

# boot -lm

This boots the target VM into single-user mode. You can then use the following steps, which are a continuation of step 8 above, to repair the LVM metadata. (These steps are adapted from the Changing the LVM Boot Device Hardware Path for a Virtual Partition section in the HP-UX Virtual Partitions Administrator's Guide, in the User Guide documents.)

8.1. Run insf and ioscan to get the device filename of the boot device:

# insf -e

# ioscan -fnC disk

8.2. Run vgscan to get the device filenames of the boot device:

# vgscan

8.3. Remove the old information about root volume group:

# vgexport /dev/vg00

You might have to remove /etc/lvmtab.

8.4. Prepare to import the root volume group (vg00):

# mkdir /dev/vg00

# mknod /dev/vg00/group c 64 0x000000

8.5. Import the root volume group (vg00):

# vgimport -m /drd00mapfile /dev/vg00 \
     <block_device_file_of_HPUX_partition>


The device file name is obtained from the ioscan and vgscan commands above. For example:

# vgimport -m /drd00mapfile /dev/vg00 /dev/dsk/c0t0d0s2

8.6. Activate the root volume group (vg00):

# vgchange -a y /dev/vg00

You might also have to clean up and prepare the LVM logical volumes to serve as the root, boot, primary swap, and dump volumes, as follows:

# lvrmboot -r /dev/vg00
# lvlnboot -b /dev/vg00/lvol1
# lvlnboot -r /dev/vg00/lvol3
# lvlnboot -s /dev/vg00/lvol2
# lvlnboot -d /dev/vg00/lvol2
# mount

8.7. Verify that the hardware path for the boot device matches the primary boot path:

# lvlnboot -v /dev/vg00

If the hardware path has not changed to the primary boot path, change it by running lvlnboot with the recovery (-R) option. This step is normally not necessary:

# lvlnboot -R /dev/vg00

8.8. Reboot the VM. If the Single-user-mode repair approach is used, the VM might automatically reboot twice.

The new VM will boot.

9. After the new VM boots, you can log in and set the new disk as the primary boot path.

Commands similar to the following can be used:

# vgdisplay -v vg00 | grep -e disk -e dsk
PV Name                     /dev/disk/disk3_p2

On HP-UX 11i v3 (11.31), the device special file can be used to set the primary boot path:

# setboot -p /dev/disk/disk3
Primary boot path set to 0/0/1/0.0x0.0x0 (/dev/disk/disk3)

Special consideration for HP-UX 11i v2: On HP-UX 11i v2, the hardware path must be supplied to the setboot command. To determine the hardware path of the boot disk, issue:

# ioscan -fnkC disk


Class  I  H/W Path     Driver  S/W State  H/W Type  Description
==========================================================================
disk   1  0/0/1/0.1.0  sdisk   CLAIMED    DEVICE    HP Virtual Disk
          /dev/dsk/c0t1d0    /dev/rdsk/c0t1d0
          /dev/dsk/c0t1d0s1  /dev/rdsk/c0t1d0s1
          /dev/dsk/c0t1d0s2  /dev/rdsk/c0t1d0s2
          /dev/dsk/c0t1d0s3  /dev/rdsk/c0t1d0s3

The primary boot path can then be set:

# setboot -p 0/0/1/0.1.0

10. After the new VM is booted, the /var/opt/drd/registry/registry.xml file must be removed from both the source VM and the new VM. (The need to manually remove this file will be addressed in the future.)
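For example, the file can be removed on each VM with rm (a minimal sketch; the path is the one given above):

# rm /var/opt/drd/registry/registry.xml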

11. After the new VM is booted, the /EFI/HPUX/SYSINFO.TXT file can safely be left in the EFI partition of the rehosted disk. However, if you wish to remove it, you can use the “drd unrehost” command, specifying the boot disk as the target of the command:

# drd unrehost -t /dev/disk/disk3

=======  09/25/08 21:25:09 MDT  BEGIN Rehost System Image (user=root)  (jobid=drdivm2)

 * Checking System Restrictions
 * Validating Target to Be Unrehosted
 * Removing Sysinfo file

=======  09/25/08 21:25:16 MDT  END Rehost System Image succeeded. (user=root)  (jobid=drdivm2)

A copy of the file can also be saved by specifying the -f option.
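A hedged sketch, assuming the -f option takes the destination file as its argument (consult the drd unrehost manpage for the exact syntax; the save path here is illustrative only):

# drd unrehost -t /dev/disk/disk3 -f /var/tmp/SYSINFO.TXT.save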

12. If the release of HP-UX 11i v3 on the source system is earlier than September 2008, error messages might be issued when vgdisplay or lvlnboot are run. In this case, run the following commands:

# vgscan -k -f /dev/vg00
# lvlnboot -R /dev/vg00

See the section on Boot Resiliency in LVM New Features in HP-UX 11i v3 (in the White Paper documents) for more details.

13. After the new VM is booted, you might want to remove non-root volume groups that were present on the source VM but not on the target. Because the root disk is the only disk currently assigned to the target VM, the simplest way to do this is to rename /etc/lvmtab (and /etc/lvmtab_p, if it exists) and re-create /etc/lvmtab and /etc/lvmtab_p with the vgscan command:

# mv /etc/lvmtab /etc/lvmtab.save
# mv /etc/lvmtab_p /etc/lvmtab_p.save
# vgscan -a
Creating "/etc/lvmtab".
*** LVMTAB has been created successfully.
*** Do the following to resync the information on the disk.


*** #1. vgchange -a y
*** #2. lvlnboot -R

# vgchange -a y
Volume group "/dev/vg00" is already active on this system.

# lvlnboot -R
Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf

The /etc/fstab file can be edited to remove entries that are not available on the new VM, as in the sketch below.
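A hypothetical example: if the source VM mounted a file system from a volume group vgdata that does not exist on the target, its /etc/fstab entry can be commented out or deleted (the volume group and mount point names are illustrative only):

# vi /etc/fstab
# /dev/vgdata/lvol1 /data vxfs delaylog 0 2   <- vgdata is not present on this VM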

14. The contents of /stand/bootconf must be checked to ensure that the current device for the boot disk is recorded. The format of a line representing an LVM-managed disk is an “l” (ell) in column one, followed by a space, followed by the block device file of the HP-UX (second) partition of the boot disk. The boot disk can be determined by running vgdisplay on the root volume group (usually vg00), as in the example below.
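For example, if vgdisplay -v vg00 reports the physical volume /dev/disk/disk3_p2, as in step 9 above, the corresponding /stand/bootconf entry would look like this:

# cat /stand/bootconf
l /dev/disk/disk3_p2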

15. You might need to contact your network administrator to arrange for additional configuration of the new VM on your DNS, NIS, or DHCP servers.

16. If additional applications use configuration or licensing information specific to a particular host (such as hostname or IP address), they might need to be updated.
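A hedged sketch for locating configuration files that still reference the old hostname; here drdivm1 stands in for the source VM’s hostname, and /etc/opt is just one illustrative place to search:

# find /etc/opt -type f 2>/dev/null | xargs grep -l drdivm1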


Glossary

booted system environment
    The system environment that is currently running, also known as the current, active, or running system environment.

CLI
    Command line user interface.

clone
    (noun) A system image clone.
    (verb) To clone a system image.

cloned system image
    A copy of the critical file systems from the system image of a booted system environment, produced by the drd clone command.
    A cloned system image may be inactive, or it may be booted, in which case the system activities are started and the clone becomes the system image in the booted system environment. When a particular system image is booted, all other system images are inactive.
    A system administrator may modify a cloned system image by installing software on it using the “drd runcmd” command.

DRD
    Dynamic Root Disk. The collection of utilities that manages creation, modification, and booting of system images.

EFI
    Extensible Firmware Interface, the firmware interface for Itanium®-based systems. Also the name of the first partition on an HP-UX boot disk.

inactive system image
    A system image that is not the booted system environment. This system image can be modified while the booted system environment remains in production.

LVM
    Logical Volume Manager. A subsystem that manages disk space, supplied at no charge with HP-UX.

OE
    Operating Environment.

original system environment
    A booted system environment whose system image is cloned to create another system image. Each system image has exactly one original system environment (that is, the booted system environment at the time the drd clone command was issued).

root file system
    The file system that is mounted at /.

system environment
    The combination of the system image and the system activities that comprise a running installation of HP-UX.

system image
    The file systems and their contents that comprise an installation of HP-UX, residing on disk and therefore persisting across reboots.


For more information To read more about Dynamic Root Disk, go to www.hp.com/go/drd.

Call to action

HP welcomes your input. Please give us comments about this white paper, or suggestions for LVM or related documentation, through our technical documentation feedback website: http://docs.hp.com/en/feedback.html

© 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

5900-0595, July 2010
