Installing RHCS on RHEL



GUIDE TO CONFIGURE CLUSTER


Author : Ibrahimpasha Vorubai

Installing RHCS on RHEL-4/5/6

The following steps are an example of how to install Red Hat Cluster Services on a two-node cluster in RHEL 4 Update 5 and 6 environments.

Ethernet NIC requirements:

Each node computer needs at least 2 GB of memory and 4 Ethernet ports.

Ethernet port      Use for
-------------      -------
Eth0               Lab network (public) interface
Eth1               Heartbeat port bonding
Eth2 and Eth3      iSCSI interface

Sample entries for the iSCSI interface in /etc/sysconfig/network-scripts/ifcfg-eth*, e.g.:

DEVICE=eth3
BOOTPROTO=none
HWADDR=00:04:23:C3:80:90
ONBOOT=yes
TYPE=Ethernet
DHCP_HOSTNAME=ca-ftibm-02
IPADDR=192.168.100.27    # first 3 octets are the same as the Slammer's iSCSI address
NETMASK=255.255.255.0
USERCTL=no
IPV6INIT=no
PEERDNS=yes

Sample entries for the public (management) interface in /etc/sysconfig/network-scripts/ifcfg-eth*:

DEVICE=eth1
BOOTPROTO=none
HWADDR=00:11:25:8D:4F:FA
IPADDR=10.22.165.27
NETMASK=255.255.255.0
ONBOOT=yes
TYPE=Ethernet
GATEWAY=10.22.165.1

Sample entries for the heartbeat interface in /etc/sysconfig/network-scripts/ifcfg-eth*:

DEVICE=eth2
BOOTPROTO=none
HWADDR=00:11:25:8D:4F:FB
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
#TYPE=Ethernet
#DHCP_HOSTNAME=ca-ftibm-02
TYPE=Ethernet

Sample entries for the bond0 interface in /etc/sysconfig/network-scripts/ifcfg-bond0:

DEVICE=bond0
USERCTL=no
ONBOOT=yes
BROADCAST=10.10.10.255
NETWORK=10.10.10.0
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
IPADDR=10.10.10.6
TYPE=Ethernet

Install desired RHEL4 update version.

Install the supported OS according to PXE network installation procedures, or use CD media on each node computer. For clustering, a two-node membership will be set up. Also, as part of the OS installation, include the X windowing and GNOME packages in the kickstart configuration file, or select them from the CD custom installation. For example, the kernel and architecture version for RHEL4 Update 6 is:

2.6.9-67.ELsmp #1 SMP Wed Nov 7 13:56:44 EST 2007 x86_64 x86_64
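You can confirm the running kernel and architecture on each node with, for example:

# uname -a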

Enable the bonding module for Ethernet port bonding. This is for the heartbeat communication between nodes.

1. Perform the following on each of the node members of the cluster. In /etc/modprobe.conf add these lines preferably right after the NIC definitions.

Example: right after the line "alias eth3 e1000", add:

alias bond0 bonding
options bond0 mode=1 miimon=100

2. Edit and add the following to /etc/sysconfig/network-scripts/ifcfg-eth1 file. Example:

DEVICE=eth1
BOOTPROTO=none
#HWADDR=xx:xx:xx:xx    (uncomment this, else you will not see the interface)
ONBOOT=yes
USERCTL=no
MASTER=bond0
SLAVE=yes
#TYPE=Ethernet


3. Create a new file /etc/sysconfig/network-scripts/ifcfg-bond0 and add the following. Example:

DEVICE=bond0
USERCTL=no
ONBOOT=yes
BROADCAST=10.10.10.255
NETWORK=10.10.10.0
NETMASK=255.255.255.0
GATEWAY=10.10.10.1
IPADDR=10.10.10.5

The IPADDR should be different on the other node, for example 10.10.10.6.

4. Edit and add the IPs and host names to /etc/hosts file.

Example:

127.0.0.1     localhost.localdomain localhost
10.34.5.39    cofunintel02.eng.trans.corp cofunintel02
10.34.5.40    cofunintel03.eng.trans.corp cofunintel03
10.10.10.5    cofunintel02p
10.10.10.6    cofunintel03p

5. Each node should have the same entries in the /etc/hosts file; ensure that the hosts file is the same on both nodes.
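For example, one way to push the file to the other node and confirm both copies match (a sketch; assumes root ssh access between the nodes):

# scp /etc/hosts cofunintel03:/etc/hosts
# md5sum /etc/hosts ; ssh cofunintel03 md5sum /etc/hosts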

6. Edit /etc/resolv.conf and add or check the following for the CO lab environment:

domain eng.trans.corp
search eng.trans.corp lab.pillar trans.corp
nameserver 10.34.0.10
nameserver 10.32.0.10

7. Restart the network:

# service network restart

After network restarts, eth1 and bond0 should be configured. Example output:

# ifconfig -a
bond0     Link encap:Ethernet  HWaddr 00:14:22:20:AF:5F
          inet addr:10.10.10.7  Bcast:10.10.10.255  Mask:255.255.255.0
          inet6 addr: fe80::214:22ff:fe20:af5f/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:266 (266.0 b)  TX bytes:2340 (2.2 KiB)

eth0      Link encap:Ethernet  HWaddr 00:14:22:20:AF:5E
          inet addr:10.34.5.60  Bcast:10.34.5.255  Mask:255.255.255.0
          inet6 addr: fe80::214:22ff:fe20:af5e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4810880 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3394478 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:1718891670 (1.6 GiB)  TX bytes:2417513088 (2.2 GiB)
          Base address:0xccc0 Memory:fe4e0000-fe500000

eth1      Link encap:Ethernet  HWaddr 00:14:22:20:AF:5F
          inet6 addr: fe80::214:22ff:fe20:af5f/64 Scope:Link
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:3 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:100
          RX bytes:266 (266.0 b)  TX bytes:2340 (2.2 KiB)
          Base address:0xbcc0 Memory:fe2e0000-fe300000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:104 errors:0 dropped:0 overruns:0 frame:0
          TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:7148 (6.9 KiB)  TX bytes:7148 (6.9 KiB)

sit0      Link encap:IPv6-in-IPv4
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
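The bonding driver also reports the bond and slave status through /proc, which provides another quick check:

# cat /proc/net/bonding/bond0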

8. Install the HBA driver kit under test according to its installation and APM Handover Notes.

9. Reboot the cluster nodes.

Install cluster suite package on each node.

1. Ensure that the appropriate RHCS-5.4 packages are selected for the specific platform.

2. Install RHCS and GFS RPM packages from CD

Here is a list of packages needed:

- cman, cman-kernel
- rgmanager
- GFS
- kmod-gfs2
- sysstat
- system-config-cluster

CLVM and lvm2-cluster are part of the GFS installation packages.
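For example, a minimal sketch of installing them from a directory containing the RPMs (package file names here are illustrative and depend on the RHEL release and media in use):

# rpm -ivh cman-*.rpm rgmanager-*.rpm GFS-*.rpm kmod-gfs*.rpm sysstat-*.rpm system-config-cluster-*.rpm lvm2-cluster-*.rpm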

Cluster services configuration.

Ensure cluster services are off for every runlevel on each node. Some of the services might have the run level set to on when the packages are installed, so ensure that all run levels are off to avoid starting them when first starting the cluster configuration.


Example:

[root@cofunintel02 ~]# chkconfig --list cman
cman            0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@cofunintel02 ~]# chkconfig --list rgmanager
rgmanager       0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@cofunintel02 ~]# chkconfig --list clvmd
clvmd           0:off   1:off   2:off   3:off   4:off   5:off   6:off
[root@cofunintel02 ~]# chkconfig --list gfs
gfs             0:off   1:off   2:off   3:off   4:off   5:off   6:off

If any of these services is on at any runlevel, turn it off with:

# chkconfig --level 0123456 <service name> off
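For example, a one-liner that turns all four of them off (a sketch):

# for svc in cman rgmanager clvmd gfs; do chkconfig --level 0123456 $svc off; done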

Modify the lvm configuration file.

1. Edit the /etc/lvm/lvm.conf file as follows: look for the line "locking_type = 1" and change it to "locking_type = 3".

2. Look for the line locking_library = "/lib/liblvm2clusterlock.so" and uncomment it.

3. Add a filter to detect the dm devices and reject everything else. Here is the filter:

# By default we accept every block device:

##filter = [ "a/.*/" ]
filter = [ "a/dm-.*/", "r/.*/" ]
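A quick way to confirm the three changes took effect (a sketch; the pattern also matches commented lines):

# grep -nE 'locking_type|locking_library|filter' /etc/lvm/lvm.conf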

Create a cluster configuration file under /etc/cluster/

The configuration file is named cluster.conf. This is a bare-bones file that lets the cluster services start up:

<?xml version="1.0"?>
<cluster config_version="1" name="RHEL5U6">   <!-- "name" is the cluster name -->

<cman expected_votes="1" two_node="1"/>

<clusternodes>
  <clusternode name="ca-ftibm-01" votes="1" nodeid="1">
    <fence>
      <method name="single">
        <device name="human" nodename="ca-ftibm-01"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="ca-ftibm-02" votes="1" nodeid="2">
    <fence>
      <method name="single">
        <device name="human" nodename="ca-ftibm-02"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice name="human" agent="fence_ack_manual"/>
</fencedevices>


<rm>
  <failoverdomains>
    <failoverdomain name="test_domain" ordered="0" restricted="0">
      <failoverdomainnode name="ca-ftibm-01" priority="1"/>
      <failoverdomainnode name="ca-ftibm-02" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="10.21.5.99" monitor_link="1"/>
  </resources>
  <service autostart="1" domain="test_domain" name="test" recovery="relocate">
    <ip ref="10.21.5.99"/>
  </service>
</rm>
</cluster>

The IP address should be an unused IP in the management IP range.
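One quick sanity check that the chosen address is not already in use (a sketch; no replies are expected):

# ping -c 3 10.21.5.99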

Copy or scp the same configuration file to the other node as /etc/cluster/cluster.conf.
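For example, from ca-ftibm-01:

# scp /etc/cluster/cluster.conf ca-ftibm-02:/etc/cluster/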

Manually start the cluster services as follows:

1. Perform each step below one at a time on each node, at about the same time.

# service cman start
# service clvmd start
# service rgmanager start

[root@ca-ftibm-01 cluster]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@ca-ftibm-01 cluster]#
[root@ca-ftibm-01 cluster]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:                                            [  OK  ]
[root@ca-ftibm-01 cluster]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@ca-ftibm-01 cluster]#

Note: the ccsd and fenced services are included in the cman service in RHEL 5.4.

2. Check cluster status with the "clustat" utility:

# clustat

The clustat execution displays something like this:

[root@ca-ftibm-02 local]# clustat
Cluster Status for RHEL5U6 @ Fri Nov 13 02:11:26 2009
Member Status: Quorate

Member Name                             ID   Status
------ ----                             ---- ------
ca-ftibm-01                                1 Online, rgmanager
ca-ftibm-02                                2 Online, Local, rgmanager

Service Name                   Owner (Last)                   State
------- ----                   ----- ------                   -----
service:test                   ca-ftibm-02                    started
[root@ca-ftibm-02 local]#
[root@ca-ftibm-02 local]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: cs-clu64-01
Cluster Id: 101
Cluster Member: Yes
Cluster Generation: 211584
Membership state: Cluster-Member
Nodes: 2
Expected votes: 1
Total votes: 2
Quorum: 1
Active subsystems: 9
Flags: 2node Dirty
Ports Bound: 0 11 177
Node name: ca-ftibm-02
Node ID: 2
Multicast addresses: 239.192.0.101
Node addresses: 10.22.153.27
[root@ca-ftibm-02 local]#

Using LVM to create a physical volume, a volume group and logical volume

To create a shared raw device for running I/O in a clustered environment, where all the cluster nodes can see the storage, perform the following:

1. Ensure that all the cluster nodes detect the same LUNs.

[root@ca-ftibm-02 mapper]# cd /dev/mapper
[root@ca-ftibm-02 mapper]# ls
2000b080059001259  2000b08005b001259  control
2000b08005a001259  2000b08005c001259
[root@ca-ftibm-02 mapper]#
[root@ca-ftibm-01 mapper]# ls
2000b080059001259  2000b08005b001259  control
2000b08005a001259  2000b08005c001259
[root@ca-ftibm-01 mapper]#

2. Use LVM to create a physical volume, volume group, and logical volume as follows:

[root@ca-ftibm-02 mapper]# pvcreate /dev/mapper/2000b080059001259 /dev/mapper/2000b08005a001259 /dev/mapper/2000b08005b001259 /dev/mapper/2000b08005c001259
  Physical volume "/dev/mapper/2000b080059001259" successfully created
  Physical volume "/dev/mapper/2000b08005a001259" successfully created
  Physical volume "/dev/mapper/2000b08005b001259" successfully created
  Physical volume "/dev/mapper/2000b08005c001259" successfully created
[root@ca-ftibm-02 mapper]# pvs


  PV                              VG   Fmt  Attr PSize  PFree
  /dev/mapper/2000b080059001259        lvm2 --   50.24G 50.24G
  /dev/mapper/2000b08005a001259        lvm2 --   50.24G 50.24G
  /dev/mapper/2000b08005b001259        lvm2 --   50.24G 50.24G
  /dev/mapper/2000b08005c001259        lvm2 --   50.24G 50.24G
[root@ca-ftibm-02 mapper]# vgcreate test /dev/mapper/2000b080059001259 /dev/mapper/2000b08005a001259 /dev/mapper/2000b08005b001259 /dev/mapper/2000b08005c001259
  Clustered volume group "test" successfully created
[root@ca-ftibm-02 mapper]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  test   4   0   0 wz--nc 200.97G 200.97G
[root@ca-ftibm-02 mapper]#
[root@ca-ftibm-02 mapper]# lvcreate -L 10GB test
  Error locking on node ca-ftibm-01: Volume group for uuid not found: 1R5lJKiwT7Q5Xx5AIEvHSHx8Y2zKvXKHPzNTORelsbYmw3GHYLenCk7kueeMkdp8
  Aborting. Failed to activate new LV to wipe the start of it.

*** To resolve this, I rebooted the node ca-ftibm-01 and, once it came up, started all the cluster-related services (cman, clvmd, rgmanager).

[root@ca-ftibm-01 ~]# reboot
[root@ca-ftibm-01 ~]# service cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@ca-ftibm-01 ~]# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VGs:   0 logical volume(s) in volume group "test" now active
                                                           [  OK  ]
[root@ca-ftibm-01 ~]# service rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@ca-ftibm-01 ~]# lvs
  LV    VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lvol0 test -wi-a- 10.00G
[root@ca-ftibm-01 ~]# ll /dev/mapper/
total 0
brw-rw---- 1 root disk 253,  1 Nov 12 22:07 2000b080059001259
brw-rw---- 1 root disk 253,  2 Nov 12 22:07 2000b08005a001259
brw-rw---- 1 root disk 253,  0 Nov 12 22:07 2000b08005b001259
brw-rw---- 1 root disk 253,  3 Nov 12 22:07 2000b08005c001259
crw------- 1 root root  10, 63 Nov 12 22:07 control
brw-rw---- 1 root disk 253,  4 Nov 12 22:09 test-lvol0
[root@ca-ftibm-01 ~]#

[root@ca-ftibm-02 mapper]# lvcreate -L 10GB test
  clvmd not running on node ca-ftibm-01
  Unable to drop cached metadata for VG test.
[root@ca-ftibm-02 mapper]# lvcreate -L 10GB test
  Logical volume "lvol0" created
[root@ca-ftibm-02 mapper]# lvs
  LV    VG   Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  lvol0 test -wi-a- 10.00G
[root@ca-ftibm-02 mapper]# ll /dev/mapper/
total 0
brw-rw---- 1 root disk 253,  1 Nov 12 22:02 2000b080059001259
brw-rw---- 1 root disk 253,  2 Nov 12 22:02 2000b08005a001259
brw-rw---- 1 root disk 253,  0 Nov 12 22:02 2000b08005b001259
brw-rw---- 1 root disk 253,  3 Nov 12 22:02 2000b08005c001259
crw------- 1 root root  10, 63 Nov 12 21:46 control
brw-rw---- 1 root disk 253,  4 Nov 12 22:09 test-lvol0
[root@ca-ftibm-02 mapper]#

[root@ca-ftibm-02 calsoft]# mkfs.gfs2 -t RHEL5U6:gfs-1 -p lock_dlm -j 2 /dev/mapper/test-lvol0
This will destroy any data on /dev/mapper/test-lvol0.
It appears to contain a gfs filesystem.

Are you sure you want to proceed? [y/n] y

Device:                    /dev/mapper/test-lvol0
Blocksize:                 4096
Device Size                10.00 GB (2621440 blocks)
Filesystem Size:           10.00 GB (2621438 blocks)
Journals:                  2
Resource Groups:           40
Locking Protocol:          "lock_dlm"
Lock Table:                "cs-clu64-01:gfs-1"
UUID:                      6FBEE954-5DEA-F7D4-6125-77075A701EEC

[root@ca-ftibm-02 calsoft]# mount -t gfs2 /dev/mapper/test-lvol0 /plr
[root@ca-ftibm-02 calsoft]#

If you get the following error while mounting, it means you do not have kmod-gfs2*.rpm installed. Download the RPM, then retry the mount command and it will succeed:

/sbin/mount.gfs: error mounting lockproto lock_dlm

The final cluster.conf file should look as follows on both cluster nodes:

[Wed May 25 23:13:59 ca-ftibm-01]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="1" name="RHEL5U6">

<cman expected_votes="1" two_node="1"/>

<clusternodes>
  <clusternode name="ca-ftibm-01" votes="1" nodeid="1">
    <fence>
      <method name="single">
        <device name="human" nodename="ca-ftibm-01"/>
      </method>
    </fence>
  </clusternode>
  <clusternode name="ca-ftibm-02" votes="1" nodeid="2">
    <fence>
      <method name="single">
        <device name="human" nodename="ca-ftibm-02"/>
      </method>
    </fence>
  </clusternode>
</clusternodes>
<fencedevices>
  <fencedevice name="human" agent="fence_manual"/>
</fencedevices>
<rm>
  <failoverdomains>
    <failoverdomain name="test_domain" ordered="0" restricted="0">
      <failoverdomainnode name="ca-ftibm-01" priority="1"/>
      <failoverdomainnode name="ca-ftibm-02" priority="1"/>
    </failoverdomain>
  </failoverdomains>
  <resources>
    <ip address="10.21.5.99" monitor_link="1"/>
    <clusterfs device="/dev/mapper/test-lvol0" force_unmount="0" fsid="59240" fstype="gfs2" mountpoint="/plr" name="RHEL-gfs" options=""/>
  </resources>
  <service autostart="1" domain="test_domain" name="test" recovery="relocate">
    <ip ref="10.21.5.99"/>
  </service>

  <service autostart="1" domain="test_domain" name="RHEL-io">
    <script file="/clustest/ddfo" name="RHEL-ddfo"/>
    <clusterfs ref="RHEL-gfs"/>
  </service>

</rm>
</cluster>

[root@ca-ftibm-02 calsoft]# mount
/dev/sda6 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda8 on /mnt type ext3 (rw)
/dev/sda5 on /tmp type ext3 (rw)
/dev/sda3 on /var type ext3 (rw)
/dev/sda2 on /usr type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
none on /sys/kernel/config type configfs (rw)
/dev/mapper/test-lvol0 on /plr type gfs2 (rw,hostdata=jid=0:id=131074:first=1)

[root@ca-ftibm-02 calsoft]# ./starter
Enter the directory where the test files are going to be copied. The default location is /clustest. Be sure to include the leading (/) when specifying a different directory or hit return to accept the default. [/clustest] >


Copied all necessary files on to /clustest.

********************************************
What do you want to call this script?

Enter the script name only (default is ddfo) >

Script name is ddfo.

**********************************************************************************
The following require the absolute path names, be sure to include the leading '/'.
**********************************************************************************

Absolute path name of the mountpoint to use is required.

Enter mount point (ex. /mnt/plr0) > /plr

Mount point to use is /plr.

Absolute path for the Logical volume to mount is required here.

Enter the logical volume to test including the leading (/). (ex. /dev/mapper/vg0-lv0) > /dev/mapper/test-lvol0

Logical Volume name is /dev/mapper/test-lvol0.

Do you want to start running /clustest/ddfo? (y/[n]) > y
Starting IO....
Writing file ca-ftibm-02.systemtest.local:/plr/1G1.
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 12.219 seconds, 83.8 MB/s

real    0m12.251s
user    0m0.737s
sys     0m10.305s

Writing file ca-ftibm-02.systemtest.local:/plr/1G2.

[root@ca-ftibm-01 ~]# iostat -kd sdd sdj 2
Linux 2.6.18-164.6.1.el5 (ca-ftibm-01.systemtest.local)   11/13/2009

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdd               1.68         0.52       281.34       5988    3249952
sdj               1.67         0.08       281.48        892    3251492

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdd             191.50         0.00     33788.00          0      67576
sdj             193.50        10.00     33990.00         20      67980

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdd             189.50         0.00     34376.00          0      68752
sdj             189.00         0.00     34000.00          0      68000

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdd             196.50        10.00     34376.00         20      68752
sdj             188.50         0.00     34000.00          0      68000

Verify the device-mapper version that is installed with the default OS installation.

1. On each cluster node, run the following command to verify the installed package version:

# rpm -qa | grep mapper

2. If no device-mapper package is shown, install the version available from the same installed OS.

The device-mapper version for RHEL4 u5 should be at least "device-mapper-1.02.17-3.el4" on the x86-32. For RHEL4 U6, the version should be at least: device-mapper-1.02.21-1.el4

Install device multipath tools from Pillar's APM release repository

1. Copy the multipath tools RPM package to a temporary directory on each cluster node.

2. Install the package as follows:

# rpm -ivh <multipath rpm package name>

3. Read the Handover Notes for any outstanding installation notes, issues, and/or fixes.

Install axiompm package from Pillar's APM release repository

1. Copy the axiompm RPM package to a temporary directory on each cluster node.

2. Install the package for the installed OS platform as follows:

# rpm -ivh <axiompm rpm package name>

3. Read the Handover Notes for the release for any outstanding issues/fixes.

Configure the multipath.conf file to blacklist any internal SCSI devices on the host client, so multipaths are not assigned to them.

1. Edit /etc/multipath.conf and add the internal scsi device to be blacklisted. You can use the devices or the WWID of the internal SCSI devices. The WWID is the desired method because sometimes the internal device might change when rebooting or adding new devices. To get the WWID of a device, use the following command:

# /sbin/scsi_id -g -u -s /block/sdx

where sdx is the internal SCSI device name. For example, see the two devices (sda and sdb) and the wwid entry in the blacklist section below; the preferred method is to use the WWID only.

#
# Copyright 2008 Pillar Data Systems, Inc.
#
# This is the Pillar default multipath-tools configuration file
#

# default uses     : /udev
# getuid_callout   : "/sbin/scsi_id -g -u -s /block/%n"
# prio_callout     : the call to obtain a path for alua device "/sbin/mpath_prio_alua"
# path_checker     : pillar enhanced tur

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st|sda|sdb)[0-9]*"
        devnode "^hd[a-z][0-9]*"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
        wwid <the wwid returned from the command above>
}
...

2. Save the file and restart multipathd

# /etc/init.d/multipathd restart

3. Check that internal scsi devices are blacklisted and all the other Axiom devices have multipaths assigned.

# multipath -v3 -ll

4. Restart axiompm

# /etc/init.d/axiompmd restart

5. Check Axiom GUI for Axiom path manager communication and host port status

Verify that the content is specific to the hosts and comes from axiompm:

1. Axiompm communicates with the host clients.

2. In the Axiom GUI, the Storage tab > Hosts link > Path Manager column for the attached host(s) shows a communicating status.

3. Axiompm log files exist for each of the SAN hosts communicating with the Axiom.
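A quick local check on each host is the axiompm init script shown earlier (a sketch; assumes the script supports a status action):

# /etc/init.d/axiompmd status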

pvcreate /dev/mapper/2000b080000001350 on each of the LUNs.

Then perform:

vgcreate -c y <volume name> /dev/mapper/2000b080000001350 for each LUN.

After that, create a logical volume:

lvcreate -L <size of logical volume to create> <Volume Group name>
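Putting these together for the first LUN (a sketch; the device path and names match the example listings that follow):

# pvcreate /dev/mapper/2000b080000001350
# vgcreate -c y Cluvg_01 /dev/mapper/2000b080000001350
# lvcreate -L 120G Cluvg_01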

Here are examples of each:

[root@cofunrh05 ~]# pvs
  PV                              VG        Fmt  Attr PSize   PFree
  /dev/mapper/2000b080000001350   Cluvg_01  lvm2 a-   120.99G 1016.00M
  /dev/mapper/2000b080001001350   Cluvg_02  lvm2 a-   200.97G  996.00M
  /dev/mapper/2000b080002001350   Cluvg_03  lvm2 a-   100.48G  496.00M
  /dev/mapper/2000b080003001350   Cluvg_04  lvm2 a-    70.75G  768.00M

[root@cofunrh05 ~]# vgs

  VG        #PV #LV #SN Attr   VSize   VFree
  Cluvg_01    1   1   0 wz--nc 120.99G 1016.00M
  Cluvg_02    1   1   0 wz--nc 200.97G  996.00M
  Cluvg_03    1   1   0 wz--nc 100.48G  496.00M
  Cluvg_04    1   1   0 wz--nc  70.75G  768.00M

[root@cofunrh05 ~]# lvs
  LV    VG        Attr   LSize   Origin Snap%  Move Log Copy%
  lvol0 Cluvg_01  -wi-a- 120.00G
  lvol0 Cluvg_02  -wi-a- 200.00G
  lvol0 Cluvg_03  -wi-a- 100.00G
  lvol0 Cluvg_04  -wi-a-  70.00G

When using DT as the I/O driver, use the device listed under /dev/mapper/<volume group and logical volume name>.

Example:

[root@cofunrh05 ~]# ls -l /dev/mapper
total 0
brw-rw---- 1 root disk 253, 0 Apr 16 16:59 2000b080000001350
brw-rw---- 1 root disk 253, 1 Apr 16 16:59 2000b080001001350
brw-rw---- 1 root disk 253, 2 Apr 16 16:59 2000b080002001350
brw-rw---- 1 root disk 253, 3 Apr 16 16:59 2000b080003001350
brw-rw---- 1 root disk 253, 6 Apr 21 09:27 Cluvg_01-lvol0
brw-rw---- 1 root disk 253, 5 Apr 21 09:27 Cluvg_02-lvol0
brw-rw---- 1 root disk 253, 7 Apr 21 09:27 Cluvg_03-lvol0
brw-rw---- 1 root disk 253, 4 Apr 21 09:27 Cluvg_04-lvol0

So, with DT, the I/O command would look like:

#./dt of=/dev/mapper/Cluvg_01-lvol0 bs=64k capacity=120g pattern=iot log=/tmp/dt_01.log&

For quick test case execution involving I/O, change the capacity to a smaller value such as 6 Gigs in order for a complete I/O pass to finish within minutes.
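For example, the same command with a reduced capacity (a sketch):

# ./dt of=/dev/mapper/Cluvg_01-lvol0 bs=64k capacity=6g pattern=iot log=/tmp/dt_01.log &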