Setting Up a Highly Available Red Hat Enterprise Virtualization Manager (RHEV 3.1)

Author Names: Brandon Perkins, Chris Negus
Technical Review Team: Rob Washburn, Chris Keller, Mikkilineni Suresh Babu, Bryan Yount
5/16/2013

INTRODUCTION

To make your RHEV-M highly available, you can configure it to run as a service in an HA cluster. Red Hat Cluster Suite (RHCS) high availability clusters eliminate single points of failure, so if the node on which a service (which, in this case, includes resources needed by RHEV-M) is running becomes inoperative, the service can start up again (fail over) on another cluster node with minimal interruption and no data loss.

Red Hat supports two options for making your RHEV-M 3.1 highly available:

• RHEV-M as a highly available virtual machine: This approach (not covered in this tech brief) lets you configure a single RHEV-M as a virtual machine that is brought up on another host if the RHEV-M goes down. It offers simpler configuration, but can result in a longer downtime of a few minutes when the VM goes down. Read about this approach here:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/s1-virt_machine_resources-ccs-CA.html

• RHEV-M as a highly available service: This tech brief describes how to configure Red Hat Enterprise Virtualization Manager (RHEV-M) in a two-node, RHCS highly available (HA) cluster.

If you want further information about the various components covered in this guide, refer to the following:

• RHEL 6 Cluster Administration Guide. Describes how to configure a Red Hat Enterprise Linux cluster. Refer to this guide for help extending or modifying your cluster:
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Cluster_Administration/index.html

• Red Hat Enterprise Virtualization Installation Guide. Describes the non-clustered installation of a RHEV-M, as well as other RHEV components:
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Virtualization/3.1/html-single/Installation_Guide/index.html

After completing the main content of this document to create the RHEV-M 3.1 HA cluster, refer to the following appendices for additional information:

• Appendix A: Changing your RHEV-M Cluster: Describes how to make sure that your nodes stay synchronized when you make changes, such as setting new passwords and replacing certificates.

• Appendix B: Updating the RHEV-M Cluster: Describes how to update RHEL, RHEV, and cluster software on your cluster nodes in a way that keeps your nodes functioning and synchronized.

• Appendix C: Sample cluster.conf File: Contains a listing of the cluster.conf file that is produced from the cluster configuration done in this document.

NOTE: Although not strictly required, it is generally better to run at least a three-node cluster. Besides offering extra resources, the additional node makes it less likely that you will end up in a "split-brain" condition, where both nodes believe they control the cluster.
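
For reference, a two-node RHCS cluster makes up for the missing tie-breaking vote with special cman settings in /etc/cluster/cluster.conf; luci normally adds these for you when you create a two-node cluster, so the following is only a sketch of what to expect, not something to enter by hand:

<cman expected_votes="1" two_node="1"/>

With a third node, these settings are unnecessary because a real quorum (two of three votes) decides which partition keeps running.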

NOTE: The procedures in this tech brief contain several long, complex commands. Consider copying this document, or plain text copies of it, to the cluster nodes so you can copy and paste commands into the shell. In particular, it is critical that you get the names of directories exactly right when you set up shared storage. Copying and pasting directory names can help prevent errors.

UNDERSTANDING SYSTEM REQUIREMENTS

There are many different ways of setting up a high availability RHEV-M cluster. In our example, we used the following components:

• Two cluster nodes. Install two machines with Red Hat Enterprise Linux 6 to act as cluster nodes.

• A cluster web user interface. A Red Hat Enterprise Linux system (not on either of the cluster nodes) running the luci web-based high-availability administration application. You want this running on a system outside the cluster, so if either node goes down, the management interface is not affected.

• Network storage. Shared network storage is required. This procedure shows how to use HA LVM from a RHEL 6 system, which is backed by iSCSI storage. (Fibre Channel and NFS are other technologies you could use instead of iSCSI.)

• Red Hat products. This procedure combines components from Red Hat Enterprise Linux, Red Hat Cluster Suite, Red Hat Enterprise Virtualization, and (optionally) Red Hat Enterprise Linux Server Resilient Storage.

Using this information, set up two physical systems as cluster nodes (running ricci), another physical system that holds the cluster manager web user interface (running luci), and a final system or other shared storage device to contain the HA LVM storage. Figure 1 shows the layout of the systems used to test this procedure:


Figure 1: Example RHEV-M on HA cluster configuration

For our example, we used two NICs on the cluster nodes. We used a 192.168.99.0 network for communication within the cluster and 192.168.100.0 for the network facing the RHEV environment. We used a SAN and created a high-availability LVM with multiple logical volumes that are shared by the cluster. The procedures that follow describe how to set up the cluster nodes, the cluster web user interface, the HA LVM storage, and the clustered service running the RHEV-M.
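
To keep the example concrete, here is a hedged sketch of the /etc/hosts entries that match this layout. The node addresses shown on the 192.168.100.0 network are illustrative assumptions; 192.168.100.3 and myrhevm.example.com are the floating service address and shared hostname used later in this procedure, and the per-node addresses on the 192.168.99.0 cluster network are not shown:

192.168.100.1   node1.example.com     # assumed address, for illustration only
192.168.100.2   node2.example.com     # assumed address, for illustration only
192.168.100.3   myrhevm.example.com   # floating virtual IP for the rhevm cluster service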

CONFIGURE CLUSTER NODES (RICCI)

Follow the steps below to install and configure two (or more) Red Hat Enterprise Linux systems as cluster nodes.

1. Choose cluster hardware. The computer used to run a cluster node for the RHEV-M must meet RHEL hardware requirements, as well as the more stringent RHEV-M requirements. Refer to the following for more information:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Virtualization/3.1/html-single/Installation_Guide/index.html#sect-Hardware_Requirements

2. Install Red Hat Enterprise Linux 6 Server. On both nodes, install RHEL as described here:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Installation_Guide/index.html

3. Register RHEL. On both nodes, register with RHN and subscribe to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) (rhel-x86_64-server-6) base/parent channel with your Red Hat Network username and password:

# /usr/sbin/rhnreg_ks --serverUrl=https://xmlrpc.rhn.redhat.com/XMLRPC \
  --username=[username] --password=[password]
# rhn-channel -l
rhel-x86_64-server-6

4. Subscribe to RHEL channels. On both nodes, subscribe to the following child channels:

• jbappplatform-6-x86_64-server-6-rpm—JBoss Application Platform (v 6 in rpm)

• rhel-x86_64-server-6-rhevm-3.1—Red Hat Enterprise Virtualization Manager (v. 3.1 for 64-bit AMD64 / Intel64)

• rhel-x86_64-server-ha-6—Red Hat Enterprise Linux Server High Availability (v. 6 for 64-bit AMD64 / Intel64)

• rhel-x86_64-server-supplementary-6—Red Hat Enterprise Linux Server Supplementary Software (v. 6 64-bit AMD64 / Intel64)

To add these child channels, type the following command, replacing username and password with the user and password for your RHN account:

# /usr/sbin/rhn-channel -u [username] -p [password] \
  -c jbappplatform-6-x86_64-server-6-rpm \
  -c rhel-x86_64-server-6-rhevm-3.1 -c rhel-x86_64-server-ha-6 \
  -c rhel-x86_64-server-supplementary-6 -a
# rhn-channel -l
jbappplatform-6-x86_64-server-6-rpm
rhel-x86_64-server-6
rhel-x86_64-server-6-rhevm-3.1
rhel-x86_64-server-ha-6
rhel-x86_64-server-supplementary-6

5. Update packages. On both nodes, to make sure you have the latest RHEL packages, run yum update, then reboot (rebooting is especially important if you get an updated kernel):

# yum update -y
# reboot

6. Get shared block device. Have some form of shared block device (with at least 25G of space), such as Fibre Channel or iSCSI, available to both of the nodes. For our example, we assume a shared iSCSI device that appears as /dev/sdb on both nodes.
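
As a quick sanity check (a hedged aside, not part of the original procedure), you can confirm on each node that the device is present, is the size you expect, and carries the same persistent ID on both nodes, assuming it really does show up as /dev/sdb:

# lsblk /dev/sdb
# ls -l /dev/disk/by-id/ | grep sdb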

7. File configuration management (optional). Steps in this procedure use commands to copy files from one node to other nodes, with the expectation that those files shouldn't change. To ensure that those files remain in sync, however, it is better to place the files and directories under configuration management (such as Puppet, Chef, CFEngine, or Ansible). Then, if a file that is not in a shared directory changes on one node, the configuration management system will either alert you or resolve the issue. If your company runs a Red Hat Satellite server, it can provide that functionality:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Network_Satellite/5.5/html/Reference_Guide/sect-Reference_Guide-Configuration.html
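
If you do not have configuration management available, a lightweight alternative (a sketch only, assuming root ssh access from node1 to node2) is to compare checksums of the copied files between nodes from time to time and investigate any differences; the two files below are just examples from the set synchronized later in this procedure:

# for f in /etc/httpd/conf/httpd.conf /etc/sysconfig/ovirt-engine; do \
    echo "== $f"; md5sum $f; ssh node2 md5sum $f; done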

8. Install HA software. On both nodes, install the "High Availability" group of RPMs:


# yum -y groupinstall "High Availability"

9. Create firewall rules. On both nodes, make sure that the RHEV-M service is protected by enabling the firewall and opening the ports needed for the clustered RHEV-M to work properly. We created a complete firewall file that includes separate rule chains to allow connections to the Network File System (NFS), RHEV Manager (RHEVM), and HA cluster (RHHA) services.

Copy and paste the following rules into the /etc/sysconfig/iptables file on each node:

*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [34:3794]
:NFS - [0:0]
:RHEVM - [0:0]
:RHHA - [0:0]
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -j NFS
-A INPUT -j RHEVM
-A INPUT -j RHHA
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A NFS -p tcp -m state --state NEW -m multiport --dports 111,892,875 -j ACCEPT
-A NFS -p tcp -m state --state NEW -m multiport --dports 662,2049,32803 -j ACCEPT
-A NFS -p udp -m state --state NEW -m multiport --dports 111,892,875,662,32769 -j ACCEPT
-A RHEVM -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A RHEVM -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A RHHA -d 224.0.0.0/4 -p udp -j ACCEPT
-A RHHA -p igmp -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 40040,40042,41040,41966 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 41967,41968,41969,14567 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 16851,11111,21064,50006 -j ACCEPT
-A RHHA -p tcp -m state --state NEW -m multiport --dports 50008,50009,8084 -j ACCEPT
-A RHHA -p udp -m state --state NEW -m multiport --dports 6809,50007,5404,5405 -j ACCEPT
COMMIT

10. Enable firewall. On both nodes, with the firewall rules in place, enable and start the iptables service:

# chkconfig iptables on && service iptables start

11. Start ricci service. On both nodes, start the ricci daemon and configure it to start on boot:

# chkconfig ricci on && service ricci start

12. Change ricci password. On both nodes, set the password for the user ricci.

# passwd ricci
Changing password for user ricci.
New password: ********
Retype new password: ********

Create Shared Filesystems

On node1, create the logical volume and filesystem on the shared block storage using standard LVM2 and file system commands. Replace /dev/sdb with the device name for your shared block storage device.


# fdisk /dev/sdb
...
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-45771, default 1): ENTER
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-45771, default 45771): ENTER
Using default value 45771
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): w

# pvcreate /dev/sdb1
  Writing physical volume data to disk "/dev/sdb1"
  Physical volume "/dev/sdb1" successfully created

# vgcreate RHEVMVolGroup /dev/sdb1
  Volume group "RHEVMVolGroup" successfully created

# for i in lv_share_jasperreports_server_pro lv_share_ovirt_engine_dwh \
    lv_share_ovirt_engine_reports lv_share_ovirt_engine; do lvcreate \
    -L1.00g -n $i RHEVMVolGroup; done
  Logical volume "lv_share_jasperreports_server_pro" created
  Logical volume "lv_share_ovirt_engine_dwh" created
  Logical volume "lv_share_ovirt_engine_reports" created
  Logical volume "lv_share_ovirt_engine" created

# lvcreate -L10.00g -n lv_lib_exports RHEVMVolGroup
  Logical volume "lv_lib_exports" created

# lvcreate -L2.00g -n lv_lib_ovirt_engine RHEVMVolGroup
  Logical volume "lv_lib_ovirt_engine" created

# lvcreate -L5.00g -n lv_lib_pgsql RHEVMVolGroup
  Logical volume "lv_lib_pgsql" created

# for i in $(ls -1 /dev/RHEVMVolGroup/lv_*); do mkfs.ext4 $i; done
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
...
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.

Create Filesystem Mount Points

On both cluster nodes, before adding the filesystem resources, we need to create all mount points needed for the shared LVM volumes we created.


1. Create shared mount points. On both nodes, create the shared mount points as follows:

# for i in /usr/share/jasperreports-server-pro \
    /usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
    /usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
    /var/lib/pgsql; do mkdir -p $i; done

2. Check mount directories. Check that the shared mount point directories exist:

# for i in /usr/share/jasperreports-server-pro \
    /usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
    /usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
    /var/lib/pgsql; do ls -d $i; done
/usr/share/jasperreports-server-pro
/usr/share/ovirt-engine-dwh
/usr/share/ovirt-engine-reports
/usr/share/ovirt-engine
/var/lib/exports
/var/lib/ovirt-engine
/var/lib/pgsql

3. Temporarily mount shared directories. On node1 ONLY, mount the shared directory volumes so that the RHEV-M software you are about to install can be installed on the shared directories.

# mount /dev/mapper/RHEVMVolGroup-lv_share_jasperreports_server_pro \
    /usr/share/jasperreports-server-pro
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_dwh \
    /usr/share/ovirt-engine-dwh
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_reports \
    /usr/share/ovirt-engine-reports
# mount /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine \
    /usr/share/ovirt-engine
# mount /dev/mapper/RHEVMVolGroup-lv_lib_exports /var/lib/exports
# mount /dev/mapper/RHEVMVolGroup-lv_lib_ovirt_engine /var/lib/ovirt-engine
# mount /dev/mapper/RHEVMVolGroup-lv_lib_pgsql /var/lib/pgsql

Install RHEV-M Software

On both nodes, you need to install the rhevm software. On the first node only, you will configure the RHEV-M (rhevm-setup). Additionally, on the second node, you will remove the local directories whose contents will instead come from the shared directories.

1. Install RHEV-M. On both nodes, install rhevm-setup:

# yum -y install rhevm-setup
Loaded plugins: rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Install Process
Resolving Dependencies
--> Running transaction check
---> Package rhevm-setup.noarch 0:3.1.0-32.el6ev will be installed
...
  xom.noarch 0:1.2.7-1._redhat_1.1.ep6.el6.1
  yum-plugin-versionlock.noarch 0:1.1.30-14.el6
  zip.x86_64 0:3.0-1.el6

Complete!

This will pull in a number of dependencies, including JBoss AS, which comprises hundreds of RPMs, so the process may take some time.

2. Verify Java. On both nodes, verify that the Java alternative is pointing to version 1.7:

# stat -c %N /etc/alternatives/java
`/etc/alternatives/java' -> `/usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java'

If this is not the case, run the following:

# alternatives --set java /usr/lib/jvm/jre-1.7.0-openjdk.x86_64/bin/java

3. Remove directories from node2 ONLY. On node2 ONLY, remove the contents of the local directories that will ultimately be replaced by the shared directories created from node1.

# for i in /usr/share/jasperreports-server-pro \
    /usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
    /usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
    /var/lib/pgsql; do rm -rf $i && mkdir -p $i && ls -d $i; done
/usr/share/jasperreports-server-pro
/usr/share/ovirt-engine-dwh
/usr/share/ovirt-engine-reports
/usr/share/ovirt-engine
/var/lib/exports
/var/lib/ovirt-engine
/var/lib/pgsql

4. Make ovirt logs cluster-safe. On both nodes, modify /etc/cron.daily/ovirt-cron to be cluster-safe. Add the following lines AFTER "#!/bin/sh":

if [ ! -d /usr/share/ovirt-engine/lost+found ]; then
    exit 0
fi

As a result, the file should look as follows when you use the head command to display the beginning of the file:

# head /etc/cron.daily/ovirt-cron
#!/bin/sh

if [ ! -d /usr/share/ovirt-engine/lost+found ]; then
    exit 0
fi

#compress log4j log files, delete old ones
/usr/share/ovirt-engine/scripts/ovirtlogrot.sh /var/log/ovirt-engine 480 > /dev/null
EXITVALUE=$?
...


Run rhevm-setup

RHEV Manager is now installed on both machines but is not configured.

1. Run rhevm-setup on node1 ONLY. On node1 ONLY, run the rhevm-setup command.

WARNING: The "Host fully qualified domain name" must be the hostname you use to contact the RHEV-M service (regardless of the node it is running on). It should not be the local hostname of an individual node. This fqdn must map to the IP address you configure for the RHEV-M service when you set up the cluster, later in this procedure.
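
Before running rhevm-setup, it is worth confirming (a hedged check, not in the original procedure) that the shared hostname already resolves to the virtual IP address you plan to give the cluster service, for example:

# getent hosts myrhevm.example.com
192.168.100.3   myrhevm.example.com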

# rhevm-setup
Welcome to RHEV Manager setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine service...
RHEV Manager uses httpd to proxy requests to the application server.
It looks like the httpd installed locally is being actively used.
The installer can override current configuration.
Alternatively you can use JBoss directly (on ports higher than 1024)
Do you wish to override current httpd configuration and restart the service? ['yes'| 'no'] [yes] : yes
HTTP Port  [80] : 80
HTTPS Port  [443] : 443
Host fully qualified domain name. Note: this name should be fully resolvable [node1.example.com] : myrhevm.example.com   <- Use shared hostname!!!
Enter a password for an internal RHEV Manager administrator user (admin@internal) : ********
Confirm password : ********
Organization Name for the Certificate [node1.example.com] : Example.com
The default storage type you will be using ['NFS'| 'FC'| 'ISCSI'| 'POSIXFS'] [NFS] : NFS
Enter DB type for installation ['remote'| 'local'] [local] : local
Enter a password for a local RHEV Manager DB admin user (engine) : ********
Confirm password : ********
Configure NFS share on this server to be used as an ISO Domain? ['yes'| 'no'] [yes] : yes
Local ISO domain path [/var/lib/exports/iso] : /var/lib/exports/iso
Firewall ports need to be opened.
The installer can configure iptables automatically overriding the current configuration. The old configuration will be backed up.
Alternately you can configure the firewall later using an example iptables file found under /etc/ovirt-engine/iptables.example
Configure iptables ? ['yes'| 'no']: no   <- iptables already done manually
RHEV Manager will be installed using the following configuration:
=================================================================
override-httpd-config:         yes
http-port:                     80
https-port:                    443
host-fqdn:                     myrhevm.example.com
auth-pass:                     ********
org-name:                      Example.com
default-dc-type:               NFS
db-remote-install:             local
db-local-pass:                 ********
nfs-mp:                        /var/lib/exports/iso
config-nfs:                    yes
override-iptables:             no
Proceed with the configuration listed above? (yes|no): yes
Installing:
Configuring RHEV Manager...                              [ DONE ]
...
**** Installation completed successfully ******
(Please allow RHEV Manager a few moments to start up.....)
**** To access RHEV Manager browse to http://rhevm.example.com:80
Additional information:
...

2. Add RHEV-M authentication (optional). At this point, you can authenticate to the RHEV-M via a web browser on the real interface (not the floating virtual IP), using the admin account and password you just entered. If you want to configure centralized IPA or Active Directory authentication for your RHEV-M, you can do so with the rhevm-manage-domains command. Here is an example of an IPA configuration (the syntax is the same for adding an Active Directory server):

WARNING: If this command is run later on, the /etc/ovirt-engine/krb5.conf file will need to be synchronized between nodes.

# rhevm-manage-domains -action=add -domain=ipaserver.example.com \
    -user=admin -provider=IPA -interactive
Enter password: ********
# rhevm-manage-domains -action=validate
Domain ipaserver.example.com is valid
Manage Domains completed successfully

3. Set up NFS shared resource for ISO domain (optional). If, during rhevm-setup, 'yes' was selected for the question "Configure NFS share on this server to be used as an ISO Domain?", this step must be completed. If the NFS share will be located on another machine, this step can be skipped.

• Copy /etc/sysconfig/nfs. From node1, copy /etc/sysconfig/nfs to node2:

# scp /etc/sysconfig/nfs node2:/etc/sysconfig/nfs

• Add NFS firewall rules. On both nodes, allow inbound access to the NFS-related ports, as already done in the firewall rules created earlier (the NFS chain in /etc/sysconfig/iptables).

• Copy /etc/exports (optional). Copy /etc/exports from node1 to node2 for consistency:

# scp /etc/exports node2:/etc/exports

Or revert the file to its native state on both nodes (as NFS exporting is handled by RHEL-HA):

# cp /dev/null /etc/exports

• Set SELinux context. On both nodes, set and persist the SELinux context for the export directory:

# semanage fcontext -a -t public_content_rw_t "/var/lib/exports/iso(/.*)?"
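
The semanage rule persists the labeling policy; to apply the new context to anything already present under the export directory, you would typically also run restorecon (an assumption on our part, not shown in the original listing):

# restorecon -R -v /var/lib/exports/iso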

4. Synchronize configuration between nodes. Several configuration files that were modified on node1 during the rhevm-setup process need to be copied to node2. Follow these steps to do that:

• Turn on SELinux boolean. RHEV-M requires Apache HTTP Daemon scripts and modules to connect to the network using TCP. On node2, therefore, you need to turn on the httpd_can_network_connect boolean:

# setsebool -P httpd_can_network_connect 1

• Copy files from node1 to node2. From node1, copy configuration files and keys to node2 as follows (be sure to replace node2 with the hostname of the other computer in your cluster):

WARNING: Whenever any of these configuration files change, they should be re-synced with other nodes in the cluster by running this command again. Changing passwords and certificates or running commands such as rhevm-manage-domains can result in changes to some of these files. See Appendix A for details.

# for i in /etc/httpd/conf.d/ovirt-engine.conf \
    /etc/httpd/conf.d/ssl.conf /etc/httpd/conf/httpd.conf \
    /etc/ovirt-engine/ /etc/pki/ovirt-engine/ \
    /etc/sysconfig/ovirt-engine \
    /etc/yum/pluginconf.d/versionlock.list; do rsync -e ssh -avx $i \
    node2:$i; done

5. Verify RHEV-M is sane. At this point you should be able to test that all is well with RHEV-M running on node1 by using the IP address or hostname of node1 (the virtual IP resource is not yet available). From a web browser, go to the following URL (using your own hostname or IP address):

http://node1.example.com:80

Select the Web Admin Portal and log in as admin. If you can see the RHEVM Web Administration page, you can begin to prepare your nodes to be added to a cluster. If you configured RHEV-M to use external authentication, you can now add a domain user as an administrator from the "Users" tab of the web administrative interface.
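
If you prefer to check from the command line first, a quick hedged test (assuming curl is installed) is to confirm that httpd on node1 answers before you open the portal in a browser:

# curl -kIL http://node1.example.com/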

6. Shut down services on node1. On node1, immediately stop the httpd, ovirt-engine, postgresql, and nfs services:

# for i in httpd ovirt-engine postgresql nfs; do service $i stop; done
Stopping httpd:                                            [  OK  ]
Stopping engine-service:                                   [  OK  ]
Stopping postgresql service:                               [  OK  ]
Shutting down NFS daemon:                                  [  OK  ]
Shutting down NFS mountd:                                  [  OK  ]
Shutting down NFS quotas:                                  [  OK  ]
Shutting down NFS services:                                [  OK  ]

7. Turn off services. Turn off the httpd, ovirt-engine, postgresql, and nfs services so they don't start up automatically on reboot (the cluster is configured later to start these services as needed):

# for i in httpd ovirt-engine postgresql nfs; do chkconfig $i off && chkconfig --list $i; done
httpd           0:off  1:off  2:off  3:off  4:off  5:off  6:off
ovirt-engine    0:off  1:off  2:off  3:off  4:off  5:off  6:off
postgresql      0:off  1:off  2:off  3:off  4:off  5:off  6:off
nfs             0:off  1:off  2:off  3:off  4:off  5:off  6:off


8. Unmount filesystems from node1. To prepare node1 so it can be added to the cluster, unmount the temporarily mounted file systems (the services were already stopped above) so that the cluster manager can handle those tasks:

# umount /usr/share/jasperreports-server-pro
# umount /usr/share/ovirt-engine-dwh
# umount /usr/share/ovirt-engine-reports
# umount /usr/share/ovirt-engine
# umount /var/lib/exports
# umount /var/lib/ovirt-engine
# umount /var/lib/pgsql

At this point, both nodes are configured and the RHEV-M has been verified to work directly on node1. Now that the filesystems are created and software is installed on them, the next step is to prepare the filesystems to become highly available.

Setting Up Highly Available LVM Storage

There are many ways of sharing filesystems. Highly Available LVM (HA-LVM) can be set up to use one of two methods for achieving its mandate of exclusive logical volume activation:

• LVM tagging. The first method uses local machine locking and LVM "tags". This method has the advantage of not requiring any LVM cluster packages; however, there are more steps involved in setting it up and it does not prevent an administrator from mistakenly removing a logical volume from a node in the cluster where it is not active.

• CLVM. The second method uses the Clustered Logical Volume Manager (CLVM), but it will only ever activate the logical volumes exclusively. This has the advantage of easier setup and better prevention of administrative mistakes (like removing a logical volume that is in use). In order to use CLVM, the High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be running.

Only ONE HA-LVM method can be used to set up the HA RHEV-M. The two choices are described below:

Choice #1: Set up HA-LVM failover (tagging method)

To set up HA-LVM failover by using tags in the /etc/lvm/lvm.conf file, perform the following steps:

1. Set locking type. On both nodes, ensure that the parameter "locking_type" in the global section of /etc/lvm/lvm.conf is set to the value "1":

# lvmconf --disable-cluster
# grep '^ locking_type = ' /etc/lvm/lvm.conf
 locking_type = 1

2. Edit volume_list in lvm.conf. Edit the "volume_list" field in /etc/lvm/lvm.conf. Include the name of your root volume group and your hostname, preceded by "@". (The hostname MUST exactly match the name of the local node as it will appear in the /etc/cluster/cluster.conf file you configure later in this procedure when you set up your cluster.) Below are sample entries from /etc/lvm/lvm.conf on node1:

# grep ' volume_list = ' /etc/lvm/lvm.conf
 # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
 volume_list = [ "vg_node1", "@node1.example.com" ]

and /etc/lvm/lvm.conf on node2:

# grep ' volume_list = ' /etc/lvm/lvm.conf
 # volume_list = [ "vg1", "vg2/lvol1", "@tag1", "@*" ]
 volume_list = [ "vg_node2", "@node2.example.com" ]

This tag is used to activate shared volume groups or logical volumes. DO NOT include the names of any volume groups that are to be shared using HA-LVM.

3. Rebuild initrd. On both cluster nodes, to include the changes from lvm.conf in your initial RAM disk, update the initrd. With the latest kernel running on your system, run the following command:

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

4. Reboot nodes. Reboot both cluster nodes to make sure the correct initial RAM disk is being used.

With the cluster nodes prepared, the next step is to set up the cluster on the Highly Available Cluster Manager (luci), then create the resources and the cluster service that allows the RHEV-M service to fail over to different nodes when necessary.

Choice #2: Set up HA-LVM failover (CLVM)

To set up HA-LVM failover by using the CLVM variant (instead of the HA-LVM tagging method described in "Choice #1: Set up HA-LVM failover (tagging method)"), perform the following steps:

1. Identify logical volume group as highly available. On both nodes, change the logical volume group to identify it as a clustered volume group. For example, given the name RHEVMVolGroup, you would type the following:

# vgchange -cy RHEVMVolGroup
  Volume group "RHEVMVolGroup" successfully changed

2. Subscribe to Resilient Storage. Install the Resilient Storage Add-On by subscribing to the rhel-x86_64-server-rs-6 (Red Hat Enterprise Linux Server Resilient Storage (v. 6 for 64-bit AMD64 / Intel64)) child channel:

# /usr/sbin/rhn-channel -u [user] -p [passwd] -c rhel-x86_64-server-rs-6 -a
# rhn-channel -l
jbappplatform-6-x86_64-server-6-rpm
rhel-x86_64-server-6
rhel-x86_64-server-6-rhevm-3.1
rhel-x86_64-server-ha-6
rhel-x86_64-server-rs-6                 <- Resilient Storage channel
rhel-x86_64-server-supplementary-6

3. Install Resilient Storage. On both nodes, install the "Resilient Storage" group of RPMs:

# yum -y groupinstall "Resilient Storage"
Loaded plugins: rhnplugin
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Group Process
Package ccs-0.16.2-55.el6.x86_64 already installed and latest version
Resolving Dependencies
...
Complete!

4. Set locking type. On both nodes, ensure that the parameter "locking_type" in the global section of /etc/lvm/lvm.conf is set to the value "3". You can do that with the lvmconf command as follows:

# lvmconf --enable-cluster
# grep '^ locking_type = ' /etc/lvm/lvm.conf
 locking_type = 3

5. Rebuild initrd. To include the changes from lvm.conf in your initial RAM disk, update the initrd on all your cluster nodes. With the latest kernel running on your system, run the following command:

# dracut -H -f /boot/initramfs-$(uname -r).img $(uname -r)

6. Reboot nodes. Reboot both cluster nodes to make sure the correct initial RAM disk is being used.

7. Start clvmd. On both nodes, start the clvmd daemon and configure it to start on boot:

# chkconfig clvmd on && service clvmd start

8. Deactivate the shared logical volumes. On only one node, deactivate the logical volumes in the clustered volume group so that the cluster can activate them exclusively when the service starts. For example, given the volume group name RHEVMVolGroup, you would type the following:

# for i in $(ls -1 /dev/RHEVMVolGroup/lv_*); do lvchange -an $i; done

CONFIGURE CLUSTER FROM HA MANAGER (LUCI)

If you don't already have an existing RHEL 6 luci install, install the cluster manager (luci) on a RHEL system other than your cluster nodes. Then begin configuring the cluster as described below:

1. Install Red Hat Enterprise Linux 6 Server. On the machine that will run luci, install RHEL as described here:
https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Installation_Guide/index.html

2. Register RHEL. On the luci machine, register with RHN and subscribe to the Red Hat Enterprise Linux Server (v. 6 for 64-bit x86_64) (rhel-x86_64-server-6) base/parent channel:

# /usr/sbin/rhnreg_ks --serverUrl=https://xmlrpc.rhn.redhat.com/XMLRPC \
  --username=[username] --password=[password]
# rhn-channel -l
rhel-x86_64-server-6

3. Subscribe to the RHEL HA channel. On the luci machine, subscribe to the following child channel:

• rhel-x86_64-server-ha-6—Red Hat Enterprise Linux Server High Availability (v. 6 for 64-bit AMD64 / Intel64)

To add this child channel, type the following command, replacing username and password with the user and password for your RHN account:

# /usr/sbin/rhn-channel -u [username] -p [password] \
  -c rhel-x86_64-server-ha-6 -a
# rhn-channel -l
rhel-x86_64-server-6
rhel-x86_64-server-ha-6

4. Install luci. Install the luci RPMs:

# yum -y install luci

5. Start luci. Start the luci daemon:

# chkconfig luci on
# service luci start
Adding following auto-detected host IDs (IP addresses/domain names),
corresponding to `luci.example.com' address, to the configuration of
self-managed certificate `/var/lib/luci/etc/cacert.config' (you can
change them by editing `/var/lib/luci/etc/cacert.config', removing the
generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
  (none suitable found, you can still do it manually as mentioned above)
Generating a 2048 bit RSA private key
Writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://luci.example.com:8084 (or equivalent) to access luci

6. Log in to luci. As instructed by the start-up script, point your web browser to the address shown (https://luci.example.com:8084 in this example) and log in as the root user when prompted.

Create the Cluster in luci

Once you are logged into luci, you need to create a cluster and add the two cluster nodes to it.

1. Name the cluster. Select Manage Clusters -> Create, then fill in the Cluster Name (for example, RHEVMCluster).

2. Identify cluster nodes. Fill in the Node Name (fully-qualified domain name or name in /etc/hosts) and Password (the password for the user ricci) for the first cluster node. Click the Add Another Node button and add the same information for the second cluster node. (Repeat if you had decided to create more than two nodes.)

3. Add cluster options. Select the following options, then click the Create Cluster button:

• Use the Same Password for All Nodes: Select this check box.

• Download Packages: Select this radio button.

• Reboot Nodes Before Joining Cluster: Select this check box.

• Enable Shared Storage Support: Leave this unchecked.

After you click the Create Cluster button, if the nodes can be contacted, luci will set up each cluster node, downloading packages as needed, and add each node to the cluster. When each node is set up, the High Availability Management screen appears.
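
If you prefer the command line, roughly the same cluster skeleton can be created with the ccs utility (a hedged sketch; it assumes ricci is running and reachable on both nodes and reuses the example names from this document, and ccs prompts for the ricci password the first time it contacts each node):

# ccs -h node1.example.com --createcluster RHEVMCluster
# ccs -h node1.example.com --addnode node1.example.com
# ccs -h node1.example.com --addnode node2.example.com
# ccs -h node1.example.com --sync --activate

The remaining steps in this tech brief assume you continue working in luci.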

4. Create failover domain. Click the Failover Domains tab. Click the Add button and fill in the following information as prompted:

• Name. Fill in any name you like (such as prefer_node1).

• Prioritized. Check this box.

• Restricted. Check this box.

• Member. Click the Member box for each node.

• Priority. Under the Priority column, add a "1" for node1 and a "2" for node2.

Click Create to apply the changes to the failover domain.

5. Add fence devices. Configure appropriate fence devices for the hardware you have. Add a fence device and an instance for each node. These settings will be particular to your hardware and software configuration. Refer to the Cluster Administration Guide and the Fence Device article for help with configuring fence devices (a command-line sketch follows these links):

• Cluster Administration Guide: https://access.redhat.com/knowledge/docs/en-US/Red_Hat_Enterprise_Linux/6/html-single/Cluster_Administration/

• Fence Device Article: https://access.redhat.com/knowledge/articles/28603
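
As an illustration only (the agent name and every parameter below are placeholders for whatever management hardware you actually have, IPMI in this sketch), fencing can also be defined from the command line with ccs:

# ccs -h node1.example.com --addfencedev node1_ipmi agent=fence_ipmilan \
    ipaddr=[mgmt-ip] login=[user] passwd=[password]
# ccs -h node1.example.com --addmethod primary node1.example.com
# ccs -h node1.example.com --addfenceinst node1_ipmi node1.example.com primary
# ccs -h node1.example.com --sync --activate

Repeat the first three commands for node2 with its own fence device, then sync again.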

Add Resources for the Cluster

With the cluster created, next create several resources to put in the new cluster service (named rhevm). These include an IP address for the service, shared file systems, and other resources.

Add IP Address Resource

1. Add an IP address resource. Select the Resources tab, then click Add and choose IP Address.

2. Fill in IP address information. Enter the following:

• IP Address. Fill in a valid IP address. Ultimately, this IP address (192.168.100.3 in our example) is used from a web browser to access the RHEV-M (for example, https://192.168.100.3).

• Monitor Link. Check this box.

3. Submit information. Click the Submit button.

Create HA LVM Resource

From the High Availability Management (luci) web interface, create an HA LVM resource for the shared volume group created in the "Setting Up Highly Available LVM Storage" section. Start by selecting the cluster (RHEVMCluster), then do the following:

1. Add HA LVM resource. Click on Resources, then click Add and select HA LVM.

2. Fill in HA LVM information. Enter the following:

• Name. Fill in RHEVM HA LVM.

• Volume Group Name. Fill in RHEVMVolGroup.

• Logical Volume Name. Leave this blank.

3. Submit. Press the "Submit" button.

Create Shared Logical Volume Resources

From the luci web interface, add seven file system resources using the values in Table 1 below:

WARNING: It is critical that you get all the mount point names exactly as shown! For any of the mount point directories that are not shared (because you typed the name wrong), the files in that directory will be installed only on the local disk of the first node and will not be shared. A service may work on one node, but fail to run on another. Double-check all mount point names!

Table 1 - Filesystem Resource Information

• Name: lv_share_jasperreports_server_pro
  File System Type: ext4
  Mount Point: /usr/share/jasperreports-server-pro
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_jasperreports_server_pro

• Name: lv_share_ovirt_engine_dwh
  File System Type: ext4
  Mount Point: /usr/share/ovirt-engine-dwh
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_dwh

• Name: lv_share_ovirt_engine_reports
  File System Type: ext4
  Mount Point: /usr/share/ovirt-engine-reports
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_reports

• Name: lv_share_ovirt_engine
  File System Type: ext4
  Mount Point: /usr/share/ovirt-engine
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine

• Name: lv_lib_exports
  File System Type: ext4
  Mount Point: /var/lib/exports
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_exports

• Name: lv_lib_ovirt_engine
  File System Type: ext4
  Mount Point: /var/lib/ovirt-engine
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_ovirt_engine

• Name: lv_lib_pgsql
  File System Type: ext4
  Mount Point: /var/lib/pgsql
  Device, FS label, or UUID: /dev/mapper/RHEVMVolGroup-lv_lib_pgsql

1. Add Filesystem resource. Click on Resources, then click Add and select Filesystem.

2. Fill in Filesystem information. Enter the following (if you can, copy and paste from table):

• Name. Fill in the Name from Table 1.

• Filesystem Type. Fill in the Filesystem Type from Table 1.

• Mount Point. Fill in the Mount Point from Table 1. (This value is critical!)

• Device, FS label, or UUID. Fill in the Device, FS label, or UUID from Table 1.

• Mount options. Leave blank.

• Filesystem ID (optional). Leave blank.

• Reboot host node if unmount fails. Check this box.

3. Submit. Press the "Submit" button.

4. Repeat until all filesystems from Table 1 are created.
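
For orientation only (a hedged illustration of the kind of entry luci writes, not something to enter by hand; see Appendix C for the full listing), each filesystem resource ends up in /etc/cluster/cluster.conf looking roughly like this, with self_fence corresponding to the "Reboot host node if unmount fails" check box:

<fs device="/dev/mapper/RHEVMVolGroup-lv_lib_pgsql" fstype="ext4"
    mountpoint="/var/lib/pgsql" name="lv_lib_pgsql" self_fence="on"/>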

Create NFS Resources (optional)

From the luci web interface, select the two-node cluster, then add the NFS resources that represent your ISO domain.

1. Add NFS Client resource. Click on Resources, then click Add and select NFS Client.

2. Fill in NFS Client information. Enter the following:

• Name. Enter rhev iso clients

• Target Hostname, Wildcard, or Netgroup. Enter 0.0.0.0

WARNING: In a production environment, 0.0.0.0 should NOT be used. The hosts you allow here should be restricted to the smallest group of systems/networks possible.

• Allow Recovery of This NFS Client. Check this box.

• Options. Enter rw

• Submit. Press the "Submit" button.

3. Add NFS v3 Export resource. Click on Resources, then click Add and select NFS v3 Export.

4. Fill in NFS v3 Export information. Enter the following:

• Name. Enter rhev iso exports

• Submit. Press the "Submit" button.

Create Resources for Services Started by rhevm

From the luci web interface, select the two-node cluster, then add resources for the several services that need to run as part of the RHEV-M service.

Add postgresql Service Script

Create a resource for the postgresql service:

1. Add Script resource for postgresql. Click on Resources, then click Add and select Script.

2. Fill in Script information. Enter the following:

• Name. Fill in postgresql

• Full path to script file. Fill in /etc/rc.d/init.d/postgresql

3. Submit. Press the "Submit" button.

Add JBoss AS oVirt Engine Service Script

Create a resource for the ovirt-engine service:

1. Add Script resource for ovirt-engine. Click on Resources, then click Add and select Script.

2. Fill in Script information. Enter the following:

• Name. Fill in ovirt-engine

• Full path to script file. Fill in /etc/rc.d/init.d/ovirt-engine

3. Submit. Press the "Submit" button.

Add Apache HTTP Daemon Service Script

Create a resource for the Apache service:

1. Add Apache resource for httpd. Click on Resources, then click Add and select Apache.

2. Fill in Apache information. Enter the following:

• Name. Fill in httpd

• Shutdown Wait (seconds). Change to something that works for your environment, such as 5

3. Leave all other options as they are.


4. Submit. Press the "Submit" button.

Create rhevm Cluster Service and Add Resources

Next, create the rhevm service and add each of the resources (created earlier) to the rhevm service. From luci, with RHEVMCluster still selected, add the new rhevm service as follows:

NOTE: After you click the Submit button (assuming you selected Automatically Start this Service as described below) the rhevm service will try to start. We have you click the Submit button after each step so you can see if that step causes an error. If an error occurs, recheck that the offending resource is correct or go to the "Trying the RHEV-M Cluster Service" section for troubleshooting tips.

1. Add a Service Group. Click on the Service Groups tab and select Add.

2. Fill in Service Group information.

• Service name. Assign a name to the service (for example, rhevm).

• Automatically start this service. Check this box.

• Failover Domain. Select the prefer_node1 you created earlier.

• Recovery Policy. Select Relocate.

3. Submit. Press the "Submit" button.

Add Resources to rhevm Cluster Service

1. Add the IP address resource.

• Select the rhevm Service Group. Click on "Service Groups" and select rhevm.

• Select the Add Resource button at the bottom of the screen and select the IP Address resource you created earlier.

• Submit. Press the "Submit" button.

2. Add RHEVM HA LVM resource.

• Select the rhevm Service Group. Click on "Service Groups" and select rhevm.

• Select the Add Resource button at the bottom of the screen and select "RHEVM HA LVM".

• Submit. Press the "Submit" button

3. Add Filesystem resources. Add all seven filesystem resources (represented by the Name field in Table 1) to the rhevm service group:

• Select the rhevm Service Group. Click on "Service Groups" and select rhevm.

• Select the Add Resource button at the bottom of the screen and select the Name of the first Filesystem resource (see Table 1).

• Repeat. Repeat the bullet items until all seven filesystem resources are added.

• Submit. Press the "Submit" button

4. Add Service resources. Add the service resources to the "rhevm" service group. For "Name", use each of these service resource names: postgresql, ovirt-engine, and httpd:

• Select the rhevm Service Group. Click on "Service Groups" and select rhevm.

• Select the Add Resource button and select the Name of the first services resource.

• Repeat. Repeat these bullets until all three resources are added.

• Submit. Press the "Submit" button.

5. Add RHEV ISO Exports and Clients resources (optional).


• Select the rhevm Service Group. Click on "Service Groups" and select rhevm.

• Find lv_lib_exports. From the resources on the rhevm service page, find the "lv_lib_exports" Filesystem resource and select Add Child Resource inside that block. (In other words, you want to make the new resource dependent on lv_lib_exports.)

• Choose Select a Resource Type and select the "rhev iso exports" entry.

• Find NFS v3 Export. At the bottom of the NFS v3 Export resource you just added, select Add Child Resource inside that block.

• Choose Select a Resource Type and select the "rhev iso clients" entry.

• Submit. Press the "Submit" button.

At this point, you should be able to test that the basic rhevm service is running on the cluster. We recommend that you try:

• Accessing the RHEV-M from your web browser

• Moving the rhevm service to another node

• Then trying to access the RHEV-M again, as described in the next section

After that, you can add the remaining resources (oVirt Event Notifier, Data Warehouse, and Reports) to your RHEV-M.

TRYING THE RHEV-M CLUSTER SERVICE

Assuming you set the rhevm service to start automatically, you should be able to access the RHEV-M from your web browser, then test that it is still accessible when you move it to a different node.

1. Access the RHEV-M. From a web browser, open the Red Hat Enterprise Virtualization Manager (RHEV-M) using the hostname or IP address you used to identify the service (not the direct name or IP address of a node):

http://myrhevm.example.com

2. Log in to the RHEV-M. Select the Web Admin Portal and, when prompted, log in to the RHEV-M. If you can successfully log in, you can proceed to testing the cluster.

3. Check where the rhevm service is running. From a shell on either cluster node, check where the rhevm service is currently active:

# clustat -s rhevm
 Service Name                     Owner (Last)                     State
 ------- ----                     ----- ------                     -----
 service:rhevm                    node1.example.com                started

4. Move rhevm to a different cluster node. From either node, relocate the rhevm service to another cluster node:

# clusvcadm -r rhevm
Trying to relocate service:rhevm...Success
service:rhevm is now running on node2.example.com

5. Log in to the RHEV-M again. Try again to log in to the RHEV-M from a web browser. If it works, then the service was able to successfully relocate to another node.


If the rhevm service appears to be working, continue on to configuring additional RHEV-M services.

SET UP ADDITIONAL RHEV-M SERVICES

Besides the basic service, you can also set up additional RHEV-M services and add them to the cluster. These services include the oVirt Event Notification service, Data Warehousing for oVirt, and Reports.

Set Up oVirt Event Notification Service (Optional)

On both nodes, do the following:

• Modify /etc/ovirt-engine/notifier/notifier.conf to change "MAIL_SERVER=" as follows (replacing localhost with a valid SMTP server if localhost is not configured as an MTA):

MAIL_SERVER=localhost

On luci, do the following:

1. Add the resource. Click on Resources and click on Add.

2. Add Script. Click Select a Resource Type and select Script. Then fill in the following information

• Name. Use engine-notifierd.

• Full Path to Script File. Use /etc/rc.d/init.d/engine-notifierd

3. Submit. Press the "Submit" button.

4. Add to rhevm service. Click on Service Groups.

5. Click on rhevm.

6. Near the bottom of the screen, select the Add Resource button.

7. Click Select a Resource Type and select the engine-notifierd resource.

8. Submit. Press the "Submit" button.

Test the engine-notifierd Service

Do the following to test the engine-notifierd service:

1. Select some events to be notified about, using a valid email address.

2. Trigger those events (for example, by moving a host into and out of maintenance mode).

3. Check that you receive the emails. (Note: unless you configure otherwise, the email will appear to come from the actual node that generated the alert, not the virtual IP. This is reasonable behavior, but postfix/sendmail can be configured to send from the virtual IP as well.)

4. Relocate the rhevm service group, either through luci or with clusvcadm.

5. Once the service has completely moved to the other node, trigger the same events from step 2.

6. Again, check that you receive the mail.
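
To see which node should be producing the mail at any point in this test, a quick hedged check (using the init script name configured above) is:

# clustat -s rhevm
# service engine-notifierd status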

Set Up Data Warehouse for oVirt

On both nodes, do the following:


1. Freeze the "rhevm" service group:

# clusvcadm -Z rhevm
Local machine freezing service:rhevm...Success

2. On both nodes, install the rhevm-dwh RPM:

# yum -y install rhevm-dwh

3. On both nodes, modify /etc/cron.hourly/ovirt_engine_dwh_watchdog.cron to be cluster-safe. Add the following lines AFTER #!/bin/bash:

if [ ! -d /usr/share/ovirt-engine-dwh/lost+found ]; then
    exit 0
fi

so that it looks like:

#!/bin/bash
#

if [ ! -d /usr/share/ovirt-engine-dwh/lost+found ]; then
    exit 0
fi

# ETL functions library.
. /usr/share/ovirt-engine-dwh/etl/etl-common-functions.sh

4. On the node that is NOT running the rhevm service group, remove files that are served by our HA LVM:

# /bin/rm -r /usr/share/ovirt-engine-dwh/*

5. On the node that IS running the rhevm service group, run the oVirt engine DWH setup script:

# rhevm-dwh-setup
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine...                                 [ DONE ]
Setting DB connectivity...                               [ DONE ]
Upgrade DB...                                            [ DONE ]
Starting ovirt-engine...                                 [ DONE ]
Starting oVirt-ETL...                                    [ DONE ]
Successfully installed rhevm-dwh.
The installation log file is available at: /var/log/ovirt-engine/rhevm-dwh-setup-CCYY_MM_DD_hh_mm_ss.log

NOTE: If there is an ERROR while Starting oVirt-ETL, modify the "RUN_PROPERTIES" variable of the /usr/share/ovirt-engine-dwh/etl/history_service.sh file:

#RUN_PROPERTIES="-Xms256M -Xmx1024M -Djavax.net.ssl.trustStore=/etc/pki/ovirt-engine/.keystore -Djavax.net.ssl.trustStorePassword=mypass"
RUN_PROPERTIES="-Djsse.enableSNIExtension=false -Xms256M -Xmx1024M -Djavax.net.ssl.trustStore=/etc/pki/ovirt-engine/.keystore -Djavax.net.ssl.trustStorePassword=mypass"


Please see: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7127374 for more information. After making the modification, run this step again.

6. On the node that IS running the rhevm service group, copy configuration files from that node to the other node (for example, from node1 to node2; be sure to change node2 in the command below to the name of your other cluster node):

# for i in /etc/sysconfig/ovirt-engine \
    /etc/ovirt-engine/ovirt-engine-dwh/Default.properties; do \
    rsync -e ssh -avx $i node2:$i; done

7. On both nodes, disable the ovirt-engine-dwhd service from starting automatically:

# chkconfig ovirt-engine-dwhd off
# chkconfig --list ovirt-engine-dwhd
ovirt-engine-dwhd   0:off  1:off  2:off  3:off  4:off  5:off  6:off

On luci, do the following:

1. Add the resource. Click on Resources and click on Add.

2. Add Script. Click Select a Resource Type and select Script. Then fill in the following information:

• Name. Use ovirt-engine-dwhd

• Full Path to Script File. Use /etc/rc.d/init.d/ovirt-engine-dwhd

3. Submit. Press the "Submit" button.

4. Add to rhevm service. Click on Service Groups and click on rhevm.

5. Add ovirt-engine-dwhd as child to httpd. Find the httpd resource and select the Add Child

Resource button.

6. Click Select a Resource Type and select the ovirt-engine-dwhd resource.

7. Submit. Press the "Submit" button.

8. Unfreeze rhevm. On the node where rhevm is frozen, unfreeze the rhevm service group:

# clusvcadm -U rhevmLocal machine unfreezing service:rhevm...Success

9. Relocate rhevm. On the node where rhevm is running, relocate the rhevm service group:

# clusvcadm -r rhevm

10. Check ovirt-engine-dwhd. On the node running the rhevm service, check the service is running:

# service ovirt-engine-dwhd status

Set up Reports for oVirt

1. Install rhevm-reports. On both nodes, with the rhevm service group frozen (freeze it again with clusvcadm -Z rhevm if you unfroze it at the end of the previous section), install the rhevm-reports RPM; this also pulls in a few other dependencies:

# yum -y install rhevm-reports


2. On the node that is NOT running the rhevm service group, remove files that are served by our

HA LVM:

# /bin/rm -r /usr/share/jasperreports-server-pro/* \
    /usr/share/ovirt-engine-reports/*

3. On the node that IS running the rhevm service group, execute rhevm-reports-setup:

# rhevm-reports-setup
Welcome to ovirt-engine-reports setup utility
In order to proceed the installer must stop the ovirt-engine service
Would you like to stop the ovirt-engine service? (yes|no): yes
Stopping ovirt-engine...                               [ DONE ]
Please choose a password for the admin users (rhevm-admin and superuser): *********
Re-type password: *********
...
Successfully installed ovirt-engine-reports.
The installation log file is available at:
/var/log/ovirt-engine/ovirt-engine-reports-setup-CCYY_MM_DD_hh_mm_ss.log

4. On either node, unfreeze the rhevm service group:

# clusvcadm -U rhevm
Local machine unfreezing service:rhevm...Success

5. Verify reporting. Log in to the Reports Portal interface and verify that basic reporting works.

6. On the node that IS running the rhevm service group, copy configuration files from that node to the other node (for example, from node1 to node2):

# for i in /etc/sysconfig/ovirt-engine \
    /etc/ovirt-engine/jrs-deployment.version; do \
    rsync -e ssh -avx $i node2:$i; done

7. Relocate rhevm. On either node, relocate the rhevm service group:

# clusvcadm -r rhevm

8. Check Reports Portal. Log in to the Reports Portal again and verify that basic reporting works.

At this point, your RHEV-M 3.1 HA cluster is complete. Figure 2 shows the resources that were created in luci before being added to the rhevm service group:


Figure 2: Resources created for the rhevm Service Group

APPENDIX A: CHANGING THE RHEV-M CLUSTER

Most changes to your RHEV-M configuration are stored in shared directories. So, when another node takes over the cluster, it automatically gets all the latest data.

There are, however, a few ways you might change your RHEV-M that are not automatically propagated to

other nodes. In those cases, you need to either use file config management tools (such as Puppet, as

mentioned earlier) or manually copy files from the node where files were changed to the other nodes. Here

are some examples:

Changing RHEV-M's postgres User Passwords

To change passwords for your postgres database users, refer to the following article:

https://access.redhat.com/site/solutions/277213

Assuming you are changing postgres user passwords on node1, you need to run the following commands to

make the necessary changes and get the nodes in sync:

1. Freeze the rhevm service. From node1, type the following as root:

# clusvcadm -Z rhevm

2. Change postgres password. On node1 (assuming the rhevm service is frozen there), run the

procedure from this article: https://access.redhat.com/site/solutions/277213.

3. Copy files from node1 to node2. Assuming rhevm is frozen on node1, run the following command (substituting the hostname of your second node for node2):


# for i in /etc/sysconfig/ovirt-engine \
    /etc/ovirt-engine/.pgpass \
    /etc/ovirt-engine/ovirt-engine-dwh/Default.properties; do \
    rsync -e ssh -avx $i node2:$i; done

4. Unfreeze and relocate rhevm. On either node, unfreeze and relocate the rhevm service group:

# clusvcadm -U rhevm
# clusvcadm -r rhevm

If the rhevm service relocates successfully, you are done. If a service fails, make sure your postgres user

passwords in the files just copied match those in the database.
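
One quick way to check the copied credentials (a sketch only, not part of the original procedure, assuming the default engine database and database user names) is to review the .pgpass entries, which use the format hostname:port:database:username:password, and then test a connection with that file:

# cat /etc/ovirt-engine/.pgpass
# PGPASSFILE=/etc/ovirt-engine/.pgpass psql -h localhost -U engine -d engine -c "SELECT 1;"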

Changing the RHEV-M's FQDN

To change the fully-qualified domain name (FQDN) of the RHEV-M, you need to back up the RHEV-M

database, reinstall the RHEV-M, set up the RHEV-M (changing the FQDN at that time), then restore the

database. Red Hat Support has a procedure for doing this on a single RHEV-M that has not yet been tested

in a cluster. Please contact Red Hat Support if you need to do this and ask about solution 342103.

Replacing Certificates

If you need to replace the certificates used when users access the RHEV-M through an https connection via

a Web browser, you can do that as follows:

1. Freeze the rhevm service. From node1 (or whatever node is currently running the rhevm service),

type the following as root:

# clusvcadm -Z rhevm

2. Create keys and certificates. Create a server private key, server certificate, and server certificate

chain files. The following article describes how to do this for RHEV 3.0, including how to make

requests for certificates from a Certificate Authority:

https://access.redhat.com/site/articles/216903

When the RHEV 3.1 version of this document is done, it will be available from this same link.
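
As a rough illustration of what step 2 involves (a sketch only; follow the referenced article for the authoritative procedure, and note that the file names below simply follow the my- naming convention used later in this example, with the CN assumed to be the cluster's virtual FQDN), you might generate a private key and certificate signing request with openssl:

# openssl genrsa -out /etc/pki/ovirt-engine/keys/my-engine_id_rsa 2048
# openssl req -new -key /etc/pki/ovirt-engine/keys/my-engine_id_rsa \
    -out /tmp/my-engine.csr -subj "/CN=rhevm.example.com"

The signed certificate and CA chain returned by your Certificate Authority then become the my-engine.cer and my-ca.pem files used in the following steps.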

3. Copy certificate files. On node1, place the new server certificate, server private key, and server certificate chain files into the /etc/pki/ovirt-engine/certs/, /etc/pki/ovirt-engine/keys/, and /etc/pki/ovirt-engine/ directories, respectively. Use new names for those files (for this example, my- is placed in front of each file name).

4. Edit the ssl.conf file. On node1, change three lines in /etc/httpd/conf.d/ssl.conf to reflect the locations of your new certificate, private key, and chain files. Here are examples:

SSLCertificateFile /etc/pki/ovirt-engine/certs/my-engine.cer
SSLCertificateKeyFile /etc/pki/ovirt-engine/keys/my-engine_id_rsa
SSLCertificateChainFile /etc/pki/ovirt-engine/my-ca.pem

5. Restart the httpd service. On node1, restart the apache Web server.

# service httpd restart


6. Test access to the RHEV-M. Access the RHEV-M from a Web browser using the https port (443).

For example, https://rhevm.example.com:443. Accept and examine the certificate you are

presented with.

7. Stop the httpd service. From node1, stop the httpd service as follows:

# service httpd stop

8. Copy files from node1 to node2. Assuming rhevm is still frozen on node1, run the following command. Substitute the hostname of your second node for node2 and change the names of the certificate and key files to match the names you used:

# for i in /etc/pki/ovirt-engine/certs/my-engine.cer \
    /etc/httpd/conf.d/ssl.conf \
    /etc/pki/ovirt-engine/keys/my-engine_id_rsa \
    /etc/pki/ovirt-engine/my-ca.pem; do \
    rsync -e ssh -avx $i node2:$i; done

9. Unfreeze and relocate rhevm. On either node, unfreeze and relocate the rhevm service group:

# clusvcadm -U rhevm
# clusvcadm -r rhevm

10. Retest access to the RHEV-M. Make sure that the service successfully moves to the other node

and that you can access the RHEV-M from a Web browser again.

Configuring rhevm-* tools

Configuration files for rhevm-log-collector, rhevm-iso-uploader, and other rhevm-* commands can be set

manually to store information such as user name, password, and hostname for the RHEV-M. If you do

change any of these files on an active node, be sure to copy those files to the other nodes in the cluster.

Examples of these files include:

• /etc/ovirt-engine/logcollector.conf

• /etc/ovirt-engine/isouploader.conf

• /etc/ovirt-engine/imageuploader.conf
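
For example, to push all three files to the other node, you could reuse the rsync pattern used elsewhere in this document (substituting the hostname of your other cluster node for node2):

# for i in /etc/ovirt-engine/logcollector.conf \
    /etc/ovirt-engine/isouploader.conf \
    /etc/ovirt-engine/imageuploader.conf; do \
    rsync -e ssh -avx $i node2:$i; done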

APPENDIX B: UPDATING THE RHEV-M CLUSTER

Assuming that you have an HA RHEV Manager 3.1 as described in this document, this procedure describes how to update to new RHEV-M 3.1 packages as they become available. This guide does NOT cover the update of the cluster client-side code. Generally speaking, those RPMs can be replaced in place while the cluster is running; however, it is always advisable to perform that update with the cluster stopped.

Set up yum repositories

There should be no reason to modify yum repositories. If you need to, then you are likely doing an upgrade

(going to a new release of RHEV) and not an update (which simply updates packages within the same

RHEV version).
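
As an optional sanity check (not part of the original procedure), you can confirm which repositories are enabled on each node before updating:

# yum repolist enabled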

Update RHEV-M

1. Find rhevm node. Find which node is running the rhevm service group and freeze the service:


# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started
# clusvcadm -Z rhevm
Local machine freezing service:rhevm...Success

2. Update packages. On the node that IS currently running the rhevm service group, run the yum

command to update the rhevm-setup package:

# yum -y update rhevm-setup
Loaded plugins: rhnplugin, versionlock
This system is receiving updates from RHN Classic or RHN Satellite.
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package rhevm-setup.x86_64 0:3.0.8_0001-1.el6_3 will be updated
---> Package rhevm-setup.noarch 0:3.1.0-32.el6ev will be an update
...
Dependency Installed:
  python-cheetah.x86_64 0:2.4.1-1.el6
  python-markdown.noarch 0:2.0.1-3.1.el6
  python-pygments.noarch 0:1.1.1-1.el6
  python-setuptools.noarch 0:0.6.10-3.el6
Updated:
  rhevm-setup.noarch 0:3.1.0-32.el6ev
Complete!

3. Run upgrade. On the system that is running the rhevm service group, run rhevm-upgrade:

# rhevm-upgrade
Loaded plugins: rhnplugin, versionlock
Checking for updates... (This may take several minutes)
Stopping JBoss Service...                              [ DONE ]
Backing Up RHEVM DB...                                 [ DONE ]
Updating rpms...                                       [ DONE ]
Updating RHEVM DB...                                   [ DONE ]
Running post install configuration...                  [ DONE ]
Starting JBoss...                                      [ DONE ]
RHEV Manager upgrade finished successfully!
Upgrade log available at /var/log/rhevm/rhevm-upgrade_2011_09_16_14_06_10.log
DB Backup available at /usr/share/rhevm/db-backups/tmpoboMgU.sql

4. Unfreeze rhevm. Unfreeze the rhevm service group:

# clusvcadm -U rhevm
Local machine unfreezing service:rhevm...Success
# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started

5. Verify RHEV-M is sane. At this point, you should be able to confirm that all is well with RHEV-M running on this node by connecting through the virtual IP resource.
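
For example, assuming rhevm.example.com resolves to the cluster's virtual IP, a quick check from any machine (a sketch, not part of the original procedure) is to request the login page over HTTPS and confirm that the web server answers:

# curl -k -I https://rhevm.example.com/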

6. Copy rpm list. On the node that is running the rhevm service group, collect the full list of RPMs

and copy it to the other node:


# rpm -qa | sort | tee node1-rpms.txt
# scp node1-rpms.txt node2:/root/

7. Install rpms on extra node(s). On the node that is NOT running the rhevm service group, collect that node's full list of RPMs, then update it with any packages from node1's list that it does not have:

# rpm -qa | sort | tee node2-rpms.txt
# yum -y update $(comm -23 node1-rpms.txt node2-rpms.txt)

8. Remove files on extra node(s). On the node that is NOT running the rhevm service group,

remove the contents of the shared filesystem directories:

# clustat -s rhevm
 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:rhevm                  node1.example.com              started

# for i in /usr/share/jasperreports-server-pro \
    /usr/share/ovirt-engine-dwh /usr/share/ovirt-engine-reports \
    /usr/share/ovirt-engine /var/lib/exports /var/lib/ovirt-engine \
    /var/lib/pgsql; do rm -rf $i && mkdir -p $i && ls -d $i; done
/usr/share/jasperreports-server-pro
/usr/share/ovirt-engine-dwh
/usr/share/ovirt-engine-reports
/usr/share/ovirt-engine
/var/lib/exports
/var/lib/ovirt-engine
/var/lib/pgsql

9. Copy files. Copy specific files from node1 to node2:

# for i in /etc/httpd/conf.d/ovirt-engine.conf \
    /etc/httpd/conf.d/ssl.conf /etc/httpd/conf/httpd.conf \
    /etc/ovirt-engine/ /etc/pki/ovirt-engine/ \
    /etc/sysconfig/ovirt-engine \
    /etc/yum/pluginconf.d/versionlock.list; do \
    rsync -e ssh -avx $i node2:$i; done

10. Relocate rhevm. Relocate the "rhevm" service group:

# clusvcadm -r rhevm
Trying to relocate service:rhevm...Success
service:rhevm is now running on node2.example.com

11. Verify RHEV-M is sane. At this point, you should be able to confirm that all is well with RHEV-M running on the new node by using the virtual IP resource from a browser to log in to the RHEV-M.

APPENDIX C: SAMPLE CLUSTER.CONF FILE

The /etc/cluster/cluster.conf file shown below reflects the cluster created from the procedure you just

finished. It is good to become familiar with this file and its contents if you need to debug your RHEV-M setup.

<?xml version="1.0"?>
<cluster config_version="28" name="RHEVMCluster">
  <clusternodes>
    <clusternode name="node1.example.com" nodeid="1">
      <fence>
        <method name="0">
          <device name="fence-dev.example.com" port="node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="node2.example.com" nodeid="2">
      <fence>
        <method name="0">
          <device name="fence-dev.example.com" port="node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman expected_votes="1" two_node="1"/>
  <rm>
    <failoverdomains>
      <failoverdomain name="prefer_node1" ordered="1" restricted="1">
        <failoverdomainnode name="node1.example.com" priority="1"/>
        <failoverdomainnode name="node2.example.com" priority="2"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="192.168.100.3" sleeptime="10"/>
      <lvm name="RHEVMHALVM" vg_name="RHEVMVolGroup"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_share_jasperreports_server_pro" fsid="52851" fstype="ext4" mountpoint="/usr/share/jasperreports-server-pro" name="lv_share_jasperreports_server_pro" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_dwh" fsid="17763" fstype="ext4" mountpoint="/usr/share/ovirt-engine-dwh" name="lv_share_ovirt_engine_dwh" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine_reports" fsid="29857" fstype="ext4" mountpoint="/usr/share/ovirt-engine-reports" name="lv_share_ovirt_engine_reports" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_share_ovirt_engine" fsid="4293" fstype="ext4" mountpoint="/usr/share/ovirt-engine" name="lv_share_ovirt_engine" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_lib_exports" fsid="43476" fstype="ext4" mountpoint="/var/lib/exports" name="lv_lib_exports" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_lib_ovirt_engine" fsid="5020" fstype="ext4" mountpoint="/var/lib/ovirt-engine" name="lv_lib_ovirt_engine" self_fence="1"/>
      <fs device="/dev/mapper/RHEVMVolGroup-lv_lib_pgsql" fsid="4325" fstype="ext4" mountpoint="/var/lib/pgsql" name="lv_lib_pgsql" self_fence="1"/>
      <nfsclient allow_recover="1" name="rhev iso clients" options="rw" target="0.0.0.0"/>
      <nfsexport name="rhev iso exports"/>
      <script file="/etc/rc.d/init.d/postgresql" name="postgresql"/>
      <script file="/etc/rc.d/init.d/ovirt-engine" name="ovirt-engine"/>
      <apache config_file="conf/httpd.conf" name="httpd" server_root="/etc/httpd" shutdown_wait="5"/>
      <script file="/etc/rc.d/init.d/engine-notifierd" name="engine-notifierd"/>
      <script file="/etc/rc.d/init.d/ovirt-engine-dwhd" name="ovirt-engine-dwhd"/>
    </resources>
    <service domain="prefer_node1" name="rhevm" recovery="relocate">
      <ip ref="192.168.99.44"/>
      <lvm ref="RHEVMHALVM"/>
      <fs ref="lv_share_jasperreports_server_pro"/>
      <fs ref="lv_share_ovirt_engine_dwh"/>
      <fs ref="lv_share_ovirt_engine_reports"/>
      <fs ref="lv_share_ovirt_engine"/>
      <fs ref="lv_lib_exports">
        <nfsexport ref="rhev iso exports">
          <nfsclient ref="rhev iso clients"/>
        </nfsexport>
      </fs>
      <fs ref="lv_lib_ovirt_engine"/>
      <fs ref="lv_lib_pgsql"/>
      <script ref="postgresql"/>
      <script ref="ovirt-engine"/>
      <apache ref="httpd">
        <script ref="ovirt-engine-dwhd"/>
      </apache>
      <script ref="engine-notifierd"/>
    </service>
  </rm>
  <fencedevices>
    <fencedevice agent="device" ipaddr="addr" login="user" name="fence-dev.example.com" passwd="pass" power_wait="5" ssl="on"/>
  </fencedevices>
</cluster>

SUMMARY

The procedure in this tech brief describes how to configure a Red Hat Enterprise Virtualization Manager (RHEV-M) in a highly available cluster. It combines several different Red Hat products (Red Hat Enterprise Linux, Red Hat Enterprise Virtualization, and Red Hat Cluster Suite). Shared resources in this procedure include a shared

IP address, shared directories (illustrated here using HA LVM), and shared services.
