Setting Up Clustering in Linux - Initq
8/10/2019 Setting Up Clustering in Linux - Initq
10/12/2014 Setting up Clustering in Linux - Initq
http://initq.com/index.php/Setting_up_Clustering_in_Linux
Download the needed ISOs
1. Download the latest RHEL 5 or RHEL 6 ISO.
2. Download the Openfiler ISO.
Install rhel and openfiler in Virtualbox
The following articles will show you how to set up RHEL and Openfiler in VirtualBox.
1. Use this article to set up 3 machines: 2 for the cluster nodes and 1 for luci. Setup RHEL in VirtualBox
2. You will need Openfiler for storage; it can simulate iSCSI or NAS targets. Setup openfiler in VirtualBox
3. Setup networking in VirtualBox
Setup Multipathing on the nodes
You will need to set up multipathing on both nodes to simulate path failure. Please refer to Setting up Multipathing on Linux to set this up.
Turn off iptables
Please turn off iptables on all 4 of your VMs: both nodes, luci and openfiler.
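On RHEL 5/6 this is done with the SysV service tools; a minimal sketch, run as root on node1, node2, luci and openfiler (including ip6tables is my addition, and the guards just keep the loop going on hosts where a service is absent):

```shell
# Stop the firewall now and keep it disabled across reboots.
for svc in iptables ip6tables; do
    service "$svc" stop 2>/dev/null || true
    chkconfig "$svc" off 2>/dev/null || true
done
```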
Install ricci on two rhel nodes
[root@node1 ~]# yum install ricci
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to registe
Cluster        | 1.5 kB 00:00
ClusterStorage | 1.5 kB 00:00
Server         | 1.5 kB 00:00
VT             | 1.3 kB 00:00
Setting up Install Process
Package ricci-0.12.2-64.el5.x86_64 already installed and latest version
Nothing to do
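luci can only manage a node whose ricci agent is running, so it is worth starting ricci and enabling it at boot on both nodes; a sketch assuming the stock SysV service name (the guards only matter if you paste this on a machine without ricci):

```shell
# Enable ricci at boot and start it now, on node1 and node2.
chkconfig ricci on 2>/dev/null || true
service ricci start 2>/dev/null || true
```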
Install luci on luci VM
[root@luci ~]# yum install luci
Loaded plugins: product-id, security, subscription-manager
This system is not registered to Red Hat Subscription Management. You can use subscription-manager to registe
Cluster        | 1.5 kB 00:00
ClusterStorage | 1.5 kB 00:00
Server         | 1.5 kB 00:00
VT             | 1.3 kB 00:00
Setting up Install Process
Package luci-0.12.2-64.el5.x86_64 already installed and latest version
Nothing to do
Initialize luci
[root@luci etc]# luci_admin password
The luci site has not been initialized.
To initialize it, execute
/usr/sbin/luci_admin init
[root@luci etc]# /usr/sbin/luci_admin init
Initializing the luci server
Creating the 'admin' user
-
8/10/2019 Setting Up Clustering in Linux - Initq
2/6
10/12/2014 Setting up Clustering in Linux - Initq
http://initq.com/index.php/Setting_up_Clustering_in_Linux
Enter password:
Confirm password:
Please wait...
The admin password has been successfully set.
Generating SSL certificates...
The luci server has been successfully initialized
Access luci
Point your web browser to https://luci.localdomain:8084 to access luci.
Install apache on both nodes
[root@node2 ~]# yum install httpd
Fix hosts files on both nodes, luci and openfiler
Please fix the /etc/hosts file on all 4 of your machines so they contain exactly the same entries.
192.168.1.203 luci luci.localdomain
192.168.1.202 node1
192.168.1.201 node2
192.168.1.200 openfiler
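A quick way to confirm the entries are in place is to grep each hostname out of the hosts file; a sketch that validates a copy of the entries above (point HOSTS at /etc/hosts on a real node instead of the temp file):

```shell
# Check that all four cluster hostnames appear in the hosts file.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.1.203 luci luci.localdomain
192.168.1.202 node1
192.168.1.201 node2
192.168.1.200 openfiler
EOF

for h in luci node1 node2 openfiler; do
    # -w matches whole words so "node1" cannot match "node10"
    grep -qw "$h" "$HOSTS" && echo "$h: ok" || echo "$h: MISSING"
done
rm -f "$HOSTS"
```

Each hostname should print "<name>: ok"; anything reported MISSING will break cluster creation later.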
Fix ricci password on both nodes
[root@node1 ~]# passwd ricci
Create and set up the cluster from the luci GUI
1. Click Cluster / Create a New Cluster. Call it mycluster.
2. Add your two nodes with the ricci password you set on both nodes.
3. Check the boxes: Download Packages, Reboot the nodes and Enable shared storage support.
Go to node 1 and node 2 and do:
[root@node1 ~]# tail -f /var/log/secure /var/log/messages
Go back to luci and click "Create Cluster". Now watch your screen on node1 and node2. Both nodes will be rebooted after the packages
are installed.
Check cluster.conf on both nodes
[root@node2 ~]# cat /etc/cluster/cluster.conf
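The exact contents depend on the options you picked in luci, but a freshly created two-node RHEL 5 cluster.conf looks roughly like the sketch below (the two_node/expected_votes pair is what lets a two-node cluster stay quorate with one member; node names must match your hosts file, and the empty fence sections are filled in once fence devices are added):

```xml
<?xml version="1.0"?>
<cluster name="mycluster" config_version="1">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="node1" nodeid="1" votes="1">
      <fence/>
    </clusternode>
    <clusternode name="node2" nodeid="2" votes="1">
      <fence/>
    </clusternode>
  </clusternodes>
  <fencedevices/>
  <rm/>
</cluster>
```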
Turn on cman service if there are issues
If you see any node issues in luci, go to that node, check the cluster.conf file, correct it, and start or restart the cman service.
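The restart-and-verify step can be sketched as (run on the affected node; clustat confirms it rejoined the cluster):

```shell
# Restart the cluster manager, then check membership.
service cman restart 2>/dev/null || true
clustat 2>/dev/null || true
```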
Add resources to your cluster
1. Click on Cluster/Resources.
2. Choose the IP address resource and add an unused IP address in your subnet. We will choose 192.168.1.204. Check the "Monitor link" box. Leave the number of seconds at the default.
3. Add a file system resource. Choose the mpath0 partition we created in the Setting up Multipathing on Linux section. Use the name mpath0p1; the filesystem will be ext3, the mount point /var/www/html and the device /dev/mapper/mpath0p1. Check the boxes for force unmount, reboot host node if unmount fails and check file system before mounting.
4. Add Script resource to restart apache. In the name field just put httpd and in the full path to script file put /etc/init.d/httpd
Failover domain
1. Click Cluster/Failover Domains.
2. Add a Failover Domain.
3. Name it node2.
4. Check Prioritized, Restrict failover to this domain's members and Do not fail back service in this domain.
5. Check both member boxes and change the priority for node2 to 2.
Fence Devices
1. Click Clustering/Shared Fence Devices.
2. Add a Fence Device.
3. Choose the type of device you have. In our case we will choose Virtual Machine Fencing.
4. Give it a name and select it.
Setup apache directory for cluster
[root@node1 www]# ls /dev/mapper/
mpath0  mpath0p1
[root@node1 www]# mount /dev/mapper/mpath0p1 /var/www/html
[root@node1 www]# vi /var/www/html/index.html
Put something in this file.
[root@node1 www]# /etc/init.d/httpd restart
Stopping httpd:                                            [FAILED]
Starting httpd: httpd: apr_sockaddr_info_get() failed for node1.localdomain
httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName
                                                           [  OK  ]
[root@node1 www]#
Stop the manual httpd services
[root@node1 www]# /etc/init.d/httpd stop
Stopping httpd:                                            [  OK  ]
[root@node1 www]# umount /var/www/html/
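Since rgmanager will start apache through the script resource, it is also worth making sure init does not start httpd on its own at boot on either node; a sketch:

```shell
# Let the cluster, not init, start apache (run on node1 and node2).
chkconfig httpd off 2>/dev/null || true
chkconfig --list httpd 2>/dev/null || true
```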
Make sure rgmanager is running on both nodes
[root@node2 cluster]# /etc/init.d/rgmanager start
Starting Cluster Service Manager:                          [  OK  ]
[root@node2 init.d]# chkconfig rgmanager on
Unlock resource group
[root@node1 init.d]# clusvcadm -l
Resource groups locked
[root@node1 init.d]# clusvcadm -u
Resource groups unlocked
Create Cluster Service
1. Click on Cluster/mycluster/Service.
2. Name it mywebservice.
3. Check the boxes to automatically start the service and run exclusive.
4. Set the failover domain to node2.
5. Set the recovery policy to relocate.
6. Submit.
Add resources to Service
1. Click on mywebservice and add resource to service.
2. Choose the three resources we created, ip, mpath0p1 and script.
3. Save
Check your cluster
[root@node1 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:35:19 2013
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node1                        1    Online, Local, rgmanager
 node2                        2    Online, rgmanager

 Service Name                 Owner (Last)   State
 ------- ----                 ----- ------   -----
 service:mywebservice         node1          started
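For scripted checks you can pull the owner and state of the service out of clustat's output with awk; a sketch that parses a captured service line (on a live node you would pipe clustat itself into the same awk program):

```shell
# Sample service line from clustat output; on a real node use:
#   clustat | awk '$1 == "service:mywebservice" {print $2, $3}'
line='service:mywebservice         node1          started'
owner=$(echo "$line" | awk '{print $2}')
state=$(echo "$line" | awk '{print $3}')
echo "owner=$owner state=$state"   # → owner=node1 state=started
```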
Check your Webpage
1. Check http://192.168.1.204/
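From any machine on the subnet you can also fetch the page over the virtual IP from the command line; a sketch (the || branch just reports failure instead of aborting, which is useful when run before the service has started):

```shell
# Fetch the clustered site via the floating IP.
curl -s --max-time 5 http://192.168.1.204/ 2>/dev/null \
    || echo "mywebservice not reachable"
```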
Test the cluster
[root@node1 init.d]# /etc/init.d/httpd status
httpd (pid 17150) is running...
[root@node1 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:48:23 2013
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node1                        1    Online, Local, rgmanager
 node2                        2    Online, rgmanager

 Service Name                 Owner (Last)   State
 ------- ----                 ----- ------   -----
 service:mywebservice         node1          started

[root@node2 init.d]# /etc/init.d/httpd status
httpd is stopped
We can see that apache is running on node1 only. Now we will reboot node1 and see if the service is started on the second node.
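Rebooting is the blunt test; you can also move the service deliberately. clusvcadm can relocate it to a named member, which exercises the same stop/start path without taking a node down (sketch; the guards only matter if pasted outside the cluster):

```shell
# Relocate mywebservice onto node2 without rebooting anything.
clusvcadm -r mywebservice -m node2 2>/dev/null || true
clustat 2>/dev/null || true
```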
[root@node2 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:49:49 2013
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node1                        1    Online
 node2                        2    Online, Local, rgmanager

 Service Name                 Owner (Last)   State
 ------- ----                 ----- ------   -----
 service:mywebservice         (node1)        stopped

[root@node2 init.d]# clustat
Cluster Status for mycluster @ Sat Sep 21 21:49:52 2013
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 node1                        1    Online
 node2                        2    Online, Local, rgmanager

 Service Name                 Owner (Last)   State
 ------- ----                 ----- ------   -----
 service:mywebservice         node2          starting
Check Virtual ip, mount and httpd on node2
[root@node2 init.d]# ip addr show
1: lo: mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 08:00:27:4e:27:55 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.201/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.204/24 scope global secondary eth0
    inet6 fe80::a00:27ff:fe4e:2755/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 08:00:27:af:27:89 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.2/24 brd 10.0.2.255 scope global eth1
    inet6 fe80::a00:27ff:feaf:2789/64 scope link
       valid_lft forever preferred_lft forever
4: sit0: mtu 1480 qdisc noop
    link/sit 0.0.0.0 brd 0.0.0.0
[root@node2 init.d]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00   19G  2.4G   16G  14% /
/dev/sda1                         99M   13M   82M  14% /boot
tmpfs                            249M     0  249M   0% /dev/shm
/dev/hdc                         4.1G  4.1G     0 100% /mnt/cdrom
/dev/mapper/mpath0p1             4.7G  138M  4.4G   4% /var/www/html
[root@node2 init.d]# /etc/init.d/httpd status
httpd (pid 16372) is running...
DLM and CLVMd
Cluster-based locking is provided by DLM (the Distributed Lock Manager) and is used by clvmd for clustered LVM.
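If you go on to use clvmd, LVM itself must be told to use cluster locking. In /etc/lvm/lvm.conf that is the locking_type setting; value 3 selects clustered locking through clvmd/DLM:

```
# /etc/lvm/lvm.conf (global section), on every cluster node:
locking_type = 3
```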
GFS with clustering
Please read GFS2 with cluster to understand GFS.