8/3/2019 11g Rac Installation Final Doc
By
Rajani Kumar Katam, Oracle RAC DBA.
Satyam Computer Services private Ltd.
Step-by-step installation of Oracle 11g (11.1.0.6.0) RAC on Red Hat Enterprise Linux AS 4, with screenshots.
The following is the sequence of steps to be executed on the nodes.
Install the Linux Operating System
Install the required Linux packages for Oracle RAC (refer to the Oracle documentation for the required packages; the packages vary depending on the version of the operating system).
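Since the exact package list depends on the OS release, a quick way to compare a required list against what is installed is a loop like the sketch below. The package names here are illustrative, not the authoritative 11g list, and on a real node `installed` would come from `rpm -qa`:

```shell
# Illustrative required list -- consult the Oracle installation guide for the real one.
needed="binutils compat-libstdc++ gcc glibc libaio make sysstat"

# Hypothetical snapshot of installed packages;
# on a node use: installed=$(rpm -qa --qf '%{NAME} ')
installed="binutils glibc make sysstat"

for p in $needed; do
    case " $installed " in
        *" $p "*) ;;                 # present -- nothing to do
        *) echo "MISSING: $p" ;;     # flag anything not installed
    esac
done
```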
Network Configuration
Using the Network Configuration application, you need to configure both NIC devices as well as the /etc/hosts file. Both of these tasks can be completed using the Network Configuration GUI. Notice that the /etc/hosts settings are the same for both nodes.
For example, we need to specify the entries in the /etc/hosts file as below on both nodes.
/etc/hosts
127.0.0.1      localhost.localdomain localhost

# Public Network - (eth0)
192.168.1.100  linux1
192.168.1.101  linux2

# Private Interconnect - (eth1)
192.168.2.100  linux1-priv
192.168.2.101  linux2-priv

# Public Virtual IP (VIP) addresses - (eth0)
192.168.1.200  linux1-vip
192.168.1.201  linux2-vip
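Each node needs a public, private, and VIP name resolvable on both nodes, so it is worth a quick sanity check that every expected name is present. The sketch below runs against an inline copy of the example entries so it can run anywhere; on the nodes you would pipe in /etc/hosts instead:

```shell
# Inline copy of the example entries; on a real node use: cat /etc/hosts
hosts_content='127.0.0.1 localhost.localdomain localhost
192.168.1.100 linux1
192.168.1.101 linux2
192.168.2.100 linux1-priv
192.168.2.101 linux2-priv
192.168.1.200 linux1-vip
192.168.1.201 linux2-vip'

# Check that each required name appears as the hostname field of some entry.
for h in linux1 linux1-priv linux1-vip linux2 linux2-priv linux2-vip; do
    if echo "$hosts_content" | awk -v n="$h" '$2 == n { f = 1 } END { exit !f }'; then
        echo "$h: ok"
    else
        echo "$h: MISSING"
    fi
done
```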
Create the "oracle" User and Directories

groupadd -g 501 oinstall
groupadd -g 502 dba
groupadd -g 504 asm
useradd -m -u 501 -g oinstall -G dba,asm -d /home/oracle -s /bin/bash -c "Oracle Software Owner" oracle

# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app
# chmod -R 775 /u01/app

Create the directory for Oracle Clusterware:

# mkdir -p /u01/app/crs
# chown -R oracle:oinstall /u01/app/crs
# chmod -R 775 /u01/app/crs
Create Mount Point for OCFS2 / Clusterware
Let's now create the mount point for the Oracle Cluster File System, Release 2 (OCFS2) that will be used to store the two Oracle Clusterware shared files (the OCR file and the voting disk file).

# mkdir -p /u02/oradata/orcl
# chown -R oracle:oinstall /u02/oradata/orcl
# chmod -R 775 /u02/oradata/orcl
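To confirm the mode bits ended up as intended across all three directory trees, here is a demo on a scratch tree (the real paths are under /u01 and /u02; chown is omitted since the demo does not run as root):

```shell
# Build a scratch copy of the layout and apply the same chmod as above.
base=$(mktemp -d)
mkdir -p "$base/u01/app/oracle" "$base/u01/app/crs" "$base/u02/oradata/orcl"
chmod -R 775 "$base/u01" "$base/u02"

# stat -c '%a' prints the octal mode (GNU coreutils).
for d in u01/app/oracle u01/app/crs u02/oradata/orcl; do
    echo "$d -> $(stat -c '%a' "$base/$d")"
done
```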
Configure the Linux Servers for Oracle
Edit the .bash_profile file and set the required environment variables on both nodes.
PATH=$PATH:$HOME/bin
export ORACLE_SID=hrms1
export ORACLE_HOME=/u02/app/oracle/db_home
export ORA_CRS_HOME=/u02/app/oracle/crs_home
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/lib
unset USERNAME
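To confirm the profile sets what you expect before logging the oracle user in, you can source the fragment in a scratch file and echo the variables back. A sketch using the values above:

```shell
# Write the fragment to a scratch file and source it,
# rather than touching the real ~oracle/.bash_profile.
profile=$(mktemp)
cat > "$profile" <<'EOF'
export ORACLE_SID=hrms1
export ORACLE_HOME=/u02/app/oracle/db_home
export ORA_CRS_HOME=/u02/app/oracle/crs_home
export PATH=$PATH:$ORACLE_HOME/bin
EOF
. "$profile"
echo "ORACLE_SID=$ORACLE_SID"
echo "ORA_CRS_HOME=$ORA_CRS_HOME"
```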
Swap Space Considerations
Installing Oracle Database 11g Release 1 requires a minimum of 1 GB of memory.

# cat /proc/meminfo | grep MemTotal
# cat /proc/meminfo | grep SwapTotal
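The swap sizing rule of thumb from the 11g Release 1 installation guide (as I recall it; verify against the documentation for your release) can be sketched as a small function:

```shell
# Rule of thumb for 11gR1 (verify in the install guide):
#   RAM 1-2 GB  -> swap = 1.5 x RAM
#   RAM 2-8 GB  -> swap = RAM
#   RAM > 8 GB  -> swap = 0.75 x RAM
required_swap_mb() {
    ram_mb=$1
    if   [ "$ram_mb" -le 2048 ]; then echo $(( ram_mb * 3 / 2 ))
    elif [ "$ram_mb" -le 8192 ]; then echo "$ram_mb"
    else echo $(( ram_mb * 3 / 4 ))
    fi
}

required_swap_mb 1024    # 1 GB RAM
required_swap_mb 4096    # 4 GB RAM
```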
Configuring Kernel Parameters and Shell Limits
On both nodes:
# sysctl -p
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 262144
kernel.shmmax = 1073741823
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
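One line worth decoding is kernel.sem, which packs four semaphore limits into a single value. Splitting the value used above labels each field:

```shell
# kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI
#   SEMMSL: max semaphores per set      SEMMNS: max semaphores system-wide
#   SEMOPM: max operations per semop()  SEMMNI: max number of semaphore sets
set -- 250 32000 100 128
echo "SEMMSL=$1 SEMMNS=$2 SEMOPM=$3 SEMMNI=$4"
```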
Setting Shell Limits for the oracle User
cat >> /etc/security/limits.conf
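The values appended are not visible in this copy of the document; the typical oracle-user limits from the Oracle installation guide are shown below, written to a scratch file for the demo (on the nodes the target is /etc/security/limits.conf, and the values should be verified against the guide for your release):

```shell
# Demo target; on the real nodes: LIMITS=/etc/security/limits.conf
LIMITS=$(mktemp)

# Typical values from the Oracle install guide (verify for your release).
cat >> "$LIMITS" <<'EOF'
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF

grep -c '^oracle' "$LIMITS"
```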
On the "Node Configuration" dialog, click the [Add] button. This will bring up the "Add Node" dialog.
In the "Add Node" dialog, enter the Host name and IP address for the first node in the cluster. Leave the IP Port set to its default value of 7777. In my example, I added both nodes, using linux1 / 192.168.1.100 for the first node and linux2 / 192.168.1.101 for the second node.
Click [Apply] on the "Node Configuration" dialog - all nodes should now be "Active".
After verifying all values are correct, exit the application using [File] -> [Quit].
This needs to be performed on both Oracle RAC nodes in the cluster.
Configure O2CB to Start on Boot
# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure
Format the OCFS2 Filesystem
Create a partition on the SAN or shared storage for storing the OCR file and voting disk files that are created at the time of Clusterware installation.
(use the fdisk command as the root user to create a partition)
NOTE:
It is always recommended to create 4 partitions so that redundant copies of the voting disk file and the OCR file can be maintained.
$ su -
# mkfs.ocfs2 -b 4K -C 32K -N 4 -L ocfs2 /dev/sde2
Mount the OCFS2 Filesystem

$ su -
# mount -t ocfs2 -o datavolume,nointr -L ocfs2 /u02/oradata/orcl
(/u02/oradata/orcl is the mount point where you want that partition mounted)
Configure OCFS2 to Mount Automatically at Startup
We can do that by adding the following line to the /etc/fstab file on both Oracle RAC nodes
in the cluster:
LABEL=ocfs2 /u02/oradata/orcl ocfs2 _netdev,datavolume,nointr 0 0
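A one-liner can confirm the entry is well-formed, with the fstype in the right column and _netdev present so the mount waits for networking. A sketch, checked against an inline copy of the line:

```shell
# Inline copy of the fstab line from this guide.
entry='LABEL=ocfs2 /u02/oradata/orcl ocfs2 _netdev,datavolume,nointr 0 0'

# Field 3 must be the fstype, field 4 must carry the _netdev option.
echo "$entry" | awk '$3 == "ocfs2" && $4 ~ /_netdev/ { print "fstab entry OK"; ok = 1 }
                     END { exit !ok }'
```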
Install & Configure Automatic Storage Management Libraries (ASMLib 2.0)

# rpm -Uvh oracleasm-support-2.0.4-1.el5.i386.rpm
# rpm -Uvh oracleasm-2.6.18-8.el5-2.0.4-1.el5.i686.rpm
# rpm -Uvh oracleasmlib-2.0.3-1.el5.i386.rpm
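Note that the oracleasm kernel-driver package is tied to a specific kernel build: the version embedded in its name (2.6.18-8.el5 above) must match `uname -r` on the node. A hedged sketch of the usual pre-check:

```shell
# The driver rpm name embeds the kernel it was built for;
# compare it with the running kernel before installing.
rpm_kernel="2.6.18-8.el5"    # parsed from oracleasm-2.6.18-8.el5-2.0.4-1.el5.i686.rpm
running=$(uname -r)

if [ "$rpm_kernel" = "$running" ]; then
    msg="driver matches running kernel $running"
else
    msg="mismatch: rpm built for $rpm_kernel, running $running"
fi
echo "$msg"
```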
Configuring and Loading the ASMLib Packages
$ su -
# /etc/init.d/oracleasm configure
Create ASM Disks for Oracle
$ su -
# /etc/init.d/oracleasm createdisk VOL1 /dev/sde1
# /etc/init.d/oracleasm createdisk VOL2 /dev/sde2
NOTE:
Create the number of disks depending on your requirement.
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
VOL1
VOL2
VOL3
VOL4
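After scandisks on the second node, it is worth automating the check that every expected volume shows up in the listdisks output. A sketch, with the output inlined from above (on a node you would capture it from the command instead):

```shell
expected="VOL1 VOL2 VOL3 VOL4"

# On a node: listed=$(/etc/init.d/oracleasm listdisks)
listed='VOL1
VOL2
VOL3
VOL4'

for d in $expected; do
    echo "$listed" | grep -qx "$d" && echo "$d visible" || echo "$d NOT visible"
done
```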
Pre-Installation Tasks for Oracle Clusterware 11g
Verifying the Hardware and Operating System Setup with CVU

$ ./runcluvfy.sh stage -post hwos -n hcslinux1,hcslinux2 -verbose
Installing Oracle Clusterware Software
Note:
Before installing Clusterware, please verify remote host access and user equivalence using ssh.
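A minimal sketch of that check, run as the oracle user (node names are the ones used later in this document; BatchMode makes ssh fail instead of prompting for a password, which is exactly the behavior the installer needs):

```shell
# If equivalence is set up, each ssh returns the remote date with no prompt;
# BatchMode=yes forces a failure instead of an interactive password prompt.
for n in hcslinux1 hcslinux2; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "$n" date 2>/dev/null; then
        echo "$n: user equivalence OK"
    else
        echo "$n: user equivalence NOT configured"
    fi
done
```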
$sh runInstaller
[root@hcslnx01 crs_home]# sh root.sh
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /ocfs2/voting_file
Format of 1 voting devices complete.
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        hcslnx01
Cluster Synchronization Services is inactive on these nodes.
        hcslnx02
Local node checking complete. Run root.sh on remaining nodes to start CRS daemons.
[root@hcslnx01 crs_home]#
[root@hcslnx02 crs_home]# sh root.sh
WARNING: directory '/' is not owned by root
Checking to see if Oracle CRS stack is already configured
Setting the permissions on OCR backup directory
Setting up Network socket directories
Oracle Cluster Registry configuration upgraded successfully
The directory '/' is not owned by root. Changing owner to root
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: hcslnx01 hcslnx01-priv hcslnx01
node 2: hcslnx02 hcslnx02-priv hcslnx02
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
Cluster Synchronization Services is active on these nodes.
        hcslnx01
        hcslnx02
Cluster Synchronization Services is active on all the nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (2) nodes...
Creating GSD application resource on (2) nodes...
Creating ONS application resource on (2) nodes...
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
You have new mail in /var/spool/mail/root
[root@hcslnx02 crs_home]#
Verify Oracle Clusterware Installation

Check Cluster Nodes

$ $ORA_CRS_HOME/bin/olsnodes -n
linux1  1
linux2  2
Confirm Oracle Clusterware Function

$ $ORA_CRS_HOME/bin/crs_stat -t -v
Name           Type        R/RA   F/FT   Target  State   Host
----------------------------------------------------------------------
ora.linux1.gsd application 0/5    0/0    ONLINE  ONLINE  linux1
ora.linux1.ons application 0/3    0/0    ONLINE  ONLINE  linux1
ora.linux1.vip application 0/0    0/0    ONLINE  ONLINE  linux1
ora.linux2.gsd application 0/5    0/0    ONLINE  ONLINE  linux2
ora.linux2.ons application 0/3    0/0    ONLINE  ONLINE  linux2
ora.linux2.vip application 0/0    0/0    ONLINE  ONLINE  linux2
Check CRS Status

$ $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
Check Oracle Clusterware Auto-Start Scripts
$ ls -l /etc/init.d/init.*
-rwxr-xr-x 1 root root  2236 Oct 12 22:08 /etc/init.d/init.crs
-rwxr-xr-x 1 root root  5290 Oct 12 22:08 /etc/init.d/init.crsd
-rwxr-xr-x 1 root root 49416 Oct 12 22:08 /etc/init.d/init.cssd
-rwxr-xr-x 1 root root  3859 Oct 12 22:08 /etc/init.d/init.evmd
Install Oracle Database 11g Software

$ sh runInstaller

Installing the Oracle 11g RAC software
NOTE:
The above command has to be executed manually on the failed node by connecting as the oracle user.
Creating the 11g RAC database by invoking the DBCA utility.
NOTE:
There are five node-level tasks defined for SRVCTL:
- Adding and deleting node-level applications
- Setting and unsetting the environment for node-level applications
- Administering node applications
- Administering ASM instances
- Starting and stopping a group of programs that includes virtual IP addresses, listeners, Oracle Notification Services, and Oracle Enterprise Manager agents (for maintenance purposes)
Status of all instances and services
$ srvctl status database -d orcl
Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2
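This output is easy to post-process; for example, pulling out just the node names for use in a script (parsing an inline copy of the output above):

```shell
# Sample output from: srvctl status database -d orcl
out='Instance orcl1 is running on node linux1
Instance orcl2 is running on node linux2'

# The node name is the last field of each "is running on node" line.
echo "$out" | awk '/is running on node/ { print $NF }'
```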
Status of a single instance
$ srvctl status instance -d orcl -i orcl2
Instance orcl2 is running on node linux2
Status of node applications on a particular node
$ srvctl status nodeapps -n linux1
VIP is running on node: linux1
GSD is running on node: linux1
Listener is running on node: linux1
ONS daemon is running on node: linux1
Status of an ASM instance
$ srvctl status asm -n linux1
ASM instance +ASM1 is running on node linux1.
List all configured databases
$ srvctl config database
orcl
Display the configuration for our RAC database

$ srvctl config database -d orcl
linux1 orcl1 /u01/app/oracle/product/11.1.0/db_1
linux2 orcl2 /u01/app/oracle/product/11.1.0/db_1
Display the configuration for node applications - (VIP, GSD, ONS, Listener)
$ srvctl config nodeapps -n linux1 -a -g -s -l
VIP exists.: /linux1-vip/192.168.1.200/255.255.255.0/eth0
GSD exists.
ONS daemon exists.
Listener exists.
Display the configuration for the ASM instance(s)
$ srvctl config asm -n linux1
+ASM1 /u01/app/oracle/product/11.1.0/db_1
BY
Rajani Kumar Katam
Satyam Computer Services private Ltd.