Oracle 11g RAC: Adding a Third Node
11g RAC Third-Node Addition (Internal Use Only)

Author : 김 종 인
Creation Date : April 10, 2014
Last Updated : April 10, 2014
Version : 1.0
This document walks through adding a third node to an existing two-node Oracle 11g RAC
(11.2.0.4) cluster running on x86_64 Linux.
There are several possible options, each with its own procedure, so note that the steps
may differ depending on your environment and goals.
(Instance creation can also be done manually, but this document performs it through the
GUI (dbca); the manual steps are shown as well.)
- Test Environment -
Item              Details
VirtualBox        4.2.16
OS                OEL 5.8
grid & Database   11gR2 (11.2.0.4)
CPU               1
Memory            2048M
node              test1, test2, (test3)
※ When this test was run on a laptop with 8 GB of memory, the machine became too slow
for any other work; be warned, for the sake of your mental health.
Ⅰ. Checking the Current Configuration
1. Verify the configuration of the current two nodes.
node   Public          Virtual         Private      scan name   scan ip
test1  192.168.56.21   192.168.56.31   10.10.10.1   test-scan   192.168.56.30
test2  192.168.56.22   192.168.56.32   10.10.10.2
test3  192.168.56.23   192.168.56.33   10.10.10.3
cat /etc/hosts
# Public
192.168.56.21 test1
192.168.56.22 test2
# Virtual
192.168.56.31 test1-vip
192.168.56.32 test2-vip
# Private
10.10.10.1 test1-priv
10.10.10.2 test2-priv
# SCAN
192.168.56.30 test-scan
[root:/root]#crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE test1
ONLINE ONLINE test2
ora.LISTENER.lsnr
ONLINE ONLINE test1
ONLINE ONLINE test2
ora.RECO.dg
ONLINE ONLINE test1
ONLINE ONLINE test2
ora.asm
ONLINE ONLINE test1 Started
ONLINE ONLINE test2 Started
ora.gsd
OFFLINE OFFLINE test1
OFFLINE OFFLINE test2
ora.net1.network
ONLINE ONLINE test1
ONLINE ONLINE test2
ora.ons
ONLINE ONLINE test1
ONLINE ONLINE test2
ora.registry.acfs
ONLINE ONLINE test1
ONLINE ONLINE test2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE test1
ora.cvu
1 ONLINE ONLINE test1
ora.oc4j
1 ONLINE ONLINE test1
ora.rac.db
1 ONLINE ONLINE test1 Open
2 ONLINE ONLINE test2 Open
ora.scan1.vip
1 ONLINE ONLINE test1
ora.test1.vip
1 ONLINE ONLINE test1
ora.test2.vip
1 ONLINE ONLINE test2
Ⅱ. Configuring the New Server
- Install the OS fresh on the new machine, or clone an existing node.
- The shared disks can be set up just as easily as when the original two nodes were
configured: in VirtualBox, simply attach the existing disk files.
1. Edit /etc/hosts on each node
# Public
192.168.56.21 test1
192.168.56.22 test2
192.168.56.23 test3
# Virtual
192.168.56.31 test1-vip
192.168.56.32 test2-vip
192.168.56.33 test3-vip
# Private
10.10.10.1 test1-priv
10.10.10.2 test2-priv
10.10.10.3 test3-priv
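The hosts block above must be identical on all three nodes. As a minimal sketch (a hypothetical helper, not part of any Oracle tooling), the entries can be generated from the IP plan so that the exact same text is appended everywhere:

```shell
# Generate the Public/Virtual/Private /etc/hosts entries for test1..test3
# from the IP plan used in this document. Hypothetical helper; review the
# output before appending it to /etc/hosts on each node.
gen_hosts() {
  i=1
  for node in test1 test2 test3; do
    echo "192.168.56.2$i ${node}"        # Public
    echo "192.168.56.3$i ${node}-vip"    # Virtual
    echo "10.10.10.$i ${node}-priv"      # Private
    i=$((i + 1))
  done
}
gen_hosts
```

For example, `gen_hosts >> /etc/hosts` (as root) after confirming the output matches the address table in section Ⅰ.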
2. SSH setup and connectivity test
(as the root user)
[root:/install/grid/sshsetup]#ls
sshUserSetup.sh
[root:/install/grid/sshsetup]#./sshUserSetup.sh -user oracle -hosts "test1 test2 test3" -noPromptPassphrase -advanced
The output of this script is also logged into /tmp/sshUserSetup_2014-04-10-14-19-44.log
Hosts are test1 test2 test3
user is oracle
Platform:- Linux
Checking if the remote hosts are reachable
PING test1 (127.0.0.1) 56(84) bytes of data.
64 bytes from test1 (127.0.0.1): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from test1 (127.0.0.1): icmp_seq=2 ttl=64 time=0.027 ms
64 bytes from test1 (127.0.0.1): icmp_seq=3 ttl=64 time=0.036 ms
64 bytes from test1 (127.0.0.1): icmp_seq=4 ttl=64 time=0.047 ms
64 bytes from test1 (127.0.0.1): icmp_seq=5 ttl=64 time=0.050 ms
--- test1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.015/0.035/0.050/0.012 ms
PING test2 (192.168.56.22) 56(84) bytes of data.
64 bytes from test2 (192.168.56.22): icmp_seq=1 ttl=64 time=0.505 ms
64 bytes from test2 (192.168.56.22): icmp_seq=2 ttl=64 time=0.619 ms
64 bytes from test2 (192.168.56.22): icmp_seq=3 ttl=64 time=0.872 ms
64 bytes from test2 (192.168.56.22): icmp_seq=4 ttl=64 time=0.339 ms
64 bytes from test2 (192.168.56.22): icmp_seq=5 ttl=64 time=0.470 ms
--- test2 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4000ms
rtt min/avg/max/mdev = 0.339/0.561/0.872/0.179 ms
PING test3 (192.168.56.23) 56(84) bytes of data.
64 bytes from test3 (192.168.56.23): icmp_seq=1 ttl=64 time=1.98 ms
64 bytes from test3 (192.168.56.23): icmp_seq=2 ttl=64 time=0.371 ms
64 bytes from test3 (192.168.56.23): icmp_seq=3 ttl=64 time=0.590 ms
64 bytes from test3 (192.168.56.23): icmp_seq=4 ttl=64 time=0.464 ms
64 bytes from test3 (192.168.56.23): icmp_seq=5 ttl=64 time=0.491 ms
--- test3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4002ms
rtt min/avg/max/mdev = 0.371/0.779/1.981/0.605 ms
Remote host reachability check succeeded.
The following hosts are reachable: test1 test2 test3.
The following hosts are not reachable: .
All hosts are reachable. Proceeding further...
firsthost test1
numhosts 3
The script will setup SSH connectivity from the host test1 to all
the remote hosts. After the script is executed, the user can use SSH to run
commands on the remote hosts or copy files between this host test1
and the remote hosts without being prompted for passwords or confirmations.
... (output omitted)
-Verification from complete-
SSH verification complete.
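Before moving on it is worth re-checking user equivalence from each node, since a single password prompt later will stall addNode.sh. A small sketch (hypothetical helper; `BatchMode=yes` makes ssh fail rather than prompt for a password):

```shell
# Verify passwordless SSH to every node. BatchMode=yes makes ssh fail
# instead of prompting. SSH_CMD is overridable so the loop itself can be
# exercised without a live cluster.
SSH_CMD=${SSH_CMD:-"ssh -o BatchMode=yes"}
check_equiv() {
  rc=0
  for h in "$@"; do
    if $SSH_CMD "$h" true 2>/dev/null; then
      echo "$h: ok"
    else
      echo "$h: FAILED"
      rc=1
    fi
  done
  return $rc
}
# usage (as oracle, on each node): check_equiv test1 test2 test3
```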
3. Pre-check the environment on each node (run runcluvfy.sh)
The most important pre-work before adding a node is verifying that the new node's
environment is configured identically to the existing two nodes.
Always run the pre-check and make sure nothing will error out during installation.
Review the script logs: items that are safe to ignore (DNS configuration, NTP, and so on)
can be skipped, but any FAIL item that can be fixed must be resolved before proceeding.
(The detailed options of the three commands differ, but the items they check largely
overlap, so running only the first is acceptable.)
[oracle:/install/grid]#ls
install readme.html response rpm runcluvfy.sh runInstaller sshsetup stage welcome.html
[oracle:/install/grid]#
[oracle:/install/grid]#./runcluvfy.sh stage -pre crsinst -n test1,test2,test3 -r 11gR2 -verbose > ./pre_check.log
[oracle:/install/grid]#./runcluvfy.sh comp peer -n test3 -refnode test1 -r 11gR2 > ./pre_check2.log
[oracle:/install/grid]#./runcluvfy.sh stage -pre nodeadd -n test3 -fixup -verbose > ./pre_check3.log
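Because the output is redirected to files, failures are easy to miss. A quick way to pull out the items needing attention (a sketch; the grep pattern assumes cluvfy's usual wording, with failed checks marked FAILED and error messages carrying PRVF- IDs):

```shell
# Scan the saved cluvfy logs for failed checks. Failed items are usually
# flagged with the word FAILED or a PRVF- message ID (an assumption about
# cluvfy's wording; adjust the pattern to your output if needed).
show_failures() {
  # /dev/null forces grep to print file names even for a single log
  grep -nE 'FAILED|PRVF-' "$@" /dev/null || echo "no failures found"
}
# usage: show_failures pre_check.log pre_check2.log pre_check3.log
```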
4. Install Grid Infrastructure on the new node
- Run addNode.sh
(Run the following script from a node where Grid Infrastructure is already installed (test1).)
[oracle:/oragrid/product/11.2.0.4/oui/bin]#
[oracle:/oragrid/product/11.2.0.4/oui/bin]#export IGNORE_PREADDNODE_CHECKS=Y  <- the checks were done in the previous step, so skip them here
[oracle:/oragrid/product/11.2.0.4/oui/bin]#./addNode.sh CLUSTER_NEW_NODES={test3}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={test3-vip}
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3997 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes test2,test3 are available
............................................................... 100% Done.
.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oragrid/product/11.2.0.4
New Nodes
Space Requirements
New Nodes
test3
/oragrid: Required 4.69GB : Available 12.98GB
Installed Products
Product Names
Oracle Grid Infrastructure 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Enterprise Manager Common Core Files 10.2.0.4.5
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Server) 11.2.0.4.0
Installation Plugin Files 11.2.0.4.0
Universal Storage Manager Files 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Automatic Storage Management Assistant 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Expat libraries 2.0.1.0.1
Oracle Containers for Java 11.2.0.4.0
Perl Modules 5.10.0.0.1
Secure Socket Layer 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Oracle Net Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Enterprise Manager plugin Common Files 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.4.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.5
Deinstallation Tool 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Cluster Verification Utility Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle LDAP administration 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
Agent Required Support Files 10.2.0.4.5
Parser Generator Required Support Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Net 11.2.0.4.0
PL/SQL 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Cluster Ready Services Files 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Thursday, April 10, 2014 3:08:54 PM KST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, April 10, 2014 3:08:59 PM KST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Thursday, April 10, 2014 3:16:50 PM KST)
. 100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been
registered as the central inventory of this system.
To register the new inventory please run the script at '/oragrid/oraInventory/orainstRoot.sh' with root privileges on nodes
'test3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list
below is followed by a list of nodes.
/oragrid/oraInventory/orainstRoot.sh #On nodes test3
/oragrid/product/11.2.0.4/root.sh #On nodes test3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
The Cluster Node Addition of /oragrid/product/11.2.0.4 was successful.
Please check '/tmp/silentInstall.log' for more details.
==========================================================
Once the remote copy has finished, run the scripts on test3.
[root@test3 /root]# /oragrid/oraInventory/orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /oragrid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /oragrid/oraInventory to dba.
The execution of the script is complete.
[root@test3 /root]# /oragrid/product/11.2.0.4/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oragrid/product/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /oragrid/product/11.2.0.4/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node test1, number 1, and
is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@test3 /root]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
21901180 3387164 17383520 17% /
/dev/sda3 9920624 153740 9254816 2% /oracle
/dev/sda2 14877092 4327884 9781304 31% /oragrid
/dev/sda1 101086 23562 72305 25% /boot
tmpfs 1025332 0 1025332 0% /dev/shm
[root:/root]#crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
ora.LISTENER.lsnr
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
ora.RECO.dg
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
ora.asm
ONLINE ONLINE test1 Started
ONLINE ONLINE test2 Started
ONLINE ONLINE test3 Started
ora.gsd
OFFLINE OFFLINE test1
OFFLINE OFFLINE test2
OFFLINE OFFLINE test3
ora.net1.network
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
ora.ons
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
ora.registry.acfs
ONLINE ONLINE test1
ONLINE ONLINE test2
ONLINE ONLINE test3
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE test1
ora.cvu
1 ONLINE ONLINE test1
ora.oc4j
1 ONLINE ONLINE test1
ora.rac.db
1 ONLINE ONLINE test1 Open
2 ONLINE ONLINE test2 Open
ora.scan1.vip
1 ONLINE ONLINE test1
ora.test1.vip
1 ONLINE ONLINE test1
ora.test2.vip
1 ONLINE ONLINE test2
ora.test3.vip
1 ONLINE ONLINE test3
The binary copy is complete and the CRS resources have been added.
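Beyond the crsctl output above, it is worth confirming cluster membership directly, for example with `olsnodes -n` from the grid home and with cluvfy's post-add stage (`cluvfy stage -post nodeadd -n test3`). A small dry-runnable sketch, assuming `olsnodes -n` prints one "nodename nodenumber" pair per line:

```shell
# Check that every expected node appears in the `olsnodes -n` listing.
# OLSNODES is overridable so the parsing can be exercised without a live
# cluster.
OLSNODES=${OLSNODES:-"olsnodes -n"}
expect_nodes() {
  out=$($OLSNODES) || return 1
  for n in "$@"; do
    echo "$out" | grep -qw "$n" || { echo "missing: $n"; return 1; }
  done
  echo "all nodes present"
}
# usage (with the grid home bin directory in PATH): expect_nodes test1 test2 test3
```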
5. Install the Oracle Database software on the new node
- Run addNode.sh
(Run the following script from a node where the database software is already installed (test1).)
[oracle:/oracle/product/11.2.0.4/oui/bin]#./addNode.sh -silent CLUSTER_NEW_NODES={test3}
Performing pre-checks for node addition
Checking node reachability...
Node reachability check passed from node "test1"
Checking user equivalence...
User equivalence check passed for user "oracle"
WARNING:
Node "test3" already appears to be part of cluster
Pre-check for node addition was successful.
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 3822 MB Passed
Oracle Universal Installer, Version 11.2.0.4.0 Production
Copyright (C) 1999, 2013, Oracle. All rights reserved.
Performing tests to see whether nodes test2,test3 are available
............................................................... 100% Done.
..
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
Source: /oracle/product/11.2.0.4
New Nodes
Space Requirements
New Nodes
test3
/oracle: Required 4.25GB : Available 8.61GB
Installed Products
Product Names
Oracle Database 11g 11.2.0.4.0
Java Development Kit 1.5.0.51.10
Installer SDK Component 11.2.0.4.0
Oracle One-Off Patch Installer 11.2.0.3.4
Oracle Universal Installer 11.2.0.4.0
Oracle USM Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Deconfiguration 10.3.1.0.0
Oracle DBCA Deconfiguration 11.2.0.4.0
Oracle RAC Deconfiguration 11.2.0.4.0
Oracle Database Deconfiguration 11.2.0.4.0
Oracle Configuration Manager Client 10.3.2.1.0
Oracle Configuration Manager 10.3.8.1.0
Oracle ODBC Driverfor Instant Client 11.2.0.4.0
LDAP Required Support Files 11.2.0.4.0
SSL Required Support Files for InstantClient 11.2.0.4.0
Bali Share 1.1.18.0.0
Oracle Extended Windowing Toolkit 3.4.47.0.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
Oracle Real Application Testing 11.2.0.4.0
Oracle Database Vault J2EE Application 11.2.0.4.0
Oracle Label Security 11.2.0.4.0
Oracle Data Mining RDBMS Files 11.2.0.4.0
Oracle OLAP RDBMS Files 11.2.0.4.0
Oracle OLAP API 11.2.0.4.0
Platform Required Support Files 11.2.0.4.0
Oracle Database Vault option 11.2.0.4.0
Oracle RAC Required Support Files-HAS 11.2.0.4.0
SQL*Plus Required Support Files 11.2.0.4.0
Oracle Display Fonts 9.0.2.0.0
Oracle Ice Browser 5.2.3.6.0
Oracle JDBC Server Support Package 11.2.0.4.0
Oracle SQL Developer 11.2.0.4.0
Oracle Application Express 11.2.0.4.0
XDK Required Support Files 11.2.0.4.0
RDBMS Required Support Files for Instant Client 11.2.0.4.0
SQLJ Runtime 11.2.0.4.0
Database Workspace Manager 11.2.0.4.0
RDBMS Required Support Files Runtime 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
Exadata Storage Server 11.2.0.1.0
Provisioning Advisor Framework 10.2.0.4.3
Enterprise Manager Database Plugin -- Repository Support 11.2.0.4.0
Enterprise Manager Repository Core Files 10.2.0.4.5
Enterprise Manager Database Plugin -- Agent Support 11.2.0.4.0
Enterprise Manager Grid Control Core Files 10.2.0.4.5
Enterprise Manager Common Core Files 10.2.0.4.5
Enterprise Manager Agent Core Files 10.2.0.4.5
RDBMS Required Support Files 11.2.0.4.0
regexp 2.1.9.0.0
Agent Required Support Files 10.2.0.4.5
Oracle 11g Warehouse Builder Required Files 11.2.0.4.0
Oracle Notification Service (eONS) 11.2.0.4.0
Oracle Text Required Support Files 11.2.0.4.0
Parser Generator Required Support Files 11.2.0.4.0
Oracle Database 11g Multimedia Files 11.2.0.4.0
Oracle Multimedia Java Advanced Imaging 11.2.0.4.0
Oracle Multimedia Annotator 11.2.0.4.0
Oracle JDBC/OCI Instant Client 11.2.0.4.0
Oracle Multimedia Locator RDBMS Files 11.2.0.4.0
Precompiler Required Support Files 11.2.0.4.0
Oracle Core Required Support Files 11.2.0.4.0
Sample Schema Data 11.2.0.4.0
Oracle Starter Database 11.2.0.4.0
Oracle Message Gateway Common Files 11.2.0.4.0
Oracle XML Query 11.2.0.4.0
XML Parser for Oracle JVM 11.2.0.4.0
Oracle Help For Java 4.2.9.0.0
Installation Plugin Files 11.2.0.4.0
Enterprise Manager Common Files 10.2.0.4.5
Expat libraries 2.0.1.0.1
Deinstallation Tool 11.2.0.4.0
Oracle Quality of Service Management (Client) 11.2.0.4.0
Perl Modules 5.10.0.0.1
JAccelerator (COMPANION) 11.2.0.4.0
Oracle Containers for Java 11.2.0.4.0
Perl Interpreter 5.10.0.0.2
Oracle Net Required Support Files 11.2.0.4.0
Secure Socket Layer 11.2.0.4.0
Oracle Universal Connection Pool 11.2.0.4.0
Oracle JDBC/THIN Interfaces 11.2.0.4.0
Oracle Multimedia Client Option 11.2.0.4.0
Oracle Java Client 11.2.0.4.0
Character Set Migration Utility 11.2.0.4.0
Oracle Code Editor 1.2.1.0.0I
PL/SQL Embedded Gateway 11.2.0.4.0
OLAP SQL Scripts 11.2.0.4.0
Database SQL Scripts 11.2.0.4.0
Oracle Locale Builder 11.2.0.4.0
Oracle Globalization Support 11.2.0.4.0
SQL*Plus Files for Instant Client 11.2.0.4.0
Required Support Files 11.2.0.4.0
Oracle Database User Interface 2.2.13.0.0
Oracle ODBC Driver 11.2.0.4.0
Oracle Notification Service 11.2.0.3.0
XML Parser for Java 11.2.0.4.0
Oracle Security Developer Tools 11.2.0.4.0
Oracle Wallet Manager 11.2.0.4.0
Cluster Verification Utility Common Files 11.2.0.4.0
Oracle Clusterware RDBMS Files 11.2.0.4.0
Oracle UIX 2.2.24.6.0
Enterprise Manager plugin Common Files 11.2.0.4.0
HAS Common Files 11.2.0.4.0
Precompiler Common Files 11.2.0.4.0
Installation Common Files 11.2.0.4.0
Oracle Help for the Web 2.0.14.0.0
Oracle LDAP administration 11.2.0.4.0
Buildtools Common Files 11.2.0.4.0
Assistant Common Files 11.2.0.4.0
Oracle Recovery Manager 11.2.0.4.0
PL/SQL 11.2.0.4.0
Generic Connectivity Common Files 11.2.0.4.0
Oracle Database Gateway for ODBC 11.2.0.4.0
Oracle Programmer 11.2.0.4.0
Oracle Database Utilities 11.2.0.4.0
Enterprise Manager Agent 10.2.0.4.5
SQL*Plus 11.2.0.4.0
Oracle Netca Client 11.2.0.4.0
Oracle Multimedia Locator 11.2.0.4.0
Oracle Call Interface (OCI) 11.2.0.4.0
Oracle Multimedia 11.2.0.4.0
Oracle Net 11.2.0.4.0
Oracle XML Development Kit 11.2.0.4.0
Oracle Internet Directory Client 11.2.0.4.0
Database Configuration and Upgrade Assistants 11.2.0.4.0
Oracle JVM 11.2.0.4.0
Oracle Advanced Security 11.2.0.4.0
Oracle Net Listener 11.2.0.4.0
Oracle Enterprise Manager Console DB 11.2.0.4.0
HAS Files for DB 11.2.0.4.0
Oracle Text 11.2.0.4.0
Oracle Net Services 11.2.0.4.0
Oracle Database 11g 11.2.0.4.0
Oracle OLAP 11.2.0.4.0
Oracle Spatial 11.2.0.4.0
Oracle Partitioning 11.2.0.4.0
Enterprise Edition Options 11.2.0.4.0
-----------------------------------------------------------------------------
Instantiating scripts for add node (Thursday, April 10, 2014 3:32:46 PM KST)
. 1% Done.
Instantiation of add node scripts complete
Copying to remote nodes (Thursday, April 10, 2014 3:32:52 PM KST)
............................................................................................... 96% Done.
Home copied to new nodes
Saving inventory on nodes (Thursday, April 10, 2014 3:48:34 PM KST)
. 100% Done.
Save inventory complete
WARNING:
The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list
below is followed by a list of nodes.
/oracle/product/11.2.0.4/root.sh #On nodes test3
To execute the configuration scripts:
1. Open a terminal window
2. Log in as "root"
3. Run the scripts in each cluster node
==========================================================
Once the remote copy has finished, run the script on test3.
[root@test3 /root]# /oracle/product/11.2.0.4/root.sh
Performing root user operation for Oracle 11g
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /oracle/product/11.2.0.4
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
The Cluster Node Addition of /oracle/product/11.2.0.4 was successful.
Please check '/tmp/silentInstall.log' for more details.
[root@test3 /root]# df -k
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
21901180 3433868 17336816 17% /
/dev/sda3 9920624 4670524 4738032 50% /oracle
/dev/sda2 14877092 4360924 9748264 31% /oragrid
/dev/sda1 101086 23562 72305 25% /boot
tmpfs 1025332 117736 907596 12% /dev/shm
The binary copy is complete. Now create the database instance.
6. Create the Database Instance
Run dbca on test1.
[oracle@test1 /home/oracle]# dbca
Note: if dbca is run on test3 at this point instead, it fails with an error because no
instance is active on the local node.
[oracle@test3 /home/oracle]# sqlplus "/as sysdba"
SQL*Plus: Release 11.2.0.4.0 Production on Thu Apr 10 16:28:48 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
SQL> archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +RECO
Oldest online log sequence 1
Next log sequence to archive 2
Current log sequence 2
As shown, if the existing database is in archivelog mode, the new node's instance also
opens in archivelog mode automatically.
- Creating the instance manually
1. Before creating the instance, copy the files under $ORACLE_HOME/dbs on test1 to
$ORACLE_HOME/dbs on test3 and rename them.
[oracle@test3 /oracle/product/11.2.0.4/dbs]# ls -al
total 40
drwxr-xr-x 2 oracle dba 4096 Apr 11 13:09 .
drwxr-xr-x 79 oracle dba 4096 Apr 10 16:20 ..
-rw-rw---- 1 oracle dba 1544 Apr 11 13:09 hc_RAC3.dat
-rw-r--r-- 1 oracle dba 2851 Apr 10 15:43 init.ora
-rw-r--r-- 1 oracle dba 56 Apr 10 16:16 initRAC3.ora
-rw-r----- 1 oracle dba 1536 Apr 9 11:39 orapwRAC3
Since the database uses an spfile, initRAC3.ora contains only the following:
[oracle@test3 /oracle/product/11.2.0.4/dbs]# cat initRAC3.ora
SPFILE='+DATA/RAC/spfileRAC.ora' # line added by Agent
2. Edit /etc/oratab
+ASM3:/oragrid/product/11.2.0.4:N # line added by Agent
RAC:/oracle/product/11.2.0.4:N # line added by Agent
3. Connect on test1 and create the public redo thread, undo tablespace, and so on
[root:/root]#su - oracle
[oracle:/home/oracle]#sqlplus "/as sysdba"
SQL> alter database add logfile thread 3 group 5 ('+DATA') size 50M, group 6 ('+DATA') size 50M;
SQL> alter database enable public thread 3;
SQL> create undo tablespace undotbs3 datafile '+DATA' size 200M autoextend on;
SQL> alter system set undo_tablespace='undotbs3' scope=spfile sid='RAC3';
SQL> alter system set instance_number=3 scope=spfile sid='RAC3';
SQL> alter system set cluster_database_instances=3 scope=spfile sid='*';
4. Add and start the instance from test3
[oracle@test3 /home/oracle]# srvctl add instance -d RAC -i RAC3 -n test3
[oracle@test3 /home/oracle]# srvctl status database -d RAC -v
[oracle@test3 /home/oracle]# srvctl config database -d RAC
[oracle@test3 /home/oracle]# srvctl start instance -d RAC -i RAC3
[oracle@test3 /home/oracle]# srvctl status database -d RAC -v
SQL> col host_name format a10
SQL> set line 300
SQL> select
INSTANCE_NAME,HOST_NAME,VERSION,STARTUP_TIME,STATUS,ACTIVE_STATE,INSTANCE_ROLE,DATABASE_STATUS from
gv$INSTANCE;
INSTANCE_NAME HOST_NAME VERSION STARTUP_TIME STATUS ACTIVE_ST INSTANCE_ROLE DATABASE_STATUS
---------------- ----------- ----------------- ------------------ ------------ --------- ------------------ -----------------
RAC1 test1 11.2.0.4.0 11-APR-14 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
RAC3 test3 11.2.0.4.0 11-APR-14 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
RAC2 test2 11.2.0.4.0 11-APR-14 OPEN NORMAL PRIMARY_INSTANCE ACTIVE
Now that the instance has been created, let's look at the alert logs to understand how the
work proceeded.
test1's alert log
Thu Apr 10 16:14:55 2014
CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS3" DATAFILE SIZE 210M AUTOEXTEND ON NEXT 5120K MAXSIZE
32767M BLOCKSIZE 8192
Completed: CREATE SMALLFILE UNDO TABLESPACE "UNDOTBS3" DATAFILE SIZE 210M AUTOEXTEND ON NEXT 5120K
MAXSIZE 32767M BLOCKSIZE 8192
ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 SIZE 51200K,
GROUP 6 SIZE 51200K
Thu Apr 10 16:15:23 2014
Completed: ALTER DATABASE ADD LOGFILE THREAD 3 GROUP 5 SIZE 51200K,
GROUP 6 SIZE 51200K
ALTER DATABASE ENABLE PUBLIC THREAD 3
Completed: ALTER DATABASE ENABLE PUBLIC THREAD 3
ALTER SYSTEM SET instance_number=3 SCOPE=SPFILE SID='RAC3';
ALTER SYSTEM SET thread=3 SCOPE=SPFILE SID='RAC3';
ALTER SYSTEM SET undo_tablespace='UNDOTBS3' SCOPE=SPFILE SID='RAC3';
Thu Apr 10 16:15:25 2014
Redo thread 3 internally disabled at seq 1 (CKPT)
Thu Apr 10 16:15:25 2014
ARC3: Archiving disabled thread 3 sequence 1
Archived Log entry 18 added for thread 3 sequence 1 ID 0x90faff1c dest 1:
Thu Apr 10 16:17:06 2014
Reconfiguration started (old inc 4, new inc 6)
List of instances:
1 2 3 (myinst: 1)
Global Resource Directory frozen
Communication channels reestablished
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Thu Apr 10 16:17:06 2014
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
Thu Apr 10 16:17:08 2014
minact-scn: Master returning as live inst:2 has inc# mismatch instinc:4 cur:6 errcnt:0
minact-scn: Master returning as live inst:3 has inc# mismatch instinc:0 cur:6 errcnt:0
Thu Apr 10 16:17:44 2014
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
test2's alert log
Thu Apr 10 16:17:07 2014
Reconfiguration started (old inc 4, new inc 6)
List of instances:
1 2 3 (myinst: 2)
Global Resource Directory frozen
Communication channels reestablished
Thu Apr 10 16:17:07 2014
* domain 0 valid = 1 according to instance 1
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
Thu Apr 10 16:17:07 2014
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
Thu Apr 10 16:17:45 2014
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
test3's alert log
Thu Apr 10 16:16:59 2014
Starting ORACLE instance (normal)
LICENSE_MAX_SESSION = 0
LICENSE_SESSIONS_WARNING = 0
Initial number of CPU is 1
Private Interface 'eth1:1' configured from GPnP for use as a private interconnect.
[name='eth1:1', type=1, ip=169.254.83.153, mac=08-00-27-67-b3-e9, net=169.254.0.0/16, mask=255.255.0.0,
use=haip:cluster_interconnect/62]
Public Interface 'eth0' configured from GPnP for use as a public interface.
[name='eth0', type=1, ip=192.168.56.23, mac=08-00-27-7b-9d-40, net=192.168.56.0/24, mask=255.255.255.0,
use=public/1]
Public Interface 'eth0:1' configured from GPnP for use as a public interface.
[name='eth0:1', type=1, ip=192.168.56.33, mac=08-00-27-7b-9d-40, net=192.168.56.0/24, mask=255.255.255.0,
use=public/1]
Shared memory segment for instance monitoring created
CELL communication is configured to use 0 interface(s):
CELL IP affinity details:
NUMA status: non-NUMA system
cellaffinity.ora status: N/A
CELL communication will use 1 IP group(s):
Grp 0:
Picked latch-free SCN scheme 3
Autotune of undo retention is turned on.
LICENSE_MAX_USERS = 0
SYS auditing is disabled
Starting up:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options.
ORACLE_HOME = /oracle/product/11.2.0.4
System name: Linux
Node name: test3
Release: 2.6.32-300.10.1.el5uek
Version: #1 SMP Wed Feb 22 17:37:40 EST 2012
Machine: x86_64
Using parameter settings in server-side pfile /oracle/product/11.2.0.4/dbs/initRAC3.ora
System parameters with non-default values:
processes = 150
spfile = "+DATA/rac/spfilerac.ora"
memory_target = 800M
control_files = "+DATA/rac/controlfile/current.256.844429151"
control_files = "+RECO/rac/controlfile/current.256.844429153"
db_block_size = 8192
compatible = "11.2.0.4.0"
log_archive_dest_1 = "LOCATION=+RECO"
log_archive_format = "%t_%s_%r.arc"
cluster_database = TRUE
db_create_file_dest = "+DATA"
db_recovery_file_dest = "+RECO"
db_recovery_file_dest_size= 5727M
thread = 3
undo_tablespace = "UNDOTBS3"
instance_number = 3
remote_login_passwordfile= "EXCLUSIVE"
db_domain = ""
remote_listener = "test-scan:1521"
audit_file_dest = "/oracle/admin/RAC/adump"
audit_trail = "DB"
db_name = "RAC"
open_cursors = 300
diagnostic_dest = "/oracle"
Cluster communication is configured to use the following interface(s) for this instance
169.254.83.153
cluster interconnect IPC version:Oracle UDP/IP (generic)
IPC Vendor 1 proto 2
Thu Apr 10 16:17:02 2014
PMON started with pid=2, OS id=30489
Thu Apr 10 16:17:02 2014
PSP0 started with pid=3, OS id=30495
Thu Apr 10 16:17:03 2014
VKTM started with pid=4, OS id=30568 at elevated priority
VKTM running at (1)millisec precision with DBRM quantum (100)ms
Thu Apr 10 16:17:03 2014
GEN0 started with pid=5, OS id=30574
Thu Apr 10 16:17:03 2014
DIAG started with pid=6, OS id=30578
Thu Apr 10 16:17:03 2014
DBRM started with pid=7, OS id=30582
Thu Apr 10 16:17:03 2014
PING started with pid=8, OS id=30586
Thu Apr 10 16:17:03 2014
ACMS started with pid=9, OS id=30590
Thu Apr 10 16:17:03 2014
DIA0 started with pid=10, OS id=30594
Thu Apr 10 16:17:03 2014
LMON started with pid=11, OS id=30598
Thu Apr 10 16:17:03 2014
LMD0 started with pid=12, OS id=30602
* Load Monitor used for high load check
* New Low - High Load Threshold Range = [960 - 1280]
Thu Apr 10 16:17:03 2014
LMS0 started with pid=13, OS id=30606 at elevated priority
Thu Apr 10 16:17:03 2014
RMS0 started with pid=14, OS id=30612
Thu Apr 10 16:17:03 2014
LMHB started with pid=15, OS id=30616
Thu Apr 10 16:17:03 2014
MMAN started with pid=16, OS id=30620
Thu Apr 10 16:17:03 2014
DBW0 started with pid=17, OS id=30624
Thu Apr 10 16:17:03 2014
LGWR started with pid=18, OS id=30628
Thu Apr 10 16:17:03 2014
CKPT started with pid=19, OS id=30632
Thu Apr 10 16:17:03 2014
SMON started with pid=20, OS id=30636
Thu Apr 10 16:17:03 2014
RECO started with pid=21, OS id=30640
Thu Apr 10 16:17:03 2014
RBAL started with pid=22, OS id=30649
Thu Apr 10 16:17:03 2014
ASMB started with pid=23, OS id=30653
Thu Apr 10 16:17:03 2014
MMON started with pid=24, OS id=30657
Thu Apr 10 16:17:03 2014
MMNL started with pid=25, OS id=30663
NOTE: initiating MARK startup
Starting background process MARK
lmon registered with NM - instance number 3 (internal mem no 2)
Thu Apr 10 16:17:03 2014
MARK started with pid=26, OS id=30670
NOTE: MARK has subscribed
Reconfiguration started (old inc 0, new inc 6)
List of instances:
1 2 3 (myinst: 3)
Global Resource Directory frozen
* allocate domain 0, invalid = TRUE
Communication channels reestablished
* domain 0 valid = 1 according to instance 1
Master broadcasted resource hash value bitmaps
Non-local Process blocks cleaned out
LMS 0: 0 GCS shadows cancelled, 0 closed, 0 Xw survived
Set master node info
Submitted all remote-enqueue requests
Dwn-cvts replayed, VALBLKs dubious
All grantable enqueues granted
Submitted all GCS remote-cache requests
Fix write in gcs resources
Reconfiguration complete
Thu Apr 10 16:17:05 2014
LCK0 started with pid=28, OS id=30780
Starting background process RSMN
Thu Apr 10 16:17:06 2014
RSMN started with pid=29, OS id=30789
ORACLE_BASE not set in environment. It is recommended
that ORACLE_BASE be set in the environment
Thu Apr 10 16:17:07 2014
ALTER SYSTEM SET local_listener=' (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.56.33)(PORT=1521))' SCOPE=MEMORY
SID='RAC3';
ALTER DATABASE MOUNT /* db agent *//* {1:12172:726} */
NOTE: Loaded library: System
SUCCESS: diskgroup DATA was mounted
NOTE: dependency between database RAC and diskgroup resource ora.DATA.dg is established
SUCCESS: diskgroup RECO was mounted
NOTE: dependency between database RAC and diskgroup resource ora.RECO.dg is established
Thu Apr 10 16:17:14 2014
Successful mount of redo thread 3, with mount id 2432518228
Database mounted in Shared Mode (CLUSTER_DATABASE=TRUE)
Lost write protection disabled
Create Relation IPS_PACKAGE_UNPACK_HISTORY
Completed: ALTER DATABASE MOUNT /* db agent *//* {1:12172:726} */
Thu Apr 10 16:17:17 2014
ALTER DATABASE OPEN /* db agent *//* {1:12172:726} */
Picked broadcast on commit scheme to generate SCNs
ARCH: STARTING ARCH PROCESSES
Thu Apr 10 16:17:18 2014
ARC0 started with pid=32, OS id=31124
ARC0: Archival started
ARCH: STARTING ARCH PROCESSES COMPLETE
ARC0: STARTING ARCH PROCESSES
Thu Apr 10 16:17:19 2014
ARC1 started with pid=33, OS id=31139
Thu Apr 10 16:17:19 2014
ARC2 started with pid=34, OS id=31145
ARC1: Archival started
ARC2: Archival started
Thu Apr 10 16:17:19 2014
ARC3 started with pid=35, OS id=31149
ARC1: Becoming the 'no FAL' ARCH
ARC1: Becoming the 'no SRL' ARCH
ARC2: Becoming the heartbeat ARCH
ARC3: Archival started
ARC0: STARTING ARCH PROCESSES COMPLETE
Thread 3 opened at log sequence 2
Current log# 6 seq# 2 mem# 0: +DATA/rac/onlinelog/group_6.270.844532115
Current log# 6 seq# 2 mem# 1: +RECO/rac/onlinelog/group_6.279.844532119
Successful open of redo thread 3
MTTR advisory is disabled because FAST_START_MTTR_TARGET is not set
Thu Apr 10 16:17:20 2014
SMON: enabling cache recovery
Thu Apr 10 16:17:26 2014
minact-scn: Inst 3 is a slave inc#:6 mmon proc-id:30657 status:0x2
minact-scn status: grec-scn:0x0000.00000000 gmin-scn:0x0000.00000000 gcalc-scn:0x0000.00000000
[30801] Successfully onlined Undo Tablespace 6.
Undo initialization finished serial:0 start:7766754 end:7769254 diff:2500 (25 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Verifying 11g file header compatibility for tablespace encryption completed
SMON: enabling tx recovery
Thu Apr 10 16:17:27 2014
Database Characterset is AL32UTF8
No Resource Manager plan active
Starting background process GTX0
Thu Apr 10 16:17:31 2014
GTX0 started with pid=37, OS id=31246
Starting background process RCBG
Thu Apr 10 16:17:31 2014
RCBG started with pid=38, OS id=31255
replication_dependency_tracking turned off (no async multimaster replication found)
Starting background process QMNC
Thu Apr 10 16:17:32 2014
QMNC started with pid=39, OS id=31271
Completed: ALTER DATABASE OPEN /* db agent *//* {1:12172:726} */
Thu Apr 10 16:17:37 2014
Starting background process SMCO
Thu Apr 10 16:17:37 2014
SMCO started with pid=30, OS id=31305
Thu Apr 10 16:17:39 2014
Starting background process CJQ0
Thu Apr 10 16:17:39 2014
CJQ0 started with pid=43, OS id=31344
Setting Resource Manager plan SCHEDULER[0x32DC]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Thu Apr 10 16:17:43 2014
Starting background process VKRM
Thu Apr 10 16:17:43 2014
VKRM started with pid=47, OS id=31366
6. Work Summary
Provided the pre-checks are done thoroughly before the node is added, the actual work boils down to running addNode.sh just twice (once from the Grid Infrastructure home and once from the database home), so the procedure is very simple.
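The two addNode.sh runs mentioned above can be sketched as follows. This is a dry-run illustration only: the home paths are assumptions taken from this guide's environment, the node/VIP names come from the hosts table earlier in the document, and the commands are echoed rather than executed.

```shell
# Dry-run sketch of the two addNode.sh invocations (paths are assumptions).
GRID_HOME=/oracle/grid/11.2.0.4       # assumed Grid Infrastructure home
DB_HOME=/oracle/product/11.2.0.4      # DB home as seen in the alert log above

# Step 1: as the grid software owner on an existing node, extend GI to test3.
step1="$GRID_HOME/oui/bin/addNode.sh -silent \"CLUSTER_NEW_NODES={test3}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={test3-vip}\""

# Step 2: as the database software owner, extend the RDBMS home to test3.
step2="$DB_HOME/oui/bin/addNode.sh -silent \"CLUSTER_NEW_NODES={test3}\""

echo "$step1"
echo "$step2"
```

After each run, OUI prompts you to execute the root scripts (root.sh and, for GI, orainstRoot.sh) as root on the new node before continuing.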
Ⅳ. References
1. http://balaoracledba.com/2013/12/24/add-node-to-oracle-rac-11gr211-2-0-3-on-oracle-linux-6-5-2-node-rac-infrastructure/
2. http://www.idevelopment.info/ - Add a Node to an Existing Oracle RAC 11g R2 Cluster on Linux (RHEL 5)
3. If errors occur during the node addition, examine the logs at the following locations:
1) /tmp/OraInstall${TIMESTAMP}
2) $CENTRAL_INVENTORY/logs
3) $TEMP/OraInstall${TIMESTAMP}
4) $TMPDIR/OraInstall${TIMESTAMP}
5) $ORACLE_HOME/cfgtoollogs/oui
installActions${TIMESTAMP}.log
oraInstall${TIMESTAMP}.err
oraInstall${TIMESTAMP}.out
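A quick way to pull up the most recent OUI session log from the locations above can be sketched like this (a minimal sketch assuming the session logs live under /tmp; adjust the path for your $TEMP/$TMPDIR or central inventory):

```shell
# Show the tail of the newest installActions log, if one exists (path is an assumption).
latest=$(ls -t /tmp/OraInstall*/installActions*.log 2>/dev/null | head -1)
if [ -n "$latest" ]; then
    tail -50 "$latest"    # the last lines usually contain the failure summary
else
    echo "no OraInstall logs found under /tmp"
fi
```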