Maximum Availability Clone Architecture


Oracle Best Practices for High Availability of Clone

Prepared by Sachin Deshpande


    Contents

1 Introduction
  1.1 Purpose
  1.2 Background
  1.3 Scope & Application
  1.4 Related Documents
  1.5 Advantages and Disadvantages of High Availability Clone
2 Architecture and Concepts
  2.1 Glossary
  2.2 RAC Architecture
  2.3 Data Guard Architecture
    Log Transport Services
    Log Apply Services
    Data Guard Broker
  2.4 How it Works
    Archived Log Shipping
    Standby Redo Logs
  2.5 Levels of Protection
    Maximum Protection
    Maximum Availability
    Maximum Performance
    Pros and Cons
3 Prerequisites
  3.1 Hardware
  3.2 Network
  3.3 Software
  3.4 Real Application Clusters
    3.4.1 ASM
    3.4.2 Raw Devices
4 Creating a Data Guard Environment
  4.1 Assumptions
  4.2 Procedure - Summary
  4.3 Configure PRIMARY and STANDBY Sites
    4.3.1 Run Cluster Verification Utility (CVU)
    4.3.2 Install Oracle Clusterware 11g Release 2
    4.3.3 Install Oracle Database Software 11g Release 2 and Upgrade Applications Database to 11g Release 2
    4.3.4 Listener Configuration in 11gR2
      4.3.4.1 Listener Configuration in 11gR2 Clusterware
      4.3.4.2 Listener requirements for converting to Oracle RAC
      4.3.4.3 Listener requirements for AutoConfig
    4.3.4 Configure Shared Storage
    4.3.5 Convert 11g Database to Oracle RAC
    4.5.6 Post-Migration Steps
    4.3.7 Enable AutoConfig on Applications Database Tier
      4.3.7.1 Steps to Perform On All Oracle RAC Nodes
      4.3.7.2 Shut Down Instances and Listeners
      4.3.7.3 Update Server Parameter File Settings
    4.3.8 Establish Applications Environment for Oracle RAC
      4.3.8.1 Preparatory Steps
      4.3.8.2 Implement Load Balancing
    4.3.9 Configure Parallel Concurrent Processing
      4.9.1 Check Prerequisites for Parallel Concurrent Processing
      4.9.2 Set Up PCP
      4.9.3 Set Up Transaction Managers
      4.9.4 Set Up Load Balancing of Concurrent Processing Tiers
5 Creating the RAC Standby Database
  5.1 Configure Primary and Standby sites
  5.2 Install Oracle Software on each site
  5.3 Server Names / VIPs
  5.4 Configure Oracle Networking
  5.5 Configure ASM on each Site
  5.6 Prepare Primary Database for Duplication
  5.7 Duplicate the Primary database
  5.8 Create an SPFILE for the Standby Database
  5.9 Create secondary control files
  5.10 Cluster-enable the Standby Database
  5.11 Temporary Files
  5.12 Create Standby Redo Logs
6 Configuring Data Guard using SQL Plus
  6.1 Introduction
  6.2 Configure the Standby Database
  6.3 Configure the Primary Database
  6.4 Set the Protection Mode
  6.5 Enable Redo Transport & Redo Apply
7 Configuring Data Guard using the Data Guard Broker
  7.1 Introduction
  7.2 Broker Configuration Files
  7.3 Enabling the Broker
  7.4 Creating a Broker Configuration
  7.5 Enable the Broker Configuration
  7.6 Broker Customisation
8 Converting the Physical Standby Database to a Snapshot Standby Database
  8.1 Run Oracle Pre-Clone Procedures on the Source System
    Database Tier
    Application Tier
    Application Binaries
  8.2 Copy Application binaries from source to all the application servers
  8.3 Configure Standby server Application Node
  8.5 Configure Primary Application Node
  8.6 Setup Custom .ENV on all Application Tiers
  8.7 Run AutoConfig on all Application Nodes
  8.9 Startup standby application server services and validate application
9 Monitoring
  9.1.1 Introduction
  9.1.2 Log Files
  9.1.3 Fixed Views
10 Management
  10.1 Switchover
    10.1.1 Switchover using SQL Plus
    10.1.2 Switchover using Data Guard Broker
  10.2 Failover
    10.2.1 Failover using SQL Plus
    10.2.2 Failover using Data Guard Broker
  10.3 Forced failover
    10.3.1 Forced Failover using Data Guard Broker
  10.4 Opening a Standby Database Read Only
  10.5 Real Time Apply / Real Time Query
11 Appendix A
  11.1 Assumptions
  11.2 RMAN Backup
    11.2.1 New Backup
    11.2.2 Existing Backup
  11.3 Creating the Standby Database
    11.3.1 Prerequisites
    11.3.2 Procedure
  11.2 Broker Configuration Files
  11.3 Concept of DML, DDL, TCL and DCL


    1 Introduction

    1.1 Purpose

The purpose of this document is to provide a step-by-step procedure for configuring an Oracle E-Business Suite High Availability Application using Oracle 11g Data Guard, also known as a Maximum Availability Clone, for use during critical business issues. The document also covers the configuration of Oracle E-Business Suite with RAC and a RAC standby database using the advanced features of 11g Data Guard. The High Availability Clone is a new approach to cloning.

1.2 Background

Organizations use Oracle databases to store mission-critical information. This information must be kept safe even in the event of a major disaster. In practice, however, major disasters occur rarely compared with business stoppages caused by critical bugs or issues, year-end or month-end closing problems, or E-Business Suite modules not working. In such situations, reproducing the issue in a development instance is difficult or requires a clone of the database and application, and in both cases it is a time-consuming process.

With an Oracle 11g standby database and Data Guard, this time can be reduced to around half an hour, even for very large databases. The process uses the Oracle 11g standby database for cloning purposes. Once the critical issue is resolved, the Maximum Availability Clone database can be brought back to a normal standby role within a short time, and all changes made on production are then applied to the standby database.

The Maximum Availability Clone architecture uses the Snapshot Standby Database feature of Oracle 11g Data Guard. While running as a snapshot standby, the standby database is opened in read/write mode. Users can execute any DML (Data Manipulation Language), DDL (Data Definition Language), DCL (Data Control Language) and TCL (Transaction Control Language) statements and test the issue against the Maximum Availability Clone application.
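Purely as an illustration (Section 8 covers the full procedure), the conversion of a physical standby to a snapshot standby and back is done with commands along the following lines; this sketch assumes managed recovery is cancelled first, the standby is mounted, and a flash recovery area is configured:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
SQL> ALTER DATABASE OPEN;
-- perform testing in read/write mode, then revert:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;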

The Maximum Availability Clone is made possible by the advanced, new features of the Oracle 11g standby database and Data Guard, which require separate Data Guard licences.

This document describes an end-to-end process for creating a High Availability environment that utilizes both Real Application Clusters (Oracle RAC) 11g and Data Guard 11g.

1.3 Scope & Application

This document examines the steps required to create an Oracle 11g RAC database, configure E-Business Suite, install Data Guard, and produce the Oracle E-Business Suite High Availability Application clone. It also covers the procedures to follow should one wish to switch over or fail over from the primary site to the site where the standby database is located.

1.4 Related Documents

- Oracle Data Guard Concepts and Administration 11g Release 1, Sep 2007
- Oracle Data Guard Broker 11g Release 1, Sep 2007
- Maximum Availability Architecture: Business Continuity for Oracle E-Business Suite Release 11i Using Oracle 11g Physical Standby Database [ID 1068913.1]

1.5 Advantages and Disadvantages of High Availability Clone

Advantages

1. A High Availability clone requires less time when a recent production backup exists.
2. Data synchronization with the primary after conversion back to a standby is faster than in 10g.
3. The High Availability clone can be reproduced multiple times, as required.
4. Reduced hardware costs.
5. Real Application Testing.
6. Reduced database licensing costs. (Since a physical standby database can now easily be transformed from its disaster recovery mode into application testing mode, this translates into one less Oracle 11g database that needs to be licensed.)

Disadvantages

1. Reincarnation may take a long time, depending on how much read/write activity has taken place on the snapshot standby incarnation of the database.
2. If the primary database is unavailable when reincarnation is attempted, and one or more of the as-yet unapplied archived redo logs are corrupted or missing on the standby site, reincarnation isn't possible until those logs have been recovered from some other source, perhaps even from tape backups.


2 Architecture and Concepts

2.1 Glossary

Primary Site: This is where the users connect to access the production database.

Standby Site: This is where the standby database is maintained. It is also known as the disaster recovery (DR) site / Oracle E-Business Suite High Availability Application site. Users will only connect to this site in the event of a failover, a switchover, or invocation of the disaster action plan.

Disaster: Non-availability of the primary site, or invocation of the disaster action plan.

2.2 RAC Architecture

Oracle E-Business Suite with Real Application Clusters (Oracle RAC) consists of a number of separate computer systems (nodes) joined together and sharing access to the same disks.

Oracle RAC creates a database on the shared disks using ASM, raw partitions, or a clustered filesystem; the most common implementation in Oracle 11g uses ASM. Each cluster node runs an Oracle instance that is used to interact with the database, and users can connect to any one of the instances to run their applications.


2.3 Data Guard Architecture

Ideally the architecture of the node(s) located at the standby site will be the same as that of the primary, although this is not mandatory. When planning the architecture of the standby system, the following need to be considered (especially if that system does not have the same architecture as the primary system):

- If a failover is required (unplanned outage of the primary site), can the standby site handle the expected workload?
- If the standby site is going to be used while maintenance is being performed on the primary site (planned outage), can it handle the expected workload?
- If the standby site is going to be used for the disaster action plan for high-severity issues, can it handle the expected workload?

Assuming that capacities have been catered for, it is not necessary for:

- the standby site to be clustered;
- the standby site to use raw devices (unless it is a cluster itself).

NOTE: The requirement for having both sites (primary and standby) equipped with identical software versions and/or operating systems has been relaxed, and customers can now have flexible configurations. The currently supported combinations are documented in Metalink Note 413484.1: Data Guard Support for Heterogeneous Primary and Standby Systems in Same Data Guard Configuration. This note will always have the latest support information.

NOTE: If the standby system is a cluster, only one node of that cluster will be able to synchronize the standby database. The remaining nodes will stay idle until the database on the standby system is opened for normal (productive) operation. The Data Guard Broker, discussed later in this document, enables automatic failover of the apply node to a surviving node of the standby cluster in the event that the original apply node fails for whatever reason.


Log Transport Services

The Log Transport Services (also known as Redo Transport Services) propagate changes from the primary database to the standby database in one of two ways: either by shipping archived logs (ARCH), or by transmitting redo data continuously as it is processed by the log writer process (LNS).

Log Apply Services

Log Apply Services (Redo Apply with physical standby databases, SQL Apply with logical standby databases) are responsible for applying the redo information to the standby database from the archived or standby redo log files.

Data Guard Broker

Data Guard Broker is the management and monitoring component that helps create, control, and monitor a primary database protected by one or more physical standby databases. Usage of the Broker is supported in RAC environments.
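For illustration only (Section 7 describes the full procedure), a broker configuration for the ORCL and ORCLSBY1 databases used later in this document might be created with DGMGRL along the following lines, assuming dg_broker_start is set to TRUE on all instances and the TNS aliases from Section 5.4 exist:

DGMGRL> CONNECT sys@ORCL.local
DGMGRL> CREATE CONFIGURATION 'DGCONFIG' AS PRIMARY DATABASE IS 'ORCL' CONNECT IDENTIFIER IS ORCL.local;
DGMGRL> ADD DATABASE 'ORCLSBY1' AS CONNECT IDENTIFIER IS ORCLSBY1.local MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;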


2.4 How it Works

As information is written to the primary database, it is also transmitted to and applied on the standby database. These are essentially two phases that are not directly coupled to each other.

Archived Log Shipping

In this scenario, the primary database instance(s) generate archived logs. As soon as the archived logs are created they are shipped to the standby site (by the ARCH processes), where they are applied to the database, either immediately or after a configured delay.

Standby Redo Logs

The preferred method of transferring redo information is the LNS process, which transmits redo data as the log writer background process flushes the redo buffer and writes to an online redo log file. This results in a continuous redo stream across the network (with no peaks). Best practice requires that the receiving standby site have standby redo logs (SRLs) configured; the incoming redo is then written directly to these logs.

The number of SRLs should equal the total number of online redo logs across all primary instances plus one; for example, 4 RAC instances each with 4 redo logs would give 17 SRLs. Whenever an entry is written to an online redo log on any primary instance, that entry is simultaneously written to one of the standby redo logs. When a log switch is performed on a primary online redo log, a log switch also occurs on the standby database, which means that the current SRL is archived to a local directory on the standby system.
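As a sketch, standby redo logs can be added with statements such as the following; the 512M size is a placeholder and should match the size of the online redo logs, and the statement is repeated per thread until the SRL count described above is reached:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 SIZE 512M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 SIZE 512M;
-- repeat for each thread until the required number of SRLs exists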

The Real Time Apply feature was introduced in Oracle 10g. When Real Time Apply is used, redo is applied to the standby database as it is written to the standby redo log, rather than waiting for a log switch before applying the redo.
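On a physical standby, real time apply is typically enabled by starting managed recovery with the USING CURRENT LOGFILE clause, for example:

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;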

2.5 Levels of Protection

Data Guard can be configured to offer differing degrees of protection. These are summarized below.

Maximum Protection

This mode guarantees zero data loss between the primary and the standby sites for multiple failure scenarios (e.g. network failure and primary site failure). If the last standby site becomes unavailable for any reason, the primary database is shut down.


Maximum Availability

This mode provides zero data loss for single-failure scenarios. If the standby site becomes unavailable (a single failure), this protection mode places the emphasis on availability, and work is allowed to continue on the primary database. If a second failure, for example the loss of the primary database, occurs during this period, then there will be unprotected transactions on the primary database that will be lost. Once the standby database is available again, it will automatically retrieve any data from the primary database's archived log files that has not yet been transferred and will resynchronize without requiring manual intervention.

Maximum Performance

This mode works either by asynchronously shipping redo through the LNS process, or by shipping archived redo logs through the ARCH process from the primary to the standby database as they are generated. A degree of data loss can therefore be experienced after a failure, since there is never a guarantee that the primary and standby databases are in sync.

Pros and Cons

Maximum Protection and Maximum Availability require high-specification network links between the primary and standby sites. As data is written on the primary database, the same information is simultaneously written on the standby site. The added benefit of zero data loss protection must therefore be weighed against the potential impact on primary database performance if the network does not have the required bandwidth or has the high round-trip latency typically found in WAN environments. Maximum Performance has no impact on primary database performance, but the asynchronous nature of redo transport can allow data loss in the event of a failover.
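By way of illustration (Section 6.4 covers this step in detail), the protection mode is set on the primary database once SYNC redo transport has been configured, and can then be verified from v$database:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> SELECT protection_mode, protection_level FROM v$database;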


3 Prerequisites

3.1 Hardware

The standby site will ideally be hosted on hardware able to support the primary site's workload. While this is not mandatory, it ensures that similar service levels will be attained if a failover to the standby site is required.

3.2 Network

In order to support a Maximum Protection or Maximum Availability standby, the primary database will synchronously write redo log information to both the primary and standby site(s). It is therefore essential that the network link between these sites:

- is reliable (no single point of failure);
- has suitable bandwidth (depending on the amount of expected redo);
- has very low latency.

3.3 Software

In previous Oracle releases, both the primary and standby sites were required to run the same version of the database software. This requirement has been substantially relaxed. For current support of mixed environments, please see Metalink Note 413484.1.

3.4 Real Application Clusters

3.4.1 ASM

If the primary database utilizes ASM for disk storage, it is strongly recommended to also configure ASM on the standby site.

NOTE: ASM is the preferred storage management for Oracle RAC databases and is increasingly being used for non-RAC databases as well.

3.4.2 Raw Devices

If the primary database is an Oracle RAC database that utilizes raw devices, and the standby database is a cluster database as well, then raw devices also need to be configured on the standby site.

If, however, the standby site is NOT a cluster, then there is no need for the standby database to use raw devices. In this case the following will make management simpler:

3.4.2.1 Non Raw Devices

1. Create a directory on the standby site to hold the data files.


2. Create the same directory on each of the primary RAC nodes.

3. On the primary RAC nodes, create symbolic links from this directory to the raw devices.

4. When creating the RAC database, use these symbolic links rather than the raw device names.

Advantages

Data Guard can automatically manage database files; that is, when a file is added to the primary database it is automatically added to the standby database as well. If the directory structure is not exactly the same on both sites, then a filename conversion has to be configured using the init.ora parameters db_file_name_convert and log_file_name_convert. With the same structure, a path such as /u01/oradata/dbname/system.dbf is valid regardless of the site. This is also true if Oracle Managed Files (OMF) are being used.
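If the structures do differ, the conversion parameters might look like the following sketch (the paths and database names are illustrative only, using the ORCL/ORCLSBY1 names assumed later in this document):

db_file_name_convert='/u01/oradata/ORCL/','/u01/oradata/ORCLSBY1/'
log_file_name_convert='/u01/oradata/ORCL/','/u01/oradata/ORCLSBY1/'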


    4 Creating a Data Guard Environment

4.1 Assumptions

For the purpose of the instructions below, the following have been assumed:

- Primary host names are europa and callisto.
- Standby host names are dione and hyperion.
- The primary database will be referred to as ORCL.
- Virtual names are europa-vip, callisto-vip, dione-vip and hyperion-vip.
- Both the primary and standby databases use ASM for storage.
- The following ASM disk groups are being used: +DATA (for data) and +FRA (for recovery/flashback).
- The standby database will be referred to as ORCLSBY1.
- Oracle Managed Files will be used.
- ORACLE_BASE is set to /u01/app/oracle.

Where these names are used below they will be highlighted as above.

Verify Kernel Parameters

Detailed hardware and OS requirements are given in Advanced Installation: Oracle Grid Infrastructure for a Cluster Pre-installation Tasks [Linux].

Set up Shared Storage

The storage options are detailed in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) - Configuring Storage (Linux).

Check Account Setup

Configure the oracle account's environment for Oracle Clusterware and Oracle Database 11gR2, as per the Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Configure Secure Shell on All Cluster Nodes

For further details on manual setup of passwordless ssh, see Appendix E in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

4.2 Procedure - Summary

The procedure to create a Data Guard environment is summarized below. Further sections go into detail about how to perform each of the following tasks:


- Configure PRIMARY and STANDBY sites.
- Install Oracle software on each site.
- Configure Oracle networking on each site.
- Configure ASM on both sites.
- Configure listeners on each site.
- Create initialization files (primary/standby).
- Duplicate the primary database to the standby site.
- Create a server parameter file for the standby site.
- Create extra standby control files.
- Create standby redo log files.
- Register the standby database with the cluster.
- Configure the Data Guard Broker.
- Place the standby database into the appropriate protection mode.
- Monitor.

    4.3 Configure PRIMARY and STANDBY Sites

    4.3.1 Run Cluster Verification Utility (CVU)

The installer will automatically run the Cluster Verification tool and provide fix-up scripts for OS issues. However, you can also run CVU prior to installation to check for potential issues.

1. Install the cvuqdisk package as per "Installing the cvuqdisk Package for Linux" in Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

2. Use the following command to determine which pre-installation steps have been completed, and which need to be performed:

<staging_area>/runcluvfy.sh stage -pre crsinst -n <node_list>

For example:

/u01/app/dumpora11g/runcluvfy.sh stage -pre crsinst -n europa-vip
/u01/app/dumpora11g/runcluvfy.sh stage -pre crsinst -n callisto-vip

Substitute <node_list> with the names of the nodes in your cluster, separated by commas.

3. Use the following command to check networking setup with CVU:

<staging_area>/runcluvfy.sh comp nodecon -n <node_list> [-verbose]


4. Use the following command to check operating system requirements with CVU:

<staging_area>/runcluvfy.sh comp sys -n <node_list> -p {crs|database} \
  -osdba osdba_group -orainv orainv_group -verbose

Substitute <node_list> with a comma-separated list of the names of the nodes in your cluster.

    4.3.2 Install Oracle Clusterware 11g Release 2

1. Use the same oraInventory location that was created during the installation of Oracle Applications Release 11i, and make a backup of oraInventory before installation.

2. Start runInstaller from the Oracle Clusterware 11g Release 2 staging area, and install as per your requirements - see Oracle Grid Infrastructure Installation Guide 11g Release 2 (11.2) for Linux.

Note: Many customers will have an existing Grid Infrastructure install tailored to their requirements, and so can skip this step. Those less experienced with clusterware, or who are perhaps doing a test install, should refer to Appendix C for an example walk-through.

3. Confirming Oracle Clusterware function:

   1. After installation, log in as root, and use the following command to confirm that your Oracle Clusterware installation is running correctly:

      <Grid_home>/bin/crs_stat -t -v

   2. Successful Oracle Clusterware operation can also be verified using the following command:

      <Grid_home>/bin/crsctl check crs

    CRS-4638: Oracle High Availability Services is online

    CRS-4537: Cluster Ready Services is online

    CRS-4529: Cluster Synchronization Services is online

    CRS-4533: Event Manager is online

3. Post-Install Actions:

   1. By default, the Global Services Daemon (GSD) is not started on the cluster. To start GSD, change directory to <Grid_home>/bin and issue the following commands:

      srvctl enable nodeapps -g
      srvctl start nodeapps


4.3.3 Install Oracle Database Software 11g Release 2 and Upgrade Applications Database to 11g Release 2

Note: You should take a full backup of the oraInventory directory before starting this stage, in which you will run the Oracle Universal Installer (runInstaller) to carry out an Oracle Database installation with Oracle RAC. In the Cluster Nodes window, verify the cluster nodes shown for the installation, and select all nodes included in your Oracle RAC cluster.

To install the Oracle Database 11g Release 2 software and upgrade the existing database to 11g Release 2, refer to the interoperability note Oracle Applications Release 11i with Oracle Database 11g Release 2, following all the instructions and steps listed there except these:

- Start the new database listener (conditional)
- Implement and run AutoConfig
- Restart Applications server processes (conditional)

Note: Ensure the database software is installed on all nodes in the cluster.

    4.3.4 Listener Configuration in 11gR2

Listener configuration can often be confusing when converting an Oracle E-Business Suite database to use Oracle RAC.

There are two types of listener in 11gR2 Clusterware: the SCAN listener and general database listeners. The SCAN listener provides a single named access point for clients and replaces the use of virtual IP addresses (VIPs) in client connection requests (tnsnames.ora aliases). However, connection requests can still be routed via the VIP name, as both access methods are fully supported.

Note: At present, AutoConfig does not support SCAN listeners. This will be addressed in a future version of AutoConfig.

To start or stop a listener from srvctl, three configuration components are required:

- an Oracle Home from which to run lsnrctl;
- the listener.ora file under the TNS_ADMIN network directory;
- the listener name (defined in listener.ora) to start and stop.

The Oracle Home can either be the Infrastructure home or a database home. The TNS_ADMIN directory can be any accessible directory. The listener name must be unique within the listener.ora file. See Oracle Real Application Clusters Administration and Deployment Guide 11g Release 2 (11.2).

    There are three issues to be considered:


- Listener configuration in 11gR2 Clusterware
- Listener requirements for converting to Oracle RAC
- Listener requirements for AutoConfig

    4.3.4.1 Listener Configuration in 11gR2 Clusterware

In 11gR2, listeners are configured at the cluster level, and all nodes inherit the port and environment settings. This means that the TNS_ADMIN directory path will be the same on all nodes. So to create a new listener, listener_ebs, on port 1522, running from the database ORACLE_HOME and with a user-defined TNS_ADMIN directory, you would execute commands based on the following:

srvctl add listener -l listener_ebs -o <ORACLE_HOME> -p 1522
srvctl setenv listener -l listener_ebs -T TNS_ADMIN=$TNS_ADMIN

When the listener starts, it will run from the database ORACLE_HOME. srvctl manages the listener.ora file across all nodes.
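The listener can then be checked and started across the cluster with standard srvctl commands, for example:

srvctl status listener -l listener_ebs
srvctl start listener -l listener_ebs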

    4.3.4.2 Listener requirements for converting to Oracle RAC

Tools such as rconfig impose additional restrictions on the choice of listener. The listener must be the default listener, and it must run from the Grid Infrastructure home. So the example in 3.3.1 above would need to be changed to:

    srvctl add listener -p 1522

    After conversion, you can reconfigure the listener as required.

    4.3.4.3 Listener requirements for AutoConfig

The current version of AutoConfig creates listener names of the form listener_<hostname>, i.e. different listener names on each node in the cluster. This issue is being tracked via bug 8312164; in a future version of AutoConfig the listener name will be a user-defined context variable.

    4.3.4 Configure Shared Storage

    This document does not discuss the setup of shared storage as there are no Oracle E-Business Suite specific tasks in setting up ASM, NFS (NAS) or clustered storage.

- For ASM, refer to Oracle Database Storage Administrator's Guide 11g Release 2 (11.2).
- For configuring shared storage, refer to Configuring Storage for Grid Infrastructure for a Cluster and Oracle Real Application Clusters (Oracle RAC) (Linux).
- See also Oracle Database Administrator's Guide 11g Release 2 (11.2).


4.3.5 Convert 11g Database to Oracle RAC

There are three options for converting to Oracle RAC, all of which are detailed in Converting to Oracle Real Application Clusters from Single-Instance Oracle Databases:

- DBCA
- rconfig
- Enterprise Manager

All of these will convert an Oracle E-Business Suite database to Oracle RAC; which one to use is a matter of customer choice.

Prerequisites for conversion are as follows:

- A clustered Grid Infrastructure install with at least one SCAN listener address (Section 3.1).
- The default listener running from the Grid Infrastructure home (Section 3.1). The port can either be the default, or specified during the Grid Infrastructure install.
- An 11gR2 ORACLE_HOME installed on all nodes in the cluster (Section 3.2).
- Shared storage: the database files can already be on shared storage (CFS or ASM), or can be moved to ASM as part of the conversion (Section 3.4).

As an example, the steps involved in an Admin-Managed rconfig conversion are detailed below:

1. As the oracle user, navigate to the 11gR2 directory $11gR2_ORACLE_HOME/assistants/rconfig/sampleXMLs, and open the sample file ConvertToRAC_AdminManaged.xml using a text editor such as vi. This XML sample file contains comment lines that provide instructions on how to edit the file to suit your site's specific needs.

2. Make a copy of the sample ConvertToRAC.xml file, and modify the parameters as required for your system. Keep a note of the name of your modified copy.

Note: Study the example file (and associated notes) in Appendix A before you edit your own file and run rconfig.

3. Execute rconfig using the command convert verify="ONLY" before carrying out the actual conversion. This optional but recommended step performs a test run to validate parameters and identify any issues that need to be corrected before the conversion takes place.

Note: Specify the 'SourceDBHome' variable in ConvertToRAC_AdminManaged.xml as the non-RAC Oracle Home (<SOURCE_ORACLE_HOME>). If you wish to specify it as <NEW_ORACLE_HOME>, start the database from the new Oracle Home.

4. Shut down the database instance.

5. If you are not using an spfile for database startup, you must convert to an spfile before running rconfig:

SQL> create spfile='<spfile_location>' from pfile;

6. Move the $SOURCE_ORACLE_HOME/dbs/spfile<SID>.ora for this instance to the shared location.

7. Take a backup of the existing $SOURCE_ORACLE_HOME/dbs/init<SID>.ora, and create a new $SOURCE_ORACLE_HOME/dbs/init<SID>.ora with the following parameter:

spfile='<shared_location>/spfile<SID>.ora'

8. Start up the database instance.

9. Navigate to $11gR2_ORACLE_HOME/bin, and run rconfig, passing the name of your modified XML file:

./rconfig <modified_ConvertToRAC_xml>

This rconfig command will perform the following tasks:

1. Migrate the database to ASM storage (if ASM is specified as the storage option in the configuration XML file).
2. Create database instances on all nodes in the cluster.
3. Configure listener and Net Service entries.
4. Configure and register CRS resources.
5. Start the instances on all nodes in the cluster.

See Appendix D for known issues with database conversion.

4.5.6 Post-Migration Steps

If you have used the above tools to convert to Oracle RAC, they may change some configuration options. Most notably, your database will now be in archivelog mode, regardless of whether it was before the conversion. If you do not want to use archivelog mode, perform the following steps:

    1. Mount but do not open the database, using the command startup mount


2. Use the command alter database noarchivelog to disable archiving.
3. Shut down the database with the shutdown immediate command.
4. Start up the database with the startup command.
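In SQL*Plus the sequence is as follows (in an Oracle RAC environment, shut down all other instances first and run these steps from a single instance):

SQL> STARTUP MOUNT
SQL> ALTER DATABASE NOARCHIVELOG;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP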

For further details of how to control archiving, see Oracle Database Administrator's Guide 11g Release 2 (11.2).

    4.3.7 Enable AutoConfig on Applications Database Tier

    4.3.7.1 Steps to Perform On All Oracle RAC Nodes

1. Ensure that you have applied the Oracle Applications patches listed in the prerequisites section above.

2. Execute $AD_TOP/bin/admkappsutil.pl on the applications tier to generate an appsutil.zip file for the database tier.

3. Copy (e.g. via ftp) the appsutil.zip file to the 11gR2_ORACLE_HOME on the database tier.

4. Unzip the appsutil.zip file to create the appsutil directory in the 11gR2_ORACLE_HOME.

5. Create a <context_name> directory under <11gR2_ORACLE_HOME>/network/admin. Use the new instance name when creating the context directory. For example, if your database name is VISRAC, and you want to use "vis" as the instance prefix, create the directory as vis1_<hostname>.

6. Set the following environment variables:

ORACLE_HOME = <11gR2_ORACLE_HOME>
LD_LIBRARY_PATH = <11gR2_ORACLE_HOME>/lib, <11gR2_ORACLE_HOME>/ctx/lib
ORACLE_SID = <instance_name>
PATH = $PATH:$ORACLE_HOME/bin
TNS_ADMIN = $ORACLE_HOME/network/admin/<context_name>

7. Copy the tnsnames.ora file from $ORACLE_HOME/network/admin to the $TNS_ADMIN directory, and edit the aliases to set SID=<instance_name>.

8. As the APPS user, run the following command on the primary node to de-register the current configuration:

SQL> exec fnd_conc_clone.setup_clean;


9. From the 11gR2 ORACLE_HOME/appsutil/bin directory, create an instance-specific XML context file by executing the command:

adbldxml.pl tier=db appsuser=<APPSuser> appspasswd=<APPSpwd>

10. Set the value of s_virtual_host_name to point to the virtual hostname for the database host, by editing the database context file $ORACLE_HOME/appsutil/<sid>_hostname.xml.

11. From the 11gR2 ORACLE_HOME/appsutil/bin directory, execute AutoConfig on the database tier by running the adconfig.pl script.

12. Check the AutoConfig log file located under $ORACLE_HOME/appsutil/log/<context_name>/.


4.3.7.3 Update Server Parameter File Settings

remote_listener

As per Section 3.3, AutoConfig does not currently support the SCAN listener. To be able to connect using AutoConfig-generated TNS aliases, the remote_listener must be set to the <db_name>_remote AutoConfig alias. This is because AutoConfig connection aliases use the VIP host name rather than the SCAN name in the TNS descriptor. If you do not set the remote_listener as detailed here and instead use the SCAN listener, then although AutoConfig aliases will continue to function, server-side load balancing will not be in effect, as the database listeners will only be aware of the local instance.

cluster_database
cluster_database_instances
undo_tablespace
instance_name
instance_number
thread

These six parameters will all have been set as part of the conversion. The context variables should be updated to be in sync with the database.
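A sketch of setting the remote_listener parameter, assuming the AutoConfig-generated alias is named <db_name>_remote as described above:

SQL> ALTER SYSTEM SET remote_listener='<db_name>_remote' SCOPE=BOTH SID='*';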

    4.3.7.4 Update SRVCTL for New listener.ora

As mentioned in Section 3.3, AutoConfig creates listeners of the form listener_<hostname>. If you intend to manage an Oracle E-Business Suite database with SRVCTL, you must perform the following steps:

1. If you wish to use the port allocated to the default listener, stop and remove the default listener.

2. Add the Oracle E-Business Suite listener:

srvctl add listener -l listener_<sid> -o <ORACLE_HOME> -p <port>
srvctl setenv listener -l listener_<sid> -T TNS_ADMIN=$ORACLE_HOME/network/admin

3. Edit the AutoConfig listener.ora and change LISTENER_<hostname> to LISTENER_<sid> (for example, LISTENER_EBS).

4. On each node, add the AutoConfig listener.ora as an ifile in $ORACLE_HOME/network/admin/listener.ora.

5. Copy the AutoConfig tnsnames.ora to $ORACLE_HOME/network/admin.

6. Add TNS_ADMIN to the database:

srvctl setenv database -d <db_name> -T TNS_ADMIN=$ORACLE_HOME/network/admin


7. Start up the database instances and listeners on all nodes. The database can now be managed via SRVCTL.

To ensure all AutoConfig TNS aliases are set up to recognize all available nodes, re-run AutoConfig on all nodes. For more details of AutoConfig, see Using AutoConfig to Manage System Configurations with Oracle E-Business Suite 11i.

    4.3.8 Establish Applications Environment for Oracle RAC

    4.3.8.1 Preparatory Steps

    Carry out the following steps on all application tier nodes:

1. Source the Oracle Applications environment.

2. Edit SID=<Instance 1> and PORT=<port> in the $TNS_ADMIN/tnsnames.ora file to connect to one of the instances in the Oracle RAC environment.

3. Confirm you can connect to one of the instances in the Oracle RAC environment before running AutoConfig.

4. Execute AutoConfig by running the command:

$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>.xml

For more information on AutoConfig, see My Oracle Support Knowledge Document 165195.1, Using AutoConfig to Manage System Configurations with Oracle E-Business Suite 11i.

5. Check the AutoConfig log file under $APPL_TOP/admin/<context_name>/log/ for errors.

6. Source the environment by using the latest environment file generated.

7. Validate the tnsnames.ora and listener.ora files located in $ORACLE_HOME/network/admin and $IAS_ORACLE_HOME/network/admin. In particular, ensure that the correct TNS aliases have been generated for load balancing and failover, and that all the aliases are defined using the virtual hostnames.

8. Verify the dbc file located at $FND_SECURE. Ensure that the parameter 'APPS_JDBC_URL' is configured with all instances in the environment, and that 'load_balance' is set to 'YES'.


Note: If your database and application tiers are running on the same node, and your concurrent managers do not start, follow the relevant steps in My Oracle Support Knowledge Document 434613.1.

    4.3.8.2 Implement Load Balancing

    Implement load balancing for the Oracle Applications database connections:

1. Follow substeps (1) and (2) below to run the context editor (via the Oracle Applications Manager interface) and modify the values of "Tools OH TWO_TASK" (s_tools_twotask), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).

   1. To load-balance the Forms-based Applications database connections, set the value of "Tools OH TWO_TASK" to point to the <database_name>_806_balance alias generated in the tnsnames.ora file.

   2. To load-balance the Self-Service (HTML-based) Applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the <database_name>_balance alias generated in the tnsnames.ora file.

2. Execute AutoConfig by running the command:

$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<context_file>.xml

3. Restart the Oracle Applications processes using the new scripts generated by AutoConfig.

4. Ensure that the value of the profile option "Application Database ID" is set to the dbc file name generated in $FND_SECURE.

Note: If you are adding a new node to the application tier, repeat this sequence of steps to set up load balancing on the new application tier node.

    4.3.9 Configure Parallel Concurrent Processing

    4.9.1 Check Prerequisites for Parallel Concurrent Processing

1. Confirm that you have more than one Concurrent Processing node in your environment.

2. If you do not, refer to My Oracle Support Knowledge Document 230672.1 for the steps needed to clone the application tier.


    4.9.2 Set Up PCP

1. Execute AutoConfig by running $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh on all concurrent processing nodes.

2. Source the application environment by using $APPL_TOP/APPSORA.env.

3. Check the configuration files tnsnames.ora and listener.ora located under the 8.0.6 ORACLE_HOME at $ORACLE_HOME/network/admin/<context_name>. Ensure that you have information for all the other concurrent nodes in the FNDSM and FNDFS entries.

4. Restart the Applications listener processes on each application node.

5. Log in to Oracle E-Business Suite Release 11i as SYSADMIN and choose the System Administrator responsibility. Navigate to the Install > Nodes screen, and ensure that each node in the cluster is registered.

6. Verify that the Internal Monitor for each node is defined properly, with the correct primary and secondary node specification and work shift details. Also confirm that the Internal Monitor manager is activated from Concurrent > Manager > Administrator, activating the manager as required. For example, Internal Monitor: Host2 must have the primary node as host2 and the secondary node as host3.

7. On all Concurrent Processing nodes, set the $APPLCSF environment variable to point to a log directory on a shared file system.

8. On all Concurrent Processing nodes, set the $APPLPTMP environment variable to the value of the UTL_FILE_DIR entry in init.ora on the database nodes. This value should be a directory on a shared file system.

9. Set the profile option 'Concurrent: PCP Instance Check' to 'OFF' if instance-sensitive failover is not required. Setting it to 'ON' means that concurrent managers will fail over to a secondary application tier node if the database instance to which they are connected goes down.

    4.9.3 Set Up Transaction Managers

1. Shut down the application tier services on all nodes.

2. Shut down all the database instances cleanly in the Oracle RAC environment, using the command:

SQL> shutdown immediate;

3. Edit $ORACLE_HOME/dbs/<sid>_ifile.ora and add the following parameters:


_lm_global_posts=TRUE
_immediate_commit_propagation=TRUE

4. Start the instance on each database node, one by one.

5. Start up the application tier services on all nodes.

6. Log on to Oracle E-Business Suite 11i as SYSADMIN, and choose the System Administrator responsibility. Navigate to Profile > System and change the profile option 'Concurrent: TM Transport Type' to 'QUEUE', and verify that the transaction manager works across the Oracle RAC instances.

7. Navigate to the Concurrent > Manager > Define screen, and set up the primary and secondary node names for the transaction managers.

8. Restart the concurrent managers.

4.9.4 Set Up Load Balancing of Concurrent Processing Tiers

1. Edit the Applications context file (via Oracle Applications Manager), setting the value of Concurrent Manager TWO_TASK to the load balancing alias created in the previous step.

2. On all concurrent processing nodes, run AutoConfig with the command $COMMON_TOP/admin/scripts/<context_name>/adautocfg.sh.


    5 Creating the RAC Standby Database

    5.1 Configure Primary and Standby sites

To make management of the environment (and the configuration of Data Guard) simpler, it is recommended that the primary and standby machines have exactly the same structure, i.e.:

- ORACLE_HOME points to the same mount point on both sites.
- ORACLE_BASE/admin points to the same mount point on both sites.
- ASM disk groups are the same on both sites.

5.2 Install Oracle Software on each site

The Oracle software will be installed from the Oracle media on both sites. This will generally include:

- Oracle Clusterware
- Oracle database executables for use by ASM
- Oracle database executables for use by the RDBMS

5.3 Server Names / VIPs

In Oracle Real Application Clusters 11g, virtual server names and IP addresses are used and maintained by Oracle Cluster Ready Services (CRS). An example of the cluster naming is as follows. Note: Both short and fully qualified names will exist.

Server Name/Alias/Host Entry   Purpose
europa.local                   Public Host Name (PRIMARY Node 1)
callisto.local                 Public Host Name (PRIMARY Node 2)
dione.local                    Public Host Name (STANDBY Node 1)
hyperion.local                 Public Host Name (STANDBY Node 2)
europa-vip.local               Public Virtual Name (PRIMARY Node 1)
callisto-vip.local             Public Virtual Name (PRIMARY Node 2)
dione-vip.local                Public Virtual Name (STANDBY Node 1)
hyperion-vip.local             Public Virtual Name (STANDBY Node 2)

    5.4 Configure Oracle Networking

    5.4.1 Configure Listener on Each Site

Each site will have a listener defined which will be running from the ASM Oracle Home. The following listeners have been defined in this example configuration:

Primary role: Listener_europa, Listener_callisto
Standby role: Listener_dione, Listener_hyperion

5.4.2 Static Registration

Oracle must be able to access all instances of both databases whether they are in an open, mounted, or closed state. This means that these instances must be statically registered with the listener. These entries will have a special name, which will be used to facilitate the use of the Data Guard Broker, discussed later.

    5.4.3 Sample Listener.ora

LISTENER_dione =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dione-vip)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dione)(PORT = 1521)(IP = FIRST))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
      )
    )
  )

SID_LIST_LISTENER_dione =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCLSBY1_dgmgrl.local)
      (SID_NAME = ORCLSBY11)
      (ORACLE_HOME = $ORACLE_HOME)
    )
  )


    5.4.4 Configure TNS entries on each site.

In order to make things simpler, the same network service names will be generated on each site. These service names will be called:

ORCL1_DGMGRL.local - Points to the ORCL instance on europa using the service name ORCL_DGMGRL.local. This can be used for creating the standby database.

ORCL1.local - Points to the ORCL instance on europa using the service name ORCL.local.

ORCL2.local - Points to the ORCL instance on callisto using the service name ORCL.local.

ORCL.local - Points to the ORCL database, i.e. contains all database instances.

ORCLSBY11_DGMGRL.local - Points to the ORCLSBY1 instance on dione using the service name ORCLSBY1_DGMGRL. This will be used for the database duplication.

ORCLSBY11.local - Points to the ORCLSBY1 instance on dione using the service name ORCLSBY1.local.

ORCLSBY12.local - Points to the ORCLSBY1 instance on hyperion using the service name ORCLSBY1.local.

ORCLSBY1.local - Points to the ORCLSBY1 database, i.e. contains all database instances.

listener_<DB_UNIQUE_NAME>.local - A TNS alias entry consisting of two address lines: the first is the address of the listener on Node 1 and the second is the address of the listener on Node 2. Placing both listeners in the address list ensures that the database automatically registers with both nodes. There must be two sets of entries: one for the standby nodes, called listener_ORCLSBY1, and one for the primary nodes, called listener_ORCL.

5.4.4.1 Sample tnsnames.ora (europa)

ORCL1_DGMGRL.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = europa-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL_DGMGRL.local)
    )
  )


ORCL1.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = europa-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL.local)
      (INSTANCE_NAME = ORCL1)
    )
  )

ORCL2.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = callisto-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL.local)
      (INSTANCE_NAME = ORCL2)
    )
  )

ORCL.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = europa-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = callisto-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL.local)
    )
  )

ORCLSBY11_DGMGRL.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dione-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLSBY1_DGMGRL.local)
    )
  )

ORCLSBY12.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = hyperion-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLSBY1.local)
      (INSTANCE_NAME = ORCLSBY12)
    )
  )

ORCLSBY11.local =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dione-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLSBY1.local)
      (INSTANCE_NAME = ORCLSBY11)
    )
  )

ORCLSBY1.local =
  (DESCRIPTION =
    (ADDRESS_LIST =


      (ADDRESS = (PROTOCOL = TCP)(HOST = dione-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = hyperion-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCLSBY1.local)
    )
  )

LISTENERS_ORCL.local =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = europa-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = callisto-vip)(PORT = 1521))
  )

    5.5 Configure ASM on each Site

As this is an Oracle RAC database using ASM, it is strongly recommended that ASM is also configured on the standby site before continuing. To keep things simple it is assumed that the disk groups created on the standby site have the same names as those on the primary.
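As a quick sanity check (illustrative only), the disk groups visible to the standby ASM instances can be listed and compared against the primary:

select name, state, total_mb, free_mb from v$asm_diskgroup;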

    5.6 Prepare Primary Database for Duplication.

Certain initialisation parameters are only applicable when a database is running in either a standby or primary database role. Defining ALL of the parameters on BOTH sites will ensure that, if the roles are switched (Primary becomes Standby and Standby becomes the new Primary), then no further configuration will be necessary. Some of the parameters will however be node-specific; therefore there will be one set of parameters for the Primary site nodes and one for the Standby site nodes.

5.6.1 Primary Site Preparation

The following initialisation parameters should be set on the primary site prior to duplication. Whilst they are only applicable to the primary site, they will be equally configured on the standby site.

dg_broker_config_file1    Point this to a file within the ASM disk group. Note: the file need not exist.
dg_broker_config_file2    Point this to a file within the ASM disk group. Note: the file need not exist.
db_block_checksum         Enables data block integrity checking (OPTIONAL).
db_block_checking         Enables data block consistency checking (OPTIONAL).

As long as the performance implications allow and do not violate existing SLAs, it should be mandatory to have db_block_checksum and db_block_checking enabled.

Additionally, the following must also be configured:

Archive Log Mode
The primary database must be placed into archive log mode.

Forced Logging


The standby database is kept up to date by applying transactions on the standby site which have been recorded in the online redo logs. In some environments that have not previously utilised Data Guard, the NOLOGGING option may have been used to enhance database performance. Use of this feature in a Data Guard protected environment is strongly undesirable.

From Oracle version 9.2, Oracle introduced a method to prevent NOLOGGING transactions from occurring. This is known as the forced logging mode of the database. To enable forced logging, issue the following command on the primary database:

alter database force logging;

Password File
The primary database must be configured to use an external password file. This is generally done at the time of installation. If not, then a password file can be created using the following command:

orapwd file=$ORACLE_HOME/dbs/orapwORCL1 password=oracle

Before issuing the command, ensure that ORACLE_SID is set to the appropriate instance, in this case ORCL1. Repeat this for each node of the cluster.

Also ensure that the initialisation parameter remote_login_passwordfile is set to exclusive. As of Oracle 11.1, the Oracle Net sessions used for redo transport can alternatively be authenticated through SSL (see also section 6.2.1 in the Data Guard Concepts manual).
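A quick, illustrative check on the primary that both prerequisites are in place (run as SYSDBA):

select log_mode, force_logging from v$database;
-- Expected: LOG_MODE = ARCHIVELOG, FORCE_LOGGING = YES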

5.6.2 Standby Site Preparation

Initialisation File
As part of the duplication process a temporary initialisation file will be used. For the purposes of this document this file will be called /tmp/initORCL.ora and will have one line:

db_name=ORCL

Password File
Create a password file on the standby site by copying the password file from the primary site and renaming it to reflect the standby instances. Repeat this for each node of the cluster. Additionally, ensure that the initialisation parameter remote_login_passwordfile is set to exclusive.

Create Audit File Destination
Create a directory on each node of the standby system to hold audit files:

mkdir /u01/app/oracle/admin/ORCLSBY1/adump

Start Standby Instance
Now that everything is in place, the standby instance needs to be started ready for duplication to commence:

export ORACLE_SID=ORCLSBY11

    sqlplus / as sysdba


    startup nomount pfile=/tmp/initORCL.ora

Test Connection
From the primary database, test the connection to the standby database using the command:

sqlplus sys/oracle@ORCLSBY11_DGMGRL as sysdba

This should connect successfully.

    5.7 Duplicate the Primary database.

The standby database is created from the primary database. In order to achieve this, up to Oracle 10g a backup of the primary database needed to be made, transferred to the standby site and restored. Oracle RMAN 11g simplifies this process by providing a new method which allows an on-the-fly duplicate to take place. This is the method used here (the pre-11g method is described in the Appendices). From the primary database invoke RMAN using the following command:

export ORACLE_SID=ORCL1

    rman target / auxiliary sys/oracle@ORCLSBY11_dgmgrl

NOTE: If RMAN returns the error "rman: can't open target" then ensure that $ORACLE_HOME/bin appears first in the PATH, because there is a Linux utility also named rman.

Next, issue the following duplicate command:

duplicate target database for standby from active database

    spfile

    set db_unique_name=ORCLSBY1

    set control_files=+DATA/ORCLSBY1/controlfile/control01.dbf

    set instance_number=1

    set audit_file_dest=/u01/app/oracle/admin/ORCLSBY1/adump

    set remote_listener=LISTENERS_ORCLSBY1

    nofilenamecheck;

    5.8 Create an SPFILE for the Standby Database

By default the RMAN duplicate command will have created an spfile for the instance located in $ORACLE_HOME/dbs. This file will contain entries that refer to the instance names on the primary database. As part of this creation process the database name is being changed to reflect the DB_UNIQUE_NAME for the standby database, and as such the spfile created is essentially worthless. A new spfile will now be created using the contents of the primary database's spfile.

    5.8.1 Get location of the Control File

Before starting this process, note down the value of the control_files parameter from the currently running standby database.
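For example (illustrative only), from a SQL*Plus session on the running standby instance:

show parameter control_files
-- or:
select value from v$parameter where name = 'control_files';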


    5.8.2 Create a textual initialisation file

The first stage in the process requires that the primary database's initialisation parameters be dumped to a text file:

export ORACLE_SID=ORCL1

    sqlplus / as sysdba

create pfile='/tmp/initORCLSBY1.ora' from spfile;

    Copy the created file /tmp/initORCLSBY1.ora to the standby server.

    5.8.3 Edit the init.ora

On the standby server, edit the /tmp/initORCLSBY1.ora file.

NOTE: Change every occurrence of ORCL to ORCLSBY1, with the exception of the parameter DB_NAME, which must NOT change.

Set the control_files parameter to reflect the value obtained in 5.8.1 above. This will most likely be +DATA/ORCLSBY1/controlfile/control01.dbf. Save the changes.
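A minimal shell sketch of that edit (illustrative only; review the result by hand, since db_name must keep its original value and any pre-existing ORCLSBY1 strings would be mangled by a blanket substitution):

# Replace ORCL with ORCLSBY1 everywhere, then put db_name back to its original value
sed -i 's/ORCL/ORCLSBY1/g' /tmp/initORCLSBY1.ora
sed -i "s/^\*\.db_name=.*/*.db_name='ORCL'/" /tmp/initORCLSBY1.ora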

    5.8.4 Create SPFILE

Having created the textual initialisation file, it now needs to be converted to an spfile and stored within ASM by issuing:

export ORACLE_SID=ORCLSBY11

    sqlplus / as sysdba

create spfile='+DATA/ORCLSBY1/spfileORCLSBY1.ora' from pfile='/tmp/initORCLSBY1.ora';

5.8.5 Create Pointer File

With the spfile now being in ASM, the RDBMS instances need to be told where to find it.

Create a file in the $ORACLE_HOME/dbs directory of standby node 1 (dione) called initORCLSBY11.ora. This file will contain one line:

spfile=+DATA/ORCLSBY1/spfileORCLSBY1.ora

Create a file in the $ORACLE_HOME/dbs directory of standby node 2 (hyperion) called initORCLSBY12.ora. This file will also contain one line:

spfile=+DATA/ORCLSBY1/spfileORCLSBY1.ora

Additionally, remove the RMAN-created spfile from $ORACLE_HOME/dbs on standby node 1 (dione).

    5.9 Create secondary control files

When the RMAN duplicate completed, it created a standby database with only one control file. This is not good practice, so the next step in the process is to create extra control files. This is a four-step process:

1. Shut down and start up the database using nomount:


    shutdown immediate;

    startup nomount;

2. Change the value of the control_files parameter to +DATA, +FRA:

alter system set control_files='+DATA','+FRA' scope=spfile;

3. Shut down and start up the database again:

shutdown immediate;

    startup nomount;

4. Use RMAN to duplicate the control file already present:

export ORACLE_SID=ORCLSBY11

    rman target /

restore controlfile from '+DATA/ORCLSBY1/controlfile/control01.dbf';

This will create a control file in both of the ASM disk groups +DATA and +FRA. It will also update the control_files parameter in the spfile. If you wish to have three control files, simply update the control_files parameter to include the original control file as well as the ones just created.

    5.10 Cluster-enable the Standby Database

The standby database now needs to be brought under clusterware control, i.e. registered with Cluster Ready Services. Before commencing, check that it is possible to start the instance on the second standby node (hyperion):

export ORACLE_SID=ORCLSBY12

    sqlplus / as sysdba

    startup mount;

    NOTE: Resolve any issues before moving on to the next steps.

    5.10.1 Ensure Server Side Load Balancing is configured

Check whether the init.ora parameter remote_listener is defined in the standby instances. If the parameter is not present, then create an entry in the tnsnames.ora files (of all standby nodes) which has the following format:

LISTENERS_ORCLSBY1.local =

    (DESCRIPTION =

    (ADDRESS_LIST =

(ADDRESS = (PROTOCOL = TCP)(HOST = dione-vip.local)(PORT = 1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = hyperion-vip.local)(PORT = 1521))

    )

    )

Then set the value of the parameter remote_listener to LISTENERS_ORCLSBY1.local.
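For example (illustrative only), this can be set across all standby instances with:

alter system set remote_listener='LISTENERS_ORCLSBY1.local' scope=both sid='*';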


    5.10.2 Register the Database with CRS

Issue the following commands to register the database with Oracle Cluster Ready Services:

srvctl add database -d ORCLSBY1 -o $ORACLE_HOME -m local -p +DATA/ORCLSBY1/spfileORCLSBY1.ora -n ORCL -r physical_standby -s mount
srvctl add instance -d ORCLSBY1 -i ORCLSBY11 -n dione
srvctl add instance -d ORCLSBY1 -i ORCLSBY12 -n hyperion

    5.10.3 Test

Test that the above has worked by stopping any running standby instances and then starting the database (all instances) using the command:

srvctl start database -d ORCLSBY1

Once started, check that the associated instances are running by using the command:

srvctl status database -d ORCLSBY1

    5.11 Temporary Files

Temporary files associated with a temporary tablespace are automatically created with a standby database.

    5.12 Create Standby Redo Logs

Standby Redo Logs (SRLs) are used to store redo data from the primary database when the transport is configured using the Log Writer (LGWR), which is the default. Each standby redo log file must be at least as large as the largest redo log file in the primary database. It is recommended that all redo log files in the primary database and the standby redo logs in the respective standby database(s) be of the same size.

The recommended number of SRLs is:

(# of online redo logs per primary instance + 1) * # of instances

Whilst standby redo logs are only used by the standby site, they should be defined on both the primary as well as the standby sites. This will ensure that if the two databases change their roles (primary -> standby and standby -> primary) then no extra configuration will be required. The standby database must be mounted (mount as standby is the default) before SRLs can be created. SRLs are created as follows (the size given below is just an example and has to be adjusted to the current environment):

1. sqlplus / as sysdba

    2. startup mount

    3. alter database add standby logfile SIZE 100M;

NOTE: Standby Redo Logs are also created in logfile groups, but be aware that the group numbers must be greater than the group numbers associated with the ORLs in the primary database. With respect to group numbering, Oracle makes no distinction between ORLs and SRLs.

NOTE: Standby Redo Logs need to be created on both databases.
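As a worked example (illustrative only, assuming OMF/ASM file placement): with two RAC instances and three online redo log groups per thread, the formula gives (3 + 1) * 2 = 8 standby redo logs. Assuming the primary's ORLs use groups 1-6, the SRLs could be added with explicit group numbers starting above that range:

alter database add standby logfile thread 1 group 11 size 100M;
alter database add standby logfile thread 1 group 12 size 100M;
alter database add standby logfile thread 1 group 13 size 100M;
alter database add standby logfile thread 1 group 14 size 100M;
alter database add standby logfile thread 2 group 15 size 100M;
alter database add standby logfile thread 2 group 16 size 100M;
alter database add standby logfile thread 2 group 17 size 100M;
alter database add standby logfile thread 2 group 18 size 100M;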


The standby database is now created. The next stage in the process concerns enabling transaction synchronisation. There are two ways of doing this:

1. Using SQL*Plus
2. Using the Data Guard Broker

6 Configuring Data Guard using SQL*Plus

    6.1 Introduction

This section of the document describes the process of setting up a physical standby database environment using SQL*Plus and manually setting database initialisation parameters.

    6.2 Configure the Standby Database

The following initialisation parameters need to be set on the standby database:

Parameter                                    Value (dione)                Value (hyperion)
db_unique_name                               ORCLSBY1
db_block_checking                            TRUE (OPTIONAL)
db_block_checksum                            TRUE (OPTIONAL)
log_archive_config                           dg_config=(ORCL,ORCLSBY1)
log_archive_max_processes                    5
fal_client                                   ORCLSBY11.local              ORCLSBY12.local
fal_server                                   ORCL1.local, ORCL2.local
standby_file_management                      AUTO
log_archive_dest_2                           service=ORCL LGWR SYNC AFFIRM db_unique_name=ORCL VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode)   service=ORCL ARCH db_unique_name=ORCL VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
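A sketch of how these might be applied with SQL*Plus on the standby (illustrative only; values as per the table above, an SPFILE is assumed):

alter system set log_archive_config='dg_config=(ORCL,ORCLSBY1)' scope=both sid='*';
alter system set log_archive_max_processes=5 scope=both sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';
alter system set fal_server='ORCL1.local','ORCL2.local' scope=both sid='*';
alter system set fal_client='ORCLSBY11.local' scope=both sid='ORCLSBY11';
alter system set fal_client='ORCLSBY12.local' scope=both sid='ORCLSBY12';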

    6.3 Configure the Primary Database

The following initialisation parameters need to be set on the primary database:

Parameter                                    Value (europa)               Value (callisto)
db_unique_name                               ORCL
db_block_checking                            TRUE (OPTIONAL)
db_block_checksum                            TRUE (OPTIONAL)
log_archive_config                           dg_config=(ORCL,ORCLSBY1)
log_archive_max_processes                    5
fal_client                                   ORCL1.local                  ORCL2.local
fal_server                                   ORCLSBY11.local, ORCLSBY12.local
standby_file_management                      AUTO
log_archive_dest_2                           service=ORCLSBY1 LGWR SYNC AFFIRM db_unique_name=ORCLSBY1 VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
log_archive_dest_2 (Max. Performance Mode)   service=ORCLSBY1 ARCH db_unique_name=ORCLSBY1 VALID_FOR=(ALL_LOGFILES,PRIMARY_ROLE)
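Similarly, a minimal sketch for the primary side (illustrative only; names as above, Maximum Availability transport shown):

alter system set log_archive_config='dg_config=(ORCL,ORCLSBY1)' scope=both sid='*';
alter system set log_archive_dest_2='service=ORCLSBY1 LGWR SYNC AFFIRM db_unique_name=ORCLSBY1 valid_for=(ALL_LOGFILES,PRIMARY_ROLE)' scope=both sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';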

    6.4 Set the Protection Mode

In order to specify the protection mode, the primary database must be mounted but not opened.

NOTE: The database must be mounted in exclusive mode, which effectively means that all RAC instances but one must be shut down and the remaining instance started with a parameter setting of cluster_database=false.

Once this is the case, the following statement must be issued on the primary site.

If using Maximum Protection mode, use the command:

alter database set standby database to maximize protection;

If using Maximum Availability mode, use the command:

alter database set standby database to maximize availability;

If using Maximum Performance mode, use the command:

alter database set standby database to maximize performance;
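The resulting mode can be verified with an illustrative check:

select protection_mode, protection_level from v$database;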

    6.5 Enable Redo Transport & Redo Apply

Enabling the transport and application of redo to the standby database is achieved by the following:

6.5.1 Standby Site

The standby database needs to be placed into managed recovery mode. This is achieved by issuing the statement:

alter database recover managed standby database disconnect;

Oracle 10gR2 introduced real-time redo apply (SRLs required). Enabling real-time apply is achieved by issuing the statement:

alter database recover managed standby database using current logfile disconnect;


    6.5.2 Primary Site:

Set:

log_archive_dest_state_2=enable

in the init.ora file, or issue via SQL*Plus:

alter system set log_archive_dest_state_2=enable;

7 Configuring Data Guard using the Data Guard Broker

    7.1 Introduction

The Data Guard Broker has a command line interface which can be used to simplify management of whole Data Guard environments. When using the Broker, configuration information is stored within the Broker itself. When the Broker starts (enabled by a database initialisation parameter), it will use a series of ALTER SYSTEM statements to set up transaction synchronisation between the primary and standby sites. The parameters it sets are the same as those mentioned in the SQL*Plus example earlier. It is therefore imperative that database configuration changes are made only via the Broker, not by directly editing initialisation parameters. Failure to obey this rule will result in the Broker overwriting those values.

NOTE: If using Grid Control to manage a Data Guard environment, the Broker must be configured.

    7.2 Broker Configuration Files

The Data Guard Broker uses two files to hold its configuration information. By default these files are located in the $ORACLE_HOME/dbs directory. In a RAC environment this is not appropriate, as all database instances need to have access to the same configuration information. Before continuing with the Broker configuration, make sure that the Broker files are configured such that they point to shared storage (in this case ASM). This can be checked by looking at the values of the parameters dg_broker_config_file1 and dg_broker_config_file2. Appropriate values for these parameters can be found above (5.6.1).

7.3 Enabling the Broker

Before the Broker can be used it must first be enabled. This is done by changing the value of the database initialisation parameter dg_broker_start to true:

alter system set dg_broker_start=true;

NOTE: This needs to be performed on both the primary and the standby site.


    7.4 Creating a Broker Configuration

A Broker configuration is created either using Grid Control or the DGMGRL command line interface. This document uses the latter. Start DGMGRL using the command:

dgmgrl sys/oracle

** NOTE: Do not use / alone because this would cause problems later.

Enter the following to create the Data Guard configuration:

create configuration ORCL_ORCLSBY1 as primary database is ORCL connect identifier is ORCL.local;
add database ORCLSBY1 as connect identifier is ORCLSBY1.local maintained as physical;

    7.5 Enable the Broker Configuration

Once the Broker configuration has been created, it needs to be enabled before it can be used. This is achieved using the following command:

enable configuration;

NOTE: This file appears on all nodes.
NOTE: Replace with the value of DB_UNIQUE_NAME.
NOTE: Replace with the value of ORACLE_SID.
NOTE: Secondary instances are not displayed until the configuration is enabled.
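The state of the configuration can then be checked from DGMGRL, for example:

DGMGRL> show configuration;
DGMGRL> show database verbose ORCLSBY1;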

    7.6 Broker Customisation

This will provide a basic configuration. Once the standby database starts, the Data Guard Broker will automatically put the standby database into managed recovery mode as detailed above.

This basic configuration is not, however, enough to sustain a complete environment. It must be further customised, and this is done by setting Data Guard properties. The following properties should be defined:

dgmgrl sys/oracle

    edit database ORCL set property LogArchiveMaxProcesses=5;

    edit database ORCLSBY1 set property LogArchiveMaxProcesses=5;

    edit database ORCL set property StandbyFileManagement=auto;

    edit database ORCLSBY1 set property StandbyFileManagement=auto;

    7.6.1 Maximum Availability/Protection

Additionally, if using Maximum Availability / Protection mode, the following values need to be set:

edit database ORCL set property LogXptMode=SYNC;

    edit database ORCLSBY1 set property LogXptMode=SYNC;

edit configuration set protection mode as maxavailability | maxprotection;


Configuration of the Oracle E-Biz High Availability Application

A snapshot standby database is a fully updateable standby database, created by converting a physical standby database into a snapshot standby database.

A snapshot standby database receives and archives, but does not apply, redo data from a primary database. The redo data received from the primary database is applied once the snapshot standby database is converted back into a physical standby database, after discarding all local updates made to the snapshot standby database.

A snapshot standby can be created from Enterprise Manager, the Data Guard Broker command line interface (DGMGRL) or from SQL*Plus. This document uses DGMGRL to create the snapshot standby database.

    Converting to Snapshot Standby using DG Broker

On the physical standby database, determine whether Flashback Database is enabled by querying V$DATABASE:

SQL> SELECT FLASHBACK_ON FROM V$DATABASE;   -- returns YES or NO

Invoke DGMGRL, connect as sys/oracle and disable the MRP process:

export ORACLE_SID=ORCLSBY11
dgmgrl
DGMGRL> connect sys/oracle
DGMGRL> edit database ORCLSBY1 set state='APPLY-OFF';


If Flashback Database has not been enabled for the physical standby database, enable it using the ALTER DATABASE command. You will need to shut down ORCLSBY1 and issue STARTUP MOUNT before issuing the ALTER DATABASE command.

SQL> SHUTDOWN IMMEDIATE;

    SQL>STARTUP MOUNT;

    SQL>ALTER DATABASE FLASHBACK ON;

Invoke DGMGRL again and restart the MRP process:

DGMGRL> connect sys/oracle
DGMGRL> edit database ORCLSBY1 set state='APPLY-ON';


8 Converting the Physical Standby Database to a Snapshot Standby Database

To convert the physical standby database to a snapshot standby database, perform the following steps:

    Invoke DGMGRL in your primary database window and connect to the primary database.

Convert the physical standby database to a snapshot standby database by executing the DGMGRL command CONVERT DATABASE ... TO SNAPSHOT STANDBY.

On the primary node:

$ export ORACLE_SID=ORCL1
$ dgmgrl
DGMGRL> connect sys/oracle
DGMGRL> convert database ORCLSBY1 to snapshot standby;

Verify that the database was successfully converted by issuing the SHOW CONFIGURATION command:

DGMGRL> show configuration

    Updating the Databases and Verifying Redo Shipment

    To update the databases, perform the following steps:

To confirm that redo data is being received by the standby database, query V$MANAGED_STANDBY on the snapshot standby database and note the value in the BLOCK# column.
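For example (illustrative only), run the following on the snapshot standby and repeat after generating some redo on the primary; the BLOCK# for the RFS process should increase:

select process, status, thread#, sequence#, block# from v$managed_standby where process = 'RFS';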


    8.1 Run Oracle Pre-Clone Procedures on the Source System

    Run Oracle Pre-Clone on one node in each of the following tiers:

Database
Concurrent Manager
Application Server (if the file system is different from the Concurrent Manager)

    Database Tier

    $ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME

    $ perl adpreclone.pl dbTier

    Review the log file for errors.

    Application Tier

    $ cd $COMMON_TOP/admin/scripts/$CONTEXT_NAME

    $ perl adpreclone.pl appsTier

    Review the log file for errors.

    Application Binaries

Backup of Source Application Tier

tar cvf appl.tar appl

    tar cvf comn.tar comn

    tar cvf ora.tar ora

    8.2 Copy Application binaries from source to all the application servers

1. Transfer the backup files to the target system:

scp *.tar <target_host>:/app/ebs/at


2. Extract the software:

tar xvf appl.tar

    tar xvf comn.tar

    tar xvf ora.tar

3. Rename the source directories to the target names:

mv appl appl

    mv comn comn

    mv ora ora

1. Run AutoConfig on the DB Tier:

$ perl adbldxml.pl tier=db appsuser=apps appspasswd=xxxxxx

2. Make sure all services are up and running on the database server.
3. Remove PROD node information.
4. Execute the following script using the apps account:

exec FND_CONC_CLONE.SETUP_CLEAN;

    8.3 Configure Standby server Application Node

o Run adcfgclone on the Standby Admin / Concurrent Manager Node.
Note: This step is required on the first node only for each shared file system.

export APPL_TOP=/app/ebs/at/appl
cd /app/ebs/at/comn/clone/bin
perl adcfgclone.pl appsTier

    8.5 Configure Primary Application Node

o Run adcfgclone on the Primary Admin / Concurrent Manager Node.
Note: This step is required on the first node only for each shared file system.

export APPL_TOP=/app/ebs/at/appl
cd /app/ebs/at/comn/clone/bin
perl adcfgclone.pl appsTier

    8.6 Setup Custom .ENV on all Application Tiers

o Edit $APPL_TOP/customXXAPPL.env
o Change all references to ORCLXXXXX


o Make an entry for this file in customORCLXXXXX_.env

8.7 Run AutoConfig on all Application Nodes

$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adautocfg.sh appspass="xxxx"

8.9 Start up standby application server services and validate the application

The standby E-Biz application server is now a mirror copy of production. During this time, testing of critical issues/defects can be performed. Any changes made in the PROD instance during this period will not be applied to the standby, and vice versa. Once testing is complete, perform the following steps to return the snapshot standby database and application to a normal standby role.

Application Tier:
Cleanly shut down all application services.

Standby Database Tier:
Stop all database services.

    Converting the Snapshot Standby Database to a Physical Standby Database

To convert the snapshot standby database to a physical standby database, perform the following steps:

Now that you have completed your work on the snapshot standby database, convert the snapshot standby database back to a physical standby database. You should be on the primary server to complete this step.


Verify the status of the standby database by executing the SHOW CONFIGURATION command.


Invoke SQL*Plus in your primary database window. Switch redo log files on the primary database.

Invoke DGMGRL in your standby database window. Stop the MRP process on the physical standby database.

    Open the physical standby database in read-only mode.


Shut down the physical standby database and restart it in MOUNT mode. This changes the physical standby state from read-only to a state ready to receive redo.

    Invoke DGMGRL and restart the MRP process.
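As a hedged sketch, the steps above correspond roughly to the following commands (names as used earlier in this document; adapt to your environment):

# On the primary node, from DGMGRL:
DGMGRL> connect sys/oracle
DGMGRL> convert database ORCLSBY1 to physical standby;
DGMGRL> show configuration;

# On the primary, switch redo log files (SQL*Plus):
SQL> alter system switch logfile;

# On the standby, stop apply, open read-only, then return to MOUNT:
DGMGRL> edit database ORCLSBY1 set state='APPLY-OFF';
SQL> alter database open read only;
SQL> shutdown immediate;
SQL> startup mount;

# Re-enable redo apply:
DGMGRL> edit database ORCLSBY1 set state='APPLY-ON';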


After completing the above steps, all changes made in the standby database during testing are discarded. The standby database is back in sync with the primary database once all pending archived log files have been applied.

    9 Monitoring

    9.1.1 Introduction

Once the configuration has been created, it is essential to check that everything is running smoothly. The following sections identify different ways of monitoring the environment.

    9.1.2 Log Files

When an archive operation occurs it is entered into the alert log. Whenever a log switch occurs on the primary, this will be registered in the alert log. When an archived log is applied on the standby database, this will be registered in the standby instance's alert log.

    9.1.3 Fixed Views

The following fixed views can be used to monitor the Data Guard environment:

Primary Site

V$ARCHIVE_DEST           Describes, for the current instance, all the archived redo log destinations, their current value, mode and status.
V$ARCHIVE_DEST_STATUS    Displays runtime and configuration information for the archived redo log destinations.
V$ARCHIVED_LOG           Displays archived redo log information from the control file, including archived log names.
V$DATABASE               Provides database information from the control file, including the status of the database.
V$LOG                    Contains log file information from the online redo logs.

Standby Site

V$ARCHIVED_LOG           Displays archived redo log information from the control file, including archived log names.
V$DATABASE               Provides database information from the control file, including the status of the database.
V$LOGFILE                Contains information about the online and standby redo logs.

    V$MANAGED_S