
60 Tips in 60 Minutes

Overview

You traveled all this way to the conference to relax, network, vacation, or learn something new. I have been working with Oracle products for a decade now, primarily as a development DBA. In that time, I've come across some valuable goodies I'd like to pass around. I've also run across misunderstandings of Oracle terms and functionality; many of these misconceptions I've had myself. During this presentation, you'll get a boatload of tips and tricks you'll find helpful back at your office. We'll also talk about several important Oracle concepts that need clarification. In this paper, I hope to:

Provide database administrators and database developers with practical tips to make your job easier.

Expose common myths and misunderstandings with useful SQL and explanations. Load you up with so many tips, your head will bust.

The tips are divided into several areas: Application Development, Database Administration, Performance Tuning, Backup & Recovery, and Recovery Manager.


60 Tips in 60 Minutes
Overview
Application Development – (Lesley Stahl) – 12 tips
  Send email from a SQL*Plus session
  Interpolate when an export/import/refresh will finish
  Confirm the Model equals the Schema
  Deploy Schema changes from your Modeling Tool
  Compile all invalid code with one command
  Quickly Load a complicated file into Oracle
  Trigger on a Materialized View (Snapshot) Table
  Export some of the Table rows
  Transportable tablespaces for application testing
  SQL*Plus - Input Truncated to 'x' Characters
  SQL*Plus - Show the Time at the SQL Prompt
  SQL*Plus - Show Instance Parameters
Database Administration – Ed Bradley – 13 tips
  Connecting as Internal in SQL*Plus, Export and Import
  Confirm Your Database Links are Working with one Script
  Find and remove duplicate rows with a query
  Find and remove duplicate rows using constraint exceptions
  Find what each Oracle Process is Doing
  Find the Memory used by Oracle Processes
  Collect Schema Statistics the Easy Way
  Export to a Unix Pipe
  Import from a Unix Pipe
  Manage the Windows tnsnames.ora file location
  Tablespace – temporary local to dictionary
  Make sure NoLogging is not Logging
  Oracle 9i – Set the Default Temporary Tablespace
Database Tuning – (Steve Kroft) – 7 tips
  Load up on Latches
  Tune the Application SQL in the Database
  Find Busy SQL with Statspack
  Tune SQL with SQL*Plus Autotrace
  Trace a Session in Process
  Use IPC for local connections
  Write big files with SQL*Plus versus UTL_FILE
Backup & Recovery – Mike Wallace – 10 tips
  Try This Simple Cold Backup Script
  Try This Simple Hot Backup Script
  Apply an Online Redo Log file to Complete Recovery
  Write Your Control File To Trace
  Create A Database Clone Using An Open Backup
  Create a Standby Database Using an Open Backup
  LogMiner - Find Details of a Data Change
  LogMiner - Perform Capacity Planning
  LogMiner - Find the Time & SCN of a Table Drop
  Backup A Primary Database Via The Standby
Recovery Manager – 15 tips
  Employ Variables in RMAN Scripts
  Configure the Location of RMAN's Snapshot Control File
  Deploy Redundant Recovery Catalogs
  Explore Your Date Options with RMAN
  Make Redundant RMAN Archive Log Backups
  The List Command in a Nutshell
  The Report Command in a Nutshell
  Find RMAN Information from the Target Control File
  Find RMAN Information from the RMAN Catalog Views
  Create a Duplicate database with RMAN
  Create a Standby database with RMAN
  Perform Tablespace Recovery with RMAN
  Automate RMAN Backup Set Removal
  Configure a Retention Policy in RMAN 9i
  Configure Channels on RMAN 9i
General Database Reflections – Andy Rooney
  The Best Oracle Book is…
  The Best Oracle Resource is…
  The Best way to Learn Oracle is…
Extra
  Log your current commit System Change Number
  Remote Diagnostic Agent (RDA) on Metalink
  Roll forward versus roll back
  Monitor RMAN operations
  Configure the Recovery Manager MML


Application Development Tips

Send email from a SQL*Plus session

When you're running SQL in SQL*Plus on a Unix system, you can send yourself an email or a page from the SQL prompt. This might be useful if you are creating an index on a large table and need to be notified when the index is finished, or when a long-running stored procedure completes. You can also include SQL*Plus user variables in the message. Here's how:

1. Use the '!' at the SQL prompt (or the host command) to shell out to the operating system.
2. Issue the mailx command with the string to send and the email address as parameters.
3. Include any SQL*Plus variables in the string parameter.

For example, say your email address is [email protected] and your SkyTel pin is 12345678. You can send a message to your pager by emailing [email protected]. If you want to get an email after creating a large index, you might run this from SQL*Plus:

SQL> define index_name = 'big_index'

SQL> create index &index_name on big_table (id);

SQL> !echo "Created index &index_name" | mailx [email protected]

If you want to receive a page containing database results after a stored procedure call, do something like this:

SQL> VARIABLE bind_result NUMBER;

SQL> execute very_slow_process(:bind_result);

SQL> column user_result new_value user_result noprint;

SQL> select :bind_result user_result from dual;

SQL> !echo "Done very_slow_process: &user_result" | mailx [email protected]

The column command and select from dual allow you to swap a bind variable value into a user variable for the pager message.

Interpolate when an export/import/refresh will finish

When you've got a SQL process that takes a long time, you can project when the work will be done. Depending on the type of work, you can take two statistic samples of the session. From these samples, you can determine the rate of work. Interpolating that rate against the work remaining, you can find the time the job will finish. Session statistics you can use to measure progress are:

'table scan rows gotten', 'bytes received via SQL*Net from client', 'bytes received via SQL*Net from dblink', 'bytes sent via SQL*Net to client', 'bytes sent via SQL*Net to dblink', and so on. Pick a statistic that relates to the work you are doing.

This script can be used to project the finish time of an export of a table. In this script, the total number of rows to be exported is 2298410.

Rem Determine how much time until a long job is finished.

Rem Take two measurements and interpolate the finish time.

Rem Parameters:

Rem Session id of the process to interpolate

Rem Statistic to take a sample of total work to accomplish

Rem Start statistic for the session ** must know this beforehand

Rem Seconds to take the sample

set verify off serveroutput on feedback off;

define vsid = 10;

define vstat = 'table scan rows gotten';

define vtotal = 2298410;

define vstart = 0;

define vseconds = 20;

alter session set nls_date_format = 'HH24:MI:SS MON-DD';

DECLARE

CURSOR c_measure is

SELECT value, sysdate

FROM v$sesstat s, v$statname n

WHERE s.statistic# = n.statistic#

AND n.name = '&vstat'

AND s.sid = &vsid;

t1 DATE; -- First Sample Time

t2 DATE; -- Second Sample Time

m1 NUMBER; -- First Sample Measurement

m2 NUMBER; -- Second Sample Measurement

v_rate NUMBER; -- Rate of work per second

v_finish INTEGER; -- Remaining seconds til finished


BEGIN

OPEN c_measure;

FETCH c_measure INTO m1, t1;

CLOSE c_measure;

DBMS_LOCK.SLEEP(&vseconds); -- Wait between samples

OPEN c_measure;

FETCH c_measure INTO m2, t2;

CLOSE c_measure;

IF (m2 - m1 > 10) THEN

v_rate := (m2 - m1) / ((t2-t1)*24*60*60);

v_finish := (&vtotal - &vstart - m2) / v_rate;

dbms_output.put_line('Sample time: &vseconds');

dbms_output.put_line('Remaining work: ' ||(&vtotal - &vstart - m2)||

' &vstat');

dbms_output.put_line('Moving at:'||to_char(v_rate,'9,999,990.9')||

' per second');

dbms_output.put_line('Finished at... '|| (t2+v_finish/(24*60*60)));

ELSE

dbms_output.put_line('Nothing is happening');

END IF;

END;

/

Confirm the Model equals the Schema

If you have your custom application schema in a modeling tool (Designer, ERwin, etc.), you can confirm that the current application schema matches the model. By regularly comparing the model schema to the actual schema, you can be sure that the database tables, indexes, constraints, stored code and so on exist as you've modeled them. Many modeling tools allow you to compare your model directly to the schema. If yours doesn't, create a new user and deploy your model to that schema, then compare the objects in the production schema to the deployed schema. You can perform the comparison by creating a listing of objects and code using queries from the data dictionary. You can also compare the database schemas using change management software like Oracle Enterprise Manager's Change Management Pack or Quest Software's Schema Manager.
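
If you roll your own comparison from the data dictionary, a minimal sketch follows; MODEL_USER (the schema the model was deployed to) and APP_USER (production) are placeholder names.

Rem Objects in the deployed model schema that are missing from production
SELECT object_type, object_name
  FROM dba_objects WHERE owner = 'MODEL_USER'
MINUS
SELECT object_type, object_name
  FROM dba_objects WHERE owner = 'APP_USER';

Rem Objects in production that do not appear in the deployed model schema
SELECT object_type, object_name
  FROM dba_objects WHERE owner = 'APP_USER'
MINUS
SELECT object_type, object_name
  FROM dba_objects WHERE owner = 'MODEL_USER';

These queries catch missing objects only; comparing column definitions, constraints and code text takes similar queries against DBA_TAB_COLUMNS, DBA_CONSTRAINTS and DBA_SOURCE.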


Deploy Schema changes from your Modeling Tool

When it comes time to change your database schema, use your modeling tool to effect that change. This way you can keep your system model up to date with the database schema, and your model can then serve as accurate, up-to-date documentation of your current schema. How do you deploy changes from your modeling tool when the tool is not sophisticated enough to propagate complex relationships like foreign keys? Combine the modeling tool with a change management tool like Oracle Enterprise Manager's Change Management Pack or Quest Software's Schema Manager.

1. Using the modeling tool, deploy your new schema to a newly created user.
2. Using a change management tool, compare the newly created user schema to the production schema.
3. Using the change management tool, apply the differences from the newly created user schema to the production schema.

Compile all invalid code with one command

When a schema change is made to a table in the database, it invalidates all code and views that depend on that table. The views and code are automatically compiled the next time they are accessed, but that real-time compile takes some time and may not succeed if the schema change prevents the code or view from compiling successfully. You'd be wise to compile any invalid code or views after the schema change.

You can compile an entire schema using an Oracle supplied package procedure or via an Oracle supplied script.

The Oracle supplied package named DBMS_UTILITY has a procedure named COMPILE_SCHEMA. This procedure will compile all stored code, views etc for the schema provided.

The best way to compile all database objects that are invalid is to use a script in the $ORACLE_HOME/rdbms/admin directory named utlrp.sql. This script finds all objects in the data dictionary that are invalid and compiles them. This script is typically mentioned in patch notes but you can use it any time a schema change occurs.
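
As a sketch, either approach looks like this from SQL*Plus (SCOTT is a placeholder schema name):

Rem Compile everything that belongs to one schema
SQL> execute DBMS_UTILITY.COMPILE_SCHEMA('SCOTT');

Rem Or compile every invalid object in the database (run as SYSDBA)
SQL> @?/rdbms/admin/utlrp.sql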

Quickly Load a complicated file into Oracle

If you need to load file data into a database, you can use SQL*Loader to accomplish this feat. But if the file is complicated, you'll have to make SQL*Loader work hard with WHEN clauses and multiple record types; most data files do not use a single format for every line. To get the data in the file into the database quickly, try this:

1. Create a simple table with two columns: the line number and a large text field.
2. Create a simple SQL*Loader control file to load all the text of each line into this table.
3. Create views to read from the table based on the format of the file.

The views can use functions like SUBSTR to dissect each line according to the file format.
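
Here is a minimal sketch of the idea; the table, file, control file and view names are all hypothetical, and the SUBSTR positions would follow your own file layout.

Rem Staging table: one row per line of the file
CREATE TABLE file_lines (
  line_no NUMBER,
  txt     VARCHAR2(4000));

-- load_lines.ctl: load every line of data.txt into the staging table.
-- RECNUM supplies the line number; the whole line lands in TXT.
LOAD DATA
INFILE 'data.txt'
INTO TABLE file_lines
(line_no RECNUM,
 txt POSITION(1:4000) CHAR)

Rem One view per record format dissects the staging rows
CREATE OR REPLACE VIEW file_header_v AS
SELECT line_no,
       SUBSTR(txt, 1, 10)  record_code,
       SUBSTR(txt, 11, 30) customer_name
  FROM file_lines
 WHERE SUBSTR(txt, 1, 1) = 'H';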

For more details: see “SQL*Load and Views for Quick file Review” – December 2001 Element K Journals

Trigger on a Materialized View (Snapshot) Table

You may be called upon to move data, as it changes, from one database to another. You could write an enormous amount of code to track changes in data, but Oracle replication technology can take care of that for you. Try this trick to move and format data from one database to another:

1. Create a materialized view log on the source table.
2. Create a database link on the target database for the source database.
3. Create a materialized view of the source table on the target database.
4. Create a stored procedure/package that processes a row from the materialized view table.
5. Create a trigger on the materialized view table to call the stored procedure/package.

During refreshes of the materialized view, the stored procedure will be called as the trigger fires. You can control the work performed by the stored procedure/package to place data in another table as you’d like it formatted or manipulated. Note: do not change the data in the materialized view table.
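
A rough sketch of the pieces (the table, database link, column and package names are all hypothetical):

Rem On the source database
CREATE MATERIALIZED VIEW LOG ON someuser.orders;

Rem On the target database
CREATE DATABASE LINK source_db
  CONNECT TO someuser IDENTIFIED BY tiger USING 'SOURCE';

CREATE MATERIALIZED VIEW orders_mv
  REFRESH FAST
  AS SELECT * FROM someuser.orders@source_db;

Rem The trigger hands each refreshed row to your own packaged procedure
CREATE OR REPLACE TRIGGER orders_mv_trg
  AFTER INSERT OR UPDATE OR DELETE ON orders_mv
  FOR EACH ROW
BEGIN
  process_orders_pkg.handle_row(NVL(:new.order_id, :old.order_id));
END;
/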

For more information: see presentation “Snapshot + Snapshot log + Trigger = Datamart”. IOUG 2001.

Export some of the Table rows

When exporting the rows of a table, you don't have to dump all the rows. Add a WHERE clause to the export via the QUERY parameter. If you want to export only some of the rows of a table, do it like this:

# Export a single table with a query

USERID = system/manager

FILE = some_table.dmp

TABLES = (SOMEUSER.SOME_TABLE)

QUERY = "WHERE create_date < SYSDATE"


Transportable tablespaces for application testing

The transportable tablespace feature has a number of valuable uses. One nice development use for this database feature is to assist in application testing. Here's how:

1. Isolate the application data to a set of tablespaces.
2. Configure the application data so that it contains an appropriate test baseline.
3. Make the tablespaces read-only and export the metadata about those tablespaces to a file.
4. Run tests on the application data.
5. Reset the application data by removing the application data tablespaces and importing the baseline tablespaces back into the database.
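
A rough sketch of the metadata export and import, using parameter files like the earlier export example (the tablespace, datafile and file names are placeholders; the tablespaces stay read-only while the metadata is exported and the datafiles are copied):

SQL> ALTER TABLESPACE app_data READ ONLY;

# tts_exp.par
USERID = "sys/change_on_install as sysdba"
TRANSPORT_TABLESPACE = Y
TABLESPACES = (APP_DATA)
FILE = app_data_meta.dmp

LINUX> exp parfile=tts_exp.par

To reset the test data later, drop the tablespace, copy the baseline datafiles back into place, and plug the tablespace in again:

SQL> DROP TABLESPACE app_data INCLUDING CONTENTS;

# tts_imp.par
USERID = "sys/change_on_install as sysdba"
TRANSPORT_TABLESPACE = Y
FILE = app_data_meta.dmp
DATAFILES = ('/u01/oradata/PROD/app_data01.dbf')

LINUX> imp parfile=tts_imp.par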

SQL*Plus - Input Truncated to 'x' Characters

Have you run a script and seen output on your SQL*Plus screen that says "Input Truncated To 'x' Characters"? Do you get tired of it like I do? This message occurs when you run a script file that does not have a carriage return (CR) at the end of the last line of the script. To avoid this message, go to the end of the last line in your script, hit delete, and then insert a CR. Save the script and run it again.

SQL*Plus - Show the Time at the SQL Prompt

Want to know how long a SQL query ran, or how long a PL/SQL job took to complete? You could select and format the system date before and after the operation you want to time, but an easier way is to set the SQL*Plus TIME system variable. If you type this at the SQL*Plus prompt:

set time on

then each prompt will display the current time. You can figure out how long your procedure ran by subtracting the two times displayed at the SQL*Plus prompt.

14:30:10 SQL> execute mylongprocedure();

PL/SQL procedure successfully completed.

14:50:10 SQL >

Your procedure took 20 minutes to run.

SQL*Plus - Show Instance Parameters

You often need to see the database parameter settings for your session or for the entire database. To do so, you can select from the V$PARAMETER or V$NLS_PARAMETERS dynamic views. A quicker way to find a parameter setting is to use the SQL*Plus show parameter command. This command returns all parameters whose names contain the letters you submit to it. For example, to see all the parameters containing the word processes, type this:


SQL> show parameter processes

NAME TYPE VALUE

------------------------------------ ------- ------

aq_tm_processes integer 0

db_writer_processes integer 1

job_queue_processes integer 0

log_archive_max_processes integer 1

processes integer 400

Database Administration Tips

Connecting as Internal in SQL*Plus, Export and Import

It is time to get used to controlling your database from SQL*Plus rather than Server Manager. Yet many DBAs are still using Server Manager for tasks like startup and shutdown.

To connect as internal in SQL*Plus, connect AS SYSDBA as any user who possesses the SYSDBA privilege. You cannot provide the password on the command line; you must use the /nolog option:

LINUX> sqlplus /nolog

SQL> Connect sys/change_on_install as sysdba

To connect as internal in Export or Import, place your connect string in quotation marks within a parameter file like this:

USERID = "sys/pinnacle as sysdba"

Confirm Your Database Links are Working with one Script

Your database uses database links to communicate with other databases. Sometimes the database links become inactive because:

1. Net8 configuration files change
2. User ids are altered
3. Databases are upgraded

You can confirm all your database links are active by running this script. It selects from the DUAL table on the database that the link connects to. If the link works, you get a message that the link is active. Otherwise, you'll get an error message.


Run this from SQL*Plus:

prompt Enter the connection information to the database you want to check

connect &1/&2@&3;

prompt Checking the Public database links

set feedback off heading off term off pagesize 0

prompt Create a file of database link select statements

spool /tmp/links.sql

Prompt Prompt Checking Database Links .........

SELECT 'select '''||owner||'.'||db_link||' is Active!''

FROM dual@'||db_link||';'||chr(10)||'rollback;'

FROM dba_db_links;

spool off;

set term on

prompt Testing the database links

@/tmp/links.sql

prompt Testing Done

Find and remove duplicate rows with a query

If you try to place a primary key on a table, you may find that you have duplicate key values in the table. To find those duplicates, you can either run a query or enable a constraint with an exceptions table. Define two SQL*Plus user variables: one is the table with the duplicate data, the other is the list of key columns of the table:

define vtable = test

define vkeys = 'a, b'

This query will find the duplicate rows using the user variables.

Rem Find the duplicate rows with a query

SELECT &vkeys , max(rowid)

FROM &vtable

GROUP BY &vkeys

HAVING COUNT(*) > 1;


This SQL will delete one of the duplicate rows in each group; the row deleted is the one with the maximum rowid value.

Rem Remove the duplicate rows with a query

DELETE FROM &vtable

WHERE (&vkeys , rowid)

IN (SELECT &vkeys , max(rowid)

FROM &vtable

GROUP BY &vkeys

HAVING COUNT(*) > 1);

Find and remove duplicate rows using constraint exceptions

Define two SQL*Plus user variables: one is the table with the duplicate data, the other is the list of key columns of the table:

define vtable = test

define vkeys = 'a, b'

You can also use the exceptions feature of adding a table constraint. Create an exceptions table with the Oracle supplied script named utlexcpt.sql in the $ORACLE_HOME/rdbms/admin directory. Then add a constraint using the user defined variables like this:

ALTER TABLE &vtable add

(CONSTRAINT &vtable._pk PRIMARY KEY (&vkeys.)

USING INDEX STORAGE (initial 5K) EXCEPTIONS INTO exceptions);

Duplicate rows in your table will be inserted into the EXCEPTIONS table, even though the constraint addition fails. To find the duplicate rows, join the table to the exceptions table like this:

Rem Find the duplicate rows using exceptions

SELECT &vtable..*

FROM &vtable., exceptions

WHERE &vtable..rowid = exceptions.row_id;

Delete rows from your table using the exceptions table and the user defined variables like this:

Rem Remove the duplicate rows using exceptions

DELETE FROM &vtable

WHERE (&vkeys , rowid)


IN (SELECT &vkeys , max(rowid)

FROM &vtable

WHERE rowid IN

(SELECT &vtable..rowid

FROM &vtable., exceptions

WHERE &vtable..rowid = exceptions.row_id)

GROUP BY &vkeys

HAVING COUNT(*) > 1);

Find what each Oracle Process is Doing

Here is a handy script that correlates each operating system process with its Oracle process. You'll also be able to see the most recent SQL run by that process. Therefore, if you find an OS process consuming significant resources in a monitoring tool like top, you can run this SQL, correlate the OS process to the Oracle session identifier, and see the most recent SQL executed.

set linesize 180 pagesize 1000

column pu format a8 heading 'O/S|Login|ID' justify left

column su format a8 heading 'Oracle|User ID' justify left

column stat format a8 heading 'Session|Status' justify left

column ssid format 999999 heading 'Oracle|Session|ID' justify right

column sser format 999999 heading 'Oracle|Serial|No' justify right

column spid format 999999 heading 'UNIX|Process|ID' justify right

column txt format a28 heading 'Current Statement' justify center word_wrapped

spool pid_sid.lst

SELECT p.username pu,

s.username su,

s.status stat,

s.sid ssid,

s.serial# sser,

lpad(p.spid,7) spid,

substr(sa.sql_text,1,540) txt

FROM v$process p, v$session s, v$sqlarea sa

WHERE p.addr=s.paddr


AND s.username is not null

AND s.sql_address=sa.address(+)

AND s.sql_hash_value=sa.hash_value(+)

ORDER BY 1,2,7

/

spool off

Find the Memory used by Oracle Processes

How much memory is Oracle using on the Unix host server? Oracle consumes memory in the user processes and in the shared memory area (the SGA). To find the amount of memory used by the SGA, type one command at the SQL*Plus prompt:

SQL> show sga

Total System Global Area 130447388 bytes

Fixed Size 75804 bytes

Variable Size 55074816 bytes

Database Buffers 75218944 bytes

Redo Buffers 77824 bytes

Finding the amount of memory used by the Oracle shadow processes can be a little trickier. Operating system commands like top and ps don't reflect the actual memory used by the Oracle shadow processes. These commands typically include shared memory structures in the memory calculation. Therefore, the processes appear to be taking much more memory than they actually do.

Using a Unix utility named pmap, we can see the process memory allocation for Oracle processes. Where top or ps reported large memory use, most of the memory reported was actually the shared memory mapping. Look for the memory mappings named heap, stack and perhaps text to get the real memory required for a specific Oracle shadow process.
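
For example, run pmap against the shadow process id; the extended listing shown below came from a Solaris system (plain pmap output differs slightly by platform):

UNIX> pmap -x 21056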

21056: oraclebanklink (LOCAL=NO)

Address Kbytes Resident Shared Private Permissions Mapped File

019FE000 672 616 - 616 read/write/exec [ heap ]

FFBDA000 88 88 - 88 read/write/exec [ stack ]


-------- ------ ------ ------ ------

total Kb 991032 979976 19520 960456

This script will tell you how much PGA and UGA memory each Oracle session has used and sums up the memory for you:

prompt Displaying User Memory Usage

set pagesize 66 linesize 80

column amount format 9,999,999,999,999 heading "Total Memory"

column memory format a30 heading "Memory Type"

break on report

compute Sum of amount on report

SELECT sid, n.name|| '('||s.statistic#||')' memory, value amount

FROM v$sesstat s , v$statname n

WHERE s.statistic# = n.statistic#

AND n.name like '%ga memory%'

ORDER BY value;

SID Memory Type Total Memory

---------- ------------------------------ ------------------

122 session pga memory(20) 529,088

122 session pga memory max(21) 529,088

7 session pga memory(20) 529,684

7 session pga memory max(21) 529,684

143 session pga memory(20) 1,028,160

143 session pga memory max(21) 1,028,160

2 session pga memory(20) 1,876,164

2 session pga memory max(21) 1,876,164

------------------

sum 75,747,640


For information on operating system and Oracle process memory, see these Metalink notes: 1433.996, 1874.997, 174555.1, 15566.1, 2081081.6 and 2059705.6.

Collect Schema Statistics the Easy Way

The Oracle Server optimizer requires table statistics to make accurate cost estimations when executing SQL. You can analyze your tables the hard way with a SQL*Plus script like this:

set newpage 0 space 0 linesize 200 pagesize 0

set echo off feedback off heading off termout off verify off

spool analyze.sql

SELECT 'analyze table '|| table_name || ' estimate statistics;'

FROM user_tables;

spool off

set termout on feedback on

start analyze.sql

Or you can perform your analyze the easy way from SQL*Plus like this:

execute DBMS_STATS.GATHER_SCHEMA_STATS (USER);

or execute dbms_utility.analyze_schema('PERFSTAT','COMPUTE');

For more information on how dbms_stats or dbms_utility can make your DBA world less ‘analytical’, read about it in Oracle8i Supplied PL/SQL Packages Reference.

Export to a Unix Pipe

Oracle Export and Import in version 8 can read a file contained in many pieces with the FILESIZE parameter. This way you can export a large amount of data into several equally sized files. Similar functionality is available with some nifty Unix features in older versions of Export and Import: using a Unix pipe and the split command, you can accomplish much the same thing. For example, if you have a 2 GB file size limit on your operating system, you can export Oracle data larger than that like this in C shell:

setenv UID system/manager

setenv FN exp.`date +%j_%Y`.dmp.gz


setenv PIPE /backup/exp_tmp.dmp

setenv ORACLE_SID PROD

# Make a Unix pipe

mknod $PIPE p

# Split the pipe and use a function on the pipe in the background

split -b 500m $PIPE $FN. &

# Write the export to the pipe, send output to a log file

exp userid=$UID file=$PIPE tables=big_table direct=y>>& big_table.log

# Remove the pipe

rm –f $PIPE

Import from a Unix Pipe

If you've created an export file using the pipe and split technique above, you can import that file like this:

setenv UID sys/change_on_install

setenv FN exp.`date +%j_%Y`.dmp.gz

setenv PIPE /backup/exp_tmp.dmp

setenv ORACLE_SID PROD

# Make a Unix pipe

mknod $PIPE p

# Read the file in the background to the pipe

cat `echo $FN.* | sort` > $PIPE &

# Import from the pipe file as the pipe is being created

imp userid=$UID file=$PIPE full=y commit=y >>& imp_big_table.log

# Remove the pipe

rm –f $PIPE

Manage the Windows tnsnames.ora file location

If you are having trouble with your database connection, you may need to examine which tnsnames.ora file you are using. If you have many tnsnames files on your machine, you may not be reading the one you think you are.


You can have a local version and a system version. When a client connection is requested, the service name is first searched for in the local version of this configuration file. If the service is not found in the local version, it is searched for in the system version. The system version is located in the $ORACLE_HOME\NET80\ADMIN directory for Oracle 8.0 software, and in the $ORACLE_HOME\NETWORK\ADMIN directory for Oracle 7, 8i and 9i software. A local version can exist in the current working directory where the application is running. For example, if on Windows NT you start SQL*Plus in $ORACLE_HOME\BIN, then Net8 looks for a local TNSNAMES.ORA in ORANT\BIN before looking for the system version. The connection hassles increase when you have multiple local files in various directories. Make sure only one TNSNAMES.ORA file exists and that it is located in a single directory. On Unix platforms, setting the environment variable TNS_ADMIN=<location_of_file> directs Net8 to look for the file in that location. On Windows NT, TNS_ADMIN is located in the registry. Place the Net8 configuration files in a single directory, then set the TNS_ADMIN environment variable to point to that one directory.
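
For example (the directory is illustrative):

# Unix (C shell): point Net8 at one directory of configuration files
setenv TNS_ADMIN /var/opt/oracle

On Windows NT, create a TNS_ADMIN value pointing at your chosen directory, either as a system environment variable or under the Oracle key in the registry.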

Tablespace – temporary local to dictionary

If you use a permanent locally managed tablespace as a user's temporary tablespace for database disk sorting, you might get stuck. If that tablespace fills, your SQL or application will return an ORA-3212. This error is raised whenever Oracle fails to allocate a temporary segment in a permanent locally managed tablespace. The SQL could be an index creation, or a SELECT with an ORDER BY, a GROUP BY or a table join, and so on. The problem is that Oracle does not allow you to create temporary segments in a permanent locally managed tablespace. Therefore, you have to make Oracle create them somewhere else (option 1 below), switch the tablespace to become TEMPORARY (option 2), or switch it to become DICTIONARY managed (option 3).

1. Change the temporary tablespace for the user to a tablespace that is not a permanent, locally managed tablespace:
   SQL> alter user SCOTT temporary tablespace <another temp tablespace>;

2. Rebuild the tablespace as TEMPORARY. Since it is a locally managed tablespace, the only way to rebuild it is to drop it and recreate it (remember that you lose all segments stored in the tablespace when you drop it with the INCLUDING CONTENTS clause).

3. Switch the tablespace to DICTIONARY managed by calling the dbms_space_admin.tablespace_migrate_from_local() procedure:
   SQL> exec dbms_space_admin.tablespace_migrate_from_local('TEMP');

Make sure NoLogging is not Logging

Even when you set NOLOGGING on a database object like a table, partition, index or tablespace to reduce redo log activity, redo is still generated normally for that object during ordinary SQL operations. Redo is skipped for objects set to NOLOGGING only when the operation is one of these:


Direct load using SQL*Loader
Direct-load INSERT
CREATE TABLE ... AS SELECT
CREATE INDEX
ALTER TABLE ... MOVE or SPLIT PARTITION
ALTER INDEX ... SPLIT or REBUILD PARTITION
ALTER INDEX ... REBUILD

When you prevent normal redo from being written, you trade better performance for data recoverability.
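
A small sketch of the difference (the table names are placeholders):

Rem Mark a table NOLOGGING, then build a copy with a direct-path operation
ALTER TABLE big_history NOLOGGING;

Rem Generates minimal redo because of the NOLOGGING attribute
CREATE TABLE big_history_copy NOLOGGING
  AS SELECT * FROM big_history;

Rem Ordinary (conventional path) DML is still fully logged
INSERT INTO big_history_copy SELECT * FROM big_history;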

Oracle 9i – Set the Default Temporary Tablespace

Oops, I did it again. I forgot to specify the temporary tablespace for a user on an Oracle 8i database. Therefore, when the user has to sort some things out, the sorts occur in the SYSTEM tablespace. Yikes. With the introduction of the default temporary tablespace in Oracle 9i, the SYSTEM tablespace is no longer used as the default storage location for temporary data. You can specify the default temporary tablespace used for any user who is not given an explicit temporary tablespace. Either specify the default temporary tablespace at database creation time or use the ALTER DATABASE command:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;

Database Tuning Tips

Load up on Latches

A latch is a type of lock that can be acquired and freed very quickly. Latches are typically used to prevent more than one process from executing the same piece of code at a given time. A process acquires a latch when working with a structure in the SGA and continues to hold the latch for the period of time it works with the structure. The latch is dropped when the process is finished with the structure. Each latch protects a different set of data, identified by the name of the latch. If a required latch is busy, the process requesting it spins, tries again, and if the latch is still not available, spins again. The loop is repeated up to a maximum number of times determined by the initialization parameter _SPIN_COUNT. If after this entire loop the latch is still not available, the process must yield the CPU and go to sleep. Initially it sleeps for one centisecond; this time is doubled in every subsequent sleep. This causes a slowdown and results in additional CPU usage until the latch becomes available. The CPU usage is a consequence of the "spinning" of the process: the process continues to look for the availability of the latch after certain intervals of time, during which it sleeps.


To increase the number of buffer cache LRU latches available to your database, increase DB_BLOCK_LRU_LATCHES to two times the number of CPUs on the server machine. The default value of this parameter is half the number of CPUs on your machine. Increasing this value makes more latches available and reduces latch contention.
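
To see whether latch contention is actually hurting you, sample V$LATCH; a minimal sketch:

Rem Latches with the most misses and sleeps since instance startup
column name format a35
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE misses > 0
 ORDER BY sleeps DESC, misses DESC;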

Tune the Application SQL in the Database

The best way to get your database to run faster is to reduce its workload. To see the database workload, look at the load profile in the Oracle supplied Statspack utility. You get a quick synopsis of the work performed by the database between the two snapshots.

My experience is that the best way to reduce the overall workload on the machine is to reduce the consistent gets. When SQL has to work very hard to find rows in the database, the consistent gets really pile up. As you tune SQL with indexes, gather table statistics via analyze, and review whether the SQL is written well, you'll see the consistent gets decrease. Fewer consistent gets means the SQL is getting its work done with fewer database reads. Notice the improvement in these two load profiles:

Load Profile

~~~~~~~~~~~~ Per Second Per Transaction

--------------- ---------------

Redo size: 10,037.40 1,899.28

Logical reads: 23,740.08 6,599.90

Block changes: 58.40 11.05

Physical reads: 444.21 84.05

After tuning the SQL, the database is accomplishing the same amount of work but is doing this work with less than half the logical database reads.

Load Profile

~~~~~~~~~~~~ Per Second Per Transaction

--------------- ---------------

Redo size: 8,483.57 1,806.05

Logical reads: 11,868.43 2,526.64

Block changes: 49.20 10.47

Physical reads: 1,241.27 264.25


Fewer database reads can also be achieved by rethinking the work done by the application process. If you have slow response time or a long running SQL job, review the process to see if you can cut some corners in your processing.

Find Busy SQL with Statspack

If you want to tune SQL but don't know which SQL needs the work, you can use Oracle's Statspack to find it. While collecting SQL statistics, Statspack captures SQL whose buffer gets exceed a defined threshold (the default value is 10,000). The Statspack report displays that SQL so you know which statements executed the most logical database reads during your sampling period. Inefficient SQL will typically show up as a heavy logical read consumer. Statspack also displays SQL that exceeds the thresholds for blocks read from disk (default 1,000) and for executions (default 100).
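
If you have not used Statspack before, the basic rhythm looks like this (run as the PERFSTAT user; the report prompts you for the two snapshot ids):

SQL> execute statspack.snap;      -- snapshot before the busy period
SQL> execute statspack.snap;      -- snapshot after the busy period
SQL> @?/rdbms/admin/spreport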

Tune SQL with SQL*Plus Autotrace

Once you identify slow running SQL with Statspack (or other utilities like tracing plus tkprof, Oracle Enterprise Manager's Top Sessions or a vendor monitoring tool), you can use SQL*Plus autotrace to examine the execution plan. For example, consider this actual query that was found to be expensive and time consuming:

SELECT count(*)

FROM report r, bank u

WHERE r.type=:u1

AND r.bankid=:u2

AND r.userbankaccountid =u.userbankaccountid

AND r.statusid=3

ORDER BY LastModified;

This SQL uses two bind variables to return a count from a join of two tables. The execution plan looked like this:

Execution Plan

----------------------------------------------------------

0 SELECT STATEMENT Optimizer=CHOOSE

1 0 SORT (AGGREGATE)

2 1 NESTED LOOPS

3 2 TABLE ACCESS (FULL) OF 'BANK'

4 2 TABLE ACCESS (BY INDEX ROWID) OF 'REPORT'


5 4 AND-EQUAL

6 5 INDEX (RANGE SCAN) OF 'REPORT_BANKID'

7 5 INDEX (RANGE SCAN) OF 'BANKACCOUNTID'

This query reads each row of the BANK table and reads two indexes on the REPORT table. Therefore, two index reads and a table read of REPORT occur for each row of BANK. The problem is that the BANK table has over 350,000 rows. Therefore, to get the count for this query, Oracle has to make three or more reads multiplied by 350,000! The execution statistics for this query look like this:

Statistics

----------------------------------------------------------

0 recursive calls

4 db block gets

2124587 consistent gets
     36 physical reads

0 redo size

187 bytes sent via SQL*Net to client

316 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed

The bad thing here is the consistent gets of over 2 million. All these reads take CPU time, because all but 36 of them came from memory. Why? Because the query keeps reading the same REPORT index and table blocks over and over. The bottom line for the user is the elapsed time required for the query to return. By looking at a trace file for this query, you can see the elapsed time in this output:

call count cpu elapsed disk query

------- ------ -------- ---------- ---------- ----------

Parse 0 0.00 0.00 0 0


Execute 5 0.00 0.01 0 0

Fetch 5 526.09 548.96 667 10538601

------- ------ -------- ---------- ---------- ----------

total 10 526.09 548.97 667 10538601

Bottom line, each time the query runs, it takes about 100 seconds to complete. During the unprocessed reports function, this query is called five times. This one query, executed five times, takes almost ten minutes to answer the count five times.

By adding one index and analyzing the BANK and REPORT tables, the optimizer was able to choose a different execution path. The index added for this query was on the USERBANKACCOUNTID column of the BANK table.

Execution Plan

----------------------------------------------------------

0 SELECT STATEMENT Optimizer=CHOOSE (Cost=85 Card=1 Bytes=20)

1 0 SORT (AGGREGATE)

2 1 NESTED LOOPS (Cost=85 Card=4435 Bytes=88700)

3 2 TABLE ACCESS (BY INDEX ROWID) OF 'REPORT'

4 3 AND-EQUAL

5 4 INDEX (RANGE SCAN) OF 'REPORT_TYPE' (NON-UNIQUE)

6 4 INDEX (RANGE SCAN) OF 'REPORT_BANKID'

7 2 INDEX (RANGE SCAN) OF 'BANKACCOUNTID'

This query now looks up REPORT rows by the two indexes as before, but the join to the BANK table now occurs via an index. The cardinality of the indexed columns is high (they have many distinct values). Therefore, little of the index has to be read to find the table rows needed for the count. The good news shows up in the execution statistics:

Statistics

----------------------------------------------------------

0 recursive calls


0 db block gets

130 consistent gets
  0 physical reads

0 redo size

187 bytes sent via SQL*Net to client

316 bytes received via SQL*Net from client

2 SQL*Net roundtrips to/from client

0 sorts (memory)

0 sorts (disk)

1 rows processed

Only 130 consistent gets are required to resolve this query. Therefore, the database does much less work to yield the same answer. Subsequently, the times also improve for this query, as shown in the trace output.

call count cpu elapsed disk query

------- ------ -------- ---------- ---------- ----------

Parse 3 0.00 0.00 0 0

Execute 3 0.00 0.00 0 0

Fetch 6 0.02 0.02 0 771

------- ------ -------- ---------- ---------- ----------

total 12 0.02 0.02 0 771

The improvement is dramatic. During this execution of the report function, the query was executed three times, and the total elapsed time for those three executions was two hundredths of a second. This query improvement and others like it show how this report function can run in fifteen minutes on one day and then in fifteen seconds the next.

Trace a Session in Process

A trace file for a user session can provide valuable tuning information about the session's SQL statements. Using that trace file and the Oracle supplied tkprof utility, you can quickly find the SQL statements run during the session that consumed the most resources. These SQL statements can then be evaluated for possible improvement. To trace a session you are running yourself, issue the ALTER SESSION command to turn on tracing. Tracing is turned off by disconnecting or with another ALTER SESSION command:

ALTER SESSION SET SQL_TRACE = TRUE;

ALTER SESSION SET SQL_TRACE = FALSE;

If the application you'd like to trace is already running, or if you are unable to place tracing statements in the application code, you can still trace an execution of that application process. Find the application process you'd like to trace via its Process Id, Username, Session Id, Machine name or other values in the V$SESSION dynamic view. Once you've identified that session, use its session id and serial number (found in V$SESSION) and call the Oracle supplied package procedure DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION. For example, if you want to find the SQL being executed by a long running application, identify the session id and serial number. For a session id of 100 and serial number of 123, begin tracing that session with this command:

SQL> execute SYS.DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(100, 123, TRUE);

Turn off tracing of that session by making the same procedure call but changing the last parameter to FALSE. Note that tracing stops automatically when the session disconnects. The trace file will contain trace information beginning at the time the tracing began.

Use IPC for local connections

When a process is on the same machine as the server, use the IPC protocol for connectivity instead of TCP. Inter-Process Communication on the same machine does not have the overhead of packet building and deciphering that TCP has. I've seen a SQL job that runs in 10 minutes using TCP on a local machine run as fast as 1 minute using an IPC connection. The difference in time is most dramatic when the Oracle process has to send and/or receive large amounts of data to and from the database. For example, a SQL*Plus session that counts the number of rows of some tables will run in about the same amount of time whether the database connection is made via IPC or TCP. But if the SQL*Plus session spools much data to a file, the IPC connection will often be much faster, depending on the data transmitted and the machine workload on the TCP stack.

You can set up your tnsnames.ora file like this on a local machine so that local connections try IPC first and then fall back to TCP:

PROD =

(DESCRIPTION =

(ADDRESS_LIST =

(ADDRESS = (PROTOCOL = IPC)(Key = IPCKEY))

(ADDRESS = (PROTOCOL = TCP)(HOST = MYHOST)(PORT = 1521))

)

(CONNECT_DATA =

(SID = PROD)

)

)

To see if the connections are being made via IPC or TCP, turn on listener logging and review the listener log file.

Write big files with SQL*Plus versus UTL_FILE

Many applications need to place Oracle data into a file to be sent to another system. Often, the Oracle supplied UTL_FILE package is used to write the data to the operating system file. Sometimes, the file can be quite large.

I've found that creating files with a lot of data runs much faster in SQL*Plus than via UTL_FILE. Consider this table of time comparisons on some sample systems. A large table was selected from the database and written to an operating system file using either SQL*Plus or the UTL_FILE package. The files created were identical; the SQL*Plus script created them much faster.

Condition                              File Size   SQL*Plus     UTL_FILE
Oracle 8.0.4, SunOS 5.6, 100k rows     21 MB       50 seconds   79 seconds
Oracle 8.1.7, SunOS 5.6, 100k rows     25 MB       36 seconds   52 seconds
Oracle 8.1.7, Windows NT 4, 100k rows  30 MB       32 seconds   68 seconds
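
A minimal sketch of the SQL*Plus approach (the table, columns and file name are placeholders):

set heading off feedback off pagesize 0 linesize 200 trimspool on termout off
spool /tmp/big_extract.dat
SELECT account_id || ',' || account_name || ',' || balance
  FROM big_table;
spool off
set termout on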


Backup & Recovery Tips

Try This Simple Cold Backup Script

Need to perform a cold backup fast? This script lets you create a backup from SQL*Plus connected as SYSDBA.

Rem Set SQL*Plus variables to manipulate output

set feedback off heading off verify off

set pagesize 0 linesize 200

Rem Set SQL*Plus user variables used in script

Rem Linux User variables

define dir = '/backup'

define fil = '/tmp/closed_backup_commands.sql'

prompt *** Spooling to &fil

spool &fil

select 'host cp '|| name ||' &dir' from v$datafile order by 1;

select 'host cp '|| member ||' &dir' from v$logfile order by 1;

select 'host cp '|| name ||' &dir' from v$controlfile order by 1;

select 'host cp '|| name ||' &dir' from v$tempfile order by 1;

spool off;

Rem Shutdown the database cleanly

shutdown immediate;

Rem Run the copy file commands

@&fil

Rem Start the database again

startup;

On Linux, connect to SQL*Plus as SYSDBA and run this script like this:

LINUX> sqlplus /nolog

SQL> connect system/manager as sysdba

SQL> @closed_backup.sql

For more information see Chapter 4 of Oracle Backup and Recovery 101.


Try This Simple Hot Backup Script

I've seen hot backup scripts aplenty. Most of them are quite involved. Maybe you'd like a simple one that you can build upon; heck, you can even use this script just as it is. These commands, run from SQL*Plus, use PL/SQL to create a hot backup script from the data dictionary.

set feedback off pagesize 0 heading off verify off

set linesize 100 trimspool on

Rem Set SQL*Plus user variables used in script

define dir = '/backup'

define fil = '/tmp/open_backup_commands.sql'

define spo = '&dir/open_backup_output.lst'

prompt *** Spooling to &fil

set serveroutput on

spool &fil

prompt spool &spo

prompt alter system switch logfile;;

DECLARE

CURSOR cur_tablespace IS

SELECT tablespace_name

FROM dba_tablespaces;

CURSOR cur_datafile (tn VARCHAR) IS

SELECT file_name

FROM dba_data_files

WHERE tablespace_name = tn;

BEGIN

FOR ct IN cur_tablespace LOOP

dbms_output.put_line ('alter tablespace '||ct.tablespace_name||

' begin backup;');

FOR cd IN cur_datafile (ct.tablespace_name) LOOP

dbms_output.put_line ('host cp '||cd.file_name||' &dir');

END LOOP;

dbms_output.put_line ('alter tablespace '||ct.tablespace_name||

' end backup;');


END LOOP;

END;

/

prompt alter system switch logfile;;

prompt spool off

spool off;

Rem Run the copy file commands from the operating system

@&fil

On Linux, connect to SQL*Plus as SYSDBA and run this script like this:

LINUX> sqlplus /nolog

SQL> connect system/manager as SYSDBA

SQL> @open_backup.sql

For more information see Chapter 5 of Oracle Backup and Recovery 101.

Apply an Online Redo Log file to Complete Recovery

When you are performing complete recovery, the recover command prompts you for the archive logs required. Eventually, Oracle may prompt you for an archive log file that does not exist yet; it is actually asking for the current online redo log file. Supply that file to the recover command and you'll see the media recovery complete response.

Write Your Control File To Trace

During backups, be sure to add this line to your script:

SQL> ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

In the event that your control files are corrupted or lost, you'll be able to recreate the control file from the resulting trace file.

Create A Database Clone Using An Open Backup

You can create a duplicate of a database from an open database backup. To do so:

1. Create an open backup.
2. Create a trace of the control file.
3. Move the backup datafiles to the target location (directory or machine).
4. Edit the create control file script.
5. Run the create control file script from SQL*Plus as SYSDBA.
6. Recover the database using archive logs.
7. Open the database with RESETLOGS.

For more information see Chapter 6 of Oracle Backup and Recovery 101.

Create a Standby Database Using an Open Backup

You don't have to shut down a database to create a standby database from it. Here's the short list of things to do to create a standby of a database while it is open:

1. Create an open backup.
2. Create a standby control file.
3. Move the backup datafiles to the target location (directory or machine).
4. Move the standby control file to the target location.
5. Edit the standby database parameter file to use the standby control file.
6. Start the standby instance and mount the standby control file.
7. If the datafiles have a different location than on the primary database, rename those datafiles with the ALTER DATABASE command.
8. Copy the archive log files created since the open backup was taken.
9. Recover the database using the archive logs.
10. Deploy an automated redo propagation mechanism.

To create a standby control file in the /tmp directory, use this command:

SQL> ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/standby.ctl';

For more information see Chapter 7 of Oracle Backup and Recovery 101.

LogMiner - Find Details of a Data Change

Changes may occur to the data in your database that are unexpected or made by mistake. The place to look for the details of those changes is the contents of the redo log files. You can mine the log files to find the details of a database change. Configure LogMiner with the archive logs created around the time of the suspected data change.
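
If you have not set up LogMiner before, the basic calls look something like this; the file names are placeholders, and the dictionary file must already have been built with DBMS_LOGMNR_D.BUILD (which requires UTL_FILE_DIR to be set).

SQL> execute DBMS_LOGMNR.ADD_LOGFILE('/arch/PROD_1234.arc', DBMS_LOGMNR.NEW);
SQL> execute DBMS_LOGMNR.ADD_LOGFILE('/arch/PROD_1235.arc', DBMS_LOGMNR.ADDFILE);
SQL> execute DBMS_LOGMNR.START_LOGMNR(DictFileName => '/arch/logmnr_dict.ora');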

To find the details of an update on the SCOTT.EMP table, run this query:

SQL> SELECT operation, timestamp, sql_redo, sql_undo, username, session_info

2 FROM v$logmnr_contents

3 WHERE seg_name = 'EMP'

4 AND seg_owner = 'SCOTT'

5 AND seg_type_name = 'TABLE';

For more information see Chapter 9 of Oracle Backup and Recovery 101.


LogMiner - Perform Capacity Planning

As a DBA, you can use LogMiner to do some performance tuning or capacity planning. Say you've got a table that is busy capturing audit information for your application. Load up LogMiner with an hour's worth (or some other time period) of archive log files. Then run a query against those archive logs to see the insert rate on the table:

SELECT operation, to_char(timestamp,'HH') hour, count(*) total

FROM v$logmnr_contents

WHERE seg_name = 'AUDIT_LOG'

AND seg_owner = 'SOMEUSER'

AND seg_type_name = 'TABLE'

GROUP BY operation, to_char(timestamp,'HH');

OPERATION HOUR TOTAL

--------- ---- ----------

INSERT 10 16

INSERT 11 16

INSERT 12 16

INSERT 13 16

The results of this query show you that the AUDIT_LOG table is adding rows at the rate of sixteen per hour.

For more information see Chapter 9 of Oracle Backup and Recovery 101.

LogMiner - Find the Time & SCN of a Table Drop

LogMiner only translates redo log information for data manipulation language (insert, update, delete), so you will not find a DROP TABLE statement in the V$LOGMNR_CONTENTS view. But because LogMiner can find statements performing DML on data dictionary tables, you can search the view for deletions from the SYS.OBJ$ or SYS.TAB$ data dictionary tables. When a table is dropped, you'll find a delete against both SYS.OBJ$ and SYS.TAB$.

SQL> SELECT seg_name, operation, scn, timestamp, count(*)

2 FROM v$logmnr_contents

3 WHERE operation = 'DELETE'

4 GROUP by seg_name, operation, scn, timestamp

5 ORDER by scn;

SEG_NAME OPERATION SCN TIMESTAMP COUNT(*)


--------------- --------------- ---------- ------------------- ----------

FET$ DELETE 362948 2002-01-10:07:59:11 1

COL$ DELETE 362952 2002-01-10:07:59:16 4

OBJ$ DELETE 362952 2002-01-10:07:59:16 1
TAB$ DELETE 362952 2002-01-10:07:59:16 1

SEG$ DELETE 362954 2002-01-10:07:59:16 1

UET$ DELETE 362954 2002-01-10:07:59:16 1

You can select more information about that statement by querying the data for that exact SCN and comparing the object number it shows to the object number in the dictionary file created for LogMiner’s use. This way you can confirm the time and SCN of a drop table command.

SQL> SELECT sql_redo FROM v$logmnr_contents

2 WHERE scn = 362952 and seg_name = 'OBJ$';

For more information see Chapter 9 of Oracle Backup and Recovery 101.

Backup a Primary Database via the Standby
If you’ve got a standby database and you want to reduce the workload on your primary database server, you can perform backups on the standby database instead of the primary. The standby is an exact copy of the primary except that the database files may be located in a different place with a different name. The backup of the standby can be used to restore and recover the primary if needed.
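A hedged sketch of what such a backup might look like, assuming the standby is mounted and reachable through the net service name stby and the recovery catalog user is rman/rman@rcat (names, passwords and paths are illustrative):

# Connect to the standby as the RMAN target; because the standby shares the
# primary's DBID, the backup is recorded as a backup of the primary database
rman target sys/practice@stby catalog rman/rman@rcat <<EOF
run {
  allocate channel d1 type disk;
  backup database format '/backup/stby/db_%d_%s_%p_%t';
}
EOF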

Recovery Manager Tips

Employ Variables in RMAN Scripts
RMAN does not support variables within its scripts. But you can write a shell script that uses environment variables and a “here document” to make use of variables in your RMAN backups:

export ILEVEL=1 # Incremental level

export BDIR=/backup/monday # Backup directory

rman nocatalog <<EOF

connect target sys/practice@practice;

run {

allocate channel d1 type disk;


backup

incremental level =${ILEVEL} cumulative

database

format '${BDIR}/db${ILEVEL}_%d_%s_%p_%t'

tag = 'WHOLE_LEVEL_${ILEVEL}';

}

EOF

Configure the Location of RMAN’s Snapshot Control File
You might have noticed that a strange file has been showing up in the $ORACLE_HOME/dbs directory. For a database named PROD, this file will typically be named snapcf_PROD.f on Unix or SNCFPROD.ORA on Windows. In order for a full resynchronization to occur, RMAN makes a snapshot copy of the current target control file. The snapshot control file ensures that the control file information is not changing while the catalog is being updated. You can control the name and location of that file during an Oracle 8 or 8i RMAN operation using the set command:

RMAN> set snapshot controlfile name to '/backup/PROD/snap_prod.ctl.f';

With Oracle 9i, you can configure the snapshot control file for all RMAN operations with the configure command:

RMAN> configure snapshot controlfile name to '/backup/PROD/snap_prod.ctl.f';

Deploy Redundant Recovery Catalogs
The recovery catalog holds the critical backup details that are also contained in the target database control file. The catalog can be used to recover the target database, especially when the target database control file is not available. What if you’d like to have a redundant catalog for your RMAN backups? Here’s how (a sketch of the commands follows the list):

1. Export the catalog owner to a binary dump file
2. Import the catalog owner to a different user on another database
3. Connect to each target database and to the new catalog user
4. Issue the resync command
5. After each backup of the target databases, connect to the target database and the new catalog user and issue the resync command
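Here is a minimal sketch of steps 1 through 4, assuming the original catalog owner is rman on catdb1 and the redundant catalog user is rman2 on catdb2 (all names, passwords and service names are illustrative):

# 1. Export the existing catalog owner to a binary dump file
exp system/manager@catdb1 owner=rman file=rman_cat.dmp

# 2. Import that dump into a different user on another database
imp system/manager@catdb2 fromuser=rman touser=rman2 file=rman_cat.dmp

# 3 and 4. Connect to a target database and the new catalog user, then resync
rman target sys/practice@practice catalog rman2/rman2@catdb2 <<EOF
resync catalog;
EOF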

For more information see Chapter 11 of Oracle Backup and Recovery 101.

Explore Your Date Options with RMAN


You’ve got a variety of options when specifying dates with RMAN. The table below shows some samples of how you can establish date values with RMAN.

Date String                                           Value
----------------------------------------------------  -------------------------------------------------------------
SYSDATE                                                The current date and time
TRUNC(SYSDATE)                                         Midnight of the current day
SYSDATE - 7                                            The current date and time less 7 days
'13-JAN-2002'                                          Midnight on January 13th, 2002 if the session NLS_DATE_FORMAT = 'DD-MON-YYYY'
TO_DATE('13-JAN-2002 13:00', 'DD-MON-YYYY HH24:MI')    One PM on January 13th, 2002 no matter what the session value for NLS_DATE_FORMAT
ADD_MONTHS(SYSDATE, 1)                                 One month after the current date and time
ADD_MONTHS(SYSDATE, -1)                                One month before the current date and time
LAST_DAY(SYSDATE)                                      The last day of the current month
NEXT_DAY(SYSDATE, 'SUNDAY')                            The date of the next Sunday after the current date and time
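As an illustration, here are a couple of hedged samples of these expressions inside RMAN commands (the backups and restore scenario are assumed):

RMAN> list backup of database completed after 'SYSDATE-7';

run {
  set until time "TO_DATE('13-JAN-2002 13:00', 'DD-MON-YYYY HH24:MI')";
  allocate channel d1 type disk;
  restore database;
  recover database;
}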

For more information see Chapter 12 of Oracle Backup and Recovery 101.

Make Redundant RMAN Archive Log Backups

RMAN can back up your archive log files and delete them once they have been successfully backed up. That accomplishes two things: your archive log files are copied, and the disk containing those files is consistently cleaned up so that it does not fill up. Here’s how you might be sure that you have at least three copies of your archive logs before you remove them:

1. Make a baseline backup of all existing archive log files for your PRACTICE database using the ARCHIVELOG ALL option for the first two days of your new backup schedule.
2. Each day, make a backup of all existing archive log files containing redo from the last two days (use "from time 'SYSDATE-2'").
3. Each day, make a backup of the log files containing redo from three days ago until two days ago (use "from time 'SYSDATE-3' until time 'SYSDATE-2'").

Note that when backing up within a time range, any archive log containing redo within the date/time range is included in the backup.
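A minimal sketch of the daily backups in steps 2 and 3, assuming disk backups under /backup/arch; the delete input clause is one way to remove logs only after their third copy has been taken:

run {
  allocate channel d1 type disk;
  # Step 2: back up logs containing redo from the last two days
  backup archivelog from time 'SYSDATE-2'
    format '/backup/arch/al_%d_%s_%p_%t';
  # Step 3: back up logs from three days ago until two days ago and delete them;
  # by now those logs have already been copied on each of the two previous days
  backup archivelog from time 'SYSDATE-3' until time 'SYSDATE-2'
    delete input
    format '/backup/arch/al_%d_%s_%p_%t';
}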


The List Command in a Nutshell
The List command in RMAN gives you an inventory of the backups that the target control file and the recovery catalog know about. You have lots of options with the LIST command:

Each list operation can look at the database incarnations, backups or copies.
For backups and copies, you can filter the output to show information about the database, tablespaces, datafiles, control files or archive log files.
   o For backups and copies of the database, tablespaces, datafiles or control files, you can filter the output by backup/copy completion time, tag, recoverability, device type or string matching (LIKE).
   o Archive log backups and copies can be bounded by time, SCN or log sequence.
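For example, here are a few LIST commands of the kind you might run (the datafile number and time ranges are illustrative):

RMAN> list incarnation of database;
RMAN> list backup of database;
RMAN> list copy of datafile 3;
RMAN> list backup of archivelog from time 'SYSDATE-1';
RMAN> list backup of tablespace users completed after 'SYSDATE-7';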

The Report Command in a Nutshell
The List command tells you what you’ve got. Reports tell you what you need. You have four report options, and two of them are very handy in confirming that your current backups will allow you to recover.

Need Backup – Identifies datafiles that require a new backup for a specified threshold (days, incremental count or redundancy) to achieve a complete recovery.

Unrecoverable – Identifies any datafiles that require backup because of an unrecoverable operation.

Obsolete – Identifies backups that are not needed and can be deleted.

Schema – Displays the database physical schema for the target database.
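
For example, hedged samples of these four reports (the thresholds are illustrative):

RMAN> report need backup days = 3 database;
RMAN> report unrecoverable;
RMAN> report obsolete redundancy = 2;
RMAN> report schema;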

Find RMAN Information from the Target Control File
RMAN backup information is stored in the target control file. In addition to the RMAN LIST command, you can look at backup details by selecting from V$ dynamic views of the target database. Query from V$BACKUP_SET (backup sets), V$BACKUP_PIECE (backup pieces), V$BACKUP_DATAFILE (datafile backups), V$DATAFILE_COPY (datafile image copies), V$BACKUP_CONTROLFILE (control file backups), V$CONTROLFILE_COPY (control file image copies) and V$BACKUP_REDOLOG (archive log backups).

Here is a simple sample query to look at RMAN backups of datafiles:

SQL> SELECT file#, completion_time time,

2 checkpoint_change# change#, set_count

3 FROM v$backup_datafile;


Find RMAN Information from the RMAN Catalog Views
RMAN backup information may also be stored in a schema known as the Recovery Manager catalog. In addition to the RMAN LIST command, you can look at backup details by selecting from the RC_ views of the catalog database user. Query from RC_BACKUP_SET (backup sets), RC_BACKUP_PIECE (backup pieces), RC_BACKUP_DATAFILE (datafile backups), RC_DATAFILE_COPY (datafile image copies), RC_BACKUP_CONTROLFILE (control file backups), RC_CONTROLFILE_COPY (control file image copies) and RC_BACKUP_REDOLOG (archive log backups).

Here is a simple sample query to look at RMAN backups of datafiles for the production database:

SQL> SELECT file#, completion_time time, checkpoint_change# change#,

2 bdf_key key, set_count,

3 decode(status,'A','AVAIL','O','UNUSABLE','D', 'DELETED') Status

4 FROM rc_backup_datafile

5 WHERE db_name = 'PROD';

Create a Duplicate Database with RMAN
RMAN makes creating a duplicate database a snap. Connect to the target database, the catalog schema and an auxiliary instance, then run these few commands to create a copy of the target database as database TEST:

run {

set command id to 'Create Duplicate Database';

allocate auxiliary channel d1 type disk;

duplicate target database to TEST;

}
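
The connection itself might look something like this from the command line (service names and passwords are illustrative); the auxiliary instance must already be started NOMOUNT with its own parameter file:

% rman target sys/practice@prod catalog rman/rman@rcat auxiliary sys/practice@test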

For more details: see chapter 15 of Oracle Backup and Recovery 101.

Create a Standby Database with RMAN
RMAN will create a standby of an open database with just a few simple commands. Connect to the target database, the catalog schema and an auxiliary instance. Run these few simple commands and your standby database will be created and recovered for you:

run {

set command id to 'Create Standby Database';

set until logseq 101 thread 1;

allocate auxiliary channel d1 type disk;

duplicate target database for standby;


dorecover;

}

The standby database will be recovered to (but not including) log sequence 101. For more details: see chapter 16 of Oracle Backup and Recovery 101.

Perform Tablespace Recovery with RMAN
RMAN will perform a tablespace point-in-time recovery with just a few simple commands. Connect to the target database, the catalog schema and an auxiliary instance. Run these commands to recover the USERS tablespace up to (but not including) 15:34:38 on July 22:

run {

set command id to 'Perform TSPITR';

set until time "to_date('22-JUL-2002 15:34:38',

'DD-MON-YYYY HH24:MI:SS')";

allocate clone channel d1 type disk;

recover tablespace users;

}

For more details: see chapter 17 of Oracle Backup and Recovery 101.

Automate RMAN Backup Set Removal
In Oracle 8 and 8i, it can be a hassle to remove old backups. First you have to find them; once found, you can delete them. You can select from the target control file V$ views or the catalog RC views with SQL like the script below, which builds a command file that deletes every backup set created more than two weeks ago.

define fil = '/tmp/delete_backupset.rcv'

prompt *** Spooling to &fil

Rem Create a file containing the commands needed to delete old backup sets

spool &fil

prompt connect target system/manager;;

prompt allocate channel for delete type disk;;

SELECT 'change backupset '||bs_key||' delete;'

FROM rc_backup_set


WHERE completion_time < SYSDATE - 14;

prompt release channel;;

spool off;
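
Once the command file has been spooled, one way to run it is to feed it back to RMAN (add a catalog connection if you use one):

% rman nocatalog cmdfile=/tmp/delete_backupset.rcv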

Configure a Retention Policy in RMAN 9i
A backup retention policy can be configured in RMAN in Oracle version 9i. If you want to retain backups for four weeks, configure RMAN to retain backups with the configure command:

RMAN> configure retention policy to recovery window of 28 days;

When a backup becomes older than four weeks, RMAN marks it as obsolete. To delete these old backups, allocate a maintenance channel and delete them:

RMAN> allocate channel for maintenance device type disk;

RMAN> delete obsolete;

RMAN> release channel;

Configure Channels in RMAN 9i
With Oracle 9i, RMAN channels can be pre-configured. These pre-configured channels will be automatically allocated during RMAN operations. You can configure the device type, parallelism, format, maxpiecesize and other values for the default channels. This way, scripts are much easier to write. Use the show all command to see all channel configurations.

An Oracle 8 and 8i RMAN backup script looks like this:

run {

allocate channel d1 type disk;

backup incremental level = 0

database format '/backup/db_%d_%s_%p_%t'

tag = 'WHOLE_LEVEL_0';

}

The same work can be accomplished via pre-configured channels like this:

RMAN> backup incremental level 0 database tag = 'WHOLE_LEVEL_0';
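For example, hedged samples of the configure commands that set up those defaults (the format string and piece size are illustrative):

RMAN> configure default device type to disk;
RMAN> configure device type disk parallelism 2;
RMAN> configure channel device type disk format '/backup/db_%d_%s_%p_%t' maxpiecesize 2g;
RMAN> show all;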

For more details: see appendix B of Oracle Backup and Recovery 101.


General Database Reflections

The Best Oracle Book is…Oracle Concepts Manual

The Best Oracle Resource is…Metalink

The Best way to Learn Oracle is…Practice


Extra Tips – In case you haven’t had enough already

Log your current commit System Change Number

The system change number (SCN) in an Oracle database is an ever-increasing internal timestamp that uniquely identifies a committed version of the database. Every time a user commits a transaction, Oracle records a new SCN. Oracle uses the SCN for many purposes, including controlfile/datafile consistency, recovery, transaction-consistent reads, redo, undo, and the list goes on.

How can you know the current SCN for your transaction? You can use userenv('COMMITSCN') in an insert statement. (This function call with the 'COMMITSCN' argument cannot be used in a select statement.) Capture your transaction’s SCN like this:

SQL> create table my_scn (col1 NUMBER);

SQL> insert into my_scn values (userenv('COMMITSCN'));

SQL> select * from my_scn;

There it is: your transaction’s SCN.

Why would you want the current system change number? If you subscribe to Element K tips, then I’m sure you can think of something.

Remote Diagnostic Agent (RDA) on Metalink
When you open a TAR with Oracle, it gets tiresome supplying the Oracle support person with all your database version information. It takes a long time just to get to the question or problem you have. On your next support call, go to metalink.oracle.com and search for RDA. Choose the Remote Diagnostic Agent (RDA) entry. You’ll find a set of Unix shell scripts to gather detailed information from an Oracle environment. The scripts are focused on collecting information that will aid in problem diagnosis, but the output is also very useful for seeing the overall system configuration. Download the tar archive file and you’ll be impressed by all the database environment information collected. Send this information back to Oracle or use it yourself for your own diagnosis.

The scripts are tested on the following Unix platforms:

Sun Solaris (2.5 - 8)
HP-UX (10.X and 11.X)
Compaq Unix (OSF1) 4.0


Roll Forward versus Roll Back
In discussions of Oracle database mechanisms, you hear the terms roll forward and rollback. Roll forward refers to the process Oracle goes through to apply changes contained in the redo log files (both online and archived). The database clock (as measured by the system change number) is moved forward within the blocks of the datafiles that are changed by the redo change vectors. Roll forward occurs during database, tablespace or datafile recovery and during crash recovery. Rollback is the process of undoing uncommitted database transactions. During a transaction, before-images of changed blocks are copied to the rollback segments so that other transactions can read a consistent copy of the data. When the instance aborts, the undo information recorded in the redo log files is reapplied to the rollback segments during the roll forward phase of recovery, and the uncommitted transactions are then rolled back using that undo. Therefore, during recovery, the database must roll forward and roll back.

Monitor RMAN Operations
While RMAN is running, you can monitor the progress of the operation by looking at V$SESSION_LONGOPS like this:

SQL> SELECT sid, serial#, context, sofar, totalwork,
  2  round(sofar/totalwork*100,2) "% Complete"
  3  FROM v$session_longops
  4  WHERE opname LIKE 'RMAN:%'
  5  AND opname NOT LIKE 'RMAN: aggregate%';

Configure the Recovery Manager MML
The Media Management Layer (MML) allows RMAN to communicate directly with a tape vendor’s software. To configure the MML for your tape vendor, contact the vendor or navigate to their website. Most vendors have detailed instructions on setting up the MML for their tape software on different operating systems. A typical setup will go something like this:

1. Shut down all Oracle databases sharing the current ORACLE_HOME directory.

2. Delete the old symbolic link for libobk.so:

   % rm $ORACLE_HOME/lib/libobk.so

3. Create a symbolic link from libobk.so to the shared library that you want to use (in this example, the tape vendor’s library file is named libxyz.so):

   % ln -s $TAPE_SOFTWARE/lib/libxyz.so $ORACLE_HOME/lib/libobk.so

4. Verify that the link is successful by using the sbttest test program to back up a file. For example, enter:

   % sbttest testfile.txt -trace sbtio.log


For more information see Chapter 11 of Oracle Backup and Recovery 101.
