Daily Work


Transcript of Daily Work


Thanks, Kumaravelu S, DBA

From: Raghu, Nadupalle (Cognizant) Sent: Wednesday, April 18, 2007 12:28 PM

Subject: Memory Notification: Library Cache Object Loaded Into SGA

  Type: PROBLEM  Last Revision Date: 30-MAR-2007 Status: PUBLISHED

In this Document: Symptoms, Changes, Cause, Solution, References

Applies to: Oracle Server - Enterprise Edition. This problem can occur on any platform.

Symptoms

The following messages are reported in alert.log after 10g Release 2 is installed.

Memory Notification: Library Cache Object loaded into SGA
Heap size 2294K exceeds notification threshold (2048K)

Changes

Installed / Upgraded to 10g Release 2

Cause

These are warning messages that should not cause the program responsible for these errors to fail. They appear as a result of the new event messaging mechanism and memory manager in 10g Release 2.

The message means that the process is spending a lot of time finding free memory extents during an allocation, because memory may be heavily fragmented. Fragmentation in memory is impossible to eliminate completely; however, continued messages about large allocations in memory indicate there are tuning opportunities in the application.

The messages do not imply that an ORA-4031 is about to happen. 

Solution

In 10g we have a new undocumented parameter that sets the KGL heap size warning threshold. This parameter was not present in 10gR1. Warnings are written if the heap size exceeds this threshold. Set _kgl_large_heap_warning_threshold to a reasonably high value, or to zero, to prevent these warning messages. The value is specified in bytes.

If you want to set this to 8 MB (8192 * 1024 = 8388608 bytes) and are using an spfile:

(logged in as "/ as sysdba")

SQL> alter system set "_kgl_large_heap_warning_threshold"=8388608 scope=spfile ;

SQL> shutdown immediate
SQL> startup

SQL> show parameter _kgl_large_heap_warning_threshold

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
_kgl_large_heap_warning_threshold    integer     8388608

If using an "old-style" init parameter,

Edit the init parameter file and add

_kgl_large_heap_warning_threshold=8388608

 

NOTE:  The default threshold in 10.2.0.1 is 2M.   So these messages could show up frequently in some application environments.

In 10.2.0.2, the threshold was increased to 50MB after regression tests, so this should be a reasonable and recommended value. If you continue to see these warning messages in the alert log after applying 10.2.0.2 or higher, an SR may be in order to investigate whether you are encountering a bug in the Shared Pool.

DATABASE LINK


create database link CDOI3 connect to cdo identified by cdo using 'CDOI3.cts.com';
select * from cdo.t1@CDOI3;
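To verify a link after creating it, the data dictionary can be queried (a quick sanity check, not from the original note):

select owner, db_link, username, host from dba_db_links;
select * from dual@CDOI3;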

10.237.5.154

User Name: oc4jadmin
Password: pass1234

https://metalink.oracle.com/metalink/plsql/f?p=110:19:4410067257338331514::NO:::

Oracle Server-Enterprise and Standard Edition/DBA Administration Technical Forum

Displayed below are the messages of the selected thread.

Thread Status: Closed

From: Sara Dyer 18-Feb-05 14:57 Subject: ORA-04020 on startup of database

RDBMS Version: 9.2.0.4.0
Operating System and Version: HP-UX B.11.00
Error Number (if applicable): ORA-04020
Product (i.e. SQL*Loader, Import, etc.):
Product Version:

ORA-04020 on startup of database

I'm attempting to set up multi-master replication. I ran catalog.sql as sys as suggested in Note:122039.1. The below error occurred when running catalog.sql. I now cannot connect to the database using Enterprise Manager or a web application. I can only connect via sqlplus. I have restarted the database several times and each time I start up the below error occurs. I have tried running utlrp.sql and receive the same error.

ERROR at line 15:
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_REPUTIL
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24
ORA-06508: PL/SQL: could not find program unit being called
ORA-06512: at line 24

Also the following error occurs when attempting to access the web application.

Fri, 18 Feb 2005 12:06:21 GMT
ORA-04020: deadlock detected while trying to lock object SYS.DBMS_STANDARD
DAD name: devltimetrk
PROCEDURE: time_sheet.display
URL: http://144.10.126.144:1643/pls/devlTimeTrk/time_sheet.display


From: Otto Rodriguez 18-Feb-05 21:59 Subject: Re: ORA-04020 on startup of database

Try the following:

1. Set parameters in your updated initSID.ora (create from spfile):
   AQ_TM_PROCESSES=0
   _SYSTEM_TRIG_ENABLED=FALSE
2. Rename spfile, shutdown and STARTUP MIGRATE
3. Run catalog.sql again
4. Comment out parameters added in step 1
5. Rename back your spfile
6. Shutdown and STARTUP normal

From: Sara Dyer 22-Feb-05 16:34 Subject: Re : ORA-04020 on startup of database

That fixed my original problem. I put my pfile back the way it was and now I am getting this - ORACLE instance started.

Total System Global Area  488075536 bytes
Fixed Size                   737552 bytes
Variable Size             452984832 bytes
Database Buffers           33554432 bytes
Redo Buffers                 798720 bytes
Database mounted.
ORA-00604: error occurred at recursive SQL level 1
ORA-04045: errors during recompilation/revalidation of XDB.DBMS_XDBZ0
ORA-04098: trigger 'SYS.T_ALTER_USER_B' is invalid and failed re-validation

I tried recompiling everything with utlrp.sql but received the "trigger is invalid" error, and I tried adding "_system_trig_enabled" and setting it to "*._system_trig_enabled=TRUE" in my pfile; no help.


Thank you,

Sara

ORA-12518: TNS:listener could not hand off client connection

Your server is probably running out of memory and needs to swap memory to disk. One cause can be an Oracle process consuming too much memory.

A possible workaround is to set the following parameter in the listener.ora and restart the listener:

DIRECT_HANDOFF_TTC_LISTENER=OFF

Should you be working with Multi-Threaded Server (MTS) connections, you might need to increase the value of large_pool_size.

START INFORMATICA REPOSITORY

su - informat
cd /informatica/repositoryserver
./pmrepserver

http://www.oracle.com/technology/books/10g_books.html

FOR SUN-SOLARIS 10G CONSOLE

smc&

AUTO EXTEND ON DATABASE TEMPFILE OR DATAFILE

alter database tempfile 'file_name' autoextend on;
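The same clause also applies to ordinary datafiles; a sketch with an explicit growth increment and cap (file name and sizes hypothetical):

alter database datafile '/oradata2/data/users01.dbf' autoextend on next 100m maxsize 2000m;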

HOW TO CREATE DATABASE MANUALLY

A) INIT.ORA PARAMETERS

instance_name=DWDEV
db_name=DWDEV
background_dump_dest=/oradata2/oracle9i/admin/DWDEV/bdump
user_dump_dest=/oradata2/oracle9i/admin/DWDEV/udump
core_dump_dest=/oradata2/oracle9i/admin/DWDEV/cdump
control_files=("/oradata2/oracle9i/admin/DWDEV/control01.ctl","/oradata2/oracle9i/admin/DWDEV/control02.ctl")
compatible=9.2.0.0.0
remote_login_passwordfile=EXCLUSIVE
undo_management=AUTO
undo_tablespace=undo1

B) STARTUP NOMOUNT;

C)

SQL> create database DWDEV
datafile '/oradata2/oracle9i/admin/DWDEV/DWDEV1.dbf' size 2048m
logfile group 1 '/oradata2/oracle9i/admin/DWDEV/log1.rdo' size 200m,
        group 2 '/oradata2/oracle9i/admin/DWDEV/log2.rdo' size 200m
default temporary tablespace temp
tempfile '/oradata2/oracle9i/admin/DWDEV/temp01.dbf' size 10m
undo tablespace undot1 datafile '/oradata2/oracle9i/admin/DWDEV/undot1.dbf' size 100M;

D) Run catalog.sql and catproc.sql
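A minimal sketch of step D from SQL*Plus, using the ?/ shorthand for ORACLE_HOME (run as SYS):

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql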

NO. OF CPU RUNNING IN THE SERVER

psrinfo
psrinfo -v

ASSIGN DEFAULT TABLESPACE FOR THE USER

alter user SAMCORE default tablespace smdest_data quota unlimited on smdest_data;

CREATE CONSTRAINT

create table ri_primary_key_1 (

a number,

b number,

c number,

constraint pk_name primary key (a, b)

);


Alter table table_name add constraint some_name primary key (column_name1, column_name2);

ENABLE NOVALIDATE & DROP CONSTRAINT

alter table test1 modify DAY_OF_WEEK varchar2(1) not null enable novalidate

ALTER TABLE egg DROP CONSTRAINT eggREFchicken;
ALTER TABLE chicken DROP CONSTRAINT chickenREFegg;
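For reference, constraints like the two dropped above could have been created as foreign keys along these lines (column names hypothetical):

ALTER TABLE egg ADD CONSTRAINT eggREFchicken FOREIGN KEY (chicken_id) REFERENCES chicken (id);
ALTER TABLE chicken ADD CONSTRAINT chickenREFegg FOREIGN KEY (egg_id) REFERENCES egg (id);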

Insert into table_name select * from table_name;
Create table table_name as select * from table_name;

DROP THE DATABASE

The following shows the steps to drop a database in a Unix environment. In order to delete a database, a few things need to be taken care of. First, all the database-related files, e.g. *.dbf, *.ctl, *.rdo, *.arc, need to be deleted. Then, the entries in listener.ora and tnsnames.ora need to be removed. Third, all the database links need to be removed, since they will be invalid anyway.

Depending on how you log in to the oracle account in Unix, you should have the environment set for the user oracle. To confirm that the environment variables are set, do env | grep ORACLE and you will notice that ORACLE_SID=SOME_SID and ORACLE_HOME=SOME_PATH. If you do not already have ORACLE_SID and ORACLE_HOME set, do it now.

Make sure also that you set ORACLE_SID and ORACLE_HOME correctly, or else you will end up deleting another database. Next, you will have to query all the database-related files from the dictionaries in order to identify which files to delete. Do the following:

01. Login with connect / as sysdba at svrmgrl
02. Startup the database if it's not already started. The database must be at least mounted.
03. spool /tmp/deletelist.lst (steps 03-09 are collected into a single runnable snippet after this list)
04. select name from v$datafile; (this will get all the datafiles; alternatively, you can select file_name from dba_data_files)
05. select member from v$logfile;
06. select name from v$controlfile;
07. archive log list (archive_log_dest is where the archived destination is)
08. Locate the ifile by issuing show parameter ifile (alternatively, check the content of init.ora)
09. spool off
10. Delete at O/S level the files listed in /tmp/deletelist.lst
11. Remove all the entries which refer to the deleted database in tnsnames.ora and listener.ora (located in $ORACLE_HOME/network/admin)
12. Remove all database links referring to the deleted database.
13. Check /var/opt/oracle/oratab to make sure there is no entry for the deleted database. If there is, remove it.
14. DONE
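A minimal sketch of steps 03-09 as one SQL*Plus script (paths as in the steps above; adjust for your environment):

spool /tmp/deletelist.lst
select name from v$datafile;
select member from v$logfile;
select name from v$controlfile;
archive log list
show parameter ifile
spool off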

SQL> select DAY_OF_WEEK,count(DAY_OF_WEEK) as cnt from PPM_AR_BROADCAST_HOUR group by DAY_OF_WEEK ;

CHANGE THE NLS_DATABASE_PARAMETER

select * from nls_database_parameters where parameter='NLS_CHARACTERSET';

ALTER THE FILE TO OFFLINE

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' offline;

alter database tempfile '/oradata2/rating9i/data/temp01.dbf' online;

SQL> alter table PPMENO_PROD.PPMENO_MEDIA_ENCODING_ENTITY modify (MINOR_STATION_IND varchar2(2 byte) constraint a1 not null enable novalidate);

/migration/oracle9i/bin/pupbld
/migration/oracle9i/sqlplus/admin/pupbld.sql
/u01/app/oracle/product/8.1.6/bin/pupbld
/u01/app/oracle/product/8.1.6/sqlplus/admin/pupbld.sql
/data1/ora92/orainstall/bin/pupbld
/data1/ora92/orainstall/sqlplus/admin/pupbld.sql

STATSPACK INSTALLATION

Steps:
1. Create tablespace tablespace_name datafile '/filename.dbf' size 500M;
2. cd /opt/oracle/rdbms/admin
3. Run the command at the SQL prompt: @/opt/oracle/rdbms/admin/spcreate.sql
4.

grant select on PPMDP_STEN.ppmdp_media_stream to public;

create public synonym ppmdp_media_stream for PPMDP_STEN.ppmdp_media_stream

IMP UTILITY

connected to ORACLE


The errors occur on Oracle databases installed on Windows machines too. Actually, the problem can occur on any platform. It usually happens when trying to import into a new database.

The problem occurs because the imp utility errors out when trying to execute some commands.

The solution is as follows:

Login as sys in SQL*Plus and run the following SQL scripts:

$OH/rdbms/admin/dbmsread.sql
$OH/rdbms/admin/prvtread.plb

After executing the above SQL scripts, retry the import. The error should disappear.

Select grantee, granted_role from dba_role_privs;

UNDOTBS

Guidelines for Using Partition-Level Import

Partition-level Import can only be specified in table mode. It lets you selectively load data from specified partitions or subpartitions in an export file. Keep the following guidelines in mind when using partition-level import.

Import always stores the rows according to the partitioning scheme of the target table.

Partition-level Import inserts only the row data from the specified source partitions or subpartitions.

If the target table is partitioned, partition-level Import rejects any rows that fall above the highest partition of the target table.

Partition-level Import cannot import a nonpartitioned exported table. However, a partitioned table can be imported from a nonpartitioned exported table using table-level Import.

Partition-level Import is legal only if the source table (that is, the table called tablename at export time) was partitioned and exists in the Export file.

If the partition or subpartition name is not a valid partition in the export file, Import generates a warning.

The partition or subpartition name in the parameter refers to only the partition or subpartition in the Export file, which may not contain all of the data of the table on the export source system.

If ROWS=y (default), and the table does not exist in the Import target system, the table is created and all rows from the source partition or subpartition are inserted into the partition or subpartition of the target table.


If ROWS=y (default) and IGNORE=y, but the table already existed before Import, all rows for the specified partition or subpartition in the table are inserted into the table. The rows are stored according to the existing partitioning scheme of the target table.

If ROWS=n, Import does not insert data into the target table and continues to process other objects associated with the specified table and partition or subpartition in the file.

If the target table is nonpartitioned, the partitions and subpartitions are imported into the entire table. Import requires IGNORE=y to import one or more partitions or subpartitions from the Export file into a nonpartitioned table on the import target system.
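As an illustration (not part of the original note), a partition-level import in table mode might look like this, assuming a dump file exp.dmp that contains a table SALES with partition P_Q1:

imp system/manager FILE=exp.dmp TABLES=(sales:p_q1) IGNORE=y ROWS=y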

USER CREATION IN OS

useradd -d /export/home/S106255 -m s106462

FIND UPTIME OF THE DATABASE

SQL> select TO_CHAR(startup_time,'mm-dd-yy hh24:mi:ss') from v$instance;

SQL> select property_name, property_value from database_properties;

The SQL will return the following results; look for DEFAULT_TEMP_TABLESPACE for the setting:

PROPERTY_NAME                  PROPERTY_VALUE
------------------------------ ------------------------------
DICT.BASE                      2
DEFAULT_TEMP_TABLESPACE        TEMP
DBTIMEZONE                     +01:00
NLS_NCHAR_CHARACTERSET         AL16UTF16
GLOBAL_DB_NAME                 ARON.GENERALI.CH
EXPORT_VIEWS_VERSION           8
NLS_LANGUAGE                   AMERICAN
NLS_TERRITORY                  AMERICA
NLS_CURRENCY                   $
NLS_ISO_CURRENCY               AMERICA
NLS_NUMERIC_CHARACTERS         .,
NLS_CHARACTERSET               WE8ISO8859P1
NLS_CALENDAR                   GREGORIAN
NLS_DATE_FORMAT                DD-MON-RR
NLS_DATE_LANGUAGE              AMERICAN
NLS_SORT                       BINARY
NLS_TIME_FORMAT                HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT           DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT             HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT        DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY              $
NLS_COMP                       BINARY
NLS_LENGTH_SEMANTICS           BYTE
NLS_NCHAR_CONV_EXCP            FALSE
NLS_RDBMS_VERSION              9.2.0.6.0

If the default temporary tablespace is wrong, then alter it with the following command:

SQL> alter database default temporary tablespace temp;

To check default temporary tablespace for all users of the database:

SQL> select username, temporary_tablespace, account_status from dba_users;

This will return the following result; check that each user's TEMPORARY_TABLESPACE is set correctly:

USERNAME   TEMPORARY_TABLESPACE   ACCOUNT_STATUS
---------- ---------------------- ------------------
SYS        TEMPRY                 OPEN
SYSTEM     TEMP                   OPEN
OUTLN      TEMP                   OPEN
DBSNMP     TEMP                   OPEN
DBMONITOR  TEMP                   OPEN
TEST       TEMP                   OPEN
WMSYS      TEMP                   EXPIRED & LOCKED

If a wrong temporary tablespace is found, alter the user (for example, SYS) to the correct tablespace with the following SQL:

SQL> alter user sys temporary tablespace temp;

Alternatively, recreate or add a datafile to your temporary tablespace and change the default temporary tablespace for your database:

SQL> drop tablespace temp including contents and datafiles;


SQL> create temporary tablespace temp tempfile '/db/temp01.dbf' size 100m autoextend off extent management local uniform size 1m;

SQL> alter database default temporary tablespace temp;

HOW TO DISABLE A CONSTRAINT

alter table PPM_PROD.PPM_SAMPLE_HH_CHRSTCS disable constraint SHC_S_FK;

WHAT WILL HAPPEN IF YOU DELETE THOSE FILES

And what can happen if I delete them?

It's two very large files (150-160 MB each):
9.2.0/assistants/dbca/templates/Data_Warehouse.dfj
9.2.0/assistants/dbca/templates/Transaction_Processing.dfj

  

Re: What are these files GOOD for? [message #126248 is a reply to message #126216]

Sat, 02 July 2005 00:09

From: Achchan

Hi, files that have a .DFJ extension contain the predefined redo logs and datafiles for seed templates in DBCA. If you delete them you won't be able to use those DB creation templates in future.

FOR JVM INSTALLATION IN ORACLE

For JVM installation, we have to run this script: initjvm.sql

DB_DOMAIN NAME PARAMETER

db_domain


GLOBAL_NAMES=TRUE
ALTER DATABASE RENAME GLOBAL_NAME TO WEBDV.CTS.COM;

CREATE TABLE STRUCTURE

SQL> select DBMS_METADATA.GET_DDL('TABLE','LOGOFF_TBL','COORS_TARGET') from dual;

CREATE OR REPLACE TRIGGER SYS.trg_logoff
BEFORE logoff ON DATABASE
BEGIN
  INSERT INTO SYS.logoff_tbl
  VALUES (sys_context('userenv','session_user'), SYSDATE);
END;

BACKUP PATH

mount 10.237.101.37:/unixbkp /backup
cd /backup
df -k .
cd /backup
ls
clear
ls
mkdir jpmc_bak
cd jpmc_bak
ls
df -k /u02
pwd
ls /u02
pwd
cp -rpf /u02/ccsystst .
ls -ltr
history

NO.OF CPU

isainfo -v

HOW TO LOCK THE USER ACCOUNT IN ORACLE

Alter user user_name account lock;
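The counterpart to re-enable the account:

Alter user user_name account unlock;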

CHANGE TABLESPACE BLOCK SIZE ISSUE

db_2k_cache_size=10m
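To actually use a tablespace with a non-default block size, the matching buffer cache must be configured first; a sketch (file name and sizes hypothetical):

SQL> alter system set db_2k_cache_size=10m;
SQL> create tablespace ts_2k blocksize 2k datafile '/oradata2/data/ts_2k01.dbf' size 50m;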


OEM IN ORACLE 10G

emctl status dbconsole

http://hostname:port/em

SET ENVIRONMENT VARIABLE IN ORACLE

export PATH=/opt/java1.4/bin:$PATH
export JAVA_HOME=/opt/java1.4/jre

ora9i    8837     1  0   May 24  ?      11:47 ora_pmon_poi
ora9i    2305     1  0   Mar 29  ?      23:59 ora_pmon_portal
ora9i    2321     1  0   Mar 29  ?      24:17 ora_pmon_EDMS
ora10g  17394     1  0   Apr 02  ?     128:57 ora_pmon_POI2
orainst 14743 14365  0 11:02:43 pts/3   0:00 grep pmon

CREATE DIRECTORY

create directory utl_dir as 'path';
grant all on directory utl_dir to <user>;

Modify the given parameter

utl_file_dir

If any connection timeout occurs, set the following parameter in sqlnet.ora:

SQLNET.INBOUND_CONNECT_TIMEOUT
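For example (value in seconds; 120 is an arbitrary illustration):

SQLNET.INBOUND_CONNECT_TIMEOUT = 120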

Any privilege for a DBMS package:

Grant execute on dbms_stats to username;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = '27229'
and s.paddr = p.addr;

Load dump to the Sybase database

Load database database_name from

Load database database_name from “compress:path”

Load database database_name from stripe on "compress:path01"
stripe on "compress:path02"

Dump database database_name to 'path';

Those scripts should be run to install JVM:

/javavm/install/initjvm.sql

/opt/oracle10g/xdk/admin/initxml.sql

/opt/oracle10g/xdk/admin/xmlja.sql

/opt/oracle10g/rdbms/admin/catjava.sql

/opt/oracle10g/rdbms/admin/catexf.sql

Once the database has been restarted, resolve any invalid objects by running the utlrp.sql script, e.g.:

@?/rdbms/admin/utlrp.sql

Those scripts should be run to uninstall JVM:

/rdbms/admin/catnoexf.sql

/rdbms/admin/rmaqjms.sql

/rdbms/admin/rmcdc.sql

/xdk/admin/rmxml.sql

/javavm/install/rmjvm.sql

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

10.237.101.37 - Backup Report

SYBASE Database

1. su - syb
2. dscp
3. open
4. listall
5. isql -Usa -Sddm (database name)
6. sp_who
7. go
8. shutdown with nowait
9. cd /Sybase/syb125/ASE-12_5/install
10. startserver -f RUN_gsms
    online database gem_curr
11. sp_helpdb
12. sp_configure
13. sp_configure 'parameter', newvalue

vgdisplay -v vg02 | grep "LV Name" |more

For truncating the transaction log:

dump tran test_saatchi with truncate_only

sp_helpdb test_saatchi

cd $SYBASE

MORE INTERFACE

Sybadmin-pW

MAX_ENABLED_ROLES = 70

svrmgrl
connect internal
startup
shutdown abort

The following command gathers statistics (including the number of rows in each table):

exec dbms_stats.gather_database_stats();

create or replace procedure sess1.kill_session (v_sid number, v_serial number) as
  v_varchar2 varchar2(100);
begin
  execute immediate 'ALTER SYSTEM KILL SESSION ''' || v_sid || ',' || v_serial || '''';
end;
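Usage, assuming the session to kill has SID 123 and serial# 4567 (values hypothetical):

SQL> exec sess1.kill_session(123, 4567);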


HARD MOUNT DRIVE

mount -o hard,rw,noac,rsize=32768,wsize=32768,suid,proto=tcp,vers=3 10.237.101.37:/unixbkp /backup

oradim -delete -sid EPT
oradim -new -sid EPT -pfile D:\oracleHome2\database\initEPT.ora
SET TNS_ADMIN=C:\oracle\ora92\network\admin

Alter user user_name quota unlimited on tablespace_name;

This is most likely a bug. I would recommend applying patchset 9.2.0.7, as Oracle recommends at least the 9.2.0.3 version. Anyway, you can try the fix below.

Change the listener and database services' Log On user to a domain user who is a member of the domain admin and ORA_DBA groups. The default setting is Local System Account.

- Run regedit
- Drill down to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services
- Locate and delete the OracleOraHome92TNSListener (or whatever the listener name is)
- Reboot the entire Windows box
- When started and logged on as the Oracle user, go to a DOS / Command prompt
- Run 'lsnrctl start <listener_name>' without the single quotes and replacing <listener_name> with the name.
- An OS error of 1060 will be seen (normal) as the service is missing.
- The listener should start correctly, or the next logical error may display.

By the way, can you explain the background of the problem? Did you do any upgrade? Might you be using a double ORACLE_HOME?

/var/opt/oracle/ -- Install.loc

zfs set quota=10G datapool/zfsoracle


select oracle_username, os_user_name, locked_mode, object_name, object_type
from v$locked_object a, dba_objects b
where a.object_id = b.object_id;

Select distinct b.username, b.osuser, b.machine, b.terminal, mode_held, mode_requested, b.logon_time, sw.*
From dba_ddl_locks a, v$session b, v$session_wait sw
Where name = '<object_name>'
and a.session_id = b.sid
and status = 'ACTIVE'
and sw.sid = b.sid;

spcreate.sql
spreport.sql

for i in SAGACEND SAGACENB GLATTD STNC wrkshdev
do
ORACLE_SID=$i
export ORACLE_SID
sqlplus "/ as sysdba" << !
select sum(bytes)/1024/1024 from dba_data_files;
exit
!
done

/opt/infoall/info

For Hp-ux File Extend

fuser -c /oradata2

umount /oradata2

lvextend -L 40000M /dev/vg00/lvol7
extendfs /dev/vg00/rlvoradata2

mount /oradata2

/dev/vg01/lvwls    2097152 1457349  610113   70% /weblogic

ALL_TAB_PRIVS - All object grants where the user or public is grantee
ALL_TAB_PRIVS_MADE - All object grants made by user or on user owned objects
ALL_TAB_PRIVS_RECD - All object grants to user or public
DBA_SYS_PRIVS - System privileges granted to users and roles
DBA_ROLES - List of all roles in the database
DBA_ROLE_PRIVS - Roles granted to users and to other roles
ROLE_ROLE_PRIVS - Roles granted to other roles
ROLE_SYS_PRIVS - System privileges granted to roles
ROLE_TAB_PRIVS - Table privileges granted to roles
SESSION_PRIVS - All privileges currently available to user
SESSION_ROLES - All roles currently available to user
USER_SYS_PRIVS - System privileges granted to current user
USER_TAB_PRIV - Grants on objects where current user is grantee, grantor, or owner
DBA_TAB_PRIVS

/etc/ftpd/ftpusers

bash-3.00# zfs create datapool/tele
bash-3.00# zfs set mountpoint=/app datapool/app
bash-3.00# zfs set quota=10G datapool/app

EXECUTE dbms_session.set_sql_trace (FALSE);

SELECT SUBSTR(df.NAME, 1, 70) file_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM v$datafile df, dba_free_space dfs
WHERE df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes
ORDER BY file_name;

purge table name-of-table

purge index name-of-index

purge recyclebin

purge dba_recyclebin

purge tablespace name-of-tablespace

purge tablespace name-of-tablespace user name-of-user

date ; dd if=<input file> of=<output file> ; date

isainfo -v (output shows whether the OS is 32-bit or 64-bit)

10.237.209.11

isql -Udba -Scso_otpw:SQL

For starting and stopping the database, script: /sybdata1/syb126/IQ/cso_ot/


Recover database;
Alter database open;

10.237.204.69

SELECT dbms_metadata.get_ddl('TABLESPACE', tablespace_name) FROM dba_tablespaces;

Here is the query to get the details based on Unix PID:

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where s.process = <unix_pid>
and s.paddr = p.addr;

CREATE CONTROLFILE SET DATABASE "GMACDEV" RESETLOGS NOARCHIVELOG
-- SET STANDBY TO MAXIMIZE PERFORMANCE
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 30
MAXINSTANCES 1
MAXLOGHISTORY 112
LOGFILE
  GROUP 1 '/gmac/GMACDEV/log/log1.rdo' SIZE 100M,
  GROUP 2 '/gmac/GMACDEV/log/log2.rdo' SIZE 100M
-- STANDBY LOGFILE
DATAFILE
  '/gmac/GMACDEV/data/system.dbf',
  '/gmac/GMACDEV/data/undo.dbf',
  '/gmac/GMACDEV/data/user.dbf',
  '/gmac/GMACDEV/data/test.dbf'
CHARACTER SET US7ASCII;

select * from nls_database_parameters
where parameter = any('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

EXECUTE dbms_session.set_sql_trace (TRUE);

PL/SQL PROFILER

If you have not already configured the DBMS_PROFILER package, look for the following scripts in $ORACLE_HOME/rdbms/admin: PROFLOAD.SQL and PROFTAB.SQL
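Once installed, a typical profiling session looks like this sketch (the run comment is illustrative; the PLSQL_PROFILER_* tables are created by PROFTAB.SQL):

SQL> exec dbms_profiler.start_profiler('run1');
SQL> -- run the PL/SQL to be profiled here
SQL> exec dbms_profiler.stop_profiler;
SQL> select runid, run_comment from plsql_profiler_runs;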

\\10.237.5.164\Softwares

My problem: When I don't use tnsnames and want to use the ipc protocol, I get the following error:

SQL> connect myuserid/mypassword
ERROR:
ORA-01034: ORACLE not available
ORA-27121: unable to determine size of shared memory segment
SVR4 Error: 13: Permission denied

Answer to your problem:
=======================
Make sure the file 'oracle' ($ORACLE_HOME/bin/oracle) has the following permissions: 6751. If not:

1. Login as the oracle user
2. Shutdown (normal) the db
3. Go to $ORACLE_HOME/bin
4. Execute: chmod 6751 oracle
5. Check the file permissions on oracle using: ls -l oracle
   They should be: -rwsr-s--x

Startup the db and try connecting as dba or a non-oracle user. If this does not work:
- make sure permissions on oracle filesystems are set correctly (set to 755)
- the filesystem must be mounted with the correct setuid

date '+DATE: %m/%d/%y%nTIME:%H:%M:%S'

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emca -repos create
$ emca -config dbcontrol db

STARTED EMCA at Fri May 14 10:43:22 MEST 2004
Enter the following information about the database to be configured.

Listener port number: 1521
Database SID: AKI1
Service name: AKI1.WORLD

Email address for notification: [email protected]
Email gateway for notification: mailhost
Password for dbsnmp: xxxxxxx
Password for sysman: xxxxxxx
Password for sys: xxxxxxx

---------------------------------------------------------
You have specified the following settings

Database ORACLE_HOME: /opt/oracle/product/10.1.0
Enterprise Manager ORACLE_HOME: /opt/oracle/product/10.1.0

Database host name ..........: akira
Listener port number .........: 1521
Database SID .................: AKI1
Service name .................: AKI1
Email address for notification: [email protected]
Email gateway for notification: mailhost
---------------------------------------------------------
Do you wish to continue? [yes/no]: yes
AM oracle.sysman.emcp.EMConfig updateReposVars
INFO: Updating file ../config/repository.variables ...

Now wait about 10 Minutes to complete!

M oracle.sysman.emcp.EMConfig createRepository
INFO: Creating repository ...
M oracle.sysman.emcp.EMConfig perform
INFO: Repository was created successfully
M oracle.sysman.emcp.util.PortQuery findUsedPorts
INFO: Searching services file for used port
AM oracle.sysman.emcp.EMConfig getProperties
......................
INFO: Starting the DBConsole ...
AM oracle.sysman.emcp.EMConfig perform
INFO: DBConsole is started successfully
INFO: >>>>>>>>>>> The Enterprise Manager URL is http://akira:5500/em <<<<<<<<<<<
Enterprise Manager configuration is completed successfully
FINISHED EMCA at Fri May 14 10:55:25 MEST 2004

Try to connect to the database Control

http://akira:5500/em

emca -deconfig dbcontrol db -repos drop

1> select name from sysconfigures;
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ';'.
1> select name from sysconfigures where name like '%device%';
2> go
Msg 102, Level 15, State 1:
Server 'ddm', Line 1:
Incorrect syntax near ';'.
1> select name from sysconfigures where name like '%device%'
2> go
 name
 --------------------------------------------------------------------------------
 number of devices
 suspend audit when device full

(2 rows affected)
1> sp_configure 'number of devices'
2> go
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          #36         60           60          number               dynamic

(1 row affected)
(return status = 0)
1> sp_configure 'number of devices',70
2> go
00:00000:00027:2007/03/15 14:52:10.46 server  Configuration file '/sybase/syb125/ASE-12_5/ddm.cfg' has been written and the previous version has been renamed to '/sybase/syb125/ASE-12_5/ddm.046'.
00:00000:00027:2007/03/15 14:52:10.48 server  The configuration option 'number of devices' has been changed by 'sa' from '60' to '70'.
 Parameter Name                 Default     Memory Used Config Value Run Value   Unit                 Type
 ------------------------------ ----------- ----------- ------------ ----------- -------------------- ----------
 number of devices              10          #44         70           70          number               dynamic

(1 row affected)
Configuration option changed. The SQL Server need not be rebooted since the option is dynamic.

Changing the value of 'number of devices' to '70' increases the amount of memory ASE uses by 12 K.
(return status = 0)

disk init
name='gem_hist_data7',
physname='/data/syb125/gem_hist/gem_hist_data7.dat',
size='1600M'
go

This Query is used to find out the object name and lock id

select c.owner, c.object_name, c.object_type, b.sid, b.serial#, b.status, b.osuser, b.machine
from v$locked_object a, v$session b, dba_objects c
where b.sid = a.session_id
and a.object_id = c.object_id;

For applying patches (migration from one version to another):
1. Run setup.exe
2. Shut down the database
3. startup migrate
4. Run the scripts below:
   catpatch.sql
   catcio.sql
   utlrp.sql
   catexp.sql
5. shutdown immediate

Find out the locked object and sql query

select a.object_name, b.oracle_username, b.os_user_name, c.sid, c.serial#, c.terminal, d.sql_text
from sys.dba_objects a, v$locked_object b, v$session c, v$sqltext d
where a.object_id = b.object_id
and c.sid = b.session_id
and c.sql_hash_value = d.hash_value;

HP-UX Cron tab

NAME

crontab - user crontab file


SYNOPSIS

crontab [file]

crontab -r

crontab -l

DESCRIPTION

crontab copies the specified file, or standard input if no file is specified, into a directory that holds all users' crontab files (see cron(1M)). The -r option removes a user's crontab from the crontab directory. crontab -l lists the crontab file for the invoking user.

Users are permitted to use crontab if their names appear in the file /usr/lib/cron/cron.allow. If that file does not exist, the file /usr/lib/cron/cron.deny is checked to determine if the user should be denied access to crontab. If neither file exists, only root is allowed to submit a job. If only cron.deny exists and is empty, global usage is permitted. The allow/deny files consist of one user name per line.

A crontab file consists of lines of six fields each. The fields are separated by spaces or tabs. The first five are integer patterns that specify the following:

minute (0-59)
hour (0-23)
day of the month (1-31)
month of the year (1-12)
day of the week (0-6 with 0=Sunday)
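For example, this sample entry (script path hypothetical) would run a backup script at 02:00 every Sunday:

0 2 * * 0 /home/oracle/scripts/weekly_backup.sh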

select s.machine from v$process p, v$session s where s.paddr = p.addr and spid = 17143;

The Oracle Management Agent (OMA) is part of the Oracle Enterprise Manager Grid Control software. OMA is an operating system process that, when deployed on each monitored host, is responsible for monitoring all targets on the host and communicating their status to the Oracle Management Server (OMS), which stores it in the Oracle Management Repository (OMR).

Donald K. Burleson

Oracle Tips

Locating and deleting unused indexes in Oracle9i

One of the most serious causes of poor DML performance is the existence of unused indexes. All SQL inserts, updates and deletes will run slower if they have to update a large number of indexes each time a row in a table is changed.

Sadly, many Oracle professionals will allocate indexes whenever they see a column that is referenced in the WHERE clause of an SQL query. While this approach makes SQL run fast, function-based Oracle indexes make it possible to over-allocate indexes on table columns. This over-allocation of indexes can cripple the performance of loads on critical Oracle tables.

Until Oracle9i, there was no way to identify those indexes that were not being used by SQL queries. This tip describes the Oracle9i method that allows the DBA to locate and delete unused indexes.

The approach is quite simple. Oracle9i has a tool that allows you to monitor index usage with an alter index command. You can then query and find those indexes that are unused and drop them from the database.

Here is a script that will turn on monitoring of usage for all indexes in a system:


set pages 999;
set heading off;

spool run_monitor.sql

select 'alter index '||owner||'.'||index_name||' monitoring usage;'
from dba_indexes
where owner not in ('SYS','SYSTEM','PERFSTAT');

spool off;

@run_monitor

Next, we wait until a significant amount of SQL has executed on our database, and then query the new v$object_usage view:

select index_name, table_name, used
from v$object_usage;

Here we see that v$object_usage has a column called USED, which will be set to YES or NO. Sadly, this will not tell you how many times the index has been used, but this tool is useful for investigating unused indexes.

INDEX_NAME               TABLE_NAME      MON USED
------------------------ --------------- --- ----
CUSTOMER_LAST_NAME_IDX   CUSTOMER        YES NO
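To pull out just the candidates for dropping, the view can be filtered (a small extension of the query above):

select index_name, table_name
from v$object_usage
where monitoring = 'YES'
and used = 'NO';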

If you like Oracle tuning, you might enjoy my latest book "Oracle Tuning: The Definitive Reference" by Rampant TechPress. (I don't think it is right to charge a fortune for books!) You can buy it right now at this link:

http://www.rampant-books.com/book_2005_1_awr_proactive_tuning.htm

sysoper privileges

Perform STARTUP and SHUTDOWN operations
CREATE SPFILE
ALTER DATABASE OPEN/MOUNT/BACKUP
ALTER DATABASE ARCHIVELOG
ALTER DATABASE RECOVER (complete recovery only; any form of incomplete recovery, such as UNTIL TIME|CHANGE|CANCEL|CONTROLFILE, requires connecting as SYSDBA)
Includes the RESTRICTED SESSION privilege.

Changing the Character Set After Database Creation

In some cases, you may wish to change the existing database character set. For instance, you may find that the number of languages that need to be supported in your database has increased. In most cases, you will need to do a full export/import to properly convert all data to the new character set. However, if, and only if, the new character set is a strict superset of the current character set, it is possible to use ALTER DATABASE CHARACTER SET to expedite the change in the database character set.

The target character set is a strict superset if and only if each and every codepoint in the source character set is available in the target character set, with the same corresponding codepoint value. For instance the following migration scenarios can take advantage of the ALTER DATABASE CHARACTER SET command since US7ASCII is a strict subset of WE8ISO8859P1, AL24UTFFSS, and UTF8:

Current Character Set   New Character Set   New Character Set is strict superset?
US7ASCII                WE8ISO8859P1        yes
US7ASCII                AL24UTFFSS          yes
US7ASCII                UTF8                yes

WARNING: Attempting to change the database character set to a character set that is not a strict superset can result in data loss and data corruption. To ensure data integrity, whenever migrating to a new character set that is not a strict superset, you must use export/import. It is essential to do a full backup of the database before using the ALTER DATABASE [NATIONAL] CHARACTER SET statement, since the command cannot be rolled back. The syntax is:

ALTER DATABASE [db_name] CHARACTER SET new_character_set;
ALTER DATABASE [db_name] NATIONAL CHARACTER SET new_character_set;


The database name is optional. The character set name should be specified without quotes, for example:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;

To change the database character set, perform the following steps. Not all of them are absolutely necessary, but they are highly recommended:

SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP MOUNT;
SQL> ALTER SYSTEM ENABLE RESTRICTED SESSION;
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE CHARACTER SET new_character_set;
SQL> SHUTDOWN IMMEDIATE; -- or NORMAL
SQL> STARTUP;

To change the national character set, replace the ALTER DATABASE CHARACTER SET statement with ALTER DATABASE NATIONAL CHARACTER SET. You can issue both commands together if desired.

bash-3.00# zfs create datapool1/dbbackups
bash-3.00# zfs set mountpoint=/dbbackups datapool/dbbackups
bash-3.00# zfs set quota=10G datapool/dbbackups

/var/spool/cron/crontabs

1. Touch user
2. Check cron.deny file also

How to calculate the database size?

SELECT segment_type, segment_name, BLOCKS*2048/1024 "Kb"
FROM DBA_SEGMENTS
WHERE OWNER=UPPER('<owner>')
AND SEGMENT_NAME = UPPER('<table_name>');

You should subtract emptied blocks from this table, using:

ANALYZE TABLE <owner>.<table_name> ESTIMATE STATISTICS;

SELECT TABLE_NAME, EMPTY_BLOCKS*2048/1024 "Kb"
FROM DBA_TABLES
WHERE OWNER=UPPER('<owner>')
AND TABLE_NAME = UPPER('<table_name>');


May 23: If you want to know the database size, just calculate:

DATAFILE SIZE + CONTROL FILE SIZE + REDO LOG FILE SIZE

Regards, Taj
http://dbataj.blogspot.com

Jun 1: babu is correct... but analyze the indexes also... if you want to know the actual used space, use dba_extents instead of dba_segments.
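A rough sketch of that formula in SQL (control files omitted; their size can be read at OS level):

select (select sum(bytes) from v$datafile) +
       (select sum(bytes) from v$tempfile) +
       (select sum(bytes) from v$log) "Total Size (bytes)"
from dual;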

Oracle Managed Files (OMF)

OMF simplifies the creation of databases as Oracle does all OS operations and file naming. It has several advantages including:

Automatic cleanup of the filesystem when database objects are dropped.
Standardized naming of database files.
Increased portability since file specifications are not needed.
Simplified creation of test systems on differing operating systems.
No unused files wasting disk space.

The location of database files is defined using the DB_CREATE_FILE_DEST parameter. If it is defined on its own, all files are placed in the same location. If the DB_CREATE_ONLINE_LOG_DEST_n parameter is defined, alternate locations and levels of multiplexing can be defined for logfiles. These parameters are dynamic and can be changed using the ALTER SYSTEM statement:

ALTER SYSTEM SET DB_CREATE_FILE_DEST='C:\Oracle\Oradata\TSH1';

Files typically have a default size of 100M and are named using the following formats, where %u is a unique 8 digit code, %g is the logfile group number, and %t is the tablespace name:

File Type            Format
Controlfiles         ora_%u.ctl
Redo Log Files       ora_%g_%u.log
Datafiles            ora_%t_%u.dbf
Temporary Datafiles  ora_%t_%u.tmp

Managing Controlfiles Using OMF
Managing Redo Log Files Using OMF
Managing Tablespaces Using OMF
Default Temporary Tablespace

Managing Controlfiles Using OMF

During database creation the controlfile names are not specified. Instead, a controlfile is created for each DB_CREATE_ONLINE_LOG_DEST_n specified in the init.ora file. Once the database creation is complete the CONTROL_FILES parameter can be set in the init.ora file using the generated names shown in the V$CONTROLFILE view.

Managing Redo Log Files Using OMF

When using OMF for redo logs, the DB_CREATE_ONLINE_LOG_DEST_n parameters in the init.ora file decide on the locations and numbers of logfile members. For example:

DB_CREATE_ONLINE_LOG_DEST_1 = c:\Oracle\Oradata\TSH1

DB_CREATE_ONLINE_LOG_DEST_2 = d:\Oracle\Oradata\TSH1

The above parameters mean two members will be created for the logfile group in the specified locations when the ALTER DATABASE ADD LOGFILE; statement is issued. Oracle will name the file and increment the group number if they are not specified.

The ALTER DATABASE DROP LOGFILE GROUP 3; statement will remove the group and it members from the database and delete the files at operating system level.

Managing Tablespaces Using OMF

As shown previously, the DB_CREATE_FILE_DEST parameter in the init.ora file specifies the location of the datafiles for OMF tablespaces. Since the file location is specified and Oracle will name the file, new tablespaces can be created using the following statement:

CREATE TABLESPACE tsh_data;

The resultant datafiles will have a default size of 100M and AUTOEXTEND UNLIMITED. For a specific size file use:

CREATE TABLESPACE tsh_data DATAFILE SIZE 150M;

To add a datafile to a tablespace use:

ALTER TABLESPACE tsh_data ADD DATAFILE;

If a tablespace is dropped, Oracle will remove the OS files also. For tablespaces not using the OMF feature this cleanup can be performed by issuing the statement:

DROP TABLESPACE tsh_data INCLUDING CONTENTS AND DATAFILES;

Default Temporary Tablespace

In previous releases, if you forgot to assign a temporary tablespace to a user the SYSTEM tablespace was used. This can cause contention and is considered bad practice. To prevent this, 9i gives you the ability to assign a default temporary tablespace. If a temporary tablespace is not explicitly assigned, the user is assigned to this tablespace.

A default temporary tablespace can be created during database creation or assigned afterwards:

CREATE DATABASE TSH1
....
DEFAULT TEMPORARY TABLESPACE dts1
TEMPFILE 'c:\Oracle\Oradata\dts_1.f' SIZE 20M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;

-- or

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE dts2;

A default temporary tablespace cannot be taken offline until a new default temporary tablespace is brought online.

Hope this helps. Regards Tim...

Oracle9i's Auto Segment Space Management Option
James F. Koopmann (Database Expert), posted 1/12/2006

Oracle has done it again. Venture with me down what seems like a small option but in fact has major implications on what we, as DBAs no longer have to manage.

The world of database performance and tuning is changing very fast. Every time I look at new features, it convinces me more and more that databases are becoming auto-tunable and self-healing. We could argue for quite a while that DBAs will or will not become obsolete in the future, but I think our current niche is the acceptance of new technology and our ability to empower the companies we work for by using it. With Oracle9i, Oracle has given a peek into the future of where it is going when tuning, not only the database but applications as well. The little gem that Oracle has snuck in is its new automated segment space management option.

What Is It

If you haven't read the manuals yet, please do. You will quickly realize that Oracle is pushing us to use locally managed tablespaces. Because all the information to manage segments and blocks is kept in bitmaps in locally managed tablespaces, the access to the data dictionary is relieved. Not only does this not generate redo, contention is reduced. Along with the push to locally managed tablespaces is the push to use automatic segment space management. This option takes total control of the parameters FREELISTS, FREELIST GROUPS, and PCTUSED. That means that Oracle will track and manage the used and free space in datablocks using bitmaps for all objects defined in the tablespace for which it has been defined.

How It Used to Be


In the olden days, everything was dictionary-managed tablespaces. How objects were being used within tablespaces made setting FREELIST, FREELIST GROUPS, and PCTUSED an ordeal. Typically, you would sit down and look at the type of DML that was going to be executed, the number of users executing the DML, the size of rows in tables, and how the data would grow over time. You would then come up with an idea of how to set FREELIST, PCTUSED, and PCTFREE in order to get the best usage of space when weighed against performance of DML. If you didn't know what you were doing, or even if you did, you constantly had to monitor contention and space to verify and plan your next attempt. Let's spend a bit of time getting accustomed to these parameters.

FREELIST

This is a list of blocks kept in the segment header that may be used for new rows being inserted into a table. When an insert is being done, Oracle gets the next block on the freelist and uses it for the insert. When multiple inserts are requested from multiple processes, there is the potential for a high level of contention, since the multiple processes will be getting the same block from the freelist, until it is full, and inserting into it. Depending on how much contention you can live with, you need to determine how many freelists you need so that the multiple processes can access their own freelist.

PCTUSED

This is a storage parameter that states that when the percentage of a block being used falls below PCTUSED, that block should be placed back on the freelist for available inserts. The issue with setting a value for PCTUSED was that you had to balance the need for performance, a low PCTUSED to keep blocks off the freelist, against a high PCTUSED to keep space usage under control.

FREELIST GROUPS

Basically used for multiple instances to access an object. This setting can also be used to move the freelists to other blocks beside the segment header and thus give some relief to segment header contention.

Why Is Auto Segment Space Management Good

I have come up with a short list of reasons why you might want to switch to auto segment space management. I truly think you can find something that you will like.

* No worries
* No wasted time searching for problems that don't exist
* No planning needed for storage parameters
* Out of the box performance for created objects
* No need to monitor levels of insert/update/delete rates
* Improvement in space utilization
* Better performance than most can tune or plan for with concurrent access to objects
* Avoidance of data fragmentation
* Minimal data dictionary access
* Better indicator of the state of a data block
* Furthermore, the method that Oracle uses to keep track of the availability of free space in a block is much more granular than the singular nature of the old, on-the-freelist or off-the-freelist scenario.

Create a Tablespace for Auto Segment Space Management


Creating a tablespace for Auto Segment Space Management is quite simple. Include the statement at the end of the CREATE TABLESPACE statement. Here is an example.

CREATE TABLESPACE no_space_worries_ts
DATAFILE '/oradata/mysid/datafiles/nospaceworries01.dbf' SIZE 100M
EXTENT MANAGEMENT LOCAL
SEGMENT SPACE MANAGEMENT AUTO;

The AUTO keyword tells Oracle to use bitmaps for managing space for segments.

Check What You Have Defined

To determine your current tablespace definition, query the data dictionary.

select tablespace_name, contents, extent_management, allocation_type, segment_space_management
from dba_tablespaces;

How Do You Switch To Auto Segment Space Management

Realize that you can't change the method of segment space management with an ALTER statement. You must create a new permanent, locally managed tablespace, state auto segment space management, and then migrate the objects.

Optional Procedures

Oracle has a package called DBMS_REPAIR that contains a procedure called SEGMENT_FIX_STATUS that will allow you to fix corruption of the bitmap states. This procedure will recalculate the bitmap states based on either block contents or a specified value.

The package DBMS_SPACE contains a procedure called SPACE_USAGE that gives information about how space is being used within blocks under the segment high water mark.

Let Oracle Take Over

Maybe it's my old age or years of doing the mundane tasks as a DBA that wants to embrace this feature. If there is one thing I have learned from using Oracle databases, it's that Oracle has gotten a ton better at making sure new features work and are geared at truly making database performance better. Here is just one instance where I think we can embrace Oracle's attempt to take over a mundane task that has been prone to error in the wrong hands. After all, it isn't rocket science when you get down to it and will probably be gone in the next release anyway.

Select DBTIMEZONE from dual; is used to determine the time zone of a database.

Auditing

The auditing mechanism for Oracle is extremely flexible, so I'll only discuss performing full auditing on a single user:


Server Setup
Audit Options
View Audit Trail
Maintenance
Security

Server Setup

To allow auditing on the server you must:

Set "audit_trail = true" in the init.ora file.
Run the $ORACLE_HOME/rdbms/admin/cataudit.sql script while connected as SYS.

Audit Options

Assuming that the "fireid" user is to be audited:

CONNECT sys/password AS SYSDBA

AUDIT ALL BY fireid BY ACCESS;

AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY fireid BY ACCESS;

AUDIT EXECUTE PROCEDURE BY fireid BY ACCESS;

These options audit all DDL & DML issued by "fireid", along with some system events:

DDL (CREATE, ALTER & DROP of objects)
DML (INSERT, UPDATE, DELETE, SELECT, EXECUTE)
SYSTEM EVENTS (LOGON, LOGOFF etc.)

View Audit Trail

The audit trail is stored in the SYS.AUD$ table. Its contents can be viewed directly or via the following views:

DBA_AUDIT_EXISTS
DBA_AUDIT_OBJECT
DBA_AUDIT_SESSION
DBA_AUDIT_STATEMENT
DBA_AUDIT_TRAIL
DBA_OBJ_AUDIT_OPTS
DBA_PRIV_AUDIT_OPTS
DBA_STMT_AUDIT_OPTS

The audit trail contains a lot of data, but the following are most likely to be of interest (see the sample query below):

Username : Oracle username.
Terminal : Machine that the user performed the action from.
Timestamp : When the action occurred.
Object Owner : The owner of the object that was interacted with.
Object Name : The name of the object that was interacted with.
Action Name : The action that occurred against the object (INSERT, UPDATE, DELETE, SELECT, EXECUTE).
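A sketch of a query pulling those fields from the DBA_AUDIT_TRAIL view:

select username, terminal, timestamp, owner, obj_name, action_name
from dba_audit_trail
order by timestamp;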


Maintenance

The audit trail must be deleted/archived on a regular basis to prevent the SYS.AUD$ table growing to an unacceptable size.
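For example, a periodic cleanup might archive the rows elsewhere and then trim old entries. 90 days is an arbitrary illustration, and the timestamp column name varies by version, so verify against your SYS.AUD$ definition first:

delete from sys.aud$ where timestamp# < sysdate - 90;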

Security

Only DBAs should have maintenance access to the audit trail. If SELECT access is required by any applications, this can be granted to any users, or alternatively a specific user may be created for this.

Auditing modifications of the data in the audit trail itself can be achieved as follows:

AUDIT INSERT, UPDATE, DELETE ON sys.aud$ BY ACCESS;

2. sqlplus '/ as sysdba' (HP-UX/AIX)

EXEC DBMS_UTILITY.compile_schema('ATT');

EXEC DBMS_UTILITY.analyze_schema('ATT','COMPUTE') ;

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Doc ID: Note:224270.1
Type: DIAGNOSTIC TOOLS
Last Revision Date: 30-MAY-2007
Status: PUBLISHED

 

Abstract

Trace Analyzer TRCANLZR - Interpreting Raw SQL Traces with Binds and/or Waits generated by EVENT 10046

Reads a raw SQL Trace generated by standard SQL Trace or by EVENT 10046 (Level 4, 8 or 12), and generates a comprehensive HTML report with performance related details: time summary, call summary (parse, execute, fetch), identification of top SQL, row source plan, explain plan, CBO statistics, wait events, values of bind variables, I/O summary per schema object, latches, hot blocks, etc.

Output HTML report includes all the details found on TKPROF, plus additional information normally requested and used for a transaction performance analysis. Generated report is more readable and extensive than text format used on prior version of this tool and on current TKPROF.

 

Product Name, Product Version

RDBMS 9i (9.2), 10g, or higher


Can be used for Oracle Apps 11i or higher, or for any other application running on top of an Oracle database

Platform: Platform independent
Date Created: Version 2.4.3, May 2007
Author: Carlos Sierra

  Instructions

Execution Environment:

Once this tool is installed (under its own schema), it is executed from SQL*Plus from the schema owning the transaction that generated the raw SQL Trace. For example, if used on an Oracle Applications instance, execute using the APPS user.

Access Privileges:

To install, it requires connection as a user with SYSDBA privilege. Once installed, it does not require special privileges, and it can be executed from any schema user.

Usage (standard method):

sqlplus <usr>/<pwd>

exec trca$i.trace_analyzer('<raw trace filename on udump directory>');

Note: For maximum report details, execute on the same instance and under the same schema user that generated the raw SQL Trace being analyzed. If executed on a different instance or user, details such as explain plans and I/O summaries are skipped.

Example for standard method:


sqlplus apps/apps;

exec trca$i.trace_analyzer('vis_ora_14328_SYSADMIN.trc');

Note: Trace Analyzer may take a long time to execute on large trace files. Most traces are less than 50M and they may take just a few minutes. But as a reference, a trace with a size of 500M may take more than 1 hour.

Alternate Usage (generating report as spool file into local directory):

sqlplus <usr>/<pwd>

start trcanlzr.sql <trace filename to be analyzed>;

Note: Use this alternate method only if you need to generate the Trace Analyzer report into a local client directory (i.e. spool file). This method generates the report first into the UDUMP directory, then copies it into the local directory using a spool file.

Example generating report into local directory:

sqlplus apps/apps;

start trcanlzr.sql vis_ora_14328_SYSADMIN.trc;

Installation:

Instructions, sample report and tool are included in file trca.zip.

PROOFREAD THIS SCRIPT BEFORE USING IT! Due to differences in the way text editors, e-mail packages, and operating systems handle text formatting (spaces, tabs, and carriage returns), this script may not be in an executable state when you first receive it. Check over the script to ensure that errors of this type are corrected. The script will produce an output file named trcanlzr_n_m.html. This file can be viewed in a browser or uploaded for support analysis.

  Description

For a given raw SQL Trace generated by EVENT 10046 it provides:

1. Trace identification including actual start and completion time, host name, instance, size, RDBMS version, etc.
2. SQL statements count, user and internal, total and unique.
3. Time summary: elapsed, cpu, non-idle wait events, idle wait events, for user (recursive and non-recursive) and for internal.
4. Call summary for user and internal, with elapsed, cpu, logical reads, physical reads and similar performance details.
5. Summary of wait events, classified by non-idle and idle, for user and for internal (if EVENT 10046 level 8 or 12 was used generating the trace).
6. Top 20 SQL statements in terms of importance for SQL tuning analysis.
7. List of all unique SQL statements with a one-line performance summary per statement.
8. Gaps of no trace activity found on file.
9. List of transactions found (commits and rollbacks).
10. Oracle errors if any.
11. I/O core waits including schema objects affected (tables, indexes, partitions), when traced with level 8 or 12.
12. Top 5 hot blocks, indicating the schema objects (level 8 or 12).
13. Latch wait summary, by name and specific parameters (level 8 or 12).
14. Non-default initialization parameters.


For every SQL statement included in the trace, it includes:

1. Cursor header with SQL statement text, hash value, length, line on trace, depth, user, etc.
2. Oracle errors if any.
3. Call summary (parse, execute and fetch totals).
4. Non-idle and idle wait events (if traced with level 8 or 12).
5. I/O and latch waits summary (if level 8 or 12).
6. First execution and top 10 for the particular SQL statement.
7. List of bind variable values for first and top 10 executions if the trace was generated using EVENT 10046 level 4 or 12.
8. Cumulative row source plan for all executions of the SQL statement.
9. Detailed explain plan if Trace Analyzer is executed on the same instance where the trace was generated, and if the SQL statement made the Top 20 list.
10. Table, index and partition details including row count, CBO statistics and indexed columns if the SQL statement generated an explain plan.

  Execution Parameters (most common)

1. p_trace_filename (req)

Name of the raw SQL Trace to be analyzed. The file should exist in the UDUMP directory.

This parameter is case sensitive.

Example:

sqlplus apps/apps;

exec trca$i.trace_analyzer('vis_ora_14328_SYSADMIN.trc');

2. p_output_filename (opt)

Name of the output file. The report is generated into the UDUMP directory.

Example:

sqlplus apps/apps;

exec trca$i.trace_analyzer('vis_ora_14328_SYSADMIN.trc', 'Booking Line.html');

  References

Note 39817.1 - Interpreting Raw SQL_TRACE and DBMS_SUPPORT.START_TRACE output

Note 169935.1 - Troubleshooting Oracle Apps Performance Issues

Note 215187.1 - SQLTXPLAIN.SQL: Enhanced Explain Plan and diagnostic info for one SQL statement

Note 243755.1 - Implementing and Using the PL/SQL Profiler

Note 238684.1 - SQLAREAT.SQL: SQL Area, Plan and Statistics for Top DML

TraceAnalyzer_IOUG.pdf (included in compressed file) - white paper "Oracle Trace Analysis on Steroids - An in-depth look at the Trace Analyzer", written and presented by Dave Moore, DBI, at the COLLABORATE06 event (IOUG OAUG) in Nashville, Tennessee, USA, April 23-27, 2006.

General Information

Note: Trace Analyzer is a little-known tool, downloadable from Oracle, that is an improvement on and substitute for TKPROF for analyzing trace files.

Download Trace Analyzer: MetaLink Note 224270.1, http://metalink.oracle.com/metalink/plsql/ml2_gui.startup

Install Trace Analyzer

Read the instructions in INSTRUCTIONS.TXT to install the product.

As the instructions are not the clearest, the following is what I did to install Trace Analyzer so that it would be owned by the SYSTEM schema:

1. Created a directory named INSTALL
2. Unzipped TRCA.zip into the INSTALL directory
3. Created a directory under $ORACLE_HOME named TraceAnalyzer
4. Moved the .sql files from the INSTALL to the TraceAnalyzer directory
5. Logged onto Oracle as SYS

   conn / as sysdba

6. Performed the following grants to SYSTEM

GRANT SELECT ON dba_indexes TO <schema_name>;
GRANT SELECT ON dba_ind_columns TO <schema_name>;
GRANT SELECT ON dba_objects TO <schema_name>;
GRANT SELECT ON dba_tables TO <schema_name>;
GRANT SELECT ON dba_temp_files TO <schema_name>;
GRANT SELECT ON dba_users TO <schema_name>;
GRANT SELECT ON v_$instance TO <schema_name>;
GRANT SELECT ON v_$latchname TO <schema_name>;
GRANT SELECT ON v_$parameter TO <schema_name>;

7. Connected to Oracle as SYSTEM
8. Ran the installation script TRCACREA.sql

If any errors occur, recompile the package TRCA$ and correct them. They will most likely be caused by permissions granted through roles by SYS rather than being granted explicitly as required.
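If the package is invalid, a direct recompile can be attempted first (a minimal sketch; the TRCA$ name is taken from the step above):

ALTER PACKAGE trca$ COMPILE;
ALTER PACKAGE trca$ COMPILE BODY;
SELECT object_name, status FROM user_objects WHERE object_name = 'TRCA$';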

 


Running Trace Analyzer

Run TraceAnalyzer

Assuming the name of the trace file is orabase_ora_1708.trc and that the trace file is located at /oracle/admin/orabase/udump

CONN system/<password>

ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

SELECT COUNT(*)
FROM dba_tables t, dba_indexes i
WHERE t.table_name = i.table_name;

ALTER SESSION SET sql_trace = FALSE;

Log onto an operating system session and navigate to the TraceAnalyzer directory:

c:> cd oracle\ora92\TraceAnalyzer

Start SQL*Plus

c:\oracle\ora92\TraceAnalyzer> sqlplus system/<pwd>@<service_name>

Run Trace Analysis

@TRCANLZR.sql UDUMP orabase_ora_1708.trc

Exit SQL*Plus

The trace analysis will be located in the TraceAnalyzer directory with the name TRCANLZR_orabase_ora_1708.LOG.

Run TKPROF on the same trace file for comparison.

Oracle 10g Linux TNS-12546 error

Reply from eltorio on 3/14/2005 7:08:00 AM

I've got an answer which is working, for info: the problem was with /var/tmp/.oracle. This directory had root.root as owner on one Linux box and oracle.dba on the working Linux box! Why, I don't know, but I changed it:

chown -R oracle.dba /var/tmp/.oracle


bootinfo -K (AIX: reports whether the kernel is running in 32- or 64-bit mode)

Commands for Solaris hardware details:
/usr/platform/`uname -i`/sbin/prtdiag
prtconf | grep Mem
isainfo -v

/etc/fstab

Queries to find session details

select * from v$sql
where hash_value = (select s.sql_hash_value
                    from v$process p, v$session s
                    where s.paddr = p.addr
                    and p.spid = 11270);

select sid, name, value
from v$statname n, v$sesstat s
where n.statistic# = s.statistic#
and name like 'session%memory%'
order by 3 asc;

select s.username, s.status, s.sid, s.serial#, p.spid, s.machine, s.process
from v$session s, v$process p
where p.spid = 17883
and s.paddr = p.addr;

SELECT units
FROM v$sql, v$session_longops
WHERE sql_address = address
AND sql_hash_value = hash_value
ORDER BY address, hash_value, child_number;

LGWR & DBWR
===========
These two processes are more I/O bound, but when the O.S. needs patches or is misbehaving, they "spin" (wait) until the I/O operation completes. The spinning is a CPU operation, so slowness or failures in the async I/O operations show themselves like this.

You control the dbwr by setting either the db_writer_processes or dbwr_io_slaves parameter in your parameter file. You should generally set db_writer_processes to a value less than or equal to the number of CPUs on your server. Setting this parameter lower, or to its default value of 1, when you are experiencing CPU spikes may help prevent CPU performance problems from occurring. If setting this parameter lower does ease the contention on your processors, but you take an overall performance hit after lowering it, you may need to add CPU to your server before increasing this parameter back to the way you had it. In addition, having async I/O enabled with different combinations of these parameters can also cause performance problems and CPU spikes. See the following note for more information about db_writer_processes and dbwr_io_slaves and how they relate to async I/O:

- <Note.97291.1> DB_WRITER_PROCESSES or DBWR_IO_SLAVES?
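As a quick check before changing anything, the current settings can be inspected from SQL*Plus (a sketch; db_writer_processes is a static parameter, so a change in the parameter file only takes effect after an instance restart):

SQL> show parameter db_writer_processes
SQL> show parameter dbwr_io_slaves

-- in init.ora, for example (requires restart):
db_writer_processes = 2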

If LGWR appears to be intermittently taking up 100% CPU, you may be hitting the issue discussed in the following bug:

- <Bug:2656965> LGWR SPINS AND CONSUMES ALL CPU AND INSTANCE HANGS

The workaround to prevent lgwr from spinning is to set the following hidden parameter in your parameter file:

_lgwr_async_io=false

This parameter turns off async I/O for lgwr but leaves it intact for the rest of the database server.

Jobs, also known as SNPn
=======================
The snp processes do the automatic refresh of materialized views (snapshots), which can be very CPU consuming. It is best to see what job is being executed in DBA_JOBS_RUNNING when CPU utilization is on the rise. Even on their own, they consume a fair amount of CPU because they are in an infinite loop querying the job queue. Some system statistics can be very distorted when they are enabled.

<Bug:1286684> CPU_USED_BY_THIS_SESSION IS FALSE WHEN JOB_QUEUE_PROCESSES IS SET

If you plan to use resource manager in 8.1.7:<bug:1319202> RESOURCE MANAGER PLAN DOES NOT WORK FOR THE JOB MECHANYSM
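A quick way to see which jobs are running when CPU climbs is to join DBA_JOBS_RUNNING to the session views (a minimal sketch, assuming access to the DBA and v$ views):

select r.sid, s.serial#, r.job, j.what
from dba_jobs_running r, dba_jobs j, v$session s
where r.job = j.job
and r.sid = s.sid;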

Advanced Queuing, also known as AQ, QMN
======================================
The AQ processes send and receive messages mostly through tables. If they are using too much CPU, it is because of the queries over those tables or some bug.

<Bug:1462218> QMN PROCESSES QUERYING DEF$_AQCALL USING LARGE AMOUNTS OF CPU when combined with replication.


<Bug:1559103> QMN PROCESS USES A LOT OF CPU CYCLES

The best is to keep the version and patches up to date.

An Oracle (user) process
-----------------------------------------

Large queries, procedure compilation or execution, space management and sorting are examples of operations with very high CPU usage. Besides the UNIX or NT way to find a CPU-intensive process, Oracle has its own statistics. The statistic is calculated by requesting the CPU clock at different time intervals of Oracle processing and incrementing the statistic# with the difference:

1  select name from v$statname
2* where statistic#=12
SQL> /

NAME
---------------------------------
CPU used by this session

"CPU used by this session" statistic is given in 1/100ths of a second. Eg: a value of 22 mean 0.22 seconds in 8i.

Other statistics can be found via the CONSUMED_CPU_TIME column of the view V$RSRC_CONSUMER_GROUP in Oracle9i. It differs a little from "CPU used by this session" (see <Note:215848.1>). Also, do not confuse this time with the timing done in the sql_trace (10046 event), since some of those timings are in microseconds (see <Note:39817.1>).

The following query can give a good idea of what sessions are doing and how much CPU they have consumed:

select ss.sid, se.command, ss.value CPU, se.username, se.program
from v$sesstat ss, v$session se
where ss.statistic# in (select statistic# from v$statname
                        where name = 'CPU used by this session')
and se.sid = ss.sid
and ss.sid > 6
order by ss.sid
/

For the values of command please look at the definition of V$session in the reference manual.

To find out what sql the problem session(s) are executing, run the following query:

select s.sid, event, wait_time, w.seq#, q.sql_text
from v$session_wait w, v$session s, v$process p, v$sqlarea q
where s.paddr = p.addr
and s.sid = &p
and s.sql_address = q.address;

To check if your Oracle Binary is 32 bit or 64 bit

SELECT Length(addr)*4 || '-bits' word_length FROM v$process WHERE ROWNUM =1;

Alter database datafile '/oradata1/CDOi1/data/users01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/sysaux01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/undotbs01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/system01.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/CDO_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/cdo_is.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFMETA_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFUSER_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_TS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/SFWEB_IS.dbf' autoextend off;
Alter database datafile '/oradata1/CDOi1/data/ULOG_TS.dbf' autoextend off;
Alter database datafile '/oracle/CDOi1/data/users02.dbf' autoextend off;


Subject: MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013 from IE when KeepAlive used

Doc ID: Note:269980.1  Type: PROBLEM  Last Revision Date: 08-FEB-2007  Status: ARCHIVED

PURPOSE

-------

Identify intermittent HTTP-500 errors caused by possible Microsoft Internet

Explorer bug. The information in this article applies to releases of:

- Oracle Containers for J2EE (OC4J)

- Oracle Application Server 10g (9.0.4.x)

- Oracle9iAS Release 2 (9.0.3.x)

- Oracle9iAS Release 2 (9.0.2.x)

Scope

-----

This note may apply if you have recently applied Microsoft Internet Explorer

browser patches.

Symptoms

--------

- You are seeing the following possible sequences of MOD_OC4J errors in the

Oracle HTTP Server error_log file

Unix: $ORACLE_HOME/Apache/Apache/logs/error_log

Windows: %ORACLE_HOME%\Apache\Apache\logs\error_log


(a) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(b) MOD_OC4J_0015, MOD_OC4J_0078,

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,

MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013, MOD_OC4J_0207

(c) MOD_OC4J_0145, MOD_OC4J_0119, MOD_OC4J_0013,

MOD_OC4J_0080, MOD_OC4J_0058, MOD_OC4J_0035

(d) MOD_OC4J_0121, MOD_OC4J_0013, MOD_OC4J_0080, MOD_OC4J_0058

The above list is not definitive and other sequences may be possible.

The following is one example sequence as seen in a log file:

MOD_OC4J_0145: There is no oc4j process (for destination: home)

available to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request.

MOD_OC4J_0145: There is no oc4j process (for destination: home) available

to service request.

MOD_OC4J_0119: Failed to get an oc4j process for destination: home.

MOD_OC4J_0013: Failed to call destination: home's service() to service

the request.


MOD_OC4J_0207: In internal process table, failed to find an available

oc4j process for destination: home

Changes

-------

- The problem may be introduced by applying the following Microsoft patches:

o Microsoft 832894 security update

(MS04-004: Cumulative security update for Internet Explorer)

or

o Microsoft 821814 hotfix

- It may be seen only with certain browsers such as Internet Explorer

5.x and 6.x

- The client machines will have a wininet.dll with a version number of

6.0.2800.1405. To identify this:

Use Windows Explorer to locate the file at %WINNT%\system32\wininet.dll

-> Right click on the file

-> Select "Properties"

-> click on the "Version" tab.

(see http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

for further details)


Cause

-----

This Windows bug causes a change in behavior when HTTP POST requests are resubmitted, which can occur when the HTTP server terminates browser clients' open connections that have exceeded their allowed HTTP 1.1 "KeepAlive" idle time. In these cases the requests are resubmitted by the browser without the needed HTTP headers.

Fix

---

It is possible to address this issue by applying Microsoft patches to

the client systems where the browser is running.

As a more viable workaround it should be possible to disable the KeepAlive

timeout by restarting the HTTP Server component after making the following

configuration changes to httpd.conf

Unix: $ORACLE_HOME/Apache/Apache/conf/httpd.conf

Windows: %ORACLE_HOME%\Apache\Apache\conf\httpd.conf

1. Locate the KeepAlive directive in httpd.conf


KeepAlive On

2. Replace the KeepAlive directive in httpd.conf with

# vvv Oracle Note 269980.1 vvvvvvv

# KeepAlive On

KeepAlive Off

# ^^^ Oracle Note 269980.1 ^^^^^^^

3. If you are making this change manually, please run the following command to

propagate these changes into the central configuration repository.

Unix: $ORACLE_HOME/dcm/bin/dcmctl updateConfig -co ohs -v -d

Windows: %ORACLE_HOME%\dcm\bin\dcmctl updateConfig -co ohs -v -d

- This step is not needed if the changes are made via Enterprise Manager.

References

----------

http://support.microsoft.com/default.aspx?scid=kb;en-us;831167

Checked for relevancy 2/8/2007


I am having a problem exporting an Oracle database. The error I got is, "exporting operators, exporting referential integrity constraints, exporting triggers."

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00056: ORACLE error 6550 encountered

ORA-06550: line 1, column 26:

PLS-00201: identifier 'XDB.DBMS_XDBUTIL_INT' must be declared

ORA-06550: line 1, column 14:

PL/SQL: Statement ignored

EXP-00000: Export terminated unsuccessfully

Please tell me how I can solve this.

QUESTION POSED ON: 23 SEP 2004

QUESTION ANSWERED BY: Brian Peasland


First, verify that this package exists with the following query:

SELECT status,object_id,object_type,owner,object_name

FROM dba_objects

WHERE object_name = 'DBMS_XDBUTIL_INT';

If the status is INVALID, then recompile the package. If the package does not exist, then you will have to install it as follows:

SQL> connect sys/ AS SYSDBA

SQL> @?/rdbms/admin/prvtxdb.plb

SQL> exit

SELECT object_name, object_type, status
FROM user_objects
WHERE object_type LIKE 'JAVA%';

OFFLINE NORMAL performs a checkpoint for all data files in the tablespace. All of these data files must be online. You need not perform media recovery on this tablespace before bringing it back online. You must use this option if the database is in noarchivelog mode.

TEMPORARY performs a checkpoint for all online data files in the tablespace but does not ensure that all files can be written. Any offline files may require media recovery before you bring the tablespace back online.

IMMEDIATE does not ensure that tablespace files are available and does not perform a checkpoint. You must perform media recovery on the tablespace before bringing it back online.
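For example, the variants are issued as follows (a sketch; USERS is a placeholder tablespace name):

ALTER TABLESPACE users OFFLINE NORMAL;
ALTER TABLESPACE users OFFLINE TEMPORARY;
ALTER TABLESPACE users OFFLINE IMMEDIATE;
ALTER TABLESPACE users ONLINE;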

The OUTLN user is responsible for maintaining the stability of the plans for your queries with stored outlines.

The DBSNMP user is responsible for maintaining the performance stats from Enterprise Manager. You can also do this as the SYS user; however, connecting to the database as SYS is not recommended by Oracle.

AIX - find memory size:

prtconf

1. Log in as that db2 user:
   su - db2inst1
   bash
2. Go to the sqllib directory:
   cd sqllib

3. Stop the instance:

$ db2stop

4. Start the instance. As the instance owner on the host running db2, issue the following command:

$ db2start

Dataflow Error

set serveroutput on size 1000000

The range for this size is 2000 to 1000000.

From the documentation:

/* OPEN_CURSORS specifies the maximum number of open cursors (handles to private SQL areas) a session can have at once. You can use this parameter to prevent a session from opening an excessive number of cursors.

It is important to set the value of OPEN_CURSORS high enough to prevent your application from running out of open cursors. The number will vary from one application to another. Assuming that a session does not open the number of cursors specified by OPEN_CURSORS, there is no added overhead to setting this value higher than actually needed. */
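To see how close each session is to the limit, a query along these lines can be used (a minimal sketch, assuming SELECT access on the v$ views):

select s.sid, s.value opened_cursors, p.value open_cursors_limit
from v$sesstat s, v$statname n, v$parameter p
where n.name = 'opened cursors current'
and s.statistic# = n.statistic#
and p.name = 'open_cursors'
order by s.value desc;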

Werner

Billy Verreynne replied (Re: no of open cursor, posted Aug 26, 2007):

> how to resolve this if the no. of open cursors exceeds the value given in init.ora

The error is caused, in the vast majority of cases, by application code leaking cursors.

I.e. application code defining ref cursors, using ref cursors.. but never closing ref cursors.

I've in fact never seen this not to be the case.

The WORST thing you can do is increase that parameter, as that simply moves the wall a few metres further away... allowing yourself to run into it even faster.

The following SQL identifies SQL cursors with multiple cursor handles for that SQL by the same session. It is unusual for an application to have more than 2 or so cursor handles opened for the very same SQL. Typically one will see a "cursor leaking" application with 100's of open cursor handles for the very same SQL.

select c.sid, c.address, c.hash_value,
       COUNT(*) as "Cursor Copies"
from v$open_cursor c
group by c.sid, c.address, c.hash_value
having COUNT(*) > 2
order by 3 DESC
/

Once the application has been identified using V$SESSION, you can use V$SQLTEXT to identify the actual SQL statement of which the app creates so many handles.. and then trace and fix the problem in the application.
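For example, once a suspect hash value is known from V$OPEN_CURSOR, the full statement text can be pulled from V$SQLTEXT (a minimal sketch; 12345678 is a placeholder hash value):

select sql_text
from v$sqltext
where hash_value = 12345678
order by piece;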

Nagaraj, for performance tuning you may first start checking the following views/tables:

DBA_WAITERS
V$SESSION_LONGOPS
v$system_waits & v$system_events

If you have a statspack report generated, then you can have a look at the timed events. This is what I could find out from OTN and through Google.

Apparently sqlnet.ora (also known as the Profile) is a configuration file and contains the parameters that specify preferences for how a client or server uses Net8 (Oracle's network services functionality) features. The file is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows.

A little about Net8: Net8 establishes network sessions and transfers data between a client machine and a server or between two servers. It is located on each machine in the network and, once a network session is established, Net8 acts as a data courier for the client and the server.

Some other configuration files used by Net8 are :

1) Local Naming Configuration File (TNSNAMES.ORA)
2) Listener Configuration File (LISTENER.ORA)

3) Oracle Names Server Configuration File (NAMES.ORA) : The Oracle Names server configuration file (NAMES.ORA) contains the parameters that specify the location, domain

information, and optional configuration parameters for each Oracle Names server. NAMES.ORA is located in $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on

Windows NT.

4) Oracle Connection Manager Configuration File (CMAN.ORA) : The Connection Manager configuration file (CMAN.ORA) contains the parameters that specify preferences for using Oracle Connection Manager. CMAN.ORA is located at $ORACLE_HOME/network/admin on UNIX and ORACLE_HOME\network\admin on Windows NT.
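For reference, a minimal TNSNAMES.ORA entry has this shape (a sketch; the alias, host and service name are hypothetical):

ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )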


Restore database '/sybdata1/syb126/IQ/cso_ot/cso_ot.db'
from '/backup/sybase/ctsintcocso6/csoase/cso_ot'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_NEW.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02.iqtmp'

select * from sysiqfile
sp_iqstatus

stop_asiq

Restore database '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.db' from '/sybdata1/dump/cso_ot.dmp'
rename IQ_SYSTEM_MAIN to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iq'
rename IQ_SYSTEM_MAIN1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iq'
rename IQ_SYSTEM_MAIN2 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep03_new.iq'
rename IQ_SYSTEM_TEMP to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep_new.iqtmp'
rename IQ_SYSTEM_TEMP1 to '/sybdata1/syb126/IQ/cso_dataprep/cso_dataprep02_new.iqtmp'

A symbolic link is a pointer to another file or directory. It can be used just like the original file or directory. A symbolic link appears in a long listing (ls -l) with a reference to the original file/directory. A symbolic link, as opposed to a hard link, is required when linking from one filesystem to another and can be used within a filesystem as well.

To create a symbolic link, the syntax of the command is similar to a copy or move command:

existing file first, destination file second. For example, to link the directory

/export/space/common/archive to /archive for easy access, use:

ln -s /export/space/common/archive /archive

A hard link is a reference to a file or directory that appears just like a file or directory, not a link. Hard links only work within a filesystem. In other words, don't use hard links between mounted filesystems. A hard link is not a copy of the file but an additional reference to it; the file's data is only removed once every hard link to it has been deleted.

To create a hard link of the file /export/home/fred/stuff to /var/tmp/thing, use:

ln /export/home/fred/stuff /var/tmp/thing

The syntax for creating a hard link of a directory is the same. (Note: most UNIX filesystems do not actually permit hard links to directories, so this may fail.) To create a hard link of /var/www/html to /var/www/webroot, use:

ln /var/www/html /var/www/webroot

If you want to move all the objects to another tablespace, just do the following:

>spool <urpath>\objects_move.log

> select 'alter ' || segment_type || ' ' || segment_name || ' move tablespace XYZ;' from dba_segments where tablespace_name='RAKESH';

>spool off

The result of the query will be stored in the spool file objects_move.log.

>@<urpath>\objects_move.log

Now check the objects in xyz tablespace

SELECT SEGMENT_NAME FROM DBA_SEGMENTS WHERE tablespace_name='XYZ' ;

Rebuild the indexes and gather statistics for those objects.
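The rebuild and statistics steps can be generated the same way (a sketch; XYZ and RAKESH are the placeholder tablespace and schema used above):

> spool <urpath>\indexes_rebuild.log
> select 'alter index ' || index_name || ' rebuild tablespace XYZ;' from dba_indexes where owner = 'RAKESH';
> spool off
> @<urpath>\indexes_rebuild.log
> exec dbms_stats.gather_schema_stats(ownname => 'RAKESH');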


How to enable trace in Oracle

1. Enable trace at instance level

Put the following line in init.ora. It will enable trace for all sessions and the background processes:

sql_trace = TRUE

to disable trace:

sql_trace = FALSE

- or -

to enable tracing without restarting database run the following command in sqlplus

SQLPLUS> ALTER SYSTEM SET trace_enabled = TRUE;

to stop trace run:

SQLPLUS> ALTER SYSTEM SET trace_enabled = FALSE;

2. Enable trace at session level

to start trace:

ALTER SESSION SET sql_trace = TRUE;

to stop trace:

ALTER SESSION SET sql_trace = FALSE;

- or -

EXECUTE dbms_session.set_sql_trace (TRUE);
EXECUTE dbms_session.set_sql_trace (FALSE);

- or -

EXECUTE dbms_support.start_trace;
EXECUTE dbms_support.stop_trace;

3. Enable trace in another session

Find out SID and SERIAL# from v$session. For example:

SELECT * FROM v$session WHERE osuser = '<osuser>';

to start trace:


EXECUTE dbms_support.start_trace_in_session (SID, SERIAL#);

to stop trace:

EXECUTE dbms_support.stop_trace_in_session (SID, SERIAL#);

- or -

EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session (SID, SERIAL#, FALSE);

Using orapwd to Connect Remotely as SYSDBA
August 5, 2003
Don Burleson

The Oracle orapwd utility assists the DBA with granting SYSDBA and SYSOPER privileges to other users. By default, the user SYS is the only user that has these privileges. Creating a password file via orapwd enables remote users to connect with administrative privileges through SQL*Net. The SYSOPER privilege allows instance startup, shutdown, mount, and dismount. It allows the DBA to perform general database maintenance without viewing user data. The SYSDBA privilege is the same as connect internal was in prior versions. It provides the ability to do everything, unrestricted. If orapwd has not yet been executed, attempting to grant SYSDBA or SYSOPER privileges will result in the following error:

SQL> grant sysdba to scott;
ORA-01994: GRANT failed: cannot add users to public password file

 The following steps can be performed to grant other users these privileges:  

1.      Create the password file.  This is done by executing the following command:  

$ orapwd file=filename  password=password entries=max_users 

The filename is the name of the file that will hold the password information. The file location will default to the current directory unless the full path is specified. The contents are encrypted and are unreadable. The password required is the one for the SYS user of the database. The max_users is the number of database users that can be granted SYSDBA or SYSOPER. This parameter should be set to a higher value than the number of anticipated users to prevent having to delete and recreate the password file.
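For example (a sketch; the file location, password and entries value are placeholders):

$ orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=change_me entries=10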

2.      Edit the init.ora parameter remote_login_passwordfile. This parameter must be set to either SHARED or EXCLUSIVE. When set to SHARED, the password file can be used by multiple databases, yet only the SYS user is recognized. When set to EXCLUSIVE, the file can be used by only one database, yet multiple users can exist in the file. The parameter setting can be confirmed by:

SQL> show parameter password

NAME                          TYPE        VALUE
----------------------------- ----------- ----------
remote_login_passwordfile     string      EXCLUSIVE

 3.      Grant SYSDBA or SYSOPER to users.  When SYSDBA or SYSOPER privileges

are granted to a user, that user's name and privilege information are added to the password file.

SQL> grant sysdba to scott;

Grant succeeded.

4.      Confirm that the user is listed in the password file.  

SQL> select * from v$pwfile_users;

USERNAME                       SYSDBA SYSOPER
------------------------------ ------ -------
SYS                            TRUE   TRUE
SCOTT                          TRUE   FALSE

 

Now the user SCOTT can connect as SYSDBA. Administrative users can be connected and authenticated to a local or remote database by using the SQL*Plus connect command. They must connect using their username and password, and with the AS SYSDBA or AS SYSOPER clause:

SQL> connect scott/tiger as sysdba;
Connected.

 The DBA utilizes the orapwd utility to grant SYSDBA and SYSOPER privileges to other database users.  The SYS password should never be shared and should be highly classified.

 


Oracle 9i Automatic PGA Memory Management

With Oracle 9i a new method of tuning the PGA memory areas was introduced. Automatic PGA Memory Management may be used in place of setting the sort_area_size, sort_area_retained_size, sort_area_hash_size and other related memory management parameters that all Oracle DBA's are familiar with. Those parameters may however still be used. See the following for an interesting discussion on this topic:

The Snark Research Mechanism

The PGA memory management may now be controlled by just two parameters, if that's how you choose to set it up:

pga_aggregate_target
workarea_size_policy

Note that workarea_size_policy can be altered per database session, allowing manual memory management on a per-session basis if needed, e.g. a session is loading a large import file and a rather large sort_area_size is needed. A logon trigger could be used to set the workarea_size_policy for the account doing the import.
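Such a trigger might look like the following (a minimal sketch; the IMPORT_USER account and the 100 MB sort area are hypothetical, and the trigger must be created by a suitably privileged user):

create or replace trigger import_logon
after logon on database
begin
  if user = 'IMPORT_USER' then
    -- switch this session to manual workarea management
    execute immediate 'alter session set workarea_size_policy = manual';
    -- give the import session a large sort area (100 MB)
    execute immediate 'alter session set sort_area_size = 104857600';
  end if;
end;
/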

A session is normally allowed to use up to approximately 5% of the PGA memory available. This is controlled by the undocumented initialization parameter _smm_max_size. This value is specified in kilobytes. eg. a value of 1000 really means 1000k. As with all undocumented parameters, don't expect help from Oracle support with it, as you are not supposed to use it. If you experiment with it, do so on a test system.

Also note that Automatic PGA management can only be used for dedicated server sessions.

For more some good reading on Automatic PGA management, please see:

Oracle Documentation for Tuning PGA

The documentation contains some good guidelines for initial settings, and how to monitor and tune them as needed.


If your 9i database is currently using manual PGA management, there are views available to help you make a reasonable estimate for the setting.

If your database also has statspack statistics, then there is also historical information available to help you determine the setting.

An initial setting can be determined by simply monitoring the amount of PGA memory being used by the system as seen in v$pgastat, and by querying the v$pga_target_for_estimate view.

v$pgastat:

select *

from v$pgastat

order by lower(name)

/

NAME VALUE UNIT

---------------------------------------- ------------------ ------------

aggregate PGA auto target 8,294,400.00 bytes

aggregate PGA target parameter 25,165,824.00 bytes

bytes processed 24,929,280.00 bytes

cache hit percentage 86.31 percent

extra bytes read/written 3,953,664.00 bytes

global memory bound 1,257,472.00 bytes

maximum PGA allocated 26,661,888.00 bytes

maximum PGA used for auto workareas 172,032.00 bytes

maximum PGA used for manual workareas 525,312.00 bytes

over allocation count .00

PGA memory freed back to OS 6,750,208.00 bytes

total freeable PGA memory 65,536.00 bytes

total PGA allocated 23,957,504.00 bytes


total PGA inuse 15,283,200.00 bytes

total PGA used for auto workareas .00 bytes

total PGA used for manual workareas .00 bytes

16 rows selected.

The statistic "maximum PGA allocated" will display the maximum amount of PGA memory allocated during the life of the instance.

The statistic "maximum PGA used for auto workareas" and "maximum PGA used for manual workareas" will display the maximum amount of PGA memory used for each type of workarea during the life of the instance.
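Those high-water marks can be pulled directly (a minimal sketch):

select name, value
from v$pgastat
where name in ('maximum PGA allocated',
               'maximum PGA used for auto workareas',
               'maximum PGA used for manual workareas');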

v$pga_target_advice:

select *

from v$pga_target_advice

order by pga_target_for_estimate

/

PGA TARGET PGA TARGET ESTIMATED EXTRA ESTIMATED PGA ESTIMATED OVER

FOR EST FACTOR ADV BYTES PROCESSED BYTES RW CACHE HIT % ALLOC COUNT

---------------- ---------- --- ---------------- ---------------- ------------- --------------

12,582,912 .50 ON 17,250,304 0 100.00 3

18,874,368 .75 ON 17,250,304 0 100.00 3

25,165,824 1.00 ON 17,250,304 0 100.00 0

30,198,784 1.20 ON 17,250,304 0 100.00 0

35,231,744 1.40 ON 17,250,304 0 100.00 0


40,264,704 1.60 ON 17,250,304 0 100.00 0

45,297,664 1.80 ON 17,250,304 0 100.00 0

50,331,648 2.00 ON 17,250,304 0 100.00 0

75,497,472 3.00 ON 17,250,304 0 100.00 0

100,663,296 4.00 ON 17,250,304 0 100.00 0

150,994,944 6.00 ON 17,250,304 0 100.00 0

201,326,592 8.00 ON 17,250,304 0 100.00 0

12 rows selected.

Querying v$pga_target_advice can help you determine a good setting for pga_aggregate_target. As seen in the previous query, an 18M PGA setting would have caused Oracle to allocate more memory than specified on 3 occasions. With a 25M PGA, this would not have happened.

Keep in mind that pga_aggregate_target is not set in stone. It is used to help Oracle better manage PGA memory, but Oracle will exceed this setting if necessary.
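Since pga_aggregate_target is dynamic, a value chosen from the advice view can be applied on the fly (a sketch; 50M is only an illustrative value):

SQL> alter system set pga_aggregate_target = 50M;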

There are other views that are also useful for PGA memory management.

v$process:

This will show the maximum PGA usage per process:

select

max(pga_used_mem) max_pga_used_mem

, max(pga_alloc_mem) max_pga_alloc_mem

, max(pga_max_mem) max_pga_max_mem

from v$process

/

This displays the sum of all current PGA usage per process:

select

sum(pga_used_mem) sum_pga_used_mem

, sum(pga_alloc_mem) sum_pga_alloc_mem

, sum(pga_max_mem) sum_pga_max_mem

from v$process

/

Be sure to read the documentation referenced earlier, it contains an excellent explanation of Automatic PGA Memory Management.

Following are some already canned scripts that may be of use.

PGA Monitoring Scripts

These are the steps to identify the user who issued a "drop table" command in a database:

1.login into the db as sysdba

2. sql>show parameter audit_trail - - - >checks if the audit trail is turned on

if the output is :

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_trail                          string      DB

then go to step 3; else, to enable the audit trail:

(a) shutdown immediate
(b) edit init.ora in the location $ORACLE_HOME/admin/pfile to put the entry audit_trail=db
(c) create spfile from pfile;
(d) startup

3. truncate table aud$; - - - > to remove any audit trail data residing in the table

4. audit table; - - - > this starts auditing events pertaining to tables

5. select action_name,username,userhost,to_char(timestamp,'dd-mon-yyyy:hh24:mi:ss') from dba_audit_trail where action_name like '%DROP TABLE%'; - - - > this query gives you the username along with the userhost from where the 'username' is connected

CREATE DATABASE '/sybdata1/syb126/IQ/csoperf/csoperf.db'
iq path '/sybdata1/syb126/IQ/csoperf/csoperf01.iq' iq size 2000
message path '/sybdata1/syb126/IQ/csoperf/csoperf.iqmsg'
temporary path '/sybdata1/syb126/IQ/csoperf/csoperf.iqtmp' temporary size 1000
iq page size 65536

system
temp              1000MB
iq_system_main    2000MB
iq_system_main2   1000MB
iq_system_main3   5000MB
iq_system_msg

create dbspace IQ_SYSTEM_MAIN2 as 'E:\sybIQdbs\csoperf\csoperf02.iq' IQ STORE size 1000
go
create dbspace IQ_SYSTEM_MAIN3 as 'E:\sybIQdbs\csoperf\csoperf03.iq' IQ STORE size 1000
go

http://10.237.99.28:9090/applications.do

Can someone explain to me the difference between differential incremental and cumulative incremental backups please?

Oct 16 (1 day ago) Suraj

RE: Incremental RMAN Backups

A differential backup backs up ONLY the files that changed since the last FULL BACKUP. For

example, suppose you do a full backup on Sunday. On Monday you back up only the files that changed since Sunday, on Tuesday you back up only the files that changed since Sunday, and

so on until the next full backup.

Differential backups are quicker than full backups because so much less data is being backed up. But the amount of data being backed up grows with each differential backup until the next full backup. Differential backups are more flexible than full backups, but still unwieldy to do more than about once a day, especially as the next full backup approaches.

Incremental backups also back up only the changed data, but they only back up the data that has changed since the LAST BACKUP — be it a full or incremental backup. They are sometimes

called "differential incremental backups," while differential backups are sometimes called "cumulative incremental backups."

Suppose you do an incremental backup on Tuesday: you only back up the data that changed since the incremental backup on Monday. The result is a much smaller, faster backup.

Oct 16 (1 day ago) Arpan

Thanks for the response. While I do believe you were on the right track, I think you might have gotten some terms

mixed up. According to some documentation on otn:

There are two types of incremental backups:

1) Differential Incremental Backups: RMAN backs up all the blocks that have changed since the most recent incremental backup at level 1 or level 0. For example, in a differential level 1

backup, RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed

since the base level 0 backup.

2) Cumulative Incremental Backups: RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed

for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because

they duplicate the work done by previous backups at the same level.
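In RMAN syntax the two variants differ only by the CUMULATIVE keyword (a minimal sketch):

RMAN> backup incremental level 0 database;
RMAN> backup incremental level 1 database;             # differential (the default)
RMAN> backup incremental level 1 cumulative database;  # cumulative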

If you would like to read the entire document (its a short one) you can find it at this site:

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1005.htm

Suraj

RE: Incremental RMAN Backups

I tried to explain things in a very simple way. I am not able to find anything I am missing.

If yes, please let me know.

ORA-27154: post/wait create failed
ORA-27300: OS system dependent operation: semget failed with status: 28
ORA-27301: OS failure message: No space left on device

> "No space left on device" sounds quite clear for me. > Maybe the disk where you want to create the database is full. Another > point colud be insufficient swap space but I would expect another error > message for that.

Note that the error message is linked to semget. You seem to have run out of semaphores. You configure the maximum number of semaphores in /etc/system:

set semsys:seminfo_semmni=100 set semsys:seminfo_semmns=1024 set semsys:seminfo_semmsl=256
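Before raising these limits, current semaphore usage can be checked from the shell (a sketch; exact output varies by platform):

$ ipcs -s                # list the semaphore sets currently allocated
$ sysdef | grep -i sem   # Solaris: show the configured semaphore limits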