What is control file and datafile in Oracle?
In this article, we are going to learn what control files and datafiles are in Oracle. Control files: in the Oracle database environment, every database has its own separate control files. Control files are binary files that record the physical structure of the Oracle database. What is in the controlfile: the controlfile includes, among other things: the name of the database, the datafile and redo log…
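As a quick illustration of the above, the control file locations and the record sections stored inside the control file can be inspected from SQL*Plus with standard dictionary views (a sketch; verify column availability on your release):

```sql
-- Location of each control file copy
SELECT name FROM v$controlfile;

-- Record sections kept inside the control file
-- (datafiles, redo logs, backup records, ...)
SELECT type, records_total, records_used
FROM   v$controlfile_record_section
ORDER  BY type;
```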
RMAN QUICK LEARN – FOR BEGINNERS
Oracle Recovery Manager (RMAN) is Oracle's preferred tool for taking backups and for restoring and recovering the database. You must develop a proper backup strategy that provides maximum flexibility to restore and recover the database from any kind of failure. To develop one, first decide on your requirements, then consider the possible backup options. The recommended backup strategy must include backups of all datafiles, archived logs, and the spfile, plus controlfile autobackup. To take online (hot) backups, the database must be in archivelog mode; you can, however, use RMAN to take an offline (cold) backup.

Note: Selecting the backup storage media is also an important consideration. If you store your backup on disk, it is recommended to keep an extra copy of the backup on another server.

CREATING THE RECOVERY CATALOG: Oracle recommends using a separate database for the RMAN catalog. The steps below assume that database is already created:

1. Create a tablespace for RMAN:
SQL> create tablespace RTBS datafile 'D:\ORACLE\ORADATA\RTBS01.DBF' size 200M extent management local uniform size 5M;

2. Create the RMAN catalog user:
SQL> create user CATALOG identified by CATALOG default tablespace RTBS quota unlimited on RTBS;

3. Grant privileges to the catalog user:
SQL> grant connect, resource to CATALOG;
SQL> grant recovery_catalog_owner to CATALOG;

4. Connect to the catalog database and create the catalog:
% rman catalog RMAN_USER/RMAN_PASSWORD@cat_db log=create_catalog.log
RMAN> create catalog tablespace RTBS;
RMAN> exit;

5. Connect to the target database and to the catalog database:
% rman target sys/oracle@target_db
RMAN> connect catalog RMAN_USER/RMAN_PASSWORD@cat_db

6. Connected to both databases, register the target database:
RMAN> register database;

The following list gives an overview of the commands and their uses in RMAN.
For detailed descriptions, search the related topics in separate posts on my blog: http://shahiddba.blogspot.com/

INITIALIZATION PARAMETERS: Some RMAN-related database initialization parameters:
control_file_record_keep_time: time in days to retain records in the control file (default: 7 days).
large_pool_size: memory pool used by RMAN in backup/restore operations.
shared_pool_size: memory pool used by RMAN in backup/restore operations (only if the large pool is not configured).

CONNECTING RMAN:
export ORACLE_SID=<SID>   --Linux platform
set ORACLE_SID=<SID>      --Windows platform

To connect to a target database, execute RMAN, then:
RMAN> connect target /
RMAN> connect target username/password
RMAN> connect target username/password@target_db

To connect to a catalog database:
RMAN> connect catalog username/password
RMAN> connect catalog username/password@catalog_db

To connect directly from the command prompt:
C:\>rman target /   --target with nocatalog
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: RMAN (DBID=63198018)
using target database controlfile instead of recovery catalog

C:\>rman target sys/oracle@orcl3 catalog catalog/catalog@rman   --with catalog
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: SADHAN (DBID=63198018)
connected to recovery catalog database

RMAN PARAMETERS: RMAN parameters can be set to a specified value and remain persistent. By default this information is stored in the target database's controlfile. Alternatively, you can store this backup information in the recovery catalog.
If you connect without a catalog, or only to the target database, your repository is in the controlfile.

SHOW/CONFIGURE – the SHOW command displays the current values of set parameters; the CONFIGURE command sets a new value for a parameter.

RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> show datafile backup copies;
RMAN> show default device type;
RMAN> show device type;
RMAN> show channel;
RMAN> show retention policy;

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

CONFIGURE channel device type disk format 'D:\oraback\%U';

You can set many parameters by configuring them first and making them persistent, or you can override them (discarding any persistent configuration) by specifying them explicitly in your RMAN backup command.

Setting                | Default              | Recommended
Controlfile autobackup | off                  | on
Retention policy       | to redundancy 1      | to recovery window of 30 days
Device type            | disk parallelism 1 ... | disk|sbt parallelism 2 ...
Default device type    | to disk              | to disk
Backup optimization    | off                  | off
Channel device type    | none                 | disk parms='...'
Maxsetsize             | unlimited            | depends on your database size

Appending CLEAR or NONE at the end of a configuration parameter command resets the configuration to the default or to no setting:
CONFIGURE RETENTION POLICY CLEAR;
CONFIGURE RETENTION POLICY NONE;

Overriding the configured retention policy:
change backupset 421 keep forever nologs;
change datafilecopy 'D:\oracle\oradata\users01.dbf' keep until 'SYSDATE+30';

RMAN BACKUP SCRIPTS: Backing up the database can be done with just a few commands, or with numerous options.
RMAN> backup database;
RMAN> backup as compressed backupset database;
RMAN> backup INCREMENTAL level=0 database;
RMAN> backup database TAG=Weekly_Sadhan;
RMAN> backup database MAXSETSIZE=2g;
RMAN> backup TABLESPACE orafin;

You may also combine options in a single backup, and run a multi-channel backup:
RMAN> backup INCREMENTAL level=1 as COMPRESSED backupset database FORMAT 'H:\ORABACK\%U' maxsetsize 2G;

backup full datafile x,y,z incremental level x include current controlfile archivelog all delete input copies x filesperset x maxsetsize xM diskratio x format = 'D:\oraback\%U';

run {
allocate channel d1 type disk FORMAT "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
allocate channel d2 type disk FORMAT "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
allocate channel d3 type disk FORMAT "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
backup incremental level 0 tag Sadhan_Full_DBbackup filesperset 8 FORMAT "H:\oraback\Weekly_%T_FULL_%d-%s_%p.db" DATABASE;
SQL 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup archivelog all tag Sadhan_Full_Archiveback filesperset 8 format "H:\oraback\Weekly_%T_FULL_%d-%s_%p.arch";
release channel d1;
release channel d2;
release channel d3;
}

The COPY command and some copy scripts:
copy datafile 'D:\oracle\oradata\users01.dbf' TO 'H:\oraback\users01.dbf' tag=DF3, datafile 4 TO 'H:\oraback\users04.dbf' tag=DF4, archivelog 'arch_1060.arch' TO 'arch_1060.bak' tag=CP2ARCH16;

run {
allocate channel c1 type disk;
copy datafile 'd:\oracle\oradata\users01.dbf' TO 'h:\oraback\users01.dbf' tag=DF3, archivelog 'arch_1060.arch' TO 'arch_1060.bak' tag=CP2ARCH16;
}

COMPRESSED – compresses the backup as it is taken.
INCREMENTAL – backs up only the changes since the last full (level 0) backup.
FORMAT – allows you to specify an alternate location.
TAG – lets you name your backup.
MAXSETSIZE – limits the backup piece size.
TABLESPACE – backs up only the named tablespace.

RMAN MAINTENANCE: You can review your RMAN backups using the LIST command.
You can use LIST with options to customize what RMAN returns.
RMAN> list backup SUMMARY;
RMAN> list ARCHIVELOG ALL;
RMAN> list backup COMPLETED before '02-FEB-09';
RMAN> list backup of database TAG Weekly_sadhan;
RMAN> list backup of datafile "D:\oracle\oradata\sadhan\users01.dbf" SUMMARY;
RMAN> list copy of tablespace "SYSTEM";

You can test your backups using the VALIDATE command:
RMAN> restore database validate;

You can ask RMAN to report backup information:
RMAN> report schema;
RMAN> report need backup;
RMAN> report need backup incremental 3 database;
RMAN> report need backup days 3;
RMAN> report need backup days 3 tablespace system;
RMAN> report need backup redundancy 2;
RMAN> report need backup recovery window of 3 days;
RMAN> report unrecoverable;
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> delete noprompt obsolete;
RMAN> crosscheck;
RMAN> crosscheck backup;
RMAN> crosscheck backupset of database;
RMAN> crosscheck copy;
RMAN> delete expired;   --use this after the crosscheck command
RMAN> delete noprompt expired backup of tablespace users;

To delete backups and copies:
RMAN> delete backupset 104;
RMAN> delete datafilecopy 'D:\oracle\oradata\users01.dbf';

To change the status of backups or copies to unavailable and back to available:
RMAN> change backup of controlfile unavailable;
RMAN> change backup of controlfile available;
RMAN> change datafilecopy 'H:\oraback\users01.dbf' unavailable;
RMAN> change copy of archivelog sequence between 230 and 240 unavailable;

To catalog or uncatalog in the RMAN repository copies of datafiles, archivelogs, and controlfiles made by users with OS commands:
RMAN> catalog datafilecopy 'F:\oraback\sample01.dbf';
RMAN> catalog archivelog 'E:\oracle\arch_404.arc', 'F:\oracle\arch_410.arc';
RMAN> catalog controlfilecopy 'H:\oracle\oradata\controlfile.ctl';
RMAN> change datafilecopy 'F:\oraback\sample01.dbf' uncatalog;
RMAN> change archivelog 'E:\oracle\arch_404.arc', 'E:\oracle\arch_410.arc' uncatalog;
RMAN> change controlfilecopy 'H:\oracle\oradata\controlfile.ctl' uncatalog;

RESTORING & RECOVERING WITH RMAN BACKUPS: You can easily perform restore and recover operations with RMAN. Depending on the situation, you can select either a complete or an incomplete recovery. Complete recovery applies all of the redo or archived logs, whereas incomplete recovery does not. In an incomplete recovery, since you are not recovering the database to the most current time, you must tell Oracle when to terminate recovery.

Note: You must open your database with the RESETLOGS option after each incomplete recovery. The resetlogs operation starts the database with a new stream of log sequence numbers, beginning with sequence 1.

DATAFILE – restores the specified datafile.
CONTROLFILE – restores the controlfile from backup; the database must be in NOMOUNT.
ARCHIVELOG (or ARCHIVELOG from ... until ...) – restores archived logs to the location from which they were backed up.
TABLESPACE – restores all the datafiles associated with the specified tablespace; this can be done with the database open.

RECOVER TABLESPACE/DATAFILE: If a non-system tablespace or datafile is missing or corrupted, recovery can be performed while the database remains open.

STARTUP; (you will get ORA-1157, ORA-1110, and the name of the missing datafile; the database will remain mounted)

Use OS commands to restore the missing or corrupted datafile to its original location, e.g.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf

SQL> ALTER DATABASE DATAFILE 3 OFFLINE; (the tablespace cannot be used because the database is not open)
SQL> ALTER DATABASE OPEN;
SQL> RECOVER DATAFILE 3;
SQL> ALTER TABLESPACE <tablespace_name> ONLINE; (alternatively, you can use the ALTER DATABASE command to bring the datafile online)

If the problem is only a single file, restore just that particular file; otherwise restore and recover the whole tablespace.
The database can be in use while recovering the whole tablespace.
run {
sql 'alter tablespace users offline';
allocate channel c1 device type disk|sbt;
restore tablespace users;
recover tablespace users;
sql 'alter tablespace users online';
}

If the problem is in a SYSTEM datafile or tablespace, you cannot open the database, and you need sufficient downtime to recover it. If the problem is in more than one file, it is better to recover the whole tablespace or database.
startup mount
run {
allocate channel c1 device type disk|sbt;
allocate channel c2 device type disk|sbt;
restore database check readonly;
recover database;
alter database open;
}

DATABASE DISASTER RECOVERY: Disaster recovery plans start with risk assessment. We need to identify all the risks our data center can face, such as: all datafiles are lost; all copies of the current controlfile are lost; all members of an online redo log group are lost; loss of the OS; loss of a disk drive; complete loss of the server; and so on. Our disaster plan should give a brief description of recovery from each of these disasters. Planning disaster recovery in advance is essential for a DBA to avoid any worrying or panic situation.

The method below is used for complete disaster recovery, on the same or a different server:
set dbid=xxxxxxx
startup nomount;
run {
allocate channel c1 device type disk|sbt;
restore spfile to 'some_location' from autobackup;
}
shutdown immediate;
startup nomount;
run {
allocate channel c1 device type disk|sbt;
restore controlfile from autobackup;
alter database mount;
}
RMAN> restore database;
RMAN> recover database;   --not needed in the case of a cold backup
RMAN> alter database open resetlogs;

DATABASE POINT-IN-TIME RECOVERY: DBPITR enables you to recover a database to some time in the past.
For example, if a logical error occurred today at 10:00 AM, DBPITR enables you to restore the entire database to the state it was in at 9:59 AM, thereby removing the effect of the error, but also removing all other valid updates that occurred since 9:59 AM. DBPITR requires that the database is in archivelog mode, that a backup of the database created before the point in time to which you wish to recover exists, and that all the archived and online logs created from the time of the backup until that point in time exist as well.

RMAN> shutdown abort;
RMAN> startup mount;
RMAN> run {
set until time "to_date('12-May-2012 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
restore database;
recover database;
}
RMAN> alter database open resetlogs;

Caution: It is highly recommended that you back up your controlfile and online redo log files before invoking DBPITR, so that you can recover back to the current point in time in case of any issue. Oracle automatically stops recovery when the time specified in the RECOVER command has been reached and responds with a "recovery successful" message.

SCN/CHANGE-BASED RECOVERY: Change-based recovery allows the DBA to recover to a desired System Change Number (SCN). This situation is most likely to occur when archived log files or redo log files needed for recovery are lost or damaged and cannot be restored.

Steps:
– If the database is still open, shut it down using the SHUTDOWN command with the ABORT option.
– Make a full backup of the database, including all datafiles, a control file, and the parameter files, in case an error is made during the recovery.
– Restore backups of all datafiles. Make sure the backups were taken before the point in time you are going to recover to. Any datafiles added after that point in time should not be restored; they will not be used in the recovery and will have to be recreated after recovery is complete.
Any data in datafiles created after the point of recovery will be lost.
– Make sure read-only tablespaces are offline before you start recovery, so recovery does not try to update the datafile headers.

RMAN> shutdown abort;
RMAN> startup mount;
RMAN> run {
set until SCN 1048438;
restore database;
recover database;
alter database open resetlogs;
}

RMAN> restore database until sequence 9923;   --archived log sequence number
RMAN> recover database until sequence 9923;   --archived log sequence number
RMAN> alter database open resetlogs;

Note: Query V$LOG_HISTORY and check the alert.log to find the SCN of an event and recover to a prior SCN.

IMPORTANT VIEWS:
Views to consult in the target database:
v$backup_device: device types accepted for backups by RMAN.
v$archived_log: archived redo logs.
v$backup_corruption: corrupted blocks in backups.
v$copy_corruption: corrupted blocks in copies.
v$database_block_corruption: corrupted blocks in the database after the last backup.
v$backup_datafile: backups of datafiles.
v$backup_redolog: backups of redo logs.
v$backup_set: backup sets made.
v$backup_piece: pieces of the backup sets made.
v$session_longops: long operations running at this time.

Views to consult in the RMAN catalog database:
rc_database: information about the target database.
rc_datafile: information about the datafiles of the target database.
rc_tablespace: information about the tablespaces of the target database.
rc_stored_script: stored scripts.
rc_stored_script_line: source of stored scripts.
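For the disaster-recovery and SCN-based scenarios above, two values are worth capturing in advance: the DBID (needed for `set dbid` when restoring from autobackup on a scratch server) and the SCN to recover to. A hedged sketch using standard views (TIMESTAMP_TO_SCN is available from 10g onward, and only within the retention of the SCN-to-time mapping):

```sql
-- Record the DBID with your backup logs; it is required to restore
-- the spfile/controlfile from autobackup when the database is lost.
SELECT dbid, name, created FROM v$database;

-- Current SCN (10g+)
SELECT current_scn FROM v$database;

-- Approximate SCN for a wall-clock time, e.g. just before a logical error
SELECT timestamp_to_scn(TO_TIMESTAMP('12-MAY-2012 09:59:00',
                                     'DD-MON-YYYY HH24:MI:SS')) AS scn
FROM   dual;

-- SCN ranges covered by each archived log
SELECT sequence#, first_change#, first_time
FROM   v$log_history
ORDER  BY sequence#;
```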
For more information on RMAN, see the posts below:
Different RMAN Recovery Scenarios – 24-Feb-13
Synchronize the Test Database with RMAN Cold Backup – 16-Feb-13
Plan B: Renovate Old Apps Server Hardware – 27-Jan-13
Plan A: Renovate Old Apps Server Hardware – 25-Jan-13
Planning to Renovate Old Apps Server Hardware – 24-Jan-13
Duplicate Database with RMAN without Connecting to Target Database – 23-Jan-13
Different RMAN Errors and Their Solutions – 24-Nov-12
Block Media Recovery Using RMAN – 4-Nov-12
New Features in RMAN since Oracle9i/10g – 14-Oct-12
A Shell Script to Take RMAN Cold/Hot and Export Backups – 7-Oct-12
Automate RMAN Backup in a Windows Environment – 3-Sep-12
How to Take a Cold Backup of an Oracle Database – 26-Aug-12
Deleting RMAN Backups – 22-Aug-12
Script: RMAN Hot Backup in a Linux Environment – 1-Aug-12
How RMAN Behaves with the Allocated Channel during Backup – 31-Jul-12
RMAN Important Commands Description – 7-Jul-12
Script: Crontab Use for RMAN Backup – 2-Jun-12
RMAN Report and Show Commands – 16-May-12
RMAN Backup on a Windows Server through DBMS_SCHEDULER – 15-May-12
Format Parameter of RMAN Backup – 12-May-12
RMAN Backup with a Stored Script – 12-May-12
RMAN: Disaster Recovery from Scratch – 6-May-12
RMAN Change-Based (SCN) Recovery – 30-Apr-12
RMAN Time-Based Recovery – 30-Apr-12
RMAN Cold Backup Restore – 23-Apr-12
RMAN Backup on Network Storage – 22-Apr-12
RMAN Catalog Backup Script – 18-Apr-12
Points to Consider with RMAN Backup Scripts – 11-Apr-12
Monitoring RMAN through V$ Views – 7-Apr-12
RMAN Weekly and Daily Backup Scripts – 25-Mar-12
Unregister a Database from RMAN – 6-Mar-12
Oracle DBA Cheat Sheet
Tablespace & Datafile Details
=============================
set lines 200 pages 200
col tablespace_name for a35
col file_name for a70
select file_id, tablespace_name, file_name, bytes/1024/1024 MB, status from dba_data_files;

Table Analyze Details
=====================
set lines 200 pages 200
col owner for a30
col table_name for a30
col tablespace_name for a35
select owner, table_name, tablespace_name, NUM_ROWS, LAST_ANALYZED from dba_tables where owner='&TableOwner' and table_name='&TableName';

Session Details
===============
set lines 200 pages 200
col MACHINE for a25
select inst_id, sid, serial#, username, program, machine, status from gv$session where username not in ('SYS','SYSTEM','DBSNMP') and username is not null order by 1;

select inst_id, username, count(*) "No_of_Sessions" from gv$session where username not in ('SYS','SYSTEM','DBSNMP') and username is not null and status='INACTIVE' group by inst_id, username order by 3 desc;

select inst_id, username, program, machine, status from gv$session where machine like '%&MachineName%' and username is not null order by 1;

Parameter Value
===============
set lines 200 pages 200
col name for a35
col value for a70
select inst_id, name, value from gv$parameter where name like '%&Parameter%' order by inst_id;

User Details
============
set lines 200 pages 200
col username for a30
col profile for a30
select username, account_status, lock_date, expiry_date, profile from dba_users where username like '%&username%' order by username;
List and Remove Files and Directories
=====================================
ls | grep -i cdmp_20110224 | xargs rm -r

Tablespace Usage (1)
====================
set pages 999
set lines 132
SELECT * FROM (
  SELECT c.tablespace_name,
         ROUND(a.bytes/1048576,2) MB_Allocated,
         ROUND(b.bytes/1048576,2) MB_Free,
         ROUND((a.bytes-b.bytes)/1048576,2) MB_Used,
         ROUND(b.bytes/a.bytes * 100,2) tot_Pct_Free,
         ROUND((a.bytes-b.bytes)/a.bytes,2) * 100 tot_Pct_Used
  FROM ( SELECT tablespace_name, SUM(a.bytes) bytes
         FROM sys.DBA_DATA_FILES a
         GROUP BY tablespace_name ) a,
       ( SELECT a.tablespace_name, NVL(SUM(b.bytes),0) bytes
         FROM sys.DBA_DATA_FILES a, sys.DBA_FREE_SPACE b
         WHERE a.tablespace_name = b.tablespace_name (+)
           AND a.file_id = b.file_id (+)
         GROUP BY a.tablespace_name ) b,
       sys.DBA_TABLESPACES c
  WHERE a.tablespace_name = b.tablespace_name(+)
    AND a.tablespace_name = c.tablespace_name )
WHERE tot_Pct_Used >= 0
ORDER BY tablespace_name;
Tablespace usage (2) ==================== select d.tablespace_name, d.file_name, d.bytes/1024/1024 Alloc_MB, f.bytes/1024/1024 Free_MB from dba_data_files d, dba_free_space f where d.file_id=f.file_id order by 1;
select d.tablespace_name, sum(d.bytes/1024/1024) Alloc_MB, sum(f.bytes/1024/1024) Free_MB from dba_data_files d, dba_free_space f where d.file_id=f.file_id group by d.tablespace_name order by 1;
Datafiles Added to a Tablespace, by Date
========================================
select v.file#, to_char(v.CREATION_TIME, 'dd-mon-yy hh24:mi:ss') Creation_Date, d.file_name, d.bytes/1024/1024 MB
from dba_data_files d, v$datafile v
where d.tablespace_name='XXGTM_DAT' and d.file_id = v.file#;

Added in the Last 72 Hours
==========================
select v.file#, to_char(v.CREATION_TIME, 'dd-mon-yy hh24:mi:ss') Creation_Date, d.file_name, d.bytes/1024/1024 MB
from dba_data_files d, v$datafile v
where d.tablespace_name='XXGTM_DAT' and d.file_id = v.file#
and v.creation_time > sysdate - 3;
Monitor SQL Execution History (Toad)
====================================
set lines 200 pages 200
select ss.snap_id, ss.instance_number node, begin_interval_time, sql_id, plan_hash_value,
       nvl(executions_delta,0) execs, rows_processed_total Total_rows,
       (elapsed_time_delta/decode(nvl(executions_delta,0),0,1,executions_delta))/1000000 avg_etime,
       (buffer_gets_delta/decode(nvl(buffer_gets_delta,0),0,1,executions_delta)) avg_lio,
       (DISK_READS_DELTA/decode(nvl(DISK_READS_DELTA,0),0,1,executions_delta)) avg_pio,
       SQL_PROFILE
from DBA_HIST_SQLSTAT S, DBA_HIST_SNAPSHOT SS
where sql_id = '9vv8244bcq529'
and ss.snap_id = S.snap_id
and ss.instance_number = S.instance_number
and executions_delta > 0
order by 1, 2, 3;

Check SQL Plan
==============
select * from table(DBMS_XPLAN.DISPLAY_CURSOR('9vv8244bcq529'));
OHS Version
===========
export ORACLE_HOME=/apps/envname/product/fmw
LD_LIBRARY_PATH=$ORACLE_HOME/ohs/lib:$ORACLE_HOME/oracle_common/lib:$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH
cd /apps/envname/product/fmw/ohs/bin
/apps/envname/product/fmw/ohs/bin > ./httpd -version
Find Duplicate Rows in a Table
==============================
set lines 1000
col ACTIVATION_ID for a50
col USER_ID for a30
SELECT ACTIVATION_ID, LFORM_ID, USER_ID
FROM DBA_BTDEL1.LMS_LFORM_ACTIVATION
GROUP BY ACTIVATION_ID, LFORM_ID, USER_ID
HAVING count(*) > 1;

Partitioned Tables in the Database
==================================
set lines 200 pages 200
col owner for a30
col table_name for a30
col partition_name for a30
select t.owner, t.table_name, s.PARTITION_NAME, s.bytes/1024/1024 MB
from dba_tables t, dba_segments s
where t.partitioned = 'YES' and t.owner not in ('SYS','SYSTEM') and t.table_name = s.segment_name
order by 2, 4;

Who Is Using My SYSTEM Tablespace
=================================
select owner, segment_type, sum(bytes/1024/1024) MB, count(*), tablespace_name
from dba_segments
where tablespace_name in ('SYSTEM','SYSAUX')
group by owner, segment_type, tablespace_name
order by 1;

What Are the Largest Tables in My DB
====================================
col segment_name for a30
select * from (select owner, segment_name, segment_type, bytes/1024/1024 MB from dba_segments order by bytes/1024/1024 desc) where rownum <= 30;
ASM Disk Group Details
======================
cd /oracle/product/grid_home/bin
./kfod disks=all asm_diskstring='ORCL:*'
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:     557693 Mb ORCL:DBPRD_AR_544G_01
   2:     557693 Mb ORCL:DBPRD_DT01_544G_01
   3:     557693 Mb ORCL:DBPRD_FRA_544G_01
   4:      16378 Mb ORCL:DBPRD_RC_16G_001
   5:      16378 Mb ORCL:DBPRD_RC_16G_002
   6:      16378 Mb ORCL:DBPRD_RC_16G_003
   7:      16378 Mb ORCL:DBPRD_RC_16G_004
   8:      16378 Mb ORCL:DBPRD_RC_16G_005
   9:      16378 Mb ORCL:DBPRD_RC_M_16G_001
  10:      16378 Mb ORCL:DBPRD_RC_M_16G_002
  11:      16378 Mb ORCL:DBPRD_RC_M_16G_003
  12:      16378 Mb ORCL:DBPRD_RC_M_16G_004
  13:      16378 Mb ORCL:DBPRD_RC_M_16G_005
  14:       1019 Mb ORCL:GRID_NPRD_3026_CL_A_1G_1
  15:       1019 Mb ORCL:GRID_NPRD_3026_CL_A_1G_2
  16:       1019 Mb ORCL:GRID_NPRD_3026_CL_B_1G_1
  17:       1019 Mb ORCL:GRID_NPRD_3026_CL_B_1G_2
  18:       1019 Mb ORCL:GRID_NPRD_3026_CL_C_1G_1
  19:       1019 Mb ORCL:GRID_NPRD_3026_CL_C_1G_2
./kfod disks=all asm_diskstring='/dev/oracleasm/disks/*'
--------------------------------------------------------------------------------
 Disk          Size Path                                     User     Group
================================================================================
   1:     557693 Mb /dev/oracleasm/disks/DBPRD_AR_544G_01    oracle   dba
   2:     557693 Mb /dev/oracleasm/disks/DBPRD_DT01_544G_01  oracle   dba
   3:     557693 Mb /dev/oracleasm/disks/DBPRD_FRA_544G_01   oracle   dba
   4:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_16G_001    oracle   dba
   5:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_16G_002    oracle   dba
   6:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_16G_003    oracle   dba
   7:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_16G_004    oracle   dba
   8:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_16G_005    oracle   dba
   9:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_M_16G_001  oracle   dba
  10:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_M_16G_002  oracle   dba
  11:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_M_16G_003  oracle   dba
  12:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_M_16G_004  oracle   dba
  13:      16378 Mb /dev/oracleasm/disks/DBPRD_RC_M_16G_005  oracle   dba
  14:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_A_1G_1 oracle dba
  15:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_A_1G_2 oracle dba
  16:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_B_1G_1 oracle dba
  17:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_B_1G_2 oracle dba
  18:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_C_1G_1 oracle dba
  19:       1019 Mb /dev/oracleasm/disks/GRID_NPRD_3026_CL_C_1G_2 oracle dba
Clear SQL Cache
===============
SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';
ADDRESS HASH_VALUE ---------------- ---------- 000000085FD77CF0 808321886
SQL> exec DBMS_SHARED_POOL.PURGE ('000000085FD77CF0, 808321886', 'C');
PL/SQL procedure successfully completed.
SQL> select ADDRESS, HASH_VALUE from V$SQLAREA where SQL_ID like '7yc%';
no rows selected
Thread Dump
===========
jstack -l <pid> > <file-path>
kill -3 <pid>
Get the Object Name from a Block ID
===================================
SET PAUSE ON
SET PAUSE 'Press Return to Continue'
SET PAGESIZE 60
SET LINESIZE 300
COLUMN segment_name FORMAT A24
COLUMN segment_type FORMAT A24

SELECT segment_name, segment_type, block_id, blocks
FROM   dba_extents
WHERE  file_id = &file_no
AND    ( &block_value BETWEEN block_id AND ( block_id + blocks ) )
/
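The &file_no and &block_value substitution variables above often come from a corruption error (e.g. ORA-01578) or from a ROWID. As a hedged example, DBMS_ROWID can derive them from a known row (the table name here is purely illustrative):

```sql
-- Derive the relative file number and block number for a sample row,
-- then feed them into the dba_extents lookup above.
-- scott.emp is only an example table.
SELECT dbms_rowid.rowid_relative_fno(rowid) AS file_no,
       dbms_rowid.rowid_block_number(rowid) AS block_no
FROM   scott.emp
WHERE  rownum = 1;
```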
DB Link Details
===============
col DB_LINK for a30
col OWNER for a30
col USERNAME for a30
col HOST for a30
select * from dba_db_links;
RMAN tablespace and datafile backup Script for Oracle Database
The following scripts back up a tablespace and a datafile in an Oracle database. First, list the schema with an RMAN report:
RMAN> report schema;
Report of database schema for database with db_unique_name XE
List of Permanent Datafiles
===========================
File Size(MB) Tablespace           RB segs Datafile Name
---- -------- -------------------- ------- ------------------------
1…
300+ TOP Oracle Database Backup & Recovery Interview Questions and Answers
Oracle Database Backup and Recovery Interview Questions for freshers and experienced candidates:
1. What is an Oracle database Partial Backup?
A partial backup is any operating system backup short of a full backup, taken while the database is open or shut down.

2. What is an Oracle database Full Backup?
A full backup is an operating system backup of all the data files, online redo log files, and control file that constitute the Oracle database, plus the parameter file.

3. What is the difference between Oracle media recovery and crash recovery?
Media recovery is the process of recovering the database from backup when a physical disk failure occurs. Crash recovery is an automated process taken care of by Oracle when an instance failure occurs.

4. What is db_recovery_file_dest in Oracle? When do you need to set this value? Give the steps to perform a point-in-time recovery with a backup taken before a RESETLOGS of the DB. What steps are required to enable RMAN backup for a target database?
In Oracle, db_recovery_file_dest specifies the default location of the flash recovery area, which contains multiplexed current control files and online redo logs, as well as archived logs, RMAN backups, and flashback logs. db_recovery_file_dest_size should be specified as well.

5. What is Restricted Mode of Instance Startup in Oracle?
To enable a restricted session: ALTER SYSTEM ENABLE RESTRICTED SESSION;
To disable a restricted session: ALTER SYSTEM DISABLE RESTRICTED SESSION;
To start the database in restricted mode: STARTUP RESTRICT
By starting the instance in restricted mode, not all users are allowed access; only users with the RESTRICTED SESSION privilege can connect. This is done when making data changes, so that no ordinary users can access the data while the changes are happening.

6. What is the difference between recovery and restoring of an Oracle database?
Here is a scenario to understand restore and recovery:
Sunday 10 pm: the database is backed up and is running fine.
Monday 11 am: the database went down / crashed for some reason.
To bring up the database, we have two options:
Simple restore: copy the files from the backup taken Sunday night and open the database. Here we lose all the changes made since Sunday night.
Restore and recovery: copy the files from the backup taken Sunday night and apply all the archived log and redo log files to bring the database up to the point of failure. Here you don't lose the changes made until Monday 11 am.
Restore: copying files from the backup, overwriting the existing database files.
Recovery: applying the changes to the database up to the point of failure; these changes are recorded in the online redo logs and archived logs (which are backups of the redo logs).

7. What are the different tools available for hot backups in Oracle? Is it preferable to take them manually all the time, or does it depend on the size of the database?
A hot backup can be done either by RMAN, by user-managed backups (putting tablespaces in backup mode), or by OEM, which does the same as a user-managed backup. The choice depends on the size of the database: if the database size is in terabytes, an RMAN backup can take more than 10 hours to complete, and if the database is critical you cannot wait that long. In that case there are special backup techniques provided by vendors such as Tivoli and NetBackup: a BCV (Business Content Volume Sync) backup copies a snapshot of the primary data to another place and backs up the database from one SAN to another within about 15 minutes for 2 TB, and this is the preferred method for big companies.

8. What do you mean by Oracle media recovery?
When a physical disk fails, or a physical database file is corrupted, media recovery is required.

9. What is disk migration? What steps are involved in an Oracle disk migration?
Disk migration is nothing but the migration of data from one OS-dependent database to another.
The steps involved are: first, go to your source database and export all your data into flat files; then, in the destination database during installation, when it asks for a data source, give the path of the flat files you exported previously instead of the Oracle-provided data. 10. What are the advantages of operating a database in ARCHIVELOG mode over operating it in NOARCHIVELOG mode in Oracle? With the database in ARCHIVELOG mode you have the chance to take hot backups and to recover with no data loss, and you can use RMAN for backup and recovery. The disadvantages are somewhat poorer performance and a greater chance of the archive destination disk filling up.
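The mode switch behind Q10 is a short fixed sequence; a minimal SQL*Plus sketch (no site-specific archive destinations assumed):

```sql
-- Check the current mode first
SELECT log_mode FROM v$database;

-- The switch requires a clean shutdown and a mounted (not open) database
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;

-- Confirm the mode and the archive destination
ARCHIVE LOG LIST;
```

From this point on, hot backups and complete recovery become possible, at the cost of the disk space the archived logs consume.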
Oracle Database Backup & Recovery Interview Questions 11. Why is more redo generated when the Oracle database is in begin backup mode? During begin backup mode the datafile headers are frozen, and because a copied datafile may contain fractured blocks, the entire changed block (rather than just the changed row) is written to the redo log on first change; as a result more redo is generated, more log switches occur, and in turn more archived logs are produced. 12. What is the use of the FULL option in the EXP command? A flag to indicate whether a full database export should be performed. 13. What is the use of the OWNER option in the EXP command? The list of user accounts whose objects should be exported. 14. What is the use of the TABLES option in the EXP command? The list of tables that should be exported. 15. What is the use of the RECORDLENGTH option in the EXP command? The record length in bytes. 16. What is the use of the INCTYPE option in the EXP command? The type of export to be performed: COMPLETE, CUMULATIVE or INCREMENTAL. 17. What is the use of the RECORD option in the EXP command? For incremental exports, a flag indicating whether a record will be stored in the data dictionary tables recording the export. 18. What is the use of the PARFILE option in the EXP command? The name of the parameter file to be passed to export. 19. What is the use of the ANALYZE option in the EXP command? A flag to indicate whether statistical information about the exported objects should be written to the export dump file. 20. What is the use of the CONSISTENT option in the EXP command? A flag to indicate whether a read-consistent version of all the exported objects should be maintained. 21. What is the use of the LOG (Ver 7) option in the EXP command? The name of the file to which the log of the export will be written. 22. What is the use of the FILE option in the IMP command? The name of the file from which import should be performed. 23. What is the use of the SHOW option in the IMP command? A flag to indicate whether the file contents should be displayed or not. 24. What is the use of the IGNORE option in the IMP command? A flag to indicate whether the import should ignore errors encountered when issuing CREATE commands. 25. 
What is the use of the GRANTS option in the IMP command? A flag to indicate whether grants on database objects will be imported. 26. What is the use of the INDEXES option in the IMP command? A flag to indicate whether import should import indexes on tables or not. 27. What is the use of the ROWS option in the IMP command? A flag to indicate whether rows should be imported. If this is set to 'N', then only the DDL for database objects will be executed. 28. What are the different methods of backing up an Oracle database? Logical backups, cold backups, hot backups (archive log). 29. What is a logical backup? A logical backup involves reading a set of database records and writing them into a file. The Export utility is used for taking the backup and the Import utility is used to recover from it. 30. What is a cold backup? What are the elements of it? A cold backup is taking a backup of all physical files after a normal shutdown of the database. We need to take: all data files, all control files, all online redo log files, and the init.ora file (optional). 31. What are the different kinds of export backups? Full backup - the complete database. Incremental backup - only tables affected since the last incremental/full backup date. Cumulative backup - only tables affected since the last cumulative/full backup date. 32. What is a hot backup and how can it be taken? Taking a backup, including archived log files, while the database is open. For this, ARCHIVELOG mode must be enabled. The following files need to be backed up: all data files, all archived log and redo log files, one control file. 33. What is the use of the FILE option in the EXP command? To give the export file name. 34. What is the use of the COMPRESS option in the EXP command? A flag to indicate whether export should compress fragmented segments into single extents. 35. What is the use of the GRANTS option in the EXP command? A flag to indicate whether grants on database objects will be exported or not. The value is 'Y' or 'N'. 36. What is the use of the INDEXES option in the EXP command? 
A flag to indicate whether indexes on tables will be exported. 37. What is the use of the ROWS option in the EXP command? A flag to indicate whether table rows should be exported. If 'N', only DDL statements for the database objects will be created. 38. What is the use of the CONSTRAINTS option in the EXP command? A flag to indicate whether constraints on tables need to be exported.
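Many of the EXP/IMP options above are combined on one command line; a hedged sketch (the SCOTT account, password and file names are illustrative only):

```
% exp scott/tiger FILE=scott_full.dmp OWNER=scott ROWS=Y CONSISTENT=Y LOG=scott_exp.log
% imp scott/tiger FILE=scott_full.dmp SHOW=Y LOG=scott_imp_preview.log
% imp scott/tiger FILE=scott_full.dmp IGNORE=Y ROWS=Y
```

The first command exports SCOTT's schema with row data and a read-consistent snapshot; the second only lists the dump file contents (SHOW=Y imports nothing); the third performs the actual import, ignoring errors from pre-existing objects.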
RMAN QUICK LEARN – FOR THE BEGINNERS
Oracle Recovery Manager (RMAN) is Oracle's preferred tool for taking backups and restoring and recovering our database. You must develop a proper backup strategy which provides maximum flexibility to restore & recover the DB from any kind of failure. To develop a proper backup strategy you must first decide on the requirements and then consider the possible backup options. The recommended backup strategy must include the backup of all datafiles and archived logs, plus spfile & controlfile autobackup. To take online or hot backups the database must be in ARCHIVELOG mode. You can, however, use RMAN to take an offline or cold backup.
Note: Selecting the backup storage media is also an important consideration. If you are storing your backup on disk, it is recommended to keep an extra copy of the backup on another server.
CREATING THE RECOVERY CATALOG: Oracle recommends using a separate database for the RMAN catalog. The steps below assume the database is already created:
1. Create a tablespace for RMAN:
SQL> create tablespace RTBS datafile 'D:\ORACLE\ORADATA\RTBS01.DBF' size 200M extent management local uniform size 5M;
2. Create the RMAN catalog user:
SQL> create user CATALOG identified by CATALOG default tablespace RTBS quota unlimited on RTBS;
3. Grant some privileges to the RMAN user:
SQL> grant connect, resource to CATALOG;
SQL> grant recovery_catalog_owner to CATALOG;
4. Connect to the catalog database and create the catalog:
% rman catalog CATALOG/CATALOG@cat_db log=create_catalog.log
RMAN> create catalog tablespace RTBS;
RMAN> exit;
5. Connect to the target database and to the catalog database:
% rman target sys/oracle@target_db
RMAN> connect catalog CATALOG/CATALOG@cat_db
6. Connected to both databases, register the target database:
RMAN> register database;
The following list gives an overview of the commands and their uses in RMAN. 
For detailed descriptions, search the related topics in separate posts on my blog: http://shahiddba.blogspot.com/
INITIALIZATION PARAMETERS: Some RMAN-related database initialization parameters:
control_file_record_keep_time: time in days to retain records in the control file (default: 7 days).
large_pool_size: memory pool used by RMAN in backup/restore operations.
shared_pool_size: memory pool used by RMAN in backup/restore operations (only if the large pool is not configured).
CONNECTING RMAN
export ORACLE_SID=<SID>   --Linux platform
set ORACLE_SID=<SID>      --Windows platform
To connect to a target database, execute RMAN.EXE then:
RMAN> connect target /
RMAN> connect target username/password
RMAN> connect target username/password@target_db
To connect to a catalog database:
RMAN> connect catalog username/password
RMAN> connect catalog username/password@catalog_db
To connect directly from the command prompt:
C:\>rman target /   --target with nocatalog
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: RMAN (DBID=63198018)
using target database controlfile instead of recovery catalog
C:\>rman target sys/oracle@orcl3 catalog catalog/catalog@rman   --with catalog
Recovery Manager: Release 9.2.0.1.0 - Production
Copyright (c) 1995, 2002, Oracle Corporation. All rights reserved.
connected to target database: SADHAN (DBID=63198018)
connected to recovery catalog database
RMAN PARAMETERS
RMAN parameters can be set to a specified value and remain persistent. This information is stored in the target database's controlfile (by default). Alternatively you can store this backup information in the recovery catalog. 
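The initialization parameters listed above can be verified from SQL*Plus before configuring RMAN; a small sketch:

```sql
SELECT name, value
  FROM v$parameter
 WHERE name IN ('control_file_record_keep_time',
                'large_pool_size',
                'shared_pool_size');
```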
If you connect without a catalog, or only to the target database, your repository is in the controlfile.
SHOW/CONFIGURE – the SHOW command displays the current values of the configured parameters and the CONFIGURE command sets a new value for a parameter.
RMAN> show all;
using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> show datafile backup copies;
RMAN> show default device type;
RMAN> show device type;
RMAN> show channel;
RMAN> show retention policy;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
CONFIGURE channel device type disk format 'D:\oraback\%U';
You can set many parameters by configuring them first and making them persistent, or you can override them (discarding any persistent configuration) by specifying them explicitly in your RMAN BACKUP command.
Setting / Default / Recommended:
- Controlfile autobackup: default off; recommended on.
- Retention policy: default to redundancy 1; recommended to recovery window of 30 days.
- Device type: default disk parallelism 1; recommended disk|sbt parallelism 2.
- Default device type: default to disk; recommended to disk.
- Backup optimization: default off; recommended off.
- Channel device type: default none; recommended disk parms='...'.
- Maxsetsize: default unlimited; recommended depends on your database size.
Appending CLEAR or NONE at the end of a configuration parameter command resets the configuration to its default or to no setting:
CONFIGURE RETENTION POLICY CLEAR;
CONFIGURE RETENTION POLICY NONE;
Overriding the configured retention policy:
change backupset 421 keep forever nologs;
change datafilecopy 'D:\oracle\oradata\users01.dbf' keep until 'SYSDATE+30';
RMAN BACKUP SCRIPTS:
Backing up the database can be done with just a few commands or with numerous options. 
RMAN> backup database;
RMAN> backup as compressed backupset database;
RMAN> backup incremental level=0 database;
RMAN> backup database tag=Weekly_Sadhan;
RMAN> backup database maxsetsize=2g;
RMAN> backup tablespace orafin;
You may also combine options together in a single backup, and run a multi-channel backup:
RMAN> backup incremental level=1 as compressed backupset database format 'H:\ORABACK\%U' maxsetsize 2G;
backup full datafile x,y,z incremental level x include current controlfile archivelog all delete input copies x filesperset x maxsetsize xM diskratio x format = 'D:\oraback\%U';
run {
allocate channel d1 type disk format "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
allocate channel d2 type disk format "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
allocate channel d3 type disk format "H:\oraback\Weekly_%T_L0_%d-%s_%p.db";
backup incremental level 0 tag Sadhan_Full_DBbackup filesperset 8 format "H:\oraback\Weekly_%T_FULL_%d-%s_%p.db" database;
sql 'ALTER SYSTEM ARCHIVE LOG CURRENT';
backup archivelog all tag Sadhan_Full_Archiveback filesperset 8 format "H:\oraback\Weekly_%T_FULL_%d-%s_%p.arch";
release channel d1;
release channel d2;
release channel d3;
}
The COPY command and some copy scripts:
copy datafile 'D:\oracle\oradata\users01.dbf' to 'H:\oraback\users01.dbf' tag=DF3, datafile 4 to 'H:\oraback\users04.dbf' tag=DF4, archivelog 'arch_1060.arch' to 'arch_1060.bak' tag=CP2ARCH16;
run {
allocate channel c1 type disk;
copy datafile 'd:\oracle\oradata\users01.dbf' to 'h:\oraback\users01.dbf' tag=DF3, archivelog 'arch_1060.arch' to 'arch_1060.bak' tag=CP2ARCH16;
}
COMPRESSED – compresses the backup as it is taken.
INCREMENTAL – backs up only the changes since the last full backup.
FORMAT – allows you to specify an alternate location.
TAG – lets you name your backup.
MAXSETSIZE – limits the backup piece size.
TABLESPACE – allows you to back up only a tablespace.
RMAN MAINTENANCE:
You can review your RMAN backups using the LIST command. 
You can use LIST with options to customize what you want RMAN to return to you.
RMAN> list backup SUMMARY;
RMAN> list ARCHIVELOG ALL;
RMAN> list backup COMPLETED before '02-FEB-09';
RMAN> list backup of database TAG Weekly_sadhan;
RMAN> list backup of datafile "D:\oracle\oradata\sadhan\users01.dbf" SUMMARY;
RMAN> list copy of tablespace "SYSTEM";
You can test your backups using the VALIDATE command:
RMAN> restore database validate;
You can ask RMAN to report backup information:
RMAN> report schema;
RMAN> report need backup;
RMAN> report need backup incremental 3 database;
RMAN> report need backup days 3;
RMAN> report need backup days 3 tablespace system;
RMAN> report need backup redundancy 2;
RMAN> report need backup recovery window of 3 days;
RMAN> report unrecoverable;
RMAN> report obsolete;
RMAN> delete obsolete;
RMAN> delete noprompt obsolete;
RMAN> crosscheck;
RMAN> crosscheck backup;
RMAN> crosscheck backupset of database;
RMAN> crosscheck copy;
RMAN> delete expired;   --use this after the crosscheck command
RMAN> delete noprompt expired backup of tablespace users;
To delete backups and copies:
RMAN> delete backupset 104;
RMAN> delete datafilecopy 'D:\oracle\oradata\users01.dbf';
To change the status of some backups or copies to unavailable and back to available:
RMAN> change backup of controlfile unavailable;
RMAN> change backup of controlfile available;
RMAN> change datafilecopy 'H:\oraback\users01.dbf' unavailable;
RMAN> change copy of archivelog sequence between 230 and 240 unavailable;
To catalog or uncatalog in the RMAN repository copies of datafiles, archivelogs and controlfiles made by users using OS commands:
RMAN> catalog datafilecopy 'F:\oraback\sample01.dbf';
RMAN> catalog archivelog 'E:\oracle\arch_404.arc', 'F:\oracle\arch_410.arc';
RMAN> catalog controlfilecopy 'H:\oracle\oradata\controlfile.ctl';
RMAN> change datafilecopy 'F:\oraback\sample01.dbf' uncatalog;
RMAN> change archivelog 'E:\oracle\arch_404.arc', 'E:\oracle\arch_410.arc' uncatalog;
RMAN> change controlfilecopy 
'H:\oracle\oradata\controlfile.ctl' uncatalog;
RESTORING & RECOVERING WITH RMAN BACKUP
You can easily perform restore & recover operations with RMAN. Depending on the situation you can select either a complete or an incomplete recovery process. A complete recovery applies all the redo or archived logs, whereas an incomplete recovery does not apply all of them. In an incomplete recovery, since you are not going to recover the database to the most current time, you must tell Oracle when to terminate recovery.
Note: You must open your database with the RESETLOGS option after each incomplete recovery. The RESETLOGS operation starts the database with a new stream of log sequence numbers starting with sequence 1.
DATAFILE – restores the specified datafile.
CONTROLFILE – to restore the controlfile from backup, the database must be in NOMOUNT.
ARCHIVELOG or ARCHIVELOG FROM/UNTIL – restores archived logs to the location they were backed up from.
TABLESPACE – restores all the datafiles associated with the specified tablespace. It can be done with the database open.
RECOVER TABLESPACE/DATAFILE:
If a non-SYSTEM tablespace or datafile is missing or corrupted, recovery can be performed while the database remains open.
STARTUP; (you will get ORA-1157 and ORA-1110 and the name of the missing datafile; the database will remain mounted)
Use OS commands to restore the missing or corrupted datafile to its original location, i.e.:
cp -p /user/backup/uman/user01.dbf /user/oradata/u01/dbtst/user01.dbf
SQL> ALTER DATABASE DATAFILE 3 OFFLINE; (the tablespace cannot be used because the database is not open)
SQL> ALTER DATABASE OPEN;
SQL> RECOVER DATAFILE 3;
SQL> ALTER TABLESPACE <tablespace_name> ONLINE; (alternatively you can use the ALTER DATABASE command to bring the datafile online)
If the problem is only a single file, restore only that particular file; otherwise restore & recover the whole tablespace. 
The database can be in use while recovering the whole tablespace.
run {
sql 'alter tablespace users offline';
allocate channel c1 device type disk|sbt;
restore tablespace users;
recover tablespace users;
sql 'alter tablespace users online';
}
If the problem is in a SYSTEM datafile or tablespace, you cannot open the database, and you need sufficient downtime to recover it. If the problem affects more than one file, it is better to recover the whole tablespace or database.
startup mount
run {
allocate channel c1 device type disk|sbt;
allocate channel c2 device type disk|sbt;
restore database check readonly;
recover database;
alter database open;
}
DATABASE DISASTER RECOVERY:
Disaster recovery plans start with risk assessment. We need to identify all the risks that our data center can face, such as: all datafiles are lost, all copies of the current controlfile are lost, all online redo log group members are lost, loss of the OS, loss of a disk drive, complete loss of our server, etc. Our disaster plan should give a brief description of recovery from each of the above disasters. Planning disaster recovery in advance is essential for the DBA, to avoid any worrying or panic situation.
The method below is used for complete disaster recovery on the same or a different server:
set dbid=xxxxxxx
startup nomount;
run {
allocate channel c1 device type disk|sbt;
restore spfile to 'some_location' from autobackup;
}
shutdown immediate;
startup nomount;
run {
allocate channel c1 device type disk|sbt;
restore controlfile from autobackup;
alter database mount;
}
RMAN> restore database;
RMAN> recover database;   --not needed in case of a cold backup
RMAN> alter database open resetlogs;
DATABASE POINT-IN-TIME RECOVERY:
DBPITR enables you to recover a database to some time in the past. 
For example, if a logical error occurred today at 10:00 AM, DBPITR enables you to restore the entire database to the state it was in at 9:59 AM, thereby removing the effect of the error; however, it also removes all other valid updates that occurred since 9:59 AM. DBPITR requires that the database is in ARCHIVELOG mode, that an existing backup of the database created before the point in time to which you wish to recover exists, and that all the archived logs and online logs created from the time of the backup until the point in time to which you wish to recover exist as well.
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> run {
set until time "to_date('12-May-2012 00:00:00', 'DD-MON-YYYY HH24:MI:SS')";
restore database;
recover database;
}
RMAN> alter database open resetlogs;
Caution: It is highly recommended that you back up your controlfile and online redo log files before invoking DBPITR, so you can recover back to the current point in time in case of any issue.
Oracle will automatically stop recovery when the time specified in the RECOVER command has been reached. Oracle will respond with a recovery successful message.
SCN/CHANGE-BASED RECOVERY:
Change-based recovery allows the DBA to recover to a desired System Change Number (SCN). This situation is most likely to occur if archived log files or redo log files needed for recovery are lost or damaged and cannot be restored.
Steps:
– If the database is still open, shut it down using the SHUTDOWN command with the ABORT option.
– Make a full backup of the database, including all datafiles, a control file and the parameter files, in case an error is made during the recovery.
– Restore backups of all datafiles. Make sure the backups were taken before the point in time you are going to recover to. Any datafiles added after the point in time you are recovering to should not be restored. They will not be used in the recovery and will have to be recreated after recovery is complete. 
Any data in the datafiles created after the point of recovery will be lost.
– Make sure read-only tablespaces are offline before you start recovery, so recovery does not try to update the datafile headers.
RMAN> shutdown abort;
RMAN> startup mount;
RMAN> run {
set until scn 1048438;
restore database;
recover database;
alter database open resetlogs;
}
RMAN> restore database until sequence 9923;   --archived log sequence number
RMAN> recover database until sequence 9923;   --archived log sequence number
RMAN> alter database open resetlogs;
Note: Query V$LOG_HISTORY and check the alert.log to find the SCN of an event and recover to a prior SCN.
IMPORTANT VIEWS:
Views to consult in the target database:
v$backup_device: device types accepted for backups by RMAN.
v$archived_log: redo logs archived.
v$backup_corruption: corrupted blocks in backups.
v$copy_corruption: corrupted blocks in copies.
v$database_block_corruption: corrupted blocks in the database after the last backup.
v$backup_datafile: backups of datafiles.
v$backup_redolog: backups of redo logs.
v$backup_set: backup sets made.
v$backup_piece: pieces of previously made backup sets.
v$session_longops: long operations running at this time.
Views to consult in the RMAN catalog database:
rc_database: information about the target database.
rc_datafile: information about the datafiles of the target database.
rc_tablespace: information about the tablespaces of the target database.
rc_stored_script: stored scripts.
rc_stored_script_line: source of stored scripts. 
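Of the views above, v$session_longops is the usual way to watch a running backup or restore; a sketch of a typical progress query:

```sql
SELECT sid, serial#, opname,
       ROUND(sofar / totalwork * 100, 2) AS pct_done
  FROM v$session_longops
 WHERE opname LIKE 'RMAN%'
   AND totalwork > 0
   AND sofar < totalwork;   -- only operations still in progress
```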
For more information on RMAN, see the posts below:
- Different RMAN Recovery Scenarios (24-Feb-13)
- Synchronizes the Test database with RMAN Cold Backup (16-Feb-13)
- Plan B: Renovate old Apps Server Hardware (27-Jan-13)
- Plan A: Renovate old Apps Server Hardware (25-Jan-13)
- Planning to Renovate old Apps Server Hardware (24-Jan-13)
- Duplicate Database with RMAN without Connecting to Target Database (23-Jan-13)
- Different RMAN Errors and their Solution (24-Nov-12)
- Block Media Recovery using RMAN (4-Nov-12)
- New features in RMAN since Oracle9i/10g (14-Oct-12)
- A Shell Script To Take RMAN Cold/Hot and Export Backup (7-Oct-12)
- Automate Rman Backup on Windows Environment (3-Sep-12)
- How to take cold backup of oracle database? (26-Aug-12)
- Deleting RMAN Backups (22-Aug-12)
- Script: RMAN Hot Backup on Linux Environment (1-Aug-12)
- How RMAN behave with the allocated channel during backup (31-Jul-12)
- RMAN Important Commands Description (7-Jul-12)
- Script: Crontab Use for RMAN Backup (2-Jun-12)
- RMAN Report and Show Commands (16-May-12)
- RMAN backup on a Windows server through DBMS_SCHEDULING (15-May-12)
- Format Parameter of Rman Backup (12-May-12)
- Rman Backup with Stored Script (12-May-12)
- Rman: Disaster Recovery from the Scratch (6-May-12)
- RMAN - Change-Based (SCN) Recovery (30-Apr-12)
- RMAN - Time-Based Recovery (30-Apr-12)
- RMAN – Cold backup Restore (23-Apr-12)
- RMAN Backup on Network Storage (22-Apr-12)
- Rman Catalog Backup Script (18-Apr-12)
- Point to be considered with RMAN Backup Scripts (11-Apr-12)
- Monitoring RMAN Through V$ Views (7-Apr-12)
- RMAN Weekly and Daily Backup Scripts (25-Mar-12)
- Unregister Database from RMAN (6-Mar-12)
DBA interview Question and Answer part 23
What is the basic difference between V$ views and GV$ views, or between V$ and V_$ views?
The V_$ views (V$ is the public synonym for each V_$ view) are called dynamic performance views. They are continuously updated while a database is open and in use, and their contents relate primarily to performance.
SELECT owner, object_name, object_type FROM dba_objects WHERE object_name LIKE '%SESSION%' AND object_name LIKE 'V%';
OWNER OBJECT_NAME       OBJECT_TYPE
----- ----------------- -----------
SYS   V_$HS_SESSION     VIEW
SYS   V_$LOGMNR_SESSION VIEW
SYS   V_$PX_SESSION     VIEW
SYS   V_$SESSION        VIEW
GV$ views, on the other hand, are called global dynamic performance views. They retrieve information about all started instances accessing one RAC database, in contrast with the dynamic performance views, which retrieve information about the local instance only. The GV$ views have the additional column INST_ID, which identifies the instance in a RAC environment.
GV$ views use a special form of parallel execution. The parallel execution coordinator runs on the instance that the client connects to, and one slave is allocated on each instance to query the underlying V$ view for that instance.
What is the purpose of the default tablespace in an Oracle database?
Each user should have a default tablespace. When a user creates a schema object and specifies no tablespace to contain it, Oracle Database stores the object in the user's default tablespace.
The default setting for the default tablespace of all users is the SYSTEM tablespace. If a user is likely to create any type of object, you should assign the user a specific default tablespace.
Note: Using a tablespace other than SYSTEM reduces contention between data dictionary objects and user objects for the same data files. 
Thus it is not advisable for user data to be stored in the SYSTEM tablespace.
SELECT username, default_tablespace FROM dba_users WHERE username = 'EDSS';
SQL> ALTER USER EDSS DEFAULT TABLESPACE XYZ;
SELECT username, default_tablespace FROM dba_users WHERE username = 'EDSS';
Once you change the default tablespace for a user, the previously existing objects stay where they are. Assuming you never specified a tablespace when you created the objects and let them use the user's default tablespace, the objects stay stored in the previous tablespace (tablespace A) and new objects will be created in the new default tablespace (tablespace B). As in the example above, the existing objects for EDSS stay in the ORAJDA_DB tablespace and any new object will be stored in the ORAJDA_DB1 tablespace.
What is the Identity Columns feature in Oracle 12c?
Before Oracle 12c there was no direct equivalent of the AutoNumber or Identity functionality; when needed, it was implemented using a combination of sequences and triggers. The Oracle 12c database introduces the ability to define an identity clause for a table column defined using a numeric type:
GENERATED [ALWAYS | BY DEFAULT [ON NULL]] AS IDENTITY
Using the ALWAYS keyword forces the use of the identity. Using BY DEFAULT allows you to use the identity if the column isn't referenced in the insert statement. Using BY DEFAULT ON NULL allows the identity to be used even when the identity column is referenced and a NULL value is specified.
How to find which user truncated a table?
If you have already configured LogMiner for your database then there is nothing more to do: you can query the v$logmnr_contents view and find the information. Otherwise you need to configure LogMiner first.
Why use a materialized view instead of a table?
Materialized views are basically used to increase query performance, since they contain the results of a query. 
They should be used for reporting instead of a table for faster execution.
How does a session communicate with the server process?
Server processes execute SQL received from user processes.
Which SGA memory structure cannot be re-sized dynamically after instance startup?
The log buffer.
Which activity will generate less undo data?
Insert.
What happens when a user issues a COMMIT?
The LGWR flushes the log buffer to the online redo log.
When does the SMON process perform instance crash recovery?
Only at startup time, after an abort shutdown.
What is the purpose of a synonym in Oracle?
A synonym permits applications to function without modification regardless of which user owns the table or view, or which database holds the table or view. It masks the real name and owner of an object and provides location transparency for tables, views or program units of a remote database.
CREATE SYNONYM pay_payment_master FOR HRMS.pay_payment_master;
CREATE PUBLIC SYNONYM pay_payment_master FOR [email protected];
How many memory layers are in the shared pool?
The shared pool of the SGA has three layers: the library cache, which contains parsed SQL statements, cursor information, execution plans etc.; the dictionary cache, which caches user account information, privilege information, and datafile, segment and extent information; and buffers for parallel execution messages and control structures.
What is the cache hit ratio, and what impact does it have on performance?
It measures how often a requested block has been found in the buffer cache without requiring a disk read. This ratio is computed using the view V$SYSSTAT. 
The buffer cache hit ratio can be used to verify the physical I/O as predicted by V$DB_CACHE_ADVICE. The statistics it is based on come from V$SYSSTAT:
SELECT name, value FROM v$sysstat WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
The cache-hit ratio can be calculated as follows:
Hit ratio = 1 - (physical reads / (db block gets + consistent gets))
If the cache-hit ratio goes below 90%, then increase the initialization parameter DB_CACHE_SIZE.
Which environment variables are critical to run OUI?
ORACLE_BASE; ORACLE_HOME; ORACLE_SID
What is the Cluster Verification Utility in a RAC environment?
The Cluster Verification Utility (CVU) is a validation tool that you can use to check all the important components that need to be verified at different stages of deployment in a RAC environment.
How do you identify the voting disks in a RAC environment, and why is their count always an odd number?
As we know, every node is interconnected with the others and pings the voting disks in the cluster to check whether the other nodes are alive. If the voting disk count were even, two partitioned sets of nodes could each survive while seeing half the disks, creating multiple brains in the same cluster. With an odd count, only one partition can ping the greater number of voting disks, and the cluster is saved from multiple-brain (split-brain) syndrome. You can identify the voting disks by using the command:
# crsctl query css votedisk
What are the components of the physical database structure? What is the use of control files?
An Oracle database consists of three main categories of files: one or more datafiles, two or more redo log files, and one or more control files.
When an instance of an Oracle database is started, its control file is used to identify the database and the redo log files that must be opened for database operation to proceed. It is also used in database recovery.
What is the difference between database refreshing and cloning?
DB refreshing means the data in the target environment has been synchronized with a copy of production. 
This can be done by restoring from a backup of the production database, whereas cloning means that an identical copy of production has been taken and restored to the target environment.
When do we need to clone or refresh the database? There are a couple of scenarios when cloning should be performed:
1. Creating a new environment with the same or a different DBNAME.
2. Sometimes we need to apply patches or other major configuration changes, thus a copy of the environment is needed to test the effect of the change.
3. Normally in a software development environment, before any major development effort takes place, it is always good to re-clone the dev and test environments to keep them in sync. A refresh is needed only when you are sure that the environments are already in sync and you need to apply only the latest data changes.
What is the OERR utility?
The OERR (Oracle Error) utility is provided only with Oracle databases on UNIX platforms. OERR is not an executable but a shell script that retrieves messages from installed message files. OERR is an Oracle utility that extracts error messages, with suggested actions, from the standard Oracle message files. This utility is very useful as it can extract OS-specific errors that are not in the generic Error Messages and Codes manual.
What do you mean by logfile mirroring?
The process of keeping a copy of a redo log file is called mirroring. It is done by grouping log files together. This ensures that LGWR automatically writes them to all the members of the current online redo log group. In case a group fails, the database automatically switches over to the next group. It diminishes performance slightly.
What is the use of the large pool? In which cases do you need to use the large pool?
You need to set the large pool if you are using multi-threaded server (MTS) or RMAN backups. It prevents RMAN and the MTS servers from competing with other subsystems for the same memory. 
RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, Oracle allocates backup buffers from local process memory rather than shared memory, and then there is no use for the large pool.
What will be your first steps if you get the message "Application is running slow"?
Gather a statistics (Statspack, AWR) report to find the top 5 wait events, or run the top command in Linux to see CPU usage. Then run vmstat, sar and prstat to get more information on CPU and memory usage and possible blocking.
If poorly written statements are the cause, run EXPLAIN PLAN on these statements and see whether a new index or the use of a HINT brings the cost of the SQL down.
How do you add more or subsequent block size specifications?
Re-create the CONTROLFILE to specify the new BLOCK SIZE for specific data files, or take the database OFFLINE and bring it back online with a new BLOCK SIZE specification.
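For the "application running slow" first steps above, the top wait events can also be pulled straight from the instance; a sketch (the excluded idle events are a simplified, illustrative list):

```sql
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE event NOT LIKE 'SQL*Net%'
           AND event NOT IN ('rdbms ipc message', 'pmon timer', 'smon timer')
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 5;
```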
Useful Query for DBA
Database default information:
Select username, profile, default_tablespace, temporary_tablespace from dba_users;

Database structure information:
SELECT /*+ ordered */ d.tablespace_name tablespace, d.file_name filename, d.bytes filesize, d.autoextensible autoextensible, d.increment_by * e.value increment_by, d.maxbytes maxbytes FROM sys.dba_data_files d, v$datafile v, (SELECT value FROM v$parameter WHERE name = 'db_block_size') e WHERE (d.file_name = v.name)
UNION
SELECT d.tablespace_name tablespace, d.file_name filename, d.bytes filesize, d.autoextensible autoextensible, d.increment_by * e.value increment_by, d.maxbytes maxbytes FROM sys.dba_temp_files d, (SELECT value FROM v$parameter WHERE name = 'db_block_size') e
UNION
SELECT '', a.member, b.bytes, null, TO_NUMBER(null), TO_NUMBER(null) FROM v$logfile a, v$log b WHERE a.group# = b.group#
UNION
SELECT '', a.name, TO_NUMBER(null), null, TO_NUMBER(null), TO_NUMBER(null) FROM v$controlfile a
ORDER BY 1, 2;

Database character set information:
Select * from nls_database_parameters;

Database segment management information:
Select TABLESPACE_NAME, BLOCK_SIZE, EXTENT_MANAGEMENT, SEGMENT_SPACE_MANAGEMENT from dba_tablespaces;

Database object information:
Select owner, object_type, count(*) from dba_objects Where owner not IN ('SYS','MDSYS','CTXSYS','HR','ORDSYS','OE','ODM_MTR','WMSYS','XDB','QS_WS','RMAN','SCOTT','QS_ADM','QS_CBADM','ORDSYS','OUTLN','PM','QS_OS','QS_ES','ODM','OLAPSYS','WKSYS','SH','SYSTEM','ORDPLUGINS','QS','QS_CS') Group by owner, object_type order by owner;

Find the last record from a table?
select * from employees where rowid in (select max(rowid) from employees);
select * from employees minus select * from employees where rownum < (select count(*) from employees);

Convert an SCN to a timestamp:
select scn_to_timestamp(8843525) from dual;

Find UNDO information:
select to_char(begin_time,'hh24:mi:ss'), to_char(end_time,'hh24:mi:ss'), maxquerylen, ssolderrcnt, nospaceerrcnt, undoblks, txncount from v$undostat order by undoblks;

Shared pool information:
select to_number(value) shared_pool_size, sum_obj_size, sum_sql_size, sum_user_size, (sum_obj_size + sum_sql_size + sum_user_size) * 1.3 min_shared_pool from (select sum(sharable_mem) sum_obj_size from v$db_object_cache where type <> 'CURSOR'), (select sum(sharable_mem) sum_sql_size from v$sqlarea), (select sum(250 * users_opening) sum_user_size from v$sqlarea), v$parameter where name = 'shared_pool_size';

How to determine whether the datafiles are synchronized or not:
select status, checkpoint_change#, to_char(checkpoint_time,'DD-MON-YYYY HH24:MI:SS') from v$datafile_header;

How can we see the oldest flashback available?
You can use the following query to see the oldest flashback data available:
select to_char(sysdate,'YYYY-MM-DD HH24:MI') current_time, to_char(oldest_flashback_time,'YYYY-MM-DD HH24:MI') oldest_flashback_time, (sysdate - oldest_flashback_time)*24*60 minutes_available from v$flashback_database_log;

How to get the current session id, process id, and client process id:
select b.sid, b.serial#, a.spid processid, b.process clientpid from v$process a, v$session b where a.addr = b.paddr and b.audsid = userenv('sessionid');
V$SESSION.SID and V$SESSION.SERIAL# are the database session id.
V$PROCESS.SPID is the shadow process id on the database server.
V$SESSION.PROCESS is the client process id; on Windows it is ":"-separated, where the first number is the process id on the client and the second is the thread id.

How to find running jobs in the Oracle database:
select sid, job, instance from dba_jobs_running;
select sid, serial#, machine, status, osuser, username from v$session where username is not null; -- all active users
select owner, job_name from DBA_SCHEDULER_RUNNING_JOBS; -- for Oracle 10g

How to find long-running operations in the Oracle database:
select username, to_char(start_time,'hh24:mi:ss dd/mm/yy') started, time_remaining remaining, message from v$session_longops where time_remaining > 0 order by time_remaining desc;

Report the longest RMAN backup job:
Select username, to_char(start_time,'hh24:mi:ss dd/mm/yy') started, TOTALWORK, SOFAR COMPLETED, time_remaining remaining, ELAPSED_SECONDS, message from v$session_longops where time_remaining = 0 and message like 'RMAN%' order by ELAPSED_SECONDS DESC;

Last DDL time for a particular table:
Select CREATED, TIMESTAMP, last_ddl_time from all_objects WHERE OWNER='HRMS' AND OBJECT_TYPE='TABLE' AND OBJECT_NAME='PAYROLL_MAIN_FILE';

Display logon information of the database:
Select SYSDATE-logon_time "Days", (SYSDATE-logon_time)*24 "Hours" from sys.v_$session where sid=1;
Note: the above query displays for how many days and hours the database has been up, i.e. you can estimate the instance startup time.
Here SID=1 is PMON, a background process that connects at instance startup.

How do you find whether the instance was started with a pfile or spfile?
SELECT name, value FROM v$parameter WHERE name = 'spfile'; -- the value is NULL if you are using a pfile
SELECT COUNT(*) FROM v$spparameter WHERE value IS NOT NULL; -- if the count is non-zero the instance is using an spfile; if zero, a pfile
SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type" FROM sys.v_$parameter WHERE name = 'spfile';

How can you check which user has which role?
Sql> Select * from DBA_ROLE_PRIVS order by grantee;

How to detect migrated and chained rows in a table:
You must first run the script UTLCHAIN.SQL (it creates the CHAINED_ROWS table) before doing the actual query.
SQL> ANALYZE TABLE scott.emp LIST CHAINED ROWS;
SQL> SELECT * FROM chained_rows;
You can also detect migrated and chained rows by checking the 'table fetch continued row' statistic in the v$sysstat view.
SQL> SELECT name, value FROM v$sysstat WHERE name = 'table fetch continued row';

Find the top 10 SQL statements by reads per execution:
SELECT * FROM (SELECT rownum, substr(a.sql_text, 1, 200) sql_text, trunc(a.disk_reads/decode(a.executions, 0, 1, a.executions)) reads_per_execution, a.buffer_gets, a.disk_reads, a.executions, a.sorts, a.address FROM v$sqlarea a ORDER BY 3 DESC) WHERE rownum < 10;

How to get the database version:
SELECT * from v$version;
SELECT VALUE FROM v$system_parameter WHERE name = 'compatible';

Find the size of a schema:
SELECT SUM (bytes / 1024 / 1024) "size" FROM dba_segments WHERE owner = '&owner';

Oracle SQL queries over the views that show actual Oracle connections:
SELECT osuser, username, machine, program FROM v$session ORDER BY osuser;
SELECT program application, COUNT (program) Numero_Sesiones FROM v$session GROUP BY program ORDER BY Numero_Sesiones DESC;

Showing the table structure:
SELECT DBMS_METADATA.get_ddl ('TABLE', 'TABLE_NAME', 'USER_NAME') FROM DUAL;

Getting the current schema:
SELECT SYS_CONTEXT ('userenv', 'current_schema') FROM DUAL;

How to find the last time a session performed any activity:
select username, floor(last_call_et / 60) "Minutes", status from v$session where username is not null order by last_call_et;

How to find parameters that will take effect only for new sessions:
SELECT name FROM v$parameter WHERE issys_modifiable = 'DEFERRED';

How to find tables that have a specific column name:
SELECT owner, table_name, column_name FROM dba_tab_columns WHERE column_name like 'AMOUNT' ORDER by table_name;

Display database recovery status:
SELECT * FROM v$backup;
SELECT * FROM v$recovery_status;
SELECT * FROM v$recover_file;
SELECT * FROM v$recovery_file_status;
SELECT * FROM v$recovery_log;
How to check tablespace in Oracle Database
How to check tablespace in Oracle
To list the names and various other attributes of all tablespaces in a database, use the following query on the DBA_TABLESPACES view:
SELECT TABLESPACE_NAME "TABLESPACE", EXTENT_MANAGEMENT, FORCE_LOGGING, BLOCK_SIZE, SEGMENT_SPACE_MANAGEMENT FROM DBA_TABLESPACES;

To list the datafiles and associated tablespaces of a database
To list the names, sizes, and associated…
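A query along these lines (a sketch using the standard DBA_DATA_FILES view) lists the names, sizes, and associated tablespaces of a database's datafiles:

```sql
SELECT file_name, tablespace_name, bytes
FROM   dba_data_files
ORDER  BY tablespace_name, file_name;
```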
Database Duplication Using Rman Recovery
Introduction
This document gives you a brief description of how to refresh a database (duplicate a database) into another environment from a production database backup taken with RMAN to tape.
I. Introduction
This document describes the approach for refreshing a test database from RMAN production database backups taken to tape or disk.
II. Preliminary Preparation Steps
We first need to make sure that the database is not running within a Fail Safe environment, and that the disk space used by the old database is released, so we can fit the new database. Here are a few steps that need to be taken care of before starting the database refresh:

1. If there is a requirement to preserve some data or accounts (schemas and other important objects) from the old environment, export that data first, before starting the refresh from production.

2. If the databases are running in a Fail Safe environment, shut them down through Fail Safe Manager. Also shut down the listener and the Intelligent Agent running for those individual databases.

3. Modify the TNSNAMES.ORA and INIT.ORA files (and listener.ora if needed) to make sure the database can be started independently, using the local listener. Try this out by starting the listener service and database service manually through the Services screen on the Windows 2000 machine, and starting the database through SQL*Plus.

4. Shut down the database using Fail Safe Manager and remove all database files except the ones in the admin directories (e.g. init.ora). This is needed to clean space on disk to fit the new database. If we have enough disk space for the restore to happen, we can instead move the existing files to a different directory or mount point.
III. Preparing RMAN Duplication Script
Once we are done with the above steps, we can proceed with building the script for restoring the database. An example of this script is given below (placeholders are shown in angle brackets):

connect catalog rman/password@<catalog_db>
connect target sys/password@<target_db>
connect auxiliary sys/password@<auxiliary_db>
run {
  allocate auxiliary channel ch1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=c:\cluster\tdpo.opt)';
  set until scn <scn>;
  set newname for datafile 1 to '<new path for restore>';
  . . . . . .
  duplicate target database to <new_db_name>
    logfile group 1 ('<logfile member>') size 100M,
            group 2 ('<logfile member>') size 100M,
            group 3 ('<logfile member>') size 100M;
}
The description of the above script is as follows:

The first portion deals with connecting to the needed databases:

1. the catalog database, where the RMAN catalog is stored;

2. the target database, which is the database we want to clone; and

3. the auxiliary database, which is the one we are trying to create.

4. Note that when running this script later on, both the catalog and target databases need to be open throughout the process, while the auxiliary database is typically in NOMOUNT state.

Next in the script is allocating the channel used to access the file system through TSM. Note that to do this we need to adjust the TSM configuration (the dsm.opt file, nodename parameter) in order for this node to appear as the production node.

Next in the script is the SET UNTIL SCN / TIME command, which specifies up to which point the database will be duplicated. If UNTIL SCN / TIME is not specified, RMAN will attempt to recover up to the last archived log, which can cause a failure if that log is not available on the tape drive (e.g. it is still on the production server's disk).

Next is the list of SET NEWNAME FOR DATAFILE commands, which are needed when the new disk structure is different from the production disk structure (which is the case on all our systems). All database files must be specified in this list (nothing is needed for tempfiles). The list of datafiles can be obtained by querying the DBA_DATA_FILES data dictionary view.

Finally, the DUPLICATE command is there to do the actual database duplication.
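One way to build that list of SET NEWNAME commands is to generate them from DBA_DATA_FILES on the target database. This is a sketch: the replacement path /u02/oradata/clone/ is a hypothetical example, and the query assumes UNIX-style '/' path separators:

```sql
SELECT 'set newname for datafile ' || file_id ||
       ' to ''/u02/oradata/clone/' ||
       SUBSTR(file_name, INSTR(file_name, '/', -1) + 1) || ''';' AS rman_cmd
FROM   dba_data_files
ORDER  BY file_id;
```

Spool the output of this query and paste it into the RUN block of the duplication script.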
IV. Running the Database Duplication

To run the database duplication we can prepare a batch script, or run a command to start it up. It would look something like this:

rman cmdfile <script file> msglog <log file>

Before starting the RMAN script, the following things need to be taken care of:

1. Verify that the RMAN catalog database is open. Make sure this database stays open during the entire duplication process; e.g. if it usually goes down for backup, switch off the backup schedules. If the connection to the database is lost during the duplication, the process will fail and will need recovery.

2. Verify that the target database is open. Make sure this database stays open during the entire duplication process, the same as for the RMAN catalog database.

3. Verify that the Oracle services for the auxiliary database are running and the database is in NOMOUNT state.

If the RMAN script is successful, it will get all the files from the file system, put them in the proper locations as specified in the script, and recover the database. It will also change the database ID and start the database. This is the best-case scenario; however, if the duplication script fails you may need to recover from the failure.
V. Recovering from Failure
If the RMAN duplication process fails, we may need to recover the database using the RMAN backup. The database duplication or restore can fail for reasons such as:

1. The RMAN catalog database going down for backup.

2. Archived logs not being available on the file system (when SET UNTIL SCN was not specified in the script). In those cases you can try the following steps to recover. First, run the SWITCH CLONE command through RMAN (after connecting to the target, catalog and auxiliary databases):

run {
  switch clone datafile all;
}

Afterwards, try recreating the control file. RMAN first creates a control file, but it does not have all the data files specified in it (it creates the complete one later). The best way is to back up the control file to trace on the target database, and modify that script to run on the auxiliary database. The usual modifications to the script are: use the new filenames (as the locations may have changed), set the new database name, and use the RESETLOGS clause.

Once the control file has been created and executed, to complete recovery of the database up to the specified SCN, the RMAN script can look something like this:

run {
  allocate auxiliary channel ch1 type 'sbt_tape'
    parms 'ENV=(TDPO_OPTFILE=c:\cluster\tdpo.opt)';
  set until scn 6899135273;
  recover clone database check readonly;
  release channel ch1;
}

This step will obtain all needed archived logs from the file system and apply them to the database. After the recovery is finished you can open the database:

alter database open resetlogs;

That completes the recovery. Note that when recovered this way, the database ID is still the same as production's, and therefore you cannot use RMAN to back up that database (unless you are using a different catalog). Hence, one should always strive to have the database duplicated successfully through RMAN without failures.
VI. Post-Refresh Steps

After the database is duplicated, there are a few steps that may be needed:

1. In some environments, it may be necessary to change the log mode to NOARCHIVELOG, as production is usually run in ARCHIVELOG mode.

2. Add files to temporary tablespaces. When the database is restored, all files and tablespaces will exist; however, none of the temp files will be created. One needs to add tempfiles to the temporary tablespaces.

3. Drop all database links and recreate them to point to the proper environment. After duplication, the new database will have the same database links as production, thus pointing to the production databases. All the database links should therefore be dropped and new ones created to point to the new environment.

4. If the new database is running in a Fail Safe environment, one will need to rebuild the password file on the other node (the one that was not used when duplicating the database). If this is not done, the database will not start on that node and the whole Fail Safe group will be moved to the other node.

5. Revert the changes to tnsnames.ora (and listener.ora if applicable) to make sure the database can start within Fail Safe.

6. Revert the changes made to the TSM configuration files (dsm.opt).

7. Shut down the database, stop the local listener and database services, and start the listener and database within Fail Safe.

8. Make sure the database can fail over properly to the other node by moving the Fail Safe group manually.
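Steps 2 and 3 above might be sketched like this (the tablespace name TEMP, the file path, the link name PROD_LINK, and the credentials are all hypothetical examples):

```sql
-- Step 2: add a tempfile to the temporary tablespace,
-- which has no files after the duplication
ALTER TABLESPACE temp ADD TEMPFILE '/u02/oradata/clone/temp01.dbf'
  SIZE 500M AUTOEXTEND ON;

-- Step 3: drop the link still pointing at production
-- and recreate it against the new environment
DROP DATABASE LINK prod_link;
CREATE DATABASE LINK prod_link
  CONNECT TO app_user IDENTIFIED BY app_password USING 'CLONE_DB';
```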
Source: Karthick Thanigaimani
Difference between Auditing and Performance
An Oracle training course is more than enough for you to make your career in this field.

You show this AWR report to the DBA and he happily concludes: turn off auditing, it is killing the performance! And thus, quite often, Oracle database auditing is not enabled. Here are the 3 significant reasons why auditing is not turned on:
– DBAs, developers, etc. are not familiar with this feature: For those who are not familiar with auditing, I recommend Tim Hall's and Pete Finnigan's articles: Auditing in Oracle 10g Release 2 and Guide to Easy Oracle Auditing.

– Security is not considered important and necessary: For those who do not consider auditing essential, I wish them luck. They are anyway not interested in what I have to say.

– Performance is hit by enabling auditing: For those having problems with performance when auditing is enabled, here is something.

There are good reasons why performance suffers when auditing is enabled: too much is being audited, AUD$ still dangles in the SYSTEM tablespace and, surprise, surprise: the Oracle bugs.

1. Too much is being audited. If it is a new database, spend some time with everyone concerned on deciding what to audit. The reality, however, is usually something like this: the go-live date is getting nearer; oh, do we have auditing enabled? How do you enable it, can you give me the command please? It should not go like that. You first decide on the value of AUDIT_TRAIL and then audit only what is really required; do not audit repetitive commands that generate too many inserts into the AUD$ table, for it can grow extremely fast indeed.

Have a look at the entry on Pete Finnigan's site called Performance Impact of Auditing.

If it is an existing database, check first what is being audited. To determine the system audit settings, run the following:
select * from DBA_PRIV_AUDIT_OPTS
union all
select * from DBA_STMT_AUDIT_OPTS;
Note that the difference between the two views above is very subtle and I have not yet found a place with an explanation of the difference. The documentation says that DBA_STMT_AUDIT_OPTS describes the current system auditing options, across the system and by user, while DBA_PRIV_AUDIT_OPTS describes the current system privileges being audited, across the system and by user. Confused? Me too.

For example, AUDIT SYSTEM is related only to DBA_PRIV_AUDIT_OPTS, while PROFILE, PUBLIC SYNONYM, DATABASE LINK, SYSTEM AUDIT, SYSTEM GRANT and ROLE belong only to DBA_STMT_AUDIT_OPTS.

On the contrary, CREATE PUBLIC DATABASE LINK, EXEMPT ACCESS POLICY, CREATE EXTERNAL JOB, DROP USER and ALTER DATABASE belong to both views. Get it?

For the auditing options on all objects, check DBA_OBJ_AUDIT_OPTS.

2. AUD$ still dangles in the SYSTEM tablespace. The SYSTEM tablespace might be fragmented. Starting with 11gR2, Oracle supports moving the AUD$ table out of the SYSTEM tablespace. But first, NOAUDIT your policy or stop the auditing.

If using 11.2.0 and above, follow the documentation instructions.

If still running 11.1.0 or below, here is how to do it:
create tablespace AUDIT_DATA datafile …;
create table AUDX tablespace AUDIT_DATA as select * from AUD$;
rename AUD$ to AUD$$;
rename AUDX to AUD$;
create index i_aud2 on AUD$(sessionid, ses$tid) tablespace AUDIT_DATA;
Remember to delete the records on a regular basis. Do not just delete them; move them to a central audit database. Use the new DBMS_AUDIT_MGMT package. Check Tim Hall's instructions on how to purge audit trail records. In urgent situations, it is safe to run: truncate table AUD$;
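A minimal purge sketch with DBMS_AUDIT_MGMT (assuming the standard audit trail; INIT_CLEANUP only needs to be run once to set up the purge infrastructure):

```sql
BEGIN
  -- one-time initialization of the audit-trail cleanup infrastructure
  DBMS_AUDIT_MGMT.init_cleanup(
    audit_trail_type         => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    default_cleanup_interval => 24);

  -- purge records up to the last archive timestamp
  DBMS_AUDIT_MGMT.clean_audit_trail(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_AUD_STD,
    use_last_arch_timestamp => TRUE);
END;
/
```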
If you use FGA, remember to also move FGA_LOG$ out of the SYSTEM tablespace:
BEGIN
DBMS_AUDIT_MGMT.set_audit_trail_location(
audit_trail_type => DBMS_AUDIT_MGMT.AUDIT_TRAIL_FGA_STD,
audit_trail_location_value => 'AUDIT_DATA');
END;
/
And I point to this post by Martin Widlake: Why is my SYSTEM Tablespace so Big?! The third SYSTEM table, after SYS.AUD$ and SYS.HISTGRM$, that I have seen grow is SYS.IDL_UB1$.

3. Oracle bugs. If you enable auditing you might get several bugs free of charge; most old ones should be fixed in 11.2.0.2, don't know about the new ones.

20 years back, Bug 52646: "audit trail degrades performance too much" was fixed in Oracle 6.

Well, you still have many similar ones. As of today, all these bugs have an empty field for "Fixed in Product Version". And this is not the whole list!

Bug 10347785: large version count for insert into sys.aud$ causing library cache: mutex x / hang

Bug 504968: ora-600 [17051] and database crash when accessing the audit table

Bug 11901734: dbms_audit_mgmt audit trail clean-up cannot keep up with aud$ volume

Bug 8236755: ora-00060 occurs while updating the aud$ table

Bug 6159102: session spins when logging off due to repeated update sys.aud$ statements

Bug 6334058: deadlock with ora-00060 while updating sys.aud$ when audit on select is enabled

Bug 4405301: too many records in aud$ when a sequence is queried and audit is by session

Bug 1257564: noaudit does not turn off auditing of the database (very nice indeed!)

I wish Oracle would file one bug called "Performance problems with the AUD$ table" and fix it, so finally no one complains about the performance of one simple table, which in my view is not even a real dictionary table.

Question: In RAC, sessions from both/all nodes are being audited. Will AUD$ hot blocks "stuffed" with new records be ping-ponged via the interconnect?

An Oracle certification course is more than enough for you to become an Oracle Certified Professional.
#oracle training in pune#oracle courses in pune#database administrator training#oracle certification courses#oracle corporation pune#sql training in pune#sql dba training in pune#best oracle training
300+ TOP ORACLE DBA Objective Questions and Answers
Oracle DBA Multiple Choice Questions :-
1.SNAPSHOT is used for_____ . A. Synonym, B. Table space, c System server, d Dynamic data replication Ans : D 2.We can create SNAPSHOTLOG for A. Simple snapshots, B. Complex snapshots, C. Both A & B, D. Neither A nor B Ans : A 3.Transactions per rollback segment is derived from A. Db_Block_Buffers, B. Processes, C. Shared_Pool_Size, D. None of the above Ans : B 4.ENQUEUE resources parameter information is derived from A. Processes or DDL_LOCKS and DML_LOCKS, B. LOG_BUFFER, C. DB__BLOCK_SIZE.. Ans : A 5.LGWR process writes information into A. Database files, B. Control files, C. Redolog files, D. All the above. Ans : C 6.SET TRANSACTION USE ROLLBACK SEGMENT is used to create user objects in a particular Tablespace A. True, B. False Ans : False 7.Databases overall structure is maintained in a file called A. Redolog file, B. Data file, C. Control file, D. All of the above. Ans : C 8.These following parameters are optional in init.ora parameter file DB_BLOCK_SIZE, PROCESSES A. True, B. False Ans : False 10.Constraints cannot be exported through EXPORT command A. True, B. False Ans : B
ORACLE DBA MCQs 11.It is very difficult to grant and manage common privileges needed by different groups of database users using the roles A. True, B. False Ans : B 12.What is difference between a DIALOG WINDOW and a DOCUMENT WINDOW regarding moving the window with respect to the application window A. Both windows behave the same way as far as moving the window is concerneD. B. A document window can be moved outside the application window while a dialog window cannot be moved C. A dialog window can be moved outside the application window while a document window cannot be moved Ans : C 13.What is the difference between a MESSAGEBOX and an ALERT A. A messagebox can be used only by the system and cannot be used in user application while an alert can be used in user application also. B. A alert can be used only by the system and cannot be use din user application while an messagebox can be used in user application also. C. An alert requires an response from the userwhile a messagebox just flashes a message and only requires an acknowledment from the user D. An message box requires an response from the userwhile a alert just flashes a message an only requires an acknowledment from the user Ans : C 14.Which of the following is not an reason for the fact that most of the processing is done at the server ? A. To reduce network traffiC. B. For application sharing, C. To implement business rules centrally, D. None of the above Ans : D 15.Can a DIALOG WINDOW have scroll bar attached to it ? A. Yes, B. No Ans : B 16.Which of the following is not an advantage of GUI systems ? A. Intuitive and easy to use., B. GUI’s can display multiple applications in multiple windows C. GUI’s provide more user interface objects for a developer D. None of the above Ans: D 17.What is the difference between a LIST BOX and a COMBO BOX ? A. In the list box, the user is restricted to selecting a value from a list but in a combo box the user can type in value which is not in the list B. 
A list box is a data entry area while a combo box can be used only for control purposes C. In a combo box, the user is restricted to selecting a value from a list but in a list box the Ans: A 18.When do you get a .PLL extension ? A. Save Library file B. Generate Library file C. Run Library file D. None of the above Ans : A 19.In a CLIENT/SERVER environment , which of the following would not be done at the client ? A. User interface part, B. Data validation at entry line, C. Responding to user events, D. None of the above Ans : D 20.Why is it better to use an INTEGRITY CONSTRAINT to validate data in a table than to use a STORED PROCEDURE ? A. Because an integrity constraint is automatically checked while data is inserted into or updated in a table while a stored procedure has to be specifically invoked B. Because the stored procedure occupies more space in the database than a integrity constraint definition C. Because a stored procedure creates more network traffic than a integrity constraint definition Ans : A 21.Which of the following is not an advantage of a client/server model ? A. A client/server model allows centralised control of data and centralised implementation of business rules. B. A client/server model increases developer;s productivity C. A client/server model is suitable for all applications D. None of the above. Ans : C 22.What does DLL stands for ? A. Dynamic Language Library B. Dynamic Link Library C. Dynamic Load Library D. None of the above Ans : B 23.POST-BLOCK trigger is a A. Navigational trigger B. Key trigger C. Transactional trigger D. None of the above Ans : A 24.You can prepare for these Oracle employment qualification test multiple choice questions. People usually get similar questions in the regular oracle placement papers. Check out the answers given. The system variable that records the select statement that SQL * FORMS most recently used to populate a block is A. SYSTEM.LAST_RECORD B. SYSTEM.CURSOR_RECORD C. SYSTEM.CURSOR_FIELD D. 
SYSTEM.LAST_QUERY Ans: D 25.Which of the following is TRUE for the ENFORCE KEY field 1. ENFORCE KEY field characterstic indicates the source of the value that SQL*FORMS uses to populate the field 2. A field with the ENFORCE KEY characterstic should have the INPUT ALLOWED charaterstic turned off A. Only 1 is TRUE B. Only 2 is TRUE C. Both 1 and 2 are TRUE D. Both 1 and 2 are FALSE Ans : A 26.What is the maximum size of the page ? A. Characters wide & 265 characters length B. Characters wide & 265 characters length C. Characters wide & 80 characters length D. None of the above Ans : B 27.A FORM is madeup of which of the following objects A. block, fields only, B. blocks, fields, pages only, C. blocks, fields, pages, triggers and form level procedures, D. Only blocks. Ans : C 28.For the following statements which is true 1. Page is an object owned by a form 2. Pages are a collection of display information such as constant text and graphics. A. Only 1 is TRUE B. Only 2 is TRUE C. Both 1 & 2 are TRUE D. Both are FALSE Ans : B 29.The packaged procedure that makes data in form permanent in the Database is A. Post B. Post form C. Commit form D. None of the above Ans : C 30.Which of the following is TRUE for the SYSTEM VARIABLE $$date$$ A. Can be assigned to a global variable B. Can be assigned to any field only during design time C. Can be assigned to any variable or field during run time D. None of the above Ans : B 31.Which of the following packaged procedure is UNRESTRICTED ? A. CALL_INPUT, B. CLEAR_BLOCK, C. EXECUTE_QUERY, D. USER_EXIT Ans : D 32. Identify the RESTRICTED packaged procedure from the following A. USER_EXIT, B. MESSAGE, C. BREAK, D. EXIT_FORM Ans : D 32.What is SQL*FORMS A. SQL*FORMS is a 4GL tool for developing & executing Oracle based interactive applications. B. SQL*FORMS is a 3GL tool for connecting to the Database. C. SQL*FORMS is a reporting tool D. None of the above. 
Ans : A

33. Name the two files that are created when you generate a form using Forms 3.0
A. FMB & FMX  B. FMR & FDX  C. INP & FRM  D. None of the above
Ans : C

34. Which of the following is TRUE for the ERASE packaged procedure
1. ERASE removes an indicated Global variable & releases the memory associated with it
2. ERASE is used to remove a field from a page
A. Only 1 is TRUE  B. Only 2 is TRUE  C. Both 1 & 2 are TRUE  D. Both 1 & 2 are FALSE
Ans : A

35. All datafiles related to a Tablespace are removed when the Tablespace is dropped
A. TRUE  B. FALSE
Ans : B

36. Size of a Tablespace can be increased by
A. Increasing the size of one of the Datafiles  B. Adding one or more Datafiles  C. Cannot be increased  D. None of the above
Ans : B

37. Multiple Tablespaces can share a single datafile
A. TRUE  B. FALSE
Ans : B

38. A set of Dictionary tables are created
A. Once for the Entire Database  B. Every time a user is created  C. Every time a Tablespace is created  D. None of the above
Ans : A

39. The Data dictionary can span across multiple Tablespaces
A. TRUE  B. FALSE
Ans : B

40. What is a DATABLOCK
A. Set of Extents  B. Set of Segments  C. Smallest Database storage unit  D. None of the above
Ans : C

Oracle DBA Objective type Questions with Answers

41. Can an Integrity Constraint be enforced on a table if some existing table data does not satisfy the constraint
A. Yes  B. No
Ans : B

42. A column defined as PRIMARY KEY can have NULLs
A. TRUE  B. FALSE
Ans : B

43. A Transaction ends
A. Only when it is Committed  B. Only when it is Rolled back  C. When it is Committed or Rolled back  D. None of the above
Ans : C

44. A Database Procedure is stored in the Database
A. In compiled form  B. As source code  C. Both A & B  D. Not stored
Ans : C

45. A database trigger does not apply to data loaded before the definition of the trigger
A. TRUE  B. FALSE
Ans : A

46. Dedicated server configuration is
A. One server process – Many user processes  B. Many server processes – One user process  C. One server process – One user process  D. Many server processes – Many user processes
Ans : C

47. Which of the following does not affect the size of the SGA
A. Database buffer  B. Redolog buffer  C. Stored procedure  D. Shared pool
Ans : C

48. What does a COMMIT statement do to a CURSOR
A. Open the Cursor  B. Fetch the Cursor  C. Close the Cursor  D. None of the above
Ans : D

49. Which of the following is TRUE
1. Host variables are declared anywhere in the program
2. Host variables are declared in the DECLARE section
A. Only 1 is TRUE  B. Only 2 is TRUE  C. Both 1 & 2 are TRUE  D. Both are FALSE
Ans : B

50. Which of the following is NOT VALID in PL/SQL
A. Bool boolean;  B. NUM1, NUM2 number;  C. deptname dept.dname%type;  D. date1 date := sysdate
Ans : B

51. GET_BLOCK property is a
A. Restricted procedure  B. Unrestricted procedure  C. Library function  D. None of the above
Ans : D

52. What will be the value of svar after the execution ?
A. Error  B. 10  C. 5  D. None of the above
Ans : A

53. Which of the following is not correct about an Exception ?
A. Raised automatically / Explicitly in response to an ORACLE_ERROR
B. An exception will be raised when an error occurs in that block
C. Process terminates after completion of error sequence.
D. A Procedure or Sequence of statements may be processed.
Ans : C

54. Which of the following is not correct about User_Defined Exceptions ?
A. Must be declared  B. Must be raised explicitly  C. Raised automatically in response to an Oracle error  D. None of the above
Ans : C

55. A Stored Procedure is a
A. Sequence of SQL or PL/SQL statements to perform a specific function
B. Stored in compiled form in the database
C. Can be called from all client environments
D. All of the above
Ans : D

56. Which of the following statements is false
A. Any procedure can raise an error and return a user message and error number
B. Error numbers ranging from 20000 to 20999 are reserved for user defined messages
C. Oracle checks Uniqueness of User defined errors
D. Raise_Application_error is used for raising a user defined error.
Ans : C

57. Is it possible to open a cursor which is in a Package in another procedure ?
A. Yes  B. No
Ans : A

58. Is it possible to use Transactional control statements in Database Triggers ?
A. Yes  B. No
Ans : B

59. Is it possible to Enable or Disable a Database trigger ?
A. Yes  B. No
Ans : A

60. PL/SQL supports datatype(s)
A. Scalar datatype  B. Composite datatype  C. All of the above  D. None of the above
Ans : C

61. Find the ODD datatype out
A. VARCHAR2  B. RECORD  C. BOOLEAN  D. RAW
Ans : B

62. Which of the following is not correct about the "TABLE" datatype ?
A. Can contain any no of columns  B. Simulates a One-dimensional array of unlimited size  C. Column datatype of any Scalar type  D. None of the above
Ans : A

63. Find the ODD one out of the following
A. OPEN  B. CLOSE  C. INSERT  D. FETCH
Ans : C

64. Which of the following is not correct about a Cursor ?
A. Cursor is a named Private SQL area  B. Cursor holds temporary results  C. Cursor is used for retrieving multiple rows  D. SQL uses implicit Cursors to retrieve rows
Ans : B

65. Which of the following is NOT VALID in PL/SQL ?
A. Select … into  B. Update  C. Create  D. Delete
Ans : C

67. What is the Result of the following: 'VIK'||NULL||'RAM' ?
A. Error  B. VIK RAM  C. VIKRAM  D. NULL
Ans : C

68. A CONTROL BLOCK can sometimes refer to a BASETABLE ?
A. TRUE  B. FALSE
Ans : B

69. Is it possible to modify a Datatype of a column when the column contains data ?
A. Yes  B. No
Ans : B

70. Which of the following is not correct about a View ?
A. To protect some of the columns of a table from other users
B. Occupies data storage space
C. To hide complexity of a query
D. To hide complexity of calculations
Ans : B

71. Which is not part of the Data Definition Language ?
A. CREATE  B. ALTER  C. ALTER SESSION
Ans : C

72. List of Values (LOV) supports
A. Single column  B. Multi column  C. Single or Multi column  D. None of the above
Ans : C

73. If a UNIQUE KEY constraint on a DATE column is created, will it accept the rows that are inserted with SYSDATE ?
A. Will  B. Won't
Ans : B

74. What are the different events in Triggers ?
A. Define, Create  B. Drop, Comment  C. Insert, Update, Delete  D. All of the above
Ans : C

75. What built-in subprogram is used to manipulate images in image items ?
A. Zoom_out  B. Zoom_in  C. Image_zoom  D. Zoom_image
Ans : C

76. Can we pass a RECORD GROUP between FORMS ?
A. Yes  B. No
Ans : A

77. SHOW_ALERT function returns
A. Boolean  B. Number  C. Character  D. None of the above
Ans : B

78. What SYSTEM VARIABLE is used to refer to DATABASE TIME ?
A. $$dbtime$$  B. $$time$$  C. $$datetime$$  D. None of the above
Ans : A

79. SYSTEM.EFFECTIVE.DATE variable is
A. Read only  B. Read & Write  C. Write only  D. None of the above
Ans : C

80. How can you CALL Reports from Forms 4.0 ?
A. Run_Report built_in  B. Call_Report built_in  C. Run_Product built_in  D. Call_Product built_in
Ans : C

ORACLE DBA Questions and Answers pdf Download
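Question 67 trips up many candidates: in Oracle SQL, a NULL inside the || concatenation operator behaves like an empty string rather than making the whole result NULL, while NULL in arithmetic does propagate. A quick sketch you can run from any SQL*Plus session (DUAL is the standard dummy table):

```sql
-- NULL disappears inside || concatenation: the result is 'VIKRAM', not NULL
SELECT 'VIK' || NULL || 'RAM' AS result FROM dual;

-- Contrast with arithmetic, where NULL propagates: this returns NULL
SELECT 1 + NULL AS n FROM dual;
```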
0 notes
Text
DBA interview Question and Answer Part 22
I have configured RMAN with a recovery window of 3 days, but on my backup destination only one day of archive logs is visible while 3 days of database backups are available there. Why?

I went through the issue by checking the backup details using the LIST command. I found that 3 days of database as well as archivelog backups are listed, and the backups are recoverable. Thus it is clear that for some reason the archivelog backups are no longer stored in the backup location.

Connect rman target database with catalog
List Backup Summary;
List Archivelog All;
List Backup Recoverable;

When I checked db_recovery_file_dest_size, it was 5 GB and our flash recovery area was almost full; because of that Oracle automatically deleted archive logs from the backup location. When I increased db_recovery_file_dest_size it worked fine.

If one or all of the control files get corrupted and you are unable to start the database, how can you perform recovery?

If one of your control files is missing or corrupted then you have two options to recover it: either delete the corrupted CONTROLFILE manually from its location, copy one of the surviving control files, and rename it as per the deleted one (you can check the alert.log for the exact name and location of the control files); or delete the corrupted CONTROLFILE and remove its location from the pfile/spfile, then start your database.

In another scenario, if all of your control files are corrupted then you need to restore them using RMAN. As none of the control files is mounted, RMAN does not know about the backups or any pre-configured RMAN settings, so in order to use the backup we need to pass the DBID (SET DBID=691421794) to RMAN.

RMAN> Restore Controlfile from 'H:\oracle\Backup\C-1239150297-20130418';

You are working as a DBA and usually taking HOTBACKUP every night.
But one day around 3.00 PM one table is dropped and that table is very useful; then how will you recover that table?

If your database is running on Oracle 10g and you have already enabled the recyclebin configuration, then you can easily recover the dropped table from USER_RECYCLEBIN or DBA_RECYCLEBIN by using the flashback feature of Oracle 10g.

SQL> select object_name, original_name from user_recyclebin;
BIN$T0xRBK9YSomiRRmhwn/xPA==$0   PAY_PAYMENT_MASTER

SQL> flashback table table2 to before drop;
Flashback complete.

In the case when no recyclebin is enabled with your database, you need to restore your backup on a TEST database and perform time-based recovery, applying all archives generated before the DROP command was executed. For instance, apply archives up to 2:55 PM here. It is not recommended to perform such recovery on the production database directly because it is a huge database and will take time.

Note: If you are using the SYS user to drop any table, the object will not go to the recyclebin for the SYSTEM tablespace, even if you have already set the recyclebin parameter to 'true'. And if your database is running on Oracle 9i, you require incomplete recovery for the same.

Why is more archivelog sometimes generated?

There are many reasons, such as: more database changes were performed, either by import/export work, batch jobs, any special task, or taking a hot backup (for more details on why hot backup generates more archive, check my separate post). You can check it by enabling the LogMiner utility.

How can I know whether my required table is available in an export dump file or not?

You can create an index file for the export dump file using the 'import with indexfile' command. A text file will be generated with all table and index object names with the number of rows.
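The 'import with indexfile' technique mentioned above can be sketched as follows; the file names and the user are hypothetical, and the original exp/imp utilities (not Data Pump) are assumed:

```
-- Generate a text file listing the CREATE TABLE/INDEX statements contained
-- in an export dump, without actually importing any data:
imp system/password file=full_db.dmp indexfile=index_list.sql full=y

-- Then search the generated file for the table you are looking for, e.g.:
-- grep -i PAY_PAYMENT_MASTER index_list.sql
```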
You can confirm your required table object from this text file.

What is Cache Fusion Technology?

Cache fusion provides a service that allows Oracle to keep track of which nodes are writing to which block and ensures that two nodes do not update duplicate copies of the same block. Cache fusion technology can provide more resources and increase concurrency of users internally. Here multiple caches are able to join and act as one global cache, thus solving issues like data consistency internally without any impact on the application code or design.

Why should we open the database using RESETLOGS after finishing incomplete recovery?

When we perform incomplete recovery, it is clear we are bringing our database back to a past point in time; this recovery puts the database into a prior state. The forward sequence of numbers already exists after performing recovery; due to the mismatch between these sequence numbers and the prior state of the database, the database needs to be opened with a new sequence of redo log and archive log numbers.

Why is an export backup called a logical backup?

An export dump file doesn't back up or contain any physical structure of the database such as datafiles, redo log files, pfile, password file etc. Instead of the physical structure, the export dump contains the logical structure of the database, like definitions of tablespaces, segments, schemas etc. For these reasons an export dump is called a logical backup.

What are the differences between 9i and 10g OEM?

In Oracle 9i, OEM has limited capability or resources compared to Oracle 10g Grid Control.
There are too many enhancements in 10g OEM over 9i; several tools such as AWR and ADDM have been incorporated, and there is also a SQL Tuning Advisor available.

Can we use the same target database as catalog DB?

The recovery catalog should not reside in the target database, because the recovery catalog must be protected in the event of loss of the target database.

What is the difference between the CROSSCHECK and VALIDATE commands?

The VALIDATE command examines a backup set and reports whether it can be restored successfully, whereas the CROSSCHECK command verifies the status of backups and copies recorded in the RMAN repository against the media, such as disk or tape.

How do you identify or fix block corruption in an RMAN database?

You can use the v$database_block_corruption view to identify which block is corrupted, then use the 'blockrecover' command to recover it.

SQL> select file#, block# from v$database_block_corruption;
FILE#   BLOCK#
10      1435

RMAN> blockrecover datafile 10 block 1435;

What is an auxiliary channel in RMAN? When is it required?

An auxiliary channel is a link to the auxiliary instance. If you do not have an automatic channel configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.

Explain the use of setting GLOBAL_NAMES equal to TRUE.

Setting GLOBAL_NAMES indicates how you might connect to the database. This variable is either 'TRUE' or 'FALSE'; if it is set to 'TRUE', it enforces that database links have the same name as the remote database to which they are linking.

How can you say your data in the database is valid or secure?

If the data of the database is validated, we can say that our database is secured. There are different ways to validate the data:
1. Accept only valid data
2. Reject bad data
3. Sanitize bad data
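On Oracle 11g and later, RMAN can also populate v$database_block_corruption itself and then repair everything it found in one pass. A minimal sketch, assuming an 11g or later release:

```
RMAN> validate database;            -- checks all datafiles and records any bad
                                    -- blocks in v$database_block_corruption
RMAN> recover corruption list;      -- block-recovers every block listed there
```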
Write a query to display all the odd-numbered rows from a table.

Select * from (select employee_number, rownum rn from pay_employee_personal_info)
where MOD (rn, 2) <> 0;

or you can perform the same thing through the below block:

set serveroutput on;
begin
  for v_c1 in (select num from tab_no) loop
    if mod(v_c1.num, 2) = 1 then
      dbms_output.put_line(v_c1.num);
    end if;
  end loop;
end;

What is the difference between Trim and Truncate?

TRUNCATE is a DDL command which deletes the contents of a table completely without affecting the table structure, whereas TRIM is a function which changes the column output in a select statement by removing the blank space from the left and right of the string.

When to use the option clause "PASSWORD FILE" in the RMAN DUPLICATE command?

If you create a duplicate DB, not a standby DB, then RMAN does not copy the password file by default. You can specify the PASSWORD FILE option to indicate that RMAN should overwrite the existing password file on the auxiliary instance. If you create a standby DB, then RMAN copies the password file by default to the standby host, overwriting the existing password file.

What is Oracle GoldenGate?

Oracle GoldenGate is Oracle's strategic solution for real-time data integration. Oracle GoldenGate captures, filters, routes, verifies, transforms, and delivers transactional data in real time, across Oracle and heterogeneous environments, with very low impact and preserved transaction integrity. The transaction data management provides read consistency, maintaining referential integrity between source and target systems.

What is the meaning of LGWR SYNC and LGWR ASYNC in the log archive destination parameter for standby configuration?

When you use LGWR with SYNC, it means that once network I/O is initiated, LGWR has to wait for completion of the network I/O before write processing.
LGWR with ASYNC means LGWR doesn't wait for the network I/O to finish and continues write processing.

What is the TRUNCATE command enhancement in Oracle 12c?

In the previous release, there was no direct option available to truncate a master table while a child table exists and has records. Now TRUNCATE TABLE with the CASCADE option in 12c truncates the records in the master as well as all referenced child tables with an enabled ON DELETE constraint.
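The 12c behaviour described above can be sketched like this; the table and column names are made up for illustration:

```sql
CREATE TABLE dept_m (deptno NUMBER PRIMARY KEY);
CREATE TABLE emp_c  (empno  NUMBER,
                     deptno NUMBER REFERENCES dept_m (deptno) ON DELETE CASCADE);

-- Before 12c a plain TRUNCATE of the parent raises ORA-02266 while an enabled
-- foreign key references it; with CASCADE in 12c both dept_m and the
-- referencing rows in emp_c are truncated:
TRUNCATE TABLE dept_m CASCADE;
```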
0 notes
Text
DBA Daily/Weekly/Monthly or Quarterly Checklist
In response to requests from some fresher DBAs, I am giving a quick checklist for a production DBA. Here I am including references to some of the scripts which I have already posted; as you know, each DBA has his own scripts depending on the database environment too. Please have a look into the daily, weekly and quarterly checklists.

Note: I am not responsible if any of the scripts harm your database, so before using them directly on a Prod DB, please check them on a Test environment first, make sure, and then go for it.

Please send your corrections, suggestions, and feedback to me. I may credit your contribution. Thank you.
------------------------------------------------------------------------------------------------------------------------
Daily Checks:
- Verify all databases, instances and listeners are up, every 30 min.
- Verify the status of daily scheduled jobs/daily backups in the morning very first hour.
- Verify the success of archive log backups, based on the backup interval.
- Check the space usage of the archive log file system for both primary and standby DB.
- Check the space usage and verify all the tablespace usage is below critical level, once a day.
- Verify rollback segments.
- Check the database performance on a periodic basis, usually in the morning very first hour after the night shift scheduled backup has been completed.
- Check the sync between the primary database and standby database, every 20 min.
- Make a habit to check out the new alert.log entries hourly, specially if getting any error.
- Check the system performance on a periodic basis.
- Check for invalid objects.
- Check out the audit files for any suspicious activities.
- Identify bad growth projections.
- Clear the trace files in the udump and bdump directories as per the policy.
- Verify all the monitoring agents, including the OEM agent and third party monitoring agents.
- Make a habit to read the DBA Manual.

Weekly Checks:
- Perform level 0 or cold backup as per the backup policy. Note the backup policy can be changed as per the requirement.
- Don't forget to check out the space on disk or tape before performing a level 0 or cold backup.
- Perform export backups of important tables.
- Check the database statistics collection. On some databases this needs to be done every day depending upon the requirement.
- Approve or plan any scheduled changes for the week.
- Verify the scheduled jobs and clear the output directory. You can also automate it.
- Look for the objects that break rules.
- Look for security policy violations.
- Archive the alert logs (if possible) to reference similar kinds of errors in future.
- Visit the home pages of key vendors.

Monthly or Quarterly Checks:
- Verify the accuracy of backups by creating test databases.
- Check for the critical patch updates from Oracle; make sure that your systems are in compliance with CPU patches.
- Check out the harmful growth rate.
- Review fragmentation.
- Look for I/O contention.
- Perform tuning and database maintenance.
- Verify the accuracy of the DR mechanism by performing a database switchover test. This can be done once in six months based on the business requirements.
-------------------------------------------------------------------------------------------------------------------------------------------------------
Below is a brief description of some of the important concepts, including important SQL scripts. You can find more scripts in my different posts by using the blog search option.

Verify all instances are up: Make sure the database is available. Log into each instance and run daily reports or test scripts. You can also automate this procedure, but it is better to do it manually. Optional implementation: use Oracle Enterprise Manager's 'probe' event.

Verify DBSNMP is running: Log on to each managed machine to check for the 'dbsnmp' process. For Unix: at the command line, type ps -ef | grep dbsnmp. There should be two dbsnmp processes running.
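The 30-minute and 20-minute checks above are usually scheduled rather than run by hand. A hypothetical crontab sketch (the script names and paths are placeholders, not from the original post):

```
# Run a database/listener status-check script every 30 minutes
*/30 * * * * /home/oracle/scripts/db_status_check.sh >> /home/oracle/logs/db_status.log 2>&1

# Check primary/standby sync every 20 minutes
*/20 * * * * /home/oracle/scripts/standby_sync_check.sh >> /home/oracle/logs/standby_sync.log 2>&1
```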
If not, restart DBSNMP.

Verify success of Daily Scheduled Jobs: Each morning one of your prime tasks is to check the backup log and the backup drive where your actual backup is stored, to verify the night backup.

Verify success of database archiving to tape or disk: In the next subsequent work, check the location where daily archiving is stored. Verify the archive backup on disk or tape.

Verify enough resources for acceptable performance: For each instance, verify that enough free space exists in each tablespace to handle the day's expected growth. As of , the minimum free space for : . When incoming data is stable, and average daily growth can be calculated, then the minimum free space should be at least days' data growth. Go to each instance, run a query to check free MB in tablespaces/datafiles. Compare to the minimum free MB for that tablespace. Note any low-space conditions and correct them.

Verify rollback segments: Status should be ONLINE, not OFFLINE or FULL, except in some cases you may have a special rollback segment for large batch jobs whose normal status is OFFLINE. Optional: each database may have a list of rollback segment names and their expected statuses. For the current status of each ONLINE or FULL rollback segment (by ID, not by name), query V$ROLLSTAT. For storage parameters and names of ALL rollback segments, query DBA_ROLLBACK_SEGS. That view's STATUS field is less accurate than V$ROLLSTAT, however, as it lacks the PENDING OFFLINE and FULL statuses, showing these as OFFLINE and ONLINE respectively.

Look for any new alert log entries: Connect to each managed system. Use 'telnet' or a comparable program. For each managed instance, go to the background dump destination, usually $ORACLE_BASE//bdump. Make sure to look under each managed database's SID. At the prompt, use the Unix 'tail' command to see the alert_.log, or otherwise examine the most recent entries in the file. If any ORA- errors have appeared since the previous time you looked, note them in the Database Recovery Log and investigate each one. The recovery log is in .

Identify bad growth projections: Look for segments in the database that are running out of resources (e.g. extents) or growing at an excessive rate. The storage parameters of these segments may need to be adjusted. For example, if any object reached 200 as the number of current extents, upgrade the max_extents to unlimited. For that, run the query to gather daily sizing information, check current extents, current table sizing information, current index sizing information and find growth trends.

Identify space-bound objects: Space-bound objects' next_extents are bigger than the largest extent that the tablespace can offer. Space-bound objects can harm database operation. If we get such an object, first we need to investigate the situation. Then we can use ALTER TABLESPACE COALESCE, or add another datafile. Run spacebound.sql. If all is well, zero rows will be returned.

Processes to review contention for CPU, memory, network or disk resources: To check CPU utilization, go to =>system metrics=>CPU utilization page. 400 is the maximum CPU utilization because there are 4 CPUs on the phxdev and phxprd machines. We need to investigate if CPU utilization keeps above 350 for a while.

Make a habit to read the DBA Manual: Nothing is more valuable in the long run than that the DBA be as widely experienced, and as widely read, as possible. Readings should include DBA manuals, trade journals, and possibly newsgroups or mailing lists.

Look for objects that break rules: For each object-creation policy (naming convention, storage parameters, etc.) have an automated check to verify that the policy is being followed. Every object in a given tablespace should have the exact same size for NEXT_EXTENT, which should match the tablespace default for NEXT_EXTENT. As of 10/03/2012, default NEXT_EXTENT for DATAHI is 1 gig (1048576 bytes), DATALO is 500 mb (524288 bytes), and INDEXES is 256 mb (262144 bytes). To check settings for NEXT_EXTENT, run nextext.sql. To check existing extents, run existext.sql.

All tables should have unique primary keys: To check missing PKs, run no_pk.sql. To check disabled PKs, run disPK.sql. All primary key indexes should be unique; run nonuPK.sql to check. All indexes should use the INDEXES tablespace; run mkrebuild_idx.sql. Schemas should look identical between environments, especially test and production. To check data type consistency, run datatype.sql. To check other object consistency, run obj_coord.sql.

Look for security policy violations: Look in SQL*Net logs for errors and issues, client side logs, server side logs, and archive all alert logs to history.

Visit home pages of key vendors: For new update information, make a habit to visit the home pages of key vendors such as:
Oracle Corporation: http://www.oracle.com, http://technet.oracle.com, http://www.oracle.com/support, http://www.oramag.com
Quest Software: http://www.quests.com
Sun Microsystems: http://www.sun.com

Look for Harmful Growth Rates: Review changes in segment growth when compared to previous reports to identify segments with a harmful growth rate.

Review Tuning Opportunities and Perform Tuning Maintenance: Review common Oracle tuning points such as cache hit ratio, latch contention, and other points dealing with memory management. Compare with past reports to identify harmful trends or determine the impact of recent tuning adjustments. Make the adjustments necessary to avoid contention for system resources. This may include scheduled down time or a request for additional resources.

Look for I/O Contention: Review database file activity. Compare to past output to identify trends that could lead to possible contention.

Review Fragmentation: Investigate fragmentation (e.g. row chaining, etc.).

Project Performance into the Future: Compare reports on CPU, memory, network, and disk utilization from both Oracle and the operating system to identify trends that could lead to contention for any one of these resources in the near future. Compare performance trends to the Service Level Agreement to see when the system will go out of bounds.
--------------------------------------------------------------------------------------------
Useful Scripts:
--------------------------------------------------------------------------------------------
Script: To check free, pct_free, and allocated space within a tablespace

SELECT tablespace_name, largest_free_chunk, nr_free_chunks,
       sum_alloc_blocks, sum_free_blocks,
       to_char(100 * sum_free_blocks / sum_alloc_blocks, '09.99') || '%' AS pct_free
FROM   (SELECT tablespace_name, SUM(blocks) AS sum_alloc_blocks
        FROM dba_data_files
        GROUP BY tablespace_name),
       (SELECT tablespace_name AS fs_ts_name, MAX(blocks) AS largest_free_chunk,
               COUNT(blocks) AS nr_free_chunks, SUM(blocks) AS sum_free_blocks
        FROM dba_free_space
        GROUP BY tablespace_name)
WHERE  tablespace_name = fs_ts_name;

Script: To analyze tables and indexes

BEGIN
  dbms_utility.analyze_schema ( '&OWNER', 'ESTIMATE', NULL, 5 );
END;

Script: To find out any object reaching level extents

SELECT e.owner, e.segment_type, e.segment_name,
       COUNT(*) AS nr_extents, s.max_extents,
       to_char(SUM(e.bytes) / (1024 * 1024), '999,999.90') AS MB
FROM   dba_extents e, dba_segments s
WHERE  e.segment_name = s.segment_name
GROUP BY e.owner, e.segment_type, e.segment_name, s.max_extents
HAVING COUNT(*) > &THRESHOLD
    OR ((s.max_extents - COUNT(*)) < &&THRESHOLD)
ORDER BY COUNT(*) DESC;

The above query will find out any object reaching level extents, and then you have to manually upgrade it to allow unlimited max_extents (thus only objects we expect to be big are allowed to become big).

Script: To identify space-bound objects. If all is well, no rows are returned.
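The "look for any new alert log entries" check described above lends itself to scripting. Below is a minimal POSIX shell sketch; for demonstration it fabricates a small alert log under /tmp (in real use you would point ALERT_LOG at alert_<SID>.log in your bdump directory), and it remembers how many lines it has already seen so that each run reports only the new ORA- errors:

```shell
#!/bin/sh
# Demo setup: fabricate an alert log (replace with your real alert_<SID>.log)
ALERT_LOG=/tmp/demo_alert.log
STATE_FILE=/tmp/demo_alert.lines
rm -f "$STATE_FILE"
printf 'Starting ORACLE instance (normal)\nORA-00600: internal error code\nCompleted: ALTER DATABASE OPEN\n' > "$ALERT_LOG"

# Number of lines already seen on the previous run (0 on the first run)
LAST=$(cat "$STATE_FILE" 2>/dev/null || echo 0)
TOTAL=$(wc -l < "$ALERT_LOG" | tr -d ' ')

# Report only ORA- errors among the newly appended lines
tail -n "$((TOTAL - LAST))" "$ALERT_LOG" | grep 'ORA-' || true

# Remember how far we have read for the next run
echo "$TOTAL" > "$STATE_FILE"
```

Each subsequent run prints only the errors appended since the previous run, which fits the "note them and investigate each one" routine described above.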
SELECT a.table_name, a.next_extent, a.tablespace_name
FROM   all_tables a,
       (SELECT tablespace_name, MAX(bytes) AS big_chunk
        FROM dba_free_space
        GROUP BY tablespace_name) f
WHERE  f.tablespace_name = a.tablespace_name
AND    a.next_extent > f.big_chunk;

Run the above query to find the space-bound objects. If all is well, no rows are returned; if something is found, then look at the value of next extent and check to find out what happened. Then use coalesce (alter tablespace coalesce;) and finally, add another datafile to the tablespace if needed.

Script: To find tables that don't match the tablespace default for NEXT extent

SELECT segment_name, segment_type, ds.next_extent AS Actual_Next,
       dt.tablespace_name, dt.next_extent AS Default_Next
FROM   dba_tablespaces dt, dba_segments ds
WHERE  dt.tablespace_name = ds.tablespace_name
AND    dt.next_extent != ds.next_extent
AND    ds.owner = UPPER('&OWNER')
ORDER BY tablespace_name, segment_type, segment_name;

Script: To check existing extents

SELECT segment_name, segment_type, COUNT(*) AS nr_exts,
       SUM(DECODE(dx.bytes, dt.next_extent, 0, 1)) AS nr_illsized_exts,
       dt.tablespace_name, dt.next_extent AS dflt_ext_size
FROM   dba_tablespaces dt, dba_extents dx
WHERE  dt.tablespace_name = dx.tablespace_name
AND    dx.owner = '&OWNER'
GROUP BY segment_name, segment_type, dt.tablespace_name, dt.next_extent;

The above query will find how many of each object's extents differ in size from the tablespace's default size. If it shows a lot of different sized extents, your free space is likely to become fragmented. If so, you need to reorganize this tablespace.

Script: To find tables without PK constraint

SELECT table_name FROM all_tables WHERE owner = '&OWNER'
MINUS
SELECT table_name FROM all_constraints WHERE owner = '&&OWNER' AND constraint_type = 'P';

Script: To find out which primary keys are disabled

SELECT owner, constraint_name, table_name, status
FROM   all_constraints
WHERE  owner = '&OWNER' AND status = 'DISABLED' AND constraint_type = 'P';

Script: To find tables with nonunique PK indexes

SELECT index_name, table_name, uniqueness
FROM   all_indexes
WHERE  index_name LIKE '&PKNAME%'
AND    owner = '&OWNER' AND uniqueness = 'NONUNIQUE';

SELECT c.constraint_name, i.tablespace_name, i.uniqueness
FROM   all_constraints c, all_indexes i
WHERE  c.owner = UPPER('&OWNER') AND i.uniqueness = 'NONUNIQUE'
AND    c.constraint_type = 'P' AND i.index_name = c.constraint_name;

Script: To check datatype consistency between two environments

SELECT table_name, column_name, data_type, data_length, data_precision, data_scale, nullable
FROM   all_tab_columns -- first environment
WHERE  owner = '&OWNER'
MINUS
SELECT table_name, column_name, data_type, data_length, data_precision, data_scale, nullable
FROM   all_tab_columns@&my_db_link -- second environment
WHERE  owner = '&OWNER2'
ORDER BY table_name, column_name;

Script: To find out any difference in objects between two instances

SELECT object_name, object_type FROM user_objects
MINUS
SELECT object_name, object_type FROM user_objects@&my_db_link;

For more about scripts and daily DBA tasks or monitoring, use the search option to check my other posts. Follow the below link for important monitoring scripts: http://shahiddba.blogspot.com/2012/04/oracle-dba-daily-checklist.html
0 notes
Text
Script: To Monitor Tablespaces/datafiles
Important Note: If any of the scripts in this blog is not running, then please re-type it, or try to retype the quotation marks, commas and braces (the format may have changed). I am using Toad, so if you are using SQL*Plus then try to fix the column length before executing the script (if any).
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
To check Tablespace free space:

SELECT TABLESPACE_NAME, SUM(BYTES/1024/1024) "Size (MB)"
FROM   DBA_FREE_SPACE
GROUP BY TABLESPACE_NAME;

To check Tablespace by datafile:

SELECT tablespace_name, file_id, SUM(bytes/1024/1024) "Size (MB)"
FROM   DBA_FREE_SPACE
GROUP BY tablespace_name, file_id;

To check Tablespace used and free space %:

SELECT /*+ RULE */ df.tablespace_name "Tablespace",
       df.bytes / (1024 * 1024) "Size (MB)",
       SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
       Nvl(Round(SUM(fs.bytes) * 100 / df.bytes), 1) "% Free",
       Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM   dba_free_space fs,
       (SELECT tablespace_name, SUM(bytes) bytes
        FROM dba_data_files
        GROUP BY tablespace_name) df
WHERE  fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, df.bytes
UNION ALL
SELECT /*+ RULE */ df.tablespace_name tspace,
       fs.bytes / (1024 * 1024),
       SUM(df.bytes_free) / (1024 * 1024),
       Nvl(Round((SUM(fs.bytes) - df.bytes_used) * 100 / fs.bytes), 1),
       Round((SUM(fs.bytes) - df.bytes_free) * 100 / fs.bytes)
FROM   dba_temp_files fs,
       (SELECT tablespace_name, bytes_free, bytes_used
        FROM v$temp_space_header
        GROUP BY tablespace_name, bytes_free, bytes_used) df
WHERE  fs.tablespace_name (+) = df.tablespace_name
GROUP BY df.tablespace_name, fs.bytes, df.bytes_free, df.bytes_used
ORDER BY 4 DESC;

-- or --

Select t.tablespace, t.totalspace as "Totalspace(MB)",
       round((t.totalspace - fs.freespace), 2) as "Used Space(MB)",
       fs.freespace as "Freespace(MB)",
       round(((t.totalspace - fs.freespace) / t.totalspace) * 100, 2) as "% Used",
       round((fs.freespace / t.totalspace) * 100, 2) as "% Free"
from   (select round(sum(d.bytes) / (1024 * 1024)) as totalspace, d.tablespace_name tablespace
        from dba_data_files d
        group by d.tablespace_name) t,
       (select round(sum(f.bytes) / (1024 * 1024)) as freespace, f.tablespace_name tablespace
        from dba_free_space f
        group by f.tablespace_name) fs
where  t.tablespace = fs.tablespace
order by t.tablespace;

Tablespace (file wise) used and free space:

SELECT SUBSTR(df.NAME, 1, 40) file_name, dfs.tablespace_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM   v$datafile df, dba_free_space dfs
WHERE  df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes, dfs.tablespace_name
ORDER BY file_name;

To check Growth rate of Tablespace:

Note: The script will not show the growth rate of the SYS and SYSAUX tablespaces. The script is used in Oracle version 10g onwards.

SELECT TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY') days, ts.tsname,
       MAX(ROUND((tsu.tablespace_size * dt.block_size) / (1024 * 1024), 2)) cur_size_MB,
       MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) usedsize_MB
FROM   DBA_HIST_TBSPC_SPACE_USAGE tsu, DBA_HIST_TABLESPACE_STAT ts,
       DBA_HIST_SNAPSHOT sp, DBA_TABLESPACES dt
WHERE  tsu.tablespace_id = ts.ts#
AND    tsu.snap_id = sp.snap_id
AND    ts.tsname = dt.tablespace_name
AND    ts.tsname NOT IN ('SYSAUX', 'SYSTEM')
GROUP BY TO_CHAR(sp.begin_interval_time, 'DD-MM-YYYY'), ts.tsname
ORDER BY ts.tsname, days;

List all Tablespaces with free space < 10% or used space > 90%:

Select a.tablespace_name, sum(a.tots / 1048576) Tot_Size,
       sum(a.sumb / 1024) Tot_Free,
       sum(a.sumb) * 100 / sum(a.tots) Pct_Free,
       ceil((((sum(a.tots) * 15) - (sum(a.sumb) * 100)) / 85) / 1048576) Min_Add
from   (select tablespace_name, 0 tots, sum(bytes) sumb
        from dba_free_space a
        group by tablespace_name
        union
        select tablespace_name, sum(bytes) tots, 0
        from dba_data_files
        group by tablespace_name) a
group by a.tablespace_name
having sum(a.sumb) * 100 / sum(a.tots) < 10
order by pct_free;

Script to find all objects' occupied space for a Tablespace:

Select OWNER, SEGMENT_NAME, SUM(BYTES)/1024/1024 "SIZE IN MB"
from   dba_segments
where  TABLESPACE_NAME = 'SDH_HRMS_DBF'
group by OWNER, SEGMENT_NAME;

Which schemas are taking how much space:

Select obj.owner "Owner", obj_cnt "Objects",
       decode(seg_size, NULL, 0, seg_size) "size MB"
from   (select owner, count(*) obj_cnt from dba_objects group by owner) obj,
       (select owner, ceil(sum(bytes)/1024/1024) seg_size
        from dba_segments group by owner) seg
where  obj.owner = seg.owner(+)
order by 3 desc, 2 desc, 1;

To check Default Temporary Tablespace Name:

Select * from database_properties where PROPERTY_NAME like '%DEFAULT%';

To know default and Temporary Tablespace for a particular user:

Select username, temporary_tablespace, default_tablespace
from   dba_users
where  username = 'HRMS';

To know Default Tablespace for All Users:

Select default_tablespace, temporary_tablespace, username from dba_users;

To check Datafiles used and Free Space:

SELECT SUBSTR(df.NAME, 1, 40) file_name, dfs.tablespace_name,
       df.bytes / 1024 / 1024 allocated_mb,
       ((df.bytes / 1024 / 1024) - NVL(SUM(dfs.bytes) / 1024 / 1024, 0)) used_mb,
       NVL(SUM(dfs.bytes) / 1024 / 1024, 0) free_space_mb
FROM   v$datafile df, dba_free_space dfs
WHERE  df.file# = dfs.file_id(+)
GROUP BY dfs.file_id, df.NAME, df.file#, df.bytes, dfs.tablespace_name
ORDER BY file_name;

To check used and free space in Temporary Tablespace:

SELECT tablespace_name, SUM(bytes_used/1024/1024) USED, SUM(bytes_free/1024/1024) FREE
FROM   V$temp_space_header
GROUP BY tablespace_name;

SELECT A.tablespace_name tablespace, D.mb_total,
       SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_used,
       D.mb_total - SUM(A.used_blocks * D.block_size) / 1024 / 1024 mb_free
FROM   v$sort_segment A,
       (SELECT B.name, C.block_size, SUM(C.bytes) / 1024 / 1024 mb_total
        FROM v$tablespace B, v$tempfile C
        WHERE B.ts# = C.ts#
        GROUP BY B.name, C.block_size) D
WHERE  A.tablespace_name = D.name
GROUP BY A.tablespace_name, D.mb_total;

Sort (Temp) space used by Session:

SELECT S.sid || ',' || S.serial# sid_serial, S.username, S.osuser, P.spid,
       S.module, S.program,
       SUM(T.blocks) * TBS.block_size / 1024 / 1024 mb_used,
       T.tablespace, COUNT(*) sort_ops
FROM   v$sort_usage T, v$session S, dba_tablespaces TBS, v$process P
WHERE  T.session_addr = S.saddr
AND    S.paddr = P.addr
AND    T.tablespace = TBS.tablespace_name
GROUP BY S.sid, S.serial#, S.username, S.osuser, P.spid, S.module,
         S.program, TBS.block_size, T.tablespace
ORDER BY sid_serial;

Sort (Temp) Space Usage by Statement:

SELECT S.sid || ',' || S.serial# sid_serial, S.username,
       T.blocks * TBS.block_size / 1024 / 1024 mb_used,
       T.tablespace, T.sqladdr address, Q.hash_value, Q.sql_text
FROM   v$sort_usage T, v$session S, v$sqlarea Q, dba_tablespaces TBS
WHERE  T.session_addr = S.saddr
AND    T.sqladdr = Q.address (+)
AND    T.tablespace = TBS.tablespace_name
ORDER BY S.sid;

Who is using which UNDO or TEMP segment?

SELECT TO_CHAR(s.sid) || ',' || TO_CHAR(s.serial#) sid_serial,
       NVL(s.username, 'None') orauser, s.program, r.name undoseg,
       t.used_ublk * TO_NUMBER(x.value) / 1024 || 'K' "Undo"
FROM   sys.v_$rollname r, sys.v_$session s, sys.v_$transaction t, sys.v_$parameter x
WHERE  s.taddr = t.addr
AND    r.usn = t.xidusn(+)
AND    x.name = 'db_block_size';

Who is using the Temp Segment?
SELECT b.tablespace, ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
       a.sid||','||a.serial# SID_SERIAL, a.username, a.program
FROM sys.v_$session a, sys.v_$sort_usage b, sys.v_$parameter p
WHERE p.name = 'db_block_size'
  AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

Total size and free size of the database:
Select round(sum(used.bytes)/1024/1024/1024) || ' GB' "Database Size",
       round(free.p/1024/1024/1024) || ' GB' "Free space"
from (select bytes from v$datafile
      union all
      select bytes from v$tempfile
      union all
      select bytes from v$log) used,
     (select sum(bytes) as p from dba_free_space) free
group by free.p;

To find the used space of datafiles:
SELECT SUM(bytes)/1024/1024/1024 "GB" FROM dba_segments;

IO status of all of the datafiles in the database:
WITH total_io AS (SELECT SUM(phyrds + phywrts) sum_io FROM v$filestat)
SELECT NAME, phyrds, phywrts, ((phyrds + phywrts)/c.sum_io)*100 PERCENT,
       phyblkrd, (phyblkrd/GREATEST(phyrds, 1)) ratio
FROM SYS.v_$filestat a, SYS.v_$dbfile b, total_io c
WHERE a.file# = b.file#
ORDER BY a.file#;

Displays the smallest size the datafiles can shrink to without a reorganize:
SELECT a.tablespace_name, a.file_name, a.bytes AS current_bytes,
       a.bytes - b.resize_to AS shrink_by_bytes, b.resize_to AS resize_to_bytes
FROM dba_data_files a,
     (SELECT file_id, MAX((block_id + blocks - 1) * &v_block_size) AS resize_to
      FROM dba_extents
      GROUP BY file_id) b
WHERE a.file_id = b.file_id
ORDER BY a.tablespace_name, a.file_name;

Script to find datafile increment details:
Select SUBSTR(fn.name,1,DECODE(INSTR(fn.name,'/',2),0,INSTR(fn.name,':',1),INSTR(fn.name,'/',2))) mount_point,
       tn.name tabsp_name, fn.name file_name, ddf.bytes/1024/1024 cur_size,
       decode(fex.maxextend, NULL, ddf.bytes/1024/1024, fex.maxextend*tn.blocksize/1024/1024) max_size,
       nvl(fex.maxextend,0)*tn.blocksize/1024/1024 - decode(fex.maxextend, NULL, 0, ddf.bytes/1024/1024) unallocated,
       nvl(fex.inc,0)*tn.blocksize/1024/1024 inc_by
from sys.v_$dbfile fn, sys.ts$ tn, sys.filext$ fex, sys.file$ ft, dba_data_files ddf
where fn.file# = ft.file#
  and fn.file# = ddf.file_id
  and tn.ts# = ft.ts#
  and fn.file# = fex.file#(+)
order by 1;
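As a hedged follow-up to the shrink query above: once it reports a resize_to_bytes value for a file, the datafile can be resized, or given an autoextend policy so it need not be oversized up front. The file name and sizes below are placeholders, not taken from the source.

```sql
-- Hypothetical file name and sizes, for illustration only.
-- Shrink the datafile down toward the reported resize_to_bytes value:
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf' RESIZE 512M;

-- Optionally let the file grow automatically in controlled steps:
ALTER DATABASE DATAFILE '/u01/oradata/PROD/users01.dbf'
  AUTOEXTEND ON NEXT 100M MAXSIZE 4G;
```

A RESIZE below the highest allocated block will fail with ORA-03297, which is why the shrink query computes the safe lower bound first.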
Oracle DBA Interview Questions with Answers (All in One Doc)
1. General DB Maintenance
2. Backup and Recovery
3. Flashback Technology
4. Dataguard
5. Upgrade/Migration/Patches
6. Performance Tuning
7. ASM
8. RAC (Cluster/ASM/Oracle Binaries) Installation Link
9. Linux Operating System
10. PL/SQL

General DB Maintenance Question/Answer:

When we run a trace and tkprof on a query, for which three phases do we see timing information?
Parse -> Execute -> Fetch

Which parameter is used in a TNS connect identifier to specify the number of concurrent connection requests?
QUEUESIZE

What does the AFFIRM/NOAFFIRM parameter specify?
AFFIRM specifies that the redo transport service acknowledges after writing redo to the standby (SYNC), whereas NOAFFIRM specifies acknowledgement before writing to the standby (ASYNC).

After an upgrade task, which script is used to recompile invalid objects?
utlrp.sql, utlprp.sql

Too many cursors present in the library cache caused waits; which parameters need to be increased?
OPEN_CURSORS, SHARED_POOL_SIZE

When do we use Recover database using backup controlfile?
To synchronize the datafiles to the control file.

What is the use of the CONSISTENT=Y and DIRECT=Y parameters in export?
CONSISTENT=Y takes consistent values while exporting a table. Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (evaluating buffer), so it should be faster. The default value is N.

What do the parameters COMPRESS, SHOW and SQLFILE do during export/import?
If you use COMPRESS during import, it puts the entire data in a single extent. If you use SHOW=Y during import, it reads the entire dumpfile and confirms backup validity; even if you don't know the fromuser of the export, you can use SHOW=Y with import to check the fromuser. If you use the SQLFILE parameter (which captures all the DDL commands Import would have executed) with the import utility, you can find out whether the dumpfile is corrupted, because the utility reads the entire export dumpfile and reports the status.

Can we import an 11g dumpfile into 10g using datapump? If so, is it also possible between 10g and 9i?
Yes, we can import from 11g to 10g using the VERSION option. This is not possible between 10g and 9i, as datapump is not there in 9i.

What do the KEEP_MASTER and METRICS parameters of datapump do?
KEEP_MASTER and METRICS are undocumented parameters of EXPDP/IMPDP. METRICS provides the time it took to process the objects, and KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.

What happens when we fire a SQL statement in Oracle?
First Oracle checks the syntax and semantics in the library cache, and after that it creates the execution plan. If the data is already in the buffer cache it is returned directly to the client (soft parse); otherwise the server fetches the data from the datafiles and writes it to the database buffer cache (hard parse), and finally sends it to the client.

What are the differences between latches and locks?
1. A latch is obtained on a first-to-grab basis with no ordered queue, whereas lock requests are queued and granted in order. 2. Locks can create deadlocks, whereas latches never create deadlocks; they are handled internally by Oracle. Latches relate only to SGA internal buffers, whereas locks operate at the transaction level. 3. Latches have only two states, WAIT or NOWAIT, whereas locks have six different states; DML locks (table and row level, DBA_DML_LOCKS), DDL locks (schema and structure level, DBA_DDL_LOCKS) and DBA_BLOCKERS are further categorized into many more.

What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces (the dictionary tables are created in the SYSTEM tablespace), and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.

What is the difference between a regular and an index-organized table?
The traditional or regular table is based on a heap structure where data are stored in unordered format, whereas an IOT is based on a B-tree structure and data are stored in order with the help of the primary key. An IOT is useful where access is commonly by the primary key in the WHERE clause. If an IOT is queried without the primary key, query performance degrades.

What is table partitioning, and what are its uses and benefits?
Partitioning a big table into separately named storage sections improves query performance, as a query accesses only the relevant partition instead of the whole range of the big table. Partitioning is based on a partition key. The three partition types are: Range/Hash/List partitioning. Apart from tables, an index can also be partitioned using the above partition methods, either LOCAL or GLOBAL.

How to deal with online redo log file corruption?
1. Recovery when only one redo log file is corrupted:
If your database is open and you lost or corrupted a logfile, first try to shut down your database normally, not shutdown abort. If you lose or corrupt only one redo log file, you need only open the database with the resetlogs option. Opening with the resetlogs option re-creates your online redo log files.
RECOVER DATABASE UNTIL CANCEL; then ALTER DATABASE OPEN RESETLOGS;
2. Recovery when all the online redo log files are corrupted:
When you lose all members of a redo log group, the maintenance steps depend on the group STATUS and on whether the database is in archivelog or noarchivelog mode.
If the affected redo log group has a status of INACTIVE, it is no longer required for crash recovery; issue either a clear logfile or re-create the group manually:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- archivelog mode and group not yet archived
ALTER DATABASE CLEAR LOGFILE GROUP 3; -- noarchivelog mode or group already archived
If the affected redo log group has a status of ACTIVE, it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT; if successful, follow the INACTIVE steps; if it fails, you need to perform incomplete recovery up to the previous log file and open the database with the resetlogs option.
If the affected redo log group is CURRENT, LGWR stops writing and you have to perform incomplete recovery up to the last logfile and open the database with the resetlogs option; if your database is in noarchivelog mode, perform complete recovery with the last cold backup.
Note: when an online redo log is UNUSED/STALE it means it has never been written to; it is a newly created logfile.

What is the function of the shared pool in the SGA?
The shared pool is the most important area of the SGA. It controls almost all sub-areas of the SGA. A shortage of shared pool may result in high library cache reloads and shared pool latch contention errors. The two major components of the shared pool are the library cache and the dictionary cache. The library cache contains current SQL execution plan information.
It also holds PL/SQL procedures and triggers. The dictionary cache holds environmental information, which includes referential integrity, table definitions, indexing information and other metadata.

Backup & Recovery Question/Answer:

Can the target database be the catalog database?
No, the recovery catalog cannot be in the same database as the target, because whenever the target database goes through a restore and recovery process it must be in mount stage, and during that period we cannot access catalog information as the database is not open.

What is the use of the large pool? In which cases do you need to set it?
You need to set the large pool if you are using MTS (multi-threaded server) or RMAN backups. The large pool prevents RMAN and MTS from competing with other subsystems for the same memory (a specific allotment for this job). RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, Oracle allocates backup buffers from local process memory rather than shared memory, and then there is no use for the large pool.

How to catalog a user-managed backup in RMAN, or how to make use of an obsolete backup?
By using the catalog command: RMAN> CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010';
It searches for all files matching the pattern at that destination and asks for confirmation to catalog them. Alternatively, you can directly change a backupset's keep-until time with an RMAN command to make an obsolete backup usable:
RMAN> change backupset 3916 keep until time "to_date('01-MAY-2010','DD-MON-YYYY')" nologs;
This is important in situations where a backup has become obsolete due to the RMAN retention policy, or we have already restored prior to that backup.

What is the difference between using a recovery catalog and the control file?
When a new incarnation happens, the old backup information in the control file is lost, whereas it is preserved in the recovery catalog. In the recovery catalog, we can store scripts.
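As a small illustration of storing a script in the catalog (the script name is a placeholder; this is a sketch assuming an RMAN session connected to both the target and the catalog database):

```sql
RMAN> CREATE SCRIPT nightly_backup {
        BACKUP DATABASE PLUS ARCHIVELOG;
      }
RMAN> RUN { EXECUTE SCRIPT nightly_backup; }
```

Stored scripts live in the catalog schema, so they are available to any RMAN client that connects to the same catalog.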
The recovery catalog is central and can hold information about many databases. This is the reason we must take a fresh backup after a new incarnation of the control file.

What is the benefit of Block Media Recovery, and how to do it?
Without block media recovery, if a single block is corrupted you must take the datafile offline and then restore the whole backup and archived logs, so the entire datafile is unavailable until the process is over. In block media recovery the datafile remains online and only the particular block that needs recovery is unavailable. You can find the details of corrupted blocks in the V$DATABASE_BLOCK_CORRUPTION view as well as in the alert/trace files.
Connect to the target database with RMAN in mount phase:
RMAN> Recover datafile 8 block 13;
RMAN> Recover CORRUPTION_LIST; -- to recover all the corrupted blocks at a time
With the Oracle 11g Active Data Guard feature (physical standby), where real-time query is possible, corruption repair can be performed automatically. The primary database searches for good copies of the block on the standby and, if found, repairs the block with no impact to the query that encountered the corrupt block. By default RMAN first searches for the good block in the real-time physical standby database, then the flashback logs, then full and incremental RMAN backups.

What are the advantages of Datapump over traditional export?
1. Data Pump supports parallelism. It can write multiple dump files instead of a single sequential dump.
2. Data can be exported from a remote database by using a database link.
3. Consistent export with FLASHBACK_SCN and FLASHBACK_TIME is supported in datapump.
4. It has the ability to attach/detach from a job and to monitor the job remotely.
5. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
6. An explicit DB version can be specified, so only supported objects are exported.
7. Data can be imported from one DB to another without writing a dump file, using NETWORK_LINK.
8. During impdp we can change the target file name, schema, and tablespace using: REMAP_

Why is datapump faster than traditional export? What can be done to increase datapump performance?
Data Pump is block mode; exp is byte mode.
Data Pump does parallel execution.
Data Pump uses the direct path API and network link features.
Data Pump exports/imports/accesses files on the server rather than the client, by granting access to a directory object.
Data Pump has self-tuning utilities; the tuning parameters BUFFER and RECORDLENGTH are not needed now.
The following initialization parameters must be set to increase datapump performance:
· DISK_ASYNCH_IO=TRUE
· DB_BLOCK_CHECKING=FALSE
· DB_BLOCK_CHECKSUM=FALSE
The following initialization parameters must be set high to increase datapump parallelism:
· PROCESSES
· SESSIONS
· PARALLEL_MAX_SERVERS
· SHARED_POOL_SIZE and UNDO_TABLESPACE
Note: you must set a reasonable amount of STREAMS_POOL_SIZE as per the database size if the SGA_MAX_SIZE parameter is not set. If SGA_MAX_SIZE is set, it automatically picks up a reasonable size.

Flashback Question/Answer

Flashback Archive features in Oracle 11g:
Flashback archiving provides extended undo-based recovery over a year or a lifetime, as per the retention period and destination size.

What limitations or restrictions apply to the flashback Drop feature?
1. The recyclebin feature is only for non-system, locally managed tablespaces.
2. When you drop any table, all the associated objects related to that table go to the recyclebin, and generally the same reverses with flashback, but sometimes due to space pressure the associated indexes are purged from the recyclebin first. Flashback cannot reverse referential constraints and materialized view logs.
3. Tables with fine-grained auditing active cannot be protected by the recyclebin, and partitioned index-organized tables are not protected by the recyclebin.

What limitations or restrictions apply to the flashback Database feature?
1. Flashback cannot be used to repair corrupt or shrunk datafiles.
If you try to flashback the database over the period when a datafile was dropped, it records only the datafile entry in the control file.
2. If the control file is restored or re-created, you cannot use flashback over the point in time when it was restored or re-created.
3. You cannot flashback a NOLOGGING operation. If you try to flashback over the point in time when a NOLOGGING operation happened, the result is block corruption after the flashback of the database. Thus it is strongly recommended to perform a backup after a NOLOGGING operation.

What are the advantages of flashback database over flashback table?
1. Flashback Database works through all DDL operations, whereas Flashback Table does not work with structural changes such as adding/dropping a column, adding/dropping constraints, or truncating the table. During a Flashback Table operation a DML exclusive lock is associated with that particular table; this lock prevents any operation on the table during this period, and only rows are replaced with old rows.
2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table.
3. Flashback Table cannot be used on a standby database.

How should I set up the database to improve Flashback performance?
Use a fast file system (ASM) for your flash recovery area, and configure enough disk space for the file system that will hold the flash recovery area so you can set the maximum retention target. If the storage system used to hold the flash recovery area does not have non-volatile RAM, try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This allows each write to the flashback logs to be spread across multiple spindles, improving performance. For large production databases, set LOG_BUFFER to be at least 8MB.
This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.

Performance Tuning Question/Answer:

If you are getting complaints that the database is slow, what should your first steps be to check the DB performance issues?
In case of performance-related issues, as a DBA our first step is to check all the sessions connected to the database, to know exactly what each session is doing, because sometimes unexpected hits lead to object locking, which slows down DB performance. Database performance is directly related to network load, data volume and running SQL profiling.
1. Check the events that have been waiting for a long time. If you find object locking, killing that session (DML locking only) will solve your issue. To check the user sessions and waiting events, use a join query on the views V$SESSION and V$SESSION_WAIT.
2. After locking, the other major thing that affects database performance is disk I/O contention (when a session retrieves information from datafiles on disk to the buffer cache, it has to wait until the disk sends the data). This waiting time needs to be minimized. We can check these waiting events for a session in terms of db file sequential read (single block read, P3=1, usually the result of an index scan) and db file scattered read (multi-block read, P3>=2, usually the result of a full table scan) using a join query on the view V$SYSTEM_EVENT:
SQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"
  2  FROM sys.v_$system_event a, sys.v_$system_event b
  3  WHERE a.event = 'db file sequential read' AND b.event = 'db file scattered read';
  SEQ READ  SCAT READ
---------- ----------
       .74        1.6
When you find an event waiting for I/O to complete, you must reduce the waiting time to improve DB performance. To reduce this waiting time, you need to perform SQL tuning to reduce the number of blocks retrieved by the particular SQL statement.

How to perform SQL Tuning?
1. First of all you need to identify the high-load SQL statements. You can identify them from the AWR report Top 5 SQL (queries taking more CPU and having a low execution ratio). Once you decide to tune a particular SQL statement, the first thing to do is run the tuning optimizer, which will decide: the access method of the query, the join method of the query, and the join order.
2. Examine whether the particular query is doing a full table scan. If no index is applied, use the proper indexing technique for the table; if an index is already applied and the query is still doing a full table scan, the table may have the wrong indexing technique, so try rebuilding the index. This may solve your issue; otherwise continue with the next steps.
3. Enable tracing before running your queries, then check the trace file using the tkprof-created output file. According to the explain plan, check the elapsed time for each query, and then tune them respectively. To see the output of the plan table, first create the plan table (@$ORACLE_HOME/rdbms/admin/utlxplan.sql) and a public synonym for it:
SQL> create public synonym plan_table for sys.plan_table;
4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql) by providing the SQL_ID found in the V$SESSION view. You can grant rights to a particular schema for the use of SQL Tuning Advisor: Grant Advisor to HR; Grant Administer SQL Tuning Set to HR;
SQL Tuning Advisor checks your SQL structure and statistics. It suggests indexes that might be very useful, suggests query rewrites, and suggests SQL profiles (reported automatically each time).
5. In Oracle 11g, SQL Access Advisor is used to suggest new indexes and materialized views.
6. More: run the TOP command in Linux to check CPU usage information, and run the VMSTAT, SAR and PRSTAT commands to get more information on CPU and memory usage and possible blocking.
7. Optimizer statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.
8. A SQL Profile contains object-level (auxiliary) statistics that help the optimizer select the optimal execution plan for a particular SQL statement. It corrects the statistics level and gives the Tuning Advisor the option of generating the most relevant SQL plan.
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE -- to accept the correct plan from SQL*Plus
DBMS_SQLTUNE.ALTER_SQL_PROFILE -- to modify/replace an existing plan from SQL*Plus
DBMS_SQLTUNE.DROP_SQL_PROFILE -- to drop an existing plan
Profile types: REGULAR-PROFILE, PX-PROFILE (with changes for parallel execution)
SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM DBA_SQL_PROFILES;
9. SQL Plan Baselines are a new feature in Oracle Database 11g (previously stored outlines and SQL profiles were used) that helps prevent repeatedly used SQL statements from regressing because a newly generated execution plan is less effective than what was originally in the library cache. Whenever the optimizer generates a new plan it goes to the plan history table; only after the plan has been evolved (verified) and found better than the previous plan does it go into the baseline. You can manually check the plan history and accept a better plan: the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can be used to change the status of plans in the SQL history to Accepted, which in turn moves them into the SQL baseline, and the EVOLVE_SQL_PLAN_BASELINE function of the DBMS_SPM package can be used to see which plans have been evolved. There is also a facility to fix a specific plan so that it will not change automatically even if a better execution plan is available. The plan baseline view: DBA_SQL_PLAN_BASELINES. (Why use SQL Plan Baseline, and how to generate a new plan using a baseline.)
10. SQL Performance Analyzer allows you to test and to analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set.

Which factors are to be considered for creating an index on a table? How to select a column for an index?
1. Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and you need only a few rows (< 15% of rows) retrieved in a report, then you need to create an index on that table.
2. Primary keys and unique keys automatically have indexes; you might concentrate on creating indexes on foreign keys, where indexing can improve the performance of joins on multiple tables.
3. A column is best suited for indexing when its values are relatively unique (through which you can access complete table records): a wide range of values is good for a regular index, whereas a small range of values is good for a bitmap index; also when the column contains many nulls but queries select only rows having a value. CREATE INDEX emp_ename ON emp_tab(ename);
A column is not suitable for indexing when it has many nulls and you do not search for non-null values, or when it is a LONG or LONG RAW column.
CAUTION: The size of a single index entry cannot exceed one-half of the available space in a data block.
More indexes on a table create more overhead, as with each DML operation on the table all indexes must be updated. It is important to note that the creation of too many indexes affects the performance of DML on the table, because a single transaction needs to operate on various index segments and the table simultaneously.

What are the different types of index? Is creating an index online possible?
Function-based index, bitmap index, B-tree index, implicit or explicit index, domain index.
You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table.
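The online option mentioned above can be sketched as follows (the table and index names are hypothetical):

```sql
-- Hypothetical names; ONLINE permits concurrent DML against EMP_TAB
CREATE INDEX emp_sal_idx ON emp_tab (sal) ONLINE;

-- An existing index can likewise be rebuilt without blocking DML
ALTER INDEX emp_sal_idx REBUILD ONLINE;
```

Without the ONLINE keyword, the build takes a lock that blocks DML on the base table for the duration of the operation.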
You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
An index can be considered for rebuilding under any of these circumstances. We must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE and ANALYZE INDEX COMPUTE STATISTICS commands:
* The % of deleted rows exceeds 30% of the total rows (depending on table length).
* The HEIGHT is greater than 4, as with a height of 3 we can index millions of rows.
* The number of rows in the index (LF_ROWS) is significantly smaller than LF_BLKS; this can indicate a large number of deletes, indicating that the index should be rebuilt.

Differentiate the use of a bitmap index and a B-tree index?
Bitmap indexes are preferred in a data warehousing environment when cardinality is low, i.e. we usually have repeated or duplicate column values; a bitmap index can index null values. B-tree indexes are preferred in an OLTP environment when cardinality is high, i.e. we have many distinct column values; a B-tree index cannot index null values.

If you are getting high "buffer busy waits", how can you find the reason behind them?
Buffer busy waits mean that queries are waiting for blocks to be read into the db cache. It can also be that the block is busy in the cache and the session is waiting for it. It could be an undo/data block or a segment header wait. Run the first query below to find the P1, P2 and P3 of a session causing buffer busy waits, then run the second query with those P1, P2 and P3 values:
SQL> Select p1 "File #", p2 "Block #", p3 "Reason Code" from v$session_wait Where event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type from dba_extents Where file_id = &P1 and &P2 between block_id and block_id + blocks - 1;

What are STATSPACK and AWR reports? Is there any difference? As a DBA, what should you look into in a STATSPACK or AWR report?
STATSPACK and AWR are tools for performance tuning. AWR is a new feature from Oracle 10g onwards, whereas STATSPACK reports were commonly used in earlier versions, but you can still use STATSPACK in Oracle 10g too. The basic difference is that STATSPACK snapshot purging must be scheduled manually, while AWR snapshots are purged automatically by the MMON background process every night. AWR contains the view DBA_HIST_ACTIVE_SESS_HISTORY to store ASH statistics, whereas STATSPACK does not store ASH statistics.
You can run $ORACLE_HOME/rdbms/admin/spauto.sql to gather the STATSPACK report (note that JOB_QUEUE_PROCESSES must be set > 0), awrrpt to gather an AWR report for a standalone environment, and awrgrpt for a RAC environment.
In general, as a DBA you must check the following in a STATSPACK/AWR report:
¦ Top 5 wait events (db file sequential read, CPU time, db file scattered read, log file sync, log buffer space)
¦ Load profile (DB CPU (per sec) < core configuration, and the ratio of hard parses must be < parses)
¦ Instance efficiency hit ratios (%Non-Parse CPU near 100%)
¦ Top 5 timed foreground events (if the wait class is 'Concurrency' there is a problem; if User I/O or System I/O, then OK)
¦ Top 5 SQL (check queries having low executions and high elapsed time, or taking high CPU with low executions)
¦ Instance activity
¦ File I/O and segment statistics
¦ Memory allocation
¦ Buffer waits
¦ Latch waits
1. After getting the AWR report, initially cross-check CPU time, DB time and elapsed time. CPU time means the total time taken by the CPU including wait time; DB time includes both CPU time and user call time, whereas elapsed time is the time taken to execute the statement.
2. Look at the load profile report: here DB CPU (per sec) must be < cores in the host configuration. If it is not, there is a CPU bound (check whether it happens for a fraction of the time or all the time), and then look at Parse and Hard Parse in this report.
If the ratio of hard parse is more than parse then look for cursor sharing and application level for bind variable etc.3. Look instance efficiency Report: In this statistics you have to look ‘%Non-Parse CPU’, if this value nearer to 100% means most of the CPU resource are used into operation other than parsing which is good for database health.4. Look TOP five Time foreground Event: Here we should look ‘wait class’ if the wait class is User I/O, system I/O then OK if it is ‘Concurrency’ then there is serious problem then look Time(s) and Avg Wait time(s) if the Time (s) is more and Avg Wait Time(s) is less then you can ignore if both are high then there is need to further investigate (may be log file switch or check point incomplete).5. Look Time Model Statistics Report: This is detailed report of system resource consumption order by Time(s) and % of DB Time.6. Operating system statistics Report7. SQL ordered by elapsed time: In this report look for the query having low execution and high elapsed time so you have to investigate this and also look for the query using highest CPU time but the lower the execution.What is the difference between DB file sequential read and DB File Scattered Read? DB file sequential read is associated with index read where as DB File Scattered Read has to do with full table scan. The DB file sequential read, reads block into contiguous (single block) memory and DB File scattered read gets from multiple block and scattered them into buffer cache. Dataguard Question/AnswerWhat are Benefits of Data Guard?Using Data guard feature in your environment following benefit:High availability, Data protection, Offloading backup operation to standby, Automatic gap detection and resolution in standby database, Automatic role transitions using data guard broker.Oracle Dataguard classified into two types:1. Physical standby (Redo apply technology)2. 
2. Logical standby (SQL Apply technology)

A physical standby is created as an exact copy (matching schema) of the primary database and is always kept in recoverable mode (mount stage, not open mode). In a physical standby database, transactions that happen on the primary are synchronized using the Redo Apply method, by continually applying redo data received from the primary on the standby. A physical standby database can be opened for read-only work only while redo apply is not running. But from 11g onwards, using the Active Data Guard option (an extra-cost option), you can simultaneously open the physical standby database for read-only access and apply redo received from the primary in the meantime.

A logical standby does not match the primary at the schema level and uses the SQL Apply method to synchronize the logical standby database with the primary. The main advantage of a logical standby over a physical standby is that you can use the logical standby database for reporting while SQL Apply is running.

What are the different services available in Oracle Data Guard?
1. Redo Transport Services: transmit the redo from primary to standby (SYNC/ASYNC methods). They are responsible for managing gaps in the redo logs due to network failure; they also detect corrupted archive logs on the standby system and automatically replace them from the primary.
2. Log Apply Services: apply the archived redo logs to the standby. The MRP process does this task.
3. Role Transition Services: control the changing of a database's role from primary to standby; this includes switchover, switchback and failover.
4. DG broker: controls the creation and monitoring of Data Guard through a GUI and command line.

What are the different protection modes available in Oracle Data Guard? How can you check and change them?
1. Maximum performance (default): provides the highest level of data protection that is possible without affecting the performance of the primary database.
It allows transactions to commit as soon as all redo data generated by those transactions has been written to the online log.
2. Maximum protection: this mode ensures that no data loss will occur if the primary database fails. The redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down rather than continue processing transactions.
3. Maximum availability: provides the highest level of data protection that is possible without compromising the availability of the primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.

Steps to create a physical standby database

On the primary site:
1. Enable force logging: alter database force logging;
2. Create standby redo log groups on the primary server:
alter database add standby logfile ('/u01/oradata/--/standby_redo01.log') size 100m;
3. Set up the primary database pfile by changing the required parameters:
log_archive_dest_n -- the primary database must be running in archivelog mode
log_archive_dest_state_n
log_archive_config -- enables or disables the redo stream to the standby site
log_file_name_convert, db_file_name_convert -- used when the standby has a different directory structure; they update the locations of the datafiles on the standby database
standby_file_management -- set this to AUTO so that when a file is added to or dropped from the primary, the change is automatically made on the standby
db_unique_name, fal_server, fal_client
4. Create a password file for the primary.
5. Create a control file for the standby database on the primary site:
alter database create standby controlfile as 'STAN.ctl';
6. Configure the listener and tnsnames on the primary database.

On the standby site:
1.
Copy the primary site pfile and modify it for the standby name and locations.
2. Copy the password file from the primary and rename it.
3. Start the standby database in nomount using the modified pfile and create an spfile from it.
4. Use the control file created earlier to mount the database.
5. Now enable the DG broker to activate the primary/standby connection.
6. Finally, start redo log apply.

How do you enable/disable the log apply service for a standby?
alter database recover managed standby database disconnect; -- apply in background
alter database recover managed standby database using current logfile; -- apply in real time
alter database start logical standby apply immediate; -- start SQL Apply for a logical standby database

What are the different ways to manage a long gap on a standby database?
Due to a network issue, sometimes a gap is created between the primary and the standby database. Once the network issue is resolved, the standby automatically starts applying redo logs to fill the gap, but when the gap is too long we can fill it through an RMAN incremental backup, in three ways:
1. Check the actual gap, perform an incremental backup, and use this backup to recover the standby site.
2. Create a control file for the standby on the primary and restore the standby using the newly created control file.
3. Register the missing archive logs.
Use the v$archived_log view to find the gap (archived but not yet applied), then find the current_scn, take an RMAN incremental backup on the primary site from that SCN, and apply it on the standby site with the recover database noredo option. Use the control file creation method only when applying the normal backup method fails.
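The v$archived_log and current_scn checks described above can be run as follows; a sketch, assuming a single-thread (non-RAC) primary:

```sql
-- On the standby: last sequence received vs last sequence applied
SELECT thread#, MAX(sequence#) AS last_received
FROM   v$archived_log
GROUP  BY thread#;

SELECT thread#, MAX(sequence#) AS last_applied
FROM   v$archived_log
WHERE  applied = 'YES'
GROUP  BY thread#;

-- On the standby: the SCN to start the incremental backup from
SELECT current_scn FROM v$database;

-- On the primary (in RMAN): incremental backup from that SCN
-- BACKUP INCREMENTAL FROM SCN <standby_scn> DATABASE;
```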
Create a new control file for the standby on the primary site using backup current controlfile for standby;. Copy this control file to the standby site, then start the standby in nomount using the pfile and restore the standby with this control file: restore standby controlfile from '/location of file'; then start MRP to test.
If alert.log still shows that logs are transferred to the standby but not applied, you need to register those logs with the standby database:
alter database register logfile '/backup/temp/arc10.rc';

What is the Active Data Guard feature in Oracle 11g?
In a physical standby database prior to 11g, you could not query the standby database while redo apply was in progress. 11g solves this: you can query the standby (for example, select current_scn from v$database) while redo logs are being applied. The Active Data Guard feature of 11g thus allows a physical standby database to be open in read-only mode while media recovery is going on through Redo Apply; you can also open a logical standby in read/write mode while media recovery is going on through SQL Apply.

How can you find out the backlog of a standby?
You can perform a join query on v$archived_log and v$managed_standby.

What is the difference between normal Redo Apply and Real-Time Apply?
Normally, once a log switch occurs on the primary, the archiver process transmits it to the standby destination, and the remote file server (RFS) on the standby writes this redo data into an archive log. Finally the MRP service applies these archives to the standby database. This is the Redo Apply service.
In real-time apply, LGWR or the archiver on the primary writes redo data directly to the standby; there is no need to wait for the current log to be archived. Once a transaction is committed on the primary, the committed change is available on the standby in real time, even without switching the log.

What are the background processes for Data Guard?
On the primary:
Log Writer (LGWR): collects redo information and updates the online redo logs.
It can also create local archived redo logs and transmit online redo to the standby.
Archiver processes (ARCn): one or more archiver processes make copies of the online redo logs at the standby location.
Fetch Archive Log (FAL_SERVER): services requests for archive logs from clients running on different standby servers.
On the standby:
Fetch Archive Log (FAL_CLIENT): pulls archive logs from the primary site and automatically initiates transfer of archives when it detects a gap.
Remote File Server (RFS): receives redo from the primary database into standby redo logs.
Archiver (ARCn): archives the standby redo logs applied by the managed recovery process.
Managed Recovery Process (MRP): applies archived redo logs to the standby server.
Logical Standby Process (LSP): applies SQL to the standby server.

ASM/RAC Question/Answer

What is the use of ASM? (Or: why is ASM preferred over a filesystem?)
ASM provides striping and mirroring. You must put the Oracle CRD files (control, redo, data) and spfile on ASM. In 12c you can also put the Oracle password file in ASM. It facilitates online storage changes, and RMAN also recommends backing up ASM-based databases.

What are the different types of striping in ASM and their differences?
Fine-grained striping is smaller and always writes data in 128 KB stripes to each disk; coarse-grained striping is bigger and writes data according to the ASM allocation unit, by default 1 MB.

Default memory allocation for ASM?
How do you back up ASM metadata?
The default memory allocation for ASM in Oracle 10g is 1 GB; in Oracle 11g it is 256 MB; in 12c it is set back again to 1 GB.
You can back up ASM metadata (the ASM disk group configuration) using md_backup.

How do you find which databases are connected to ASM, and list the disk groups?
ASMCMD> lsct
SQL> select db_name from v$asm_client;
ASMCMD> lsdg
SQL> select name, allocation_unit_size from v$asm_diskgroup;

What parameters are required for ASM instance creation?
INSTANCE_TYPE = ASM -- by default it is RDBMS
DB_UNIQUE_NAME = +ASM1 -- by default it is +ASM, but you need to alter this to run multiple ASM instances
ASM_POWER_LIMIT = 11 -- defines the maximum power for a rebalancing operation on ASM; by default it is 1 and can be increased up to 11. The higher the limit, the more resources are allocated, resulting in faster rebalancing. It is a dynamic parameter, useful for rebalancing data across disks.
ASM_DISKSTRING = '/u01/dev/sda1/c*' -- specifies a value that limits the disks considered for discovery. Altering the default value may improve disk group mount time and the speed of adding a disk to a disk group.
ASM_DISKGROUPS = DG_DATA, DG_FRA -- the list of disk groups mounted at instance startup, where DG_DATA holds all the datafiles and DG_FRA holds the fast recovery area, including online redo logs and control files.
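The md_backup utility mentioned above is run from asmcmd; a sketch with illustrative paths (exact flags vary by release; this is the 11gR2 form):

```
ASMCMD> md_backup /tmp/dg_data.mdb -G DG_DATA
-- after a disk group storage loss, re-create it from the backup:
ASMCMD> md_restore /tmp/dg_data.mdb --full -G DG_DATA
```

Note that md_backup captures only the disk group metadata (attributes, directories, templates), not the database files themselves, which must still be restored with RMAN.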
Typically the FRA disk group size will be twice that of the DATA disk group, as it holds all the backups.

How do you create an spfile for an ASM instance?
SQL> create spfile from pfile='/tmp/init+ASM1.ora';
Start the instance with the NOMOUNT option. Once an ASM instance is present, disk groups can be used for the following parameters in a database instance to allow ASM file creation:
DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILES, LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_DEST, STANDBY_ARCHIVE_DEST

What are the disk group redundancy levels?
Normal redundancy: two-way mirroring with 2 failure groups, plus 3 quorum disks (optionally, to store voting files).
High redundancy: three-way mirroring, requiring three failure groups.
External redundancy: no mirroring, for disks that are already protected using RAID at the OS level.

CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;

We are going to migrate to new storage. How do we move an ASM database from storage A to storage B?
First prepare the disks at OS level so that both the new and the old storage are accessible to ASM, then simply add the new disks to the ASM disk group and drop the old disks. ASM performs an automatic rebalance whenever storage changes; there is no need for manual I/O tuning.
ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';

What are the required components of an Oracle RAC installation?
1. Oracle ASM shared disks to store the OCR and voting disk files.
2. OCFS2 for Linux clustered databases.
3. A certified Network File System (NFS).
4. Public IP: TCP/IP configuration (to manage the database storage system).
5. Private IP: to manage the RAC clusterware (cache fusion) internally.
6.
SCAN IP (listener): all connections to an Oracle RAC database use the SCAN in their client connection string. With SCAN you do not have to change the client connection string even if the configuration of the cluster changes (a node is added or removed). A maximum of 3 SCAN listeners run in Oracle.
7. Virtual IP: an alternate IP assigned to each node, used to deliver node-failure notifications to an active node without waiting for the actual TCP timeout. Thus a switchover can happen automatically to another active node, which continues to process user requests.

Steps to configure a RAC database:
1. Install the same OS level on each node.
2. Create the required groups and oracle user accounts.
3. Create the required directory structure for the CRS and DB homes.
4. Configure kernel parameters (sysctl.conf) as per the installation doc, and set shell limits for the oracle user account.
5. Edit the /etc/hosts file and specify public/private/virtual IPs for each node.
6. Create the required partitions for the OCR/voting disk and the ASM disk groups.
7. Install the OCFS2 and ASM RPMs and configure them on each node.
8. Install the clusterware binaries, then the Oracle binaries, on the first node.
9. Invoke netca to configure the listener.
10. Finally, invoke DBCA to configure ASM to store the database CRD files and create the database.

What structural changes came in Oracle 11g R2?
1. Grid home contains ASM + Clusterware (in 10g, the Oracle binaries and ASM binaries were in separate homes).
2. OCR and voting disk on ASM.
3. SCAN listener.
4. srvctl can manage disk groups, the SCAN listener, the Oracle home, ONS, VIP, OC4J.
5. GSD.

What are the Oracle RAC services?
Cache Fusion: Cache Fusion is a technology that uses high-speed inter-process communication (IPC) to provide cache-to-cache transfer of data blocks between different instances in a cluster. This eliminates disk I/O, which is very slow. For example, instance A needs to access a data block which is owned/locked by another instance B.
In such a case instance A requests the data block from instance B and hence accesses the block through IPC; this concept is known as Cache Fusion.

Global Cache Service (GCS): this is the heart of Cache Fusion, which maintains data integrity in a RAC environment when more than one instance needs a particular data block. For instance A's request, GCS tracks the block's state: if it finds read/write contention (one instance ready to read while the other is busy updating the block), the holding instance B creates a CR image for that block in its own buffer cache and ships this CR image to the requesting instance A via IPC. In the case of write/write contention (both instances ready to update the particular block), instance B creates a PI (past image) for that block in its own buffer cache, makes the redo entries, and ships the block to the requesting instance A. The dba_hist_seg_stat view can be used to check which objects were shipped most recently.

Global Enqueue Service (GES): GES performs concurrency control (more than one instance accessing the same resource) on dictionary cache locks, library cache locks and transactions. It handles locks such as transaction locks, library cache locks, dictionary cache locks and table locks.

Global Resource Directory (GRD): to perform any operation on a data block we need to know the current state of that block. GCS (LMSn + LMD) and GES keep track of the resources, their locations and their statuses (for each datafile and each cached block), and this information is recorded in the Global Resource Directory (GRD).
Each instance maintains its own portion of the GRD; whenever a block transfers out of a local cache, the GRD is updated.

Main components of Oracle RAC Clusterware

OCR (Oracle Cluster Registry): OCR manages Oracle Clusterware information (all nodes, CRS, CSS, GSD info) and Oracle database configuration information (instance, services, database state info).

OLR (Oracle Local Registry): OLR resides on every node in the cluster and manages the Oracle Clusterware configuration information for that particular node. The purpose of OLR, given that OCR exists, is to allow local node startup: the OCR lives on ASM, and ASM files are available only once the Grid stack has started. The OLR makes it possible to locate the voting disk, which also carries the information needed to communicate with the other nodes.

Voting disk: the voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for a node to be a member of the cluster. If a node fails or gets separated from the majority, it is forcibly rebooted, and after rebooting it is added back to the surviving nodes of the cluster.

Why is the voting disk placed on a quorum disk, and what is the split-brain syndrome in a database cluster?
The voting disk is placed on a quorum disk (optionally) to avoid the possibility of split-brain syndrome. Split-brain syndrome is a situation where one instance tries to update a block and at the same time another instance also tries to update the same block; in fact it can happen only when cache fusion is not working properly. Voting disks are always configured in odd-numbered sets: the loss of more than half of your voting disks will cause the entire cluster to fail, and with an even number, node eviction could not decide which node to remove on failure. You must store the OCR and voting disks on ASM.
If necessary, you can dynamically add or replace voting disks after you complete the cluster installation process, without stopping the cluster.

ASM backup: you can use md_backup to restore the ASM disk group configuration in case of ASM disk group storage loss.

OCR and voting file backup: Oracle Clusterware automatically creates OCR backups (auto backups managed by crsd) every four hours, retaining at least 3 backups (backup00.ocr, day.ocr, week.ocr in the GRID home), but you can take an OCR backup manually at any time using:
ocrconfig -manualbackup -- take a manual backup of the OCR
ocrconfig -showbackup -- list the available backups
ocrdump -backupfile 'bak-full-location' -- validate the backup before any restore
ocrconfig -backuploc <path> -- change the configured OCR backup location
dd if='vote disk name' of='backup file name' -- take a voting file backup

To check the OCR and voting disk locations:
crsctl query css votedisk
/etc/oracle/ocr.loc, or use ocrcheck
ocrcheck -- check the OCR corruption status (if any)
crsctl check crs/cluster -- check CRS status on the local and remote nodes

Moving the OCR and voting disk:
Log in as the root user, as the OCR is owned by root; for the voting disk, stop all CRS first.
ocrconfig -replace ocrmirror/ocr -- add/remove the OCR mirror and OCR file
crsctl add/delete css votedisk -- add and remove voting disks in the cluster

To list all nodes in your cluster (from root), or to check public/private/VIP info:
olsnodes -n -p -i

How can you restore the OCR in a RAC environment?
1. Stop clusterware on all nodes and restart one node in exclusive mode to perform the restore. The -nocrs option ensures the crsd process and OCR do not start with the other nodes.
# crsctl stop crs
# crsctl stop crs -f
# crsctl start crs -excl -nocrs
Check whether crsd is still running, and stop it if so:
# crsctl stop resource ora.crsd -init
2. If you want to restore the OCR to an ASM disk group, you must check/activate/repair/create a disk group with the same name and mount it from the local node.
If you are not able to mount that disk group locally, drop the disk group and re-create it with the same name. Finally run the restore with the most recent backup:
# ocrconfig -restore file_name
3. Verify the integrity of the OCR and stop the exclusive-mode CRS:
# ocrcheck
# crsctl stop crs -f
4. Run the ocrconfig -repair -replace command on every other node where you did not run the restore. For example, if you restored node 1 and have 4 nodes, run it on nodes 2, 3 and 4.
# ocrconfig -repair -replace
5. Finally, start CRS on all nodes and verify with the CVU command:
# crsctl start crs
# cluvfy comp ocr -n all -verbose
Note: using ocrconfig -export / ocrconfig -import also enables you to restore the OCR.

Why does Oracle recommend using OCR auto/manual backups to restore the OCR instead of export/import?
1. An OCR auto/manual backup is a consistent snapshot of the OCR, whereas an export is not.
2. Backups are created while the system is online, but you must shut down all nodes in the clusterware to take a consistent export.
3. You can inspect a backup using the OCRDUMP utility, whereas you cannot inspect the contents of an export.
4. You can list the backups using ocrconfig -showbackup, whereas you must keep track of each export yourself.

How do you restore voting disks?
1. Shut down CRS on all nodes in the cluster:
crsctl stop crs
2. Locate the current location of the voting disks, and restore each of them using the dd command from a previous good backup taken with the same dd command:
crsctl query css votedisk
dd if=<backup file> of=<voting disk>
3. Finally, start CRS on all nodes:
crsctl start crs

How do you add a node or instance in a RAC environment?
1. From the ORACLE_HOME/oui/bin location of node 1, run the script addNode.sh:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/root.sh script on node 3.
3. From an existing node run srvctl config db -d db_name, then create a new mount point.
4. mkdir -p ORACLE_HOME_NEW/<mount point name>
5.
Finally, run the cluster installer for the new node and update the clusterware inventory.
Another way: start DBCA and, from the Instance Management page, choose Add Instance and follow the subsequent steps.

How do you identify the master node in RAC?
# /u1/app/../crsd> grep MASTER crsd.log | tail -1
(or) cssd> grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.

Difference between crsctl and srvctl?
crsctl manages cluster-related operations, like starting/enabling cluster services, whereas srvctl manages Oracle-related operations, like starting/stopping Oracle instances. Also, in Oracle 11gR2, srvctl can be used to manage network, VIP, disks, etc.

What are ONS/TAF/FAN/FCF in RAC?
ONS is a part of the clusterware and is used to transfer messages between the node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes, either node UP/DOWN or database UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net Services which moves a session to a backup connection whenever a session fails.
FCF (Fast Connection Failover) is a feature of the Oracle client which receives notifications from FAN and processes them accordingly. It cleans up connections when a down event is received and adds new connections when an up event is received from FAN.

How does OCSSD start if the voting disk and OCR reside on ASM?
Without access to the voting disk there is no CSS to join or start the clusterware, yet the voting disk is stored in ASM, and per the Oracle startup order CSSD starts before ASM. So how can CSS start before ASM is up? This is due to the ASM disk header: in 11gR2 it has new metadata, kfdhdb.vfstart and kfdhdb.vfend, which tell CSS where to find the voting files. This does not require the ASM instance to be up; once CSS reads the voting files it can join the cluster easily.
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down.
As a result, CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.

Upgrade/Migration/Patches Question/Answer

What are database patches, and how do you apply them?

CPU (Critical Patch Update, or one-off patch): security fixes released each quarter. They are cumulative, meaning they include fixes from previous Oracle security alerts. To apply a CPU you must use the opatch utility.
- Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
- Set your current directory to the directory where the patch is located, and then run the opatch utility.
- After applying the patch, start up all your services and listeners, start all your databases, log in as sysdba, and run the catcpu.sql script.
- Finally run utlrp.sql to recompile invalid objects.

To roll back a CPU patch:
- Shut down all instances and listeners.
- Go to the patch location and run: opatch rollback -id 677666
- Start all the databases and listeners and run the catcpu_rollback.sql script.
- Bounce the database and run the utlrp.sql script.

PSU (Patch Set Update): security fixes and priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future, until the database is upgraded to a newer version.
You need two things to apply a PSU patch: the latest version of opatch, and the PSU patch that you want to apply.
1. Check and update the opatch version: go to ORACLE_HOME/OPatch and run opatch version.
To update to the latest opatch: take a backup of the OPatch directory, remove the current OPatch directory, and finally unzip the downloaded patch into the OPatch directory. Now check your opatch version again.
2. To apply the PSU patch:
unzip p13923374_11203_.zip
cd 13923374
opatch apply
In the case of RAC, the opatch utility will prompt for an OCM (Oracle Configuration Manager) response file; you have to provide the complete path of the OCM response file if you have already created one.
3.
Post-apply steps: start the database with sys as sysdba:
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory -- check which PSU patches are installed
opatch rollback -id 13923374 -- roll back a patch you have applied
opatch nrollback -id 13923374,13923384 -- roll back multiple patches you have applied

SPU (Security Patch Update): an SPU cannot be applied once a PSU is applied, until the database is upgraded to a new base version.

Patchset (e.g. 10.2.0.1 to 10.2.0.3): applying a patchset usually requires OUI. Shut down all database services and the listener, then apply the patchset to the Oracle binaries. Finally, start the services and listener, then run the post-patch scripts.

Bundle patches: these are for Windows and Exadata, and include both the quarterly security patches and recommended fixes.

You have a collection of nearly 100 patches. How can you apply only one of them?
With napply, by providing the specific patch id: opatch util napply -id 9 -skip_subset -skip_duplicate. This will apply only patch 9 from within the many extracted patches.

What is a rolling upgrade?
It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain one node while the other nodes run different software versions.

What happens when you use STARTUP UPGRADE?
Startup upgrade enables you to open a database based on an earlier version. It restricts sysdba logons and disables system triggers. After startup upgrade, only specific view queries can be used; no other views can be used until catupgrd.sql is executed.
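After any of the patch flows above, a quick post-patch sanity check can be run from sqlplus; a sketch, assuming sysdba access:

```sql
-- Recompile invalid objects, then confirm none remain
@?/rdbms/admin/utlrp.sql

SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;

-- Confirm the patch/upgrade actions recorded in the registry
SELECT action, version, comments, action_time
FROM   dba_registry_history
ORDER  BY action_time;
```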