#get ddl of tablespace in oracle
ocptechnology · 4 years ago
How To Get DDL Of All Tablespaces In Oracle
Hello Friends, in this article we are going to show you how to get the DDL of all tablespaces in Oracle.
Get DDL of All Tablespaces in Oracle
Use the query below to get the DDL of all tablespaces in Oracle:
SQL> set heading off;
SQL> set echo off;
SQL> set pages 999;
SQL> set long 90000;
SQL> spool ddl_of_tablespace.sql
SQL> select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces…
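The excerpt above is cut off mid-query; a minimal sketch of what the complete script usually looks like (assuming the tb alias on dba_tablespaces that the fragment suggests):
SQL> set heading off echo off pages 999 long 90000
SQL> spool ddl_of_tablespace.sql
SQL> select dbms_metadata.get_ddl('TABLESPACE', tb.tablespace_name)
  2    from dba_tablespaces tb;
SQL> spool off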
oraclerider · 2 years ago
How to get tablespace DDL in Oracle
To get tablespace DDL (Data Definition Language) in Oracle, you can use the DBMS_METADATA package.
Get DDL for a specific tablespace
Step 1: To get tablespace DDL in Oracle, first connect to your Oracle database using a tool such as SQL*Plus or SQL Developer.
$ sqlplus / as sysdba
Step 2: Execute the following SQL statement to set the output format for the DDL…
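The post is truncated at Step 2; a minimal sketch of the remaining steps (session formatting plus the GET_DDL call, with USERS as a placeholder tablespace name):
SQL> set long 90000 pagesize 0
SQL> select dbms_metadata.get_ddl('TABLESPACE','USERS') from dual;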
sandeep2363 · 4 years ago
Generate DDL for the User including grants in Oracle
Get the DDL for a user present in the Oracle database. The following script will provide you all the grants, profile, permissions, and tablespace quota of the user:
Script:
set long 20000
set longchunksize 20000
set pagesize 0
set linesize 1000
set trimspool on
column ddl format a1000
set feedback off
set verify off
--Add a semicolon at the end of each statement
execute…
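The script is cut off at the EXECUTE call; a minimal sketch of the usual continuation (the DBMS_METADATA transform that adds the terminating semicolons, followed by the user, grant, and quota DDL, with SCOTT as a placeholder username):
execute dbms_metadata.set_transform_param(dbms_metadata.session_transform,'SQLTERMINATOR',true);
select dbms_metadata.get_ddl('USER','SCOTT') from dual;
select dbms_metadata.get_granted_ddl('ROLE_GRANT','SCOTT') from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','SCOTT') from dual;
select dbms_metadata.get_granted_ddl('OBJECT_GRANT','SCOTT') from dual;
select dbms_metadata.get_granted_ddl('TABLESPACE_QUOTA','SCOTT') from dual;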
siva3155 · 5 years ago
300+ TOP Oracle Database Backup & Recovery Interview Questions and Answers
Oracle Database Backup and Recovery Interview Questions for freshers and experienced candidates:
1. What is an Oracle database Partial Backup?
A partial backup is any operating system backup short of a full backup, taken while the database is open or shut down.

2. What is an Oracle database Full Backup?
A full backup is an operating system backup of all data files, online redo log files, and the control file that constitute an Oracle database, plus the parameter file.

3. What is the difference between Oracle media recovery and crash recovery?
Media recovery is the process of recovering the database from backup when a physical disk failure occurs. Crash recovery is an automated process that Oracle takes care of when an instance failure occurs.

4. What is db_recovery_file_dest in Oracle? When do you need to set this value? Give the steps to perform point-in-time recovery with a backup taken before the RESETLOGS of the DB. Tell me about the steps required to enable RMAN backup for a target database.
In Oracle, db_recovery_file_dest specifies the default location of the flash recovery area, which contains multiplexed current control files and online redo logs as well as archived logs, RMAN backups, and flashback logs. db_recovery_file_dest_size should be specified as well.

5. What is Restricted Mode of Instance Startup in Oracle?
To enable a restricted session: ALTER SYSTEM ENABLE RESTRICTED SESSION;
To disable a restricted session: ALTER SYSTEM DISABLE RESTRICTED SESSION;
To start the database in restricted mode: STARTUP RESTRICT
Starting the instance in restricted mode does not allow all users to access it; only users with the RESTRICTED SESSION privilege are allowed in. This is done at times when data changes are being made, so that no ordinary users can access the data while the changes are happening.

6. What is the difference between recovery and restoring of the Oracle database?
Here is a scenario to understand restore and recovery. Sunday 10pm: the database is backed up and running fine. Monday 11am: it went down/crashed for some reason. To bring up the database, we have two options.
Simple restore: copy the files from the backup taken Sunday night and open the database. Here we lose all the changes made since Sunday night.
Restore and recovery: copy the files from the backup taken Sunday night and apply all the archivelog and redo log files to bring the database to the point of failure. Here you don't lose the changes made until Monday 11am.
Restore: copying files from the backup, overwriting the existing database files.
Recovery: applying the changes to the database up to the point of failure; these changes are recorded in the online redo log and archivelog (the backups of the redo log) files.

7. What are the different tools available for hot backups in Oracle? Is it preferable to take them manually all the time, or does it depend on the size of the database?
A hot backup can be done by RMAN, by user-managed backups (putting tablespaces in backup mode), or by OEM, which does the same as a user-managed backup. The choice depends on the size of the database. If the database size is in terabytes, an RMAN backup can take more than 10 hours to complete, and if the database is critical you can't wait that long. In that case there are special backup techniques provided by vendors like Tivoli and NetBackup: a BCV backup (Business Content Volume Sync) copies a snapshot of the primary data to another place and backs up the database from one SAN to another within about 15 minutes for a 2 TB database. This is the preferable method for big companies.

8. What do you mean by Oracle MEDIA RECOVERY?
Media recovery is required when a physical disk fails or a physical database file is corrupted.

9. What is disk migration? What are the steps involved in Oracle disk migration?
Disk migration is nothing but migration of data from a database on one operating system to a database on another. The steps involved: first go to your source database and export all your data into flat files; then, in the destination database, instead of the Oracle-provided data source, give the path of the flat files you exported previously.

10. What are the advantages of operating a database in ARCHIVELOG mode over operating it in NOARCHIVELOG mode in Oracle?
With the database in ARCHIVELOG mode you have the chance to take hot backups and suffer no data loss in recovery, and you can use RMAN for backup and recovery. The disadvantages are poorer performance and a greater chance of filling up the archive destination disk.
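As a companion to question 10, a minimal sketch of the standard sequence for switching a database into ARCHIVELOG mode (run as SYSDBA):
SQL> shutdown immediate;
SQL> startup mount;
SQL> alter database archivelog;
SQL> alter database open;
SQL> archive log list;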
Oracle Database Backup & Recovery Interview Questions

11. Why are more redos generated when the Oracle database is in begin backup mode?
During begin backup mode, datafile headers get frozen, and as a result row information cannot be retrieved; consequently the entire block is copied to the redo logs, so more redo is generated, more log switches occur, and in turn more archive logs are produced.

12. What is the use of the FULL option in the EXP command?
A flag to indicate whether a full database export should be performed.

13. What is the use of the OWNER option in the EXP command?
The list of users whose objects should be exported.

14. What is the use of the TABLES option in the EXP command?
The list of tables that should be exported.

15. What is the use of the RECORDLENGTH option in the EXP command?
The record length in bytes.

16. What is the use of the INCTYPE option in the EXP command?
The type of export that should be performed: COMPLETE, CUMULATIVE, or INCREMENTAL.

17. What is the use of the RECORD option in the EXP command?
For incremental exports, a flag indicating whether a record will be stored in the data dictionary tables recording the export.

18. What is the use of the PARFILE option in the EXP command?
The name of the parameter file to be passed to export.

19. What is the use of the ANALYZE option in the EXP command?
A flag to indicate whether statistical information about the exported objects should be written to the export dump file.

20. What is the use of the CONSISTENT option in the EXP command?
A flag to indicate whether a read-consistent version of all the exported objects should be maintained.

21. What is the use of the LOG (Ver 7) option in the EXP command?
The name of the file to which the log of the export will be written.

22. What is the use of the FILE option in the IMP command?
The name of the file from which the import should be performed.

23. What is the use of the SHOW option in the IMP command?
A flag to indicate whether the file contents should be displayed rather than executed.

24. What is the use of the IGNORE option in the IMP command?
A flag to indicate whether the import should ignore errors encountered when issuing CREATE commands.

25. What is the use of the GRANTS option in the IMP command?
A flag to indicate whether grants on database objects will be imported.

26. What is the use of the INDEXES option in the IMP command?
A flag to indicate whether import should import indexes on tables or not.

27. What is the use of the ROWS option in the IMP command?
A flag to indicate whether rows should be imported. If this is set to 'N', then only the DDL for database objects will be executed.

28. What are the different methods of backing up an Oracle database?
Logical backups, cold backups, and hot backups (archive log).

29. What is a logical backup?
A logical backup involves reading a set of database records and writing them into a file. The Export utility is used for taking the backup, and the Import utility is used to recover from it.

30. What is a cold backup? What are the elements of it?
A cold backup is a backup of all physical files taken after a normal shutdown of the database. We need to take: all data files, all control files, all online redo log files, and the init.ora file (optional).

31. What are the different kinds of export backups?
Full backup - the complete database. Incremental backup - only tables affected since the last incremental/full backup date. Cumulative backup - only tables affected since the last cumulative/full backup date.

32. What is a hot backup and how can it be taken?
A hot backup is a backup of the archive log files taken while the database is open; for this, ARCHIVELOG mode should be enabled. The following files need to be backed up: all data files, all archive log and redo log files, and one control file.

33. What is the use of the FILE option in the EXP command?
To give the export file name.

34. What is the use of the COMPRESS option in the EXP command?
A flag to indicate whether export should compress fragmented segments into single extents.

35. What is the use of the GRANTS option in the EXP command?
A flag to indicate whether grants on database objects will be exported or not. The value is 'Y' or 'N'.

36. What is the use of the INDEXES option in the EXP command?
A flag to indicate whether indexes on tables will be exported.

37. What is the use of the ROWS option in the EXP command?
A flag to indicate whether table rows should be exported. If 'N', only DDL statements for the database objects will be created.

38. What is the use of the CONSTRAINTS option in the EXP command?
A flag to indicate whether constraints on tables need to be exported.
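To tie the option questions together, a minimal sketch of classic exp/imp invocations exercising several of the options discussed (credentials and file names are placeholders):
$ exp system/manager FULL=Y FILE=full_db.dmp LOG=full_db.log CONSISTENT=Y
$ exp system/manager OWNER=scott FILE=scott.dmp GRANTS=Y INDEXES=Y ROWS=Y COMPRESS=N
$ imp system/manager FILE=scott.dmp SHOW=Y LOG=check_contents.log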
Get Trained on IBM DB2 DBA Online by Professionals with 15+ Years of Experience - IQ Online Training
IBM DB2 DBA:
DB2 is an RDBMS (Relational Database Management System) product from IBM that is available on a number of different OS (Operating System) platforms. According to IBM, it leads in terms of database market share and performance.
Although these products are offered for personal computer operating systems and UNIX-based systems, DB2 trails Oracle's database product on UNIX-based systems and Microsoft Access on Windows systems.
Features:
· Every user has the authority to create a table in a database or in a table space implicitly created by the database.
· If a table space is implicitly created and you do not specify the IN clause in the CREATE TABLE statement, DB2 implicitly creates the database to which the table space will be assigned.
· DB2 either implicitly uses an existing implicitly created database for the table or uses an explicitly created database.
· A DB2 database is a set of structures that includes a collection of tables, the table spaces in which they reside, and the associated indexes. You define a database using the CREATE DATABASE statement (a small DDL sketch follows this list).
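A minimal sketch of the explicit route described in the last bullet, in DB2 for z/OS-style DDL (all object names are placeholders):
CREATE DATABASE PAYDB;
CREATE TABLESPACE PAYTS IN PAYDB;
CREATE TABLE PAYROLL
  (EMPNO  CHAR(6) NOT NULL,
   SALARY DECIMAL(9,2))
  IN PAYDB.PAYTS;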
Short Notes on DB2 DBA:
· An excessive number of tables and table spaces in a database can decrease manageability and cause performance issues. Reducing the number of tables and table spaces in a database improves performance, minimizes maintenance, increases concurrency, and decreases log volume.
· You can also grant authorization for accessing all the data as a single unit, even when the data is stored in different databases.
· Collecting the data in a single database allows you to start or stop access to all the data in one operation.
· A single database can contain the data associated with one application or with a group of related applications.
Highlights of IQ Online Training:
· IQ ONLINE TRAINING is a premier online training portal with a decade of rich experience in web-based online training, and it has established itself as one of the dominant online training portals in the world.
· IQ ONLINE TRAINING offers you a LIVE FREE DEMO, live webinars, and support with interview questions and answers.
· IQ ONLINE TRAINING has 12+ years of experience and certified trainers, and provides job-assistance training.
To register for our IT training courses in USA
Please call us on +1 904-304-2519 or
Send an email to [email protected]
Website: http://www.iqonlinetraining.com/
         http://www.iqonlinetraining.com/ibm-db2-online-training/
Applications:
· One of the important features supporting big data in a DB2 environment is RPN (Relative Page Numbering) for range-partitioned table spaces.
· Either an existing range-partitioned table space can be changed to RPN with ALTER TABLESPACE ... PAGENUM RELATIVE, followed by an online REORG of the entire table space, or a new RPN range-partitioned table space can be created.
· RPN table spaces allow a large number of partitions of large sizes; the maximum partition size is about 1 TB.
· RPN table spaces allow storing a lot of data, beyond the capacity of existing table spaces. Of course, this requires an expanded RID, which increases from 5 bytes to 7 bytes.
· This impacts the DDL for the mapping table used by the online REORG utilities. RPN table spaces improve availability, and size is no longer the issue. DSSIZE can also be specified at the partition level for an RPN table space.
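A minimal sketch of the two routes described above, in DB2 12 for z/OS-style DDL (database and table space names are placeholders):
-- Convert an existing range-partitioned table space to RPN, then run an online REORG:
ALTER TABLESPACE PAYDB.PAYTS PAGENUM RELATIVE;
-- Or create a new RPN table space, with DSSIZE specifiable per partition:
CREATE TABLESPACE PAYRPN IN PAYDB PAGENUM RELATIVE NUMPARTS 4 DSSIZE 256 G;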
passquestion-blog · 6 years ago
Oracle Database 12cR2 1Z0-072 dumps
In preparing for 1Z0-072 Oracle Database 12cR2 Administration, the quality of the Passquestion Oracle Database 12cR2 1Z0-072 dumps shows in the knowledge and skill practice they deliver in a small span of time. Passquestion also provides a 100% money-back guarantee to customers who take the Oracle 1Z0-072 exam. The Oracle Database 1Z0-072 material is available for practicing the questions and getting the full benefits.
Download the latest Oracle Database 1Z0-072 exam questions - 100% pass your exam on the first attempt
Share some Oracle Database 1Z0-072 exam questions and answers below.
In your database, you create a user, HR, and then execute this command: GRANT CREATE SESSION TO hr WITH ADMIN OPTION; Which three actions can HR perform? (Choose three.)
A. Grant the CREATE SESSION privilege without ADMIN OPTION to other users.
B. Grant the CREATE SESSION privilege with ADMIN OPTION to other users.
C. Execute DML statements in the HR schema.
D. Log in to the database instance.
E. Revoke the CREATE SESSION privilege from other users.
F. Execute DDL statements in the HR schema.
Correct Answer: BDE
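A quick sketch of the three correct actions (B, D, and E) as HR could issue them (the grantee SCOTT is a placeholder):
SQL> connect hr/hr                                      -- D: log in to the instance
SQL> grant create session to scott with admin option;   -- B: grant with ADMIN OPTION
SQL> revoke create session from scott;                  -- E: revoke from other users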
Your client application and Oracle database are located on different hosts. Which statement is true about establishing a connection between them?
A. Oracle Connection Manager must be installed on the client.
B. Net service names must be configured on both the client and database servers.
C. The Listener Control utility must be installed on the client if you are using the default listener on the server.
D. Only Oracle Net can be used.
E. Oracle Net Listener is required.
Correct Answer: D
Which three failures require intervention for recovery? (Choose three.)
A. a user error such as dropping the wrong table
B. network interface card (NIC) failure
C. user process failure
D. statement failure
E. media failure
Correct Answer: ACE
Which statement is true about smallfile tablespaces?
A. Extent location metadata is stored in the data dictionary by default.
B. The number of data files is constrained only by the size of the storage array.
C. Maximum file size can be set to unlimited only if the logical volume manager supports striping.
D. Segments can span multiple tablespaces.
E. Segments can span multiple data files.
Correct Answer: E
Which two are true about RETENTION GUARANTEE? (Choose two.)
A. It is a tablespace attribute.
B. It prevents FLASHBACK DATABASE operation failure.
C. It prevents out-of-space errors.
D. It prevents “Snapshot too old” errors.
E. It is a static parameter.
Correct Answer: AB
Save 35% off - Passquestion 2019 Promotion
Pass your Oracle 1Z0-072 exam with confidence with real 1Z0-072 questions
How to get Oracle Database 1Z0-072 success? For this, you should consider taking the Passquestion Oracle Database 12cR2 1Z0-072 dumps, because these questions were made through the hard work of Oracle experts, which will guarantee you 100% passing of Oracle 1Z0-072 at the very first attempt. These Oracle Database 12cR2 1Z0-072 dumps will help you in clearing your doubts about the 1Z0-072 Oracle Database 12cR2 Administration exam.
How to best prepare for 1Z0-072 exam? - Passquestion 1Z0-072 questions and answers
Want to get success in the Oracle Database 1Z0-072 exam on the first attempt? If yes, then you must get help from Passquestion by downloading the latest Oracle Database 12cR2 1Z0-072 dumps for your Oracle 1Z0-072 exam. Passquestion has excellent Oracle Database 12cR2 1Z0-072 dumps with the latest and most relevant questions and answers in PDF files.
To help you pass your 1Z0-072 Oracle Database 12cR2 Administration exam, you can prepare for your Oracle 1Z0-072 exam with less effort. You will gain a basic and advanced understanding of all the concepts of the Oracle 1Z0-072 Exam Certification. These Oracle Database 12cR2 1Z0-072 dumps are designed for your convenience and you can rely on them without any hesitation. With the help of the Passquestion Oracle Database 12cR2 1Z0-072 dumps for the preparation of the Oracle Database 1Z0-072 exam, you will be able to pass this exam on the first attempt with maximum grades.
youngprogrammersclub · 6 years ago
How to reset High water mark or Remove Fragmentation?
Fragmentation occurs when we perform update and delete operations on a table. The space freed up during these update and delete operations is not immediately re-used, which leaves holes in the table and results in table fragmentation. As we know, the high water mark (HWM) of a table defines the border line between used and unused space for the table. While performing a full table scan, Oracle will always read the data up to the high water mark (used blocks); the HWM is an indicator of USED BLOCKS in the database. DDL statements reset the high water mark, but DML (update/delete) does not. When rows are not stored contiguously, or if rows are split onto more than one block, performance decreases because these rows require additional block accesses.
SQL> create table DEMO as select * from PAYROLL_MAIN_FILE;
Table created.
SQL> analyze table DEMO compute statistics;
Table analyzed.
SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
  2  from user_tables where table_name='DEMO';
Ever Used  Never Used Total rows
---------- ---------- ----------
154        18         3680
SQL> delete from DEMO where owner='HRMS';
1784 rows deleted.
SQL> commit;
SQL> analyze table DEMO compute statistics;
Table analyzed.
SQL> select blocks "Ever Used", empty_blocks "Never Used", num_rows "Total rows"
  2  from user_tables where table_name='DEMO';
Ever Used  Never Used Total rows
---------- ---------- ----------
154        18         1896
Even though you deleted almost half of the rows, the above shows that the table's high water mark is still at 154 blocks, and for any full table scan Oracle will read up to 154 blocks to search the data. Thus, for better performance, you need to re-organize this table.
Query to find the table size including fragmentation:
SQL> select table_name, round((blocks*8),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 14376kb
Query to find the actual data size in the table:
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 9248.96kb
Note: 14376 - 9248.96 = 5127.04 kb is wasted space in the table. The difference between the two values is 35%; with the default PCTFREE of 10%, the table has about 25% extra space. For that, we need to reorganize the fragmented table. You can use any of the below options to reorganize fragmented tables:
- Alter table move + rebuild indexes
- Export/truncate/import
- Create table as select
- dbms_redefinition (see the sketch at the end of this post)
Here in this article I am applying the following two methods to show how to reset the high water mark and remove fragmentation. The document was tested on an Oracle 9i database.
Alter table move + rebuild indexes method:
SQL> alter table PAY_PAYMENT_MASTER move;
Table altered.
SQL> alter index PAY_PAYMENT_MASTER_PK rebuild;
Index altered.
SQL> select status, index_name from user_indexes where table_name = 'PAY_PAYMENT_MASTER';
STATUS   INDEX_NAME
-------- ------------------------------
VALID    PAY_PAYMENT_MASTER_PK
SQL> exec dbms_stats.gather_table_stats('HRMS','PAY_PAYMENT_MASTER');
PL/SQL procedure successfully completed.
SQL> select table_name, round((blocks*8),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 11376kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 9148.32kb
Create table as select method:
SQL> create table PAY_PAYMENT_MASTER_TEMP as select * from PAY_PAYMENT_MASTER;
Table created.
SQL> drop table PAY_PAYMENT_MASTER purge;
Table dropped.
SQL> rename PAY_PAYMENT_MASTER_TEMP to PAY_PAYMENT_MASTER;
Table renamed.
SQL> exec dbms_stats.gather_table_stats('HRMS','PAY_PAYMENT_MASTER');
PL/SQL procedure successfully completed.
SQL> select table_name, round((blocks*8),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 85536kb
SQL> select table_name, round((num_rows*avg_row_len/1024),2)||'kb' "size"
  2  from user_tables where table_name = 'PAY_PAYMENT_MASTER';
TABLE_NAME         size
------------------ -------------
PAY_PAYMENT_MASTER 68986.97kb
SQL> select status from user_indexes where table_name = 'PAY_PAYMENT_MASTER';
no rows selected
Note: here you need to create all the indexes again.
From Oracle 10g onwards you can also use the shrink command to re-organize the data. This command is only applicable to tables whose tablespace is defined with automatic segment space management. Before using this command, you should have row movement enabled:
SQL> alter table DEMO enable row movement;
Table altered.
The first part re-arranges rows and the second part resets the HWM:
SQL> alter table DEMO shrink space compact;
Table altered.
SQL> alter table DEMO shrink space;
Table altered.
Benefits of the new shrink command method:
- Unlike "alter table move", indexes are not left in an UNUSABLE state; after the shrink command, indexes are updated as well.
- It is an online operation, so you do not need downtime to do the re-organization.
- It does not require any extra space for the process to complete.
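The fourth option in the list above, dbms_redefinition, is not demonstrated in the post; a minimal sketch of an online redefinition (the interim table PAY_PAYMENT_MASTER_INT is a placeholder that must be pre-created with the desired structure):
SQL> variable nerr number
SQL> exec dbms_redefinition.can_redef_table('HRMS','PAY_PAYMENT_MASTER');
SQL> exec dbms_redefinition.start_redef_table('HRMS','PAY_PAYMENT_MASTER','PAY_PAYMENT_MASTER_INT');
SQL> exec dbms_redefinition.copy_table_dependents('HRMS','PAY_PAYMENT_MASTER','PAY_PAYMENT_MASTER_INT', num_errors => :nerr);
SQL> exec dbms_redefinition.finish_redef_table('HRMS','PAY_PAYMENT_MASTER','PAY_PAYMENT_MASTER_INT');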
stokedevwebsite · 6 years ago
DBA Interview Questions with Answers Part 20
What is the Checkpoint SCN and Checkpoint Count? How can we check them?
A checkpoint is an event in which the database writer flushes the dirty buffers to the datafiles. This is an ongoing activity, and as a result the checkpoint number is constantly incremented in the datafile headers and the controlfile; the background process CKPT takes care of this responsibility.

How can you find the length of a username and password?
You can find the length of a username with the query below. The password is hashed (#), so there is no way to get its length. You can use the special characters $, #, and _ without single quotes; any other characters must be enclosed in single quotation marks.
SQL> select length(username), username from dba_users;
The minimum length for a password is at least 1 character, whereas the maximum depends on the database version; in 10g it is restricted to 17 characters.

What restrictions apply while creating a view?
- A view can be created referencing tables and views only in the current database.
- A view name must not be the same as any table owned by that user.
- You can build a view on another view and on procedures that reference views.

What is the difference between Delete/Drop/Truncate?
DELETE is a command that only removes data from the table; it is a DML statement. Deleted data can be rolled back (when you delete, all the data first gets copied into rollback and is then deleted). We can use a WHERE condition with DELETE to delete particular rows from the table.
DROP removes the table from the data dictionary; it is a DDL statement. We could not recover a dropped table before Oracle 10g, but the flashback feature of Oracle 10g provides the facility to recover a dropped table.
TRUNCATE is a DDL command that deletes the data as well as freeing the storage held by the table. This free space can be used by this table or some other table again. It is faster because it performs the delete operation directly (without copying the data into rollback).
Alternatively, you can enable row movement for the table and use the shrink command after using the delete command:
SQL> create table test (s1 number, s2 number);
SQL> select bytes, blocks from user_segments where segment_name = 'TEST';
Bytes       Blocks
----------  -------
65536       8
SQL> insert into test select level, level*3 from dual connect by level <= 3000;
SQL> select bytes, blocks from user_segments where segment_name = 'TEST';
Bytes       Blocks
----------  -------
131072      16
SQL> delete from test;
3000 rows deleted.
SQL> select bytes, blocks from user_segments where segment_name = 'TEST';
Bytes       Blocks
----------  -------
131072      16
SQL> alter table test enable row movement;
SQL> alter table test shrink space;
Table altered.
SQL> select bytes, blocks from user_segments where segment_name = 'TEST';
Bytes       Blocks
----------  -------
65536       8

What is the difference between Varchar and Varchar2?
Varchar2 can store up to 4000 bytes whereas Varchar can only store up to 2000 bytes. Varchar occupies space for NULL values, whereas Varchar2 does not.

What is the difference between Char and Varchar2?
CHAR values have a fixed length; they are padded with space characters to match the specified length. VARCHAR2 values have a variable length; they are not padded with any characters.

In which language has Oracle been developed?
Oracle is an RDBMS package developed using the C language.

What is the difference between Translate and Replace?
TRANSLATE is used for character-by-character substitution, whereas REPLACE is used to substitute a single character or string with a word (a short demonstration follows).
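A quick demonstration of the difference, using illustrative strings; REPLACE swaps the whole search string for the replacement, while TRANSLATE maps characters one-for-one:
SQL> select replace('JACK and JUE', 'J', 'BL') from dual;
BLACK and BLUE
SQL> select translate('JACK and JUE', 'J', 'B') from dual;
BACK and BUE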
What is the fastest query method to fetch data from a table?
Using ROWID is the fastest method to fetch data from a table.

What Oracle database background processes are specific to RAC?
LCK0 - Instance Enqueue Process
LMS - Global Cache Service Process
LMD - Global Enqueue Service Daemon
LMON - Global Enqueue Service Monitor
Oracle RAC instances use two services, the Global Cache Service (GCS) and the Global Enqueue Service (GES), to ensure that each Oracle RAC database instance obtains the block that it needs to satisfy a query or transaction. The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances.

What is SCAN in respect of Oracle RAC?
Single Client Access Name (SCAN) is an Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle database running in a cluster. The benefit is that clients using SCAN do not need to change their configuration if you add or remove nodes in the cluster.

Why do we have a virtual IP (VIP) in Oracle RAC?
Without a VIP, when a node fails the client waits for the timeout before getting an error, whereas with a VIP, when a node fails the VIP associated with it automatically fails over to some other node and the new node re-ARPs the world, indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which sends error RST packets back to the clients. This results in the clients getting errors immediately.

Why do queries fail sometimes?
Rollback segments dynamically extend to handle large transaction entry loads. A single transaction may occupy all available free space in the rollback segment tablespace. This situation prevents other users from using rollback segments. You can monitor rollback segment status by querying the DBA_ROLLBACK_SEGS view.

What are the ADPATCH and OPATCH utilities? Can you use both in Applications?
ADPATCH is a utility to apply application patches and OPATCH is a utility to apply database patches. You have to use both in Applications: for applying patches to the application you have to use ADPATCH, and for applying patches to the database you have to use OPATCH.

What is automatic refresh of a materialized view, and how will you find the last refresh time of a materialized view?
Since Oracle 10g, a complete refresh of a materialized view can be done with DELETE instead of TRUNCATE. To force the instance to do the refresh with TRUNCATE instead of DELETE, the parameter ATOMIC_REFRESH must be set to FALSE.
When it is FALSE, the mview refresh will be faster, no UNDO will be generated, and the whole data will be inserted after a truncate.
When it is TRUE, the mview refresh will be slower, UNDO will be generated, and the whole data will be deleted and re-inserted; thus we will have access to the data at all times, even while it is being refreshed.
If you want to find when the last refresh took place, you can query these views: dba_mviews, dba_mview_analysis, or dba_mview_refresh_times:
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mviews;
-or-
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS') from dba_mview_refresh_times;
-or-
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mview_analysis;
Why are more archivelogs generated when the database is in begin backup mode?
During begin backup mode, datafile headers get frozen, so row information cannot be retrieved; as a result the entire block is copied to the redo logs, thus more redo is generated, more log switches occur, and in turn more archivelogs are produced. Normally only deltas (change vectors) are logged to the redo logs.
The main reason is to overcome the fractured block. A fractured block is a block in which the header and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file. It is possible for the operating system utility to read a block in a half-updated state, so that the block that is copied to the backup media is updated in its first half, while the second half contains older data. In this case, the block is fractured.
For non-RMAN backups, ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP is used; when a tablespace is in backup mode and a change is made to a data block, the database logs a copy of the entire block image before the change, so that the database can reconstruct the block if media recovery finds that it was fractured.
The block that the operating system reads can be split: the top of the block is written at one point in time while the bottom of the block is written at another point in time. If you restore a file containing a fractured block and Oracle reads the block, then the block is considered corrupt.

Why is UNION ALL faster than UNION?
UNION ALL is faster than UNION because UNION ALL does not eliminate the duplicate rows from the base tables; instead it accesses all rows from all tables according to your query, whereas the UNION command selects only the related distinct information from the base tables, like the JOIN command. Thus, if you know that all the records of your query are unique, always use UNION ALL instead of UNION; it will give you faster results (see the short example at the end of this post).

How will you find whether your instance was started with an spfile or a pfile?
You can query the V$SPPARAMETER view:
SQL> select isspecified, count(*) from v$spparameter group by isspecified;
ISSPEC   COUNT(*)
------   ----------
FALSE    221
TRUE     39
As ISSPECIFIED is TRUE with some count, we can say that the instance is running with an spfile. Now try to start your database with a pfile and run the previous query again:
SQL> select isspecified, count(*) from v$spparameter group by isspecified;
ISSPEC COUNT(*)
------ ----------
FALSE  258
Then you will not find any parameter ISSPECIFIED in the spfile; they all come from the pfile, thus you can say the instance was started with a pfile.
Alternatively you can use the queries below:
SQL> show parameter spfile;
SQL> select decode(count(*), 1, 'spfile', 'pfile') from v$spparameter where rownum = 1 and isspecified = 'TRUE';

Why do we need to enable Maintenance Mode?
To ensure optimal performance and reduce downtime during patching sessions, enabling this feature shuts down the Workflow Business Events System and sets up function security so that Oracle Applications functions are unavailable to users.
This provides a clear separation between normal runtime operation and system downtime for patching.
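As referenced in the UNION ALL answer above, a minimal illustration (emp and emp_history are placeholder tables):
SQL> select deptno from emp union select deptno from emp_history;
-- sorts the combined rows and removes duplicates
SQL> select deptno from emp union all select deptno from emp_history;
-- returns every row from both tables with no duplicate elimination, hence faster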
otterhackerxyz · 6 years ago
DBA Interview Questions and Answers Part 22
I have configured RMAN with a recovery window of 3 days, but on my backup destination only one day of archive logs is visible while 3 days of database backups are available there. Why?
I went through the issue by checking the backup details using the LIST command. I found that 3 days of database as well as archivelog backups were already listed, and the backups were recoverable. Thus it was clear that for some reason the backups were not being retained in the backup location.
Connect RMAN to the target database with the catalog, then:
RMAN> list backup summary;
RMAN> list archivelog all;
RMAN> list backup recoverable;
When I checked db_recovery_file_dest_size, it was 5 GB and our flash recovery area was almost full; because of that, archive logs were automatically deleted from the backup location. When I increased db_recovery_file_dest_size, it worked fine.

If one or all of the control files get corrupted and you are unable to start the database, how can you perform recovery?
If one of your control files is missing or corrupted, then you have two options to recover: either delete the corrupted controlfile manually from its location, copy one of the available remaining controlfiles, and rename it as per the deleted one (you can check the alert.log for the exact name and location of the controlfiles); or delete the corrupted controlfile, remove its location from the pfile/spfile, and then start your database.
In another scenario, if all of your controlfiles get corrupted, then you need to restore them using RMAN. As currently none of the controlfiles is mounted, RMAN does not know about the backups or any pre-configured RMAN settings, so in order to use the backups we need to pass the DBID (SET DBID=691421794) to RMAN.
RMAN> restore controlfile from 'H:\oracle\Backup\C-1239150297-20130418';

You are working as a DBA and usually take a hot backup every night. But one day around 3:00 PM a table is dropped, and that table is very useful. How will you recover that table?
If your database is running on Oracle 10g and you have already enabled the recyclebin configuration, then you can easily recover the dropped table from USER_RECYCLEBIN or DBA_RECYCLEBIN by using the flashback feature of Oracle 10g.
SQL> select object_name, original_name from user_recyclebin;
BIN$T0xRBK9YSomiRRmhwn/xPA==$0 PAY_PAYMENT_MASTER
SQL> flashback table PAY_PAYMENT_MASTER to before drop;
Flashback complete.
In the case where no recyclebin is enabled for your database, you need to restore your backup on a TEST database and perform time-based recovery, applying all archives from before the drop command execution (here, apply archives up to 2:55 PM). It is not recommended to perform such a recovery on the production database directly, because it is a huge database and it will take time.
Note: if you are using the SYS user to drop any table, the object will not go to the recyclebin for the SYSTEM tablespace, even if you have already set the recyclebin parameter to 'true'. And if your database is running on Oracle 9i, you require incomplete recovery for the same.

Why is more archivelog sometimes generated?
There are many reasons, such as: more database changes were performed, either by import/export work, batch jobs, or any special task, or by taking a hot backup (for more details on why hot backup generates more archive, check my separate post). You can check it by enabling the LogMiner utility.

How can I know whether my required table is available in an export dump file or not?
You can create an index file for the export dump file using the import 'INDEXFILE' option.
A text file will be generated with all the table and index object names and the number of rows. You can confirm your required table object from this text file.

What is Cache Fusion technology?
Cache Fusion provides a service that allows Oracle to keep track of which nodes are writing to which blocks and to ensure that two nodes do not update duplicate copies of the same block. Cache Fusion technology can provide more resources and increase the concurrency of users internally. Here multiple caches are able to join and act as one global cache, thus solving issues like data consistency internally without any impact on the application code or design.

Why should we open the database using RESETLOGS after finishing incomplete recovery?
When we perform incomplete recovery, it is clear that we are bringing our database back to a past point in time; this recovery puts the database in a prior state. The forward sequence of numbers is already in use after performing recovery; due to the mismatch between these sequence numbers and the prior state of the database, the database needs to be opened with a new sequence of redo log and archive log numbers.

Why is an export backup called a logical backup?
An export dump file doesn't back up or contain any physical structures of the database such as datafiles, redolog files, pfile, password file, etc. Instead of physical structures, the export dump contains the logical structure of the database, like the definitions of tablespaces, segments, schemas, etc. For this reason an export dump is called a logical backup.

What are the differences between 9i and 10g OEM?
Oracle 9i OEM has limited capabilities and resources compared to Oracle 10g Grid Control. There are many enhancements in 10g OEM over 9i; several tools such as AWR and ADDM have been incorporated, and a SQL Tuning Advisor is also available.

Can we use the same target database as the catalog DB?
No. The recovery catalog should not reside in the target database, because the recovery catalog must be protected in the event of loss of the target database.

What is the difference between the CROSSCHECK and VALIDATE commands?
The VALIDATE command examines a backup set and reports whether it can be restored successfully, whereas the CROSSCHECK command verifies the status of backups and copies recorded in the RMAN repository against the media, such as disk or tape.

How do you identify or fix block corruption in an RMAN database?
You can use the V$DATABASE_BLOCK_CORRUPTION view to identify which block is corrupted, then use the BLOCKRECOVER command to recover it:
SQL> select file#, block# from v$database_block_corruption;
file# block#
10    1435
RMAN> blockrecover datafile 10 block 1435;

What is an auxiliary channel in RMAN? When is it required?
An auxiliary channel is a link to an auxiliary instance. If you do not have an automatic channel configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.

Explain the use of setting GLOBAL_NAMES equal to TRUE.
Setting GLOBAL_NAMES indicates how you might connect to the database. This variable is either TRUE or FALSE; if it is set to TRUE, it enforces that database links have the same name as the remote database to which they are linking.

How can you say your data in the database is valid or secure?
If the data of the database is validated, we can say that our database is secured. There are different ways to validate the data:
1. Accept only valid data.
2. Reject bad data.
3. Sanitize bad data.
Write a query to display all the odd-numbered rows from a table.
select * from (select employee_number, rownum rn from pay_employee_personal_info) where mod(rn, 2) <> 0;
-or- you can perform the same thing with the block below:
set serveroutput on;
begin
  for v_c1 in (select num from tab_no) loop
    if mod(v_c1.num, 2) = 1 then
      dbms_output.put_line(v_c1.num);
    end if;
  end loop;
end;
/

What is the difference between Trim and Truncate?
TRUNCATE is a DDL command which deletes the contents of a table completely without affecting the table structure, whereas TRIM is a function which changes the column output in a SELECT statement, removing the blank space from the left and right of a string.

When do you use the option clause "PASSWORD FILE" in the RMAN DUPLICATE command?
If you create a duplicate DB, not a standby DB, then RMAN does not copy the password file by default. You can specify the PASSWORD FILE option to indicate that RMAN should overwrite the existing password file on the auxiliary instance. If you create a standby DB, then RMAN copies the password file by default to the standby host, overwriting the existing password file.

What is Oracle GoldenGate?
Oracle GoldenGate is Oracle's strategic solution for real-time data integration. Oracle GoldenGate captures, filters, routes, verifies, transforms, and delivers transactional data in real time, across Oracle and heterogeneous environments, with very low impact and preserved transaction integrity. The transactional data management provides read consistency, maintaining referential integrity between source and target systems.

What is the meaning of LGWR SYNC and LGWR ASYNC in the log archive destination parameter for a standby configuration?
When using LGWR with SYNC, once network I/O is initiated, LGWR has to wait for completion of the network I/O before continuing write processing. LGWR with ASYNC means LGWR doesn't wait for the network I/O to finish and continues write processing.

What is the TRUNCATE command enhancement in Oracle 12c?
In previous releases, there was no direct option available to truncate a master table while a child table exists and has records. Now TRUNCATE TABLE with the CASCADE option in 12c truncates the records in the master as well as in all referenced child tables with an enabled ON DELETE constraint.
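A minimal sketch of the 12c enhancement just described (table names are placeholders; note the child's foreign key must be declared ON DELETE CASCADE):
SQL> create table dept_m (deptno number primary key);
SQL> create table emp_c (empno number,
  2    deptno number references dept_m(deptno) on delete cascade);
SQL> truncate table dept_m cascade;   -- truncates dept_m and emp_c together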
notsadrobotxyz · 6 years ago
Oracle DBA Interview Questions with Answers (All in One Doc)
1. General DB Maintenance
2. Backup and Recovery
3. Flashback Technology
4. Dataguard
5. Upgradation/Migration/Patches
6. Performance Tuning
7. ASM
8. RAC (Cluster/ASM/Oracle Binaries) Installation Link
9. Linux Operating System
10. PL/SQL

General DB Maintenance Question/Answer:

When we run a trace and TKPROF on a query, we see timing information for which three phases?
Parse -> Execute -> Fetch

Which parameter is used in the TNS connect identifier to specify the number of concurrent connection requests?
QUEUESIZE

What does the AFFIRM/NOAFFIRM parameter specify?
AFFIRM specifies that the redo transport service acknowledges after writing to the standby (SYNC), whereas NOAFFIRM specifies acknowledgement before writing to the standby (ASYNC).

After an upgrade task, which scripts are used to recompile invalid objects?
utlrp.sql and utlprp.sql

Too many cursors present in the library cache caused waits; which parameters need to be increased?
OPEN_CURSORS, SHARED_POOL_SIZE

When do we use RECOVER DATABASE USING BACKUP CONTROLFILE?
To synchronize the datafiles to the controlfile.

What is the use of the CONSISTENT=Y and DIRECT=Y parameters in export?
CONSISTENT=Y takes consistent values while taking the export of a table. Setting DIRECT=Y extracts data by reading it directly, bypassing the SGA and the SQL command-processing layer (evaluating buffer), so it should be faster. The default value is N.

What do the COMPRESS, SHOW, and SQLFILE parameters do during export/import?
If you use COMPRESS during export, it will put the entire data in a single extent. If you use SHOW=Y during import, it will read the entire dumpfile and confirm backup validity; even if you don't know the FROMUSER of the export, you can use this SHOW=Y option with import to check the FROMUSER. If you use the SQLFILE parameter (which contains all the DDL commands Import would have executed) with the import utility, you can find out whether the dumpfile is corrupted or not, because this utility will read the entire export dumpfile and report the status.

Can we import an 11g dumpfile into 10g using Data Pump? If so, is it also possible between 10g and 9i?
Yes, we can import from 11g to 10g using the VERSION option. This is not possible between 10g and 9i, as Data Pump is not there in 9i.

What do the KEEP_MASTER and METRICS parameters of Data Pump do?
KEEP_MASTER and METRICS are undocumented parameters of EXPDP/IMPDP. METRICS provides the time it took to process the objects, and KEEP_MASTER prevents the Data Pump master table from being deleted after an export/import job completes.

What happens when we fire a SQL statement in Oracle?
First Oracle checks the syntax and semantics in the library cache; after that it creates an execution plan. If the data is already in the buffer cache, it is returned directly to the client (soft parse); otherwise Oracle fetches the data from the datafiles and writes it to the database buffer cache (hard parse), after which the server process sends it to the client.

What are the differences between latches and locks?
1. Latch management is based on first in, first grab, whereas lock ordering is last come, first grab.
2. Locks can create deadlocks, whereas latches never create deadlocks; they are handled by Oracle internally. Latches relate only to SGA internal buffers, whereas locks relate to the transaction level.
3. Latches have only two states, WAIT or NOWAIT, whereas locks have six different states: DML locks (table and row level - DBA_DML_LOCKS), DDL locks (schema and structure level - DBA_DDL_LOCKS), DBA_BLOCKERS, further categorized into many more.

What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces; the dictionary tables are created in the SYSTEM tablespace. Tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.

What is the difference between a regular and an index-organized table?
The traditional (regular) table is based on a heap structure, where data is stored in unordered format, whereas an IOT is based on a binary tree structure and data is stored in ordered format with the help of the primary key. The IOT is useful in situations where access is commonly by the primary key in the WHERE clause. If an IOT is used in a SELECT statement without the primary key, query performance degrades.

What is table partitioning, and what are its uses and benefits?
Partitioning divides a big table into different named storage sections to improve query performance, as a query accesses only the particular partition instead of the whole range of the big table. Partitioning is based on a partition key. The three partition types are: Range, Hash, and List partitioning. Apart from tables, an index can also be partitioned using the above partition methods, either LOCAL or GLOBAL.
Range partition: (the example is missing in the original post; a sketch follows below)
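A minimal sketch of a range-partitioned table (all names and boundary dates are placeholders):
SQL> create table pay_txn (txn_id number, txn_date date)
  2  partition by range (txn_date)
  3  (partition p2018 values less than (to_date('01-01-2019','DD-MM-YYYY')),
  4   partition p2019 values less than (to_date('01-01-2020','DD-MM-YYYY')),
  5   partition pmax  values less than (maxvalue));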
How to deal with online redo log file corruption?
1. How to recover when only one redo log group is corrupted?
If your database is open and you lost or corrupted your logfile, first try to shut down your database normally, not with shutdown abort. If you lose or corrupt only one redo log group, then you only need to open the database with the RESETLOGS option; opening with the RESETLOGS option will re-create your online redo log files.
RECOVER DATABASE UNTIL CANCEL; then ALTER DATABASE OPEN RESETLOGS;
2. How to recover when all the online redo log files are corrupted?
When you lose all members of a redo log group, the maintenance steps depend on the group STATUS and on the database mode, ARCHIVELOG or NOARCHIVELOG.
If the affected redo log group has a status of INACTIVE, it is no longer required for crash recovery; issue either CLEAR LOGFILE or re-create the group manually:
ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- you are in archivelog mode and the group is not yet archived
ALTER DATABASE CLEAR LOGFILE GROUP 3; -- noarchivelog mode, or the group is already archived
If the affected redo log group has a status of ACTIVE, it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT; if successful, follow the steps for INACTIVE; if it fails, you need to perform incomplete recovery up to the previous log file and open the database with the RESETLOGS option.
If the affected redo log group is CURRENT, LGWR stops writing and you have to perform incomplete recovery up to the last log file and open the database with the RESETLOGS option; if your database is in NOARCHIVELOG mode, perform complete recovery with the last cold backup.
Note: when an online redo log is UNUSED/STALE it has never been written to; it is a newly created logfile.

What is the function of the shared pool in the SGA?
The shared pool is the most important area of the SGA. It controls almost all sub-areas of the SGA. A shortage of shared pool may result in high library cache reloads and shared pool latch contention errors. The two major components of the shared pool are the library cache and the dictionary cache.
The library cache contains current SQL execution plan information; it also holds PL/SQL procedures and triggers.
The dictionary cache holds environmental information, which includes referential integrity, table definitions, indexing information, and other metadata.

Backup & Recovery Question/Answer:

Can the target database be the catalog database?
No, the recovery catalog cannot be the same as the target database, because whenever the target database undergoes a restore and recovery process it must be in the MOUNT stage, and in that period we cannot access the catalog information, as the database is not open.

What is the use of the large pool, and in which cases do you need to set it?
You need to set the large pool if you are using MTS (Multi-Threaded Server) or RMAN backups. The large pool prevents RMAN and MTS from competing with other subsystems for the same memory (it is a specific allotment for this job). RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory, and then there is no use for the large pool.

How do you take a user-managed backup in RMAN, or how do you make use of an obsolete backup?
By using the CATALOG command:
RMAN> CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010';
It will search for all files matching the pattern on the destination and ask for confirmation to catalog them. Or you can directly change the backup set's KEEP UNTIL TIME using an RMAN command to make an obsolete backup usable:
RMAN> change backupset 3916 keep until time "to_date('01-MAY-2010','DD-MON-YYYY')" nologs;
This is important in situations where our backup has become obsolete due to the RMAN retention policy, or we have already restored prior to that backup.

What is the difference between using a recovery catalog and a control file?
When a new incarnation happens, the old backup information in the control file will be lost, whereas it will be preserved in the recovery catalog. In the recovery catalog we can store scripts. The recovery catalog is central and can hold information for many databases. This is the reason we must take a fresh backup after a new incarnation of the control file.

What is the benefit of Block Media Recovery, and how do you do it?
Without block media recovery, if a single block is corrupted you must take the datafile offline and then restore all backups and archive logs; thus the entire datafile is unavailable until the process is over. In the case of block media recovery the datafile remains online, and only the particular block that needs recovery is unavailable. You can find the details of corrupted blocks in the V$DATABASE_BLOCK_CORRUPTION view as well as in the alert/trace files.
Connect RMAN to the target database in the MOUNT phase:
RMAN> recover datafile 8 block 13;
RMAN> recover corruption list;  -- to recover all the corrupted blocks at a time
In respect of the Oracle 11g Active Data Guard feature (physical standby), where real-time query is possible, corruption repair can be performed automatically: the primary database searches for good copies of the block on the standby, and if they are found, it repairs the block with no impact on the query that encountered the corrupt block.
By default RMAN first searches for the good block in the real-time physical standby database, then the flashback logs, then full and incremental RMAN backups.

What are the advantages of Data Pump over traditional Export?
1. Data Pump supports parallelism; it can write multiple dump files instead of a single sequential dump.
2. Data can be exported from a remote database by using a database link.
3. Consistent export with FLASHBACK_SCN and FLASHBACK_TIME is supported in Data Pump.
4. It has the ability to attach to/detach from a job and to monitor the job remotely.
5. The ESTIMATE_ONLY option can be used to estimate disk space requirements before performing the job.
6. An explicit DB version can be specified so that only supported objects are exported.
7. Data can be imported from one DB to another DB without writing into a dump file, using NETWORK_LINK.
8. During impdp we can change the target file name, schema and tablespace using the REMAP_DATAFILE, REMAP_SCHEMA and REMAP_TABLESPACE parameters.

Why is Datapump faster than traditional Export, and what can be done to increase Datapump performance?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses the direct path API and the network link feature.
Data Pump export/import accesses files on the server rather than the client, by granting access to a directory structure.
Data Pump has self-tuning utilities; the tuning parameters BUFFER and RECORDLENGTH are no longer needed.
The following initialization parameters must be set to increase Data Pump performance:
· DISK_ASYNCH_IO=TRUE
· DB_BLOCK_CHECKING=FALSE
· DB_BLOCK_CHECKSUM=FALSE
The following initialization parameters must be set high to increase Data Pump parallelism:
· PROCESSES
· SESSIONS
· PARALLEL_MAX_SERVERS
· SHARED_POOL_SIZE and UNDO_TABLESPACE
Note: you must set a reasonable amount of STREAMS_POOL_SIZE as per database size if the SGA_MAXSIZE parameter is not set. If SGA_MAXSIZE is set, a reasonable size is picked up automatically.

Flashback Question/Answer

Flashback Archive features in Oracle 11g
Flashback archiving provides extended undo-based recovery over a year or a lifetime, as per the retention period and destination size.

Limitations or restrictions on the Flashback Drop feature?
1. The recyclebin feature is only for non-system and locally managed tablespaces.
2. When you drop any table, all the objects associated with that table go to the recyclebin, and generally the same reverses with flashback, but sometimes due to space pressure the associated indexes are aged out of the recyclebin. Flashback is not able to reverse referential constraints and materialized view logs.
3. A table with fine-grained auditing active can be protected by the recyclebin; partitioned index-organized tables are not protected by the recyclebin.

Limitations or restrictions on the Flashback Database feature?
1. Flashback cannot be used to repair corrupt or shrunk datafiles. If you try to flashback the database over the period when a drop datafile happened, it records only the datafile entry into the controlfile.
2. If the controlfile is restored or re-created, you cannot use flashback over the point in time when it was restored or re-created.
3. You cannot flashback a NOLOGGING operation. If you try to flashback over the point in time when a NOLOGGING operation happened, the result is block corruption after the flashback database. Thus it is strongly recommended to perform a backup after a NOLOGGING operation.

What are the advantages of Flashback Database over Flashback Table?
1. Flashback Database works through all DDL operations, whereas Flashback Table does not work with structural changes such as adding/dropping a column, adding/dropping constraints, or truncating the table. During a Flashback Table operation a DML exclusive lock is taken on that particular table while the flashback operation is going on; this lock prevents any operation on the table during this period, and only rows are replaced with old rows.
2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table.
3. Flashback Table cannot be used on a standby database. Hedged examples of both features follow below.
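For illustration only, minimal hedged examples of the two flashback features discussed above (the object name and timestamp are hypothetical):
SQL> FLASHBACK TABLE emp TO BEFORE DROP;                 -- undo a DROP TABLE via the recyclebin
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO TIMESTAMP (SYSDATE - 1/24);   -- rewind the whole database one hour
SQL> ALTER DATABASE OPEN RESETLOGS;
Flashback Database requires the flash recovery area and flashback logging (ALTER DATABASE FLASHBACK ON) to have been enabled beforehand.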
How should I set up the database to improve Flashback performance?
Use a fast file system (ASM) for your flash recovery area, and configure enough disk space for the file system that will hold the flash recovery area to enable setting the maximum retention target. If the storage system used to hold the flash recovery area does not have non-volatile RAM (ASM), try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance. For large production databases set LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.

Performance Tuning Question/Answer:

If you are getting complaints that the database is slow, what should your first steps be to check the DB performance issues?
In the case of performance-related issues, as a DBA our first step is to check all the sessions connected to the database to know exactly what each session is doing, because sometimes unexpected hits lead to object locking which slows down DB performance. Database performance is directly related to network load, data volume and the running SQL profile.
1. Check the events that have been waiting for a long time. If you find object locking, killing that session (DML locking only) will solve the issue. To check the user sessions and waiting events, use a join query on the views V$SESSION and V$SESSION_WAIT (a hedged sketch follows below).
2. After locking, the other major thing that affects database performance is disk I/O contention (when a session retrieves information from datafiles on disk into the buffer cache, it has to wait until the disk sends the data). This waiting time is what we need to minimize.
We can check these waiting events for the session in terms of db file sequential read (single block read, P3=1, usually the result of an index scan) and db file scattered read (multi-block read, P3>=2, usually the result of a full table scan) using a join query on the view V$SYSTEM_EVENT:
SQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ"
  2    FROM sys.v_$system_event a, sys.v_$system_event b
  3   WHERE a.event = 'db file sequential read' AND b.event = 'db file scattered read';
  SEQ READ  SCAT READ
---------- ----------
       .74        1.6
When you find the event is waiting for I/O to complete, you need to reduce the waiting time to improve DB performance. To reduce this waiting time you need to perform SQL tuning to reduce the number of blocks retrieved by particular SQL statements.
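For illustration, a hedged sketch of the V$SESSION / V$SESSION_WAIT join mentioned above (in later releases the wait columns are also available directly in V$SESSION):
SQL> SELECT s.sid, s.username, s.sql_id, w.event, w.wait_time, w.seconds_in_wait
     FROM   v$session s, v$session_wait w
     WHERE  s.sid = w.sid
     AND    s.username IS NOT NULL
     ORDER  BY w.seconds_in_wait DESC;
Sessions stuck on enqueue or buffer busy events at the top of this list are the usual suspects for object locking.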
How to perform SQL Tuning?
1. First of all you need to identify the high-load SQL statements. You can identify them from the AWR report TOP 5 SQL statements (queries taking more CPU and having a low execution ratio). Once you decide to tune a particular SQL statement, the first thing to do is to run the Tuning Optimizer. The Tuning Optimizer will decide: the access method of the query, the join method of the query and the join order.
2. To examine the particular SQL statement, check whether the query is doing a full table scan: if no index is applied, use the proper indexing technique for the table; if an index is already applied but the query is still doing a full table scan, then maybe the table has the wrong indexing technique, so try to rebuild the index. This will solve the issue in many cases; otherwise use the next steps of performance tuning.
3. Enable the trace file before running your queries, then check the trace file using the tkprof-created output file. According to the explain plan, check the elapsed time for each query, and then tune them respectively.
To see the output of the plan table you first need to create the plan_table (@$ORACLE_HOME/rdbms/admin/utlxplan.sql) and create a public synonym for it:
SQL> create public synonym plan_table for sys.plan_table;
4. Run the SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql) by providing the SQL_ID as you find it in the V$SESSION view. You can grant rights to a particular schema for the use of the SQL Tuning Advisor:
         Grant Advisor to HR;
         Grant Administer SQL Tuning set to HR;
The SQL Tuning Advisor will check your SQL structure and statistics. The SQL Tuning Advisor suggests indexes that might be very useful, suggests query rewrites, and suggests a SQL profile (automatically reported each time).
5. In Oracle 11g the SQL Access Advisor is used to suggest new indexes for materialized views.
6. More: run the TOP command in Linux to check CPU usage information, and run the VMSTAT, SAR and PRSTAT commands to get more information on CPU and memory usage and possible blocking.
7. Optimizer statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.
8. A SQL Profile contains object-level (auxiliary) statistics that help the optimizer select the optimal execution plan for a particular SQL statement. It contains object-level statistics, correcting the statistics level and giving the Tuning Advisor an option for the most relevant SQL plan generation.
DBMS_SQLTUNE.ACCEPT_SQL_PROFILE – to accept the correct plan from SQL*Plus
DBMS_SQLTUNE.ALTER_SQL_PROFILE – to modify/replace an existing plan from SQL*Plus
DBMS_SQLTUNE.DROP_SQL_PROFILE – to drop an existing plan
Profile types: REGULAR-PROFILE, PX-PROFILE (with change to parallel exec)
SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM DBA_SQL_PROFILES;
9. SQL Plan Baselines are a new feature in Oracle Database 11g (previously stored outlines and SQL Profiles were used) that help prevent repeatedly used SQL statements from regressing because a newly generated execution plan is less effective than what was originally in the library cache. Whenever the optimizer generates a new plan it goes into the plan history table; only after the plan is evolved or verified, and only if the plan is better than the previous plan, does it go into the plan table. You can manually check the plan history table and accept a better plan manually: the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can be used to change the status of plans in the SQL history to Accepted, which in turn moves them into the SQL baseline, and the EVOLVE_SQL_PLAN_BASELINE function of the DBMS_SPM package can be used to see which plans have been evolved. There is also a facility to fix a specific plan so that the plan will not change automatically even if a better execution plan is available. The plan baseline view: DBA_SQL_PLAN_BASELINES.
Why use SQL Plan Baseline, How to Generate new plan using Baseline
10. SQL Performance Analyzer allows you to test and analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set. A hedged sketch of invoking the Tuning Advisor from PL/SQL follows below.
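For illustration, a hedged sketch of running the SQL Tuning Advisor manually with DBMS_SQLTUNE (the sql_id and task name values are hypothetical):
SQL> DECLARE
       l_task VARCHAR2(30);
     BEGIN
       l_task := DBMS_SQLTUNE.CREATE_TUNING_TASK(sql_id => 'abcd1234efgh5', time_limit => 300);
       DBMS_SQLTUNE.EXECUTE_TUNING_TASK(task_name => l_task);
     END;
     /
SQL> SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK('TASK_123') FROM dual;  -- use the task name returned above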
Which factors are to be considered for creating an index on a table? How do you select a column for an index?
1. Creation of an index on a table depends on the size of the table and the volume of data. If the table is large and you need only a few rows (< 15% of rows) retrieved in a report, then you need to create an index on that table.
2. Primary key and unique key columns automatically have indexes; you might concentrate on creating indexes on foreign keys, where indexing can improve the performance of joins on multiple tables.
3. The column best suited for indexing is one whose values are relatively unique (through which you can access complete table records): a wide range of values in the column is good for a regular index, whereas a small range of values is good for a bitmap index; or the column contains many nulls but queries select only the rows having a value.
CREATE INDEX emp_ename ON emp_tab(ename);
A column is not suitable for indexing if it has many nulls but you cannot search for non-null values, or if it is a LONG or LONG RAW column.
CAUTION: The size of a single index entry cannot exceed one-half of the available space in a data block.
More indexes on a table create more overhead, as with each DML operation on the table all indexes must be updated. It is important to note that creating so many indexes would affect the performance of DML on the table, because a single transaction would need to operate on various index segments and the table simultaneously.

What are the different types of index? Is creating an index online possible?
1. Function-based index, 2. Bitmap index, 3. Binary tree index, 4. Implicit or explicit index, 5. Domain index.
You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on those tables. You can perform DML operations while the index build is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.
An index can be considered for rebuilding under any of these circumstances (we must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE and ANALYZE INDEX COMPUTE STATISTICS commands):
* The % of deleted rows exceeds 30% of the total rows (depending on table length).
* The HEIGHT is greater than 4, as with a height of level 3 we can insert millions of rows.
* The number of rows in the index (LF_ROWS) is significantly smaller than LF_BLKS; this can indicate a large number of deletes, indicating that the index should be rebuilt.
A hedged example of the validation and online rebuild commands follows below.
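For illustration, hedged examples of the validation and online-rebuild commands discussed above (the index name reuses the example from earlier):
SQL> ANALYZE INDEX emp_ename VALIDATE STRUCTURE;
SQL> SELECT height, lf_rows, lf_blks, del_lf_rows FROM index_stats;  -- populated by the ANALYZE above
SQL> ALTER INDEX emp_ename REBUILD ONLINE;
Note that INDEX_STATS holds data for the last analyzed index only, and only for the current session.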
Differentiate the use of a Bitmap index and a Binary Tree index.
Bitmap indexes are preferred in a data warehousing environment when cardinality is low, i.e. the column usually has repeated or duplicate values. A bitmap index can index null values. Binary tree indexes are preferred in an OLTP environment when cardinality is high, i.e. the column has many distinct values. A binary tree index cannot index null values.

If you are getting high "buffer busy waits", how can you find the reason behind them?
A buffer busy wait means that queries are waiting for blocks to be read into the db cache. The reason could be that the block is busy in the cache and the session is waiting for it. It could be an undo/data block or a segment header wait. Run the first query below to find the P1, P2 and P3 of a session causing buffer busy waits, then run the second query using those P1, P2 and P3 values:
SQL> Select p1 "File #", p2 "Block #", p3 "Reason Code" from v$session_wait Where event = 'buffer busy waits';
SQL> Select owner, segment_name, segment_type from dba_extents Where file_id = &P1 and &P2 between block_id and block_id + blocks - 1;

What are STATSPACK and AWR reports? Is there any difference? As a DBA, what should you look into in a STATSPACK or AWR report?
STATSPACK and AWR are tools for performance tuning. AWR is a new feature from Oracle 10g onwards, whereas STATSPACK reports were commonly used in earlier versions, but you can still use STATSPACK in Oracle 10g too. The basic difference is that STATSPACK snapshot purges must be scheduled manually, whereas AWR snapshots are purged automatically by the MMON background process every night. AWR contains the view DBA_HIST_ACTIVE_SESS_HISTORY to store ASH statistics, whereas STATSPACK does not store ASH statistics.
You can run $ORACLE_HOME/rdbms/admin/spauto.sql to gather the STATSPACK report (note that job_queue_processes must be set > 0), and awrrpt to gather the AWR report for a standalone environment or awrgrpt for a RAC environment (a hedged sketch of the snapshot commands appears after this answer block below).
In general, as a DBA, the following list of information must be checked in a STATSPACK/AWR report:
¦ Top 5 wait events (db file seq read, CPU time, db file scattered read, log file sync, log buffer space)
¦ Load profile (DB CPU (per sec) < core configuration, and the ratio of hard parses must be < parses)
¦ Instance efficiency hit ratios (%Non-Parse CPU nearer to 100%)
¦ Top 5 timed foreground events (if the wait class is 'Concurrency' there is a problem; if User I/O or System I/O, then OK)
¦ Top 5 SQL (check queries having low executions and high elapsed time, or taking high CPU and low executions)
¦ Instance activity
¦ File I/O and segment statistics
¦ Memory allocation
¦ Buffer waits
¦ Latch waits
1. After getting the AWR report, initially cross-check CPU time, DB time and elapsed time. CPU time means the total time taken by the CPU, including wait time; DB time includes both CPU time and user call time, whereas elapsed time is the time taken to execute the statement.
2. Look at the Load Profile report: here DB CPU (per sec) must be less than the cores in the host configuration. If it is not, there is a CPU bound and more CPU is needed (check whether it happens for a fraction of the time or all the time), then look in this report at Parse and Hard Parse. If the ratio of hard parses is more than parses, then look at cursor sharing and at the application level for bind variables etc.
3. Look at the Instance Efficiency report: in these statistics look at '%Non-Parse CPU'; if this value is near 100%, most of the CPU resources are used for operations other than parsing, which is good for database health.
4. Look at the Top 5 Timed Foreground Events: here you should look at the 'wait class'; if the wait class is User I/O or System I/O then OK; if it is 'Concurrency' then there is a serious problem. Then look at Time(s) and Avg Wait Time(s): if Time(s) is high and Avg Wait Time(s) is low, you can ignore it; if both are high, further investigation is needed (maybe log file switch or checkpoint incomplete).
5. Look at the Time Model Statistics report: this is a detailed report of system resource consumption, ordered by Time(s) and % of DB time.
6. Operating system statistics report.
7. SQL ordered by elapsed time: in this report look for queries having low executions and high elapsed time, which you have to investigate, and also look for queries using the highest CPU time but with low executions.

What is the difference between DB file sequential read and DB file scattered read?
DB file sequential read is associated with index reads, whereas DB file scattered read has to do with full table scans. DB file sequential read reads a block into contiguous (single block) memory, while DB file scattered read gets multiple blocks and scatters them into the buffer cache.
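For illustration, hedged commands for taking a manual AWR snapshot and generating the report (the script paths are the standard ones and are assumed here):
SQL> EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();
SQL> @?/rdbms/admin/awrrpt.sql        -- single instance report
SQL> @?/rdbms/admin/awrgrpt.sql       -- RAC-wide report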
Dataguard Question/Answer

What are the benefits of Data Guard?
Using the Data Guard feature in your environment gives the following benefits: high availability, data protection, offloading backup operations to the standby, automatic gap detection and resolution on the standby database, and automatic role transitions using the Data Guard broker.
Oracle Dataguard is classified into two types:
1. Physical standby (Redo Apply technology)
2. Logical standby (SQL Apply technology)
Physical standbys are created as exact copies (matching the schema) of the primary database and are kept always in recoverable mode (mount stage, not open mode). In a physical standby database, transactions happening in the primary database are synchronized using the Redo Apply method, by continually applying redo data received from the primary database to the standby database. A physical standby database can be opened for read-only transactions only while redo apply is not going on. But from 11g onward, using the Active Data Guard option (extra purchase), you can simultaneously open the physical standby database for read-only access and apply redo logs received from the primary in the meantime.
A logical standby does not match at the schema level and uses the SQL Apply method to synchronize the logical standby database with the primary database. The main advantage of a logical standby database over a physical standby is that you can use the logical standby database for reporting purposes while SQL is being applied.

What are the different services available in Oracle Data Guard?
1. Redo Transport Service: transmits the redo from primary to standby (SYNC/ASYNC method). It is responsible for managing gaps in the redo logs due to network failure. It detects corrupted archive logs on the standby system and automatically performs replacement from the primary.
2. Log Apply Service: applies the archived redo logs to the standby. The MRP process does this task.
3. Role Transition Service: controls the changing of database roles from primary to standby; this includes switchover, switchback and failover.
4. DG Broker: controls the creation and monitoring of Data Guard through a GUI and command line.

What are the different protection modes available in Oracle Data Guard? How can you check and change them?
1. Maximum performance (default): provides the highest level of data protection that is possible without affecting the performance of the primary database, allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.
2. Maximum protection: this protection mode ensures that no data loss will occur if the primary database fails. In this mode the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down rather than continue processing transactions.
3. Maximum availability: provides the highest level of data protection that is possible without compromising the availability of the primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.
A hedged sketch of checking and changing the mode follows below.
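For illustration, hedged commands to check and change the protection mode (the target mode here is just an example):
SQL> SELECT protection_mode, protection_level FROM v$database;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> ALTER DATABASE OPEN;
Raising the mode to maximum availability/protection also requires SYNC redo transport to at least one standby destination.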
Steps to create a physical standby database?
On the primary site:
1. Enable force logging: Alter database force logging;
2. Create redo log groups for the standby on the primary server:
Alter database add standby logfile ('/u01/oradata/--/standby_redo01.log') size 100m;
3. Set up the primary database pfile by changing the required parameters:
Log_archive_dest_n – the primary database must be running in archive mode
Log_archive_dest_state_n
Log_archive_config – enables or disables the redo stream to the standby site
Log_file_name_convert, Db_file_name_convert – these parameters are used when you are using a different directory structure on the standby database; they update the location of datafiles in the standby database
Standby_file_management – by setting this to AUTO, when an Oracle file is added to or dropped from the primary, the change is automatically made on the standby
Db_unique_name, Fal_server, Fal_client
4. Create a password file for the primary.
5. Create a controlfile for the standby database on the primary site:
alter database create standby controlfile as 'STAN.ctl';
6. Configure the listener and tnsnames on the primary database.
On the standby site:
1. Copy the primary site pfile and modify it as per the standby name and location.
2. Copy the password file from the primary and modify the name.
3. Start the standby database in nomount using the modified pfile and create an spfile from it.
4. Use the created controlfile to mount the database.
5. Now enable DG Broker to activate the primary-standby connection.
6. Finally start redo log apply.

How to enable/disable the log apply service for a standby?
Alter database recover managed standby database disconnect; -- apply in background
Alter database recover managed standby database using current logfile; -- apply in real time
Alter database start logical standby apply immediate; -- to start SQL Apply for a logical standby database

What are the different ways to manage a long gap on a standby database?
Due to network issues a gap is sometimes created between the primary and standby database; once the network issue is resolved the standby automatically starts applying redo logs to fill the gap, but when the gap is too long we can fill it through an RMAN incremental backup, in three ways:
1. Check the actual gap, perform an incremental backup and use this backup to recover the standby site.
2. Create a controlfile for the standby on the primary and restore the standby using the newly created controlfile.
3. Register the missing archive logs.
Use the V$ARCHIVED_LOG view to find the gap (archived but not applied yet), then find the CURRENT_SCN, take an RMAN incremental backup from the physical site up to that SCN, and apply it on the standby site with the recover database noredo option (a hedged sketch follows below). Use the controlfile creation method only when the normal backup method fails: create a new controlfile for the standby on the primary site using backup current controlfile for standby; copy this controlfile to the standby site, then start the standby in nomount using the pfile and restore the standby using this controlfile: restore standby controlfile from '/location of file'; and start MRP to test.
If the alert.log still shows that logs are transferred to the standby but not applied, then you need to register these logs with the standby database: Alter database register logfile '/backup/temp/arc10.rc';
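For illustration, a hedged sketch of the SCN-based incremental roll-forward described above (the SCN value and paths are hypothetical):
On primary:
RMAN> BACKUP INCREMENTAL FROM SCN 4567890 DATABASE FORMAT '/backup/stby_%U.bak';
On standby (after copying the backup pieces across):
RMAN> CATALOG START WITH '/backup/';
RMAN> RECOVER DATABASE NOREDO;
Then restart managed recovery to resume normal redo apply.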
What is the Active Data Guard feature in Oracle 11g?
In a physical standby database prior to 11g, you are not able to query the standby database while redo apply is going on; 11g solves this issue, and by querying CURRENT_SCN from the V$DATABASE view you are able to view records while redo logs are being applied. Thus the Active Data Guard feature of 11g allows a physical standby database to be open in read-only mode while media recovery is going on through the Redo Apply method; you can also open a logical standby in read/write mode while media recovery is going on through the SQL Apply method.

How can you find out the backlog of a standby?
You can perform a join query on V$ARCHIVED_LOG and V$MANAGED_STANDBY.

What is the difference between normal Redo Apply and Real-time Apply?
Normally, once a log switch occurs on the primary, the archiver process transmits it to the standby destination and the remote file server (RFS) on the standby writes this redo log data into an archive. Finally the MRP service applies these archives to the standby database. This is called the Redo Apply service.
In real-time apply, LGWR or the archiver on the primary writes redo data directly to the standby; there is no need to wait for the current archive to be archived. Once a transaction is committed on the primary, the committed change is available on the standby in real time, even without switching the log.

What are the background processes for Data Guard?
On the primary:
Log Writer (LGWR): collects redo information and updates the online redo logs. It can also create local archived redo logs and transmit online redo to the standby.
Archiver Process (ARCn): one or more archiver processes make copies of online redo logs to the standby location.
Fetch Archive Log (FAL_server): services requests for archive logs from the client running on a different standby server.
On the standby:
Fetch Archive Log (FAL_client): pulls archives from the primary site and automatically initiates transfer of archives when it detects a gap.
Remote File Server (RFS): receives archives into the standby redo logs from the primary database.
Archiver (ARCn): archives the standby redo logs applied by the managed recovery process.
Managed Recovery Process (MRP): applies archived redo logs to the standby server.
Logical Standby Process (LSP): applies SQL to the standby server.

ASM/RAC Question/Answer

What is the use of ASM, or why is ASM preferred over a filesystem?
ASM provides striping and mirroring. You must put Oracle CRD files and the spfile on ASM. In 12c you can put the Oracle password file in ASM as well. It facilitates online storage changes, and RMAN also recommends backing up ASM-based databases.

What are the different types of striping in ASM and their differences?
Fine-grained striping is smaller in size and always writes data in 128 KB chunks to each disk; coarse-grained striping is bigger in size and can write data as per the ASM allocation unit, which by default is 1MB.

What is the default memory allocation for ASM? How will you back up ASM metadata?
The default memory allocation for ASM in Oracle 10g is 1GB, in Oracle 11g 256M, and in 12c it is set back again to 1GB.
You can back up ASM metadata (the ASM disk group configuration) using md_backup.

How do you find out which databases are connected to ASM, and list the disk groups?
ASMCMD> lsct
SQL> select DB_NAME from V$ASM_CLIENT;
ASMCMD> lsdg
select NAME, ALLOCATION_UNIT_SIZE from v$asm_diskgroup;
What are the required parameters for ASM instance creation?
INSTANCE_TYPE = ASM (by default it is RDBMS)
DB_UNIQUE_NAME = +ASM1 (by default it is +ASM, but you need to alter it to run multiple ASM instances)
ASM_POWER_LIMIT = 11: defines the maximum power for a rebalancing operation on ASM; by default it is 1 and can be increased up to 11. The higher the limit, the more resources are allocated, resulting in faster rebalancing. It is a dynamic parameter which is useful for rebalancing the data across disks.
ASM_DISKSTRING = '/u01/dev/sda1/c*': specifies a value that can be used to limit the disks considered for discovery. Altering the default value may improve the speed of disk group mounts and the speed of adding a disk to a disk group.
ASM_DISKGROUPS = DG_DATA, DG_FRA: the list of disk groups that will be mounted at instance startup, where DG_DATA holds all the datafiles and FRA holds the fast recovery area, including online redo logs and control files. Typically the FRA disk group size will be twice that of the DATA disk group, as it holds all the backups.

How to create an spfile for an ASM database?
SQL> CREATE SPFILE FROM PFILE = '/tmp/init+ASM1.ora';
Start the instance with the NOMOUNT option. Once an ASM instance is present, disk groups can be used for the following parameters in the database instance to allow ASM file creation:
DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILES, LOG_ARCHIVE_DEST_n, LOG_ARCHIVE_DEST, STANDBY_ARCHIVE_DEST

What are the DISKGROUP redundancy levels?
Normal redundancy: two-way mirroring with 2 failure groups, with 3 quorum (optionally, to store vote files)
High redundancy: three-way mirroring requiring three failure groups
External redundancy: no mirroring, for disks that are already protected using RAID at the OS level.
CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY  FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1,'/devices/diska2' NAME diska2  FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1,'/devices/diskb2' NAME diskb2;

We are going to migrate to new storage. How do we move an ASM database from storage A to storage B?
First prepare the disks at the OS level so that both the new and the old storage are accessible to ASM, then simply add the new disks to the ASM disk group and drop the old disks. ASM will perform an automatic rebalance whenever the storage changes. There is no need for manual I/O tuning.
ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk '/dev/sddb1', '/dev/sddc1', '/dev/sddd1';
A hedged sketch of monitoring the rebalance follows below.
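For illustration, a hedged sketch of monitoring and throttling the automatic rebalance triggered by such a disk change (the power value is just an example):
SQL> ALTER DISKGROUP data REBALANCE POWER 8;   -- speed up the ongoing rebalance
SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;
Once V$ASM_OPERATION returns no rows for the disk group, the rebalance is complete and the old disks can be detached at the OS level.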
What are the required components of an Oracle RAC installation?
1. Oracle ASM shared disks to store the OCR and voting disk files.
2. OCFS2 for a Linux clustered database.
3. A certified Network File System (NFS).
4. Public IP: TCP/IP configuration (to manage the database storage system).
5. Private IP: to manage RAC clusterware (cache fusion) internally.
6. SCAN IP (listener): all connections to the Oracle RAC database use the SCAN in their client connection string; with SCAN you do not have to change the client connection even if the configuration of the cluster changes (a node is added or removed). A maximum of 3 SCAN listeners run in Oracle.
7. Virtual IP: an alternate IP assigned to each node, which is used to deliver notification of node failure to the active node without waiting for the actual timeout; thus a switchover can happen automatically to another active node to continue processing user requests.

Steps to configure a RAC database:
1. Install the same OS level on each node/system.
2. Create the required groups and Oracle user accounts.
3. Create the required directory structure for the CRS and DB homes.
4. Configure kernel parameters (sysctl.conf) as per the installation doc, and set shell limits for the Oracle user account.
5. Edit the /etc/hosts file and specify public/private/virtual IPs for each node.
6. Create the required partitions for OCR/votedisk and the ASM disk groups.
7. Install the OCFS2 and ASM RPMs and configure them on each node.
8. Install the clusterware binaries, then the Oracle binaries, on the first node.
9. Invoke netca to configure the listener.
10. Finally invoke DBCA to configure ASM to store the database CRD files and create the database.

What are the structural changes in Oracle 11g R2?
1. Grid (ASM + clusterware) is in one home (Oracle binaries and ASM binaries were separate in 10g).
2. OCR and voting disk are on ASM.
3. SCAN listener.
4. srvctl can manage disk groups, the SCAN listener, the Oracle home, ONS, VIP and OC4J.
5. GSD.

What are the Oracle RAC services?
Cache Fusion: Cache Fusion is a technology that uses high-speed inter-process communication (IPC) to provide cache-to-cache transfer of data blocks between different instances in a cluster. This eliminates disk I/O, which is very slow. For example, instance A needs to access a data block which is owned/locked by another instance B. In such a case instance A requests the data block from instance B and hence accesses the block through IPC; this concept is known as Cache Fusion.
Global Cache Service (GCS): This is the heart of Cache Fusion, which maintains data integrity in a RAC environment when more than one instance needs a particular data block. For instance A's request, GCS tracks that information: if it finds read/write contention (one instance is ready to read while the other is busy updating the block) for that particular block with instance B, then the holding instance creates a CR image of that block in its own buffer cache and ships this CR image to the requesting instance via IPC; in the case of write/write contention (both instances are ready to update the particular block), the holding instance creates a PI image of the block in its own buffer cache, makes the redo entries and ships the particular block to the requesting instance. DBA_HIST_SEG_STAT can be used to check the latest objects shipped.
Global Enqueue Service (GES): GES performs concurrency control (more than one instance accessing the same resource) on dictionary cache locks, library cache locks and transactions. It handles the different locks such as transaction locks, library cache locks, dictionary cache locks and table locks.
Global Resource Directory (GRD): As we know, to perform any operation on a data block we need to know the current state of that particular data block. GCS (LMSn + LMD) and GES keep track of the resources, their locations and their status (of each datafile and each cached block), and this information is recorded in the Global Resource Directory (GRD). Each instance maintains its own part of the GRD; whenever a block transfers out of a local cache, its GRD entry is updated.

Main components of Oracle RAC clusterware?
OCR (Oracle Cluster Registry): OCR manages Oracle clusterware information (all nodes, CRS, CSD, GSD info) and Oracle database configuration information (instance, services, database state info).
OLR (Oracle Local Registry): OLR resides on every node in the cluster and manages Oracle clusterware configuration information for that particular node. The purpose of OLR, in the presence of OCR, is to initiate the startup with the local node information, as the OCR is on GRID/ASM and an ASM file is available only when the grid has started. The OLR makes it possible to locate the voting disk, which has the information about the other nodes, for communication purposes.
Voting disk: the voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for a node to be a member of the cluster. If a node fails or gets separated from the majority, it is forcibly rebooted, and after rebooting it is added back to the surviving nodes of the cluster. Hedged verification commands for these components follow below.
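For illustration, hedged commands to verify these clusterware components from any node (all standard 11gR2 clusterware utilities):
# crsctl check cluster -all          -- CRS/CSS/EVM status on all nodes
# crsctl stat res -t                 -- resource status in tabular form
# ocrcheck                           -- OCR location and integrity
# crsctl query css votedisk          -- voting disk locations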
Why is the voting disk placed on a quorum disk, or what is the split-brain syndrome issue in a database cluster?
The voting disk is placed on a quorum disk (optionally) to avoid the possibility of split-brain syndrome. Split-brain syndrome is a situation where one instance is trying to update a block and at the same time another instance is also trying to update the same block. In fact it can happen only when cache fusion is not working properly. Voting disks are always configured in an odd-numbered series; this is because the loss of more than half of your voting disks will cause the entire cluster to fail, and with an even number, node eviction cannot decide which node to remove on failure. You must store the OCR and voting disks on ASM. If necessary, you can dynamically add or replace voting disks after you complete the cluster installation process, without stopping the cluster.
ASM backup:
You can use md_backup to restore the ASM disk group configuration in case of ASM disk group storage loss.
OCR and votefile backup:
Oracle clusterware automatically creates OCR backups (auto backups managed by crsd) every four hours, retaining at least 3 backups (backup00.ocr, day.ocr, week.ocr in the GRID home), but you can take an OCR backup manually at any time using:
ocrconfig -manualbackup   -- to take a manual backup of the OCR
ocrconfig -showbackup     -- to list the available backups
ocrdump -backupfile 'bak-full-location'  -- to validate the backup before any restore
ocrconfig -backuploc      -- to change the configured OCR backup location
dd if='vote disk name' of='backup file name'  -- to take a votefile backup
To check OCR and vote disk locations:
crsctl query css votedisk
/etc/oracle/ocr.loc or use ocrcheck
ocrcheck   -- to check the OCR corruption status (if any)
Crsctl check crs/cluster  -- to check crs status on the local and remote nodes
Moving OCR and votedisk:
Log in with the root user, as the OCR is stored under root, and for the votedisk stop all CRS first.
Ocrconfig -replace ocrmirror/ocr  -- adding/removing the OCR mirror and OCR file
Crsctl add/delete css votedisks   -- adding and removing voting disks in the cluster
To list and check all nodes in your cluster from root, or to check public/private/VIP info:
olsnodes -n -p -i

How can you restore the OCR in a RAC environment?
1. Stop clusterware on all nodes and restart one node in exclusive mode to restore. The nocrs option ensures the crsd process and OCR do not start with the other nodes.
# crsctl stop crs
# crsctl stop crs -f
# crsctl start crs -excl -nocrs
Check if crsd is still running, and if so stop it: # crsctl stop resource ora.crsd -init
2. If you want to restore the OCR to an ASM disk group then you must check/activate/repair/create the disk group with the same name and mount it from the local node. If you are not able to mount that disk group locally then drop it and re-create it with the same name. Finally run the restore with the current backup:
# ocrconfig -restore file_name
3. Verify the integrity of the OCR and stop exclusive-mode crs:
# ocrcheck
# crsctl stop crs -f
4. Run the ocrconfig -repair -replace command on all other nodes where you did not do the restore. For example, if you restored node1 and have 4 nodes, run it on the remaining nodes 2, 3 and 4:
# ocrconfig -repair -replace
5. Finally start all the nodes and verify with the CVU command:
# crsctl start crs
# cluvfy comp ocr -n all -verbose
Note: using ocrconfig -export / ocrconfig -import also enables you to restore the OCR.

Why does Oracle recommend using an OCR auto/manual backup to restore the OCR instead of export/import?
1. An OCR auto/manual backup is a consistent snapshot of the OCR, whereas an export is not.
2. Backups are created while the system is online, but you must shut down all nodes in the clusterware to take a consistent export.
3. You can inspect a backup using the OCRDUMP utility, whereas you cannot inspect the contents of an export.
4. You can list and see the backups using ocrconfig -showbackup, whereas you must keep track of each export yourself.

How to restore votedisks?
1. Shut down CRS on all nodes in the cluster:
Crsctl stop crs
2. Locate the current location of the votedisks, and restore each votedisk using the dd command from a previous good backup taken with the same dd command:
Crsctl query css votedisks
Dd if= of=
3. Finally start crs on all nodes:
Crsctl start crs

How to add a node or instance in a RAC environment?
1. From the ORACLE_HOME/oui/bin location of node1, run the script addNode.sh:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/root.sh script on node3.
3. From an existing node run srvctl config db -d db_name, then create a new mount point:
4. Mkdir -p ORACLE_HOME_NEW/"mount point name"
5. Finally run the cluster installer for the new node and update the clusterware inventory.
Another way: you can start dbca and, from the instance management page, choose add instance and follow the next steps.

How to identify the master node in RAC?
# /u1/app/../crsd>grep MASTER crsd.log | tail -1
(or) cssd >grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.

What is the difference between crsctl and srvctl?
crsctl manages cluster-related operations like starting/enabling cluster services, whereas srvctl manages Oracle-related operations like starting/stopping Oracle instances. Also, in Oracle 11gR2 srvctl can be used to manage network, VIP, disks etc.

What are ONS/TAF/FAN/FCF in RAC?
ONS is a part of the clusterware and is used to transfer messages between the node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes, either node UP/DOWN or database UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net services which moves a session to a backup connection whenever a session fails.
FCF is a feature of the Oracle client which receives notifications from FAN and processes them accordingly. It cleans up connections when a down event is received and adds new connections when an up event is received from FAN.

How does OCSSD start if the voting disk & OCR reside on ASM?
Without access to the voting disk there is no CSS to join the cluster or accelerate the start of the clusterware; as the voting disk is stored in ASM, and per the Oracle startup order CSSD starts before ASM, how is it possible to start the cluster? This is due to the ASM disk header: in 11gR2 it has new metadata, kfdhdb.vfstart and kfdhdb.vfend (which tell CSS where to find the voting files). This does not require the ASM instance to be up. Once CSS gets the voting files it can join the cluster easily.
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.

Upgration/Migration/Patches Question/Answer

What are database patches, and how do you apply them?
CPU (Critical Patch Update, or one-off patch): security fixes each quarter. They are cumulative, meaning they include fixes from previous Oracle security alerts.
To apply a CPU you must use the opatch utility:
- Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
- Set your current directory to the directory where the patch is located and then run the opatch utility.
- After applying the patch, start all your services and listeners, start all your databases, log in with sysdba and run the catcpu.sql script.
- Finally run utlrp.sql to validate invalid objects.
To roll back a CPU patch:
- Shut down all instances and listeners.
- Go to the patch location and run opatch rollback -id 677666
- Start all the databases and listeners and run the catcpu_rollback.sql script.
- Bounce the database and use the utlrp.sql script.
PSU (Patch Set Update): security fixes and priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future, until the database is upgraded to a newer version.
You must have two things to apply a PSU patch: the latest version of opatch, and the PSU patch that you want to apply.
1. Check and update the opatch version: go to ORACLE_HOME/OPatch and run opatch version.
To update to the latest opatch: take a backup of the OPatch directory, then remove the current OPatch directory and finally unzip the downloaded patch into the OPatch directory. Now check your opatch version again.
2. To apply the PSU patch:
unzip p13923374_11203_.zip
cd 13923374
opatch apply  -- in the case of RAC, the opatch utility will prompt for an OCM (Oracle Configuration Manager) response file. You have to provide the complete path of the OCM response file if you have already created one.
3. Post-apply steps: start the database with sys as sysdba:
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory  -- to check which PSU patches are installed
opatch rollback -id 13923374  -- rolling back a patch you have applied
opatch nrollback -id 13923374, 13923384  -- rolling back multiple patches you have applied
SPU (Security Patch Update): an SPU cannot be applied once a PSU has been applied, until the database is upgraded to the new base version.
Patchset (e.g. 10.2.0.1 to 10.2.0.3): applying a patchset usually requires the OUI. Shut down all database services and listeners, then apply the patchset to the Oracle binaries. Finally start up the services and listener, then run the post-patch scripts.
Bundle patches: for Windows and Exadata; they include both the quarterly security patches and recommended fixes.

You have a collection of nearly 100 patches. How can you apply only one of them?
By using napply and providing the specific patch id; you can apply one patch from a collection of many extracted patches by using opatch util napply -id 9 -skip_subset -skip_duplicate. This will apply only patch 9 from among the many extracted patches.

What is a rolling upgrade?
It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain one node while the other nodes run different software.

What happens when you use STARTUP UPGRADE?
Startup upgrade enables you to open a database based on an earlier version. It restricts sysdba logons and disables system triggers. After startup upgrade, only specific view queries can be used; no other views can be used until catupgrd.sql is executed. A hedged sketch of the sequence follows below.
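For illustration, a hedged sketch of the startup-upgrade sequence described above (standard script paths assumed):
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> @?/rdbms/admin/utlrp.sql    -- recompile invalid objects after the upgrade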
benny-wang · 7 years ago
Text
MySQL 8.0 GA
It's great to see MySQL 8.0 go GA. As a cloud provider, Alibaba Cloud always keeps pace with Oracle MySQL. We have provided ApsaraDB MySQL services based on MySQL 5.5, MySQL 5.6 and MySQL 5.7, and ApsaraDB MySQL services are the most popular among our customers. With MySQL 8.0 GA, we would like to start checking out the GA version and run some tests. We hope to provide a service based on it soon.
MySQL 8.0 brings huge changes. Not only were many new features introduced, but basic structures such as the redo log and undo tablespaces have also changed a lot. Excellent work; congratulations to the Oracle MySQL team. Of course, such a big change puts some pressure on us in dealing with upgrades. We will run some tests and see.
As a database service provider, we are delighted to see MySQL 8.0 reach 1.8M QPS, a big performance improvement compared to MySQL 5.7. utf8mb4 lets MySQL support a much wider range of characters. It is also good to see that the NOWAIT idea from AliSQL has been adopted. There are so many new features we have been expecting. Let's take a look.
The biggest change in MySQL 8.0 is transactional system tables. All of the system tables are stored in the InnoDB storage engine. This change removes the replication problems caused by the old MyISAM system tables, such as whole-table locking. We truly believe we can easily develop more and more interesting features on top of the transactional system tables.
Atomic DDL has been expected for a long time. Before 8.0, DDL caused a lot of inconsistency problems during replication. This feature makes DDL replication easier, which is a big enhancement. Someday it would be better to see DDL become fully transactional; currently, in order to solve this problem, we have to use some indirect solutions.
As you know, JSON is very popular with web users. JSON_TABLE gives us a powerful way to convert between relational tables and JSON. It is an interesting feature, and it gives NoSQL users a new reason to move to MySQL. A hedged example follows below.
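For illustration, a minimal hedged JSON_TABLE sketch (the JSON literal and column names are made up):
mysql> SELECT jt.name, jt.qty
       FROM JSON_TABLE(
         '[{"name":"apple","qty":5},{"name":"pear","qty":2}]',
         '$[*]' COLUMNS (
           name VARCHAR(20) PATH '$.name',
           qty  INT         PATH '$.qty')
       ) AS jt;
Each element of the JSON array becomes one relational row.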
Window functions and CTEs are commonly used in other databases. Before 8.0, we always needed complex workarounds to simulate them. Now these two features make our SQL life easier; they are very welcome. A hedged example follows below.
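For illustration, a hedged sketch combining a CTE with a window function (the employees table is hypothetical):
mysql> WITH ranked AS (
         SELECT dept, name, salary,
                RANK() OVER (PARTITION BY dept ORDER BY salary DESC) AS rnk
         FROM employees)
       SELECT dept, name, salary FROM ranked WHERE rnk <= 3;
This returns the top three earners per department, something that previously required self-joins or session variables.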
Histograms were finally introduced in MySQL 8.0. We hope the cost model can work better with them. Currently histograms need to be created by the user; it would be better if this happened automatically. The manual syntax is sketched below.
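For illustration, the hedged syntax for creating and dropping a histogram manually (the table and column are hypothetical):
mysql> ANALYZE TABLE orders UPDATE HISTOGRAM ON status WITH 32 BUCKETS;
mysql> ANALYZE TABLE orders DROP HISTOGRAM ON status;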
More elegant hints are provided. Many new hints were introduced, giving users more ways to adjust the query plan. Hedged examples follow below.
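For illustration, hedged examples of the optimizer hint style (table and index names are hypothetical):
mysql> SELECT /*+ JOIN_ORDER(o, c) */ c.name, o.total
       FROM orders o JOIN customers c ON c.id = o.customer_id;
mysql> SELECT /*+ SET_VAR(sort_buffer_size = 16777216) */ * FROM orders ORDER BY created_at;
SET_VAR adjusts a session variable for the duration of a single statement.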
Invisible indexes are a very useful feature. They give users a convenient way to test whether a query plan is good enough without a given index, by changing the query plan through making that index invisible. A hedged example follows below.
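For illustration, hedged invisible-index commands (the table and index names are hypothetical):
mysql> ALTER TABLE orders ALTER INDEX idx_status INVISIBLE;
mysql> ALTER TABLE orders ALTER INDEX idx_status VISIBLE;
The optimizer ignores an invisible index, so you can test dropping an index without actually rebuilding it later.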
User-defined error log filters make it convenient to deal with the error log and show only the interesting entries to users.
Performance Schema is more and more powerful. We can get more detailed internal information from MySQL and monitor MySQL behavior efficiently. We hope to see little performance regression when it is turned on.
The InnoDB storage engine plays a bigger and bigger role in MySQL, and we can see that InnoDB made some big changes in MySQL 8.0. The newly designed redo log and flexible undo tablespace management give InnoDB higher performance and concurrency.
As a MySQL contributor, the unified code style across the server and InnoDB and the refactored parser and optimizer make our development more convenient. We would always love to provide high-quality features and contribute them to the MySQL community.
All in all, we can see MySQL keeps optimizing performance. We have not listed all the features we are interested in; however, from the above description we can see the great effort Oracle MySQL has made to make MySQL friendly to manage and use. We will run a lot of tests to see whether 8.0 is stable enough in the near future. As we always do, we will keep our focus on providing customers a safe, stable and high-performance MySQL service.
oraclerider · 3 years ago
Text
How to Generate table DDL
Hi, in this practice we are going to learn how to generate table DDL, view DDL, materialized view DDL and user DDL. You can also read the articles below: DB LINK DDL, GET TABLESPACE DDL, GET ROLE DDL. Get more Oracle scripts.
How do you get the DDL of a table in Oracle?
In a real-time environment we sometimes need the DDL of an existing table. We can perform this activity with the help of dbms_metadata.get_ddl… A hedged example follows below.
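For illustration, a hedged dbms_metadata example (the schema and table names are hypothetical):
SQL> SET LONG 20000 PAGESIZE 0
SQL> SELECT dbms_metadata.get_ddl('TABLE','EMP','SCOTT') FROM dual;
The same call works for other object types, e.g. get_ddl('VIEW', ...), get_ddl('MATERIALIZED_VIEW', ...) and get_ddl('USER', ...).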
sandeep2363 · 4 years ago
Text
How to get DDL of Tablespace present in Oracle Database
Fetch the DDL commands for all or one tablespace in Oracle.
Get the DDL of all the tablespaces present in Oracle:
set echo off;
set pages 999;
set long 90000;
spool ddl_tablespace.sql
select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;
spool off
Note: If you want the semicolon after every statement generated then you can execute the following command at your…
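For illustration, the command the truncated note most likely refers to is the SQLTERMINATOR transform parameter (an assumption on my part, not stated in the post):
SQL> BEGIN
       dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', TRUE);
     END;
     /
After this call, each DDL statement returned by dbms_metadata.get_ddl ends with a semicolon, so the spooled file can be run directly.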
youngprogrammersclub · 6 years ago
Text
If found Temporary Segment in Permanent Tablespace?
There are situations where you see "temporary segments" in a permanent tablespace hanging around and not getting cleaned up. These temporary segments take actual disk space when SMON fails to perform its assigned job of cleaning them up.
Overview: Generally a temporary segment is created in a temporary tablespace, but under some circumstances a temporary segment gets created in a permanent tablespace, such as:
1. CREATE TABLE AS SELECT
2. ALTER TABLE MOVE
3. CREATE INDEX
4. ALTER INDEX REBUILD
When you perform any of the above operations, Oracle internally creates a temporary table/index which occupies disk space. Once the operation completes, Oracle marks the table/index as a permanent segment; thus, once the work is done, all temporary segments become permanent. SMON takes the responsibility of clearing up these temporary segments. There are situations, however, when the "temporary segments" in a permanent tablespace hang around because SMON fails to perform its assigned cleanup job.
The query below finds information about temporary segments in permanent tablespaces:
SQL> select tablespace_name, owner, sum(bytes/1024/1024) from dba_segments
     where segment_type = 'TEMPORARY' group by tablespace_name, owner;
TABLESPACE_NAME     OWNER          SUM(BYTES/1024/1024)
------------------- ------------   --------------------
SDH_TIMES           SYS            34576
SDH_INDEX           SYS            4120
SDH_HRMS            SYS            44284.875
SDH_EDSS            SYS            14.69
SDH_SHTR            SYS            41452.39
SDH_FIN             SYS            208.425
Here we can see the tablespaces SDH_SHTR and SDH_HRMS have large temporary segments. But in this case, why is it still showing temporary segments? It could be because of any of the following reasons; find the correct reason and perform the necessary action.
Reason 1: Possibly a DDL is active which can create temporary segments. To find such DDL, you can make use of a join query on the v$sql and v$session views:
SQL> select pid from v$process where username= 'owner_name';
SQL> alter session set tracefile_identifier='TEMPORARY_SEGMENTS';
Open the corresponding trace file and check the "current sql". A hedged cleanup sketch follows below.
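For illustration, if no DDL is active and the segments still linger, a commonly cited (use with care, and verify against My Oracle Support notes for your version) way to nudge SMON into cleaning a specific tablespace is the DROP_SEGMENTS event, where the level is the tablespace number (ts#) plus one. Treat the values below as assumptions:
SQL> SELECT ts# FROM v$tablespace WHERE name = 'SDH_SHTR';
SQL> ALTER SESSION SET EVENTS 'immediate trace name DROP_SEGMENTS level 7';  -- level = ts# + 1 (hypothetical value)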