How to get DDL of all views in Oracle
How to Generate table DDL
Hi, in this practice we are going to learn how to generate table DDL, view DDL, materialized view DDL and user DDL. You can also read the related articles: DB LINK DDL, GET TABLESPACE DDL, GET ROLE DDL, and more Oracle scripts. How do you get the DDL of a table in Oracle? In a real-time environment we sometimes need the DDL of an existing table. We can perform this activity with the help of dbms_metadata.get_ddl, as sketched below.
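A minimal sketch of the approach (run from SQL*Plus as a privileged user; the SCOTT schema and the object names are placeholders):
set long 20000 pagesize 0
-- DDL of a table
select dbms_metadata.get_ddl('TABLE', 'EMP', 'SCOTT') from dual;
-- DDL of a view
select dbms_metadata.get_ddl('VIEW', 'EMP_VW', 'SCOTT') from dual;
-- DDL of a materialized view
select dbms_metadata.get_ddl('MATERIALIZED_VIEW', 'EMP_MV', 'SCOTT') from dual;
-- DDL of a user
select dbms_metadata.get_ddl('USER', 'SCOTT') from dual;
-- DDL of all views in one schema
select dbms_metadata.get_ddl('VIEW', view_name, owner) from dba_views where owner = 'SCOTT';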
What are SQL and PL/SQL, their uses, benefits, and drawbacks?
An Introduction to PL/SQL
PL/SQL stands for Procedural Language extensions to the Structured Query Language. SQL is the most widely used language for querying and updating data in relational database management systems (RDBMS).
PL/SQL extends SQL with numerous procedural constructs to address some of SQL's limitations, and it provides a more comprehensive programming language for building mission-critical applications on Oracle databases.
PL/SQL is a well-organised and readable language whose constructs clearly convey the intent of the code. It is also an easy language to learn.
PL/SQL is a standardised and portable language for Oracle database development: a program written to run on one Oracle database can easily be ported to another compatible Oracle database.
What is SQL?
Structured Query Language (SQL) is a powerful non-procedural database language used to create, maintain, and retrieve relational databases (databases that store data in related tables). It was developed by IBM in the 1970s and enables users to communicate with many different database management systems.
SQL commands are classified into five types. These are:
Data Manipulation Language (DML)
Data Definition Language (DDL)
Data Control Language (DCL)
Transaction Control Language (TCL)
Data Query Language (DQL)
Advantages of SQL
There are several benefits to Structured Query Language, some of which are described below:
Database systems can be managed fairly easily with standard SQL, without having to write a substantial amount of code.
SQL databases follow long-established standards, whereas NoSQL databases do not adhere to any fixed rules.
SQL can be used from programs running on computers, servers, laptops, and even some mobile phones.
Using this domain-specific language, you can get answers to complex questions from a database almost instantly.
Using SQL, users can define different views of the database structure and data for different users.
Disadvantages of SQL
Working with SQL on a large database requires training.
SQL is best handled by an expert user or programmer.
It takes up some space, requiring extra memory for each record.
Vendor dialects make SQL partially platform-specific.
It is a command-based language.
What is PL/SQL?
Oracle's PL/SQL extension to SQL combines the processing power of a procedural language with the data manipulation capabilities of SQL to produce very powerful programs. By improving the security, portability, and robustness of database code, PL/SQL ensures that SQL commands are processed efficiently.
Using PL/SQL, you can tell the compiler both "what to do" through SQL and "how to do it" through its procedural constructs.
By offering loops, conditions, and object-oriented concepts, it gives programmers more control than plain SQL. PL/SQL stands for "Procedural Language extensions to SQL" in its full form.
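As a quick illustration, here is a minimal anonymous PL/SQL block using a loop and a condition; it touches no real schema objects:
-- set serveroutput on in SQL*Plus to see the output
DECLARE
  v_total NUMBER := 0;
BEGIN
  FOR i IN 1 .. 5 LOOP
    IF MOD(i, 2) = 0 THEN        -- keep only the even numbers
      v_total := v_total + i;
    END IF;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Sum of even numbers from 1 to 5 = ' || v_total);
END;
/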
What is PL/SQL Developer?
PL/SQL is a procedural language (code written as a set of instructions) that lets programmers combine procedural statements with the power of SQL. Triggers, functions, and procedures are executed as complete blocks, which increases an operation's functionality while lowering network traffic.
It was developed in the 1990s by Oracle Corporation to build server pages and web applications, and it has features like abstraction and error handling.
Basic PL/SQL Commands
Data insertion: the INSERT command adds data to a table. It requires the table name, optionally the column names, and the column values, introduced by the keyword VALUES. This statement can insert values into any base table.
Data update: the UPDATE statement changes data that is already in a table. It takes the table name, the column name and the new value, and updates the matching rows.
Selecting data: the SELECT statement retrieves specific data from a database table.
Exception handling: PL/SQL's exception section detects error conditions, handles errors gracefully, and lets the user raise appropriate user-defined messages. A small block combining these commands is sketched after this list.
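A minimal sketch that puts the four commands above together; the employees table and its columns are assumptions made up for this example:
-- set serveroutput on in SQL*Plus to see the output
DECLARE
  v_name VARCHAR2(50);
BEGIN
  INSERT INTO employees (id, name, email)
  VALUES (1, 'User1', 'user1@example.com');                -- data insertion
  UPDATE employees SET email = 'user1@corp.example.com'
  WHERE id = 1;                                            -- data update
  SELECT name INTO v_name FROM employees WHERE id = 1;     -- selecting data
  DBMS_OUTPUT.PUT_LINE('Loaded employee: ' || v_name);
  COMMIT;
EXCEPTION
  WHEN NO_DATA_FOUND THEN                                  -- exception handling
    DBMS_OUTPUT.PUT_LINE('No such employee');
END;
/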
The Key Differences Between SQL and PL/SQL
Structured Query Language, or SQL, was created to work with relational databases. It is a declarative language that focuses on what data is needed. Procedural Language/Structured Query Language (PL/SQL), on the other hand, uses SQL for its database access and is an application-oriented programming language.
PL/SQL adds variables, constants, additional data types, and other programming features that plain SQL does not have.
In PL/SQL you write code blocks containing functions, triggers, variables, control structures (FOR and WHILE loops) and conditional statements (IF...THEN...ELSE), whereas in SQL you write individual DDL and DML queries and commands.
In SQL only one statement or query is executed at a time, whereas PL/SQL can execute an entire block of statements in a single call, which reduces network traffic.
Features & Advantages of PL/SQL
Better performance, because statements are sent to the database in blocks rather than one at a time.
High productivity.
Tight integration with SQL.
Full portability.
Tight security, along with support for object-oriented programming concepts.
Scalability and manageability.
Support for developing web applications.
Support for developing server pages.
Disadvantages of PL/SQL
Stored procedures in PL/SQL can use a lot of memory, and debugging support is limited.
Any modification to the underlying database requires an equivalent modification to the presentation layer.
Because it does not fully separate the roles of back-end and front-end developers, it is difficult to keep HTML generation separate from PL/SQL development.
Conclusion
Using procedures, functions, control structures, cursors, and triggers, PL/SQL performs all of SQL's operations on large volumes of data. It is essentially SQL with procedural capabilities added. SQL specifies the data that is needed, but not how to obtain it; PL/SQL fills that gap. Thanks to PL/SQL we can now handle sophisticated SQL problems that were previously a pain in the neck. These days data is the new oil and data management is more crucial than ever, so PL/SQL is essential to the operation of many enterprises. Knowledge of SQL and PL/SQL can be very beneficial.
Oracle 1Z0-447 Certification Aspects
* Exam Title: Oracle GoldenGate 12c Implementation Essentials
* Exam Code: 1Z0-447
* Exam Price: $245.00 (see Oracle's site for current pricing)
* Format: Multiple Choice
* Duration: Two hours
* Number of Questions: 72
* Passing Score: 69%
* Validated Against: the exam is validated against Oracle GoldenGate 12c.
* 1Z0-447 Practice Test: https://www.dbexam.com/1z0-447-oracle-goldengate-12c-implementation-essentials
* 1Z0-447 sample questions: https://www.dbexam.com/sample-questions/oracle-1z0-447-certification-sample-questions-and-answers
Oracle GoldenGate 12c Certified Implementation Specialist Certification Overview
The Oracle GoldenGate 12c Essentials (1Z0-447) exam is intended for people who have a strong foundation and expertise in selling or implementing Oracle GoldenGate 12c solutions. The exam covers topics such as Oracle GoldenGate 12c architecture, Oracle GoldenGate 12c parameters, and Oracle GoldenGate 12c mapping and transformation. Up-to-date training and field experience are recommended. The Oracle GoldenGate 12c Implementation Specialist certification recognizes OPN members as OPN Certified Specialists. It differentiates OPN members in the marketplace by giving them a competitive edge through proven expertise, and it helps the OPN member's partner organization qualify for the Oracle GoldenGate 12c specialization.

* Get more detail about the Oracle 1Z0-447 certification: https://oracle-exam-guide.blogspot.com/2019/05/how-to-score-best-in-1z0-447.html
Oracle 1Z0-447 Certification Exam Topics
* Oracle GoldenGate (OGG) Overview
* Describe the OGG functional overview and common topologies
* Describe OGG Veridata and Management Pack functionality
* Describe the difference between real-time data integration replication and Data Manipulation Language (DML) replication
* Install and Configure OGG
* Download and install OGG, and differentiate between the various installers (zip, OUI, tar)
* Synchronize source and target databases with the Initial Load
* Prepare the database for OGG CDC and verify databases with the OGG schema check script
* Configure OGG replication component parameter files
* Use the OGG Command Interface to create OGG processes
* Describe how to identify and resolve issues in heterogeneous replication, and offer appropriate solutions
* Configure OGG utilities
* Mapping and Transformation Overview
* Implement use cases for transformation functions
* Implement macros
* Managing and Monitoring Oracle GoldenGate
* Manage OGG command information security
* Implement and troubleshoot OGG Monitoring
* Explain the configuration and management of the Enterprise Manager 12c plug-in
* Implement and troubleshoot OGG Veridata
* Architecture Overview
* Describe OGG components
* Create both forms of Capture process for an Oracle database
* Create the three forms of Replicat processes
* Explain the difference between an Extract and a Pump, and between local and remote trails
* Configure OGG's process recovery mechanism
* Parameters
* Describe and compare GLOBALS versus MANAGER parameters
* Create solutions using component parameters for replication requirements
* Install OGG parameters
* Explain and identify parameters specific to non-Oracle databases
* Configuration Options
* Describe OGG configuration options (Data Definition Language (DDL), compression and encryption options)
* Configure OGG event actions based on use cases
* Troubleshoot conflict detection and resolution
* Configure Integrated Capture, Replicat, and deployment options
Sign up for the Oracle 1Z0-447 Certification exam
Sign up for the Oracle 1Z0-447 certification exam with Pearson VUE and pay for the test with a voucher purchased from Oracle University or with a credit card during exam registration. To learn more about the Oracle GoldenGate certification, check the certification webpage.
MariaDB is one of the most widely used open source relational databases. It is developed by the original MySQL developers but, unlike MySQL, is kept completely free. MariaDB development focuses on stability and performance. Other reasons MariaDB is preferred are that it is robust and scalable, offers new storage engines, and comes with various tools and plugins that make it widely applicable. It is the default database in most Linux distributions. In this guide, we are going to look at how to install MariaDB 10.6 on Ubuntu 20.04 and Ubuntu 18.04 from the MariaDB APT repository.
Features of MariaDB 10.6
MariaDB 10.6 is the current stable version of MariaDB and comes with a number of new features, discussed below:
Ignored Indexes – indexes that are visible and maintained but not used by the optimizer.
sys schema support – a "system" database containing views and procedures for investigating performance problems.
SKIP LOCKED – locked rows are skipped rather than updated or selected.
JSON_TABLE() – can create a JSON table that can be used as a subquery from a JSON document.
OFFSET…FETCH…[WITH TIES] – WITH TIES is an optional clause that adds extra functionality.
Oracle compatibility – there is ongoing work to make MariaDB compatible with Oracle Database, with some Oracle syntax and functions already added.
Some of the improvements in MariaDB 10.6 over its predecessors include:
Atomic DDL – CREATE, ALTER, DROP and RENAME are atomic and crash safe. If the MariaDB server crashes while processing any of these operations, the change will either be done completely or not done at all.
InnoDB improvements – the first insert into an empty table is faster, writes to temporary tables are avoided, and implicit and explicit temporary tables are faster.
Improvements in Galera – the ability to enable encrypted connections between two nodes without downtime, plus added flags to specify whether Galera compatibility features should be enabled.
Clean-up to remove unsupported features such as the TokuDB engine, the Cassandra engine, some InnoDB variables and some innodb_checksum_algorithm values.
Step 1: System upgrade
As usual, we begin by running system upgrades to ensure that we are performing the installation on the latest packages. After the updates, reboot your server.
sudo apt update
sudo apt upgrade -y
sudo reboot
Step 2: Install Required Packages
Next, we are going to install the software-properties-common package:
sudo apt install software-properties-common -y
Step 3: Import MariaDB GPG Key and Add MariaDB APT Repository
With the commands below, we add the MariaDB signing key and the MariaDB APT repository:
curl -LsS -O https://downloads.mariadb.com/MariaDB/mariadb_repo_setup
sudo bash mariadb_repo_setup --mariadb-server-version=10.6
You'll get an output with contents like the ones shown below:
[info] Checking for script prerequisites.
[info] Repository file successfully written to /etc/apt/sources.list.d/mariadb.list
[info] Adding trusted package signing keys...
[info] Running apt-get update...
[info] Done adding trusted package signing keys Step 4: Install MariaDB 10.6 on Ubuntu 20.04|18.04 Once MariaDB key and APT repositories are added, update the packages and proceed to install MariaDB 10.6 on Ubuntu 20.04|18.04: sudo apt update sudo apt install mariadb-server mariadb-client Step 5: Secure MariaDB Installation Once MariaDB is installed, run the below MySQL script to secure MariaDB $ sudo mariadb-secure-installation NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY! In order to log into MariaDB to secure it, we'll need the current password for the root user. If you've just installed MariaDB, and haven't set the root password yet, you should just press enter here. Enter current password for root (enter for none): OK, successfully used password, moving on... Setting the root password or using the unix_socket ensures that nobody
can log into the MariaDB root user without the proper authorisation. You already have your root account protected, so you can safely answer 'n'. Switch to unix_socket authentication [Y/n] Y Enabled successfully! Reloading privilege tables.. ... Success! You already have your root account protected, so you can safely answer 'n'. Change the root password? [Y/n] Y New password: Re-enter new password: Password updated successfully! Reloading privilege tables.. ... Success! By default, a MariaDB installation has an anonymous user, allowing anyone to log into MariaDB without having to have a user account created for them. This is intended only for testing, and to make the installation go a bit smoother. You should remove them before moving into a production environment. Remove anonymous users? [Y/n] Y ... Success! Normally, root should only be allowed to connect from 'localhost'. This ensures that someone cannot guess at the root password from the network. Disallow root login remotely? [Y/n] Y ... Success! By default, MariaDB comes with a database named 'test' that anyone can access. This is also intended only for testing, and should be removed before moving into a production environment. Remove test database and access to it? [Y/n] Y - Dropping test database... ... Success! - Removing privileges on test database... ... Success! Reloading the privilege tables will ensure that all changes made so far will take effect immediately. Reload privilege tables now? [Y/n] Y ... Success! Cleaning up... All done! If you've completed all of the above steps, your MariaDB installation should now be secure. Thanks for using MariaDB! Step 6: Confirm Mariadb Status MariaDB server should be automatically started. Check status as below: $ systemctl status mariadb ● mariadb.service - MariaDB 10.6.4 database server Loaded: loaded (/lib/systemd/system/mariadb.service; enabled; vendor preset: enabled) Drop-In: /etc/systemd/system/mariadb.service.d └─migrated-from-my.cnf-settings.conf Active: active (running) since Tue 2021-10-19 11:37:25 UTC; 9s ago Docs: man:mariadbd(8) https://mariadb.com/kb/en/library/systemd/ Main PID: 3023 (mariadbd) Status: "Taking your SQL requests now..." Tasks: 14 (limit: 4703) CGroup: /system.slice/mariadb.service └─3023 /usr/sbin/mariadbd Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: performance_schema Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: sys Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: Phase 6/7: Checking and upgrading tables Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: Processing databases Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: information_schema Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: performance_schema Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: sys Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: sys.sys_config OK Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: Phase 7/7: Running 'FLUSH PRIVILEGES' Oct 19 11:37:29 ubuntu /etc/mysql/debian-start[3046]: OK root@ubuntu:~# Step 7: Enable MariaDB to Start on Server Reboot Run the below command to enable MariaDB to automatically start when server is rebooted sudo systemctl enable mariadb Step 8: Check MariaDB Version To confirm the installed version, we need to first login to MySQL as below. $ mysql -u root -p Enter password: Welcome to the MariaDB monitor. Commands end with ; or \g. 
Your MariaDB connection id is 32 Server version: 10.6.3-MariaDB-1:10.6.3+maria~focal mariadb.org binary distribution Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> Now run the following command to check MariaDB version MariaDB [(none)]> SELECT VERSION(); +--------------------------------------+
| VERSION() | +--------------------------------------+ | 10.6.4-MariaDB-1:10.6.4+maria~bionic | +--------------------------------------+ 1 row in set (0.000 sec) MariaDB [(none)]> QUIT Bye Step 9: MariaDB Basic Usage Next, we are going to see some of the MariaDB Dababase basic usage such creating a database, users and so on. MariaDB Create Database As seen above, MariaDB uses MySQL syntax. To create a database, you first need to login to mariadb as shown above then run the below command to create a database. #Create a new database MariaDB [(none)]> CREATE DATABASE db1; Query OK, 1 row affected (0.000 sec) #If the database with the same exists CREATE DATABASE db1; ERROR 1007 (HY000): Can't create database 'db1'; database exists #Create a database if already exits MariaDB [(none)]> CREATE OR REPLACE DATABASE db1; Query OK, 2 rows affected (0.009 sec) #First check if a database exists MariaDB [(none)]> CREATE DATABASE IF NOT EXISTS db1; Query OK, 1 row affected, 1 warning (0.000 sec) # Check Databases MariaDB MariaDB [(none)]> SHOW DATABASES; MariaDB add user and Grant Privileges To create a user and grant privileges; #Create user mariadb MariaDB [(none)]> CREATE USER 'db1user'@'localhost' IDENTIFIED BY 'mypassword'; #Grant privileges to a specific database MariaDB [(none)]> GRANT ALL PRIVILEGES ON db1.* TO 'db1user'@'localhost'; #Remember to refresh the privileges MariaDB [(none)]> FLUSH privileges; #To check user grants in MariaDB MariaDB [(none)]> SHOW GRANTS FOR 'db1user'@'localhost'; Create a Table and Add Data MariaDB Once you have created a database, you can create table and add data into it MariaDB [(none)]> USE db1; MariaDB [(none)]> CREATE TABLE employees (id INT, name VARCHAR(20), email VARCHAR(20)); MariaDB [(none)]> INSERT INTO employees (id,name,email) VALUES(01,"User1","[email protected]"); MariaDB Clean up If you want to completely clean MariaDB installation, you can remove and do a re-install. sudo apt purge mariadb-server sudo rm -rf /var/lib/mysql/ This has been a guide n how to install MariaDB 10.6 on Ubuntu 20.04 and Ubuntu 18.04. It is a short and easy installation and I hope that this guide as been helpful.
How to Get DB Link DDL in Oracle
Viewing Information About Database Links
Oracle has three data dictionary views, listed below, which help us find information about DB links.
View Name – Purpose
DBA_DB_LINKS – shows the list of all DB links from…
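For the DDL itself, dbms_metadata handles database links too; a quick sketch, run as a DBA user:
set long 20000 pagesize 0
select dbms_metadata.get_ddl('DB_LINK', db_link, owner) from dba_db_links;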
How to get DDL of Tablespace present in Oracle Database
Fetch the DDL commands for all or one tablespace in Oracle.
Get the DDL of all the tablespaces present in Oracle:
set echo off;
set pages 999;
set long 90000;
spool ddl_tablespace.sql
select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;
spool off
Note: If you want the semicolon after every statement generated, then you can execute the following command at your…
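The note is cut off here, but it almost certainly refers to the dbms_metadata SQLTERMINATOR transform parameter; the assumed command would be:
execute dbms_metadata.set_transform_param(dbms_metadata.session_transform, 'SQLTERMINATOR', true);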
SQL Intro
What's SQL?

SQL was developed by IBM in the 1970s for its mainframe platform. It was later standardized by the American National Standards Institute (ANSI SQL) and the International Organization for Standardization (ISO SQL). According to ANSI, SQL is pronounced "es queue el", but many database and software developers with a background in MS SQL Server pronounce it "sequel".
What's RDBMS?
A Relational Database Management System is a piece of software used to store and manage data in database objects called tables. A relational database organizes data in a tabular structure of rows and columns. The table columns, known as table fields, have unique names and different attributes defining the column type, default value, indexes and many other column characteristics. The rows of the table are the actual data entries.
Most famous SQL RDBMS
Nearly all commercial database vendors have developed their own proprietary SQL extensions based on the ANSI SQL standard. For instance, the SQL dialect used by MS SQL Server is known as Transact-SQL (or simply T-SQL), Oracle's variant is called PL/SQL (short for Procedural Language/SQL), and MS Access uses Jet SQL.
So what can you do using SQL?
SQL queries are used to retrieve data from database tables. SQL queries use the SELECT keyword, which is part of the Data Query Language (DQL). If we have a table called "Orders" and we want to select all entries where the order value is greater than $100, ordered by the order value, we can do it with the following SQL SELECT query:
SELECT *
FROM Orders
WHERE OrderValue > 100
ORDER BY OrderValue;
The FROM clause specifies the table(s) we are retrieving data from, the WHERE clause filters the rows, and the ORDER BY clause specifies that the returned data has to be ordered by the OrderValue column.
You can manipulate the data stored in relational database tables using the INSERT, UPDATE and DELETE SQL keywords. These three SQL commands are part of the Data Manipulation Language (DML).
-- To add data to a table called "Orders" you can use an SQL statement similar to the one below:
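-- (the column names and values are illustrative, matching the Orders examples in this section)
INSERT INTO Orders (ProductID, CustomerID, OrderDate, OrderValue)
VALUES (100, 55, DATE '2020-01-15', 149.99);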
-- To modify data in a table you can use a statement such as this:
UPDATE Orders
SET OrderValue = 199.99
WHERE ProductID = 100;   -- example condition; without a WHERE clause every row is updated
-- To delete data from the database use a statement such as the one below:
DELETE FROM Orders;      -- add a WHERE clause to delete only specific rows
You can create, alter or delete database objects (examples of database objects are tables, views, stored procedures, etc.) using the CREATE, ALTER and DROP SQL keywords. These three SQL keywords are part of the Data Definition Language (DDL). For instance, to create the table "Orders" you could use the following SQL statement:
CREATE TABLE Orders
(
  ProductID INT,
  CustomerID INT,
  OrderDate DATE,
  OrderValue DECIMAL(10, 2)
);
You can control access rights on database objects using the GRANT and REVOKE keywords, which are part of the Data Control Language (DCL). For instance, to permit the user "User1" to select data from the table "Orders" you could use the following SQL statement:
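-- illustrative: grant read access on Orders to User1
GRANT SELECT ON Orders TO User1;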
Today every software professional needs at least a basic understanding of how SQL works. If you are new to SQL you may feel confused and overwhelmed at first, but as you progress you will quickly realize just how powerful and elegant SQL is.
Streaming data to Amazon Managed Streaming for Apache Kafka using AWS DMS
Introduction AWS Database Migration Service (DMS) announced support of Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka clusters as target. With AWS DMS you can replicate ongoing changes from any DMS supported sources such as Amazon Aurora (MySQL and PostgreSQL-compatible), Oracle, and SQL Server to Amazon Managed Streaming for Apache Kafka (Amazon MSK) and self-managed Apache Kafka clusters. In this post, we use an ecommerce use case and set up the entire pipeline with the order data being persisted in an Aurora MySQL database. We use AWS DMS to load and replicate this data to Amazon MSK. We then use the data to generate a live graph on our dashboard application. Then the data will be used to generate a live graph on our dashboard application. We will also discuss about how to use a custom S3 feeder application to send the data from MSK topics to an S3 bucket. Once the setup is completed, any new incoming orders into your Aurora MySQL database will be replicated and reflected on your dashboard graph in real time. Solution overview We divide this solution into two sections: Section 1: Setting up infrastructure for your pipeline which feeds a live dashboard to showcase incoming order data. Section 2: Consuming the data streams coming to Amazon MSK and pushing to Amazon S3 and query it with Amazon Athena. You can find the accompanying dashboards and sample applications in the GitHub repo. The repo contains the sample Java applications that we use in both sections in this post. It contains three modules: Dashboard A dashboard application showing incoming orders data and displaying it by states. This module contains a Spring Boot based Kafka listener. It shows how to build a custom application to listen to an incoming stream of data in Kafka topics and send it to a live dashboard. It uses a websocket connection to connect to the server and open source chartjs to build a simple graph on the data. Data-gen-utility You can use this small command line utility to generate dummy order data to feed to the source MySQL database. Msk-to-s3-feeder This independent Spring Boot application shows how you can take streaming data from Amazon MSK and implement a batch listener to club streaming data and feed it to an S3 bucket that you provide in one or more objects. Solution architecture The following architecture diagram shows how to replicate data in a database like MySQL to Amazon MSK in near-real time using AWS DMS. For our ecommerce use case, data is being generated and stored in our transaction database. This data flows to the live dashboard and Amazon S3 via AWS DMS and Amazon MSK. Setting up your pipeline In this section, you setup a pipeline to feed data to the live dashboard Creating an IAM role If you don’t have dms-vpc-role in your account, create an IAM role named “dms-vpc-role” for AWS DMS API. On the IAM console, choose Roles. Choose Create role. For Choose a use case, choose DMS”. For Filter policies, search for dms. Select permissions AmazonDMSCloudWatchLogsRole and AmazonDMSVPCManagementRole. Provisioning your resources You create the resources for this use case with the provided AWS CloudFormation stack. Run the stack in the region of your choice: The CloudFormation stack requires that you input parameters to configure the infrastructure to migrate the data into Managed streams of Kafka. A few parameters have default values filled. Please add two values below. IPCidrToAllowTraffic – An IP address to SSH to the client instance. 
Generally you provide your own system IP (such as x.x.x.x/32). KeyName – The key pair name for your client Amazon Elastic Compute Cloud (Amazon EC2) instance. You can choose an existing keypair or create a new one. Select the check box to allow CloudFormation to create IAM roles. When the stack is complete, you can find the status on the AWS CloudFormation console. The stack creates the following resources: EC2 instance: Streaming-DMS-MSK-Pipeline Amazon MSK cluster: MSKMMCluster1 MySQL source endpoint Target Amazon MSK endpoint AWS DMS task (full load and CDC) AWS Lambda functions: CustomClusterConfigLambda, CustomKafkaEndpointLambda and TestConnLambda Aurora MySQL DB cluster. Setting up source data generation After you create your resources, complete the follow steps to set up the source data generation on the client EC2 instance. Login to Amazon EC2 using putty (Windows) or terminal (MacBook) with the following code: ssh –i “ [email protected] Download the sample code: git clone https://github.com/aws-samples/aws-dms-msk-demo.git Enter the following code to build the applications: cd aws-dms-msk-demo mvn clean install Enter the following code to connect to MySQL. Replace the hostname with the Aurora MySQL DB host endpoint (writer endpoint) that was created by the CloudFormation stack. You can find it on the Amazon Relational Database service (Amazon RDS) console for the database. The default username is ‘master’ and the default password is ‘Password1’ unless you changed it in the CloudFormation stack parameters. MYSQL –u -p -h At the SQL prompt, enter the following code to create the sample table ‘orders’ in the database ‘testdb’: SQL > create table orders (orderid bigint(20) NOT NULL, source varchar(45) NOT NULL default 'andriod', amount varchar(45) NOT NULL default '0', state varchar(45) NOT NULL default 'New Jersey', date datetime NOT NULL default current_timestamp, Primary key (orderid)); Ensure that the binary logs are available to AWS DMS because Amazon-managed MySQL-compatible databases purge the binary logs as soon as possible, You should increase the length of time that the logs remain available. To increase log retention to 24 hours, enter the following command: call mysql.rds_set_configuration('binlog retention hours', 24); Hit cmd + z and come out of the SQL prompt. Setting up your dashboard application: You can now set up the sample dashboard application on your EC2 instance. You first get the Amazon MSK broker URLs On the Amazon MSK console, choose the cluster you created Choose view client information. Locate the broker connection details in plaintext. You want the plaintext link and not the TLS, because TLS requires some extra configuration on the client side to work. For more information about connecting via TLS, see Client Authentication. Run the following code on the client EC2 Instance to launch dashboard: java -jar aws-dms-msk-demo/dashboard/target/dashboard-1.0.jar --kafka.bootstrapEndpoints :9092 --kafka.topic dms-blog Open the application web URL in the browser. http://:8080/ The application looks similar to the following screenshot: This dashboard is volatile and all data is reset when you refresh your browser. Generating data on the source engine Now that we have the infrastructure and the pipelines set up, you can generate the data on the MySQL source engine and check the dashboard: Open a new SSH session to Amazon EC2. Use the datagen.jar utility present in the cloned GitHub repo to generate sample data in bulk of 2000 records. 
java -jar aws-dms-msk-demo/data-gen-utility/target/datagen.jar When prompted for the start index, enter 1. The *.sql file is generated with 2000 dummy order records. Connect to the database again using the following command and ingest the SQL file generated. It inserts your dummy data into database. MySQL –u -p database_name –h testdb --start-replication-task-type start-replication Check the task logs to confirm that the task migrated the data onto the target Amazon MSK. 2020-05-15T20:46:41 [SOURCE_UNLOAD ]I: Unload finished for table 'testdb'.'orders' (Id = 1). 2000 rows sent. (streamcomponent.c:3485) 2020-05-15T20:46:41 [TARGET_LOAD ]I: Load finished for table 'testdb'.'orders' (Id = 1). 2000 rows received. 0 rows skipped. Volume transferred 0. (streamcomponent.c:3770) Check the sample dashboard graphs that now shows the replicated data. You can generate more sample data following these steps and migrate the data onto the target. To test the ongoing changes and check the dashboard for replicated data, connect to the Aurora cluster and perform DML and DDL operations. The sample dashboard graph is incremented for inserts. For updates and deletes, only the counters on the app change. Consuming the data streams and querying the data: In this section, you consume the data streams pushed to Amazon S3 and use Athena to query the data. You must have an S3 bucket to push the data from the MSK topics to, and an external table in Athena pointing to that bucket. Creating your S3 bucket and consuming the data Complete the following steps: Create an s3 bucket in your account to act as the destination. Go to the SSH session where the dashboard application was running and stop the application by pressing CTRL/CMD + C. Run the msk-to-s3-feeder application using the following code: java –jar msk-to-s3-feeder.jar –aws.s3.bucketName --kafka.bootstrapEndpoints :9092 --kafka.topic dms-blog In a separate SSH window, generate 3000 records and insert via the MySQL command line to Aurora (see the previous section for the relevant code). You should see some msk-to-s3-feeder application logs Confirm the files were delivered to Amazon S3 via the AWS CLI: aws s3 ls s3:///output/ --recursive You should see entries like the following code: 2020-06-19 17:02:42 560 output/660fff82-3501-4476-b29a-66d6028b4c841592141618007.json Creating a table and querying the data Create the sample table named ‘kafkajson’ in database: ‘default’ (replace the bucket name before running): SQL> CREATE EXTERNAL TABLE IF NOT EXISTS default.kafkajson ( orderId int, source string, amount decimal, state string, date timestamp, operation string ) ROW FORMAT SERDE ‘org.openx.jsonserde.JsonSerDe’ With SEDREPROPERTIES ( ‘Serialization.format’ = ‘1’ ) LOCATION ‘s3://bsd-martech/output/’ TBLPROPERTIES (‘has_encrypted_data’ =’false’); The following screenshot shows the preceding code on the console. After you provide the required details and create the table, you can run a query and return the following result set: SQL> SELECT * FROM “default”.”kafkajson” The following screenshot shows your results. Cleaning Up To avoid incurring future charges, clean up the resources used in this post. On the AWS CloudFormation console, delete the CloudFormation stack. On the AWS DMS console, delete “dms-blog-kafka-target” AWS DMS endpoints. On the AWS DMS console, delete any log groups with name “Streaming-DMS-MSK” (or use the stack name if you changed it while creating the stack). On the Amazon S3 console, clean the folder output and delete the S3 bucket. 
On the Athena console, delete the Athena table you created. Conclusion In this post, we showed you how to set up a pipeline with a CloudFormation template to get data from a MySQL database and send it to Amazon MSK topics in real time. We also set up custom applications to consume the data from Amazon MSK and display dashboards based on it in real time. We pushed the data from Amazon S3 using a simple application and queried the data via Athena. We hope this post helps you get started on building similar pipelines. Good luck and happy building! About the authors Harish Bannai is a Technical Account Manager at AWS. He holds the AWS Solutions Architect Professional, Developer Associate and Solutions Architect Professional certifications. He works with enterprise customers providing technical assistance on RDS, Database Migration services operational performance and sharing database best practices. Aswin Sankarapillai is a database engineer in Database Migration Service at AWS. He works with our customers to provide guidance and technical assistance on database migration projects, helping them improve the value of their solutions when using AWS. Amandeep Bhatia works as an Architect. He is passionate about cloud technologies and focuses on building large distributed systems on cloud for various startups and enterprises. He has helped many large scale migrations to AWS as well as building cloud native architectures. In his free time he likes to learn new technologies and spend quality time with kids and family. https://aws.amazon.com/blogs/database/streaming-data-to-amazon-managed-streaming-for-apache-kafka-using-aws-dms/
This SQL tutorial will help you learn SQL basics, so you can become a successful SQL developer. You will find out what the SQL commands, syntax, data types and operators are, how to create and drop tables, and how to write insert and select queries. Through this tutorial you will learn SQL for working with a relational database. Learn SQL from Intellipaat SQL training and fast-track your career.
SQL Tutorial for Beginners
This SQL Tutorial for Beginners is a complete package for how to learn SQL online. In this SQL tutorial, you will learn SQL programming and get a clear idea of what Structured Query Language is and how you deploy SQL to work with a relational database system. Structured Query Language is the language used to operate relational databases. Some of the major ways in which SQL is used in conjunction with a relational database are for storing, retrieving and manipulating the data stored in it.
Check out Intellipaat’s blog to get a fair understanding of SQL Optimization Techniques!
Here we have the list of topics if you want to jump right into a specific one:
What is SQL?
Why is SQL programming so widely used?
Features of SQL
Applications of SQL
Why should you learn SQL Programming?
Watch this SQL Tutorial for Beginners video:
https://www.youtube.com/watch?v=JTDK6r1GuUU&feature=youtu.be
What is SQL?
To learn SQL Online, first let us compare SQL with NoSQL in the below table:
Comparison of SQL and NoSQL:
Type of database: SQL is relational; NoSQL is non-relational.
How data is stored: SQL stores structured data in tables; NoSQL stores unstructured data in JSON documents.
Suitability for OLTP systems: SQL is excellent; NoSQL is average.
Database compliance: SQL follows the ACID properties; NoSQL follows the CAP theorem.
The language to communicate with the relational database is the SQL or Structured Query Language. SQL programming helps to operate the relational databases and derive information from it.
Some of the operations that the SQL does include creation of database, fetching, modifying, updating and deleting the rows along with storing, manipulating and retrieving data within the relational database. SQL programming is an ANSI standard language but there are a lot of versions of SQL in usage as well.
Why is SQL programming so widely used?
Structured Query Language or SQL programming is used so extensively for the following reasons.
SQL lets you access any data within the relational database
You can describe the data in the database using SQL
Using SQL you can manipulate the data with the relational database
SQL can embed within other languages through SQL modules & libraries
SQL lets you easily create and drop databases and tables
SQL allows you to create views, functions and stored procedures in databases
Using SQL you can set permissions on procedures, tables and views (a short example follows this list).
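A tiny sketch of a few of these capabilities; the table, view and user names here are made up purely for illustration:
CREATE TABLE customers (id INT PRIMARY KEY, name VARCHAR(50));
CREATE VIEW customer_names AS SELECT name FROM customers;
GRANT SELECT ON customer_names TO report_user;
DROP VIEW customer_names;
DROP TABLE customers;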
Wish to crack SQL job interviews? Intellipaat’s Top SQL Interview Questions are meant only for you!
Get interview-ready with this SQL Interview Questions: https://intellipaat.com/blog/interview-question/sql-interview-questions/
Features of SQL
Here in this section of the SQL tutorial for beginners, we list some of the top features of SQL that make it so ubiquitous when it comes to managing relational databases.
SQL is a very simple and easy-to-learn language
SQL is versatile as it works with database systems from Oracle, IBM, Microsoft, etc.
SQL is an ANSI and ISO standard language for database creation and manipulation
SQL has a well-defined structure as it uses long-established standards
SQL is very fast at retrieving large amounts of data efficiently
SQL lets you manage databases without requiring a lot of coding.
Applications of SQL
Here in this section of the SQL tutorial we will learn SQL applications that make it so important in a data-driven world where managing huge databases is the norm of the day.
SQL is used as a Data Definition Language (DDL) meaning you can independently create a database, define its structure, use it and then discard it when you are done with it
SQL is also used as a Data Manipulation Language (DML) which means you can use it for maintaining an already existing database. SQL is a powerful language for entering data, modifying data and extracting data with regard to a database
SQL is also deployed as a Data Control Language (DCL) which specifies how you can protect your database against corruption and misuse.
SQL is extensively used as a Client/Server language to connect the front-end with the back-end thus supporting the client/server architecture
SQL can also be used in the three-tier architecture of a client, an application server and a database that defines the Internet architecture.
Still have queries? Come to Intellipaat’s SQL Community, clarify all your doubts, and excel in your career!
Why should you learn SQL?
Today, whichever relational database system you use from major corporations like Oracle, IBM, Microsoft and others, the one thing common to them all is the Structured Query Language, or SQL.
Originally published at www.intellipaat.com on August 12, 2019
Oracle Views and ORA-04027
Achievement unlocked! A new Oracle Error message appears: ORA-04027: self-deadlock during automatic validation for object PB
Wait what? Let me google that quick... Not helpful and amusingly none of the results when I did it were from Oracle, despite it being their error number.
How did we get here? Two developers had been running different code with shared dependencies individually, and it all worked fine. One day (after 15 years) they happened to run it at the same time and noticed it just sat there doing nothing. If one developer killed their job, the other's finished. If they waited, eventually one timed out with the error message above: ORA-04027.
Let’s show how this was triggered
Table tA - A dimension table.
Table tB1 - A second dimension table.
Table tB2 - A second table related to the second dimension.
View vB - A View that is a union of tB1 and tB2.
Table tF - A fact table.
Package pA - Inserts data into tables each insert calls a function (fA) from pB.
Package pB - A package of get functions that do simple selects from dimension tables. One function is fA and selects from table tA. One function is fB and selects from view vB.
package pC - Saves and restores indexes on table tB1 and inserts new rows to it.
Package pI - Procedures to save indexes (pS) and restore indexes (pR) on tables. pS - gets the DDL for each index, writes it to a table, and issues a DDL statement to drop the index from the table. pR - gets the saved indexes and recreates them on the table.
OKAY! So with the stage set, let’s get started!
Developer 1 calls his procedure from pA to load table tF. It takes about 10 minutes to truncate the table and perform an insert. This insert calls fA from pB for every row.
Developer 2 starts his package pC. It saves the indexes off of table tB1 in prep to load it. The process hangs right there and Developer 1 and Developer 2 are left stranded.
Developer 2 wasn’t doing anything to the tables that were used by developer 1. They were separate tables. Developer 2′s code didn’t alter the underlying tables which would have caused the package to go invalid. All Developer 2 did was drop an index, try to add data and then restore the index.
As I said before, Developer 1 killing his job causes Developer 2′s process to finish. Developer 2 killing his job causes Developer 1′s process to finish.
Going invalid - One of the developers noticed that when he ran his code at the same time as the other developer, vB would go invalid and so would pB. This didn't occur when they ran one at a time.
Odd. We went through the code and everything looked okay at first glance. No DDLs changing the tables, nothing that would make me think any object would be invalidated. We wondered if unlocking stats, gathering them and re-locking would cause the invalidation. No, it does not.
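For reference, this is the sort of dictionary query you can use to watch objects go invalid and to see what depends on the view (the object names follow the aliases used in this post, so treat them as placeholders):
select object_name, object_type, status from dba_objects where status = 'INVALID';
select name, type, referenced_name, referenced_type from dba_dependencies where referenced_name = 'VB';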
We stepped through it section by section, then line by line, and narrowed it down to the drop index statement from package pI. When the index was dropped, it invalidated the view and the package that depended on the view. That is odd to me. The view isn't really dependent on the index unless it's an IOT table or maybe a CLOB. If a view is invalidated, on its next run it would just re-compile.
To the Oracle docs on Drop Index.. LINK
“Oracle Database invalidates all objects that depend on the underlying table, including views, packages, package bodies, functions, and procedures.”
Wait, what? It still doesn’t make sense to me that a view is invalidated on dropping an index. That might explain our problem more.
The dead-lock - At Last! The Oracle Speaks!
After some time leaving both developers code hanging in wait state we’d produce the error: ORA-04027: self-deadlock during automatic validation for object PB
When Developer 2 dropped the index on tB1 it invalidated vB which was used by package pB. Package pB was also invalidated.
Developer 1, who was running his insert using a different function from pB (one dependent directly on a table), paused. Developer 1's session then tried to automatically recompile the package.
Developer 2′s code who was also dependent on pB (just in a totally different procedure than the one running) would also try to recompile the package. No big deal normally, one of them would win and the other would resume. So for 15 years we never encountered this problem.
It was only when this one function ran that happened to be dependent on a view vB instead of directly on a table like tB, that we got a self-deadlock.
Remember that Developer 2′s package would see that pB was invalid and would try to recompile. Our assumption (based on observation of wait state/pin lock timing) is that when it fails, it then tries to recompile objects it depends on and moves to working on vB.
Developer 1's package also noticed that pB was invalidated during its run and would try to recompile. Usually this would result in a busy-resource/pin-lock timeout sort of deal. In this case, since Developer 2 switched to validating vB, Developer 1's application would then try to validate pB, and they both get stuck. So it's not really a true self-deadlock. The deadlock is caused during automatic re-validation of the view vB, which is being done to try to validate package pB. It's really the other developer trying to validate pB at the same time that stops the first process from getting a lock on the package again.
The timing is important since whoever’s process tried to revalidate first is going to be the one to time out first and get the ORA-04027 error. The other user’s process will just eventually complete when it becomes the sole user trying to compile package pB.
Ugh.
If you eliminate the view here you don’t see this problem as tables do not go invalid on dropping an index. I still don’t understand why every view (even simple ones) would invalidate on dropping an index, except for lazy programmers. It’s good that Oracle have at least explained it away in the documentation. It’s no longer a bug if we want it fixed, it’s a feature request!
DBA Interview Questions with Answer Part 20
What is Checkpoint SCN and Checkpoint Count? How we can check it?Checkpoint is an event when the database writer is going to flush the dirty buffers into the datafiles. This an ongoing activity and in the result checkpoint number constantly incremented in the datafile header and controfile and the background process CKPT take care of this responsibility.How can you find length of Username and Password?You can find the length of username with below query. The password is hashed (#) so there is no way to get their length.You can use special characters ($, #, _) without single quotes and any other characters must be enclosed in single quotation.Select length (username), usernamefrom dba_users;The minimum length for password is at least 1 character where as maximum depends on database version. In 10g it is restricted to 17 characters long.What are the restrictions applicable while creating view?– A view can be created referencing tables and views only in the current database.– A view name must not be the same as any table owned by that user.– You can build view on other view and on procedure that references views.For More information you can click on the below link: Common Interview Question & AnswerWhat is difference between Delete/Drop/Truncate?DELETE is a command that only removes data from the table. It is DML statement. Deleted data can be rollback (when you delete all the data get copied into rollback first then deleted). We can use where condition with delete to delete particular data from the table.Where as DROP commands remove the table from data dictionary. This is DDL statement. We cannot recover the table before oracle 10g, but flashback feature of oracle 10g provides the facility to recover the drop table.While TRUNCATE is a DDL command that delete data as well as freed the storage held by this table. This free space can be used by this table or some other table again. This is faster because it performs the deleted operation directly (without copying the data into rollback).Alternatively you can enable the row movement for that table and can use shrink command while using the delete command.SQL> Create table test ( Number s1, Number s2 );SQL> Select bytes, blocks from user_segments where segment_name = ‘test’;Bytes block---------- -------65536 8SQL> insert into t select level, level*3 From dual connect by level Select bytes, blocks from user_segments where segment_name = ‘test’;Bytes block---------- -------131072 16SQL> Delete from test;3000 rows deleted.SQL> select bytes,blocks from user_segments where segment_name = 'test';Bytes block---------- -------131072 16SQL> Alter table t enable row movement;SQL> Alter table t shrink space;Table alteredSQL> Select bytes,blocks from user_segments where segment_name = 'test';Bytes block---------- -------65536 8What is difference between Varchar and Varchar2?Varchar2 can store upto 4000 bytes where as Varchar can only store upto 2000 bytes. Varchar2 can occupy space for NULL values where as Varchar2 will not specify any space for NULL values.What is difference between Char and Varchar2?A CHAR values have fixed length. They are padded with space characters to match the specified length where as VARCHAR2 values have a variable length. They are not padded with any characters. 
In which language has Oracle been developed?
Oracle is an RDBMS package developed using the C language.
What is the difference between Translate and Replace?
Translate performs character-by-character substitution, whereas Replace substitutes an entire string (a character or a word) with another string.
What is the fastest query method to fetch data from a table?
Using ROWID is the fastest method to fetch data from a table.
What are the Oracle database background processes specific to RAC?
LCK0 - Instance Enqueue Process, LMS - Global Cache Service Process, LMD - Global Enqueue Service Daemon, LMON - Global Enqueue Service Monitor. Oracle RAC instances use two services, the Global Cache Service (GCS) and the Global Enqueue Service (GES), to ensure that each RAC database instance obtains the block it needs to satisfy a query or transaction. The GCS and GES maintain records of the status of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances.
What is SCAN in respect of Oracle RAC?
Single Client Access Name (SCAN) is an Oracle Real Application Clusters (RAC) 11g Release 2 feature that provides a single name for clients to access an Oracle database running in a cluster. The benefit is that client connection strings using SCAN do not need to change if you add or remove nodes in the cluster.
Why do we have a virtual IP (VIP) in Oracle RAC?
Without a VIP, when a node fails the client waits for the TCP timeout before getting an error. With a VIP, when a node fails the VIP associated with it is automatically failed over to another node, and the new node re-arps the world, indicating a new MAC address for the IP. Subsequent packets sent to the VIP go to the new node, which sends error RST packets back to the clients. This results in the clients getting errors immediately.
Why does a query sometimes fail?
Rollback segments dynamically extend to handle large transaction entry loads. A single transaction may occupy all available free space in the rollback segment tablespace, and this situation prevents other users from using rollback segments. You can monitor rollback segment status by querying the DBA_ROLLBACK_SEGS view.
What are the ADPATCH and OPATCH utilities? Can you use both in Applications?
ADPATCH is a utility to apply application patches and OPATCH is a utility to apply database patches. In an Applications environment you use both: ADPATCH for the application tier and OPATCH for the database.
What is automatic refresh of a materialized view, and how will you find the last refresh time of a materialized view?
Since Oracle 10g a complete refresh of a materialized view is done with DELETE instead of TRUNCATE. To force the instance to do the refresh with TRUNCATE instead of DELETE, the parameter ATOMIC_REFRESH must be set to FALSE.
When it is FALSE the refresh is faster, no UNDO is generated and the whole data set is re-inserted.
When it is TRUE the refresh is slower, UNDO is generated and the whole data set is re-inserted; thus the data remains accessible even while the view is being refreshed.
If you want to find when the last refresh has taken place, you can query these views: dba_mviews, dba_mview_analysis or dba_mview_refresh_times.
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mviews;
-or-
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS') from dba_mview_refresh_times;
-or-
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mview_analysis;
Why are more archivelogs generated when the database is in begin backup mode?
During begin backup mode datafile headers are frozen, so row-level change information cannot be used; as a result the entire block is copied to the redo logs, so more redo is generated, more log switches occur and in turn more archivelogs are produced. Normally only deltas (change vectors) are logged to the redo logs.
The main reason is to overcome the fractured block problem. A fractured block is a block in which the header and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file. It is possible for the operating system utility to read a block in a half-updated state, so that the block that is copied to the backup media is updated in its first half, while the second half contains older data. In this case, the block is fractured.
For non-RMAN backups taken with ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP, when a tablespace is in backup mode and a change is made to a data block, the database logs a copy of the entire block image before the change so that it can reconstruct the block if media recovery finds that it was fractured. The block that the operating system reads can be split, that is, the top of the block is written at one point in time while the bottom of the block is written at another point in time. If you restore a file containing a fractured block and Oracle reads the block, the block is considered corrupt.
Why is UNION ALL faster than UNION?
UNION ALL is faster than UNION because UNION ALL does not eliminate duplicate rows from the base tables; it simply returns all rows from all tables according to your query, whereas UNION returns only the distinct related rows, like a JOIN. Thus if you know your query returns only unique records, always use UNION ALL instead of UNION. It will give you faster results.
How will you find whether your instance is started with an spfile or a pfile?
You can query the V$SPPARAMETER view:
SQL> Select isspecified, count(*) from v$spparameter Group by isspecified;
ISSPEC COUNT(*)
------ ----------
FALSE 221
TRUE 39
As ISSPECIFIED is TRUE with some count, we can say the instance is running with an spfile. Now start your database with a pfile and run the previous query again.
SQL> Select isspecified, count(*) from v$spparameter Group by isspecified;
ISSPEC COUNT(*)
------ ----------
FALSE 258
Then you will not find any parameter with ISSPECIFIED set, because they all come from the pfile; thus you can say the instance is started with a pfile.
Alternatively you can use the queries below:
SQL> show parameter spfile;
SQL> Select decode(count(*), 1, 'spfile', 'pfile' ) from v$spparameter where rownum=1 and isspecified='TRUE';
Why do we need to enable Maintenance Mode?
To ensure optimal performance and reduce downtime during patching sessions, enabling this feature shuts down the Workflow Business Events System and sets up function security so that Oracle Applications functions are unavailable to users. This provides a clear separation between normal run-time operation and system downtime for patching.
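Tying back to the materialized view refresh question above, the ATOMIC_REFRESH flag is passed through DBMS_MVIEW.REFRESH. This is only a minimal sketch; the materialized view name SALES_MV is a placeholder, not an object from this article.
SQL> -- complete refresh via TRUNCATE + insert: faster, no UNDO, data briefly unavailable
SQL> EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => 'C', atomic_refresh => FALSE);
SQL> -- complete refresh via DELETE + insert: slower, generates UNDO, data stays queryable
SQL> EXEC DBMS_MVIEW.REFRESH('SALES_MV', method => 'C', atomic_refresh => TRUE);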
DBA interview Question and Answer Part 22
I have configured the RMAN with Recovery window of 3 days but on my backup destination only one days archive log is visible while 3 days database backup is available there why?I go through the issue by checking the backup details using the list command. I found there is already 3 days database as well as archivelog backup list is available. Also the backup is in Recoverable backup. Thus it is clear due to any reason the backup is not stored on Backup place.Connect rman target database with catalogList backup Summary;List Archivelog All;List Backup Recoverable;When I check the db_recovery_dest_size, it is 5 GB and our flash-recovery area is almost full because of that it will automatically delete archive logs from backup location. When I increase the db_recovery_dest_sizethen it is working fine.If one or all of control file is get corrupted and you are unable to start database then how can you perform recovery?If one of your control file is missing or corrupted then you have two options to recover it either delete corrupted CONTROLFILE manually from the location and copy the available rest of CONTROLFILE and rename it as per the deleted one. You can check the alert.log for exact name and location of the control file. Another option is delete the corrupted CONTROLFILE and remove the location from Pfile/Spfile. After removing said control file from spfile and start your database.In another scenario if all of your CONTROLFILE is get corrupted then you need to restore them using RMAN.As currently none of the CONTROLFILE is mounted so RMAN does not know about the backup or any pre-configured RMAN setting. In order to use the backup we need to pass the DBID (SET DBID=691421794) to the RMAN.RMAN>Restore Controlfile from ‘H:oracleBackup C-1239150297-20130418’ You are working as a DBA and usually taking HOTBACKUP every night. But one day around 3.00 PM one table is dropped and that table is very useful then how will you recover that table?If your database is running on oracle 10g version and you already enable the recyclebin configuration then you can easily recover dropped table from user_recyclebin or dba_recyclebin by using flashback feature of oracle 10g.SQL> select object_name,original_name from user_recyclebin;BIN$T0xRBK9YSomiRRmhwn/xPA==$0 PAY_PAYMENT_MASTERSQL> flashback table table2 to before drop;Flashback complete.In that case when no recyclebin is enabled with your database then you need to restore your backup on TEST database and enable time based recovery for applying all archives before drop command execution. For an instance, apply archives up to 2:55 PM here.It is not recommended to perform such recovery on production database directly because it is a huge database will take time.Note: If you are using SYS user to drop any table then user’s object will not go to the recyclebin for SYSTEM tablespace, even you have already set recyclebin parameter ‘true’. And If you database is running on oracle 9i you require in-complete recovery for the same.Sometimes why more archivelog is Generating?There are many reasons such as: if more database changes were performed either using any import/export work or batch jobs or any special task or taking hot backup (For more details why hot backup generating more archive check my separate post).You can check it using enabling log Minor utility.How can I know my require table is available in export dump file or not?You can create index file for export dump file using ‘import with index file’ command. 
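For example (a minimal sketch, assuming a conventional full export dump named exp_full.dmp and a placeholder login), the original imp utility writes the DDL to a text file via the INDEXFILE option without importing any rows:
$ imp system/password file=exp_full.dmp full=y indexfile=index_list.sql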
A text file will be generating with all table and index object name with number of rows. You can confirm your require table object from this text file.What is Cache Fusion Technology?Cache fusion provides a service that allows oracle to keep track of which nodes are writing to which block and ensure that two nodes do not updates duplicates copies of the same block. Cache fusion technology can provides more resource and increase concurrency of users internally. Here multiple caches can able to join and act into one global cache. Thus solving the issues like data consistency internally without any impact on the application code or design.Why we should we need to open database using RESETLOGS after finishing incomplete recovery?When we are performing incomplete recovery that means, it is clear we are bringing our database to past time or re-wind period of time. Thus this recovery makes database in prior state of database. The forward sequence of number already available after performing recovery, due to mismatching of this sequence numbers and prior state of database, it needs open database with new sequence number of redo log and archive log.Why export backup is called as logical backup?Export dump file doesn’t backup or contain any physical structure of database such as datafiles, redolog files, pfile and password file etc. Instead of physical structure, export dump contains logical structure of database like definition of tablespace, segment, schema etc. Due to these reason export dump is call logical backup.What are difference between 9i and 10g OEM?In oracle 9i OEM having limited capability or resource compares to oracle 10g grids. There are too many enhancements in 10g OEM over 9i, several tools such as AWR and ADDM has been incorporated and there is SQL Tuning advisor also available.Can we use same target database as catalog DB?The recovery catalog should not reside in the target database because recovery catalog must be protected in the event of loss of the target database.What is difference between CROSSCHECK and VALIDATE command?Validate command is to examine a backup set and report whether it can be restored successfully where as crosscheck command is to verify the status of backup and copies recorded in the RMAN repository against the media such as disk or tape. How do you identify or fix block Corruption in RMAN database?You can use the v$block_corruption view to identify which block is corrupted then use the ‘blockrecover’ command to recover it.SQL>select file# block# from v$database_block_corruption;file# block10 1435RMAN>blockrecover datafile 10 block 1435;What is auxiliary channel in RMAN? When it is required?An auxiliary channel is a link to auxiliary instance. If you do not have automatic channel configured, then before issuing the DUPLICATE command, manually allocate at least one auxiliary channel within the same RUN command.Explain the use of Setting GLOBAL_NAME equal to true?Setting GLOBAL_NAMES indicates how you might connect to the database. This variable is either ‘TRUE’ or ‘FALSE’ and if it is set to ‘TRUE’ which enforces database links to have the same name as the remote database to which they are linking.How can you say your data in database is Valid or secure?If data of the database is validated we can say that our database is secured. There is different way to validate the data:1. Accept only valid data2. Reject bad data.3. Sanitize bad data. 
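Returning to the auxiliary channel question above, a minimal RMAN sketch that manually allocates one auxiliary channel inside the RUN block before DUPLICATE; the auxiliary database name dupdb is only a placeholder:
RMAN> RUN
{
ALLOCATE AUXILIARY CHANNEL aux1 DEVICE TYPE DISK;
DUPLICATE TARGET DATABASE TO dupdb;
}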
Write a query to display all the odd-numbered rows from a table.
Select * from (select employee_number, rownum rn from pay_employee_personal_info) where MOD(rn, 2) <> 0;
-or- you can do the same thing with the PL/SQL block below:
set serveroutput on;
begin
 for v_c1 in (select num from tab_no) loop
 if mod(v_c1.num, 2) = 1 then
 dbms_output.put_line(v_c1.num);
 end if;
 end loop;
end;
What is the difference between Trim and Truncate?
Truncate is a DDL command which deletes the contents of a table completely without affecting the table structure, whereas Trim is a function which changes the column output in a select statement by removing the blank space from the left and right of a string.
When do you use the option clause "PASSWORD FILE" in the RMAN DUPLICATE command?
If you create a duplicate DB, not a standby DB, then RMAN does not copy the password file by default. You can specify the PASSWORD FILE option to indicate that RMAN should overwrite the existing password file on the auxiliary instance. If you create a standby DB, then RMAN copies the password file to the standby host by default, overwriting the existing password file.
What is Oracle GoldenGate?
Oracle GoldenGate is Oracle's strategic solution for real-time data integration. Oracle GoldenGate captures, filters, routes, verifies, transforms, and delivers transactional data in real time, across Oracle and heterogeneous environments, with very low impact and preserved transaction integrity. The transaction data management provides read consistency, maintaining referential integrity between source and target systems.
What is the meaning of LGWR SYNC and LGWR ASYNC in the log archive destination parameter for a standby configuration?
When you use LGWR with SYNC, once network I/O is initiated LGWR has to wait for completion of the network I/O before continuing write processing. LGWR with ASYNC means LGWR does not wait for the network I/O to finish and continues write processing.
What is the truncate command enhancement in Oracle 12c?
In previous releases there was no direct option to truncate a master table while child tables exist and contain records. Now TRUNCATE TABLE with the CASCADE option in 12c truncates the records in the master as well as in all referenced child tables that have an enabled ON DELETE CASCADE constraint.
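A small sketch of the 12c CASCADE behaviour just described; dept_master and emp_child are made-up names, and the child foreign key must be defined with ON DELETE CASCADE for the cascade truncate to be allowed:
SQL> ALTER TABLE emp_child ADD CONSTRAINT fk_emp_dept FOREIGN KEY (dept_id) REFERENCES dept_master (dept_id) ON DELETE CASCADE;
SQL> TRUNCATE TABLE dept_master CASCADE;
The second statement removes all rows from dept_master and, through the enabled ON DELETE CASCADE constraint, from emp_child as well in a single DDL operation.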
Oracle DBA interview Question with Answer (All in One Doc)
1. General DB Maintenance2. Backup and Recovery3. Flashback Technology4. Dataguard5. Upgration/Migration/Patches6. Performance Tuning7. ASM8. RAC (RAC (Cluster/ASM/Oracle Binaries) Installation Link 9. Linux Operating10. PL/SQLGeneral DB Maintenance Question/Answer:When we run a Trace and Tkprof on a query we see the timing information for three phase?Parse-> Execute-> FetchWhich parameter is used in TNS connect identifier to specify number of concurrent connection request?QUEUESIZEWhat does AFFIRM/NOFFIRM parameter specify?AFFIRM specify redo transport service acknowledgement after writing to standby (SYNC) where as NOFFIRM specify acknowledgement before writing to standby (ASYNC).After upgrade task which script is used to run recompile invalid object?utlrp.sql, utlprpDue to too many cursor presents in library cache caused wait what parameter need to increase?Open_cursor, shared_pool_sizeWhen using Recover database using backup control file?To synchronize datafile to controlfileWhat is the use of CONSISTENT=Y and DIRECT=Y parameter in export?It will take consistent values while taking export of a table. Setting direct=yes, to extract data by reading the data directly, bypasses the SGA, bypassing the SQL command-processing layer (evaluating buffer), so it should be faster. Default value N.What the parameter COMPRESS, SHOW, SQLFILE will do during export?If you are using COMPRESS during import, It will put entire data in a single extent. if you are using SHOW=Y during import, It will read entire dumpfile and confirm backup validity even if you don’t know the formuser of export can use this show=y option with import to check the fromuser.If you are using SQLFILE (which contains all the DDL commands which Import would have executed) parameter with import utility can get the information dumpfile is corrupted or not because this utility will read entire dumpfile export and report the status.Can we import 11g dumpfile into 10g using datapump? If so, is it also possible between 10g and 9i?Yes we can import from 11g to 10g using VERSION option. This is not possible between 10g and 9i as datapump is not there in 9iWhat does KEEP_MASTER and METRICS parameter of datapump?KEEP_MASTER and METRICS are undocumented parameter of EXPDP/IMPDP. METRICS provides the time it took for processing the objects and KEEP_MASTER prevents the Data Pump Master table from getting deleted after an Export/Import job completion.What happens when we fire SQL statement in Oracle?First it will check the syntax and semantics in library cache, after that it will create execution plan. If already data is in buffer cache it will directly return to the client (soft parse) otherwise it will fetch the data from datafiles and write to the database buffer cache (hard parse) after that it will send server and finally server send to the client.What are between latches and locks?1. A latch management is based on first in first grab whereas lock depends lock order is last come and grap. 2. Lock creating deadlock whereas latches never creating deadlock it is handle by oracle internally. Latches are only related with SGA internal buffer whereas lock related with transaction level. 3. Latches having on two states either WAIT or NOWAIT whereas locks having six different states: DML locks (Table and row level-DBA_DML_LOCKS ), DDL locks (Schema and Structure level –DBA_DDL_LOCKS), DBA_BLOCKERS further categorized many more.What are the differences between LMTS and DMTS? 
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces, the dictionary tables are created on SYSTEM tablespace and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.Difference of Regular and Index organized table?The traditional or regular table is based on heap structure where data are stored in un-ordered format where as in IOT is based on Binary tree structure and data are stored in order format with the help of primary key. The IOT is useful in the situation where accessing is commonly with the primary key use of where clause statement. If IOT is used in select statement without primary key the query performance degrades.What are Table portioning and their use and benefits?Partitioning the big table into different named storage section to improve the performance of query, as the query is accessing only the particular partitioned instead of whole range of big tables. The partitioned is based on partition key. The three partition types are: Range/Hash/List Partition.Apart from table an index can also partitioned using the above partition method either LOCAL or GLOBAL.Range partition:How to deal online redo log file corruption?1. Recover when only one redo log file corrupted?If your database is open and you lost or corrupted your logfile then first try to shutdown your database normally does not shutdown abort. If you lose or corrupted only one redo log file then you need only to open the database with resetlog option. Opening with resetlog option will re-create your online redo log file.RECOVER DATABASE UNTIL CANCEL; then ALTER DATABASE OPEN RESETLOGS;2. Recover when all the online redo log file corrupted?When you lose all member of redo log group then the step of maintenance depends on group ‘STATUS’ and database status Archivelog/NoArchivelog.If the affected redo log group has a status of INACTIVE then it is no longer required crash recovery then issues either clear logfile or re-create the group manually.ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3; -- you are in archive mode and group still not archivedALTER DATABASE CLEAR LOGFILE GROUP 3; noarchive mode or group already archivedIf the affected redo log group has a status ACTIVE then it is still required for crash recovery. Issue the command ALTER SYSTEM CHECKPOINT, if successful then follow the step inactive if fails then you need to perform incomplete recovery up to the previous log file and open the database with resetlog option.If the affected redo log group is CURRENT then lgwr stops writing and you have to perform incomplete recovery up to the last logfile and open the database with resetlog option and if your database in noarchive then perform the complete recovery with last cold backup.Note: When the online redolog is UNUSED/STALE means it is never written it is newly created logfile.What is the function of shared pool in SGA?The shared pool is most important area of SGA. It control almost all sub area of SGA. The shortage of shared pool may result high library cache reloads and shared pool latch contention error. The two major component of shared pool is library cache and dictionary cache.The library cache contains current SQL execution plan information. 
It also holds PL/SQL procedure and trigger.The dictionary cache holds environmental information which includes referential integrity, table definition, indexing information and other metadata information.Backup & Recovery Question/Answer:Is target database can be catalog database?No recovery catalog cannot be the same as target database because whenever target database having restore and recovery process it must be in mount stage in that period we cannot access catalog information as database is not open.What is the use of large pool, which case you need to set the large pool?You need to set large pool if you are using: MTS (Multi thread server) and RMAN Backups. Large pool prevents RMAN & MTS from competing with other sub system for the same memory (specific allotment for this job). RMAN uses the large pool for backup & restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory. Then there is no use of large pool.How to take User-managed backup in RMAN or How to make use of obsolete backup? By using catalog command: RMAN>CATALOG START WITH '/tmp/KEEP_UNTIL_30APRIL2010;It will search into all file matching the pattern on the destination asks for confirmation to catalog or you can directly change the backup set keep until time using rman command to make obsolete backup usable.RMAN> change backupset 3916 keep until time "to_date('01-MAY-2010','DD-MON-YYYY')" nologs;This is important in the situation where our backup become obsolete due to RMAN retention policy or we have already restored prior to that backup. What is difference between using recovery catalog and control file?When new incarnation happens, the old backup information in control file will be lost where as it will be preserved in recovery catalog .In recovery catalog, we can store scripts. Recovery catalog is central and can have information of many databases. This is the reason we must need to take a fresh backup after new incarnation of control file.What is the benefit of Block Media Recovery and How to do it?Without block media recovery if the single block is corrupted then you must take datafile offline and then restore all backup and archive log thus entire datafile is unavailable until the process is over but incase of block media recovery datafile will be online only the particular block will be unavailable which needs recovery. You can find the details of corrupted block in V$database_Block_Corruption view as well as in alert/trace file.Connect target database with RMAN in Mount phase:RMAN> Recover datafile 8 block 13;RMAN> Recover CORRUPTION_LIST; --to recover all the corrupted block at a time.In respect of oracle 11g Active Dataguard features (physical standby) where real time query is possible corruption can be performed automatically. The primary database searches for good copies of block on the standby and if they found repair the block with no impact to the query which encounter the corrupt block.By default RMAN first searches the good block in real time physical standby database then flashback logs then full and incremental rman backup.What is Advantage of Datapump over Traditional Export?1. Data pump support parallel concept. It can write multiple dumps instead of single sequential dump.2. Data can be exported from remote database by using database link.3. Consistent export with Flashback_SCN, Flashback_Time supported in datapump.4. 
Has ability to attach/detach from job and able to monitor the job remotely.5. ESTIMATE_ONLY option can be used to estimate disk space requirement before perform the job.6. Explicit DB version can be specified so only supported object can be exported.7. Data can be imported from one DB to another DB without writing into dump file using NETWORK_LINK.8. During impdp we change the target file name, schema, tablespace using: REMAP_Why datapump is faster than traditional Export. What to do to increase datapump performace?Data Pump is block mode, exp is byte mode.Data Pump will do parallel execution.Data Pump uses direct path API and Network link features.Data pump export/import/access file on server rather than client by providing directory structure grant.Data pump is having self-tuning utilities, the tuning parameter BUFFER and RECORDLENGTH no need now.Following initialization parameter must be set to increase data pump performance:· DISK_ASYNCH_IO=TRUE· DB_BLOCK_CHECKING=FALSE· DB_BLOCK_CHECKSUM=FALSEFollowing initialization must be set high to increase datapump parallelism:· PROCESSES· SESSIONS· PARALLEL_MAX_SERVERS· SHARED_POOL_SIZE and UNDO_TABLESPACENote: you must set the reasonable amount of STREAMS_POOL_SIZE as per database size if SGA_MAXSIZE parameter is not set. If SGA_MAXSIZE is set it automatically pickup reasonable amount of size.Flashback Question/AnswerFlashback Archive Features in oracle 11gThe flashback archiving provides extended features of undo based recovery over a year or lifetime as per the retention period and destination size.Limitation or Restriction on flashback Drop features?1. The recyclebin features is only for non-system and locally managed tablespace. 2. When you drop any table all the associated objects related with that table will go to recyclebin and generally same reverse with flashback but sometimes due to space pressure associated index will finished with recyclebin. Flashback cannot able to reverse the referential constraints and Mviews log.3. The table having fine grained auditing active can be protected by recyclebin and partitioned index table are not protected by recyclebin.Limitation or Restriction on flashback Database features?1. Flashback cannot use to repair corrupt or shrink datafiles. If you try to flashback database over the period when drop datafiles happened then it will records only datafile entry into controlfile.2. If controlfile is restored or re-created then you cannot use flashback over the point in time when it is restored or re-created.3. You cannot flashback NOLOGGING operation. If you try to flashback over the point in time when NOLOGGING operation happens results block corruption after the flashback database. Thus it is extremely recommended after NOLOGGING operation perform backup.What are Advantages of flashback database over flashback Table?1. Flashback Database works through all DDL operations, whereas Flashback Table does not work with structural change such as adding/dropping a column, adding/dropping constraints, truncating table. During flashback Table operation A DML exclusive lock associated with that particular table while flashback operation is going on these lock preventing any operation in this table during this period only row is replaced with old row here. 2. Flashback Database moves the entire database back in time; constraints are not an issue, whereas they are with Flashback Table. 3. Flashback Table cannot be used on a standby database.How should I set the database to improve Flashback performance? 
Use a fast file system (ASM) for your flash recovery area, configure enough disk space for the file system that will hold the flash recovery area can enable to set maximum retention target. If the storage system used to hold the flash recovery area does not have non-volatile RAM (ASM), try to configure the file system on top of striped storage volumes, with a relatively small stripe size such as 128K. This will allow each write to the flashback logs to be spread across multiple spindles, improving performance. For large production databases set LOG_BUFFER to be at least 8MB. This makes sure the database allocates maximum memory (typically 16MB) for writing flashback database logs.Performance Tuning Question/Answer:If you are getting complain that database is slow. What should be your first steps to check the DB performance issues?In case of performance related issues as a DBA our first step to check all the session connected to the database to know exactly what the session is doing because sometimes unexpected hits leads to create object locking which slow down the DB performance.The database performance directly related with Network load, Data volume and Running SQL profiling.1. So check the event which is waiting for long time. If you find object locking kill that session (DML locking only) will solve your issues.To check the user sessions and waiting events use the join query on views: V$session,v$session_wait2. After locking other major things which affect the database performance is Disk I/O contention (When a session retrieves information from datafiles (on disk) to buffer cache, it has to wait until the disk send the data). This waiting time we need to minimize.We can check these waiting events for the session in terms of db file sequential read (single block read P3=1 usually the result of using index scan) and db file scattered read (multi block read P3 >=2 usually the results of for full table scan) using join query on the view v$system_eventSQL> SELECT a.average_wait "SEQ READ", b.average_wait "SCAT READ" 2 FROM sys.v_$system_event a, sys.v_$system_event b 3 WHERE a.event = 'db file sequential read'AND b.event = 'db file scattered read'; SEQ READ SCAT READ---------- ---------- .74 1.6When you find the event is waiting for I/O to complete then you must need to reduce the waiting time to improve the DB performance. To reduce this waiting time you must need to perform SQL tuning to reduce the number of block retrieve by particular SQL statement.How to perform SQL Tuning?1. First of all you need to identify High load SQL statement. You can identify from AWR Report TOP 5 SQL statement (the query taking more CPU and having low execution ratio). Once you decided to tune the particular SQL statement then the first things you have to do to run the Tuning Optimizer. The Tuning optimize will decide: Accessing Method of query, Join Method of query and Join order.2. To examine the particular SQL statement you must need to check the particular query doing the full table scan (if index not applied use the proper index technique for the table) or if index already applied still doing full table scan then check may be table is having wrong indexing technique try to rebuild the index. It will solve your issues somehow…… otherwise use next step of performance tuning.3. Enable the trace file before running your queries, then check the trace file using tkprof created output file. 
According to explain_plan check the elapsed time for each query, and then tune them respectively.To see the output of plan table you first need to create the plan_table from and create a public synonym for plan_table @$ORACLE_HOME/rdbms/admin/utlxplan.sql)SQL> create public synonym plan_table for sys.plan_table;4. Run SQL Tuning Advisor (@$ORACLE_HOME/rdbms/admin/sqltrpt.sql) by providing SQL_ID as you find in V$session view. You can provide rights to the particular schema for the use of SQL Tuning Advisor: Grant Advisor to HR; Grant Administer SQL Tuning set to HR;SQL Tuning Advisor will check your SQL structure and statistics. SQL Tuning Advisor suggests indexes that might be very useful. SQL Tuning Advisor suggests query rewrites. SQL Tuning Advisor suggests SQL profile. (Automatic reported each time)5. Now in oracle 11g SQL Access Advisor is used to suggests new index for materialized views. 6. More: Run TOP command in Linux to check CPU usage information and Run VMSTAT, SAR, PRSTAT command to get more information on CPU, memory usage and possible blocking.7. Optimizer Statistics are used by the query optimizer to choose the best execution plan for each SQL statement. Up-to-date optimizer statistics can greatly improve the performance of SQL statements.8. A SQL Profile contains object level statistics (auxiliary statistics) that help the optimizer to select the optimal execution plan of a particular SQL statement. It contains object level statistics by correcting the statistics level and giving the Tuning Advisor option for most relevant SQL plan generation.DBMS_SQLTUNE.ACCEPT_SQL_PROFILE – to accept the correct plan from SQLplusDBMS_SQLTUNE.ALTER_SQL_PROFILE – to modify/replace existing plan from SQLplus.DBMS_SQLTUNE.DROP_SQL_PROFILE – to drop existing plan.Profile Type: REGULAR-PROFILE, PX-PROFILE (with change to parallel exec)SELECT NAME, SQL_TEXT, CATEGORY, STATUS FROM DBA_SQL_PROFILES; 9. SQL Plan Baselines are a new feature in Oracle Database 11g (previously used stored outlines, SQL Profiles) that helps to prevent repeatedly used SQL statements from regressing because a newly generated execution plan is less effective than what was originally in the library cache. Whenever optimizer generating a new plan it is going to the plan history table then after evolve or verified that plan and if the plan is better than previous plan then only that plan going to the plan table. You can manually check the plan history table and can accept the better plan manually using the ALTER_SQL_PLAN_BASELINE function of DBMS_SPM can be used to change the status of plans in the SQL History to Accepted, which in turn moves them into the SQL Baseline and the EVOLVE_SQL_PLAN_BASELINE function of the DBMS_SPM package can be used to see which plans have been evolved. Also there is a facility to fix a specific plan so that plan will not change automatically even if better execution plan is available. The plan base line view: DBA_SQL_PLAN_BASELINES.Why use SQL Plan Baseline, How to Generate new plan using Baseline 10. SQL Performance Analyzer allows you to test and to analyze the effects of changes on the execution performance of SQL contained in a SQL Tuning Set. Which factors are to be considered for creating index on Table? How to select column for index? 1. Creation of index on table depends on size of table, volume of data. If size of table is large and you need only few data < 15% of rows retrieving in report then you need to create index on that table. 2. 
Primary key and unique key automatically having index you might concentrate to create index on foreign key where indexing can improve performance on join on multiple table.3. The column is best suitable for indexing whose values are relatively unique in column (through which you can access complete table records. Wide range of value in column (good for regular index) whereas small range of values (good for bitmap index) or the column contains many nulls but queries can select all rows having a value. CREATE INDEX emp_ename ON emp_tab(ename);The column is not suitable for indexing which is having many nulls but cannot search non null value or LONG, LONG RAW column not suitable for indexing.CAUTION: The size of single index entry cannot exceed one-half of the available space on data block.The more indexes on table will create more overhead as with each DML operation on table all index must be updated. It is important to note that creation of so many indexes would affect the performance of DML on table because in single transaction should need to perform on various index segments and table simultaneously. What are Different Types of Index? Is creating index online possible? Function Based Index/Bitmap Index/Binary Tree Index/4. implicit or explicit index, 5. Domain Index You can create and rebuild indexes online. This enables you to update base tables at the same time you are building or rebuilding indexes on that table. You can perform DML operations while the index building is taking place, but DDL operations are not allowed. Parallel execution is not supported when creating or rebuilding an index online.An index can be considered for re-building under any of these circumstances:We must first get an idea of the current state of the index by using the ANALYZE INDEX VALIDATE STRUCTURE, ANALYZE INDEX COMPUTE STATISTICS command* The % of deleted rows exceeds 30% of the total rows (depending on table length). * If the ‘HEIGHT’ is greater than 4, as the height of level 3 we can insert millions of rows. * If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.Differentiate the use of Bitmap index and Binary Tree index? Bitmap indexes are preferred in Data warehousing environment when cardinality is low or usually we have repeated or duplicate column. A bitmap index can index null value Binary-tree indexes are preferred in OLTP environment when cardinality is high usually we have too many distinct column. Binary tree index cannot index null value.If you are getting high “Busy Buffer waits”, how can you find the reason behind it? Buffer busy wait means that the queries are waiting for the blocks to be read into the db cache. There could be the reason when the block may be busy in the cache and session is waiting for it. It could be undo/data block or segment header wait. Run the below two query to find out the P1, P2 and P3 of a session causing buffer busy wait then after another query by putting the above P1, P2 and P3 values. SQL> Select p1 "File #",p2 "Block #",p3 "Reason Code" from v$session_wait Where event = 'buffer busy waits'; SQL> Select owner, segment_name, segment_type from dba_extents Where file_id = &P1 and &P2 between block_id and block_id + blocks -1;What is STATSPACK and AWR Report? Is there any difference? As a DBA what you should look into STATSPACK and AWR report?STATSPACK and AWR is a tools for performance tuning. 
AWR is a new feature for oracle 10g onwards where as STATSPACK reports are commonly used in earlier version but you can still use it in oracle 10g too. The basic difference is that STATSPACK snapshot purged must be scheduled manually but AWR snapshots are purged automatically by MMON BG process every night. AWR contains view dba_hist_active_sess_history to store ASH statistics where as STASPACK does not storing ASH statistics.You can run $ORACLE_HOME/rdbms/admin/spauto.sql to gather the STATSPACK report (note that Job_queue_processes must be set > 0 ) and awrpt to gather AWR report for standalone environment and awrgrpt for RAC environment.In general as a DBA following list of information you must check in STATSPACK/AWR report. ¦ Top 5 wait events (db file seq read, CPU Time, db file scattered read, log file sync, log buffer spac)¦ Load profile (DB CPU(per sec) < Core configuration and ratio of hard parse must be < parse)¦ Instance efficiency hit ratios (%Non-Parse CPU nearer to 100%)¦ Top 5 Time Foreground events (wait class is ‘concurrency’ then problem if User IO, System IO then OK)¦ Top 5 SQL (check query having low execution and high elapsed time or taking high CPU and low execution)¦ Instance activity¦ File I/O and segment statistics¦ Memory allocation¦ Buffer waits¦ Latch waits 1. After getting AWR Report initially crosscheck CPU time, db time and elapsed time. CPU time means total time taken by the CPU including wait time also. Db time include both CPU time and the user call time whereas elapsed time is the time taken to execute the statement.2. Look the Load profile Report: Here DB CPU (per sec) must be < Core in Host configuration. If it is not means there is a CPU bound need more CPU (check happening for fraction time or all the time) and then look on this report Parse and Hard Parse. If the ratio of hard parse is more than parse then look for cursor sharing and application level for bind variable etc.3. Look instance efficiency Report: In this statistics you have to look ‘%Non-Parse CPU’, if this value nearer to 100% means most of the CPU resource are used into operation other than parsing which is good for database health.4. Look TOP five Time foreground Event: Here we should look ‘wait class’ if the wait class is User I/O, system I/O then OK if it is ‘Concurrency’ then there is serious problem then look Time(s) and Avg Wait time(s) if the Time (s) is more and Avg Wait Time(s) is less then you can ignore if both are high then there is need to further investigate (may be log file switch or check point incomplete).5. Look Time Model Statistics Report: This is detailed report of system resource consumption order by Time(s) and % of DB Time.6. Operating system statistics Report7. SQL ordered by elapsed time: In this report look for the query having low execution and high elapsed time so you have to investigate this and also look for the query using highest CPU time but the lower the execution.What is the difference between DB file sequential read and DB File Scattered Read? DB file sequential read is associated with index read where as DB File Scattered Read has to do with full table scan. The DB file sequential read, reads block into contiguous (single block) memory and DB File scattered read gets from multiple block and scattered them into buffer cache. 
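A quick way to compare how much time the instance has spent on these two waits since startup is a check against V$SYSTEM_EVENT, similar to the join shown earlier; this is only an illustrative query:
SQL> SELECT event, total_waits, time_waited, average_wait FROM v$system_event WHERE event IN ('db file sequential read', 'db file scattered read');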
Dataguard Question/AnswerWhat are Benefits of Data Guard?Using Data guard feature in your environment following benefit:High availability, Data protection, Offloading backup operation to standby, Automatic gap detection and resolution in standby database, Automatic role transitions using data guard broker.Oracle Dataguard classified into two types:1. Physical standby (Redo apply technology)2. Logical Standby (SQL Apply Technology)Physical standby are created as exact copy (matching the schema) of the primary database and keeping always in recoverable mode (mount stage not open mode). In physical standby database transactions happens in primary database synchronized by using Redo Apply method by continually applying redo data on standby database received from primary database. Physical standby database can be opened for read only transitions only that time when redo apply is not going on. But from 11g onward using active data guard option (extra purchase) you can simultaneously open the physical standby database for read only access and can apply redo log received from primary in the meantime.Logical standby does not matching the same schema level and using the SQL Apply method to synchronize the logical standby database with primary database. The main advantage of logical standby database over physical standby is you can use logical standby database for reporting purpose while you are apply SQL.What are different services available in oracle data guard?1. Redo Transport Service: Transmit the redo from primary to standby (SYNC/ASYNC method). It responsible to manage the gap of redo log due to network failure. It detects if any corrupted archive log on standby system and automatically perform replacement from primary. 2. Log Apply Service: It applies the archive redo log to the standby. The MRP process doing this task.3. Role Transition service: it control the changing of database role from primary to standby includes: switchover, switchback, failover.4. DG broker: control the creation and monitoring of data guard through GUI and command line.What is different protection mode available in oracle data guard? How can check and change it?1. Maximum performance: (default): It provides the high level of data protection that is possible without affecting the performance of a primary database. It allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.2. Maximum protection: This protection mode ensures that no data loss will occur if the primary database fails. In this mode the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.3. Maximum availability: This provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.Step to create physical standby database?On Primary site Modification:1. Enable force logging: Alter database force logging;2. Create redolog group for standby on primary server:Alter database add standby logfile (‘/u01/oradata/--/standby_redo01.log) size 100m;3. 
Setup the primary database pfile by changing required parameterLog_archive_dest_n – Primary database must be running in archive modeLog_archive_dest_state_nLog_archive_config -- enble or disable the redo stream to the standby site.Log_file_name_convert , DB_file_name_convert -- these parameter are used when you are using different directory structure in standby database. It is used for update the location of datafile in standby database.Standby_File_Management -- by setting this AUTO so that when oracle file added or dropped from primary automatically changes made to the standby. DB_Unique_Name, Fal_server, Fal_client4. Create password file for primary5. Create controlfile for standby database on primary site:alter database create standby controlfile as ‘STAN.ctl;6. Configure the listner and tnsname on primary database.On Standby Modification:1. Copy primary site pfile and modify these pfile as per standby name and location:2. Copy password from primary and modify the name.3. Startup standby database in nomount using modified pfile and create spfile from it4. Use the created controlfile to mount the database.5. Now enable DG Broker to activate the primary or standby connection.6. Finally start redo log apply.How to enable/disable log apply service for standby?Alter database recover managed standby database disconnect; apply in backgroundAlter database recover managed standby database using current logfile; apply in real time.Alter database start logical standby apply immediate; to start SQL apply for logical standby database.What are different ways to manage long gap of standby database?Due to network issue sometimes gap is created between primary and standby database but once the network issue is resolved standby automatically starts applying redolog to fill the gap but in case when the gap is too long we can fill through rman incremental backup in three ways.1. Check the actual gap and perform incremental backup and use this backup to recover standby site.2. Create controlfile for standby on primary and restore the standby using newly created controlfile.3. Register the missing archive log.Use the v$archived_log view to find the gap (archived not applied yet) then find the Current_SCN and try to take rman incremental backup from physical site till that SCN and apply on standby site with recover database noredo option. Use the controlfile creation method only when fail to apply with normal backup method. Create new controlfile for standby on primary site using backup current controlfile for standby; Copy this controlfile on standby site then startup the standby in nomount using pfile and restore with the standby using this controlfile: restore standby controlfile from ‘/location of file’; and start MRP to test.If still alert.log showing log are transferred to the standby but still not applied then need to register these log with standby database with Alter database register logfile ‘/backup/temp/arc10.rc’;What is Active DATAGUARD feature in oracle 11g?In physical standby database prior to 11g you are not able to query on standby database while redo apply is going on but in 11g solve this issue by quering current_scn from v$database view you are able to view the record while redo log applying. 
Thus active data guard feature s of 11g allows physical standby database to be open in read only mode while media recovery is going on through redo apply method and also you can open the logical standby in read/write mode while media recovery is going on through SQL apply method.How can you find out back log of standby?You can perform join query on v$archived_log, v$managed_standbyWhat is difference between normal Redo Apply and Real-time Apply?Normally once a log switch occurs on primary the archiver process transmit it to the standby destination and remote file server (RFS) on the standby writes these redo log data into archive. Finally MRP service, apply these archive to standby database. This is called Redo Apply service.In real time apply LGWR or Archiver on the primary directly writing redo data to standby there is no need to wait for current archive to be archived. Once a transaction is committed on primary the committed change will be available on the standby in real time even without switching the log.What are the Back ground processes for Data guard?On primary:Log Writer (LGWR): collects redo information and updates the online redolog . It can also create local archive redo log and transmit online redo log to standby.Archiver Process (ARCn): one or more archiver process makes copies of online redo log to standby locationFetch Archive Log (FAL_server): services request for archive log from the client running on different standby server.On standby:Fetch Archive Log (FAL_client): pulls archive from primary site and automatically initiates transfer of archive when it detects gap.Remote File Server (RFS): receives archives on standby redo log from primary database. Archiver (ARCn): archived the standby redo log applied by managed recovery process.Managed Recovery Process (MRP): applies archives redo log to the standby server.Logical Standby Process (LSP): applies SQL to the standby server.ASM/RAC Question/AnswerWhat is the use of ASM (or) Why ASM preferred over filesystem?ASM provides striping and mirroring. You must put oracle CRD files, spfile on ASM. In 12c you can put oracle password file also in ASM. It facilitates online storage change and also rman recommended to backed up ASM based database.What are different types of striping in ASM & their differences?Fine-grained striping is smaller in size always writes data to 128 kb for each disk, Coarse-grained striping is bigger in size and it can write data as per ASM allocation unit defined by default it is 1MB.Default Memory Allocation for ASM? How will backup ASM metadata?Default Memory allocation for ASM in oracle 10g in 1GB in Oracle 11g 256M in 12c it is set back again 1GB.You can backup ASM metadata (ASM disk group configuration) using Md_Backup.How to find out connected databases with ASM or not connected disks list?ASMCMD> lsctSQL> select DB_NAME from V$ASM_CLIENT;ASMCMD> lsdgselect NAME,ALLOCATION_UNIT_SIZE from v$asm_diskgroup;What are required parameters for ASM instance Creation?INSTANCE_TYPE = ASM by default it is RDBMSDB_UNIQUE_NAME = +ASM1 by default it is +ASM but you need to alter to run multiple ASM instance.ASM_POWER_LIMIT = 11 It defines maximum power for a rebalancing operation on ASM by default it is 1 can be increased up to 11. The higher the limit the more resources are allocated resulting in faster rebalancing. It is a dynamic parameter which will be useful for rebalancing the data across disks.ASM_DISKSTRING = ‘/u01/dev/sda1/c*’it specify a value that can be used to limit the disks considered for discovery. 
Altering the default value may improve the speed disk group mount time and the speed of adding a disk to disk group.ASM_DISKGROUPS = DG_DATA, DG_FRA: List of disk group that will be mounted at instance startup where DG_DATA holds all the datafiles and FRA holds fast recovery area including online redo log and control files. Typically FRA disk group size will be twice of DATA disk group as it is holding all the backups.How to Creating spfile for ASM database?SQL> CREATE SPFILE FROM PFILE = ‘/tmp/init+ASM1.ora’;Start the instance with NOMOUNT option: Once an ASM instance is present disk group can be used for following parameter in database instance to allow ASM file creation:DB_CREATE_FILE_DEST, DB_CREATE_ONLINE_LOG_DEST_n, DB_RECOVERY_FILE_DEST, CONTROL_FILESLOG_ARCHIVE_DEST_n,LOG_ARCHIVE_DEST,STANDBY_ARCHIVE_DESTWhat are DISKGROUP Redundancy Level?Normal Redundancy: Two ways mirroring with 2 FAILURE groups with 3 quorum (optionally to store vote files)High Redundancy: Three ways mirroring requiring three failure groupsExternal Redundancy: No mirroring for disk that are already protecting using RAID on OS level.CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY FAILGROUP failure_group_1 DISK '/devices/diska1' NAME diska1,'/devices/diska2' NAME diska2 FAILGROUP failure_group_2 DISK '/devices/diskb1' NAME diskb1,'/devices/diskb2' NAME diskb2;We are going to migrate new storage. How we will move my ASM database from storage A to storage B? First need to prepare OS level to disk so that both the new and old storage accessible to ASM then simply add the new disks to the ASM disk group and drop the old disks. ASM will perform automatic rebalance whenever storage will change. There is no need to manual i/o tuning. ASM_SQL> alter diskgroup DATA drop disk data_legacy1, data_legacy2, data_legacy3 add disk ‘/dev/sddb1’, ‘/dev/sddc1’, ‘/dev/sddd1’;What are required component of Oracle RAC installation?:1. Oracle ASM shared disk to store OCR and voting disk files.2. OCFS2 for Linux Clustered database3. Certified Network File system (NFS)4. Public IP: Configuration: TCP/IP (To manage database storage system)5. Private IP: To manager RAC cluster ware (cache fusion) internally.6. SCAN IP: (Listener): All connection to the oracle RAC database uses the SCAN in their client connection string with SCAN you do not have to change the client connection even if the configuration of cluster changes (node added or removed). Maximum 3 SCAN is running in oracle.7. Virtual IP: is alternate IP assigned to each node which is used to deliver the notification of node failure message to active node without being waiting for actual time out. Thus possibly switchover will happen automatically to another active node continue to process user request.Steps to configure RAC database:1. Install same OS level on each nodes or systems.2. Create required number of group and oracle user account.3. Create required directory structure or CRS and DB home.4. Configure kernel parameter (sysctl.config) as per installation doc set shell limit for oracle user account.5. Edit etc/host file and specify public/private/virtual ip for each node.6. Create required level of partition for OCR/Votdisk and ASM diskgroup.7. Install OCFSC2 and ASM RPM and configure with each node.8. Install clustware binaries then oracle binaries in first node.9. Invoke netca to configure listener. 10. Finally invoke DBCA to configure ASM to store database CRD files and create database.What is the structure change in oracle 11g r2?1. Grid and (ASM+Clustware) are on home. 
(oracle_binaries+ASM binaries in 10g)2. OCR and Voting disk on ASM.3. SAN listener4. By using srvctl can manage diskgroups, SAN listener, oracle home, ons, VIP, oc4g.5. GSDWhat are oracle RAC Services?Cache Fusion: Cache fusion is a technology that uses high speed Inter process communication (IPC) to provide cache to cache transfer of data block between different instances in cluster. This eliminates disk I/O which is very slow. For example instance A needs to access a data block which is being owned/locked by another instance B. In such case instance A request instance B for that data block and hence access the block through IPC this concept is known as Cache Fusion.Global Cache Service (GCS): This is the main heart of Cache fusion which maintains data integrity in RAC environment when more than one instances needed particular data block then GCS full fill this task:In respect of instance A request GCS track that information if it finds read/write contention (one instance is ready to read while other is busy with update the block) for that particular block with instance B then instance A creates a CR image for that block in its own buffer cache and ships this CR image to the requesting instance B via IPC but in case of write/write contention (when both the instance ready to update the particular block) then instance A creates a PI image for that block in its own buffer cache, and make the redo entries and ships the particular block to the requesting instance B. The dba_hist_seg_stats is used to check the latest object shipped.Global Enqueue Service (GES): The GES perform concurrency (more than one instance accessing the same resource) control on dictionary cache lock, library cache lock and transactions. It handles the different lock such as Transaction lock, Library cache lock, Dictionary cache lock, Table lock.Global Resource Directory (GRD): As we know to perform any operation on data block we need to know current state of the particular data block. The GCS (LMSN + LMD) + GES keep track of the resource s, location and their status of (each datafiles and each cache blocks ) and these information is recorded in Global resource directory (GRD). Each instance maintains their own GRD whenever a block transfer out of local cache its GRD is updated.Main Components of Oracle RAC Clusterware?OCR (Oracle Cluster Registry): OCR manages oracle clusterware (all node, CRS, CSD, GSD info) and oracle database configuration information (instance, services, database state info).OLR (Oracle Local Registry): OLR resides on every node in the cluster and manages oracle clusterware configuration information for each particular node. The purpose of OLR in presence of OCR is that to initiate the startup with the local node voting disk file as the OCR is available on GRID and ASM file can available only when the grid will start. The OLR make it possible to locate the voting disk having the information of other node also for communicate purpose.Voting disk: Voting disk manages information about node membership. Each voting disk must be accessible by all nodes in the cluster for node to be member of cluster. If incase a node fails or got separated from majority in forcibly rebooted and after rebooting it again added to the surviving node of cluster. Why voting disk place to the quorum disk or what is split-brain syndrome issue in database cluster?Voting disk placed to the quorum disk (optionally) to avoid the possibility of split-brain syndrome. 
Split-brain syndrome is a situation when one instance trying to update a block and at the same time another instance also trying to update the same block. In fact it can happen only when cache fusion is not working properly. Voting disk always configured with odd number of disk series this is because loss of more than half of your voting disk will cause the entire cluster fail. If it will be even number node eviction cannot decide which node need to remove due to failure. You must store OCR and voting disk on ASM. Thus if necessary you can dynamically add or replace voting disk after you complete the Cluster installation process without stopping the cluster.ASM Backup:You can use md_backup to restore ASM disk group configuration in case of ASM disk group storage loss.OCR and Votefile Backup: Oracle cluster automatically creates OCR backup (auto backup managed by crsd) every four hours and retaining at least 3 backup (backup00.ocr, day.ocr, week.ocr on the GRID) every times but you can take OCR backup manually at any time using: ocrconfig –manualbackup --To take manual backup of ocrocrconfig –showbackup -- To list the available backup.ocrdump –backupfile ‘bak-full-location’ -- To validate the backup before any restore.ocrconfig –backuploc --To change the OCR configured backup location.dd if=’vote disk name’ of=’bakup file name’; To take votefile backupTo check OCR and Vote disk Location:crsctl query css votedisk/etc/orcle/ocr.loc or use ocrcheckocrcheck --To check the OCR corruption status (if any).Crsctl check crs/cluster --To check crs status on local and remote nodeMoving OCR and Votedisk:Login with root user as the OCR store on root and for votedisk stops all crs first.Ocrconfig –replace ocrmirror/ocr -- Adding/removing OCR mirror and OCR file.Crsctl add/delete css votedisks --Adding and Removing Voting disk in Cluster.List to check all nodes in your cluster from root or to check public/private/vi pip info.olsnodes –n –p –I How can Restore the OCR in RAC environment?1. Stop clusterware all node and restart with one node in exclusive mode to restore. The nocrs ensure crsd process and OCR do not start with other node.# crsctl stop crs, # crsctl stop crs –f # crsctl start crs –excel –nocrs Check if crsd still open then stop it: # crsctl stop resource ora.crsd -init 2. If you want to restore OCR to and ASM disk group then you must check/activate/repair/create diskgroup with the same name and mount from local node. If you are not able to mount that diskgroup locally then drop that diskgroup and re-create it with the same name. Finally run the restore with current backup.# ocrconfig –restore file_name; 3. Verify the integrity of OCR and stop exclusive mode crs# ocrcheck # crsctl stop crs –f4. Run ocrconfig –repair –replace command all other node where you did not use the restore. For example you restore the node1 and have 4 node then run that rest of node 3,2,4.# ocrconfig –repair –replace 5. Finally start all the node and verify with CVU command# crsctl start crs# cluvfy comp ocr –n all –verboseNote: Using ocrconfig –export/ocrconfig –import also enables you to restore OCR Why oracle recommends to use OCR auto/manual backup to restore the OCR instead of Export/Import?1. An OCR auto/manual backup is consistent snapshot of OCR whereas export is not.2. Backup are created when the system is online but you must shutdown all node in clusterware to take consistent export.3. You can inspect a backup using OCRDUMP utility where as you cannot inspect the contents of export.4. 
4. You can list and see the backups by using ocrconfig -showbackup, whereas you must keep track of each export yourself.

How to restore voting disks?

1. Shut down CRS on all nodes in the cluster:
crsctl stop crs
2. Locate the current location of the voting disks and restore each of them with the dd command from a previous good backup that was taken with the same dd command:
crsctl query css votedisk
dd if=<backup_file> of=<voting_disk>
3. Finally start CRS on all nodes:
crsctl start crs

How to add a node or instance in a RAC environment?

1. From the ORACLE_HOME/oui/bin location on node1, run the addNode.sh script:
$ ./addNode.sh -silent "CLUSTER_NEW_NODES={node3}"
2. Run the ORACLE_HOME/root.sh script on node3.
3. From an existing node run srvctl config db -d db_name, then create a new mount point.
4. mkdir -p ORACLE_HOME_NEW/"mount point name"
5. Finally run the cluster installer for the new node and update the clusterware inventory.
Alternatively, you can start DBCA and, from the instance management page, choose "Add instance" and follow the steps.

How to identify the master node in RAC?

# /u1/app/../crsd> grep MASTER crsd.log | tail -1
(or) cssd> grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.

Difference between crsctl and srvctl?

crsctl manages cluster-related operations, such as starting/enabling cluster services, whereas srvctl manages Oracle-related operations, such as starting/stopping Oracle instances. In Oracle 11gR2, srvctl can also be used to manage the network, VIPs, disks, etc.

What are ONS/TAF/FAN/FCF in RAC?

ONS is a part of the clusterware and is used to transfer messages between the node and application tiers.
Fast Application Notification (FAN) allows the database to notify the client of any changes, such as node UP/DOWN or database UP/DOWN.
Transparent Application Failover (TAF) is a feature of Oracle Net Services which moves a session to a backup connection whenever the session fails.
FCF (Fast Connection Failover) is a feature of the Oracle client which receives notifications from FAN and processes them accordingly: it cleans up connections when a DOWN event is received and adds new connections when an UP event is received.

How does OCSSD start if the voting disk and OCR reside on ASM?

Without access to the voting disk there is no way for CSS to join or start the clusterware; the voting disk is stored in ASM, and in the normal startup order CSSD starts before ASM, so how can it find the voting files? In 11gR2 the ASM disk header contains new metadata, kfdhdb.vfstart and kfdhdb.vfend, which tells CSS where to find the voting files on the disk. Reading this does not require the ASM instance to be up; once CSS gets the voting files, the node can join the cluster.
Note: Oracle Clusterware can access the OCR and the voting disks present in ASM even if the ASM instance is down. As a result, CSS can continue to maintain the Oracle cluster even if the ASM instance has failed.
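As a small supplementary check (again an addition, not part of the original answer): from 11gR2 onward the ASM disk group view is documented to expose whether a disk group carries voting files, so their location can be confirmed from SQL once ASM is up:

-- Hedged sketch (11gR2+): which disk groups carry the voting files.
-- The VOTING_FILES column is assumed from 11.2-era documentation.
SELECT name, state, voting_files
FROM   v$asm_diskgroup
ORDER  BY name;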
Upgrade/Migration/Patches Questions and Answers

What are database patches and how do you apply them?

CPU (Critical Patch Update, or one-off patch): security fixes released each quarter. They are cumulative, meaning they include the fixes from previous Oracle security alerts. To apply a CPU you must use the opatch utility:
- Shut down all instances and listeners associated with the ORACLE_HOME that you are updating.
- Set your current directory to the directory where the patch is located and then run the opatch utility.
- After applying the patch, start up all your services and listeners, start up all your databases with a sysdba login, and run the catcpu.sql script.
- Finally run utlrp.sql to recompile invalid objects.
To roll back a CPU patch:
- Shut down all instances and listeners.
- Go to the patch location and run: opatch rollback -id 677666
- Start all the databases and listeners and run the catcpu_rollback.sql script.
- Bounce the database and run the utlrp.sql script.

PSU (Patch Set Update): security fixes plus priority fixes. Once a PSU patch is applied, only PSUs can be applied in the near future until the database is upgraded to a newer version. You need two things to apply a PSU patch: the latest version of OPatch, and the PSU patch that you want to apply.
1. Check and update the OPatch version: go to ORACLE_HOME/OPatch and run "opatch version". To update to the latest OPatch, take a backup of the OPatch directory, remove the current OPatch directory, and finally unzip the downloaded patch into the OPatch directory. Then check your OPatch version again.
2. Apply the PSU patch:
unzip p13923374_11203_.zip
cd 13923374
opatch apply
In the case of RAC, the opatch utility will prompt for an OCM (Oracle Configuration Manager) response file. You have to provide the complete path of the OCM response file if you have already created one.
3. Post-apply steps: start up the database with sys as sysdba.
SQL> @catbundle.sql psu apply
SQL> quit
opatch lsinventory                       -- check which PSU patches are installed
opatch rollback -id 13923374             -- roll back a patch you have applied
opatch nrollback -id 13923374,13923384   -- roll back multiple patches you have applied

SPU (Security Patch Update): an SPU cannot be applied once a PSU has been applied, until the database is upgraded to the next base version.

Patchset (e.g. 10.2.0.1 to 10.2.0.3): applying a patchset usually requires OUI. Shut down all database services and listeners, apply the patchset to the Oracle binaries, then start up the services and listener and run the post-patch script.

Bundle patches: these are for Windows and Exadata and include both the quarterly security patches and recommended fixes.

You have a collection of nearly 100 patches. How can you apply only one of them?

By using napply with the specific patch ID. You can apply one patch from a collection of many with: opatch util napply -id 9 -skip_subset -skip_duplicate. This will apply only patch 9 out of the many extracted patches.

What is a rolling upgrade?

It is a new ASM feature in Oracle 11g. It enables you to patch ASM nodes in a clustered environment without affecting database availability. During a rolling upgrade we can maintain one node while the other nodes run different software versions.

What happens when you use STARTUP UPGRADE?

STARTUP UPGRADE enables you to open a database created with an earlier version. It restricts sysdba logon and disables system triggers. After STARTUP UPGRADE only specific view queries can be used; no other views can be used until catupgrd.sql has been executed.
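A supplementary sanity check (my addition, not from the original notes): after a CPU/PSU has been applied and catbundle.sql has run, the patch history can usually be confirmed from SQL*Plus. The view below exists in 10g/11g, though its exact columns can vary slightly by release:

-- Hedged sketch: confirm which bundle patches the database registry knows about,
-- and whether any objects are still invalid after running utlrp.sql.
SELECT action_time, action, version, comments
FROM   dba_registry_history
ORDER  BY action_time;

SELECT owner, object_type, COUNT(*) AS invalid_count
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner, object_type;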
Link
A SQL Join clause is placed within a Select statement and, at the end, it is given a join condition, which tells the database how to fetch your data. The column specified within the join condition must be preceded by a table name if the column name is the same in both tables. When a column is preceded by a table name, it is known as a qualified column.
In other words, joining is the process of taking data from multiple tables and putting it into one generated result set. This article will be pretty basic and hopefully easy to follow because the goal here is to fully grasp the idea of SQL joins.
Joins are a complex subject, and some find them confusing and tricky. Because of that, let’s see the concept of retrieving data from multiple tables rather than just diving into all different types of joins. Some might not even know what a join is or maybe you’ve been writing queries involving a single table. But now you need information from more than one table. Either way, you’re in the right place. So, thank you for joining us (see what we did there) and let’s begin.
Data in a database can be structured in a confusing, non-user-friendly way, and we essentially use SQL joins to present it the opposite way. The data is stored in the structure that best protects data integrity: normalized, separated across tables with the proper relationships, data types, keys, indexes, etc. When we actually want to output that data, e.g. on a web page, within an application, or to another person, we need the results to be organized and structured so that they make sense to us. That's the purpose of SQL joins: they take the mess and present it in a way that looks good, and this can be done over multiple tables.
Note: To automatically complete SQL statements, such as Join clause or cross-database queries, in a single click check out a free SQL Server Management Studio and Visual Studio add-in from this link.
For example, let’s say that we have three different tables with relationships between them:
We don't need any specifics about the tables or the data we are dealing with; basically, the SQL Join will take all of this (Table1, Table2, Table3) and present us with a generated table that is prettier, better structured, more user-friendly, and makes more sense to us.
Now, the way this is done is by using foreign and primary key connections. These tables are going to have relationships between them, as shown below; this is known as a foreign key connection that references a primary key:
In other words, one of these tables is the parent and another one is the child and we want the end result to combine them into one generated table as shown above.
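To make that concrete, here is a minimal hedged sketch of joining three such tables into one result set. Table1, Table2 and Table3 come from the figure above, but every key and column name here (ID, Table1ID, Table2ID, Name, Detail, Note) is invented purely for illustration:

-- Hypothetical schema: Table2 references Table1, and Table3 references Table2.
SELECT t1.Name,
       t2.Detail,
       t3.Note
FROM   Table1 t1
       INNER JOIN Table2 t2 ON t2.Table1ID = t1.ID   -- foreign key -> primary key
       INNER JOIN Table3 t3 ON t3.Table2ID = t2.ID;

Each join condition follows the parent/child relationship, so the three tables collapse into one generated result set.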
The relationship that is used within the SQL Join condition is often already defined in the database (usually foreign key/primary key connection) but it doesn’t have to be. If it’s not already in the database, it’s known as an ad hoc relationship.
You should know that when we do this, the columns that are connected are going to be indexed to make joining data faster (a minimal index sketch follows the list of articles below). Indexing is another complex subject, and we highly recommend that you check out the articles below on indexes, which speed up the retrieval of rows from a table or view:
How to identify and solve SQL Server index scan problems
How to optimize SQL Server query performance – Statistics, Joins and Index Tuning
How to create and optimize SQL Server indexes for better performance
Top things you need in a SQL Server query plan analysis tool
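As a rough illustration of the indexing point above (the table and column names are just this article's running Users/Comments example, not a real schema), the foreign-key column used in the join condition is a typical candidate for an index:

-- Hypothetical: index the foreign-key column that join conditions filter on,
-- so looking up a user's comments does not require scanning the whole table.
CREATE INDEX IX_Comments_UserID
    ON Comments (UserID);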
Moving on, when we are joining tables it's a common practice to replace the column names with more user-friendly names. For example, let's say we have a comment on a website that says "This article is awesome!", posted by a reader named "Michael":
That username on the website is probably stored as a user ID in a comments table. So, let's say we have two tables, a Users table and a Comments table, and there's a foreign key on the UserID column in the Comments table referencing the UserID in the Users table:
Well, when we present that data from the database we will want to replace that user ID with the actual username. Otherwise, we could end up with a user named "4523" and nobody would know who that is. Since the comment row only stores the user ID, we can take the username from the Users table and display it instead of the user ID.
This is how a SQL Join works conceptually. We are thinking conceptually because this article is only meant to help you understand the concepts behind the different kinds of SQL joins. This is why we use these mock-up examples instead of writing actual queries, and also because joins are written slightly differently from one database management system to another. All you need to know is how the joins work and what results to expect. Then, when you start working with MySQL, SQL Server, Oracle Database, etc., you'll hopefully be able to figure out how to write their joins. Ultimately, the most important thing with joins is to know beforehand what the results are going to be, because if you understand the joins, you'll be able to just type them out.
Bear in mind that all of this is data manipulation language (DML). This is different from data definition language, or data description language (DDL). These are two parts of the SQL concept, and when dealing with SQL joins it's important to know that we already have the definition of the database. This simply means that the database has already been designed and structured with its columns and rows, and now we're manipulating that data to look a certain way. So, by querying data using joins you're not actually changing the structure of the database itself; you're just changing the presentation of that database.
Joining data from multiple tables depends a lot on database design. A database can be structured to have a friendly design but in reality, it doesn’t work because it’s not normalized, there’s repeating data, etc. A simple example is a table called ”Comments“:
Let's say that inside this table we have three comments by the username "Michael".
If you show this to someone who is not familiar with databases, he or she will probably say that the user Michael posted a comment "This article is awesome!", that the same user posted another comment, and so on. You get the point. But this isn't the best way to store data, because of the data-integrity and repeating-data issues we mentioned above. In this artificial example, we have the same username three times. The best practice here would be to use a user ID, get rid of the "Username" column and its "Michael" values, and have something like this so we can apply the SQL Join logic:
Here we simply have a Users table with a "UserID" column that holds the user ID for each individual person. So we started off with that friendly design, but in reality it doesn't work well because it's not normalized, there's repeating data, etc.; its only real benefit is that it looks pretty. Once we accept that, we can break it up and normalize it so that we have two tables.
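As a hedged sketch of what that normalization could look like (the table and column names follow the article's example, while the data types and extra CommentID key are assumptions made only for illustration):

-- Hypothetical normalized design: usernames live once in Users,
-- and Comments only stores the foreign key UserID.
CREATE TABLE Users (
    UserID   INT PRIMARY KEY,
    Username VARCHAR(50) NOT NULL
);

CREATE TABLE Comments (
    CommentID   INT PRIMARY KEY,
    UserID      INT NOT NULL REFERENCES Users (UserID),
    CommentText VARCHAR(500) NOT NULL
);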
The next thing we have to do is take those pieces of the puzzle and put them back together with a SQL Join to get the final result. We basically want to recreate the original table we had before by joining the tables of our normalized database. UserID 12 in the "Comments" table will be connected with UserID 12 in the "Users" table, which is associated with the name "Michael".
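Continuing the sketch above (same assumed names), this is roughly the join that recreates the original, denormalized view of the data:

-- Hypothetical: put the puzzle back together, showing each comment
-- with the username instead of the raw UserID.
SELECT u.Username,
       c.CommentText
FROM   Comments c
       INNER JOIN Users u ON u.UserID = c.UserID;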
There are a few different kinds of SQL joins, such as inner joins, outer joins, and cross joins (a short sketch contrasting two of them follows the list below), and we'd highly recommend that you check out the detailed articles below, which cover not only the basics but also show actual examples of data retrieval:
SQL Join clause introduction and overview
SQL Join overview and tutorial
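As a brief hedged taste of the difference before you read those articles (same assumed Users/Comments names as above): an inner join returns only users who have comments, while a left outer join keeps every user and fills the missing comment side with NULLs.

-- Inner join: users without comments disappear from the result.
SELECT u.Username, c.CommentText
FROM   Users u
       INNER JOIN Comments c ON c.UserID = u.UserID;

-- Left outer join: every user appears; CommentText is NULL when there are none.
SELECT u.Username, c.CommentText
FROM   Users u
       LEFT JOIN Comments c ON c.UserID = u.UserID;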
Text
How To Get Ddl Of All Tablespaces In Oracle
Hello Friends, in this article we are going to show you how to get the DDL of all tablespaces in Oracle.

Get DDL of All Tablespaces in Oracle

Use the query below to get the DDL of all tablespaces in Oracle:

SQL> set heading off;
SQL> set echo off;
SQL> set pages 999;
SQL> set long 90000;
SQL> spool ddl_of_tablespace.sql
SQL> select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces…
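The excerpt above is cut off at the end; as a hedged guess at the complete statement (the tb alias is the only name assumed beyond what is shown), the usual form is:

-- Hedged completion of the truncated query above: generate the CREATE TABLESPACE
-- DDL for every tablespace and capture it in the spooled file.
SELECT dbms_metadata.get_ddl('TABLESPACE', tb.tablespace_name)
FROM   dba_tablespaces tb;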
View On WordPress
#get ddl for table in oracle#get ddl of table in oracle#get ddl of tablespace in oracle#How To Get Ddl Of All Tablespaces In Oracle#how to get ddl of table in oracle#query to get ddl of table in oracle
Text
oracle database training in ameerpet
Oracle Database is an object-relational database management system (ORDBMS) developed and marketed by Oracle Corporation, and is commonly referred to simply as Oracle. In the early 70s Dr. E. F. Codd, a computer scientist, invented the relational model for database management. The relational model deals with many issues caused by the flat file model, or FMS (File Management System). According to his model, instead of combining everything in a single structure, data is organized into entities and attributes to make it more efficient. Oracle provides SQL (Structured Query Language) and also supports PL/SQL (Procedural Language extensions to SQL).

Career Prospects
This course is meant for everyone who would like to visualize how information gets stored and delivered to end users, and who wants a solid understanding of the Oracle products used in professional job roles.
Application Developer
Database Developer
Support Analyst
Curriculum
Introduction to DBMS • DBMS, Data, Information, Database • Data models – FMS/ HDBMS/ NDBMS/ RDBMS/ ORDBMS • E-R Diagram, Normalization, Codd Rules • RDBMS Packages • Versions
SQL (Structured Query Language) • SQL and SQL*Plus Commands • Types of commands in SQL • How to login into SQL • HR user, locking & unlocking user
Retrieving data • Select statement usage • Alias names – columns & expressions • Applying filters using operators • Operators – Relational/ Logical/ Special • Distinct clause, Order by clause • Sorting NULL values
Working with tables • Datatypes in Oracle • Naming conventions; Rules – tables/columns • Creating Tables in different methods
DML operations • Insert data in different methods • Update / Delete data
DDL commands • Alter / Truncate / Rename / Drop table • Recyclebin concept • Flashback / Purge Table
Constraints • Data Integrity and importance • Not Null/ Unique/ Primary Key/ Check • References/ Foreign Key • Add/enable/disable/drop constraints • Composite primary key/unique key • About 'User_constraints' table
TCL commands (Transaction Handling) • About login session • Redo log files • Commit / Rollback / Savepoint • Working with multiple savepoints
DCL Commands • User management, SYSTEM user • Create user, Grant basic privileges • Change password, lock/unlock user • Grant/Revoke a user as DBA • Data sharing between users • Grant and Revoke commands • Accessing another user data • Dead lock situations • About 'User_tab_privs_made', 'User_tab_privs_recd'
Database functions • Group / Row / Numeric / String • Conversion functions • Dates & date formats • Date functions, Misc. functions • Pseudo columns
Data grouping • Group by / Having clause • 'where' versus 'having' clause • Using expressions in group by
Set operators • Union/ Intersect/ Minus/ Union All
Subqueries • Simple / Multiple row subquery • Nested / Co-related subquery
Joins tables • Simple join (equi and non-equi) • Self/ Inner / Outer join (Left/Right/Full) • Cartesian/Cross join • ANSI Standard of joins
Oracle Database objects • Synonyms • Views • Sequences • Indexes
PLSQL
PL/SQL architecture • PL/SQL engine • Structure of a PL/SQL program • Different sections of a program • Datatypes in PL/SQL • Operators, I/O operations • Write expressions, simple program • Extracting data from tables
Control statements • If-Then-Else – Simple, Nested • Compound conditions (and/or/not) • Case – End Case • Loops – For/ While/ Simple/ ForALL
Exception Handling • Introduction • Types of Exceptions • Runtime errors vs Exceptions • System / User defined exceptions • Using multiple exceptions • Raise_Application_Error()
Cursor management • Cursor types – Implicit / Explicit • Cursor attributes • Declaring, opening, closing • Fetching, checking End of cursor • Parameterized cursor • For loops in cursor processing • DML operations using cursors • Cursor using joins
Sub programs
Stored Procedures • Creating a simple procedure • Compiling, verifying errors • Executing procedure from SQL prompt • Calling procedure in another PL/SQL program • Procedure including DML operations • Procedures with parameters • Types of parameters (IN, OUT & IN OUT) • Cursors in a procedure
User defined Functions • Writing a user's function • About 'Return' statement • Compiling, verifying errors • Executing from SQL prompt • Functions with parameters (IN, OUT & IN OUT)
Packages • Create package specification / body • Calling the elements of a package • Adv of package over procedure/function • Implementing Polymorphism • Function overloading • Procedure overloading
Database Triggers • Types of triggers • Creating before/after triggers • :OLD, :NEW reference for data references • DML operations using triggers • 'Instead of' triggers • Triggers Vs constraints • Transaction auditing using triggers
Related searches are: oracle admin online training, oracle admin training in hyderabad, oracle administration course in ameerpet, oracle administration course in hyderabad, oracle administration online training in hyderabad, best oracle online training institutes in hyderabad, oracle training in ameerpet, oracle admin training institute in ameerpet hyderabad, oracle admin online training institute in ameerpet hyderabad, oracle Admin Training Institutes in Hyderabad
For More info : www.datadot.in
Call us : 9052641113