#installing sample schema oracle databases
Installing the HR sample schema script on the Oracle Linux platform
Script to set up the HR sample schema on the Oracle Linux platform. On the Linux machine, go to the directory under the Oracle home that holds the Human Resources scripts shipped with the installation:

[oracle@Linux1 /]$ cd $ORACLE_HOME/demo/schema/human_resources
[oracle@Linux1 human_resources]$

Connect to the database with the sqlplus command:

sqlplus / as sysdba

Connected to:
Oracle Database 19c Enterprise Edition Release…
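The post is cut off above, but as a hedged sketch of how the install typically proceeds from that directory: hr_main.sql is run from the SYSDBA session and prompts for a handful of values (the prompt wording and the responses below are illustrative and vary slightly by release):

SQL> @hr_main.sql
specify password for HR as parameter 1: <hr_password>
specify default tablespace for HR as parameter 2: users
specify temporary tablespace for HR as parameter 3: temp
specify log path as parameter 4: $ORACLE_HOME/demo/schema/log/

Once the script completes, the HR schema (tables such as EMPLOYEES and DEPARTMENTS) is ready to query.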
Oracle 1Z0-447 Certification Aspects
* Exam Title: Oracle GoldenGate 12c Implementation Essentials
* Exam Code: 1Z0-447
* Exam Price: $245.00
* Format: Multiple Choice
* Duration: Two hours
* Number of Questions: 72
* Passing Score: 69%
* Validated Against: Oracle GoldenGate 12c
* 1Z0-447 Practice Test: https://www.dbexam.com/1z0-447-oracle-goldengate-12c-implementation-essentials
* 1Z0-447 Sample Questions: https://www.dbexam.com/sample-questions/oracle-1z0-447-certification-sample-questions-and-answers

Oracle GoldenGate 12c Certified Implementation Specialist Certification Overview
The Oracle GoldenGate 12c Essentials (1Z0-447) exam is ideal for people who have a strong foundation and expertise in selling or implementing Oracle GoldenGate 12c solutions. This certification exam covers topics such as Oracle GoldenGate 12c Architecture, Oracle GoldenGate 12c Parameters, and Oracle GoldenGate 12c Mapping and Transformation Overview, among others. Up-to-date training and field experience are recommended. The Oracle GoldenGate 12c Implementation Specialist certification recognizes OPN members as OPN Certified Specialists. This certification differentiates OPN members in the marketplace by giving them a competitive edge through proven expertise, and it helps the OPN member's partner organization qualify for the Oracle GoldenGate 12c specialization.

* Get More Detail About Oracle 1Z0-447 Certification: https://oracle-exam-guide.blogspot.com/2019/05/how-to-score-best-in-1z0-447.html

Oracle 1Z0-447 Certification Exam Topics

Oracle GoldenGate (OGG) Overview
* Describe OGG functional overview and common topologies
* Describe OGG Veridata and Management Pack functionality
* Describe the difference between real-time data integration replication and Data Manipulation Language (DML) replication

Install and Configure OGG
* Download and install OGG, and differentiate between the various installers (zip, OUI, tar)
* Synchronize source and target databases with the Initial Load
* Prepare the database for OGG CDC and review databases with the OGG schema check script
* Configure OGG Replication component parameter files
* Configure the OGG Command Interface to generate OGG processes
* Describe how to identify and resolve issues in heterogeneous replication, and offer appropriate solutions
* Configure OGG utilities

Mapping and Transformation Overview
* Implement use cases for transformation functions
* Implement macros

Managing and Monitoring Oracle GoldenGate
* Manage OGG command information security
* Implement and troubleshoot OGG Monitoring
* Explain the configuration and management of the Enterprise Manager 12c plug-in
* Implement and troubleshoot OGG Veridata

Architecture Overview
* Describe OGG components
* Create both forms of Capture processes for an Oracle database
* Create the three forms of Replicat processes
* Explain the difference between an Extract and a Pump, and between local and remote trails
* Configure OGG's process recovery mechanism

Parameters
* Describe and compare GLOBALS versus MANAGER parameters
* Create solutions using component parameters for replication requirements
* Install OGG parameters
* Explain and identify parameters specific to non-Oracle databases

Configuration Options
* Describe OGG configuration options (Data Definition Language (DDL), compression, and encryption options)
* Configure OGG event actions based on use cases
* Troubleshoot conflict detection and resolution
* Configure Integrated Capture, Replicat, and deployment options

Sign up for the Oracle 1Z0-447 Certification exam
Register for the Oracle 1Z0-447 Certification exam with Pearson VUE, and pay for the exam either with a voucher purchased from Oracle University or with a credit card during exam registration. To learn more, see the Oracle GoldenGate certification webpage.
#WDILTW – Creating examples can be hard
This week I was evaluating AWS QLDB, specifically the verifiable history of changes, to determine how to simplify present processes that perform auditing via CDC. This is not the first time I have looked at QLDB, so there was nothing that new to learn. What I found was that creating a workable solution with an existing application is hard. Even harder is creating an example to publish in this blog (and that is the purpose of this post).

First, some background. Using MySQL as the source of information, how can you leverage QLDB? It's easy to stream data from MySQL Aurora, and it's easy to stream data from QLDB, but it is not that easy to place real-time data into QLDB. AWS DMS is a good way to move data from a source to a target; previously my work has included MySQL to MySQL, MySQL to Redshift, and MySQL to Kinesis, however there is no QLDB target. Turning the problem upside down, using QLDB as the source of information and streaming to MySQL for compatibility, seemed a way forward.

After setting up the QLDB Ledger and an example table, it was time to populate it with existing data. The documented reference example looked very JSON compatible. Side bar: it is actually Amazon Ion, a superset of JSON.

INSERT INTO Person

Now, MySQL offers the X Protocol. This is something that lefred has evangelized for many years, which I have seen presented many times, and finally I had a chance to use it. The MySQL Shell JSON output looked ideal.

{
  "ID": 1523,
  "Name": "Wien",
  "CountryCode": "AUT",
  "District": "Wien",
  "Info": {
    "Population": 1608144
  }
}
{
  "ID": 1524,
  "Name": "Graz",
  "CountryCode": "AUT",
  "District": "Steiermark",
  "Info": {
    "Population": 240967
  }
}

And now, onto some of the things I learned this week. Using AWS RDS Aurora MySQL is the first stumbling block: X Protocol is not supported. As this was an example, keep it simple: mysqldump some reference data, load it into a MySQL 8 instance, and extract it into JSON, so as to potentially emulate a pipeline. Here are my experiences of trying to refactor this into a demo to write up.

Launch a MySQL Docker container as per my standard notes. Harmless, right?

MYSQL_ROOT_PASSWORD="$(date | md5sum | cut -c1-20)#"
echo $MYSQL_ROOT_PASSWORD
docker run --name=qldb-mysql -p3306:3306 -v mysql-volume:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD -d mysql/mysql-server:latest
docker logs qldb-mysql
docker exec -it qldb-mysql /bin/bash

As it's a quick demo, I shortcut credentials to make using the mysql client easier. NOTE: as I always generate a new password for each container, it's included here.

# echo "[mysql]
user=root
password='ab6ea7b0436cbc0c0d49#'" > .my.cnf
# mysql
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)

What the? Did I make a mistake? I test manually and check.

# mysql -u root -p
# cat .my.cnf

Nothing wrong there. Next check:

# pwd
/
bash-4.2# grep root /etc/passwd
root:x:0:0:root:/root:/bin/bash
operator:x:11:0:operator:/root:/sbin/nologin

And there is the first Dockerism. I don't live in Docker, so these 101 learnings would already be known to those who do. First, I really think using "root" by default is a horrible idea. And when you shell in, you are not dropped into the home directory? Solved, we move on.

# mv /.my.cnf /root/.my.cnf

Mock an example as quickly as I can think.
# mysql
mysql> create schema if not exists demo;
Query OK, 1 row affected (0.00 sec)
mysql> use demo;
Database changed
mysql> create table sample(id int unsigned not null auto_increment, name varchar(30) not null, location varchar(30) not null, domain varchar(50) null, primary key(id));
Query OK, 0 rows affected (0.03 sec)
mysql> show create table sample;
mysql> insert into sample values (null,'Demo Row','USA',null), (null,'Row 2','AUS','news.com.au'), (null,'Kiwi','NZ', null);
Query OK, 3 rows affected (0.00 sec)
Records: 3 Duplicates: 0 Warnings: 0
mysql> select * from sample;
+----+----------+----------+-------------+
| id | name     | location | domain      |
+----+----------+----------+-------------+
|  1 | Demo Row | USA      | NULL        |
|  2 | Row 2    | AUS      | news.com.au |
|  3 | Kiwi     | NZ       | NULL        |
+----+----------+----------+-------------+
3 rows in set (0.00 sec)

Cool, now to look at it in JavaScript using MySQL Shell. Hurdle 2.

# mysqlsh
MySQL Shell 8.0.22
Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
MySQL JS > var session=mysqlx.getSession('root:ab6ea7b0436cbc0c0d49#@localhost')
mysqlx.getSession: Argument #1: Invalid URI: Illegal character [#] found at position 25 (ArgumentError)

What the? It doesn't like the password format. I'm not a JavaScript person, and this is an example for blogging, which is not what was actually set up, so do it the right way: create a user.

# mysql
mysql> create user demo@localhost identified by 'qldb';
Query OK, 0 rows affected (0.01 sec)
mysql> grant ALL ON sample.* to demo@localhost;
Query OK, 0 rows affected, 1 warning (0.01 sec)
mysql> SHOW GRANTS FOR demo@localhost;
+----------------------------------------------------------+
| Grants for demo@localhost                                |
+----------------------------------------------------------+
| GRANT USAGE ON *.* TO `demo`@`localhost`                 |
| GRANT ALL PRIVILEGES ON `sample`.* TO `demo`@`localhost` |
+----------------------------------------------------------+
2 rows in set (0.00 sec)

Back into the MySQL Shell, and hurdle 3.

MySQL JS > var session=mysqlx.getSession('demo:qldb@localhost')
mysqlx.getSession: Access denied for user 'demo'@'127.0.0.1' (using password: YES) (MySQL Error 1045)

Did I create the creds wrong? Verify. No, my password is correct.

# mysql -udemo -pqldb -e "SELECT NOW()"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------------+
| NOW()               |
+---------------------+
| 2021-03-06 23:15:26 |
+---------------------+

I don't have time to debug this. User take 2.
mysql> drop user demo@localhost;
Query OK, 0 rows affected (0.00 sec)
mysql> create user demo@'%' identified by 'qldb';
Query OK, 0 rows affected (0.01 sec)
mysql> grant all on demo.* to demo@'%'
    -> ;
Query OK, 0 rows affected (0.00 sec)
mysql> show grants;
+--
| Grants for root@localhost |
+---
| GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, SHUTDOWN, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `root`@`localhost` WITH GRANT OPTION |
| GRANT APPLICATION_PASSWORD_ADMIN,AUDIT_ADMIN,BACKUP_ADMIN,BINLOG_ADMIN,BINLOG_ENCRYPTION_ADMIN,CLONE_ADMIN,CONNECTION_ADMIN,ENCRYPTION_KEY_ADMIN,FLUSH_OPTIMIZER_COSTS,FLUSH_STATUS,FLUSH_TABLES,FLUSH_USER_RESOURCES,GROUP_REPLICATION_ADMIN,INNODB_REDO_LOG_ARCHIVE,INNODB_REDO_LOG_ENABLE,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,REPLICATION_SLAVE_ADMIN,RESOURCE_GROUP_ADMIN,RESOURCE_GROUP_USER,ROLE_ADMIN,SERVICE_CONNECTION_ADMIN,SESSION_VARIABLES_ADMIN,SET_USER_ID,SHOW_ROUTINE,SYSTEM_USER,SYSTEM_VARIABLES_ADMIN,TABLE_ENCRYPTION_ADMIN,XA_RECOVER_ADMIN ON *.* TO `root`@`localhost` WITH GRANT OPTION |
| GRANT PROXY ON ''@'' TO 'root'@'localhost' WITH GRANT OPTION |
+---
3 rows in set (0.00 sec)
mysql> show grants for demo@'%';
+--------------------------------------------------+
| Grants for demo@%                                |
+--------------------------------------------------+
| GRANT USAGE ON *.* TO `demo`@`%`                 |
| GRANT ALL PRIVILEGES ON `demo`.* TO `demo`@`%`   |
+--------------------------------------------------+
2 rows in set (0.00 sec)

Right, initially I showed the grants of the root user, not the new user. Note to self: I should check out the MySQL 8 improved grants. I wonder how RDS MySQL 8 handles these, and how Aurora MySQL 8 will (when it ever drops, but that's another story). Third try is a charm; nice to also see queries with 0.0000-second execution granularity.

MySQL JS > var session=mysqlx.getSession('demo:qldb@localhost')
MySQL JS > var sql='SELECT * FROM demo.sample'
MySQL JS > session.sql(sql)
+----+----------+----------+-------------+
| id | name     | location | domain      |
+----+----------+----------+-------------+
|  1 | Demo Row | USA      | NULL        |
|  2 | Row 2    | AUS      | news.com.au |
|  3 | Kiwi     | NZ       | NULL        |
+----+----------+----------+-------------+
3 rows in set (0.0006 sec)

Get that now in JSON output. NOTE: There are 3 different JSON formats; this one matched what I needed.

bash-4.2# mysqlsh
MySQL Shell 8.0.22
Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > var session=mysqlx.getSession('demo:qldb@localhost')
MySQL JS > var sql='SELECT * FROM demo.sample'
MySQL JS > shell.options.set('resultFormat','json/array')
MySQL JS > session.sql(sql)
[
{"id":1,"name":"Demo Row","location":"USA","domain":null},
{"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
{"id":3,"name":"Kiwi","location":"NZ","domain":null}
]
3 rows in set (0.0006 sec)

Ok, that works in the interactive interface; I need it scripted.

# vi
bash: vi: command not found
# yum install vi
Loaded plugins: ovl
http://repo.mysql.com/yum/mysql-connectors-community/el/7/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden
Trying other mirror.
...
And another downer of Docker containers: no other tools, nor easy ways to install them. Again, I want to focus on the actual example and not all this preamble, so:

# echo "var session=mysqlx.getSession('demo:qldb@localhost')
var sql='SELECT * FROM demo.sample'
shell.options.set('resultFormat','json/array')
session.sql(sql)" > dump.js
# mysqlsh

What the? Hurdle 4. Did I typo this as well? I check the file, cut/paste it, and get what I expect.

# cat dump.js
var session=mysqlx.getSession('demo:qldb@localhost')
var sql='SELECT * FROM demo.sample'
shell.options.set('resultFormat','json/array')
session.sql(sql)
# mysqlsh
MySQL Shell 8.0.22
Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.
MySQL JS > var session=mysqlx.getSession('demo:qldb@localhost')
MySQL JS > var sql='SELECT * FROM demo.sample'
MySQL JS > shell.options.set('resultFormat','json/array')
MySQL JS > session.sql(sql)
[
{"id":1,"name":"Demo Row","location":"USA","domain":null},
{"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
{"id":3,"name":"Kiwi","location":"NZ","domain":null}
]
3 rows in set (0.0022 sec)

This is getting crazy.

# echo '[
> {"id":1,"name":"Demo Row","location":"USA","domain":null},
> {"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
> {"id":3,"name":"Kiwi","location":"NZ","domain":null}
> ]' > sample.json
bash-4.2# jq . sample.json
bash: jq: command not found

Oh, the Docker!!!! Switching back to my EC2 instance now.

$ echo '[
> {"id":1,"name":"Demo Row","location":"USA","domain":null},
> {"id":2,"name":"Row 2","location":"AUS","domain":"news.com.au"},
> {"id":3,"name":"Kiwi","location":"NZ","domain":null}
> ]' > sample.json
$ jq . sample.json
[
  {
    "id": 1,
    "name": "Demo Row",
    "location": "USA",
    "domain": null
  },
  {
    "id": 2,
    "name": "Row 2",
    "location": "AUS",
    "domain": "news.com.au"
  },
  {
    "id": 3,
    "name": "Kiwi",
    "location": "NZ",
    "domain": null
  }
]

I am now way over the time I would like to spend on this weekly post, it's getting way too long, and I'm nowhere near showing what I actually want. Still we trek on. Boy, this stock EC2 image uses AWS CLI version 1; I'm sure we need v2, and well, the command does not work!!!!

$ aws qldb list-ledgers
ERROR:
$ aws --version
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
$ export PATH=/usr/local/bin:$PATH
$ aws --version

Can I finally get a ledger now?

$ aws qldb create-ledger --name demo --tags JIRA=DEMO-5826,Owner=RonaldBradford --permissions-mode ALLOW_ALL --no-deletion-protection
{
    "Name": "demo",
    "Arn": "arn:aws:qldb:us-east-1:999:ledger/demo",
    "State": "CREATING",
    "CreationDateTime": "2021-03-06T22:46:41.760000+00:00",
    "DeletionProtection": false
}
$ aws qldb list-ledgers
{
    "Ledgers": [
        {
            "Name": "xx",
            "State": "ACTIVE",
            "CreationDateTime": "2021-03-05T20:12:44.611000+00:00"
        },
        {
            "Name": "demo",
            "State": "ACTIVE",
            "CreationDateTime": "2021-03-06T22:46:41.760000+00:00"
        }
    ]
}
$ aws qldb describe-ledger --name demo
{
    "Name": "demo",
    "Arn": "arn:aws:qldb:us-east-1:999:ledger/demo",
    "State": "ACTIVE",
    "CreationDateTime": "2021-03-06T22:46:41.760000+00:00",
    "DeletionProtection": false
}

Oh, the Python 2, and the lack of user packaging; more crud of getting to an example.
$ pip install pyqldb==3.1.0
ERROR
$ echo "alias python=python3
alias pip=pip3" >> ~/.bash_profile
$ source ~/.bash_profile
$ pip --version
pip 9.0.3 from /usr/lib/python3.6/site-packages (python 3.6)
$ python --version
Python 3.6.8
$ pip install pyqldb==3.1.0
ERROR
$ sudo pip install pyqldb==3.1.0

Yeah! After all that, my example code works and data is inserted.

$ cat demo.py
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)

def create_table(transaction_executor, table):
    print("Creating table {}".format(table))
    transaction_executor.execute_statement("Create TABLE {}".format(table))

def create_index(transaction_executor, table, column):
    print("Creating index {}.{}".format(table, column))
    transaction_executor.execute_statement("CREATE INDEX ON {}({})".format(table,column))

def insert_record(transaction_executor, table, values):
    print("Inserting into {}".format(table))
    transaction_executor.execute_statement("INSERT INTO {} ?".format(table), values)

table="sample"
column="id"
qldb_driver.execute_lambda(lambda executor: create_table(executor, table))
qldb_driver.execute_lambda(lambda executor: create_index(executor, table, column))
record1 = { 'id': "1", 'name': "Demo Row", 'location': "USA", 'domain': "" }
qldb_driver.execute_lambda(lambda x: insert_record(x, table, record1))

$ python demo.py
Initializing the driver
Creating table sample
Creating index sample.id
Inserting into sample

One verifies in the AWS Console, but you cannot show that as text in this blog, so I went to find a simple client, and there is qldbshell. What the? I installed it and it complains about pyqldb.driver.pooled_qldb_driver. I literally used that in the last example.
$ pip3 install qldbshell
Collecting qldbshell
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/0f/f7/fe984d797e0882c5e141a4888709ae958eb8c48007a23e94000507439f83/qldbshell-1.2.0.tar.gz (68kB)
    100% |████████████████████████████████| 71kB 55.6MB/s
Requirement already satisfied: boto3>=1.9.237 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Collecting amazon.ion=0.5.0 (from qldbshell)
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/4e/b7/21b7a7577cc6864d1c93fd710701e4764af6cf0f7be36fae4f9673ae11fc/amazon.ion-0.5.0.tar.gz (178kB)
    100% |████████████████████████████████| 184kB 78.7MB/s
Requirement already satisfied: prompt_toolkit=3.0.5 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: ionhash~=1.1.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: s3transfer=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: jmespath=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: botocore=1.20.21 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion=0.5.0->qldbshell)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/site-packages (from prompt_toolkit=3.0.5->qldbshell)
Requirement already satisfied: python-dateutil=2.1 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Requirement already satisfied: urllib3=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Installing collected packages: amazon.ion, qldbshell
  Found existing installation: amazon.ion 0.7.0
    Uninstalling amazon.ion-0.7.0:
Exception:
Traceback (most recent call last):
  File "/usr/lib64/python3.6/shutil.py", line 550, in move
    os.rename(src, real_dst)
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth' -> '/tmp/pip-p8j4d45d-uninstall/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/pip/basecommand.py", line 215, in main
    status = self.run(options, args)
  File "/usr/lib/python3.6/site-packages/pip/commands/install.py", line 365, in run
    strip_file_prefix=options.strip_file_prefix,
  File "/usr/lib/python3.6/site-packages/pip/req/req_set.py", line 783, in install
    requirement.uninstall(auto_confirm=True)
  File "/usr/lib/python3.6/site-packages/pip/req/req_install.py", line 754, in uninstall
    paths_to_remove.remove(auto_confirm)
  File "/usr/lib/python3.6/site-packages/pip/req/req_uninstall.py", line 115, in remove
    renames(path, new_path)
  File "/usr/lib/python3.6/site-packages/pip/utils/__init__.py", line 267, in renames
    shutil.move(old, new)
  File "/usr/lib64/python3.6/shutil.py", line 565, in move
    os.unlink(src)
PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.6/site-packages/amazon.ion-0.7.0-py3.6-nspkg.pth'
[centos@ip-10-204-101-224] ~
$ sudo pip3 install qldbshell
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting qldbshell
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/0f/f7/fe984d797e0882c5e141a4888709ae958eb8c48007a23e94000507439f83/qldbshell-1.2.0.tar.gz (68kB)
    100% |████████████████████████████████| 71kB 49.8MB/s
Requirement already satisfied: boto3>=1.9.237 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Collecting amazon.ion=0.5.0 (from qldbshell)
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/4e/b7/21b7a7577cc6864d1c93fd710701e4764af6cf0f7be36fae4f9673ae11fc/amazon.ion-0.5.0.tar.gz (178kB)
    100% |████████████████████████████████| 184kB 27.7MB/s
Requirement already satisfied: prompt_toolkit=3.0.5 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: ionhash~=1.1.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: botocore=1.20.21 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: jmespath=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: s3transfer=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion=0.5.0->qldbshell)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/site-packages (from prompt_toolkit=3.0.5->qldbshell)
Requirement already satisfied: python-dateutil=2.1 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Requirement already satisfied: urllib3=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Installing collected packages: amazon.ion, qldbshell
  Found existing installation: amazon.ion 0.7.0
    Uninstalling amazon.ion-0.7.0:
      Successfully uninstalled amazon.ion-0.7.0
  Running setup.py install for amazon.ion ... done
  Running setup.py install for qldbshell ... done
Successfully installed amazon.ion-0.5.0 qldbshell-1.2.0
$ sudo pip3 install qldbshell
$ qldbshell
Traceback (most recent call last):
  File "/usr/local/bin/qldbshell", line 11, in <module>
    load_entry_point('qldbshell==1.2.0', 'console_scripts', 'qldbshell')()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
    return ep.load()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2318, in load
    return self.resolve()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2324, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.6/site-packages/qldbshell/__main__.py", line 25, in <module>
    from pyqldb.driver.pooled_qldb_driver import PooledQldbDriver
ModuleNotFoundError: No module named 'pyqldb.driver.pooled_qldb_driver'
$ pip list qldbshell
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
amazon.ion (0.5.0)
boto3 (1.17.21)
botocore (1.20.21)
ionhash (1.1.0)
jmespath (0.10.0)
pip (9.0.3)
prompt-toolkit (3.0.16)
pyqldb (3.1.0)
python-dateutil (2.8.1)
qldbshell (1.2.0)
s3transfer (0.3.4)
setuptools (39.2.0)
six (1.15.0)
urllib3 (1.26.3)

So, uninstalled and re-installed, and voila, my data.

$ qldbshell
usage: qldbshell [-h] [-v] [-s QLDB_SESSION_ENDPOINT] [-r REGION] [-p PROFILE] -l LEDGER
qldbshell: error: the following arguments are required: -l/--ledger
$ qldbshell -l demo
Welcome to the Amazon QLDB Shell version 1.2.0
Use 'start' to initiate and interact with a transaction. 'commit' and 'abort' to commit or abort a transaction.
Use 'start; statement 1; statement 2; commit; start; statement 3; commit' to create transactions non-interactively.
Use 'help' for the help section.
All other commands will be interpreted as PartiQL statements until the 'exit' or 'quit' command is issued.
qldbshell >
qldbshell > SELECT * FROM sample;
INFO:
{ id: "1", name: "Demo Row", location: "USA", domain: "" }
INFO: (0.1718s)
qldbshell > \q
WARNING: Error while executing query: An error occurred (BadRequestException) when calling the SendCommand operation: Lexer Error: at line 1, column 1: invalid character at, '' [U+5c];
INFO: (0.1134s)
qldbshell > exit
Exiting QLDB Shell

Right, \q is a mysqlism of the mysql client; I need to rewire myself. Now I have a ledger, I have created an example table, mocked a row of data, and verified it. Now I can just load my sample data in the JSON I created earlier, right? Wrong!!!

$ cat load.py
import json
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)

def insert_record(transaction_executor, table, values):
    print("Inserting into {}".format(table))
    transaction_executor.execute_statement("INSERT INTO {} ?".format(table), values)

table="sample"
with open('sample.json') as f:
    data=json.load(f)
qldb_driver.execute_lambda(lambda x: insert_record(x, table, data))

$ python load.py
Traceback (most recent call last):
  File "load.py", line 2, in <module>
    from pyqldb.config.retry_config import RetryConfig
ModuleNotFoundError: No module named 'pyqldb'
[centos@ip-10-204-101-224] ~

Oh sweet, I'd installed that, and used it, and re-installed it.

$ pip list | grep pyqldb
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
[centos@ip-10-204-101-224] ~
$ sudo pip3 install pyqldb
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting pyqldb
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/5c/b4/9790b1fad87d7df5b863cbf353689db145bd009d31d854d282b31e1c1781/pyqldb-3.1.0.tar.gz
Collecting amazon.ion=0.7.0 (from pyqldb)
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/7d/ac/fd1edee54cefa425c444b51ad00a20e5bc74263a3afbfd4c8743040f8f26/amazon.ion-0.7.0.tar.gz (211kB)
    100% |████████████████████████████████| 215kB 24.8MB/s
Requirement already satisfied: boto3=1.16.56 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: botocore=1.19.56 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: ionhash=1.1.0 in /usr/local/lib/python3.6/site-packages (from pyqldb)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion=0.7.0->pyqldb)
Requirement already satisfied: s3transfer=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3=1.16.56->pyqldb)
Requirement already satisfied: jmespath=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3=1.16.56->pyqldb)
Requirement already satisfied: python-dateutil=2.1 in /usr/local/lib/python3.6/site-packages (from botocore=1.19.56->pyqldb)
Requirement already satisfied: urllib3=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore=1.19.56->pyqldb)
Installing collected packages: amazon.ion, pyqldb
  Found existing installation: amazon.ion 0.5.0
    Uninstalling amazon.ion-0.5.0:
      Successfully uninstalled amazon.ion-0.5.0
  Running setup.py install for amazon.ion ... done
  Running setup.py install for pyqldb ... done
Successfully installed amazon.ion-0.7.0 pyqldb-3.1.0

Load one more time.

$ cat load.py
import json
from pyqldb.config.retry_config import RetryConfig
from pyqldb.driver.qldb_driver import QldbDriver

# Configure retry limit to 3
retry_config = RetryConfig(retry_limit=3)

# Initialize the driver
print("Initializing the driver")
qldb_driver = QldbDriver("demo", retry_config=retry_config)

def insert_record(transaction_executor, table, values):
    print("Inserting into {}".format(table))
    transaction_executor.execute_statement("INSERT INTO {} ?".format(table), values)

table="sample"
with open('sample.json') as f:
    data=json.load(f)
qldb_driver.execute_lambda(lambda x: insert_record(x, table, data))

$ python load.py
Initializing the driver
Inserting into sample

And done, I've got my JSON-extracted MySQL 8 data in QLDB. I go to vet it in the client, and boy, I didn't expect yet another package screw-up. Clearly, these 2 AWS Python packages are incompatible. That's a venv need, but I'm now at double my desired time to show this.
$ qldbshell -l demo
Traceback (most recent call last):
  File "/usr/local/bin/qldbshell", line 11, in <module>
    load_entry_point('qldbshell==1.2.0', 'console_scripts', 'qldbshell')()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 476, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2700, in load_entry_point
    return ep.load()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2318, in load
    return self.resolve()
  File "/usr/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2324, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/usr/local/lib/python3.6/site-packages/qldbshell/__main__.py", line 25, in <module>
    from pyqldb.driver.pooled_qldb_driver import PooledQldbDriver
ModuleNotFoundError: No module named 'pyqldb.driver.pooled_qldb_driver'
[centos@ip-10-204-101-224] ~
$ pip list | grep qldbshell
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
qldbshell (1.2.0)
$ sudo pip uninstall qldbshell pyqldb
$ sudo pip install qldbshell
WARNING: Running pip install with root privileges is generally not a good idea. Try `pip3 install --user` instead.
Collecting qldbshell
  Downloading https://artifactory.lifion.oneadp.com/artifactory/api/pypi/pypi/packages/packages/0f/f7/fe984d797e0882c5e141a4888709ae958eb8c48007a23e94000507439f83/qldbshell-1.2.0.tar.gz (68kB)
    100% |████████████████████████████████| 71kB 43.4MB/s
Requirement already satisfied: boto3>=1.9.237 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: amazon.ion=0.5.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: prompt_toolkit=3.0.5 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: ionhash~=1.1.0 in /usr/local/lib/python3.6/site-packages (from qldbshell)
Requirement already satisfied: s3transfer=0.3.0 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: botocore=1.20.21 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: jmespath=0.7.1 in /usr/local/lib/python3.6/site-packages (from boto3>=1.9.237->qldbshell)
Requirement already satisfied: six in /usr/local/lib/python3.6/site-packages (from amazon.ion=0.5.0->qldbshell)
Requirement already satisfied: wcwidth in /usr/local/lib/python3.6/site-packages (from prompt_toolkit=3.0.5->qldbshell)
Requirement already satisfied: python-dateutil=2.1 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Requirement already satisfied: urllib3=1.25.4 in /usr/local/lib/python3.6/site-packages (from botocore=1.20.21->boto3>=1.9.237->qldbshell)
Installing collected packages: qldbshell
  Running setup.py install for qldbshell ... done
Successfully installed qldbshell-1.2.0

Can I see my data now?

$ qldbshell -l demo
Welcome to the Amazon QLDB Shell version 1.2.0
Use 'start' to initiate and interact with a transaction. 'commit' and 'abort' to commit or abort a transaction.
Use 'start; statement 1; statement 2; commit; start; statement 3; commit' to create transactions non-interactively.
Use 'help' for the help section.
All other commands will be interpreted as PartiQL statements until the 'exit' or 'quit' command is issued.
qldbshell > select * from sample;
INFO:
{ id: 1, name: "Demo Row", location: "USA", domain: null },
{ id: 1, name: "Demo Row", location: "USA", domain: null },
{ id: "1", name: "Demo Row", location: "USA", domain: "" },
{ id: 3, name: "Kiwi", location: "NZ", domain: null },
{ id: 2, name: "Row 2", location: "AUS", domain: "news.com.au" },
{ id: 3, name: "Kiwi", location: "NZ", domain: null },
{ id: 2, name: "Row 2", location: "AUS", domain: "news.com.au" }
INFO: (0.0815s)

And yes, data! I see it's duplicated, so I must have, somewhere in between the 10 steps, run the load twice. This does highlight a known limitation of QLDB: no unique constraints. But wait, that data is not really correct; I don't want null. Going back to the JSON shows the MySQL Shell gives exactly that.

$ jq . sample.json
[
  {
    "id": 1,
    "name": "Demo Row",
    "location": "USA",
    "domain": null
  },
...

At some point I also got this load error, but by now I've given up documenting how to do something in order to demonstrate something.

NameError: name 'null' is not defined

One has to wrap the only nullable column with IFNULL(domain,'') AS domain and redo all those steps again. This is not going to be practical, having to wrap all the columns of a wider table in IFNULL. However, having exhausted all this time for what was supposed to be a few quiet weekend hours, my post is way too long, and I've learned "Creating examples can be hard".

http://ronaldbradford.com/blog/wdiltw-creating-examples-can-be-hard-2021-03-06/
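To make that last point concrete, here is a minimal hedged sketch of the extraction query with the nullable column wrapped, using the demo.sample table from earlier (only the IFNULL wrapping is new):

mysql> SELECT id, name, location, IFNULL(domain,'') AS domain FROM demo.sample;

Every nullable column in a wider table would need the same treatment before the JSON is fed to QLDB, which is exactly the impracticality noted above.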
[FREE] Practical SQL with Oracle Database 18c
What you will learn:
* Install Oracle 18c Database Server
* Unlock the sample HR schema
* Install Oracle SQL Developer
* Connect to Oracle using SQL Developer
* Basic database concepts

Course Description
SQL forms the cornerstone of all relational database operations. Taking full advantage of its power requires an in-depth understanding of the language. In this course, you learn to use the full potential of…

300+ TOP ORACLE Database Interview Questions and Answers
ORACLE Database Interview Questions for freshers and experienced :-
1. What Is Oracle?
Oracle is a company. Oracle is also a database server, which manages data in a very structured way. It allows users to store and retrieve related data in a multiuser environment so that many users can concurrently access the same data. All this is accomplished while delivering high performance. A database server also prevents unauthorized access and provides efficient solutions for failure recovery.

2. What Is an Oracle Database?
An Oracle database is a collection of data treated as a big unit in the database server.

3. What Is an Oracle Instance?
Every running Oracle database is associated with an Oracle instance. When a database is started on a database server (regardless of the type of computer), Oracle allocates a memory area called the System Global Area (SGA) and starts one or more Oracle processes. This combination of the SGA and the Oracle processes is called an Oracle instance. The memory and processes of an instance manage the associated database's data efficiently and serve the one or multiple users of the database.

4. What Is a Parameter File in Oracle?
A parameter file is a file that contains a list of initialization parameters and a value for each parameter. You specify initialization parameters in a parameter file that reflect your particular installation. Oracle supports the following two types of parameter files:
* Server Parameter Files - Binary version. Persistent.
* Initialization Parameter Files - Text version. Not persistent.

5. What Is a Server Parameter File in Oracle?
A server parameter file is a binary file that acts as a repository for initialization parameters. The server parameter file can reside on the machine where the Oracle database server executes. Initialization parameters stored in a server parameter file are persistent, in that any changes made to the parameters while an instance is running can persist across instance shutdown and startup.

6. What Is an Initialization Parameter File in Oracle?
An initialization parameter file is a text file that contains a list of initialization parameters. The file should be written in the client's default character set. Sample initialization parameter files are provided on the Oracle distribution medium for each operating system. A sample file is sufficient for initial use, but you will probably want to modify the file to tune the database for best performance. Any changes will take effect after you completely shut down and restart the instance.

7. What Is the System Global Area (SGA) in Oracle?
The System Global Area (SGA) is a memory area that contains data shared between all database users, such as the buffer cache and a shared pool of SQL statements. The SGA is allocated in memory when an Oracle database instance is started, and any change in the value will take effect at the next startup.

8. What Is the Program Global Area (PGA) in Oracle?
A Program Global Area (PGA) is a memory buffer that is allocated for each individual database session, and it contains session-specific information such as SQL statement data or buffers used for sorting. The value specifies the total memory allocated by all sessions, and changes will take effect as new sessions are started.
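As a companion to Q7 and Q8, a hedged sketch of inspecting both memory areas on a running instance using standard dynamic performance views (output omitted; available statistics vary by version):

SELECT * FROM V$SGA;
SELECT name, value FROM V$PGASTAT WHERE name = 'total PGA allocated';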
9. What Is a User Account in Oracle?
A user account is identified by a user name and defines the user's attributes, including the following:
* Password for database authentication
* Privileges and roles
* Default tablespace for database objects
* Default temporary tablespace for query processing work space

10. What Is the Relation of a User Account and a Schema in Oracle?
User accounts and schemas have a one-to-one relation. When you create a user, you are also implicitly creating a schema for that user. A schema is a logical container for the database objects (such as tables, views, triggers, and so on) that the user creates. The schema name is the same as the user name, and can be used to unambiguously refer to objects owned by the user.
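To make Q9 and Q10 concrete, a minimal hedged sketch (the user name, password, and tablespaces are illustrative): creating the user implicitly creates the schema of the same name.

CREATE USER demo IDENTIFIED BY demo_pwd
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp
  QUOTA UNLIMITED ON users;
GRANT CREATE SESSION, CREATE TABLE TO demo;
-- Objects created by DEMO live in the DEMO schema and can be
-- referenced unambiguously, e.g. DEMO.EMPLOYEES.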
ORACLE Database Interview Questions

11. What Is a User Role in Oracle?
A user role is a group of privileges. Privileges are assigned to users through user roles. You create new roles, grant privileges to the roles, and then grant roles to users.

12. What is a Database Schema in Oracle?
A schema is a collection of logical structures of data, or schema objects. A schema is owned by a database user and has the same name as that user. Each user owns a single schema. Schema objects can be created and manipulated with SQL and include tables, views, and other types of data objects.

13. What Is a Database Table in Oracle?
A database table is a basic unit of logical data storage in an Oracle database. Data is stored in rows and columns. You define a table with a table name, such as employees, and a set of columns. You give each column a column name, such as employee_id, last_name, and job_id; a datatype, such as VARCHAR2, DATE, or NUMBER; and a width. The width can be predetermined by the datatype, as in DATE. If columns are of the NUMBER datatype, define precision and scale instead of width. A row is a collection of column information corresponding to a single record.

14. What Is a Table Index in Oracle?
An index is an optional structure associated with a table that allows SQL statements to execute more quickly against the table. Just as the index in a manual helps you locate information faster than if there were no index, an Oracle Database index provides a faster access path to table data. You can use indexes without rewriting any queries. Your results are the same, but you see them more quickly.

15. What Is an Oracle Tablespace?
An Oracle tablespace is a big unit of logical storage in an Oracle database. It is managed and used by the Oracle server to store structured data objects, like tables and indexes.

16. What Is an Oracle Data File?
An Oracle data file is a big unit of physical storage in the OS file system. One or many Oracle data files are organized together to provide physical storage to a single Oracle tablespace.

17. What Is a Static Data Dictionary in Oracle?
Data dictionary tables are not directly accessible, but you can access information in them through data dictionary views. To list the data dictionary views available to you, query the view DICTIONARY. Many data dictionary tables have three corresponding views:
* An ALL_ view displays all the information accessible to the current user, including information from the current user's schema as well as information from objects in other schemas, if the current user has access to those objects by way of grants of privileges or roles.
* A DBA_ view displays all relevant information in the entire database. DBA_ views are intended only for administrators. They can be accessed only by users with the SELECT ANY TABLE privilege. This privilege is assigned to the DBA role when the system is initially installed.
* A USER_ view displays all the information from the schema of the current user. No special privileges are required to query these views.

18. What Is a Dynamic Performance View in Oracle?
Oracle contains a set of underlying views that are maintained by the database server and accessible to the database administrator user SYS. These views are called dynamic performance views because they are continuously updated while a database is open and in use, and their contents relate primarily to performance. Although these views appear to be regular database tables, they are not. These views provide data on internal disk structures and memory structures. You can select from these views, but you can never update or alter them.
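A short hedged sketch contrasting the three static view families from Q17 with a dynamic performance view from Q18 (all standard views; output omitted):

SELECT table_name FROM USER_TABLES;           -- my schema only
SELECT owner, table_name FROM ALL_TABLES;     -- everything I can access
SELECT owner, table_name FROM DBA_TABLES;     -- whole database (administrators)
SELECT sid, username, status FROM V$SESSION;  -- dynamic view, refreshed while the instance runs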
19. What Is a Recycle Bin in Oracle?
The recycle bin is a logical storage area that holds tables that have been dropped from the database, in case they were dropped in error. Tables in the recycle bin can be recovered back into the database by the Flashback Drop action. The Oracle database recycle bin serves the same purpose as the recycle bin on your Windows desktop. The recycle bin can be turned on or off with the recyclebin=on/off setting in your parameter file.

20. What Is SQL*Plus?
SQL*Plus is an interactive and batch query tool that is installed with every Oracle Database Server or Client installation. It has a command-line user interface, a Windows Graphical User Interface (GUI) and the iSQL*Plus web-based user interface.

21. What Is Transport Network Substrate (TNS) in Oracle?
TNS, Transport Network Substrate, is a foundation technology, built into the Oracle Net foundation layer, that works with any standard network transport protocol.

22. What Is Open Database Communication (ODBC) in Oracle?
ODBC, Open Database Communication, is a standard API (application program interface) developed by Microsoft for Windows applications to communicate with database management systems. Oracle offers ODBC drivers to allow Windows applications to connect to an Oracle server through ODBC.

23. What is Oracle Database 10g Express Edition?
Based on the Oracle Web site: Oracle Database 10g Express Edition (Oracle Database XE) is an entry-level, small-footprint database based on the Oracle Database 10g Release 2 code base that's free to develop, deploy, and distribute; fast to download; and simple to administer. Oracle Database XE is a great starter database for:
* Developers working on PHP, Java, .NET, and Open Source applications
* DBAs who need a free, starter database for training and deployment
* Independent Software Vendors (ISVs) and hardware vendors who want a starter database to distribute free of charge
* Educational institutions and students who need a free database for their curriculum

24. What Are the Limitations of Oracle Database 10g XE?
Oracle Database XE is free for runtime usage with the following limitations:
* Supports up to 4GB of user data (in addition to Oracle system data)
* Single instance only of Oracle Database XE on any server
* May be installed on a multiple-CPU server, but only executes on one processor in any server
* May be installed on a server with any amount of memory, but will only use up to 1GB RAM of available memory

25. What Operating Systems Are Supported by Oracle Database 10g XE?
Oracle Database 10g Express Edition is available for two types of operating systems:
* Linux x86 - Debian, Mandriva, Novell, Red Hat and Ubuntu
* Microsoft Windows

26. How To Download Oracle Database 10g XE?
If you want to download a copy of Oracle Database 10g Express Edition, visit http://www.oracle.com/technology/software/products/database/xe/. If you are using Windows systems, these downloads are available for you:
* Oracle Database 10g Express Edition (Western European) - Single-byte LATIN1 database for Western European language storage, with the Database Homepage user interface in English only.
* Oracle Database 10g Express Edition (Universal) - Multi-byte Unicode database for all language deployment, with the Database Homepage user interface available in the following languages: Brazilian Portuguese, Chinese (Simplified and Traditional), English, French, German, Italian, Japanese, Korean and Spanish.
* Oracle Database 10g Express Client
You need to download the universal edition, OracleXEUniv.exe (216,933,372 bytes), and the client package, OracleXEClient.exe (30,943,220 bytes).
27. How To Install Oracle Database 10g XE?
To install the 10g universal edition, double-click OracleXEUniv.exe; the install wizard starts and will guide you through the installation process. You should take notes about:
* The SYSTEM password you selected: atoztarget.
* Database server port: 1521.
* Database HTTP port: 8080.
* MS Transaction Server port: 2030.
* The directory where 10g XE is installed: \oraclexe
* Hard disk space taken: 1655MB.

28. How To Check Your Oracle Database 10g XE Installation?
If you want to check your fresh installation of 10g Express Edition without using any special client programs, you can use a Web browser with this address: http://localhost:8080/apex/. You will see the login page. Enter SYSTEM as the user name and the password (atoztarget) you selected during the installation to log into the server. Visit different areas on your 10g XE server home page to make sure your server is running OK. You can also get to your 10g XE server home page through the start menu: select All Programs, then Oracle Database 10g Express Edition, and then Go To Database Home Page.

29. How To Shutdown Your 10g XE Server?
If you want to shut down your 10g Express Edition server, go to the Services manager in the Control Panel. You will see a service called OracleServiceXE, which represents your 10g Express Edition server. Select OracleServiceXE, and use the right mouse click to stop this service. This will shut down your 10g Express Edition server. You can also shut down your 10g XE server through the start menu: select All Programs, then Oracle Database 10g Express Edition, and then Stop Database.

30. How To Start Your 10g XE Server?
Go to the Start menu, select All Programs, Oracle Database 10g Express Edition, and Start Database.

31. How Much Memory Is Your 10g XE Server Using?
Your 10g XE server uses about 180MB of memory even when there are no users on the server. The server memory usage is displayed on your server home page, if you log in as SYSTEM.

32. How To Start Your 10g XE Server from the Command Line?
You can start your 10g XE server from the command line:
* Open a command line window.
* Change directory to \oraclexe\app\oracle\product\10.2.0\server\BIN.
* Run StartDB.bat.
The batch file StartDB.bat contains:
net start OracleXETNSListener
net start OracleServiceXE
@oradim -startup -sid XE -starttype inst > nul 2>&1

33. How To Shutdown Your 10g XE Server from the Command Line?
You can shut down your 10g XE server from the command line:
* Open a command line window.
* Change directory to \oraclexe\app\oracle\product\10.2.0\server\BIN.
* Run StopDB.bat.
The batch file StopDB.bat contains:
net stop OracleServiceXE

34. How To Unlock the Sample User Account in Oracle?
Your 10g XE server comes with a sample database user account called HR, but this account is locked. You must unlock it before you can use it:
* Log into the server home page as SYSTEM.
* Click the Administration icon, and then click Database Users.
* Click the HR schema icon to display the user information for HR.
* Enter a new password (hr) for HR, and change the status to Unlocked.
* Click Alter User to save the changes.
Now the user account HR is ready to use.
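The unlock in Q34 can also be done from SQL*Plus when connected as SYSTEM; a hedged one-line equivalent of the GUI steps (the password hr is the example value from above):

ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;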
35. How To Change the System Global Area (SGA) in Oracle?
Your 10g XE server has a default setting for the System Global Area (SGA) of 140MB. The SGA size can be changed to a new value depending on how many concurrent sessions connect to your server. If you are running this server just for yourself to improve your DBA skills, you should change the SGA size to 32MB by:
* Log into the server home page as SYSTEM.
* Go to Administration, then Memory.
* Click Configure SGA.
* Enter the new memory size: 32
* Click Apply Changes to save the changes.
* Re-start your server.

36. How To Change the Program Global Area (PGA) in Oracle?
Your 10g XE server has a default setting for the Program Global Area (PGA) of 40MB. The PGA size can be changed to a new value depending on how much data a single session should be allocated. If you think your sessions will be short, with a small amount of data, you should change the PGA size to 16MB by:
* Log into the server home page as SYSTEM.
* Go to Administration, then Memory.
* Click Configure PGA.
* Enter the new memory size: 16
* Click Apply Changes to save the changes.
* Re-start your server.

37. What Happens If You Set the SGA Too Low in Oracle?
Let's say you made a mistake and changed the SGA to 16MB from the SYSTEM admin home page. When you run the batch file StartDB.bat, it will return a message saying the server started. However, if you try to connect to your server home page, http://localhost:8080/apex/, you will get no response. Why? Your server is running, but the default instance XE was not started. If you go to the Control Panel and Services, you will see that the service OracleServiceXE is listed but not in the running status.

38. What To Do If StartDB.bat Failed to Start the XE Instance?
If StartDB.bat failed to start the XE instance, you need to try to start the instance with other approaches to get detailed error messages on why the instance can not be started. One good approach to start the default instance is to use SQL*Plus. Here is how to use SQL*Plus to start the default instance in a command window:
>cd (OracleXE home directory)
>.\bin\startdb
>.\bin\sqlplus
Enter user-name: SYSTEM
Enter password: atoztarget
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
The first command, "cd", moves the current directory to the 10g XE home directory. The second command, ".\bin\startdb", makes sure the TNS listener is running. The third command, ".\bin\sqlplus", launches SQL*Plus. The error message "ORA-27101" tells you that there is a memory problem with the default instance, so you can not use the normal login process to the server without a good instance. See other tips on how to log into a server without any instance.

39. How To Login to the Server without an Instance?
If your default instance is in trouble, and you can not use the normal login process to reach the server, you can use a special login to log into the server without any instance. Here is how to use SQL*Plus to log in as a SYSDBA:
>cd (OracleXE home directory)
>.\bin\startdb
>.\bin\sqlplus
Enter user-name: SYSTEM/atoztarget AS SYSDBA
Connected to an idle instance
SQL> show instance
instance "local"
The trick is to put the user name, password and login options in a single string as the user name. "AS SYSDBA" tells the server to not start any instance, and connects the session to the idle instance. Logging in as SYSDBA is very useful for performing DBA tasks.
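Tying Q37–Q39 together, and previewing Q41–Q42: once connected AS SYSDBA to the idle instance, a hedged sketch of repairing a too-small SGA via a text parameter file (the path is illustrative):

SQL> CREATE PFILE='C:\temp\initXE.ora' FROM SPFILE;
-- edit C:\temp\initXE.ora and raise the value, e.g. sga_target=32M
SQL> STARTUP PFILE='C:\temp\initXE.ora'
SQL> CREATE SPFILE FROM PFILE='C:\temp\initXE.ora';
-- the rebuilt binary SPFile takes effect at the next restart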
40. How To Use the "startup" Command to Start the Default Instance?
If you are logged into the server as a SYSDBA, you can start the default instance with the "startup" command. Here is how to start the default instance in SQL*Plus in SYSDBA mode:
>.\bin\sqlplus
Enter user-name: SYSTEM/atoztarget AS SYSDBA
Connected to an idle instance
SQL> show instance
instance "local"
SQL> startup
ORA-00821: Specified value of sga_target 16M is too small, needs to be at least 20M
Now the server is telling you more details about the memory problem on your default instance: your SGA setting of 16MB is too small. It must be increased to at least 20MB.

41. Where Are the Settings Stored for Each Instance in Oracle?
Settings for each instance are stored in a file called the Server Parameter File (SPFile). Oracle supports two types of parameter files: Text type and Binary type. Parameter files should be located in the $ORACLE_HOME\database directory. A parameter file should be named like "init$SID.ora", where $SID is the instance name.

42. What To Do If the Binary SPFile Is Wrong for the Default Instance?
Let's say the SPFile for the default instance is a binary file, and some settings are wrong in the SPFile, like an SGA setting below 20MB. How do you change a setting in the binary file? This seems to be a hard task, because the binary SPFile is not allowed to be edited manually. It needs to be updated by the server with the instance started. But you can not start the instance, because the SPFile has a wrong setting. One way to solve the problem is to stop using the binary SPFile, and use a text version of the parameter file to start the instance. Here is an example of how to use the backup copy (text version) of the parameter file for the default instance to start the instance:
>.\bin\sqlplus
Enter user-name: SYSTEM/atoztarget AS SYSDBA
Connected to an idle instance

43. How To Check the Server Version in Oracle?
Oracle server version information is stored in a table called PRODUCT_COMPONENT_VERSION. You can use a simple SELECT statement to view the version information like this:
>.\bin\sqlplus
Enter user-name: SYSTEM/atoztarget AS SYSDBA
Connected to an idle instance
SQL> COL PRODUCT FORMAT A35
SQL> COL VERSION FORMAT A15
SQL> COL STATUS FORMAT A15
SQL> SELECT * FROM PRODUCT_COMPONENT_VERSION;
PRODUCT                             VERSION     STATUS
----------------------------------- ----------- ----------
NLSRTL                              10.2.0.1.0  Production
Oracle Database 10g Express Edition 10.2.0.1.0  Product
PL/SQL                              10.2.0.1.0  Production
TNS for 32-bit Windows:             10.2.0.1.0  Production

44. Explain What Is SQL*Plus?
SQL*Plus is an interactive and batch query tool that is installed with every Oracle Database Server or Client installation. It has a command-line user interface, a Windows Graphical User Interface (GUI) and the iSQL*Plus web-based user interface. SQL*Plus has its own commands and environment, and it provides access to the Oracle Database. It enables you to enter and execute SQL, PL/SQL, SQL*Plus and operating system commands to perform the following:
* Format, perform calculations on, store, and print from query results
* Examine table and object definitions
* Develop and run batch scripts
* Perform database administration
You can use SQL*Plus to generate reports interactively, to generate reports as batch processes, and to output the results to a text file, to the screen, or to an HTML file for browsing on the Internet. You can generate reports dynamically using the HTML output facility of SQL*Plus, or using the dynamic reporting capability of iSQL*Plus to run a script from a web page.

45. How To Start the Command-Line SQL*Plus?
If you have an Oracle server or client installed on your Windows system, you can start the command-line SQL*Plus in two ways:
45. How To Start the Command-Line SQL*Plus?
If you have an Oracle server or client installed on your Windows system, you can start the command-line SQL*Plus in two ways:
1. Click Start > All Programs > Oracle ... > Start SQL Command Line. The SQL*Plus command window will show up with a message like this:

SQL*Plus: Release 10.2.0.1.0 - Production on Tue ...
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL>

2. Click Start > Run..., enter "cmd" and click OK. A Windows command window will show up. You can then use Windows commands to start the command-line SQL*Plus as shown in the tutorial exercise below:

>cd c:\oraclexe\app\oracle\product\10.2.0\server
>.\bin\sqlplus /nolog
SQL*Plus: Release 10.2.0.1.0 - Production on Tue ...
Copyright (c) 1982, 2005, Oracle. All rights reserved.

46. How To Get Help at the SQL Prompt?
Once SQL*Plus is started, you will get a SQL prompt like this: SQL>. This is where you can enter commands for SQL*Plus to run. To get help information at the SQL prompt, you can use the HELP command as shown in the following tutorial example:

SQL> HELP INDEX
Enter Help for help.
@             COPY         PAUSE      SHUTDOWN
@@            DEFINE       PRINT      SPOOL
/             DEL          PROMPT     SQLPLUS
ACCEPT        DESCRIBE     QUIT       START
APPEND        DISCONNECT   RECOVER    STARTUP
ARCHIVE LOG   EDIT         REMARK     STORE
ATTRIBUTE     EXECUTE      REPFOOTER  TIMING
BREAK         EXIT         REPHEADER  TTITLE
...
COMPUTE       LIST         SET        XQUERY
CONNECT       PASSWORD     SHOW

SQL> HELP CONNECT
CONNECT
-------

47. What Information Is Needed to Connect SQL*Plus to an Oracle Server?
If you want to connect your SQL*Plus session to an Oracle server, you need to know the following information about this server:
* The network hostname, or IP address, of the Oracle server.
* The network port number where the Oracle server is listening for incoming connections.
* The name of the target database instance managed by the Oracle server.
* The name of your user account predefined in the target database instance.
* The password of your user account predefined in the target database instance.

48. What Is a Connect Identifier?
A "connect identifier" is an identification string for a single set of connection information to a specific target database instance on a specific Oracle server. Connect identifiers are defined and stored in a file called tnsnames.ora located in the $ORACLE_HOME/network/admin/ directory. Here is one example of a "connect identifier" definition:

ggl_XE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)
      (HOST = www.atoztarget.com) (PORT = 1521)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = XE)
    )
  )

The above definition gives the connect identifier "ggl_XE" the following connection information:
* The network hostname: www.atoztarget.com.
* The network port number: 1521.
* The name of the target database instance: XE.

49. How To Connect a SQL*Plus Session to an Oracle Server?
In order to connect a SQL*Plus session to an Oracle server, you need to:
1. Obtain the connection information from the Oracle server DBA.
2. Define a new "connect identifier" called "ggl_XE" in your tnsnames.ora file with the given connection information.
3. Run the CONNECT command in SQL*Plus as shown in the tutorial exercise below:

>cd c:\oraclexe\app\oracle\product\10.2.0\server
>.\bin\sqlplus /nolog
SQL*Plus: Release 10.2.0.1.0 - Production on Tue ...
Copyright (c) 1982, 2005, Oracle. All rights reserved.
SQL> CONNECT ggl/retneclgg@ggl_XE;
Connected.
SQL> SELECT SYSDATE FROM DUAL;
SYSDATE
---------
05-MAR-06
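If you would rather not maintain a tnsnames.ora entry, SQL*Plus 10g and later also accept the EZConnect syntax, where the connection information is spelled out directly in the connect string. A minimal sketch, reusing the sample host, port and service name from the connect identifier above:

>.\bin\sqlplus /nolog
SQL> CONNECT ggl/retneclgg@//www.atoztarget.com:1521/XE
Connected.

This bypasses the connect identifier lookup entirely, which can also be handy when diagnosing tnsnames.ora problems like the ones described in the next question.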
50. What Happens If You Use a Wrong Connect Identifier?
Of course, you will get an error if you use a wrong connect identifier. Here is an example of how SQL*Plus reacts to a wrong connect identifier:

SQL> CONNECT ggl/retneclgg@WRONG;
ERROR:
ORA-12154: TNS:could not resolve the connect identifier specified
Warning: You are no longer connected to ORACLE.

What you need to do in this case:
* Check the CONNECT command to make sure that the connect identifier is entered correctly.
* Check the tnsnames.ora file to make sure that the connect identifier is defined correctly.
* Check the tnsnames.ora file to make sure that there are no multiple definitions of the same connect identifier.
* Check your file system to see if you have multiple copies of tnsnames.ora in different Oracle home directories, because you installed multiple versions of Oracle. If you do have multiple copies, make sure your SQL*Plus session is picking up the correct copy of tnsnames.ora.

51. What To Do If the DBA Lost the SYSTEM Password?
If the DBA lost the password of the SYSTEM user account, he/she can go to the Oracle server machine, and run SQL*Plus on the server locally with the operating system authentication method to gain access to the database. The tutorial exercise below shows you how:

(Terminal server to the Oracle server machine)
(Start SQL*Plus)
SQL> CONNECT / AS SYSDBA
Connected.
SQL> ALTER USER SYSTEM IDENTIFIED BY ssap_lgg;
User altered.

Notice that the (/) in the CONNECT command tells SQL*Plus to use the current user on the local operating system as the connection authentication method.

52. What Types of Commands Can Be Executed in SQL*Plus?
There are 4 types of commands you can run at the SQL*Plus command line prompt:
1. SQL commands - Standard SQL statements to be executed on the target database on the Oracle server. For example: "SELECT * FROM ggl_faq;" is a SQL command.
2. PL/SQL commands - PL/SQL statements to be executed by the Oracle server. For example: "EXECUTE DBMS_OUTPUT.PUT_LINE('Welcome to www.atoztarget.com')" runs a PL/SQL command.
3. SQL*Plus commands - Commands to be executed by the local SQL*Plus program itself. For example: "SET NULL 'NULL'" is a SQL*Plus command.
4. OS commands - Commands to be executed by the local operating system. For example: "HOST dir" runs an operating system command on the local machine.

53. How To Run SQL Commands in SQL*Plus?
If you want to run a SQL command in SQL*Plus, you need to enter the SQL command in one or more lines, terminated with (;). The tutorial exercise below shows a good example:

SQL> SELECT 'Welcome!' FROM DUAL;
'WELCOME
--------
Welcome!

SQL> SELECT 'Welcome to atoztarget.com tutorials!'
  2  FROM DUAL
  3  ;
'WELCOMETOATOZTARGET.COMTUTORIALS!'
-----------------------------------
Welcome to atoztarget.com tutorials!

54. How To Run PL/SQL Statements in SQL*Plus?
If you want to run a single PL/SQL statement in SQL*Plus, you need to use the EXECUTE command as shown in the following tutorial example:

SQL> SET SERVEROUTPUT ON
SQL> EXECUTE DBMS_OUTPUT.PUT_LINE('Welcome to atoztarget!')
Welcome to atoztarget!
PL/SQL procedure successfully completed.
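EXECUTE only handles a single statement. For a multi-line PL/SQL block, you can type the block directly at the SQL prompt and end it with a slash (/) on a line by itself; a small sketch:

SQL> SET SERVEROUTPUT ON
SQL> BEGIN
  2    DBMS_OUTPUT.PUT_LINE('Welcome to atoztarget!');
  3    DBMS_OUTPUT.PUT_LINE('This block has two statements.');
  4  END;
  5  /
Welcome to atoztarget!
This block has two statements.
PL/SQL procedure successfully completed.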
55. How To Change SQL*Plus System Settings?
The SQL*Plus environment is controlled by a big list of SQL*Plus system settings. You can change them by using the SET command as shown in the following list:
* SET AUTOCOMMIT OFF - Turns off the auto-commit feature.
* SET FEEDBACK OFF - Stops displaying the "27 rows selected." message at the end of the query output.
* SET HEADING OFF - Stops displaying the header line of the query output.
* SET LINESIZE 256 - Sets the number of characters per line when displaying the query output.
* SET NEWPAGE 2 - Sets 2 blank lines to be displayed on each page of the query output.
* SET NEWPAGE NONE - Sets for no blank lines to be displayed on each page of the query output.
* SET NULL 'null' - Asks SQL*Plus to display 'null' for columns that have null values in the query output.
* SET PAGESIZE 60 - Sets the number of lines per page when displaying the query output.
* SET TIMING ON - Asks SQL*Plus to display the command execution timing data.
* SET WRAP OFF - Turns off the wrapping feature when displaying query output.

56. How To Look at the Current SQL*Plus System Settings?
If you want to see the current values of SQL*Plus system settings, you can use the SHOW command as shown in the following tutorial exercise:

SQL> SHOW AUTOCOMMIT
autocommit OFF
SQL> SHOW HEADING
heading ON
SQL> SHOW LINESIZE
linesize 80
SQL> SHOW PAGESIZE
pagesize 14
SQL> SHOW FEEDBACK
FEEDBACK ON for 6 or more rows
SQL> SHOW TIMING
timing OFF
SQL> SHOW NULL
null ""
SQL> SHOW ALL
appinfo is OFF and set to "SQL*Plus"
arraysize 15
autocommit OFF
autoprint OFF
autorecovery OFF
autotrace OFF
blockterminator "." (hex 2e)
cmdsep OFF
colsep " "
compatibility version NATIVE
concat "." (hex 2e)
copycommit 0
COPYTYPECHECK is ON
define "&" (hex 26)
describe DEPTH 1 LINENUM OFF INDENT ON
echo OFF
...

57. What Are SQL*Plus Environment Variables?
Behaviors of SQL*Plus are also controlled by some environment variables predefined on the local operating system. Here are some commonly used SQL*Plus environment variables:
* ORACLE_HOME - The home directory where your Oracle client application is installed.
* PATH - A list of directories where SQL*Plus will search for executable or DLL files. PATH should include $ORACLE_HOME\bin.
* SQLPLUS - The directory where localization messages are stored. SQLPLUS should be set to $ORACLE_HOME\sqlplus\mesg.
* TNS_ADMIN - The directory where the connect identifier file, tnsnames.ora, is located. TNS_ADMIN should be set to $ORACLE_HOME/network/admin.

58. How To Generate Query Output in HTML Format?
If you want your query output to be generated in HTML format, you can use the "SET MARKUP HTML ON" command to turn on the HTML feature. The following tutorial exercise gives you a good example:

SQL> connect HR/retneclgg
SQL> SET MARKUP HTML ON
SQL> SELECT FIRST_NAME, LAST_NAME, HIRE_DATE
  2  FROM EMPLOYEES WHERE FIRST_NAME LIKE 'Joh%';

FIRST_NAME LAST_NAME HIRE_DATE
John       Seo       12-FEB-98
John       Russell   01-OCT-96

(The actual output is wrapped in HTML table tags; the table above shows how it renders in a browser.)

59. What Is Output Spooling in SQL*Plus?
Output spooling is a nice feature of the command-line SQL*Plus tool. If the spooling feature is turned on, SQL*Plus will send a carbon copy of everything on your screen to a specified local file. Output spooling is used mostly for quick dumps of data to local files. Here are the commands to turn on and off output spooling in SQL*Plus:
* SPOOL fileName - Turns on output spooling with the specified file.
* SPOOL OFF - Turns off output spooling and closes the spool file.
60. How To Save Query Output to a Local File?
Normally, when you run a SELECT statement in SQL*Plus, the output will be displayed on your screen. If you want the output to be saved to a local file, you can use the "SPOOL fileName" command to specify a local file and start the spooling feature. When you are done with your SELECT statement, you need to close the spool file with the "SPOOL OFF" command. The following tutorial exercise gives you a good example:

SQL> connect HR/retneclgg
SQL> SET HEADING OFF
SQL> SET FEEDBACK OFF
SQL> SET LINESIZE 1000
SQL> SPOOL \temp\employees.lst
SQL> SELECT * FROM EMPLOYEES;
......
SQL> SPOOL OFF

You should get all records in employees.lst with fixed length fields.

61. What Is the Input Buffer in SQL*Plus?
The input buffer is a nice feature of the command-line SQL*Plus tool. It allows you to revise a multiple-line command and re-run it with a couple of simple commands. By default, the input buffer is always turned on in SQL*Plus. The last SQL statement is always stored in the buffer. All you need is to remember the following commonly used commands:
* LIST - Displays the SQL statement (the last executed SQL statement) in the buffer.
* RUN - Runs the SQL statement in the buffer again. ";" is a quick command equivalent to RUN.
* CLEAR BUFFER - Removes the SQL statement in the buffer.
* INPUT line - Adds a new line into the buffer.
* APPEND text - Appends more text to the last line in the buffer.
* DEL - Deletes one line from the buffer.
* CHANGE /old/new - Replaces 'old' text with 'new' text in the buffer.

62. How To Revise and Re-Run the Last SQL Command?
If you executed a long SQL statement and found a mistake in it, and you don't want to enter that long statement again, you can use the input buffer commands to correct the last statement and re-run it. The following tutorial exercise gives you a good example:

SQL> connect HR/retneclgg
SQL> SELECT FIRST_NAME, LAST_NAME, HIRE_DATE
  2  FROM EMPLOYEE WHERE FIRST_NAME LIKE 'Joh%';
FROM EMPLOYEE WHERE FIRST_NAME LIKE 'Joh%'
*
ERROR at line 2:
ORA-00942: table or view does not exist

SQL> LIST
  1  SELECT FIRST_NAME, LAST_NAME, HIRE_DATE
  2* FROM EMPLOYEE WHERE FIRST_NAME LIKE 'Joh%'
SQL> CHANGE /EMPLOYEE/EMPLOYEES/
  2* FROM EMPLOYEES WHERE FIRST_NAME LIKE 'Joh%'
SQL> RUN
(Query output)
SQL> INPUT ORDER BY HIRE_DATE
SQL> LIST
  1  SELECT FIRST_NAME, LAST_NAME, HIRE_DATE
  2  FROM EMPLOYEES WHERE FIRST_NAME LIKE 'Joh%'
  3* ORDER BY HIRE_DATE
SQL> RUN
(Query output)
SQL> CLEAR BUFFER
buffer cleared
SQL> LIST
SP2-0223: No lines in SQL buffer.

63. How To Run SQL*Plus Commands That Are Stored in a Local File?
If you have a group of commands that you need to run repeatedly every day, you can save those commands in a file (called a SQL script file), and use the "@fileName" command to run them in SQL*Plus. If you want to try this, create a file called \temp\input.sql with:

SELECT 'Welcome to' FROM DUAL;
SELECT 'atoztarget.com!' FROM DUAL;

Then run the "@" command in SQL*Plus as:

SQL> connect HR/retneclgg
SQL> @\temp\input.sql
'WELCOMETO
----------
Welcome to
'ATOZTARGET.COM!'
-----------------
atoztarget.com!

64. How To Use SQL*Plus Built-in Timers?
If you don't have a stopwatch/timer and want to measure elapsed periods of time, you can use the SQL*Plus built-in timers with the following commands:
* TIMING - Displays the number of timers.
* TIMING START name - Starts a new timer, with or without a name.
* TIMING SHOW name - Shows the current time of the named or unnamed timer.
* TIMING STOP - Stops the named or unnamed timer.
The following tutorial exercise shows you a good example of using SQL*Plus built-in timers:

SQL> TIMING START timer_1
(some seconds later)
SQL> TIMING START timer_2
(some seconds later)
SQL> TIMING START timer_3
(some seconds later)
SQL> TIMING SHOW timer_2
timing for: timer_2
Elapsed: 00:00:19.43
(some seconds later)
SQL> TIMING STOP timer_2
timing for: timer_2
Elapsed: 00:00:36.32
SQL> TIMING
2 timing elements in use
65. What Is Oracle Server Autotrace in Oracle?
Autotrace is an Oracle server feature that generates two statement execution reports very useful for performance tuning:
* Statement execution path - Shows you the execution loop logic of a DML statement.
* Statement execution statistics - Shows you various execution statistics of a DML statement.
To turn on the autotrace feature, the Oracle server DBA needs to:
* Create a special table called PLAN_TABLE.
* Create a special security role called PLUSTRACE.
* Grant the PLUSTRACE role to your user account.

66. How To Set Up Autotrace for a User Account?
If an Oracle user wants to use the autotrace feature, you can use the tutorial below as an example to create the required table PLAN_TABLE and the required security role PLUSTRACE, and grant the role to that user:

SQL> CONNECT HR/retneclgg
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\RDBMS\ADMIN\UTLXPLAN.SQL
Table (HR.PLAN_TABLE) created.

SQL> CONNECT / AS SYSDBA
SQL> @C:\oraclexe\app\oracle\product\10.2.0\server\SQLPLUS\ADMIN\PLUSTRCE.SQL
SQL> drop role plustrace;
Role (PLUSTRACE) dropped.
SQL> create role plustrace;
Role (PLUSTRACE) created.
SQL> grant plustrace to dba with admin option;
Grant succeeded.
SQL> GRANT PLUSTRACE TO HR;
Grant succeeded.

Remember that the PLAN_TABLE table must be created under the user schema HR.

67. How To Get Execution Path Reports on Query Statements?
If your user account has autotrace configured by the DBA, you can use the "SET AUTOTRACE ON EXPLAIN" command to turn on execution path reports on query statements. The tutorial exercise below shows you a good example:

SQL> CONNECT HR/retneclgg
SQL> SET AUTOTRACE ON EXPLAIN
SQL> SELECT E.LAST_NAME, E.SALARY, J.JOB_TITLE
  2  FROM EMPLOYEES E, JOBS J
  3  WHERE E.JOB_ID=J.JOB_ID AND E.SALARY>12000;

LAST_NAME         SALARY     JOB_TITLE
----------------- ---------- -----------------------------
King              24000      President
Kochhar           17000      Administration Vice President
De Haan           17000      Administration Vice President
Russell           14000      Sales Manager
Partners          13500      Sales Manager
Hartstein         13000      Marketing Manager
6 rows selected.

68. How To Get Execution Statistics Reports on Query Statements?
If your user account has autotrace configured by the DBA, you can use the "SET AUTOTRACE ON STATISTICS" command to turn on execution statistics reports on query statements. The tutorial exercise below shows you a good example:

SQL> CONNECT HR/retneclgg
SQL> SET AUTOTRACE ON STATISTICS
SQL> SELECT E.LAST_NAME, E.SALARY, J.JOB_TITLE
  2  FROM EMPLOYEES E, JOBS J
  3  WHERE E.JOB_ID=J.JOB_ID AND E.SALARY>12000;

LAST_NAME         SALARY     JOB_TITLE
----------------- ---------- -----------------------------
King              24000      President
Kochhar           17000      Administration Vice President
De Haan           17000      Administration Vice President
Russell           14000      Sales Manager
Partners          13500      Sales Manager
Hartstein         13000      Marketing Manager
6 rows selected.

69. What Is SQL in Oracle?
SQL (Structured Query Language), originally developed as SEQUEL (Structured English Query Language), is a language for RDBMS (Relational Database Management Systems). SQL was developed by IBM Corporation.

70. How Many Categories of Data Types in Oracle?
Oracle supports the following categories of data types:
* Oracle Built-in Datatypes.
* ANSI, DB2, and SQL/DS Datatypes.
* User-Defined Types.
* Oracle-Supplied Types.
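To see which of these data types an existing table actually uses, you can query the data dictionary; a small sketch, assuming the HR schema from the earlier examples:

SQL> CONNECT HR/retneclgg
SQL> SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH
  2  FROM USER_TAB_COLUMNS
  3  WHERE TABLE_NAME = 'EMPLOYEES';

This lists every column of EMPLOYEES together with its declared data type, which is a handy cross-reference while reading through the type descriptions that follow.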
71. What Are the Oracle Built-in Data Types?
There are 20 Oracle built-in data types, divided into 6 groups:
* Character Datatypes - CHAR, NCHAR, NVARCHAR2, VARCHAR2
* Number Datatypes - NUMBER, BINARY_FLOAT, BINARY_DOUBLE
* Long and Raw Datatypes - LONG, LONG RAW, RAW
* Datetime Datatypes - DATE, TIMESTAMP, INTERVAL YEAR TO MONTH, INTERVAL DAY TO SECOND
* Large Object Datatypes - BLOB, CLOB, NCLOB, BFILE
* Row ID Datatypes - ROWID, UROWID

72. What Are the Differences between CHAR and NCHAR in Oracle?
Both CHAR and NCHAR are fixed-length character data types. But they have the following differences:
* CHAR's size is specified in bytes by default.
* NCHAR's size is specified in characters by default. A character could be 1 byte to 4 bytes long depending on the character set used.
* NCHAR stores characters in Unicode.

73. What Are the Differences between CHAR and VARCHAR2 in Oracle?
The main differences between CHAR and VARCHAR2 are:
* CHAR stores values in fixed lengths. Values are padded with space characters to match the specified length.
* VARCHAR2 stores values in variable lengths. Values are not padded with any characters.

74. What Are the Differences between NUMBER and BINARY_FLOAT in Oracle?
The main differences between NUMBER and BINARY_FLOAT in Oracle are:
* NUMBER stores values as fixed-point numbers using 1 to 22 bytes.
* BINARY_FLOAT stores values as single precision floating-point numbers.

75. What Are the Differences between DATE and TIMESTAMP in Oracle?
The main differences between DATE and TIMESTAMP in Oracle are:
* DATE stores values as century, year, month, date, hour, minute, and second.
* TIMESTAMP stores values as year, month, day, hour, minute, second, and fractional seconds.

76. What Are the Differences between INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND?
The main differences between INTERVAL YEAR TO MONTH and INTERVAL DAY TO SECOND are:
* INTERVAL YEAR TO MONTH stores values as time intervals at the month level.
* INTERVAL DAY TO SECOND stores values as time intervals at the fractional seconds level.

77. What Are the Differences between BLOB and CLOB in Oracle?
The main differences between BLOB and CLOB in Oracle are:
* BLOB stores values as LOB (Large OBject) in bit streams.
* CLOB stores values as LOB (Large OBject) in character streams.

78. What Are the ANSI Data Types Supported in Oracle?
The following ANSI data types are supported in Oracle:
* CHARACTER(n) / CHAR(n)
* CHARACTER VARYING(n) / CHAR VARYING(n)
* NATIONAL CHARACTER(n) / NATIONAL CHAR(n) / NCHAR(n)
* NATIONAL CHARACTER VARYING(n) / NATIONAL CHAR VARYING(n) / NCHAR VARYING(n)
* NUMERIC(p,s)
* DECIMAL(p,s)
* INTEGER / INT
* SMALLINT
* FLOAT
* DOUBLE PRECISION
* REAL

79. How To Write Text Literals in Oracle?
There are several ways to write text literals, as shown in the following samples:

SELECT 'atoztarget.com' FROM DUAL -- The most common format
atoztarget.com
SELECT 'It''s Sunday!' FROM DUAL -- Single quote escaped
It's Sunday!
SELECT N'Allo, C''est moi.' FROM DUAL -- National chars
Allo, C'est moi.
SELECT Q'/It's Sunday!/' FROM DUAL -- Your own delimiter
It's Sunday!

80. How To Write Numeric Literals in Oracle?
Numeric literals can be coded as shown in the following samples:

SELECT 255 FROM DUAL -- An integer
255
SELECT -6.34 FROM DUAL -- A regular number
-6.34
SELECT 2.14F FROM DUAL -- A single-precision floating point
2.14
SELECT -0.5D FROM DUAL -- A double-precision floating point
-0.5
81. How To Write Date and Time Literals in Oracle?
Date and time literals can be coded as shown in the following samples:

SELECT DATE '2002-10-03' FROM DUAL -- ANSI date format
03-OCT-02
SELECT TIMESTAMP '2007-01-31 09:26:50.124' FROM DUAL -- ANSI timestamp format
31-JAN-07 09.26.50.124000000 AM

82. How To Write Date and Time Interval Literals in Oracle?
Date and time interval literals can be coded as shown in the following samples:

SELECT DATE '2002-10-03' + INTERVAL '123-2' YEAR(3) TO MONTH FROM DUAL
-- 123 years and 2 months is added to 2002-10-03
03-DEC-25
SELECT DATE '2002-10-03' + INTERVAL '123' YEAR(3) FROM DUAL
-- 123 years is added to 2002-10-03
03-OCT-25
SELECT DATE '2002-10-03' + INTERVAL '299' MONTH(3) FROM DUAL
-- 299 months is added to 2002-10-03
03-SEP-27
SELECT TIMESTAMP '1997-01-31 09:26:50.124'
  + INTERVAL '4 5:12:10.222' DAY TO SECOND(3) FROM DUAL
04-FEB-97 02.39.00.346000000 PM
SELECT TIMESTAMP '1997-01-31 09:26:50.124'
  + INTERVAL '4 5:12' DAY TO MINUTE FROM DUAL
04-FEB-97 02.38.50.124000000 PM
SELECT TIMESTAMP '1997-01-31 09:26:50.124'
  + INTERVAL '400 5' DAY(3) TO HOUR FROM DUAL
07-MAR-98 02.26.50.124000000 PM
SELECT TIMESTAMP '1997-01-31 09:26:50.124'
  + INTERVAL '400' DAY(3) FROM DUAL
07-MAR-98 09.26.50.124000000 AM
SELECT TIMESTAMP '1997-01-31 09:26:50.124'
  + INTERVAL '11:12:10.2222222' HOUR TO SECOND(7) FROM DUAL
31-JAN-97 08.39.00.346222200 PM

83. How To Convert Numbers to Characters in Oracle?
You can convert numeric values to characters by using the TO_CHAR() function as shown in the following examples:

SELECT TO_CHAR(4123.4570) FROM DUAL
4123.457
SELECT TO_CHAR(4123.457, '$9,999,999.99') FROM DUAL
$4,123.46
SELECT TO_CHAR(-4123.457, '9999999.99EEEE') FROM DUAL
-4.12E+03

84. How To Convert Characters to Numbers in Oracle?
You can convert characters to numbers by using the TO_NUMBER() function as shown in the following examples:

SELECT TO_NUMBER('4123.4570') FROM DUAL
4123.457
SELECT TO_NUMBER(' $4,123.46','$9,999,999.99') FROM DUAL
4123.46
SELECT TO_NUMBER(' -4.12E+03') FROM DUAL
-4120

85. How To Convert Dates to Characters in Oracle?
You can convert dates to characters using the TO_CHAR() function as shown in the following examples:

SELECT TO_CHAR(SYSDATE, 'DD-MON-YYYY') FROM DUAL;
-- SYSDATE returns the current date
07-MAY-2006
SELECT TO_CHAR(SYSDATE, 'YYYY/MM/DD') FROM DUAL;
2006/05/07
SELECT TO_CHAR(SYSDATE, 'MONTH DD, YYYY') FROM DUAL;
MAY 07, 2006
SELECT TO_CHAR(SYSDATE, 'fmMONTH DD, YYYY') FROM DUAL;
May 7, 2006
SELECT TO_CHAR(SYSDATE, 'fmDAY, MONTH DD, YYYY') FROM DUAL;
SUNDAY, MAY 7, 2006

86. How To Convert Characters to Dates in Oracle?
You can convert characters to dates using the TO_DATE() function as shown in the following examples:

SELECT TO_DATE('07-MAY-2006', 'DD-MON-YYYY') FROM DUAL;
07-MAY-06
SELECT TO_DATE('2006/05/07 ', 'YYYY/MM/DD') FROM DUAL;
07-MAY-06
SELECT TO_DATE('MAY 07, 2006', 'MONTH DD, YYYY') FROM DUAL;
07-MAY-06
SELECT TO_DATE('May 7, 2006', 'fmMONTH DD, YYYY') FROM DUAL;
07-MAY-06
SELECT TO_DATE('SUNDAY, MAY 7, 2006', 'fmDAY, MONTH DD, YYYY') FROM DUAL;
07-MAY-06

87. How To Convert Times to Characters in Oracle?
You can convert times to characters using the TO_CHAR() function as shown in the following examples:

SELECT TO_CHAR(SYSDATE, 'HH:MI:SS') FROM DUAL;
04:49:49
SELECT TO_CHAR(SYSDATE, 'HH24:MI:SS.FF') FROM DUAL;
-- Error: SYSDATE has no fractional seconds
SELECT TO_CHAR(SYSTIMESTAMP, 'HH24:MI:SS.FF9') FROM DUAL;
16:52:57.847000000
SELECT TO_CHAR(SYSDATE, 'SSSSS') FROM DUAL;
-- Seconds past midnight
69520
88. How To Convert Characters to Times in Oracle?
You can convert characters to times using the TO_DATE() or TO_TIMESTAMP() function, and display the result with TO_CHAR(), as shown in the following examples:

SELECT TO_CHAR(TO_DATE('04:49:49', 'HH:MI:SS'),
  'DD-MON-YYYY HH24:MI:SS') FROM DUAL;
-- Default date is the first day of the current month
01-MAY-2006 04:49:49
SELECT TO_CHAR(TO_TIMESTAMP('16:52:57.847000000', 'HH24:MI:SS.FF9'),
  'DD-MON-YYYY HH24:MI:SS.FF9') FROM DUAL;
01-MAY-2006 16:52:57.847000000
SELECT TO_CHAR(TO_DATE('69520', 'SSSSS'),
  'DD-MON-YYYY HH24:MI:SS') FROM DUAL;
01-MAY-2006 19:18:40

89. What Is the NULL Value in Oracle?
NULL is a special value representing "no value" in all data types. NULL can be used in operations like other values, but most operations have special rules when NULL is involved. The tutorial exercise below shows you some examples:

SET NULL 'NULL'; -- Make sure NULL is displayed
SELECT NULL FROM DUAL;
N
-
NULL
SELECT NULL + NULL FROM DUAL;
NULL+NULL
----------
NULL
SELECT NULL + 7 FROM DUAL;
NULL+7
----------
NULL
SELECT NULL * 7 FROM DUAL;
NULL*7
----------
NULL
SELECT NULL || 'A' FROM DUAL;
N
-
A
SELECT NULL + SYSDATE FROM DUAL;
NULL+SYSD
---------
NULL

90. How To Use NULL as Conditions in Oracle?
If you want to compare values against NULL as conditions, you should use the "IS NULL" or "IS NOT NULL" operator. Do not use "=" or "<>" against NULL. The sample script below shows you some good examples:

SELECT 'A' IS NULL FROM DUAL;
-- Error: Boolean is not a data type.
-- Boolean can only be used as conditions
SELECT CASE WHEN 'A' IS NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
FALSE
SELECT CASE WHEN '' IS NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN 0 IS NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
FALSE
SELECT CASE WHEN NULL IS NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN 'A' = NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
-- Do not use "="
FALSE
SELECT CASE WHEN 'A' <> NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
-- Do not use "<>"
FALSE
SELECT CASE WHEN NULL = NULL THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
-- Do not use "="
FALSE

91. How To Concatenate Two Text Values in Oracle?
There are two ways to concatenate two text values together:
* The CONCAT() function.
* The '||' operator.
Here are some examples on how to use them:

SELECT 'ggl' || 'Center' || '.com' FROM DUAL;
gglCenter.com
SELECT CONCAT('atoztarget', '.com') FROM DUAL;
atoztarget.com

92. How To Increment Dates by 1 in Oracle?
If you have a date and you want to increment it by 1, you can do this by adding a date interval to the date. You can also do this by adding the number 1 directly to the date. The tutorial example below shows you how to add numbers to dates, and take date differences:

SELECT TO_DATE('30-APR-06') + 1 FROM DUAL;
-- Adding 1 day to a date
01-MAY-06
SELECT TO_DATE('01-MAY-06') - TO_DATE('30-APR-06') FROM DUAL;
-- Taking date differences
1
SELECT SYSTIMESTAMP + 1 FROM DUAL;
-- The number you add is always in days.
08-MAY-06
SELECT TO_CHAR(SYSTIMESTAMP+1,'DD-MON-YYYY HH24:MI:SS.FF3') FROM DUAL;
-- Error: Adding 1 to a timestamp makes it a date.
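Besides plain day arithmetic, Oracle's built-in date functions can step by calendar units; a small sketch using ADD_MONTHS() and MONTHS_BETWEEN():

SELECT ADD_MONTHS(TO_DATE('30-APR-06'), 1) FROM DUAL;
-- 30-APR is the last day of April, so the result is the last day of May
31-MAY-06
SELECT MONTHS_BETWEEN(TO_DATE('01-JUN-06'), TO_DATE('01-MAY-06')) FROM DUAL;
-- Exactly one calendar month
1

ADD_MONTHS() is often safer than adding 30 days when month lengths vary.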
93. How To Calculate Date and Time Differences in Oracle?
If you want to know how many years, months, days and seconds there are between two dates or times, you can use the date and time interval expressions: YEAR ... TO MONTH and DAY ... TO SECOND. The tutorial exercise below gives you some good examples:

SELECT (TO_DATE('01-MAY-2006 16:52:57','DD-MON-YYYY HH24:MI:SS')
  - TO_DATE('31-JAN-1897 09:26:50','DD-MON-YYYY HH24:MI:SS'))
  YEAR(4) TO MONTH FROM DUAL;
-- 109 years and 3 months
109-3
SELECT (TO_DATE('01-MAY-2006 16:52:57','DD-MON-YYYY HH24:MI:SS')
  - TO_DATE('31-JAN-1897 09:26:50','DD-MON-YYYY HH24:MI:SS'))
  DAY(9) TO SECOND FROM DUAL;
-- 39901 days and some seconds
39901 7:26:7.0
SELECT (TO_TIMESTAMP('01-MAY-2006 16:52:57.847', 'DD-MON-YYYY HH24:MI:SS.FF3')
  - TO_TIMESTAMP('31-JAN-1897 09:26:50.124', 'DD-MON-YYYY HH24:MI:SS.FF3'))
  YEAR(4) TO MONTH FROM DUAL;
-- 109 years and 3 months
109-3
SELECT (TO_TIMESTAMP('01-MAY-2006 16:52:57.847', 'DD-MON-YYYY HH24:MI:SS.FF3')
  - TO_TIMESTAMP('31-JAN-1897 09:26:50.124','DD-MON-YYYY HH24:MI:SS.FF3'))
  DAY(9) TO SECOND FROM DUAL;
-- 39901 days and some seconds
39901 7:26:7.723000000

94. How To Use IN Conditions in Oracle?
An IN condition tests a single value against a list of values. It returns TRUE if the specified value is in the list. Otherwise, it returns FALSE. Some examples are given in the script below:

SELECT CASE WHEN 3 IN (1,2,3,5) THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN 3 NOT IN (1,2,3,5) THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
FALSE
SELECT CASE WHEN 'Y' IN ('F','Y','I') THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE

95. How To Use LIKE Conditions in Oracle?
A LIKE condition is also called a pattern match. There are 3 main rules on using LIKE conditions:
* '_' is used in the pattern to match any one character.
* '%' is used in the pattern to match zero or more characters.
* The ESCAPE clause is used to provide the escape character in the pattern.
The following script provides you some good pattern matching examples:

SELECT CASE WHEN 'atoztarget.com' LIKE '%target%'
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN 'atoztarget.com' LIKE '%TARGET%'
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
-- Case sensitive by default
FALSE
SELECT CASE WHEN 'atoztarget.com' LIKE '%target_com'
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN '100% correct' LIKE '100\% %' ESCAPE '\'
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE

96. How To Use Regular Expressions in Pattern Match Conditions in Oracle?
If you have a pattern that is too complex for LIKE to handle, you can use the regular expression pattern match function: REGEXP_LIKE(). The following script provides you some good examples:

SELECT CASE WHEN REGEXP_LIKE('atoztarget.com', '.*target.*', 'i')
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN REGEXP_LIKE('atoztarget.com', '.*com$', 'i')
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE
SELECT CASE WHEN REGEXP_LIKE('atoztarget.com', '^atoz.*', 'i')
  THEN 'TRUE' ELSE 'FALSE' END FROM DUAL;
TRUE

97. What Are DDL Statements in Oracle?
DDL (Data Definition Language) statements are statements to create and manage data objects in the database. There are 3 primary DDL statements:
* CREATE - Creates a new database object.
* ALTER - Alters the definition of an existing data object.
* DROP - Drops an existing data object.
98. How To Create a New Table in Oracle?
If you want to create a new table in your own schema, you can log into the server with your account and use the CREATE TABLE statement. The following script shows you how to create a table:

>.\bin\sqlplus /nolog
SQL> connect HR/atoztarget
Connected.
SQL> CREATE TABLE tip (id NUMBER(5) PRIMARY KEY,
  2    subject VARCHAR(80) NOT NULL,
  3    description VARCHAR(256) NOT NULL,
  4    create_date DATE DEFAULT (sysdate));
Table created.

This script creates a testing table called "tip" with 4 columns in the schema associated with the login account "HR".

99. How To Create a New Table by Selecting Rows from Another Table?
Let's say you have a table with many data rows, and now you want to create a backup copy of this table with all rows or a subset of them. You can use the CREATE TABLE ... AS SELECT statement to do this. Here is an example script:

>.\bin\sqlplus /nolog
SQL> connect HR/atoztarget
Connected.
SQL> CREATE TABLE emp_dept_10
  2  AS SELECT * FROM employees WHERE department_id=10;
Table created.

SQL> SELECT first_name, last_name, salary
  2  FROM emp_dept_10;
FIRST_NAME           LAST_NAME                 SALARY
-------------------- ------------------------- ----------
Jennifer             Whalen                    4400

As you can see, this SQL script created a table called "emp_dept_10" using the same column definitions as the "employees" table and copied the data rows of one department. This is really a quick and easy way to create a table.

100. How To Add a New Column to an Existing Table in Oracle?
If you have an existing table with existing data rows, and want to add a new column to that table, you can use the ALTER TABLE ... ADD statement to do this. Here is an example script:

SQL> connect HR/atoztarget
Connected.
SQL> CREATE TABLE emp_dept_110
  2  AS SELECT * FROM employees WHERE department_id=110;
Table created.

SQL> ALTER TABLE emp_dept_110 ADD (vacation NUMBER);
Table altered.

SQL> SELECT first_name, last_name, vacation
  2  FROM emp_dept_110;
FIRST_NAME           LAST_NAME                 VACATION
-------------------- ------------------------- ----------
Shelley              Higgins
William              Gietz

This SQL script added a new column called "vacation" to the "emp_dept_110" table. NULL values were added to this column on all existing data rows.
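Once a column exists, ALTER TABLE can also change or remove it. A short sketch of two other common variations, reusing the illustrative emp_dept_110 table from above:

SQL> ALTER TABLE emp_dept_110 MODIFY (vacation NUMBER(3));
Table altered.

SQL> ALTER TABLE emp_dept_110 DROP COLUMN vacation;
Table altered.

MODIFY narrows the column to 3 digits (allowed here because every existing value is NULL), and DROP COLUMN removes the column and its data entirely.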
Text
Install Oracle 12c on Windows env.
This article describes the step-by-step installation of Oracle 12c Release 1 on Windows 2008, based on a server-class installation with a minimum of 2GB swap. Click on the link to download Oracle: Download Oracle 12c database. Then unzip the software to a directory, for example:

unzip winx64_12c_database_1of2.zip
unzip winx64_12c_database_2of2.zip

Step 1: Run Setup.exe and the installer will start. Uncheck the checkbox "I wish to receive security updates via My Oracle Support" and then click the "Next" button. In the next screen, ignore the security alert message.

Step 2: Select "skip software updates" and click the "Next" button; afterward select "create and configure a database" and click the "Next" button.

Step 3: Now select the system class; in my case I selected "server class".
– Desktop class: Choose this option if you are installing on a laptop or desktop class system (stand-alone). This includes a starter database and allows minimal configuration.
– Server class: Choose this option if you are installing on a server class system, which is how Oracle Database is typically used in a production data center. This option allows for more advanced configuration.

Step 4: Select the "single instance database installation" option and click "Next". Other options are available if you have the grid software 12c installed. In case you want to install Oracle 12c RAC on Windows 2008, click on the following link: Install oracle RAC 12c on windows 2008 using Virtual Box

Step 5: Either select "Typical install" (full database installation with basic configuration) or "Advanced install" to have more options during installation of the new database, and click "Next". Then select your language option (English) and click "Next".

Step 6: Accept the default "Enterprise Edition" and click the "Next" button.

Step 7: On this screen you can select the account which will be used to install and run the new database software. I used "Use Windows Built-in Account". Click the "Next" button, and accept the subsequent alert message by clicking "Yes".

Note: Oracle recommends specifying a standard Windows user account (not an Administrator account) to install and configure the Oracle home for enhanced security. This account is used for running the Windows services for the Oracle home. Do not log in using this account to perform administrative tasks.

Step 8: Select the location for the new binaries and click the "Next" button. Then select the type of database, "General purpose/Transaction processing", and click the "Next" button.

Step 9: Enter the "Global database name" and "Oracle system identifier (SID)" for the new database which will be created. If you check the checkbox "Create as Container database", your database will be able to consolidate many databases. In that case you need to enter the name of your first pluggable database under "Pluggable database name" (a 12c new feature). Click the "Next" button.

Step 10: Specify configuration options (more details) about the database in the respective tabs: Memory, Character sets, Sample schemas, etc.
– Check "enable automatic memory management" (this allows the database to distribute memory automatically between the SGA and PGA based on an overall target size). If automatic memory management is not enabled, then the SGA and PGA must be sized manually.
– Either use the default character set for this database, which is based on the language setting of the operating system, or choose one from the available list.
– You can choose to create a starter database with or without the sample schemas, or you can plug the sample schemas into your existing starter database after creation.

Step 11: Select the database storage type, either "file system" or ASM (select the ASM option only if you intend to use Oracle ASM). Provide the file system location and click "Next".

On the next screen, "Management options", you can register your database with Oracle Enterprise Manager Cloud Control.

Step 12: In the "Recovery options" screen, check "Enable Recovery" to specify the recovery area where backups will be stored.

Step 13: Specify a schema password for each user, or enter the same one for all. After clicking the "Next" button, checks are started to verify that the OS is ready for the database software installation. If everything is OK, the "Install" button appears; verify everything before clicking the "Install" button.


Now the database installation is in progress… wait until you get the summary window and finally the message "The installation of oracle database was successful".
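Once the installer finishes, it is worth confirming the result from the command line. A minimal sanity check, assuming you kept the default container database option (the views queried below are standard 12c dictionary views):

sqlplus / as sysdba
SQL> SELECT banner FROM v$version;
SQL> SELECT name, open_mode FROM v$pdbs;

The first query should report release 12.1, and the second should list your pluggable database (plus PDB$SEED) with its open mode.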
Photo
DATA INTEGRATION WITH ORACLE WAREHOUSE BUILDER

ABOUT THIS COURSE

Learn concepts of Oracle Warehouse Builder 11g.
This training starts you at the beginner level and concludes with knowledge of advanced concepts and end-to-end implementation of data integration and ETL through Oracle Warehouse Builder.

COURSE DETAILS & CURRICULUM

Installing and Setting Up the Warehouse Builder Environment
Oracle Warehouse Builder Licensing and Connectivity Options
Supported operating systems (OS), sources, targets, and optional components
What Is Oracle Warehouse Builder?
Using the Repository Assistant to Manage Workspaces
OWBSYS Schema
Using OWB 11.2 with Database 10g R2
Installing Oracle Warehouse Builder 11.2
Basic Process Flow of Design and Deployment
Getting Started with Warehouse Builder
Locations Navigator and Global Navigator panels
Logging In to OWB Design Center
Overview of Objects within a Project
Setting Projects Preferences: Recent Logons
OWB Projects
Overview of the Design Center
Organizing Metadata Using Foldering
Overview of Objects within an Oracle Module
Understanding the Warehouse Builder Architecture
Overview of Configurations, Control Centers, and Locations
Warehouse Builder Development Cycle
Registering an Oracle Workflow User
Registering DB User as an OWB User
Roles and Privileges of Warehouse Builder Users
Creating Target Schemas
Overview of the Architecture for Design, Deployment, Execution
Defining Source Metadata
Difference Between Obtaining Relational and Flat File Source Metadata
Data warehouse implementation: Typical steps
Creating an Oracle Module
Sampling Simple Delimited File
Creating Flat File Module
Sampling Multi-record Flat File
Selecting the Tables for Import
Defining ETL Mappings for Staging Data
Mapping Editor Interface: Grouping, Ungrouping, and Spotlighting
Creating External Tables
Purpose of a Staging Area
Set loading type and target load ordering
Define OWB Mappings
Levels of Synchronizing Changes
Using the Automapper in the Mapping Editor
Create and Bind process
Using the Data Transformation Operators
Lookup Operator: Handling Multiple Match Rows
Component Palette
Using the Aggregator, Constant, Transformation, and Pre/Post Mapping Operators
Pivot and Unpivot Operators
Using the Set, Sequence, and Splitter Operators
Using the Subquery Filter Operator
Using a Joiner
Deploying and Executing in Projects Navigator Panel
Cleansing and Match-Merging Name and Address Data
Name and Address Data Cleansing
Using the Match Merge Operator in a Mapping
Name and Address Software Providers
Reviewing a Name and Address Mapping
Settings in the Name and Address Operator
Consolidating Data Using the Match Merge Operator
Name and Address Server
Integrating Data Quality into ETL
Using Process Flows
Types of Activities: Fork, And, Mapping, End Activity
Creating Transitions Between Activities
Some More Activities: Manual, SQLPLUS, Email
Process Flow Concepts
Creating a Process Flow Module, a Process Flow Package and a Process Flow
Generating the Process Flow Package
Deploying and Reporting on ETL Jobs
Deployment Concepts
Repository Browser
Starting OWB Browser Listener and the Repository Browser
Browsing Design Center and Control Center Reports
Setting Object Configuration
Logical Versus Physical Implementation
Invoking the Control Center Manager
Deploy Options and Preferences
Using the Mapping Debugger
Preparing the testing environment and test data
Overview of the Mapping Debugger
Initializing a Mapping Debugging Session
Evaluating the flow of data to detect mapping errors
Setting breakpoints and watch points
Enhancing ETL Performance
Performance-Related Parameters in ETL Design
Configuring Indexes, Partitions, Constraints
Setting Tablespace Properties and Gathering Schema Statistics
Configuring Mappings for Operating Modes, DML Error Logging, Commit Control, and Default Audit Levels
Enabling Partition Exchange Loading (PEL) for Targets
Enabling Parallelism and Parallel DML
Performance-Related Parameters in Schema Design
Performance Tuning at Various Levels
Managing Backups, Development Changes, and Security
Overview of Metadata Loader Utilities (MDL)
Graphical UI for Security Management
Managing Metadata Changes by Using Snapshots
Object-Level Security
Using Change Manager
Version Management of Design Objects
Setting Security Parameters
Integrating with Oracle Business Intelligence Enterprise Edition (OBI EE)
Converting the UDML File for OBI EE
Oracle BI Admin and Answers Tool
Integrating with OBI EE and OBI SE
Business Justification: Tools Integration
Deploying the BI Module
Transferring BI Metadata to OBI EE Server
Setting Up the UDML File Location
Deriving the BI Metadata (OBI EE)
Administrative Tasks in Warehouse Builder
Multiple Named Configurations: Why and How
Enterprise ETL License Extends Core In-Database ETL
Creating an OWB Schedule
Using Configuration Templates
Using Multiple Named Configurations
Steps for Setting Up OWB in a RAC Environment
Managing Metadata
Using Pluggable Mappings
Advanced Activity Types in Process Flows
Using the Change Propagation Dialog
User-Defined Properties, Icons, and Objects
Invoking Lineage and Impact Analysis
Using Lineage and Impact Analysis Diagrams
Heterogeneous Predefined SQL Transformations
Native Relational Object Support
Accessing Non-Oracle Sources
Defining New Integration Platforms in OWB
Location of Seeded Code Templates
Extensible Framework of OWB 11g Release 2
Benefits of Extensible Code Templates
Creating New Code Templates
Designing Mappings with the Oracle Data Integration Enterprise Edition License
Convert a Classic Mapping to a CT Mapping That Utilizes Data Pump
Execution View Versus Logical View
Traditional Versus Code Template (CT) Mappings
Assigning a Code Template to an Execution Unit
Execution Units in a CT Mapping
CT Mappings Deploy to Control Center Agents
Right-Time Data Warehousing with OWB
Starting CDC Capture Process
What Refresh Frequency Does OWB Support
Building a Trickle Feed Mapping
What Is Meant by Real-Time Data Warehousing
Using Advanced Queues in Trickle Feed Mappings
Using CDC Code Templates in Mappings for Change Data Capture
Defining Relational Models
Defining a Cube
Using the Create Time Dimension Wizard
Binding Dimension Attributes to the Implementation Table
Defining Dimensions Using Wizards and Editors
Specifying a Cube's Attributes and Measures
Designing Mappings Using Relational Dimensions and Cubes
Defining Dimension Attributes, Levels, and Hierarchies
More Relational Dimensional Modeling
Initial Versus Incremental Data Warehouse Loads
Capturing Changed Data for Refresh
Creating a Type 2 Slowly Changing Dimension
Updating Data and Metadata
Choosing the DML Load Type
Support for Cube-Organized Materialized Views
How OWB Manages Orphans
Setting Loading Properties
Modeling Multidimensional OLAP Dimensions and Cubes
Dimensional Modeling Using OWB
Multidimensional Data Types
What Is OLAP
OWB Calculated Measures
Analytic Workspace
For any questions, simply contact us at -
Call: +44 7836 212635 WhatsApp: +44 7836 212635 Email: [email protected] https://training.uplatz.com
Text
Character Set Migration using CSSCAN Utility
Sometimes, as per application requirements, we need to change the database character set. As character set conversion can cause data loss or data corruption, it is necessary to check the convertibility of the data before altering the character set. CSSCAN (Database Character Set Scanner) is a scan tool that allows us to see the impact of a database character set change, or assists us in correcting an incorrect database nls_characterset setup. The scanner checks all the character data in the database, including the data dictionary, and tests for the effects and problems of changing the character set encoding.

The CSALTER script is part of the Database Character Set Scanner utility. The CSALTER script is the most straightforward way to migrate a character set, but it can be used only if all of the schema data is a strict subset of the new character set: each and every character in the current character set is available in the new character set and has the same code point value in the new character set.

With the strict superset criteria in mind, only the metadata is converted to the new character set by the CSALTER script, with the following exception: the CSALTER script performs data conversion only on CLOB columns in the data dictionary and sample schemas that have been created by Oracle. CLOB columns that users have created may need to be handled separately.

Note: The CSALTER script does not perform any user data conversion. It only changes the character set metadata in the data dictionary. It is possible to run CSSCAN from a client, but this client needs to be the same base version as the database home. For example, an Oracle 10g server needs an Oracle 10g client.

Steps to change the DB character set:

Step 1: Before starting the character set conversion, we need the following pre-checks for the database.

Remove the invalid objects. List all schemas which contain invalid objects. These invalid objects need to be compiled, or dropped if they are unused:

SQL> select distinct owner from dba_objects where status='INVALID';
SQL> exec utl_recomp.recomp_serial('SCHEMA');

Purge the recyclebin. If there are objects in the recyclebin, then purge the recyclebin, otherwise during CSALTER you will get ORA-38301.

Take a full backup of the database. Performing a backup before starting the character set conversion is very important. If the conversion fails in the middle, you must restore from a backup before reattempting the conversion.

Step 2: Install the CSS utility. If the character set migration utility schema is not installed on your database, you will get the error: CSS-00107: Character set migration utility schema not installed. Install the CSS utility by running the csminst.sql script from $ORACLE_HOME/rdbms/admin.

Step 3: Run the Database Character Set Scanner utility: set the ORACLE_SID and run

CSSCAN sys/password@instance_name AS SYSDBA FULL=Y

Step 4: Once the scan has completed successfully, the database should be opened in restricted mode so you can run the CSALTER script from the $ORACLE_HOME/rdbms/admin folder. If there is any possible conversion problem, the process will report the problem and clean itself up without performing the conversion. Once the conversion is completed successfully, you must restart the instance.

SQL> shutdown immediate;
SQL> startup restrict;
SQL> @$ORACLE_HOME/rdbms/admin/csalter.plb
SQL> shutdown immediate;
SQL> startup;

Caution: Changing the database character set is not an easy task. It is quite a tricky task and you may face errors which need Oracle Support. So I would strongly recommend involving Oracle Support.
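As a complement to the steps above, it also helps to record the character sets the database currently uses before and after the conversion. A quick check via standard dictionary views (a small sketch; the view and parameter names are standard Oracle):

SQL> SELECT parameter, value
  2  FROM nls_database_parameters
  3  WHERE parameter IN ('NLS_CHARACTERSET','NLS_NCHAR_CHARACTERSET');

Comparing this output against the target character set tells you whether you are in the strict-superset case that CSALTER supports.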
Text
Migrate your Oracle database to Azure SQL Database Managed Instance using SSMA 8.0
SQL Server Migration Assistant 8.0 introduces support for Azure SQL Database Managed Instance as a target. Whether you were planning to migrate or are already in the process of converting your database schema – now you have all the tools under your belt!
Before you start converting schemas in the SQL Server Migration Assistant, you need to install the SSMA for Oracle Extension Pack on your target Azure SQL Database Managed Instance. This adds functionality to your instance that will be used by the converted code produced by SSMA, and it only needs to be done once for each instance. To install the extension pack, download the Extension Pack installer and follow the usual installation steps. Once you get to the second part of the installation process, select Remote instance (Linux or Azure):
Provide connection credentials to your Azure SQL Database Managed Instance, and then install the Utilities database and Extension Pack libraries.
After installation is done, you can jump to the SSMA client tool to start working on your schema conversion. For this walk-through, let's move the well-known HR schema from Oracle to Azure SQL Database Managed Instance.
Open the SSMA client tool, and then create a new project targeting Azure SQL Database Managed Instance:
Connect to source and target servers.
After you have your source and target connected, right-click desired schema in the Oracle Metadata Explorer tree, and then select Convert Schema:
SSMA will load additional information about the objects and convert them to their SQL Server representations. Depending on the size of your schema, this may take some time. For our sample HR schema this takes less than 10 seconds, and once done you will be presented with the results in the form of conversion errors and warnings. In this simple case we only have five warnings that we need to look into:
As you may notice, four of them are related to the NUMBER datatype and one is about the READ ONLY clause that was ignored. Let's start with the NUMBER datatype conversion: double-click one of the warnings and it will take you to the place that requires attention.
Here you can see that p_emp_id is defined with the job_history.employee_id column's type, which resolves to an arbitrary NUMBER, since Oracle doesn't allow precision and scale for procedure arguments. SSMA cannot tell which values will be passed in there (integer or decimal) at runtime, so it assumes anything and uses the float(53) datatype on the SQL Server side. Given that float in SQL Server and NUMBER in Oracle support different scale and precision, SSMA notifies you that there might be potential data loss, depending on the values you decide to store in the column. In our case, since we know that we always pass an integer here – we can just update it to the int datatype in SQL Server and move on. The other three issues from this bucket are the same – an ID is being passed as an argument – let's just update all places to use int.
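To make the type-mapping issue concrete, here is an illustrative before/after sketch (the procedure and parameter names follow the HR sample schema; the converted T-SQL shown is roughly what the default mapping and the manual fix look like, not SSMA's verbatim output):

-- Oracle source: the anchored type resolves to a plain NUMBER
CREATE OR REPLACE PROCEDURE add_job_history
  (p_emp_id job_history.employee_id%TYPE, ...)

-- SSMA default conversion: an arbitrary NUMBER becomes float(53)
CREATE PROCEDURE hr.add_job_history @p_emp_id float(53), ...

-- Manual fix: only whole-number IDs are ever passed
CREATE PROCEDURE hr.add_job_history @p_emp_id int, ...

The general rule: whenever a scale-less NUMBER is only ever fed integers, narrowing it to int (or bigint) on the SQL Server side avoids both the precision warning and floating-point rounding surprises.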
Lastly, we have the "WITH READ ONLY" clause on the view that SSMA does not convert, so it notifies you that it was ignored. To make the view non-updatable in SQL Server, you can tweak the user's permissions after you deploy the schema. Given that it is not a functional blocker for the migration, let's proceed and deploy our converted schema to the target Azure SQL Database Managed Instance.
Make sure you have the converted schema (HR, in our case) as well as the ssma_oracle schema checked, then right-click the database node in the target metadata explorer and select Synchronize with Database:
SSMA will analyze existing database state and present you with the changes that are going to be applied:
Review the changes, and then click OK to proceed. SSMA will start loading new objects to the database. This may take some time, depending on the size of your schema. Wait until you see synchronization completion message in the Output window and now your Oracle schema is successfully migrated to Azure SQL Database Managed Instance!
The next step in the migration process is to move your data, and SSMA can assist you with this as well. All you need to do after the schema is migrated is to right-click the Tables node in the Oracle Metadata Explorer, and the select Migrate Data:
Re-enter your source and target credentials and wait until it's done. At the end you will be presented with a summary, indicating how many rows were moved for each table and whether there were any issues:
At this point your database is fully migrated and ready to use!
For those who are already familiar with SSMA and wondering if there are any limitations when targeting Azure SQL Database Managed Instance, compared to on-premises SQL Server – unfortunately, there are a few. In this initial release, the Tester feature and Server-side data migration are not supported. We will continue working with the Azure SQL Database Managed Instance team to enable all functionality in future versions.
0 notes
Text
87% off #Toad for Oracle for beginners: A database managment tool – $10
Learn how to use TOAD for Oracle. The big firms use this tool to manage their Oracle databases for better productivity
Beginner Level – 1.5 hours, 25 lectures
Average rating 3.0/5 (2 ratings)
Course requirements:
Oracle database ( We will cover this in the course) Sample Schema ( We will cover this in the course) Toad for Oracle ( We will cover this in the course) Basic understanding of SQL Basic understanding of Oracle database
Course description:
Welcome to Toad for Oracle. Toad for Oracle continues to be the "de-facto" standard tool for database development and administration. It provides an intuitive and efficient way for database professionals of all skill and experience levels to perform their jobs, with an overall improvement in workflow effectiveness and productivity. Toad is widely used in big and medium-size organizations as well as Fortune 100 companies. With Toad, Oracle database professionals can increase their productivity.
With Toad for Oracle you can:
Managing Database Connections create and configure database connections Edit/configure the oracle client settings Navigate Oracle Database Use Query Builder to build scripts Create SQL scripts inside TOAD’s editor Work with DDL scripts Work with DML scripts Build Better Code Faster Compare data Use Schema Browser to view database objects Work with Data Grids Sort data Filter data Edit data Export data Execute scripts Generate data reports Execute statements Comment out block of code Format blocks of code Use Code Snippets View and report on data Populate tables with sample data, Create joins and sub-queries Reverse engineer sql statements Understand your database environment through visual representations Meet deadlines easily through automation and smooth workflows Perform essential development and administration tasks from a single tool Manage and share projects, templates, scripts, and more with ease
By the end of this course you will be able to use TOAD for oracle in a live production environment confidently and increase your efficiency and productivity.
Full details:
Install and configure TOAD for Oracle
Manage connections to the Database
Navigate the Oracle Database
Create and alter objects
Use the Editor to build better code faster
Build queries visually
Use Schema Browser to view database objects
Populate tables with sample data
Import/export data
Intended audience:
General Oracle Users
Developers
DBAs
About Instructor:
Bluelime Learning Solutions
Bluelime is UK-based and creates quality, easy-to-understand eLearning solutions. All our courses are 100% video based. We teach hands-on examples that teach real-life skills. Bluelime has engaged in various types of projects for Fortune 500 companies and understands what is required to prepare students with the relevant skills they need.
Instructor Other Courses:
Kick off your SQL skills with Oracle Database SQL Workshop
A Beginner's Guide To Creating Database Driven Applications
How to Build an E-commerce Online Shop fast with no coding
Link
These are my learning notes about SQL (based on MySQL). As far as I know, MySQL was acquired by Oracle, whose main product is also an enterprise, highly professional database, and the founder of MySQL, worried about MySQL's future, started another open-source database named MariaDB. I consider MySQL and MariaDB the same thing; they even have the same command-line tool name. So without hesitation, I use MariaDB as the alternative to MySQL in this tutorial.
Database Creation
The basic operations on a database are switching to (USE), creating, altering, and dropping a database.
USE db_name;
CREATE DATABASE IF NOT EXISTS db_name CHARACTER SET = utf8;
ALTER DATABASE db_name CHARACTER SET = utf8;
DROP DATABASE IF EXISTS db_name;
Table Creation
The basic building blocks of a database are its tables, and defining table schemas is the first stage of database design. There are two important aspects of defining a table schema: the column data types and the constraints. Beyond the SQL standard, data type capabilities and definitions differ case by case; taking MariaDB as an example, here is its data types definition. I will use a sample database named sakila, which is MySQL's sample database located at this link; these are sakila's installation hints. Follow its step-by-step instructions to install sakila.
USE sakila;
SHOW TABLES;
SHOW CREATE TABLE actor\G
This statement prints the actor table's CREATE TABLE command.
CREATE TABLE `actor` (
  `actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8;
Its data types include smallint(5), varchar(45), and timestamp; the most commonly used data types are numeric and string types. The constraints are the clauses after the data type declaration, for example NOT NULL and AUTO_INCREMENT. There is another SQL statement to show the column definitions of a table.
MariaDB [sakila]> show columns from actor;
+-------------+----------------------+------+-----+-------------------+-----------------------------+
| Field       | Type                 | Null | Key | Default           | Extra                       |
+-------------+----------------------+------+-----+-------------------+-----------------------------+
| actor_id    | smallint(5) unsigned | NO   | PRI | NULL              | auto_increment              |
| first_name  | varchar(45)          | NO   |     | NULL              |                             |
| last_name   | varchar(45)          | NO   | MUL | NULL              |                             |
| last_update | timestamp            | NO   |     | CURRENT_TIMESTAMP | on update CURRENT_TIMESTAMP |
+-------------+----------------------+------+-----+-------------------+-----------------------------+
4 rows in set (0.00 sec)
Table insert and update
I insert a row into table actor.
INSERT INTO actor(first_name, last_name) VALUES ('zpcat', 'su');
SELECT * FROM actor WHERE first_name='zpcat';
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|      201 | zpcat      | su        | 2017-02-19 15:27:31 |
+----------+------------+-----------+---------------------+
I update the above row by renaming my first_name to 'moses'.
UPDATE actor SET first_name='moses' WHERE first_name='zpcat';
SELECT * FROM actor WHERE actor_id=201;
+----------+------------+-----------+---------------------+
| actor_id | first_name | last_name | last_update         |
+----------+------------+-----------+---------------------+
|      201 | moses      | su        | 2017-02-19 15:32:35 |
+----------+------------+-----------+---------------------+
Table query
Check how many rows are in the actor table:
SELECT COUNT(*) AS size FROM actor;
+------+
| size |
+------+
|  201 |
+------+
There are 201 records in the actor table. Next, query how many actors share each last_name.
SELECT COUNT(last_name) AS size, last_name FROM actor GROUP BY last_name;
+------+--------------+
| size | last_name    |
+------+--------------+
|    3 | AKROYD       |
|    3 | ALLEN        |
|    1 | ASTAIRE      |
|    1 | BACALL       |
Query how many unique last_name values there are with a subquery.
SELECT COUNT(*) AS size FROM (SELECT COUNT(last_name) AS size, last_name FROM actor GROUP BY last_name) b;
+------+
| size |
+------+
|  122 |
+------+
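As a side note (my own addition, not part of the original notes), the same count can be obtained more directly with COUNT(DISTINCT ...), which avoids the subquery altogether:

SELECT COUNT(DISTINCT last_name) AS size FROM actor;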
There are two techniques when using a subquery after a comparison operator: use an aggregate function inside the subquery's select list, or put ANY, SOME, or ALL before the subquery when it can return more than one row. Take the film table for example.
SELECT title FROM film WHERE rental_rate >= ALL (SELECT rental_rate FROM film WHERE length > 180);
SELECT title FROM film WHERE rental_rate >= (SELECT MAX(rental_rate) FROM film WHERE length > 180);
Both of the above SQL statements generate the same results.
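For completeness, here is a hedged sketch of the ANY form mentioned above (this query is my own illustration, not one of the equivalent pair): it returns titles whose rental_rate beats at least one film longer than 180 minutes.

SELECT title FROM film WHERE rental_rate > ANY (SELECT rental_rate FROM film WHERE length > 180);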
Next, create another table from the current actor table.

CREATE TABLE t_actor (
  actor_id smallint(5) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name varchar(45) NOT NULL,
  last_name varchar(45) NOT NULL
) SELECT first_name, last_name FROM actor;
This SQL creates a new table named t_actor. Then create another table, t_actor_first_name.
CREATE TABLE t_actor_first_name (
  id smallint(5) unsigned NOT NULL AUTO_INCREMENT PRIMARY KEY,
  first_name varchar(45) NOT NULL
) SELECT first_name FROM t_actor GROUP BY first_name;
Update t_actor's first_name column to hold the id from t_actor_first_name.
UPDATE t_actor a INNER JOIN t_actor_first_name b ON a.first_name = b.first_name SET a.first_name = b.id;
Now tables t_actor and t_actor_first_name are related. Let's query all actors' first_name and last_name from the two related tables by inner joining them.
SELECT b.first_name, a.last_name FROM t_actor a JOIN t_actor_first_name b on a.first_name = b.id;
I think that's the beauty of a relational database: using JOIN to link multiple tables together. Of course, MySQL has four types of JOIN: INNER JOIN, LEFT JOIN, RIGHT JOIN, and CROSS JOIN; a LEFT JOIN example follows.
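As a hedged sketch (my own addition, reusing the two tables built above): a LEFT JOIN keeps every row from the left table even without a match on the right. Here every first name happens to have at least one matching actor, but a first name with no match would still appear, with NULL for last_name.

SELECT b.first_name, a.last_name
FROM t_actor_first_name b
LEFT JOIN t_actor a ON a.first_name = b.id;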
0 notes
Text
Migrating a commercial database to open source with AWS SCT and AWS DMS
You're moving your applications to the AWS Cloud and you want to migrate from a commercial database engine to an open source database. One thought that may have rightfully crossed your mind is that changing database engines is not a simple task. Rather, it can be a complex, multi-step process that involves pre-migration assessments, converting database schema and code, data migration, functional testing, performance tuning, and many other steps. However, the two fundamental steps in this process that require the most effort are converting the schema and database code objects, and migrating the data itself. Fortunately, AWS provides you with the AWS Schema Conversion Tool (AWS SCT) and AWS Database Migration Service (AWS DMS), which help with each step. If you've never used these products, you may be wondering how they work. Moreover, you may prefer to test them on a sample database to get some experience under your belt before migrating your application databases. In this post, I introduce you to a series of step-by-step guides for migrating Oracle and Microsoft SQL Server databases to open-source engines like MySQL and PostgreSQL using the AWS SCT and AWS DMS. I also provide an AWS CloudFormation template that you can use to provision the required resources in your own account and follow along to migrate a sample database.

Converting schema and code objects

Converting the database schema and code objects is usually the most time-consuming operation in a heterogeneous database migration. By converting the schema properly, you can achieve a major milestone of the migration. The AWS SCT is an easy-to-use application that you can install on a local computer or an Amazon Elastic Compute Cloud (Amazon EC2) instance. It helps simplify heterogeneous database migrations by examining your source database schema and automatically converting the majority of the database code objects, including views, stored procedures, and functions, to a format compatible with your new target database. Any objects that the AWS SCT can't convert automatically are marked with detailed information that you can use to convert them manually.

Migrating the data

After you complete the schema conversion, you need to move the data itself. In the case of production databases, you may not be able to afford any downtime during the migration. Moreover, you may want to keep the transactions from the source and target databases in sync until you switch your application to the new target. AWS DMS helps you migrate the data from the source database to the target database easily and securely. AWS DMS supports data migration to and from most widely used commercial and open-source databases. The source database can be located in your on-premises environment, running on an EC2 instance, or it can be an Amazon Relational Database Service (Amazon RDS) database. The target can be a database in Amazon EC2 or Amazon RDS. Additionally, the source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Migration process

To help you get started with the AWS SCT and AWS DMS, I've created a series of step-by-step guides that walk you through the entire migration process for Oracle to Amazon Aurora PostgreSQL, and Microsoft SQL Server to Amazon Aurora MySQL. The steps that we perform to migrate these databases are similar:

1. Install and configure the AWS SCT.
2. Create the schema in the target database.
3. Drop foreign keys and secondary indexes on the target database, and disable triggers (see the sketch after this list).
4. Set up an AWS DMS task to replicate your data—full load and change data capture (CDC).
5. Stop the task when the full load phase is complete.
6. Inspect the target database for migrated data.
7. Enable constraints such as foreign keys and secondary indexes in the target database.
8. Enable an AWS DMS task for ongoing replication to keep the source and the target database in sync.
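To make steps 3 and 7 concrete, here is a minimal hedged sketch in PostgreSQL (the table, constraint, and index names are illustrative, not taken from the guides):

-- Before the full load: remove constraints that slow down bulk inserts.
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
DROP INDEX IF EXISTS idx_orders_order_date;
ALTER TABLE orders DISABLE TRIGGER USER;  -- disables user-defined triggers only

-- After the full load completes: restore everything.
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
  FOREIGN KEY (customer_id) REFERENCES customers (customer_id);
CREATE INDEX idx_orders_order_date ON orders (order_date);
ALTER TABLE orders ENABLE TRIGGER USER;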
Prerequisites

To follow along with the guides, you need to have an AWS account. Create and activate an AWS account if you don't have one already. You also need to have an Amazon EC2 key pair in the Region to deploy the resources. Create an Amazon EC2 key pair if you don't have one.

Creating a stack

We use a CloudFormation template to set up the environment, including the source and target databases, networking configurations, and the tools we use. This way, we can focus on the steps dedicated to the migration. For instructions on launching the stack and configuring your environment, see Environment Configuration on GitHub. Keep in mind that some resources incur costs as long as the resources are in use. At the end of the migration, delete the AWS DMS resources and tear down the CloudFormation stack. To get an idea of the scale of the cost, I've provided a small pricing example (accurate as of January 2021) that assumes you have the stack for a duration of 5 hours (sufficient to complete the walkthrough) in the us-east-1 (N. Virginia) Region.

| Service | Amount Consumed | Pricing Structure | Total Cost |
|---|---|---|---|
| Amazon EC2 | 1 x m5a.xlarge Windows with SQL Server Standard (License Included) for 5 hours | $0.836 per hour | $4.18 |
| Amazon EBS | 1 x 250 GB with 2000 IOPS (io1) for 5 hours | $0.125 per GB-month of provisioned storage AND $0.065 per provisioned IOPS-month | $1.12 |
| Amazon Aurora | 1 x db.r5.xlarge with 20 GB of storage and an estimated 1M IO requests for 5 hours | $0.58 per hour for on-demand instance AND $0.10 per GB-month of storage with $0.20 per 1 million requests | $4.92 |
| Amazon RDS (Oracle migration only) | 1 x db.r5.xlarge (BYOL) with 100 GB of GP2 storage for 5 hours | $0.466 per hour for on-demand instance AND $0.115 per GB-month of storage | $2.41 |
| AWS DMS | 1 x c4.xlarge with 100 GB of storage for 5 hours | $0.238 per hour | $1.19 |
| AWS SCT | N/A | Free | $0 |
| Total Cost | | | $13.82 |

Step-by-step guides

To keep this post short, I host the instructions of the migration process on GitHub. Use the following links to find the appropriate guide for the database engine that you want to migrate:

* Step-by-Step Guide for Oracle Migration to Amazon Aurora (PostgreSQL)
* Step-by-Step Guide for Microsoft SQL Server Migration to Amazon Aurora (MySQL)

Amazon Aurora is a MySQL- and PostgreSQL-compatible relational database built for the cloud. It combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Although I use Aurora as my target database in this post, you can follow similar instructions to migrate your source database to a MySQL or a PostgreSQL database on Amazon RDS, or a database that you manage on an EC2 instance.

Deleting the resources

When your migration is complete, you need to delete the AWS DMS resources and the CloudFormation stack to stop incurring charges. For instructions, see Environment Cleanup on GitHub.

Conclusion

Database migration can be a difficult process, but products like the AWS SCT and AWS DMS mitigate some of that complexity.
The step-by-step guides that I provide in this post help you get hands-on experience migrating Oracle and Microsoft SQL Server to Aurora PostgreSQL and Aurora MySQL using the AWS SCT and AWS DMS. You can follow similar steps to migrate to and from other supported database engines.

Further reading

I recommend that you look through the following resources as you plan your migration:

* Database Migration—What Do You Need to Know Before You Start?
* AWS SCT User Guide
* AWS DMS User Guide
* AWS Database Migration Service Best Practices
* Validating database objects after migration using AWS SCT and AWS DMS

Thanks for reading and have a successful database migration!

About the Author

Hooman Hamilton is a Solutions Architect with Amazon Web Services. He works with customers to build scalable, highly available, and secure solutions in the AWS Cloud.

https://aws.amazon.com/blogs/database/migrating-a-commercial-database-to-open-source-with-aws-sct-and-aws-dms/
0 notes
Text
87% off #Installing and Manipulating Oracle Database Using SQL – $10
Hands-On Practical Examples
Beginner Level – Video: 1.5 hours, 15 lectures
Average rating 3.9/5
Course requirements:
You should have a sample database and a connection to the database. We'll cover this in section one of this course.
Course description:
Oracle databases are among the most widely used in the world. You will find Oracle databases in large and medium-sized organizations, storing various types of information. This course will teach you how to install an Oracle database step by step from scratch, with no steps skipped. You will also get access to a sample schema and its database tables. But before you can access the tables in the schema, it has to be unlocked. I will show you the special code used to unlock it in this course.
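As a hedged sketch (assuming the course uses Oracle's standard HR sample schema; the password shown is illustrative), unlocking a locked sample schema account typically looks like this, run as a privileged user from the command prompt:

-- Connect as a DBA (for example, sqlplus / as sysdba), then:
ALTER USER hr IDENTIFIED BY hr ACCOUNT UNLOCK;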
Once the database is installed, you will need a tool to connect to it and perform data manipulation. The tool you will install is Oracle SQL Developer, developed by Oracle. This is a free tool that can sometimes be tricky to install. I will show you how to install it step by step.
You will be using SQL (Structured Query Language) to talk to and give instructions to the database to perform various types of data manipulation.
This course is for those who have some very basic knowledge of database fundamentals as well as basic SQL. However, the course is also easy enough for anyone who just has a basic knowledge of how to use a computer.
The format of this course is video based, and the duration of the course is under 2 hours.
The examples are practical, hands-on, easy to follow, and resemble a real-life environment.
What You Learn in this course:
* How to add new data into a database
* How to remove data from a database
* How to update existing data in a database
* How to get data from a database and create a report from it
* How to get unique data values from a database
Full details
* Remove data from a database
* Add data into a database
* Update existing records inside a database
* Create a report from extracted data
This course is meant for those who have a basic understanding of databases and some very basic SQL.
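To give a flavor of the operations listed above, here is a minimal hedged sketch using Oracle's HR sample schema (assuming hr is unlocked; the values are illustrative, not from the course):

-- Add, change, remove, and report on data:
INSERT INTO hr.regions (region_id, region_name) VALUES (5, 'Oceania');
UPDATE hr.regions SET region_name = 'Australasia' WHERE region_id = 5;
DELETE FROM hr.regions WHERE region_id = 5;
SELECT DISTINCT region_name FROM hr.regions;  -- unique values for a report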
Reviews:
“It’s kind of okay, I guess. I didn’t have Oracle Database installed before this video, so I followed the instructions on what to download and how to install. While the instructions were correct — as far as I remember — it wasn’t obviously clear why certain things needed to be done. I would like to hear more about how Oracle organizes their database objects and permissions (the bigger picture) before being told how to unlock the hr schema via command prompt. Also, certain things like the DESC command were covered, but you can click on any table name to open a tab containing A LOT more information about that table. So why even bother covering the DESC command?” (Blazej K)
“Top” (Randrin Nzeukang Nimpa)
“So far so good, the pace is a little slow for my taste” (Felipe De Jesús Piña Arjona)
About Instructor:
Skill Tree
We are an experienced company that provides quality video-based training. Our courses are easy to follow and understand, and will take you from an absolute beginner with no technical skills to being efficient and confident with technical skills like SQL and databases. We have worked with companies of various sizes and provided consultancy services at various levels. Thank you for learning with us, and we hope your experience will be pleasant.
Instructor Other Courses:
Introduction to MySQL Database
SQL DBA For Beginners
Learn and Understand HTML and CSS From Scratch with Videos
0 notes
Text
300+ TOP NETEZZA Interview Questions and Answers
NETEZZA Interview Questions for freshers and experienced:
1. How does nzload work in Netezza?
A query usually goes through plan generation, optimization, and transaction management. Netezza bypasses these steps for loads: loads are done in terms of sets and are based on the underlying table structures, i.e., if two tables are distributed on different columns, then the loading time differs between them. The data format is verified and the distribution of records calculated very quickly; nzload fills out the set structure and writes to the storage structure. While the nzload job is running, it sends binary records to the SPUs along with the current transaction ID. When a SPU receives new binary records, it immediately allocates resources and writes the records to the database or table on the disks that the SPU owns.

2. Explain FPGA and how it is useful for query performance.
A Field Programmable Gate Array (FPGA) is located on each SPU. Netezza is different from other architectures: it can do a "hardware upgrade" through software by using the FPGA, and the hardware is reconfigured during install. While reading data from disk, the FPGA on each SPU also helps by "filtering" unnecessary data before it gets loaded into memory on the SPU. This way, the SPU is not overwhelmed with all the data from disk.

3. What is a zone map?
A zone map in Netezza is similar (concept-wise) to partitions in Oracle. Netezza maintains a map of the data so that it can rely on the zone map to pull only the range it is interested in. For example, if we need to pull data from Jan 2009 till June 2009 from a table that is distributed on a date column, the zone map helps us achieve this. Zone maps are maintained by Netezza automagically; no user intervention is needed. Zone mapping is done at a block (extent) level. Netezza has zone maps for all columns (not just the distribution column), including information such as minimum, maximum, and total number of records.

4. How do you deal with historical data, with respect to zone maps?
Sort the data first, based on the historical column (for example, date), and load it in using nzload.

What are the different ways to load?
* nzload
* External tables
* Create Table As (aka CTAS)
* Inserts (Eeeewee!!)

5. Does everything get cached in Netezza (or any other data appliance)?
Typically only the schema and other database objects are cached in appliances. Data is not cached, in general. In most cases, data is not saved anywhere (in any cache or on the host computer) and is streamed directly from the SPU to the client software.

6. What is the best data appliance?
Obviously, it all depends. This is my (limited) view: from a features perspective, Greenplum; popularity with a bit of hype, Netezza; matured and well respected, Teradata; with existing database integration, Dataupia. Largest implementations: Teradata, 72 nodes (two quad-core CPUs, 32GB RAM, 104 x 300GB disks per node), managing 2.4PB; Greenplum, Fox Interactive Media using a 40-node Sun X4500 with two dual-core CPUs, 48 x 500GB disks, and 16 GB RAM (1PB total disk space). Source: Vertica's Michael Stonebraker!

7. When are we likely to receive incorrect (aggregate) results?
Very rarely, a driver may return aggregated results that are still being processed back to the client. In this case, the client may assume that the calculation is complete, instead of updating with the latest or final results. Obviously, the driver has to wait for Netezza to complete the operation on the host computer before delivering results.

8. Explain how data gets stored in Netezza and how SPU failover takes place.
Data is stored based on selected field(s) which are used for distribution.
==Data (A)==> Hash Function (B) ==> Logical SPU identifier list (C) ==> Physical SPU list (D) ==> Storage (E)

9. Describe how hashing maps incoming data to SPUs.
When data arrives, it is hashed based on field(s); a hash function (B) is used for this purpose. For example, for a hypothetical 32-node system, the logical SPU identifier list has 32 unique entries. If there are 1000 hashed data items from (B), there are 1000 entries in (C), all mapping to only 32 SPU entries (a number of data items go to the same SPU, thus multiple (B) entries map to the same (C)). For instance, (C) has values . This way, 1000 data entries are mapped. (D) has the physical IP address of both the primary and failover SPU. If there is a failover, this is the only place where Netezza needs to update its entries. The same goes for a system that has a new SPU added. It is a little complicated, but in principle this is the concept.

10. What are the four environment variables that are required? What are the different states of Netezza?
Environment variables: NZ_HOST, NZ_DATABASE, NZ_USER, and NZ_PASSWORD.
* Online: normal or usual state.
* Stopped: Netezza will shut down after completing current queries; no new queries are allowed.
* Offline: waits for completion of current queries; new or queued queries receive an error.
* Paused: same as above, but no error is displayed. Typically seen during Netezza bootup or startup.
* Down: just plain down; could be due to a Netezza server problem or user initiated.

Does Netezza support concurrent updates of the same record?
In case of a conflict in which the same record is set for modification, Netezza rolls back the most recent transaction attempted on the same record, in fact the same table. This is generally acceptable in DW environments. Netezza does support serializable transactions and does not permit dirty reads.
11. How does Netezza update records? Give an idea of how transactions are maintained and how read consistency is maintained.
Netezza does not update records in place; it marks records with a delete flag. In fact, each record contains two slots, one for the create xid and another for the delete xid. The delete xid allows us to mark a record with the current transaction for deletion; up to 31 transactions are allowed in Netezza for all tables. As noted earlier, only one update at a time is allowed on the same table, though; here "update" refers to transactions that are not committed yet. Coming back to the delete xid, this is how Netezza maintains transaction rollback and recovery. Once a record is modified, its delete xid is given the transaction id; this is changed from the previous value of 0 (all records, when loaded, contain 0 for the delete xid). Note that the FPGA uses its intelligence to scan data before delivering it to the host or applications. Sample data:
// First time a record is loaded, record R1
// After some time, updating the same record
// Record R1 is updated; note T33
// New update record R33; similar to a new record, this has zero for delete xid
If the record is deleted, the deletion xid simply contains that transaction id. Based on the above, how do you know a record is the latest? It has zero in the delete xid flag. Extending the same logic, how do we know a record is deleted? It has a non-zero value in the delete xid flag. How do you roll back a transaction? Following the logic above, we can roll back a transaction of interest; note that the transaction id is located in the create xid flag, and that is our point of interest in this case. From what I know, the row id and create xid are never modified by Netezza.

12. What happens to records that are loaded during an nzload process but were not committed?
They are logically deleted, and an administrator can run nzreclaim; we may also truncate the table.

Can a group become a member of another group in Netezza user administration? Can we use the same group name for databases?
In Netezza, the public group is created automatically and everyone is a member of this group by default. We can create as many groups as we like, and any user can be a member of any group(s). A group cannot be a member of another group. Group names, user names, and database names are unique; that is, we cannot have a database called sales and a group also called sales.

13. How can we give a global permission to user joe so that he can create tables in any database?
Log into the system database and give that permission to the user by saying "grant create table to joe;".

What permission will you give to connect to a database? List.
Grant list, select on table to public (if logged into the sales database, this allows all users to query tables in the sales database).

14. Do we need to drop all tables and objects in a database before dropping the database?
No, drop database will take care of it.

15. What constraints on a table are enforced?
Not null and default. Netezza does not apply PK and FK.

Why is a NOT NULL specification better in Netezza?
Specifying NOT NULL results in better performance, as NULL values are tracked at the row header level. Having NULL values results in storing references to NULL values in the header. If all columns are NOT NULL, there is no record header.

Create Table As (CTAS): does it distribute data randomly or based on the table from which it received data?
Response: A newly created table from CTAS gets its distribution from the original table.
Why do you prefer truncate instead of the drop table command?
Truncate just empties data from the table, keeping the table structure and permissions intact.

16. When no distribution clause is used while creating a table, what distribution is used by Netezza?
The first column (same as in Teradata). (See the sketch at the end of this list.)

Can we update all columns in a Netezza table?
No, the column that is used in the distribution clause cannot be updated. Remember, up to four columns can be used for distribution of data on the SPUs. From a practical sense, updating distribution columns results in redistribution of data, the single biggest performance hit when a large table is involved. This restriction makes sense.

17. What are a data slice and a SPU?
For me, they are the same! Of course, this answer is not an accurate reply in your interview(s).

What data type works best for zone maps?
Zone maps work best for integer data types.

18. What feature in Netezza do you not like?
Of course, a large list, especially when compared to Oracle. PK and FK enforcement is a big drawback, though this is typically enforced at the ETL or ELT stage.

19. What data type is most suited for zone maps?
Zone maps are typically useful for integers, dates and times, and variations of these data types. Zone maps are useful for ordered data that is usually built into the data being loaded; for example, phone call logs.

20. How are materialized views (MV) used in Netezza?
Similar to other databases, materialized views are defined against base tables. Just like with views, we cannot insert, update, or delete from an MV. An MV is automatically used by the optimizer when appropriate. When base table data changes, the MV gets automatically updated by Netezza. An MV is based on a single base table, thus practically not as useful.

21. What are the typical join types in Netezza?
Mostly hash joins (note that Netezza may do a hash join in memory or on the SPUs if memory is not sufficient for doing a hash join); in some cases sort-merge or even nested-loop joins are performed (I did not really come across these much; in Netezza lingo it is called a function-based join or something similar, I am not sure about the correct term). Of course, cross or product joins are possible like in any database.

22. What is the equivalent of replicating a table in Netezza on all SPUs?
Imagine a situation in which a small table (us_states) is required for joining against a large table (creditcard_txn). Netezza may decide to read this table from all SPUs (remember, table data is spread across all SPUs) and "assemble" the table on the host. This data is then broadcast to all SPUs, resulting in a copy on each SPU. Note that an important db parameter, "factrel_size_threshold", holds the triggering number; any table beyond this number of rows is considered a fact table. That is, if this parameter is set to 1 million, any table holding more than 1 million rows is considered by Netezza a fact table and will not result in this replication or broadcast.

23. The primary goal of table design is to distribute data evenly across all SPUs. Is it a good idea to choose multiple columns in Netezza so that data gets distributed evenly?
NO, unless all columns are used during the join process. Most likely, this results in a large amount of redistribution of data during query execution.

24. From a design point of view, do you foresee a performance problem using order ID as an integer for one table and order num as a varchar for another table as distribution keys? Assuming that these two tables are often used for a join, do you see any performance problem?
YES, we will encounter a performance problem, as Netezza redistributes the integer to the varchar type and recreates the first table. This can be avoided by distributing both tables on the order number as either integer or varchar.

25. What is a snippet?
A snippet is a small block of database operations, typically three to four operations, carried out on all SPUs where the data is located. If a query results in snippets A, B, C, ..., X, they are carried out sequentially.

26. List which options are prioritized when a join operation is required.
Netezza evaluates joins in this order of preference:
* Colocated joins: all data for the join is located on the same SPU.
* Redistribute: all required data is not located on the same SPU; send data to the corresponding SPU where the driving table's data is located.
* Broadcast: mentioned as replication above. Send all data from the SPUs to the host, which collates the data and sends it to all SPUs, so each SPU has the entire table's data. That is, if a Netezza machine has 32 SPUs, there will be 32 physical copies, one on each SPU.
Coming to joins, Netezza typically prefers, in this order: hash join in memory, hash join on disk, sort-merge join, nested loops, cross join. Oracle's preference closely resembles this order.

27. How can I look into some system parameters?
nzsystem showRegistry: command for looking into system-specific information.
/nz/data/postgresql.conf: to check NZ db parameters; we can change this file OR use the set command in SQL.

28. Why is the integer data type preferred in Netezza?
A couple of reasons:
* Better joins, thus more efficient.
* Netezza compression works only for integer types of data, not varchar or date.
* Zone maps are based on integer data types.

29. How can we find the log (SQL) activity for a day?
We can find this under the /nz/kit.x.y/log/postgres/pg.log file. Older files are named pg.log.N (where N starts from 1; after pg.log, this is the latest file). Assuming that we are looking for a weekday within pg.log, we may run:
$ cat pg.log | sed -n "/2010-02-01 00/,/2010-02-05 23:59/p" > pg.firstweekFeb2010.log
If this produces no data, look for the corresponding log file based on the last update timestamp (ls -ltr sorts them in reverse timestamp order).

30. What are the ways to get data into Netezza? What happens if inserts are interrupted? How does Netezza handle commits?
Please see my previous posting first. Here is a short list: load using nzload, SQL inserts (very slow), the create table as command or inserts from other tables, and external tables. When an insert is interrupted, all rows that were inserted are already committed unless we use a transaction command.

31. How do you use the nzsystem command?
Most Netezza commands come with a variety of subcommands. For nzsystem, we can list a number of parameters using "nzsystem showregistry". To find out quickly whether Netezza is up and running, use "nzsystem showstate". However, using these commands requires you to set NZ_USER and NZ_PASSWORD at the Unix level. You can do this with 'export NZ_USER=username' and 'export NZ_PASSWORD=userpassword', and then run the command. Alternatively, you can specify the login and password combo at the command level. I do not prefer that way, as any user can run "ps -ef" to check what commands you are using, which also reveals the login and password you entered at the shell level.

32. Where are the Netezza binaries stored?
Two important bin locations for Netezza are /nz/kit.x.y/sbin and /nz/kit.x.y/bin. Some configurations are located in the /nz/data directory.
33. How can we remove formatting with NZSQL?
NZSQL by default shows text formatted. In cases where we do not need white space, use the "nzsql -A" option. For example, nzsql -A -c "select count(*) from hertz.daily_bookings;" lets us run a SQL command directly without logging into nzsql interactively.

34. Is there a way to stop NZSQL if one of the SQL commands fails?
Yes. A similar feature exists in other databases too. In this case, set ON_ERROR_STOP=true on the nzsql command so that subsequent commands do not get executed. For instance, say we want to create a new table (CTAS) and later drop the old table. If the new table creation fails, we certainly do not want to drop the old table. In this case, this option is very useful.

35. Can we access data in other databases with the same NZSQL?
This depends on the version. Version 5 and upwards support selects against database.owner.table, provided that the user has the same login and password access. Inserts are allowed in the current database with data from other databases, but not the other way around (meaning we cannot insert into another database from the database where we logged in).

36. How can we see plans and the corresponding CPP files?
We can just run "explain" on a query to see how the plan looks. At run time, plans are created under /nz/data/plans, where we can see the plans generated during execution. The corresponding CPP code is located under /nz/data/cache/.

37. What constraints on a table are enforced?
Not null and default. Netezza does not apply PK and FK.

38. How is a load achieved in Netezza, and why is it fast?
Loads bypass a few steps that a query would typically go through (a query goes through plan generation, optimization, and transaction management). Loads are done in terms of "sets", and each set is based on the underlying table structure (thus loads for two different tables are different, as their sets are based on the table structures). Data is processed to check format, and the distribution of records is calculated very quickly (in one step); the load fills the "set" structure and writes to the storage structure. Storage also performs space-availability checks and other admin tasks, and all these operations go pretty quickly (think of them as Unix named pipes that stream data while the SPUs store the records).

39. What are the partitioning methods in Netezza?
There are two partitioning methods available in Netezza:
* Random partitioning: Netezza uses a round-robin method and distributes data randomly.
* Hash partitioning: Netezza uses a hash algorithm on the key specified in the DISTRIBUTE ON clause, and data is distributed on those columns.

40. What is the use of materialized views?
A materialized view reduces the width (number of columns) of data being scanned in the base table by creating a thin version (fewer columns) of the base table that contains a small subset of frequently queried columns.
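Tying together questions 16, 19, 39, and 40 above, here is a hedged sketch in Netezza-style SQL (the table and column names are my own illustration, not from the original list):

-- Hash distribution on an explicit key (up to four columns are allowed);
-- an integer-friendly date key also benefits zone maps:
CREATE TABLE call_log (
  call_id   INTEGER NOT NULL,
  cust_id   INTEGER NOT NULL,
  call_date DATE    NOT NULL,
  duration  INTEGER
)
DISTRIBUTE ON (cust_id);

-- Random (round-robin) distribution for a staging table:
CREATE TABLE stage_call_log (raw_line VARCHAR(1000)) DISTRIBUTE ON RANDOM;

-- A thin materialized view over the wider base table:
CREATE MATERIALIZED VIEW mv_call_log AS
  SELECT call_id, call_date FROM call_log;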
0 notes
Text
Migrating user-defined types from Oracle to PostgreSQL
Migrating from commercial databases to open source is a multistage process with different technologies, starting from assessment, data migration, data validation, and cutover. One of the key aspects of any heterogeneous database migration is data type conversion. In this post, we show you a step-by-step approach to migrate user-defined types (UDT) from Oracle to Amazon Aurora PostgreSQL or Amazon RDS for PostgreSQL. We also provide an overview of custom operators to use in SQL queries to access tables with UDT in PostgreSQL. Migrating UDT from Oracle to Aurora PostgreSQL or Amazon RDS for PostgreSQL isn't always straightforward, especially with UDT member functions. UDT defined in Oracle and PostgreSQL store structured business data in its natural form and work efficiently with applications using object-oriented programming techniques. UDT in Oracle can have both the data structure and the methods that operate on that data within the relational model. Though similar, the approaches to implementing UDT with member functions in Oracle and PostgreSQL have subtle differences.

Overview

At a high level, migrating tables with UDT from Oracle to PostgreSQL involves the following steps:

* Converting UDT – You can use the AWS Schema Conversion Tool (AWS SCT) to convert your existing database schema from one database engine to another. Unlike PostgreSQL, user-defined types in Oracle allow PL/SQL-based member functions to be a part of the UDT. Because PostgreSQL doesn't support member functions in UDT, you need to handle them separately during UDT conversion.
* Migrating data from tables with UDT – AWS Database Migration Service (AWS DMS) helps you migrate data from Oracle databases to Aurora PostgreSQL and Amazon RDS for PostgreSQL. However, as of this writing, AWS DMS doesn't support UDT. This post explains using the open-source tool Ora2pg to migrate tables with UDT from Oracle to PostgreSQL.

Prerequisites

Before getting started, you must have the following prerequisites:

* The AWS SCT installed on a local desktop or an Amazon Elastic Compute Cloud (Amazon EC2) instance. For instructions, see Installing, verifying, and updating the AWS SCT.
* Ora2pg installed and set up on an EC2 instance. For instructions, see the Ora2pg installation guide. Ora2pg is an open-source tool distributed via the GPLv3 license.
* EC2 instances used for Ora2pg and the AWS SCT should have connectivity to the Oracle source and PostgreSQL target databases.

Dataset

This post uses a sample dataset of a sporting event ticket management system. For this use case, the table DIM_SPORT_LOCATION_SEATS with event location seating details has been modified to include location_t as a UDT. location_t has information on sporting event locations and seating capacity.

Oracle UDT location_t

The UDT location_t has attributes describing sporting event location details, including an argument-based member function to compare the current seating capacity of the location with the expected occupancy for a sporting event. The function takes the expected occupancy for the event as an argument and compares it to the current seating capacity of the event location. It returns t if the sporting event location has enough seating capacity for the event, and f otherwise.
See the following code:

create or replace type location_t as object (
  LOCATION_NAME VARCHAR2 (60),
  LOCATION_CITY VARCHAR2 (60),
  LOCATION_SEATING_CAPACITY NUMBER (7),
  LOCATION_LEVELS NUMBER (1),
  LOCATION_SECTIONS NUMBER (4),
  MEMBER FUNCTION COMPARE_SEATING_CAPACITY(capacity in number) RETURN VARCHAR2
);
/
create or replace type body location_t is
  MEMBER FUNCTION COMPARE_SEATING_CAPACITY(capacity in number) RETURN VARCHAR2 is
    seat_capacity_1 number;
    seat_capacity_2 number;
  begin
    if ( LOCATION_SEATING_CAPACITY is null ) then
      seat_capacity_1 := 0;
    else
      seat_capacity_1 := LOCATION_SEATING_CAPACITY;
    end if;
    if ( capacity is null ) then
      seat_capacity_2 := 0;
    else
      seat_capacity_2 := capacity;
    end if;
    if seat_capacity_1 >= seat_capacity_2 then
      return 't';
    else
      return 'f';
    end if;
  end COMPARE_SEATING_CAPACITY;
end;
/

Oracle table DIM_SPORT_LOCATION_SEATS

The following code shows the DDL for the DIM_SPORT_LOCATION_SEATS table with UDT location_t in Oracle:

CREATE TABLE DIM_SPORT_LOCATION_SEATS (
  SPORT_LOCATION_SEAT_ID NUMBER NOT NULL,
  SPORT_LOCATION_ID NUMBER (3) NOT NULL,
  LOCATION location_t,
  SEAT_LEVEL NUMBER (1) NOT NULL,
  SEAT_SECTION VARCHAR2 (15) NOT NULL,
  SEAT_ROW VARCHAR2 (10 BYTE) NOT NULL,
  SEAT_NO VARCHAR2 (10 BYTE) NOT NULL,
  SEAT_TYPE VARCHAR2 (15 BYTE),
  SEAT_TYPE_DESCRIPTION VARCHAR2 (120 BYTE),
  RELATIVE_QUANTITY NUMBER (2)
);

Converting UDT

Let's start with the DDL conversion of location_t and the table DIM_SPORT_LOCATION_SEATS from Oracle to PostgreSQL. You can use the AWS SCT to convert your existing database schema from Oracle to PostgreSQL. Because the target PostgreSQL database doesn't support member functions in UDT, the AWS SCT ignores the member function during UDT conversion from Oracle to PostgreSQL. In PostgreSQL, we can create functions in PL/pgSQL with operators to achieve functionality similar to what Oracle UDT provides with member functions. For this sample dataset, we can convert location_t to PostgreSQL using the AWS SCT. The AWS SCT doesn't convert the DDL of member functions for location_t from Oracle to PostgreSQL. The following screenshot shows our SQL code.

PostgreSQL UDT location_t

The AWS SCT converts LOCATION_LEVELS and LOCATION_SECTIONS from the location_t UDT to SMALLINT for Postgres optimizations based on schema mapping rules. See the following code:

create TYPE location_t as (
  LOCATION_NAME CHARACTER VARYING(60),
  LOCATION_CITY CHARACTER VARYING(60),
  LOCATION_SEATING_CAPACITY INTEGER,
  LOCATION_LEVELS SMALLINT,
  LOCATION_SECTIONS SMALLINT
);

For more information about schema mappings, see Creating mapping rules in the AWS SCT. Because PostgreSQL doesn't support member functions in UDT, the AWS SCT ignores them while converting the DDL from Oracle to PostgreSQL. You need to write a PL/pgSQL function separately. In order to write a separate entity, you may need to add additional UDT object parameters to the member function. For our use case, the member function compare_seating_capacity is rewritten as a separate PL/pgSQL function. The return data type for this function is bool instead of varchar2 (in Oracle), because PostgreSQL provides a bool data type for true or false.
See the following code:

CREATE or REPLACE FUNCTION COMPARE_SEATING_CAPACITY (event_loc_1 location_t, event_loc_2 integer)
RETURNS bool AS $$
declare
  seat_capacity_1 integer;
  seat_capacity_2 integer;
begin
  if ( event_loc_1.LOCATION_SEATING_CAPACITY is null ) then
    seat_capacity_1 = 0;
  else
    seat_capacity_1 = event_loc_1.LOCATION_SEATING_CAPACITY;
  end if;
  if ( event_loc_2 is null ) then
    seat_capacity_2 = 0;
  else
    seat_capacity_2 = event_loc_2;
  end if;
  if seat_capacity_1 >= seat_capacity_2 then
    return true;
  else
    return false;
  end if;
end;
$$ LANGUAGE plpgsql;

The UDT conversion is complete, yielding the PL/pgSQL function and the UDT in PostgreSQL. You can now create the DDL for tables using this UDT in the PostgreSQL target database using the AWS SCT, as shown in the following screenshot. In the next section, we dive into migrating data from tables containing UDT from Oracle to PostgreSQL.

Migrating data from tables with UDT

In this section, we use the open-source tool Ora2pg to perform a full load of the DIM_SPORT_LOCATION_SEATS table with UDT from Oracle to PostgreSQL. To install and set up Ora2pg on an EC2 instance, see the Ora2pg installation guide. After installing Ora2pg, you can test connectivity with the Oracle source and PostgreSQL target databases.

To test the Oracle connection, see the following code:

-bash-4.2$ cd $ORACLE_HOME/network/admin
-bash-4.2$ echo "oratest=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = oratest.xxxxxxx.us-west-2.rds.amazonaws.com )(PORT =1526))(CONNECT_DATA =(SERVER = DEDICATED) (SERVICE_NAME = UDTTEST)))" >> tnsnames.ora
-bash-4.2$ sqlplus username/password@oratest
SQL*Plus: Release 11.2.0.4.0 Production on Fri Aug 7 05:05:35 2020
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
SQL>

To test the Aurora PostgreSQL connection, see the following code:

-bash-4.2$ psql -h pgtest.xxxxxxxx.us-west-2.rds.amazonaws.com -p 5436 -d postgres master
Password for user master:
psql (9.2.24, server 11.6)
WARNING: psql version 9.2, server version 11.0. Some psql features might not work.
SSL connection (cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256)
Type "help" for help.
postgres=>
TYPE – Provides an export type for data migration. For our use case, we’re migrating data to the target table using COPY statements. This parameter can only have a single value. For more information about the available options in Ora2pg to migrate data from Oracle to PostgreSQL, see the Ora2pg documentation. In the following code, we migrate the DIM_SPORT_LOCATION_SEATS table from Oracle to PostgreSQL using the configuration file created previously: -bash-4.2$ ora2pg -c ora2pg_for_copy.conf -d Ora2Pg version: 18.1 Trying to connect to database: dbi:Oracle:sid=oratest Isolation level: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE Retrieving table information... [1] Scanning table DIM_SPORT_LOCATION_SEATS (2 rows)... Trying to connect to database: dbi:Oracle:sid=oratest Isolation level: SET TRANSACTION ISOLATION LEVEL SERIALIZABLE Retrieving partitions information... Dropping indexes of table DIM_SPORT_LOCATION_SEATS... Looking how to retrieve data from DIM_SPORT_LOCATION_SEATS... Data type LOCATION_T is not native, searching on custom types. Found Type: LOCATION_T Looking inside custom type LOCATION_T to extract values... Fetching all data from DIM_SPORT_LOCATION_SEATS tuples... Dumping data from table DIM_SPORT_LOCATION_SEATS into PostgreSQL... Setting client_encoding to UTF8... Disabling synchronous commit when writing to PostgreSQL... DEBUG: Formatting bulk of 400 data for PostgreSQL. DEBUG: Creating output for 400 tuples DEBUG: Sending COPY bulk output directly to PostgreSQL backend Extracted records from table DIM_SPORT_LOCATION_SEATS: total_records = 2 (avg: 2 recs/sec) [========================>] 2/2 total rows (100.0%) - (1 sec., avg: 2 recs/sec). Restoring indexes of table DIM_SPORT_LOCATION_SEATS... Restarting sequences The data from the DIM_SPORT_LOCATION_SEATS table with UDT is now migrated to PostgreSQL. Setting search_path in PostgreSQL allows dms_sample to be the schema searched for objects referenced in SQL statements in this database session, without qualifying them with the schema name. See the following code: postgres=> set search_path=dms_sample; SET postgres=> select sport_location_seat_id,location,seat_level,seat_section,seat_row,seat_no from DIM_SPORT_LOCATION_SEATS; sport_location_seat_id | location | seat_level | seat_section | seat_row | seat_no ------------------------+----------------------------+------------+--------------+----------+--------- 1 | (Germany,Munich,75024,2,3) | 3 | S | 2 | S-8 1 | (Germany,Berlin,74475,2,3) | 3 | S | 2 | S-8 (2 rows) Querying UDT in PostgreSQL Now that both the DDL and data for the table DIM_SPORT_LOCATION_SEATS are migrated to PostgreSQL, we can query the UDT using the newly created PL/pgSQL functions. Querying Oracle with the UDT member function The following code is an example of a SQL query to determine if any stadiums in Germany have a seating capacity of more than 75,000 people. The dataset provides seating capacity information of stadiums in Berlin and Munich: SQL> select t.location.LOCATION_CITY CITY,t.LOCATION.COMPARE_SEATING_CAPACITY(75000) SEATS_AVAILABLE from DIM_SPORT_LOCATION_SEATS t where t.location.LOCATION_NAME='Germany'; CITY SEATS_AVAILABLE --------------------------------- ---------------- Munich t Berlin f The result of this SQL query shows that a stadium in Munich has sufficient seating capacity. However, the event location in Berlin doesn’t have enough seating capacity to host a sporting event of 75,000 people. 
Querying PG with the PL/pgSQL function The following code is the rewritten query in PostgreSQL, which uses the PL/pgSQL function COMPARE_SEATING_CAPACITY to show the same results: postgres=> select (location).LOCATION_CITY,COMPARE_SEATING_CAPACITY(location,75000) from DIM_SPORT_LOCATION_SEATS where (location).LOCATION_NAME='Germany'; location_city | compare_seating_capacity ---------------+-------------------------- Munich | t Berlin | f (2 rows) Using operators You can also use PostgreSQL operators to simplify the previous query. Every operator is a call to an underlying function. PostgreSQL provides a large number of built-in operators for system types. For example, the built-in integer = operator has the underlying function as int4eq(int,int) for two integers. You can invoke built-in operators using the operator name or its underlying function. The following queries get sport location IDs with only two levels using the = operator and its built-in function int4eq: postgres=> select sport_location_id,(location).location_levels from DIM_SPORT_LOCATION_SEATS where (location).location_levels = 2; sport_location_id | location_levels -------------------+----------------- 2 | 2 3 | 2 (2 rows) postgres=> select sport_location_id,(location).location_levels from DIM_SPORT_LOCATION_SEATS where int4eq((location).location_levels,2); sport_location_id | location_levels -------------------+----------------- 2 | 2 3 | 2 (2 rows) You can use operators to simplify the SQL query that finds stadiums in Germany with a seating capacity of more than 75,000 people. As shown in the following code, the operator >= takes the UDT location_t as the left argument and integer as the right argument to call the compare_seating_capacity function. The COMMUTATOR clause, if provided, names an operator that is the commutator of the operator being defined. Operator X is the commutator of operator Y if (a X b) equals (b Y a) for all possible input values of a and b. In this case, <= acts as commutator to the operator >=. It’s critical to provide commutator information for operators that are used in indexes and join clauses because this allows the query optimizer to flip such a clause for different plan types. CREATE OPERATOR >= ( LEFTARG = location_t, RIGHTARG = integer, PROCEDURE = COMPARE_SEATING_CAPACITY, COMMUTATOR = <= ); The following PostgreSQL query with an operator shows the same results as the Oracle query with the UDT member function: postgres=> select (location).LOCATION_CITY CITY,(location).LOCATION_SEATING_CAPACITY >=75000 from DIM_SPORT_LOCATION_SEATS where (location).LOCATION_NAME='Germany'; city | ?column? --------+---------- Munich | t Berlin | f (2 rows) You can also use the operator >= in the where clause with UDT location_t, just like any other comparison operator. With the help of the user-defined operator >= defined earlier, the SQL query takes the location_t data type as the left argument and integer as the right argument. The following SQL query returns cities in Germany where seating capacity is more than 75,000. postgres=> select (location).LOCATION_CITY from DIM_SPORT_LOCATION_SEATS where (location).LOCATION_NAME='Germany' and location >=75000; location_city --------------- Munich (1 row) Conclusion This post showed you a solution to convert and migrate UDT with member functions from Oracle to PostgreSQL and how to use operators in queries with UDT in PostgreSQL. We hope that you find this post helpful. 
For more information about moving your Oracle workload to Amazon RDS for PostgreSQL or Aurora PostgreSQL, see Oracle Database 11g/12c To Amazon Aurora with PostgreSQL Compatibility (9.6.x) Migration Playbook. As always, AWS welcomes feedback. If you have any comments or questions on this post, please share them in the comments. About the Authors Manuj Malik is a Senior Data Lab Solutions Architect at Amazon Web Services. Manuj helps customers architect and build databases and data analytics solutions to accelerate their path to production as part of AWS Data Lab. He has an expertise in database migration projects and works with customers to provide guidance and technical assistance on database services, helping them improve the value of their solutions when using AWS. Devika Singh is a Solutions Architect at Amazon Web Services. Devika has expertise in database migrations to AWS and as part of AWS Data Lab, works with customers to design and build solutions in databases, data and analytics platforms. https://aws.amazon.com/blogs/database/migrating-user-defined-types-from-oracle-to-postgresql/
0 notes