#how to check latest cpu patch applied in oracle
ocptechnology · 3 years ago
Check the Patch Applied to the Oracle Database
In Oracle Database we have multiple ways to find out whether a patch has been applied. Here I give all the ways to get the details: how to check whether a patch is applied in Oracle, the SQL query to find patches applied in Oracle 11g, and how to know if a patch is applied in Oracle 19c. Check the applied patch list: here we have two ways to find the patch list and patch list with… #oracle #oracledba #databaseadmin #oracledatabase
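A minimal sketch of the kind of check the post describes (assuming a 12c or later database, where the DBA_REGISTRY_SQLPATCH view exists; on 11g and earlier the REGISTRY$HISTORY view plays a similar role), pulling the applied-patch list either from OPatch or from SQL*Plus:
$ $ORACLE_HOME/OPatch/opatch lsinventory
$ sqlplus / as sysdba
SQL> -- 12c/19c: one row per patch applied by datapatch
SQL> SELECT patch_id, action, status, action_time, description FROM dba_registry_sqlpatch ORDER BY action_time;
SQL> -- 11g and earlier: CPU/PSU history rows
SQL> SELECT action_time, action, version, id, comments FROM registry$history ORDER BY action_time;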
michaelok · 6 years ago
Fun with Java Deserialization
Down the Rabbit Hole
I’ve just been scrambling down the rabbit hole to patch an app that Qualys is complaining has a deserialization vulnerability. What should have been a simple effort has turned into a mystery because, while we appear to have the correct libraries already in place, Qualys is still complaining about the error. A report that should be clean, to indicate compliance with GDPR, is instead “yellow”, or “orange”, or “red”, so fingers point, tempers flare, e-mails fly about, cc’ing higher and higher ups, so now we have assumptions, and based on those assumptions, tersely written orders, involvement by 3rd party vendors. Time to panic? Shall we be careful and tip-toe through the eggs?[0]
Well, it turns out to be a rather interesting mystery.
What is Java serialization?
First, some definitions are in order. What is Java serialization and why is it important? Perhaps Wikipedia[1] defines it most simply:
A method for transferring data through the wires
Java serialization is a mechanism to store an object in a non-object form, i.e. a flat, serial stream rather than an object, so that it can be easily sent somewhere, such as to a filesystem, for example. It is also known as “marshaling”, “pickling”, “freezing” or “flattening”. Java programmers should be familiar with the concept, and with the Serializable interface, since it is required in various situations. For example, this technique is used for Oracle Coherence’s “Portable Object Format” to improve performance and support language independence.
Early Days of Java Serialization
Amazing to think that, back in the day, we used all the various tools required for distributed communication, whether simple like RMI and JMX, or more involved specs like CORBA and EJB, and we never thought much about the security aspects. I’m sure if I peruse my copy of Henning and Vinoski’s definitive work on C++ and CORBA, I’ll find a chapter or so focusing on security[1], but I figure we, like everyone else, focused on the business details (getting the apps to communicate reliably, adding features, improving stability, etc.), and not on whether there were any security holes, such as tricking a server into running cryptocurrency mining malware[2]. Yes, Bitcoin and the like did not even exist then.
The Biggest Wave of Remote Execution Bugs in History
Well, times change, and the twenty-year-old Java deserialization capability is the source of “nearly half of the vulnerabilities that have been patched in the JDK in the last 2 years” [3], so Oracle has plans in the works to completely revamp object serialization. Further note that this is not solely Oracle’s issue, nor is it limited to Java. Many other software vendors, and open source projects, whether tools or languages, have this weakness, such as Apache Commons Collections, Google Guava, Groovy, Jackson, and Spring.
It seems all the excitement, at least in the Java world, started when Chris Frohoff and Gabriel Lawrence presented their research on Java serialization, “ultimately resulting in what can be readily described as the biggest wave of remote code execution bugs in Java history.” [6] However, it is important to note that this flaw is not limited to Java. While Frohoff and Lawrence focused on Java deserialization, Moritz Bechler wrote a paper that focuses on various Java open-source marshalling libraries:
Research into that matter indicated that these vulnerabilities are not exclusive to mechanisms as expressive as Java serialization or XStream, but some could possibly be applied to other mechanisms as well.
I think Moritz describes the heart of the issue the best:
Giving an attacker the opportunity to specify an arbitrary type to unmarshal into enables him to invoke a certain set of methods on an object of that type. Clearly the expectation is that these will be well-behaved – what could possibly go wrong?
Java deserialization
For our purposes, we focused on Java serialization and Apache Commons Collections. From the bug report COLLECTIONS-580[4]:
With InvokerTransformer serializable collections can be build that execute arbitrary Java code. sun.reflect.annotation.AnnotationInvocationHandler#readObject invokes #entrySet and #get on a deserialized collection.
If you have an endpoint that accepts serialized Java objects (JMX, RMI, remote EJB, …) you can combine the two to create arbitrary remote code execution vulnerability.
The Qualys report didn’t have much in the way of details, other than a port and the commons-collections payloads that illustrated the vulnerability, but I guessed from that info that the scanner simply reuses the work done by the original researchers (Frohoff and Lawrence) [5] who discovered the flaw, available as the ysoserial project below.
https://www.youtube.com/watch?v=KSA7vUkXGSg
Source code here: https://github.com/frohoff/ysoserial
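For context, this is roughly how such a test payload gets generated and delivered; the gadget name, command and port below are illustrative assumptions (a recent ysoserial build exposes payload names such as CommonsCollections5), not what Qualys actually runs:
$ java -jar ysoserial-all.jar CommonsCollections5 'touch /tmp/pwned' > payload.bin
$ # deliver the serialized payload to the listening endpoint, e.g. an RMI/JMX port
$ nc target-host 1099 < payload.bin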
Now, in the flurry of trying to fix this error, given the annoyingly vague details from Qualys, I had looked at all sorts of issues, after noticing a few extra JVM arguments in the Tomcat configuration that happened to be set for the instances that were failing with this error, but were not set on other instances. Apparently someone had decided to add these, without informing our team. Interesting.
Now, remember that according to the original bug report, this exploit requires (1) untrusted deserialization and (2) some way to send a payload, i.e. something listening on a port, such as a JMX service. In fact, these extra JVM args were for supporting remote access via JMX, so, unraveling the thread, I researched Tomcat 8 vulnerabilities, especially those related to JMX. While it turns out that JMX is a weak point (JBoss in particular had quite a well-known major JMX flaw), I didn’t have any luck convincing the customer that they should shut down the port. It is used to gather monitoring metrics useful in determining application performance, such as CPU load, memory, and even cache information. Ok, easy but drastic solutions were off the table. I was back to the drawing board.
Next, I tried to see why it was flagging Apache Commons Collections in the first place. Going back to the ysoserial project, was it incorrectly flagging Apache Commons Collections 3.2.2, or Collections4-4.1, despite the fact that those libs were fixed? Looking further at the specific payloads, Qualys/ysoserial was complaining about Collections 3.2.1, which narrowed down the test scenarios I needed to get working.
Now here’s the interesting part: with ysoserial, I was unable to get the exploit to work as depicted in the Marshalling Pickles video. It was failing with a strange error I hadn’t seen before, something about a filter setting a “rejected” status. This led me to information about Oracle’s critical patch update (_121). I was running with the latest and greatest JDK, release _192; however, our production servers were running a very out-of-date version - surprise, surprise.
Apparently, with the Oracle JDK at release 121 or later, Oracle has started to address this vulnerability in an official way, rather than what existed before: a bunch of ad-hoc solutions, mainly whitelisting/blacklisting, which is difficult without library support. Some would call this ‘whack-a-mole’, but I think it illustrates quite well the idea of a “patch”, i.e. there’s a leak, so run over and put some tape over it, but we aren’t solving the fundamental issue. In other words, the current defense against this attack is limited because we can’t possibly know what libraries customers will use, so the library maintainer has to scramble to plug the holes whenever they are discovered. Note that even the best of libraries like Groovy, Apache Commons and Spring have had to fix this flaw.
So kudos to Oracle for taking some much needed steps in solving this problem. Here’s a little detail on the new feature that works to make the deserialization process more secure:
The core mechanism of deserialization filtering is based on an ObjectInputFilter interface which provides a configuration capability so that incoming data streams can be validated during the deserialization process. The status check on the incoming stream is determined by Status.ALLOWED, Status.REJECTED, or Status.UNDECIDED arguments of an enum type within ObjectInputFilter interface.
https://access.redhat.com/blogs/766093/posts/3135411
While it is the “official” way to deal with the deserialization issue, it remains to be seen how well this strategy will work. As a further research project, I’m curious whether this model might be used beyond Java serialization, i.e. in projects like Jackson. Does it add anything more than Jackson already has, or does it simplify things?
This feature is targeted for Java 9, but was backported to 8, though it looks like it doesn’t have all the functionality that Java 9 supports.
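A rough sketch of switching the backported filter on without code changes; the jdk.serialFilter system property is the documented JEP 290 hook from 8u121 onward, but the pattern value below is only an illustrative assumption, not a vetted policy:
$ # reject the Commons Collections gadget classes and cap object-graph depth, process-wide
$ java -Djdk.serialFilter='!org.apache.commons.collections.functors.*;maxdepth=20' -jar myapp.jar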
So you are probably wondering what happened? Did we fix all of the above, and even throw in a Tomcat upgrade, like the Monty Python “Meaning of Life” movie: “everything, with a cherry on top!”? Well, finally, given a little guidance on where to look, we turned to the 3rd party developers, and it turned out that not only had they added the JVM args, they had also added some extra code to handle the authentication. Which used - you guessed it - the _old_ 3.2.1 version of commons-collections. This code was also manually maintained, so while the app our team maintained received the updated commons jar in an automated fashion along with all the other updates, this little bit of code, tucked away on the server, was never updated.
Lessons learned? Off-the-wall custom authentication? Don’t do this. But if you do, don’t leave manually updated chunks of code lying around, and further, keep up with the patches!
[0] Yes, I’m reading William Finnegan’s “Barbarian Days: The Surfing Life”, Finnegan’s hilarious and fascinating account of being a surfer in the early days of the sport. At one point, he complains to his friend and fellow surfer, who is getting on his nerves, that he is tired of walking on eggs around him. Of course, in his anger, he mixed up the quote, and meant “walking on eggshells”.
[1] https://en.wikipedia.org/wiki/Serialization
[2] A quick look at the omniORB doc shows it has a feature called the “Dynamic Invocation Interface…Thus using the DII applications may invoke operations on any CORBA object, possibly determining the object’s interface dynamically by using an Interface Repository.” Sounds like reflection doesn’t it? I’m not aware of any specific vulnerabilities, but it does seem we’ve traded a bit of the security that invoking statically-compiled objects brings for convenience.
https://www.cl.cam.ac.uk/research/dtg/attarchive/omniORB/doc/3.0/omniORB/omniORB011.html
[3] https://www.siliconrepublic.com/enterprise/cryptocurrency-malware-monero-secureworks
The Java Object Serialization Specification references a good set of guidelines on how to mitigate the vulnerability:
https://www.oracle.com/technetwork/java/seccodeguide-139067.html#8
[4] https://www.securityinfowatch.com/cybersecurity/information-security/article/12420169/oracle-plans-to-end-java-serialization-but-thats-not-the-end-of-the-story
[5] https://issues.apache.org/jira/browse/COLLECTIONS-580
[6] Which seems to be a pretty standard strategy these days, i.e. proprietary companies like Qualys leveraging open source and adding it to their toolset. AWS does this to great effect, and we, as the consumer, benefit by getting simple interfaces. However, we should not forget that much of the code we use today is open-source software, in some way or another.
Stratechery, as usual, has a very thoughtful post about this very idea:
It’s hard to not be sympathetic to MongoDB Inc. and Redis Labs: both spent a lot of money and effort building their products, and now Amazon is making money off of them. But that’s the thing: Amazon isn’t making money by selling software, they are making money by providing a service that enterprises value, and both MongoDB and Redis are popular in large part because they were open source to begin with.
[snip]
That, though, should give pause to AWS, Microsoft, and Google. It is hard to imagine them ever paying for open source software, but at the same time, writing (public-facing) software isn’t necessarily the core competency of their cloud businesses. They too have benefited from open-source companies: they provide the means by which their performance, scalability, and availability are realized. Right now everyone is winning: simply following economic realities could, in the long run, mean everyone is worse off.
https://stratechery.com/2019/aws-mongodb-and-the-economic-realities-of-open-source/
[7] https://www.github.com/mbechler/marshalsec/blob/master/marshalsec.pdf?raw=true
[8] https://medium.com/@cowtowncoder/on-jackson-cves-dont-panic-here-is-what-you-need-to-know-54cd0d6e8062
youngprogrammersclub · 6 years ago
How to Apply Critical Patch Update (CPU) on RAC
Patch Details: Patch “CPUJULY2012”. CPU patches are released 4 times a year (JAN/APR/JUL/OCT).
Version & Platform: 10.2.0.4 Database on Linux

Overview:
A Critical Patch Update (CPU) is a collection of patches for multiple security vulnerabilities. CPU patches are usually cumulative, which means fixes from previous Oracle security alerts and critical patch updates are included in the current patch. However, each advisory describes only the security fixes added since the previous critical patch update advisory, so it is not required to have previous security patches applied before applying the latest one. You must, however, be on the stated patch set level for a given product home before applying the latest patches for that release. See the information about the latest CPU patch.
Note: Do not forget to read the README file from the patch before starting the actual work.
–        If this is a RAC environment, select one of the patch installation methods provided by OPatch (rolling, all node, or minimum downtime), and shut down instances and listeners as appropriate for the installation method selected. This CPU patch is rolling RAC installable.
–        If this is not a RAC environment, shut down all instances and listeners associated with the Oracle home that you are updating.

Repeat the steps for each node of the RAC:
– Shut down the Oracle instance and services or listener on node 1
– Apply the patch on node 1
– Start the Oracle instance on node 1
– Now shut down the Oracle instance on node 2
– Apply the patch on node 2
– Start the Oracle instance on node 2
– Repeat this process as per the number of nodes.

1. Download the required CPU patchset from Metalink.
2. Change the owner of the patch file to the oracle user.
# chown -R oracle
3. Set the PATH variable to locate the opatch utility.
$ export PATH=$PATH:$ORACLE_HOME/OPatch
4. Unzip the patch and go to the unzipped directory.
$ unzip "patchfile.zip"
5. Find and invoke the OPatch version.
$ opatch version
OPatch Version: 10.2.0.4.2
$ opatch lsinventory
If you want to check whether the CPU patch supports rolling installation, go to the patch directory:
$ cd /oracle/??    # patch directory
$ opatch query -all
6. Back up the oraInventory and OPatch directories.
$ cp -R oraInventory old_oraInventory
$ cp -R opatch old_opatch
7. If you are applying on RAC: stop the instance/ASM/nodeapps of node 1.
$ srvctl stop instance -d racdb -i racdb1
$ srvctl stop asm -n rac1
$ srvctl stop nodeapps -n rac1
8. Go to the patch directory and invoke opatch apply.
$ cd ??    # patch directory
$ opatch apply
or
$ opatch napply -skip_subset -skip_duplicate
9. Verify the patches are applied.
$ opatch lsinventory -detail -oh $ORACLE_HOME
10. Now start node 1 and repeat steps 7 to 9 for node 2.
11. Verify the patches are applied.
$ opatch lsinventory -detail -oh $ORACLE_HOME
12. Start the instance/ASM/nodeapps of node 2.
$ srvctl start instance -d racdb -i racdb2
$ srvctl start asm -n rac2
$ srvctl start nodeapps -n rac2
$ crs_stat -t

Post CPU Patch Steps:
For each database instance running on the Oracle home being patched, connect to the database using SQL*Plus on each node. Connect as SYSDBA and run the catbundle.sql script.
On node1/node2:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP
SQL> @catbundle.sql cpu apply
SQL> @utlrp.sql
The catbundle.sql execution is reflected in the dba_registry_history view by a row associated with bundle series CPU.
Check the following log files for any errors: catbundle_CPU__APPLY_.log and catbundle_CPU__GENERATE_.log under $ORACLE_HOME/cfgtoollogs/catbundle.

Recompiling Views in the Database
If you already applied this during a previous CPU installation then you can skip this section. To check whether view recompilation has already been performed:
SQL> SELECT * FROM registry$history WHERE id = '6452863';
If the view recompilation has been performed, this statement returns one or more rows. If the view recompilation has not been performed, this statement returns no rows.
If no rows are returned, then go through the following steps for a RAC environment:
1. Run the pre-check script, which reports the maximum number of views and objects that may be recompiled:
cd $ORACLE_HOME/cpu/view_recompile
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @recompile_precheck_jan2010cpu.sql
Note: If the database is in a RAC environment then perform both steps; otherwise (single instance db), perform only step 1 and skip step 2.
cd $ORACLE_HOME/cpu/view_recompile
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP UPGRADE;
SQL> @view_recompile_jan2010cpu.sql
SQL> SHUTDOWN;
SQL> STARTUP;
2. Stop all instances except the one where the view recompilation is being executed.
cd $ORACLE_HOME/cpu/view_recompile
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> STARTUP NOMOUNT;
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=spfile;
SQL> SHUTDOWN;
SQL> STARTUP UPGRADE;
SQL> @?/cpu/view_recompile/view_recompile_jan2008cpu.sql
SQL> SHUTDOWN;
SQL> STARTUP NOMOUNT;
Set the CLUSTER_DATABASE initialization parameter back to TRUE:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=TRUE SCOPE=spfile;
Restart the database:
SQL> QUIT
cd $CRS_HOME/bin
srvctl start database -d racdb
If any invalid objects were reported, run the utlrp.sql script:
cd $ORACLE_HOME/rdbms/admin
sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> @utlrp.sql
Then, manually recompile any invalid objects:
SQL> ALTER PACKAGE schemaname.packagename COMPILE;
Verify the patches are applied:
$ opatch lsinventory -detail -oh $CRS_HOME       # if you have a CRS_HOME
$ opatch lsinventory -detail -oh $ORACLE_HOME    # if you have both, also the ORACLE_HOME
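The post points out that the catbundle.sql run is recorded in dba_registry_history. As a minimal sketch of that confirmation (the CPU apply should show up as a row associated with bundle series CPU; drop the BUNDLE_SERIES column from the select if your release does not have it):
SQL> SELECT action_time, action, version, id, bundle_series, comments FROM dba_registry_history ORDER BY action_time;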
otterhackerxyz · 6 years ago
DBA interview Question and Answer part 23
What is the basic difference between V$ and GV$ views, or between V$ and V_$ views?
The V_$ views (V$ is the public synonym for the V_$ views) are called dynamic performance views. They are continuously updated while a database is open and in use, and their contents relate primarily to performance.
SQL> SELECT owner, object_name, object_type FROM dba_objects WHERE object_name LIKE 'V%' AND object_name LIKE '%SESSION';
OWNER  OBJECT_NAME          OBJECT_TYPE
-----  -------------------  -----------
SYS    V_$HS_SESSION        VIEW
SYS    V_$LOGMNR_SESSION    VIEW
SYS    V_$PX_SESSION        VIEW
SYS    V_$SESSION           VIEW
GV$ views, on the other hand, are called global dynamic performance views and retrieve information about all started instances accessing one RAC database, in contrast with the dynamic performance views, which retrieve information about the local instance only. The GV$ views have the additional column INST_ID, which indicates the instance in a RAC environment. GV$ views use a special form of parallel execution: the parallel execution coordinator runs on the instance that the client connects to, and one slave is allocated in each instance to query the underlying V$ view for that instance.

What is the purpose of the default tablespace in an Oracle database?
Each user should have a default tablespace. When a user creates a schema object and specifies no tablespace to contain it, the Oracle database stores the object in the user's default tablespace. The default setting for the default tablespace of all users is the SYSTEM tablespace. If a user is likely to create any type of object, you should specify and assign the user a default tablespace.
Note: Using a tablespace other than SYSTEM reduces contention between data dictionary objects and user objects for the same data files. Thus it is not advisable for user data to be stored in the SYSTEM tablespace.
SQL> SELECT username, default_tablespace FROM dba_users WHERE username = 'EDSS';
SQL> ALTER USER EDSS DEFAULT TABLESPACE XYZ;
SQL> SELECT username, default_tablespace FROM dba_users WHERE username = 'EDSS';
Once you change the default tablespace for a user, the previous/existing objects stay where they are. Assuming you never specified a tablespace when you created the objects and let them use the user's default tablespace, the objects stay stored in the previous tablespace (tablespace A) and new objects will be created in the new default tablespace (tablespace B). Like in the example above, the objects for EDSS stay in the ORAJDA_DB tablespace and any new object will be stored in the ORAJDA_DB1 tablespace.

What is the Identity Columns feature in Oracle 12c?
Before Oracle 12c there was no direct equivalent of AutoNumber or Identity functionality; when it was needed, it was implemented using a combination of sequences and triggers. Oracle Database 12c introduces the ability to define an identity clause for a table column defined using a numeric type:
GENERATED [ ALWAYS | BY DEFAULT [ ON NULL ] ] AS IDENTITY
Using the ALWAYS keyword forces the use of the identity. Using BY DEFAULT allows you to use the identity if the column isn't referenced in the insert statement. Using BY DEFAULT ON NULL allows the identity to be used even when the identity column is referenced and a NULL value is specified.
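To make the three variants concrete, a minimal sketch (the table and column names are invented for illustration; needs 12c or later):
SQL> CREATE TABLE t_always  (id NUMBER GENERATED ALWAYS AS IDENTITY, val VARCHAR2(30));
SQL> CREATE TABLE t_default (id NUMBER GENERATED BY DEFAULT AS IDENTITY, val VARCHAR2(30));
SQL> CREATE TABLE t_on_null (id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY, val VARCHAR2(30));
SQL> INSERT INTO t_always (val) VALUES ('ok');              -- id is generated
SQL> INSERT INTO t_always (id, val) VALUES (99, 'fails');   -- raises ORA-32795
SQL> INSERT INTO t_default (id, val) VALUES (99, 'ok');     -- the explicit value is used
SQL> INSERT INTO t_on_null (id, val) VALUES (NULL, 'ok');   -- NULL still gets an identity value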
How do you find out which user truncated a table?
If you have already configured LogMiner for your database then there is nothing more to do: you can query the v$logmnr_contents view and find the list. Otherwise you need to do a few more steps to configure it first for your database.

Why use a materialized view instead of a table?
Materialized views are basically used to increase query performance, since they contain the results of a query. They should be used for reporting instead of a table for faster execution.

How does a session communicate with the server process?
Server processes execute SQL received from user processes.

Which SGA memory structure cannot be re-sized dynamically after instance startup?
The log buffer.

Which activity will generate less UNDO data?
Insert.

What happens when a user issues a COMMIT?
The LGWR flushes the log buffer to the online redo log.

When does the SMON process perform instance crash recovery (ICR)?
Only at the time of startup after an abort shutdown.

What is the purpose of a synonym in Oracle?
A synonym permits an application to function without modification regardless of which user owns the table or view, or regardless of which database holds the table or view. It masks the real name and owner of an object and provides location transparency for tables, views or program units of a remote database.
CREATE SYNONYM pay_payment_master FOR HRMS.pay_payment_master;
CREATE PUBLIC SYNONYM pay_payment_master FOR [email protected];

How many memory layers are in the shared pool?
The shared pool of the SGA has three layers: the library cache, which contains parsed SQL statements, cursor information, execution plans, etc.; the dictionary cache, which caches user account information, privilege information, and datafile, segment and extent information; and the buffers for parallel execution messages and control structures.

What is the cache hit ratio, and what impact does it have on performance?
It calculates how often a requested block has been found in the buffer cache without requiring disk access. This ratio is computed using the view V$SYSSTAT. The buffer cache hit ratio can be used to verify the physical I/O as predicted by V$DB_CACHE_ADVICE.
SQL> SELECT name, value FROM v$sysstat WHERE name IN ('db block gets', 'consistent gets', 'physical reads');
The cache-hit ratio can be calculated as follows: Hit ratio = 1 - (physical reads / (db block gets + consistent gets)). If the cache-hit ratio goes below 90% then increase the initialization parameter DB_CACHE_SIZE.
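A worked version of that formula against V$SYSSTAT (a sketch; the statistic names are the standard ones used in the formula above, but treat the result as a rough indicator rather than a hard target):
SQL> SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
  2  FROM v$sysstat phy, v$sysstat db, v$sysstat con
  3  WHERE phy.name = 'physical reads'
  4    AND db.name  = 'db block gets'
  5    AND con.name = 'consistent gets';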
Which environment variables are critical to run the OUI?
ORACLE_BASE, ORACLE_HOME and ORACLE_SID.

What is the Cluster Verification Utility in a RAC environment?
The Cluster Verification Utility (CVU) is a validation tool that you can use to check all the important components that need to be verified at different stages of deployment in a RAC environment.

How do you identify the voting disk in a RAC environment, and why is it always an odd number?
As we know, every node is interconnected with the others and pings the voting disks in the cluster to check whether they are alive. If the voting disks are an even count, both nodes can be survivor nodes and multiple "brains" are created in the same cluster. If it is an odd number, only one node can ping the greater count of voting disks and the cluster can be saved from split-brain syndrome. You can identify the voting disks by using the command line below:
# crsctl query css votedisk

What are the components of the physical database structure? What is the use of the control files?
An Oracle database consists of three main categories of files: one or more datafiles, two or more redo log files, and one or more control files. When an instance of an Oracle database is started, its control file is used to identify the database and the redo log files that must be opened for database operation to proceed. It is also used in database recovery.

What is the difference between database refreshing and cloning?
DB refreshing means the data in the target environment has been synchronized with a copy of production. This can be done by restoring a backup of the production database, whereas cloning means that an identical copy of production has been taken and restored to the target environment.

When do we need to clone or refresh the database? There are a couple of scenarios when cloning should be performed:
1. Creating a new environment with the same or a different DBNAME.
2. Sometimes we need to apply patches or other major configuration changes, so a copy of the environment is needed to test the effect of the change.
3. Normally in a software development environment, before any major development effort takes place, it is always good to re-clone the dev and test environments to keep the environments in sync.
A refresh is needed only when you are sure the environments are already in sync and you need to apply only the latest data changes.

What is the OERR utility?
The OERR (Oracle Error) utility is provided only with Oracle databases on UNIX platforms. OERR is not an executable but a shell script that retrieves messages from the installed message files. OERR is an Oracle utility that extracts error messages, with suggested actions, from the standard Oracle message files. This utility is very useful as it can extract OS-specific errors that are not in the generic Error Messages and Codes Manual.

What do you mean by logfile mirroring?
The process of having a copy of a redo log file is called mirroring. It is done by grouping log files together as members of a group. This ensures that LGWR automatically writes to all the members of the current online redo log group. In case a group fails, the database automatically switches over to the next group. It diminishes performance.

What is the use of the large pool? In which cases do you need to use the large pool?
You need to set the large pool if you are using the multi-threaded server (MTS) and RMAN backups. It prevents RMAN and the MTS server from competing with other subsystems for the same memory. RMAN uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameters to simulate asynchronous I/O. If neither of these parameters is enabled, then Oracle allocates backup buffers from local process memory rather than shared memory, and there is no use for the large pool.

What will be your first steps if you get the message that the application is running slow?
Gather the statistics (Statspack, AWR) report to find the top 5 wait events, or run a top command in Linux to see CPU usage. Later run the VMSTAT, SAR and PRSTAT commands to get more information on CPU and memory usage and possible blocking. If there are poorly written statements, run EXPLAIN PLAN on those statements and see whether a new index or the use of a HINT brings the cost of the SQL down.

How do you add more or subsequent block size specifications?
Re-create the CONTROLFILE to specify the new BLOCK SIZE for specific data files, or take the database OFFLINE and bring it back online with a new BLOCK SIZE specification.
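As a small illustration of the EXPLAIN PLAN step mentioned above (the table and predicate are made-up examples; DBMS_XPLAN.DISPLAY reads the plan back from the default PLAN_TABLE):
SQL> EXPLAIN PLAN FOR SELECT * FROM hr.employees WHERE department_id = 50;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);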