# OCI Web Application Firewall
bizionictechnologies · 8 days ago
OCI Web Application Firewall: Detroit’s Top Choice?
OCI Web Application Firewall stands out as Detroit’s smartest security choice—discover how it compares to AWS & Azure in this expert guide by Bizionic Tech. Don’t miss the chance to safeguard your digital assets—read now and make the right decision for your business!
dglitservices · 6 months ago
Oracle Cloud Infrastructure vs. Competitors: What Makes It Stand Out?
In the fast-evolving world of cloud computing, choosing the right infrastructure is critical for driving business efficiency, scalability, and innovation. Oracle Cloud Infrastructure (OCI) has emerged as a formidable player in the market, challenging industry giants like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). But what sets Oracle Cloud Solutions apart from its competitors? In this article, we take an in-depth look at the unique features, benefits, and real-world applications of OCI that make it a standout choice for enterprises.
1. Superior Performance and Cost Efficiency
One of the key differentiators of Oracle Cloud Solutions is its focus on delivering exceptional performance while maintaining cost efficiency.
Key Advantages:
High Performance: OCI’s high-speed networking and bare-metal instances provide superior processing power, ideal for data-intensive workloads like ERP and financial management.
Transparent Pricing: Oracle offers a straightforward pricing model without hidden fees, making it more cost-effective compared to competitors like AWS and Azure.
Predictable Costs: With consistent pricing across all global regions, businesses can accurately budget for cloud expenses.
Real-World Application:
An e-commerce company using OCI for ERP systems can handle peak traffic efficiently during sales seasons without incurring unexpected costs.
2. Advanced Security Features
Security is a top priority for enterprises migrating to the cloud, and Oracle Cloud Solutions excels in providing robust security measures.
Key Features:
Built-In Security: OCI includes advanced security controls like encryption, firewalls, and identity management as part of its core architecture.
Zero Trust Architecture: Oracle employs a zero trust model, ensuring strict access control and data protection.
Comprehensive Compliance: OCI adheres to global regulatory standards such as GDPR, HIPAA, and SOC.
Real-World Application:
A financial institution migrating sensitive customer data to OCI benefits from robust encryption and compliance with financial regulations, ensuring data integrity and customer trust.
3. Seamless Integration with Oracle Applications
For enterprises already using Oracle’s suite of applications, OCI offers unparalleled integration capabilities.
Key Benefits:
Optimised for Oracle Workloads: OCI is specifically designed to run Oracle’s flagship applications, including Oracle ERP Cloud, Oracle Database, and Oracle Financials, with optimal performance.
Unified Ecosystem: Seamlessly integrate enterprise software with cloud infrastructure for a cohesive IT environment.
Migration Tools: Oracle’s tools simplify the migration of on-premises workloads to the cloud.
Real-World Application:
A manufacturing company running Oracle E-Business Suite on OCI can experience faster performance, reduced latency, and streamlined operations.
4. Innovative Technologies and Automation
Oracle Cloud Infrastructure leverages cutting-edge technologies to enhance efficiency and simplify operations.
Key Innovations:
AI and ML Integration: Built-in AI tools enable predictive analytics, process automation, and personalised user experiences.
Automation: OCI simplifies routine tasks like patch management, backups, and monitoring, reducing manual effort.
Autonomous Database: Oracle’s Autonomous Database automatically optimises performance, scales resources, and applies updates without human intervention.
Real-World Application:
A healthcare organisation uses Oracle’s Autonomous Database to analyse patient data in real time, improving decision-making and patient outcomes.
5. Flexibility and Hybrid Cloud Support
Oracle Cloud Solutions stand out for their flexibility, offering hybrid cloud capabilities to meet diverse business needs.
Key Features:
Hybrid Cloud: OCI supports hybrid environments, allowing businesses to run workloads seamlessly across on-premises and cloud infrastructure.
Multi-Cloud Interoperability: Collaborations with Microsoft Azure enable enterprises to integrate OCI with Azure services, ensuring interoperability.
Customisation: Tailor OCI configurations to match specific workload requirements.
Real-World Application:
An energy company utilising hybrid cloud capabilities can keep critical systems on-premises for security while leveraging OCI for scalability.
6. Sustainability and Green IT Practices
As businesses aim to reduce their carbon footprint, Oracle Cloud Solutions lead the way in sustainability.
Key Advantages:
Energy-Efficient Data Centres: OCI’s infrastructure is designed to minimise energy consumption.
Carbon-Neutral Cloud: Oracle is committed to achieving carbon neutrality by 2025, aligning with global sustainability goals.
Paperless Operations: Digital tools reduce reliance on physical resources, supporting eco-friendly practices.
Real-World Application:
A logistics company adopting OCI can digitise supply chain processes, reducing paper usage and improving environmental sustainability.
7. Global Reach and Reliability
With a network of data centres across the globe, Oracle Cloud Infrastructure ensures reliable and low-latency services.
Key Benefits:
Global Data Centres: Oracle operates in 41 regions worldwide, with plans for further expansion.
High Availability: OCI provides robust disaster recovery and backup options to minimise downtime.
Consistent Performance: Enterprises benefit from fast and reliable services, regardless of location.
Real-World Application:
A multinational corporation uses OCI’s global data centres to deliver seamless services to customers across different continents.
Conclusion
Oracle Cloud Infrastructure stands out among competitors for its performance, security, cost efficiency, and innovative technologies. Whether it’s optimising ERP systems, integrating advanced AI tools, or supporting hybrid cloud strategies, OCI empowers businesses to achieve their digital transformation goals.
Curious about how Oracle Cloud Solutions can revolutionise your business? Let Denova Glosoft Limited help you explore tailored solutions to meet your unique needs. Contact us today to get started!
lencykorien · 2 years ago
Demystifying OCI’s Virtual Cloud Network: A Deep Dive into VCN Architecture (Part 1)
A Virtual Cloud Network (VCN) is the fundamental building block for networking in Oracle Cloud Infrastructure (OCI). It can be thought of as a virtual version of a traditional network that you’d operate in your own data center. 
The benefits of using a VCN include:
Isolation – VCNs provide complete isolation from other virtual networks in the cloud. This gives you full control over your network environment.
Security – VCNs give you control over security through security lists and network security groups. You can restrict access within subnets as well as between subnets.
Customization – VCNs allow you to fully customize the network environment. You can define subnets, route tables, gateways, and other components to meet your specific needs.
The key components that make up a VCN include:
Subnets – A subnet is a subdivision of a VCN that allows you to group related resources together. Subnets can be either public or private.
Route tables – Route tables control the flow of traffic out of a subnet. They specify the destinations that traffic can be routed to.
Security lists – Security lists act as virtual firewalls that control ingress and egress traffic at the subnet level.
Gateways – Gateways connect your VCN to external networks or other VCNs. Common gateways are internet gateways, NAT gateways, service gateways, and peering gateways.
Network security groups – NSGs provide instance-level (VNIC-level) security through stateful firewall rules, complementing subnet-level security lists.
By leveraging VCNs and their components, you can create a secure, robust, and customizable network environment tailored to your application and use case requirements.
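As a concrete illustration of the last component, here is a minimal Terraform sketch (OCI provider) of a network security group with a single ingress rule. The compartment and VCN references are illustrative and assumed to exist elsewhere in your configuration:

```hcl
# NSG attached to an existing VCN (names are illustrative)
resource "oci_core_network_security_group" "app_nsg" {
  compartment_id = oci_identity_compartment.demo.id
  vcn_id         = oci_core_vcn.demo_vcn.id
  display_name   = "app_nsg"
}

# Stateful ingress rule: allow HTTPS from anywhere to members of this NSG
resource "oci_core_network_security_group_security_rule" "allow_https" {
  network_security_group_id = oci_core_network_security_group.app_nsg.id
  direction                 = "INGRESS"
  protocol                  = "6" # TCP
  source                    = "0.0.0.0/0"
  source_type               = "CIDR_BLOCK"

  tcp_options {
    destination_port_range {
      min = 443
      max = 443
    }
  }
}
```

Unlike a security list, this rule applies only to VNICs explicitly added to the NSG, not to every resource in a subnet.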
Oracle VCN Architecture
Creating a Virtual Cloud Network
Choose Networking > Virtual Cloud Networks
Click on Start VCN Wizard
Click "VCN with Internet Connectivity", then click "Start VCN Wizard"
Fill in the details as follows (example values):
VCN Name: OCI_HOL_VCN
Compartment: Demo
VCN CIDR Block: 10.0.0.0/16
Public Subnet CIDR Block: 10.0.2.0/24
Private Subnet CIDR Block: 10.0.1.0/24
Use DNS Hostnames in this VCN: Checked
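For readers who prefer infrastructure-as-code over the console wizard, those same inputs map roughly to the following Terraform sketch (OCI provider; the compartment reference and DNS label are assumptions, and the route tables and gateways the wizard creates are omitted for brevity):

```hcl
resource "oci_core_vcn" "oci_hol_vcn" {
  compartment_id = oci_identity_compartment.demo.id # assumed to exist
  display_name   = "OCI_HOL_VCN"
  cidr_block     = "10.0.0.0/16"
  dns_label      = "ocihol" # enables DNS hostnames in this VCN
}

resource "oci_core_subnet" "public" {
  compartment_id = oci_identity_compartment.demo.id
  vcn_id         = oci_core_vcn.oci_hol_vcn.id
  display_name   = "public_subnet"
  cidr_block     = "10.0.2.0/24"
}

resource "oci_core_subnet" "private" {
  compartment_id             = oci_identity_compartment.demo.id
  vcn_id                     = oci_core_vcn.oci_hol_vcn.id
  display_name               = "private_subnet"
  cidr_block                 = "10.0.1.0/24"
  prohibit_public_ip_on_vnic = true # makes the subnet private
}
```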
Public Subnets
Public subnets provide resources with access to and from the public internet via an internet gateway. Resources such as web servers, application servers, and load balancers that need to be accessible from the internet should be deployed in public subnets.
Private Subnets 
Private subnets provide resources with private, isolated access inside the VCN, with no direct route to the public internet. Resources such as databases, application backends, and other systems that only need to be accessed privately from within the VCN should be deployed in private subnets.
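The public/private distinction comes down to routing and public IPs. A sketch of the two route tables in Terraform (illustrative names, assuming the VCN and gateways are defined elsewhere):

```hcl
# Public subnet: default route to an internet gateway (bidirectional internet access)
resource "oci_core_route_table" "public_rt" {
  compartment_id = oci_identity_compartment.demo.id
  vcn_id         = oci_core_vcn.demo_vcn.id

  route_rules {
    destination       = "0.0.0.0/0"
    network_entity_id = oci_core_internet_gateway.igw.id
  }
}

# Private subnet: default route to a NAT gateway (outbound only; no inbound from the internet)
resource "oci_core_route_table" "private_rt" {
  compartment_id = oci_identity_compartment.demo.id
  vcn_id         = oci_core_vcn.demo_vcn.id

  route_rules {
    destination       = "0.0.0.0/0"
    network_entity_id = oci_core_nat_gateway.natgw.id
  }
}
```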
globalmediacampaign · 4 years ago
Deploy Moodle on OCI with MDS
Moodle is the world's most popular learning management system. Moodle is open source and, of course, compatible with the most popular open-source database: MySQL! I've already posted an article on how to install Moodle on OCI before we released MySQL Database Service. In this article we will see how to deploy Moodle very easily on OCI using MDS.

Once again we will use the easiest way to deploy a complete architecture on OCI: Resource Manager. We will use a stack I've created that is available on GitHub. This stack includes Terraform code that can deploy several different architectures for Moodle. I've tried to cover the main possible architectures directly in the stack. It's also possible to just download the Terraform code and modify it if you need, and you can generate a new stack from your modified code. I've already published multiple stacks that deploy the same architectures as covered in this article, but for other solutions, directly from this page: Deploy to OCI. Let's have a look at some of the possible architectures we can deploy directly by clicking on the "deploy to OCI" button.

Simplest Deployment

This deployment is the simplest: one single MySQL Database Service instance and one compute instance as the Moodle web server. The architecture is composed of the following components:

Availability domains: Availability domains are standalone, independent data centers within a region. The physical resources in each availability domain are isolated from the resources in the other availability domains, which provides fault tolerance. Availability domains don't share infrastructure such as power, cooling, or the internal availability domain network, so a failure at one availability domain is unlikely to affect the other availability domains in the region.

Virtual cloud network (VCN) and subnets: A VCN is a customizable, software-defined network that you set up in an Oracle Cloud Infrastructure region. Like traditional data center networks, VCNs give you complete control over your network environment. A VCN can have multiple non-overlapping CIDR blocks that you can change after you create the VCN. You can segment a VCN into subnets, which can be scoped to a region or to an availability domain. Each subnet consists of a contiguous range of addresses that don't overlap with the other subnets in the VCN. You can change the size of a subnet after creation. A subnet can be public or private.

Internet gateway: The internet gateway allows traffic between the public subnets in a VCN and the public internet.

Network security group (NSG): NSGs act as virtual firewalls for your cloud resources. With the zero-trust security model of Oracle Cloud Infrastructure, all traffic is denied by default, and you can control the network traffic inside a VCN. An NSG consists of a set of ingress and egress security rules that apply to only a specified set of VNICs in a single VCN.

MySQL Database Service (MDS): MySQL Database Service is a fully managed Oracle Cloud Infrastructure (OCI) database service that lets developers quickly develop and deploy secure, cloud-native applications. Optimized for and exclusively available in OCI, MySQL Database Service is 100% built, managed, and supported by the OCI and MySQL engineering teams.

Compute instance: The OCI Compute service enables you to provision and manage compute hosts in the cloud. You can launch compute instances with shapes that meet your resource requirements (CPU, memory, network bandwidth, and storage). After creating a compute instance, you can access it securely, restart it, attach and detach volumes, and terminate it when you don't need it. Apache, PHP, and Moodle are installed on the compute instance.

Let's see the different steps to deploy this architecture directly from here. You will be redirected to the OCI dashboard's create-stack page. As soon as you accept the Oracle Terms of Use, the form will be pre-filled with some default values. You can of course decide in which compartment you want to deploy the architecture.

The second screen of the wizard is the most important form, where we need to fill in all the required variables and can also change the architecture, as we will see later. In the second part of the form, note that we can enable High Availability for MDS, use multiple web server instances, or use existing infrastructure. This means we have the possibility to use an existing VCN, subnets, etc. And of course we can also specify the shapes for the compute instances (from a dropdown list of the shapes available in your tenancy and compartment) and for the MDS instance (this one needs to be entered manually).

When we click Next, we reach the last screen, which summarizes the choices, and we can click on "Create". By default the architecture will be applied automatically (meaning all necessary resources will be deployed). Now we need a little patience while everything is deployed...

Other Possible Architectures

As we saw earlier on the second screen of the stack-creation wizard, we can also specify the use of multiple web servers. Then we have the possibility to deploy them on different fault domains (the default) or on different availability domains. It's also possible to specify whether each Moodle server will use its own database and user, or share the same schema in case we want to put a load balancer in front of all the web servers and spread the load for the same site/application.
The default architecture with 3 web servers looks like this. And if you want to enable High Availability for the MDS instance, you just need to check the box, and you will get an architecture like this:

Finishing Moodle's Installation

When the deployment using the stack is finished, you will see a nice large green square with "SUCCEEDED", and in the log you will also see some important information. This information is also available in the Outputs section on the left. Now we just need to open a web browser and enter that public IP to finish the installation of Moodle, following the wizard until the database configuration section. On that screen, we use the information from the stack's Outputs section. Then we continue the installation process until it's completed, and finally we can enjoy our new Moodle deployment. As you can see, it has never been so easy to deploy applications using MySQL in Oracle Cloud Infrastructure. https://lefred.be/content/deploy-moodle-on-oci-with-mds/
masaa-ma · 6 years ago
AWS/GCP/OCI Service Comparison
from https://qiita.com/ghogho-seki/items/19a026a8c643aa868d53?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
Comparison of the main services of AWS, GCP, and OCI (Oracle Cloud Infrastructure):

| Category | AWS | GCP | OCI |
| --- | --- | --- | --- |
| Region | AZ (Availability Zone) | Zone | AD (Availability Domain) |
| Networking & content delivery | VPC | VPC | VCN (Virtual Cloud Network) |
| Networking & content delivery | ELB (Elastic Load Balancing) | Cloud Load Balancing | Load Balancing |
| Networking & content delivery | DX (Direct Connect) | Cloud Interconnect | FastConnect |
| Networking & content delivery | Subnet | Subnet | Subnet |
| Networking & content delivery | Route table | Route | Route table |
| Networking & content delivery | IGW (Internet Gateway) | IGW (Internet Gateway) | IGW (Internet Gateway) |
| Networking & content delivery | VGW (Virtual Private Gateway) | Cloud Router, VPN Gateway | DRG (Dynamic Routing Gateway) |
| Networking & content delivery | CloudFront | Cloud CDN | - |
| Compute | EC2 | Compute Engine | Virtual Machines |
| Compute | EC2 (I3.metal) | - | Bare Metal |
| Compute | EC2 (P2, P3, G3 instances) | Google GPU | GPU |
| Compute | ECS (EC2 Container Service) | Kubernetes Engine | Container Engine for Kubernetes |
| Compute | ECR (EC2 Container Registry) | Container Registry | Container Registry |
| Compute | Lambda | Cloud Functions | Oracle Functions (coming soon) |
| Storage | EBS (Elastic Block Store) | Compute Engine persistent disks | Block Volumes |
| Storage | EFS (Elastic File System) | ZFS / Avere, Cloud Filestore (beta) | File Storage |
| Storage | S3 | Cloud Storage, Cloud Storage Nearline | Object Storage |
| Storage | Glacier | Cloud Storage Coldline | Archive Storage |
| Storage | Import/Export Disk, Import/Export Snowball, Import/Export Snowmobile | Transfer Appliance, Transfer Service | Data Transfer Service |
| Storage | Storage Gateway | ZFS / Avere | Storage Gateway |
| Security, identity & compliance | IAM, AWS Organizations | IAM (Cloud Identity & Access Management) | IAM (Identity and Access Management) |
| Security, identity & compliance | KMS (Key Management Service) | KMS (Cloud Key Management Service) | KMS (Key Management Service) |
| Security, identity & compliance | CloudTrail | Cloud Audit Logging | Audit |
| Security, identity & compliance | SG (security groups), NACL (network ACLs) | Firewall rules | Security lists, network security groups |
| Database | RDS | Cloud SQL (MySQL, Postgres), Cloud Spanner | Database Cloud Service |
| Database | Aurora | Cloud SQL, Cloud Spanner | Autonomous Transaction Processing |
| Database | Redshift | BQ (BigQuery) | Autonomous Data Warehouse |
| Database | DynamoDB | Cloud Datastore, Cloud Bigtable | NoSQL |
| Edge | Route 53 | Cloud DNS | DNS |
| Edge | SES | - | Email Delivery |
| Edge | WAF (Web Application Firewall) | - | WAF (Web Application Firewall) |
| Edge | Shield | Cloud Armor | DDoS Protection |
References
- Oracle Cloud Infrastructure Service Mapping
- Google Cloud Platform for AWS Professionals
globalmediacampaign · 5 years ago
Using Terraform to configure MySQL Database Service
Recently the MySQL Database Service (MDS) was launched in Oracle Cloud Infrastructure (OCI). As cloud is about automation, you don't have to use the web console to configure your instances; you can do it via API, for instance with the oci command-line tool or your favorite programming language. However, it is often nicer to define the world in a declarative way ("I want a network like this, and a MySQL database like that") and let a tool figure out how to align the reality in the cloud with your wish. A tool doing this is Terraform. With Terraform you declare the desired state in description files, and the tool builds a dependency graph and then applies what has to be applied. Of course it supports OCI, and as part of the default OCI provider there is even direct MDS support.

So let's build a set of description files for a cloud environment. I assume you have a tenancy and want to create a new compartment, with its own VCN and a compute instance with a client, like MySQL Shell, to access the database. For this configuration create a new empty directory. The first thing we need in there is to tell Terraform that we want to use the OCI provider for accessing OCI. We will come back to it, but for now this will be quite short. Whether we put everything in one file or split it up, and how our files are named, doesn't matter to Terraform: it will scan for all files called something.tf and build its graph. I like relatively small files, one for each aspect of the configuration, but you are of course free to organize them as you like. I start with oci.tf for my general configuration:

```hcl
provider "oci" {
  version = "~> 3.95"
  region  = var.region
}
```

Here we say that we want at least version 3.95 of the OCI provider and configure our cloud region using a variable. All variables I use, and which can be set or overwritten, I put in a file called variables.tf, where I declare region like this:

```hcl
variable "region" {}
```

As said, the first thing I want to create is a compartment.
A compartment in OCI is a grouping of different instances from the services you are using. You can use compartments, for instance, to group the services that different departments of your company use, to give them different resource limits, to keep development and production systems separated, or whatever you might need. By using a compartment here, we won't get in conflict with other services you are already using. This is my compartment.tf:

```hcl
resource "oci_identity_compartment" "mds_terraform" {
  name           = "mds_terraform"
  description    = "Compartment to house the MySQL Database and Terraform experiment"
  compartment_id = var.compartment_ocid
  enable_delete  = true
}
```

In the first line we declare that the following is a description of a resource of type oci_identity_compartment, which inside our other Terraform resources will be called mds_terraform. Then we define the name of the compartment we want to have inside OCI; here I'm using the same name both times. This is followed by a description, which might help your colleagues or your later self understand the purpose. The compartment_id property here refers to the parent, as compartments can be hierarchically nested. Finally, setting the property enable_delete means that Terraform will try to delete the compartment when we tell it to delete things. As the parent compartment is a variable again, we need to declare it, so let's extend variables.tf:

```hcl
variable "compartment_ocid" {}
```

With the compartment in place, the first thing we need is our network.
This is my network.tf:

```hcl
resource "oci_core_vcn" "mds_terraform_vcn" {
  cidr_block     = "10.0.0.0/16"
  dns_label      = "mdsterraform"
  compartment_id = oci_identity_compartment.mds_terraform.id
  display_name   = "mds_terraform_vcn"
}

resource "oci_core_internet_gateway" "internet_gateway" {
  compartment_id = oci_identity_compartment.mds_terraform.id
  vcn_id         = oci_core_vcn.mds_terraform_vcn.id
  display_name   = "gateway"
}

resource "oci_core_default_route_table" "default-route-table-options" {
  manage_default_resource_id = oci_core_vcn.mds_terraform_vcn.default_route_table_id

  route_rules {
    network_entity_id = oci_core_internet_gateway.internet_gateway.id
    cidr_block        = "0.0.0.0/0"
  }
}

resource "oci_core_subnet" "test_subnet" {
  cidr_block        = "10.0.2.0/24"
  display_name      = "mds_tf_subnet"
  dns_label         = "mdssubnet"
  security_list_ids = [oci_core_security_list.securitylist1.id]
  compartment_id    = oci_identity_compartment.mds_terraform.id
  vcn_id            = oci_core_vcn.mds_terraform_vcn.id
  route_table_id    = oci_core_vcn.mds_terraform_vcn.default_route_table_id
  dhcp_options_id   = oci_core_vcn.mds_terraform_vcn.default_dhcp_options_id
}

resource "oci_core_security_list" "securitylist1" {
  display_name   = "securitylist1"
  compartment_id = oci_identity_compartment.mds_terraform.id
  vcn_id         = oci_core_vcn.mds_terraform_vcn.id

  egress_security_rules {
    protocol    = "all"
    destination = "0.0.0.0/0"
  }

  ingress_security_rules {
    protocol = "6"
    source   = "0.0.0.0/0"
    tcp_options {
      min = 22
      max = 22
    }
  }

  ingress_security_rules {
    protocol = "6"
    source   = "0.0.0.0/0"
    tcp_options {
      min = 3306
      max = 3306
    }
  }

  ingress_security_rules {
    protocol = "6"
    source   = "0.0.0.0/0"
    tcp_options {
      min = 33060
      max = 33060
    }
  }
}
```

This is quite a lot and I won't go through everything here, but this declares a VCN with a single subnet, adds an internet gateway so that we can expose services to the internet and reach the internet from our VCN, and sets ingress and egress firewall rules to only allow traffic to MDS (ports 3306 and 33060) and SSH (port 22).
What you might notice is how we refer to the ID of the compartment we created before, using oci_identity_compartment.mds_terraform.id, and how the different network resources refer to each other in similar ways. Now it's time to create our MDS instance! Here is my mysql.tf:

```hcl
data "oci_mysql_mysql_configurations" "shape" {
  compartment_id = oci_identity_compartment.mds_terraform.id
  type           = ["DEFAULT"]
  shape_name     = var.mysql_shape
}

resource "oci_mysql_mysql_db_system" "mds_terraform" {
  display_name            = "Terraform Experiment"
  admin_username          = var.mysql_admin_user
  admin_password          = var.mysql_admin_password
  shape_name              = var.mysql_shape
  configuration_id        = data.oci_mysql_mysql_configurations.shape.configurations[0].id
  subnet_id               = oci_core_subnet.test_subnet.id
  compartment_id          = oci_identity_compartment.mds_terraform.id
  availability_domain     = data.oci_identity_availability_domain.ad.name
  data_storage_size_in_gb = var.mysql_data_storage_in_gb
}

output "mysql_url" {
  value = "mysqlx://${var.mysql_admin_user}:${var.mysql_admin_password}@${oci_mysql_mysql_db_system.mds_terraform.ip_address}:${oci_mysql_mysql_db_system.mds_terraform.port_x}"
}
```

The actual MySQL database instance is declared in the second block, where we give it a name, configure the administrative user account, assign the subnet, and so on. Again we introduced some variables, so let's declare them in variables.tf:

```hcl
variable "mysql_admin_user" {
  default = "root"
}

variable "mysql_admin_password" {}

variable "mysql_shape" {
  default = "VM.Standard.E2.1"
}

variable "mysql_data_storage_in_gb" {
  default = 50
}
```

A few fields might need some extra explanation. The shape is the machine type we want; it defines the CPU type, whether we want a VM, the amount of memory, and so on. Here we default to VM.Standard.E2.1, which is the smallest type and good enough for an experiment. On a production system you will probably want to override this and use a larger shape.
MDS also allows you to use different configurations, so you can tune MySQL configuration variables for your application's needs. If you have a custom configuration you can provide its ID, but I want to use the default for the chosen shape, so I use a data resource to look it up (the first block in mysql.tf above).

In many cloud regions there are different availability domains: distinct data centers close to each other. The resources we created before span ADs, but the MDS instance has to live in a single AD. To look up the AD's ID based on its number, we can put this in oci.tf:

```hcl
data "oci_identity_availability_domain" "ad" {
  compartment_id = var.compartment_ocid
  ad_number      = var.availability_domain
}
```

And, again, I add another variable to variables.tf. There is one more thing in mysql.tf: an output block. This asks Terraform to give us a summary once it is done.

With all these things ready we can execute it! For a start I want to use the web console and OCI's Resource Manager. For that I have to package my files, which I do from my command line:

```
$ zip mds-terraform.zip *.tf
  adding: compartment.tf (deflated 38%)
  adding: mysql.tf (deflated 61%)
  adding: network.tf (deflated 75%)
  adding: oci.tf (deflated 35%)
  adding: variables.tf (deflated 50%)
```

With that file we can log in to the console and navigate to the Resource Manager. After clicking the "Create Stack" button, we can use the checkbox to tell the system that we have a zip file and then either drag the file from a file manager or browse for it. Now we are asked to fill in the configuration variables we defined previously. No surprise: our defaults are pre-filled, but the system also identified your region and compartment ID! The compartment ID suggested is the one in which the stack was created, which is probably the root, i.e. the tenancy's ID. Now you could pick a password for the MySQL user and continue.
However, MDS has specific requirements on password security, and we would eventually fail later, so let's take a quick side tour and make this form a bit nicer. This can be done by providing a schema.yaml file:

```yaml
title: "MySQL Terraform Experiment"
description: "An experimental Terraform setup to create MySQL Database Service Instances"
schemaVersion: 1.1.0
version: "20190304"
locale: "en"

groupings:
  - title: "Basic Hidden"
    visible: false
    variables:
      - compartment_ocid
      - tenancy_ocid
      - region
  - title: "General Configuration"
    variables:
      - mysql_admin_user
      - mysql_admin_password

variables:
  compartment_ocid:
    type: oci:identity:compartment:id # type: string
    required: true
    title: "Compartment"
    description: "The compartment in which to create compute instance(s)"
  mysql_admin_user:
    type: string
    required: true
    title: "MySQL Admin User"
    description: "Username for MySQL Admin User"
    minLength: 1
    maxLength: 14
    pattern: "^[a-zA-Z][a-zA-Z0-9]+$"
  mysql_admin_password:
    type: string
    required: true
    title: "MySQL Password"
    description: "Password for MySQL Admin User"
    pattern: "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()_+-=[]{};':"\|,.<>/?]).{8,32}$"

outputGroups:
  - title: "MySQL Database Service"
    outputs:
      - mysql_url

outputs:
  mysql_url:
    type: string
    title: "MySQL Connection String"
    visible: true
```

And then packing the zip file again:

```
$ zip mds-terraform.zip *.tf schema.yaml
updating: compartment.tf (deflated 38%)
updating: mysql.tf (deflated 61%)
updating: network.tf (deflated 75%)
updating: oci.tf (deflated 35%)
updating: variables.tf (deflated 50%)
  adding: schema.yaml (deflated 57%)
```

In the stack configuration I now click Back, upload the new file, and get a nicer form. So let's enter a password and continue. (IMPORTANT: the password will be stored insecurely in the stack; for production usage you should secure it.) After completing the wizard I come to an overview page for the stack and can then pick the Terraform Apply action.
This will take about 15 minutes and create our resources. After the process is done I browse to the MySQL Database Service page. But oh wait, there is no system in the list!? Yes, because it is in the newly created compartment, so on the left I select the mds_terraform compartment. If it doesn't appear in the list, my browser has an outdated version cached and I simply reload the page.

Now we have a MySQL Database Service instance within a VCN and can't reach it. Not so good, so I add one more service to my Terraform configuration: a compute instance with pre-installed MySQL Shell. Here's the compute.tf:

```hcl
data "oci_core_images" "images_for_shape" {
  compartment_id           = oci_identity_compartment.mds_terraform.id
  operating_system         = "Oracle Linux"
  operating_system_version = "7.8"
  shape                    = var.compute_shape
  sort_by                  = "TIMECREATED"
  sort_order               = "DESC"
}

resource "oci_core_instance" "compute_instance" {
  availability_domain = data.oci_identity_availability_domain.ad.name
  compartment_id      = oci_identity_compartment.mds_terraform.id
  display_name        = "MySQL Database Service and Terraform Test"
  shape               = var.compute_shape

  source_details {
    source_type = "image"
    source_id   = data.oci_core_images.images_for_shape.images[0].id
  }

  create_vnic_details {
    assign_public_ip = true
    display_name     = "primaryVnic"
    subnet_id        = oci_core_subnet.test_subnet.id
    hostname_label   = "compute"
  }

  metadata = {
    ssh_authorized_keys = var.public_key
    user_data           = filebase64("init-scripts/compute-init.sh")
  }
}

output "compute_public_ip" {
  value = oci_core_instance.compute_instance.public_ip
}
```

This creates a VM using the latest Oracle Linux 7.8 image and asks for a public IP address so we can reach it from the outside. I also reference a script called init-scripts/compute-init.sh.
This script looks like this and simply installs MySQL Shell from MySQL's yum repository:

```sh
#!/bin/sh
cd /tmp
wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
sudo rpm -i mysql80-community-release-el7-3.noarch.rpm
sudo yum update
sudo yum install -y mysql-shell
```

In variables.tf, two new variables are to be added: one that asks for an SSH public key, so we can log in to the machine, and one to configure the shape with a sensible default:

```hcl
variable "compute_shape" {
  default = "VM.Standard2.1"
}

variable "public_key" {}
```

For adding the new configuration field and the new output to our form in the Resource Manager, schema.yaml needs a few minor updates; for simplicity here is the complete file:

```yaml
title: "MySQL Terraform Experiment"
description: "An experimental Terraform setup to create MySQL Database Service Instances"
schemaVersion: 1.1.0
version: "20190304"
locale: "en"

groupings:
  - title: "Basic Hidden"
    visible: false
    variables:
      - compartment_ocid
      - tenancy_ocid
      - region
  - title: "General Configuration"
    variables:
      - mysql_admin_user
      - mysql_admin_password
      - public_key

variables:
  compartment_ocid:
    type: oci:identity:compartment:id # type: string
    required: true
    title: "Compartment"
    description: "The compartment in which to create compute instance(s)"
  mysql_admin_user:
    type: string
    required: true
    title: "MySQL Admin User"
    description: "Username for MySQL Admin User"
    minLength: 1
    maxLength: 14
    pattern: "^[a-zA-Z][a-zA-Z0-9]+$"
  mysql_admin_password:
    type: password
    required: true
    title: "MySQL Password"
    description: "Password for MySQL Admin User"
    pattern: "^(?=.*[0-9])(?=.*[a-z])(?=.*[A-Z])(?=.*[!@#$%^&*()_+-=[]{};':"\|,.<>/?]).{8,32}$"
  public_key:
    type: string
    title: "SSH Public Key"
    description: "An OpenSSH public key for accessing your compute instance"

outputGroups:
  - title: "MySQL Database Service"
    outputs:
      - mysql_url
  - title: "Compute Instance"
    outputs:
      - compute_public_ip

outputs:
  mysql_url:
    type: string
    title: "MySQL Connection String"
    visible: true
  compute_public_ip:
    type: string
    title: "Public IP"
    visible: true
```

Now I can package it up again:

```
$ zip mds-terraform.zip *.tf schema.yaml init-scripts/compute-init.sh
updating: compartment.tf (deflated 38%)
updating: mysql.tf (deflated 62%)
updating: network.tf (deflated 76%)
updating: oci.tf (deflated 35%)
updating: variables.tf (deflated 55%)
updating: schema.yaml (deflated 57%)
  adding: compute.tf (deflated 54%)
  adding: init-scripts/compute-init.sh (deflated 39%)
```

And go back to the Resource Manager... oh wait, the list is empty... hah, I'm in the wrong compartment. Once that hurdle is bypassed, I can select the stack I created previously, click Edit, and upload the new file. The wizard will now ask for the SSH key, which I copy from my $HOME/.ssh/id_rsa.pub before completing the wizard. Then I again pick the Terraform Apply action and can observe how Terraform notices that most things already exist: only the compute instance is missing, and it creates it. A few minutes later it is done and the task is completed.

On top of the page a new tab, Application Information, has appeared; based on information from the schema.yaml file, it gives me a mysqlx URL and an IP address. I then use that IP address to connect to the machine, using my SSH key and the username opc. I have to confirm the server identity by typing yes, and I am on my compute instance, which is in my VCN. I can then use MySQL Shell with the URL from the Terraform summary to connect to the MySQL instance. MySQL Shell starts in JavaScript mode by default; if I'm not in the mood for that, I can type sql and switch to SQL mode. I can also install other programs as I like, including my own, and connect to the MDS instance just like any other MySQL server.

```
[opc@compute ~]$ mysqlsh 'mysqlx://root:[email protected]:33060'
MySQL Shell 8.0.22

Copyright (c) 2016, 2020, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
```
Type 'help' or '?' for help; 'quit' to exit. WARNING: Using a password on the command line interface can be insecure. Creating an X protocol session to '[email protected]:33060' Fetching schema names for autocompletion... Press ^C to stop. Your MySQL connection id is 13 (X protocol) Server version: 8.0.22-u2-cloud MySQL Enterprise - Cloud No default schema selected; type use to set one. MySQL 10.0.2.5:33060+ ssl JS > sql Switching to SQL mode... Commands end with ; MySQL 10.0.2.5:33060+ ssl SQL > CREATE DATABASE foo; Query OK, 1 row affected (0.0052 sec) MySQL 10.0.2.5:33060+ ssl SQL > use foo Default schema set to `foo`. Fetching table and column names from `foo` for auto-completion... Press ^C to stop. MySQL 10.0.2.5:33060+ ssl foo SQL > CREATE TABLE t (id INT); Query OK, 0 rows affected (0.0236 sec) MySQL 10.0.2.5:33060+ ssl foo SQL > INSERT INTO t VALUES (1); Query OK, 1 row affected (0.0139 sec) MySQL 10.0.2.5:33060+ ssl foo SQL > SELECT * FROM t; +----+ | id | +----+ | 1 | +----+ 1 row in set (0.0007 sec) Once you are done you can go back to the Web Console and the Stack's page and pick the Terraform Destroy Action and all things will be removed again. Note: It can happen that the Cloud Init Script didn't finish, yet and MySQL Shell isn't installed. Then wait a few moments and try again. Also you might see an error like mysqlsh: error while loading shared libraries: libpython3.7m.so.1.0: cannot open shared object file: No such file or directory. If that happens logout and back in. If the error persists run export LD_LIBRARY_PATH=/usr/lib/mysqlsh/ mysqlsh as a workaround. Now didn't I initially say that I want to automate it and not click in a Web Console? - Yeah I did and install the terraform tool locallyby downloading from terraform.io and then changing my oci.tf file. Previously I was inside OCI and could use my Web Session as authentication and gather data. From my local machine I have to configure more. 
The provider entry now looks like this: provider "oci" { version = "~> 3.95" region = var.region tenancy_ocid = var.tenancy_ocid user_ocid = var.user_ocid fingerprint = var.fingerprint private_key_path = var.private_key_path } There are new variables, so I add them to variables.tf: variable "user_ocid" {} variable "fingerprint" {} variable "private_key_path" {} Now I can run terraform init, which will read the files and download the oci provider. If I now run terraform apply it will ask me about all those variables. Best way to gather those is by installing the OCI command line tool and running  oci setup bootstrap, which will guide you through the process to setup a client and putting relevant information in your $HOME/.oci/config file. All these files are available on GitHub at https://github.com/johannes/mysql-database-service-terraform-example Happy MySQLing. If you want to see how to use a similar setup for running a serverless application using Node.js on OCI you can look at this Hands-on-Lab and I also suggest reserving some time to attend the Oracle Live event with a big MySQL announcement on December 2nd.   http://schlueters.de/blog/archives/190-Using-Terraform-to-configure-MySQL-Database-Service.html
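A small addendum to the local-Terraform variant above: instead of answering terraform apply's prompts every time, the extra variables can be collected in a terraform.tfvars file next to the other .tf files, which Terraform loads automatically. This is just a sketch — every OCID, fingerprint, path and key below is a placeholder to be replaced with your own values:

```hcl
# terraform.tfvars -- placeholder values, substitute your own
region           = "eu-frankfurt-1"
tenancy_ocid     = "ocid1.tenancy.oc1..aaaa..."
compartment_ocid = "ocid1.compartment.oc1..aaaa..."
user_ocid        = "ocid1.user.oc1..aaaa..."
fingerprint      = "12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0"
private_key_path = "~/.oci/oci_api_key.pem"

mysql_admin_user     = "admin"
mysql_admin_password = "..."
public_key           = "ssh-rsa AAAA... user@host"
```

With that file in place, terraform plan and terraform apply run without interactive prompts.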
0 notes
globalmediacampaign · 5 years ago
Text
A Step by Step Guide to Take your MySQL Instance to the Cloud
You have a MySQL instance? Great. You want to take it to a cloud? Nothing new. You want to do it fast, minimizing downtime / service outage? "I wish" I hear you say. Pull up a chair. Let's have a chinwag.

Given the objective above, i.e. "I have a database server on premise and I want the data in the cloud to 'serve' my application", we can go into details:

- Export the data.
- Hopefully make that export land in a cloud storage place 'close' to the destination (in my case, @OCI of course).
- Create my MySQL cloud instance.
- Import the data into the cloud instance.
- Redirect the application to the cloud instance.

All this takes time. With a little preparation we can reduce the outage time down to be 'just' the sum of the export + import time. This means that once the export starts, we will have to set the application in "maintenance" mode, i.e. not allow more writes until we have our cloud environment available. Depending on each cloud solution, the 'export' part could mean "export the data locally and then upload the data to cloud storage", which might add to the duration. Then, once the data is there, the import might allow us to read from the cloud storage, or require adjustments before the import can be fully completed. Do you want to know more? https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

Let's get prepared then.

Main objective: keep application outage time down to a minimum.

Preparation:

- You have an OCI account, and the OCI CLI configuration is in place.
- MySQL Shell 8.0.21 is installed on the on-premise environment.
- We create an Object Storage bucket for the data upload.
- We create our MySQL Database System.
- We create our "Endpoint" Compute instance, and install MySQL Shell 8.0.21 & MySQL Router 8.0.21 there.
- Test connectivity from PC to Object Storage, from PC to Endpoint, and, in effect, from PC to MDS.

So, now for our OCI environment setup. What do I need? Really, we just need some files to configure with the right info.
Nothing has to be installed or similar. But if we do have the OCI CLI installed on our PC, then we'll already have the configuration, so it's even easier. (If you don't have it installed, it does help avoid the web page console once we have learned a few commands, so we can easily get things like the public IP of a recently started Compute instance, or easily start / stop these cloud environments.)

What we need is the config file from ~/.oci, which contains the user and tenancy OCIDs, the API key fingerprint, the path to the PEM key file and the region. You'll need the API Key stuff as mentioned in the documentation "Required Keys and OCIDs". Remember, this is a one-off, and it really helps your OCI interaction in the future. Just do it. The "config" file and the PEM key will allow us to send the data straight to the OCI Object Storage bucket.

MySQL Shell 8.0.21 install on-premise.

Make a bucket. I did this via the OCI console. This creates a Standard Private bucket. Click on the bucket name that now appears in the list to see the details. You will need to note down the Name and Namespace.

Create our MySQL Database System. This is where the data will be uploaded to. This is also quite simple. And hey presto, we have it. Click on the name of the MDS system, and you'll find that there's an IP address according to your VCN config. This isn't a public IP address, for security reasons. On the left hand side, in the menu, you'll see "Endpoints". Here we have the info that we will need for the next step. For example, the IP address is 10.0.0.4.

Create our Endpoint Compute instance. In order to access our MDS from outside the VCN, we'll be using a simple Compute instance as a jump server. Here we'll install MySQL Router to be our proxy for external access. And we'll also install MySQL Shell to upload the data from our Object Storage bucket. For example, https://gist.github.com/alastori/005ebce5d05897419026e58b9ab0701b.
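For orientation, that ~/.oci/config file is a small INI file. A sketch with placeholder values — your OCIDs, fingerprint, key path and region will differ:

```ini
[DEFAULT]
user=ocid1.user.oc1..aaaa...
fingerprint=12:34:56:78:9a:bc:de:f0:12:34:56:78:9a:bc:de:f0
key_file=~/.oci/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..aaaa...
region=us-ashburn-1
```

The same file serves the OCI CLI, the SDKs and, as we'll see below, MySQL Shell's dump utilities.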
First, go to the Security List of your OCI compartment, and add an ingress rule for the port you want to use in Router, allowing access from the IP address you have for your application server or from the on-premise public IP address assigned.

Router & Shell install 'n' configure. Test connectivity. Test MySQL Router as our proxy, via MySQL Shell:

$ mysqlsh root@kh01:3306 --sql -e 'show databases'

Now we can test connectivity from our PC / application server / on-premise environment. Knowing the public IP address, let's try:

$ mysqlsh root@<public_ip>:3306 --sql -e 'show databases'

If you get any issues here, check your ingress rules at the VCN level. Also double check the OS firewall rules on the freshly created compute instance.

Preparation is done. We can connect to our MDS instance from the Compute instance where MySQL Router is installed, kh01, and also from our own (on-premise) environment. Let's get the data streaming.

MySQL Shell Dump Utility

In effect, it's here that we'll be 'streaming' data. This means that from our on-premise host we'll export the data into the osBucket in OCI, and at the same time read from that bucket from our Compute host kh01, which will import the data into MDS.

First of all, I want to check the commands with "dryRun: true".

util.dumpSchemas dryRun

From our own environment / on-premise installation, we now want to dump / export the data:

$ mysqlsh root@OnPremiseHost:3306

You'll want to see what options are available and how to use the util.dumpSchemas utility:

mysqlsh> \help util.dumpSchemas
NAME
      dumpSchemas - Dumps the specified schemas to the files in the output
                    directory.
SYNTAX
      util.dumpSchemas(schemas, outputUrl[, options])
WHERE
      schemas: List of schemas to be dumped.
      outputUrl: Target directory to store the dump files.
      options: Dictionary with the dump options.

Here's the command we'll be using, but we want to activate the 'dryRun' mode, to make sure it's all ok.
So:

util.dumpSchemas(["test"], "test",
  {dryRun: true, showProgress: true, threads: 8, ocimds: true,
   "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj",
   ociConfigFile: "/home/os_user/.oci/config",
   "compatibility": ["strip_definers"]})

- ["test"]: I just want to dump the test schema. I could put a list of schemas here. Careful if you think you can export internal schemas, 'cos you can't.
- "test": the "outputUrl" target directory. Watch the prefix of all the files being created in the bucket.

Options:

- dryRun: quite obvious. Change it to false to run.
- showProgress: I want to see the progress of the loading.
- threads: the default is 4, but choose what you like here, according to the resources available.
- ocimds: VERY IMPORTANT! This makes sure that the dump is "MDS ready", so that when the data gets to the cloud, nothing breaks.
- osBucketName: the name of the bucket we created.
- osNamespace: the namespace of the bucket.
- ociConfigFile: this is what we looked at right at the beginning. This is what makes it easy.
- compatibility: a list of options that help remove customizations and/or simplify our data export ready for MDS.

Here I am looking at exporting / dumping just schemas. I could have dumped the whole instance via util.dumpInstance. Have a try!
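If you do try util.dumpInstance, the long option dictionary can be assembled in a small shell wrapper so it isn't retyped each time. A sketch — the host, bucket and namespace below are the placeholders used in this post, and the script only prints the mysqlsh invocation so you can review it before running it for real:

```shell
#!/bin/sh
# Sketch only: host, bucket and namespace are placeholders -- substitute your own.
SRC_URI="root@OnPremiseHost:3306"
BUCKET="test-bucket"
NAMESPACE="idazzjlcjqzj"
OCI_CONFIG="$HOME/.oci/config"
PREFIX="instance-dump"

# Same options as the util.dumpSchemas() call above, but for the whole
# instance: dryRun first, MDS compatibility checks on, 8 threads.
JS_CALL="util.dumpInstance('$PREFIX', {dryRun: true, showProgress: true, threads: 8, ocimds: true, osBucketName: '$BUCKET', osNamespace: '$NAMESPACE', ociConfigFile: '$OCI_CONFIG', compatibility: ['strip_definers']})"

# Print the full invocation for review instead of executing it.
CMD="mysqlsh $SRC_URI --js -e \"$JS_CALL\""
echo "$CMD"
```

Once the printed command looks right, run it as-is for the dry run, then flip dryRun to false.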
I tested a local dumpSchemas export without OCIMDS readiness, and I think it's worth sharing: this is how I found out that I needed a primary key to be able to use chunking, and hence get a faster dump:

util.dumpSchemas(["test"], "/var/lib/mysql-files/test/test", {dryRun: true, showProgress: true})
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `test`.`reviews`
Writing DDL for schema `test`
Writing DDL for table `test`.`reviews`
Data dump for table `test`.`reviews` will be chunked using column `review_id`

(I created the primary key on the review_id column and got rid of the following warning:)

WARNING: Could not select a column to be used as an index for table `test`.`reviews`. Chunking has been disabled for this table, data will be dumped to a single file.

Anyway, I used dumpSchemas (instead of dumpInstance) with OCIMDS and then loaded with the following:

util.loadDump dryRun

Now we're on the Compute instance we created, with Shell 8.0.21 installed and ready to upload / import the data:

$ mysqlsh root@kh01:3306

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", ociConfigFile: "/home/osuser/.oci/config"})

As you might imagine, I've copied my PEM key and OCI CLI config file to the Compute instance, via scp, into a "$HOME/.oci" directory.

Loading DDL and Data from OCI ObjectStorage bucket=test-bucket, prefix='test' using 8 threads.
Util.loadDump: Failed opening object '@.json' in READ mode: Not Found (404) (RuntimeError)

This is due to the bucket being empty. You'll see why it complains about the "@.json" in a second. You want to do some "streaming"?
With our two session windows opened, one from the on-premise instance and the other from the OCI Compute host, connected with mysqlsh:

On-premise, dry run:

util.dumpSchemas(["test"], "test", {dryRun: true, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

On-premise, real:

util.dumpSchemas(["test"], "test", {dryRun: false, showProgress: true, threads: 8, ocimds: true, "osBucketName": "test-bucket", "osNamespace": "idazzjlcjqzj", ociConfigFile: "/home/os_user/.oci/config", "compatibility": ["strip_definers"]})

OCI Compute host, dry run:

util.loadDump("test", {dryRun: true, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

OCI Compute host, real:

util.loadDump("test", {dryRun: false, showProgress: true, threads: 8, osBucketName: "test-bucket", osNamespace: "idazzjlcjqzj", waitDumpTimeout: 180})

They do say a picture is worth a thousand words; here are some images of each window as they executed at the same time. At the OCI Compute host you can see waitDumpTimeout take effect with:

NOTE: Dump is still ongoing, data will be loaded as it becomes available.

In the osBucket we can now see content (which is what loadDump is reading). And once it's all dumped 'n' uploaded we have the final output.

If you like logs, check the .mysqlsh/mysqlsh.log that records all the output, under the directory where you executed MySQL Shell (on-premise & OCI Compute).

Now the data is all in our MySQL Database System, and all we need to do is point the web server or the application server to the OCI Compute instance's IP and port so that MySQL Router can route the connection to happiness!!!!

Conclusion

https://blogs.oracle.com/mysql/a-step-by-step-guide-to-take-your-mysql-instance-to-the-cloud
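A footnote to the chunking observation above: before dumping, it can be worth listing the tables that have no primary key, since those won't be chunked and will each land in a single file. A sketch of that check as a print-only shell wrapper — the host is a placeholder, and the script echoes the mysqlsh invocation rather than running it:

```shell
#!/bin/sh
# information_schema query listing base tables that have no PRIMARY KEY.
QUERY="SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
  ON c.table_schema = t.table_schema
 AND c.table_name = t.table_name
 AND c.constraint_type = 'PRIMARY KEY'
WHERE t.table_type = 'BASE TABLE'
  AND c.constraint_name IS NULL
  AND t.table_schema NOT IN ('mysql','sys','information_schema','performance_schema');"

# Placeholder host -- print the command for review instead of executing it.
echo "mysqlsh root@OnPremiseHost:3306 --sql -e \"$QUERY\""
```

Any table this reports is a candidate for an ALTER TABLE ... ADD PRIMARY KEY before the dump, exactly as was done for `test`.`reviews` above.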
0 notes
globalmediacampaign · 5 years ago
Text
Migrate from on premise MySQL to MySQL Database Service
If you are running MySQL on premise, maybe it's the right time to think about migrating your lovely MySQL database somewhere where the MySQL Team prepared a comfortable place for it to stay running and safe. This awesome place is MySQL Database Service in OCI. For more information about what MDS is and what it provides, please check this blog from my colleague Airton Lastori. One important word that should come to your mind when we talk about MDS is SECURITY! Therefore, the MDS endpoint can only be a private IP in OCI. This means you won't be able to expose your MySQL database publicly on the Internet.

Now that we are aware of this, if we want to migrate an existing database to MDS, we need to take care of that.

What is my case?

When somebody needs to migrate their actual MySQL database, the first question that needs to be answered is: can we afford a large downtime?

If the answer is yes, then the migration is easy:

- you stop your application(s)
- you dump MySQL
- you start your MDS instance
- you load your data into MDS

and that's it!

In case the answer is no, things are of course more interesting, and this is the scenario I will cover in this post. Please note that the application is not covered here; of course, it's also recommended to migrate it to the cloud, in a compute instance of OCI for example.

What's the plan?

To successfully migrate a MySQL database from on premise to MDS, these are the actions I recommend:

- create a VCN with two subnets, the public and the private one
- create a MDS instance
- create a VPN
- create an Object Storage bucket
- dump the data to be loaded in MDS
- load the data in MDS
- create an in-bound replication channel in MDS

The architecture will look like this:

Virtual Cloud Network

First thing to do when you have your OCI access is to create a VCN from the dashboard.
If you have already created some compute instances, these steps are not required anymore.

You can use Start VCN Wizard, but I will cover the VPN later in this article, so let's just use Create VCN. We need a name and a CIDR block; we use 10.0.0.0/16. Now we click on the name (lefred_vcn in my case) and we need to create two subnets: we create the public one on 10.0.0.0/24 and the private one on 10.0.1.0/24.

MySQL Database Service Instance

We can create a MDS instance and just follow the creation wizard, which is very simple. It's very important to create an admin user (the name can be what you want), and don't forget the password. We put our service in the private subnet we just created. The last screen of the wizard is related to the automatic backups. The MDS instance will be provisioned after a short time, and you can see that in its detailed view.

VPN

OCI allows you to very easily create IPSEC VPNs with all the enterprise-level hardware that is used in the industry. Unfortunately I don't have such an opportunity at home (and no need for it), so I will use another supported solution that is more appropriate for domestic usage: OpenVPN. If you are able to deploy the IPSEC solution, I suggest you use it.

On that new page, you have a link to the Marketplace where you can deploy a compute instance to act as the OpenVPN server. You need to follow the wizard and make sure to use the VCN we created and the public subnet. The compute instance will be launched by Terraform.
When done, we will be able to reach the OpenVPN web interface using the public IP that was assigned to this compute instance, using the credentials we entered in the wizard. In case you lost those logs, the IP is available in the Compute -> Instances page.

As soon as the OpenVPN instance is deployed, we can go to the web interface and set up OpenVPN. As we want to be able to connect from our MDS instance to our on-premise MySQL server for replication, we will need to set up our VPN to use Routing instead of NAT. We also specified two ranges, as we really want to have a static IP for our on-premise MySQL instance; otherwise the IP might change the next time we connect to the VPN.

The next step is the creation of a user we will use to connect to the VPN. The settings are very important. Save the settings and click on the banner to restart OpenVPN.

Now we connect using the user we created to download their profile. That client.ovpn file needs to be copied to the on-premise MySQL server. If OpenVPN is not yet installed on the on-premise MySQL server, it's time to install it (yum install openvpn).

Now we copy client.ovpn to /etc/openvpn/client/ and call it client.conf:

# cp client.ovpn /etc/openvpn/client/client.conf

We can start the VPN:

# systemctl start openvpn-client@client
Enter Auth Username: lefred
Enter Auth Password: ******

We can verify that the VPN connection is established:

# ifconfig tun0
tun0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1500
        inet 172.27.232.134  netmask 255.255.255.0  destination 172.27.232.134
        inet6 fe80::9940:762c:ad22:5c62  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 100  (UNSPEC)
        RX packets 1218  bytes 102396 (99.9 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1287  bytes 187818 (183.4 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

systemctl status openvpn-client@client can also be called to see the status.

Object Storage

To transfer our data to the cloud, we will use Object Storage.
And we create a bucket.

Dump the Data

To dump the data of our on-premise MySQL server, we will use MySQL Shell, which since 8.0.21 has the capability to dump and load large datasets in an optimized way that is compatible with OCI. Please check these links for more details:

https://docs.cloud.oracle.com/en-us/iaas/mysql-database/doc/importing-and-exporting-databases.html
https://mysqlserverteam.com/mysql-shell-dump-load-part-1-demo/
https://mysqlserverteam.com/mysql-shell-dump-load-part-2-benchmarks/
https://mysqlserverteam.com/mysql-shell-dump-load-part-3-load-dump/
https://mysqlserverteam.com/mysql-shell-8-0-21-speeding-up-the-dump-process/

OCI Config

The first step is to create an OCI config file that will look like this:

[DEFAULT]
user=ocid1.user.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
key_file=/home/lefred/oci_api_key.pem
tenancy=ocid1.tenancy.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
compartment=ocid1.compartment.oc1..xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
region=us-ashburn-1

The user information and key can be found under the Identity section. Please refer to this manual page to generate a PEM key.

Now that we have an OCI config file (called oci.config in my case), we need to verify that our current MySQL server is using GTIDs:

on-premise mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| OFF         |
+-------------+
1 row in set (0.00 sec)

By default GTID mode is disabled and we need to enable it.
To be able to perform this operation without restarting the MySQL instance, this is how to proceed:

on-premise mysql> SET PERSIST server_id=1;
on-premise mysql> SET PERSIST enforce_gtid_consistency=true;
on-premise mysql> SET PERSIST gtid_mode=off_permissive;
on-premise mysql> SET PERSIST gtid_mode=on_permissive;
on-premise mysql> SET PERSIST gtid_mode=on;
on-premise mysql> select @@gtid_mode;
+-------------+
| @@gtid_mode |
+-------------+
| ON          |
+-------------+

Routing & Security

We need to add some routing and firewall rules to our VCN to allow the traffic from and to the VPN.

Now that we have dealt with routing and security, it's time to dump the data to Object Storage by connecting MySQL Shell to our on-premise server and using util.dumpInstance():

$ mysqlsh
MySQL JS > \c root@localhost
[...]
MySQL localhost:33060+ ssl JS > util.dumpInstance('onpremise', {ociConfigFile: "oci.config", osBucketName: "lefred_bucket", osNamespace: "xxxxxxxxxxxx", threads: 4, ocimds: true, compatibility: ["strip_restricted_grants", "strip_definers"]})

You can also find more info on this MDS manual page.

Load the data in MDS

The data is now already in the cloud, and we need to load it into our MDS instance. We first connect to our MDS instance using Shell. We could use a compute instance in the public subnet or the VPN we created. I will use the second option:

MySQL localhost:33060+ ssl JS > \c [email protected]
Creating a session to '[email protected]'
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 283 (X protocol)
Server version: 8.0.21-u1-cloud MySQL Enterprise - Cloud
No default schema selected; type \use <schema> to set one.

It's time to load the data from Object Storage into MDS:

MySQL 10.0.1.11:33060+ ssl JS > util.loadDump('onpremise', {ociConfigFile: "oci.config", osBucketName: "lefred_bucket", osNamespace: "xxxxxxxxxxxx", threads: 4})
Loading DDL and Data from OCI ObjectStorage bucket=lefred_bucket, prefix='onpremise' using 4 threads.
Target is MySQL 8.0.21-u1-cloud. Dump was produced from MySQL 8.0.21
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL script for schema employees
Executing DDL script for employees.departments
Executing DDL script for employees.salaries
Executing DDL script for employees.dept_manager
Executing DDL script for employees.dept_emp
Executing DDL script for employees.titles
Executing DDL script for employees.employees
Executing DDL script for employees.current_dept_emp
Executing DDL script for employees.dept_emp_latest_date
[Worker002] employees@dept_emp@@0.tsv.zst: Records: 331603 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] employees@dept_manager@@0.tsv.zst: Records: 24 Deleted: 0 Skipped: 0 Warnings: 0
[Worker003] employees@titles@@0.tsv.zst: Records: 443308 Deleted: 0 Skipped: 0 Warnings: 0
[Worker000] employees@employees@@0.tsv.zst: Records: 300024 Deleted: 0 Skipped: 0 Warnings: 0
[Worker002] employees@departments@@0.tsv.zst: Records: 9 Deleted: 0 Skipped: 0 Warnings: 0
[Worker001] employees@salaries@@0.tsv.zst: Records: 2844047 Deleted: 0 Skipped: 0 Warnings: 0
Executing common postamble SQL
6 chunks (3.92M rows, 141.50 MB) for 6 tables in 1 schemas were loaded in 5 min 28 sec (avg throughput 431.39 KB/s)
0 warnings were reported during the load.

We still need to set the GTID purged information from when the dump was taken. In MDS, this operation can be achieved by calling a dedicated procedure, sys.set_gtid_purged().

Now let's find the value we need to set there. The value of GTID executed from the dump is written in the file @.json. This file is located in Object Storage, and we need to retrieve it. When you have the value of gtidExecuted in that file, you can set it in MDS:

MySQL 10.0.1.11:33060+ ssl SQL > call sys.set_gtid_purged("ae82914d-e096-11ea-8a7a-08002718d305:1")

In-bound Replication

Before stopping our production server running MySQL on premise, we need to resync the data.
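As an aside before moving on: the gtidExecuted lookup above can be scripted instead of done by eye, once @.json has been downloaded from the bucket (for example with oci os object get). A sketch — the inlined sample below is a stand-in for your real @.json, with the value taken from this post:

```shell
#!/bin/sh
# Stand-in for the real @.json fetched from Object Storage, e.g. with:
#   oci os object get --bucket-name lefred_bucket --name onpremise/@.json --file at.json
# Only the gtidExecuted field matters here; other fields are illustrative.
cat > at.json <<'EOF'
{
    "dumper": "mysqlsh Ver 8.0.21 for Linux",
    "gtidExecuted": "ae82914d-e096-11ea-8a7a-08002718d305:1",
    "consistent": true
}
EOF

# Pull out the gtidExecuted value (no JSON parser needed for this shape).
GTID=$(sed -n 's/.*"gtidExecuted": *"\([^"]*\)".*/\1/p' at.json)

# Emit the statement to run against the MDS instance.
echo "CALL sys.set_gtid_purged(\"$GTID\");"
```

The printed CALL statement can then be pasted into a MySQL Shell SQL session connected to MDS.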
We also need to be sure we have moved everything we need to the cloud (applications, etc...) and certainly run some tests. This can take some time, and during that time we want to keep the data up to date. We will therefore use replication from on-premise to MDS.

Replication user creation

On the production MySQL server (the one still running on premise), we need to create a user dedicated to replication:

mysql> CREATE USER 'repl'@'10.0.1.%' IDENTIFIED BY 'C0mpl1c4t3d!Paddw0rd' REQUIRE SSL;
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'10.0.1.%';

Creation of the replication channel

We go back to OCI's dashboard, and in our MDS instance's details page we click on Channels. We now create a channel and follow the wizard. We use the credentials we just created, and as hostname we put the IP of our OpenVPN client: 172.27.232.134.

After a little while, the channel will be created, and in MySQL Shell, when connected to your MDS instance, you can see that replication is running.

Wooohooo it works ! o/

Conclusion

As you can see, transferring the data and creating a replication channel from on-premise to MDS is easy. The most complicated part is the VPN and dealing with the network, but that's straightforward for a sysadmin. This is a task you have to do only once, and it's the price to pay for a more secure environment.

https://blogs.oracle.com/mysql/migrate-from-on-premise-mysql-to-mysql-database-service
0 notes