#MYDBOPS
mydbops · 2 months ago
Text
The Mydbops Blog offers expert insights and practical guidance on managing open-source databases such as MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, and Cassandra. It covers topics like performance optimization, security hardening, replication strategies, and the latest trends in database technology. The blog serves as a valuable resource for database administrators and developers seeking to enhance their knowledge and stay updated with industry best practices.
globalmediacampaign · 4 years ago
Text
Webinar Summary: Migrate your EOL MySQL Servers
This brief summarises the proceedings and outcomes of the 2nd MyWebinar, held online on 13th February 2021. As part of our thought-leadership webinar series, the latest session was Migrate your EOL MySQL Servers (seamless migration to MySQL Group Replication / InnoDB Cluster). The webinar received a very positive response: with Zoom as the hosting platform, YouTube streaming, and the commitment of our business team, we were able to put together a smooth broadcast for all the attendees. Over 30 people took part on 13th Feb 2021 to learn about MySQL EOL versions and the upgrade path. The session "Migrate your EOL MySQL servers to HA Compliant GR Cluster / InnoDB Cluster With Zero Downtime" by Vinoth Kanna Ramiya Sriram, Co-founder, Mydbops, helped attendees refine and elevate their craft. Thank you to everybody who joined the live event and participated actively by voting in the polls and asking so many relevant questions; all questions were answered at the end of the session.
Slides: Migrate your EOL MySQL servers to HA Compliant GR Cluster / InnoDB Cluster With Zero Downtime, from Mydbops (presented by Vinoth Kanna R S, Co-founder, Mydbops).
Special thanks to Mr. Selva Venkatesh, Mr. Santhana Gopalan, Manthiramoorthy, and Subitsha for seamlessly and smoothly organizing the webinar, and to all the attendees who helped make this event a grand success. The next webinar is scheduled for March 20, 2021. For more details and updates, please follow the links below:
LinkedIn: https://in.linkedin.com/company/mydbops
Twitter: https://twitter.com/mydbopsofficial
Facebook: https://www.facebook.com/mydbops/
Blogs: https://mydbops.wordpress.com/
SlideShare: https://www.slideshare.net/MyDBOPS
Meetup page: https://www.meetup.com/Mydbops-Databa…
AWS APN page: https://mydbops.com/aws-services.html
http://www.mydbops.com https://mydbops.wordpress.com/2021/03/12/webinar-summary-migrate-your-eol-mysql-servers/
amuawia · 4 years ago
Photo
Post has been published on http://muawia.com/?post_type=wprss_feed_item&p=273922
Mydbops MyWebinar: High availability with InnoDB Cluster
mydbops · 2 months ago
Text
Mydbops, with over 8 years of expertise, is a leading provider of specialized managed and consulting services for open-source databases, including MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, and Cassandra. Their team of certified professionals offers comprehensive solutions tailored to optimize database performance, enhance security, and ensure scalability. Serving over 300 clients and managing more than 6,000 servers, Mydbops is ISO & PCI-DSS certified and holds an advanced AWS consulting partnership.
mydbops · 2 months ago
Text
Mydbops offers comprehensive MySQL Managed Services designed to optimize database performance, ensure high availability, and maintain robust security. Their services include 24/7 monitoring using advanced observability tools for efficient issue detection and resolution, performance tuning to enhance efficiency and scalability, and security hardening measures such as role-based access control and encryption. Additionally, Mydbops provides expert assistance with database architecture, migration, and disaster recovery solutions, ensuring seamless operations for both cloud and on-premises deployments.
mydbops · 2 months ago
Text
Mydbops offers comprehensive PostgreSQL Managed Services designed to ensure optimal database performance, security, and scalability. Their services include 24/7 proactive monitoring using proprietary tools for efficient issue detection and resolution, periodic performance tuning to optimize queries and configurations, monthly slow query analysis, and robust security hardening measures like role-based access control and encryption. Additionally, they provide expert assistance with configuration, optimization, and compliance to maintain uninterrupted PostgreSQL operations.
globalmediacampaign · 4 years ago
Text
Mydbops MyWebinar: High availability with InnoDB Cluster
Celebrating the new year is not just about a change in the calendar; it is about discovering a new you and elevating yourself through continuous learning and upgrading. WAAOC!! We Are All One Community!! Mydbops is geared up to give back to the open-source database community, again! MyWebinar is our expanded framework: we plan to go live every month covering as many new, trending open-source database topics as possible. The kickoff of our very first MyWebinar session is set for the 9th of January, 2021, and we look forward to seeing you there.
SPEAKER: Karthik P.R
TOPIC: High availability with InnoDB Cluster
TIME: 11 AM IST
This presentation deals with the transaction flow in InnoDB Cluster and how it can be deployed on a WAN-based network for high availability. If you are a developer or a DBA who is passionate about transaction flow in InnoDB Cluster, you should attend this MyWebinar. Free Registration Link: https://lnkd.in/gkywMzt https://mydbops.wordpress.com/2021/01/07/mydbops-mywebinar-high-availability-with-innodb-cluster/
globalmediacampaign · 4 years ago
Text
Install Specific Version of MySQL 8 using YUM
There are many ways to install MySQL on Linux machines, such as from source or from binaries. But most engineers prefer the default package managers (yum for RPM-based distributions and apt for DPKG-based distributions) for their ease of use and because they resolve all dependencies on their own. Of course, package managers cannot be used in environments where internet access is not allowed, but that is a different case. At some point we need to install an exact version of MySQL, for example:
To create production replicas
To simulate a production issue on a similar environment
To configure a Disaster Recovery (DR) / UAT setup
For compatibility with open-source tools (e.g., Xtrabackup compatibility)
But following the instructions in the MySQL documentation for installation using yum always leads to the latest released version being installed. At the time of writing this blog, the latest release is MySQL 8.0.22, and following the documented steps installs exactly that. We can certainly install other versions of MySQL using RPM files, but that is quite tough at times because of dependencies or conflicts with older packages. So, in this blog post, we will look at how to install a specific version of MySQL using yum. All the following steps were tested on CentOS 7 and require sudo privileges.
1. Setting up the yum repository
The command below enables the yum repository required for the installation of MySQL:
rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm
2. Disabling all MySQL repos
This disables the repos for every MySQL version, so that only the one we need is enabled explicitly at install time:
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/mysql-community.repo
3. Installing the required minor version
To install a particular minor version of MySQL 8, say 8.0.20, execute the following command:
yum --enablerepo=mysql80-community install mysql*8.0.20*
The general form of the command is:
yum --enablerepo=mysql80-community install mysql*8.0.X*
where X is the minor version you want. If you simply want the latest version available at that time, run:
yum --enablerepo=mysql80-community install mysql-community-server
If the command succeeds, the required version is installed on the server.
4. Start the MySQL service
Use the service command below to start the MySQL service:
service mysqld start
5. Get the temporary password from the error log
By default, MySQL generates a temporary password for the root user and writes it to the error log. To retrieve it, use the command below:
grep "temporary password" /var/log/mysqld.log
The sample output looks like:
[root@ha mydbopslabs]# grep "temporary password" /var/log/mysqld.log
2020-12-10T14:12:49.940127Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: 7y:ztnVKwh7q
6. Log in to MySQL as the root user with the temporary password
mysql -p'$temporary password'
7. Reset the root user password
On successful login, the first thing to do is reset the root user password. Until it is reset, no other statement can be executed and MySQL returns a warning like the one below:
mysql> show schemas;
ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.
To reset the root user password, execute the following query at the mysql prompt:
alter user 'root'@'localhost' identified by 'strong password';
A sample run looks like:
mysql> alter user 'root'@'localhost' identified by '[BX.D"7s!c';
Query OK, 0 rows affected (0.22 sec)
Once this is done, our MySQL setup is ready to use. For fine-tuning MySQL to handle production workloads, refer to our presentation Fine Tuning of MySQL 8.0, which was given at the Mydbops Database Meetup. Also, look at our other blogs that explore the coolest features of MySQL 8.0, which greatly help Database Administrators and Developers. https://mydbops.wordpress.com/2020/12/12/install-specific-version-mysql-8-using-yum/
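For convenience, the individual steps above can be strung together into a small shell script. The following is only a minimal sketch under the same assumptions as the post (CentOS 7, the mysql80-community repository, sudo or root privileges); the script name and the MYSQL_MINOR variable are illustrative and not part of the original post.
#!/bin/bash
# install_mysql_minor.sh - sketch of the walkthrough above (assumes CentOS 7 and sudo/root)
set -euo pipefail
MYSQL_MINOR="8.0.20"   # illustrative: the specific minor version to install
# 1. Enable the MySQL yum repository
sudo rpm -Uvh https://repo.mysql.com/mysql80-community-release-el7-3.noarch.rpm
# 2. Disable all MySQL repos so only the requested one is enabled explicitly below
sudo sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/mysql-community.repo
# 3. Install the requested minor version
sudo yum --enablerepo=mysql80-community install -y "mysql*${MYSQL_MINOR}*"
# 4. Start the service and print the generated temporary root password
sudo service mysqld start
sudo grep "temporary password" /var/log/mysqld.log
Run once on a fresh CentOS 7 host, this should leave the chosen MySQL minor version installed and started, with the temporary root password printed at the end so it can be reset as in step 7.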
amuawia · 6 years ago
Photo
Post has been published on http://muawia.com/open-source-indiaosi-2019-mydbops/
Open Source India(OSI) 2019 & Mydbops
amuawia · 6 years ago
Photo
Post has been published on http://muawia.com/summary-mydbops-database-meetup-2/
Summary – Mydbops Database meetup 2
globalmediacampaign · 5 years ago
Text
Dockerizing MySQL step by step.
As Database Engineers at Mydbops, we tend to solve multiple complex problems for our esteemed customers. To control system resources and scale up or down as needed, we are evaluating Docker and Kubernetes. Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. A container is more lightweight than a full virtual machine and boots up in seconds. Docker is also easy to use when you need a simple, single instance, and what is great about it is that it allows configuring multiple versions of MySQL side by side.
Docker Installation
Docker can be installed from the yum repository or the apt-get repository, depending on your Linux distribution. (I am using the CentOS 7 operating system in the following examples.)
[root@mydbopslabs202 vagrant]# yum install docker -y
To start the Docker service, run the following command:
[root@mydbopslabs202 vagrant]# systemctl start docker
How to pull the Docker MySQL images?
There are two official MySQL Docker repositories:
The Docker team maintains one (https://hub.docker.com/_/mysql), which can be pulled with a simple docker run mysql:latest
The MySQL team maintains the other (https://hub.docker.com/r/mysql/mysql-server/), which can be pulled with a simple docker run mysql/mysql-server:latest
I have used the Oracle MySQL team's image (latest) in the examples below, but there are many custom-built Docker images available on Docker Hub too. To download the MySQL Server image, run this command:
shell> docker pull mysql/mysql-server:tag
If :tag is omitted, the latest tag is used by default and the image for the latest GA version of MySQL Server is downloaded. For older versions, use the tags available for the repository with the above command.
Step 1: Pull the Docker image for MySQL
[root@mydbopslabs202 vagrant]# docker pull mysql/mysql-server:latest
Trying to pull repository docker.io/mysql/mysql-server ...
latest: Pulling from docker.io/mysql/mysql-server
c7127dfa6d78: Pull complete
530b30ab10d9: Pull complete
59c6388c2493: Pull complete
cca3f8362bb0: Pull complete
Digest: sha256:7cd104d6ff11f7e6a16087f88b1ce538bcb0126c048a60cd28632e7cf3dbe1b7
Status: Downloaded newer image for docker.io/mysql/mysql-server:latest
To list all the Docker images downloaded, run this command:
[root@mydbopslabs202 vagrant]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/mysql/mysql-server latest a7a39f15d42d 3 months ago 381 MB
Step 2: Start a MySQL Server instance
To start a new Docker container for MySQL Server, run the command below:
[root@mydbopslabs202 vagrant]# docker run --name=test -d mysql/mysql-server:latest -m 500M -c 1 --port=3306
585f3cec96f1636838b7327e80b10a0354f13fc0e9d4f06f07b3b99c59d2c319
The --name option supplies a custom name for your server container and is optional; if no container name is supplied, a random one is generated. -m sets the memory, -c the CPUs, and --port=3306 the port. Initialization of the container begins, and the container appears in the list of running containers when the docker ps command is executed.
[root@mydbopslabs202 vagrant]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4a6ad40ba073 mysql/mysql-server:latest "/entrypoint.sh my..." 30 seconds ago Up 29 seconds (health: starting) 3306/tcp, 33060/tcp test
Run this command to monitor the output from the container:
[root@mydbopslabs202 vagrant]# docker logs test
Once the initialisation is completed, the random password generated for the root user can be filtered from the log output; it should be reset on the initial login.
[root@mydbopslabs202 vagrant]# docker logs test 2>&1 | grep GENERATED
[Entrypoint] GENERATED ROOT PASSWORD: q0tyDrAbPyzYK+Unl4xopiDUB4k
Step 3: Connect to the MySQL Server within the container
Run the following command to start a mysql client inside the Docker container:
[root@mydbopslabs202 vagrant]# docker exec -it test mysql -uroot -p
Enter password:
When asked, enter the generated root password. You must reset the server root password because the MYSQL_ONETIME_PASSWORD option is true by default.
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'Data3ng!neer@123';
Query OK, 0 rows affected (0.01 sec)
Container shell access
To have shell access to your MySQL Server container, run the docker exec -it command to start a bash shell inside the container:
[root@mydbopslabs202 vagrant]# docker exec -it test bash
bash-4.2#
You can then run Linux commands inside the container. For example, to view the contents of the MySQL server's data directory inside the container, run this command:
bash-4.2# ls /var/lib/mysql
#innodb_temp binlog.000002 ca.pem ib_buffer_pool ibdata1 mysql.ibd performance_schema server-cert.pem undo_001 auto.cnf binlog.index client-cert.pem ib_logfile0 ibtmp1 mysql.sock private_key.pem server-key.pem undo_002 binlog.000001 ca-key.pem client-key.pem ib_logfile1 mysql mysql.sock.lock public_key.pem sys
bash-4.2# exit
exit
[root@mydbopslabs202 vagrant]#
Stopping and deleting a MySQL container
To stop the MySQL Server container you have created, run this command:
[root@mydbopslabs202 vagrant]# docker stop test
test
docker stop sends a SIGTERM signal to the mysqld process, so that the server is shut down gracefully. To start the MySQL Server container again, execute this command:
[root@mydbopslabs202 vagrant]# docker start test
test
To restart the MySQL Server, run this command:
[root@mydbopslabs202 vagrant]# docker restart test
test
To delete the MySQL container, stop it first and then run the docker rm command:
[root@mydbopslabs202 vagrant]# docker rm test
test
Storage management in Docker
By default, Docker stores data in its internal volume. To validate the location of the volumes, use the command:
[root@mydbopslabs202 vagrant]# docker inspect test
In that output, it shows:
"Image": "mysql/mysql-server:latest",
"Volumes": { "/var/lib/mysql": {} },
You can also change the location of the data directory and create one on the host for persistence. Having a volume outside the container allows other applications and tools to access it when needed, and it is also a best practice for databases: if you remove the Docker instance, the internal data directory is removed with it, so it is always better to keep the data directory (along with the DB log files) outside the container.
Containers help us with automation and easy customisation. We bundle all the needed DB tools and monitoring packages into a custom MySQL image, which helps scale database operations. We will evaluate more about containers in the upcoming days. https://mydbops.wordpress.com/2020/09/13/getting-started-with-dockerizing-mysql/
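Tying the storage note above back to a runnable command, here is a minimal sketch (not from the original post) of starting the container with the data directory bind-mounted from the host, so the data files survive container removal. The host path /data/mysql, the container name mysql-persist, and the password value are illustrative; the mysql/mysql-server image supports the MYSQL_ROOT_PASSWORD environment variable, so the generated one-time password step is not needed in this variant.
# Host directory that will hold the MySQL data directory (illustrative path)
mkdir -p /data/mysql
# -v bind-mounts the host directory as /var/lib/mysql inside the container,
# -e MYSQL_ROOT_PASSWORD sets the root password up front,
# -p publishes port 3306 so clients on the host can connect.
docker run --name=mysql-persist -d \
  -v /data/mysql:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD='MyS3cretPass!' \
  -p 3306:3306 \
  mysql/mysql-server:latest
# Even if the container itself is removed, the data files remain on the host:
docker rm -f mysql-persist
ls /data/mysql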
globalmediacampaign · 5 years ago
Text
Integrating MySQL tools with Systemd Service
In my day-to-day tasks as a DB Engineer at Mydbops, we use multiple MySQL tools for multiple use cases to ensure optimal performance and availability for the servers managed by our Remote DBA team. A tool like pt-online-schema-change can be used for DDL changes (see our overview of DDL algorithms), and when a tool needs to be scheduled for a longer period we tend to use screen or cron. Some of the problems we face when we daemonise a process or run it inside screen:
The daemon process gets killed when the server reboots.
The screen session might accidentally terminate while closing it.
There is no flexibility to start or stop the process when required.
These common problems can be overcome by using a systemd service.
What is systemd?
Systemd is a system and service manager for Linux operating systems. When running as the first process on boot (as PID 1), it acts as an init system that brings up and maintains userspace services. Here are a few use cases that can be made simple with a systemd service:
Integrating pt-heartbeat with a systemd service
Integrating auto kill using pt-kill with a systemd service
Integrating a multi-query killer with a systemd service
Integrating pt-heartbeat with a systemd service
We had a requirement to schedule pt-heartbeat to monitor replication lag for one of our clients under our database managed services. Here is the problem statement: the pt-heartbeat process was running as a daemon process, and the usual problem we faced was that when the system was rebooted for any maintenance, the pt-heartbeat process got killed, we started receiving replication lag alerts, and a manual fix was needed.
Script for pt-heartbeat:
/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update
Now let us integrate it with systemd. (Later in this post we also look at whether multiple pt-kill statements for different servers can be scheduled as a single process.)
$ cd /etc/systemd/system/
$ vi pt-heartbeat.service
##pt-heartbeat systemd service file
[Unit]
Description="pt-heartbeat"
After=network-online.target syslog.target
Wants=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=forking
ExecStart=/usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-heartbeat
Restart=always
ExecStart is the command that is executed when the service starts. Restart=always ensures the process is started again automatically whenever it stops, so the service comes back up after an OS reboot once it is enabled.
Once the new systemd unit file is in place, reload the systemd daemon and start the service:
$ systemctl daemon-reload
$ systemctl start pt-heartbeat
$ systemctl status pt-heartbeat -l
● pt-heartbeat.service - "pt-heartbeat"
Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
Active: active (running) since Mon 2020-06-20 13:20:24 IST; 10 days ago
Main PID: 25840 (perl)
Tasks: 1
Memory: 19.8M
CPU: 1h 1min 47.610s
CGroup: /system.slice/pt-heartbeat.service
└─25840 perl /usr/bin/pt-heartbeat --daemonize -D mydbops --host=192.168.33.11 --master-server-id 1810 --user=pt-hbt --password=vagrant --table heartbeat --insert-heartbeat-row --update
The service can be stopped just like any other systemctl-managed process:
$ systemctl stop pt-heartbeat
● pt-heartbeat.service - "pt-heartbeat"
Loaded: loaded (/etc/systemd/system/pt-heartbeat.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopping "pt-heartbeat"…
Jun 20 15:46:07 ip-192-168-33-11 systemd[1]: Stopped "pt-heartbeat".
Integrating auto kill using pt-kill with a systemd service
In production servers, long-running queries can spike system resource usage and degrade MySQL performance drastically, or might even get your MySQL process killed by the OOM killer. To avoid these hiccups, we can schedule a Percona pt-kill process based on the defined use case.
Scheduling the pt-kill service:
$ cd /etc/systemd/system/
$ vi pt-kill.service
#pt-kill systemd service file
[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=forking
ExecStart=/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always
$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
CGroup: name=dsystemd:/system/pt-kill.service
├─20974 perl /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 200 --kill --match-command Query --match-info (select|SELECT|Select) --match-user cron_ae --interval 10 --print --daemonize
Now we have configured a fail-safe pt-kill process.
Integrating a multi-query killer with a systemd service
Question: Is it possible to integrate multiple kill statements for different hosts as a single process?
Answer: Yes! It is possible and quite simple too. Just add the needed commands to a shell script file and make it executable. In the example below I have chosen three different servers: an RDS instance (more on AWS RDS and its myths) and a couple of virtual machines.
$ vi pt_kill.sh
/usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(mkt_ro|dash)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave1.log
/usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user "(user_qa|cron_ae)" --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave2.log
/usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select|SELECT|Select)" --match-user sbtest_ro --interval 10 --print --daemonize >> /home/vagrant/pt_kill_slave3.log
Scheduling pt-kill.service for multiple hosts:
#pt-kill systemd service file
[Unit]
Description="pt-kill"
After=network-online.target syslog.target
Wants=network-online.target
[Install]
WantedBy=multi-user.target
[Service]
Type=forking
ExecStart=/bin/bash /home/vagrant/pt_kill.sh
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=pt-kill
Restart=always
Reload the daemon and start the service:
$ systemctl daemon-reload
$ systemctl start pt-kill
$ systemctl status pt-kill
pt-kill.service - "pt-kill"
Loaded: loaded (/etc/systemd/system/pt-kill.service; enabled)
Active: active (running) since Wed 2020-06-24 11:00:17 IST; 5 days ago
CGroup: name=dsystemd:/system/pt-kill.service
├─20974 perl /usr/bin/pt-kill --user=pt_kill --host=test.ap-northeast-1.rds.amazonaws.com --password=awkS --busy-time 1000 --rds --kill --match-command Query --match-info "(select...
├─21082 perl /usr/bin/pt-kill --user=mon_ro --host=192.168.33.11 --password=pt@123 --busy-time 20 --kill --match-command Query --match-info "(select...
├─21130 perl /usr/bin/pt-kill --user=db_ro --host=192.168.33.12 --password=l9a40E --busy-time 200 --kill --match-command Query --match-info "(select...
This makes systemd a very useful and easy tool for scheduling MySQL tools in a database environment. There are many more systemd features, such as timers, that can be used for scheduling scripts and bypassing crontab altogether. Note: all of these are sample scripts; make sure you test them well before using them in production. https://mydbops.wordpress.com/2020/07/10/integrating-mysql-tools-with-systemd-service/
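As a pointer for the crontab replacement mentioned above, here is a minimal sketch of a systemd timer; the unit names and the script path /home/vagrant/slow_query_report.sh are illustrative and not part of the original post. A oneshot service describes what to run, and a matching .timer unit describes when to run it.
$ vi /etc/systemd/system/slow-query-report.service
#service unit that runs the script once per activation
[Unit]
Description="Slow query report"
[Service]
Type=oneshot
ExecStart=/bin/bash /home/vagrant/slow_query_report.sh
$ vi /etc/systemd/system/slow-query-report.timer
#timer unit that triggers the service on a schedule
[Unit]
Description="Run the slow query report daily at 01:00"
[Timer]
OnCalendar=*-*-* 01:00:00
Persistent=true
[Install]
WantedBy=timers.target
Reload the daemon, enable the timer, and verify the schedule:
$ systemctl daemon-reload
$ systemctl enable --now slow-query-report.timer
$ systemctl list-timers slow-query-report.timer
Persistent=true makes systemd run the job at the next boot if the machine was off at the scheduled time, which is behaviour plain cron does not give you.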