#recoverymethods
Text
Call for Abstracts! Showcase your research at the CME/CPD-accredited 5th World Cardiology and Cardiovascular Disease Conference, happening October 15-17, 2025, in Dubai, UAE & Online. The abstract submission deadline has been extended to April 30, 2025. Submit your abstract: https://cardiology.utilitarianconferences.com/submit-abstract Don’t miss this opportunity to gain well-deserved recognition for your work!
#CARDIOLOGYUCG #NeonatalMedicine #KidneyandPancreasTransplant #PositiveBirthExperienceCaseStudy #PhysicalTraumas&RecoveryMethods #WorldCardiologyDay #CardiovascularHealth #HeartAwareness #HeartHealth #CardiologyMatters #HeartDiseaseAwareness #HealthyHeart #BeatHeartDisease
0 notes
Text
#IceBathBenefits #ColdTherapy #RecoveryMethod #AthleteLife #WellnessTrend #FitnessRecovery #ChillOut #IceImmersion #HealthAndWellness #ColdTherapyBenefits
0 notes
Text
youtube
Half Ironman Training Secrets: Mental & Physical Domination! Essential recovery methods like Epsom salt baths and meditation. #HalfIronmanTraining #IronmanJourney #Podcast #EnduranceTraining #FitnessMotivation #RecoveryMethods #EpsomSaltBath #Meditation #CompressionTherapy #Visualization https://www.youtube.com/watch?v=6KNfbF3CnVo via Desire Too Inspire https://www.youtube.com/channel/UCuzS_IyVB36K0nTJdRUb9Yg June 11, 2025 at 01:45AM
#fitnessmotivation #healthtips #fitnessjourney #healthylifestyle #personalgrowth #wellnessjourney #motivation #health #podcast #Youtube
0 notes
Photo

Following the guidelines I have set up in my #energeticstressrelief program (#ESR), Brandon has shaved nearly 2:00 off his 5-mile run from just 2 short weeks ago, with a blistering pace of 6:08/mile. He’s got his sights set on qualifying for the 2020 Olympics (his distance is the 800-meter). We are building up his endurance for the 800 come Indoor season. I’ll share his morning routine, which always starts with:
• Brushing teeth and especially tongue to cleanse the mouth of toxins & bacteria
• #ascendedhealth Super Gum Oil swished around the teeth & gums to pull toxins out of the entire body and rebalance the body’s bacteria
• 32 ounces of #berkey filtered lemon water
• #Loisbreathing 4 steps
• One #beet, #califia coconut almond milk w/ #maca powder
• 30 minutes of hip-opening #hatha #yoga
• Alternating hot/cold shower w/ deep nose-breathing
• 1 hour of samatha/vipassana meditation
• 10 minutes #sunsalutations yoga #nabosomattechnology
• 10 minutes #dynamicfascialstretching
• #Run!
#intermittentfasting #vegan #veganathlete #recoverymethods #quantumfitnessorg #meditation #fascialbounding #dynamicstrength #vipr #progressiveloading #nomeatathlete (at Downtown Athletic Club)
#esr #dynamicstrength #hatha #beet #fascialbounding #energeticstressrelief #veganathlete #run #califia #quantumfitnessorg #loisbreathing #recoverymethods #yoga #maca #progressiveloading #nomeatathlete #dynamicfascialstretching #berkey #vegan #intermittentfasting #meditation #nabosomattechnology #vipr #ascendedhealth #sunsalutations
0 notes
Text
Cloning MySQL InnoDB Cluster data-at-rest encrypted tables
MySQL Server offers, among its other security features, encryption at rest (Transparent Data Encryption, TDE) in the commercial release (find the differences with Community here). While testing TDE with MySQL InnoDB Cluster, I was wondering what inner mechanism was implemented to deal with TDE, master keys and keyrings, and the clone plugin. If I need to clone an instance, how does everything work so as to guarantee that my InnoDB Cluster instances will still be encrypted, and that the chosen keyring will still use a good master key to decrypt tablespace keys and, finally, tablespace pages?

The answer is that this is fully managed. You can add instances to an InnoDB Cluster using the MySQL Shell clone option, and this guarantees that tables stay encrypted using the original tablespace key, while being decryptable using the local instance master key in its own keyring. This is specified in the related worklog. The instance to be added to the cluster will, indeed, need a keyring configured.

How does this work? When an instance is added to the cluster, for example using:

cluster.addInstance('USER@HOST:PORT', {recoveryMethod: 'clone'})

the following happens:

1. Pages are transferred unaltered from source to target (during FILE_COPY, all file data is cloned as it is). That means the data is transmitted encrypted.
2. In the source instance, the tablespace key is fetched from the tablespace header page (that is, the first page, page 0, together with other information) and decrypted using the source master key, then passed in clear. Encrypted communication between cluster instances is therefore mandatory, to avoid transmitting the tablespace key, which is capable of decrypting the data, in clear over the network.
3. When the data reaches the target instance, the original tablespace key, which was passed unencrypted, is encrypted again, but with the local master key, which is stored in the keyring adopted by that instance (see the list of supported keyrings).

As mentioned, if an SSL connection is not set up via the group_replication_ssl_mode setting, the clone operation will fail, because it is not secure to transmit unencrypted tablespace keys over the link.
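Before cloning, you can check from SQL which tablespaces are actually encrypted on the donor and whether group replication traffic is encrypted. A minimal sketch (information_schema.INNODB_TABLESPACES and group_replication_ssl_mode are standard MySQL 8.0 names; the queries are an illustration, not part of the original post):

-- List the encrypted tablespaces on the donor instance
SELECT NAME, ENCRYPTION
FROM information_schema.INNODB_TABLESPACES
WHERE ENCRYPTION = 'Y';

-- Clone of encrypted tables requires SSL between members
SELECT @@group_replication_ssl_mode;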
So, if you don’t have SSL enabled for intra-cluster communication:

cl.status()
{
    "clusterName": "mortensi",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:2356",
        "ssl": "DISABLED",

the clone operation will fail:

NOTE: 127.0.0.1:2358 is being cloned from 127.0.0.1:2357
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ============================================================    0%  Failed
    PAGE COPY  ============================================================    0%  Not Started
    REDO COPY  ============================================================    0%  Not Started
ERROR: The clone process has failed: Clone Donor Error: 3872 : Clone needs SSL connection for encrypted table.. (3862)

If SSL is configured, instead:

cl.status()
{
    "clusterName": "mortensi",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:2356",
        "ssl": "REQUIRED",

the clone operation will just flow smoothly:

* Waiting for clone to finish...
NOTE: 127.0.0.1:2358 is being cloned from 127.0.0.1:2357
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
NOTE: 127.0.0.1:2358 is shutting down...
* Waiting for server restart... ready
* 127.0.0.1:2358 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.33 MB transferred in about 1 second (~72.33 MB/s)

Read about the high-level architecture in the related worklog.

The post Cloning MySQL InnoDB Cluster data-at-rest encrypted tables appeared first on mortensi. https://www.mortensi.com/2021/05/cloning-mysql-innodb-cluster-data-at-rest-encrypted-tables/
0 notes
Text

#love
#instagood
#photooftheday
#fashion
#beautiful
#happy
#cute
#tbt
#like4like
#followme
#picoftheday
#follow
#me
#selfie
#summer
#art #Recovery #recoverybar #recoveryisntlinear #recoverYou #recoveryroleplay #RecoveryPointWestVirginia #recoverydoneright #recoverycommunity #recoveryday #recoveryorlando #recoverywins #recoveryagent #recoverymethod #recoveryformula #RecoveryInspiration #recoveryworlds #recoverytruck #recoveryforreal #RecoveryEP #RecoverySucks #recoveryzone1 #recoverydog #recoveryaftercare #RecoveryGalsArtExchange #recoveryhome #recoveryphase #recoverydatahardisk #recoveryfam #recoverytime
#instadaily #Overcome #overcomeevil #overcomefears #overcomeyourfear #OvercomeBullying #overcomeadversity #overcomeit #overcomeyourlimits #overcometattoo #overcomeyourself #overcomelawnservices #overcomer #overcomeyourpride #overcomers4FREEdom #overcomeyourfearsandyouwillconquertheworld #overcomechallenges #overcomeeverything #overcomeevilwithgood #overcomediabetes #overcomeselfdoubt #overcomesdarkness #overcomethepain #overcomedepression #overcomerofdisease #overcomeall #overcomeobsticals #overcome38 #overcomeHER #overcomeeveryobstacle #overcomecravings #friends #Depression #depressionstate #depressionlife #Depressionhasnoface #Depressionshilfe #depressioncampaign #depressionsdits #depressionconfessions #depressioncherry #DepressionIsNotAChoice #depressioniskillingme #depressionistnichtsch #depressioneon #depressionhurts #depressionsupport #depressionthougts #depressionfree #depressionisarealthing #depressionenhabenkeingesicht #depressionisnotajoke #depressionqoutes #depressiondrawing #depressione #depressionglassware #depressionepisode #depression #depressionisreal #DepressionIsAillness #depressionthings #depressionsucks #repost #nature #Anger #AngerFree #angermanagementsportsgym #angermovition #AngerKillsYou #angerese #angerfist #angeredarena #anger1 #angerdoesnotexsist #angerscity #angerapproved #angerisfear #angermaier #anger #angersenautomne #angeriffen #AngerMangement #ANGERFLARES #angermanagment #angeroffenrirteam #angermanagement #angerslafoliedouce #angerbr #angers #angerville #angergym #AngerAndRage #angerme #angera #girl #Disorder #disorderdesign #disorderedeating #disorderandreaart #disorderlyconduct #disorderedeatingrecovery #disordered #disordermagazine #disorders #disorder #disorderhelpers #DisorderNotDecision #disorderly #disorderrecordings #disorderlycats #disordertributeband #disorderandrea #disordermind #disorderlies #DisorderMag #madpeople #insane #dirty #gay #wrong #lesbian #animal #beast #satan #dysphoria #fun #Love #lovelaurenelizabeth #lovecraftbar #lovenocco #loveyourparrot #lovelydayforahike #lovelyflowers #lovesthekowloonwings #lovethesedoors #Lovelovestolovelove #loveVPL #lovebam #lovethecountry #lovepuppy #lovelygardenguide #lovedalla #lovemangaanime #lovemango #lovelocallondon #loveanimalsdonteatthem #loveballaratinautumn #lovemangaandanimeforever #loveyourworkcc #loveofbasketball #loveyouresthetican #lovelifeffs #Loveanddesignstudios #loveismyrealname #lovelylocksbeautyshop #LoveYouuuuu #style #Dream #dreamerwisherliar #dreamingthedream #dreameatermerry #DREAMTEAM1 #dreamcatchersb #dreaminteriors #dreamcatcherindia #dreamwedding4you #dreamrooms #dreamdiaries #dreamheaven #DreamLifeNow #dreamgabby #dreamribbons #dreambath #dreamingtravelcontest #dreamyarchilover #dreamequatcher #dreamlady17 #dreamqueen #dreamsintomemories #dreamlady #DreamLandLive #dreamautomotive #dreamtheater25th #dreambigproject #dreamhouse #dreamsdolls #dreamanimal #smile #Me #medicineman #merchantcenter #metrodetroitphotography #meikertaapartemen #messdinner2017 #membersofhumanity #messingerfoto #meykap #mesinindustri #mealpandeliveryph #mekashanghai #MeggaSale #merkava #mechanicalinfluences #meno26 #megangibbsphotography #MerissaHamilton #mememedico #medavitaartist #meatspecialist #mercedesclassegamg #merijnhos #mensfashionstoday #meetupfootball #metropoliamotorsport #menabrands #metallicclothing #MemeMomAllDay #MetavidaPer #food #Poetry #poetryinmetal #poetrywriting #poetryprints #poetryinmontreal #poetryfordays #poetrysociety #poetrythoughts #poetrybooks #poetryofforms #poetryreading
#poetry101 #poetrytribe #poetrybyme #PoetryRecommendation #poetrypoetryoftheworld #poetrywriters #poetryonline #poetryinmotion #poetryadventcalendar #poetryagram #Poetryurdu #poetryfromthesoul #poetryimages #poetryi #poetryofsl #poetrygames #PoetryInABoxSlippers #poetrybillboard #poetrypen
0 notes
Text
MySQL 8.0.19 InnoDB ReplicaSet Configuration and Manual Switchover
InnoDB ReplicaSet was introduced in MySQL 8.0.19. It works on top of MySQL asynchronous replication. Unlike InnoDB Cluster, InnoDB ReplicaSet does not provide high availability on its own, because failover has to be performed manually. AdminAPI includes support for InnoDB ReplicaSet, so we can operate it from the MySQL Shell.

InnoDB Cluster is the combination of MySQL Shell, Group Replication, and MySQL Router.
InnoDB ReplicaSet is the combination of MySQL Shell, traditional MySQL asynchronous replication, and MySQL Router.

Why InnoDB ReplicaSet?

- You can manually perform a switchover or failover with InnoDB ReplicaSet.
- You can easily add a new node to your replication environment.
- InnoDB ReplicaSet helps with data provisioning (using the MySQL clone plugin) and with setting up replication.

In this blog, I am going to explain the process involved in the following topics:

- How to set up an InnoDB ReplicaSet in a fresh environment?
- How to perform a manual switchover with ReplicaSet?

Before going into the topic, here is a summary of the points you should be aware of when working with InnoDB ReplicaSet:

- ReplicaSet only supports GTID-based replication environments.
- The MySQL version should be 8.x+.
- Only row-based replication is supported.
- Replication filters are not supported with InnoDB ReplicaSet.
- An InnoDB ReplicaSet has one primary node (master) and one or more secondary nodes (slaves). All the secondary nodes are configured under the primary node.
- There is no limit on the number of secondary nodes; you can configure many nodes under a ReplicaSet.
- It supports only manual failover.
- An InnoDB ReplicaSet should be managed entirely with the MySQL Shell.
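Given the GTID and row-format requirements above, a quick preflight query on each candidate server can save a restart later. A minimal sketch, using standard MySQL system variables (not part of the original post):

-- Expect ON / ON / ROW, and a unique server_id per instance
SELECT @@gtid_mode, @@enforce_gtid_consistency, @@binlog_format, @@server_id;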
How to set up the InnoDB ReplicaSet in a fresh environment?

I have created two servers (replicaset1, replicaset2) for testing purposes. My goal is to create an InnoDB ReplicaSet with one primary node and one secondary node. I installed Percona Server for MySQL 8.0.20 for my testing.

Step 1: Allow hostname-based communication. Make sure this is configured on all the servers participating in the ReplicaSet.

#vi /etc/hosts
172.28.128.20 replicaset1 replicaset1
172.28.128.21 replicaset2 replicaset2

Step 2: Prepare the MySQL instances for InnoDB ReplicaSet. Below are the major tasks that need to be performed as part of this operation:

- Create a dedicated user account to manage the ReplicaSet. The account will be created automatically with sufficient privileges.
- Update the MySQL parameters needed for InnoDB ReplicaSet (persisted settings).
- Restart the MySQL instance to apply the changes.

Command: dba.configureReplicaSetInstance()

Connecting the shell,

[root@replicaset1 ~]# mysqlsh --uri root@localhost
Please provide the password for 'root@localhost': *************
Save password for 'root@localhost'? [Y]es/[N]o/Ne[v]er (default No): y
MySQL Shell 8.0.20

Configuring the instance. Once you trigger the command, it will start to interact with you, and you have to choose the needed options.

MySQL localhost:33060+ ssl JS > dba.configureReplicaSetInstance()
Configuring local MySQL instance listening at port 3306 for use in an InnoDB ReplicaSet...

This instance reports its own address as replicaset1:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

ERROR: User 'root' can only connect from 'localhost'.
New account(s) with proper source address specification to allow remote connection from all instances must be created to manage the cluster.

1) Create remotely usable account for 'root' with same grants and password
2) Create a new admin account for InnoDB ReplicaSet with minimal required grants
3) Ignore and continue
4) Cancel

Please select an option [1]: 2
Please provide an account name (e.g: icroot@%) to have it created with the necessary privileges or leave empty and press Enter to cancel.
Account Name: InnodbReplicaSet
Password for new account: ********
Confirm password: ********

NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| enforce_gtid_consistency | OFF           | ON             | Update read-only variable and restart the server |
| gtid_mode                | OFF           | ON             | Update read-only variable and restart the server |
| server_id                | 1             |                | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+

Some variables need to be changed, but cannot be done dynamically on the server.
Do you want to perform the required configuration changes? [y/n]: y
Do you want to restart the instance after configuring it? [y/n]: y
Cluster admin user 'InnodbReplicaSet'@'%' created.
Configuring instance...
The instance 'replicaset1:3306' was configured to be used in an InnoDB ReplicaSet.
Restarting MySQL...
NOTE: MySQL server at replicaset1:3306 was restarted.

You can find the updated parameters in the file "mysqld-auto.cnf". The blog by Marco Tusa has more details about the PERSIST configuration.

[root@replicaset1 mysql]# cat mysqld-auto.cnf
{ "Version" : 1 , "mysql_server" : { "server_id" : { "Value" : "3391287398" , "Metadata" : { "Timestamp" : 1598084590766958 , "User" : "root" , "Host" : "localhost" } } , "read_only" : { "Value" : "OFF" , "Metadata" : { "Timestamp" : 1598084718849667 , "User" : "InnodbReplicaSet" , "Host" : "localhost" } } , "super_read_only" : { "Value" : "ON" , "Metadata" : { "Timestamp" : 1598084898510380 , "User" : "InnodbReplicaSet" , "Host" : "localhost" } } , "mysql_server_static_options" : { "enforce_gtid_consistency" : { "Value" : "ON" , "Metadata" : { "Timestamp" : 1598084590757563 , "User" : "root" , "Host" : "localhost" } } , "gtid_mode" : { "Value" : "ON" , "Metadata" : { "Timestamp" : 1598084590766121 , "User" : "root" , "Host" : "localhost" } } } } }

Note: Make sure that this step is executed on all the MySQL instances which are going to participate in the ReplicaSet, and that the cluster account name and password are the same on all of them.
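If you prefer to script Step 2 instead of answering prompts, the same call accepts options. A minimal sketch, assuming the account created above (the password is a placeholder, and the exact option combination should be verified against your Shell version):

// Non-interactive variant of Step 2 (sketch; password is a placeholder)
dba.configureReplicaSetInstance('root@localhost:3306', {
    clusterAdmin: 'InnodbReplicaSet',
    clusterAdminPassword: '********',
    interactive: false,
    restart: true
});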
Step 3: Switch the login to the ReplicaSet account created in Step 2.

MySQL localhost:33060+ ssl JS > \connect InnodbReplicaSet@replicaset1
Creating a session to 'InnodbReplicaSet@replicaset1'
Please provide the password for 'InnodbReplicaSet@replicaset1': ********
Save password for 'InnodbReplicaSet@replicaset1'? [Y]es/[N]o/Ne[v]er (default No): y
Fetching schema names for autocompletion... Press ^C to stop.
Closing old connection...
Your MySQL connection id is 8 (X protocol)
Server version: 8.0.20-11 Percona Server (GPL), Release 11, Revision 5b5a5d2

Step 4: Now, all is set to create the ReplicaSet.

Command: dba.createReplicaSet('')

MySQL replicaset1:33060+ ssl JS > dba.createReplicaSet('PerconaReplicaSet')
A new replicaset with instance 'replicaset1:3306' will be created.

* Checking MySQL instance at replicaset1:3306
This instance reports its own address as replicaset1:3306
replicaset1:3306: Instance configuration is suitable.
* Updating metadata...

ReplicaSet object successfully created for replicaset1:3306.
Use rs.addInstance() to add more asynchronously replicated instances to this replicaset and rs.status() to check its status.

The ReplicaSet is created with the name "PerconaReplicaSet".

Step 5: Assign the ReplicaSet to a variable and check the ReplicaSet status. The assignment can also be done while creating the ReplicaSet (i.e. var replicaset = dba.createReplicaSet('')).

MySQL replicaset1:33060+ ssl JS > replicaset = dba.getReplicaSet()
You are connected to a member of replicaset 'PerconaReplicaSet'.
MySQL replicaset1:33060+ ssl JS > replicaset.status()
{
    "replicaSet": {
        "name": "PerconaReplicaSet",
        "primary": "replicaset1:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "replicaset1:3306": {
                "address": "replicaset1:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

The ReplicaSet status states that the instance replicaset1 is operational and is the PRIMARY member.

Step 6: Now, add the secondary instance "replicaset2" to the ReplicaSet. A new instance must fulfil all the ReplicaSet requirements. There are two recovery methods when joining a new node:

- Clone: takes a snapshot from an ONLINE instance, builds the target node with that snapshot, and finally adds it to the ReplicaSet. This method is always recommended when adding fresh nodes.
- Incremental: relies on MySQL replication and applies all the transactions which are missing on the new instance. This can be faster when the amount of missing transactions is small.

Command: replicaset.addInstance('host:port')

MySQL replicaset1:33060+ ssl JS > replicaset.addInstance('replicaset2:3306')
Adding instance to the replicaset...

* Performing validation checks
This instance reports its own address as replicaset2:3306
replicaset2:3306: Instance configuration is suitable.

* Checking async replication topology...
* Checking transaction state of the instance...

NOTE: The target instance 'replicaset2:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether replication can completely recover its state.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'replicaset2:3306' with a physical snapshot from an existing replicaset member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

WARNING: It should be safe to rely on replication to incrementally recover the state of the new instance if you are sure all updates ever executed in the replicaset were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the replicaset or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
* Updating topology
Waiting for clone process of the new member to complete. Press ^C to abort the operation.
* Waiting for clone to finish...
NOTE: replicaset2:3306 is being cloned from replicaset1:3306
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed
** Stage RECOVERY: |
NOTE: replicaset2:3306 is shutting down...

* Waiting for server restart... ready
* replicaset2:3306 has restarted, waiting for clone to finish...
* Clone process has finished: 60.68 MB transferred in about 1 second (~60.68 MB/s)

** Configuring replicaset2:3306 to replicate from replicaset1:3306
** Waiting for new instance to synchronize with PRIMARY...

The instance 'replicaset2:3306' was added to the replicaset and is replicating from replicaset1:3306.

Here I have chosen the clone method for recovery.

MySQL replicaset1:33060+ ssl JS > replicaset.status()
{
    "replicaSet": {
        "name": "PerconaReplicaSet",
        "primary": "replicaset1:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "replicaset1:3306": {
                "address": "replicaset1:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            },
            "replicaset2:3306": {
                "address": "replicaset2:3306",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Slave has read all relay log; waiting for more updates",
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for master to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

The second instance has been added to the ReplicaSet.
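To skip the interactive recovery prompt, for example in scripts, the recovery method can be passed explicitly, as the NOTE in the output above suggests:

// Choose clone provisioning up front instead of answering the prompt
replicaset.addInstance('replicaset2:3306', {recoveryMethod: 'clone'});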
How to perform the manual switchover with ReplicaSet?

As per the current topology:

- replicaset1 is the PRIMARY
- replicaset2 is the SECONDARY

Requirement: for a maintenance activity, I am planning to remove the server "replicaset1" from the ReplicaSet. This needs to be performed in a safe manner, and the secondary instance "replicaset2" should remain available for application writes and reads.

- First, I need to promote "replicaset2" to PRIMARY.
- Then, remove "replicaset1" from the group.

Switching "replicaset2" to PRIMARY.

Command: replicaset.setPrimaryInstance('host:port')

MySQL replicaset1:33060+ ssl JS > replicaset.setPrimaryInstance('replicaset2:3306')
replicaset2:3306 will be promoted to PRIMARY of 'PerconaReplicaSet'.
The current PRIMARY is replicaset1:3306.

* Connecting to replicaset instances
** Connecting to replicaset1:3306
** Connecting to replicaset2:3306
** Connecting to replicaset1:3306
** Connecting to replicaset2:3306
* Performing validation checks
** Checking async replication topology...
** Checking transaction state of the instance...
* Synchronizing transaction backlog at replicaset2:3306
* Updating metadata
* Acquiring locks in replicaset instances
** Pre-synchronizing SECONDARIES
** Acquiring global lock at PRIMARY
** Acquiring global lock at SECONDARIES
* Updating replication topology
** Configuring replicaset1:3306 to replicate from replicaset2:3306

replicaset2:3306 was promoted to PRIMARY.

You can see that "replicaset2" has been promoted to PRIMARY.

MySQL replicaset1:33060+ ssl JS > replicaset.status()
{
    "replicaSet": {
        "name": "PerconaReplicaSet",
        "primary": "replicaset2:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "replicaset1:3306": {
                "address": "replicaset1:3306",
                "instanceRole": "SECONDARY",
                "mode": "R/O",
                "replication": {
                    "applierStatus": "APPLIED_ALL",
                    "applierThreadState": "Slave has read all relay log; waiting for more updates",
                    "receiverStatus": "ON",
                    "receiverThreadState": "Waiting for master to send event",
                    "replicationLag": null
                },
                "status": "ONLINE"
            },
            "replicaset2:3306": {
                "address": "replicaset2:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

Removing "replicaset1" from the group,

Command: replicaset.removeInstance('host:port')

MySQL replicaset1:33060+ ssl JS > replicaset.removeInstance('replicaset1:3306')
The instance 'replicaset1:3306' was removed from the replicaset.

MySQL replicaset1:33060+ ssl JS > replicaset.status()
{
    "replicaSet": {
        "name": "PerconaReplicaSet",
        "primary": "replicaset2:3306",
        "status": "AVAILABLE",
        "statusText": "All instances available.",
        "topology": {
            "replicaset2:3306": {
                "address": "replicaset2:3306",
                "instanceRole": "PRIMARY",
                "mode": "R/W",
                "status": "ONLINE"
            }
        },
        "type": "ASYNC"
    }
}

We can perform a forced failover using "ReplicaSet.forcePrimaryInstance()". This is dangerous and only recommended for disaster scenarios.

MySQL InnoDB ReplicaSet is a very good feature for managing MySQL asynchronous replication environments. It has CLONE plugin support, which greatly helps with data provisioning and setting up replication. But it still has some limitations when compared with MySQL InnoDB Cluster.

https://www.percona.com/blog/2020/08/27/mysql-8-0-19-innodb-replicaset-configuration-and-manual-switchover/
0 notes
Text
MySQL InnoDB Cluster Setup with Replication between 2 InnoDB Clusters
This is a tutorial on setting up 2 InnoDB Clusters and creating replication between them, using a REPLICATION FILTER to define a replication channel across the 2 InnoDB Clusters.

Background

An InnoDB Cluster setup is commonly found within a data center (DC1), where the network is good and reliable. Setting up another InnoDB Cluster in a second data center for DR purposes, with replication between them, is a typical example. To allow MySQL replication to be successfully created between 2 InnoDB Clusters, the following items must be considered carefully:

1. When an InnoDB Cluster is created, there is a mysql_innodb_cluster_metadata schema which stores the state and information about the 'cluster'. The 2 InnoDB Clusters in the 2 DCs are different, and the state information within 'mysql_innodb_cluster_metadata' is different for each InnoDB Cluster. MySQL replication may bring metadata updates from DC1 to the InnoDB Cluster in DC2, which may corrupt the metadata in DC2. We can define a MySQL REPLICATION FILTER to ignore updates to the 'mysql_innodb_cluster_metadata' database.

2. InnoDB Cluster is based on replication. Creating replication 'globally', the global replication settings may affect the replication channel used by Group Replication. In MySQL 8.0, a MySQL REPLICATION FILTER can be defined per CHANNEL. The replication must therefore be defined as a CHANNEL, and the REPLICATION FILTER applied to that specific CHANNEL. Note: MySQL 8.0 supports REPLICATION FILTER by CHANNEL.

3. GTIDs - the Global Transaction ID(s) define the database content 'logically'. @@GTID_EXECUTED is the set of GTIDs executed on the server/cluster. If 2 servers have the same @@GTID_EXECUTED, they are considered to have the same data content. Replication using GTIDs identifies any missing GTIDs and requests them from the master's BINLOG. If a transaction is not in the BINLOG, due to a BINLOG purge or a recovery without BINLOG, @@GTID_PURGED must be set up properly so that the server has the correct information.

4. Time setting (time and time zone): within an InnoDB Cluster, the time setting must be the same. The best way to achieve this is to have ntp set up to maintain the correct time/timezone setting.

5. An InnoDB Cluster may have 3 or more nodes, while replication is a point-to-point setup. There are 2 options for the MASTER setup:
a. Using the PRIMARY node
b. Using SECONDARY node(s)
By using MySQL Router (R1) against the InnoDB Cluster in DC1, the replication channel in DC2 can be set up with the master server/port pointing to R1, so that MySQL Router (R1) can determine the corresponding server to act as the master. An R1 port can be used to reach a SECONDARY node so that the primary node is off-loaded from the replication workload.

6. Any 'circular' / bi-directional replication setup between DC1 and DC2: replication is uni-directional. If bi-directional replication is considered, the corresponding details with GTIDs, ROUTER, FILTER, and the replication USER must be set up properly.

7. Determine how to 'clone' a server in DC2 from DC1. There are several options (see step 3 below).

Steps

1. Setup of MySQL InnoDB Cluster (cluster1) on DC1
2. Setup of MySQL InnoDB Cluster (cluster2) on DC2
3. Materialize data on cluster2
===> At this point the data on cluster1 = data on cluster2 <====
4. Setup MySQL Router [R1:6446] pointing to InnoDB Cluster 'cluster1', on the DC2 primary node.
5. Creating the REPLICATION USER on cluster1 (Note: the user will be replicated to cluster2 automatically)
6. Creating the REPLICATION CHANNEL and FILTER on cluster2 (primary node), and finally starting the SLAVE on the cluster2 primary node.
===> At this point, the replication from DC1 to DC2 is done <====
7. Optionally, set up 'super-read-only' on the cluster2 primary node. For some reason, people may not want any application to change any data on DC2.

primary-node-dc2 mysql> set global super_read_only=true;

Setup of MySQL InnoDB Cluster (cluster1) on DC1

a. Initialize the 3 servers:

mysqld --defaults-file=config/my1.cnf --initialize-insecure
mysqld --defaults-file=config/my2.cnf --initialize-insecure
mysqld --defaults-file=config/my3.cnf --initialize-insecure

Please check the sample configuration in Appendix A.

b. Start up the 3 servers:

mysqld_safe --defaults-file=config/my1.cnf &
mysqld_safe --defaults-file=config/my2.cnf &
mysqld_safe --defaults-file=config/my3.cnf &

c. Configure the group replication admin user:

mysqlsh -e "dba.configureInstance('root:@localhost:3310',{clusterAdmin:'gradmin',clusterAdminPassword:'grpass'});
dba.configureInstance('root:@localhost:3320',{clusterAdmin:'gradmin',clusterAdminPassword:'grpass'});
dba.configureInstance('root:@localhost:3330',{clusterAdmin:'gradmin',clusterAdminPassword:'grpass'});"

d. Creating InnoDB Cluster 'cluster1' on DC1:

mysqlsh --uri gradmin:grpass@primary:3310 -e "var x = dba.createCluster('cluster1', {exitStateAction:'OFFLINE_MODE', consistency:'BEFORE_ON_PRIMARY_FAILOVER', expelTimeout:30, memberSslMode:'REQUIRED', ipWhitelist:'192.168.56.0/24', clearReadOnly:true, interactive:false, localAddress:'node1:13310', autoRejoinTries:120, memberWeight:80 })"

mysqlsh --uri gradmin:grpass@primary:3310 -e "x = dba.getCluster()
x.addInstance('gradmin:grpass@node1:3320', {exitStateAction:'OFFLINE_MODE', recoveryMethod:'incremental', localAddress:'node1:13320', autoRejoinTries:120, memberWeight:70 })"

mysqlsh --uri gradmin:grpass@primary:3310 -e "x = dba.getCluster()
x.addInstance('gradmin:grpass@node1:3330', {exitStateAction:'OFFLINE_MODE', recoveryMethod:'incremental', localAddress:'node1:13330', autoRejoinTries:120, memberWeight:60 })
print(x.status())"

2. Setup of MySQL InnoDB Cluster (cluster2) on DC2

Repeat the same procedure as in "(1) Setup of MySQL InnoDB Cluster (cluster1) on DC1", but run it on DC2 with the cluster name 'cluster2'.
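Before materializing data and wiring replication, it is worth confirming that both clusters report a healthy topology. A quick sketch (not part of the original tutorial) reusing the gradmin account created above, with dba.getCluster().status() from standard AdminAPI:

mysqlsh --uri gradmin:grpass@primary:3310 -e "print(dba.getCluster().status())"
mysqlsh --uri gradmin:grpass@secondary:3340 -e "print(dba.getCluster().status())"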
3. Materialize data on cluster2

For demo/tutorial purposes, the setup is brand new, without real data, so the data is assumed to be the same at this step. The GTIDs on cluster2 and cluster1 are different at this point, but logically the data on cluster1 and cluster2 is the same (just a new database plus the group replication admin user). The manual setup below is purely for DEMO purposes; please consider other options for production use:

a. Manual configuration for a brand new server setup (this tutorial covers these steps)
b. Backup the server on DC1 and restore the data on DC2
c. Using the CLONE feature from MySQL
d. Extending cluster1 to 6 nodes and breaking off the 3 nodes in DC2
e. Other options...

The data content is 'assumed' to be the same as the data in DC1. Set the global @@GTID_EXECUTED and @@GTID_PURGED to be the @@GTID_EXECUTED from DC1, e.g. using mysqlsh to connect to 'primary':3310 to fetch @@GTID_EXECUTED, then using it to set up @@GTID_EXECUTED and @@GTID_PURGED on ALL nodes. Assuming 3 nodes on DC2 as secondary:3340, 3350 and 3360 (for demo purposes, the 3 nodes are running within the same VM, with hostname 'secondary'):

mysqlsh << EOF
c1=mysql.getSession('gradmin:grpass@secondary:3360')
q1=c1.runSql('stop group_replication;')
c1.close()
c1=mysql.getSession('gradmin:grpass@secondary:3350')
q1=c1.runSql('stop group_replication;')
c1.close()
c1=mysql.getSession('gradmin:grpass@secondary:3340')
q1=c1.runSql('stop group_replication;')
c1.close()
EOF

mysqlsh << EOF
c1=mysql.getSession('gradmin:grpass@primary:3310')
q1=c1.runSql('select @@gtid_executed;')
r1=q1.fetchAll()
println("GTID on Master : " + r1[0][0])
c2=mysql.getSession('gradmin:grpass@secondary:3340')
q2a=c2.runSql('select @@gtid_executed, @@gtid_purged;')
r2a=q2a.fetchAll()
println("GTID executed on Slave : " + r2a[0][0])
println("GTID purged on Slave : " + r2a[0][1])
q2b=c2.runSql('set global gtid_purged="' + r1[0][0]+'"')
c2=mysql.getSession('gradmin:grpass@secondary:3350')
q2a=c2.runSql('select @@gtid_executed, @@gtid_purged;')
r2a=q2a.fetchAll()
println("GTID executed on Slave : " + r2a[0][0])
println("GTID purged on Slave : " + r2a[0][1])
q2b=c2.runSql('set global gtid_purged="' + r1[0][0]+'"')
c2=mysql.getSession('gradmin:grpass@secondary:3360')
q2a=c2.runSql('select @@gtid_executed, @@gtid_purged;')
r2a=q2a.fetchAll()
println("GTID executed on Slave : " + r2a[0][0])
println("GTID purged on Slave : " + r2a[0][1])
q2b=c2.runSql('set global gtid_purged="' + r1[0][0]+'"')
EOF

4. Setup MySQL Router [R1:6446] pointing to InnoDB Cluster 'cluster1', on the DC2 primary node.

On primary node on DC2 > mysqlrouter --bootstrap gradmin:grpass@primary:3310 --force --directory config/myrouter
if [ $? -eq 0 ]
then
  echo "starting router...."
  ./config/myrouter/start.sh
fi

5. Creating the REPLICATION USER on cluster1 (Note: the user will be replicated to cluster2 automatically). The "repl@'%'" user is created for demo purposes only.

On primary node on DC1 > mysql -uroot -h127.0.0.1 -P3310 -e "drop user if exists repl@'%';"
On primary node on DC1 > mysql -uroot -h127.0.0.1 -P3310 -e "create user repl@'%' identified with mysql_native_password by 'repl';"
On primary node on DC1 > mysql -uroot -h127.0.0.1 -P3310 -e "grant replication slave on *.* to repl@'%';"

6. Creating the REPLICATION CHANNEL and FILTER on cluster2 (primary node), and finally starting the SLAVE on the cluster2 primary node.

The replication channel 'channel1' is created to build replication against the local MySQL Router port (6447).
It routes to the SECONDARY node on cluster1 as the MASTER server. The replication filter is created on 'channel1' to ignore the 'mysql_innodb_cluster_metadata' schema.

On primary node on DC2 > mysql -uroot -h127.0.0.1 -P3340 << EOF
CHANGE MASTER TO
  master_host = '127.0.0.1',
  master_port = 6447,
  master_user = 'repl',
  master_password = 'repl',
  master_auto_position = 1
FOR CHANNEL 'channel1';
CHANGE REPLICATION FILTER REPLICATE_IGNORE_DB = (mysql_innodb_cluster_metadata)
FOR CHANNEL 'channel1';
START SLAVE FOR CHANNEL 'channel1';
SHOW SLAVE STATUS FOR CHANNEL 'channel1'\G
EOF
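Once the slave is started, you can verify that the per-channel filter is actually attached to 'channel1' before trusting the link. A minimal verification sketch, not part of the original tutorial (performance_schema.replication_applier_filters is a standard MySQL 8.0 table):

On primary node on DC2 > mysql -uroot -h127.0.0.1 -P3340 -e "SELECT CHANNEL_NAME, FILTER_NAME, FILTER_RULE FROM performance_schema.replication_applier_filters WHERE CHANNEL_NAME='channel1';"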
Appendix A - Sample configuration for my[1|2|3|4|5|6].cnf

my1.cnf
[mysqld]
datadir=/home/mysql/data/3310
basedir=/usr/local/mysql
log-error=/home/mysql/data/3310/my.error
port=3310
socket=/home/mysql/data/3310/my.sock
mysqlx-port=33100
mysqlx-socket=/home/mysql/data/3310/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=101
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=primary

my2.cnf
[mysqld]
datadir=/home/mysql/data/3320
basedir=/usr/local/mysql
log-error=/home/mysql/data/3320/my.error
port=3320
socket=/home/mysql/data/3320/my.sock
mysqlx-port=33200
mysqlx-socket=/home/mysql/data/3320/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=102
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=primary

my3.cnf
[mysqld]
datadir=/home/mysql/data/3330
basedir=/usr/local/mysql
log-error=/home/mysql/data/3330/my.error
port=3330
socket=/home/mysql/data/3330/my.sock
mysqlx-port=33300
mysqlx-socket=/home/mysql/data/3330/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=103
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=primary

my4.cnf [ on DC2 ]
[mysqld]
datadir=/home/mysql/data/3340
basedir=/usr/local/mysql
log-error=/home/mysql/data/3340/my.error
port=3340
socket=/home/mysql/data/3340/my.sock
mysqlx-port=33400
mysqlx-socket=/home/mysql/data/3340/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=104
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=secondary

my5.cnf [ on DC2 ]
[mysqld]
datadir=/home/mysql/data/3350
basedir=/usr/local/mysql
log-error=/home/mysql/data/3350/my.error
port=3350
socket=/home/mysql/data/3350/my.sock
mysqlx-port=33500
mysqlx-socket=/home/mysql/data/3350/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=105
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=secondary

my6.cnf [ on DC2 ]
[mysqld]
datadir=/home/mysql/data/3360
basedir=/usr/local/mysql
log-error=/home/mysql/data/3360/my.error
port=3360
socket=/home/mysql/data/3360/my.sock
mysqlx-port=33600
mysqlx-socket=/home/mysql/data/3360/myx.sock
log-bin=logbin
relay-log=logrelay
binlog-format=row
binlog-checksum=NONE
server-id=106
# enable gtid
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=true
# Table based repositories
master-info-repository=TABLE
relay-log-info-repository=TABLE
# Extraction Algorithm
transaction-write-set-extraction=XXHASH64
report-host=secondary

http://mysqlhk.blogspot.com/2020/06/mysql-innodb-cluster-setup-with.html
0 notes