#check Snapshot Replication Status
Explore tagged Tumblr posts
techdirectarchive · 5 months ago
Text
Fix Task failed to perform Scheduled Snapshot Replication
Snapshot Replication provides schedulable, near-instantaneous data protection. This package helps ensure that your business data located in shared folders or in virtual machines stored in LUNs remains safe and available in the event of a disaster. With this solution, Snapshot Replication automatically takes snapshots at a predetermined time and frequency. In this article, we shall discuss…
0 notes
atliqtechnologies · 11 months ago
Text
When You’re Busy, You’re Dumb: Redefining Success in a Hectic World
Ever noticed how being busy has become a badge of honor? It’s almost as if we equate jam-packed schedules with success and self-worth. “How have you been?” “Oh, you know, super busy!” We wear our busyness like a medal, but what if this constant hustle is making us…stupid?
Yes, you read that right. Being perpetually busy might just be the biggest mistake we’re all making. It’s time to debunk the myth that busyness equals productivity. In this post, we’ll explore why being busy isn’t something to be proud of, how outsourcing can free up your time, the art of thinking better, and why a less hectic schedule can make you smarter.
Ready to trade your busy badge for a smarter, more fulfilling life?  
Being Busy is Not a Badge of Honor
Cultural Perception
In today’s fast-paced world, busyness has become a symbol of status and success. We’ve all heard the phrase “I’m so busy” used as a default response to “How are you?” It’s as if being constantly occupied is a sign that we are important, hardworking, and indispensable. Social media often exacerbates this perception, showcasing meticulously curated snapshots of packed schedules and endless to-do lists. The underlying message is clear: if you’re not busy, you’re not trying hard enough.
Reality Check
However, equating busyness with productivity is a fundamental misunderstanding. Being busy often means juggling numerous tasks without necessarily making meaningful progress on any of them. True productivity is about working smarter, not harder. It involves prioritizing tasks that genuinely move the needle, rather than just keeping ourselves occupied. In contrast to busyness, productivity is strategic and focused, leading to tangible results and a greater sense of accomplishment.
Examples of Common Busywork
Consider the average workday filled with constant email checking, attending endless meetings, and managing minor administrative tasks. These activities often feel urgent but add little value to our core goals. For instance, spending hours crafting the perfect email or sitting in unproductive meetings might keep us occupied, but they rarely contribute significantly to our long-term objectives.
Debunking the Myth of Busyness
Numerous studies and expert opinions have challenged the notion that busyness is synonymous with productivity. Research from Harvard Business Review highlights that constantly being busy can lead to burnout, reduced creativity, and lower overall performance. Additionally, a study by McKinsey & Company found that executives who set aside time for strategic thinking and deep work are significantly more effective than those who fill their days with back-to-back activities.
Experts like Cal Newport, author of “Deep Work,” advocate for the importance of focused, undistracted work. Newport argues that deep work—professional activities performed in a state of distraction-free concentration that pushes cognitive capabilities to their limit—creates new value, improves skills, and is hard to replicate. This is in stark contrast to shallow work, which is often non-cognitively demanding and easily replicated, yet consumes a large portion of our time.
Outsource (Almost) Everything
Importance of Delegation
Outsourcing is a powerful tool that can drastically improve your productivity and overall quality of life. By delegating tasks, you free up valuable time that can be spent on high-value activities that truly matter.
Time-saving: Outsourcing routine and time-consuming tasks allows you to focus on what you do best, leaving the rest to specialists.
Improved focus on high-value activities: When you delegate, you can channel your energy into strategic and creative tasks that drive significant results.
What to Outsource
Outsourcing isn’t just for businesses; it’s for personal tasks too. Here’s a list of tasks you can delegate:
PERSONAL:
Cleaning: Hire a cleaning service to maintain your home.
Grocery shopping: Use delivery services to save time.
Administrative tasks: Personal assistants can handle scheduling, emails, and more.
PROFESSIONAL:
Bookkeeping: Let accountants manage your finances.
Marketing: Hire social media, content creation, and advertising experts.
Routine reports: Delegate data analysis and reporting to specialized services.
How to Outsource Effectively
To make the most of outsourcing, follow these tips for finding reliable help and managing outsourced tasks:
Research & Vetting: Choose providers with strong reviews and recommendations. Interview them to ensure they meet your standards.
Clear Communication: Define your expectations and provide detailed instructions to avoid misunderstandings.
Use Technology: Leverage project management tools to keep track of outsourced tasks and ensure timely completion.
Monitor Performance: Regularly review the quality of work and provide feedback to maintain high standards.
By outsourcing (almost) everything, you can reduce your workload, increase your efficiency, and focus on what truly matters, leading to a more balanced and productive life.
The Art of Thinking Better
Quality Over Quantity
The art of deep, focused thinking often gets overlooked in a world that celebrates multitasking and constant busyness. Yet it’s this quality of thought that truly drives innovation, problem-solving, and personal growth.
Importance of Deep Thinking: Deep, focused thinking allows you to delve into complex problems, generate creative solutions, and make informed decisions.
Contrast with Multitasking: Multitasking, on the other hand, divides your attention and can lead to shallow information processing, reducing overall effectiveness.
Techniques for Better Thinking
To enhance your cognitive abilities and foster deep thinking, consider incorporating these techniques into your daily routine:
Time-blocking: Allocate specific periods during your day exclusively for deep work. Minimize distractions and immerse yourself fully in the task at hand.
Mindfulness & Meditation: Practices that promote mindfulness help reduce stress, improve concentration, and enhance cognitive clarity.
Continuous Learning: Engage in lifelong learning by exploring new subjects, acquiring new skills, and challenging your mind with diverse perspectives and information.
When You’re Less Busy, You Are Less Stupid
Cognitive Benefits
A less hectic schedule isn’t just about having more free time—it significantly enhances mental clarity and decision-making. When you’re not constantly rushing from one task to another, your mind can focus better, leading to sharper insights and more effective problem-solving.
Creativity & Innovation
Downtime and relaxation are catalysts for creativity and innovation. When your schedule allows for moments of reflection and unplugging, you allow your brain to connect ideas, think outside the box, and come up with novel solutions.
Health & Well-being
Maintaining a balanced schedule has profound effects on both physical and mental health:
Reduced stress: A less busy lifestyle reduces stress levels, allowing you to approach challenges with a clear mind and greater resilience.
Better sleep: Improved sleep quality results from reduced mental clutter and stress, leading to enhanced cognitive function and overall well-being.
Enhanced mood & relationships: With more time for self-care and meaningful connections, you cultivate healthier relationships and a more positive outlook on life.
In a world where busyness often masquerades as productivity and success, it’s crucial to rethink our approach. The notion that being constantly busy equates to achievement is a misconception that overlooks the true essence of productivity and personal fulfillment.
Busyness isn’t Productivity: Simply filling our schedules with tasks doesn’t necessarily translate to meaningful progress or success. True productivity involves prioritizing tasks that align with our goals and values.
The Power of Delegation: Outsourcing tasks can free up time for activities that truly matter, allowing us to focus on high-impact endeavors and nurturing creativity.
The Art of Thinking Better: Deep, focused thinking trumps multitasking in fostering innovation and effective decision-making. Techniques like time-blocking and mindfulness can significantly enhance cognitive clarity.
Health and Well-being: A less hectic schedule not only reduces stress but also improves sleep quality, enhances mood, and fosters better relationships—key ingredients for a balanced and fulfilling life.
As we conclude, it’s clear that being less busy doesn’t equate to being less productive or less successful. Instead, it enables us to cultivate a lifestyle that prioritizes quality over quantity, creativity over busyness, and well-being over stress.
Let’s redefine success. Let’s shift from glorifying busyness to valuing purposeful action and mindful living. By embracing a balanced approach, we can achieve not only professional success but also personal happiness and fulfillment.
So, let’s dare to be less busy and strive to be smarter, healthier, and happier individuals. It’s time to reclaim our time and our lives. Remember, busy should never be the new measure of intelligence—it’s about working smarter, not harder.
Check out the Original Article
0 notes
globalmediacampaign · 5 years ago
Text
MySQL NDB Cluster Backup & Restore In An Easy Way
In this post, we will see how easily a user can take an NDB Cluster backup and then restore it. NDB Cluster supports online backups, which are taken while transactions are modifying the data being backed up. In NDB Cluster, each backup captures all of the table content stored in the cluster.

A user can take a backup in the following states:
When the cluster is live and fully operational
When the cluster is live, but in a degraded state: some data nodes are down, or some data nodes are restarting
During read and write transactions

Users can restore backups in the following cluster environments:
Restore to the same physical cluster
Restore into a different physical cluster
Restore into a cluster with a different configuration, i.e. a backup taken from a 4-data-node cluster and restored into an 8-data-node cluster
Restore into a different cluster version

Backups can be restored flexibly:
Restore can be run locally or remotely with respect to the data nodes
Restore can be run in parallel across data nodes
A partial set of the tables captured in the backup can be restored

Use cases of backup and restore:
Disaster recovery - setting up a cluster from scratch
Setting up NDB Cluster asynchronous replication
Recovery from user/DBA accidents like dropping a table/database, schema changes, etc.
During an NDB Cluster software upgrade

Limitations:
Schemas and table data for tables stored using the NDB Cluster engine are backed up. Views, stored procedures, triggers and tables/schemas from other storage engines like InnoDB are not backed up; users need to use other MySQL backup tools like mysqldump/mysqlpump etc. to capture these.
Only full backups are supported. No incremental or partial backup is supported.

NDB Cluster backup and restore concept in brief:
In NDB Cluster, tables are horizontally partitioned into a set of partitions, which are then distributed across the data nodes in the cluster. The data nodes are logically grouped into nodegroups. All data nodes in a nodegroup (up to four) contain the same sets of partitions, kept in sync at all times. Different nodegroups contain different sets of partitions. At any time, each partition is logically owned by just one node in one nodegroup, which is responsible for including it in a backup.

When a backup starts, each data node scans the set of table partitions it owns, writing their records to its local disk. At the same time, a log of ongoing changes is also recorded. The scanning and logging are synchronised so that the backup is a snapshot at a single point in time. Data is distributed across all the data nodes, and the backup occurs in parallel across all nodes, so that all data in the cluster is captured. At the end of a backup, each data node has recorded a set of files (*.data, *.ctl, *.log), each containing a subset of cluster data.

During restore, each set of files will be restored [in parallel] to bring the cluster to the snapshot state. The CTL file is used to restore the schema, the DATA file is used to restore most of the data, and the LOG file is used to ensure snapshot consistency.

Let’s look at the NDB Cluster backup and restore feature through an example. To demonstrate this feature, let’s create an NDB Cluster with the environment below:
NDB Cluster 8.0.22 version
2 management servers
4 data node servers
2 mysqld servers
6 API nodes
NoOfReplicas = 2

If you are wondering how to set up an NDB Cluster, then please look into my previous blog here.

Step 1: Before we start the cluster, let’s modify the cluster config file (config.ini) for backup.
When a backup starts, it creates 3 files (BACKUP-backupid.nodeid.Data, BACKUP-backupid.nodeid.ctl, BACKUP-backupid.nodeid.log) under a directory named BACKUP. By default, this BACKUP directory is created under each data node’s data directory. It is advisable to create this BACKUP directory outside the data directory. This can be done by adding the config variable ‘BackupDataDir’ to the cluster configuration file, i.e. config.ini. In the example below, I have assigned a path to ‘BackupDataDir’ in config.ini:

BackupDataDir=/export/home/saroj/mysql-tree/8.0.22/ndbd/node1/data4

Step 2: Let’s look at the cluster from the management client (bin/ndb_mgm).

Step 3: As the cluster is up and running, let’s create a database and a table and do some transactions on it. Let’s insert rows into table ‘t1’ either through SQL or through any tool. Let’s continue the row insertion through SQL so that we have a significant amount of data in the cluster. Let’s check the row count of table ‘t1’. From the below image, we can see that table ‘t1’ has ‘396120’ rows in it.

Step 4: Now issue a backup command from the management client (bin/ndb_mgm) while some transactions on the table ‘t1’ are going on. We will delete rows from table ‘t1’ and issue a backup command in parallel. While the delete ops are going on, issue a backup command from the management client. Let’s check the new row count of table ‘t1’ after all the delete ops have finished. From the below image, we can see that the table ‘t1’ now has ‘306120’ rows.

Let’s look at the files the backup created. As we have assigned a path for the backup files, let’s discuss these files in brief. From the above image, we can see that for each backup, one backup directory is created (BACKUP-backupid), and under each backup directory, 3 files are created. These are:

BACKUP-backupid-0.node_id.Data (BACKUP-1-0.1.Data): This file contains most of the data stored in the table fragments owned by this node. In the above example, 1 is the backup id, 0 is a hardcoded value for future use, and 1 is the node id of data node 1.

BACKUP-backupid.node_id.ctl (BACKUP-1.1.ctl): This file contains table metadata, i.e. table definitions and index definitions.

BACKUP-backupid.node_id.log (BACKUP-1.1.log): This file contains all the row changes that happened to the tables while the backup was in progress. These logs are executed during restore, either as a roll forward or a roll back, depending on whether the backup is snapshot start or snapshot end.

Note: A user can restore from anywhere, i.e. it doesn’t have to be from any particular data node. ndb_restore is an NDB API client program, so it can run anywhere that can access the cluster.

Step 5: Upon successful completion of a backup, the output will look like below. From the above image, Node 1 is the master node that initiated the backup, node 254 is the management node on which the START BACKUP command was issued, and Backup 1 is the 1st backup taken. #LogRecords ‘30000’ indicates that while the backup was in progress some transactions were also running on the same table. #Records shows the number of records captured across the cluster.

The user can also see the backup status from the “cluster log” as shown below:
2021-01-12 15:00:04 [MgmtSrvr] INFO -- Node 1: Backup 1 started from node 254
2021-01-12 15:01:18 [MgmtSrvr] INFO -- Node 1: Backup 1 started from node 254 completed. StartGCP: 818 StopGCP: 855 #Records: 306967 #LogRecords: 30000 Data: 5950841732 bytes Log: 720032 bytes

So this concludes our NDB Cluster backup procedure.

Step 6: We will now try to restore the data from the backup taken above.
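Before moving on to the restore, here is a rough sketch of the backup command that Step 4 refers to; the connect string is assumed, and the defaults (SNAPSHOTEND, WAIT COMPLETED) match the roll-forward behaviour of the log file described above:
# Trigger a backup from the management client and wait for it to finish
shell> bin/ndb_mgm -e "START BACKUP WAIT COMPLETED"
# Or pass an explicit backup id and return as soon as the backup has started
shell> bin/ndb_mgm --ndb-connectstring=cluster-test01:1186 -e "START BACKUP 1 SNAPSHOTEND WAIT STARTED"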
Let’s shut down the cluster, clean up all the files except the backup files, and then start the cluster with initial (with no data). Let’s restore the backup to a different cluster. From the below image, we can see that the data node IDs are different from the cluster where the backup was taken. Now let’s see whether our database ‘test1’ exists in the cluster after the initial start. From the above image, we can see that database ‘test1’ is not present. Now let’s start our restore process from the backup image #1 (BACKUP-1).

The NDB restore works in this flow:
It first restores the metadata from the *.ctl file so that all the tables/indexes can be recreated in the database.
Then it restores the data files (*.Data), i.e. it inserts all the records into the tables in the database.
At the end, it executes all the transaction logs (*.log), rolling back or rolling forward to make the database consistent.

Since restore will fail while restoring the unique and foreign key constraints that are taken from the backup image, the user must disable the indexes at the beginning, and once restore is finished, the user needs to rebuild the indexes.

Step 7: Let’s start the restoration of metadata. The metadata restore, disable indexes and data restore can be executed in one go, or can be done serially. This restore command can be issued from any data node, or from a non-data node as well. In this example, I am issuing the metadata restore and disable indexes from Data Node 1 only, and only once. Upon successful completion, I will issue the data restore.

Data Node 1:
shell> bin/ndb_restore -n node_id -b backup_id -m --disable-indexes --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

-n: node id of the data node from where the backup was taken. Do not confuse this with the data node id of the new cluster.
-b: backup id (we have taken backup id ‘1’)
-m: metadata restoration (recreate tables/indexes)
--disable-indexes: disable restoration of indexes during restore of data
--ndb-connectstring (-c): connection to the management nodes of the cluster
--backup_path: path to the backup directory where the backup files exist

The result of the above metadata restore from data node 1 is shown below. Let’s start the data restore on Data Node 1.

Data Node 1:
shell> bin/ndb_restore -n node_id -b backup_id -r --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

Below, I am trying to capture the logs from the data restore run as it started and then at the end. From the above image, we can see that the restore went successfully.

Restore skips restoration of system table data. The system tables referred to here are tables used internally by NDB Cluster, and these tables should not be overwritten by the data from a backup. Backup data is restored in fragments, so whenever a fragment is found, ndb_restore checks whether it belongs to a system table. If it does belong to a system table, ndb_restore decides to skip restoring it and prints a ‘Skipping fragment’ log message.

Let’s finish all the remaining data restores from the other data nodes. These data restores can be run in parallel to minimise the restore time. Here, we don’t have to pass -m, --disable-indexes to the restore command again, as we need to do that only once. With the first restore’s completion, the tables, indexes etc. have already been created, so there is no need to recreate them (doing so would also fail). Once all the data is restored into the table(s), we will enable the indexes and constraints again using the --rebuild-indexes option.
Note that rebuilding the indexes and constraints like this ensures that they are fully consistent when the restore completes.

Data Node 2:
shell> bin/ndb_restore -n node_id -b backup_id -r --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

Data Node 3:
shell> bin/ndb_restore -n node_id -b backup_id -r --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

Data Node 4:
shell> bin/ndb_restore -n node_id -b backup_id -r --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

ndb_restore is an NDB API client program, so it needs API slots to connect to the cluster. Since we have initiated 3 ndb_restore programs in parallel from data node IDs 4, 5 and 6, we can see from the below image that ndb_restore took API IDs 47, 48 and 49. Let’s see the results from the remaining data nodes. Since all the ndb_restore runs finished successfully, we can see that the API IDs they had taken to connect to the cluster have been released.

The last step is to rebuild the indexes. This can also be done from any data node or from any non-data node, but only once.

Data Node 1:
shell> bin/ndb_restore -n node_id -b backup_id --rebuild-indexes --ndb-connectstring=cluster-test01:1186,cluster-test02:1186 --backup_path=/path/to/backup directory

--rebuild-indexes: enables rebuilding of ordered indexes and foreign key constraints.

Step 8: So we have finished our restoration steps. Let’s check the database, the tables, the row counts in the tables, etc. The database ‘test1’ is already created. Now we can see that table ‘t1’ has been created and the row count is 306120, which also matches our backup image (look at Step 4).

So this concludes our NDB Cluster backup and restore feature. There are many more options a user can pass to both the backup (START BACKUP) and restore (ndb_restore) programs based on their requirements. In the above example, I have selected the basic minimum options a user might need for backup and restore. For more information on these options, please refer to the NDB Cluster reference manual here. https://clustertesting.blogspot.com/2021/01/mysql-ndb-cluster-backup-restore-in.html
1 note · View note
karonbill · 4 years ago
Text
DELL EMC DES-DD33 Practice Test Questions
Valid DES-DD33 Practice Test Questions presented by PassQuestion are the perfect source for your preparation. They cover all the topics that are to be tested in the DELL EMC DES-DD33 exam. PassQuestion DES-DD33 Practice Test Questions will enable you to pass the Specialist - Systems Administrator, PowerProtect DD exam on the first attempt. It is highly recommended to go through detailed DES-DD33 Practice Test Questions so you can clear up your concepts before taking the DES-DD33 PowerProtect DD exam. Make sure that you are using up-to-date DES-DD33 Practice Test Questions so you can easily clear the DELL EMC DES-DD33 exam on the first try.
Specialist – Systems Administrator, PowerProtect DD (DES-DD33) Exam
The DES-DD33 exam is a qualifying exam for the Specialist – Systems Administrator, PowerProtect DD (DCS-SA) certification. This exam assesses the knowledge and skills required to configure and manage PowerProtect DD systems. It also covers the concepts, features, administration, and integration with other products. There are two parts to the DES-DD33 exam, and you must pass both parts to get certified.
Exam Information
Part 1: Duration: 90 Minutes Number of Questions: 54 Questions Pass Score: 63%
Part 2: Duration: 30 Minutes Number of Questions: 6 Simulations Pass Score: 66%
Exam Objectives
PowerProtect DD Concepts and Features (20%)
Explain the key differentiators of the Dell EMC PowerProtect DD deduplication technology, including SISL, DIA, In-line versus Post Process deduplication, and file versus block storage.
Identify typical Dell EMC PowerProtect DD backup and recovery solutions and describe PowerProtect DD product positioning.
Identify and describe various Dell EMC PowerProtect DD software options and the functionality they enable.
Cloud Tier Administration (15%)
Describe Dell EMC Cloud Tier features, benefits, architecture and use cases.
Perform administrative tasks on Dell EMC PowerProtect DD systems with the Dell EMC Cloud Tier, including adding and expanding storage, adjusting compression settings, deleting or reusing storage units, configuring replication and disaster recovery.
PowerProtect DD Implementation in Backup Environments and Integration with Application Software (15%)
Distinguish between key backup software components.
Recognize the packet flow in a typical backup environment with and without a Dell EMC PowerProtect DD system.
Describe key information points for a backup and recovery solution using DD Boost/OST technology.
Implement best practices and system tuning procedures for optimal performance of backup environments.
PowerProtect DD System Administration (50%)
Implement Dell EMC Data Domain system with key protocols, including NFS/CIFS, DD Boost, VTL, and NDMP.
Implement Dell EMC PowerProtect DD system with key technologies, including data security, link aggregation/failover, fibre channel connections, secure multi-tenancy, DDMC, snapshots, fastcopy, retention lock, sanitization, encryption, storage migration, replication, and recovery functionalities.
Manage system access and describe and configure autosupport, Support bundle, SNMP, and Syslog.
Monitor system activity and performance and evaluate the cleaning frequency.
Verify hardware and analyze and interpret space utilization and compression graphs.
Monitor PowerProtect DD capacity and estimate storage burn rate
View Online Specialist – Systems Administrator, PowerProtect DD Exam DES-DD33 Free Questions
An organization uses tape libraries in their current backup infrastructure. They have purchased a PowerProtect DD system and plan to use VTL to move from physical tape. What is a consideration when configuring the tape size?
A. Target multiple drives to write single stream
B. Use multiple drives to write multiple streams
C. Set retention periods for as long as possible
D. Use larger tapes for multiple smaller datasets
Answer: B
A storage administrator finished the zoning between the PowerProtect DD and the NetWorker backup server. How can they verify that the zoning was completed properly and the server HBA are visible?
A. scsitarget initiator show list
B. scsitarget initiator list
C. scsitarget show initiator
D. scsitarget show initiator list
Answer: A
Which command is used to check if NFS is enabled?
A. nfs show clients
B. nfs enable
C. system show
D. nfs status
Answer: D
Which PowerProtect DD technology provides fast and efficient deduplication while minimizing disk access?
A. SISL
B. Cloud Tier
C. Cache Tier
D. DIA
Answer: B
An administrator wants to integrate a PowerProtect DD appliance into their current backup environment using CIFS, NFS, DD Boost, and VTL. Which backup applications support these protocols?
A. Commvault Simpana and Veritas Backup Exec
B. Veritas NetBackup and Dell EMC NetWorker
C. Dell EMC Avamar and Dell EMC NetWorker
D. IBM Spectrum Protect and Veritas NetBackup
Answer: B
What is a characteristic of using replication with Dell EMC Cloud Tier?
A. Always takes place between the source and destination cloud tier
B. Cloud Tier is required to be configured on the source and destination systems
C. Always takes place between the source and destination active tier
D. Does not require that a file be recalled from the cloud on the source
Answer: C
0 notes
aws101training · 4 years ago
Text
AWS Enterprise Training for Seasoned Professionals, Get Future Ready Now
Our AWS enterprise training exposes students to the fundamental and advanced ideas and capabilities of Amazon Web Services. Hundreds of organizations have found our Amazon Web Services training program helpful, and they are all satisfied with the assistance and support we provided them in learning the intricacies of the subject. Our AWS tutorials fulfil the expectations of enterprises. Stay tuned, as we often update our Amazon Web Services tutorials. We help organizations achieve optimum performance with our training program.
Amazon Web Services is one of the most significant courses in the current scenario due to growing demand for and dependency on the platform, along with its best-in-class security. Our AWS group training has helped businesses and professionals adapt to live production environments and their difficulties.
 1. What exactly is AWS?
 Our AWS lesson exposes the reader to the fundamental ideas and capabilities of Amazon Web Services. Our students found the training program helpful, assisting with improved performance on the job. We encourage our students to stay tuned to updates related to AWS and further certifications; for that, join our LinkedIn or Facebook cloud practitioners’ group.
Amazon Web Services is one of the most significant courses in the current scenario due to the increased number of job opportunities and the high compensation on offer. We also provide AWS instruction online to students worldwide. These AWS tutorials are created by our institute's professional teachers.
 • What exactly is AWS?
• What exactly are AWS EC2 Instances?
• What exactly is the AWS Management Console?
• What exactly is AWS Lambda?
• What exactly is AWS API Gateway?
• What exactly is AWS Dynamo DB?
• What exactly are AWS Certifications?
• Why is AWS so popular today?
• How will AWS evolve over the next five years?
1. What exactly is AWS?
 AWS is an abbreviation for Amazon Web Services. It's a cloud platform created by the e-commerce behemoth Amazon.com.
Governments, businesses, and people may utilize Amazon's cloud computing services to access a powerful cluster of cloud-based services.
 2. What exactly is AWS API Gateway?
 Amazon Web Services API Gateway is one of the many AWS services available. 
Developers may use this platform to build, publish, regulate, and secure APIs.
With it, you can create an Application Programming Interface (API) that acts as a front door through which applications access data or functionality from your backend services, including other Amazon Web Services.
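As an illustrative sketch only (the API name and region are made up), creating a bare REST API with the AWS CLI looks roughly like this; defining resources, methods, and a deployment follows with similar calls:
# Create an empty REST API (name is an example)
aws apigateway create-rest-api --name my-demo-api --region us-east-1
# List the root resource of the new API; the returned id is used when adding resources and methods
aws apigateway get-resources --rest-api-id <rest-api-id-from-previous-output>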
3. What exactly is AWS Lambda?
AWS Lambda is a compute service provided by Amazon Web Services. It’s an event-driven service that provides serverless execution of code in response to events such as application messages, user actions, sensor output, and so on. The service manages resources automatically to offer smooth, near-real-time processing of requests. AWS Lambda joined Amazon Web Services in 2014, and since then it has seen many improvements, making it highly attractive to businesses.
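For a rough idea of the workflow (the function name, role ARN, and file names here are placeholders, not from the article), packaging and invoking a small function from the AWS CLI looks something like this:
# Create a function from a zipped handler; the IAM role must already exist
aws lambda create-function --function-name demo-fn --runtime python3.9 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::123456789012:role/demo-lambda-role \
  --zip-file fileb://function.zip
# Invoke it and write the response to a local file
# (AWS CLI v2 may additionally need --cli-binary-format raw-in-base64-out for a JSON payload)
aws lambda invoke --function-name demo-fn --payload '{"key":"value"}' response.json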
 4. What exactly is the AWS Management Console?
 The AWS management console is a graphical user interface that enables you to access Amazon Web Services through web browsers via the internet. You may use the management interface to control various Amazon services, including cloud storage and cloud computing. AWS interface is accessible from a mobile device by downloading the AWS Console mobile app.
5. What exactly are AWS EC2 Instances?
Amazon EC2 is a cloud computing service that offers scalable compute capacity. In other words, AWS’s EC2 service provides us with virtual servers in the cloud. Compute capacity is the processing power needed to run your workloads in the cloud. The computing power provided by the EC2 service in the form of EC2 instances is both scalable and dynamic.
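As a hedged example (the AMI id, key pair, and instance ids are placeholders), launching and later terminating an instance from the CLI looks roughly like this:
# Launch one t3.micro instance from an AMI
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro --count 1 --key-name demo-key
# Check its state, then terminate it when done
aws ec2 describe-instances --instance-ids i-0123456789abcdef0
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0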
 6. What exactly is Amazon Dynamo DB?
Amazon Web Services (AWS) offers DynamoDB as a managed database service.
DynamoDB is a NoSQL database: you interact with it through API calls and JSON-formatted items rather than the SQL queries used by relational databases. This database is both versatile and fast, and it is used for applications that need consistent, millisecond-level latency at any scale.
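To make the key/value model concrete, here is a rough CLI sketch (the table and attribute names are invented for illustration):
# Create a table keyed on UserId, with on-demand capacity
aws dynamodb create-table --table-name Users \
  --attribute-definitions AttributeName=UserId,AttributeType=S \
  --key-schema AttributeName=UserId,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
# Write and read a single item as JSON
aws dynamodb put-item --table-name Users --item '{"UserId": {"S": "u1"}, "Name": {"S": "Alice"}}'
aws dynamodb get-item --table-name Users --key '{"UserId": {"S": "u1"}}'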
 7. What exactly is Amazon Relational Database Services (RDS)?
The DB instance is Amazon RDS's basic building block. A DB instance is an isolated database environment in the cloud. A DB instance may contain several user-created databases, and you can access them using the same tools and apps as you would with a standalone database instance. A DB instance may be created and modified using the AWS Command Line Interface, the Amazon RDS API, or the AWS Management Console.
 Each DB instance comes with compute and memory limits, governed by its instance class and size. This is why database capacity planning is an essential aspect of server optimization. If your requirements change after some time, you may modify the DB instance.
 Amazon RDS provides cost-effective, resizable capacity for an industry-standard relational database and manages fundamental database administration tasks.
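As a sketch of what creating a DB instance looks like from the AWS CLI (the identifier, instance class, credentials and storage size are placeholder values):
# Create a small MySQL DB instance with a Multi-AZ standby
aws rds create-db-instance --db-instance-identifier demo-mysql \
  --db-instance-class db.t3.micro --engine mysql \
  --allocated-storage 20 --master-username admin \
  --master-user-password 'change-me-please' --multi-az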
Regions and Availability Zones
 Amazon’s cloud computing resources are housed in highly available data center facilities in different areas of the world (for instance, North America, Europe, or Asia).
Each data center location is called a Region. You may run your DB instance in several Availability Zones, an option known as a Multi-AZ deployment.
When you choose this option, Amazon automatically provisions and maintains a standby DB instance in a different Availability Zone. Your primary DB instance is replicated synchronously across Availability Zones to the standby instance to provide data redundancy, support failover, eliminate I/O freezes, and minimize latency spikes during system backups.
See High Availability (Multi-AZ) for Amazon RDS for additional information.
 Security
A security group controls access to a DB instance. It does this by granting access to the IP address ranges or Amazon EC2 instances that you specify. Amazon RDS makes use of DB security groups, VPC security groups, and EC2 security groups. In simple words, a DB security group regulates access to a DB instance that is not in a VPC; a VPC security group controls access to a DB instance that is within a VPC; and an Amazon EC2 security group controls access to an EC2 instance used with a DB instance. See Security in Amazon RDS for additional information about security groups.
Monitoring an Amazon RDS Database Instance
There are many methods to monitor the performance and health of a DB instance.
The free Amazon CloudWatch service monitors the performance and health of a DB instance; performance charts are shown in the Amazon RDS console.
You may subscribe to Amazon RDS events to be notified when changes occur to a DB instance, DB snapshot, DB parameter group, or DB security group.
See Monitoring Amazon RDS for additional information.
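For example (the instance identifier and time window are assumed), recent events and a basic CPU metric can be pulled from the CLI like this:
# List events for a DB instance from the last 24 hours (1440 minutes)
aws rds describe-events --source-identifier demo-mysql --source-type db-instance --duration 1440
# Fetch average CPU utilisation from CloudWatch in 5-minute buckets
aws cloudwatch get-metric-statistics --namespace AWS/RDS --metric-name CPUUtilization \
  --dimensions Name=DBInstanceIdentifier,Value=demo-mysql \
  --start-time 2021-01-01T00:00:00Z --end-time 2021-01-02T00:00:00Z \
  --period 300 --statistics Average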
Interfaces for Amazon RDS
There are many methods for connecting to Amazon RDS.
Amazon Web Services Management Console
The AWS Management Console is a simple web interface. You may manage your DB instances directly from the console, with no scripting needed. Sign in to the AWS Management Console and open the Amazon RDS console to access the Amazon RDS service.
 Amazon RDS Programming
If you are a developer, you can access Amazon RDS programmatically.
See Amazon RDS Application Programming Interface Reference for additional information.
It is recommended to use AWS Software Development Kits for application development.
The AWS SDKs handle low-level details like authentication, retry logic, and error handling, allowing you to focus on your application logic.
AWS SDKs are available in a variety of languages.
See Tools for Amazon Web Services for additional information.
AWS also provides libraries, test code, educational exercises, and other resources to help you get started quickly.
See Sample Code and Libraries for additional information.
How are Amazon RDS fees calculated?
When using Amazon RDS, you have the option of using on-demand DB instances or reserved DB instances.
More information may be found in DB Instance Billing for Amazon RDS.
8. Amazon Web Services Elastic Load Balancer - ELB
Elastic Load Balancer spreads incoming application traffic across many servers, which may be in various Availability Zones.
It aids and improves the fault tolerance of your applications.
It acts as a single point of contact for your customers, ensuring that the application is always accessible.
You may add or remove instances from the load balancer as your needs change without disrupting the load balancer, and similarly, removing the load balancer has no effect on your EC2 instances.
It scales the traffic routed to your application as demand changes over time and automatically handles the bulk of the load when scaled.
Elastic load balancers have their own DNS names and are not addressed by fixed IP addresses.
Clients reach the load balancer through its DNS name.
Load balancer health checks are required to be set up since they report the status of the registered targets.
They are essentially sanity tests performed on the servers to determine whether they are healthy or unhealthy.
The main health check settings are as follows (a configuration sketch follows this list):
Response timeout: the amount of time to wait for a response from a target; within this period, either a healthy response or an error must be received.
Interval: the amount of time between consecutive health checks of a target.
Healthy Threshold: the number of consecutive successful health checks required before an instance is considered healthy.
Unhealthy Threshold: the number of consecutive failed health checks after which an instance is considered unhealthy, at which point the load balancer stops sending it traffic.
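As a rough sketch of how these settings map onto an actual configuration (the names, VPC id, and path are placeholders), a target group for an Application Load Balancer can be created with explicit health check values:
# Health check every 30s against /health; 3 passes mark a target healthy, 2 failures mark it unhealthy
aws elbv2 create-target-group --name demo-targets --protocol HTTP --port 80 \
  --vpc-id vpc-0abc1234def567890 \
  --health-check-protocol HTTP --health-check-path /health \
  --health-check-interval-seconds 30 --health-check-timeout-seconds 5 \
  --healthy-threshold-count 3 --unhealthy-threshold-count 2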
Elastic Load Balancing Varieties
Elastic Load Balancing is compatible with three kinds of load balancers:
1. Classic Load Balancer
2. Application Load Balancer
3. Network Load Balancer
Classic Load Balancer
Classic Load Balancer, like any other load balancer, does basic load balancing across many EC2 instances. It operates in accordance with the OSI model's Transport layer (fourth layer). This was mostly utilized before the implementation of the application load balancer. 
Application Load Balancer
Application Load Balancers operate at the OSI model's application layer (the seventh layer). Traffic is distributed and routed based on application-level content, such as the request path or host.
Network Load Balancer
The Network Load Balancer is excellent for balancing TCP traffic loads.
Instances, containers, microservices, and so on may be targets.
It can handle millions of requests per second while maintaining low latency, guaranteeing excellent network performance.
Load balancers have many key characteristics, including the ability to guarantee high availability across several or a single availability zone by spreading incoming traffic.
In reaction to incoming application load, the request processing capacity is increased.
One of the essential aspects of ELB is the health check, which displays or brings in the status of servers. It guarantees the servers' sanity and status checks. Bringing in or detecting the health of Amazon EC2 instances is ideal. 
If any of the EC2 instances are discovered to be unhealthy, the load balancer redirects traffic to other servers rather than the defective or unhealthy servers or EC2 instances.
Security features are implemented in load balancers by adding security groups, NACLs (Network Access control lists), or even VPCs (virtual private clouds).
Load Balancer accepts IPv4 and IPv6 addresses.
CloudWatch is a monitoring and management service that tracks all load balancer metrics.
Logs and metrics collected by CloudWatch aid in diagnosing application problems and analyzing web traffic.
SSL (Secure Sockets Layer) certificates handle load balancer encryption, including public key authentication, and SSL decryption can be offloaded from the application instances to the load balancer.
Load Balancing and Autoscaling services may be coupled to guarantee traffic is properly controlled and balanced while applications and services are in place. 
Certifications from AWS
 AWS is an abbreviation for Amazon Web Services, which is a collection of cloud computing services offered by Amazon.
An AWS certification validates your ability to manage the services offered by AWS.
Cloud computing has recently grown in popularity among businesses due to its advantages, such as increased productivity and lower costs. 
Why has AWS remained so popular up to this point?
 AWS, or Amazon Web Services, is a subsidiary business of Amazon.com that provides on-demand cloud computing platforms.
These services are available in sixteen different locations across the globe, and they include EC2 (Amazon Elastic Compute Cloud) and S3 (Simple Storage Services). They provide approximately 70 services, including storage, network, database, analytics, and so on.
What Will AWS Look Like in the Next Few Years?
There is hardly any business out there that does not want to build on the public cloud and is not carefully watching AWS or Amazon Web Services. AWS, with its security, brings customers increased database storage and computational capacity. Furthermore, it offers content distribution and other helpful services to businesses to help them develop. Many businesses are already using Amazon Web Services, and many more are moving to it, to build complex applications with increased flexibility and dependability.
0 notes
npmjs · 8 years ago
Text
npm private modules outage on December 12th
For a period of about 100 minutes on 12 Dec 2017, all read and write access to private packages was interrupted. For some days after that, our customer data was not in perfect sync with the source of truth—Stripe, our payments provider—and some customers renewed their subscriptions when they did not need to.
We believe we have now fully reconciled our customer database with Stripe’s data, and have refunded all renewals that were made during the incident. If we are in error about your account, please contact our Support team who will sort it out for you.
The underlying cause of this outage was a software bug that deleted all records from our customer database. Read on for more details about the root cause and a timeline of the incident.
Root cause
The root cause was a bug in the query builder software we use to interface with postgres, ormnomnom. Due to a bug in ormnomnom’s “delete” method, it omitted the WHERE clause for the query built by the list of filters. This would result in removing all records from the database table where a single record was intended to be deleted. The fix was a one-liner.
In this case, one endpoint in the microservice fronting our payments database used a DELETE query. This endpoint is exercised only by an internal tool that our support team uses to resolve customer payments problems and requests to terminate service. After the bug was deployed to production, it was triggered the first time the tool was used in this way. A DELETE with no WHERE clause was executed and all records were deleted from the targeted table.
Code audits revealed that this one API endpoint was the only place in our system where we use a straightforward DELETE, and the internal support tool is the only consumer of that endpoint. In all of our other databases, we do not ever delete rows from tables. Instead, we use “tombstoning”: setting a deleted column to a timestamp that indicates when a record was marked as deleted. In this one case, removing the data entirely is more appropriate because it is reflecting a ground truth stored in an external database. The support tool use was the only thing that could have triggered this bug, and only for that one category of data.
Timeline
All times given here are in PST, aka GMT–8.
11 Dec 2017 5:00 PM: The microservice that fronts our payments database is deployed with an upgrade to the backing framework, including an upgrade to our SQL query builder.
12 Dec 2017 1:34 PM: All customer records are deleted from our payments database. Uncached access to private modules begins failing.
2:16 PM: Our support team escalates a Twitter complaint & npm engineering is alerted.
2:32 PM: We roll back the deploy to the relevant microservice on the theory that recent changes are the most likely culprit. We observe that this does not restore service.
2:44 PM: We discover that our payments database has far fewer records in it than it should. We activate our serious incident response protocol, which formalizes communication and roles to aid internal coordination during outages.
3:07 PM: We deploy a mitigation to our payments service that treats all customer accounts as being in good standing, without checking the database. We observe that this does not restore access.
3:19 PM: We clear the access cache entirely and access to private packages is restored for all customers.
3:32 PM: We restore the database from a backup and note that all customer events from the time of the backup snapshot on will need to be replayed. We decide to leave the mitigation in place overnight to give us time to do that work in a methodical way. Meanwhile, root cause exploration continues.
3:36 PM: We find the culprit DELETE statement in the database logs. We isolate where in service code this could have been executed and confirm that a support tool invocation was the trigger, given the timestamp.
4:48 PM: The root cause is found and fixed in the query builder code.
13 Dec, through the day: We reconcile data in our behind-reality customer database with data from Stripe, our payments provider. When this is complete, we undo the mitigation that short-circuited payment checks.
27 Dec: Some customers are locked out of their organizations or private modules accounts because the 15-day grace period has expired. This happens to a handful of customers we missed in our original cleanup sweep. We identify those customers and replay the Stripe events to restore their accounts to working order.
Steps we’re taking in response
The root cause bug did not turn up in development because a new feature in ormnomnom was not completely tested. The developer wrote only cursory tests because the test suite for the library is awkward to use.
We kicked off a project to rewrite the tests for this library so that new tests are very easy to write, and will therefore be written. In our opinion, the time invested in making testing easy is time well spent. Software bugs are inevitable and testing is how we catch them early. This is the most important step we can take in response to the incident. We can measure our progress in this work by measuring test coverage of our query builder library.
None of our monitoring alerted us that anything was wrong, because our systems and software were operating correctly. Our access cache did alert briefly on CPU use, but this was not enough to make us aware that something was wrong. We were alerted by a sudden and unusually high volume of support requests and Twitter complaints. This is probably an inappropriate target for monitoring. What we can monitor, however, are unusually high rates of certain status codes being reported by our internal services. In this case, the rate of 402 Payment Required responses was an unusually high percentage of all auth checks (indeed, 100%), and could have triggered an automated alert. We will be adding such an alert to our monitoring system.
Our database logging was set to log slow queries only. Fortunately the delete query in question was quite slow, but it was only luck that we had a log line to pinpoint the time the error occurred. We will be turning on more complete logging for all data-mutating queries on databases like this one.
We were pleased with the behavior of postgres’s logical replication and will be migrating the systems we have that still use repmgr to the newer system.
An apology
We apologize to all of npm’s customers for this outage. This downtime affected all of our paying customers during peak work hours in the United States. We prefer to be an invisible and reliable part of your infrastructure, but on 12 Dec we were a source of frustration instead. We’re sorry.
10 notes · View notes
tensult · 6 years ago
Text
MyDumper, MyLoader and My Experience of migrating to AWS RDS
Ref: https://tinyurl.com/yymj43hn
How did we migrate large MySQL databases from 3 different DB servers with a total size of 1TB to a single AWS RDS instance using mydumper? Migration of a database involves 3 parts:
Dumping the data from the source DB
Restoring the data to the target DB
Replication between the source and target DBs
Our customer had decided to migrate from Azure to AWS, and as part of that we needed to migrate about 35 databases running on 3 different DB servers to a single RDS instance. RDS currently doesn’t support multi-source replication, so we decided to set up replication only from the largest DB and use the dump-and-restore method for the other 2 DB servers during the cutover period.
Setting up RDS Instance
In order to test the application end to end, we would need to change data on the DB during testing, and that might cause issues in the DB replication process, so we decided to set up a separate staging stack for testing purposes alone. Initially, we used native mysql tools like mysqldump, but found that these tools generate a single dump file for the whole database, and some of our databases are of a size more than 400GB. We have some triggers and views using DEFINER=`root`@`localhost`, but RDS doesn’t have a root user, so we need to either update the DEFINER or remove it according to this documentation. We found it really challenging to update such huge dump files, so upon a suggestion from my friend Bhuvanesh, we decided to try out the mydumper tool.

Setting up a server for mydumper
We could have run mydumper from the source DB server itself, but we decided to run it on a separate server as it reduces the load on the source DB server during the dumping and restoration phases. Now let us see how to install mydumper.

# Installers: https://github.com/maxbube/mydumper/releases
# You may choose to take the latest available release here.
sudo yum install https://github.com/maxbube/mydumper/releases/download/v0.9.5/mydumper-0.9.5-2.el7.x86_64.rpm
# Now we should have both mydumper and myloader commands installed on the server

Dumping data from the source
The mydumper tool extracts the DB data in parallel and creates separate files for schemas and table data, so it is easy to modify them before restoring them. You will need to give at least SELECT and RELOAD permissions to the mydumper user.

# Remember to run the following commands in screen as it is a long running process.
# Example1: The following will dump data from only DbName1 and DbName2
time \
mydumper \
--host= \
--user= \
--password= \
--outputdir=/db-dump/mydumper-files/ \
--rows=50000 \
-G -E -R \
--compress \
--build-empty-files \
--threads=16 \
--compress-protocol \
--regex '^(DbName1\.|DbName2\.)' \
-L //mydumper-logs.txt

# Example2: The following will dump data from all databases except DbName1 and DbName2
time \
mydumper \
--host= \
--user= \
--password= \
--outputdir=/db-dump/mydumper-files/ \
--rows=50000 \
-G -E -R \
--compress \
--build-empty-files \
--threads=16 \
--compress-protocol \
--regex '^(?!(mysql|test|performance_schema|information_schema|DbName1|DbName2))' \
-L //mydumper-logs.txt

Please decide the number of threads based on the CPU cores of the DB server and the server load. For more information on various mydumper options, please read this. Also, in case you want to use negative filters (Example2) for selecting the databases to be dumped, please exclude the default databases (mysql, information_schema, performance_schema and test).

It is important to measure the time it takes to take the dump, as it can be used to plan the migration for the production setup, so here I have used the time command to measure it. Also, please check whether any errors are present in //mydumper-logs.txt before restoring the data to the RDS instance.

Once the data is extracted from the source DB, we need to clean it up before loading into RDS. We need to remove the definers from the schema files.

cd 
# Check if any schema files are using DEFINER; as files are compressed, we need to use zgrep to search
zgrep DEFINER *schema*
# Uncompress the schema files
find . -name "*schema*" | xargs gunzip
# Remove definers using sed
find . -name "*schema*" | xargs sed -i -e 's/DEFINER=`*`@`localhost`//g'
find . -name "*schema*" | xargs sed -i -e 's/SQL SECURITY DEFINER//g'
# Compress again
find . -name "*schema*" | xargs gzip

Restoring data to the RDS instance
Now the data is ready to restore, so let us prepare the RDS MySQL instance for faster restoration. Create a new parameter group with the following parameters and attach it to the RDS instance.

transaction-isolation=READ-COMMITTED
innodb_log_buffer_size = 256M
innodb_log_file_size = 1G
innodb_buffer_pool_size = {DBInstanceClassMemory*4/5}
innodb_io_capacity = 2000
innodb_io_capacity_max = 3000
innodb_read_io_threads = 8
innodb_write_io_threads = 16
innodb_purge_threads = 2
innodb_buffer_pool_instances = 16
innodb_flush_log_at_trx_commit = 0
max_allowed_packet = 900MB
time_zone = 

Also, you can initially restore to a bigger instance to achieve faster restoration, and later you can change to the desired instance type.

# Remember to run the following commands in screen as it is a long running process.
time myloader --host= --user= --password= --directory= --queries-per-transaction=50000 --threads=8 --compress-protocol --verbose=3 -e 2>

Choose the number of threads according to the number of CPU cores of the RDS instance. Don’t forget to redirect STDERR to a file (2>), as it will be useful to track the progress.

Monitoring the progress of the loader: it is a very long running process, so it is very important to check the progress regularly. Schema files get loaded very quickly, so we are checking the progress of the data files only, using the following commands.

# Following gives the approximate number of data files already restored
grep restoring |grep Thread|grep -v schema|wc -l
# Following gives the total number of data files to be restored
ls -l |grep -v schema|wc -l
# Following gives information about errors
grep -i error 

Verification of data on RDS against the source DB
It is a very important step to make sure that data is restored correctly to the target DB. We need to execute the following commands on the source and target DB servers, and we should see the same results.

# Check the databases
show databases;
# Check the tables count in each database
SELECT table_schema, COUNT(*) as tables_count FROM information_schema.tables group by table_schema;
# Check the triggers count in each database
select trigger_schema, COUNT(*) as triggers_count from information_schema.triggers group by trigger_schema;
# Check the routines count in each database
select routine_schema, COUNT(*) as routines_count from information_schema.routines group by routine_schema;
# Check the events count in each database
select event_schema, COUNT(*) as events_count from information_schema.events group by event_schema;
# Check the rows count of all tables from a database. Create the following procedure (a shell-based alternative is also sketched just after this section):
# Run the following in both DB servers and compare for each database.
call COUNT_ROWS_COUNTS_BY_TABLE('DbName1');

Make sure that all the commands are executed on both the source and target DB servers, and you should see the same results. Once everything is good, take a snapshot before proceeding any further. Change the DB parameter group to a new parameter group according to your current source configuration.

Replication
Now that the data is restored, let us set up replication. Before we begin the replication process, we need to make sure that bin-logs are enabled on the source DB and that time_zone is the same on both servers.
The current server can be used as the staging DB for the end-to-end application testing, and we need to create one more RDS instance from the snapshot to set up the replication from the source DB. We shouldn’t make any data modifications on this new RDS instance, and this should be used as the production DB by the applications.

# Get bin-log info of the source DB from the mydumper metadata file
cat /metadata
# It should show something like below:
SHOW MASTER STATUS:
Log: mysql-bin-changelog.000856 # This is the bin log path
Pos: 154 # This is the bin log position

# Set external master
CALL mysql.rds_set_external_master( '', 3306, '', '', '', , 0);
# Start the replication
CALL mysql.rds_start_replication;
# Check the replication status
show slave status \G;
# Make sure that there are no replication errors and Seconds_Behind_Master should reduce to 0.

Once the replication has caught up, please verify the data again and plan for the application migration. Make sure that you don’t directly modify the data on the target DB server until writes have completely stopped on the source DB and the applications are pointing to the target DB server. Also set innodb_flush_log_at_trx_commit = 1 before switching the applications, as it provides better ACID compliance.

Conclusion
We have learned how to use the mydumper and myloader tools for migration of a MySQL DB to an RDS instance. I hope this blog is useful for you to handle your next DB migration smoothly and confidently. In case you have any questions, please feel free to get in touch with me. Read the full article
0 notes
techdirectarchive · 9 months ago
Text
How to create Synology Snapshot Replication
Snapshot Replication delivers an expert solution for data backup and recovery, offering near-instant data protection with flexible scheduling. This package safeguards your business data in shared folders and virtual machines stored in LUNs, ensuring quick access and availability in the event of a disaster. In this article, we shall learn the steps on how to create Synology Snapshot Replication.…
0 notes
jenesys-ph-blog · 8 years ago
Text
DAY 9: Goodbye Sendai, Reporting Session, and Farewell Dinner.
Abdel: At the train station looking for postcards to send to friends in the Philippines, I saw a blind Japanese young fellow with a cane walking along the specially marked and perforated lines for the blind. Our facilitator Somasan told us that these are common all over Japan - these yellow lines not just on train platforms but bus stops and sidewalks. I pointed this out to Dadah who was with me and both of us were impressed at the sight, especially when he made a left turn and then went down a flight of stairs to the level below. Even on platforms, there are marks for them to gauge the safe distance to stand from the edge. *In Tokyo, I felt all of us were very happy with lunch at The Oven American Buffet, which was eat-all-you-can and full of recognizable food that we are used to back home. I thought this was a nice touch to transition back to meals in our own respective countries, although we knew that we were going to miss the Japanese meals we had also come to like. The view was also astounding. *Before going up to the reporting sessions, Anya and I had time to send the postcards I recently bought. Dropping your mail in a mailbox does not work in the Philippines, but in Japan we just dropped the postcards after putting on stamps. I should have done it earlier in the trip, but sending postcards from a country visited should always be done while one is still in that foreign country. *There was an android receptionist at the mall that was capable of answering questions from anybody. She looked Japanese too. *The farewell ceremony was a good time for people to mingle with delegations from other countries. We took this opportunity to give out our tokens and souvenirs and took snapshots of the beautiful persons from other countries. Haha I joined Roy, Norshida, Dadah, and Joan in doing these, and the last three standing were Roy, Norshida, and I. I sensed that most of the delegates were much like us Filipinos in terms of keeping to our group, but once there is a stranger who is going to talk, there is always a welcome smile to greet back. They loved that we were taking pictures with them. *I was proud of the Filipino delegation who looked all Filipino in the barong tagalogs and maria claras. What better way to show national pride in a gathering of different countries. Some countries also looked very fabulous in their national attires, and to see them up close being worn by an actual national is better than a Filipino wearing them during Linggo ng Wika or ASEAN Week. A bit disappointed for other countries that did not bring any cultural attire. But all in all, everyone had fun!
Anya: Today is technically our last day in the program and JICE had made it entirely enjoyable. As soon as we got back to Tokyo, they took us to Odaiba to have lunch. We ate at The Oven, which was an American buffet place. They reserved the perfect spot for us. Our seat overlooked the Odaiba Statue of Liberty and the Rainbow Bridge. This perfect view (plus the good food) put us in complete awe of Japan’s beauty.
During the reporting, we were finally able to hear from the delegates who went to other prefectures. Just like our group, they had this notion that Japan was this technologically advanced country where everything functioned efficiently. Our respective trips validated this perception of Japan. However, it added up facts about the preserved culture of Japan, the aging society, and the country's relationship with the ASEAN countries. After the reporting, a farewell party was held for the delegates. It was emphasized since the beginning of the program that this wasn't just about Japan; this was a multicultural experience. I was totally admiring the presence of all the delegates. At this point almost everyone was in their national clothes. I agreed with what my friend said: "Asia is so colorful." The cultural presentation made the event much more alive. There was dancing and singing, and everyone was showcasing the lively side of their delegation. We also exchanged tokens from our home countries, which gave some information about what they had back home. Later on, we checked in at the Hotel Emion for the night. Our room had a balcony overlooking the streets of Urayasu. From that spot we were lucky to watch the fireworks from Tokyo Disney Sea, making the night extra special. Tonight I felt totally blissful. I am grateful to JICE for giving us this opportunity and to NYC for selecting me to be part of the program. Excited to go home tomorrow!
Blessy: Traveling back to Tokyo made me thank Sendai for letting us experience a large amount of snow. The bullet train ride was an opportunity for me to catch up on sleep and recharge for the reporting session and farewell party in the afternoon.
In the reporting session, delegates from other ASEAN countries were also present. By looking at the crowd, I could say that the Japanese government is really spending that much on these kinds of programs. This made me think that they are really serious about having partnerships (trade, business, tourism, etc.) among the ASEAN countries.
Christian: Today is the day we head back to Tokyo for the program’s culminating activities. We had an early breakfast at the hotel. We then proceeded to the JR Sendai Station for our Shinkansen trip again back to the capital city. After our arrival in Tokyo, we headed to Odaiba for a great lunch. After an hour of looking at various shops, we headed to the Time 24 Building for the presentation.
During the reporting session, each group was given 8 minutes to present their knowledge of Japan prior to the program, their learnings during the trip, and their proposed action plans that each group wants to implement after returning to their respective countries. The presentations each country gave were an exhibition of great ideas that I think are essential in communicating Japan to their home countries, with the hopes of replicating Japan's success in development. After the activity, we headed to the building's 11th floor for the farewell party. During that moment, the participants had a great time eating, taking pictures, and socializing with other delegates. Our group also presented song and dance performances each country had prepared, centering on the theme of love. With that, we performed Leah Navarro's "Isang Mundo, Isang Awit" in front of the international audience. We also enjoyed watching the performances of other countries. Afterwards, we exchanged cards with other delegates and gave small tokens to those who joined us in this journey. The program has officially ended and on this day, we had to say a lot of goodbyes. Parting may be painful, but I believe that this may not yet be the end of it. It is my personal hope that these connections and relationships we have established during the program would truly last.
Crismer: 1. In today's closing program, I realized that I really never got the chance to interact with other delegates from other nations aside from Thailand and Singapore. How I wish that the program will be redesigned to provide ample time to have that interaction.
 2. The program was about immersing in Japan, the people and their culture. I hope in the next batches, they can include as well youth delegates from Japan itself. While we were able to learn the Japanese culture through the different visits including our interactions with our guides, the dialogues and interaction with the Japanese youth will allow us to gauge as well the future of Japan.
3. Lastly, all the learnings presented by all groups are spot on. I hope that JICE and MOFA will find these learnings useful for the betterment and improvement of Japan. The action plans presented are clear and tangible as well. May JICE support these to ensure they will be executed.
Dada: It’s our last whole day in Japan. In the morning, we were scheduled to travel to Tokyo (a bus ride to Sendai Station, a train to Tokyo and another bus ride to Odaiba). While waiting for the train to arrive at the station, we took the opportunity to take more pictures. On the way to Odaiba, the participants were in awe of Tokyo Bay and the amazing edifices of Tokyo. We were taken to a fabulous buffet resto with a nice view of Odaiba. We ate until we were full and strolled the mall. We also met the robot receptionist that I once saw on TV news. Then, we were transported to a nearby government office where the group presentations were held. All the delegations in their national costumes were present for the activity. Each group's well-prepared presentation showcased their own experience in Japan, with other activities located in other prefectures. Moreover, I could infer that the delegates' expectations of Japan prior to JENESYS did not stray far from the actual experience. Japan was, indeed, a technologically advanced country.
After the presentation was the farewell party/ closing program in which groups showcased their meaningful cultural presentations. The party was wonderful and happy amidst the diversity of cultures. We had the chance to interact with foreign delegates and to individually thank our coordinators.
When the party was over, we headed to the Emion Hotel near Tokyo Bay. In the bus, each country representative of Group C gave a farewell message to Sohma-san. We also bid farewell to our Thai and Singaporean groupmates. We headed to our hotel room to prepare for our flight the next day.
Darren: If I were to describe Day 9 using one word, I would say this one's the BEST. It was early morning when we moved to Sendai Station to ride Shinkansen once again. We had our lunch at The Oven: American Buffet located in Odaiba. Aside from the great food served in the restaurant, I really enjoyed the majestic view of the Statue of Liberty's replica, which is originally located in New York City. Right after eating, we went to Time 24 for the reporting session.
During the presentation, we were given a chance to share our knowledge, perspective, findings, and learnings all throughout the JENESYS 2016 program to other delegates of the other countries. In general, the delegates from Thailand, Singapore, and Philippines from Group C agreed that most of our impressions in Japan (prime destination of investment, steadfast economy, punctuality and cleanliness, ageing population, advanced technology and developed country, and exquisite culture) were true and all of these were validated when we had cultural trips in our respective prefectures.  After the reporting session, we had our farewell party with the organizers and the other delegates. I consider this event as one of the most memorable moment in my life. Watching the cultural performances and exchanging tokens with different countries made me want to study culture even more! The program might end here but one thing's for sure - we will always look back to this moment and eminently, the friendships we've built with Japan and ASEAN countries continue to grow as well.
Grace:  Sayonara Sendai.
Haiku: Sendai, my new love Four days was more than enough Treasure you for life.  Leaving you (Sendai) was difficult but I needed to move on. I came to you by riding on the fast, safe and reliable Shinkansen and I left you using the same transportation.  We ate our lunch at The Oven American Buffet, and I must say it was the sugoi-est lunch I had during the program. Food was very oishi plus the amazing view of Odaiba, it seems like we're eating in one of the restaurants in New York.  We headed to Time 24 for the group presentations. The representatives by each group presented their expectations, learning, observations and information that they got all throughout the 9-day program. Also they presented the action plans on how to disseminate the learning and how to always get connected with the fellow delegates.  In the evening was the farewell party. All Batch 6 delegates were gathered in one hall, tossed and ate together. Groups also shared their talents thru singing and dancing. We did not miss the last minutes to give some tokens of friendship to the friends we've met. We shared business cards to keep in touch with each other. Indeed, it was a wonderful night.
Ian: Day 9: For the last day of the program, we have to return to Tokyo. Again, we have to ride the Shinkansen (bullet train) to be able to return to Tokyo for a short time. I finally bid goodbye to the beautiful city of Sendai. Even though it is already my second time to ride the bullet train, I was still amazed by it. Upon arriving in Tokyo, I was able to feel the difference in weather. It was warmer compared to Sendai. In the bus, our facilitators shared some facts about Odaiba, including the Statue of Liberty situated in Odaiba. We had our lunch in a shopping center in Odaiba. The first thing that welcomed us in the shopping center was the animatronics in the information center. I was amazed with the technology they come up with. The lunch was really good and it made me so satisfied. We were given time to roam around the shopping center and I did not miss the opportunity to buy souvenirs for my family and friends.  The final part of the program is the reporting session, which was held at the Time 24 Building. When we arrived at the venue, we quickly changed into our traditional attire. It was a beautiful sight to see people from different countries wearing their traditional clothes. There is really beauty in diversity. Representatives of JICE and the Ambassador of Myanmar also graced the event. Representatives of India, Cambodia, Myanmar, Timor-Leste, Vietnam, Singapore, Thailand and the Philippines presented all our knowledge about Japan before the program, our experiences and learnings and, of course, our action plans. Most of us already knew the culture and technology of Japan before the program, but this knowledge was further enhanced by experiencing it through the program. I have also inferred that all our plans are rooted in increasing efforts to intensify people's awareness of Japan and encourage our fellow countrymen to visit this beautiful country. I can say that all of the action plans prepared by different countries are well thought out. After the reporting session, the certificates of completion were awarded to us. We were also able to socialize with fellow participants from different countries. After having short conversations, we were asked to proceed to the 11th floor where the farewell party would take place. I was astounded since there were more people, 261 participants from 12 different countries, to be exact. The representative of JICE gave short closing remarks and then initiated a toast. After the toast, we had our delicious dinner and a cultural presentation also took place. We were able to listen to the songs, watch the dances and observe the traditional attire of different countries. It was a fun-filled moment for all of us. After the performances, we had short conversations with fellow participants and even though just for a short time, we were able to build friendships. We exchanged tokens and calling cards and finally bid goodbye to each other. I also waved goodbye to my beloved facilitators. This made me sad because I realized that the JENESYS 2016: Batch 5 program had already concluded. We then proceeded to our hotel and packed our bags for the last time as we will be returning to our own country early tomorrow morning.
Jerm: As the program headed for a close, we finally got the time to look back and reflect on everything we have learned for the duration of JENESYS. Not only were we immersed in Japanese culture, we also learned the concepts and structures of economic institutions in the country. During the presentation, we were given the chance to share our learnings with the delegates of the other countries. Most of our perceptions were true about Japan – having high technology, great food and disciplined citizens. We were also able to discover the Japanese people's high regard for culture and tradition and the fact that they do not have to be sacrificed for progress. To add to that, Japan was also the symbol of responsible and sustainable development, having shown that they could be progressive without having to destroy the environment. The reporting was a wonderful time as it highlighted individual lessons from each country and I believe that's the essence of JENESYS – to integrate all these countries within one vision: progress. The program ended on a high note with the farewell dinner party where each group showcased their presentations. It was also difficult to finally say goodbye to the other delegates we've become close to and become friends with. The 9 days we have spent together will never be forgotten because these good memories will live on.
Joan: We rode the Shinkansen (Tohoku line) in going back to Tokyo. I slept most of the trip as I am feeling a little bit lonely as the program is nearing its end. Although everybody has different personalities, I have already learned to love them (Aww). I just wish to know more about them and hopefully see them again in the future. We had lunch at the Oven located in the Aqua City Mall in Odaiba- a large artificial island in Tokyo and had the best view of the mini statue of liberty and the Rainbow Bridge from Central Tokyo. My friend Norsh got to try a money changer atm and bought a polaroid camera in the Sega or gaming center. A magokoroid woman was one of the customer assistants in that mall. As usual, we had last minute shopping at Daiso Japan for small stuffs like pens and notebooks.
We arrived at Time 24 and immediately changed into our Filipiniana attire for the reporting. The room was full of excitement and energy as each group reported their findings and discussions in a very limited time. Each country group was profound in their expectations, learnings, and action plans after the program. A representative from each group was tasked to receive their certificate of participation together with their ambassadors.
What better way to end and part ways than with a standing farewell party. The best sushi and maki were served, and we had juices and soda for the ceremonial toast. Each country presented dances or songs for entertainment. We even joined in some dances. It was also a great time to give tokens of gratitude to our coordinators and we also exchanged cards with other country delegates. Mostly we enjoyed taking pictures with them and wished each one a happy and safe trip back home.
It was energy draining for an introvert like me but I did enjoy a lot. I just hate goodbyes and loud noises (a little).
Juliet: This day highlights the presentation of learnings and action plan of each group delegation,showcase of talents of the delegates, and farewell party.Hearing the presentation indeed proved the mind and ingenuity of the young people.In general,the learnings obtained during the entire program focus on environmental protection, economic advancement, culture, innovation, advanced technology, strong collaboration, societal systems, and social problems besetting Japan such as aging society. Meanwhile, the action plans presented cover activities such as disseminating these learnings and realizations about Japan to respective countries, building friendship and camaraderie among countries, bridging each country's economic agency and businesses to Japan, and deepening discussions and education about Japan and its culture and economy.
Having a lighter time with the delegates during the farewell party, the friendship and camaraderie built between and among the young people of the ASEAN region made the night meaningful. The rich culture of each country was also displayed through the delegates' dances, songs, and fashion show of national costumes. Truly, the delegates did not only open their hearts to Japan but to the ASEAN as well.
June: Calling this day good would be an understatement. It was amazing! We had our lunch at a restaurant that had a view of a replica of the Statue of Liberty and the horizon beyond it. It was indeed a spectacle.
Moving on to the group presentations, where we were able to share what we learned from the program, it was not only reflective but also very informative since we were able to hear from others as well. The content and delivery of all the groups were excellent. Ending it all with the farewell party was a blast! Interacting with all the other groups participating in the program allowed us to understand one another more, create more connections, and give our last goodbyes to one another. It was a night full of memories.
Love: On Day 9, the team travelled to Tokyo for the reporting session to the Ministry of Foreign Affairs. Group B (The Philippines, Singapore and Thailand) along with the Malaysia, Indonesia, Vietnam and Laos Delegates presented their findings, impressions and action plans which were highlights of the presentation. One of the action plans of the Group B delegates, specifically the Philippines, is the conduct of the National Youth Forum. Details of this proposed activity will be discussed by the Philippine delegates and will update the NYC and JICE about its progress. The JENESYS Program is about to end on this day. Delegates from all participating countries convened in the training hall to celebrate the success of the JENESYS Program. As part of the closing ceremony, delegates from different countries performed a cultural presentation while wearing their traditional costumes. Dinner was served at the venue. Authentic Japanese cuisines were served and everyone enjoyed the night. I know the JENESYS program was about to end but the experience and learnings/information garnered from this experience was truly a very great one. For youth like me, this program was able to establish networks, build connections, meet new friends and find new family.
Nina: Last day of activities for the JENESYS program and we're bound to Tokyo. After having lunch at a nice restaurant with an awesome view (and desserts), we were able to look around the place, then some shopping, and start preparing for our presentation in the afternoon. The delegation was divided into two groups for the presentation. Our group was supposed to be the first to present but we ended up presenting last. Everybody did a great job with their ideas and outputs! You could see that the JENESYS program taught us and made us experience Japan, real firsthand. I second the statement when one presenter thought that kimonos are Japanese's daily wear, where everybody walks around wearing it. Funny, because I thought the same too! The presentations mostly revolved around Japan's high technology, preserved culture and hospitality. Our presentation, I believe, was good, structured and well-prepared. Kuya Cris, Jon, and P did a great job presenting it clearly - good speakers, indeed. The presentation focused in targeting different types of audiences (general public, youth, and those who are interested in cross-cultural studies) to share and explain what we have learned from the trip and what Japan is based in what we have experienced during the program.  Finally, as we officially end the JENESYS program, we had a farewell party together with all the delegates from different countries as well as the JICE coordinators. It was a night full of performances, photos, and "see you soons". I'll miss everyone, really. Hoping that the friendships we've made during the program would last! Ended our day figuring out how to pack all our stuff, card games, and ramen ❤
Norshida: Day 9: Today our amazing race began with the bus taking us to the Shinkansen Bullet Train Station at 7:25 am. I just don't know what to feel today; I'm feeling so emotional. I already love staying in Sendai, not because of the snow but because of the people, the food and the system they have. For now, I'm planning to come back together with my family. Aside from enjoying the ride on the Shinkansen Bullet Train, I really appreciated the timeliness of the Japanese people. We departed at exactly 9:00 am for Tokyo and arrived at exactly 11:55 am in Odaiba, Tokyo. We also had our very delicious lunch at The Oven Buffet, where I was amazed by the view outside, as if I were seeing the real Statue of Liberty in New York, but it was only a replica which, as Soma-san said, was given to Japan by the British Council. After our lunch, we went directly to Time 24 to begin presenting our workshops, in which Mark-san Velasco presented for our group, and the Thailand and Singapore delegates also had their own presenters. Finally, we had our emotional farewell party. It pained me saying "Arigatou Gozaimasu" and "Sayonara" to our JICE Coordinator, whom I could only applaud for being so patient and caring, and of course to our co-delegates from the ASEAN countries with whom we established great friendships. I gave tokens of friendship hoping they will not forget us, and we exchanged business cards and took lots of pictures so that we will really not forget them. It was a blissful but emotional night for me.
Rea: Today is our last day before we go back to our own countries. Today is also the day we have to bid our goodbyes to the beautiful place of Sendai. Today's highlight is the presentation made by each group. During this presentation, each group presented what they have learned during the JENESYS program and what their action plans will be after the program. We did not only enjoy our stay in Japan but also learned insightful lessons that can be useful for our country. Japan's culture, economic advancement, high regard for research and development, and strong industry-university collaboration are just some of the learnings we had during this program. Despite having advanced technologies and sustainable development, Japan still protects its environment. Most of our action plans focus on strengthening the relationship with the other JENESYS Batch 5 delegates, and spreading awareness of the Japanese culture and economy in our own country through forums, seminars, etc. The JENESYS program ended with a farewell party together with all the delegates from the different countries, where we showcased our talents and built connections and friendships with all the delegates.
1 note · View note
tayfundeger · 6 years ago
Text
New Post has been published on https://www.tayfundeger.com/vsphere-replication-nasil-update-upgrade-edilir.html
How to Update/Upgrade vSphere Replication?
Hello,
In this article, "How to Update/Upgrade vSphere Replication?", I will explain how to upgrade vSphere Replication, the product we use for replication. I have written many articles about vSphere Replication before, but I realized that I had never covered the upgrade topic.
You can find my earlier articles about vSphere Replication at the link below.
https://www.tayfundeger.com/kat/vmware-vsphere-replication
vSphere Replication has two different upgrade/update methods. These are:
Upgrade/update with an ISO file
Update with vSphere Update Manager (you cannot perform a major upgrade with this method; you can only install patches/updates.)
I usually use the ISO file for vSphere Replication upgrades and updates because it is quite simple and fast. In this article, I will upgrade from vSphere Replication 6.x to vSphere Replication 8.x, but you can use this method for all version transitions.
The upgrade process is quite simple, but there is one point you need to pay attention to. Some vSphere Replication versions cannot be upgraded directly to the version you want. Likewise, after the upgrade you may run into a version incompatibility with your vCenter Server. To avoid such a situation, you should review the link below.
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#upgrade&solution=509
How to Update/Upgrade vSphere Replication?
If you look at the matrix above, you will see that you cannot perform a direct upgrade from some vSphere Replication versions to certain other versions. You need to pay attention to this; otherwise you may run into problems after the upgrade, such as replication not starting or vSphere Replication not connecting to vCenter Server.
To check the compatibility between vCenter Server and vSphere Replication, you can review the link below.
https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&2=&509=
Before upgrading vSphere Replication, be sure to take a snapshot; if a problem occurs, you can roll back from the snapshot. Download the vSphere Replication version you want to upgrade to from the link below.
https://my.vmware.com/group/vmware/details?downloadGroup=VR8202&productId=742&rPId=40069
After downloading the ISO file, upload it to a datastore and mount it to the vSphere Replication virtual machine. Then we log in to the vSphere Replication management interface.
vSphere Replication Status
After logging in to the vSphere Replication Appliance Management interface, we open the Update tab.
vSphere Replication Settings
I go to the Settings section and, since we will perform the upgrade with the ISO file, I select the Use CDROM Updates option and then return to the Status section.
vSphere Replication Update
After entering the Status section, we click Check Updates and wait for the version on the ISO file to be detected as an available update.
We start the version upgrade by clicking the Install Updates button and confirm the warning that appears with OK. At the end of this process we will be asked to reboot vSphere Replication. Keep in mind that this can take a while, so do not assume that vSphere Replication has locked up or hung during the update.
vSphere Replication Update
When the upgrade is complete, we will be asked to reboot vSphere Replication. Go to the System section and click the Reboot button.
After vSphere Replication comes back up, log in to the Management Console again and, under System > Information, you can see that the vSphere Replication version has been upgraded.
I hope this has been useful.
Best regards.
0 notes
globalmediacampaign · 4 years ago
Text
Managed disaster recovery with Amazon RDS for Oracle cross-Region automated backups – Part 2
In the first part of this series, we discussed several use cases for including a second Region in your disaster recovery (DR) plans for your Amazon Relational Database Service (Amazon RDS) for Oracle database instances. We also introduced cross-Region automated backups to assist you in establishing and maintaining cross-Region point-in-time restore (PITR) capability for your Amazon RDS for Oracle instances. In this post, we show you how to set up cross-Region automated backups on new and existing RDS for Oracle instances, including AWS KMS-encrypted instances, and we show how to monitor the replication as well as how to perform a point-in-time restore in the destination Region.
Set up cross-Region automated backups
Setting up cross-Region automated backups via the AWS Management Console or AWS Command Line Interface (AWS CLI) is straightforward. You can enable cross-Region automated backups during instance creation or at a later time with a simple modification of the instance. To add cross-Region automated backups to an existing instance via the Amazon RDS console, complete the following steps:
On the Amazon RDS console, choose an RDS for Oracle instance from the list of databases in your AWS account and choose Modify.
Under Additional Configuration, the Backup section allows you to specify the backup retention period for the local Region. A check box below the backup window enables replication and exposes drop-downs for the destination Region and replicated backup retention period. The backup retention period for the destination Region is completely independent of the period set for the source Region. Either Region may be set up to 35 days to accommodate your DR plans.
After you make your selections, choose whether to implement the changes immediately or during the next scheduled maintenance window.
Only certain Regions are paired at this time, such as US East (N. Virginia) with US West (Oregon); EU (Ireland) with EU (Frankfurt); and Asia Pacific (Tokyo) with Asia Pacific (Osaka). More Region pairings are coming soon. To see which Regions support replication with your current Region, run the following command.
[ec2-user@ip-10-1-0-11 ~]$ aws rds describe-source-regions --region us-east-1
The following screenshot shows the us-west-2 Region is paired with us-east-1 for Amazon RDS automated backup replication. Cross-Region automated backups fully support Amazon RDS encryption using AWS Key Management Service (AWS KMS) keys. To enable cross-Region automated backups on an AWS KMS-encrypted instance, you must specify an existing AWS KMS key ARN (Amazon Resource Name) in the destination Region to encrypt the snapshot data there. No other changes to the workflow are required.
To verify automated backups of your instances in the local Region, choose Automated backups in the navigation pane of the Amazon RDS console and look under the Current Region tab. You can view and manage cross-Region backups from this page. Choose the Replicated tab to view backups from a remote Region that have been replicated to the Region currently in view. In the destination Region, you can view the restorable time window for each replicated backup and initiate the restore of a backup to a point in time within that window for a given instance. To view your in-Region restorable time window and replicated backup ARN from the AWS CLI, enter the following code.
[ec2-user@ip-10-1-0-11 ~]$ aws rds describe-db-instance-automated-backups --db-instance-identifier slob-saz
The following screenshot shows our result.
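For existing instances, the same replication can also be turned on from the AWS CLI with the start-db-instance-automated-backups-replication command, run in the destination Region. The sketch below is illustrative only: the account ID, instance ARN, retention period, and KMS key are placeholders to replace with your own values.

# Run in the destination Region; all identifiers below are placeholders
aws rds start-db-instance-automated-backups-replication \
  --region us-west-2 \
  --source-db-instance-arn arn:aws:rds:us-east-1:123456789012:db:slob-saz \
  --backup-retention-period 14 \
  --kms-key-id arn:aws:kms:us-west-2:123456789012:key/your-key-id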
In the destination Region, you can confirm the restore window by specifying the automated backup's ARN obtained from the previous command's output.
[ec2-user@ip-10-1-0-11 ~]$ aws rds describe-db-instance-automated-backups --db-instance-automated-backups-arn "arn:aws:rds:us-west-2:xxxxxxxxxxxx:auto-backup:ab-gug4ww2bbuacdji7fdqjoosprm4mnea2jqgsdpa"
The following screenshot shows our result.
Disaster recovery walkthrough
In the unlikely event of a disaster that renders the source Region for your RDS for Oracle instance unavailable, restoring your cross-Region snapshots follows the same process as restoring in the source Region. Amazon RDS always restores from backup to a new instance. You can initiate a restore via the Amazon RDS console, the AWS CLI, or by making an API call within your automation framework. To use the Amazon RDS console, complete the following steps:
On the Amazon RDS console, choose Automated backups in the navigation pane.
On the Replicated tab, choose the instance in question.
On the Actions menu, choose Restore to point in time.
The dialog follows the same flow as in the source Region for the instance: restore to the latest restorable time is the default option, or specify any time within the restore window down to the second. Choose the latest restorable time to recover as much data as possible, or specify a custom restore time in cases where logical data corruption was introduced at a known time and the goal is to restore to prior to that incident.
You must specify a DB instance identifier and may change various aspects of the instance to be restored, including database version, license model, instance name, instance class and size, single- or Multi-AZ, storage options, authentication, and more.
Accomplish the same from the AWS CLI with the following code.
aws rds restore-db-instance-to-point-in-time --source-db-instance-automated-backups-arn "arn:aws:rds:us-west-2:xxxxxxxxxxxx:auto-backup:ab-r2mmwuimjtv2tt7apmwl3ymi2mlfhgs5ictuexq" --target-db-instance-identifier slob-saz-restore --restore-time 2021-04-29T08:06:11.000-6:00
The cross-Region automated backups feature has also created a copy of the options group from the source Region named xrab---…, which is the default selection for the options group of the restored instance. You can specify a different options group or leave it at the default to preserve the same options as in the source Region. You can also use the RestoreDbInstanceToPointInTime Amazon RDS API operation to accomplish the restore.
Amazon RDS begins working on the restore immediately, and the Amazon RDS console shows the status of the new instance as Creating. The time taken to complete the restore operation largely depends on the number of archived logs that must be applied on top of the automated snapshot to arrive at your chosen restore point. Amazon RDS backs up the instance after it completes the restore. Shortly thereafter, the instance shows the Available status. Select the instance endpoint on the Connectivity & security tab and update your applications to point to the new instance. Your restore is complete, and the instance is available for transactions. You may now choose to replicate the automated backups of the restored instance to another Region.
Monitoring
When automated backups are enabled for your instance and the instance is in the Available state, Amazon RDS takes a daily backup snapshot of your database instance during the maintenance window associated with the instance.
Amazon RDS also uploads archived redo logs from the Oracle instance to Amazon S3 every 5 minutes. With cross-Region automated backups enabled for the instance, Amazon RDS replicates the snapshots and archived redo logs to the second Region. You can observe the difference in the latest restorable time between the source and target Regions by viewing the automated backups lists in each Region. Each scheduled upload of archived logs takes some minutes to complete, which means you should expect the latest restorable time in the Region where the RDS instance runs to be less than 10 minutes ago at any given time. After the logs are stored in Amazon S3 in the source Region, the logs replicate to the target Region, typically arriving there within minutes, which means you can typically expect the latest restorable time in the destination Region to be less than 25 minutes ago, trailing the local Region by 10–15 minutes.
As an example, the following screenshots were captured at 5:40 PM UTC, and we see latest restorable times from approximately 7 minutes ago to as little as 3 minutes ago. Looking at the destination Region, we see the latest restorable times are between 10–14 minutes ago. Our focus database instance shows a latest restorable time 5 minutes further in the past in the destination Region than in its source Region.
Summary
In this post, we walked through setting up cross-Region automated backups on new and existing RDS for Oracle instances, including AWS KMS-encrypted instances, and we showed how to monitor the replication as well as how to perform a point-in-time restore in the destination Region. For more information about enabling and working with cross-Region automated backups, see Replicating automated backups to another AWS Region.
About the authors
Nathan Fuzi is a Senior Database Specialist Solutions Architect at AWS.
Nagesh Battula is a Principal Product Manager on the Amazon Web Services RDS team. He is responsible for the product management of Amazon RDS for Oracle. Prior to joining AWS, Nagesh was a member of the Oracle High Availability Product Management team with special focus on distributed database architecture addressing scalability and high availability. While at Oracle, he was the product manager for Oracle Sharding and Oracle Global Data Services. Nagesh has 20+ years of combined experience in the database realm. He has a BS in Engineering and MS in Computer Science. He is a frequent speaker at various database related user groups and conferences.
https://aws.amazon.com/blogs/database/part-2-managed-disaster-recovery-with-amazon-rds-for-oracle-xrab/
0 notes
iyarpage · 7 years ago
Text
OpenShift project backups
Dr Jekyll’s potion famously owes its effectiveness to an ‘unknown impurity’. This is why, at the end of Stevenson’s tale, the protagonist has to confess to himself and the world that he will never regain control of his destructive alter ego. Some configuration errors are hard to spot; but it is much harder to figure out why an earlier, throwaway version of a service worked when our painstaking attempts to recreate it fail. As I hope to show, creating regular backups of our projects can help.
I’d like to distinguish between two kinds of backup here. On the one hand, there’s a spare vial in the fridge. Its contents match the original potion exactly. This is essentially a database snapshot. On the other hand, there’s a laboratory analysis of the original potion, which represents our only chance of identifying the ‘unknown impurity’.
In many cases, the vial in the fridge is what is needed. Its direct equivalent in the Kubernetes world is a database backup of the master’s etcd store. I want to concentrate instead on the laboratory analysis. It is less convenient when time is short, but it does offer a clear, human-readable glimpse of a particular moment in time when our service was working correctly.
While this approach will probably not allow you to restore the entire cluster to a working state, it enables you to look at an individual project, dissect its parts and hopefully identify the tiny, inadvertent configuration change that separates a failed deployment from a successful one.
There is no need to lock the database prior to taking the backup. We are exporting individual objects to pretty-printed JSON, not dumping a binary blob.
Why, considering our infrastructure is expressed in code, should we go to the trouble of requesting laboratory analyses? Surely the recipe will suffice as everything of consequence is persisted in Git? The reason is that too often the aspiration to achieve parity between code and infrastructure is never realised. Few of us can say that we never configure services manually (a port changed here, a health check adjusted there); even fewer can claim that we regularly tear down and rebuild our clusters from scratch. If we consider ourselves safe from Dr Jekyll’s error, we may well be deluding ourselves.
Project export
Our starting point is the script export_project.sh in the repository openshift/openshift-ansible-contrib. We will use a substantially modified version (see fork and pull request).
One of the strengths of the Kubernetes object store is that its contents are serialisable and lend themselves to filtering using standard tools. We decide which objects we deem interesting and we also decide which fields can be skipped. For example, the housekeeping information stored in the .status property is usually a good candidate for deletion.
oc export has been deprecated, so we use oc get -o json (followed by jq pruning) to export object definitions. Take pods, for example. Most pod properties are worth backing up, but some are dispensable: they include not only a pod’s .status, but also its .metadata.uid, .metadata.selfLink, .metadata.resourceVersion, .metadata.creationTimestamp and .metadata.generation fields.
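As a concrete illustration of that pruning step, the following sketch shows how the fields listed above could be stripped with jq. The exact filter in project_export.sh may differ, so treat this as an approximation rather than the script's actual code.

# Export pods and drop status plus volatile metadata fields (approximation of the script's filter)
oc get pods -o json | jq 'del(.items[].status,
                              .items[].metadata.uid,
                              .items[].metadata.selfLink,
                              .items[].metadata.resourceVersion,
                              .items[].metadata.creationTimestamp,
                              .items[].metadata.generation)' > pods.json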
Some caveats are in order. We store pod and replication controller definitions, yet we also store deployment configurations. Clearly the third is perfectly capable of creating the first two. Still, rather than second-guess a given deployment sequence, the backup comprises all three. It is after all possible that the pod definition (its replicas property, for example) has been modified. The resulting history may be repetitive, but we cannot rule out the possibility of a significant yet unseen change.
Another important caveat is that this approach does not back up images or application data (whether stored ephemerally or persistently on disk). It complements full disk backups, but it cannot take their place.
Why not use the original export script? The pull request addresses three central issues: it continues (with a warning) when the cluster does not recognise a resource type, thus supporting older OpenShift versions. It also skips resource types when the system denies access to the user or service account running the export, thus adding support for non-admin users. (Usually the export will be run by a service account, and denying the service account access to secrets is a legitimate choice.) Finally, it always produces valid JSON. The stacked JSON output of the original is supported by jq and indeed oc, but expecting processors to accept invalid, stacked JSON is a risky choice for backup purposes. python -m json.tool, for instance, requires valid JSON input and rejects the output of the original script. Stacked JSON may be an excellent choice for chunked streaming (log messages come to mind) but here it seems out of place.
Backup schedule
Now that the process of exporting the resources is settled, we can automate it. Let’s assume that we want the export to run nightly backups. We want to zip up the output, add a date stamp and write it to persistent storage. If that succeeds we finish by rotating backup archives, that is, deleting all exports older than a week. The parameters (when and how often the export runs, the retention period, and so on) are passed to the template at creation time.
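In practice this is typically a matter of processing the template with parameter overrides. The parameter names below (SCHEDULE, RETENTION_DAYS) and the template file name are guesses for illustration; check the repository's template for the real names before using them.

# Hypothetical template file and parameter names; inspect the repository for the real ones
oc process -f openshift-backup-template.yaml \
  -p SCHEDULE="30 2 * * *" \
  -p RETENTION_DAYS=7 \
  | oc apply -f -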
Let’s say we are up and running. What is happening in our backup project?
Fig. 1 Backup service
A nightly CronJob object instantiates a pod that runs the script project_export.sh. Its sole dependencies are oc and jq. It’s tempting at first glance to equip this pod with the ability to restore the exported object definitions, but that would require sweeping write access to the cluster. As mentioned earlier, the pod writes its output to persistent storage. The storage mode is ReadWriteMany, so we can access our files whether an export is currently running or not. Use the spare pod deployed alongside the CronJob object to retrieve the backup archives.
Policy
The permissions aspect is crucial here. The pod’s service account is granted cluster reader access and an additional, bespoke cluster role secret-reader. It is defined as follows:
kind: ClusterRole
apiVersion: v1
metadata:
  name: ${NAME}-secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "list"]
Perhaps the greatest benefit of custom cluster roles is that they remove the temptation to grant cluster-admin rights to a service account.
The export should not fail just because we decide that a given resource type (e.g. secrets or routes) is out of bounds. Nor should it be necessary to comment out parts of the export script. To restrict access, simply modify the service account’s permissions. For each resource type, the script checks whether access is possible and exports only resources the service account can view.
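On recent oc versions, the same question can be asked interactively with oc auth can-i, which is a convenient way to predict what the export will include for a given account. A minimal sketch:

# List which of these resource types the current account may read
for kind in deploymentconfigs replicationcontrollers pods services routes secrets; do
  if oc auth can-i list "${kind}" > /dev/null 2>&1; then
    echo "will export: ${kind}"
  else
    echo "will skip:   ${kind} (access denied)"
  fi
done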
Fig. 2 Permissions
Administrator permissions are required only to create the project at the outset. The expectation is that this would be done by an authenticated user rather than a service account. As Fig. 2 illustrates, the pod that does the actual work is given security context constraint ‘restricted’ and security context ‘non-privileged’. For the most part, the pod’s service account has read access to the etcd object store and write access to its persistent volume.
How to get started, and why
To set up your own backup service, enter:
$ git clone https://github.com/gerald1248/openshift-backup
$ make -C openshift-backup
If you’d rather not wait until tomorrow, store the permanent pod’s name in variable pod and enter:
$ oc exec ${pod} openshift-backup $ oc exec ${pod} -- ls -l /openshift-backup
Please check that the output has been written to /openshift-backup as intended. You can use the script project_import.sh (found next to project_export.sh in the openshift/openshift-ansible-contrib repository) to restore one project at a time. However, in most cases it will be preferable to use this backup as an analytical tool, and restore individual objects as required.
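When only one object needs to come back, it is often simpler to pull its definition out of the exported JSON and re-apply it than to import the whole project. This is a sketch under the assumption that the export produced one JSON list per resource type per project; the file and object names below are placeholders.

# Placeholder file and object names; adjust to your backup's actual layout
jq '.items[] | select(.metadata.name == "my-service")' myproject/svc.json > service.json
oc apply -n myproject -f service.json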
It’s worth considering the sheer number of objects the object store holds for a typical project. Each of them could have been edited manually or patched programmatically. It could also lack certain properties that are present in the version that is stored in Git. Kubernetes is prone to drop incorrectly indented properties at object creation time.
In short, there is ample scope for ‘unknown impurities’. Given how few computing resources are required, and how little space a week’s worth of project backups takes up, I would suggest that there is every reason to have a laboratory analysis to hand when the vials in the fridge run out.
The post OpenShift project backups appeared first on the codecentric AG Blog.
OpenShift project backups published first on https://medium.com/@koresol
0 notes
monoshah · 8 years ago
Text
Love: Everything I learned in a year
A year ago, I set myself on an inquiry of love. I wanted to see if love really is all that it's hyped up to be. Being someone who's not had much skill in managing relationships, I thought it was high time I actually studied this thing we all seemingly long for.
At first, the idea of people not doing such an inquiry struck me. I mean think about it- we’re all, for better or for worse, advocates of love. At each stage of our lives, love has an important part to play, and yet- we fail to put in the work to study it. We just “let it happen”, as if its this magical thing that the universe just gives us.
But what if we looked closely? What if we stopped assuming that a tiny baby with wings and an arrow controlled our destinies?
So, for the past year, I made it a point to put it under a microscope and see it for what it is, not what the media portrays it to be. Every other month, I read a book about it; sometimes I didn't find any insight, while other times I found lessons hidden inside long paragraphs of sophisticated language. In addition to that, I started reflecting more on my skills pertaining to relationships, trying to discover patterns.
Either way, there are some things I learned. Regardless of what your relationship status is, I hope this soothes you.
P.S: You may notice that I’ve used my own example from past relationships to convey certain points. The idea here isn’t to make you feel bad for me (you can if you want to though). Instead, it’s to encourage you to try and apply it to your own life. 
 Lessons
Context: Crushes Lesson: Crushes are delusional 
If we really examine our crushes under a microscope, we'd realize how irrational our assumptions about them are. Think about it- one look at someone, and somehow, we know exactly what kind of person they are, the vulnerabilities they share. What's worse, the seemingly trivial things they do or possess make us even more attracted to them: the way they comb their hair, that little mark under their right chin. We use these little things to paint a picture of who they are as individuals.
So, if we try to eliminate the romanticized filter, they, just like anyone else, would seem so normal. All of our assumptions drop as we see reality.
After I read about this, I thought about applying it in my life. So, every time I felt attracted to someone, I journaled about it. I asked myself the seemingly tough question- why? As in- why am I attracted to this individual?
Strangely enough, I could never answer it. I could never think about rational reasons for my crush. Here’s a snapshot of a paragraph from my journal after I started crushing on someone at work and thought about getting some perspective about it by journaling.
“Okay, I think it started when she slacked you- there you are, missed you. You looked up and there she was, smiling, as she steadily used her right forefinger to place some strands of her hair behind her ear. It looked so theatrical, as if she'd been practicing the same move for weeks, just for you. And yet, it was so normal. So, Monil, think about it- that one move made you picture how happy the two of you would be in the future. A seemingly trivial move, and there you were- so sure that she was the one”
So, should we completely disregard our crushes? Of course not, regardless of how irrational, they’re fucking amazing. Then, instead of blindly believing in them, maybe this knowledge will make us question it. So, in the future, we make wiser choices.
Context: Romanticism Lesson: After a point, romanticism has the power to ruin love.
We’ve all been there- that sweet honeymoon phase where everything is just so amazing. Whatever our partners do, it somehow makes us feel good. The way they talk on the phone, those cute fights when they don’t hang up and ask you to (don’t deny it, you know you’ve been there).
Now, that’s great isn’t it? It makes us believe that this is it, we’ve done it. This is love and oh my god, its so beautiful. She/he has to be the one, there’s no doubt to it.
However, what happens when the things we realize during that phase are tested against time? That is, when we try to evaluate our relationships based on the honeymoon phase? We're bound to find loopholes. Those cute love songs no longer do it for us, and since it seems like we're going farther and farther away from our sweet ideal phase, something has to be wrong, right?
Wrong.
I learned that the major issue new couples face during love comes from the sweet poison that is romanticism. Unfortunately, there is little we can do to escape it, the society wraps it up and presents it to us in the form of media. So, of course we’ll believe that we’re the next Romeo and Juliet, how could we not be?
The major problem with romanticism is rooted in how it changes our perception of love. It makes us strongly believe that if we’re with the right partner, we will feel right. Forget reason, its all about feelings. And, the only way for us to feel better is to follow an invisible script (so to speak), a template, both of which are made by the society, updated with the modern culture.
This very template tells us what is "normal" in love, or worse- how a relationship should unfold, in and out of bed. So, one might righteously assume that this template has been written "by the Gods", or in other words- "it's always been like this".
The ideology emerged in Europe around mid eighteenth century through the minds and hearts of poets, philosophers, and writers.  And since then, it has taken over modern society. I can literally go on and on about exactly how it ruined love, but I’ll let the masters do that for you. 
Context: How we choose our partners Lesson: There is a reason for our “type”
Each of us has a type, whether we admit to it or not, we often find ourselves dating similar types of people. Now, instead of labeling these types as “assholes” or “nice guys”, what if we questioned why we have the types we do?
Why, for example, do I only date people who are hard to please emotionally? What’s worse, I started noticing that I actually enjoy the toxicity of our relationship. In a very real sense, I like my love to be unrequited. I mean, who doesn’t want to walk around with heartbreak songs blasting through the headphones in the street and feeling like they’re the only ones?!
Despite the momentary pride, that’s madness.
And, when (as they say) the “stars do align”, and my partner starts feeling the same, I feel like all the excitement is lost. In a way, I don’t really want you to love me, but I will keep loving you, to a point where you will feel suffocated. 
To put this habit into perspective, here's Alain de Botton in his book Essays in Love:
“We fall in love because we long to escape from ourselves with someone as beautiful, intelligent, and witty as we are ugly, stupid, and dull. But what if such a perfect being should one day turn around and decide they will love us back? We can only be somewhat shocked-how can they be as wonderful as we had hoped when they have the bad taste to approve of someone like us?” 
Such insanity begs to be inquired.
If I may put it bluntly- I learned that we seek familiarity, not love. That is, we try to replicate the kind of love we were used to in our childhood, to adult life. Therefore, the reason why I choose such partners is because they replicate the love I received when I was a little kid. Similarly, I don’t want anyone to love me because I genuinely feel I don’t deserve it.
(Note- If you feel like blaming your parents, check out this and this post).
The solution, then, so to speak, is internal healing. I can’t and shouldn’t expect another individual to heal me, for me. That’s my responsibility. Only when I heal from the inside, can I love more authentically on the outside.
Context: The way we choose our partners
Lesson: Modern Metrics for attraction are trivial
Although romanticism ruined love, to some extent it did make falling in love easy. It taught us the importance of physical attraction in the realm of love. And, given that we are obedient students of society, we followed it blindly. Then technology entered, and now not only do we confidently fall for someone by the way they look, but we do it in front of our screens. It's as easy as swiping right.
So, it always amazed me- as much as physical attraction is important, what are some other, more realistic metrics we can use to choose our partners wisely? And interestingly enough, I found that these very metrics, in a way, can help us go beyond our feelings and tap into a more trustworthy resource- reason.
I’m sure you must’ve heard the phrase- when in doubt, go old school.
So, I did.
I wondered- what was love like before romanticism? And I came across a wonderful play written by Plato, called the Symposium.
A symposium, in Greek, is a drinking party. The plot is simple- a couple of philosophers attend a drinking party and each one of them ends up giving a speech about what they consider love to be. Think about it- a bunch of people whose profession was literally to just sit and think come together, in the presence of a lot of alcohol, and try to solve the mystery of love.
Naturally, I picked up some great insights. Amongst all the speeches, Diotima (a priestess, possibly fictional, whose teachings Socrates recounts) seemed to have given the most serious one (summary picked up from GradeSaver):
“First, Love leads a person to love one body and beget beautiful ideas. From these ideas, this person realizes that the beauty of one body is found in all bodies and if he is seeking beauty in form, he must see beauty in all bodies and become lover to all beautiful bodies. After that, the person moves on to thinking the beauty of souls is greater than the beauty of bodies. Here, Diotima specifically refers to giving birth through the soul to make young men better. This results in the lover seeing love in activities and eyes, over the beauty of bodies. Ella also refers to these as beautiful customs, from which the lover loves beautiful things, or other kinds of knowledge. The lover will lastly fall on giving birth to many beautiful ideas and theories, finding love of wisdom. This love never passes away and is always beautiful. The end lesson is learning of this very Beauty (wisdom), coming to know what is beautiful. Only at this point will a lover be able to give birth to true virtue. This person will be loved by the gods and is one of the few who could become immortal. The “Rites of Love,” otherwise referred to as the “Ladder of Love,” is the ultimate conclusion in Diotima’s speech. The last rung of the ladder makes one a “lover of wisdom,” or a philosopher, which in one respect is not surprising, since Plato is a philosopher. Philosophy is love’s highest expression, which allows a person to see Beauty.”
So, one way to apply this to reality is to use "the love of wisdom" as a spearhead in our relationships. No longer will beauty mean something superficial and time-bound. More importantly- no longer will we stay in love purely through instinct. Maybe this will help us think about tapping into another useful resource- reason.
Context: The "right" person
Lesson: Why we will all end up marrying the wrong person
There is a lot of emphasis on finding the "right" partner, someone who can magically understand our mood, who can tell us why we're upset even if we ourselves can't quite grasp the reason. So naturally, if our current relationship starts becoming boring, we confidently believe that it's the partner. That they're not "right" for us.
A natural second thought, then, is to believe that there is someone out there for us. We don't want to even think about (let alone acknowledge) the anxiety-inducing thought that maybe, just maybe, there is no one out there for us; that the dots that make up love are joined by coincidence and not design. That maybe the universe doesn't really give a fuck about us (or anyone else for that matter).
And, because we fail to acknowledge such a thought, our search for the right partner never ends. Strangely enough, most of us don’t even know why we may consider someone as the “right” person, our only metric being- “she/he should understand me one hundred percent, cure my loneliness, answer the scary question of what my purpose is on this planet, and of course- be amazing in bed.”
I learned that as much as we want it to be true, there is no right person for anyone. That Cupid is too young and immature to direct us to our right partner. Practically, the biggest reason for this stems from the obviously tough (and yet seemingly simple) fact that we cannot fully understand ourselves, let alone make sense of another human being for a lifetime.
Here's Alain de Botton in his popular NY Times article "Why You Will Marry the Wrong Person": Marriage ends up as a hopeful, generous, infinitely kind gamble taken by two people who don't know yet who they are or who the other might be, binding themselves to a future they cannot conceive of and have carefully avoided investigating.
This, as mentioned before, is rooted in the fact that love, ever since romanticism was born, has been guided by instinct.
You can read the entire article here, or better- buy the book he wrote about it.
So, in hindsight, if there is no right person for us, what’s the silver lining? The abundance of opportunity. Think about it- if there is no right person for us, that also means that there is no wrong person for us. This makes us acknowledge a soothing reality- that whoever we’re with or will be with, will disappoint us and make us happy at the same time. Our metric, then, isn’t a 4.0. 
Instead, in Alain's words, it's-
The person who is best suited to us is not the person who shares our every taste (he or she doesn’t exist), but the person who can negotiate differences in taste intelligently — the person who is good at disagreement. Rather than some notional idea of perfect complementarity, it is the capacity to tolerate differences with generosity that is the true marker of the “not overly wrong” person. Compatibility is an achievement of love; it must not be its precondition.
Context: Attachment Styles
Lesson: Why we are the way we are to our partners
If I started drawing insights from every relationship I’ve had to date, one thing in particular would stand out- the painful memory of being overly attached to my partner. At first, looking at it naively, it just confirmed a rather boastful belief that I had about myself- that I’m just a hopeless romantic.
And, despite the seemingly great status that title carried in itself, being a hopeless romantic is painful. It requires total and utter submission- giving yourself up and trusting your significant other. Forget healthy boundaries and say goodbye to your self-esteem.
So, just like everything else, I put this under a microscope. I asked myself- why? Why do I do this every time? In a very real sense, I wanted to see if being a “hopeless romantic” was worth it.
Fortunately, I found my answers in a book called Attached, by Amir Levine and Rachel S.F. Heller. It describes three typical attachment styles in adult relationships: Anxious, Avoidant, and Secure.
An anxious attachment style encompasses the painful hobby of being preoccupied with one's relationship and feeling worried about our partner's ability to love us back. There- I hit the jackpot.
Avoidants are the exact opposite. For them, intimacy equates to a loss of independence, so they try to minimize closeness as much as possible.
As you might’ve guessed, an anxious-avoidant pair serves as the perfect toxic playground. While one person tries to pull, the other one is pushing. Thus, tension is always present.
A secure attachment style is the goal, where people are comfortable with intimacy and are warm and loving.
More than simply making sense of my misery, this book also served as a guide, helping me think about what I can do in my future relationships so they don't feel so painful. In that context, I learned that instead of changing my attachment style, I have to use it as a roadmap for selecting partners. That is- I have to think twice before getting involved with someone with an avoidant attachment style, and this is the hardest part. Why? Because of the push-pull habit I described above: anxious and avoidant styles complement each other. Additionally, the reason I've never attracted someone with a secure attachment style is that, to me, there's no excitement in that relationship. Secure people communicate their wants and needs clearly. They don't send mixed signals, and I find that boring, given that I've learned to work really hard for love.
To put this lesson bluntly- I have to stop equating my anxious attachment style with passion and love. It's not.
Second, I have to stop labelling my “neediness” as good or bad. And, further, acknowledge that I too deserve compatibility and love.
Apply this to yourself: what’s your attachment style? How has that impacted your past or present relationships? 
Context: "How did you two meet?"
Lesson: Coincidence plays a big hand in our love stories
There is a sense of pride that we feel as couples, when someone asks us the typical question- how did you two meet?
More often than not, our answer revolves around the magical statement- it was just so meant to be. And yet, if we really think about it, that hardly seems to be the case.
Let's say you met your partner at a party. You ordered a Diet Coke with lime and you noticed the bartender take out two glasses. She gave one to you and as you looked up, the other glass was given to someone else. Your eyes met with his, he raised his glass and so did you, and then the two of you got talking. He shared his aspirations of being an architect, while you tried not to blabber too much about how much you love graphic design.
The fact that both of you use the right side of your brains for a living made you wonder- wow, this is so meant to be.
Then, he shared his embarrassment at ordering a Diet Coke with lime and you reassured him that his drink choice didn't make him any less cool or young. That act of reassurance made you fantasize about how perfect this relationship would be.
In a way, you started following a script- a script that, although written and edited by media and culture, made you think it was written by Cupid, just for you. All the while forgetting that at any moment, you had the chance to tear that script up, change the ink of the pen you were writing it with, or close the book altogether.
Alright, let’s back up a bit.
What if you had decided not to attend the party? What if you had chosen to stay in and read a good book? What if you had decided to get another drink? The point is- there are so many possibilities, so many scenarios that could've played out.
I learned that we fall in love by coincidence, not design. There is no one up there writing our love stories, for better or for worse, we have the ability to do it ourselves. Maybe this makes you a tad upset, but give it a minute and think about it- because there is no predetermined “right place” or “right time”, there is also no wrong place or wrong time.
Choice has always been with us, and it always will be. So, the next time people ask your partner and you that question about how the two of you met- really think about it, how did the two of you meet? And then question the typical norm of crediting cupid.
Context: How Love Stories Typically End
Lesson: The other side of happily ever after
If we take notes about what every love story typically consists of, we can find ourselves jotting down the following:
- Every love story has a nemesis that has to be defeated
- Most of the time, both individuals are never on the same page
- When they are- they meet and the story ends
And we fucking love it. Don’t get me wrong, despite studying this for a year now, every time I pick up a typical romance book, I can’t let go of it. Romance novels emphasize romanticism, they make us believe that maybe, just maybe, our horribleness is worthy of love too.
That of course there's a special, predetermined someone for us, who will step in and just cure our misery, who will make living just a tad bit better.
That said, here’s the problem I have with them- the ending.
More often than not, love stories end at a point where real life starts. In actuality, “happily ever after” encompasses marriage, kids, death, and what not. Isn’t that where we really need some guidance? I mean, modern culture has taught us romanticism well, we’re all brilliantly aware of how to handle the honeymoon phase, the things to say to our crushes, the songs to listen to; unfortunately, we’re pretty blindfolded in the next stage- that is, living a life with them.
We don't know what to do when suddenly our partner starts disagreeing with us, when they can't quite get along with our friends, when they leave the bathroom lights on; wouldn't we all want to see what would've happened if Romeo and Juliet got an apartment in Brooklyn and started living a life together?
Of course we would, it’s an area we’re not at all educated about.
Now, most definitely, it's not a perfect science. If there's one thing we know about relationships, it's that we can't really master them. It's a tightrope between two cliffs; we're all just... trying our best not to fall. That said, just because we can't master them doesn't mean we can't explore the other side, look at it, be curious about it, and if we're lucky- learn something from it.
Fortunately, I found the perfect book that explores this concept, The Course of Love by Alain de Botton. The best part about this book isn't how the two individuals meet; instead, it's what happens after they meet and fall in love- marriage, kids, death, and what not. The book explores fascinating questions like- why, after being so sure that our partner is the right one, do we start realizing that maybe we've made a mistake? How and why do couples lose the "spark"?
All I’d say is- get the book, read it, and treat it like your bible for relationships.
If you do, I'd suggest reading Essays in Love first; it was his first book. Although it doesn't cover marriage, the text still explores the first few phases of love- crushes, breakups, affairs, etc.
Ending Thoughts
Our modern definition of love acknowledges the reality that love stories don't end when the two individuals meet. They go on; in fact, it's essential for them to go on, given life.
So, how can we use this insight?
We can let it change our perspective on love and relationships. We can, in a very real sense, let it change the typical attitude we have towards it- a sense of wonder, something that Cupid/God controls. Instead, we can finally consider the possibility that maybe, just maybe, love is more deliberate, more logical, and thus- a skill that can be learned.
What this means is, the next time we crush on someone, we don’t let it fully consume us, we acknowledge that yes- they’re lovely, of course they’re perfect for us, but at the same time, there’s a side of them we haven’t seen yet. Not because they’re hiding it, but because we’re blindfolded by love. 
The point isn’t to point out faults in love, it’s to let a little bit of cynicism help us stay aware of what’s happening.
When I started on this endeavor, a quote by Viktor Frankl amazed me. In his book Man's Search for Meaning, he said- "The salvation of man is through love and in love", and after everything I've learned, I think he's right.
The problem with quotes and phrases about love is that they're very vague, which is why they need to be delayered. These sayings make us believe that the right approach to love can be taken from the surface; instead, we need to get rid of the really attractive bottle cap and pour out the contents of the bottle. Then, be curious about it and hopefully- learn something from it.
Real love, in essence, is transparency: acknowledging not only the scars both partners carry, but also how those scars affect who they are and what they do in relationships. That is where, I reckon, the whole salvation thing comes in- when we're not only accepted for who we are but also loved for it.
I hope this post made you (at the least) think about the possibility that love isn’t random, it doesn’t happen when the stars align. Instead, it’s something that can be logical, something that can be guided by reason. That doesn’t necessarily make love unsexy or unromantic, it actually makes it sustainable, so you can grow in and with it. 
0 notes
for-the-user · 8 years ago
Text
mongodb replication
(Original post: https://blog.sebastianfromearth.com/post/20170510090839)
Member Types
The primary node receives all write operations and records all changes to its data sets in its operation log, i.e. oplog. The secondaries replicate the primary’s oplog and apply the operations to their data sets. They apply these operations from the primary asynchronously.
If the primary is unavailable, an eligible secondary will hold an election to elect itself the new primary.
You may add an extra mongod instance to a replica set as an arbiter. Arbiters do not maintain a data set, their purpose is to maintain a replica set by responding to heartbeat and election requests by other replica set members. An arbiter will always be an arbiter whereas a primary may step down and become a secondary and a secondary may become the primary during an election.
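To see what the secondaries are replaying, you can inspect the oplog on any data-bearing member. A quick, purely illustrative sketch in the mongo shell:

use local
// the most recent entry written to the operation log
db.oplog.rs.find().sort({ $natural: -1 }).limit(1).pretty()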
Read preference
By default, clients read from the primary, however, clients can specify a read preference to send read operations to secondaries. Read preference modes are:
primary - Default mode. All operations read from the current replica set primary.
primaryPreferred - In most situations, operations read from the primary but if it is unavailable, operations read from secondary members.
secondary - All operations read from the secondary members of the replica set.
secondaryPreferred - In most situations, operations read from secondary members but if no secondary members are available, operations read from the primary.
nearest - Operations read from member of the replica set with the least network latency, irrespective of the member's type.
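As a rough sketch (the database and collection names here are hypothetical), a read preference can be set per cursor in the mongo shell, or for the whole connection via the connection string:

// per-cursor read preference ("orders" is just an example collection)
db.orders.find({ status: "shipped" }).readPref("secondaryPreferred")

// per-connection read preference via the connection string
// mongodb://this.is.mongo0:27017,this.is.mongo1:27017,this.is.mongo2:27017/mydb?replicaSet=my_groovy_replica_set&readPreference=nearest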
Automatic Failover
Replica set members send heartbeats (pings) to each other every two seconds. If a heartbeat does not return within 10 seconds, the other members mark the delinquent member as inaccessible.
When a member is marked as inaccessible, the secondary with the highest priority available will call an election. Secondary members with a priority value of 0 cannot become primary and do not seek election.
When a primary does not communicate with the other members of the set for more than 10 seconds, an eligible secondary will hold an election to elect itself the new primary. The first secondary to hold an election and receive a majority of the members’ votes becomes primary.
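With replica set protocol version 1, that 10-second window corresponds to the electionTimeoutMillis setting, which can be tuned if you want faster or slower failover. A sketch only, 10000 ms being the default:

cfg = rs.conf()
// how long members wait on heartbeats from the primary before calling an election
cfg.settings.electionTimeoutMillis = 10000
rs.reconfig(cfg)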
You can additionally configure a secondary to:
Prevent it from becoming a primary in an election
Prevent applications from reading from it
Keep a running "historical" snapshot for use in recovery
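A minimal sketch of all three, assuming a three-member set like the one initiated below and applying the standard member options (priority, hidden, slaveDelay) to member index 2:

cfg = rs.conf()
cfg.members[2].priority = 0       // can never become primary
cfg.members[2].hidden = true      // hidden from clients, so no application reads are routed to it
cfg.members[2].slaveDelay = 3600  // apply the oplog one hour behind, a rolling "historical" snapshot
rs.reconfig(cfg)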
Initiate a replica set
conf = {
  "_id": "my_groovy_replica_set",
  "version": 1,
  "members": [
    { "_id": 0, "host": "this.is.mongo0:27017" },
    { "_id": 1, "host": "this.is.mongo1:27017" },
    { "_id": 2, "host": "this.is.mongo2:27017" }
  ]
};
rs.initiate(conf);
Commands
Add a member to a replica set. You must run it from the primary of the replica set.
rs.add('this.is.mongo3:27017')
OR
rs.add( { host: "this.is.mongo3:27017", priority: 0 } )
Add an arbiter to an existing replica set.
rs.addArb('this.is.mongo3:27017')
OR
rs.add('this.is.mongo3:27017', true)
Remove a member from an existing replica set.
rs.remove('this.is.mongo1:27017')
Make a replica set member ineligible to become primary, either temporarily (for x seconds) or until reconfigured (priority 0).
rs.freeze(seconds)
OR
cfg = rs.conf()
cfg.members[2].priority = 0
rs.reconfig(cfg)
Allow read operations to run on secondary members for the current connection.
rs.slaveOk()
OR
db.getMongo().setSlaveOk()
Trigger the current primary to step down and trigger an election for a new primary (blocks all writes to the primary while it runs).
rs.stepDown(60)
Check current replica set config.
rs.conf()
Check the status of the replication set.
rs.status()
OR
use admin
db.runCommand( { replSetGetStatus : 1 } )
Check if the current member is the primary.
rs.isMaster().ismaster
Check if the current member is a secondary.
rs.isMaster().secondary
Return a formatted report of the status of a replica set from the perspective of the secondary member of the set.
rs.printSlaveReplicationInfo()
source: m1.example.net:27017
    syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
    0 secs (0 hrs) behind the primary
source: m2.example.net:27017
    syncedTo: Thu Apr 10 2014 10:27:47 GMT-0400 (EDT)
    0 secs (0 hrs) behind the primary
Print a report of the replica set member's oplog.
rs.printReplicationInfo()
configured oplog size:   192MB
log length start to end: 65422secs (18.17hrs)
oplog first event time:  Mon Jun 23 2014 17:47:18 GMT-0400 (EDT)
oplog last event time:   Tue Jun 24 2014 11:57:40 GMT-0400 (EDT)
now:                     Thu Jun 26 2014 14:24:39 GMT-0400 (EDT)
Restarting stuff
What happens to the replica set?
Take this setup for instance:
no arbiter
this.is.mongo0 - secondary
this.is.mongo1 - secondary
this.is.mongo2 - primary
If I restart this.is.mongo0, then while the secondary is rebooting, checking the replication status on the primary shows:
this.is.mongo2:PRIMARY> rs.status()
...
{
    "_id" : 0,
    "name" : "this.is.mongo0:27017",
    "health" : 0,
    "state" : 8,
    "stateStr" : "(not reachable/healthy)",
    "uptime" : 0,
    "optime" : {
        "ts" : Timestamp(0, 0),
        "t" : NumberLong(-1)
    },
    "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
    "lastHeartbeat" : ISODate("2017-05-09T14:38:46.137Z"),
    "lastHeartbeatRecv" : ISODate("2017-05-09T14:27:12.363Z"),
    "pingMs" : NumberLong(0),
    "lastHeartbeatMessage" : "Connection refused",
    "configVersion" : -1
},
...
After 10 seconds, this will change from this:
"lastHeartbeatMessage" : "Connection refused",
To this:
"lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
Or this:
"lastHeartbeatMessage" : "no response within election timeout period",
And configVersion will change from 1 or 3 to -1
At this point the members will elect a new primary. It could be that the same host is re-elected primary, but in this case, my replica set looked like this afterwards:
this.is.mongo0 - secondary
this.is.mongo1 - primary
this.is.mongo2 - secondary
Primary election is very fast with three members; I'm not sure how long it takes with a larger cluster.
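If you would rather steer the election toward a particular member after a restart, member priorities can be raised before the reboot. A sketch, assuming you want this.is.mongo2 preferred as primary:

cfg = rs.conf()
cfg.members[2].priority = 2   // prefer this.is.mongo2 in elections
cfg.members[0].priority = 1
cfg.members[1].priority = 1
rs.reconfig(cfg)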
0 notes
energitix · 8 years ago
Quote
I bet by now you've heard about the AWS S3 outage. Somewhat ironically, at 17:47 last night I was watching a training video on Amazon Web Services (AWS) Simple Storage System (S3) best practices when the video stopped. The video service, hosted on AWS as you would expect, wouldn't refresh so I decided to look up some AWS S3 documents instead. However, my browser displayed a HTTP 500 (service unavailable) from aws.amazon.com. It was highly unusual and thus unexpected. The AWS status page was showing a sea of green so, like anyone else looking for near-real time news, I searched Twitter for "AWS 500". Was anyone else seeing this? Yes, but just a few people. I'd encountered what was the start of the "February 28th AWS US-East-1 S3 Increase Error Rates," or the Twitter trend called "Amazon S3 Outage."
"is it just me getting HTTP 500 from https://t.co/4OvzDj0RTd? http://pic.twitter.com/EsuPVGpu2s" — UK Cloud Pro (@ukcloudpro) February 28, 2017
Cue social media reactions: sarcastic meme-driven humor: "Oh my god the Internet is dead"; shrieking; gloating competitors; private cloud apologists; and anti-cloud ideologues.
"Amazon S3 according to the #AWS status page. http://pic.twitter.com/WpMAyHs0nY" — David C. Campbell (@DCCampbell) February 28, 2017
However, there is "signal" in this social noise and here are five important things to take away from the AWS S3 "event."
1. AWS wasn't down, but US East 1 was "impaired"
AWS is a global cloud service made up of 16 regions and counting. When people on Twitter shrieked "AWS is down!" this wasn't true. The reality was that an important service in one region was impaired, but other regions were A-OK.
Source: Amazon Web Services
These regions are completely independent of each other, except for the special US-East-1 region which handles some global services. And inside each region you have multiple availability zones (AZ) which you can think of as datacenters. These datacenters are resilient with their own redundant power and connectivity, and they're also connected to each other inside the region so that services like S3 can automatically replicate your data three times – giving you data durability of eleven nines. The standard availability of the S3 service is four nines.
Source: Amazon Web Services
What we experienced yesterday was the S3 service in the region US-East-1 suffering from what AWS called "high error rates" and the service was impaired. Other regions were mostly OK, such as EU-West-1, but the US-East-1 S3 event had a wider "blast radius" because:
When you create an S3 bucket and don't specify a region, the default is US-East-1.
US-East-1 is a special region, sometimes called a "global region," because it's the default endpoint for "global resources" such as hosted zones, resource record sets, health checks, and cost allocation tags.
The S3 in US-East-1 is used by a lot of AWS services including the AWS Status Page and the AWS documentation.
US-East-1 is huge (five AZs) and is older and more popular than other regions, running a lot of well-known online applications that use S3 to host their data. Therefore, the blast radius was very public on Twitter, with many companies explaining that their downtime was due to AWS.
US-East-1 is in "spy country," which will please the conspiracy theorists!
Therefore, any impact to a core AWS service in a core AWS region like US-East-1 is likely to have a ripple-out service impact across a wide blast radius.
2. Why did the S3 impairment in one region have such a big impact?
S3 is sometimes referred to as "the storage for the Internet" because it's used everywhere. Any object like a thumbnail or log record put into it is easily accessible over HTTP from anywhere. Your app doesn't have to be running on AWS, it could be running in your datacenter and accessing S3 over the Internet.
"When Amazon S3 is down. #awscloud #awss3 http://pic.twitter.com/KQo4sVvkAl" — Fernando (@fmc_sea) February 28, 2017
S3 is the epitome of one of the core "essential cloud characteristics" from the NIST Cloud Standards: broad network access. It's what makes AWS the public cloud leader. It also means that, because S3 is cheap and easy to use by any user with no special skills required, it's used universally as the backing store for many trillions of objects serving millions of requests per second. Both AWS services and customer applications use S3, so it was like a house of cards even though other regions' S3 was working fine.
Source: ViewYonder
Awkwardly, even the status indicators on the AWS service status page rely on S3 for the storage of its health marker graphics. And thus, during the outage the status page was showing all services as green despite the service being "impaired." AWS did add a banner to the status page to explain why this was happening, and they did issue this tweet to keep customers informed.
"The dashboard not changing color is related to S3 issue. See the banner at the top of the dashboard for updates." — Amazon Web Services (@awscloud) February 28, 2017
Twitter was awash with companies apologizing to customers for service degradation or outage, and looking at SimilarTech data – the number of sites they claim are running on AWS S3 is significant:
Source: SimilarTech.com
Interestingly home users were also affected with reports that parents couldn't get Alexa to sing nursery rhymes to their children, i.e. failing to access Amazon Music, and home automation systems locking people in their homes (or into darkness).
4. People don't always use the redundant AWS services
Even though there are multiple AZs in one region, and multiple regions across the world, not everyone uses these availability features for a range of reasons:
Unknown dependencies on S3. An application running in one region can be accessing data from an S3 in another region across the Internet – it's just a URL, and developers are known to hardcode filesystem paths and URLs into their code. As applications age, do developers keep track of where data is coming from? You would think so, but humans make mistakes and dependencies can accrue over time to create fragile applications.
Cloud-immigrant applications. Within AWS, customers can deploy applications that are not cloud native and cannot benefit from "distributed computing" that multi-AZ and multi-regions offer. It might be an old, monolithic, and fragile COTS (commercial off-the-shelf) product that stubbornly sits in a "snowflake" virtual machine running the database locally. If this virtual machine is in a failing AZ or region, then it's back to standard backup, restore, and disaster recovery scenarios. EC2 in the US-East-1 region was affected because many EC2 virtual machines use S3 for storage.
Availability isn't worth the cost.
There are those that understand the technology but don't want to pay for the extra resources such as multi-AZ databases, or pay for lots of snapshots, or pay for cross-region replication or many of the other HA/DR (high availability/disaster recovery) features AWS makes readily available. "That app isn't worth the costs of availability" might be the view – but has this been really worked out? Which other applications rely on this application?
This isn't unique to cloud. For as long as IT has been around, people have chosen to save money on HA/DR, or not even known it was something they needed to consider.
5. "Knee Jerks" attempt to muddy the water
Whenever AWS has an issue there are three types of commentator that surface on social media, what I like to call "Knee Jerks":
The AWS-bashing competitor: "AWS has an outage! Time to reconsider and use our cloud." It's unlikely that you will see Azure or Google bashing AWS but you might see an anti-cloud enterprise technology vendor or smaller, regional service provider naively bashing AWS during an outage. I don't know any businesses that are impressed by competitor bashing.
The hybrid/private cloud ideologues: "Public cloud places all your eggs in one basket and is flawed – use our hybrid cloud and hedge your bets." Whatever people say hybrid cloud might be, and there's no consensus or standard, it couldn't do what S3 does. So this point is moot and shows a lack of understanding of modern Internet and cloud architectures.
The anti-cloud on-premises techie: "If it was your own kit you'd be able to troubleshoot it yourself! Can't do that in the cloud!" But this is the whole point of cloud – I don't want to troubleshoot it myself! AWS has all the skills and know-how and resources I don't have. This is usually a techie blinded by their fear of the cloud, who wants us to build IT systems in a shed at the end of their garden.
It's unlikely that anyone who understands AWS will pay any attention to this kind of rhetoric, but non-cloud-savvy business users might unfortunately be influenced by this kind of fear, uncertainty, and doubt.
The calm after the storm
Eventually, AWS got its status page working and it didn't look pretty; in fact, it was commented on that some long-running cloud users had never seen this before. It's quite a testament to AWS long-term reliability:
Source: Amazon Web Services
AWS will fix this issue and their service will return to normal (and will be by the time this is published). Then it will work out how to never have this particular issue/problem again – they have people like James Hamilton who live to find and solve these issues. And if you use AWS, then James is working for you.
AWS will also produce a post-mortem and they are unique in leading cloud service providers in doing this – in my experience, many cloud service providers either don't write outages up, or merely provide an exec-level generic blog piece. In the AWS post-mortem, we'll find out what went wrong, how they fixed it, and what AWS is going to do better. And I'm certain that they will also tell their customers how they can do things differently too.
The post 5 Things You Need To Know About The AWS S3 Outage appeared first on ITSM.tools.
https://itsm.tools/2017/03/01/5-things-aws-s3-outage/
0 notes
lupbiy · 9 years ago
Text
What’s New in MongoDB 3.4, Part 3: Modernized Database Tooling
Welcome to the final post in our 3-part MongoDB 3.4 blog series.
In part 1 we demonstrated the extended multimodel capabilities of MongoDB 3.4, including native graph processing, faceted navigation, rich real-time analytics, and powerful connectors for BI and Apache Spark
In part 2 we covered the enhanced capabilities for running mission-critical applications, including geo-distributed MongoDB zones, elastic clustering, tunable consistency, and enhanced security controls.
We are concluding this series with the modernized DBA and Ops tooling available in MongoDB 3.4. Remember, if you want to get the detail now on everything the new release offers, download the What’s New in MongoDB 3.4 white paper .
MongoDB Compass
MongoDB Compass is the easiest way for DBAs to explore and manage MongoDB data. As the GUI for MongoDB, Compass enables users to visually explore their data, and run ad-hoc queries in seconds – all with zero knowledge of MongoDB's query language.
The latest Compass release expands functionality to allow users to manipulate documents directly from the GUI, optimize performance, and create data governance controls.
DBAs can interact with and manipulate MongoDB data from Compass. They can edit, insert, delete, or clone existing documents to fix data quality or schema issues in individual documents identified during data exploration. If a batch of documents needs to be updated, the query string generated by Compass can be used in an update command within the mongo shell.
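For example - assuming a hypothetical products collection and a filter built in the Compass query bar - that same filter can drive a bulk update from the mongo shell:

// filter copied from Compass's query bar (collection and fields are illustrative)
var filter = { "category": "books", "price": { "$lt": 10 } };
// apply the batch change in one command
db.products.updateMany(filter, { $set: { "clearance": true } });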
Trying to parse text output can significantly increase the time to resolve query performance issues. Visualization is core to Compass, and has now been extended to generating real-time performance statistics, and presenting indexes and explain plans.
Figure 1: Real-time performance statistics now available from MongoDB Compass
The visualization of the same real-time server statistics generated by the mongotop and mongostat commands directly within the Compass GUI allows DBAs to gain an immediate snapshot of server status and query performance.
If performance issues are identified, DBAs can visualize index coverage, enabling them to determine which specific fields are indexed, their type, size, and how often they are used.
Compass also provides the ability to visualize explain plans, presenting key information on how a query performed – for example the number of documents returned, execution time, index usage, and more. Each stage of the execution pipeline is represented as a node in a tree, making it simple to view explain plans from queries distributed across multiple nodes.
If specific actions, such as adding a new index, need to be taken, DBAs can use MongoDB’s management tools to automate index builds across the cluster.
Figure 2: MongoDB Compass visual query plan for performance optimization across distributed clusters
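For comparison, the shell equivalents of what Compass visualizes here - an explain plan and a supporting index build - might look like this (collection and field names are assumed for illustration):

// inspect how a query executes, including documents examined and index usage
db.orders.find({ "customerId": 12345 }).explain("executionStats")

// if the plan shows a collection scan, add a supporting index
db.orders.createIndex({ "customerId": 1 })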
Document validation allows DBAs to enforce data governance by applying checks on document structure, data types, data ranges, and the presence of mandatory fields. Validation rules can now be managed from the Compass GUI. Rules can be created and modified directly using a simple point and click interface, and any documents violating the rules can be clearly presented. DBAs can then use Compass’s CRUD support to fix data quality issues in individual documents.
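Under the hood these rules are standard document validation expressions; a minimal sketch of the kind of rule Compass manages, using an assumed contacts collection:

// require a string email field and a numeric age field on every new document
db.createCollection("contacts", {
  validator: {
    email: { $exists: true, $type: "string" },
    age: { $type: "number" }
  },
  validationLevel: "strict",
  validationAction: "error"
})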
MongoDB Compass is included with both MongoDB Professional and MongoDB Enterprise Advanced subscriptions used with your self-managed instances, or hosted MongoDB Atlas instances. MongoDB Compass is free to use for evaluation and in development environments. You can get MongoDB Compass from the download center, and read about it in the documentation.
Operational Management for DevOps Teams
Ops Manager is the simplest way to run MongoDB on your own infrastructure, making it easy for operations teams to deploy, monitor, backup, and scale MongoDB. Ops Manager is available as part of MongoDB Enterprise Advanced, and its capabilities are also available in Cloud Manager, a tool hosted by MongoDB in the cloud. Ops Manager and Cloud Manager provide an integrated suite of applications that manage the complete lifecycle of the database:
Automated deployment and management with a single click and zero-downtime upgrades
Proactive monitoring providing visibility into the performance of MongoDB, history, and automated alerting on 100+ system metrics
Disaster recovery with continuous, incremental backup and point-in-time recovery, including the restoration of complete running clusters from your backup files
Ops Manager has been enhanced as part of the MongoDB 3.4 release, now offering:
Finer-grained monitoring telemetry
Configuration of MongoDB zones and LDAP security
Richer private cloud integration with server pools and Cloud Foundry
Encrypted backups
Support for Amazon S3 as a location for backups
Ops Manager Monitoring
Ops Manager now allows telemetry data to be collected every 10 seconds, up from the previous minimum 60 seconds interval. By default, telemetry data at the 10-second interval is available for 24 hours. 60-second telemetry is retained for 7 days, up from the previous 48-hour period. These retention policies are now fully configurable, so administrators can tune the timelines available for trend analysis, capacity planning, and troubleshooting.
Generating telemetry views synthesized from hardware and software statistics helps administrators gain a complete view of each instance to better monitor and maintain database health. Ops Manager has always displayed hardware monitoring telemetry alongside metrics collected from the database, but required a third party agent to collect the raw hardware data. The agent increased the number of system components to manage, and was only available for Linux hosts. The Ops Manager agent has now been extended to collect hardware statistics, such as disk utilization and CPU usage, alongside existing MongoDB telemetry. In addition, platform support has been extended to include Windows and OS X.
Private Cloud Integration
Many organizations are seeking to replicate benefits of the public cloud into their own infrastructure through the build-out of private clouds. A number of organizations are using MongoDB Enterprise Advanced to deliver an on-premise Database-as-a-Service (DBaaS). This allows them to standardize the way in which internal business units and project teams consume MongoDB, improving business agility, corporate governance, cost allocation, and operational efficiency.
Ops Manager now provides the ability to create pre-provisioned server pools. The Ops Manager agent can be installed across a fleet of servers (physical hardware, VMs, AWS instances, etc.) by a configuration management tool such as Chef, Puppet, or Ansible. The server pool can then be exposed to internal teams, ready for provisioning servers into their local groups, either by the programmatic Ops Manager API or the Ops Manager GUI. When users request an instance, Ops Manager will remove the server from the pool, and then provision and configure it into the local group. It can return the server to the pool when it is no longer required, all without sysadmin intervention. Administrators can track when servers are provisioned from the pool, and receive alerts when available server resources are running low. Pre-provisioned server pools allow administrators to create true, on-demand database resources for private cloud environments. You can learn more about provisioning with Ops Manager server pools from the documentation.
Building upon server pools, Ops Manager now offers certified integration with Cloud Foundry. BOSH, the Cloud Foundry configuration management tool, can install the Ops Manager agent onto the server configuration requested by the user, and then use the Ops Manager API to build the desired MongoDB configuration. Once the deployment has reached goal state, Cloud Foundry will notify the user of the URL of their MongoDB deployment. From this point, users can log in to Ops Manager to monitor, back-up, and automate upgrades of their deployment.
MongoDB Ops Manager is available for evaluation from the download center.
Backups to Amazon S3
Ops Manager can now store backups in the Amazon S3 storage service, with support for deduplication, compression, and encryption. The addition of S3 provides administrators with greater choice in selecting the backup storage architecture that best meets specific organizational requirements for data protection:
MongoDB blockstore backups
Filesystem backups (SAN, NAS, & NFS)
Amazon S3 backups
Whichever architecture is chosen, administrators gain all of the benefits of Ops Manager, including point-in-time recovery of replica sets, cluster-wide snapshots of sharded databases, and data encryption.
You can learn more about Ops Manager backups from the documentation.
MongoDB Atlas: VPC Peering
The MongoDB Atlas database service provides the features of MongoDB, without the operational heavy lifting required for any new application. MongoDB Atlas is available on-demand through a pay-as-you-go model and billed on an hourly basis, letting developers focus on apps, rather than ops.
MongoDB Atlas offers the latest 3.4 release (community edition) as an option. In addition, MongoDB Atlas also now offers AWS Virtual Private Cloud (VPC) peering. Each MongoDB Atlas group is provisioned into its own AWS VPC, thus isolating the customer’s data and underlying systems from other MongoDB Atlas users. With the addition of VPC peering, customers can now connect their application servers deployed to another AWS VPC directly to their MongoDB Atlas cluster using private IP addresses. Whitelisting public IP addresses is not required for servers accessing MongoDB Atlas from a peered VPC. Services such as AWS Elastic Beanstalk or AWS Lambda that use non-deterministic IP addresses can also be connected to MongoDB Atlas without having to open up wide public IP ranges that could compromise security. VPC peering allows users to create an extended, private network connecting their application servers and backend databases.
You can learn more about MongoDB Atlas from the documentation.
Next Steps
As we have seen through this blog series, MongoDB 3.4 is a significant evolution of the industry’s fastest growing database:
Native graph processing, faceted navigation, richer real-time analytics, and powerful connectors for BI and Spark integration bring additional multimodel database support right into MongoDB.
Geo-distributed MongoDB zones, elastic clustering, tunable consistency, and enhanced security controls bring state-of-the-art database technology to your most mission-critical applications.
Enhanced DBA and DevOps tooling for schema management, fine-grained monitoring, and cloud-native integration allow engineering teams to ship applications faster, with less overhead and higher quality.
Remember, you can get the detail now on everything packed into the new release by downloading the What’s New in MongoDB 3.4 white paper.
Alternatively, if you’d had enough of reading about it and want to get started now, then:
Download MongoDB 3.4
Alternatively, spin up your own MongoDB 3.4 cluster on the MongoDB Atlas database service
Sign up for our free 3.4 training from the MongoDB University
0 notes