Top 5 High Availability Dedicated Server Solutions
What Is a High Availability Dedicated Server?
A typical dedicated server is a powerful computer connected to a high-speed Internet link and housed in a state-of-the-art remote data center or optimized data facility.
A High Availability dedicated server is an advanced system equipped with redundant power supplies, a fully redundant network, RAID disk arrays, and backups, ensuring maximum uptime and full reliability with no single point of failure.
Configuration For High Availability Dedicated Servers
As the name implies, high availability dedicated solutions are scalable, customized hosting solutions designed to meet the unique needs of any business.
These configurations are carefully designed to provide a fail-proof architecture for the critical applications in your business – those that demand the highest availability.
Possible high availability server configurations might include multiple hosts managed by redundant load balancers and replication hosts, as well as redundant firewalls for added security and reliability.
Why High Availability Server Is Important for Business
Nowadays businesses rely on the Internet. Let’s face it – even the smallest downtime can cause huge losses to a business. And not just financial losses: loss of reputation can be equally devastating.
According to StrategicCompanies, more than half of Fortune 500 companies experience a minimum of 1.6 hours of downtime every week. That amounts to huge losses of time, profit, and consumer confidence. If your customers can’t reach you online, you might as well be on the moon, as far as they are concerned.
Consider: in 2013, a 30-minute outage at Amazon.com reportedly cost the company nearly $2 million. That’s $66,240 per minute. Take a moment to drink that in. Even if you’re not Amazon, any unplanned downtime is harmful to your business.
Your regular hosting provider may provide 99% service availability. That might sound good, in theory. But think about that missing 1%… That’s roughly 87.6 hours (about 3.65 days) of downtime per year! If the downtime hits during peak periods, the loss to your business can be disastrous.
The best way to prevent downtime and eliminate these losses is to opt for high availability hosting solutions.
Built on a complex architecture of hardware and software, all parts of this system work independently of each other. In other words, the failure of any single component won’t bring down the entire system.
It can handle a very large volume of requests or a sudden surge in traffic, and it grows and shrinks with the size and needs of your organization. Your business is flexible – shouldn’t your computer systems be as well? The following are some of the best high availability solutions you can use to host your business applications.
1. Ultra High Performance Dedicated Servers
High performance servers are high-end dedicated solutions with larger computing capacity, especially designed to achieve maximum performance. They are an ideal solution for catering to enterprise workloads.
A typical high performance dedicated server will consist of the following:
Single/Dual latest Intel Xeon E3 or E5 series processors.
64 GB to 256 GB RAM
8 to 24 TB SATA II HDD with RAID 10
Energy efficient and redundant power supply & cooling units
Offsite Backups
Note that the list above is just a sample configuration which can be customized/upgraded as per your unique requirements. If you need more power, we can build a setup with 96 drives, 3 TB RAM, and 40+ physical CPU cores.
Real World Applications (Case Study)
Customer’s Requirement
One of our existing customers was looking for a high-end game server to host Flash games, with encoded PHP and a MySQL server as the backend.
To achieve the highest availability, they demanded 2 load balancers with failover, each fronting 2 web servers and a database server.
Website Statistics
8000-10000 simultaneous players
100% uptime requirement
10 GB+ database size
Solution Proposed by AccuWebHosting
Our capacity planning team designed a fully redundant infrastructure with dual load balancers sitting in front of web and database servers.
This setup consists of 2 VMs with load balancers connected to a group of web servers through a firewall. The database server was built on ultra-fast SSD drives for the fastest disk I/O operations.
For failover, we set up an exact replica of this architecture with real-time mirroring. Should the primary system fail, the secondary setup will seamlessly take over the workload. That’s right. Zero downtime.
Infrastructure Diagram
2. Load Balanced Dedicated Servers
Load Balancing
The process of distributing incoming web traffic across a group of servers efficiently and without intervention is called Load Balancing.
A hardware or software appliance which provides this load balancing functionality is known as a Load Balancer.
Dedicated servers equipped with a hardware or software load balancer are called Load Balanced Dedicated Servers.
How Does Load Balancing Work?
A load balancer sits in front of your servers and routes visitor requests across them. It ensures even distribution, i.e., requests are fulfilled in a way that maximizes the speed and capacity utilization of all servers, so that none of them is over- or under-utilized.
When your customers visit your website, they first connect to the load balancer, and the load balancer routes them to one of the web servers in your infrastructure. If any server goes down, the load balancer instantly redirects its traffic to the remaining online servers.
As web traffic increases, you can quickly and easily add new servers to the existing pool of load-balanced servers. When a new server is added, the load balancer automatically starts sending requests to it. That’s right – no user intervention required.
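To make this request flow concrete, here is a minimal Python sketch (server names and health flags are hypothetical placeholders, not any vendor's API) of a pool that skips servers marked down and accepts new servers at runtime:

```python
import itertools

class SimplePool:
    """Toy round-robin pool that skips servers failing their health check."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = {s: True for s in self.servers}
        self._rotation = itertools.cycle(self.servers)

    def add_server(self, server):
        """Newly added capacity starts receiving requests automatically."""
        self.servers.append(server)
        self.healthy[server] = True
        self._rotation = itertools.cycle(self.servers)

    def mark_down(self, server):
        """Called when a health check fails; traffic is diverted elsewhere."""
        self.healthy[server] = False

    def route(self):
        """Return the next healthy server to handle an incoming request."""
        for _ in range(len(self.servers)):
            candidate = next(self._rotation)
            if self.healthy[candidate]:
                return candidate
        raise RuntimeError("no healthy servers left in the pool")

pool = SimplePool(["web1", "web2"])
pool.mark_down("web1")        # web1 stops responding
print(pool.route())           # the request goes to web2 instead
pool.add_server("web3")       # scale out; web3 joins the rotation
print(pool.route())           # the pool now spreads requests across web2 and web3
```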
Types Of Load Balancing
Load balancing can be performed with one of the following methods.
Load Balancing Through DNS
Load Balancing Through Hardware
Load Balancing Through Software
Load Balancing Through DNS
The DNS service balances web traffic across multiple servers. Note that when you perform load balancing through this method, you cannot choose the load balancing algorithm: it always uses Round Robin to balance the load.
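As a small illustration (the IP addresses below are documentation placeholders, not real servers), DNS round robin simply rotates the order of the A records it hands out, so successive clients land on successive servers:

```python
import itertools

# Hypothetical A records a DNS zone might publish for one hostname;
# a real lookup could be done with socket.gethostbyname_ex("www.example.com").
a_records = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]

# The DNS service rotates these records between responses -- Round Robin is
# the only "algorithm" you get with this method.
rotation = itertools.cycle(a_records)
for client in range(1, 6):
    print(f"client {client} is sent to {next(rotation)}")
```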
Load Balancing Through Hardware
This is the most expensive way of load balancing. It uses a dedicated hardware device that handles traffic load balancing.
Most hardware-based load balancers run an embedded Linux distribution with a load balancing management tool that provides easy access and a configuration overview.
Load Balancing Through Software
Software-based load balancing is one of the most reliable methods for distributing the load across servers. In this method, the software balances the incoming requests through a variety of algorithms.
Load Balancing Algorithms
There are a number of algorithms that can be used to achieve load balance on the inbound requests. The choice of load balancing method depends on the service type, load balancing type, network status and your own business requirements.
Typically, for low-load systems, simple load balancing methods (e.g., Round Robin) will suffice, whereas for high-load systems, more complex methods should be used. Check out this link for more information on some industry-standard load balancing algorithms used by load balancers.
Setup Load Balancing On Linux
HAProxy (High Availability Proxy) is the best available tool for setting up a load balancer on Linux machines (web servers, database servers, etc.).
It is an open-source TCP and HTTP load balancer used by some of the largest websites including Github, StackOverflow, Reddit, Tumblr and Twitter.
It is also used as a fast and lightweight proxy server software with a small memory footprint and low CPU usage.
The following are some excellent tutorials on setting up load balancing for Apache, NGINX, and MySQL servers.
Setup HAProxy as Load Balancer for Nginx on CentOS 7
Setup High-Availability Load Balancer for Apache with HAProxy
Setup MySQL Load Balancing with HAProxy
Setup Load Balancing On Windows
Check out the official Microsoft documentation below to set up load balancing with the IIS web server.
Setup Load Balancing on IIS
3. Scalable Private Cloud
A scalable private cloud is a cloud-based system that gives you self-service, scalability, and elasticity through a proprietary architecture.
Private clouds are highly scalable, which means that whenever you need more resources – be it memory, storage space, CPU, or bandwidth – you can upgrade them.
It gives the best level of security and control, making it an ideal solution for larger businesses. It enables you to customize compute, storage, and networking components to best suit your requirements.
Private Cloud Advantages
Enhanced Security & Privacy
All your data is stored and managed on dedicated servers with dedicated access. If your Cloud is on-site, the servers are monitored by your internal IT team; if it is hosted at a data center, the provider’s technicians monitor it. Thus, physical security is not your concern.
Fully Redundant Platform
A private cloud platform provides a level of redundancy that compensates for multiple failures of hard drives, processing power, and other components. When you have a private cloud, you do not have to purchase additional physical infrastructure to handle fluctuations in traffic.
Efficiency & Control
A private cloud gives you more control over your data and infrastructure. It has dedicated resources, and no one other than the owner has access to the server.
Scalable Resources
Each company has a set of technical and business requirements that usually differs from that of other companies, based on company size, industry, business objectives, and so on.
A private cloud allows you to customize the server resources as per your unique requirements. It also allows you to upgrade the resources of the server when necessary.
Private Cloud Disadvantages
Cost
Compared to the public cloud and a simple dedicated server setup, a private cloud is more expensive. Investments in hardware and resources are also required.
You can also rent a private cloud; however, the costs will likely be the same or even higher, so this might not be an advantage.
Maintenance
Purchasing or renting a private cloud is only one part of the cost. Obviously, for a purchase, you’ll have a large outlay of cash at the outset. If you are renting, you’ll have continuous monthly fees.
But even beyond these costs, you will need to consider maintenance and accessories. Your private cloud will need enough power, cooling facilities, a technician to manage the servers, and so on.
Under-utilization
Even if you are not utilizing the server resources, you still need to pay the full cost of your private cloud. Whether owning or renting, the cost of capacity under-utilization can be daunting, so scale appropriately at the beginning of the process.
Complex Implementation
If you are not tech savvy, you may face difficulties maintaining a private cloud. You will need to hire a cloud expert to manage your infrastructure, yet another cost.
Linux & Windows Private Cloud Providers
Cloud providers give you the option to select your OS of choice: either Windows or any Linux distribution. The following are some of the private cloud solution providers.
AccuWebHosting
Amazon Web Services
Microsoft Azure
Rackspace
Setting Up Your Own Private Cloud
There are many paid and open source tools available to set up your own private cloud.
OpenStack
VMware vSphere
VMmanager
OnApp
OpenNode Cloud Platform
OpenStack is an open source platform that provides IaaS (Infrastructure-as-a-Service) for both public and private clouds.
Click here to see the complete installation guide on how you can deploy your own private cloud infrastructure with OpenStack on a single node in CentOS or RHEL 7.
4. Failover
Failover means instantly switching to a standby server or a network upon the failure of the primary server/network.
When the primary host goes down or needs maintenance, the workload will be automatically switched to a secondary host. This should be seamless, with your users completely unaware that it happened.
Failover eliminates a single point of failure (SPoF), and hence it is the most suitable option for mission-critical applications where the system has to remain online without even one second of downtime.
How Does Failover Work?
Surprisingly, an automated failover system is quite easy to set up. A failover infrastructure consists of 2 identical servers – a primary and a secondary – that serve the same data.
A third server is used for monitoring. It continuously monitors the primary server, and if it detects a problem, it automatically updates the DNS records for your website so that traffic is diverted to the secondary server.
Once the primary server starts functioning again, traffic is routed back to it. Most of the time, your users won’t even notice any downtime or lag in server response.
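As a rough sketch of that monitoring loop (the host names are placeholders and update_dns is a stand-in for a call to your DNS provider's API, not a real library function), the third server could work roughly like this:

```python
import socket
import time

PRIMARY = ("primary.example.com", 80)      # hypothetical hosts
SECONDARY = ("secondary.example.com", 80)

def is_alive(host, port, timeout=3):
    """Basic health check: can we open a TCP connection to the server?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def update_dns(host):
    """Stand-in for updating the website's DNS record via your provider."""
    print(f"pointing the www record at {host}")

def monitor(interval=10):
    active = PRIMARY
    while True:
        if active is PRIMARY and not is_alive(*PRIMARY):
            update_dns(SECONDARY[0])       # fail over to the secondary
            active = SECONDARY
        elif active is SECONDARY and is_alive(*PRIMARY):
            update_dns(PRIMARY[0])         # fail back once the primary recovers
            active = PRIMARY
        time.sleep(interval)
```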
Failover Types
Cold Failover
A Cold Failover is a redundancy method that involves having one system as a backup for another identical primary system. The Cold Failover system is called upon only on failure of the primary system.
So, Cold Failover means that the second server is only started up after the first one has been shut down. Clearly this means you must be able to tolerate a small amount of downtime during the switch-over.
Hot Failover
Hot Failover is a redundancy method in which a secondary system runs simultaneously with an identical primary system.
Upon failure of the primary system, the Hot Failover system immediately takes over, replacing the primary system. Data is mirrored in real time, ensuring both systems hold identical data.
Setup Failover
Check out the tutorials below to set up and deploy a failover cluster.
Setup Failover Cluster on Windows Server 2012
Configure High Availability Cluster On CentOS
The Complete Guide on Setting up Clustering In Linux
Available Solutions
Four major providers of failover clustering are listed below.
Microsoft Failover Cluster
RHEL Failover Cluster
VMWare Failover Cluster
Citrix Failover Cluster
Failover Advantages
Failover server clustering is a completely scalable solution. Resources can be added to or removed from the cluster.
If a dedicated server in the cluster requires maintenance, it can be stopped while the other servers handle its load, which makes maintenance easier.
Failover Disadvantages
Failover server clustering usually requires more servers and hardware to manage and monitor, which increases the size of the infrastructure.
Failover server clustering is not flexible, as not all server types can be clustered.
Many applications are not supported by the clustered design.
It is not a cost-effective solution, as it needs a well-thought-out server design, which can be expensive.
5. High Availability Clusters
A high availability cluster is a group of servers that supports server applications so they can keep running with a minimal amount of downtime when any server node fails or experiences overload.
You may require a high availability cluster for reasons such as load balancing, failover, or backup. The most common types of cluster configuration are active-active and active-passive.
Active-Active High Availability Cluster
It consists of at least two nodes, both actively running the same service. An active-active cluster is most suitable for achieving true load balancing. The workload is distributed across the nodes, and a significant improvement in response time and read/write speed is generally experienced.
Active-Passive High Availability Cluster
Active-passive also consists of at least two nodes. However, not all nodes remain active simultaneously. The secondary node remains in passive or standby mode. Generally, this cluster is more suitable for a failover cluster environment.
Setup A High Availability Cluster
Here are some excellent tutorials to setup a high availability cluster.
Configuring A High Availability Cluster On CentOS
Configure High-Availability Cluster on CentOS 7 / RHEL 7
Available Solutions
There are several well-known vendors who specialize in high availability services. A few of them are listed below.
Dell Windows High Availability solutions
HP High Availability (HA) Solutions for Microsoft and Linux Clusters
VMware HA Cluster
High Availability Cluster Advantages
Protection Against Downtime
With HA solutions, if any server in a cluster goes offline, all of its services are migrated to an active host, protecting your business from extended outages.
Optimum Flexibility
High availability solutions offer greater flexibility if your business demands 24×7 availability and security.
Saves Downtime Cost
The quicker you get your server back online, the quicker you can get back to business. This prevents your business from remaining non-productive.
Easy Customization
With HA solutions, switching over to the failover server and continuing production is a matter of seconds. You can customize your HA cluster as per your requirements: you can set data to be kept up to date within minutes or within seconds. Moreover, the data replication scheme and versions can be specified as per your needs.
High Availability Cluster Disadvantages
Continuous Growth in Infrastructure
It demands many servers and a lot of hardware to deliver failover and load balancing, which increases the size of your infrastructure.
Not All Applications Are Supported
HA clustering offers a lot of flexibility at the hardware level, but not all software applications support a clustered environment.
Expensive
HA clustering is not a cost-effective solution: the more sophistication you need, the more money you need to invest.
6. Complex Configuration Built By AccuWebHosting
Customer’s Requirement
An eCommerce website that can handle a peak load of 1,000 HTTP requests per second, more than 15,000 visitors per day, and 3 times that load in less than 10 seconds. During peak hours and new product launches, the website’s visit count doubles.
Website Statistics
40K products and product related articles
40 GB of static content (images, videos, and website elements)
6 GB of database
Solution We Delivered
We suggested a high availability Cloud infrastructure to handle the load and ensure maximum availability. To distribute the load, we mounted 2 load balancer servers in front of the setup, with a load-balanced IP address on top of them.
We deployed a total of 8 web servers – 3 physical dedicated servers and 5 Cloud instances – to absorb the expected traffic. The setup was configured so that the various components stay synchronized through rsync.
The Cloud instances were used in such a way that they could be added or removed according to peak traffic load, without incurring the costs associated with additional physical servers.
Each Cloud instance contained the entire website (40 GB of static content) to give users a smooth website experience.
The 6 GB database was hosted on a master dedicated server, which was replicated on a secondary slave server to take over when the master server fails. Both of these DB servers have SSD disks for better read/write performance.
A team of 15 developers and content writers updates the content through back-office servers hosted on a dedicated server. Any changes made by the team are propagated by rsync to the production environment and the database.
The entire infrastructure is monitored by Zabbix, which is installed on a high availability Cloud VPS. Zabbix monitors the data provided by the infrastructure servers and generates a series of graphs depicting RAM usage, load average, disk consumption, and network stats. It also sends an alert when any usage metric reaches its threshold or when any service goes down.
What is Cloud Migration? Benefits of Moving to the Cloud
Cloud Computing is currently one of the most widely adopted methods for developing and delivering enterprise applications. It is also the solution of choice for the ever-expanding needs of SMEs and large-scale enterprises alike.
As businesses grow and their process technologies improve, there is a growing trend towards companies migrating to the cloud. This process of moving services and applications to the cloud is the basic definition of Cloud Migration. The enthusiasm that companies have towards cloud migration is evident in the massive amounts of money and resources they dedicate to improving their operations.
In this article, we will introduce Cloud Migration processes and different ways of adapting them to your organizational structure.
What is Cloud Computing?
Before tackling the question of “What is Cloud Migration?” let’s define Cloud Computing.
Cloud Computing is an enhanced IT service model that provides services over the Internet. Scalable, virtual resources like servers, data storage, networking, and software are examples of these services. Cloud Computing can also mean running workloads on servers in a provider’s powerful data centers for a fee.
Cloud Computing is considered one of the cutting edge technologies of the 21st century. Its innovative ability to provide relatively inexpensive and convenient networking and processing resources has fueled wide-ranging adaptation in the computing world.
The Cloud Migration process is an inevitable outcome of Cloud Computing. It has revolutionized the business world by facilitating easy access to data and software through any IoT (internet-connected) device. Moreover, it facilitates parts of the SDLC (Software Development Life Cycle), such as development and testing, without considering physical infrastructure.
What is Cloud Migration?
Cloud Migration is simply the adoption of cloud computing. It is the process of transferring data, application code, and other technology-related business processes from an on-premise or legacy infrastructure to the cloud environment.
Cloud Migration is a phenomenal transformation in the business information system domain as it provides adequate services for the growing needs of businesses. However, moving data to the cloud requires preparation and planning in deciding on an approach.
The other use-case for Cloud Migration is cloud to cloud transfer.
Types of Cloud Migration
The process of Cloud Migration creates a great deal of concern in the business and corporate world, which has to prepare for the many contingencies that come along with it. The type and degree of migration may differ from one organization to another: certain organizations may opt for a complete migration, while others migrate only in part and keep the rest on-premises. Some process-heavy organizations may require more than one cloud service.
In addition to the degree of adoption, other parameters categorize Cloud Migration. These are some of the more commonly seen use-cases.
Lift and Shift
This process involves moving software from on-premise resources to the cloud without any changes to the application or the processes used before. It is the fastest type of cloud migration and involves fewer work disruptions, since it concerns only the infrastructure, information, and security teams. Furthermore, it is more cost-effective compared to the other methods available.
The only downside to this method is that it does not take full advantage of the cloud’s performance and versatility, as it only moves the application to a new location. It is therefore more suitable for companies with regular peak schedules that follow market trends. Consider it a first step in the adoption of the Cloud Migration process.
The Shift to SaaS
This method involves outsourcing one or more preferred applications to a specialized cloud service provider. Through this model, businesses can off-load less business-critical processes and be more focused on their core applications. This setup will lead to them becoming more streamlined and competitive.
While this method provides the ability to personalize your application, it sometimes can cause problems in the support model provided by the SaaS (Software-as-a-service) platform. It’s risky enough that you could lose some competitive edge in your industry. This method is more suitable for non-customer facing applications and routine functionalities such as email and payroll.
Legacy Application Refactoring
Cloud migration processes allow companies to replicate their legacy applications completely into the cloud platform by refactoring them. In this way, you can allow legacy applications to function and concurrently build new applications to replace the old ones on the cloud.
Refactoring lets you prioritize business processes by moving less critical ones to the cloud, first. This method is cost-effective, improves response time, and helps in prioritizing updates for better interactions.
Re-platforming
Re-platforming is a cloud migration process that involves replacing the application code to make it cloud-native. This process is the most resource-intensive type of migration, as it requires a lot of planning.
Completely rewriting business processes can also be quite costly. Nonetheless, this is the migration method that allows for total flexibility and brings you all the benefits of the cloud to its fullest extent.
The Cloud Migration Process
This process is how an organization achieves Cloud Migration. These cloud migration steps depend entirely on the specific resource that the organization is planning to move to the cloud and the type of migration performed.
Here are the five main stages of the process:
Step 1. Create a Cloud Migration Strategy
This step is the most important part of the process. It’s where you create your cloud migration plan and identify the specifics of the migration. You will need to understand the data, technical, financial, resource, and security requirements and decide on the necessary operations for the migration.
Expert consultation is also recommended during this stage to ensure successful planning. Identifying potential risks and failure points is another important part of this stage of the process. Mitigation actions or plans for resolutions will also need to be put in place to ensure business continuity.
Step 2. Selecting a Cloud Deployment Model
While part of this stage is related to the first one, you must choose the best-suited cloud deployment model considering both the organization and the resources at hand. A single or multi-cloud solution will need to be planned based on the types of resources that are required. If you are a small or medium scale organization with minimal resources, the public cloud is the recommended option.
If your organization uses a SaaS application but needs extra layers like security for application data, a Hybrid Cloud architecture will be better suited to your needs. Private Cloud solutions are perfect for sensitive data, and instances when full control over the system is essential.
Step 3. Selecting your Service Model
In this stage, you can decide on the different service models necessary for each of your business operations. The available service models are IaaS (Infrastructure-as-a-service), PaaS (Platform-as-a-service), and SaaS (Software-as-a-service). The difference in choices relates to the type of migration planned, as each one requires a different type of service.
Step 4. Define KPIs
Defining KPIs will ensure that you can monitor the migrated application within the cloud environment. These KPIs may include system performance, user experience, and infrastructure availability.
Step 5. Moving to the Cloud
You can adopt many different methods to move data from your infrastructure to the cloud. This move may be using the public Internet, a private network connection, or offline data transfer. Once all data and processes are moved, and the migration is complete, make sure that all requirements are fulfilled based on the pre-defined KPIs.
What is Cloud Migration Strategy?
A cloud migration strategy is the process of planning and preparation an organization conducts to move its data from on-premises to the cloud.
Cloud migration comes with many different advantages and business solutions, but moving to a cloud platform may be easier said than done. You must take into consideration several factors before initiating the migration process. And you may likely face many challenges both during the strategy development stage and during the actual migration process.
To address these challenges and ensure a smooth, seamless migration process, you should develop a cloud migration strategy that considers all of these factors, including risk evaluation, disaster management, and in-depth business and technology analysis.
Get free advice in our Cloud Migration Checklist for an in-depth understanding of how to get started.
Benefits of Cloud Migration
Businesses tend to spend quite a lot when it comes to software development and deployment. But cloud migration offers a variety of methods to choose from, and they can be used to access SaaS at a much lower cost while safely storing and sharing data.
Cloud migration benefits include:
Cost Saving
Maintaining and managing a physical data center can be costly, but cloud migration curtails operational expenses, since SaaS and PaaS providers take care of the maintenance and upgrades of these data centers for a minimal upfront cost.
In addition to the direct cost savings in comparison to maintaining your own data centers, cloud migration provides indirect benefits for cost savings in the form of not requiring a dedicated technical team. Another benefit is that most licensing requirements are taken care of by the service provider.
Flexibility
Cloud migration facilitates scaling the business up or down based on its needs. Small-scale businesses can easily scale their processes into new territories, and large-scale businesses can expand their services to an international audience through cloud migration.
This flexibility comes both from expanding horizontally through globally distributed data centers and from integrating hybrid cloud computing solutions such as AI, Machine Learning (ML), and image processing.
It also provides the ability for users to access data and services easily from anywhere and on any device. And companies can outsource certain functionalities to service providers so they can focus more on their main processes.
Quality Performance
Cloud migration allows for maintaining better interactions and communications within business communities due to the higher visibility of data. It also facilitates quick decision making since it reduces the time spent on infrastructure. Organizations can extend their ability to integrate different cloud-based solutions with other enterprise systems and solutions. This capability, in turn, ensures the quality and performance of the systems.
Automatic Updates
Updating systems can be a tedious task, especially for large scale companies, as they can require prolonged analysis. With Cloud migration, companies no longer need to worry about this as the infrastructure is off-premises, and cloud service providers are likely to take care of automatic updates. Ready-to-go software updates are part of most cloud computing plans and are available at a fraction of the cost of usual licensing fees.
Enhanced Security
Many studies have proven that data stored in a cloud environment is more secure compared to data in on-premise data centers. Cloud vendors are experts in data security and secure data proactively by updating their mechanisms regularly. Moreover, the cloud offers better control over data accessibility and availability, allowing only authorized users access to data.
Ensuring Business Continuity
Businesses often need to set up additional resources for disaster recovery. Cloud migration provides smart and inexpensive disaster management solutions. It ensures that applications are functional and available even during and after critical incidents, ensuring business continuity.
Cloud service providers take serious care of their data centers and ensure that they are protected both virtually and physically. This security, along with the availability of geographically dispersed locations, makes it convenient to set up robust Disaster Recovery and Business Continuity plans.
Cloud Migration Tools and Services
Many commercially available tools assist in the planning and execution of a cloud migration strategy. Some well-known cloud migration services include Google, Amazon Web Services (AWS), and Microsoft Azure. They provide services for public cloud data transfer, private networks, and offline transactions.
They also come with tools to plan and track the progress of the migration process, which work by collecting the system’s on-premise data, such as system dependencies. Some examples of migration services include Google Cloud Data Transfer Service, AWS Migration Hub, AWS Snowball, and Azure Migrate. Additionally, there are third-party vendors like Racemi, RiverMeadow, and CloudVelox. When choosing a tool, it is better to consider factors like functionality, compatibility, and price.
Migration tools can be categorized into three main types as follows:
Open Source – These are free or low-cost tools that can be easily customized.
Batch Processing Tools – These are the tools used when large amounts of data need processing at regular intervals.
Cloud-based Tools – These are task-specific tools that bind data and cloud with connectors and toolsets.
Is Moving To The Cloud Right For Your Business?
Taking all this into account, you can now finally decide if Cloud Migration is an option for your organization. If so, the next decision will be which migration model to adopt, and how to plan the migration process.
Each migration is unique, so your plan will also need to be tailor-made. Find out and understand the requirements of your organization and applications to create a cloud migration methodology accordingly and move forward with the plan.
Discover the benefits of cloud computing for business with our cloud services.
What Is Load Balancing?
Introduction
Modern websites and applications generate lots of traffic and serve numerous client requests simultaneously. Load balancing helps meet these requests and keeps the website and application response fast and reliable.
In this article, you will learn what load balancing is, how it works, and which different types of load balancing exist.
Load Balancing Definition
Load balancing distributes high network traffic across multiple servers, allowing organizations to scale horizontally to meet high-traffic workloads. Load balancing routes client requests to available servers to spread the workload evenly and improve application responsiveness, thus increasing website availability.
Load balancing applies to layers 4-7 in the seven-layer Open Systems Interconnection (OSI) model. Its capabilities are:
L4. Directing traffic based on network data and transport layer protocols, e.g., IP address and TCP port.
L7. Adds content switching to load balancing, allowing routing decisions depending on characteristics such as HTTP header, uniform resource identifier, SSL session ID, and HTML form data.
GSLB. Global Server Load Balancing expands L4 and L7 capabilities to servers in different sites.
Why Is Load Balancing Important?
Load balancing is essential to maintain the information flow between the server and user devices used to access the website (e.g., computers, tablets, smartphones).
There are several load balancing benefits:
Reliability. A website or app must provide a good UX even when traffic is high. Load balancers handle traffic spikes by moving data efficiently, optimizing application delivery resource usage, and preventing server overloads. That way, the website performance stays high, and users remain satisfied.
Availability. Load balancing is important because it involves periodic health checks between the load balancer and the host machines to ensure they receive requests. If one of the host machines is down, the load balancer redirects the request to other available devices. Load balancers also remove faulty servers from the pool until the issue is resolved. Some load balancers even create new virtualized application servers to meet an increased number of requests.
Security. Load balancing is becoming a requirement in most modern applications, especially with the added security features as cloud computing evolves. The load balancer's off-loading function protects from DDoS attacks by shifting attack traffic to a public cloud provider instead of the corporate server.
Predictive Insight. Load balancing includes analytics that can predict traffic bottlenecks and allow organizations to prevent them. The predictive insights boost automation and help organizations make decisions for the future.
How Does Load Balancing Work?
Load balancers sit between the application servers and the users on the internet. Once the load balancer receives a request, it determines which server in a pool is available and then routes the request to that server.
By routing the requests to available servers or servers with lower workloads, load balancing takes the pressure off stressed servers and ensures high availability and reliability.
Load balancers dynamically add or drop servers in case of high or low demand. That way, they provide flexibility in adjusting to demand.
Load balancing also provides failover in addition to boosting performance. The load balancer redirects the workload from a failed server to a backup one, mitigating the impact on end-users.
Types of Load Balancing
Load balancers vary in storage type, balancer complexity, and functionality. The different types of load balancers are explained below.
Hardware-Based
A hardware-based load balancer is dedicated hardware with proprietary software installed. It can process large amounts of traffic from various application types.
Hardware-based load balancers contain in-built virtualization capabilities that allow multiple virtual load balancer instances on the same device.
Software-Based
A software-based load balancer runs on virtual machines or white box servers, usually incorporated into ADC (application delivery controllers). Virtual load balancing offers superior flexibility compared to the physical one.
Software-based load balancers run on common hypervisors, containers, or as Linux processes with negligible overhead on a bare metal server.
Virtual
A virtual load balancer deploys the proprietary load balancing software from a dedicated device on a virtual machine to combine the two above-mentioned types. However, virtual load balancers cannot overcome the architectural challenges of limited scalability and automation.
Cloud-Based
Cloud-based load balancing utilizes cloud infrastructure. Some examples of cloud-based load balancing are:
Network Load Balancing. Network load balancing relies on layer 4 and takes advantage of network layer information to determine where to send network traffic. Network load balancing is the fastest load balancing solution, but it is limited in how it can balance the distribution of traffic across servers.
HTTP(S) Load Balancing. HTTP(S) load balancing relies on layer 7. It is one of the most flexible load balancing types, allowing administrators to make traffic distribution decisions based on any information that comes with an HTTP address.
Internal Load Balancing. Internal load balancing is almost identical to network load balancing, except it can balance distribution in internal infrastructure.
Load Balancing Algorithms
Different load balancing algorithms offer different benefits and complexity, depending on the use case. The most common load balancing algorithms are:
Round Robin
Distributes requests sequentially to the first available server and moves that server to the end of the queue upon completion. The Round Robin algorithm is used for pools of equal servers, but it doesn't consider the load already present on the server.
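A minimal Python sketch of the rule (the server names are placeholders):

```python
servers = ["web1", "web2", "web3"]   # pool of equal servers
position = 0

def round_robin():
    """Hand requests to servers in order, wrapping back to the start."""
    global position
    server = servers[position]
    position = (position + 1) % len(servers)
    return server

print([round_robin() for _ in range(5)])   # web1, web2, web3, web1, web2
```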
Least Connections
The Least Connections algorithm involves sending a new request to the least busy server. The least connection method is used when there are many unevenly distributed persistent connections in the server pool.
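A comparable sketch, tracking a hypothetical count of active connections per server:

```python
# Hypothetical snapshot of active connections per backend server.
active = {"web1": 12, "web2": 3, "web3": 7}

def least_connections():
    """Pick the server that is currently handling the fewest connections."""
    return min(active, key=active.get)

target = least_connections()
active[target] += 1        # the new request is now counted against it
print(target)              # web2
```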
Least Response Time
Least Response Time load balancing distributes requests to the server with the fewest active connections and with the fastest average response time to a health monitoring request. The response speed indicates how loaded the server is.
Hash
The Hash algorithm determines where to distribute requests based on a designated key, such as the client IP address, port number, or the request URL. The Hash method is used for applications that rely on user-specific stored information, for example, carts on e-commerce websites.
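A small sketch of IP-hash selection; because the hash of a given client IP is stable, that client keeps landing on the same server, which is what session-bound features such as shopping carts rely on:

```python
import hashlib

servers = ["web1", "web2", "web3"]

def pick_server(client_ip):
    """Map the designated key (here, the client IP) onto the server pool."""
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(pick_server("198.51.100.7"))   # same IP always maps to the same server
print(pick_server("198.51.100.7"))
```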
Custom Load
The Custom Load algorithm directs the requests to individual servers via SNMP (Simple Network Management Protocol). The administrator defines the server load for the load balancer to take into account when routing the query (e.g., CPU and memory usage, and response time).
Shared Nothing Architecture Explained
Introduction
Why are companies such as Google and Facebook using the Shared Nothing Architecture, and how does it differ from other models?
Read on to learn what Shared Nothing is, how it compares to other architectures, and its advantages and disadvantages.
What Is Shared Nothing Architecture?
Shared Nothing Architecture (SNA) is a distributed computing architecture that consists of multiple separated nodes that don’t share resources. The nodes are independent and self-sufficient as they have their own disk space and memory.
In such a system, the data set/workload is split into smaller sets (nodes) distributed into different parts of the system. Each node has its own memory, storage, and independent input/output interfaces. It communicates and synchronizes with other nodes through a high-speed interconnect network. Such a connection ensures low latency, high bandwidth, as well as high availability (with a backup interconnect available in case the primary fails).
Since data is horizontally partitioned, the system supports incremental growth. You can add new nodes to scale the distributed system horizontally and increase the transmission capacity.
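To make the partitioning idea concrete, here is a minimal Python sketch (the node names are hypothetical) in which every record key is owned by exactly one node, so serving it never touches another node's disk or memory:

```python
import zlib

nodes = ["node-a", "node-b", "node-c"]   # each with its own CPU, memory, disk

def owning_node(key):
    """Route a record key to the single node that owns its partition."""
    return nodes[zlib.crc32(key.encode()) % len(nodes)]

for user_id in ("user:1001", "user:1002", "user:1003"):
    print(user_id, "->", owning_node(user_id))

# Incremental growth means re-splitting the key space when a node is added;
# real systems typically use consistent hashing so only a fraction of keys move.
```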
Shared Nothing Architecture Diagram
The best way to understand the architecture of the shared-nothing model is to see it side by side with other types of architectures.
Below you see the difference in shared vs. non-shared components in different models - Shared Everything, Shared Storage, and Shared Nothing.
Unlike the others, SNA has no shared resources. The only thing connecting the nodes is the network layer, which manages the system and communication among nodes.
Other Shared Architecture Types Explained
The concept of “shared nothing” was first introduced by Michael Stonebraker in his 1986 research paper, in which he contrasted shared disk and shared memory architecture. While comparing these two options, Stonebraker included the possibility of creating a system in which neither memory nor storage is shared.
When deciding whether SNA is the solution for your use case, it is best to compare it with other cluster types. Alternative options include:
Shared Disk Architecture
Shared Memory Architecture
Shared Everything Architecture
Shared-Disk Architecture
Shared disk is a distributed computing architecture in which all the nodes in the system are linked to the same disk device but have their own private memory. The shared data is accessible from all cluster nodes and usually represents a shared disk (such as a database) or a shared filesystem (like a storage area network or network-attached storage). The shared disk architecture is best for use cases in which data partitioning isn’t an option. Compared to SNA, it is far less scalable.
Shared-Memory Architecture
Shared memory is an architectural model in which nodes within the system use one shared memory resource. This setup offers simplicity and load balancing as it includes point-to-point connections between devices and the main memory. Fast and efficient communication among processors is key to ensure efficient transmission of data and to avoid redundancy. Such communication is carried out through an interconnection network and managed by a single operating system.
Shared-Everything Architecture
On the opposite side of the spectrum, there is the shared everything architecture. This architectural model consists of nodes that share all resources within the system. Each node has access to the same computing resources and shared storage. The main idea behind such a system is maximizing resource utilization. The disadvantage is that shared resources also lead to reduced performance due to contention.
Advantages and Disadvantages of Shared Nothing Architecture
When compared to the different shared architectures mentioned above, it is clear that Shared Nothing Architecture comes with many benefits. Take a look at some of the advantages, as well as the disadvantages, of such a model.
Advantages
There are many advantages of SNA, the main ones being scalability, fault tolerance, and less downtime.
Easier to Scale
There is no limit when it comes to scaling in the shared-nothing model. Unlimited scalability is one of the best features of this type of architecture. Since nodes are independent and don’t share resources, scaling up your application won’t disrupt the entire system or lead to resource contention.
Eliminates Single Points of Failure
If one of the nodes in the application fails, it doesn’t affect the functionality of others as each node is self-reliant. Although node failure can impact performance, it doesn’t disrupt the overall behavior of the app as a whole.
Simplifies Upgrades and Prevents Downtime
There is no need to shut down the system while working on or upgrading individual nodes. Thanks to redundancy, upgrading one node at a time doesn’t impact the effectiveness of others. What’s more, having redundant copies of data on different nodes prevents unexpected downtime caused by disk failure or data loss.
Disadvantages
Once you considered the benefits of SNA, take a look at a couple of disadvantages that can help you decide whether it is the best option for you.
Cost
A node consists of its individual processor, memory, and disk. Having dedicated resources essentially means higher costs when it comes to setting up the system. Additionally, transmitting data that requires software interaction is more expensive compared to architectures with shared disk space and/or memory.
Decreased Performance
Scaling up your system can eventually affect the overall performance if the cross-communication layer isn’t set up correctly.
Conclusion
After reading this article, you should have a better understanding of Shared Nothing Architecture and how it works. Take into account all the advantages and disadvantages of the model before deciding on the architecture for your application.
What is High Availability Architecture?
High Availability Definition
A highly available architecture involves multiple components working together to ensure uninterrupted service during a specific period. This also includes the response time to users’ requests. Namely, available systems have to be not only online, but also responsive.
Implementing a cloud computing architecture that enables this is key to ensuring the continuous operation of critical applications and services. They stay online and responsive even when various component failures occur or when a system is under high stress.
Highly available systems include the capability to recover from unexpected events in the shortest time possible. By moving the processes to backup components, these systems minimize downtime or eliminate it. This usually requires constant maintenance, monitoring, and initial in-depth tests to confirm that there are no weak points.
High availability environments include complex server clusters with system software for continuous monitoring of the system’s performance. The top priority is to avoid unplanned equipment downtime. If a piece of hardware fails, it must not cause a complete halt of service during the production time.
Staying operational without interruptions is especially crucial for large organizations. In such settings, a few lost minutes can lead to a loss of reputation, customers, and thousands of dollars. Highly available computer systems tolerate glitches as long as the level of usability does not impact business operations.
A highly available infrastructure has the following traits:
Hardware redundancy
Software and application redundancy
Data redundancy
The single points of failure eliminated
How To Calculate High Availability Uptime Percentage?
Availability is measured by how much time a specific system stays fully operational during a particular period, usually a year.
It is expressed as a percentage. Note that uptime does not necessarily have to mean the same as availability. A system may be up and running, but not available to the users. The reasons for this may be network or load balancing issues.
Uptime is usually expressed using a grading of up to five 9s of availability.
If you decide to go for a hosted solution, this will be defined in the Service Level Agreement (SLA). A grade of “one nine” means that the guaranteed availability is 90%. Today, most organizations and businesses require at least “three nines,” i.e., 99.9% availability.
Businesses have different availability needs. Those that need to remain operational around the clock throughout the year will aim for “five nines,” 99.999% of uptime. It may seem like 0.1% does not make that much of a difference. However, when you convert this to hours and minutes, the numbers are significant.
Refer to the table of nines to see the maximum downtime per year every grade involves:
Availability – Maximum downtime per year:
90% (“one nine”) – 36.53 days
99% (“two nines”) – 3.65 days
99.9% (“three nines”) – 8.77 hours
99.99% (“four nines”) – 52.60 minutes
99.999% (“five nines”) – 5.26 minutes
As the table shows, the difference between 99% and 99.9% is substantial.
Note that the difference is measured in days per year, not hours or minutes. The higher you go on the availability scale, the higher the cost of the service.
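A small sketch of the arithmetic behind the table above, converting an availability grade into its yearly downtime budget:

```python
HOURS_PER_YEAR = 24 * 365.25   # roughly 8,766 hours

def max_downtime_hours(availability_percent):
    """Yearly downtime allowed by a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_percent / 100)

for grade in (90.0, 99.0, 99.9, 99.99, 99.999):
    hours = max_downtime_hours(grade)
    print(f"{grade:>7}% -> {hours:8.2f} hours/year ({hours * 60:7.1f} minutes)")
# 99.0%   ->    87.66 hours/year
# 99.999% ->     0.09 hours/year (about 5.3 minutes)
```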
How to calculate downtime? It is essential to measure downtime for every component that may affect the proper functioning of a part of the system, or the entire system. Scheduled system maintenance must be a part of the availability measurements. Such planned downtimes also cause a halt to your business, so you should pay attention to that as well when setting up your IT environment.
As you can tell, a 100% availability level does not appear in the table.
Simply put, no system is entirely failsafe. Additionally, the switch to backup components takes some amount of time, be it milliseconds, minutes, or hours.
How to Achieve High Availability
Businesses looking to implement high availability solutions need to understand multiple components and requirements necessary for a system to qualify as highly available. To ensure business continuity and operability, critical applications and services need to be running around the clock. Best practices for achieving high availability involve certain conditions that need to be met. Here are 4 Steps to Achieving 99.999% Reliability and Uptime.
1. Eliminate Single Points of Failure: High Availability vs. Redundancy
The critical element of high availability systems is eliminating single points of failure by achieving redundancy at all levels. Whether there is a natural disaster, a hardware failure, or a power failure, IT infrastructures must have backup components ready to replace the failed system.
There are different levels of component redundancy. The most common of them are:
The N+1 model includes the amount of the equipment (referred to as ‘N’) needed to keep the system up. It is operational with one independent backup component for each of the components in case a failure occurs. An example would be using an additional power supply for an application server, but this can be any other IT component. This model is usually active/passive. Backup components are on standby, waiting to take over when a failure happens. N+1 redundancy can also be active/active. In that case, backup components are working even when primary components function correctly. Note that the N+1 model is not an entirely redundant system.
The N+2 model is similar to N+1. The difference is that the system would be able to withstand the failure of two same components. This should be enough to keep most organizations up and running in the high nines.
The 2N model contains double the amount of every individual component necessary to run the system. The advantage of this model is that you do not have to take into consideration whether there was a failure of a single component or the whole system. You can move the operations entirely to the backup components.
The 2N+1 model provides the same level of availability and redundancy as 2N with the addition of another component for improved protection.
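As a quick worked example of what these models mean in raw component counts (assuming, purely for illustration, a system that needs N = 4 power supplies to run):

```python
N = 4   # components needed just to keep the system running

models = {
    "N+1":  N + 1,       # one independent spare
    "N+2":  N + 2,       # tolerates two simultaneous failures
    "2N":   2 * N,       # a full duplicate of every component
    "2N+1": 2 * N + 1,   # full duplicate plus one extra spare
}

for name, total in models.items():
    print(f"{name:>5}: provision {total} components ({total - N} redundant)")
```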
The ultimate redundancy is achieved through geographic redundancy.
That is the only mechanism that protects against natural disasters and other events that cause a complete outage. In this case, servers are distributed over multiple locations in different areas.
The sites should be placed in separate cities, countries, or even continents. That way, they are entirely independent. If a catastrophic failure happens in one location, another would be able to pick up and keep the business running.
This type of redundancy tends to be extremely costly. The wisest decision is to go for a hosted solution from one of the providers with data centers located around the globe.
Next to power outages, network failures represent one of the most common causes of business downtime.
For that reason, the network must be designed in such a way that it stays up 24/7/365. To achieve 100% network service uptime, there have to be alternate network paths. Each of them should have redundant enterprise-grade switches and routers.
2. Data Backup and recovery
Data safety is one of the biggest concerns for every business. A high availability system must have sound data protection and disaster recovery plans.
An absolute must is to have proper backups. Another critical capability is being able to recover quickly in case of data loss, corruption, or complete storage failure. If your business requires low RTOs and RPOs and you cannot afford to lose data, the best option to consider is using data replication. There are many backup plans to choose from, depending on your business size, requirements, and budget.
Data backup and replication go hand in hand with IT high availability. Both should be carefully planned. Creating full backups on a redundant infrastructure is vital for ensuring data resilience and must not be overlooked.
3. Automatic failover with Failure Detection
In a highly available, redundant IT infrastructure, the system needs to instantly redirect requests to a backup system in case of a failure. This is called failover. Early failure detection is essential for improving failover times and ensuring maximum system availability.
One of the software solutions we recommend for high availability is Carbonite Availability. It is suitable for any infrastructure, whether it is virtual or physical.
For fast and flexible cloud-based infrastructure failover and failback, you can turn to Cloud Replication for Veeam. The failover process applies to either a whole system or any of its parts that may fail. Whenever a component fails or a web server stops responding, failover must be seamless and occur in real-time.
The process looks like this:
There is Machine 1 with its clone Machine 2, usually referred to as Hot Spare.
Machine 2 continually monitors the status of Machine 1 for any issues.
Machine 1 encounters an issue. It fails or shuts down due to any number of reasons.
Machine 2 automatically comes online. Every request is now routed to Machine 2 instead of Machine 1. This happens without any impact to end users. They are not even aware there are any issues with Machine 1.
When the issue with the failed component is fixed, Machine 1 and Machine 2 resume their initial roles.
The duration of the failover process depends on how complicated the system is. In many cases, it will take a couple of minutes. However, it can also take several hours.
Planning for high availability must be based on all these considerations to deliver the best results. Each system component needs to be in line with the ultimate goal of achieving 99.999 percent availability and improving failover times.
4. Load Balancing
A load balancer can be a hardware device or a software solution. Its purpose is to distribute applications or network traffic across multiple servers and components. The goal is to improve overall operational performance and reliability.
It optimizes the use of computing and network resources by efficiently managing loads and continuously monitoring the health of the backend servers.
How does a load balancer decide which server to select?
Many different methods can be used to distribute load across a server pool. Choosing the right one for your workloads depends on multiple factors, including the type of application being served, the status of the network, and the status of the backend servers, as well as the current volume of incoming requests.
Some of the most common load balancing algorithms are listed below (a short Python sketch of all three follows the list):
Round Robin. With Round Robin, the load balancer directs requests to the first server in line. It then moves down the list to the last server and starts again from the beginning. This method is easy to implement and widely used. However, it does not take into account that servers may have different hardware configurations, so some of them can become overloaded faster.
Least Connection. In this case, the load balancer selects the server with the least number of active connections. When a request comes in, the load balancer does not assign a connection to the next server on the list, as is the case with Round Robin. Instead, it looks for the server with the fewest current connections. The least connection method is especially useful for avoiding web server overload in cases where sessions last a long time.
Source IP hash. This algorithm determines which server to select according to the source IP address of the request. The load balancer creates a unique hash key using the source and destination IP addresses. Such a key enables it to consistently direct a given user's requests to the same server.
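As an illustration only, the following Python sketch shows a simplified version of each selection method. The backend addresses are made-up examples, and the hash variant here uses only the source address for brevity (the description above also includes the destination address); real load balancers additionally track server health, weights, and connection state.

```python
import hashlib
from itertools import cycle

servers = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # example backend pool

# Round Robin: walk the list and start again from the beginning.
round_robin = cycle(servers)

def pick_round_robin():
    return next(round_robin)

# Least Connection: pick the server with the fewest active connections.
active_connections = {s: 0 for s in servers}

def pick_least_connection():
    return min(active_connections, key=active_connections.get)

# Source IP hash: hash the client address so the same client
# always lands on the same server.
def pick_source_ip_hash(client_ip):
    digest = hashlib.sha256(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

if __name__ == "__main__":
    print(pick_round_robin())                   # first server in the list
    print(pick_least_connection())              # server with fewest connections
    print(pick_source_ip_hash("203.0.113.7"))   # always the same backend
```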
Load balancers indeed play a prominent role in achieving a highly available infrastructure. However, merely having a load balancer does not mean that you have a high system availability.
If a configuration with a load balancer only routes the traffic to decrease the load on a single machine, that does not make a system highly available.
By implementing redundancy for the load balancer itself, you can eliminate it as a single point of failure.
In Closing: Implement High Availability Architecture
No matter what size and type of business you run, any kind of service downtime can be costly without a cloud disaster recovery solution.
Even worse, it can bring permanent damage to your reputation. By applying a series of best practices listed above, you can reduce the risk of losing your data. You also minimize the possibilities of having production environment issues.
Your chances of being offline are higher without a high availability system.
From that perspective, the cost of downtime dramatically surpasses the cost of a well-designed IT infrastructure. In recent years, hosted and cloud computing solutions have become more popular than in-house solutions. The main reason is that they reduce IT costs and add more flexibility.
No matter which solution you go for, the benefits of a high availability system are numerous:
You save money and time as there is no need to rebuild lost data due to storage or other system failures. In some cases, it is impossible to recover your data after an outage. That can have a disastrous impact on your business.
Less downtime means less impact on users and clients. If your availability is measured in five nines, that means almost no service disruption (see the quick downtime calculation after this list). This leads to better employee productivity and helps guarantee customer satisfaction.
The performance of your applications and services will be improved.
You will avoid fines and penalties if you do not meet the contract SLAs due to a server issue.
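To put "five nines" into perspective, here is a small Python sketch that converts an availability percentage into the maximum downtime per year. The figures follow directly from the arithmetic and are not tied to any particular provider or SLA.

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def max_downtime_minutes(availability_percent):
    """Maximum minutes of downtime per year for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for nines in (99.0, 99.9, 99.99, 99.999):
    print(f"{nines}% -> {max_downtime_minutes(nines):.1f} minutes/year")

# 99.0%   -> ~5256 minutes (~3.65 days)
# 99.9%   -> ~526 minutes  (~8.8 hours)
# 99.99%  -> ~52.6 minutes
# 99.999% -> ~5.3 minutes
```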
vCloud Availability for Cloud-to-Cloud DR 1.5 Reference Architecture
Overview
The vCloud Availability Cloud-to-Cloud DR solution provides replication and failover capabilities for vCloud Director workloads at both VM and vApp level.
VMware vCloud Availability for Cloud-to-Cloud DR Reference Architecture (PDF format here)
This blog presents the reference architecture of vCloud Availability for Cloud-to-Cloud Disaster Recovery 1.5. VMware vCloud Availability for Cloud-to-Cloud DR 1.5 allows tenant and service provider users to protect vApps between different virtual data centers within a vCloud Director environment and across different vCloud Director-based clouds.
The architecture diagram illustrates the solution components needed between a cloud provider's two data centers, each backed by a different vCloud Director cloud management platform. It also shows the network flow directions and port numbers required for communication among the components of the vCloud Availability for Cloud-to-Cloud DR solution. The architecture supports symmetrical replication operations between cloud environments.
The service operates through a VMware Cloud Provider Program, and each installation provides recovery for multiple cloud environments. The vCloud Availability for Cloud-to-Cloud DR solution provides:
Self-service protection and failover workflows per virtual machine (VM).
A single installation package as a Photon-based virtual appliance.
The capability of each deployment to serve as both the source and the recovery vCloud Director instance (site). There are no dedicated source and destination sites.
Symmetrical replication flow that can be started from either the source or the recovery vCloud Director site.
Replication and recovery of vApps and VMs between vCloud Director sites.
Migration of vApps and VMs between Virtual Data Centers that belong to a single vCloud Director Organization, using a single-site vCloud Availability for Cloud-to-Cloud DR installation.
Secure tunneling through a TCP proxy.
Integration with existing vSphere environments.
Multi-tenant support.
Built-in encryption, or encryption and compression, of replication traffic.
Support for multiple vCenter Server and ESXi versions.
Architecture Explained
When you implement this solution from the OVA file in your production environment, make sure you do not choose the "Combined" configuration type. Instead, choose the "Manager node with vCloud Director Support" configuration (icon #6 in the RA). The configuration description reads "The H4 Management Node. Deploy one of these if you need to configure replications to/from vCD". H4 represents the vCloud Availability Replicator or Manager (C4 is for the vCloud Availability vApp Replication Service or Manager). By selecting this configuration type, the OVA installs three vCAV components together in a single appliance:
1. vCloud Availability Cloud-to-Cloud DR Portal (icon #5 in the RA)
2. vCloud Availability vApp Replication Manager (icon #4 in the RA)
3. vCloud Availability Replication Manager (icon #3 in the RA)
The above three components are located in the white rectangle (icon #6) in the reference architecture diagram. All communication between these three components happens internally and never routes outside the appliance; for example, the vCloud Availability vApp Replication Manager uses REST API calls to the vCloud Availability Replication Manager in order to perform the required replication tasks.
vCloud Director
With vCloud Director, a cloud provider can build secure, multi-tenant private clouds by pooling infrastructure resources into virtual data centers and exposing them to users through web-based portals and programmatic interfaces as fully automated, catalog-based services.
vCloud Availability Replicator Appliance
For production deployments, you deploy and configure one or more dedicated vCloud Availability Replicator appliances. The Replicator exposes the low-level HBR primitives as REST APIs.
vCloud Availability Replication Manager
A management service operating at the vCenter Server level. It understands the vCenter Server-level concepts for starting the replication workflow for virtual machines. It must have TCP access to the Lookup Service and all vCloud Availability Replicator appliances in both the local and remote sites.
vCloud Availability vApp Replication Manager
Provides the main interface for Cloud-to-Cloud replication operations. It understands the vCloud Director-level concepts and works with vApps and virtual machines using vCD API calls.
vCloud Availability C2C DR Portal
Provides tenants and service providers with a graphical user interface to facilitate the management of the vCloud Availability for Cloud-to-Cloud DR solution. It also provides overall system and workload information.
Manager node with vCloud Director Support
A single appliance that contains the following services:
vCloud Availability Cloud-to-Cloud DR Portal
vCloud Availability vApp Replication Manager
vCloud Availability Replication Manager
vCenter Server with Platform Services Controller
The PSC provides common infrastructure services to the vSphere environment. Services include licensing, certificate management, and authentication with VMware vCenter Single Sign-On.
vCloud Availability Tunnel Appliance
This solution requires that each component on the local site has bidirectional TCP connectivity to each component on the remote site. If bidirectional connections between sites are a problem, you configure Cloud-to-Cloud Tunneling, in which case you must provide connectivity between the vCloud Availability Tunnel appliances on each site. Tunneling simplifies the provider's networking setup by channeling all incoming and outgoing traffic for a site through a single point.
Network Address Translation
You must set an IP address and port in the local site that is reachable from remote sites and forward it to the private address of the vCloud Availability Tunnel appliance, port 8048, for example by using destination network address translation (DNAT).
Coexistence
Based on the product release notes, vCloud Availability for Cloud-to-Cloud DR 1.5 and vCloud Availability for vCloud Director 2.X can be installed and operate together in the same vCloud Director environment. You can protect virtual machines using either vCloud Availability for Cloud-to-Cloud DR 1.5 or vCloud Availability for vCloud Director 2.X.
vCloud Availability for Cloud-to-Cloud DR 1.5 and vCloud Director Extender 1.1.X can be installed and can operate together in the same vCloud Director environment. You can migrate virtual machines either by using vCloud Availability for Cloud-to-Cloud DR 1.5 or vCloud Director Extender 1.1.X.
Interoperability
vSphere Hypervisor (ESXi) –  5.5 and above
vCenter Server – 6.0, 6.5 and 6.7
vCloud Director for Service Providers – 8.20, 9.0, 9.1 and 9.5
* Please visit the VMware Product Interoperability Matrices website to check the latest supported product versions.
Notes
There's a comprehensive vCloud Availability Cloud-to-Cloud DR Design and Deploy Guide available here, published by my colleague Avnish Tripathi, where you can find detailed design guidelines for this solution.
VMware official vCloud Availability for Cloud-to-Cloud DR Documentation is here.
Deep Dive Architecture Comparison of DaaS & VDI, Part 2
In part 1 of this blog series, I discussed the Horizon 7 architecture and a typical single-tenant deployment using Pods and Blocks. In this post I will discuss the Horizon DaaS platform architecture and how this offers massive scale for multiple tenants in a service provider environment.
Horizon DaaS Architecture
The fundamental difference with the Horizon DaaS platform is multi-tenancy architecture. There are no Connection or Security servers, but there are some commonalities. I mentioned Access Point previously, this was originally developed for Horizon Air, and is now a key component for both Horizon 7 and DaaS for remote access.
[Image: Horizon DaaS platform architecture diagram]
If you take a look at the diagram above, you'll see these key differences. Let's start with the management appliances. There are five virtual appliances (OVA) used for Horizon DaaS: Service Provider, Tenant, Desktop Manager, Resource Manager, and Access Point. When these appliances are deployed, they are always provisioned as an HA pair (master/slave), except for Access Point, which is active/active across multiple appliances. A load balancer is only required for multiple Access Point appliances; the remaining virtual appliances use a virtual IP in the master/slave configuration. There is only a single OVA (template), and upon initial installation the bootstrap process uses this template as a base for each of the virtual appliance types.
I’ve already introduced Access Point with the Horizon 7 architecture previously, but it’s worth mentioning that this is a recent addition. Previously with the original Desktone product and subsequent versions of Horizon DaaS platform, remote access was provided using dtRAM (Desktone Remote Access Manager). The dtRAM is also a virtual appliance (based on FreeBSD and pfSense) and still available, but I would now recommend using Access Point for the latest features.
Service Provider
The service provider has two different types of virtual appliance (HA pair); the Service Provider and Resource Manager.
The Service Provider appliance provides the Service Center portal where the provider can perform a number of activities, including Desktop Model management, Tenant management, monitoring, and discovery of infrastructure resources. This appliance also contains a Resource Manager service which targets and deploys other management virtual appliances. For example, when a Tenant Appliance pair is created, its name, networks, IP address, and so on are stored in the FDB (Fabric Database). The Service Provider appliance then instructs the resource manager service to clone a tenant appliance.
Resource Manager
The Resource Manager virtual appliance communicates with the infrastructure (vCenter) to carry out desktop provisioning, and provides management of all desktops for tenants. Unlike Horizon 7, which can provision View Composer linked clones, Instant Clones, or full clones, Horizon DaaS currently supports only full clones. Resources are assigned to tenants so they can consume compute, storage, and network for virtual desktops.
It’s important to note that Resource Manager appliances are tied to the service provider, and not the tenant.
Tenant
The tenant also has two different types of virtual appliance (HA pair); Tenant and Desktop Manager virtual appliance.
The Tenant appliance provides a web-based UI (Enterprise Center) for both the tenant end-user and IT administrator. End-users can manage their own virtual desktops, and the administrator functions allow for creation and management of the tenant desktops.
Other tenant operations provided by Enterprise Center include:
Domain registration
Gold pattern conversion
Desktop pool creation
AD user and group assignment to virtual desktops
The Tenant virtual appliance also contains a Desktop Manager component which brokers connections to tenant virtual desktops. Each Desktop Manager supports up to 5,000  virtual desktops. If more are required then a HA-pair of Desktop Manager virtual appliances can be deployed.
Desktop Manager
The Desktop Manager virtual appliance is the same as the Tenant appliance, but does not include the brokering or Enterprise Center portal. You can deploy Desktop Manager appliances to scale beyond the 5,000 virtual desktop limit.
Resources are assigned to the Desktop Manager for consumption by the tenant. In some cases you may have a vSphere cluster dedicated for 3D workloads with vDGA pass-through. These 3D specific resources would be dedicated to a Desktop Manager virtual appliance pair.
Each virtual desktop is installed with the DaaS Agent, which sends heartbeats to the Desktop Manager in order to keep track of its state.
Networking
As shown in the above diagram, there are three networks associated with Horizon DaaS; Backbone Link Local network, Service Provider network, and tenant networks.
The Backbone Link Local network is a private network that is dedicated for all virtual appliances. Although the Tenant virtual appliances are connected to this network, there is no access from the tenant network.
The Service Provider management network provides access for service provider administration of the Service Provider appliances, and vSphere infrastructure.
The Tenant network (per tenant) is dedicated for virtual desktops. This also has IP connectivity to the tenants supporting infrastructure such as Active Directory, DNS, NTP, and file servers.
Horizon DaaS Terminology
[Table: Horizon DaaS terminology]
Conclusion
VMware Horizon® is a family of desktop and application virtualization solutions that has matured significantly over the past few years. vCloud Air Network service providers can provide customers with either a managed Horizon 7 platform, or Desktop as a Service with Horizon DaaS.
Both Horizon 7 and Horizon DaaS offer virtual desktops and applications, and used in combination with App Volumes, applications can be delivered in near real-time to end-users.
Access Point provides remote access to both Horizon 7 and Horizon DaaS, which brings many advantages to the service provider. With its active/active scalable deployment and hardened Linux platform, service providers and customers can benefit from multiple authentication and access methods from any device and any location.
For both Horizon solutions, RDSH continues to be an attractive choice for delivering desktop or application sessions. These can either be presented to the user with the Horizon Client, or with integration with Workspace ONE and Identity Manager.
Finally, the vCloud Air Network is a global ecosystem of service providers that are uniquely positioned to supply modern enterprises with the ideal VMware-based solutions they need to grow their businesses. Built on the foundation of existing VMware technology, vCloud Air Network Service Providers deliver a seamless entry into the cloud. You can learn more about the vCloud Air Network, or search for a vCAN service provider here: http://vcloudproviders.vmware.com
Deep Dive Architecture Comparison of DaaS & VDI, Part 1
In this two part blog series, I introduce the architecture behind Horizon DaaS and the recently announced Horizon 7. From a service provider point of view, the Horizon® family of products offers massive scale from both single-tenant deployments and multi-tenanted service offerings.
Many of you are very familiar with the term Virtual Desktop Infrastructure (VDI), but I don’t think the term does any justice to the evolution of the virtual desktop. VDI can have very different meanings depending on who you are talking to. Back in 2007 when VMware acquired Propero, which soon became VDM (then View and Horizon), VDI was very much about brokering virtual machines running a desktop OS to end-users using a remote display protocol. Almost a decade later, VMware Horizon is vastly different and it has matured into an enterprise desktop and application delivery platform for any device. Really… Horizon 7 is the ultimate supercar of VDI compared to what it was a decade ago.
I've read articles that compare VDI to DaaS, but they all seem to skip this evolution of VDI and compare it to the traditional desktop broker of the past. DaaS, on the other hand, provides the platform of choice for service providers offering Desktops as a Service. The DaaS platform (formerly Desktone) was acquired by VMware in October 2013. In fact, I remember the day of the announcement because I was working on a large VMware Horizon deployment for a service provider at the time.
For this blog post I’d like to start our comparisons on the fundamental architecture of the Horizon DaaS platform to Horizon 7 which was announced in February 2016. This article is aimed at consultants and architects wishing to learn more about the DaaS platform.
Quick Comparison
[Table: Horizon DaaS vs. Horizon 7 quick comparison]
As you can see in the table above, they look very similar. Thanks Ray, that helps a bunch! – Hey, no problem!
Horizon DaaS has been built from the ground up to be consumed by multiple tenants. This makes it attractive to service providers wanting to offer a consumption based desktop model for their customers (OPEX).
Horizon 7 on the other hand, which is also designed for massive scale (up to 50,000 sessions in a cloud pod), provides a single tenant architecture for multiple data centers. This suits organizations of any size hosting their own infrastructure.
It's all well and good that we can host tens of thousands of desktops, but you are probably thinking "What about me? I only want to start with thirty desktops, maybe one hundred, but not thousands!". We hear you loud and clear. Horizon DaaS scales for both the service provider infrastructure and tenants joining the service. Once the DaaS platform is deployed, you can start with just a handful of desktops. Horizon 7, while a single-tenant solution, uses a building-block approach so you can scale from just a few desktops to thousands. More on that later.
For customers that want to host infrastructure in their own data centers, but take advantage of the cloud then we have Horizon Air Hybrid-Mode. You may remember the announcement at VMworld 2015 with Project Enzo. You can also read more about Horizon Air Hybrid-Mode with this blog from Shikha Mittal, Snr. Product Line Manager.
Microsoft Licensing
I really don’t want to get into licensing, but I feel I need to dispel some myths that surround DaaS and VDI. Regardless of DaaS or VDI, if you are hosting a Windows desktop virtual machine (e.g. Windows 10) and want to provide remote access, then the Windows VM must be licensed. For that you have two options; Microsoft VDA (Virtual Desktop Access) or Software Assurance with Microsoft Volume licensing.
The VDA license is aimed at users with thin-clients that don’t have an existing Windows desktop PC, but want to remotely connect to a virtual desktop. The VDA license is tied to the client device.
In my opinion, the better option is Software Assurance (SA) which is part of a Microsoft Volume license agreement and is licensed per-user or per-device. Software Assurance includes virtual desktop access rights, so a VDA license is not required.
Now back in the day with ye old VDI, we only had support for virtual desktops, but for some time now Horizon supports both desktop virtual machines and Remote Desktop Session Host (RDSH) sessions. So regardless of DaaS or VDI, if you are a service provider offering session based desktops or applications then you can use the Microsoft SPLA (Service Provider Licensing Agreement) which is a monthly cost.
Horizon Architecture
At the core of any VDI solution are the desktop brokers, and for Horizon 7 we call these Connection servers. A single Connection server can support up to 2,000 desktop or application sessions. Notice I said ‘up to‘, so you could run just a handful of desktops, being brokered by a single Connection server in the knowledge that you can scale this to 2,000 desktop or application sessions without adding more servers. That said, I really wouldn’t recommend deploying just a single Connection server outside of a demo, lab or PoC environment. If that server were to fail then your entire Horizon solution (we call that a Pod) can’t broker any more connections. I mentioned Pod, let’s take a look at some of the terminology used with Horizon 7.
A single pod supports up to 7 Connection servers, and we support up to 10,000 sessions per Pod. An entire Cloud Pod can handle up to 50,000 sessions. Looking back at a smaller deployment, adding two or more Connection servers provides resilience should a Connection server fail, and most smaller Horizon deployments typically start with two Connection servers for availability.
The diagram below represents what a single Pod may look like. Management components such as vCenter, Horizon servers and virtual appliances are hosted in a dedicated management cluster. Each desktop Block is delineated by a dedicated vCenter server, each hosting one or more desktop resource clusters.
[Image: Horizon 7 Pod and Block architecture diagram]
I mentioned that a ‘Cloud Pod’ supports up to 50,000 sessions. Put simply, VMware recommend that each desktop Block (vCenter) hosts up to 2,000 desktop or application sessions. A single ‘Pod’ as shown in the diagram above can contain multiple desktop ‘Blocks’ in addition to the management Block, up to a maximum of 10,000 sessions.
Cloud Pod Architecture is a technology introduced in Horizon 6 that allows for multiple ‘Pods’ to be linked using VIPA (View Inter-Pod API). Users (or groups) can be assigned to virtual desktop or application pools using Global Entitlements. Therefore a ‘Cloud Pod’ can host up to 50,000 desktop or application sessions.
Remote Access
Remote access to the virtual desktop or application is catered for in two ways with Horizon 7. The first option is to use Security servers as shown in the diagram below. Think of these as a DMZ gateway service that facilitates external connections. Like the Connection server, a Security server is installed on a Windows Server OS. Each Security server you deploy must be paired with a Connection server. It is not recommended to use Connection servers that are paired with Security servers for external access to also handle internal access. Always dedicate Connection servers to your internal connections, and use Security/Connection server pairs for external connections.
[Image: Horizon 7 remote access architecture with load balancer, Security servers, and Connection servers]
In the diagram above, internal connections either via VPN or within the company network will connect via a load-balancer. I’ve used an NSX load-balancer in this example, which sits in front of a pair of Connection servers. Once the user has authenticated via one of the Connection servers, the actual connection is direct from the Horizon Client to the virtual desktop or application. This is called direct-mode.
Using the same example above, external connections also hit the load-balancer first which sits in front of two or more Security servers. Once the user authenticates and selects a desktop or application, the Security server responds with its external URL. The Horizon Client will connect to the Security server’s external URL (public facing IP address). The remote display protocol is then forwarded from the Security server to the virtual desktop or application (not direct from the Horizon Client). This is called tunneled-mode.
Access Point is another option and can be used instead of Security servers. These are virtual appliances with some major advantages, including the fact that they don't require pairing with a Connection server and that they run a hardened Linux distribution. Many service providers are keen to use virtual appliances where possible, as this avoids additional Windows Server licenses and favors placing Linux virtual appliances in the DMZ rather than Windows Servers.
Desktop Deployment Options
Horizon offers full clones, linked clones, and new with Horizon 7 are Instant Clones. Full clones use a template virtual machine in vCenter (our master) and a full clone desktop pool will contain a number of desktops that are full copies of the master (or parent) VM. They will have their own MAC address, computer name and IP address, but are otherwise full copies of the parent virtual machine. This is a good option for providing a dedicated desktop to someone that wants complete control, such as installing their own applications. However, it’s not the only option for the dedicated desktop.
Next we have ‘linked clones’. These are ideal for the non-persistent desktop, where a master image is maintained and a number of linked clones are created based on the master. This differs from full clones in a number of ways. First, the linked clone technology is extremely efficient on storage space. Rather than simply cloning the master VM each time, it is linked, meaning that the linked clone VM contains only the unique delta changes.
To make this possible, Horizon uses View Composer which is typically hosted on a dedicated Windows Server virtual machine.
The virtual machine disk is also constructed differently. When a linked clone desktop pool is created, the master virtual machine is cloned to a ‘replica’ virtual machine. The replica is essentially a virtual disk that is used for read operations. As data is changed, it is written to a delta virtual disk, unique to each virtual desktop.
The other advantage of linked clones is that you have the option to refresh or even delete the virtual machine at log off. Next time the user logs in they get a fresh copy of the master desktop image. This is a great option for maintaining corporate desktop standards.
Horizon 7 introduces another new technology called Instant Clones which when used in combination with App Volumes and User Environment Manager, allows for Just-in-Time desktops. You may remember at VMworld 2014 we announced VM Fork (aka Project Fargo), an exciting new technology that creates desktops in seconds. Providing you have vSphere 6 U1 or higher, Horizon 7 leverages this technology for Instant Clones. Instant Clones do not require View Composer.
RDSH Sessions
I mentioned earlier in this post about RDSH (Remote Desktop Session Host) sessions. Please don’t think of these as a second class citizen, apart from the obvious benefits of licensing, RDSH sessions can also provide the same rich user experience.
RDSH sessions can be deployed into both desktop or application pools, meaning that the end-user doesn’t necessarily have to launch a desktop session to access their applications. With further integration with Workspace ONE, end-users can open applications on any device with single sign-on (True SSO). You can learn more about Workspace ONE here.
Horizon 7 Terminology
[Table: Horizon 7 terminology]
Conclusion
In part 1 I introduced you to the Horizon 7 architecture and a typical single-tenant deployment using Pods and Blocks. In part 2, I will discuss the Horizon DaaS platform architecture and how this offers massive scale for multiple tenants in a service provider environment.
Overview of VMware DaaS
As recent times have shown, many organisations require the ability to work from any location and have secure access to their desktops and applications at all times. Typically, this would require substantial CAPEX investment to run the infrastructure, as well as an operational overhead to manage the environment efficiently.
VMware's Horizon Cloud offerings give organisations the flexibility to run their desktop and application workloads in purpose-built cloud platforms, thus reducing CAPEX costs and moving to an OPEX model based on subscriptions.
Other benefits that VMware Horizon Cloud provide include:
Provide for predictable resource requirements, for example increased demand during seasonal periods or when new projects require a burst of new systems to be commissioned and decommissioned
Maintain the same level of security and control as if the workloads are hosted on-premises
Speed and flexibility to deliver services to the masses without the traditional procurement delays
VMware offers several solutions for providing virtual desktops and applications as listed below:
Horizon 7 On-Premises
Horizon 7 on VMware Cloud on AWS
Horizon Cloud on IBM Cloud
Horizon Cloud on Microsoft Azure
This blog post will focus on VMware's Desktop-as-a-Service (DaaS) offering that is specifically designed for VMware Service Provider Partners (VSPP) as part of the broader VMware Cloud Provider Program (VCPP).
VMware Horizon DaaS allows Service Providers to:
Provide a single management console for provisioning and delivering virtual desktops and applications from the cloud without the tenants needing to understand the underlying infrastructure
Host multi-tenants, providing dedicated compute resources across dedicated or shared VMware vSphere clusters
Allows tenants to bring in their own network services (Active Directory, DNS, DHCP, File Servers, etc.) to provide the same level of security and control as if the workloads were running on-premises
Horizon DaaS Architecture
The figure below shows an example of a Horizon DaaS architecture where a Service Provider provides resources to two separate tenants in an isolated deployment, each with its own vCenter Server. Tenants can also be deployed on a shared resource cluster, and the clusters can be managed by a single vCenter Server; the design comes down to a number of factors, such as the number of tenants, the number of desktops and applications per tenant, security requirements, etc.
Figure 1: Horizon DaaS Architecture Example
Components of VMware Horizon DaaS
Horizon Version Manager appliance - Provides orchestration and automation for Horizon DaaS components. The HVM holds the appliance template and runtime scripts, which allow for the automatic creation of the Service Provider appliances and the Resource Manager appliances. This is a Linux virtual appliance that is deployed from an .OVA file in vCenter Server.
Horizon Air Link appliance - Once the HVM appliance is deployed and the template and scripts are copied to the machine, the next stage is to deploy the HAL appliance from the HVM admin portal. The HAL is responsible for sending API operations to vCenter Server to create the appliances.
Service Provider appliances - These are deployed as a pair for high availability. The SP appliances give Service Provider administrators access to a web-based portal (Service Center) where they can manage the Horizon DaaS environment. This is the main console from which tenants are deployed and assigned to resource clusters, and where desktop collections, essentially the capacity model for virtual desktops, are created.
Resource Manager appliances - Like the SPs, these are deployed by the HAL in a pair. The role of the RM is to provide access to and visibility of the hardware resources available from the vCenter Server(s) configured for Horizon DaaS. The RM allows Service Provider administrators to configure the compute resources for the tenants by allocating resources.
Tenant appliances - The tenant appliances (pair) are created from the Service Center portal. You configure the settings for the tenant, such as quotas for user licensing and desktop capacity.
Unified Access Gateway - This is a hardened Linux appliance that is deployed within the DMZ network to secure incoming traffic from external environments. External Horizon Clients connect to the UAG and do not see the backend environment; it is the UAG that communicates with the backend Horizon environment. The UAG supports multi-factor authentication to provide further security when accessing virtual desktops and applications from the Internet.
VMware Horizon DaaS for Service Providers
Many organizations are busy nowadays with questions regarding the IT cloud. The question is no longer whether they want cloud services, but when and how. Some organizations are still working on their cloud strategy, whilst others are already embracing the cloud with services like SaaS or IaaS.
One of the cloud-related topics is the virtual desktop (DaaS). In this blog article we will focus on the VMware Horizon DaaS platform and why this platform is interesting for Service Providers competing in the DaaS market.
So you are a Service Provider, and maybe you are thinking about providing, or already provide, virtual desktops and applications to your customers. How do you provide multi-tenancy, and how do you keep the operating roles between you and your customers separated?
What is Horizon DaaS?
VMware Horizon DaaS, formerly known as Desktone, is a Desktop-as-a-Service (DaaS) cloud solution built on VMware technologies and solutions to deliver cloud computing to users connecting from any type of device.
The first question around Horizon DaaS I often get from customers is how it differs from enterprise solutions like VMware View. Although many things are different between the two VMware solutions, they share the same protocols, such as PCoIP and Blast, and the agents within the virtual desktops are the same as in View. Both platforms can be used to deliver VDI desktops in a stateful (static) or stateless (dynamic) scenario, or to deliver RDSH desktops and applications. Users have the ability to connect to their desktops from the VMware View client or via the web portal.
The table below shows a quick overview of some of the major differences between both products:
[Table: Major differences between Horizon DaaS and VMware View]
In this article I will explain two of the most important differences for Service Providers:
Multi-tenancy
Demarcation within the platform
If you want to achieve a multi-tenant environment with VMware View or any other Enterprise VDI Solution, you probably have to have a dedicated environment per customer, resulting in many different environments with a lot of infrastructure components. All those different infrastructure servers must be managed which could be very time consuming.
Multi-tenancy
The VMware Horizon DaaS platform consists of multiple Linux based virtual appliances:
Service Provider Appliances – The Service Provider appliances provide the web-based Management Console for overall management of the solution and act as a transit point for enabling SSH access to all management appliances within the data center.
Resource Managers – The Resource Manager appliances integrate with the virtual infrastructure and abstract all of its specifics away from the tenant appliances.
Tenant Appliances – The Tenant Appliances provide the administrative and user portals to manage and access the customer (tenant) specific virtual desktop environment.
All appliances are managed by the Service Provider but only the Service Provider and Resource Manager appliances are part of the Service Provider network. The Service Provider network is typically a network where Active Directory, DNS and other services from the Service Provider itself are hosted. The Tenant appliances are participating in the network of the customer; this network is referred as the Tenant network.
For security reasons, all appliances are connected to a backbone link local (BBLL) network for management purposes. This is a non-routable network within the 169.254.x.x IP range. The Service Provider is able to manage the Tenant Appliances from the Service Center portal (see Management Portals below) over the BBLL; however, traffic between the Service Provider network and tenant networks is not possible.
To separate customer environments from each other, different vCenter Servers (or clusters), dedicated networking, storage, and compute can be used, keeping things securely isolated.
Tenant Appliances and virtual machines, either VDI desktops or supporting infrastructure servers like Active Directory and file servers, all participate in the network and Active Directory domain of the customer. To enable access between the tenant environment in the cloud and the customer's on-premises network, a VPN or MPLS can be used.
High Level Overview VMware Horizon DaaS
Management Portals
Another big difference between the two solutions is the way role separation is delivered. With a typical Enterprise VDI solution in a Service Provider scenario, there are a few design considerations:
Is the VDI solution part of the Customer Active Directory?
If the brokering platform is managed by the Service Provider, they need to have administrative permissions in the customer Active Directory.
The way role separation is arranged within the tooling.
How is the customer taking care of the images used in the VDI solution? vCenter access?
What to do with Service Accounts (security issue?)
Horizon DaaS comes with three different portals in total:
The Service Provider Portal (Service Center) is used by the Service Provider and integrates with the Service Provider Active Directory. This portal is not accessible to the customers and is used for overall management of the platform.
Each enrolled tenant in the Horizon DaaS platform has its own unique Enterprise Center portal where the customer can manage their own VDI environment, such as creating or editing patterns (images), creating or editing pools etc. The portal runs on the Tenant Appliances which is part of the Customer network and the customer Active Directory. Customers must use their own AD credentials to access Enterprise Center for administration.
The end-user portal also runs on the Tenant appliances. Users authenticate with their regular AD usernames and passwords. The Service Provider typically doesn't have an account in the customer's domain and is therefore unable to access the Enterprise Center and user portal.
Because each portal has its own functionality, and the Service Provider and Tenant portals use their own Active Directory domains, there is a very clear separation of roles and responsibilities. There is no integration between the customer and Service Provider Active Directory, which is a big advantage.
Conclusion
The solution described in this blog makes hosting DaaS in a multi-tenant scenario very easy and keeps the roles of providers and customers very clear. Because of its architecture and automated deployment of new appliances, VMware Horizon DaaS is very easy to scale, and it is very easy to onboard new tenants.
If you are not a Service provider but interested in VMware Desktops in the Cloud, there are multiple options from which you can choose:
Contact a Service Provider who has the VMware Horizon DaaS offering. For example XtraDesktop in the Netherlands
VMware Horizon Air based on the same software technology but operated by VMware.
VMware Horizon Air Hybrid mode, based on similar technology and enables unified management of on-premises virtual desktops and applications through a single cloud control plane.
For more information, please feel free to ask or go directly to:
http://www.vmware.com/nl/products/daas-vspp.html
AWS Certified Solutions Architect – Associate
Introduction
AWS Certification is a set of certificates issued by Amazon that assess your understanding of the AWS cloud, specifically the Amazon Web Services (AWS) services and how to apply them effectively to real-world problems. The certification program is divided into tracks: Cloud Practitioner, Architect, Developer, and Operations, plus the additional Specialty certificates. In terms of difficulty there are three levels:
Foundational
Associate
Professional
For details on the certificates, see the figure below. Reference: https://aws.amazon.com/certification/
[Image: AWS certification paths and levels]
Study plan
First, I set my target on the AWS Certified Solutions Architect – Associate certificate. As far as I know its difficulty is moderate, roughly equivalent to one year of hands-on AWS experience. I chose this certificate for the following reasons:
I have two years of experience as an architect, but on a different cloud where, apart from the infrastructure layer, my team controlled everything ourselves and did not have as many supporting services as AWS.
A certificate of moderate difficulty suits my current ability.
I chose the courses at https://linuxacademy.com/, and since I had no prior experience with AWS I first had to go through the AWS Cloud Practitioner course. That course is about 17 hours of online lessons and in practice took me about 10 days to complete. Next I studied AWS Certified Solutions Architect – Associate - a course with 57 hours of online lessons, which I planned to finish in one month. In the end I completed both the AWS Cloud Practitioner and AWS Certified Solutions Architect – Associate courses in about 2 months, studying 1-2 hours a day and 2-3 hours on weekends.
Taking the exam
After 2 months of studying and about 1 week of review I took the exam; I had booked the exam date back when I started the Solutions Architect – Associate course. As a result, at exam time there were still many topics I had not studied in depth, and I was fuzzy on the differences between services. Result: 674/1000 => FAIL. The exam has 65 questions, and 720/1000 or higher is a pass. A bit of a pity, but it was within my own prediction (650-700), since I had only scored around 50-70% on some practice tests, while the real exam is longer and a bit harder.
Review topics
Right after submitting the exam you immediately know whether the result is PASS or FAIL, but to see your score you have to log in to the AWS Training site where you registered online. OK, I failed, so to learn from it for the next attempt I recalled the questions where I was torn between two answers, or where several options seemed correct but I had to pick the best one for cost savings, the highest availability, and so on. Then I went back through the PDF materials of the course on https://linuxacademy.com/ to skim them one more time and noted down the points that need deeper study, as well as similar services that need to be compared. Below are the topics I quickly noted down and plan to reinforce for the upcoming attempt. I had booked the exam for early August 2020, but because of the COVID-19 outbreak in Da Nang I moved it to early October 2020.
DataSync
VPN and Direct Connect
On-premises to cloud
Snowball, Snowball Edge, and Snowmobile (pay attention to capacities)
Amount of data to be transferred to the cloud
NAT Gateway and NAT Instance
S3
SNS, SQS (FIFO, Standard)
Compare Amazon MQ, SQS, and SNS
Data from on-premises to cloud via EFS, File Gateway, and Volume Gateway
Storage Gateway
EBS: Throughput Optimized HDD, Provisioned IOPS, General Purpose
Learn about CloudFormation and Elastic Beanstalk
CloudFront restricting users by geographic region: Geo.. and Geoxy...
Athena and Redshift
Study DynamoDB and Aurora in depth
Security
When to use NAT, VPC endpoints, VPC PrivateLink; best practices and HA
Kinesis and Lambda triggers
Which databases support Multi-AZ or replication to another region
EFS: performance modes (General Purpose and Max I/O), throughput modes (Bursting Throughput and Provisioned)
Whether I/O relates to RAM or CPU
CloudFront and OAI
Encryption: S3, DB, bucket
S3: versioning and presigned URLs, MFA Delete and encryption, Cross-Origin Resource Sharing (CORS)
Route 53: health checks and record types
ELB
VPC: NAT Gateway and NAT Instance, egress-only internet gateways, VPC endpoints
Network Access Control Lists (NACLs)
Security groups
Bastion hosts
ECS: EC2 mode and Fargate mode
Lambda
EC2 instance types
Roles and policies
DynamoDB Accelerator (DAX)
Auto Scaling group default termination policy
EBS cross-region copies and snapshots
Compare DynamoDB, Redshift, and Aurora
Querying using standard SQL and existing business intelligence tools
Amazon GuardDuty
CloudWatch Events
VPN CloudHub
I will not go into detail about each service, since many other blogs have already covered them (in both Vietnamese and English). I will point out the things to keep in mind for each service and provide reference links so you can dig deeper.
S3
Quite a few questions relate to this service, a classic AWS service. Source: https://www.msp360.com/resources/blog/amazon-s3-storage-classes-guide/
[Image: Amazon S3 storage classes comparison]
Notes
S3 Standard: the highest storage cost, with fast access.
S3 Standard-Infrequent Access: as the name suggests, for data that needs to be stored but is not accessed frequently; when access is needed, however, the response must be immediate.
S3 Intelligent-Tiering: arguably not a storage class in its own right. Choose it when you cannot predict the access frequency of your data; AWS monitors the data and moves it back and forth between classes according to how often it is accessed, to keep your costs down.
Amazon Glacier: a class for infrequently accessed data at a low cost. Depending on how quickly you want to retrieve the data, it offers the following retrieval options (this comes up often in the exam; see the boto3 sketch after this list):
expedited - 1-5 minutes
standard - 3-5 hours
bulk - 5-12 hours
Amazon Glacier Deep Archive: the class with the cheapest storage cost, along with correspondingly slow data retrieval times of 12-48 hours.
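As an illustration of the retrieval tiers above, the boto3 call below requests a temporary restored copy of an object archived in Glacier. The bucket and key names are made up for the example; only the Tier values come from the list above.

```python
import boto3

s3 = boto3.client("s3")

# Request a temporary restored copy of an archived object.
# Tier can be "Expedited" (1-5 min), "Standard" (3-5 h) or "Bulk" (5-12 h).
s3.restore_object(
    Bucket="my-archive-bucket",          # hypothetical bucket name
    Key="reports/2019-backup.zip",       # hypothetical object key
    RestoreRequest={
        "Days": 7,                       # keep the restored copy for 7 days
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```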
Sample questions
[Images: sample exam questions on S3]
References
https://aws.amazon.com/s3/?nc1=h_ls https://aws.amazon.com/s3/storage-classes/?nc=sn&loc=3 https://www.msp360.com/resources/blog/amazon-s3-storage-classes-guide/
EC2
A few things to keep in mind when estimating Amazon EC2 costs: operating system, clock hours of server time, pricing model, instance type, and number of instances.
Pricing models for Amazon EC2:
On-Demand Instances
Reserved Instances
Spot Instances
Dedicated Hosts
On-Demand Instances
No upfront payment is required; you can scale compute capacity up or down to meet your needs and pay only for what you use. Suitable for short-term, unpredictable, experimental workloads. An example use case for On-Demand Instances is scaling OUT instances to serve a large, sudden, random spike in requests during the day whose timing you cannot predict.
Reserved Instances
Amazon EC2 Reserved Instances give you a discount of up to 75% compared to On-Demand Instances. When you have an application running long-term for 1-3 years and can predict the compute capacity the instance needs, this instance type is the right choice for saving costs.
Spot Instances
Amazon EC2 Spot instances allow you to request spare Amazon EC2 computing capacity for up to 90% off the On-Demand price.
Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity in the AWS cloud. In other words, you can get good hardware at a cheap price. When using this instance type, you must accept that processing can be interrupted and the EC2 instance terminated. See also: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-spot-instances.html https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-leveraging-ec2-spot-instances/when-to-use-spot-instances.html
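To get a feel for Spot pricing before committing a workload, a small boto3 call such as the sketch below lists recent Spot prices. The region and instance type are examples only, not a recommendation.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-1")  # example region

# Look up recent Spot prices for an example instance type.
response = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)

for entry in response["SpotPriceHistory"]:
    print(entry["AvailabilityZone"], entry["SpotPrice"], entry["Timestamp"])
```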
Dedicated Hosts
As the name suggests, this is a physical EC2 server dedicated to you. Dedicated Hosts help you reduce costs by allowing you to use your existing software licenses, such as Windows Server, SQL Server, and so on. Learn more at: https://aws.amazon.com/ec2/dedicated-hosts/
References
https://aws.amazon.com/ec2/pricing/?nc1=h_ls https://cuongquach.com/tim-hieu-ve-amazon-ec2-amazon-elastic-compute-cloud.html
Sample questions
[Images: sample exam questions on EC2 pricing models]
Migrating Data to the Cloud
Among the many scenarios presented, most will include questions about migrating data from on-premises to the AWS cloud. I will summarize some of the approaches and point out what to remember about each of these services.
Online data transfer
AWS Virtual Private Network
Quick to set up
Low cost
Does not guarantee stable bandwidth, and the level of security is not high
AWS Database Migration Service
Your database keeps operating normally during the transfer
Move and analyze sensitive data
Simple, automated, and accelerated
AWS Direct Connect
Establishes a dedicated private connection over 1G or 10G fiber
Use it when you need a fast, reliable, and secure channel for transferring data
Costs more and takes longer to set up than a VPN
Given the cost and time needed to set it up, it is not an ideal data transfer option if you only need it for a one-time migration.
AWS S3 Transfer Acceleration
If you are moving your data to S3, AWS's popular storage platform, over a long distance, then AWS S3 Transfer Acceleration helps you do it faster: 171% faster on average, according to AWS. (A boto3 sketch follows this list.)
Fast and secure transfer over the public internet
Saves time when transferring data from different locations
Data is transferred through Amazon CloudFront, maximizing bandwidth
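A minimal boto3 sketch of enabling Transfer Acceleration on a bucket and then uploading through the accelerate endpoint; the bucket and file names are placeholders, and the one-time enable step assumes you own the bucket.

```python
import boto3
from botocore.config import Config

# Enable Transfer Acceleration on an existing bucket (one-time setup).
s3 = boto3.client("s3")
s3.put_bucket_accelerate_configuration(
    Bucket="my-upload-bucket",                     # hypothetical bucket
    AccelerateConfiguration={"Status": "Enabled"},
)

# Upload through the accelerate endpoint.
accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
accelerated.upload_file("big-dataset.tar.gz", "my-upload-bucket", "big-dataset.tar.gz")
```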
AWS DataSync
Source: https://aws.amazon.com/datasync/?whats-new-cards.sort-by=item.additionalFields.postDateTime&whats-new-cards.sort-order=desc
[Image: AWS DataSync overview]
Offline data transfer
AWS Snowball
AWS Snowball is the baby of the Snowball family: an extremely rugged, suitcase-sized device that can be loaded with up to 80TB of data.
50 - 80TB
256-bit encryption
Encryption keys are managed through AWS Key Management Service (KMS)
AWS Snowball Edge
Larger than Snowball, with a capacity of up to 100TB. Snowball Edge can also run Lambda-based and EC2-based applications, even without a network connection.
This makes it ideal for use cases that require local processing or analysis before the data is moved to the AWS Cloud. It is, of course, priced higher than Snowball.
AWS Snowmobile
AWS Snowmobile is an exabyte-scale data migration solution that packs the equivalent of 1,250 AWS Snowball devices into a 45 ft shipping container.
AWS Snowmobile can move up to 100PB of data in a single trip, at about one fifth of the cost of transferring the data over a high-speed internet connection. Snowmobile is not only the fastest and cheapest way to move a huge amount of data to the cloud, it is also highly secure. Dedicated security personnel, GPS tracking, alarm monitoring, and 24/7 video surveillance work together to keep your data safe throughout the journey. The data is encrypted with 256-bit encryption keys.
References
https://www.jeffersonfrank.com/aws-blog/aws-data-transfer-costs/#AWSDC https://www.slideshare.net/AmazonWebServices/migrating-data-to-the-cloud-exploring-your-options-from-aws-stg205r1-aws-reinvent-2018
Sample questions
[Image: sample exam question on data migration]
Closing thoughts
That is all I am noting down for now in part 1. I am not sure everything I have noted here is complete or accurate; my aim is to list the topics that appear in the exam and the fuzzy points that need to be clarified in order to distinguish the services and know when to use which one. The knowledge required for this exam is not extremely deep, but it is spread evenly across the content. Mastering the concepts, knowing when to use each service, and keeping a few comparison points in mind for similar services will be the key to conquering this test. More important than passing or failing is the knowledge you actually grasp and understand so you can apply it in practice, so it is better to study these services to a reasonable depth rather than skimming over them. If you come across any questions or services worth discussing, leave a comment and let's discuss them together.
100% FREE | Self-study AWS series from basic to advanced (continuously updated...)
AWS stands for Amazon Web Services, which uses distributed IT infrastructure to provide various IT resources on demand.
The series includes:
Books, videos, and documents for learning AWS
An AWS learning roadmap from basic to advanced
Step-by-step hands-on practice with AWS services, with detailed screenshots
Books, videos, and documents
0.0 AWS book collection
0.1 Where to register to receive programming and IT ebooks
0.2 AWS learning videos (being updated...)
Experience taking the AWS Certified Solutions Architect – Associate exam
This article shares my preparation process for the AWSSA-A exam (Amazon AWS Certified Solutions Architect - Associate) - one of the three AWS exams at the Associate level.
Hopefully, sharing this study path will help those of you with a limited time budget like me (and as lazy as me) save time and achieve the best result.
What is AWSSA-A?
Amazon AWS Certified Solutions Architect - Associate is one of the three exams corresponding to AWS's three career paths for engineers working with its systems. The other two certificates are aimed at developers (Developer) and system operations (SysOps). Each exam has two levels: Associate and Professional.
When designing and deploying projects on AWS, the knowledge gained from AWSSA-A gives engineers an overall view of the services and of the things you should (and should not) do to build a good system in the cloud.
The amount of knowledge the exam covers is quite broad (almost all of the services), so if you have no hands-on AWS experience yet you will be at a disadvantage (it takes more time to study). On the other hand, because the scope is so broad, the exam does not go too deep into the core of each service, which also makes it a fairly approachable certificate for beginners.
Why get the AWSSA-A certificate?
General knowledge of AWS, plus best practices (and worst practices) in system design (not only on AWS)
Makes your CV look better, helps with raises,...
Helps your company join the APN (Amazon Partner Network); I hear you even get to print it on your business card, which is pretty fancy.
More concretely, one fine day you will find it useful when:
You have set up a NAT instance but your EC2 instances still cannot reach the internet
You have put a load balancer in front of EC2 but connections only work some of the time
You use S3 lifecycle rules to cut storage costs for the company (the boss loves it) - see the sketch right after this list
You propose new technologies and solutions that solve the problem at hand (in a better way): Lambda to cut costs, DynamoDB to scale better,...
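To make that S3 lifecycle point concrete, here is a minimal sketch using boto3 in Python. The bucket name, prefix, and day thresholds are made-up values for illustration only, not anything from the exam or from a real setup.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket/prefix: age log objects into cheaper storage, then expire them.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-logs-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                # Move objects to cheaper storage classes as they age...
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                # ...and delete them after a year.
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```

A rule like this is exactly the "reduce cost for data that is accessed less and less over time" scenario the exam likes to ask about.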
All in all there is only upside and nothing to lose (just kidding - you lose time), so if you think it would be useful, start planning today.
Exam structure
AWS exams are refreshed continuously to keep up with the changes and additions happening every day, which is why information about the exam found online can differ.
Here is the information for the latest version - AWSSA-A released February 2018 (English version).
The exam consists of 65 multiple-choice questions (including multiple-answer questions)
Duration: 130 minutes → an average of 2 minutes per question
Because the exam contains many multiple-answer questions, cramming selected topics or guessing at random will not help much (even if you have a well-connected relative in Sơn La or Hoà Bình).
In earlier versions the passing score hovered around 65% depending on difficulty. In the latest version, however, the passing score is fixed at 720/1000 (72%), so to pass comfortably you should aim for roughly 75% or higher.
Preparing for the exam
I deliberately took on lots of infrastructure-related tasks so that I would be touching AWS every day; being familiar with the core services meant I only needed about a month of revision (30 minutes each evening plus weekends).
① AWS Fundamentals by AWS on Coursera
Link: https://www.coursera.org/learn/aws-fundamentals-going-cloud-native
Notes: free course; provides the basic knowledge and concepts of the core services for beginners; the instructor is lovely and pleasant to listen to.
② AWS Certified Solutions Architect - Associate 2019 by Ryan Kroonenburg on Udemy
Link: https://www.udemy.com/aws-certified-solutions-architect-associate/
Notes: paid course ($12, roughly the price of one night out); covers everything on the AWSSA-A exam; offers plenty of tips, tricks, and real-project experience, which makes it easier to digest than reading a book; some chapters are quite theory-heavy - listening for more than 30 minutes straight works as well as a sleeping pill; I highly recommend reading the book below in parallel with this course.
③ The AWS Certified Solutions Architect – Associate study guide by the AWS Associate Team
A pirated-copy link for readers of questionable virtue (like me)
Notes: the book is quite thick (500+ pages) but clearly written and easy to read, so don't worry too much if your English is not great; highlight as you read so the important points stick and you stay on track; once you finish the book and score >75% on all its tests, you can sleep easy and go take the exam; that said, some of the information in the book is outdated. Right before exam day I still had a chapter or two left unread, so I switched to the online course above and let the instructor summarize the content.
④ FAQs of the main services: VPC, RDS, EC2, S3, and IAM.
Notes: free; AWS's documentation is remarkably thorough - so thorough that reading it is as much of a slog as reading a book, because it is so long; somewhere around 10% of the exam questions come up in this material.
⑤ The AWS Certified Solutions Architect - Associate Practice Tests course, also by A Cloud Guru, on Udemy
Link: https://www.udemy.com/aws-certified-solutions-architect-associate-practice-tests
Notes: costs another night out; shows you the exam structure and the style of questions in the real test; I did 3 mock tests the night before the exam (and failed all 3), yet on exam day I met about 3-4 questions very similar to ones I had practiced.
⑥ [Optional] As an alternative to course ②
The AWS Certified Solutions Architect - Associate course on Pluralsight, free if you register here
⑦ [Optional] AWS Fundamentals: Building Serverless Applications on Coursera
Link: https://www.coursera.org/learn/aws-fundamentals-building-serverless-applications
Notes: free course; Google Cloud has a fairly complete set of courses on Coursera while AWS has mostly focused on selling its own courses (banking on being the provider for the well-off), though recently AWS has started putting some quite good free courses on Coursera; Lambda and serverless are very hot keywords right now, and learning about them also helps with the exam (around 5 questions relate to Lambda).
⑧ Pre-exam
Review using your study notes
Review using a mind map
Review of the first 3 chapters here (too busy/lazy to summarize the remaining 9 chapters)
⑨ Registering for the exam
Register at CertMetrics (I took the exam in Japan)
Bring two forms of ID (residence card, passport, or national ID card, plus a bank card)
You can take the exam as soon as you arrive, and the test is done entirely on a computer
Tips
Sign up for a free-tier account to do the hands-on exercises (in the book) and the labs (in the courses). You can skip this if you already have working experience with AWS
If you still don't feel confident, register an extra account at qwiklabs.com to get more hands-on practice
Work through roughly 20-30% of courses ① and ② before starting the book, so the book is easier to follow
Pay attention to EC2 instance types, Instance Store vs EBS, and OLAP vs OLTP solutions
Understand the encryption scenarios for S3, RDS, EBS, and snapshots thoroughly - see the sketch right after this list
Spend time on the AWS Well-Architected Framework; it is useful both for the exam and for designing real systems
The exam lasts 130 minutes and I finished in about 50, but do re-check your answers carefully, and don't hesitate to flag the questions you are not sure about
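As one concrete angle on the encryption tip above, here is a minimal sketch, assuming boto3 and an arbitrarily chosen region, of how you might check and enable EBS encryption by default so that newly created volumes are encrypted automatically.

```python
import boto3

# Region picked arbitrarily for the example.
ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Check whether new EBS volumes in this region are encrypted by default.
status = ec2.get_ebs_encryption_by_default()
print("EBS encryption by default:", status["EbsEncryptionByDefault"])

# Turn it on if needed; from then on, new volumes in this region are
# encrypted with the account's default EBS KMS key.
if not status["EbsEncryptionByDefault"]:
    ec2.enable_ebs_encryption_by_default()
```

This is the kind of setting that is easy to try on a free-tier account and maps onto the EBS/snapshot encryption questions.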
Results
The PASS/FAIL result is shown immediately after you finish the exam and the survey; the detailed score is sent within at most 5 business days.
If you can, share your result and your preparation process - it may help others who are also thinking about registering for this exam
My score was 852/1000; luckily no domain came out too weak =)
Ref
https://notcuder.com/toi-da-lay-chung-chi-aws-solutions-architect-associate-nhu-the-nao/
http://mistwire.com/2016/05/aws-certified-solutions-architect-associate-study-notes/
https://www.awslagi.com/
https://quizlet.com/194513754/aws-certified-solutions-architect-associate-practice-questions-flash-cards/
https://medium.com/@franktran/how-to-succeed-in-aws-certified-solutions-architect-associate-exam-8a30344347f
0 notes
mrkeu · 3 years
Text
Sharing with everyone a self-study, self-deployment path for Oracle and SQL Server databases.
For the semi-specialists (sysadmins who also get handed database management), this material is directly applicable (it covers everything from architecture, backup, and recovery to user management and monitoring...).
For students exploring the profession, the videos on the path to becoming a DBA will help you. Do your research before deciding whether to follow this career.
For those of you already working as DBAs, the hands-on demo videos will be useful.
P.S.: If you get a raise or land a successful side project out of this, just treat me to a coffee and a chat.
Details:
A. If you are a student or coming from another field, the following videos will be useful:
- Can you become a DBA without knowing how to code: https://youtu.be/-uc56BeY4jQ
- How to become an Oracle DBA: https://youtu.be/UXXCrPb2hpA
- My tips for landing an internship on the way to becoming a DBA: https://youtu.be/Zjil1G3A0Vk
- How to study and intern effectively at the same time: https://youtu.be/dyYbzKPvjVM
- Even outsiders should know about databases, because they are behind everything going on around us: https://youtu.be/jr9IkKNFdAU
B. How to set up an environment for self-studying Oracle
- [RARE FIND] A pre-packaged Oracle lab environment - just open it and use it: https://youtu.be/bLAcNz1TmzQ
- A complete, step-by-step procedure for deploying Oracle 19c on Oracle Linux 7: https://tranquochuy.org/huong-dan-trien-khai-oracle.../
- A complete, step-by-step procedure for deploying Oracle 21c on Oracle Linux 7: https://tranquochuy.org/trien-khai-oracle-database-21c.../
C. Easy-to-follow, hands-on Oracle self-study videos.
- The single most important lesson for an administrator: understand the Oracle architecture in 15 minutes: https://youtu.be/6icn0a5lKi4
- Demo: checking database information on Windows: https://youtu.be/R2ojy5-pS0s
- How to shut down a database safely: https://youtu.be/UMeBLxHsu6A
D. For those who want to teach themselves data analysis with SQL in Oracle
- Lesson 1. How to retrieve data from a table: https://youtu.be/uWw4rPcntN8
- Lesson 2. How to filter data: https://youtu.be/tQj-r1cbK3s
- Lesson 3. Sorting data: https://youtu.be/a6IkkBPcL1M
- Lesson 4. Commonly used SQL functions: https://youtu.be/YLOhV7G3Plg
E. Self-study for the international Oracle certifications. A higher salary awaits you.
- A series of practice exam questions (with answers): https://youtu.be/neJR4lt4f7k
F. Battle-tested project know-how - for those working as DBAs or system admins
- Troubleshooting the classic "Oracle database won't start" incident: https://youtu.be/zT0LqPv3UXo
- Understanding systems with many concurrent transactions: https://youtu.be/0EzNU1esr-s
- Understanding and applying a database backup strategy: https://youtu.be/q6PxE6Hpj8k
- Tips for monitoring database activity: https://youtu.be/ANdcPqWtqsE
- Script for checking database size: (see the sketch right after this list)
- Script for monitoring active sessions in the database: (see the sketch right after this list)
- Handling a corrupt spfile on a production database: https://youtu.be/MtqT2aewUio
- Incident: deleted redo log files on a production database: https://youtu.be/cWZ2_YFhH1c
- Incident: deleted control files on a production database: https://youtu.be/0bQE4y3F5eY
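For the two script items above, here is a minimal sketch of what such checks could look like from Python with the python-oracledb driver. This is my own illustration rather than the original scripts; the credentials and DSN are placeholders, and the queries assume an account that can read DBA_DATA_FILES and V$SESSION.

```python
import oracledb

# Placeholder credentials/DSN - replace with your own environment's values.
conn = oracledb.connect(user="system", password="change_me", dsn="dbhost:1521/ORCLPDB1")
cur = conn.cursor()

# Rough database size: total space allocated to data files, in GB.
cur.execute("SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) FROM dba_data_files")
print("Datafile size (GB):", cur.fetchone()[0])

# Active user sessions right now: who is connected and from where.
cur.execute("""
    SELECT sid, serial#, username, machine, program
    FROM   v$session
    WHERE  status = 'ACTIVE' AND type = 'USER'
""")
for sid, serial, username, machine, program in cur:
    print(sid, serial, username, machine, program)

cur.close()
conn.close()
```

For a fuller picture you would normally also look at temp files, redo logs, and free space inside tablespaces, but the idea above is enough for a quick health check.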
G. For those who want to learn about SQL Server databases.
- Managing users and logins in SQL Server: https://youtu.be/gEi3vK408AM
- Using Activity Monitor to manage SQL Server: https://youtu.be/PDQbGcGKwUw
- Managing storage capacity in SQL Server: https://youtu.be/vmX0AKXYwQE
- Configuring automatic SQL Server database backups: https://youtu.be/AO79NjGRH3A
- An extremely easy-to-follow database restore guide: https://youtu.be/CB6Fj-9iYFY
0 notes
mrkeu · 3 years
Text
My good boy 😜
0 notes
mrkeu · 3 years
Text
12 PSYCHOLOGICAL TRICKS TO HELP YOU "GAIN THE UPPER HAND" IN SOCIAL INTERACTIONS
1. If you want to find something out from someone, ask them a question, and when they finish answering, stay silent and hold eye contact. They will tell you details they had not mentioned yet - sometimes almost everything.
2. When you are trying to convince someone of something outside of a formal negotiation, make sure they are sitting while you are standing. This makes them more inclined to trust you.
3. The key to walking into a room with confidence is to assume, and convince yourself, that everyone there already likes you. Believe in yourself first.
4. Call people you have just met by their names. Everyone wants to feel special to someone, and people generally like (and feel at home with) their own name; it creates a sense of trust and an instant start to a friendship. For example: "It's great to meet you, Trang. So how do you know Linh?" Keep repeating their name, and your own, throughout the conversation.
5. If someone is attracted to you, their eyes will start blinking more than usual during a conversation with you. (There are always private signals established between two people who like each other - signals only they recognize.)
6. Pay attention to the feet and the direction of the torso of the person you are talking to. To tell whether someone is interested in a conversation, look at their feet: if they are pointed toward you, they are engaged; if their body is angled away, they really want to wrap up the conversation. Feet don't lie. [This works in reverse as well: if you talk to someone with your body turned away, they will pick up on the signal and sense that you are not interested in listening - bodies read each other.]
7. At a party or a meeting, crack a joke and watch the people laughing around you. People who feel close to one another tend to glance at and turn toward each other first. This is very useful for spotting friendships and other relationships.
8. How to get people to do what you want them to do: give them a choice instead of an order. For example, instead of saying "drink your milk", offer two options: "would you like your milk from the bottle or poured into a cup?" This gives people the feeling that they are the one in control, which raises the chance they will agree.
9. If two people are arguing and one loses their temper and starts shouting, the natural human tendency (yours too) is to shout back. DON'T! Try to stay calm and answer quietly. This helps everything cool down - and of course don't pull a sullen face; show that you are taking the other person's point on board positively. Smile warmly and say "I understand, I really do."
10. To build trust: if you subtly mirror the body language of the person you are talking to, you can build trust with them. By reflecting the way they speak, their tone of voice, the terms they use, and the way they move, they will like you more, because to them it feels like the two of you are a good match.
11. To plant the seed of an idea in someone's mind, mention it casually a few times - for example, "you two seem like you're in love, don't you think?" or "do you have feelings for A?" Over time they will start having that thought themselves: "could it be that B likes me?" Or tell someone "I insist that you do NOT think about ELEPHANTS" - now, what are you thinking about?
12. Wise people stay in control of their lives down to the smallest details - as small as a glance, a handshake, a turn of the head, an exclamation... they understand exactly what each one means! They do not rush to judgment, and they do not let a moment of anger ruin something important. Handle things calmly and everything will fall into place.
Source: collected
0 notes
mrkeu · 3 years
Text
My games, in memoriam
0 notes