#kubernetes controller manager vs scheduler
coredgeblogs · 1 month ago
Kubernetes vs. Traditional Infrastructure: Why Clusters and Pods Win
In today’s fast-paced digital landscape, agility, scalability, and reliability are not just nice-to-haves—they’re necessities. Traditional infrastructure, once the backbone of enterprise computing, is increasingly being replaced by cloud-native solutions. At the forefront of this transformation is Kubernetes, an open-source container orchestration platform that has become the gold standard for managing containerized applications.
But what makes Kubernetes a superior choice compared to traditional infrastructure? In this article, we’ll dive deep into the core differences, and explain why clusters and pods are redefining modern application deployment and operations.
Understanding the Fundamentals
Before drawing comparisons, it’s important to clarify what we mean by each term:
Traditional Infrastructure
This refers to monolithic, VM-based environments typically managed through manual or semi-automated processes. Applications are deployed on fixed servers or VMs, often with tight coupling between hardware and software layers.
Kubernetes
Kubernetes abstracts away infrastructure by using clusters (groups of nodes) to run pods (the smallest deployable units of computing). It automates deployment, scaling, and operations of application containers across clusters of machines.
Key Comparisons: Kubernetes vs Traditional Infrastructure
| Feature | Traditional Infrastructure | Kubernetes |
| --- | --- | --- |
| Scalability | Manual scaling of VMs; slow and error-prone | Auto-scaling of pods and nodes based on load |
| Resource Utilization | Inefficient due to over-provisioning | Efficient bin-packing of containers |
| Deployment Speed | Slow and manual (e.g., SSH into servers) | Declarative deployments via YAML and CI/CD |
| Fault Tolerance | Rigid failover; high risk of downtime | Self-healing, with automatic pod restarts and rescheduling |
| Infrastructure Abstraction | Tightly coupled; app knows about the environment | Decoupled; Kubernetes abstracts compute, network, and storage |
| Operational Overhead | High; requires manual configuration, patching | Low; centralized, automated management |
| Portability | Limited; hard to migrate across environments | High; deploy to any Kubernetes cluster (cloud, on-prem, hybrid) |
Why Clusters and Pods Win
1. Decoupled Architecture
Traditional infrastructure often binds application logic tightly to specific servers or environments. Kubernetes promotes microservices and containers, isolating app components into pods. These can run anywhere without knowing the underlying system details.
2. Dynamic Scaling and Scheduling
In a Kubernetes cluster, pods can scale automatically based on real-time demand. The Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler help dynamically adjust resources—unthinkable in most traditional setups.
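As a rough sketch of how this is declared (the Deployment name and thresholds below are placeholders, not from the original post), a HorizontalPodAutoscaler tells the cluster to keep a workload between two and ten replicas based on observed CPU utilization:

```yaml
# Sketch of a HorizontalPodAutoscaler; "web-api" and the numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api              # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```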
3. Resilience and Self-Healing
Kubernetes watches your workloads continuously. If a pod crashes or a node fails, the system automatically reschedules the workload on healthy nodes. This built-in self-healing drastically reduces operational overhead and downtime.
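A minimal illustration of the desired-state model behind this (the image, names, and probe path are assumptions for the example): a Deployment that asks for three replicas with a liveness probe, which Kubernetes keeps reconciling by restarting unhealthy containers and rescheduling pods from failed nodes.

```yaml
# Illustrative Deployment: Kubernetes continuously reconciles toward 3 healthy replicas.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
spec:
  replicas: 3
  selector:
    matchLabels: { app: web-api }
  template:
    metadata:
      labels: { app: web-api }
    spec:
      containers:
        - name: web-api
          image: registry.example.com/web-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:                # failing probes trigger automatic restarts
            httpGet: { path: /healthz, port: 8080 }
            initialDelaySeconds: 10
            periodSeconds: 15
```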
4. Faster, Safer Deployments
With declarative configurations and GitOps workflows, teams can deploy with speed and confidence. Rollbacks, canary deployments, and blue/green strategies are natively supported—streamlining what’s often a risky manual process in traditional environments.
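As one small example of that declarative style (values are illustrative), the rollout behavior itself lives in the Deployment spec, and a bad release can be reverted with `kubectl rollout undo`:

```yaml
# Fragment of a Deployment spec: roll pods over gradually instead of all at once.
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # never take more than one replica down during the rollout
      maxSurge: 1         # create at most one extra replica while rolling
```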
5. Unified Management Across Environments
Whether you're deploying to AWS, Azure, GCP, or on-premises, Kubernetes provides a consistent API and toolchain. No more re-engineering apps for each environment—write once, run anywhere.
Addressing Common Concerns
“Kubernetes is too complex.”
Yes, Kubernetes has a learning curve. But its complexity replaces operational chaos with standardized automation. Tools like Helm, ArgoCD, and managed services (e.g., GKE, EKS, AKS) help simplify the onboarding process.
“Traditional infra is more secure.”
Security in traditional environments often depends on network perimeter controls. Kubernetes promotes zero trust principles, pod-level isolation, and RBAC, and integrates with service meshes like Istio for granular security policies.
Real-World Impact
Companies like Spotify, Shopify, and Airbnb have migrated from legacy infrastructure to Kubernetes to:
Reduce infrastructure costs through efficient resource utilization
Accelerate development cycles with DevOps and CI/CD
Enhance reliability through self-healing workloads
Enable multi-cloud strategies and avoid vendor lock-in
Final Thoughts
Kubernetes is more than a trend—it’s a foundational shift in how software is built, deployed, and operated. While traditional infrastructure served its purpose in a pre-cloud world, it can’t match the agility and scalability that Kubernetes offers today.
Clusters and pods don’t just win—they change the game.
aarna-blog · 1 month ago
Why GPU PaaS Is Incomplete Without Infrastructure Orchestration and Tenant Isolation
GPU Platform-as-a-Service (PaaS) is gaining popularity as a way to simplify AI workload execution — offering users a friendly interface to submit training, fine-tuning, and inferencing jobs. But under the hood, many GPU PaaS solutions lack deep integration with infrastructure orchestration, making them inadequate for secure, scalable multi-tenancy.
If you're a Neocloud, sovereign GPU cloud, or an enterprise private GPU cloud with strict compliance requirements, you are probably looking at offering job scheduling of Model-as-a-Service to your tenants/users. An easy approach is to have a global Kubernetes cluster that is shared across multiple tenants. The problem with this approach is poor security, as the underlying OS kernel, CPU, GPU, network, and storage resources are shared by all users without any isolation. Case in point: in September 2024, Wiz discovered a critical GPU container and Kubernetes vulnerability that affected over 35% of environments. Thus, Kubernetes namespace or vCluster isolation alone is not safe.
You need to provision bare metal, configure network and fabric isolation, allocate high-performance storage, and enforce tenant-level security boundaries — all automated, dynamic, and policy-driven.
In short: PaaS is not enough. True GPUaaS begins with infrastructure orchestration.
The Pitfall of PaaS-Only GPU Platforms
Many AI platforms stop at providing:
A web UI for job submission
A catalog of AI/ML frameworks or models
Basic GPU scheduling on Kubernetes  
What they don’t offer:
Control over how GPU nodes are provisioned (bare metal vs. VM)
Enforcement of north-south and east-west isolation per tenant
Configuration and Management of Infiniband, RoCE or Spectrum-X fabric
Lifecycle Management and Isolation of External Parallel Storage like DDN, VAST, or WEKA
Per-Tenant Quota, Observability, RBAC, and Policy Governance  
Without these, your GPU PaaS is just a thin UI on top of a complex, insecure, and hard-to-scale backend.
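For context, the namespace-level controls that a shared cluster typically relies on look roughly like the sketch below (the tenant name and limits are placeholders). This is the layer the argument above calls insufficient on its own, since it does not isolate the kernel, GPU, fabric, or storage underneath:

```yaml
# Illustrative per-tenant namespace quota: necessary, but not sufficient isolation.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: tenant-a-quota
  namespace: tenant-a
spec:
  hard:
    requests.cpu: "64"
    requests.memory: 256Gi
    requests.nvidia.com/gpu: "8"   # GPU quota assumes the NVIDIA device plugin is installed
```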
What Full-Stack Orchestration Looks Like
To build a robust AI cloud platform — whether sovereign, Neocloud, or enterprise — the orchestration layer must go deeper.
How aarna.ml GPU CMS Solves This Problem
aarna.ml GPU CMS is built from the ground up to be infrastructure-aware and multi-tenant-native. It includes all the PaaS features you would expect, but goes beyond PaaS to offer:
BMaaS and VMaaS orchestration: Automated provisioning of GPU bare metal or VM pools for different tenants.
Tenant-level network isolation: Support for VXLAN, VRF, and fabric segmentation across Infiniband, Ethernet, and Spectrum-X.
Storage orchestration: Seamless integration with DDN, VAST, WEKA with mount point creation and tenant quota enforcement.
Full-stack observability: Usage stats, logs, and billing metrics per tenant, per GPU, per model.
All of this is wrapped with a PaaS layer that supports Ray, SLURM, KAI, Run:AI, and more, giving users flexibility while keeping cloud providers in control of their infrastructure and policies.
Why This Matters for AI Cloud Providers
If you're offering GPUaaS or PaaS without infrastructure orchestration:
You're exposing tenants to noisy neighbors or shared vulnerabilities
You're missing critical capabilities like multi-region scaling or LLM isolation
You'll be unable to meet compliance and governance requirements or reach SemiAnalysis ClusterMax1-grade maturity
With aarna.ml GPU CMS, you deliver not just a PaaS, but a complete, secure, and sovereign-ready GPU cloud platform.
Conclusion
GPU PaaS needs to be a complete stack with IaaS — it’s not just a model serving interface!
To deliver scalable, secure, multi-tenant AI services, your GPU PaaS stack must be expanded to a full GPU cloud management software stack to include automated provisioning of compute, network, and storage, along with tenant-aware policy and observability controls.
Only then is your GPU PaaS truly production-grade.
Only then are you ready for sovereign, enterprise, and commercial AI cloud success.
To see a live demo or for a free trial, contact aarna.ml
This post was originally posted on https://www.aarna.ml/
sathcreation · 1 month ago
DevOps with Docker and Kubernetes Coaching by Gritty Tech
Introduction
In the evolving world of software development and IT operations, the demand for skilled professionals in DevOps with Docker and Kubernetes coaching is growing rapidly. Organizations are seeking individuals who can streamline workflows, automate processes, and enhance deployment efficiency using modern tools like Docker and Kubernetes.
Gritty Tech, a leading global platform, offers comprehensive DevOps with Docker and Kubernetes coaching that combines hands-on learning with real-world applications. With an expansive network of expert tutors across 110+ countries, Gritty Tech ensures that learners receive top-quality education with flexibility and support.
What is DevOps with Docker and Kubernetes?
Understanding DevOps
DevOps is a culture and methodology that bridges the gap between software development and IT operations. It focuses on continuous integration, continuous delivery (CI/CD), automation, and faster release cycles to improve productivity and product quality.
Role of Docker and Kubernetes
Docker allows developers to package applications and dependencies into lightweight containers that can run consistently across environments. Kubernetes is an orchestration tool that manages these containers at scale, handling deployment, scaling, and networking with efficiency.
When combined, DevOps with Docker and Kubernetes coaching equips professionals with the tools and practices to deploy faster, maintain better control, and ensure system resilience.
Why Gritty Tech is the Best for DevOps with Docker and Kubernetes Coaching
Top-Quality Education, Affordable Pricing
Gritty Tech believes that premium education should not come with a premium price tag. Our DevOps with Docker and Kubernetes coaching is designed to be accessible, offering robust training programs without compromising quality.
Global Network of Expert Tutors
With educators across 110+ countries, learners benefit from diverse expertise, real-time guidance, and tailored learning experiences. Each tutor is a seasoned professional in DevOps, Docker, and Kubernetes.
Easy Refunds and Tutor Replacement
Gritty Tech prioritizes your satisfaction. If you're unsatisfied, we offer a no-hassle refund policy. Want a different tutor? We offer tutor replacements swiftly, without affecting your learning journey.
Flexible Payment Plans
Whether you prefer monthly billing or paying session-wise, Gritty Tech makes it easy. Our flexible plans are designed to suit every learner’s budget and schedule.
Practical, Hands-On Learning
Our DevOps with Docker and Kubernetes coaching focuses on real-world projects. You'll learn to set up CI/CD pipelines, containerize applications, deploy using Kubernetes, and manage cloud-native applications effectively.
Key Benefits of Learning DevOps with Docker and Kubernetes
Streamlined Development: Improve collaboration between development and operations teams.
Scalability: Deploy applications seamlessly across cloud platforms.
Automation: Minimize manual tasks with scripting and orchestration.
Faster Delivery: Enable continuous integration and continuous deployment.
Enhanced Security: Learn secure deployment techniques with containers.
Job-Ready Skills: Gain competencies that top tech companies are actively hiring for.
Curriculum Overview
Our DevOps with Docker and Kubernetes coaching covers a wide array of modules that cater to both beginners and experienced professionals:
Module 1: Introduction to DevOps Principles
DevOps lifecycle
CI/CD concepts
Collaboration and monitoring
Module 2: Docker Fundamentals
Containers vs. virtual machines
Docker installation and setup
Building and managing Docker images
Networking and volumes
Module 3: Kubernetes Deep Dive
Kubernetes architecture
Pods, deployments, and services
Helm charts and configurations
Auto-scaling and rolling updates
Module 4: CI/CD Integration
Jenkins, GitLab CI, or GitHub Actions
Containerized deployment pipelines
Monitoring tools (Prometheus, Grafana)
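As a small, hypothetical illustration of the containerized deployment pipelines covered in Module 4 (the registry, image name, and manifest path are placeholders), a GitHub Actions workflow might look like this:

```yaml
# Hypothetical CI/CD workflow: build and push an image, then apply Kubernetes manifests.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker build -t registry.example.com/demo-app:${{ github.sha }} .
          docker push registry.example.com/demo-app:${{ github.sha }}
      - name: Deploy to Kubernetes
        # assumes the runner already has a kubeconfig with cluster access
        run: kubectl apply -f k8s/
```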
Module 5: Cloud Deployment
Deploying Docker and Kubernetes on AWS, Azure, or GCP
Infrastructure as Code (IaC) with Terraform or Ansible
Real-time troubleshooting and performance tuning
Who Should Take This Coaching?
The DevOps with Docker and Kubernetes coaching program is ideal for:
Software Developers
System Administrators
Cloud Engineers
IT Students and Graduates
Anyone transitioning into DevOps roles
Whether you're a beginner or a professional looking to upgrade your skills, this coaching offers tailored learning paths to meet your career goals.
What Makes Gritty Tech Different?
Personalized Mentorship
Unlike automated video courses, our live sessions with tutors ensure all your queries are addressed. You'll receive personalized feedback and career guidance.
Career Support
Beyond just training, we assist with resume building, interview preparation, and job placement resources so you're confident in entering the job market.
Lifetime Access
Enrolled students receive lifetime access to updated materials and recorded sessions, helping you stay up to date with evolving DevOps practices.
Student Success Stories
Thousands of learners across continents have transformed their careers through our DevOps with Docker and Kubernetes coaching. Many have secured roles as DevOps Engineers, Site Reliability Engineers (SRE), and Cloud Consultants at leading companies.
Their success is a testament to the effectiveness and impact of our training approach.
FAQs About DevOps with Docker and Kubernetes Coaching
What is DevOps with Docker and Kubernetes coaching?
DevOps with Docker and Kubernetes coaching is a structured learning program that teaches you how to integrate Docker containers and manage them using Kubernetes within a DevOps lifecycle.
Why should I choose Gritty Tech for DevOps with Docker and Kubernetes coaching?
Gritty Tech offers experienced mentors, practical training, flexible payments, and global exposure, making it the ideal choice for DevOps with Docker and Kubernetes coaching.
Is prior experience needed for DevOps with Docker and Kubernetes coaching?
No. While prior experience helps, our coaching is structured to accommodate both beginners and professionals.
How long does the DevOps with Docker and Kubernetes coaching program take?
The average duration is 8 to 12 weeks, depending on your pace and session frequency.
Will I get a certificate after completing the coaching?
Yes. A completion certificate is provided, which adds value to your resume and validates your skills.
What tools will I learn in DevOps with Docker and Kubernetes coaching?
You’ll gain hands-on experience with Docker, Kubernetes, Jenkins, Git, Terraform, Prometheus, Grafana, and more.
Are job placement services included?
Yes. Gritty Tech supports your career with resume reviews, mock interviews, and job assistance services.
Can I attend DevOps with Docker and Kubernetes coaching part-time?
Absolutely. Sessions are scheduled flexibly, including evenings and weekends.
Is there a money-back guarantee for DevOps with Docker and Kubernetes coaching?
Yes. If you’re unsatisfied, we offer a simple refund process within a stipulated period.
How do I enroll in DevOps with Docker and Kubernetes coaching?
You can register through the Gritty Tech website. Our advisors are ready to assist you with the enrollment process and payment plans.
Conclusion
Choosing the right platform for DevOps with Docker and Kubernetes coaching can define your success in the tech world. Gritty Tech offers a powerful combination of affordability, flexibility, and expert-led learning. Our commitment to quality education, backed by global tutors and personalized mentorship, ensures you gain the skills and confidence needed to thrive in today’s IT landscape.
Invest in your future today with Gritty Tech — where learning meets opportunity.
jenniferphilop0420 · 3 months ago
How to Ensure 24/7 Uptime in Cryptocurrency Exchange Development
Cryptocurrency exchanges operate in a high-stakes environment where even a few minutes of downtime can result in significant financial losses, security vulnerabilities, and loss of customer trust. Ensuring 24/7 uptime in cryptocurrency exchange development requires a combination of advanced infrastructure, strategic planning, security measures, and continuous monitoring. This guide explores the best practices and technologies to achieve maximum uptime and ensure seamless operations.
1. Choosing the Right Infrastructure
The backbone of any high-availability exchange is its infrastructure. Consider the following:
1.1 Cloud-Based Solutions vs. On-Premises Hosting
Cloud-based solutions: Scalable, reliable, and backed by industry leaders such as AWS, Google Cloud, and Microsoft Azure.
On-premises hosting: Offers more control but requires extensive maintenance and security protocols.
1.2 High Availability Architecture
Load balancing: Distributes traffic across multiple servers to prevent overload.
Redundant servers: Ensures backup servers take over in case of failure.
Content Delivery Networks (CDNs): Improve response times by caching content globally.
2. Implementing Failover Mechanisms
2.1 Database Redundancy
Use Primary-Replica architecture to maintain real-time backups.
Implement automatic failover mechanisms for instant switching in case of database failure.
2.2 Active-Passive and Active-Active Systems
Active-Passive: One server remains on standby and takes over during failures.
Active-Active: Multiple servers actively handle traffic, ensuring zero downtime.
3. Ensuring Network Resilience
3.1 Distributed Denial-of-Service (DDoS) Protection
Implement DDoS mitigation services like Cloudflare or Akamai.
Use rate limiting and traffic filtering to prevent malicious attacks.
3.2 Multiple Data Centers
Distribute workload across geographically dispersed data centers.
Use automated geo-routing to shift traffic in case of regional outages.
4. Continuous Monitoring and Automated Alerts
4.1 Real-Time Monitoring Tools
Use Nagios, Zabbix, or Prometheus to monitor server health.
Implement AI-driven anomaly detection for proactive issue resolution.
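If Prometheus is the monitoring tool of choice, availability alerts can be codified as rules. The sketch below is illustrative only (thresholds and labels are placeholders, not a recommended production policy):

```yaml
# Illustrative Prometheus alerting rule: fire when any monitored instance is down for 1 minute.
groups:
  - name: exchange-availability
    rules:
      - alert: InstanceDown
        expr: up == 0          # the built-in "up" metric is 0 when a scrape target is unreachable
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
```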
4.2 Automated Incident Response
Develop automated scripts to resolve common issues.
Use chatbots and AI-powered alerts for instant notifications.
5. Regular Maintenance and Software Updates
5.1 Scheduled Maintenance Windows
Plan updates during non-peak hours.
Use rolling updates to avoid complete downtime.
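For Kubernetes-based exchange components, a PodDisruptionBudget complements rolling updates by capping how many replicas a voluntary disruption, such as a node drain during maintenance, may take offline at once. A sketch with placeholder names:

```yaml
# Illustrative PodDisruptionBudget: keep at least 2 matching-engine pods up during maintenance.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: matching-engine-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: matching-engine   # assumed label on the trading/matching-engine pods
```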
5.2 Security Patching
Implement automated patch management to fix vulnerabilities without disrupting service.
6. Advanced Security Measures
6.1 Multi-Layer Authentication
Use 2FA (Two-Factor Authentication) for secure logins.
Implement hardware security modules (HSMs) for cryptographic security.
6.2 Cold and Hot Wallet Management
Use cold wallets for long-term storage and hot wallets for active trading.
Implement multi-signature authorization for withdrawals.
7. Scalability Planning
7.1 Vertical vs. Horizontal Scaling
Vertical Scaling: Upgrading individual server components (RAM, CPU).
Horizontal Scaling: Adding more servers to distribute load.
7.2 Microservices Architecture
Decouple services for independent scaling.
Use containerization (Docker, Kubernetes) for efficient resource management.
8. Compliance and Regulatory Requirements
8.1 Adherence to Global Standards
Ensure compliance with AML (Anti-Money Laundering) and KYC (Know Your Customer) policies.
Follow GDPR and PCI DSS standards for data protection.
8.2 Audit and Penetration Testing
Conduct regular security audits and penetration testing to identify vulnerabilities.
Implement bug bounty programs to involve ethical hackers in security improvements.
Conclusion
Achieving 24/7 uptime in cryptocurrency exchange development requires a comprehensive approach involving robust infrastructure, failover mechanisms, continuous monitoring, and security best practices. By integrating these strategies, exchanges can ensure reliability, security, and customer trust in a highly competitive and fast-evolving market.
hawkstack · 5 months ago
OpenShift vs Kubernetes: Key Differences Explained
Kubernetes has become the de facto standard for container orchestration, enabling organizations to manage and scale containerized applications efficiently. However, OpenShift, built on top of Kubernetes, offers additional features that streamline development and deployment. While they share core functionalities, they have distinct differences that impact their usability. In this blog, we explore the key differences between OpenShift and Kubernetes.
1. Core Overview
Kubernetes:
Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and operation of application containers. It provides the building blocks for containerized workloads but requires additional tools for complete enterprise-level functionality.
OpenShift:
OpenShift is a Kubernetes-based container platform developed by Red Hat. It provides additional features such as a built-in CI/CD pipeline, enhanced security, and developer-friendly tools to simplify Kubernetes management.
2. Installation & Setup
Kubernetes:
Requires manual installation and configuration.
Cluster setup involves configuring multiple components such as kube-apiserver, kube-controller-manager, kube-scheduler, and networking.
Offers flexibility but requires expertise to manage.
OpenShift:
Provides an easier installation process with automated scripts.
Includes a fully integrated web console for management.
Requires Red Hat OpenShift subscriptions for enterprise-grade support.
3. Security & Authentication
Kubernetes:
Security policies and authentication need to be manually configured.
Role-Based Access Control (RBAC) is available but requires additional setup (a minimal example is sketched at the end of this section).
OpenShift:
Comes with built-in security features.
Uses Security Context Constraints (SCCs) for enhanced security.
Integrated authentication mechanisms, including OAuth and LDAP support.
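To illustrate the manual RBAC setup noted above for plain Kubernetes (names and namespace are placeholders), a minimal Role and RoleBinding might look like this; OpenShift layers SCCs and integrated OAuth on top of the same primitives:

```yaml
# Sketch of plain-Kubernetes RBAC: read-only access to pods in one namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
  - kind: User
    name: jane            # placeholder user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```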
4. Networking
Kubernetes:
Uses third-party plugins (e.g., Calico, Flannel, Cilium) for networking.
Network policies must be configured separately (see the sketch at the end of this section).
OpenShift:
Uses Open vSwitch-based SDN by default.
Provides automatic service discovery and routing.
Built-in router and HAProxy-based load balancing.
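As a sketch of the point that network policies in plain Kubernetes are configured separately (labels are placeholders, and enforcement depends on the CNI plugin in use), a NetworkPolicy restricting ingress to a backend could look like:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=frontend may reach the backend pods.
# Enforcement requires a policy-capable CNI such as Calico or Cilium.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```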
5. Development & CI/CD Integration
Kubernetes:
Requires third-party tools for CI/CD (e.g., Jenkins, ArgoCD, Tekton).
Developers must integrate CI/CD pipelines manually.
OpenShift:
Comes with built-in CI/CD capabilities via OpenShift Pipelines.
Source-to-Image (S2I) feature allows developers to build images directly from source code.
Supports GitOps methodologies out of the box.
6. User Interface & Management
Kubernetes:
Managed through the command line (kubectl) or third-party UI tools (e.g., Lens, Rancher).
No built-in dashboard; requires separate installation.
OpenShift:
Includes a built-in web console for easier management.
Provides graphical interfaces for monitoring applications, logs, and metrics.
7. Enterprise Support & Cost
Kubernetes:
Open-source and free to use.
Requires skilled teams to manage and maintain infrastructure.
Support is available from third-party providers.
OpenShift:
Requires a Red Hat subscription for enterprise support.
Offers enterprise-grade stability, support, and compliance features.
Managed OpenShift offerings are available via cloud providers (AWS, Azure, GCP).
Conclusion
Both OpenShift and Kubernetes serve as powerful container orchestration platforms. Kubernetes is highly flexible and widely adopted, but it demands expertise for setup and management. OpenShift, on the other hand, simplifies the experience with built-in security, networking, and developer tools, making it a strong choice for enterprises looking for a robust, supported Kubernetes distribution.
Choosing between them depends on your organization's needs: if you seek flexibility and open-source freedom, Kubernetes is ideal; if you prefer an enterprise-ready solution with out-of-the-box tools, OpenShift is the way to go.
For more details click www.hawkstack.com 
govindhtech · 2 years ago
Understanding the GKE Stateful HA Controller
GKE Stateful HA Controller: For any operational application running on Google Kubernetes Engine (GKE), designing for application needs is a crucial business consideration. The case for stateful applications, such as databases and message queues, is particularly strong. However, running stateful applications frequently requires a trade-off between availability and cost. For instance, you can save expenses but trade availability by running a single replica application in a single zone. You can also run additional application replicas to provide data redundancy in the case of a node failure if you require higher availability, but this comes at a cost in terms of computation and network infrastructure.
Additionally, Kubernetes’ scheduler adopts an eventually consistent strategy when faced with disruptive events (such as upgrades or zone failure). Despite the fact that this is effective for stateless applications, clients prefer stateful applications to take a more proactive stance. They want to be able to manage failover times and see how stateful apps respond to disruptions.
Customers want a compromise that combines the cost-effectiveness of a single-replica application with the availability of multiple replicas. To help, Google is announcing Stateful HA Operator on GKE, a new feature that delivers proactive scheduling to stateful applications while balancing cost and availability. Stateful HA Operator is in preview.
Let’s examine Stateful HA Operator in more detail to see how it might assist you in balancing cost and availability for your stateful apps.
Understanding the Stateful HA Controller
High level: By integrating with regional persistent disk (RePD), Stateful HA Operator gives proactive controls to stateful applications and boosts availability.
Numerous low-cost availability tools provide eventually consistent failover, with a failover procedure that lasts about ten minutes. If you wish to reach the industry benchmark Recovery Time Objective (RTO) of 60–120 seconds, this is too long. Stateful HA Operator minimizes this delay and enables workload-specific customization of your failover response, allowing you to match failover times to business needs. You can audit and follow any failover action because you have full observability.
Meanwhile, the usage of regional persistent SSDs opens up a new choice in the cost vs. availability debate. A storage choice known as regional persistent disk enables synchronous data replication between two zones in a region. You can balance cost and availability since running more computation is typically more expensive than adding more storage, and because RePD does not charge for cross-zone networking. Your application has available compute capacity to fail over to in the event of a total failure when the Stateful HA Operator is used with Spare Capacity or PriorityClass.
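For reference, regional persistent disks on GKE are typically requested through a StorageClass along these lines; treat the parameters as a sketch and confirm them against the current GKE documentation:

```yaml
# Sketch of a GKE StorageClass for regional persistent SSD, replicated across two zones.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: regional-ssd
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd
  replication-type: regional-pd   # synchronous replication between two zones in the region
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```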
A case study
Use case 1: Upgrading PostgreSQL availability for a single replica only at an 8% cost increase
Consider a typical PostgreSQL application with a single replica that costs $391 per month on the list price. Applications with a single replica are susceptible to outages, and the RTO might range from hours to days.
You can add tolerance to disruptions like zonal failures for a negligible increase of 8% by deploying the Stateful HA Controller and allowing it to execute failover using Regional PD. At a very appealing price point, Stateful HA Operator redeploys your replica within a specified timeout, enabling you to reduce the application’s unavailability window to match its SLA. Add failover compute capacity if you require even greater availability.
Use case 2: Lowering costs for a multi-replica Kafka with up to 53% in cost savings
Although inter-zone application replication adds extra computation and storage costs, some applications need to maintain a high RTO in the event of node failures. All replicas can be rescheduled from the primary to the backup zone in the unlikely case of a zone failure. Under typical operating conditions, this enables the application to maximize cost savings on inter-zone networking.
Kafka was created to operate over a flat network. Depending on data replication prices, some applications find that inter-zone data replication can account for over 80% of overall application expenses, greatly exceeding the cost of both compute and storage. The zonal isolation approach can be applied to Kafka. Consider a Kafka application with six replicas. The total list price for all Kafka brokers, distributed equally across zones, is $3,969.54 per month. You can cut those costs by up to 53% by deploying the Stateful HA Controller.
Test out the Helm-based GKE Stateful HA Controller
The Stateful HA Operator is a fully automated solution that eliminates the laborious process of tailoring your application to satisfy its availability requirements.
Read more on Govindhtech.com
codeonedigest · 2 years ago
Kubernetes Cloud Controller Manager Tutorial for Beginners
Hi, a new #video on #kubernetes #cloud #controller #manager is published on #codeonedigest #youtube channel. Learn kubernetes #controllermanager #apiserver #kubectl #docker #proxyserver #programming #coding with #codeonedigest #kubernetescontrollermanag
Kubernetes is a popular open-source platform for container orchestration. Kubernetes follows a client-server architecture, and a Kubernetes cluster consists of one master node with a set of worker nodes. The Cloud Controller Manager is part of the master node. Let's understand the key components of the master node. etcd is a configuration database that stores configuration data for the worker nodes. API Server to…
swarnalata31techiio · 3 years ago
What is Kubernetes?
Kubernetes (aka "Kube" or k8s) is an open-source container orchestration platform written in Go. It was initially developed by Google in 2014 but is currently maintained by the Cloud Native Computing Foundation (CNCF). According to surveys, Kubernetes usage share has grown from 58% in 2014 to 83% in 2021, being by far the most popular of the orchestration technologies. Leading public cloud providers like Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud, and Microsoft Azure include managed Kubernetes services in their packages.
What is Nomad?
Nomad is HashiCorps' answer to developers looking for a powerful yet flexible platform for application deployment or container orchestration. Heralded as simple to run and maintain, Nomad is cloud-agnostic and designed to natively handle multi-datacenter and multi-region deployments with a high scalability potential. It is referred to as "Kubernetes without the complexity," but it's making a name for itself on its own merit.
Nomad vs Kubernetes: how to choose?
Kubernetes is a collection of components that work together, integrated into one core unit. It is designed to deploy, manage, and scale application containers across clusters of hosts, much like an operating system for cloud-native applications.
Nomad starts out as a cluster manager and task scheduler, but it can be combined with other tools, such as Consul, to extend its capabilities. Its flexibility in adapting to different workloads makes Nomad very appealing to medium-sized organizations with fewer hardware and staffing resources. It is simpler to get started with and easier to maintain, but it lacks community support.
However, you don't have to choose between Kubernetes and Nomad.
Nomad AND Kubernetes
Both platforms can work together, complementing each other: Kubernetes is used by global companies and is offered as a service by Google Cloud Platform, Azure, and AWS, the three most prominent cloud providers, because it is recognized as a powerful container orchestration tool with cutting edge features. But Nomad's agility makes it perfect for maintenance and core scheduling purposes.
Here's a head to head comparison:
Kubernetes:
Complexity: More complex, but provides a higher level of control
Community: Superior community, providing tools, resources, and support
Costs: Potentially higher costs due to larger teams and a more demanding architecture
Workload support: Focused on Linux containers
Openness: Community supported
Nomad:
Complexity: Easier to start with, but more immature
Community: Lacks a significant community, with the consequent lack of resources
Costs: Requires smaller teams, fewer servers, and is less time-consuming
Workload support: Supports virtualized, containerized, and standalone applications (Java, Windows apps, and even binaries)
Openness: Closely tied to HashiCorp's products and development
cladeymoore · 5 years ago
Text
Container technologies at Coinbase
Why Kubernetes is not part of our stack
By Drew Rothstein, Director of Engineering
TLDR: Container orchestration platforms are complex and amazing technologies, helping some businesses and teams solve a whole suite of problems. What’s commonly overlooked however, is that container technologies also create a large set of challenges that must be overcome to prevent failures.
This post is adapted from an internal blog post as I haven’t seen many write-ups like this externally available. Minimal redaction has been done and images have been added to provide more flare. If you are interested in working on some of what we discuss below — we are actively hiring on our Infrastructure team.
History
Before jumping into the current day, it is important to understand the technologies that led us here.
1980s: chroot
1990s: jail
2000s (early): jail > FreeBSD
2000s (mid): cgroups, 2.6.24
2000s (late): LXC (Linux Containers), 2.6.4
2010s (early): Docker
2010s (late): Kubernetes
There is a more detailed history in Chapter 7 of Enterprise Docker if interested.
Without containers as we know them today, let’s go back ~10yrs. At this time we did not have/use docker, rkt, or any other mainstream containerized wrapper/service. Most large-scale companies built in-house systems to bundle their applications to go from source code to deployment in production. What engineers ran on their machine was usually not what was running in production or if it was, it was lovingly one-off built/packaged in a manner that was likely very custom and complex.
In this world of an in-house system to bundle and deploy applications there was a large operations team, usually in a platform or infrastructure organization that would manage the bundle/building processes, deployment, and post-deployment. These roles were generally highly operational involving troubleshooting bad hosts, diagnosing specific dependency issues on OS patches/upgrades, etc. Post-deployment had minimal to no automated orchestration and involved capacity planning, ordering more servers, getting them racked/installed, and somehow getting software updated on them.
If you were lucky, there was some regular process to build a “golden image” (think: Packer by Hashicorp) that was well documented, potentially even codified, and run by a Continuous Integration system such as Hudson (previous to Jenkins {ref}). These images were somehow distributed to your systems either manually or automatically through some sort-of configuration management utilities and then started in some ordering, likely with parallel SSH or similar.
This past decade everything has changed. We went from gigantic monolithic applications to breaking down services into more discrete and less coupled parts. We went from having to build/own your own compute to having a managed or Public Cloud offering with a couple clicks and a credit card. We went from scaling applications vertically to re-architecting them to scale horizontally. All of this was happening at the same time that societal changes were also occurring: cell phones in every pocket, network speeds improving, network latencies dropping across the world, to doing everything online from booking your dog walker to commoditized video conferencing.
AWS’s offering in 2009 was quite limited. For perspective, it wasn’t until 2008 when AWS’s EC2 offering exited beta and began offering an SLA (ref). For reference, GCP didn’t launch a compute offering in GA until 2013 (ref).
Why do companies choose to containerize their applications?
Companies choose to containerize their applications to increase engineering output/developer productivity in a quick, safe, and reliable manner. Containerizing is a choice made vs. building images, although containers can sometimes be built into images, but that is out of scope (ref).
Containers enable engineers to develop, test, and run their applications locally in the same or similar manner that they will run in other environments (staging and production). Containers enable bundling of dependencies to be articulated and explicit vs. implied (the OS will always contain package $foo that my service depends on). Containers allow for more discrete service encapsulation and resource definition (using X CPUs and Y GB of Memory). Containers inherently enable you to think about scaling your application horizontally vs. vertically, resulting in more robust architectural decisions.
Some of these points could be argued in great detail. These are purposely bold and a bit over-extended to move the conversation forward as this isn’t a discussion of the pros/cons of containerization or service-ification (i.e. the breakdown of monolithic applications to a proliferation of more discrete services that run separately).
What about virtualization?
Virtualization is the concept of being able to run multiple containers on an OS virtualized system (ref). Containers can only see the devices/resources granted to it. On a managed compute platform such as AWS you are actually running below a Hypervisor (ref) which manages the VMs that your OS and resulting containers run within.
Simplified diagram
Virtualization enables the world of containers today. Without the ability to virtualize, hardware resources running multiple applications in containers wouldn’t be possible today.
What problem does a container orchestration platform (Mesos, Kubernetes, Docker Swarm) solve?
A container orchestration platform solves the following types of problems:
Managed/Standardized deployment tooling (deployment).
Scaling of applications based-on some defined heuristic (horizontal scaling).
Re-scheduling/Moving containers when failures occur (self-healing).
While some platforms may state that they have other features such as storage orchestration, secret/config. management, and automatic bin packing to name a few: the reality is that these generally do not work for larger scale installations without intense investments either in forking / customization or through integrations and separation.
For example, most folks that run large-scale container orchestration platforms cannot utilize their built-in secret or configuration management. These primitives are generally not meant, designed, or built for hundreds of engineers on tens of teams and generally do not include the necessary controls to be able to sanely manage, own, and operate their applications. It is extremely common for folks to separate their secret and config. management to a system that has stronger guarantees and controls (not to mention scaling).
Similarly for service discovery and load balancing it is quite common to separate this out and run an overlay or abstract control plane. It is quite common to deploy Istio to handle this for Kubernetes. Managing and running Istio is not a trivial task and many modern-day cluster outages are due to misconfiguration of this control plane/service mesh and a lack of understanding of the minute details of it.
What do we use as our container orchestration platform?
Our container orchestration platform is Odin + AWS ASGs (auto-scaling groups). When you click Deploy from Codeflow (our internal UI for deployments), Odin is invoked with an API call from Codeflow. Odin kicks off a step function and begins to deploy your application. New VMs are stood up in AWS and loaded into a new ASG, your software is fetched from various internal locations, a load balancer starts health-checking these new instances, and eventually traffic is cut over in a Blue/Green manner to the new hosts in the new ASG behind the load balancer.
Our container orchestration platform is extremely simple (on purpose). We enable the same key features of Kubernetes: A single Deploy + Rollback button in Codeflow, Scaling based-on some defined heuristic (we support custom AWS metrics or standard CPU metrics), and re-scheduling/moving of your containers if your VM dies/becomes unhealthy in your ASG.
To handle secrets and configuration management we have built a dynamic configuration service that provides libraries to all internal customers with a p95 of 6ms. It is backed by DynamoDB and serves 100s of thousands of requests per minute of synchronous and asynchronous methods types.
To handle service discovery and load balancing we utilize Route53 (DNS), ALBs (Application Load Balancers), and client-side load balancing for gRPC either natively or through Envoy. We expect to invest more here later in the year.
Why do we not run Kubernetes?
Running Kubernetes does not solve any customer (engineering) problems. Running Kubernetes would actually create a whole new set of problems.
We would need to build/staff a full-time Compute team. While we may do this anyway as we grow, this would be required immediately so that they could focus on building out tens of clusters (likely separate for each team/org), starting to scope/build the wrapping/glue tooling, starting to build out the abstract control plane/service mesh, etc.
Securing Kubernetes is not a trivial, easy, or well understood operation. To enable us to own/operate Kubernetes we would need the same tooling and controls that we have today with our entire platform (Odin, ASGs, Step Deployers — and everything they enforce). To build these same primitives providing the same level of safety that these provide today would be a substantial investment both by a (future) Compute team and our Security team.
Managed Kubernetes (EKS on AWS, GKE on Google) is very much in its infancy and doesn’t solve most of the challenges with owning/operating Kubernetes (if anything it makes them more difficult at this time). At AWS they are scaling their support/operations teams to run EKS and at Google it isn’t uncommon for them to have multi-hour outages with GKE (ref). You are trading off some operations issues and challenges to another operations team (and removing a lot of visibility).
Cluster upgrades and management require a much more operationally heavy focus than we have today. The only way to sanely run Kubernetes is by giving teams/orgs their own clusters (similar to giving them their own AWS accounts or GCP projects). Upgrading clusters and patching vulnerabilities is not a quick/easy task even with Istio and associated tooling. Generally you have to build/run a secondary cluster, failover all applications, and then fail back after an upgrade. This primitive is not built into any abstracts at this point in time. While this may exist for managed clusters (GKE) it doesn’t always work as you might expect and rolling back once started is generally not well handled.
Today, we do not carry this burden. We run on a hardened OS with minimal > no dependencies. Our AMI rollout is managed starting with development and then moving forward after weeks of testing. If we need to rollback we have the ability to do so with a trivial one-line change. On average we spend < 5hrs/month on anything even closely related to this area of concern.
Additional references on the complexities of owning/operating Kubernetes & Istio:
OpenStack (Kubernetes Issues At Scale 900 Minions)
OpenAI (Scaling Kubernetes to 2,500 Nodes)
Civis (Breaking Kubernetes: How We Broke and Fixed our K8s Cluster)
k8s.af
Securing Kubernetes
Let’s discuss some of the complexity of securing and running Kubernetes as a business that stores more than $8 billion in crypto assets.
Components
The basics of securing a Kubernetes cluster (ref) are well known/understood but once you dig into each of them the complexities start to unravel. Securing all of the system components (etcd, kubelet), the API server, and any abstracts/overlays (Istio) opens up a lot of surface to understand, test, and secure. Going deep into namespaces, seccomp, SELinux, cgroups, etc. is all required given the increased attack surface. Kubernetes is so large that it has its own CIS benchmark & InSpec suite (thankfully).
Vulnerabilities
A small list of references that provide a good starting point for researching:
CVE-2019–5736 (8.6 High): Allows attackers to overwrite the host runc binary (and consequently obtain host root access).
CVE-2019–11246 (6.5 Medium): If the tar binary in the container is malicious, it could run any code and output unexpected, malicious results.
CVE-2019–11253 (7.5 High): Allows authorized users to send malicious YAML or JSON payloads, causing the API server to consume excessive CPU or memory, potentially crashing and becoming unavailable.
Overview
Kubernetes is a powerful PaaS as a kit with lots of security-relevant options to support the variety of deployment scenarios that it can be used in. It’s incredibly valuable from a security perspective when it is the universal consensus choice for PaaS, because most of those options can be abstracted away, and secondary systems must be put into place to support its use.
Kubernetes is fundamentally designed for workload orchestration — Trust is not the differentiator or purpose behind the encapsulation or pieces in Kubernetes; The multi-tenancy purpose is for bin packing and not in support of furthering permission boundaries. It provides several layers at which you can choose to place mild boundaries of varying enforceability. Some of these boundaries are built-in, while others are simply integration points for other tools to help manage. Here are some of the primitives Kube provides (and doesn’t provide) to isolate workloads.
Control Plane (AWS Account / GCP Project)
Kubernetes clusters operate within the services and networks that are provided to them, and naturally have some interaction with the AWS/GCP control plane such as provisioning load balancers for ingress, accessing secrets stored in KMS, etc. Teams grow and expand to have separate accounts, projects, and further isolation over time. A separate AWS account or GCP project is the primary primitive by which you can achieve total IAM segmentation.
A Kubernetes cluster, on the other hand, needs to operate within one AWS account (even if federated with other clusters elsewhere). This limits segmentation options and flexibility. We could provision a cluster per team or service, but this takes away many of the efficiency gains that Kubernetes brings, and brings on new management problems, like meta-orchestrating all of those clusters.
Clusters & Nodes & Pods & Containers (Oh my!)
Clusters
Cluster master (API) servers are a secondary control plane (besides the AWS one) that we need to secure as well. Service accounts and access scopes, which containers can assume to access resources both within and outside the cluster, are just as complex as AWS’ IAM is, and need to be mapped against one another strictly so that a breakout does not affect the, e.g., AWS control plane.
Nodes
The operating system of the underlying nodes must be maintained much as we do today. In fact, our OS is very similar to the base OS Google uses for GKE. While we wouldn’t necessarily have to change anything to move our OS to Kubernetes, we wouldn’t gain anything either.
Pods
Creation of pods in the cluster, and the rules about what standards they have to meet to be created, are accomplished through PodSecurityPolicy, which operates similarly to Salus and our consensus management tooling today. We would have to invest in significant integration work, and additional open source dependencies, to cleanly integrate them.
Pods are segmented from each other through networking policies, much as we do today with Security Groups and/or our internal service framework. But in the world of Kubernetes, identity, authentication, and authorization of pods to communicate with each other involve a number of supporting technologies, such as SPIFFE and SPIRE for identity format and attestation below the node level, Envoy for authorization gating, Istio for authN and Z orchestration, and OPA for authorization policy. Each of these is a significant effort to standardize and adopt.
Containers
Containers are not security boundaries, they’re resource boundaries. In order to define security boundaries around containers, you need to delve into custom kernel namespaces, syscall filtering, mandatory access control frameworks, and/or vm-based isolation technologies designed for containers like gVisor.
Currently, we have not invested much in this area because we do not operate in a multi-tenant fashion. If we moved to a multi-tenant model, we would have to make significant investments here almost immediately so that we could trust that pods/containers are only running on the same nodes as similarly classified pods, and that they are not interfering with one another with host/vm isolation technologies.
When will we run Kubernetes and is Kubernetes in our future?
If/when there are significant use cases for a more advanced container orchestration platform we will likely first visit the problem statement. If this is something that can easily be added to our existing platform: we will likely visit that first and explore/scope from there. If we deem it unreasonable to extend/add to our platform then we would visit all potential options — not just Kubernetes. It is much more likely that we would visit AWS’s managed offerings first such as Fargate and ECS before diving in to Kubernetes based on the above.
If/when there are significant gains to be had by our engineers by offering Kubernetes (or any other container orchestration platform) we will explore offering them. At this time there is not a significant gain to be had by offering Kubernetes. This may change if/when Kubernetes offers enough new features that we haven’t kept up, they have paid down their technical debt (or we have not), or our customers require new functionality that they are able to offer and we are not in the foreseeable future. If the barrier to entry of our current platform were to significantly change and that were now a clear differentiator, then we would also explore offering a different platform.
If/when we hit the limits of our existing platform, are too deeply burdened or foresee being too deeply burdened in our platform due to missing features that our customers need, extending our platform is becoming too onerous, or we are having too many outages that are violating our SLA, than we would likely revisit a different container orchestration platform.
If/when we lose support of a major upstream dependency such as AWS or ASGs, we would then look into other options.
These are a few of the reasons we might choose to look into another container orchestration platform. At this time we have no plans to build/own/operate Kubernetes.
Doesn’t Kubernetes solve various problems such as re-balancing/auto-healing, auto-scaling, and service discovery? How do we solve these today?
Kubernetes at a smaller scale solves most of these problems without a lot of fuss. At a larger scale it requires a lot more thought, glue code, and putting wrappers / safe guards on pretty much everything to make it work safely and reliably. Generally, as mentioned above, folks tend to add a Service Mesh such as Istio to enable more advanced features / requirements.
Today we solve:
Re-balancing/auto-healing with Odin and ASGs.
Service discovery with DNS and Envoy.
Kubernetes has Storage Orchestration and we don’t have that today, should we?
We have two major stateful applications at Coinbase today- blockchain nodes and the trading engines that could be potential customers of a feature such as storage orchestration. For the former (blockchain nodes) the usage of storage is fairly custom and we have built a custom deployer that gives them the features that they need. For the latter (trading engines), we embedded from the Reliability (SRE) team and provided support to a number of their specific challenges.
While having Storage Orchestration built-in to Kubernetes might have been a nice starting point for both blockchain nodes and the trading engines- a lot of the same issues we have with the underlying technology would still exist.
What is the future of a Container Orchestration Platform if not Kubernetes?
We will explore and migrate to a higher-level abstracted service for some applications. We will explore Fargate and ECS as contenders for this purpose. The current initial reasons would be utilization and cost improvements — both of which are not very customer focused. We may choose to wait until we have more customer focused reasons to implement.
Potential customer focused asks would be around deployment times, deployment patterns (beyond canaries), more complex service mesh needs than exist today, or specific improvements/ features that may be added to Fargate or ECS that building onto existing tooling is not possible/not reasonable. These are some of the potential customer focused asks that are possible but not known or realized at this time.
Ideally the move to a different underlying container technology would be fairly invisible as the tools to interact with them wouldn’t fundamentally change. The reality of moving to a different platform would likely uncover hidden or unknown expectations about the existing system. How you deploy and debug services in staging and production would still be abstracted, but there may be different features offered that do not exist today.
Do I/we hate Kubernetes? Does Kubernetes fail as a container platform?
No. It is a great tool despite its challenges. Kubernetes has moved our industry forward in an increasingly positive direction. With Kubernetes well into a v1, the development of Knative, Fargate, and Cloud Run are increasingly raising the level of abstraction and solving the underlying challenges with managing Kubernetes. The future is bright. As these underlying challenges are solved, many existing concerns will likely be alleviated in the future.
If you are interested in working on our next generation of container technologies, our dynamic configuration service or other technologies mentioned above — we are actively hiring on our Infrastructure team. Please reach out and we would love to chat with you.
This website contains links to third-party websites or other content for information purposes only (“Third-Party Sites”). The Third-Party Sites are not under the control of Coinbase, Inc., and its affiliates (“Coinbase”), and Coinbase is not responsible for the content of any Third-Party Site, including without limitation any link contained in a Third-Party Site, or any changes or updates to a Third-Party Site. Coinbase is not responsible for webcasting or any other form of transmission received from any Third-Party Site. Coinbase is providing these links to you only as a convenience, and the inclusion of any link does not imply endorsement, approval or recommendation by Coinbase of the site or any association with its operators.
Unless otherwise noted, all images provided herein are by Coinbase.
Container technologies at Coinbase was originally published in The Coinbase Blog on Medium, where people are continuing the conversation by highlighting and responding to this story.
coredgeblogs · 1 month ago
Text
Serverless vs. Containers: Which Cloud Computing Model Should You Use?
In today’s cloud-driven world, businesses are building and deploying applications faster than ever before. Two of the most popular technologies empowering this transformation are Serverless computing and Containers. While both offer flexibility, scalability, and efficiency, they serve different purposes and excel in different scenarios.
If you're wondering whether to choose Serverless or Containers for your next project, this blog will break down the pros, cons, and use cases—helping you make an informed, strategic decision.
What Is Serverless Computing?
Serverless computing is a cloud-native execution model where cloud providers manage the infrastructure, provisioning, and scaling automatically. Developers simply upload their code as functions and define triggers, while the cloud handles the rest.
 Key Features of Serverless:
No infrastructure management
Event-driven architecture
Automatic scaling
Pay-per-execution pricing model
Popular Platforms:
AWS Lambda
Google Cloud Functions
Azure Functions
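Putting those pieces together, here is a minimal, hedged sketch of how functions and triggers are typically declared with the Serverless Framework on AWS (the service name, bucket, and handler below are hypothetical, and details vary by framework version and provider):

service: image-thumbnailer            # hypothetical service name
provider:
  name: aws
  runtime: python3.11
functions:
  resize:
    handler: handler.resize           # function code lives in handler.py
    events:
      - s3:
          bucket: user-uploads        # hypothetical bucket; the function runs on every new object
          event: s3:ObjectCreated:*

The provider creates, scales, and bills the function per invocation; no servers are declared anywhere in the file.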
What Are Containers?
Containers package an application along with its dependencies and libraries into a single unit. This ensures consistent performance across environments and supports microservices architecture.
Containers are orchestrated using tools like Kubernetes or Docker Swarm to ensure availability, scalability, and automation.
Key Features of Containers:
Full control over runtime and OS
Environment consistency
Portability across platforms
Ideal for complex or long-running applications
Popular Tools:
Docker
Kubernetes
Podman
Serverless vs. Containers: Head-to-Head Comparison
Feature             | Serverless                          | Containers
Use Case            | Event-driven, short-lived functions | Complex, long-running applications
Scalability         | Auto-scales instantly               | Requires orchestration (e.g., Kubernetes)
Startup Time        | Cold starts possible                | Faster if container is pre-warmed
Pricing Model       | Pay-per-use (per invocation)        | Pay-per-resource (CPU/RAM)
Management          | Fully managed by provider           | Requires devops team or automation setup
Vendor Lock-In      | High (platform-specific)            | Low (containers run anywhere)
Runtime Flexibility | Limited runtimes supported          | Any language, any framework
When to Use Serverless
Best For:
Lightweight APIs
Scheduled jobs (e.g., cron)
Real-time processing (e.g., image uploads, IoT)
Backend logic in JAMstack websites
Advantages:
Faster time-to-market
Minimal ops overhead
Highly cost-effective for sporadic workloads
Simplifies event-driven architecture
Limitations:
Cold start latency
Limited execution time (e.g., 15 mins on AWS Lambda)
Difficult for complex or stateful workflows
When to Use Containers
Best For:
Enterprise-grade microservices
Stateful applications
Applications requiring custom runtimes
Complex deployments and APIs
Advantages:
Full control over runtime and configuration
Seamless portability across environments
Supports any tech stack
Easier integration with CI/CD pipelines
Limitations:
Requires container orchestration
More complex infrastructure setup
Can be costlier if not optimized
Can You Use Both?
Yes—and you probably should.
Many modern cloud-native architectures combine containers and serverless functions for optimal results.
Example Hybrid Architecture:
Use Containers (via Kubernetes) for core services.
Use Serverless for auxiliary tasks like:
Sending emails
Processing webhook events
Triggering CI/CD jobs
Resizing images
This hybrid model allows teams to benefit from the control of containers and the agility of serverless.
Serverless vs. Containers: How to Choose
Business Need                | Recommendation
Rapid MVP or prototype       | Serverless
Full-featured app backend    | Containers
Low-traffic event-driven app | Serverless
CPU/GPU-intensive tasks      | Containers
Scheduled background jobs    | Serverless
Scalable enterprise service  | Containers (w/ Kubernetes)
Final Thoughts
Choosing between Serverless and Containers is not about which is better—it’s about choosing the right tool for the job.
Go Serverless when you need speed, simplicity, and cost-efficiency for lightweight or event-driven tasks.
Go with Containers when you need flexibility, full control, and consistency across development, staging, and production.
Both technologies are essential pillars of modern cloud computing. The key is understanding their strengths and limitations—and using them together when it makes sense. 
faizrashis1995 · 5 years ago
Text
Kubernetes vs. Docker: What Does It Really Mean?
“Kubernetes vs. Docker” is a phrase that you hear more and more these days as Kubernetes becomes ever more popular as a container orchestration solution.
 However, “Kubernetes vs. Docker” is also a somewhat misleading phrase. When you break it down, these words don’t mean what many people intend them to mean, because Docker and Kubernetes aren’t direct competitors. Docker is a containerization platform, and Kubernetes is a container orchestrator for container platforms like Docker.
 This post aims to clear up some common confusion surrounding Kubernetes and Docker, and explain what people really mean when they talk about “Docker vs. Kubernetes.”
The Rise of Containerization and Docker
It is impossible to talk about Docker without first exploring containers. Containers solve a critical issue in the life of application development. When developers are writing code, they are working in their own local development environment. Problems arise when they are ready to move that code to production: the code that worked perfectly on their machine doesn't work in production. The reasons for this are varied: a different operating system, different dependencies, different libraries.
 Containers solved this critical issue of portability by allowing you to separate code from the underlying infrastructure it runs on. Developers can package up their application, including all of the bins and libraries it needs to run correctly, into a small container image. In production, that container can be run on any computer that has a containerization platform.
  Advantages of Containers
In addition to solving the major challenge of portability, containers and container platforms provide many advantages over traditional virtualization.
Containers have an extremely small footprint. A container just needs its application and a definition of all of the bins and libraries it requires to run. Unlike VMs, which each have a complete copy of a guest operating system, container isolation is done at the kernel level without the need for a guest operating system. In addition, libraries can be shared across containers, eliminating the need to have 10 copies of the same library on a server and further saving space. If I have three apps all running Node and Express, I don't have to have three instances of Node and Express; those apps can share those bins and libraries. Encapsulating applications in self-contained environments allows for quicker deployments, closer parity between development environments, and far greater scalability.
 What is Docker?
Docker is currently the most popular container platform. Docker appeared on the market at the right time, and was open source from the beginning, which likely led to its current market domination. 30% of enterprises currently use Docker in their AWS environment and that number continues to grow.
When most people talk about Docker they are talking about Docker Engine, the runtime that allows you to build and run containers. But before you can run a Docker container, it must be built, starting with a Dockerfile. The Dockerfile defines everything needed to run the image, including the OS, network specifications, and file locations. From the Dockerfile you build a Docker image, the portable, static component that gets run on Docker Engine. And if you don't want to start from scratch, Docker even has a service called Docker Hub, where you can store and share images.
 The Need for Orchestration Systems
While Docker provided an open standard for packaging and distributing containerized applications, there arose a new problem. How would all of these containers be coordinated and scheduled? How do you seamlessly upgrade an application without any interruption of service? How do you monitor the health of an application, know when something goes wrong and seamlessly restart it?
  Solutions for orchestrating containers soon emerged. Kubernetes, Mesos, and Docker Swarm are some of the more popular options for providing an abstraction to make a cluster of machines behave like one big machine, which is vital in a large-scale environment.
 When most people talk about “Kubernetes vs. Docker,” what they really mean is “Kubernetes vs. Docker Swarm.” The latter is Docker’s own native clustering solution for Docker containers, which has the advantage of being tightly integrated into the ecosystem of Docker, and uses its own API. Like most schedulers, Docker Swarm provides a way to administer a large number of containers spread across clusters of servers. Its filtering and scheduling system enables the selection of optimal nodes in a cluster to deploy containers.
 Kubernetes is the container orchestrator that was developed at Google which has been donated to the CNCF and is now open source. It has the advantage of leveraging Google’s years of expertise in container management. It is a comprehensive system for automating deployment, scheduling and scaling of containerized applications, and supports many containerization tools such as Docker.
 For now, Kubernetes is the market leader and the standardized means of orchestrating containers and deploying distributed applications. Kubernetes can be run on a public cloud service or on-premises, is highly modular, open source, and has a vibrant community. Companies of all sizes are investing into it, and many cloud providers offer Kubernetes as a service. Sumo Logic provides support for all orchestration technologies, including Kubernetes-powered applications.
 How does Kubernetes work?
It is easy to get lost in the details of Kubernetes, but at the end of the day, what Kubernetes is doing is pretty simple. Cheryl Hung of the CNCF describes Kubernetes as a control loop. Declare how you want your system to look (3 copies of container image a and 2 copies of container image b) and Kubernetes makes that happen. Kubernetes compares the desired state to the actual state, and if they aren’t the same, it takes steps to correct it.
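For example, a minimal Deployment sketch (the image reference is hypothetical) declares that desired state as three replicas, and the control loop keeps creating or removing Pods until three are actually running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: image-a
spec:
  replicas: 3                       # desired state: always three copies
  selector:
    matchLabels:
      app: image-a
  template:
    metadata:
      labels:
        app: image-a
    spec:
      containers:
      - name: image-a
        image: registry.example.com/image-a:1.0   # hypothetical image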
  Kubernetes architecture and components
Kubernetes is made up of many components that do not know or care about each other. The components all talk to each other through the API server. Each of these components performs its own function and then exposes metrics that we can collect for monitoring later on. We can break the components down into three main parts.
 The Control Plane - The Master.
Nodes - Where pods get scheduled.
Pods - Hold containers.
 The Control Plane - The Master Node
The control plane is the orchestrator. Kubernetes is an orchestration platform, and the control plane facilitates that orchestration through several components: etcd for storage, the API server for communication between components, the scheduler, which decides which nodes pods should run on, and the controller manager, which checks the current state against the desired state.
 Nodes
Nodes make up the collective compute power of the Kubernetes cluster. This is where containers actually get deployed to run. Nodes are the physical infrastructure that your application runs on, the servers or VMs in your environment.
 Pods
Pods are the lowest level resource in the Kubernetes cluster. A pod is made up of one or more containers, but most commonly just a single container. When defining your workloads, resource requests and limits are set on the pod's containers, specifying the CPU and memory they need to run. The scheduler uses these definitions to decide which nodes to place the pods on. If there is more than one container in a pod, estimating the required resources becomes harder, which can lead to poor placement decisions.
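As a rough sketch of what that looks like in practice (the image and the numbers are illustrative only), requests tell the scheduler what to reserve when placing the pod, while limits are the ceiling enforced at runtime:

apiVersion: v1
kind: Pod
metadata:
  name: api-pod
spec:
  containers:
  - name: api
    image: api:1.2                  # hypothetical image
    resources:
      requests:                     # used by the scheduler for placement decisions
        cpu: "250m"
        memory: "256Mi"
      limits:                       # hard ceiling enforced on the node
        cpu: "500m"
        memory: "512Mi"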
 How Does Kubernetes Relate to Docker?
Kubernetes and Docker are both comprehensive de-facto solutions to intelligently manage containerized applications and provide powerful capabilities, and from this some confusion has emerged. “Kubernetes” is now sometimes used as a shorthand for an entire container environment based on Kubernetes. In reality, they are not directly comparable, have different roots, and solve for different things.
 Docker is a platform and tool for building, distributing, and running Docker containers. It offers its own native clustering tool that can be used to orchestrate and schedule containers on machine clusters. Kubernetes is a container orchestration system for Docker containers that is more extensive than Docker Swarm and is meant to coordinate clusters of nodes at scale in production in an efficient manner. It works around the concept of pods, which are scheduling units (and can contain one or more containers) in the Kubernetes ecosystem, and they are distributed among nodes to provide high availability. One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution and is meant to include custom plugins.
 Kubernetes and Docker are both fundamentally different technologies but they work very well together, and both facilitate the management and deployment of containers in a distributed architecture.
 Can you use Docker without Kubernetes?
Docker is commonly used without Kubernetes; in fact, this is the norm. While Kubernetes offers many benefits, it is notoriously complex, and there are many scenarios where the overhead of spinning up Kubernetes is unnecessary or unwanted.
 In development environments, it is common to use Docker without a container orchestrator like Kubernetes. In production environments, the benefits of using a container orchestrator often do not outweigh the cost of added complexity. Additionally, many public cloud services like AWS, GCP, and Azure provide some orchestration capabilities, making the tradeoff of the added complexity unnecessary. [Source: https://www.sumologic.com/blog/kubernetes-vs-docker/]
coredgeblogs · 1 month ago
Text
Pods in Kubernetes Explained: The Smallest Deployable Unit Demystified
As the foundation of Kubernetes architecture, Pods play a critical role in running containerized applications efficiently and reliably. If you're working with Kubernetes for container orchestration, understanding what a Pod is—and how it functions—is essential for mastering deployment, scaling, and management of modern microservices.
In this article, we’ll break down what a Kubernetes Pod is, how it works, why it's a fundamental concept, and how to use it effectively in real-world scenarios.
What Is a Pod in Kubernetes?
A Pod is the smallest deployable unit in Kubernetes. It encapsulates one or more containers, along with shared resources such as storage volumes, IP addresses, and configuration information.
Unlike traditional virtual machines or even standalone containers, Pods are designed to run tightly coupled container processes that must share resources and coordinate their execution closely.
Key Characteristics of Kubernetes Pods:
Each Pod has a unique IP address within the cluster.
Containers in a Pod share the same network namespace and storage volumes.
Pods are ephemeral—they can be created, destroyed, and rescheduled dynamically by Kubernetes.
Why Use Pods Instead of Individual Containers?
You might ask: why not just deploy containers directly?
Here’s why Kubernetes Pods are a better abstraction:
Grouping Logic: When multiple containers need to work together—such as a main app and a logging sidecar—they should be deployed together within a Pod.
Shared Lifecycle: Containers in a Pod start, stop, and restart together.
Simplified Networking: All containers in a Pod communicate via localhost, avoiding inter-container networking overhead.
This makes Pods ideal for implementing design patterns like sidecar containers, ambassador containers, and adapter containers.
Pod Architecture: What’s Inside a Pod?
A Pod includes:
One or More Containers: Typically Docker or containerd-based.
Storage Volumes: Shared data that persists across container restarts.
Network: Shared IP and port space, allowing containers to talk over localhost.
Metadata: Labels, annotations, and resource definitions.
Here’s an example YAML for a single-container Pod:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp:latest
    ports:
    - containerPort: 80
Pod Lifecycle Explained
Understanding the Pod lifecycle is essential for effective Kubernetes deployment and troubleshooting.
Pod phases include:
Pending: The Pod is accepted but not yet running.
Running: All containers are running as expected.
Succeeded: All containers have terminated successfully.
Failed: At least one container has terminated with an error.
Unknown: The Pod state can't be determined due to communication issues.
Kubernetes also uses Probes (readiness and liveness) to monitor and manage Pod health, allowing for automated restarts and intelligent traffic routing.
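As a hedged illustration (the endpoints and timings below are placeholders, not a recommendation), probes are declared per container in the Pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp:latest
    livenessProbe:                  # restart the container if this check keeps failing
      httpGet:
        path: /healthz
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:                 # only send traffic once this check passes
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 5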
Single vs Multi-Container Pods
While most Pods run a single container, Kubernetes supports multi-container Pods, which are useful when containers need to:
Share local storage.
Communicate via localhost.
Operate in a tightly coupled manner (e.g., a log shipper running alongside an app).
Example use cases:
Sidecar pattern for logging or proxying.
Init containers for pre-start logic.
Adapter containers for API translation.
Multi-container Pods should be used sparingly and only when there’s a strong operational or architectural reason.
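A minimal sidecar sketch along those lines (the log-shipper image and paths are hypothetical) shares an emptyDir volume between the application container and the shipper:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-logs
spec:
  volumes:
  - name: app-logs
    emptyDir: {}                    # shared scratch space that lives as long as the Pod
  containers:
  - name: myapp-container
    image: myapp:latest
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp     # the app writes its logs here
  - name: log-shipper               # sidecar: reads the same files and forwards them
    image: log-shipper:latest       # hypothetical image
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/myapp
      readOnly: true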
How Pods Fit into the Kubernetes Ecosystem
Pods are not deployed directly in most production environments. Instead, they're managed by higher-level Kubernetes objects like:
Deployments: For scalable, self-healing stateless apps.
StatefulSets: For stateful workloads like databases.
DaemonSets: For deploying a Pod to every node (e.g., logging agents).
Jobs and CronJobs: For batch or scheduled tasks.
These controllers manage Pod scheduling, replication, and failure recovery, simplifying operations and enabling Kubernetes auto-scaling and rolling updates.
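For instance, a minimal Deployment sketch (reusing the hypothetical myapp image from the earlier Pod example) wraps the same container definition in a Pod template and keeps a chosen number of replicas running:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2                       # the controller recreates Pods to maintain this count
  selector:
    matchLabels:
      app: myapp
  template:                         # the Pod spec, now managed by a controller
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp-container
        image: myapp:latest
        ports:
        - containerPort: 80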
Best Practices for Using Pods in Kubernetes
Use Labels Wisely: For organizing and selecting Pods via Services or Controllers.
Avoid Direct Pod Management: Always use Deployments or other controllers for production workloads.
Keep Pods Stateless: Use persistent storage or cloud-native databases when state is required.
Monitor Pod Health: Set up liveness and readiness probes.
Limit Resource Usage: Define resource requests and limits to avoid node overcommitment.
Final Thoughts
Kubernetes Pods are more than just containers—they are the fundamental building blocks of Kubernetes cluster deployments. Whether you're running a small microservice or scaling to thousands of containers, understanding how Pods work is essential for architecting reliable, scalable, and efficient applications in a Kubernetes-native environment.
By mastering Pods, you’re well on your way to leveraging the full power of Kubernetes container orchestration.
swarnalata31techiio · 3 years ago
Text
Nomad vs. Kubernetes: Comparison of Container Orchestration tools
What is Kubernetes?
Kubernetes (aka "Kube" or k8s) is an open-source container orchestration platform written in Go. It was initially developed by Google in 2014 but is currently maintained by the Cloud Native Computing Foundation (CNCF). According to surveys, Kubernetes usage share has grown from 58% in 2014 to 83% in 2021, being by far the most popular of the orchestration technologies. Leading public cloud providers like Amazon Web Services (AWS), Google Cloud Platform, IBM Cloud, and Microsoft Azure include managed Kubernetes services in their packages.
What is Nomad?
Nomad is HashiCorps' answer to developers looking for a powerful yet flexible platform for application deployment or container orchestration. Heralded as simple to run and maintain, Nomad is cloud-agnostic and designed to natively handle multi-datacenter and multi-region deployments with a high scalability potential. It is referred to as "Kubernetes without the complexity," but it's making a name for itself on its own merit.
Nomad VS Kubernetes
Both platforms can work together, complementing each other. Kubernetes is used by global companies and is offered as a service by Google Cloud Platform, Azure, and AWS, the three most prominent cloud providers, because it is recognized as a powerful container orchestration tool with cutting-edge features. Nomad's agility, meanwhile, makes it a good fit for maintenance and core scheduling purposes. Here's a head-to-head comparison:
Kubernetes:
Complexity: More complex, but provides a higher level of control
Community: Superior community, providing tools, resources, and support
Costs: Potentially higher costs due to larger teams and a more demanding architecture
Workload support: Focused on Linux containers
Openness: Community supported
Nomad:
Complexity: Easier to start with, but more immature
Community: Lacks a significant community, with a consequent lack of resources
Costs: Requires smaller teams and fewer servers, and is less time consuming
Workload support: Supports virtualized, containerized, and standalone applications (Java, Windows apps, and even binaries)
Openness: Closely tied to HashiCorp's products and development.
faizrashis1995 · 5 years ago
Text
Ansible Vs. Kubernetes
What Is Ansible?
The best definition comes, not surprisingly, from the software’s developers: “Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs.”
 Ansible is an open-source software solution that doesn’t depend on the typical client-server model. Ansible’s designers tout it as the only automation engine that automates everything in the whole application lifecycle as well as the continuous delivery pipeline. Difficult and time-consuming processes get changed into repeatable playbooks, which increases production speed while bringing a much-needed element of simplicity.
 Ansible’s name comes from a science fiction story, used to describe an instantaneous hyperspace communications system.
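Those repeatable playbooks are plain YAML files. As a minimal, hedged sketch (the inventory group and package are illustrative only), a playbook that installs and starts nginx on a group of hosts might look like this:

---
- name: Install and start nginx
  hosts: webservers                 # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is running and enabled
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true

Because each task describes a desired state rather than a raw command, re-running the playbook is generally safe: tasks that are already satisfied simply report no change.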
 Ansible requires a Linux/Unix host (e.g., Debian, Red Hat Enterprise Linux, CentOS, macOS, BSD) as its control machine. Also, Ansible uses the Python programming language, versions 2.7 or 3.5. Ansible runs on several cloud platforms, including:
 Amazon Web Services (AWS)
Atomic
CenturyLink
Cloudscale
CloudStack
DigitalOcean
Dimension Data
Docker
Google Cloud Platform
KVM
Linode
LXC
LXD
Microsoft Azure
OpenStack
OVH
oVirt
Packet
Profitbricks
PubNub
Rackspace
Scaleway
SmartOS
SoftLayer
Univention
VMware
Webfaction
XenServer.
Ansible’s features include:
 Simplicity
You don’t need any unique coding skills to use Ansible’s playbooks. Ansible is easy to set up. Just run the shell script once, and you’re good to go.
Power
Ansible handles highly complex IT workflows.
Zero Cost
Ansible is a free, open-source software solution.
Flexibility
You can orchestrate the entire application environment no matter where you want to deploy it. Since it has hundreds of modules available, you can customize Ansible to fit your unique needs.
Easy to Use Playbooks
Most of the playbooks are written in YAML, making them easy to edit and read.
Agentless Installation
You can set Ansible up in minutes using OpenSSH. You also don’t need to set up agents on remote servers.
Efficiency
Ansible doesn’t require you to install any extra software, so there are more resources to dedicate to your other applications.
What Is Kubernetes?
Kubernetes is a container as a service (CaaS) project released by Google. According to a blurb on the developer’s website, “Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.” The automation aspect improves the development processes of the overall applications.
 “Kubernetes” is a Greek word meaning a pilot or helmsman.
 Kubernetes uses the Go programming language. Like Ansible, it runs on many different cloud platforms, including:
 AWS
Azure
CloudStack
GCE
OpenStack
OVirt
Photon
VSphere
IBM Cloud Kubernetes Service
Baidu Cloud Container Engine.
Kubernetes’ features include:
 Container Balancing
The Kubernetes platform calculates the best location for a given container without requiring any user interaction.
Flexibility
Because Kubernetes is an open-source cloud-based tool, it’s portable and offers multiple environment flexibility, meaning it can run on public cloud systems, on-premises servers, or hybrid clouds.
Zero Cost
Kubernetes is a free, open-source platform.
Process Automation
Kubernetes can automatically decide which server will host any given container.
Self-Monitoring
Kubernetes stays vigilant, maintaining constant checking of the servers’ and containers’ health.
Scalability
It provides horizontal scaling, allowing companies and organizations to quickly scale-out storage, fitting their workload needs.
Storage Orchestration
Kubernetes integrates with most storage systems; for example, you can combine it with an AWS Elastic File System.
Do These Tools Have Any Drawbacks or Disadvantages?
Of course, they do! No tool is flawless, including Ansible and Kubernetes. Each has its share of obstacles and difficulties.
 Ansible’s user interface leaves a lot to be desired. The UI executes only 85 percent of the commands that are usually run on the command line. While 85 percent sounds like a good figure, a decent UI gives you nothing less than 100 percent.
Furthermore, Ansible has no notion of state; it just runs tasks sequentially until it's done or encounters an error.
 Also, Ansible’s Windows support still has a lot of catching up to do. You still need a Linux control machine to manage Windows hosts.
 Finally, since it’s still a relative newcomer to the DevOps scene, Ansible has less experience in delivering support to enterprise-level users and the smallest user/developer community. That latter deficiency makes it tougher for users to perform troubleshooting tasks.
 Kubernetes isn’t perfect either. It reportedly has a steep learning curve. Even the most experienced DevOps professionals encounter difficulty trying to figure out the platform’s ins and outs. Kubernetes users should be familiar with the entire cloud-native ecosystem as a whole.
 Kubernetes is challenging to install and configure manually since you will need to configure security and multi-host networking; attach storage; and enable monitoring, auditing, and logging.
 Also, Kubernetes doesn’t have a default high availability (HA) mode, so you have to configure your HA to create a fault-tolerant cluster manually.
 If these disadvantages appear intimidating, then you can always hire some Kubernetes experts to round out your team and handle these challenges. Hiring more personnel, of course, leads to the final disadvantage: spending additional money to recruit some dedicated Kubernetes talent. While this is a good thing for professionals who are looking for work in the DevOps field, it’s a pain for companies that are trying to adhere to a fixed budget.
 How Are These Two Tools Alike?
Kubernetes and Ansible don’t have much in common. Both of them are cost-effective since they’re both open-source software. Additionally, they’re both touted as being powerful yet easy to use. Still, there’s little chance of confusing one for the other!
 What Is the Major Ansible vs Kubernetes Differences?
The differences between these two products are profound. Ansible is an IT automation tool that deploys software, configures systems, and organizes more complex IT functions such as rolling updates or continuous deployments. On the other hand, Kubernetes is a system designed to orchestrate Docker containers. It manages workloads and uses nodes to handle scheduling to make sure that their condition matches the users’ expectations.
 In other words, Ansible deploys changes to hosts, while Kubernetes manages containers and keeps them working properly.
Ansible is an excellent tool for front-end developers, particularly in situations where some programming is required. Kubernetes is best suited to developing larger apps.
 Based on the properties of both tools, it’s like comparing apples to oranges. Granted, they’re both DevOps tools that handle configuration management, but the purposes for which they’re used have minimal overlap.
 Just How Popular Are These tools?
Each solution has its share of adherents. AppDirect, Bose, Comcast, eBay, Google, IBM, Nav, Nokia, Philips, Slack, Spotify, Unicom, and many more use Kubernetes.
 Ansible’s following, on the other hand, consists of customers like Capital One, Cisco, HootSuite, NASA, NEC, Twitter, and Verizon, among others.
 Since both tools tend to operate in different circles, it’s hard to compare their popularity in a head to head matchup. However, Ansible is the most popular configuration tool, commanding a 41 percent rating over similar tools like Chef and Puppet, according to Flexera’s RightScale 2019 State of the Cloud Report.
Kubernetes, meanwhile, has become the darling of container management systems, beating out competitors such as Docker Swarm and Apache Mesos, according to an article from Opensource. The main reason for Kubernetes' popularity has more to do with the size of the community that supports it, according to an article from Container Journal. [Source: https://www.simplilearn.com/ansible-vs-kubernetes-article]
faizrashis1995 · 6 years ago
Text
What Is Kubernetes? An Introduction To Container Orchestration Tool
What Is Kubernetes?
Kubernetes is an open-source container management (orchestration) tool. Its container management responsibilities include container deployment, scaling and descaling of containers, and container load balancing.
Going by the definition, you might feel Kubernetes is very ordinary and unimportant. But trust me, this world needs Kubernetes for managing containers as much as it needs Docker for creating them. Let me tell you why!
   Why Use Kubernetes?
Companies out there may be using Docker or Rocket, or simply Linux containers, for containerizing their applications. But whatever it is, they use it on a massive scale. They don't stop at one or two containers in production, but rather run tens or hundreds of containers for load balancing the traffic and ensuring high availability.
 Keep in mind that, as traffic increases, they have to scale up the number of containers to service the requests that come in every second. And they also have to scale the containers back down when demand is lower. Can all this be done natively?
 Well, to be honest, I'm not sure it can. Even if it can, it is only after loads of manual effort spent managing those containers. So the real question is, is it really worth it? Won't automated intervention make life easier? Absolutely it will!
 That is why, the need for container management tools is imminent. Both Docker Swarm and Kubernetes are popular tools for Container management and orchestration. But, Kubernetes is the undisputed market leader. Partly because it is Google’s brainchild and partly because of its better functionality.
Logically speaking, Docker Swarm might seem like the better option because it runs right on top of Docker, right? If I were you, I would have had the same doubt, and it would have been my #1 mystery to solve. So, if you're thinking the same, read the blog comparing Kubernetes and Docker Swarm.
 If I had to pick between the two, it would have to be Kubernetes. The reason is simple: auto-scaling of containers based on traffic needs, something Docker Swarm cannot do on its own. Be that as it may, let's move on to the next topic of this "What is Kubernetes" blog.
 Features Of Kubernetes
This is the right time to talk about Kubernetes' features, because you already know what it does and how it compares against Docker Swarm.
    1. Automatic Binpacking
Kubernetes automatically packages your application and schedules the containers based on their requirements and available resources while not sacrificing availability. To ensure complete utilization and save unused resources, Kubernetes balances between critical and best-effort workloads.
 2. Service Discovery & Load balancing
With Kubernetes, there is no need to worry about networking and communication because Kubernetes will automatically assign IP addresses to containers and a single DNS name for a set of containers, that can load-balance traffic inside the cluster.
 3. Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can either opt for local storage, or choose a public cloud provider such as GCP or AWS, or perhaps use a shared network storage system such as NFS, iSCSI, etc.
 4. Self-Healing
Personally, this is my favorite feature. Kubernetes can automatically restart containers that fail during execution and kills those containers that don’t respond to user-defined health checks. But if nodes itself die, then it replaces and reschedules those failed containers on other available nodes.
 5. Secret & Configuration Management
Kubernetes can help you deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
 6. Batch Execution
In addition to managing services, Kubernetes can also manage your batch and CI workloads, thus replacing containers that fail, if desired.
 7. Horizontal Scaling
Kubernetes needs only one command to scale containers up, or to scale them down, when using the CLI. Alternatively, scaling can be done via the Dashboard (the Kubernetes UI). Scaling can also be declared automatically, as shown in the sketch after this feature list.
 8. Automatic Rollbacks & Rollouts
Kubernetes progressively rolls out changes and updates to your application or its configuration, ensuring that not all instances are updated at the same time. Even if something goes wrong, Kubernetes will roll back the change for you.
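To make the scaling feature above concrete, here is a hedged HorizontalPodAutoscaler sketch (the target Deployment name and thresholds are illustrative, and the exact API version available depends on your cluster):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp                    # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # add or remove Pods to hold roughly 70% average CPU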
 These were some of the notable features of Kubernetes. Let me delve into the attractive aspects of Kubernetes with a real-life implementation of it and how it solved a major industry worry.
 Case Study: How Kubernetes was at the center of Pokemon Go’s evolution
I'm pretty sure everyone reading this blog has played this famous smartphone game, or at least heard of it. I'm so sure because this game literally smashed every record set by gaming applications in both the Android and iOS markets.
Pokemon Go was developed by Niantic Labs and was initially launched only in North America, Australia, and New Zealand. In just a few weeks after its worldwide release, the game reached 500+ million downloads with an average of 20+ million daily active users. These stats bettered those set by games like Candy Crush and Clash of Clans.
Pokemon Go: Game backend with Kubernetes
The app backend was written in Java combined with libGDX. The program was hosted on a Java cloud with Google Cloud Bigtable NoSQL database. And this architecture was built on top of Kubernetes, making it their scaling strategy.
   Rapid iteration of pushing updates worldwide was done thanks to MapReduce and in particular Cloud Dataflow for combining data, doing efficient MapReduce shuffles, and for scaling their infrastructure.
The actual challenge for most big applications like this is horizontal scaling, that is, scaling up your servers to service the increasing number of requests from multiple players and playing environments. But for this game in particular, vertical scaling was also a major challenge because of the changing environment of players in real time. This change also has to be reflected to all the other players nearby, because reflecting the same gaming world to everyone is how the game works. Each individual server's performance and specs also had to be scaled simultaneously, and this was the ultimate challenge that needed to be taken care of by Kubernetes.
 Conclusion: Not only did Kubernetes help in horizontal and vertical scaling of containers, but it exceeded engineering expectations. They planned their deployment around a basic estimate, and the servers were ready for a maximum of 5x traffic. However, the game's popularity rose so much that they had to scale up to 50x. Ask engineers from other companies, and 95% of them will respond with their server meltdown stories and how their business came crashing down. But not at Niantic Labs, the developers of Pokemon Go.
Edward Wu, Director of Software Engineering at Niantic, said:
  “We knew we had something special on hand when these were exceeded in hours.”   “We believe that people are healthier when they go outside and have a reason to be connected to others.”
Pokemon Go surpassed all engineering expectations by 50x and has managed to keep running despite its early launch problems. It became an inspiration and a benchmark for modern augmented reality games, inspiring users to walk over 5.4 billion miles in a year. The implementation at Niantic Labs thus made this the largest Kubernetes deployment ever at the time.
 Kubernetes Architecture
So, now on moving onto the next part of this ‘what is Kubernetes’ blog, let me explain the working architecture of Kubernetes.
Since Kubernetes implements a cluster computing model, everything runs inside a Kubernetes cluster. The cluster is hosted by one node acting as the master, with the other nodes acting as worker nodes that do the actual work of running containers.
   The master controls the cluster and the nodes in it. It ensures that execution only happens on the nodes and coordinates the act. Nodes host the containers; in fact, these containers are grouped logically to form Pods. Each node can run multiple such Pods, which are groups of containers that interact with each other for a deployment.
So, that's the Kubernetes architecture in simple fashion. You can expect more details on the architecture in my next blog. Better still, the next blog will also include a hands-on demonstration of installing a Kubernetes cluster and deploying an application. [Source: https://www.edureka.co/blog/what-is-kubernetes-container-orchestration]
faizrashis1995 · 6 years ago
Text
Virtual Kubernetes Clusters
In the technology domain, virtualization implies the creation of a software-defined or “virtual” form of a physical resource e.g. compute, network or storage. Users of the virtual resource should see no significant differences from users of the actual physical resource. Virtualized resources are typically subject to restrictions on how the underlying physical resource is shared.
The most commonly used form of virtualization is server virtualization, where the physical server is divided into multiple virtual servers. Server virtualization is implemented by a software layer called a virtual machine manager (VMM) or hypervisor. There are two types of hypervisors:
Type 1 Hypervisor: a hypervisor that runs directly on a physical server and coordinates the sharing of resources for the server. Each virtual machine (VM) will have its own OS.
Type 2 Hypervisor: a hypervisor that runs on an operating system (the Host OS) and coordinates the sharing of resources of the server. Each VM will also have its own OS, referred to as the Guest OS.
There is another form of virtualization of compute resources, called operating system (OS) virtualization. With this type of virtualization, an OS kernel natively allows secure sharing of resources. If this sounds familiar, it’s because what we commonly refer to as “containers” today, is a form of OS Virtualization.
Server virtualization technologies, which became mainstream in the early 2000s, enabled a giant leap forward for information technology and also enabled cloud computing services. The initial use case for server virtualization, was to make it easy to run multiple types and versions of server operating systems such as Windows or Linux, on a single physical server. This was useful for the software test and quality-assurance industry, but did not trigger broad adoption of virtualization technologies. A few years later, with VMware’s ESX Type 1 Hypervisor server consolidation became a way to drive efficiencies for enterprise IT by enabling the sharing of servers across workloads, and hence reducing the number of physical servers that were required. And finally, VMware’s VMotion feature, which allowed the migration of running virtual servers across physical servers, became a game changer as patching and updating physical servers could now be performed without any downtime and high levels of business continuity were now easily achievable for IT servers.
Why Virtualize Kubernetes
Kubernetes has been widely declared as the de-facto standard for managing containerized applications. Yet, most enterprises are still in the early stages of adoption. A major inhibitor to faster adoption of Kubernetes is that it is fairly complex to learn and manage at scale. In a KubeCon survey, 50% of respondents cited lack of expertise as a leading hurdle to wider adoption of Kubernetes.
Most enterprises have several applications that are owned by different product teams. As these applications are increasingly packaged in containers and migrated to Kubernetes, and as DevOps practices are adopted, a major challenge for enterprises is to determine who is responsible for the Kubernetes stack, and how Kubernetes skills and responsibilities should be shared across the enterprise. It makes sense to have a small centralized team that builds expertise in Kubernetes, and allows the rest of the organization to focus on delivering business value. Another survey shows an increasing number (from 17.01% in 2018 to 35.5% in 2019) of deployments are driven by centralized IT Operations teams.
One approach that enterprises take is to put existing processes around new technologies to make adoption easier. In fact, traditional platform architectures tried to hide containers and container orchestration from developers, and provided familiar abstractions. Similarly, enterprises adopting Kubernetes may put it behind a CI/CD pipeline and not provide developers access to Kubernetes.
While this may be a reasonable way to start, this approach cripples the value proposition of Kubernetes which offers rich cloud native abstractions for developers.
Managed Kubernetes services make it easy to spin up Kubernetes control planes. This makes it tempting to simply assign each team their own cluster, or even use a “one cluster per app” model (if this sounds familiar, our industry did go through a “one VM per app” phase).
There are major problems with the approach “one cluster per team / app” approach:
Securing and managing Kubernetes is now more difficult. The Kubernetes Control plane is not that difficult to spin-up. Most of the heavy lifting is with configuring and securing Kubernetes once the control plane is up, and with managing workload configurations.
Resource utilization is highly inefficient as there is no opportunity to share the same resources across a diverse set of workloads. For public clouds, the “one cluster per team / app” model directly leads to higher costs.
Clusters now become the new “pets” (see “pets vs cattle”) and eventually cluster-sprawl where it becomes impossible to govern and manage deployments.
The solution is to leverage virtualization for proper separation of concerns across developers and cluster operators. Using virtualization, the Ops team can focus on managing core components and services shared across applications. A development team can have self-service access to a virtual cluster, which is a secure slice of a physical cluster.
The Kubernetes Architecture
Kubernetes automates the management of containerized applications.
Large system architectures, such as  Kubernetes, often use the concept of architectural layers or “planes” to provide separation of concerns. The Kubernetes control plane consists of services that manage placement, scheduling and provide an API for configuration and monitoring of all resources.
Application workloads typically run on worker nodes. Conceptually, the worker nodes can be thought of as the “data plane” for Kubernetes. Worker nodes also run a few Kubernetes services responsible for managing local state and resources. All communication across services happens via the API server making the system loosely coupled and composable.
Kubernetes Virtualization Techniques
Much like how server virtualization includes different types of virtualization, virtualizing Kubernetes can be accomplished at different layers of the system. The possible approaches are to virtualize the control plane, virtualize the data plane or virtualize both planes.
Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces.
 Easy, eh? Well, not quite. For a namespace to be used as a virtual cluster, proper configuration of several additional Kubernetes resources is required. The Kubernetes objects that need to be properly configured for each namespace are shown and discussed below:
Access Controls: Kubernetes access controls allow granular permission sets to be mapped to users and teams. This is essential for sharing clusters, and ideally is integrated with a central system for managing users, groups and roles.
Pod Security Policies: this resource allows administrators to configure exactly what pods (the Kubernetes unit of deployment and management) are allowed to do. It is critical that in a shared system, pods are not allowed to run as root and have limited access to other shared resources such as host disks and ports, as well as the apiserver.
Network Policies: Network policies are Kubernetes firewall rules that allow control over inbound and outbound traffic from pods. By default, Kubernetes allows all pods within a cluster to communicate with each other. This is obviously undesirable in a shared cluster, and hence it is important to configure default network policies for each namespace and then allow users to add firewall rules for their applications.
Limits and quotas: Kubernetes allows granular configurations of resources. For example, each pod can specify how much CPU and memory it requires. It is also possible to limit the total usage for a workload and for a namespace. This is required in shared environments, to prevent a workload from eating up a majority of the resources and starving other workloads.
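As a hedged sketch of two of those per-namespace guardrails (the namespace name and limits are illustrative only), a ResourceQuota caps what a tenant can consume, and a default-deny NetworkPolicy blocks all inbound pod traffic until the tenant opens specific paths:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a                 # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "4"               # total CPU the namespace may request
    requests.memory: 8Gi
    pods: "20"
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a
spec:
  podSelector: {}                   # selects every Pod in the namespace
  policyTypes:
  - Ingress                         # no ingress rules listed, so all inbound traffic is denied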
Virtualizing the Kubernetes control plane means that users can get their own virtual instance of the control plane components. Having separate copies of the apiserver, and other Kubernetes control plane components, allows users to potentially run separate versions and full-isolated configurations.
For example, different users can even have namespaces with the same name. Another problem this approach solves is that different users can have custom resource definitions (CRDs) of different versions. CRDs are becoming increasingly important for Kubernetes, as new frameworks such as Istio, are being implemented as CRDs. This model is also great for service providers that offer managed Kubernetes services or want to dedicate one or more clusters for each tenant. One option service providers may use for hard multi-tenancy is to require separate worker nodes per tenant.
Current State and Activities
The Kubernetes multi-tenancy working group is chartered with exploring functionality related to secure sharing of a cluster. A great place to catch-up on the latest developments is at their bi-weekly meetings. The working group is looking at ways to simplify provisioning and management of virtual clusters, across managing namespaces using mechanisms like CRDs, nested namespaces, as well as using control plane virtualization. The group is also creating security profiles for different levels of multi-tenancy.
A proposal for Kubernetes control plane virtualization was provided by the team at Alibaba (here is a related blog post). In their design, a single "Super Master" coordinates scheduling and resource management across users, and worker nodes can be shared. The Alibaba Virtual Cluster proposal also uses namespaces and related controls underneath for isolation at the data plane level. This means the proposal provides both control plane and data plane multi-tenancy. [Source: https://www.nirmata.com/2019/08/26/virtual-kubernetes-clusters/]