#Kubectl for beginners
Text
Kubectl get context: List Kubernetes cluster connections @vexpert #homelab #vmwarecommunities #KubernetesCommandLineGuide #UnderstandingKubectl #ManagingKubernetesResources #KubectlContextManagement #WorkingWithMultipleKubernetesClusters #k8sforbeginners
kubectl, a command-line tool, facilitates direct interaction with the Kubernetes API server. Its versatility spans various operations, from listing cluster connections with kubectl config get-contexts to manipulating resources with an assortment of kubectl commands. Table of contents: Comprehending Fundamental Kubectl Commands; Working with More Than One Kubernetes Cluster; Navigating Contexts with kubectl…
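The subcommand spelling matters here: in kubectl, context operations live under kubectl config. A typical session for listing and switching cluster connections looks like this:
$ kubectl config get-contexts
This lists every context defined in your kubeconfig file.
$ kubectl config current-context
This prints the context currently in use.
$ kubectl config use-context my-cluster
This switches to a context named my-cluster (the name here is just an example).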

View On WordPress
#Advanced kubectl commands#Kubectl config settings#Kubectl context management#Kubectl for beginners#Kubernetes command line guide#Managing Kubernetes resources#Setting up kubeconfig files#Switching Kubernetes contexts#Understanding kubectl#Working with multiple Kubernetes clusters
0 notes
Text
Kubernetes Tutorials | Waytoeasylearn
Learn how to become a Certified Kubernetes Administrator (CKA) with this all-in-one Kubernetes course. It is suitable for complete beginners as well as experienced DevOps engineers. This practical, hands-on class will teach you how to understand Kubernetes architecture, deploy and manage applications, scale services, troubleshoot issues, and perform admin tasks. It covers everything you need to confidently pass the CKA exam and run containerized apps in production.
Learn Kubernetes the easy way! 🚀 Best tutorials at Waytoeasylearn for mastering Kubernetes and cloud computing efficiently.➡️ Learn Now

Whether you are studying for the CKA exam or want to become a Kubernetes expert, this course offers step-by-step lessons, real-life examples, and labs focused on exam topics. You will learn from Kubernetes professionals and gain skills that employers are looking for.
Key Learning Outcomes:
Understand Kubernetes architecture, components, and key ideas.
Deploy, scale, and manage containerized apps on Kubernetes clusters.
Learn to use kubectl, YAML files, and troubleshoot clusters.
Get familiar with pods, services, deployments, volumes, namespaces, and RBAC.
Set up and run production-ready Kubernetes clusters using kubeadm.
Explore advanced topics like rolling updates, autoscaling, and networking.
Build confidence with real-world labs and practice exams.
Prepare for the CKA exam with helpful tips, checklists, and practice scenarios.
Who Should Take This Course:
Aspiring CKA candidates.
DevOps engineers, cloud engineers, and system admins.
Software developers moving into cloud-native work.
Anyone who wants to master Kubernetes for real jobs.
1 note
·
View note
Text
Understanding Kubernetes Architecture: A Beginner's Guide
Kubernetes, often abbreviated as K8s, is a powerful container orchestration platform designed to simplify deploying, scaling, and managing containerized applications. Its architecture, while complex at first glance, provides the scalability and flexibility that modern cloud-native applications demand.
In this blog, we’ll break down the core components of Kubernetes architecture to give you a clear understanding of how everything fits together.
Key Components of Kubernetes Architecture
1. Control Plane
The control plane is the brain of Kubernetes, responsible for maintaining the desired state of the cluster. It ensures that applications are running as intended. The key components of the control plane include:
API Server: Acts as the front end of Kubernetes, exposing REST APIs for interaction. All cluster communication happens through the API server.
etcd: A distributed key-value store that holds cluster state and configuration data. It’s highly available and ensures consistency across the cluster.
Controller Manager: Runs various controllers (e.g., Node Controller, Deployment Controller) that manage the state of cluster objects.
Scheduler: Assigns pods to nodes based on resource requirements and policies.
2. Nodes (Worker Nodes)
Worker nodes are where application workloads run. Each node hosts containers and ensures they operate as expected. The key components of a node include:
Kubelet: An agent that runs on every node to communicate with the control plane and ensure the containers are running.
Container Runtime: Software like Docker or containerd that manages containers.
Kube-Proxy: Handles networking and ensures communication between pods and services.
Kubernetes Objects
Kubernetes architecture revolves around its objects, which represent the state of the system. Key objects include:
Pods: The smallest deployable unit in Kubernetes, consisting of one or more containers.
Services: Provide stable networking for accessing pods.
Deployments: Manage pod scaling and rolling updates.
ConfigMaps and Secrets: Store configuration data and sensitive information, respectively.
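To make these objects concrete, here is a minimal imperative sketch; the names and the public nginx image are stand-ins, not part of any particular application:
$ kubectl create deployment web --image=nginx --replicas=2
This Deployment manages two pod replicas.
$ kubectl expose deployment web --port=80
This Service gives the pods one stable network address.
$ kubectl create configmap web-config --from-literal=mode=production
$ kubectl create secret generic web-secret --from-literal=api-key=changeme
The ConfigMap and Secret hold plain and sensitive configuration, respectively.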
How the Components Interact
User Interaction: Users interact with Kubernetes via the kubectl CLI or API server to define the desired state (e.g., deploying an application).
Control Plane Processing: The API server communicates with etcd to record the desired state. Controllers and the scheduler work together to maintain and allocate resources.
Node Execution: The Kubelet on each node ensures that pods are running as instructed, while kube-proxy facilitates networking between components.
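You can observe this interaction directly: on a kubeadm-style cluster, the control-plane components themselves run as pods in the kube-system namespace.
$ kubectl get pods -n kube-system -o wide
This shows the API server, scheduler, controller manager, and kube-proxy pods, along with the nodes they run on.
$ kubectl get nodes
This lists the nodes that the kubelets have registered with the control plane.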
Why Kubernetes Architecture Matters
Understanding Kubernetes architecture is essential for effectively managing clusters. Knowing how the control plane and nodes work together helps troubleshoot issues, optimize performance, and design scalable applications.
Kubernetes’s distributed nature and modular components provide flexibility for building resilient, cloud-native systems. Whether deploying on-premises or in the cloud, Kubernetes can adapt to your needs.
Conclusion
Kubernetes architecture may seem intricate, but breaking it down into components makes it approachable. By mastering the control plane, nodes, and key objects, you’ll be better equipped to leverage Kubernetes for modern application development.
Are you ready to dive deeper into Kubernetes? Explore HawkStack Technologies’ cloud-native services to simplify your Kubernetes journey and unlock its full potential. For more details, visit www.hawkstack.com
#redhatcourses#information technology#containerorchestration#docker#container#kubernetes#linux#containersecurity#dockerswarm#hawkstack#hawkstack technologies
0 notes
Video
youtube
Kubernetes kubectl Tutorial with Examples for Devops Beginners and Students
Hi, a new #video on #kubernetes #kubectl is published on #codeonedigest #youtube channel. Learn #kubernetes #api #kubectlcommands #node #docker #container #cloud #aws #azure #programming #coding with #codeonedigest
@java #java #awscloud @awscloud #aws @AWSCloudIndia #Cloud #CloudComputing @YouTube #youtube #azure #msazure #microsoftazure #kubectl #kubectlcommands #kubectlinstall #kubectlport-forward #kubectlbasiccommands #kubectlproxy #kubectlconfig #kubectlgetpods #kubectlexeccommand #kubectllogs #kubectlinstalllinux #kubectlapply #kuberneteskubectl #kuberneteskubectltutorial #kuberneteskubectlcommands #kuberneteskubectl #kuberneteskubectlinstall #kuberneteskubectlgithub #kuberneteskubectlconfig #kuberneteskubectllogs #kuberneteskubectlpatch #kuberneteskubectlversion #kubernetes #kubernetestutorial #kubernetestutorialforbeginners #kubernetesinstallation #kubernetesinterviewquestions #kubernetesexplained #kubernetesorchestrationtutorial #kubernetesoperator #kubernetesoverview #containernetworkinterfaceaws #azure #aws #azurecloud #awscloud #orchestration #kubernetesapi #Kubernetesapiserver #Kubernetesapigateway #Kubernetesapipython #Kubernetesapiauthentication #Kubernetesapiversion #Kubernetesapijavaclient #Kubernetesapiclient
#youtube#kubernetes#kubernetes kubectl#kubectl#kubernetes command#kubectl commands#kubectl command line interface
3 notes
·
View notes
Text
Kubectl Basic Commands | Kubernetes for Beginners
youtube
#KubectlBasicCommands#KubernetesForBeginners#KubernetesKubectlBasicCommandsTraining#KubectlBasicCommandsCourse#KubectlBasicCommandsTutorial#Youtube
0 notes
Text
Beginner’s Guide To Setup Kubernetes
Steps to install Kubernetes Cluster
Requirements
The major requirements are stated below regarding the setup process.
Master: 2 GB RAM, 2 CPU cores
Slave/Node: 1 GB RAM, 1 CPU core
1. Install Kubernetes
The steps below must be executed on both the master and the node machines. Let's call the master 'kmaster' and the node 'knode'.
1.1 Change to root:
We switch to the root user here because every setup step requires elevated privileges; working as root avoids having to prefix each command with sudo.
$ sudo su
# apt-get update
This command updates the package index on the system.
1.2 Turn Off Swap Space:
Kubernetes does not support swap, so run the command below to turn off swap space.
# swapoff -a
1.3 Fstab action
After that, open the 'fstab' file and comment out the line that mentions the swap partition, so that swap stays disabled after a reboot.
# nano /etc/fstab
Press ‘Ctrl+X’, after that press ‘Y’ and then press ‘Enter’ to Save the file.
1.4 Update The Hostnames
To change the hostname of both machines, run the command below to open the hostname file, then rename the master machine to 'kmaster' and your node machine to 'knode'.
# nano /etc/hostname
Press ‘Ctrl+X’, after that press ‘Y’ and then press ‘Enter’ to Save the file.
1.5 Update The Hosts File With IPs Of Master & Node
Run the following command on both machines to note the IP addresses of each.
# ifconfig
Now open the 'hosts' file on both the master and the node, and add an entry for each machine specifying its IP address along with its name, i.e. 'kmaster' and 'knode'.
# nano /etc/hosts
Press ‘Ctrl+X’, after that press ‘Y’ and then press ‘Enter’ to Save the file.
1.6 Setting Static IP Addresses
We will make the IP addresses used above static for the VMs by modifying the network interfaces file. Run the following command to open the file:
# nano /etc/network/interfaces
Now enter the following lines in the file, supplying the static IP address you noted for this machine:
auto enp0s8
iface enp0s8 inet static
address <static-ip-of-this-machine>
Press ‘Ctrl+X’, after that press ‘Y’ and then press ‘Enter’ to Save the file.
After this, restart your machine.
1.7 Install Open SSH-Server
Now we have to install openssh-server. Run the following command:
# sudo apt-get install openssh-server
2. Install Docker
Now we need to install Docker, since Docker images will be used to run the containers in the cluster. Run the following commands:
# sudo su
# apt-get update
# apt-get install -y docker.io
We have only explained how to install Docker on your system here, not how to add a user to the docker group or how to install docker-compose. For the basics of Kubernetes, you can follow this link:
3. Install kubeadm, Kubelet And Kubectl
To move further, we have to install the three essential components for setting up the Kubernetes environment: kubeadm, kubelet, and kubectl.
Run the following commands before installing the Kubernetes environment.
# apt-get update && apt-get install -y apt-transport-https curl
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# apt-get update
Kubelet is the lowest-level component in Kubernetes. It is responsible for what is running on an individual machine.
Kubeadm is used for bootstrapping and administering the Kubernetes cluster.
Kubectl is used for controlling the configurations on the various nodes inside the cluster.
# apt-get install -y kubelet kubeadm kubectl
3.1 Updating Kubernetes Configuration
Next, we will change the configuration file of Kubernetes. Run the following command:
# nano /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
This will open a text editor; enter the following line after the last Environment entry:
Environment="cgroup-driver=systemd/cgroup-driver=cgroupfs"
Press ‘Ctrl+X’, after that press ‘Y’ and then press ‘Enter’ to Save the file.
4. Steps Only For Kubernetes Master VM (kmaster)
All the required packages are now installed on both machines, but the following steps apply to the master node only. Run the commands below to initialize the Kubernetes master.
4.1 Initialize Kubernetes Master with ‘kubeadm init’
Run one of the commands below to initialize and set up the Kubernetes master.
# kubeadm init
or, specifying the API server address and pod network CIDR explicitly:
# kubeadm init --apiserver-advertise-address=<ip-address-of-kmaster-vm> --pod-network-cidr=192.168.0.0/16
for example:
# kubeadm init --apiserver-advertise-address 192.168.1.206 --pod-network-cidr=172.16.0.0/16
When kubeadm init completes, it reports that the Kubernetes control plane has initialized successfully and prints three commands that set up the .kube folder.
As mentioned in that output, run the following commands as a non-root user:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
The output also includes a 'kubeadm join' command with a token. Store this token somewhere safe; you will run it later in the node's terminal so that the node can establish communication with the master.
You will notice from the previous command that all the pods are running except one: 'kube-dns'. To resolve this, we will install a pod network. To install the Calico pod network, run the following command:
$ kubectl apply -f https://docs.projectcalico.org/v3.0/getting-started/kubernetes/installation/hosted/kubeadm/1.7/calico.yaml
Alternatively, install Flannel, a network fabric for containers designed for Kubernetes, to enable communication between the pods. Run the command on the master node only, and install just one pod network (Calico or Flannel, not both):
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
To verify that kubectl is working, run the following command.
$ kubectl get pods -o wide --all-namespaces
Use the 'kubectl get nodes' command to ensure that the Kubernetes master node's status is Ready.
$ kubectl get nodes
4.2 To reset kubernetes
If you are done with this setup and need a fresh start, you can reset the cluster with the following command.
$ kubeadm reset
5. Steps For Only Kubernetes Node VM (knode)
For trial purposes, we can create the node on the same system with the help of a virtual machine.
Prerequisites
1.3 GHz or faster 64-bit processor
2 GB RAM minimum / 4 GB RAM or more recommended
Install VMware Workstation Player on Ubuntu
5.1 Install required packages
$ sudo apt update
$ sudo apt install build-essential
$ sudo apt install linux-headers-$(uname -r)
5.2 Download VMware Workstation Player
$ wget https://www.vmware.com/go/getplayer-linux
Once the download is completed make the installation file executable using the following command:
$ chmod +x getplayer-linux
5.3 Install VMware Workstation Player
Start the Installation wizard by typing:
$ sudo ./getplayer-linux
1. Just accept the terms and conditions in the license agreement and click on the Next button.
2. Next, you will be asked whether you would like to check for product updates on startup. Make your selection and click on the Next button.
3. VMware's Customer Experience Improvement Program ("CEIP") helps VMware improve its products and services by sending anonymous system data and usage information to VMware. If you prefer not to participate in the program, select No and click on the Next button.
4. In the next step, if you don't have a license key, leave the field empty and click on the Next button.
5. Next, you will see the following page informing you that the VMware Workstation Player is ready to be installed. Click on the Install button.
6. Start VMware Workstation Player
Create a new virtual machine
Open a terminal in the virtual system and follow the steps to create the user (knode), then enter the command that establishes the connection between master and node.
$ sudo su
Now we are in the 'knode' terminal, and we need to run the 'kubeadm join' command with the token we saved earlier, so that the node can connect to the master (kmaster).
# kubeadm join 192.168.1.206:6443 --token 02p54b.p8oe045cpj3zmz2b --discovery-token-ca-cert-hash sha256:50ba20a59c9f8bc0559d4635f1ac6bb480230e173a0c08b338372d8b81fcd061
Once the worker node has joined the Kubernetes master, verify the list of nodes in the Kubernetes cluster.
$ kubectl get nodes
We have successfully configured the Kubernetes cluster.
The Kubernetes master and worker node are now ready for application deployment.
Bottom Line
Now that we have walked through the Kubernetes setup, we will move on to more technical topics in the other parts of this Kubernetes series. Our next tutorial will explain how to connect to the dashboard. Until then, enjoy learning and try something new.
#beginners guide to setup kubernetes#Steps to install Kubernetes#Install Kubernetes#Installing the Kubernetes Dashboard#Master and Node#Kubernetes Dashboard Token
0 notes
Photo
[FREE] Kubernetes for Absolute Beginners on AWS Cloud | Part-1 What you Will learn ? You will learn creating Pods, ReplicaSets, Deployments and Services using kubectl…
0 notes
Link
Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris
Docker
Sign up for a free Azure account: To use containers in the Cloud, like a private registry, you will need a free Azure account.
Docker part I - basics: This part covers what Docker is and why I think you should use it. It brings up concepts such as images and containers and takes you through building and running your first container.
Docker part II - volumes: This is about volumes and how we can use volumes to persist data, but also how we can turn our development environment into a volume and make our development experience considerably better.
Docker part III - databases, linking and networks: This is about how to deal with databases, putting them into containers, and how to make containers talk to other containers using legacy linking but also the new standard through networks.
Docker part IV - introducing Docker Compose: This is how we manage more than one service using Docker Compose (this is part 1/2 on Docker Compose).
Docker part V - going deeper with Docker Compose: This part is the second and concluding part on Docker Compose, where we cover volumes, environment variables, and working with databases and networks.
Dockerfile great practices for beginners: This is a guide that will ensure your Docker image is as small as possible, but also ensure it's performant and that you understand why you should use certain commands.
Improve your Docker workflow with this VS Code extension: VS Code can really help you with your Docker workflow with this extension: build, run, author, deploy, lots of great commands.
This two-part series (Part I, Creating your Microservices with Docker; Part II, Bring your container to the Cloud) shows how you can use Docker to build microservices with Docker Compose as part of building a cloud-hosted API.
Want to keep using Docker in the Cloud? This shows how you can build your containers and bring them with you to the Cloud.
My crash course and learning journey with Docker: I describe how I, as a frontend developer, barely understood what I was doing, to finally understanding why I needed Docker and taking the time to learn it and leverage its features.
Kubernetes
Kubernetes Fundamentals: Kubernetes is about orchestrating containerized apps. Docker is great for your first few containers. As soon as you need to run on multiple machines, scale up/down, and distribute the load and so on, you need an orchestrator: you need Kubernetes.
Nodes, Pods, Services and Labeling: This second part aims to give additional context to Nodes and Pods, and introduce the concept of a Service. There is some theory and some practice with kubectl.
Static scaling: This third part aims to show how you scale your application. We can easily set the number of Replicas we want of a certain application and let Kubernetes figure out how to do that. This is us defining a so-called desired state.
Auto scaling: This fourth part is also about scaling, but we are not talking about setting the number of Replicas. Instead, we are talking about automatic scaling: letting Kubernetes scale to as many Replicas as it needs based on settings you set on CPU and other metrics. This is a great way to handle a sudden barrage of incoming requests.
YAML files in Kubernetes: Learn about YAML files and how to use them in Kubernetes.
0 notes
Text
One year using Kubernetes in production: Lessons learned
Starting out with containers and container orchestration tools
I now believe containers are the deployment format of the future. They make it much easier to package an application with its required infrastructure. While tools such as Docker provide the actual containers, we also need tools to take care of things such as replication and failovers, as well as APIs to automate deployments to multiple machines.
The state of clustering tools such as Kubernetes and Docker Swarm was very immature in early 2015, with only early alpha versions available. We still tried using them and started with Docker Swarm.
At first we used it to handle networking on our own with the ambassador pattern and a bunch of scripts to automate the deployments. How hard could it possibly be? That was our first hard lesson: Container clustering, networking, and deployment automation are actually very hard problems to solve.
We realized this quickly enough and decided to bet on another one of the available tools. Kubernetes seemed to be the best choice, since it was being backed by Google, Red Hat, Core OS, and other groups that clearly know about running large-scale deployments.
Load balancing with Kubernetes
When working with Kubernetes, you have to become familiar with concepts such as pods, services, and replication controllers. If you're not already familiar with these concepts, there are some excellent resources available to get up to speed. The Kubernetes documentation is a great place to start, since it has several guides for beginners.
Once we had a Kubernetes cluster up and running, we could deploy an application using kubectl, the Kubernetes CLI, but we quickly found that kubectl wasn't sufficient when we wanted to automate deployments. But first, we had another problem to solve: How to access the deployed application from the Internet?
The service in front of the deployment has an IP address, but this address only exists within the Kubernetes cluster. This means the service isn’t available to the Internet at all! When running on Google Cloud Engine, Kubernetes can automatically configure a load balancer to access the application. If you’re not on GCE (like us), you need to do a little extra legwork to get load balancing working.
It’s possible to expose a service directly on a host machine port—and this is how a lot of people get started—but we found that it voids a lot of Kubernetes' benefits. If we rely on ports in our host machines, we will get into port conflicts when deploying multiple applications. It also makes it much harder to scale the cluster or replace host machines.
A two-step load-balancer setup
We found that a much better approach is to configure a load balancer such as HAProxy or NGINX in front of the Kubernetes cluster. We started running our Kubernetes clusters inside a VPN on AWS and using an AWS Elastic Load Balancer to route external web traffic to an internal HAProxy cluster. HAProxy is configured with a “back end” for each Kubernetes service, which proxies traffic to individual pods.
This two-step load-balancer setup is mostly in response to AWS ELB's fairly limited configuration options. One of the limitations is that it can’t handle multiple vhosts. This is the reason we’re using HAProxy as well. Just using HAProxy (without an ELB) could also work, but you would have to work around dynamic AWS IP addresses on the DNS level.
In any case, we needed a mechanism to dynamically reconfigure the load balancer (HAProxy, in our case) when new Kubernetes services are created.
The Kubernetes community is currently working on a feature called ingress. It will make it possible to configure an external load balancer directly from Kubernetes. Currently, this feature isn’t really usable yet because it’s simply not finished. Last year, we used the API and a small open-source tool to configure load balancing instead.
Configuring load balancing
First, we needed a place to store load-balancer configurations. They could be stored anywhere, but because we already had etcd available, we decided to store the load-balancer configurations there. We use a tool called confd to watch configuration changes in etcd and generate a new HAProxy configuration file based on a template. When a new service is added to Kubernetes, we add a new configuration to etcd, which results in a new configuration file for HAProxy.
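As a rough sketch of that flow (the key path, endpoint, and ports are illustrative, not our exact production layout), registering a service endpoint in etcd and rendering the HAProxy template once might look like:
$ etcdctl set /services/myapp/backend "10.2.1.17:8080"
$ confd -onetime -backend etcd -node http://127.0.0.1:2379
In watch mode, confd re-renders the HAProxy configuration file whenever a key under its watched prefix changes.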
Kubernetes: Maturing the right way
There are still plenty of unsolved problems in Kubernetes, just as there are in load balancing generally. Many of these issues are recognized by the community, and there are design documents that discuss new features that can solve some of them. But coming up with solutions that work for everyone requires time, which means some of these features can take quite a while before they land in a release. This is a good thing, because it would be harmful in the long term to take shortcuts when designing new functionality.
This doesn’t mean Kubernetes is limited today. Using the API, it’s possible to make Kubernetes do pretty much everything you need it to if you want to start using it today. Once more features land in Kubernetes itself, we can replace custom solutions with standard ones.
After we developed our custom solution for load balancing, our next challenge was implementing an essential deployment technique for us: Blue-green deployments.
Blue-green deployments in Kubernetes
A blue-green deployment is one without any downtime. In contrast to rolling updates, a blue-green deployment works by starting a cluster of replicas running the new version while all the old replicas are still serving all the live requests. Only when the new set of replicas is completely up and running is the load-balancer configuration changed to switch the load to the new version. A benefit of this approach is that there’s always only one version of the application running, reducing the complexity of handling multiple concurrent versions. Blue-green deployments also work better when the number of replicas is fairly small.
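In stock Kubernetes terms, the final traffic switch can be approximated by repointing a Service's label selector once the new replicas are up; the myapp name and version labels here are hypothetical:
$ kubectl get pods -l app=myapp,version=green
This confirms the green replicas are running.
$ kubectl patch service myapp -p '{"spec":{"selector":{"app":"myapp","version":"green"}}}'
This flips live traffic from the blue pods to the green ones in a single step.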
Figure 2 shows a component “Deployer” that orchestrates the deployment. This component can easily be created by your own team because we open-sourced our implementation under the Apache License as part of the Amdatu umbrella project. It also comes with a web UI to configure deployments.
An important aspect of this mechanism is the health checking it performs on the pods before reconfiguring the load balancer. We wanted each component that was deployed to provide a health check. Now we typically add a health check that's available on HTTP to each application component.
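Such a health check maps naturally onto a Kubernetes readiness probe; a minimal sketch, assuming the component serves a /health endpoint on port 8080 (the image name is a placeholder):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: example/myapp:1.0
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
Until the probe succeeds, the pod receives no traffic from its Service.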
Making the deployments automatic
With the Deployer in place, we were able to hook up deployments to a build pipeline. Our build server can, after a successful build, push a new Docker image to a registry such as Docker Hub. Then the build server can invoke the Deployer to automatically deploy the new version to a test environment. The same image can be promoted to production by triggering the Deployer on the production environment.
Know your resource constraints
Knowing our resource constraints was critical when we started using Kubernetes. You can configure resource requests and CPU/memory limits on each pod. You can also control resource guarantees and bursting limits.
These settings are extremely important for running multiple containers together efficiently. If we didn't set these settings correctly, containers would often crash because they couldn't allocate enough memory.
Start early with setting and testing constraints. Without constraints, everything will still run fine, but you'll get a big, unpleasant surprise when you put any serious load on one of the containers.
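Requests and limits can be set in the pod spec or patched onto an existing workload; for example (the deployment name is hypothetical):
$ kubectl set resources deployment myapp --requests=cpu=100m,memory=128Mi --limits=cpu=500m,memory=512Mi
The requests are what the scheduler guarantees; the limits are the bursting cap, beyond which a container is throttled (CPU) or killed (memory).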
How we monitored Kubernetes
When we had Kubernetes mostly set up, we quickly realized that monitoring and logging would be crucial in this new dynamic environment. Logging in to a server to look at log files just doesn’t work anymore when you're dealing with a large number of replicas and nodes. As soon as you start using Kubernetes, you should also have a plan to build centralized logging and monitoring.
Logging
There are plenty of open-source tools available for logging. We decided to use Graylog—an excellent tool for logging—and Apache Kafka, a messaging system to collect and digest logs from our containers. The containers send logs to Kafka, and Kafka hands them off to Graylog for indexing. We chose to make the application components send logs to Kafka themselves so that we could stream logs in an easy-to-index format. Alternatively, there are tools that retrieve logs from outside the container and forward them to a logging solution.
Monitoring
Kubernetes does an excellent job of recovering when there's an error. When pods crash for any reason, Kubernetes will restart them. When Kubernetes is running replicated, end users probably won't even notice a problem. Kubernetes recovery works so well that we have had situations where our containers would crash multiple times a day because of a memory leak, without anyone (including ourselves) noticing it.
Although this is great from the perspective of Kubernetes, you probably still want to know whenever there’s a problem. We use a custom health-check dashboard that monitors the Kubernetes nodes, individual pods—using application-specific health checks—and other services such as data stores. To implement a dashboard such as this, the Kubernetes API proves to be extremely valuable again.
We also thought it was important to measure load, throughput, application errors, and other stats. Again, the open-source space has a lot to offer. Our application components post metrics to an InfluxDB time-series store. We also use Heapster to gather Kubernetes metrics. The metrics stored in InfluxDB are visualized in Grafana, an open-source dashboard tool. There are a lot of alternatives to the InfluxDB/Grafana stack, and any one of them will provide a lot of value toward keeping track of how things are running.
Data stores and Kubernetes
A question that many new Kubernetes users ask is “How should I handle my data stores with Kubernetes?”
When running a data store such as MongoDB or MySQL, you most likely want the data to be persistent. Out of the box, containers lose their data when they restart. This is fine for stateless components, but not for a persistent data store. Kubernetes has the concept of volumes to work with persistent data.
A volume can be backed by a variety of implementations, including files on the host machines, AWS Elastic Block Store (EBS), and nfs. When we were researching the question of persistent data, this provided a good answer, but it wasn't an answer for our running data stores yet.
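As an illustration, a pod can mount an EBS-backed volume like this (the volume ID and paths are placeholders):
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: data-pod
spec:
  containers:
  - name: db
    image: mongo
    volumeMounts:
    - name: data
      mountPath: /data/db
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0
      fsType: ext4
EOF
The data in /data/db now survives container restarts, because it lives on the EBS volume rather than in the container's writable layer.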
Replication issues
In most deployments, the data stores also run replicated. Mongo typically runs in a Replica Set, and MySQL could be running in primary/replica mode. This introduces a few problems. First of all, it’s important that each node in the data store’s cluster is backed by a different volume. Writing to the same volume will lead to data corruption. Another issue is that most data stores require precise configuration to get the clustering up and running; auto discovery and configuration of nodes is not common.
At the same time, a machine that runs a data store is often specifically tuned for that type of workload. Higher IOPS could be one example. Scaling (adding/removing nodes) is an expensive operation for most data stores as well. All these things don’t match very well with the dynamic nature of Kubernetes deployments.
The decision not to use Kubernetes for running data stores in production
This brings us to a situation where we found that the benefits of running a data store inside Kubernetes are limited. The dynamics that Kubernetes give us can’t really be used. The setup is also much more complex than most Kubernetes deployments.
Because of this, we are not running our production data stores inside Kubernetes. Instead, we set up these clusters manually on different hosts, with all the tuning necessary to optimize the data store in question. Our applications running inside Kubernetes just connect to the data store cluster like normal. The important lesson is that you don’t have to run everything in Kubernetes once you have Kubernetes. Besides data stores and our HAProxy servers, everything else does run in Kubernetes, though, including our monitoring and logging solutions.
Why we're excited about our next year with Kubernetes
Looking at our deployments today, Kubernetes is absolutely fantastic. The Kubernetes API is a great tool when it comes to automating a deployment pipeline. Deployments are not only more reliable, but also much faster, because we’re no longer dealing with VMs. Our builds and deployments have become more reliable because it’s easier to test and ship containers.
We see now that this new way of deployment was necessary to keep up with other development teams around the industry that are pushing out deployments much more often and lowering their overhead for doing so.
Cost calculation
Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.
For larger deployments, it’s easy to save a lot on server costs. The overhead of running etcd and a master node aren’t significant in these deployments. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. When running Kubernetes sounds great, but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.
A bright future for Kubernetes
Running Kubernetes in a pre-released version was challenging, and keeping up with (breaking) new releases was almost impossible at times. Development of Kubernetes has been happening at light-speed in the past year, and the community has grown into a legitimate powerhouse of dev talent. It’s hard to believe how much progress has been made in just over a year. Source: https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned
Basic & advanced Kubernetes courses covering cloud computing, AWS, Docker, etc., in Mumbai; the advanced containers domain is covered in 25 hours of Kubernetes training.
0 notes
Text
Adoption: Developing for containers
Red Hat Container Development Kit (CDK) is a pre-assembled container development environment based on Red Hat Enterprise Linux that helps you start developing container-based applications quickly. The containers you build can be easily deployed on any Red Hat container host or platform, including Red Hat Enterprise Linux, Red Hat Enterprise Linux Atomic Host, and our platform-as-a-service offering, OpenShift Enterprise 3.
Get started with containers on Mac OS X, Microsoft Windows, or Linux
To save you from assembling a container development environment from scratch, CDK delivers the latest container tools in a Red Hat Enterprise Linux virtual machine that you can use on your Mac OS X, Microsoft Windows, RHEL, or Fedora Linux system. What's more, you have your choice of virtualization platforms (VirtualBox, VMware, and the Linux KVM/libvirt hypervisors are all supported). Most of the VM configuration details on your system are handled for you by Vagrant, an open-source tool for creating and distributing portable, reproducible development environments.
Red Hat Container Development Kit 2 beta is available now to customers and partners with Red Hat Enterprise Linux Developer subscriptions and to partners who join the Container Zone via the Red Hat Connect for Technology Partners program. To learn how to install the Red Hat CDK, refer to the Red Hat CDK Installation Guide.
Something for all levels of container experience
The CDK is for you whether you are trying Docker-formatted containers for the first time or want to see the latest developments in container tools from Red Hat. If you are just getting started, try some container examples from the Getting Started with Container Development Kit guide.
If you are ready to try out scaling and orchestrating multi-container deployments, CDK has OpenShift Enterprise 3 and Kubernetes installed. You can choose between multi-container environments that are managed by OpenShift Enterprise or by Kubernetes alone.
Want to try OpenShift Enterprise 3 Platform-as-a-Service?
The CDK provides a prebuilt, single-machine OpenShift Enterprise 3 environment, so you can try the latest version of the OpenShift platform-as-a-service, which includes support for Docker-formatted containers and Kubernetes. When you bring up the rhel-ose Vagrantfile, OpenShift is started and provisioned.
To help you start building applications, a number of OpenShift templates are included. You can access the OpenShift web console from your browser or work from the CLI using the oc command to deploy container applications. OpenShift is preconfigured with a local Docker registry available and a local build of Kubernetes running, so you can test the full experience in a self-contained environment.
Orchestrate applications using Kubernetes
Kubernetes is an orchestration and management platform for automatic deployment, scaling, and operation of application containers across a cluster of machines. Deploying containers with Kubernetes requires metadata in the form of artifact files, so a developer needs a simple Kubernetes setup in order to test application containers and those artifacts before moving the deployment to other environments, such as testing and production.
The CDK provides a rhel-k8s-singlenode-setup Vagrantfile that can bring up a single-node Kubernetes deployment (a combined master and node setup) on the same host with just a 'vagrant up' command. A developer can then use the kubectl command to create pods, services, replication controllers, and other elements to manage and scale containerized applications.
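For example, once the VM is up, a quick smoke test of that single-node setup might look like this (the manifest and controller names below are placeholders, not files shipped with the CDK):
$ vagrant up
$ kubectl create -f my-pod.yaml
This creates a pod from a hypothetical manifest.
$ kubectl get pods
$ kubectl scale rc my-rc --replicas=3
This scales a hypothetical replication controller to three replicas.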
Convert VM images to containers with v2c
Virtual-to-Container (v2c) is a tool for importing and converting disk images (such as virtual machine images) into Docker-formatted container images, complete with the appropriate metadata. The v2c tool makes it easy to take a working VM that runs a single application and generate a Dockerfile that runs the same application in a container.
A typical v2c user is an organization with an existing collection of VM images that embed significant organization-specific software or configuration. The resulting base images provide a starting point that you can build on with additional Dockerfiles and configuration.
If you would like to try this capability out, please contact us on the mailing list provided below.
Atomic App and the Nulecule Specification
Those interested in the evolution of container tooling will want to try Atomic App, Red Hat's reference implementation of the Nulecule specification. The Nulecule specification enables complex containerized applications to be defined, packaged, and distributed using standard container technologies. The resulting container includes dependencies, supports multiple orchestration providers, and can specify resource requirements. The Nulecule specification also supports the composition of multiple composite applications.
For an overview of Atomic App, see this blog post: Running Nulecules in OpenShift via oc new-app.
Getting the CDK
Red Hat Container Development Kit 2 beta is available now to customers and partners with Red Hat Enterprise Linux Developer subscriptions and to partners who join the Container Zone via the Red Hat Connect for Technology Partners program.
Container images
Whether converting existing applications into simple one-container deployments or developing multi-container applications with a microservices architecture from scratch, the CDK provides the tools and documentation that developers need to get started. This includes access to these images via the Red Hat Container Registry:
Programming languages (Python, Ruby, Node.js, PHP, Perl - see Red Hat Software Collections)
Databases (MySQL, MariaDB, PostgreSQL, MongoDB - see Red Hat Software Collections)
Web servers (Apache httpd, Passenger - see Red Hat Software Collections), JBoss Web Server (Tomcat)
Enterprise middleware products also available in image format, including JBoss Enterprise Application Platform (EAP), AMQ, Data Grid, and so on.
A Red Hat Developer Toolset image for developers looking to create container-based applications built with the GNU Compiler Collection (GCC) tools.
Give us your feedback and join the discussion
We want your feedback; join the discussion and get involved. The Red Hat Container Tools mailing list is open to all. Please try the beta, and send us your feedback on the container tools ([email protected]) mailing list.
Development tools for containers
The Red Hat Enterprise Linux developer tools make it easy to access industry-leading developer tools, instructional resources, and an ecosystem of experts to help developers increase productivity in building great Linux applications. Please review this section, as the options have expanded significantly recently.
Red Hat Developer Toolset: The Red Hat Developer Toolset enables developers to take advantage of the latest versions of the GNU Compiler Collection (GCC), Eclipse, and more as they build, test, and deploy applications for RHEL 7.
Learn about the GNU Compiler Collection (GCC) toolchains available in Red Hat Developer Toolset.
Red Hat Developer Toolset 3.0
Installing and Using Red Hat Developer Toolset
Red Hat Software Collections: Red Hat Software Collections (RHSCL) includes frequently updated sets of scripting languages, databases, web servers, and more. These give you common development stacks supporting both RHEL 6 and 7.
A detailed description of the Software Collections for RHEL 7
An introduction to application development tools in Red Hat Enterprise Linux 7
SystemTap Beginners Guide
We now have Dockerfiles for Red Hat Software Collections (RHSCL), helping developers rapidly build and deploy containerized applications. Available for both Red Hat Enterprise Linux 6 and Red Hat Enterprise Linux 7, there are even a few that combine the Apache HTTP Server with your favorite scripting language. These are also now included when you install RHSCL components.
0 notes
Text
A Comprehensive Guide to Kubernetes
Introduction
In the world of container orchestration, Kubernetes stands out as a robust, scalable, and flexible platform. Developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF), Kubernetes has become the go-to solution for managing containerized applications in a distributed environment. Its ability to automate deployment, scaling, and operations of application containers has made it indispensable for modern IT infrastructure.
History and Evolution
Kubernetes, often abbreviated as K8s, originated from Google’s internal project called Borg. Released as an open-source project in 2014, it quickly gained traction due to its rich feature set and active community support. Over the years, Kubernetes has seen several key milestones, including the introduction of StatefulSets, Custom Resource Definitions (CRDs), and the deprecation of Docker as a container runtime in favor of more versatile solutions like containerd and CRI-O.
Core Concepts
Understanding Kubernetes requires familiarity with its core components:
Pods: The smallest deployable units in Kubernetes, representing a single instance of a running process.
Nodes: Worker machines that run containerized applications, managed by the control plane.
Clusters: A set of nodes managed by the Kubernetes control plane.
Services: Abstractions that define a logical set of pods and a policy for accessing them.
Deployments: Controllers that provide declarative updates to applications.
Architecture
Kubernetes' architecture is built around a master-worker model:
Master Node Components:
API Server: Central management entity that receives commands from users and the control plane.
Controller Manager: Oversees various controllers that regulate the state of the cluster.
Scheduler: Assigns work to nodes based on resource availability and other constraints.
Worker Node Components:
Kubelet: Ensures containers are running in a pod.
Kube-proxy: Manages networking for services on each node.
Key Features
Kubernetes offers several powerful features:
Scalability: Easily scale applications up or down based on demand.
Self-healing: Automatically restarts failed containers, replaces and reschedules containers when nodes die, kills containers that don’t respond to user-defined health checks, and doesn’t advertise them to clients until they are ready to serve.
Automated Rollouts and Rollbacks: Roll out changes to your application or its configuration, and roll back changes if necessary.
Secret and Configuration Management: Manage sensitive information such as passwords, OAuth tokens, and ssh keys.
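Scaling, for instance, is a one-line operation, whether manual or automatic; the deployment name below is hypothetical:
$ kubectl scale deployment myapp --replicas=5
$ kubectl autoscale deployment myapp --min=2 --max=10 --cpu-percent=80
The second command creates a HorizontalPodAutoscaler that adds or removes replicas to hold average CPU usage near 80 percent.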
Use Cases
Kubernetes is used across various industries for different applications:
E-commerce: Managing high-traffic websites and applications.
Finance: Ensuring compliance and security for critical financial applications.
Healthcare: Running scalable, secure, and compliant healthcare applications.
Setting Up Kubernetes
For beginners looking to set up Kubernetes, here is a step-by-step guide (a condensed command sketch follows the list):
Install a Container Runtime: Install Docker, containerd, or CRI-O on your machines.
Install Kubernetes Tools: Install kubectl, kubeadm, and kubelet.
Initialize the Control Plane: Use kubeadm init to initialize your master node.
Join Worker Nodes: Use the token provided by the master node to join worker nodes using kubeadm join.
Deploy a Network Add-on: Choose and deploy a network add-on (e.g., Flannel, Calico).
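On the master node, those steps condense roughly to the following; this is a minimal sketch assuming a Debian/Ubuntu host with a container runtime already installed, reusing the Flannel manifest URL that appears earlier in this collection:
$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Then, on each worker, run the kubeadm join command that kubeadm init printed.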
Challenges and Solutions
Adopting Kubernetes comes with challenges, such as complexity, security, and monitoring. Here are some best practices:
Simplify Complexity: Use managed Kubernetes services like Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), or Amazon EKS.
Enhance Security: Regularly update your cluster, use RBAC, and monitor for vulnerabilities.
Effective Monitoring: Utilize tools like Prometheus, Grafana, and ELK stack for comprehensive monitoring.
Future of Kubernetes
Kubernetes continues to evolve, with emerging trends such as:
Serverless Computing: Integration with serverless frameworks.
Edge Computing: Expanding Kubernetes to manage edge devices.
AI and Machine Learning: Enhancing support for AI/ML workloads.
Conclusion
Kubernetes has revolutionized the way we manage containerized applications. Its robust architecture, scalability, and self-healing capabilities make it an essential tool for modern IT infrastructure. As it continues to evolve, Kubernetes promises to remain at the forefront of container orchestration, driving innovation and efficiency in the IT industry.
For more details, visit www.hawkstack.com
#redhatcourses#information technology#container#linux#docker#kubernetes#containerorchestration#containersecurity#dockerswarm#aws
0 notes
Video
youtube
Kubernetes API Tutorial with Examples for Devops Beginners and Students
Hi, a new #video on #kubernetesapi is published on #codeonedigest #youtube channel. Learn #kubernetes #api #kubectl #node #docker #container #cloud #aws #azure #programming #coding with #codeonedigest
@java #java #awscloud @awscloud #aws @AWSCloudIndia #Cloud #CloudComputing @YouTube #youtube #azure #msazure #microsoftazure #kubernetes #kubernetestutorial #kubernetestutorialforbeginners #kubernetesinstallation #kubernetesinterviewquestions #kubernetesexplained #kubernetesorchestrationtutorial #kubernetesoperator #kubernetesoverview #kubernetesnetworkpolicy #kubernetesnetworkpolicyexplained #kubernetesnetworkpolicytutorial #kubernetesnetworkpolicyexample #containernetworkinterface #containernetworkinterfaceKubernetes #containernetworkinterfaceplugin #containernetworkinterfaceazure #containernetworkinterfaceaws #azure #aws #azurecloud #awscloud #orchestration #kubernetesapi #Kubernetesapiserver #Kubernetesapigateway #Kubernetesapipython #Kubernetesapiauthentication #Kubernetesapiversion #Kubernetesapijavaclient #Kubernetesapiclient
#youtube#kubernetes#kubernetes api#kubectl#kubernetes orchestration#kubernetes etcd#kubernetes control plan#master node#node#pod#container#docker
2 notes
·
View notes
Text
Kubernetes API Tutorial with Examples for Devops Beginners and Students
Full Video Link https://youtube.com/shorts/YypaOaS1OSI Hi, a new #video on #kubernetesapi is published on #codeonedigest #youtube channel. Learn #kubernetes #api #kubectl #node #docker #container #cloud #aws #azure #programming #coding with #codeonedige
The Kubernetes API serves as the foundation for the system's declarative configuration schema and acts as the communicator among the different components of Kubernetes. Every action inside your Kubernetes cluster goes through the API. The entire kubectl tool is essentially a wrapper around this API. For example, when you run kubectl apply, you are sending a request that tells the control plane to…
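You can see that wrapper relationship directly: kubectl proxy exposes the authenticated API on localhost, and the same objects kubectl prints are available as plain REST resources:
$ kubectl proxy --port=8001 &
$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
The curl call returns the same pod data that kubectl get pods renders as a table.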
View On WordPress
#aws#aws cloud#azure#azure cloud#container network interface#container network interface aws#container network interface azure#container network interface Kubernetes#container network interface plugin#kubernetes#kubernetes api#Kubernetes api authentication#Kubernetes api gateway#Kubernetes api java client#Kubernetes api python#Kubernetes api server#Kubernetes api version#kubernetes explained#kubernetes installation#kubernetes interview questions#kubernetes network policy#kubernetes network policy example#kubernetes network policy explained#kubernetes network policy tutorial#kubernetes operator#kubernetes orchestration tutorial#kubernetes overview#kubernetes tutorial#kubernetes tutorial for beginners#orchestration
1 note
·
View note
Video
youtube
Kubernetes Node Tutorial for Beginners | Kubernetes Node Explained
Hi, a new #video on #kubernetesnode is published on #codeonedigest #youtube channel. Learn #kubernetes #node #kubectl #docker #controllermanager #programming #coding with codeonedigest
#kubernetesnode #kubernetesnodeport #kubernetesnodeaffinity #kubernetesnodes #kubernetesnodesandpods #kubernetesnodeportvsclusterip #kubernetesnodenotready #kubernetesnodeaffinityvsnodeselector #kubernetesnodeselector #kubernetesnodetaint #kubernetesnodeexporter #kubernetesnodetutorial #kubernetesnodeexplained #kubernetesnodes #kubernetesnodesandpods #kubernetesnodesvspods #kubernetesnodesnotready #kubernetesnodesvscluster #kubernetesnodesvsnamespaces #kubernetesnodesnotreadystatus #kubernetesnodesstatusnotready
#youtube#kubernetes#kubernetes node#kubernetes cluster#kubernetes node management#kubernetes pod#node#pod#cloud
0 notes
Photo

[FREE] Kubernetes for Absolute Beginners on AWS Cloud | Part-2 What you Will learn ? You will learn creating Pods, ReplicaSets, Deployments and Services using kubectl…
0 notes
Text
One year using Kubernetes in production: Lessons learned
In early 2015, after years of running deployments on Amazon EC2, my team at Luminis Technologies was tasked with building a new deployment platform for all our development teams. The AWS-based setup had worked very well for deploying new releases over the years, but the deployment setup, with custom scripts and tooling to automate deployments, wasn’t very easy for teams outside of operations to use—especially small teams that didn’t have the resources to learn all of the details about these scripts and tools. The main issue was that there was no “unit-of-deployment,” and without one, there was a gap between development and operations. The containerization trend was clearly going to change that.
If you haven't bought in to the production readiness of Docker and Kubernetes yet, read about how my team became early adopters. We have now been running Kubernetes in production for over a year.
Starting out with containers and container orchestration tools
I now believe containers are the deployment format of the future. They make it much easier to package an application with its required infrastructure. While tools such as Docker provide the actual containers, we also need tools to take care of things such as replication and failovers, as well as APIs to automate deployments to multiple machines.
The state of clustering tools such as Kubernetes and Docker Swarm was very immature in early 2015, with only early alpha versions available. We still tried using them and started with Docker Swarm.
At first we used it to handle networking on our own with the ambassador pattern and a bunch of scripts to automate the deployments. How hard could it possibly be? That was our first hard lesson: Container clustering, networking, and deployment automation are actually very hard problems to solve.
We realized this quickly enough and decided to bet on another one of the available tools. Kubernetes seemed to be the best choice, since it was being backed by Google, Red Hat, Core OS, and other groups that clearly know about running large-scale deployments.
Load balancing with Kubernetes
When working with Kubernetes, you have to become familiar with concepts such as pods, services, and replication controllers. If you're not already familiar with these concepts, there are some excellent resources available to get up to speed. The Kubernetes documentation is a great place to start, since it has several guides for beginners.
Once we had a Kubernetes cluster up and running, we could deploy an application using kubectl, the Kubernetes CLI, but we quickly found that kubectl wasn't sufficient when we wanted to automate deployments. But first, we had another problem to solve: How to access the deployed application from the Internet?
The service in front of the deployment has an IP address, but this address only exists within the Kubernetes cluster. This means the service isn’t available to the Internet at all! When running on Google Cloud Engine, Kubernetes can automatically configure a load balancer to access the application. If you’re not on GCE (like us), you need to do a little extra legwork to get load balancing working.
It’s possible to expose a service directly on a host machine port—and this is how a lot of people get started—but we found that it voids a lot of Kubernetes' benefits. If we rely on ports in our host machines, we will get into port conflicts when deploying multiple applications. It also makes it much harder to scale the cluster or replace host machines.
A two-step load-balancer setup
We found that a much better approach is to configure a load balancer such as HAProxy or NGINX in front of the Kubernetes cluster. We started running our Kubernetes clusters inside a VPN on AWS and using an AWS Elastic Load Balancer to route external web traffic to an internal HAProxy cluster. HAProxy is configured with a “back end” for each Kubernetes service, which proxies traffic to individual pods.
This two-step load-balancer setup is mostly in response AWS ELB's fairly limited configuration options. One of the limitations is that it can’t handle multiple vhosts. This is the reason we’re using HAProxy as well. Just using HAProxy (without an ELB) could also work, but you would have to work around dynamic AWS IP addresses on the DNS level.
In any case, we needed a mechanism to dynamically reconfigure the load balancer (HAProxy, in our case) whenever new Kubernetes services are created.
The Kubernetes community is currently working on a feature called ingress, which will make it possible to configure an external load balancer directly from Kubernetes. For now, the feature isn’t really usable because it simply isn’t finished. Last year, we used the API and a small open-source tool to configure load balancing instead.
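For reference, this is roughly the shape the ingress resource was taking in the beta API at the time (hostname hypothetical); once the feature is finished, a manifest like this should replace most of the custom wiring described below:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com          # vhost to route on (hypothetical)
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app      # the Kubernetes service to expose
          servicePort: 80
```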
Configuring load balancing
First, we needed a place to store load-balancer configurations. They could be stored anywhere, but because we already had etcd available, we decided to store the load-balancer configurations there. We use a tool called confd to watch configuration changes in etcd and generate a new HAProxy configuration file based on a template. When a new service is added to Kubernetes, we add a new configuration to etcd, which results in a new configuration file for HAProxy.
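The exact key layout in etcd is up to you; ours looked roughly like the sketch below (key paths and values are illustrative, not our real configuration). confd watches the prefix and re-renders haproxy.cfg from a template whenever a key changes:

```yaml
# Stored in etcd per service, e.g.:
#   etcdctl set /loadbalancer/my-app "$(cat my-app-lb.yaml)"
# confd watches the /loadbalancer prefix, regenerates the HAProxy
# configuration from a template, and reloads HAProxy.
hostname: app.example.com   # vhost HAProxy should answer for
servicePort: 80             # Kubernetes service port to proxy to
```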
Kubernetes: Maturing the right way
There are still plenty of unsolved problems in Kubernetes, just as there are in load balancing generally. Many of these issues are recognized by the community, and there are design documents that discuss new features that can solve some of them. But coming up with solutions that work for everyone requires time, which means some of these features can take quite a while before they land in a release. This is a good thing, because it would be harmful in the long term to take shortcuts when designing new functionality.
This doesn’t mean Kubernetes is limited today. Using the API, it’s possible to make Kubernetes do pretty much everything you need it to if you want to start using it today. Once more features land in Kubernetes itself, we can replace custom solutions with standard ones.
After we developed our custom solution for load balancing, our next challenge was implementing an essential deployment technique for us: Blue-green deployments.
Blue-green deployments in Kubernetes
A blue-green deployment is one without any downtime. In contrast to rolling updates, a blue-green deployment works by starting a cluster of replicas running the new version while all the old replicas are still serving all the live requests. Only when the new set of replicas is completely up and running is the load-balancer configuration changed to switch the load to the new version. A benefit of this approach is that there’s always only one version of the application running, reducing the complexity of handling multiple concurrent versions. Blue-green deployments also work better when the number of replicas is fairly small.
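In Kubernetes terms, this means running the two versions as separate replication controllers, distinguished by a label; a sketch (names and labels hypothetical):

```yaml
# The "green" set for the new version, started alongside the existing
# "blue" set. The load balancer keeps routing to pods labeled
# version: blue until the green pods pass their health checks; then it
# is switched to version: green and the blue controller is deleted.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-app-green
spec:
  replicas: 3
  selector:
    app: my-app
    version: green
  template:
    metadata:
      labels:
        app: my-app
        version: green
    spec:
      containers:
      - name: my-app
        image: example/my-app:2.0   # the new version (hypothetical)
        ports:
        - containerPort: 8080
```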
An important aspect of this mechanism is the health checking performed on the pods before the load balancer is reconfigured; we built a small tool, the Deployer, to orchestrate this switch. We wanted each deployed component to provide a health check, so we now typically add an HTTP health check to each application component.
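Declared on a pod, such a check might look like this (the /health path and port are assumptions, not a fixed convention):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-example
  labels:
    app: my-app
spec:
  containers:
  - name: my-app
    image: example/my-app:2.0       # hypothetical image
    ports:
    - containerPort: 8080
    # The deployment tooling polls this endpoint before switching the
    # load balancer; Kubernetes can also use it as a liveness probe so
    # unhealthy containers are restarted automatically.
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 15
      timeoutSeconds: 1
```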
Making the deployments automatic
With the Deployer in place, we were able to hook up deployments to a build pipeline. Our build server can, after a successful build, push a new Docker image to a registry such as Docker Hub. Then the build server can invoke the Deployer to automatically deploy the new version to a test environment. The same image can be promoted to production by triggering the Deployer on the production environment.
Know your resource constraints
Knowing our resource constraints was critical when we started using Kubernetes. You can configure resource requests and limits (CPU and memory) on each pod: requests are the guaranteed share the scheduler reserves, while limits cap how far a container can burst beyond that.
These settings are extremely important for running multiple containers together efficiently. When we didn't set them correctly, containers would regularly crash because they couldn't allocate enough memory.
Start early with setting and testing constraints. Without constraints, everything will still run fine, but you'll get a big, unpleasant surprise when you put any serious load on one of the containers.
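A sketch of what these settings look like on a container (the numbers are purely illustrative, not recommendations):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: example/my-app:1.0
    resources:
      requests:           # guaranteed share, used for scheduling decisions
        cpu: 100m         # a tenth of a CPU core
        memory: 256Mi
      limits:             # hard cap; exceeding the memory limit gets the
        cpu: 500m         # container killed rather than slowing the node
        memory: 512Mi
```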
How we monitored Kubernetes
When we had Kubernetes mostly set up, we quickly realized that monitoring and logging would be crucial in this new, dynamic environment. Logging into a server to look at log files just doesn’t work anymore when you're dealing with a large number of replicas and nodes. As soon as you start using Kubernetes, you should also have a plan to build centralized logging and monitoring.
Logging
There are plenty of open-source tools available for logging. We decided to use Graylog—an excellent tool for logging—and Apache Kafka, a messaging system to collect and digest logs from our containers. The containers send logs to Kafka, and Kafka hands them off to Graylog for indexing. We chose to make the application components send logs to Kafka themselves so that we could stream logs in an easy-to-index format. Alternatively, there are tools that retrieve logs from outside the container and forward them to a logging solution.
Monitoring
Kubernetes does an excellent job of recovering when there's an error. When pods crash for any reason, Kubernetes will restart them. When components run replicated, end users probably won't even notice a problem. Kubernetes recovery works so well that we have had situations where our containers would crash multiple times a day because of a memory leak, without anyone (including ourselves) noticing it.
Although this is great from the perspective of Kubernetes, you probably still want to know whenever there’s a problem. We use a custom health-check dashboard that monitors the Kubernetes nodes, individual pods—using application-specific health checks—and other services such as data stores. To implement a dashboard such as this, the Kubernetes API proves to be extremely valuable again.
We also thought it was important to measure load, throughput, application errors, and other stats. Again, the open-source space has a lot to offer. Our application components post metrics to an InfluxDB time-series store. We also use Heapster to gather Kubernetes metrics. The metrics stored in InfluxDB are visualized in Grafana, an open-source dashboard tool. There are a lot of alternatives to the InfluxDB/Grafana stack, and any one of them will provide a lot of value toward keeping track of how things are running.
Data stores and Kubernetes
A question that many new Kubernetes users ask is “How should I handle my data stores with Kubernetes?”
When running a data store such as MongoDB or MySQL, you most likely want the data to be persistent. Out of the box, containers lose their data when they restart. This is fine for stateless components, but not for a persistent data store. Kubernetes has the concept of volumes to work with persistent data.
A volume can be backed by a variety of implementations, including files on the host machines, AWS Elastic Block Store (EBS), and NFS. When we researched the question of persistent data, this provided a good answer, but it wasn't yet an answer for our running data stores.
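As a sketch, mounting a pre-provisioned EBS volume into a single-node data store pod looks like this (the volume ID is made up):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mongo-single
spec:
  containers:
  - name: mongo
    image: mongo:3.2
    volumeMounts:
    - name: data
      mountPath: /data/db        # where MongoDB keeps its files
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0abc1234     # pre-provisioned EBS volume (illustrative)
      fsType: ext4
```

This keeps the data across pod restarts, but it says nothing yet about replicated setups, which is where the real trouble starts.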
Replication issues
In most deployments, the data stores also run replicated: Mongo typically runs in a replica set, and MySQL could be running in primary/replica mode. This introduces a few problems. First of all, it’s important that each node in the data store’s cluster is backed by a different volume; writing to the same volume will lead to data corruption. Another issue is that most data stores require precise configuration to get the clustering up and running; auto-discovery and configuration of nodes are not common.
At the same time, a machine that runs a data store is often specifically tuned for that type of workload. Higher IOPS could be one example. Scaling (adding/removing nodes) is an expensive operation for most data stores as well. All these things don’t match very well with the dynamic nature of Kubernetes deployments.
The decision not to use Kubernetes for running data stores in production
This brings us to a situation where we found that the benefits of running a data store inside Kubernetes are limited: the dynamics that Kubernetes gives us can’t really be used, and the setup is much more complex than most Kubernetes deployments.
Because of this, we are not running our production data stores inside Kubernetes. Instead, we set up these clusters manually on different hosts, with all the tuning necessary to optimize the data store in question. Our applications running inside Kubernetes just connect to the data store cluster like normal. The important lesson is that you don’t have to run everything in Kubernetes once you have Kubernetes. Besides data stores and our HAProxy servers, everything else does run in Kubernetes, though, including our monitoring and logging solutions.
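One pattern worth noting here (a sketch with made-up addresses): a selector-less service plus a manually managed Endpoints object gives in-cluster applications a stable name for a data store that lives outside Kubernetes:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306          # no selector: Kubernetes won't manage endpoints
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mysql           # must match the service name
subsets:
- addresses:
  - ip: 10.0.5.11       # externally managed database hosts (illustrative)
  - ip: 10.0.5.12
  ports:
  - port: 3306
```

Applications then simply connect to mysql:3306, and the manually maintained endpoints point them at the external cluster.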
Why we're excited about our next year with Kubernetes
Looking at our deployments today, Kubernetes is absolutely fantastic. The Kubernetes API is a great tool when it comes to automating a deployment pipeline. Deployments are not only more reliable, but also much faster, because we’re no longer dealing with VMs. Our builds and deployments have become more reliable because it’s easier to test and ship containers.
We see now that this new way of deployment was necessary to keep up with other development teams around the industry that are pushing out deployments much more often and lowering their overhead for doing so.
Cost calculation
Looking at costs, there are two sides to the story. To run Kubernetes, an etcd cluster is required, as well as a master node. While these are not necessarily expensive components to run, this overhead can be relatively expensive when it comes to very small deployments. For these types of deployments, it’s probably best to use a hosted solution such as Google's Container Service.
For larger deployments, it’s easy to save a lot on server costs. In these deployments, the overhead of running etcd and a master node isn’t significant. Kubernetes makes it very easy to run many containers on the same hosts, making maximum use of the available resources. This reduces the number of required servers, which directly saves you money. If running Kubernetes sounds great but the ops side of running such a cluster seems less attractive, there are a number of hosted services to look at, including Cloud RTI, which is what my team is working on.
A bright future for Kubernetes
Running Kubernetes in a pre-release version was challenging, and keeping up with (breaking) new releases was almost impossible at times. Development of Kubernetes has been happening at light speed over the past year, and the community has grown into a legitimate powerhouse of dev talent. It’s hard to believe how much progress has been made in just over a year.

[Source: https://techbeacon.com/devops/one-year-using-kubernetes-production-lessons-learned]