#kubernetes labels and selectors example
fromdevcom · 6 months ago
Are you exploring Kubernetes or considering using it? If so, you will be glad you landed on this article about a little-pondered but crucial component of Kubernetes: labels and annotations. As we know, Kubernetes is an orchestration tool used to manage containerized applications. In this article, we will look at why labels and annotations are so important for managing containerized applications.

Introduction to Labels and Annotations

Labels are used in configuration files to specify attributes of objects that are meaningful and relevant to the user, especially for grouping, viewing, and performing operations. Labels can be used in both the spec and metadata sections.

Annotations, on the other hand, provide a place to store non-identifying metadata, which may be used to elaborate on the context of an object.

The following are some recommended Kubernetes labels:

name: the name of the application
instance: a unique name identifying the instance
version: the semantic version number
component: the component within your logical architecture
part-of: the name of the higher-level application this object is part of
managed-by: the tool or person managing it

Note: Labels and selectors work together to identify groups of relevant resources. This lookup must be efficient because selectors are used to query labels. To keep queries efficient, labels are restricted by RFC 1123: among other restrictions, they are limited to a maximum of 63 characters. When you want Kubernetes to group a set of relevant resources, you should use labels.

Labels

Let's dive in a little further. Labels are key-value pairs attached to objects such as pods. They are used to specify identifying attributes of objects that are relevant to users. They do not affect the semantics of the core system in any way - labels are only used to organize subsets of objects. Labels can be created and queried with kubectl, as shown in the sketch below.

Now, the question that arises is this: why should we use labels in Kubernetes? This is the question most people ask, and it is one of the two main questions of this article.

The main benefits of using labels in Kubernetes include organizing Kubernetes workloads in the clusters, mapping our own organizational structures onto system objects, selecting or querying a specific item, grouping objects and accessing them when required, and enabling us to select exactly what we want to operate on with kubectl.

You can use labels and annotations to attach metadata to objects. Labels, in particular, can be used to select objects and find objects or collections of objects that satisfy certain criteria. Annotations, however, are not used for identification. Let's look at annotations a bit more.

Annotations

Now, let us turn to the other question this article wishes to address: why do we use annotations in Kubernetes?

As an example, suppose we annotate an object with the link to the image registry it was built from. If anything changes, we can track the changes through the URL recorded in the annotation. This is how we use annotations. One of the most frequently used analogies for explaining the need for annotations is storing phone numbers alongside names in a contact list. The main benefits of using annotations in Kubernetes include elaborating on the context at hand, tracking changes, communicating scheduling hints to the Kubernetes scheduler, and keeping track of the replicas we have created.
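To make this concrete, here is a minimal sketch of attaching labels with kubectl and declaring labels and annotations in a manifest. The pod name, label values, and annotation are illustrative, not from the original article:

# attach and query labels with kubectl
kubectl label pod sample-pod env=dev
kubectl get pods -l env=dev

# the same label plus a descriptive annotation in a manifest
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
  labels:
    env: dev
  annotations:
    imageregistry: "https://hub.docker.com/"
spec:
  containers:
  - name: nginx
    image: nginx:1.25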
Kubernetes scheduling is the process of assigning pods to matched nodes in a cluster. The scheduler watches for newly created pods and assigns them to the best possible node based on scheduling principles and configuration options. In addition, annotations help us in deployments so that we can track ReplicaSets, they help DevOps teams record and share useful context, and, unlike labels, they can hold arbitrary, non-identifying data. Some examples of information that can be recorded in annotations are fields managed by declarative configuration; build, release, or image information; pointers to logging; client libraries and tools; user or system provenance information; tool metadata; and directives from end-users to implementations.

Conclusion

In this article, we first went through the basic concept of what labels and annotations are. Then, we used kubectl, the most powerful and easy-to-use command-line tool for Kubernetes; kubectl helps us query data in our Kubernetes cluster. As you can see, labels and annotations play a key role in Kubernetes deployments. They not only let us add more information to our configuration files but also help other teams, especially DevOps teams, understand more about your files and their use in managing these applications. Thanks for reading. Let's keep building stuff together and learn a whole lot more every day! Stay tuned for more on Kubernetes, and happy learning!
qcs01 · 10 months ago
How to migrate your app to Kubernetes containers in GCP?
Migrating your application to Kubernetes containers in Google Cloud Platform (GCP) involves several steps. Here is a comprehensive guide to help you through the process:
Step 1: Prepare Your Application
1. Containerize Your Application:
Ensure your application is suitable for containerization. Break down monolithic applications into microservices if necessary.
Create a Dockerfile for each part of your application. This file will define how your application is built and run inside a container.
# Example Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
2. Build and Test Containers:
Build your Docker images locally and test them to ensure they run as expected.
docker build -t my-app .
docker run -p 5000:5000 my-app
Step 2: Set Up Google Cloud Platform
1. Create a GCP Project:
If you don’t have a GCP project, create one via the GCP Console.
2. Install and Configure gcloud CLI:
Install the Google Cloud SDK and initialize it.
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
3. Enable Required APIs:
Enable Kubernetes Engine API and other necessary services.
gcloud services enable container.googleapis.com
Step 3: Create a Kubernetes Cluster
1. Create a Kubernetes Cluster:
Use the gcloud CLI to create a Kubernetes cluster.
gcloud container clusters create my-cluster --zone us-central1-a
2. Get Cluster Credentials:
Retrieve the credentials to interact with your cluster.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
Step 4: Deploy to Kubernetes
1. Push Docker Images to Google Container Registry (GCR):
Tag and push your Docker images to GCR.
docker tag my-app gcr.io/your-project-id/my-app
docker push gcr.io/your-project-id/my-app
2. Create Kubernetes Deployment Manifests:
Create YAML files for your Kubernetes deployments and services.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/your-project-id/my-app
        ports:
        - containerPort: 5000
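Step 3 below also applies a service.yaml, which the post does not show. A minimal sketch, assuming the app should be exposed through a load balancer and that the container listens on port 5000:

# service.yaml (a hedged sketch, not from the original post)
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 5000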
3. Deploy to Kubernetes:
Apply the deployment and service configurations to your cluster.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Step 5: Manage and Monitor
1. Monitor Your Deployment:
Use Kubernetes Dashboard or other monitoring tools like Prometheus and Grafana to monitor your application.
2. Scale and Update:
Scale your application as needed and update your deployments using Kubernetes rolling updates.
kubectl scale deployment my-app --replicas=5
kubectl set image deployment/my-app my-app=gcr.io/your-project-id/my-app:v2
Additional Tips
Use Helm: Consider using Helm for managing complex deployments.
CI/CD Integration: Integrate with CI/CD tools like Jenkins, GitHub Actions, or Google Cloud Build for automated deployments.
Security: Ensure your images are secure and scanned for vulnerabilities. Use Google Cloud’s security features to manage access and permissions.
By following these steps, you can successfully migrate your application to Kubernetes containers in Google Cloud Platform, ensuring scalability, resilience, and efficient management of your workloads.
For more details, visit www.hawkstack.com
thedebugdiary · 2 years ago
A Minimal Guide to Deploying MLflow 2.6 on Kubernetes
Introduction
Deploying MLflow on Kubernetes can be a straightforward process if you know what you're doing. This blog post aims to provide a minimal guide to get you up and running with MLflow 2.6 on a Kubernetes cluster. We'll use the namespace my-space for this example.
Prerequisites
A running Kubernetes cluster
kubectl installed and configured to interact with your cluster
Step 1: Create the Deployment YAML
Create a file named mlflow-minimal-deployment.yaml and paste the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: my-space
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-server
  namespace: my-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mlflow-server
  template:
    metadata:
      labels:
        app: mlflow-server
      name: mlflow-server-pod
    spec:
      containers:
      - name: mlflow-server
        image: ghcr.io/mlflow/mlflow:v2.6.0
        command: ["mlflow", "server"]
        args: ["--host", "0.0.0.0", "--port", "5000"]
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-service
  namespace: my-space
spec:
  selector:
    app: mlflow-server
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
Step 2: Apply the Deployment
Apply the YAML file to create the deployment and service:
kubectl apply -f mlflow-minimal-deployment.yaml
Step 3: Verify the Deployment
Check if the pod is running:
kubectl get pods -n my-space
Step 4: Port Forwarding
To access the MLflow server from your local machine, you can use Kubernetes port forwarding:
kubectl port-forward -n my-space deploy/mlflow-server 5000:5000
After running this command, you should be able to access the MLflow server at http://localhost:5000 from your web browser.
Step 5: Access MLflow within the Cluster
The cluster-internal URL for the MLflow service would be:
http://mlflow-service.my-space.svc.cluster.local:5000
You can use this tracking URL in other services within the same Kubernetes cluster, such as Kubeflow, to log your runs.
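For example, a minimal sketch of wiring another pod to this tracking server through MLflow's standard MLFLOW_TRACKING_URI environment variable (the client pod name, image choice, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: mlflow-client   # illustrative name
  namespace: my-space
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: ghcr.io/mlflow/mlflow:v2.6.0   # any image with mlflow installed
    env:
    - name: MLFLOW_TRACKING_URI
      value: http://mlflow-service.my-space.svc.cluster.local:5000
    # log a single demo metric against the in-cluster tracking server
    command: ["python", "-c", "import mlflow; mlflow.log_metric('demo', 1.0)"]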
Troubleshooting Tips
Pod not starting: Check the logs using kubectl logs -n my-space deploy/mlflow-server (Pods created by a Deployment get generated names, so address them through the Deployment or with the -l app=mlflow-server selector).
Service not accessible: Make sure the service is running using kubectl get svc -n my-space.
Port issues: Ensure that the port 5000 is not being used by another service in the same namespace.
Conclusion
Deploying MLflow 2.6 on Kubernetes doesn't have to be complicated. This guide provides a minimal setup to get you started. Feel free to expand upon this for your specific use-cases.
codeonedigest · 2 years ago
Kubernetes Labels and Selectors Tutorial for Beginners
Hi, a new #video on #kubernetes #labels and #selectors is published on #codeonedigest #youtube channel. Learn kubernetes #labelsandselectors #apiserver #kubectl #docker #proxyserver #programming #coding with #codeonedigest #kuberneteslabelsandselectors
Kubernetes Labels are key-value pairs attached to pods, replication controllers, and services. They are used as identifying attributes for objects such as pods and replication controllers. They can be added to an object at creation time and can be added or modified at run time. Kubernetes selectors allow us to select Kubernetes resources based on the value of labels and resource…
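As a quick illustration (resource names here are illustrative), equality-based and set-based label selectors with kubectl look like this:

# pods whose app label equals nginx
kubectl get pods -l app=nginx

# pods whose env label is either dev or qa (set-based selector)
kubectl get pods -l 'env in (dev,qa)'

# attach a label to a running pod
kubectl label pod my-pod tier=frontend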
engineering · 6 years ago
Docker Registry Pruner release!
tl;dr: We are open-sourcing a new tool to apply retention policies to Docker images stored in a Docker Registry: ✨tumblr/docker-registry-pruner✨.
At Tumblr, we have been leaning into containerization of workloads for a number of years. One of the most critical components for a Docker-based build and deployment pipeline is the Registry. Over the past 5+ years, we have built a huge amount of Docker containers. We are constantly shipping new updates and building new systems that deprecate others. Some of our repos can have 100s of commits a day, each creating a new image via our CI/CD pipeline. Because of this rapid churn, we create a ton of Docker images; some of them are in production, others have been deprecated and are no longer in use. These images accumulate in our Registry, eating up storage space and slowing down Registry metadata operations.
Images can range from a few hundred MB to a few GB; over time, this can really add up to serious storage utilization. In order to reclaim space and keep the working set of images in our registry bounded, we created, and are now open-sourcing, the ✨tumblr/docker-registry-pruner✨! This tool allows you to specify retention policies for images in an API v2 compatible registry. Through a declarative configuration, the tool will match images and tags via regex, and then retain images by applying retention policies. Example policies could be something like keeping the last N days of images, keeping the latest N images, or keeping the last N versions (semantically sorted via semantic versioning).
Configuration Format
A more precise definition of how the tool allows you to select images, tags, and retention policies is below. A config is made up of registry connection details and a list of rules. Each rule is a combination of at least 1 selector and an action.
Selectors
A selector is a predicate that images must satisfy to be considered by the Action for deletion.
repos: a list of repositories to apply this rule to. This is literal string matching, not regex (e.g. tumblr/someservice)
labels is a map of Docker labels that must be present on the Manifest. You can set these in your Dockerfiles with LABEL foo=bar. This is useful to create blanket rules for image retention that allow image owners to opt in to cleanups on their own.
match_tags: a list of regular expressions. Any matching image will have the rule action evaluated against it (e.g. ^v\d+)
ignore_tags: a list of regular expressions. Any matching image will explicitly not be evaluated, even if it would have matched match_tags
NOTE: the ^latest$ tag is always implicitly inherited into ignore_tags.
Actions
You must provide one action, either keep_versions, keep_recent, or keep_days. Images that match the selector and fail the action predicate will be marked for deletion.
keep_versions: Retain the latest N versions of this image, as defined by semantic version ordering. This requires that your tags use semantic versioning.
keep_days: Retain the only images that have been created in the last N days, ordered by image modified date.
keep_recent: Retain the latest N images, ordered by the image's last modified date.
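Putting selectors and actions together, a config might look something like the sketch below. The field names follow the descriptions above, but the exact schema (especially the registry connection block) is an assumption; see the repo's example.yaml for the authoritative format:

# a hedged sketch of a docker-registry-pruner rules config
registry:
  url: https://registry.example.com   # illustrative connection details
rules:
  # keep the 10 most recent semantic versions of tumblr/someservice
  - repos:
      - tumblr/someservice
    match_tags:
      - '^v\d+'
    keep_versions: 10
  # keep 30 days of images for any repo that opts in via a Docker label
  - labels:
      cleanup: "true"
    keep_days: 30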
You can see the documentation for more details, or check out the example.yaml configuration!
Tumblr uses this tool to (via Kubernetes CronJob) periodically scan and prune unneeded images from a variety of Docker repos. Hopefully this tool will help other organizations manage the sprawl of Docker images caused by rapid development and CI/CD, as well!
computingpostcom · 3 years ago
The standard way of exposing applications running on a set of Pods in a Kubernetes cluster is by using a Service resource. Each Pod in Kubernetes has its own IP address, but a set of Pods can share a single DNS name. Kubernetes is able to load-balance traffic across the Pods without any modification in the application layer. A Service is by default assigned an IP address (sometimes called the "cluster IP"), which is used by the Service proxies. A Service identifies a set of Pods using label selectors.

What is an Ingress Controller?

Before you can answer this question, an understanding of the Ingress object in Kubernetes is important. The official Kubernetes documentation defines an Ingress as: an API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.

An Ingress in Kubernetes exposes HTTP and HTTPS routes from outside the cluster to services running within the cluster. All the traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to:

Provide Services with externally-reachable URLs
Load balance traffic coming into cluster services
Terminate SSL / TLS traffic
Provide name-based virtual hosting in Kubernetes

An Ingress controller is what fulfils the Ingress, usually with a load balancer. For standard HTTP and HTTPS traffic, an Ingress controller will be configured to listen on ports 80 and 443, and it should bind to an IP address from which the cluster will receive traffic. A wildcard DNS record for the domain used for Ingress routes will point to the IP address(es) the Ingress controller listens on.

Kubernetes adopts a BYOS (Bring-Your-Own-Software) approach to most of its add-ons and doesn't provide software that performs Ingress functions out of the box. You can choose from the many Ingress controllers available; Kubedex does a good job of summarizing the list of Ingresses available for Kubernetes. With all the basics on Kubernetes Services and Ingress covered, we can now plunge into the actual installation of the NGINX Ingress Controller on Kubernetes.

#1) Deploy Nginx Ingress Controller in Kubernetes

We shall consider two major deployment options, captured in the next sections.

Option 1: Install Nginx Ingress Controller in Kubernetes without Helm

With this method you'll manually download and run deployment manifests using the kubectl command-line tool.

Step 1: Install git, curl and wget tools

Install git, curl and wget in the bastion host where kubectl is installed and configured:

# CentOS / RHEL / Fedora / Rocky
sudo yum -y install wget curl git

# Debian / Ubuntu
sudo apt update
sudo apt install wget curl git

Step 2: Apply Nginx Ingress Controller manifest

The deployment process varies depending on your Kubernetes setup. My cluster will use the bare-metal Nginx Ingress deployment guide. For other Kubernetes clusters, including managed clusters, refer to the respective guides: microk8s, minikube, AWS, GCE - GKE, Azure, Digital Ocean, Scaleway, Exoscale, Oracle Cloud Infrastructure, Bare-metal.

The bare-metal method applies to any Kubernetes cluster deployed on bare metal with a generic Linux distribution (such as CentOS, Ubuntu, Debian, Rocky Linux), etc.
Download the Nginx controller deployment for bare metal:

controller_tag=$(curl -s https://api.github.com/repos/kubernetes/ingress-nginx/releases/latest | grep tag_name | cut -d '"' -f 4)
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/$controller_tag/deploy/static/provider/baremetal/deploy.yaml

Rename the deployment file:

mv deploy.yaml nginx-ingress-controller-deploy.yaml

Feel free to check the file contents and modify where you see fit:

vim nginx-ingress-controller-deploy.yaml

Apply the Nginx ingress controller deployment manifest:
$ kubectl apply -f nginx-ingress-controller-deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

You can update your current context to use the ingress-nginx namespace:

kubectl config set-context --current --namespace=ingress-nginx

Run the following command to check if the ingress controller pods have started:

$ kubectl get pods -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --watch
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create--1-hpkzp     0/1     Completed   0          43s
ingress-nginx-admission-patch--1-qnjlj      0/1     Completed   1          43s
ingress-nginx-controller-644555766d-snvqf   1/1     Running     0          44s

Once the ingress controller pods are running, you can cancel the command by typing Ctrl+C.

If you want to run multiple Nginx Ingress pods, you can scale with the command below:

$ kubectl -n ingress-nginx scale deployment ingress-nginx-controller --replicas 2
deployment.apps/ingress-nginx-controller scaled

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS              RESTARTS   AGE
ingress-nginx-admission-create-lj278        0/1     Completed           0          18m
ingress-nginx-admission-patch-zsjkp         0/1     Completed           0          18m
ingress-nginx-controller-6dc865cd86-n474n   0/1     ContainerCreating   0          119s
ingress-nginx-controller-6dc865cd86-tmlgf   1/1     Running             0          18m

Option 2: Install Nginx Ingress Controller Kubernetes using Helm

If you opt for the Helm installation method, follow the steps provided in this section.

Step 1: Install helm 3 on your workstation

Install helm 3 on your workstation where kubectl is installed and configured:

cd ~/
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

The installer script works for both Linux and macOS operating systems. Here is a successful installation output:

Downloading https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz
Verifying checksum... Done.
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

Let's query the helm package version to validate a working installation:

$ helm version
version.BuildInfo{Version:"v3.8.1", GitCommit:"5cb9af4b1b271d11d7a97a71df3ac337dd94ad37", GitTreeState:"clean", GoVersion:"go1.17.5"}

Step 2: Deploy Nginx Ingress Controller

Download the latest stable release of the Nginx Ingress Controller code:

controller_tag=$(curl -s https://api.github.com/repos/kubernetes/ingress-nginx/releases/latest | grep tag_name | cut -d '"' -f 4)
wget https://github.com/kubernetes/ingress-nginx/archive/refs/tags/$controller_tag.tar.gz

Extract the downloaded file:

tar xvf $controller_tag.tar.gz

Switch to the directory created:

cd ingress-nginx-$controller_tag

Change your working directory to the charts folder:

cd charts/ingress-nginx/
Create the namespace:

kubectl create namespace ingress-nginx

Now deploy the Nginx Ingress Controller using the following command:

helm install -n ingress-nginx ingress-nginx -f values.yaml .

Sample deployment output:

NAME: ingress-nginx
LAST DEPLOYED: Thu Nov  4 02:50:28 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace ingress-nginx get services -o wide -w ingress-nginx-controller'

An example Ingress that makes use of the controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  ingressClassName: example-class
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: exampleService
            port:
              number: 80
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - www.example.com
    secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

apiVersion: v1
kind: Secret
metadata:
  name: example-tls
  namespace: foo
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
type: kubernetes.io/tls

Check the status of all resources in the ingress-nginx namespace:

kubectl get all -n ingress-nginx

Checking running Pods in the namespace:

$ kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-6f5844d579-hwrqn   1/1     Running   0          23m

To check logs in the Pods, use the command:

$ kubectl -n ingress-nginx logs deploy/ingress-nginx-controller
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.0.4
  Build:         9b78b6c197b48116243922170875af4aa752ee59
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.9
-------------------------------------------------------------------------------
W1104 00:06:59.684972       7 client_config.go:615] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1104 00:06:59.685080       7 main.go:221] "Creating API client" host="https://10.96.0.1:443"
I1104 00:06:59.694832       7 main.go:265] "Running in Kubernetes cluster" major="1" minor="22" git="v1.22.2" state="clean" commit="8b5a19147530eaac9476b0ab82980b4088bbc1b2" platform="linux/amd64"
I1104 00:06:59.937097       7 main.go:104] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1104 00:06:59.956498       7 ssl.go:531] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1104 00:06:59.975510       7 nginx.go:253] "Starting NGINX Ingress controller"
I1104 00:07:00.000753       7 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5aea2f36-fdf2-4f5c-96ff-6a5cbb0b5b82", APIVersion:"v1", ResourceVersion:"13359975", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1104 00:07:01.177639       7 nginx.go:295] "Starting NGINX process"
I1104 00:07:01.177975       7 leaderelection.go:243] attempting to acquire leader lease ingress-nginx/ingress-controller-leader...
I1104 00:07:01.178194       7 nginx.go:315] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1104 00:07:01.180652       7 controller.go:152] "Configuration changes detected, backend reload required"
I1104 00:07:01.197509       7 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader
I1104 00:07:01.197857       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-6f5844d579-hwrqn"
I1104 00:07:01.249690       7 controller.go:169] "Backend successfully reloaded"
I1104 00:07:01.249751       7 controller.go:180] "Initial sync, sleeping for 1 second"
I1104 00:07:01.249999       7 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-6f5844d579-hwrqn", UID:"d6a2e95f-eaaa-4d6a-85e2-bcd25bf9b9a3", APIVersion:"v1", ResourceVersion:"13364867", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration

To follow the logs as they stream, run:

kubectl -n ingress-nginx logs deploy/ingress-nginx-controller -f

Upgrading the Helm Release

I'll set the replica count of the controller Pods to 3:

$ vim values.yaml
controller:
  replicaCount: 3

We can confirm we currently have one Pod:

$ kubectl -n ingress-nginx get deploy
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   1/1     1            1           43m

Upgrade the ingress-nginx release by running the following helm command:

$ helm upgrade -n ingress-nginx ingress-nginx -f values.yaml .
Release "ingress-nginx" has been upgraded. Happy Helming!
NAME: ingress-nginx
LAST DEPLOYED: Thu Nov  4 03:35:41 2021
NAMESPACE: ingress-nginx
STATUS: deployed
REVISION: 5
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.

Check the current number of pods after the upgrade:

$ kubectl -n ingress-nginx get deploy
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
ingress-nginx-controller   3/3     3            3           45m

Uninstalling the Chart

To remove the nginx ingress controller and all its associated resources deployed by Helm, execute the command below in your terminal:

$ helm -n ingress-nginx uninstall ingress-nginx
release "ingress-nginx" uninstalled

#2) Configure Nginx Ingress Controller external connectivity

With the Ingress Controller installed, we need to configure an external connectivity method. For this we have two major options.

Option 1: Using a Load Balancer (highly recommended)

A load balancer is used to expose an application running in a Kubernetes cluster to the external network. It provides a single IP address that routes incoming requests to the Ingress controller. To successfully create Kubernetes services of type LoadBalancer, you need a load balancer implementation inside or outside Kubernetes. When a service is deployed in a cloud environment, a load balancer is available to your service by default, and the Ingress service should get the LB IP address automatically. For bare-metal installations you'll need to deploy a load balancer implementation for Kubernetes; we recommend MetalLB. Use the guide below to install it:

Deploy MetalLB Load Balancer on Kubernetes Cluster

1. Setting Nginx Ingress to use MetalLB

Check the Nginx Ingress service:

$ kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.108.4.75      <none>        80:30084/TCP,443:30540/TCP   5m55s
ingress-nginx-controller-admission   ClusterIP   10.105.200.185   <none>        443/TCP                      5m55s

If your Kubernetes cluster is a "real" cluster that supports services of type LoadBalancer, it will have allocated an external IP address or FQDN to the ingress controller. Use the following command to see that IP address or FQDN:

$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.108.4.75   <none>        80:30084/TCP,443:30540/TCP   7m53s

Patch the ingress-nginx-controller service by setting the service type to LoadBalancer:
kubectl -n ingress-nginx patch svc ingress-nginx-controller --type='json' -p '[{"op":"replace","path":"/spec/type","value":"LoadBalancer"}]'
Confirm the patch succeeded:

service/ingress-nginx-controller patched

The service is assigned an IP address automatically from the address pool configured in MetalLB:

$ kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.108.4.75   192.168.1.30   80:30084/TCP,443:30540/TCP   10m

2. Mapping a DNS name for Nginx Ingresses to the LB IP

We can create a domain name, preferably a wildcard, for use when creating Ingress routes in Kubernetes. In our cluster, we have k8s.example.com as the base domain and use the wildcard domain *.k8s.example.com for Ingress. The mapping is *.k8s.example.com pointing to IP address 192.168.1.30 (the Nginx Ingress LB IP).

3. Deploy Services to test Nginx Ingress functionality

Create a temporary namespace called demo:

kubectl create namespace demo

Create the test Pods and Services YAML file:

cd ~/
vim demo-app.yml

Paste the data below into the file:

kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
  - name: apple-app
    image: hashicorp/http-echo
    args:
    - "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
  - port: 5678 # Default port for image
---
kind: Pod
apiVersion: v1
metadata:
  name: banana-app
  labels:
    app: banana
spec:
  containers:
  - name: banana-app
    image: hashicorp/http-echo
    args:
    - "-text=banana"
---
kind: Service
apiVersion: v1
metadata:
  name: banana-service
spec:
  selector:
    app: banana
  ports:
  - port: 5678 # Default port for image

Create the pod and service objects:

$ kubectl apply -f demo-app.yml -n demo
pod/apple-app created
service/apple-service created
pod/banana-app created
service/banana-service created

Test if it is working:

$ kubectl get pods -n demo
NAME         READY   STATUS    RESTARTS   AGE
apple-app    1/1     Running   0          2m53s
banana-app   1/1     Running   0          2m52s

$ kubectl -n demo logs apple-app
2022/09/03 23:21:19 Server is listening on :5678

Create an Ubuntu pod that will be used to test the service connection.
cat
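A hedged sketch of an Ingress resource routing to the two demo services above (the hostname follows the wildcard domain pattern from step 2; this manifest is illustrative, not from the original post):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo
spec:
  ingressClassName: nginx
  rules:
  - host: fruits.k8s.example.com   # illustrative host under *.k8s.example.com
    http:
      paths:
      - path: /apple
        pathType: Prefix
        backend:
          service:
            name: apple-service
            port:
              number: 5678
      - path: /banana
        pathType: Prefix
        backend:
          service:
            name: banana-service
            port:
              number: 5678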
swarnalata31techiio · 3 years ago
Using Daemonset with Kubernetes
Definition of Kubernetes DaemonSet

Kubernetes makes sure that an application has ample resources, runs reliably, and maintains high availability throughout its lifecycle. The location of the app within the cluster is not a priority.

A DaemonSet is typically described using a YAML file. The fields in the YAML file give you added control over the Pod deployment process. A good example is utilizing labels to start specific Pods on a limited subset of nodes.

What is a Kubernetes DaemonSet?

A DaemonSet is normally defined with a YAML file, and the fields in the YAML file give the user additional control over how the Pods are deployed. Pods are the simplest objects deployed in Kubernetes; each represents a single instance of an executable process in the cluster. A Pod contains one or more containers that are managed as a single resource.

A DaemonSet is a dynamic object in Kubernetes managed by a controller. The user declares a desired state, which says that a particular Pod needs to exist on every node. The reconciliation control loop compares the current observed state with the desired state. If a node does not have a matching Pod, the DaemonSet controller creates one automatically. This automatic process covers both newly created nodes and existing nodes. Pods created by the DaemonSet controller are ignored by the Kubernetes scheduler and exist for as long as the node itself does.
Create a DaemonSet

Creating a DaemonSet involves the following parts. The DaemonSet is defined in a YAML file (see the sketch after this list):

It requires an apiVersion
It requires the kind (DaemonSet)
It needs metadata for the DaemonSet
It needs spec.template: the Pod definition the user wants to run on all nodes
It needs spec.selector: the selector for the Pods managed by the DaemonSet, and it must match the labels in the Pod template. The selector is immutable once the DaemonSet is created; changing it means orphaning the Pods created earlier
spec.template.spec.nodeSelector can be used to run only on the subset of nodes that match the selector
spec.template.spec.affinity can be used to run only on the subset of nodes that match the affinity rules

Once the configuration is complete, the DaemonSet is created in the cluster.
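A minimal sketch of a DaemonSet manifest with the parts listed above (the fluentd image is just a common example of a per-node agent, and the names are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
  labels:
    app: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent   # must match spec.selector
    spec:
      # optional: only run on nodes with a matching label
      # nodeSelector:
      #   disktype: ssd
      containers:
      - name: agent
        image: fluent/fluentd:v1.16-1
        resources:
          limits:
            memory: 200Mi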
inthetechpit · 4 years ago
Kubernetes ReplicaSet example on Mac using VS Code
A ReplicaSet helps load balance and scale our application up or down when the demand for it changes. It makes sure the desired number of pods are always running for high availability. I'm using VS Code on Mac to create the below yaml file.

Create the following yaml file:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app:…
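The manifest above is cut off in the original post. A complete minimal sketch along the same lines (the replica count and container image are illustrative assumptions) might look like:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3              # illustrative
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: nginx:1.25   # illustrative image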
tak4hir0 · 5 years ago
A pod in Kubernetes represents the fundamental deployment unit. It may contain one or more containers packaged and deployed as a logical entity. A cloud native application running in Kubernetes may contain multiple pods mapped to each microservice. Pods are also the unit of scaling in Kubernetes. Here are five best practices to follow before deploying pods in Kubernetes. Even though there are other configurations that may be applied, these are the most essential practices that bring basic hygiene to cloud native applications.

1) Choose the Most Appropriate Kubernetes Controller

While it may be tempting to deploy and run a container image as a generic pod, you should select the right controller type based on the workload characteristics. Kubernetes has a primitive called the controller which aligns with the key characteristic of the workload. Deployment, StatefulSet, and DaemonSet are the most often used controllers in Kubernetes.

When deploying stateless pods, always use the Deployment controller. This brings PaaS-like capabilities to pods through scaling, deployment history, and rollback features. When a deployment is configured with a minimum replica count of two, Kubernetes ensures that at least two pods are always running, which brings fault tolerance. Even when deploying the pod with just one replica, it is highly recommended that you use a Deployment controller instead of a plain vanilla pod specification.

For workloads such as database clusters, a StatefulSet controller will create a highly available set of pods that have a predictable naming convention. Stateful workloads such as Cassandra, Kafka, ZooKeeper, and SQL Server that need to be highly available are deployed as StatefulSets in Kubernetes.

When you need to run a pod on every node of the cluster, you should use the DaemonSet controller. Since Kubernetes automatically schedules a DaemonSet on newly provisioned worker nodes, it becomes an ideal candidate to configure and prepare the node for the workload. For example, if you want to mount an existing NFS or Gluster file share on the node before deploying the workload, package and deploy the pod as a DaemonSet. Make sure you are choosing the most appropriate controller type before deploying pods.

2) Configure Health Checks for Pods

By default, all running pods have the restart policy set to always, which means the kubelet running within a node will automatically restart a pod when the container encounters an error. Health checks extend this capability of kubelet through the concept of container probes. There are three probes that can be configured for each pod: readiness, liveness, and startup.

You may have encountered a situation where the pod is in running state but the ready column shows 0/1. This indicates that the pod is not ready to accept requests. A readiness probe ensures that the prerequisites are met before starting the pod. For example, a pod serving a machine learning model needs to download the latest version of the model before serving the inference. The readiness probe will constantly check for the presence of the file before moving the pod to the ready state. Similarly, the readiness probe in a CMS pod will ensure that the datastore is mounted and accessible.

The liveness probe will periodically check the health of the container and report to the kubelet. When this health check fails, the pod will not receive the traffic. The service will ignore the pod until the liveness probe reports a positive state.
For example, a MySQL pod may include a liveness probe that continuously checks the state of the database engine. The startup probe, which is still in alpha as of version 1.16, allows containers to wait for longer periods before handing over the health check to the liveness probe. This is helpful when porting legacy applications to Kubernetes that have unusually long startup times. All of the above health checks can be configured with commands, HTTP probes, and TCP probes. Refer to the Kubernetes documentation for the steps to configure health checks.

3) Make use of an Init Container to Prepare the Pod

There are scenarios where the container needs initialization before becoming ready. The initialization can be moved to another container that does the groundwork before the pod moves to a ready state. An init container can be used to download files, create directories, change file permissions, and more. An init container can even be used to ensure that pods are started in a specific sequence. For example, an init container can wait until a MySQL pod becomes available before the WordPress pod starts. A pod may contain multiple init containers, with each container performing a specific initialization task.

4) Apply Node/Pod Affinity and Anti-Affinity Rules

The Kubernetes scheduler does a good job of placing pods on suitable nodes based on the resource requirements of the pod and resource consumption within the cluster. However, there may be a need to control the way pods are scheduled on nodes. Kubernetes provides two mechanisms: node affinity/anti-affinity and pod affinity/anti-affinity.

Node affinity extends the already powerful nodeSelector rule to cover additional scenarios. The way Kubernetes annotations make labels/selectors more expressive and extensible, node affinity makes nodeSelector more expressive through additional rules. Node affinity will ensure that pods are scheduled on nodes that meet specific criteria. For example, a stateful database pod can be forced to be scheduled on a node that has an SSD attached. Similarly, node anti-affinity will help in avoiding scheduling pods on nodes that may cause issues.

While node affinity does matchmaking between pods and nodes, there may be scenarios where you need to co-locate pods for performance or compliance. Pod affinity will help us place pods that need to share the same node. For example, an Nginx web server pod must be scheduled on the same node that has a Redis pod. This will ensure low latency between the web app and the cache. In other scenarios, you may want to avoid running two pods on the same node. When deploying HA workloads, you may want to ensure that no two instances of the same pod run on the same node. Pod anti-affinity will enforce rules that prevent this possibility. Analyze your workload to assess if you need to utilize node and pod affinity strategies for your deployments; a sketch follows below.
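As one hedged illustration of pod affinity for the Nginx/Redis example above (the label values and images are assumptions), the relevant fragment of a pod spec could look like:

# fragment of a pod spec for an nginx pod that should land on the
# same node as a pod labeled app=redis (labels are illustrative)
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - redis
        topologyKey: kubernetes.io/hostname
  containers:
  - name: nginx
    image: nginx:1.25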
5) Take Advantage of Auto Scalers

Hyperscale cloud platforms such as Amazon Web Services, Microsoft Azure, and Google Cloud Platform have built-in auto-scaling engines that can scale a fleet of VMs in and out based on average resource consumption or external metrics. Kubernetes has similar auto-scaling capabilities for deployments in the form of the horizontal pod autoscaler (HPA), the vertical pod autoscaler (VPA), and cluster auto-scaling.

The horizontal pod autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization. HPA is represented as an object within Kubernetes, which means it can be declared through a YAML file and controlled via the kubectl CLI. Similar to the IaaS auto-scaling engines, HPA supports defining the CPU threshold, min and max instances of a pod, a cooldown period, and even custom metrics.

Vertical pod autoscaling removes the guesswork involved in defining the CPU and memory configurations of a pod. This autoscaler can recommend appropriate values for CPU and memory requests and limits, or it can automatically update the values. The auto-update flag decides whether existing pods will be evicted or continue to run with the old configuration. Querying the VPA object will show the optimal CPU and memory requests through upper and lower bounds.

While HPA and VPA scale the deployments and pods, the Cluster Autoscaler expands and shrinks the size of the pool of worker nodes. It is a standalone tool that adjusts the size of a Kubernetes cluster based on current utilization. The Cluster Autoscaler increases the size of the cluster when there are pods that failed to schedule on any of the current nodes due to insufficient resources, or when adding a new node would increase the overall availability of cluster resources. Behind the scenes, the Cluster Autoscaler negotiates with the underlying IaaS provider to add or remove nodes. Combining HPA with the Cluster Autoscaler delivers maximum performance and availability of workloads. In the upcoming tutorials, I will cover each of the best practices in detail with use cases and scenarios. Stay tuned.
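As a quick starting point with HPA (the deployment name my-app is an assumption), the imperative form looks like this:

# scale my-app between 2 and 10 replicas, targeting 50% average CPU
kubectl autoscale deployment my-app --cpu-percent=50 --min=2 --max=10

# inspect the autoscaler
kubectl get hpa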
datamattsson · 6 years ago
Using Traefik for simple Kubernetes Ingress
I’m a huge fan how Routes work in OpenShift. It’s just there when the platform is deployed and ready to use. All it needs externally is a wildcard DNS entry (CNAME that points to the compute nodes) to start serving HTTP/HTTPS traffic. Routes is nothing but a type of Ingress that is specific to OpenShift (Red Hat people will probably come after me for stating this, please excuse my ignorance). I wanted to figure out how to get this exact same behavior for a demo I did on both OpenShift and vanilla Kubernetes without too much hassle.
Swiss Army knife Traefik
I’ve dabbled a bit in the past with Traefik on Docker Swarm. It turns out it’s just as intuitive to set up and use on Kubernetes. It also meets my key objective: it looks and feels like Routes, so declarations can be reused between environments. Traefik is incredibly flexible and you can make it perform application routing like no other, but for my use case, I simply “unbox” it and it just works.
The full documentation on how to deploy this on Kubernetes is available in the Traefik User Guide for Kubernetes. There are multiple ways to deploy to Kubernetes but for my use case, the DaemonSet worked best. I’m not paying attention to HTTPS at this time and I suspect that is one of the areas where using OpenShift Routes vs Traefik will differ.
Deploy
I’m not going to be fuzzy and force a different deployment than the one from the official repos. So, go ahead:
kubectl apply -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-rbac.yaml
kubectl create -f https://raw.githubusercontent.com/containous/traefik/v1.7/examples/k8s/traefik-ds.yaml
An optional step is to deploy the Service and Ingress for the Traefik UI. This YAML requires modification depending on your wildcard DNS entry.
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: web
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: traefik-ui.dev.datamattsson.io
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
Note the .spec.rules.0.host value, which is grabbing a name from my *.dev.datamattsson.io DNS entry.
The UI is now accessible on http://traefik-ui.<your domain>:8080 and should greet you with the Traefik dashboard.
Since my test systems are sitting fairly dormant in my labs, I don't bother securing access to the UI. If you're on a publicly accessible network, it's advised to secure the UI.
The UI Ingress can be secured like any other Ingress with Traefik as outlined in the official docs.
First, create a password file with the htpasswd command (the htpasswd command you may find lurking in the httpd-tools package on CentOS/RHEL, other distros may vary).
htpasswd -c ./passwd admin
Answer the prompts for a new password.
Create a Kubernetes secret from the passwd file.
kubectl create secret generic traefik-ui-secret --from-file passwd --namespace=kube-system
Next, you need to patch your Ingress created in the previous step with these annotations:
metadata:
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/auth-type: "basic"
    traefik.ingress.kubernetes.io/auth-secret: "traefik-ui-secret"
Save this to a file named mypatch.yaml and run:
kubectl -n kube-system patch ingress traefik-web-ui --patch "$(cat mypatch.yaml)"
Hit refresh on the UI and you'll be prompted by your web browser to input admin as user and the password you gave at the prompts.
Hello World
Now we’re all set. Another thing you could do is check that the Traefik is responding by curl’ing a name:
curl http://foobar.dev.datamattsson.io/
404 page not found
The example application I was using in my demos was WordPress. So, for my Service and Ingress, I would deploy the following:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wordpress
  name: wordpress
spec:
  type: LoadBalancer
  ports:
  - name: wordpress
    port: 8080
    targetPort: 80
    protocol: TCP
  selector:
    app: wordpress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
spec:
  rules:
  - host: wp.dev.datamattsson.io
    http:
      paths:
      - path: /
        backend:
          serviceName: wordpress
          servicePort: wordpress
If I would diff the Kubernetes variant versus the OpenShift variant, this is what it would look like:
13c13
<     targetPort: 80
---
>     targetPort: 8080
24c24
<   - host: wp.dev.datamattsson.io
---
>   - host: wp.apps.openshift.datamattsson.io
The difference in port numbers is that the OpenShift doesn’t allow Pods to bind ports below 1024 with the default restricted SecurityContextConstraints.
I have not deployed my WordPress app but you should be able to observe that Traefik grabbed the Ingress:
curl http://wp.dev.datamattsson.io/
Service Unavailable
Need might arise to do the HTTPS version of this in the future. Until then, Happy Routing!
kureeeen · 6 years ago
Kubernetes
Notes taken while studying Kubernetes.
Kubernetes
今こそ始めよう!Kubernetes 入門 ("Start Now! An Introduction to Kubernetes")
History
Open source software (OSS) conceived from "Borg", the container cluster manager used internally at Google
Launched in June 2014
Version 1.0 in July 2015
After version 1.0, transferred to the Linux Foundation's Cloud Native Computing Foundation (CNCF) and developed from a neutral position
Production-ready as of version 1.7
De facto standard
In November 2014, Google Cloud Platform (GCP) began offering Google Container Engine (GKE, later Google Kubernetes Engine)
In February 2017, Microsoft Azure released Azure Container Service (AKS)
In November 2017, Amazon Web Services (AWS) released Amazon Elastic Container Service for Kubernetes (Amazon EKS)
What Kubernetes makes possible

Things that had to be considered to use Docker at the production level:

Managing multiple Docker hosts
Container scheduling
Rolling updates
Scaling / auto scaling
Monitoring whether containers are alive or dead
Self healing
Service discovery
Load balancing
Data management
Workload management
Log management
Infrastructure as Code
Integration and extension with the wider ecosystem

Kubernetes was born to solve these problems.
Kubernetes uses YAML-format manifests:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:latest
        ports:
        - containerPort: 80

With Kubernetes, you can:
Build a container cluster by managing multiple Docker hosts
Run replicas of the same container for load balancing and fault tolerance
Adjust the number of container replicas according to load (auto scaling)
Place containers according to workload (disk I/O, network traffic, etc.) or Docker host spec (SSD, CPU, etc.)
When building on GCP / AWS / OpenStack and the like, easily place containers across multiple regions using metadata such as availability zones
Scale based on resource status such as CPU and memory by default
Use Kubernetes cluster auto scaling when resources run short
Monitor container processes
Self-heal when a container process stops
Health checks via HTTP/TCP or shell scripts are also possible
Apply load balancing to a specific group of containers
Get the service discovery needed for finely-grained microservice architectures
Container and Service data is stored in etcd on the backend
Centrally manage, in the Kubernetes cluster, settings shared across containers and information such as application database passwords
Tools and products that support Kubernetes:
Ansible : Deploy container to Kubernetes
Apache Ignite: automatic cluster configuration and scaling using Kubernetes' service discovery
Fluentd: ship logs of containers running on Kubernetes
Jenkins : Deploy container to Kubernetes
OpenStack: build Kubernetes integrated with the cloud
Prometheus: monitor Kubernetes
Spark: run jobs natively on Kubernetes (replacing YARN)
Spinnaker : Deploy container to Kubernetes
etc…
Kubernetes is built to be extensible, so implementing your own functionality is also possible.

Choosing a Kubernetes environment

Build a local Kubernetes environment on your personal Windows / Mac machine
Build a cluster using a construction tool
Use managed Kubernetes from a public cloud

Some features may be unavailable depending on the environment, but CNCF provides a Conformance Program so that fundamentally the same behavior is available in any environment.
Local Kubernetes
Minikube
Requires VirtualBox (xhyve and VMware Fusion can also be used)
Can be installed with Homebrew etc.
Install

$ brew update
$ brew install kubectl
$ brew cask install virtualbox
$ brew install minikube

Run

When starting minikube, you can specify the Kubernetes version if needed with --kubernetes-version

$ minikube start --kubernetes-version v1.8.0

A VM for Minikube starts on VirtualBox, and you can operate the Minikube cluster with kubectl

Check status

$ minikube status

Delete the Minikube cluster

$ minikube delete
Docker for Mac
At DockerCon EU 17, Docker announced Kubernetes support
Integration is being strengthened, such as operating Docker Swarm from the Kubernetes CLI
From version 17.12 CE Edge, Kubernetes can be run locally
The Kubernetes version cannot be specified
Enable Kubernetes in the Docker for Mac settings
After that, the cluster can be operated with kubectl

$ kubectl config use-context docker-for-desktop

In kubectl, the Docker host is recognized as a node

$ kubectl get nodes

Kubernetes-related components run as containers

$ docker ps --format 'table {{.Image}}\t{{.Command}}' | grep -v pause
Kubernetes construction tools

kubeadm

The official cluster construction tool provided by Kubernetes
Notes here are based on Ubuntu 16.04 (adjust for your environment and required versions)
Preparation

apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet=1.8.5-00 kubeadm=1.8.5-00 kubectl=1.8.5-00 docker.io
sysctl net.bridge.bridge-nf-call-iptables=1

Settings for the master node

--pod-network-cidr is required to use Flannel for the in-cluster network (pod network)

$ kubeadm init --pod-network-cidr=10.244.0.0/16

At the end of its output, the command above prints the command used to join Kubernetes nodes; run it later when adding nodes.

$ kubeadm join --token ... 10.240.0.7:6443 --discovery-token-ca-cert-hash sha256:...

Prepare the credentials file used by kubectl

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Start the Flannel daemon containers

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml

Choices other than Flannel are also possible: see "Installing a pod network add-on"
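As a quick check (not in the original notes) that the control plane and joined nodes are healthy once the pod network is up:

$ kubectl get nodes
# all nodes should eventually report STATUS "Ready"
$ kubectl get pods -n kube-system
# flannel and the core system components should be Running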
Rancher
From Rancher Labs
Open source container platform
In version 1.0, Kubernetes was one of the supported orchestrators
From version 2.0, Kubernetes is the main focus
Kubernetes clusters can be built on a variety of platforms (AWS, OpenStack, VMware, etc.)
Existing Kubernetes clusters can be brought under Rancher management
Provides centralized authentication, monitoring, a web UI, and more
Rich application catalog
Start the Rancher server

docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:v2.0.0-alpha10

This Rancher server manages each Kubernetes cluster and handles cloud provider integration
etc
Tectonic (CoreOS)
Kubespray
kops
OpenStack Magnum
Public Cloud managed Kubernetes
GKE (Google Kubernetes Engine)
Provides many convenient features
Integration with GCP (Google Cloud Platform)
HTTP load balancer (Ingress) available
NodePool
Operated via GUI or gcloud commands
Easy cluster version upgrades
Clusters are built on GCE (Google Compute Engine)
Because workloads run in containers, services can be designed to be unaffected even when Kubernetes nodes are recreated
Nodes in the Kubernetes cluster can be grouped by attaching labels
Groups can then be used for scheduling
Build a cluster with the gcloud command

$ gcloud container clusters create example-cluster

Save the credentials

$ gcloud container clusters get-credentials example-cluster
etc
Google Kubernetes Engine
AKS (Azure Container Service)
Azure Container Service
EKS (Elastic Container Service for Kubernetes)
Amazon EKS
Kubernetes basics

Kubernetes can in fact manage hosts that use container runtimes other than Docker. Kubernetes = Kubernetes Master + Kubernetes Node
Kubernetes Master
Kubernetes Node
Provides the API endpoint
Handles container scheduling
Handles container scaling
A node is a host where containers actually run, like a Docker host
To operate a Kubernetes cluster, you register resources with the Kubernetes Master using the kubectl CLI and YAML-format manifest files. kubectl itself uses the Kubernetes Master API internally, so operation via libraries, curl, etc. is also possible
Kubernetes & Resources

When a resource is registered, containers are started and load balancers are created asynchronously. The parameters used in the YAML manifest differ by resource type
Kubernetes API Reference Docs
Kubernetes resources

Workloads: related to running containers
Discovery & LB: provide endpoints, e.g. exposing containers externally
Config & Storage: related to settings, secrets, persistent volumes, etc.
Cluster: related to security, quotas, etc.
Metadata: control other resources
Workloads
Used to start containers on the cluster. Aside from those used internally, the types users work with directly are the following:
Pod
ReplicationController
ReplicaSet
Deployment
DaemonSet
StatefulSet
Job
CronJob
Discovery & LB
Provide container service discovery and endpoints. Aside from those used internally, the types users work with directly are Service and Ingress.

Service: multiple types exist depending on how the endpoint is provided:

ClusterIP
NodePort
LoadBalancer
ExternalIP
ExternalName
Headless

Ingress
Config & Storage
Inject settings and confidential data into containers, and provide persistent volumes

Secret
ConfigMap
PersistentVolumeClaim

Secret and ConfigMap hold key-value data
Cluster
Define the behavior of the cluster itself
Namespace
ServiceAccount
Role
ClusterRole
RoleBinding
ClusterRoleBinding
NetworkPolicy
ResourceQuota
PersistentVolume
Node
Metadata
Control the behavior of other resources within the cluster
CustomResourceDefinition
LimitRange
HorizontalPodAutoscaler
Virtual cluster separation with Namespaces

Namespaces are Kubernetes' virtual cluster separation feature (not complete isolation). They allow a single Kubernetes cluster to be used by multiple teams. Since Kubernetes enables RBAC (Role-Based Access Control) by default and permissions can be scoped to a Namespace, isolation can be strengthened
Three Namespaces exist initially
default
kube-system: components and add-ons of the Kubernetes cluster itself
kube-public: ConfigMaps etc. that everyone can use
The kubectl CLI tool & credentials
For kubectl to communicate with the Kubernetes Master it needs the server address, credentials, and so on. By default it uses what is recorded in `~/.kube/config`, which is itself a YAML manifest. Example `~/.kube/config`:
`
apiVersion: v1
kind: Config
preferences: {}
clusters:
- name: sample-cluster
  cluster:
    server: https://localhost:6443
users:
- name: sample-user
  user:
    client-certificate-data: agllk5ksdgls2...
    client-key-data: aglk14l1t1ok15...
contexts:
- name: sample-context
  context:
    cluster: sample-cluster
    namespace: default
    user: sample-user
current-context: sample-context
`
`~/.kube/config` basically defines three things. cluster: connection info for a cluster; user: credentials; context: a cluster/user pair plus a Namespace. Setting them with kubectl:
`
# define a cluster
$ kubectl config set-cluster prd-cluster --server=https://localhost:6443
# define credentials
$ kubectl config set-credentials admin-user \
    --client-certificate=./sample.crt \
    --client-key=./sample.key \
    --embed-certs=true
# define a context (cluster + credentials + Namespace)
$ kubectl config set-context prd-admin \
    --cluster=prd-cluster \
    --user=admin-user \
    --namespace=default
`
Switching contexts lets you work with multiple clusters and users
`
# switch context (using the kubectx helper)
$ kubectx prd-admin
Switched to context "prd-admin".
# switch namespace (using the kubens helper)
$ kubens kube-system
Context "prd-admin" is modified.
Active Namespace is "kube-system".
`
## kubectl & YAML Manifest
Launching containers with a YAML manifest
Creating a Pod
`
# sample-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
`
Creating the resource
`
# create resource
$ kubectl create -f sample-pod.yml
`
Deleting the resource
`
# delete resource
$ kubectl delete -f sample-pod.yml
`
Updating the resource
`
# besides apply, set / replace / edit etc. also work
$ kubectl apply -f sample-pod.yml
`
## kubectl usage
Listing resources (get)
`
$ kubectl get pods
# more detailed listing
$ kubectl get pods -o wide
`
The -o / --output option prints in many formats, such as JSON, YAML, custom-columns, and Go templates, exposing more detail. Replacing pods with all lists resources of every type
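For example, a JSONPath expression can pull out just the Pod names (illustrative):
`
# print only the names of all pods in the current namespace
$ kubectl get pods -o jsonpath='{.items[*].metadata.name}'
`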
Resource details (describe)
`$ kubectl describe pods sample-pod $ kubectl describe node k15l1 `
Shows events and more detail about a resource than get does
Checking logs (logs)
`
# logs of the container in a Pod
$ kubectl logs sample-pod
# logs of a specific container in a multi-container Pod
$ kubectl logs sample-pod -c nginx-container
# follow the log with -f
$ kubectl logs -f sample-pod
# last hour, 10 entries, with timestamps
$ kubectl logs --since=1h --tail=10 --timestamps=true sample-pod
`
Running a command in a Pod (exec)
`
# /bin/sh in the Pod's container
$ kubectl exec -it sample-pod /bin/sh
# /bin/sh in a specific container of a multi-container Pod
$ kubectl exec -it sample-pod -c nginx-container /bin/sh
# for commands with arguments, put them after --
$ kubectl exec -it sample-pod -- /bin/ls -l /
`
port-forward
`
# forward data arriving at localhost:8888 to port 80 of the Pod
$ kubectl port-forward sample-pod 8888:80
# afterwards, port 80 of the Pod is reachable via localhost:8888
$ curl localhost:8888
`
shell completion
`
# bash
$ source <(kubectl completion bash)
# zsh
$ source <(kubectl completion zsh)
`
## Kubernetes Workloads Resources
Used to launch containers on the cluster
8 kinds of resources exist:
Pod: mainly used for debugging and verification
ReplicationController: superseded; using ReplicaSet is recommended
ReplicaSet: manages Pods at scale
Deployment: the recommended default for scale-managed workloads
DaemonSet: places 1 Pod on each node
StatefulSet: used for persistent data and stateful workloads
Job: for workloads whose containers are meant to terminate, such as work queues and tasks
CronJob: runs Jobs on a schedule
Pod
The smallest unit among Kubernetes workload resources
Composed of 1 or more containers
An IP address is assigned per Pod
In most cases a Pod contains a single container
Auxiliary containers for proxying, local caching, dynamic configuration, ssh, etc. are sometimes included alongside
Containers in the same Pod share the same IP address
The containers can reach each other via localhost
The network namespace is shared within the Pod
An auxiliary sub-container is also called a sidecar
Creating a Pod
pod_sample.yml, which creates the sample Pod:
`
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
    ports:
    - containerPort: 80
`
One container using the nginx:1.12 image, opening port 80
Create the Pod from the config file
`$ kubectl apply -f ./pod_sample.yml `
Check the launched Pod
`
$ kubectl get pods
# more detailed output
$ kubectl get pods --output wide
`
**Creating a Pod with 2 containers**
2pod_sample.yml
`
apiVersion: v1
kind: Pod
metadata:
  name: sample-2pod
spec:
  containers:
  - name: nginx-container-112
    image: nginx:1.12
    ports:
    - containerPort: 80
  - name: nginx-container-113
    image: nginx:1.13
    ports:
    - containerPort: 8080
`
**Entering a container**
Run the container's bash etc. to get inside
`$ kubectl exec -it sample-pod /bin/bash `
-t: allocate a pseudo-terminal
-i: pass stdin through
ReplicaSet / ReplicationController
A resource that creates replicas of a Pod and keeps the specified number of Pods running
Started out as ReplicationController; later superseded by ReplicaSet
ReplicationController uses equality-based selectors and is headed for deprecation.
ReplicaSet uses set-based selectors; use it by default (see the sketch below).
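A rough sketch of a set-based selector (the key and values here are illustrative, not from the samples in this note):
`
selector:
  matchExpressions:
  - key: env
    operator: In
    values:
    - dev
    - stg
`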
Creating a ReplicaSet
A sample ReplicaSet (rs_sample.yml)
`
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: sample-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
        ports:
        - containerPort: 80
`
Create the ReplicaSet
`$ kubectl apply -f ./rs_sample.yml `
Check the ReplicaSet
`$ kubectl get rs -o wide `
Check the Pods by their label
`$ kubectl get pod -l app=sample-app -o wide `
**Stopping Pods & auto healing**
Auto healing: even when a node or Pod fails, the ReplicaSet launches containers on other nodes so the Pod count stays at the specified number, minimizing the impact of failures.
Deleting a Pod
`$ kubectl delete pod sample-rs-7r6sr `
Listing Pods again after the deletion shows that the ReplicaSet has created a new Pod
The ReplicaSet's Pod increases and decreases can be traced with the kubectl describe rs command
Label & ReplicaSet
A ReplicaSet is Kubernetes watching Pods and adjusting their count
The Pod labels to watch are specified in spec.selector
It watches by counting the Pods that carry the given labels
Creates Pods when short, deletes them when over
`
selector:
  matchLabels:
    app: sample-app
`
The labels of the created Pods are defined under labels.
spec.template.metadata.labels also carries app: sample-app, so Pods are created with that label attached.
`
labels:
  app: sample-app
`
If spec.selector and spec.template.metadata.labels do not match, Pods would just keep getting created until an error occurs…
If you separately launch a Pod with the same label outside the ReplicaSet, it deletes as many Pods as exceed the desired count. You cannot tell which Pod will be deleted, so beware
Multiple labels can be attached to one container
`
labels:
  env: dev
  codename: system_a
  role: web-front
`
**Pod scaling**
Edit the YAML config and run kubectl apply -f FILENAME to apply the changed settings
Or process the scaling with the kubectl scale command
kubectl scale can scale the following resources:
Deployment
Job
ReplicaSet
ReplicationController
`$ kubectl scale rs sample-rs --replicas 5 `
## Deployment
Manages multiple ReplicaSets, enabling rolling updates, rollbacks, and so on
The most recommended way to launch containers in Kubernetes
The switchover works like this:
1. create a new ReplicaSet
2. increase the replica count of the new ReplicaSet
3. decrease the replica count of the old ReplicaSet
4. repeat steps 2 and 3
all while checking that containers on the new ReplicaSet start and pass their health checks
The Pod counts during the ReplicaSet transition can be specified in detail
Creating a Deployment
deployment_sample.yml
`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
        ports:
        - containerPort: 80
`
Create the deployment
`
# the --record option keeps a history of updates
$ kubectl apply -f ./deployment_sample.yml --record
`
The history is kept in the metadata.annotations.kubernetes.io/change-cause annotation
The current ReplicaSet revision number can be read from metadata.annotations.deployment.kubernetes.io/revision
`$ kubectl get rs -o yaml | head `
kubectl run can create a nearly identical deployment
The only difference is the default label run:sample-deployment it applies
`$ kubectl run sample-deployment --image nginx:1.12 --replicas 3 --port 80 `
Check the deployment
`$ kubectl get deployment $ kubectl get rs $ kubectl get pods `
Updating the container
`
# change the nginx container image version
$ kubectl set image deployment sample-deployment nginx-container=nginx:1.13
`
**Deployment update condition**
When a Deployment changes, a new ReplicaSet is created.
Changing only the replica count does not trigger this
Changes to the content of the Pods being created do
When spec.template changes, a new ReplicaSet is created and a rolling update runs
A hash of the structure under spec.template is computed and attached as a label for management.
`
# the Deployment's ReplicaSet carries the hash value
$ kubectl get rs sample-deployment-xxx -o yaml
`
**Roll-back**
Old ReplicaSets are kept as history, with their replica count set to 0.
Check the change history with kubectl rollout history
`$ kubectl rollout history deployment sample-deployment `
If --record was used when creating the deployment, the CHANGE_CAUSE column is also populated
A revision can be given when rolling back; if unspecified, the previous revision is used.
`
# one revision back (default --to-revision = 0)
$ kubectl rollout undo deployment sample-deployment
# a specific revision
$ kubectl rollout undo deployment sample-deployment --to-revision 1
`
Applying the previous YAML file with kubectl apply can be easier than the rollback feature.
Reverting spec.template to the same content yields the same template hash, so it behaves exactly like kubectl rollout.
Deployment Scaling
Scale the same way as ReplicaSets, using kubectl scale or kubectl apply -f
More advanced update control
A recreate strategy also exists
DaemonSet
A special form of ReplicaSet
Places 1 Pod on every node
Used for processes that must run on every node
The replica count cannot be specified
Cannot place 2 Pods per node
A ReplicaSet places Pods across the Kubernetes nodes according to conditions, so an even spread is not guaranteed.
Creating a DaemonSet
ds_sample.yml
`
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sample-ds
spec:
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
        ports:
        - containerPort: 80
`
Create the DaemonSet
`$ kubectl apply -f ./ds_sample.yml `
Check
`$ kubectl get pods -o wide `
## StatefulSet
A special form of ReplicaSet
For stateful workloads such as databases
Created Pod names are indexed with sequential numbers: sample-statefulset-1, sample-statefulset-2, …
Persistence: when a PersistentVolume is used, a recreated Pod comes back on the same disk, and Pod names do not change
Creating a StatefulSet
spec.volumeClaimTemplates can be specified
statefulset-sample.yml
The persistent data area is reused, so when a Pod comes back the container is recreated with the same data
`
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sample-statefulset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
`
Create the StatefulSet
`$ kubectl apply -f ./statefulset_sample.yml `
Check (almost the same information as a ReplicaSet)
`
$ kubectl get statefulset
# confirm the Pod names are suffixed with sequential indexes
$ kubectl get pods -o wide
`
On scale out, Pods are created in order 0, 1, 2
On scale in, they are deleted in order 2, 1, 0
StatefulSet Scaling
Same as ReplicaSets: kubectl scale or kubectl apply -f
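For instance, a minimal scale command (the target name matches the sample above):
`$ kubectl scale statefulset sample-statefulset --replicas 5 `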
Confirming that data in the persistent area survives
`
$ kubectl exec -it sample-statefulset-0 ls /usr/share/nginx/html/sample.html
ls: cannot access /usr/share/nginx/html/sample.html: No such file or directory
$ kubectl exec -it sample-statefulset-0 touch /usr/share/nginx/html/sample.html
$ kubectl exec -it sample-statefulset-0 ls /usr/share/nginx/html/sample.html
/usr/share/nginx/html/sample.html
`
Whether the Pod is deleted with kubectl or the container stops because of an exception inside it, the file is not lost.
Even though the Pod name stays the same, the IP address can change.
Life Cycle
Unlike a ReplicaSet, Pods are not created in parallel: one is created, and only when it reaches Ready is the next one created.
On deletion, containers are removed starting from the largest (newest) index
You can design the setup so that index 0 is the master
Job
Runs one-off processing in containers
Guarantees the container runs to successful completion the specified number of times, with optional parallelism
When a Job fits, versus a plain Pod
Is the workload premised on the Pod stopping?
For Pod and ReplicaSets, stopping = an unexpected error
For a Job, stopping = normal termination
Well suited to processing such as patches
Creating a Job
job_sample.yml: sleeps for 60 seconds
As with ReplicaSets, label and selector can be specified, but Kubernetes auto-generates a UUID to avoid collisions, so there is no real need to set them.
`
apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
spec:
  completions: 1
  parallelism: 1
  backoffLimit: 10
  template:
    spec:
      containers:
      - name: sleep-container
        image: centos:latest
        command: ["sleep"]
        args: ["60"]
      restartPolicy: Never
`
Create the Job
`$ kubectl apply -f job_sample.yml `
Check the Job
`$ kubectl get jobs $ kubectl get pods `
**restartPolicy**
spec.template.spec.restartPolicy accepts OnFailure or Never
Never: on failure, a new Pod is created
OnFailure: on failure, the same Pod is reused to resume the Job (its restart count goes up)
Parallel Job & work queue
completions: number of successful runs required
parallelism: how many run in parallel
backoffLimit: allowed number of failures; 6 if unspecified
For a one-at-a-time work-queue style, leave completions unspecified (see the sketch below)
With only parallelism set, the Job keeps running persistently
As with deployments etc., kubectl scale job … can adjust it afterwards
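A minimal sketch of that work-queue form (spec excerpt; completions is deliberately omitted):
`
spec:
  parallelism: 2
  backoffLimit: 10
`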
CronJob
Renamed: ScheduledJob -> CronJob
Creates Jobs at scheduled times, like cron
create CronJob
cronjob_sample.yml: every 60 seconds, sleep for 30
`
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sample-cronjob
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 30
  successfulJobsHistoryLimit: 5
  failedJobsHistoryLimit: 5
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: sleep-container
            image: centos:latest
            command: ["sleep"]
            args: ["30"]
          restartPolicy: Never
`
create
`$ kubectl apply -f cronjob_sample.yml `
kubectl run --schedule can create one without any extra setup
`$ kubectl run sample-cronjob --schedule="*/1 * * * *" --restart Never --image centos:latest -- sleep 30 `
Check
`$ kubectl get cronjob $ kubectl get job `
**Suspending**
When spec.suspend is set to true, the CronJob is excluded from scheduling
Change the YAML and kubectl apply
Also possible with the kubectl patch command
`$ kubectl patch cronjob sample-cronjob -p '{"spec":{"suspend":true}}' `
kubectl patch internally sends an HTTP PATCH request, performing Kubernetes' own Strategic Merge Patch.
To see the actual request being made, add the -v (verbose) option
`$ kubectl -v=10 patch cronjob sample-cronjob -p '{"spec":{"suspend":true}}' `
kubectl get cronjob now shows the SUSPEND column as True
To put it back on the schedule, set spec.suspend to false
Controlling concurrent runs
spec.concurrencyPolicy
  Allow (default): no control over concurrent runs
  Forbid: do not start a new Job while the previous one has not finished
  Replace: cancel the previous Job and start the new one
spec.startingDeadlineSeconds: how late (in seconds) a Job may still be started when its start time slips, e.g. while the Kubernetes Master is temporarily unavailable
  with 300, a Job may still start up to 5 minutes late
  by default, Jobs can be created no matter how late they are
spec.successfulJobsHistoryLimit: number of successful Jobs to keep; default 3; 0 deletes them immediately
spec.failedJobsHistoryLimit: number of failed Jobs to keep; default 3; 0 deletes them immediately
Discovery & LB resource
Provides endpoints for reaching containers on the cluster, and finds the containers whose labels match
2 kinds exist
Service: several types exist, differing in how the endpoint is provided
Ingress
ClusterIP
NodePort
LoadBalancer
ExternalIP
ExternalName
Headless (None)
In-cluster networking and Services
Building a Kubernetes cluster creates an internal network for Pods.
How that internal network is built depends on the pluggable CNI (Container Network Interface) module, but typically each node gets a distinct network segment and inter-node traffic is carried over VXLAN or L2 routing.
The internal network segment assigned to the Kubernetes cluster is divided automatically into per-node segments, so a common internal network is usable without any thought.
Containers can therefore already reach each other, but the Service feature adds benefits:
load balancing of traffic bound for Pods
service discovery & internal DNS
Both benefits apply to every Service type
Load balancing Pod traffic
A Service load-balances incoming traffic across multiple Pods
Many endpoint kinds are provided, including a VIP (virtual IP address) usable inside the cluster and VIPs on external load balancers
example) deployment_sample.yml
When a Deployment creates several Pods, each gets a different IP address; on their own these cannot be load-balanced, but a Service provides an endpoint that balances across the set of Pods.
`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.12
        ports:
        - containerPort: 80
`
`$ kubectl apply -f deployment_sample.yml `
It matches on the labels of the Pods the Deployment created, plus the pod-template-hash label.
`
$ kubectl get pods sample-deployment-5d.. -o jsonpath='{.metadata.labels}'
map[app:sample-app pod-template-hash:...]%
`
The destination Pods are selected with spec.selector (clusterip_sample.yml)
`
apiVersion: v1
kind: Service
metadata:
  name: sample-clusterip
spec:
  type: ClusterIP
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
  selector:
    app: sample-app
`
`$ kubectl apply -f clusterip_sample.yml `
After the Service is created, the Endpoint section of its details lists several IP address and port pairs: the IPs and ports of the Pods matching the selector's conditions.
`$ kubectl describe svc sample-clusterip `
To compare against the Pod IPs, print specific JSON paths as columns
`$ kubectl get pods -l app=sample-app -o custom-columns="NAME:{metadata.name},IP:{status.podIP}" `
To check the load balancing easily, change each Pod's index.html for the test.
Fetch the Pod names and write each Pod's own hostname into its index.html
`for PODNAME in `kubectl get pods -l app=sample-app -o jsonpath='{.items[*].metadata.name}'`; do kubectl exec -it ${PODNAME} -- sh -c "hostname > /usr/share/nginx/html/index.html"; done `
Launch a temporary Pod and send a request to the Service's endpoint
`$ kubectl run --image=centos:7 --restart=Never --rm -i testpod -- curl -s http://[load balancer ip]:[port] `
**Service Discovery and Internal DNS**
Service discovery means enumerating the members matching some condition, or resolving an endpoint from a name
Services provide it: listing the Pods that belong to a Service, or returning endpoint information for a Service name
Three discovery methods:
via A records
via SRV records
via environment variables
Service discovery via A records
Creating a Service automatically registers DNS records
Internally, a system component called kube-dns creates the DNS records for the endpoints
If DNS names can be used via Services, there is no IP-address management or configuration to worry about, so this is the convenient route
`
# the Service name sample-clusterip can be used instead of an IP
$ kubectl run --image=centos:7 --restart=Never --rm -i testpod -- curl -s http://sample-clusterip:8080
`
The full FQDN actually registered in kube-dns is [Service name].[Namespace name].svc.[ClusterDomain name]
`
# resolve sample-clusterip.default.svc.cluster.local from inside a container
$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig sample-clusterip.default.svc.cluster.local
`
The FQDN includes the Namespace etc. to avoid Service name collisions, but the container's /etc/resolv.conf carries shortened search domains, so in practice sample-clusterip.default or even plain sample-clusterip also resolves.
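For illustration, a Pod's /etc/resolv.conf typically looks roughly like this (the nameserver IP and cluster domain here are environment-specific assumptions):
`
nameserver 10.11.240.10
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
`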
Reverse lookup from an IP back to the FQDN also works
`$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig -x 10.11.245.11 `
**Service discovery via SRV records**
Resolution also works in the form [_Service Port name].[_Protocol].[Service name].[Namespace name].svc.[ClusterDomain name]
`$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig _http-port._tcp.sample-clusterip.default.svc.cluster.local SRV `
**Service discovery via environment variables**
Inside a Pod, environment variables also expose the Services of the same Namespace.
A '-' in a Service name becomes '_', and the name is upper-cased.
The variables are stored in the same format as docker --links ...
The variables are not refreshed when Services are added or removed after the container starts, so unexpected accidents are possible.
If the Pod was created before the Service, the variables are missing and the Pod must be recreated.
Also handy when porting from a Docker-only environment
`$ kubectl exec -it sample-deployment-... env | grep -i sample_clusterip `
**Services with multiple ports, and Service Discovery**
clusterip_multi_sample.yml
`
apiVersion: v1
kind: Service
metadata:
  name: sample-clusterip
spec:
  type: ClusterIP
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
  - name: "https-port"
    protocol: "TCP"
    port: 8443
    targetPort: 443
  selector:
    app: sample-app
`
## ClusterIP
The most basic type
Assigned a VIP on the internal network, reachable only from inside the Kubernetes cluster
Traffic to the ClusterIP is forwarded to the Pods by kube-proxy, a system component running on each node (behavior differs by proxy mode).
Use it where access from outside the cluster is not needed
By default, a Service for reaching the Kubernetes API already exists, and it uses a ClusterIP.
`
# check the ClusterIP in the TYPE column
$ kubectl get svc
`
**create ClusterIP Service**
clusterip_sample.yml
`
apiVersion: v1
kind: Service
metadata:
  name: sample-clusterip
spec:
  type: ClusterIP
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
  selector:
    app: sample-app
`
Specify type: ClusterIP
spec.ports[x].port is the port number the ClusterIP listens on
spec.ports[x].targetPort is the container port traffic is forwarded to
Specifying a static ClusterIP VIP
Even for things like databases, addressing hosts via the internal DNS records registered for the Kubernetes Service is the recommended default
To set the VIP manually, specify spec.clusterIP (clusterip_vip_sample.yml)
`
apiVersion: v1
kind: Service
metadata:
  name: sample-clusterip
spec:
  type: ClusterIP
  clusterIP: 10.11.111.11
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
  selector:
    app: sample-app
`
Once a ClusterIP Service has been created, its ClusterIP cannot be changed.
Not even with kubectl apply
The existing Service must be deleted first.
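In command form, that sequence is simply (reusing the sample manifest above):
`
$ kubectl delete -f clusterip_vip_sample.yml
$ kubectl apply -f clusterip_vip_sample.yml
`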
ExternalIP
Connects to the outside by forwarding traffic received on specific Kubernetes nodes' IP:port to containers
create ExternalIP Service
externalip_sample.yml
`
apiVersion: v1
kind: Service
metadata:
  name: sample-externalip
spec:
  type: ClusterIP
  externalIPs:
  - 10.1.0.7
  - 10.1.0.8
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
  selector:
    app: sample-app
`
Note that it is type: ClusterIP, not type: ExternalIP
spec.ports[x].port is the port the ClusterIP listens on
spec.ports[x].targetPort is the container port traffic is forwarded to
Not every Kubernetes node needs to be listed
The IP addresses usable as ExternalIPs can be found in the node information
On GKE this cannot be used, since the global IP address is not visible to the OS
`
# check the IP addresses
$ kubectl get node -o custom-columns="NAME:{metadata.name},IP:{status.addresses[].address}"
`
Creating an ExternalIP Service still auto-allocates a ClusterIP for use inside containers
Checking the ExternalIP Service via DNS from inside a container
`$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig sample-externalip.default.svc.cluster.local `
Port state on a node serving the ExternalIP
`$ ss -napt | grep 8080 `
With ExternalIP, the Service is reachable from outside the Kubernetes cluster, and traffic is spread across the Pods.
NodePort
Forwards traffic received on every Kubernetes node's IP:port to containers
Roughly the all-nodes version of the ExternalIP Service
Similar to exposing a Service in Docker Swarm
create NodePort Service
nodeport_sample.yml
`
apiVersion: v1
kind: Service
metadata:
  name: sample-nodeport
spec:
  type: NodePort
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
    nodePort: 30080
  selector:
    app: sample-app
`
spec.ports[x].port is the port the ClusterIP listens on
spec.ports[x].targetPort is the container port traffic is forwarded to
spec.ports[x].nodePort is the port every Kubernetes node listens on
A ClusterIP for in-container communication is still allocated automatically
If nodePort is unspecified, a free port number is chosen automatically
By default, Kubernetes allows the range 30000-32767
Two NodePort Services cannot use the same port
The range can be changed in the Kubernetes Master settings (see below)
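Concretely, this is the API server's --service-node-port-range flag; how you set it depends on how the control plane is run, so treat this as a sketch:
`
kube-apiserver --service-node-port-range=20000-40000 ...
`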
`$ kubectl get svc `
From inside a container, the internal DNS returns the ClusterIP, not the external IP
`$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig sample-nodeport.default.svc.cluster.local `
Checking the port state on a Kubernetes node shows it listening on the value given as nodePort
`$ ss -napt | grep 30080 `
Unlike ExternalIP, the cluster is reachable from outside on every node's IP address, and requests are spread across the Pods.
On GKE, it is also reachable via the global IP addresses assigned to the GCE instances
Excluding cross-node traffic (no load balancing across nodes)
With NodePort, a packet that reaches a node's NodePort may still be load-balanced to Pods on other nodes
With e.g. a DaemonSet, 1 Pod exists per node, so you may want delivery only to the Pod on the same node
Achievable with spec.externalTrafficPolicy
To change externalTrafficPolicy from Cluster to Local, use YAML (nodeport_local_sample.yml)
Cluster (default): after reaching a node, traffic is load-balanced across nodes again
  in practice, when kube-proxy uses the iptables proxy mode, more traffic appears to be delivered to the local node (check the statistics section via iptables-save etc.)
Local: delivered only to Pods on the node the traffic reached (no cross-node load balancing)
  if no Pod exists on that node, no response is possible
  if several Pods exist there, traffic is distributed evenly among them
`
apiVersion: v1
kind: Service
metadata:
  name: sample-nodeport-local
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
    nodePort: 30081
  selector:
    app: sample-app
`
## LoadBalancer
The most usable and practical type
Assigns a VIP on a load balancer outside the Kubernetes cluster
With NodePort etc., a node's IP address ends up doubling as the endpoint, making it an SPoF (single point of failure) that is weak against node outages
Using an external load balancer makes it resilient to Kubernetes node failures
But environments that can integrate an external LoadBalancer are limited to cloud providers such as GCP, AWS, Azure, and OpenStack (the list may grow over time)
Think of it as creating a NodePort Service and having a load balancer outside the cluster balance across the Kubernetes nodes
create LoadBalancer Service
lb_sample.yml
`
apiVersion: v1
kind: Service
metadata:
  name: sample-lb
spec:
  type: LoadBalancer
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
    nodePort: 30082
  selector:
    app: sample-app
`
spec.ports[x].port is the port the LoadBalancer VIP and the ClusterIP listen on
spec.ports[x].targetPort is the container port traffic is forwarded to
A NodePort is allocated automatically as well, so spec.ports[x].nodePort may also be specified
Check
`$ kubectl get svc sample-lb `
If EXTERNAL-IP stays pending, the LoadBalancer may simply still need time to provision
A ClusterIP is also auto-allocated, since in-container traffic uses it
A NodePort is created too
Because the VIP spreads traffic across the Kubernetes nodes, nothing changes when nodes are scaled
Excluding cross-node traffic (no load balancing across nodes)
externalTrafficPolicy works exactly as with NodePort
Specifying the LoadBalancer VIP
spec.loadBalancerIP can specify the external load balancer's IP address
`
# lb_fixip_sample.yml
apiVersion: v1
kind: Service
metadata:
  name: sample-lb-fixip
spec:
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8080
    targetPort: 80
    nodePort: 30083
  selector:
    app: sample-app
`
If unspecified, it is auto-assigned
Take care on cloud providers such as GKE
On GKE, creating a LoadBalancer Service creates a GCLB.
Watch that costs from GCP load balancers etc. do not pile up
If changing IP addresses is acceptable and the deploy flow allows it, clean up unused Services
If you delete a GKE cluster while such Services still exist, the GCLB stays behind in a billable state, so beware
Headless Service
A Service that returns the Pods' own IP addresses
Pod IP addresses normally change often, so the persistent form is limited to StatefulSets
Provides its endpoint via DNS round robin (DNS RR) rather than an IP endpoint
With DNS RR, load distribution happens by the in-cluster DNS returning destination Pod IPs, so beware of client-side caching
Kubernetes is designed so that Pod IPs normally need no attention; discovering them otherwise requires the API
A Headless Service lets you discover IP addresses through the Service, limited to StatefulSets
create Headless Service
3 conditions must be met
the Service's spec.type is ClusterIP
the Service's metadata.name equals the StatefulSet's spec.serviceName
the Service's spec.clusterIP is None
If these do not hold, it just acts as an ordinary Service, and things like resolving Pod names are impossible.
`
# headless_sample.yml
apiVersion: v1
kind: Service
metadata:
  name: sample-svc
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 80
    targetPort: 80
  selector:
    app: sample-app
`
**Resolving Pod names via a Headless Service**
An ordinary Service creates an endpoint covering several Pods and resolvable by name, but the individual Pods' names cannot be resolved
An ordinary Service resolves as [Service name].[Namespace name].svc.[domain name]; resolving a Headless Service the same way returns one of the Pod IPs via DNS round robin, which is not well suited to load balancing
Only with a StatefulSet can a Pod name be resolved, in the form [Pod name].[Service name].[Namespace name].svc.[domain name]
If the container's resolv.conf has suitable search entries, [Pod name].[Service name] or [Pod name].[Service name].[Namespace] also resolve
Resources such as ReplicaSets can also be made to work
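For example, assuming the earlier sample-statefulset is fronted by a matching headless Service named sample-svc, a per-Pod lookup would be:
`$ kubectl run --image=centos:6 --restart=Never --rm -i testpod -- dig sample-statefulset-0.sample-svc.default.svc.cluster.local `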
ExternalName
Unlike other Services, answers name resolution of the Service name with a CNAME
Used mainly to give something a different name, or to make endpoint switching inside the cluster easy
create ExternalName service
`
# externalname_sample.yml
apiVersion: v1
kind: Service
metadata:
  name: sample-externalname
  namespace: default
spec:
  type: ExternalName
  externalName: external.example.com
`
`$ kubectl get svc `
The EXTERNAL-IP column shows the DNS name that will be returned as the CNAME
Resolving [Service name] or [Service name].[Namespace name].svc.[domain name] from inside a container returns the CNAME
`$ dig sample-externalname.default.svc.cluster.local CNAME `
**Loosely Coupled with External Service**
Inside the cluster, resolving Services by name gives loose coupling between services; the same is wanted when consuming external services such as SaaS or IaaS.
If applications configure external endpoints directly, switching them requires application-side changes; with ExternalName, the DNS switch is just a change to the ExternalName Service, handled on Kubernetes, and the loose coupling between the outside and the Kubernetes cluster is preserved.
Switching between external and internal services
ExternalName secures loose coupling with external services and also allows flexible switching between external services and in-cluster services
Ingress
A resource that provides an L7 load balancer
Unrelated to the Ingress/Egress settings of the Kubernetes NetworkPolicy resource
Kinds of Ingress
May still be a beta feature
Broadly divided into 2 kinds:
Ingress backed by a load balancer outside the cluster, e.g. GKE
Ingress served by dedicated Ingress Pods created inside the cluster, e.g. Nginx Ingress, Nghttpx Ingress
Ingress using a load balancer outside the cluster
With an external load balancer as on GKE, just creating the Ingress resource provisions the LoadBalancer VIP
Traffic received by GCP's GCLB (Google Cloud Load Balancer) gets HTTPS termination, path-based routing, etc. there, is forwarded to a NodePort, and so reaches the target Pods
Ingress using Ingress Pods created inside the cluster
Realized by creating Pods inside the cluster that play the L7 role
For access from outside the cluster, you must additionally prepare e.g. a LoadBalancer Service for those Ingress Pods
Because the Ingress Pods do the L7 work (HTTPS termination, path-based routing), auto-scaling their replica count must also be considered.
Traffic goes LB -> Nginx Pod; Nginx does the L7 processing and then forwards it to the target Pods.
Forwarded straight to the Pod IPs, without passing through a NodePort
create Ingress resource
Some preparation is required
Ingress forwards to Services created beforehand, using them as its backends
Backend Services should be of type NodePort
`
# sample-ingress-apps
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-ingress-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      ingress-app: sample
  template:
    metadata:
      labels:
        ingress-app: sample
    spec:
      containers:
      - name: nginx-container
        image: zembutsu/docker-sample-nginx:1.0
        ports:
        - containerPort: 80
`
`
# ingress service sample
apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  type: NodePort
  ports:
  - name: "http-port"
    protocol: "TCP"
    port: 8888
    targetPort: 80
  selector:
    ingress-app: sample
`
To use HTTPS with Ingress, the certificate must be registered in advance as a Secret.
Create the Secret either by writing YAML from the certificate data directly or by pointing at the certificate files.
`
# create a certificate
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=sample.example.com"
# create the Secret (specifying the certificate files)
$ kubectl create secret tls tls-sample --key /tmp/tls.key --cert /tmp/tls.crt
`
Since the Ingress resource is an L7 load balancer, its rules map request paths to backend Services per hostname.
One IP address can serve multiple hostnames.
spec.rules[].http.paths[].backend.servicePort takes the Service's spec.ports[].port
`
# ingress_sample.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: sample.example.com
    http:
      paths:
      - path: /path1
        backend:
          serviceName: svc1
          servicePort: 8888
  backend:
    serviceName: svc1
    servicePort: 8888
  tls:
  - hosts:
    - sample.example.com
    secretName: tls-sample
`
**Ingress resource & Ingress Controller**
Ingress resource = the API resource registered from YAML files
Ingress Controller = the thing that acts when an Ingress resource is registered with Kubernetes
e.g. configuring GCP's GCLB for L7 load balancing,
or rewriting Nginx's config and reloading it
On GKE
On GKE, an Ingress Controller for GKE is deployed by default, so an IP endpoint is created automatically per Ingress resource without any extra thought.
With Nginx Ingress
To use Nginx Ingress, you must create the Nginx Ingress Controller yourself.
The Ingress Controller itself doubles as the Pod doing the L7 work; despite the name Controller, it also performs the actual processing.
To allow access from outside the cluster like GKE does, a LoadBalancer Service (NodePort etc. also works) pointing at the Nginx Ingress Controller must be created.
Since that Service is created separately, the endpoint IP address cannot be found via kubectl get ingress etc., so take care.
A default backend for requests that match no rule must also be created, so take care
In practice, detailed settings such as RBAC, resource limits, and health check intervals may need tuning.
Recommended nginx ingress settings
`
# sample YAML using Nginx Ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-endpoint
  labels:
    app: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: ingress-nginx
`
If the replica counts of the default backend Pod and the L7-processing Nginx Ingress Controller Pods are fixed, they may not keep up when traffic grows, so consider the Horizontal Pod Autoscaler (HPA) for Pod auto-scaling
A deployed Ingress Controller watches every Ingress resource on the cluster, so controllers can clash.
Detailed behavior
Ingress Classes can split which Ingress resources a controller handles
Give the Ingress resource an Ingress Class annotation and set the target class on the Nginx Ingress Controller to separate responsibilities
pass the --ingress-class option when starting the Nginx Ingress Controller: /nginx-ingress-controller --ingress-class=system_a ...
annotate the Ingress resource: kubernetes.io/ingress.class: "system_a"
Summary
Kubernetes Service & Ingress
Service
  L4 load balancing
  name lookup via the cluster-internal DNS
  Pod service discovery using labels
Ingress
  L7 load balancing
  HTTPS termination
  path-based routing
Kubernetes Service
ClusterIP: a VIP reachable only inside the Kubernetes cluster
ExternalIP: the IPs of specific Kubernetes nodes
NodePort: all IPs of every Kubernetes node (0.0.0.0)
LoadBalancer: a VIP on a load balancer provided outside the cluster
ExternalName: loose coupling via CNAME
Headless: DNS round robin over the Pod IPs
Kubernetes Ingress
Ingress via a load balancer outside the cluster: GKE
Ingress via in-cluster Ingress Pods: Nginx Ingress, Nghttpx Ingress
Kubernetes Config & Storage resources
Resources for per-container configuration files, confidential data such as passwords, persistent volumes, and so on
3 kinds
Secret
ConfigMap
PersistentVolumeClaim
Using environment variables
In Kubernetes, per-container settings are usually passed via environment variables or by mounting an area containing files
To pass environment variables, specify env or envFrom in the pod template
Environment variables can be populated from 5 sources:
static values
Pod information
container information
confidential data in Secret resources
key-value data in ConfigMap resources
Static values
Define static values in spec.containers[].env
`
apiVersion: v1
kind: Pod
metadata:
  name: sample-env
  labels:
    app: sample-app
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
    env:
    - name: MAX_CONNECTION
      value: "100"
`
Pod information
Information tied to the Pod, such as its node, its IP address, and its start time, can be referenced with fieldRef.
The referencable values can be checked with kubectl get pods -o yaml etc.
On top of the registered YAML, fields such as the IP and host information have been added.
`
# inspect a running Pod
$ kubectl get pod nginx-pod -o yaml
...
spec:
  nodeName: gke-k8s-...
...
`
`
# env-pod-sample.yml
# set the Kubernetes node's name into the env var K8S_NODE
apiVersion: v1
kind: Pod
metadata:
  name: sample-env-pod
  labels:
    app: sample-app
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
    env:
    - name: K8S_NODE
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
`
**Container information**
Container-level information can be referenced with resourceFieldRef.
A Pod can contain several containers, so values settable per container cannot be referenced with fieldRef; take note.
Referencable values can be checked with kubectl get pods -o yaml etc.
`
# env-container-sample.yml
# set CPU requests/limits into env vars
apiVersion: v1
kind: Pod
metadata:
  name: sample-env-container
  labels:
    app: sample-app
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
    env:
    - name: CPU_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: nginx-container
          resource: requests.cpu
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: nginx-container
          resource: limits.cpu
`
**Confidential data in Secret resources**
Confidential data is best kept in a separate Secret resource and referenced via environment variables
Key-value data in ConfigMap resources
Plain key-value settings and configuration files can be managed with a ConfigMap
Useful when values are duplicated in many places or need bulk changes
Caveats when using environment variables
`
# env fail sample
apiVersion: v1
kind: Pod
metadata:
  name: sample-fail-env
  labels:
    app: sample-app
spec:
  containers:
  - name: nginx-container
    image: nginx:1.12
    command: ["echo"]
    args: ["${TESTENV}", "${HOSTNAME}"]
    env:
    - name: TESTENV
      value: "100"
`
To use environment variables in command or args, use $() rather than ${}
Only environment variables defined within the Pod template can be referenced from command and args (in the example above, only TESTENV)
To use variables that exist only in the OS environment, run things via a script instead.
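A corrected excerpt of the failing sample above, using the $() syntax:
`
command: ["echo"]
args: ["$(TESTENV)"]
`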
Secret
Usable for credentials such as the user and password needed for a database
A resource for defining the username and password separately and having Pods load them
There is an OSS tool, kubesec, that encrypts Secret-bearing YAML manifests
With gpg, Google Cloud KMS, AWS KMS, etc. it easily encrypts just the data.* parts, so exposure causes little harm.
Alternative: attaching to the container image at docker build
embed the values in env vars or arguments and build the container image.
Since the image then holds confidential data, it is hard to publish or distribute, and changed credentials force a rebuild; inconvenient
Alternative: attaching to the Pod or Deployment YAML manifest
that YAML must also be kept private, and if several applications use the same data it spreads everywhere; again a problem
Secret types
Generic (type: Opaque)
TLS (type: kubernetes.io/tls)
Docker Registry (type: kubernetes.io/dockerconfigjson)
Service Account (type: kubernetes.io/service-account-token)
Generic (type: Opaque)
For ordinary passwords and the like
Creation methods:
from a file (--from-file)
from YAML
directly with kubectl (--from-literal)
from an env file
A Secret stores multiple key-value pairs.
A Secret named db-auth could hold the keys username and password with their corresponding values.
With multiple databases, avoid clashes by naming Secrets distinctly or by splitting Namespaces per system
from File
Specify the file with the --from-file option
The filename becomes the key, so stripping extensions from filenames can be wise.
To keep the extension, use the form --from-file=username=username.txt.
Be careful that no newline (\n) ends up in the file; use echo -n etc.
`
# create the source files
$ echo -n "user" > ./username
$ echo -n "password" > ./password
# create the Secret
$ kubectl create secret generic sample-db-auth --from-file=./username --from-file=./password
# view the base64-encoded data section of the Secret as JSON
$ kubectl get secret sample-db-auth -o json | jq .data
# decode the base64-encoded content to plain text
$ kubectl get secret sample-db-auth -o json | jq -r .data.username | base64 -d
`
from yaml
`
apiVersion: v1
kind: Secret
metadata:
  name: sample-db-auth
type: Opaque
data:
  username: dXNlcg==
  password: cGFzc3dvcmQ=
`
using kubectl (--from-literal)
Use the --from-literal option
`$ kubectl create secret generic sample-db-auth --from-literal=username=user --from-literal=password=password `
from envfile
Useful for bulk loading; if you ran containers with Docker's --env-file option, the same file can be carried straight over into a Secret.
`
# env_secret file contents
username=user
password=password
`
`$ kubectl create secret generic sample-db-auth --from-env-file ./env_secret `
TLS (type: kubernetes.io/tls)
A TLS Secret that can be referenced from Ingress etc.
Mainly used for certificates; normally created from files.
Specify the private key with --key and the certificate with --cert
`
# create a certificate
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /tmp/tls.key -out /tmp/tls.crt -subj "/CN=sample1.example.com"
# create the TLS Secret
$ kubectl create secret tls tls-sample --key /tmp/tls.key --cert /tmp/tls.crt
`
Docker Registry (type: kubernetes.io/dockerconfigjson)
For Docker registry credentials
using kubectl
With kubectl, pass the registry server and the credentials as arguments.
`
$ kubectl create secret docker-registry sample-registry-auth \
    --docker-server=REGISTRY_SERVER \
    --docker-username=REGISTRY_USER \
    --docker-password=REGISTRY_USER_PASSWORD \
    --docker-email=REGISTRY_USER_EMAIL
# get
$ kubectl get secret -o json sample-registry-auth | jq .data
`
Pulling images using the Secret
To pull images from a Docker registry that requires authentication or from a private Docker Hub repository, create the Secret beforehand and set the docker-registry type Secret in the Pod's spec.imagePullSecrets.
`
# secret-pull_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
spec:
  containers:
  - name: secret-image-container
    image: REGISTRY_NAME/secret-image:latest
  imagePullSecrets:
  - name: sample-registry-auth
`
Service Account
Not created by hand; exists so that Service Account tokens can be mounted into Pods
Using Secrets
Two broad patterns for consuming a Secret from a container, each usable with one specific key or with every key:
as environment variables (a specific key, or all keys)
as a volume mount (a specific key, or all keys)
Environment variables
When passing via environment variables, either pass only specific keys or pass the whole Secret.
For a specific key, point valueFrom.secretKeyRef in spec.containers[].env at it
`
# secret_single_env_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-secret-single-env
spec:
  containers:
  - name: secret-container
    image: nginx:1.12
    env:
    - name: DB_USERNAME
      valueFrom:
        secretKeyRef:
          name: sample-db-auth
          key: username
`
Each env entry is defined individually, so the environment variable name can be chosen.
Passing the whole Secret:
`
# secret_multi_env_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-secret-multi-env
spec:
  containers:
  - name: secret-container
    image: nginx:1.12
    envFrom:
    - secretRef:
        name: sample-db-auth
`
Not having to list each key keeps it concise, but the Pod template alone no longer shows what values the Secret stores.
**Volume Mount**
For a specific key only, use secret.items[] under spec.volumes[].
`
# secret_single_volume_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-secret-single-volume
spec:
  containers:
  - name: secret-container
    image: nginx:1.12
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    secret:
      secretName: sample-db-auth
      items:
      - key: username
        path: username.txt
`
Each mounted file is defined individually, so the filename can be chosen.
`$ kubectl exec -it sample-secret-single-volume cat /config/username.txt `
Mounting the whole Secret:
`
# secret_multi_volume_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-secret-multi-volume
spec:
  containers:
  - name: secret-container
    image: nginx:1.12
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    secret:
      secretName: sample-db-auth
`
`$ kubectl exec -it sample-secret-multi-volume ls /config `
**Dynamic Secret updates**
When a Secret is consumed via a volume mount, the kubelet polls kube-apiserver on a periodic cycle (the kubelet sync loop) and refreshes the mounted values when they have changed. The sync loop interval defaults to 60 seconds and can be changed with the kubelet option `--sync-frequency`. (This does not apply when environment variables are used.) The Pod is not recreated when a mounted file's value changes, so there is no disruption to worry about. Keys removed from the Secret disappear from the mount as well.
## ConfigMap
ConfigMap is a resource for storing configuration data as key-value pairs. "Key-value" here includes whole configuration files such as nginx.conf or httpd.conf.
**create ConfigMap**
Almost the same methods as for a Generic type Secret.
3 methods
from a file (--from-file)
from a YAML file
directly with kubectl (--from-literal)
A ConfigMap holds multiple key-value pairs. You can put the whole of nginx.conf into a ConfigMap, or just individual configuration parameters.
Using a file
With a file, specify --from-file. The filename normally becomes the key; to change the key, use the form --from-file=nginx.conf=sample-nginx.conf.
`
# create ConfigMap
$ kubectl create configmap sample-configmap --from-file=./nginx.conf
# check
$ kubectl get configmap sample-configmap -o json | jq .data
# describe
$ kubectl describe configmap sample-configmap
`
Using a YAML file
When a value is long, define it with Key: |
`
apiVersion: v1
kind: ConfigMap
metadata:
  name: sample-configmap
data:
  thread: "16"
  connection.max: "100"
  connection.min: "10"
  sample.properties: |
    property.1=value-1
    property.2=value-2
    property.3=value-3
  nginx.conf: |
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    ...
`
Using kubectl (--from-literal)
`
$ kubectl create configmap web-config \
    --from-literal=connection.max=100 \
    --from-literal=connection.min=10
`
Using a ConfigMap: two patterns, each usable with one specific key or with every key:
as environment variables (a specific key, or all keys)
as a volume mount (a specific key, or all keys)
Environment variables
For a specific key, use valueFrom.configMapKeyRef in spec.containers[].env
`
# configmap_single_env_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-configmap-single-env
spec:
  containers:
  - name: configmap-container
    image: nginx:1.12
    env:
    - name: CONNECTION_MAX
      valueFrom:
        configMapKeyRef:
          name: sample-configmap
          key: connection.max
`
Each env entry is defined individually, so the variable name can be chosen
Passing every key means no per-key definitions and a shorter YAML, but the YAML alone no longer shows which values exist
`
# configmap_multi_env_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-configmap-multi-env
spec:
  containers:
  - name: configmap-container
    image: nginx:1.12
    envFrom:
    - configMapRef:
        name: sample-configmap
`
With environment variables, the following patterns cannot be expressed and are not passed:
keys containing '.'
values containing newlines (keys defined with Key: |)
Volume Mount
For a specific key only, use configMap.items[] under spec.volumes[].
`
# configmap_single_volume_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-configmap-single-volume
spec:
  containers:
  - name: configmap-container
    image: nginx:1.12
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: sample-configmap
      items:
      - key: nginx.conf
        path: nginx-sample.conf
`
Each mounted file is defined individually, so the filename can be chosen.
`$ kubectl exec -it sample-configmap-single-volume cat /config/nginx-sample.conf `
Mounting every key shortens the YAML, but the YAML alone no longer shows which values exist.
`
# configmap_multi_volume_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-configmap-multi-volume
spec:
  containers:
  - name: configmap-container
    image: nginx:1.12
    volumeMounts:
    - name: config-volume
      mountPath: /config
  volumes:
  - name: config-volume
    configMap:
      name: sample-configmap
`
`$ kubectl exec -it sample-configmap-multi-volume ls /config `
**Dynamic ConfigMap updates**
With a volume mount, the kubelet checks kube-apiserver for changes on its sync-loop cycle (default 60 seconds, changeable with kubelet's `--sync-frequency`) and refreshes the mount when something changed. (Environment variables are not updated dynamically.)
## The differences between Volume, PersistentVolume, and PersistentVolumeClaim
Volume makes an existing volume (a host path, NFS, Ceph, a GCP volume, etc.) usable by specifying it directly in the YAML manifest. Users therefore cannot create new volumes or delete existing ones through it, and there is no such thing as creating a Volume resource from a YAML manifest.
PersistentVolume integrates with systems that provide persistent external volumes, so new volumes can be created and existing ones deleted. Concretely, you create separate PersistentVolume resources via YAML manifests. PersistentVolume plug-ins can handle the volume lifecycle (creation and deletion), while Volume plug-ins can only use volumes that already exist.
PersistentVolumeClaim is the resource that requests an assignment out of the PersistentVolume resources. A PersistentVolume merely registers a volume with the cluster; to actually use it from a Pod, a PersistentVolumeClaim must be defined. When Dynamic Provisioning is used, the PersistentVolume is created dynamically at the moment the PersistentVolumeClaim is used, so the ordering can even be reversed.
## Volume
[Volume plug-ins](https://kubernetes.io/docs/concepts/storage/volumes/)
Unlike a PersistentVolume, a Volume statically assigns an area to the Pod, so watch out for contention.
**EmptyDir**
Usable as a temporary disk area for the Pod
Deleted when the Pod terminates
`
# emptydir-sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-emptydir
spec:
  containers:
  - image: nginx:1.12
    name: nginx-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
`
**HostPath**
A plug-in that maps an area on the Kubernetes node into the container.
type
Directory
DirectoryOrCreate
File
Socket
BlockDevice
`
# hostpath-sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-hostpath
spec:
  containers:
  - image: nginx:1.12
    name: nginx-container
    volumeMounts:
    - mountPath: /srv
      name: hostpath-sample
  volumes:
  - name: hostpath-sample
    hostPath:
      path: /data
      type: DirectoryOrCreate
`
## PersistentVolume (PV)
Where a Volume lives inside the Pod definition, a PersistentVolume is created as a resource of its own
Strictly speaking it is closer to a Cluster resource than a Config & Storage resource.
Kinds of PersistentVolume
Basically disks attached over the network.
Persistent Volumes - Kubernetes
GCE Persistent Disk
AWS Elastic Block Store
NFS
iSCSI
Ceph
OpenStack Cinder
GlusterFS
create PersistentVolume
Settings include:
label
capacity
access modes
reclaim policy
mount options
storage class
per-plug-in PersistentVolume settings
`
# pv_sample.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-pv
  labels:
    type: nfs
    environment: stg
spec:
  capacity:
    storage: 10G
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
  - hard
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/sample
`
`$ kubectl get pv `
**Label**
When PersistentVolumes are used without Dynamic Provisioning, it quickly becomes unclear what kind each volume is, so attaching labels such as type, environment, and speed is recommended. With labels, the PersistentVolumeClaim can select volumes by label, making scheduling flexible.
**Capacity**
When Dynamic Provisioning is unavailable, resist simply requesting huge capacity: the storage whose capacity is closest to the request is the one that gets used.
**Access modes**
ReadWriteOnce (RWO): read/write from a single node
ReadOnlyMany (ROX): read-only from multiple nodes
ReadWriteMany (RWX): read/write from multiple nodes
Persistent Volumes - Kubernetes
Reclaim Policy
What happens to the PersistentVolume once it is no longer in use
Retain: keeps the data without deleting it
  the volume is not re-mounted by another PersistentVolumeClaim
Recycle: deletes the data (rm -rf ./*) and makes the volume reusable
  other PersistentVolumeClaims can then re-mount it
Delete: deletes the PersistentVolume itself
  mainly used with external volumes
Mount Options
Differ by PersistentVolume type; check the one you use.
Storage Class
Used with Dynamic Provisioning so that a user requesting a PersistentVolume through a PersistentVolumeClaim can specify the kind of disk wanted.
Choosing a StorageClass = choosing a kind of external volume
`
# storageclass_sample.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-storageclass
parameters:
  availability: test-zone-la
  type: scaleio
provisioner: kubernetes.io/cinder
`
Per-plug-in PersistentVolume settings
They really do vary by type; check the documentation for yours.
PersistentVolumeClaim (PVC)
A PersistentVolumeClaim carries the requested conditions, and the scheduler assigns a suitable volume from the PersistentVolumes it currently holds
PersistentVolumeClaim settings:
label selector
capacity
access modes
storage class
If the capacity the PVC requests is no larger than a PV's, the PV can be assigned: requesting 8 GiB with no exact match available may yield a 20 GiB PV.
With NFS there is no quota, so the PV's declared capacity is effectively ignored.
create PersistentVolumeClaim
`
# pvc_sample.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc
spec:
  selector:
    matchLabels:
      type: "nfs"
    matchExpressions:
    - {key: environment, operator: In, values: [stg]}
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
`
`$ kubectl get pvc $ kubectl get pv `
If the PVC fails to secure a PV, it stays pending.
Under the Retain policy, when use ends the PV goes from Bound to Released; a Released PV is not re-claimed by PVCs.
use in Pod
Use persistentVolumeClaim.claimName under spec.volumes
`
# pvc_pod_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pvc-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: nginx-pvc
  volumes:
  - name: nginx-pvc
    persistentVolumeClaim:
      claimName: sample-pvc
`
Dynamic Provisioning
PVs are created dynamically, so capacity is used efficiently and no PVs need to exist in advance.
Many provisioners support only ReadWriteOnce.
`
# Storage Class
# storageclass_sample.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-storageclass
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
`
`
# PVC
# pvc_provisioner_sample.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc-provisioner
  annotations:
    volume.beta.kubernetes.io/storage-class: sample-storageclass
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
`
`
# Pod
# pvc_provisioner_pod_sample.yml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pvc-provisioner-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: nginx-pvc
  volumes:
  - name: nginx-pvc
    persistentVolumeClaim:
      claimName: sample-pvc-provisioner
`
`$ kubectl get pv `
PersistentVolumeClaim in StatefulSet
StatefulSets often use PersistentVolumeClaims; with spec.volumeClaimTemplates there is no need to define the PVC separately.
`
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  ...
spec:
  template:
    spec:
      containers:
      - name: sample-pvct
        image: nginx:1.12
        volumeMounts:
        - name: pvc-template-volume
          mountPath: /tmp
  volumeClaimTemplates:
  - metadata:
      name: pvc-template-volume
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: "sample-storageclass"
`
faizrashis1995 · 5 years ago
Text
“Let’s use Kubernetes!” Now you have 8 problems
If you’re using Docker, the next natural step seems to be Kubernetes, aka K8s: that’s how you run things in production, right?
 Well, maybe. Solutions designed for 500 software engineers working on the same application are quite different than solutions for 50 software engineers. And both will be different from solutions designed for a team of 5.
 If you’re part of a small team, Kubernetes probably isn’t for you: it’s a lot of pain with very little benefits.
 Let’s see why.
 Everyone loves moving parts
Kubernetes has plenty of moving parts—concepts, subsystems, processes, machines, code—and that means plenty of problems.
 Multiple machines
Kubernetes is a distributed system: there’s a main machine that controls worker machines. Work is scheduled across different worker machines. Each machine then runs the work in containers.
 So already you’re talking about two machines or virtual machines just to get anything at all done. And that just gives you … one machine. If you’re going to scale (the whole point of the exercise) you need three or four or seventeen VMs.
 Lots and lots and lots of code
The Kubernetes code base as of early March 2020 has more than 580,000 lines of Go code. That’s actual code, it doesn’t count comments or blank lines, nor did I count vendored packages. A security review from 2019 described the code base as follows:
 “…the Kubernetes codebase has significant room for improvement. The codebase is large and complex, with large sections of code containing minimal documentation and numerous dependencies, including systems external to Kubernetes. There are many cases of logic re-implementation within the codebase which could be centralized into supporting libraries to reduce complexity, facilitate easier patching, and reduce the burden of documentation across disparate areas of the codebase.”
 This is no different than many large projects, to be fair, but all that code is something you need working if your application isn’t going to break.
 Architectural complexity, operational complexity, configuration complexity, and conceptual complexity
Kubernetes is a complex system with many different services, systems, and pieces.
Before you can run a single application, you need the following highly-simplified architecture (diagram not reproduced here; original source in the Kubernetes documentation):
   The concepts documentation in the K8s documentation includes many educational statements along these lines:
 In Kubernetes, an EndpointSlice contains references to a set of network endpoints. The EndpointSlice controller automatically creates EndpointSlices for a Kubernetes Service when a selector is specified. These EndpointSlices will include references to any Pods that match the Service selector. EndpointSlices group network endpoints together by unique Service and Port combinations.
 By default, EndpointSlices managed by the EndpointSlice controller will have no more than 100 endpoints each. Below this scale, EndpointSlices should map 1:1 with Endpoints and Services and have similar performance.
 I actually understand that, somewhat, but notice how many concepts are needed: EndpointSlice, Service, selector, Pod, Endpoint.
 And yes, much of the time you won’t need most of these features, but then much of the time you don’t need Kubernetes at all.
 Another random selection:
 By default, traffic sent to a ClusterIP or NodePort Service may be routed to any backend address for the Service. Since Kubernetes 1.7 it has been possible to route “external” traffic to the Pods running on the Node that received the traffic, but this is not supported for ClusterIP Services, and more complex topologies — such as routing zonally — have not been possible. The Service Topology feature resolves this by allowing the Service creator to define a policy for routing traffic based upon the Node labels for the originating and destination Nodes.
 Here’s what that security review I mentioned above had to say:
 “Kubernetes is a large system with significant operational complexity. The assessment team found configuration and deployment of Kubernetes to be non-trivial, with certain components having confusing default settings, missing operational controls, and implicitly defined security controls.”
 Development complexity
The more you buy in to Kubernetes, the harder it is to do normal development: you need all the different concepts (Pod, Deployment, Service, etc.) to run your code. So you need to spin up a complete K8s system just to test anything, via a VM or nested Docker containers.
 And since your application is much harder to run locally, development is harder, leading to a variety of solutions, from staging environments, to proxying a local process into the cluster (I wrote a tool for this a few years ago), to proxying a remote process onto your local machine…
 There are plenty of imperfect solutions to choose; the simplest and best solution is to not use Kubernetes.
 Microservices (are a bad idea)
A secondary problem is that since you have this system that allows you to run lots of services, it’s often tempting to write lots of services. This is a bad idea.
 Distributed applications are really hard to write correctly. Really. The more moving parts, the more these problems come in to play.
 Distributed applications are hard to debug. You need whole new categories of instrumentation and logging to getting understanding that isn’t quite as good as what you’d get from the logs of a monolithic application.
 Microservices are an organizational scaling technique: when you have 500 developers working on one live website, it makes sense to pay the cost of a large-scale distributed system if it means the developer teams can work independently. So you give each team of 5 developers a single microservice, and that team pretends the rest of the microservices are external services they can’t trust.
 If you’re a team of 5 and you have 20 microservices, and you don’t have a very compelling need for a distributed system, you’re doing it wrong. Instead of 5 people per service like the big company has, you have 0.25 people per service.
 But isn’t it useful?
Scaling
Kubernetes might be useful if you need to scale a lot. But let’s consider some alternatives:
 You can get cloud VMs with up to 416 vCPUs and 8TiB RAM, a scale I can only truly express with profanity. It’ll be expensive, yes, but it will also be simple.
You can scale many simple web applications quite trivially with services like Heroku.
This presumes, of course, that adding more workers will actually do you any good:
 Most applications don’t need to scale very much; some reasonable optimization will suffice.
Scaling for many web applications is typically bottlenecked by the database, not the web workers.
Reliability
More moving parts means more opportunity for error.
The features Kubernetes provides for reliability (health checks, rolling deploys) can be implemented much more simply, or are already built in elsewhere. For example, nginx can do health checks on worker processes, and you can use docker-autoheal or something similar to automatically restart those processes, as the sketch below shows.
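Here is a minimal sketch of that simpler alternative, assuming the willfarrell/autoheal image and a hypothetical app image that ships curl and exposes an HTTP health endpoint:

version: "3.8"
services:
  web:
    image: my-app:latest              # hypothetical application image
    labels:
      autoheal: "true"                # mark this container for autoheal to watch
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/healthz"]
      interval: 30s
      timeout: 3s
      retries: 3
  autoheal:
    image: willfarrell/autoheal       # restarts containers whose healthcheck fails
    environment:
      AUTOHEAL_CONTAINER_LABEL: autoheal
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock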
 And if what you care about is downtime, your first thought shouldn’t be “how do I reduce deployment downtime from 1 second to 1ms”, it should be “how can I ensure database schema changes don’t prevent rollback if I screw something up.”
And if you want reliable web workers without a single machine as the point of failure, there are plenty of ways to do that that don't involve Kubernetes.
Source: https://pythonspeed.com/articles/dont-need-kubernetes/
0 notes
dmroyankita · 5 years ago
Text
Monitoring in the Kubernetes era
What is Kubernetes?
Container technologies have taken the infrastructure world by storm. Ideal for microservice architectures and environments that scale rapidly or have frequent releases, containers have seen a rapid increase in usage in recent years. But adopting Docker, containerd, or other container runtimes introduces significant complexity in terms of orchestration. That’s where Kubernetes comes into play.
 The conductor
Kubernetes, often abbreviated as “K8s,” automates the scheduling, scaling, and maintenance of containers in any infrastructure environment. First open sourced by Google in 2014, Kubernetes is now part of the Cloud Native Computing Foundation.
 Just like a conductor directs an orchestra, telling the musicians when to start playing, when to stop, and when to play faster, slower, quieter, or louder, Kubernetes manages your containers—starting, stopping, creating, and destroying them automatically to reflect changes in demand or resource availability. Kubernetes automates your container infrastructure via:
 Container scheduling and auto-scaling
Health checking and recovery
Replication for parallelization and high availability
Internal network management for service naming, discovery, and load balancing
Resource allocation and management
Kubernetes can orchestrate your containers wherever they run, which facilitates multi-cloud deployments and migrations between infrastructure platforms. Hosted and self-managed flavors of Kubernetes abound, from enterprise-optimized platforms such as OpenShift and Pivotal Container Service to cloud services such as Google Kubernetes Engine, Amazon Elastic Kubernetes Service, Azure Kubernetes Service, and Oracle’s Container Engine for Kubernetes.
 Since its introduction in 2014, Kubernetes has been steadily gaining in popularity across a range of industries and use cases. Datadog’s research shows that almost one-half of organizations running containers were using Kubernetes as of November 2019.
 Key components of a Kubernetes architecture
Containers
At the lowest level, Kubernetes workloads run in containers, although part of the benefit of running a Kubernetes cluster is that it frees you from managing individual containers. Instead, Kubernetes users make use of abstractions such as pods, which bundle containers together into deployable units (and which are described in more detail below).
 Kubernetes was originally built to orchestrate Docker containers, but has since opened its support to a variety of container runtimes. In version 1.5, Kubernetes introduced the Container Runtime Interface (CRI), an API that allows users to adopt any container runtime that implements the CRI. With pluggable runtime support via the CRI, you can now choose between Docker, containerd, CRI-O, and other runtimes, without needing specialized support for each technology individually.
 Pods
Kubernetes pods are the smallest deployable units that can be created, scheduled, and managed with Kubernetes. They provide a layer of abstraction for containerized components to facilitate resource sharing, communication, application deployment and management, and discovery.
 Each pod contains one or more containers on which your workloads are running. Kubernetes will always schedule containers within the same pod together, but each container can run a different application. The containers in a given pod run on the same host and share the same IP address, port space, context, namespace (see below), and even resources like storage volumes.
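As an illustration, a minimal (hypothetical) pod manifest bundling an application container with a logging sidecar might look like this; both containers share the pod's IP and can share volumes:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod                       # hypothetical names throughout
  labels:
    app: nginx
spec:
  containers:
    - name: web
      image: nginx:1.21
      ports:
        - containerPort: 80
    - name: log-shipper
      image: busybox:1.35
      # Placeholder for a real log-shipping process; shares the pod's network namespace
      command: ["sh", "-c", "tail -f /dev/null"]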
 You can manually deploy individual pods to a Kubernetes cluster, but the official Kubernetes documentation recommends that users manage pods using a controller such as a deployment or replica set. These objects, covered below, provide higher levels of abstraction and automation to manage pod deployment, scaling, and updating.
 Nodes, clusters, and namespaces
Pods run on nodes, which are virtual or physical machines, grouped into clusters. A cluster of nodes has at least one master node that runs four key services for cluster administration:
 The API server exposes the Kubernetes API for interacting with the cluster
The Controller Manager watches the current state of the cluster and attempts to move it toward the desired state
The Scheduler assigns workloads to nodes
etcd stores data about cluster configuration, cluster state, and more
To ensure high availability, you can run multiple master nodes and distribute them across different zones to avoid a single point of failure for the cluster.
 All the non-master nodes in a cluster are workers, each of which runs an agent called a kubelet. The kubelet receives instructions from the API server about the makeup of individual pods and makes sure that all the containers in each pod are running properly.
 You can create multiple virtual Kubernetes clusters, called namespaces, on the same physical cluster. Namespaces allow cluster administrators to create multiple environments (e.g., dev and staging) on the same cluster, or to apportion resources to different teams or projects within a cluster.
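For example, carving out per-environment namespaces is a couple of commands (namespace names here are arbitrary):

$ kubectl create namespace dev
$ kubectl create namespace staging
# Then scope any query or deployment to an environment:
$ kubectl get pods -n staging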
(Diagram: a Kubernetes architecture built from clusters, nodes, and pods)
Kubernetes controllers
Controllers play a central role in how Kubernetes automatically orchestrates workloads and cluster components. Controller manifests describe a desired state for the cluster, including which pods to launch and how many copies to run. Each controller watches the API server for any changes to cluster resources and makes changes of its own to keep the actual state of the cluster in line with the desired state. A type of controller called a replica set is responsible for creating and destroying pods dynamically to ensure that the desired number of pods (replicas) are running at all times. If any pods fail or are terminated, the replica set will automatically attempt to replace them.
 A Deployment is a higher-level controller that manages your replica sets. In a Deployment manifest (a YAML or JSON document defining the specifications for a Deployment), you can declare the type and number of pods you wish to run, and the Deployment will create or update replica sets at a controlled rate to reach the desired state. The Kubernetes documentation recommends that users rely on Deployments rather than managing replica sets directly, because Deployments provide advanced features for updating your workloads, among other operational benefits. Deployments automatically manage rolling updates, ensuring that a minimum number of pods remain available throughout the update process. They also provide tooling for pausing or rolling back changes to replica sets.
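A minimal Deployment manifest along these lines might look like the following sketch (hypothetical values; the name matches the nginx-deployment referenced in the auto-scaling example below). The Deployment maintains a replica set of three nginx pods and handles rolling updates for them:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80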
 Services
Since pods are constantly being created and destroyed, their individual IP addresses are dynamic, and can’t reliably be used for communication. So Kubernetes architectures rely on services, which are simple REST objects that provide a level of abstraction and stability across pods and between the different components of your applications. A service acts as an endpoint for a set of pods by exposing a stable IP address to the external world, which hides the complexity of the cluster’s dynamic pod scheduling on the backend. Thanks to this additional abstraction, services can continuously communicate with each other even as the pods that constitute them come and go. It also makes service discovery and load balancing possible.
(Diagram: the service abstraction sitting in front of a changing set of pods)
Services target specific pods by leveraging labels, which are key-value pairs applied to objects in Kubernetes. A Kubernetes service uses labels to dynamically identify which pods should handle incoming requests. For example, the manifest below creates a service named web-app that will route requests to any pod carrying the app=nginx label. See the section below for more on the importance of labels.
 apiVersion: v1
kind: Service
metadata:
 name: web-app
spec:
 selector:
   app: nginx
 ports:
   - protocol: TCP
     port: 80
If you’re using CoreDNS, which is the default DNS server for Kubernetes, CoreDNS will automatically create DNS records each time a new service is created. Any pods within your cluster will then be able to address the pods running a particular service using the name of the service and its associated namespace. For instance, any pod in the cluster can talk to the web-app service running in the prod namespace by querying the DNS name web-app.prod.
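To sanity-check this from inside the cluster, you can run a throwaway pod and query the service name directly. This is a sketch of ours, not from the original article; the dnsutils image is just one convenient option:

$ kubectl run -it --rm dns-test --image=tutum/dnsutils --restart=Never -- nslookup web-app.prod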
 Auto-scaling
Deployments enable you to adjust the number of running pods on the fly with simple commands like kubectl scale and kubectl edit. But Kubernetes can also scale the number of pods automatically based on user-provided criteria. The Horizontal Pod Autoscaler is a Kubernetes controller that attempts to meet a CPU utilization target by scaling the number of pods in a deployment or replica set based on real-time resource usage. For example, the command below creates a Horizontal Pod Autoscaler that will dynamically adjust the number of running pods in the deployment (nginx-deployment), between a minimum of 5 pods and a maximum of 10, to maintain an average CPU utilization of 65 percent.
 kubectl autoscale deployment nginx-deployment --cpu-percent=65 --min=5 --max=10
Kubernetes has progressively rolled out support for auto-scaling with metrics besides CPU utilization, such as memory consumption and, as of Kubernetes 1.10, custom metrics from outside the cluster.
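As a hedged sketch of what that looks like declaratively, here is an autoscaling/v2 manifest that targets memory utilization instead of CPU (field names follow the autoscaling/v2 API; the target values are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 5
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70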
 What does Kubernetes mean for your monitoring?
Kubernetes requires you to rethink and reorient your monitoring strategies, especially if you are used to monitoring traditional, long-lived hosts such as VMs or physical machines. Just as containers have completely transformed how we think about running services on virtual machines, Kubernetes has changed the way we interact with containerized applications.
 The good news is that the abstraction inherent to a Kubernetes-based architecture already provides a framework for understanding and monitoring your applications in a dynamic container environment. With a proper monitoring approach that dovetails with Kubernetes’s built-in abstractions, you can get a comprehensive view of application health and performance, even if the containers running those applications are constantly shifting across hosts or scaling up and down.
 Monitoring Kubernetes differs from traditional monitoring of more static resources in several ways:
 Tags and labels are essential for continuous visibility
Additional layers of abstraction means more components to monitor
Applications are highly distributed and constantly moving
Tags and labels were important . . . now they’re essential
Just as Kubernetes uses labels to identify which pods belong to a particular service, you can use these labels to aggregate data from individual pods and containers to get continuous visibility into services and other Kubernetes objects.
 In the pre-container world, labels and tags were important for monitoring your infrastructure. They allowed you to group hosts and aggregate their metrics at any level of abstraction. In particular, tags have proved extremely useful for tracking the performance of dynamic cloud infrastructure and investigating issues that arise there.
A container environment brings even larger numbers of objects to track, with even shorter lifespans. The automation and scalability of Kubernetes only amplifies this difference. With so many moving pieces in a typical Kubernetes cluster, labels provide the only reliable way to identify your pods and the applications within.
To make your observability data as useful as possible, you should label your pods so that you can look at any aspect of your applications and infrastructure (a sketch follows the list), such as:
 environment (prod, staging, dev, etc.)
app
team
version
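A pod spec carrying these labels might look like this minimal sketch (all names and values are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: checkout-web          # hypothetical
  labels:
    environment: prod
    app: web-app
    team: checkout
    version: "1.4.2"
spec:
  containers:
    - name: web
      image: nginx:1.21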
These user-generated labels are essential for monitoring since they are the only way you have to slice and dice your metrics and events across the different layers of your Kubernetes architecture.
Source: https://www.datadoghq.com/blog/monitoring-kubernetes-era/
0 notes
kubernetesdeveloper-com · 6 years ago
Text
Certified Kubernetes Applications Developer
Hello, and welcome to this course on the Certified Kubernetes Application Developer. My name is Mumshad Mannambeth, and I will be your instructor. About me: I'm a solutions architect specializing in cloud automation and DevOps technologies, and I have authored several best-selling and top-rated courses on Docker, Kubernetes, and OpenShift, as well as automation technologies like Ansible, Chef, and Puppet. This course is the second installment in the series on Kubernetes and focuses on the certification.
Let's take a look at the structure of this course. We start with a series of lectures on various topics in Kubernetes, where we simplify complex concepts using illustration and animation. We have optional quizzes that test your knowledge after each lecture, and coding quizzes that help you practice what you learned in a real, live environment right in your browser. The Kubernetes certification is a hands-on practical exam, so the coding exercises will give you enough experience and practice to get ready for it; more on this topic in the upcoming lectures. We will also discuss some tips and tricks to crack the certification exam, and as always, if you have any questions, you may reach out directly to us through our Q&A section.
This is one of the courses in the series on Kubernetes and focuses on getting the Kubernetes Application Developer certification, so a basic understanding of Kubernetes is required. For example, you must know how to set up a lab environment to practice on; the certification curriculum does not include Kubernetes setup or installation, so you can set up a learning environment in any way you like. We discussed a lot of this in the beginners course. You also need a good understanding of the YAML language for creating configuration files in Kubernetes, and a basic understanding of what master and worker nodes are, and what pods, replica sets, and deployments are. We do refresh some of these topics in this course, but if you are an absolute beginner, I highly recommend taking my Kubernetes for the Absolute Beginners course.
Let us now look at the course objectives. The objectives of this course are aligned to match the Certified Kubernetes Application Developer exam curriculum; we will discuss details about the certification itself in one of the upcoming lectures. Before heading into any of these topics, we start with the core concepts. We covered a lot of the core concepts in the beginners course, and we will recap some of them here to refresh our memory, such as the Kubernetes architecture, what pods are, and how to create and configure them. The next section is on configuration and covers topics like ConfigMaps, security contexts, resource requirements, Secrets, and service accounts. We will then look deeper into multi-container pods and their different patterns, such as ambassador, adapter, and sidecar, with examples and use cases around each. We then learn about readiness and liveness probes and why you need them. We will also look at some of the monitoring, logging, and debugging options available with Kubernetes, specifically around pods, containers, and applications. We then move on to labels and selectors, and then rolling updates and rollbacks in deployments. We will learn about why you need jobs and cron jobs and how to schedule them. We will then learn about services and network policies, and finally we look at persistent volumes and claims. For all of these topics, we have lectures that make these complex topics easy to understand, followed by coding
challenges where you will practice what you learned in a real, live environment. Let's take a look at that. The Kubernetes certification is a practical, hands-on exam, so it is very important to practice what you learn, which is why we have built a custom solution that gives you access to a real Kubernetes environment right in your browser, along with a quiz portal that provides fun and challenging problems for you to solve. You are required to gain a set of different skills for working with Kubernetes, such as how to look for and find information and how to troubleshoot issues. That is why we have questions where you will be asked to find information within an environment. You will also be asked to perform configuration tasks, where you will be required to configure and deploy applications and services on the cluster; we will test your work and provide feedback instantly. We will occasionally make changes to the environment, or break stuff, and ask you to fix it. These are common issues that one faces while working with Kubernetes, and these exercises will help you troubleshoot and fix issues quickly. When you see an error message, you should be able to understand what it means, where to look, and how to fix it. You need practice, and you need to get faster: one of the major villains in a practical test like the Kubernetes certification is time. Even simple issues like a typo or an indentation error in a YAML file can take a beginner hours to fix. This is why we have hundreds of such exercises in this course that will make you an expert and give you enough practice to help you clear the exam. Well, that's all for now. Thank you for taking this course, and I am excited to get started.
https://youtu.be/X_Ur118Kd_E
0 notes
computingpostcom · 3 years ago
Text
Running databases in Kubernetes can be a little bit delicate, owing to the fact that data needs to be persisted in case the pods die, are restarted, or malfunction for one reason or another. Because of this, most people still choose to run their databases on virtual machines since, among other things, it feels safe and you know exactly where the data is being stored/persisted. But if you need all of your applications to reside in Kubernetes, you will be delighted to know that it can handle stateful applications such as databases. There are two ways in which you can achieve this: you can either use Deployments or StatefulSets. The two have their pros and cons, and their differences are shown in the table below:

StatefulSet | Deployment
Pod names remain the same across restarts | Pods created from deployments have random names
If a volume is configured, each pod provisions its own persistent volume | Pod replicas in deployments share the same persistent volume
Recommended for deploying stateful applications, e.g. databases that need persistent data across replicas | Recommended for deploying stateless applications, for example load balancers, proxies, etc.
A headless service is responsible for the network identity of the pods | A service has to be provisioned to interact with pods in a deployment
Every replica is named in sequential order, beginning from ordinal number zero | Every replica is named randomly

Due to the advantages that StatefulSets have compared to Deployments, such as separate volumes for each of the pods created, let us go ahead and deploy a Redis cluster using this kind. It is best practice to use headless services (services without an IP address) when using StatefulSets. This is because we want our clients to connect directly to the pods, and StatefulSets conveniently maintain stable DNS identities. So we shall create a headless service to go along with this Redis cluster.

Pre-requisites
You need to have the following to get this accomplished:
A working Kubernetes cluster
kubectl installed locally or somewhere that is connected to your cluster
Ability to access your cluster
Dynamic Volume Provisioning in your cluster
Once you are ready, we shall set off.

Step 1: Create and deploy the headless service
In this step, we shall simply create our headless service as follows:

$ vim redis-service.yml

apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: redis
  labels:
    app: redis
spec:
  ports:
    - port: 6379
  clusterIP: None
  selector:
    app: redis

As you can notice, the service has its clusterIP set to "None", which prevents the service from getting an IP and lets clients connect directly to what lies behind it. With the service defined, proceed to deploy it as shown below. We will create the "redis" namespace and then apply the file. Remember that you can choose any namespace of your choice here.

$ kubectl create ns redis
$ kubectl apply -f redis-service.yml
service/redis-service created

Step 2: Create and deploy the configuration via a ConfigMap
We shall use a ConfigMap to add configuration parameters dynamically to our Redis master and replicas.
Create a ConfigMap as follows:

$ vim redis-configmap.yml

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-ss-configuration
  namespace: redis
  labels:
    app: redis
data:
  master.conf: |
    maxmemory 400mb
    maxmemory-policy allkeys-lru
    maxclients 20000
    timeout 300
    appendonly no
    dbfilename dump.rdb
    dir /data
  slave.conf: |
    slaveof redis-ss-0.redis-service.redis 6379
    maxmemory 400mb
    maxmemory-policy allkeys-lru
    maxclients 20000
    timeout 300
    dir /data

The data section of the ConfigMap holds the two configurations, one for the master and one for the replicas, which we shall direct to the right pods based on the StatefulSet's ordinal numbers. Note that the slaveof directive points at the master pod through its stable DNS name, of the form <pod>.<service>.<namespace>; here that is redis-ss-0.redis-service.redis.
For your information, StatefulSets assign a sticky identity, called an ordinal number, starting from zero to each pod, instead of assigning random IDs to each replica pod, as was clarified in the table in the introductory section. You can add more of your own configuration in this part as needed. Once you are comfortable with the configs, let's go ahead and deploy it right away:

$ kubectl apply -f redis-configmap.yml
configmap/redis-ss-configuration created

Step 3: Create and deploy the StatefulSet
We are now at the interesting part of this meal. All of the other parts are ready; we will just plug in the engine and be ready to hit the road. Create a new file and fill it with the following StatefulSet configuration, then we will explain what it does:

$ vim redis-statefulset.yml

Paste and modify the contents below where applicable:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-ss
  namespace: redis
spec:
  serviceName: "redis-service"
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
        - name: init-redis
          image: redis:latest
          command:
            - bash
            - "-c"
            - |
              set -ex
              # Generate the Redis server ID from the pod's ordinal index.
              [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
              ordinal=${BASH_REMATCH[1]}
              # Copy the appropriate Redis config file from the ConfigMap to its destination.
              if [[ $ordinal -eq 0 ]]; then
                cp /mnt/master.conf /etc/redis-config.conf
              else
                cp /mnt/slave.conf /etc/redis-config.conf
              fi
          volumeMounts:
            - name: redis-claim
              mountPath: /etc
            - name: config-map
              mountPath: /mnt/
      containers:
        - name: redis
          image: redis:latest
          ports:
            - containerPort: 6379
              name: redis-ss
          command:
            - redis-server
            - "/etc/redis-config.conf"
          volumeMounts:
            - name: redis-data
              mountPath: /data
            - name: redis-claim
              mountPath: /etc
      volumes:
        - name: config-map
          configMap:
            name: redis-ss-configuration
  volumeClaimTemplates:
    - metadata:
        name: redis-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
    - metadata:
        name: redis-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi

When using a StorageClass for PV creation, update the volumeClaimTemplates section like below:

  volumeClaimTemplates:
    - metadata:
        name: redis-claim
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
        storageClassName: rook-cephfs
    - metadata:
        name: redis-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
        storageClassName: rook-cephfs

You can list the configured StorageClasses in your cluster with the command below:

$ kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   204d
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   204d

In our StatefulSet configuration, the ordinal number of the first pod will be 0, and it will be named "redis-ss-0" because the name of the StatefulSet is "redis-ss". From the top, we have declared that we are creating a StatefulSet and picked the headless service called "redis-service" that we created in Step 1 as the main entry point. We use an initContainer to plug the configuration files into the pods.
Init containers run before the app containers start; once an init container completes its task, it exits and hands over to the main pod. In this case, the init container is instructed to check the ordinal number of the pod. Remember the ordinal number we explained before? Here is one of the instances where it becomes useful. We are saying that if the ordinal number is zero, i.e. "redis-ss-0" (notice the trailing zero), pick the "master.conf" configuration from the ConfigMap (available at /mnt in the init container) and copy it to the "/etc/redis-config.conf" file in the main pod; this configuration holds the Redis master settings. If the pod has any other ordinal number, like the trailing 1 in "redis-ss-1", pick the "slave.conf" file from the ConfigMap (also available at /mnt) and copy it to "/etc/redis-config.conf" in that new pod. Pretty cool!

You may now be wondering how the ConfigMap contents end up at the "/mnt" path. As you can see from the volumeMounts section, the volume named "config-map" is mounted at "/mnt". Checking the "volumes" section of the file, we see that the volume named "config-map" is backed by the ConfigMap called "redis-ss-configuration" we created in Step 2. Basically, the contents of the ConfigMap (master.conf and slave.conf) are projected into the volume called "config-map", which is then mounted at "/mnt", where the init container can access them.

Another thing you will notice is that the init container and the main pod share one of the volumes, mounted at the same path: the "redis-claim" volume. This is so that the file copied by the init container can be found by the main pod when it starts and mounts the volume at "/etc". Depending on the pod's ordinal number, the init container copies either master.conf or slave.conf into "/etc/redis-config.conf". Once that is done, the main container starts Redis using the "/etc/redis-config.conf" file copied by the init container. The command here is:

redis-server /etc/redis-config.conf

Later in the file, we see the "volumeClaimTemplates", which is how the StatefulSet creates persistent volumes dynamically. This is why one of the requirements for this deployment is the ability of your cluster to create persistent volumes dynamically. If this is not possible, you will have to create persistent volumes with the same names as the ones in "volumeClaimTemplates" before applying this StatefulSet. The persistent volumes here are used by the main pod to store its data at "/data" as well as the configuration at "/etc". We hope the explanation was clear enough.

Finally, apply the file like we have done with the others before:

$ kubectl apply -f redis-statefulset.yml
statefulset.apps/redis-ss created

Step 4: Checking if all we have done is working
By now, we should be good to go.
And the only way to know is to simply check whether everything was successfully deployed.

Check the ConfigMap:

$ kubectl get configmap -n redis
NAME                     DATA   AGE
redis-ss-configuration   2      3h35m

Check the Service:

$ kubectl get svc -n redis
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
redis-service   ClusterIP   None         <none>        6379/TCP   3h40m

Check the StatefulSet:

$ kubectl get statefulset -n redis
NAME       READY   AGE
redis-ss   1/1     3h43m

Check the pods created from the StatefulSet:

$ kubectl get pods -n redis
NAME         READY   STATUS    RESTARTS   AGE
redis-ss-0   1/1     Running   0          3h42m

Check the PersistentVolumeClaims:

$ kubectl get pvc -n redis
redis-claim-redis-ss-0   Bound   pvc-8a30a867-8b3e-49c2-ac1d-be05d8fe3255   1Gi   RWO   standard   3h40m
redis-data-redis-ss-0    Bound   pvc-04cd967c-93ec-457e-9cef-582aabae496a   1Gi   RWO   standard   3h40m
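If redis-ss-0 ever gets stuck in Init or CrashLoopBackOff, a quick debugging step (our own suggestion, not part of the original walkthrough) is to inspect the init container's logs and the pod's events:

$ kubectl logs redis-ss-0 -c init-redis -n redis
$ kubectl describe pod redis-ss-0 -n redis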
Let us now exec into the pod and check that the configuration we want is the one we get:

$ kubectl exec -it redis-ss-0 -c redis -n redis -- /bin/bash
bash-5.1#

Once inside, enter the Redis CLI interface:

bash-5.1# redis-cli

Find out whether the timeout matches the one in the config file:

127.0.0.1:6379> config get timeout
1) "timeout"
2) "300"

Find out whether maxclients matches the one in the config file:

127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "20000"

Cool! It seems like we are out of the woods 😄.

Step 5: See if replication works
Now that the master is functioning well, let us increase the replicas and see whether the master replicates to the replica pod. Navigate to the StatefulSet file we created, edit the replicas from 1 to 2 as follows, then apply the file:

$ vim redis-statefulset.yml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-ss
  namespace: redis
spec:
  serviceName: "redis-service"
  replicas: 2 ##
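Once the second pod comes up as redis-ss-1, a hedged way to confirm that replication is working is to query the master's replication state. This is a verification sketch of ours rather than a step from the original (truncated) walkthrough:

$ kubectl apply -f redis-statefulset.yml
$ kubectl get pods -n redis          # redis-ss-1 should appear and reach Running
$ kubectl exec -it redis-ss-0 -c redis -n redis -- redis-cli info replication
# Look for role:master and connected_slaves:1 in the output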
0 notes