#matchlabel
sonohausgraphics · 2 years
Photo
Atami [Bar Komado]
Key visual design, match label design, postcard design, and business card design
(2022.04)
qcs01 · 3 months
Text
Performance Optimization on OpenShift
Optimizing the performance of applications running on OpenShift involves several best practices and tools. Here's a detailed guide:
1. Resource Allocation and Management
a. Proper Sizing of Pods and Containers:
- Requests and Limits: Set appropriate CPU and memory requests and limits to ensure fair resource allocation and avoid overcommitting resources (see the example after this list).
  - Requests: Guaranteed resources for a pod.
  - Limits: Maximum resources a pod can use.
- Vertical Pod Autoscaler (VPA): Automatically adjusts the CPU and memory requests and limits for containers based on usage.
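- Example Requests and Limits (a minimal sketch; the values are placeholders, not recommendations):
  ```yaml
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
  ```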
b. Resource Quotas and Limits:
- Use resource quotas to limit the resource usage per namespace to prevent any single application from monopolizing cluster resources.
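- Example Resource Quota (a minimal sketch; the namespace and values are assumptions for illustration):
  ```yaml
  apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: compute-quota
    namespace: team-a
  spec:
    hard:
      requests.cpu: "4"
      requests.memory: 8Gi
      limits.cpu: "8"
      limits.memory: 16Gi
  ```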
c. Node Selector and Taints/Tolerations:
- Use node selectors and taints/tolerations to control pod placement on nodes with appropriate resources.
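- Example placement controls (a minimal sketch; the disktype label and dedicated taint are assumptions for illustration):
  ```yaml
  spec:
    nodeSelector:
      disktype: ssd
    tolerations:
      - key: dedicated
        operator: Equal
        value: high-memory
        effect: NoSchedule
  ```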
2. Scaling Strategies
a. Horizontal Pod Autoscaler (HPA):
- Automatically scales the number of pod replicas based on observed CPU/memory usage or custom metrics.
- Example Configuration:
  ```yaml
  apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    name: my-app-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: my-app
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 70
  ```
b. Cluster Autoscaler:
- Automatically adjusts the size of the OpenShift cluster by adding or removing nodes based on the workload requirements.
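- On OpenShift this is typically enabled through a ClusterAutoscaler resource; a minimal sketch (the limits are illustrative):
  ```yaml
  apiVersion: autoscaling.openshift.io/v1
  kind: ClusterAutoscaler
  metadata:
    name: default
  spec:
    resourceLimits:
      maxNodesTotal: 10
    scaleDown:
      enabled: true
      delayAfterAdd: 10m
  ```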
3. Application and Cluster Tuning
a. Optimize Application Code:
- Profile and optimize the application code to reduce resource consumption and improve performance.
- Use tools like JProfiler, VisualVM, or built-in profiling tools in your IDE.
b. Database Optimization:
- Optimize database queries and indexing.
- Use connection pooling and proper caching strategies.
c. Network Optimization:
- Use service meshes (like Istio) to manage and optimize service-to-service communication.
- Enable HTTP/2 or gRPC for efficient communication.
4. Monitoring and Analyzing Performance
a. Prometheus and Grafana:
- Use Prometheus for monitoring and alerting on various metrics.
- Visualize metrics in Grafana dashboards.
- Example Prometheus Configuration:
  ```yaml
  apiVersion: monitoring.coreos.com/v1
  kind: ServiceMonitor
  metadata:
    name: my-app
  spec:
    selector:
      matchLabels:
        app: my-app
    endpoints:
      - port: web
        interval: 30s
  ```
b. OpenShift Monitoring Stack:
- Leverage OpenShift's built-in monitoring stack, including Prometheus, Grafana, and Alertmanager, to monitor cluster and application performance.
c. Logging with EFK/ELK Stack:
- Use Elasticsearch, Fluentd, and Kibana (EFK) or Elasticsearch, Logstash, and Kibana (ELK) stack for centralized logging and log analysis.
- Example Fluentd Configuration:
  ```yaml
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: fluentd-config
  data:
    fluent.conf: |
      <source>
        @type tail
        path /var/log/containers/*.log
        pos_file /var/log/fluentd-containers.log.pos
        tag kubernetes.*
        format json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </source>
  ```
d. APM Tools (Application Performance Monitoring):
- Use tools like New Relic, Dynatrace, or Jaeger for distributed tracing and APM to monitor application performance and pinpoint bottlenecks.
5. Best Practices for OpenShift Performance Optimization
a. Regular Health Checks:
- Configure liveness and readiness probes to ensure pods are healthy and ready to serve traffic.
  - Example Liveness Probe:
    ```yaml
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
    ```
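  - Example Readiness Probe (a minimal sketch; the /ready endpoint is an assumption):
    ```yaml
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
    ```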
b. Efficient Image Management:
- Use optimized and minimal base images to reduce container size and startup time.
- Regularly scan and update images to ensure they are secure and performant.
c. Persistent Storage Optimization:
- Use appropriate storage classes for different types of workloads (e.g., SSD for high I/O applications).
- Optimize database storage configurations and perform regular maintenance.
d. Network Policies:
- Implement network policies to control and secure traffic flow between pods, reducing unnecessary network overhead.
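- Example Network Policy (a sketch admitting only traffic from pods labeled app: frontend; both labels are assumptions for illustration):
  ```yaml
  apiVersion: networking.k8s.io/v1
  kind: NetworkPolicy
  metadata:
    name: allow-frontend
  spec:
    podSelector:
      matchLabels:
        app: my-app
    ingress:
      - from:
          - podSelector:
              matchLabels:
                app: frontend
        ports:
          - protocol: TCP
            port: 8080
  ```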
Conclusion
Optimizing performance on OpenShift involves a combination of proper resource management, scaling strategies, application tuning, and continuous monitoring. By implementing these best practices and utilizing the available tools, you can ensure that your applications run efficiently and effectively on the OpenShift platform.
For more details click www.hawkstack.com 
thedebugdiary · 1 year
Text
A Minimal Guide to Deploying MLflow 2.6 on Kubernetes
Introduction
Deploying MLflow on Kubernetes can be a straightforward process if you know what you're doing. This blog post aims to provide a minimal guide to get you up and running with MLflow 2.6 on a Kubernetes cluster. We'll use the namespace my-space for this example.
Prerequisites
A running Kubernetes cluster
kubectl installed and configured to interact with your cluster
Step 1: Create the Deployment YAML
Create a file named mlflow-minimal-deployment.yaml and paste the following content:
apiVersion: v1
kind: Namespace
metadata:
  name: my-space
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-server
  namespace: my-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mlflow-server
  template:
    metadata:
      labels:
        app: mlflow-server
    spec:
      containers:
        - name: mlflow-server
          image: ghcr.io/mlflow/mlflow:v2.6.0
          command: ["mlflow", "server"]
          args: ["--host", "0.0.0.0", "--port", "5000"]
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-service
  namespace: my-space
spec:
  selector:
    app: mlflow-server
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 5000
Step 2: Apply the Deployment
Apply the YAML file to create the deployment and service:
kubectl apply -f mlflow-minimal-deployment.yaml
Step 3: Verify the Deployment
Check if the pod is running:
kubectl get pods -n my-space
Step 4: Port Forwarding
To access the MLflow server from your local machine, you can use Kubernetes port forwarding:
kubectl port-forward -n my-space svc/mlflow-service 5000:5000
After running this command, you should be able to access the MLflow server at http://localhost:5000 from your web browser.
Step 5: Access MLflow within the Cluster
The cluster-internal URL for the MLflow service would be:
http://mlflow-service.my-space.svc.cluster.local:5000
You can use this tracking URL in other services within the same Kubernetes cluster, such as Kubeflow, to log your runs.
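For example, a client pod in the same cluster could point at the server through MLflow's standard MLFLOW_TRACKING_URI environment variable; a minimal sketch (the container name and image are placeholders):
containers:
  - name: training-job # hypothetical client container
    image: my-training-image # placeholder image
    env:
      - name: MLFLOW_TRACKING_URI
        value: "http://mlflow-service.my-space.svc.cluster.local:5000"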
Troubleshooting Tips
Pod not starting: Check the logs using kubectl logs -n my-space deployment/mlflow-server.
Service not accessible: Make sure the service is running using kubectl get svc -n my-space.
Port issues: Ensure that the port 5000 is not being used by another service in the same namespace.
Conclusion
Deploying MLflow 2.6 on Kubernetes doesn't have to be complicated. This guide provides a minimal setup to get you started. Feel free to expand upon this for your specific use-cases.
computingpostcom · 2 years
Text
Welcome to this guide on how to clone a private Git repository in Kubernetes with user authentication. At times, we have config variables that keep being updated regularly by developers, and we thus need to update the environment in our containers. This problem can be solved by creating a pod with multiple containers sharing a volume. Git can be used to store the data, and each time the code is updated, the data is pulled to the volume.
Git-sync is a sidecar container that clones a git repo and keeps it synchronized with the upstream. It can be configured to pull one time or regularly, as per your preferences. It allows one to pull over SSH or via HTTPS (with or without authentication).
Now let's dive in and see how we can clone a private Git repository in Kubernetes with user authentication.
Getting Started
This guide requires one to have a Kubernetes cluster already set up. There are many methods one can use to set up a Kubernetes cluster. Some of them are demonstrated in the guides below:
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Install Kubernetes Cluster on Ubuntu using K3s
Deploy Kubernetes Cluster on Linux With k0s
Run Kubernetes on Debian with Minikube
With a cluster set up, proceed as below.
Using Git-sync to Clone a Git Repository in Kubernetes
There are two methods here, i.e. using HTTPS, which works with or without authentication, and using SSH, which requires SSH keys. In this guide, we will run two containers in a pod:
Nginx webserver
Git-sync as an init container to clone the private Git repository
Create the deployment manifest. In this guide, we will generate an Nginx template and modify it to accommodate git-sync:
## Generate deployment YAML file ##
kubectl create deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yml
## Generate Pod creation YAML file ###
kubectl run nginx-helloworld --image nginx --restart=Never --dry-run=client -o yaml > nginx-helloworld-pod.yml
You will have a YAML file with the below lines:
$ cat nginx-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
Let's create a namespace called "demo" for these tasks:
$ kubectl create ns demo
namespace/demo created
Let's set the current context to the demo namespace:
$ kubectl config set-context --current --namespace demo
Context "Default" modified.
Option 1. Clone a Private Repository Using HTTPS
For git-sync to clone the private git repo over HTTPS, you need to have the repository's username and a Personal Access Token. Proceed and modify your deployment file:
vim nginx-deployment.yml
Paste and modify the contents below accordingly:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-helloworld
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.5
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "https://github.com/computingpost/hello-world-nginx.git" ## Private repo path you want to clone
        - name: GIT_SYNC_USERNAME
          value: "computingpost" ## The username for the repository
        - name: GIT_SYNC_PASSWORD
          value: "ghpsdhkshkdj_kndk...." ## The Personal Access Token for the repository
        - name: GIT_SYNC_BRANCH
          value: "master" ## repo branch
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ## path where you want to clone
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_ONE_TIME
          value: "true"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
Option 2. Clone a Private Repository Using SSH
Ensure that you already have SSH keys generated from your server and copied to the Git host. Verify that you can connect via SSH:
$ ssh -T git@github.com
The authenticity of host 'github.com (140.82.121.3)' can't be established.
ECDSA key fingerprint is SHA256:p2QAMXNIC1TJYWeIOttrVc98/R1BUFWu3/LiyKgUfQM.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com,140.82.121.3' (ECDSA) to the list of known hosts.
Hi computingpost/hello-world-nginx! You've successfully authenticated, but GitHub does not provide shell access.
Obtain the host keys for the Git server:
ssh-keyscan YOUR_GIT_HOST > /tmp/known_hosts
For example:
ssh-keyscan github.com > /tmp/known_hosts
With the keys, create the secret as below:
kubectl create secret generic git-creds \
  --from-file=ssh=$HOME/.ssh/id_rsa \
  --from-file=known_hosts=/tmp/known_hosts
Verify that the secret has been deployed:
$ kubectl get secret
NAME                  TYPE                                  DATA   AGE
default-token-nz74s   kubernetes.io/service-account-token   3      72m
git-creds             Opaque                                2      11s
Edit the YAML to be able to run git-sync as an init container that clones a private Git repository:
vim nginx-deployment.yml
The file will contain the below lines:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx-helloworld
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: www-data
      initContainers:
      - name: git-sync
        image: k8s.gcr.io/git-sync:v3.1.5
        volumeMounts:
        - name: www-data
          mountPath: /data
        env:
        - name: GIT_SYNC_REPO
          value: "git@github.com:computingpost/hello-world-nginx.git" ## repo path you want to clone
        - name: GIT_SYNC_BRANCH
          value: "master" ## repo branch
        - name: GIT_SYNC_SSH
          value: "true"
        - name: GIT_SYNC_ROOT
          value: /data
        - name: GIT_SYNC_DEST
          value: "hello" ## path where you want to clone
        - name: GIT_SYNC_PERMISSIONS
          value: "0777"
        - name: GIT_SYNC_ONE_TIME
          value: "true"
        securityContext:
          runAsUser: 0
      volumes:
      - name: www-data
        emptyDir: {}
      - name: git-secret
        secret:
          defaultMode: 256
          secretName: git-creds # your-ssh-key
Run the Application
In the above configuration files, we have an init container named git-sync that clones a repository to /data, which is a volume mount called www-data. This volume is shared between the two containers. Deploy the manifest:
kubectl create -f nginx-deployment.yml
Verify that the pod has been created:
### First run
$ kubectl get pods
NAME               READY   STATUS     RESTARTS   AGE
nginx-helloworld   0/1     Init:0/1   0          7s
# Second run
$ kubectl get pods
NAME               READY   STATUS            RESTARTS   AGE
nginx-helloworld   0/1     PodInitializing   0          10s
# Third run
$ kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
nginx-helloworld   1/1     Running   0          13s
If you want to use a persistent volume claim instead, you need to update your manifest as below:
volumes:
- name: my-pv-storage
  persistentVolumeClaim:
    claimName: mypv-claim
Now you should have your files cloned to /data/hello, and /data shares the same volume with /usr/share/nginx/html, verified as below:
$ kubectl exec --stdin --tty nginx-helloworld -- /bin/bash
# ls -al /usr/share/nginx/html/hello/
.git  README.md  index.html
My cloned git repo HelloWord-Computingpost has the files above.
Delete the pods and secret using the commands:
kubectl delete all --all -n demo
kubectl delete ns demo
Conclusion
That marks the end of this guide. We have successfully walked through how to clone a private Git repository in Kubernetes with user authentication. Furthermore, I have demonstrated how a public repository can be cloned via HTTPS without authentication. I hope this was helpful.
biointernet · 5 years
Text
Hourglass 223 The First Bank
“There's only one day at a time here, then it's tonight and then tomorrow will be today again.”  ― Bob Dylan, Chronicles: Volume One
The First National Bank
Hourglass 223 matchbox
Hourglass 223 matchbox Hourglass, Sand Clock, Sand Watch, Egg Timer, Sablier, Sanduhr, Reloj de arena, الساعة الرملية, Rellotge de sorra, přesýpací hodiny, velago, itula tioata, Clessidra, 砂時計, timeglass, Zandloper, Timglas, Isikhwama, Soatglass, MHC Magic
Time to save
The First National Bank of Elk River
Hourglass 223 matchbox
The Full History of Time Exhibition MHC by Kirill Korotkov and Lena Rhomberg – coming soon.
The Full History of Time main topics: Art, Science, Love, Magic, Technologies, Human Light System, Time Philosophy.
Beauty Bio Net Exhibition – 3DHM Dynamic Vision Board Mental Model by Lena Rhomberg and Adam Pierce on the MHC Virtual Museum.
Mental Model key words: Beauty, Beginning, Beautiful Power, Infinity.
https://www.myhourglasscollection.com/beauty-bio-net-26/
You can download images from the Beauty Bio Net Gallery and use them like Translighters Digital – files with functions. How to use Translighters Digital.
Beauty Bio Net Mental Model key words: Beauty, Beginning, Beautiful Power, Infinity.
The Hourglass Figure is one of four female body shapes
About the Hourglass Figure:
Hourglass Figure Celebrities on MHC
Hourglass Figure Marilyn Monroe
MHC hourglass figure workout
Hourglass body measurements – body shape online calculator
Hourglass Figure Sophia Loren by Adam Pierce
About Hourglass Body or Hourglass Figure
Hourglass Figure, the movie
How to dress an hourglass figure
Cyclocosmia hourglass spider
Hourglass Figure Department on MHC Virtual Museum
The Hourglass Figure FAQ
What qualifies as an hourglass figure?
What is the most common body shape?
What body shape am I?
Which hormones shape the hourglass figure?
Is an hourglass figure attractive?
Does a woman's body shape change with age?
How does one maintain an hourglass figure?
What does "cute hourglass figure" mean?
What are the 5 female body types?
What is the meaning of a 36-24-36 figure?
Can you change your body shape?
What is a healthy waist size?
Does your body shape change when you lose weight?
What is a zero figure?
MHC Exhibition – Hourglass Figure Marilyn Monroe, by Lena Rhomberg
Hourglass 223 The First Bank
anorakcity-sapporo · 7 years
Photo
The vintage match labels and the Danish Christmas charity seals that were so popular at the Tokyo Flea Market are now on display in the shop. There are about 1,000 match labels, so dig through and unearth your favorite designs. The charity seals are well stocked as full sheets; they would also look good framed. #sappro #札幌 #紙モノ #紙もの #ヴィンテージ #matchlabel #design #vintage (Anorakcity Störe)
inthetechpit · 3 years
Text
Kubernetes ReplicaSet example on Mac using VS Code
Kubernetes ReplicaSet example on Mac using VS Code
A ReplicaSet helps load balance and scale our application up or down when the demand for it changes. It makes sure the desired number of pods are always running for high availability. I'm using VS Code on Mac to create the below yaml file. Create the following yaml file:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app:…
aperturedevops · 4 years
Text
Resource management for a Kubernetes cluster
The problem
One common shortcoming when using k8s in production is not setting resource limits for the system. When you don't limit resources for the pods in k8s, sooner or later there will come a moment when your server runs completely out of resources (typically out of CPU and RAM). This happens very easily when running heavy workloads across many nodes with different specs. In the end your server crashes, or runs very slowly, making the system unstable and possibly even losing data, costing you reputation and money. So one of the first things to do when putting anything on k8s is to set its resources.
The idea behind resource requests and limits
By default k8s can only control the system's CPU and RAM resources. The component responsible for controlling resources and deploying pods onto nodes is called kube-scheduler. There are two important attributes for allocating and limiting resources, defined when you create a workload:
Requests: a guarantee of the amount of resources that a pod in the workload will be granted. By default these attributes are left empty or taken from the namespace's default requests (covered later). The maximum resource request must be smaller than the resources that the most powerful single server in the cluster can carry.
Example: request: 100mCPU, 256MiB RAM means k8s will always guarantee your pod at least 100 mCPU and 256MiB RAM. The system will only deploy your pod onto a node with that much resource available. Naturally, kube-scheduler marks this amount of resources as claimed.
Suppose you have 2 nodes: node 1 with 2 vCores and 8GiB RAM, node 2 with 4 vCores and 16GiB RAM. If your workload asks for request: 2500mCPU, 8GiB RAM, the pods of this workload will only be deployed onto node 2.
With the same configuration, if you ask for request: 4000mCPU, 8GiB RAM, no pod will be deployed at all.
Limits: a cap on the amount of resources that a pod in the workload may use.
Example: limits: 1000mCPU, 2GiB RAM means you can only use at most 1 CPU and 2GiB RAM.
To set these when creating a deployment, here is an example YAML creating an ubuntu deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubuntu-test
  labels:
    app: ubuntu
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ubuntu
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      containers:
        - name: ubuntu
          image: ubuntu:bionic
          ports:
            - containerPort: 80
          # note this resources section
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 2Gi
Look at spec.template.spec.containers[0].resources and you will see the two settings, limits and requests. This is exactly where the guaranteed and maximum amounts of resources are changed.
From the two examples above, keep the following in mind:
k8s may well deploy your pod onto a node whose remaining resources are less than the pod's limit. For example, k8s may deploy a workload with request: 100 mCPU, 256MiB RAM, limits: 1000mCPU, 2GiB RAM onto a server with only 700 mCPU and 1 GiB RAM free. So if your workload needs more resources, take care to allocate more to it: in the bad case where a pod uses more than its requested amount while the node has run out of resources, the node crashes and forces k8s to kill some other pods, or worse, the whole node becomes unreachable. The mechanism k8s uses to kill pods and reclaim resources is covered in the section below.
Even when a pod isn't using them, k8s already counts the pod as having claimed an amount of resources equal to its requests.
CPU and RAM behave differently. If your pod exceeds its CPU allowance, k8s can throttle it and keep it from going over the limit. Memory cannot be limited that way: the moment your pod exceeds its allowed RAM, k8s kills the pod outright. So pay attention to the workload's memory needs to avoid pods being killed unintentionally.
Deploying pods onto nodes also depends on how much resource each server has left (fairly obvious, but worth mentioning). Suppose a node has 4 vCores and 16GiB RAM but other pods have already claimed 3 vCores; if you then request request: 2500mCPU, 8GiB RAM, your pod will never be deployed there.
How kube-scheduler deploys and kills pods
If you create a k8s cluster with tools like k3s or kubeadm, or from platform providers such as AKS (Azure) or EKS (Amazon), the cluster uses kube-scheduler by default to control resource management and pod deployment. Both deploying and killing pods with kube-scheduler work in two steps: Filtering and Scoring.
When k8s deploys a pod into the cluster:
Filtering: kube-scheduler lists all the nodes that satisfy the workload's minimum conditions (that is, its requested resources). If no node is on that list, your pod will never be deployed. Pods that haven't been deployed still get a chance to run, because kube-scheduler performs this evaluation continuously.
Scoring: kube-scheduler evaluates the nodes against many different criteria and produces a score for each node, then deploys onto the node with the highest score. If more than one node shares the top score, the pod is deployed onto one of them at random.
And when k8s needs to kill pods to reclaim resources:
Filtering: kube-scheduler lists all the pods running on the node that is overloaded.
Scoring: the pods are evaluated by their priority (Pod Priority). Pods with a lower priority are killed first; pods with equal priority are killed at random. This repeats until the server has enough resources. In practice this feature is usually forgotten, so all pods have equal priority, in which case pods are scored by resource usage: the more a pod exceeds its requested resources, the more likely it is to be killed. This too repeats until the server has enough resources.
If you do set pod priorities and give your pods a higher priority than system pods such as kubelet, k8s may kill even those pods to reclaim memory. Of course this cuts both ways (a PriorityClass sketch follows the two points below):
Upside: your node keeps running and everything is deployed again once your pod returns resources to the system.
Downside: it leaves you worried when the node is reported as crashed and no status information comes back. Worse still, a node that stays failed too long can make third-party systems believe the node is down (node tainted); the node gets deleted and replaced with a new one, losing everything it was doing (for example, the GKE autoscaler replaces a failing node with a new one after a while).
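For illustration, a PriorityClass and a pod that references it might look like this (the name and value are assumptions):
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: critical-priority # hypothetical name
value: 1000000
globalDefault: false
description: "For business-critical pods"
---
# referenced from a pod spec:
spec:
  priorityClassName: critical-priority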
Filtering and Scoring are governed by one of two approaches: through predefined Policies or through policy Profiles. Within the scope of this article, though, we won't go deeply into those two complex approaches, and will focus instead on the CPU and RAM mechanics.
Dividing cluster resources among namespaces
Dividing resources per namespace (Resource Quota) is considered CRITICALLY important for anyone running the system. A system using k8s usually isn't for DevOps or SysAdmins alone; it gets divided so that each team (on a given project) owns one or several namespaces. They can easily give a workload too few resources for its pods to run, or set resource limits so high that the pods eat all the system's resources, and so on. All of it leads to the same final outcome: the server goes down. So, from the system operator's point of view, you need to partition the namespaces' resources to make sure the servers aren't overloaded. When a team exceeds its requested resources, the pods inside the namespace get killed, but without affecting anything outside it.
The idea of resource partitioning is also very simple:
By default, namespaces have nothing defined about resource partitioning; pods in the namespace are free to make any resource demands they like.
Once resource partitioning is set, the pods in that namespace can only request or be limited to amounts of resources smaller than or equal to the partitioned amount. Just like pod-level requests and limits, there are the following parameters:
Request: the total requested resources that pods can use when deployed into the namespace. Beyond this amount, pods cannot be deployed.
Example: a namespace allocated request: 4CPU, 8GiB RAM means the sum of all pods' requested resources must be at most 4 CPU and 8192MiB RAM. For instance, you can deploy 4 pods requesting request: 1CPU, 2GiB RAM, 2 pods requesting request: 2CPU, 4GiB RAM, or one pod requesting request: 4CPU, 8GiB RAM.
Limits: the total resource limit that pods can reach in the namespace. When running pods exceed this total, they get killed by the mechanism laid out in the section above.
Through this we can guarantee that when a team accidentally causes an incident in its namespace, the whole system still runs normally rather than breaking down entirely. Especially in dev clusters, this is important so that one team doesn't wreck the others' progress. Setting resource allocation this way is also a method for teams to estimate the resources they will use, and then carry those numbers over into production. A minimal ResourceQuota sketch follows.
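A minimal ResourceQuota matching the example figures above (the names are assumptions):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota # hypothetical name
  namespace: demo-team # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi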
Wrapping up, and references
Hopefully, through this article, readers can understand kube-scheduler's mechanism for managing, deploying and killing pods, as well as the importance of limiting and allocating resources in the system.
References:
Setting Resource Requests and Limits in Kubernetes - https://www.youtube.com/watch?v=xjpHggHKm78
Kubernetes Scheduler - https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/
Pod Priority and Preemption - https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
Resource Quotas - https://kubernetes.io/docs/concepts/policy/resource-quotas/
gunyang-breads · 4 years
Text
Tutorial: launch a web application on EKS
Goal: launch a web application on EKS that load balances traffic to three nginx web servers.
Notes: 
Tumblr (or my theme) is converting double dash (”-” followed by “-”) to em dash (”–”). For some commands, you’ll need to replace the em dash with double dashes. (sorry.)
Though this creates a public/private subnet cluster, it will launch the worker nodes in the public network. For better security, the worker nodes should be launched in private subnets.
TOC
Prereqs
Preview Nginx image locally (optional)
Create cluster
Deploy Kubernetes Dashboard (optional)
Running Nginx
Clean up
Part 1: Prereqs
You'll need to install the AWS CLI, eksctl, and kubectl, as well as authenticate with your cluster.
Follow these prerequisite instructions, but stop once the tools are installed. (Don’t create a cluster.)
Optionally, if you want to complete part 2, you should also install Docker locally. 
Part 2: Preview Nginx image locally (optional)
If you’d like a preview of what you will see once completing this tutorial, you can run the Nginx Docker container locally.
docker run --name some-nginx -p 8081:80 -d nginx
Once you do this, visit http://localhost:8081 in your browser. You should see this:
(screenshot: the default nginx welcome page)
After validating, let’s stop the server and clean up resources:
docker stop some-nginx && docker rm some-nginx
Part 3: Create cluster
We’re going to create a cluster with 3 Linux managed nodes. (source):
eksctl create cluster \
  --name nginx-webapp-cluster \
  --version 1.18 \
  --region us-west-2 \
  --nodegroup-name linux-nodes \
  --nodes 3 \
  --with-oidc \
  --managed
Let’s confirm successful creation by ensuring three are three nodes:
kubectl get nodes
Part 4: Deploy Kubernetes Dashboard (optional)
While not necessary for this tutorial, this will allow you to view details on your cluster, such as nodes, running pods, etc, and is recommended.
Instructions for running the Kubernetes Dashboard.
Part 5: Running Nginx
1. Create a namespace
kubectl create namespace nginx-webapp
2. Create a file, nginx-webapp-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-webapp
  namespace: nginx-webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-webapp
  template:
    metadata:
      labels:
        app: nginx-webapp
    spec:
      containers:
        - name: nginx-webapp-container
          image: nginx
          ports:
            - name: http
              containerPort: 80
3. Run: 
kubectl apply -f nginx-webapp-deployment.yaml
4. Create a file, nginx-webapp-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-webapp-service
  namespace: nginx-webapp
  labels:
    app: nginx-webapp
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx-webapp
  type: LoadBalancer
5. Run: 
kubectl apply -f nginx-webapp-service.yaml
6. Confirm three instances of Nginx are running:
kubectl get deployments --namespace=nginx-webapp
(wait for 3/3 ready)
7. Fetch load balancer IP:
kubectl get service --namespace=nginx-webapp
Visit the URL in the EXTERNAL-IP column. (Note: it may take a few minutes for the DNS to replicate, so try again if address not found.)
Part 6: Clean up
EKS is expensive, so make sure you clean up your resource when you are done:
eksctl delete cluster \
  --region=us-west-2 \
  --name=nginx-webapp-cluster
qcs01 · 3 months
Text
How to migrate your app to Kubernetes containers in GCP?
Migrating your application to Kubernetes containers in Google Cloud Platform (GCP) involves several steps. Here is a comprehensive guide to help you through the process:
Step 1: Prepare Your Application
Containerize Your Application:
Ensure your application is suitable for containerization. Break down monolithic applications into microservices if necessary.
Create a Dockerfile for each part of your application. This file will define how your application is built and run inside a container.
# Example Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY . /app
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
     2. Build and Test Containers:
Build your Docker images locally and test them to ensure they run as expected.
docker build -t my-app .
docker run -p 5000:5000 my-app
Step 2: Set Up Google Cloud Platform
Create a GCP Project:
If you don’t have a GCP project, create one via the GCP Console.
Install and Configure gcloud CLI:
Install the Google Cloud SDK and initialize it.
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
    3. Enable Required APIs:
Enable Kubernetes Engine API and other necessary services.
gcloud services enable container.googleapis.com
Step 3: Create a Kubernetes Cluster
Create a Kubernetes Cluster:
Use the gcloud CLI to create a Kubernetes cluster.
gcloud container clusters create my-cluster --zone us-central1-a
      2. Get Cluster Credentials:
Retrieve the credentials to interact with your cluster.
gcloud container clusters get-credentials my-cluster --zone us-central1-a
Step 4: Deploy to Kubernetes
Push Docker Images to Google Container Registry (GCR):
Tag and push your Docker images to GCR.
docker tag my-app gcr.io/your-project-id/my-app
docker push gcr.io/your-project-id/my-app
     2. Create Kubernetes Deployment Manifests:
Create YAML files for your Kubernetes deployments and services.
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/your-project-id/my-app
        ports:
        - containerPort: 5000
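Step 3 below also applies a service.yaml that is not shown above; a minimal sketch exposing the deployment (the LoadBalancer type and port mapping are assumptions) could be:
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 5000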
3. Deploy to Kubernetes:
Apply the deployment and service configurations to your cluster.
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
Step 5: Manage and Monitor
Monitor Your Deployment:
Use Kubernetes Dashboard or other monitoring tools like Prometheus and Grafana to monitor your application.
Scale and Update:
Scale your application as needed and update your deployments using Kubernetes rolling updates.
kubectl scale deployment my-app --replicas=5
kubectl set image deployment/my-app my-app=gcr.io/your-project-id/my-app:v2
Additional Tips
Use Helm: Consider using Helm for managing complex deployments.
CI/CD Integration: Integrate with CI/CD tools like Jenkins, GitHub Actions, or Google Cloud Build for automated deployments.
Security: Ensure your images are secure and scanned for vulnerabilities. Use Google Cloud’s security features to manage access and permissions.
By following these steps, you can successfully migrate your application to Kubernetes containers in Google Cloud Platform, ensuring scalability, resilience, and efficient management of your workloads.
For more details click www.hawkstack.com 
dummdida · 7 years
Text
Running Ubuntu on Kubernetes with KubeVirt v0.3.0
You have this image, of a VM, which you want to run - alongside containers - why? - well, you need it. Some people would say it's dope, but sometimes you really need it, because it has an app you want to integrate with pods.
Here is how you can do this with KubeVirt.
1 Deploy KubeVirt
Deploy KubeVirt on your cluster - or follow the demo guide to setup a fresh minikube cluster.
2 Download Ubuntu
While KubeVirt comes up (use kubectl get --all-namespaces pods), download Ubuntu Server
3 Install kubectl plugin
Make sure to have the latest or recent kubectl tool installed, and install the pvc plugin:
curl -L https://github.com/fabiand/kubectl-plugin-pvc/raw/master/install.sh | bash
4 Create disk
Upload the Ubuntu server image:
$ kubectl plugin pvc create ubuntu1704 1Gi $PWD/ubuntu-17.04-server-amd64.iso disk.img
Creating PVC
persistentvolumeclaim "ubuntu1704" created
Populating PVC
pod "ubuntu1704" created
total 701444
701444 -rw-rw-r-- 1 1000 1000 685.0M Aug 25 2017 disk.img
Cleanup
pod "ubuntu1704" deleted
5 Create and launch VM
Create a VM:
$ kubectl apply -f -
apiVersion: kubevirt.io/v1alpha1
kind: VirtualMachinePreset
metadata:
  name: large
spec:
  selector:
    matchLabels:
      kubevirt.io/size: large
  domain:
    resources:
      requests:
        memory: 1Gi
---
apiVersion: kubevirt.io/v1alpha1
kind: OfflineVirtualMachine
metadata:
  name: ubuntu
spec:
  running: true
  selector:
    matchLabels:
      guest: ubuntu
  template:
    metadata:
      labels:
        guest: ubuntu
        kubevirt.io/size: large
    spec:
      domain:
        devices:
          disks:
            - name: ubuntu
              volumeName: ubuntu
              disk:
                bus: virtio
      volumes:
        - name: ubuntu
          claimName: ubuntu1704
6 Connect to VM
$ ./virtctl-v0.3.0-linux-amd64 vnc --kubeconfig ~/.kube/config ubuntu
Final notes - This is booting the Ubuntu ISO image. But this flow should work for existing images, which might be much more useful.
masaa-ma · 4 years
Text
13 checklist items you absolutely want to cover in Kubernetes load testing
from https://qiita.com/taisho6339/items/56a5442c1fc4d714c941?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
Overview
Recently I have been running load tests ahead of operating a Kubernetes cluster in production.
Load-testing applications that run on a Kubernetes cluster means checking cluster-specific concerns on top of the concerns commonly covered in ordinary load tests.
If the cluster and pods are not configured appropriately, you can suffer unintended downtime or fail to reach the expected performance.
So I have organized the checks I designed so that they can be applied generically across projects. Applying both steady and spike loads, the article focuses mainly on:
Pod performance
Pod scalability
Cluster scalability
System-level availability
This article summarizes these points as a usable checklist.
Checklist
Load-generation tool
Pod level
Responds within the target latency
Meets the target throughput
Handles sudden spikes
Configured for node-level failures and shutdowns
Pods are placed as intended
New versions can be released without downtime
No problems arise in long-running operation
Cluster level
Pod packing density is appropriate
Nodes match the characteristics of the pods placed on them
Handles sudden spikes
Node surge-upgrade settings are appropriate
Preemptible nodes can be operated safely
Load-generation tool checks
The load-generation tool must not be the bottleneck
It is surprisingly easy to forget, but the load tool itself may not be able to generate the intended load.
For example, trying to generate heavy load from a single machine quickly exhausts file descriptors, ports and CPU cores. When handling a large RPS, prepare an environment such as JMeter or Locust that scales flexibly and distributes the load across machines.
How we verified
In my project we stood up a Locust k8s cluster, ran an Nginx pod in the same cluster, and measured what throughput we could push against a static page.
The requirement was to sustain about 6,000 RPS, so we tuned the number of workers and users and confirmed we could comfortably reach about 8,000 RPS.
Pod checks
Responds within the target latency
Disable HPA first and check whether a single pod's performance meets the requirement.
If latency exceeds the target, add to the pod's resource requests and limits.
If an application problem is causing the excess latency, the application needs tuning.
Meets the target throughput
Disable HPA and scale pods out manually to see how far throughput grows.
If throughput does not grow as you scale out, a bottleneck has most likely appeared somewhere.
In that case, I first check:
CPU usage of each pod
If CPU usage is skewed toward particular pods, re-check the routing policy
Latency of each pod
Go back to "Responds within the target latency" above
CPU usage on the load-generation side
Handles sudden spikes
With HPA configured you get autoscaling, but the policy must be set appropriately.
HPA works by periodically computing the desired number of pods from an algorithm and adjusting at intervals.
HPA scaling algorithm
So if the scaling threshold is cut too close, the system cannot scale quickly when load suddenly rises, and in the worst case you take downtime.
Also, since k8s 1.18 there is a behavior setting to keep pods from being added or removed too violently during a spike.
In my case I reproduced a sudden surge with Locust and checked the following (a behavior sketch follows this list):
Are the metrics we monitor the right ones?
Are the thresholds on those metrics appropriate?
Do we need to control how pods are added and removed?
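As a sketch, an autoscaling/v2beta2 HPA with a behavior block might look like this (the names and thresholds are illustrative):
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Pods
          value: 4
          periodSeconds: 15
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60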
Configured for node-level failures and shutdowns
In k8s, nodes go down flexibly during cluster auto-upgrades and scale in during cluster autoscaling, so you must design for pods being deleted and recreated quite frequently.
Two points deserve attention here.
The first is the pod lifecycle.
When a pod is deleted, removing it from the Service's routing and sending SIGTERM to its containers happen at the same time, so the container must not exit before routing has stopped.
Concretely, use a lifecycle hook: sleep in preStop and wait until routing has stopped. Kubernetes: 詳解 Pods の終了
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 30"]
If you do container-native load balancing using NEGs and the like, care such as the following is also needed:
【Kubernetes】GKEのContainer Native LoadbalancingのPodのTerminationの注意点
The second is the Pod Disruption Budget.
Setting this appropriately prevents pods from being evicted all at once when nodes are updated. Check it together with the nodes' surge-update settings.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: sample-service
spec:
  maxUnavailable: "25%"
  selector:
    matchLabels:
      app: sample-service
      namespace: default
      project: default
Cluster surge upgrades
Pods are placed as intended
Left to itself, kube-scheduler places pods on whatever free resources it picks, so the following must be considered:
If pods end up concentrated on a particular zone or node, a node failure or zone outage stops the whole service at once no matter how many pod replicas you have
If you use preemptible nodes and the like, several nodes can go down at the same time
(the standard practice is to run normal and preemptible nodes together)
With container-native load balancing, the LB does not route evenly across pods but tries to route evenly across NEGs, so a pod-count imbalance between NEGs (that is, zones) hurts stability, scalability and throughput
Pods with heavy disk I/O may need to be placed on nodes with SSD disks, and so on: consider whether node selection should match each pod's characteristics
Pod placement can be controlled with the following mechanisms; use them in combination (a Topology Spread Constraints sketch follows this list):
Taints and Tolerations
Pod Affinity
Node Selector
Node Affinity
Topology Spread Constraints
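For example, spreading a workload's pods evenly across zones can be sketched like this (the app label is an assumption):
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: topology.kubernetes.io/zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: my-app # hypothetical label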
New versions can be released without downtime
This depends on the team's deployment policy, but it is worth verifying that you can switch pod versions with no downtime while applying steady load.
No problems arise in long-running operation
When the application implementation is poor, memory leaks and file-descriptor exhaustion occur frequently.
It is worth verifying that no resource keeps growing in consumption when load is applied for a day or more, and that the system keeps running fine through node scale-in and scale-out and, on GKE, the maintenance window.
Cluster checks
Pod packing density is appropriate
Pods are scheduled based on the pod's requested resources and the allocatable resources within each node.
That means that if requests are not set appropriately, nodes keep increasing even though plenty of resources are free, or conversely nothing scales when you want it to.
Check that node resources are used effectively with the GCP dashboards or the kubectl top command.
Handles sudden spikes
This overlaps with the packing-density discussion above: pod requests determine how nodes scale.
Basically, GKE scales out only once there are not enough nodes to schedule onto. That is, on a sudden spike, pods may try to scale out, find no node to land on, and be scheduled only after nodes have scaled out first. You need to verify that the system withstands spikes even in that case.
Depending on the system's requirements, you may need tricks such as loosening the pods' HPA settings to scale flexibly.
Cluster autoscaler
Preemptible nodes can be operated appropriately
Running a production cluster on preemptible nodes only is fundamentally dangerous. If you operate preemptible nodes, run them alongside normal nodes, and set taints and tolerations appropriately so that the workload does not lean too heavily on the preemptible ones (a sketch follows).
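A common pattern is to taint the preemptible pool and let only tolerant workloads land there; a sketch (the taint key mirrors GKE's preemptible node label and is an assumed convention, not something GKE applies automatically):
# taint applied to each preemptible node
kubectl taint nodes my-preemptible-node cloud.google.com/gke-preemptible="true":NoSchedule
# toleration in the pod spec of workloads allowed onto those nodes
tolerations:
  - key: cloud.google.com/gke-preemptible
    operator: Equal
    value: "true"
    effect: NoSchedule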
Summary
Through load testing, this article organized the perspectives I covered:
Scalability
Availability and stability
Latency and throughput
Resource efficiency
Comments and feedback are very welcome!
computingpostcom · 2 years
Text
Fluent Bit is an open source, lightweight log processing and forwarding service. Fluent Bit collects logs, events or metrics from different sources and processes them. The data can then be delivered to different backends such as Elasticsearch, Splunk, Kafka, Datadog, InfluxDB or New Relic.
Fluent Bit is easy to set up and configure. It gives you full control over what data to collect and parses the data to give structure to what is collected. It allows one to remove unwanted data, filter data and push it to an output destination, therefore providing an end-to-end solution for data collection. Some wonderful features of Fluent Bit are:
High performance
Super lightweight and fast, requires fewer resources and less memory
Supports multiple data formats
The configuration file for Fluent Bit is very easy to understand and modify
Fluent Bit has built-in TLS/SSL support; communication with the output destination is secured
Asynchronous I/O
Fluent Bit is compatible with Docker and Kubernetes and can therefore be used to aggregate application logs. There are several ways to log in Kubernetes. One way is the default stdout logs that are written to a host path "/var/log/containers" on the nodes in a cluster. This method requires a Fluent Bit DaemonSet to be deployed; a DaemonSet deploys a Fluent Bit container on each node in the cluster. The second way of logging is the use of a persistent volume, which allows logs to be written and persisted in internal or external storage such as CephFS. Fluent Bit can be set up as a Deployment to read logs from a persistent volume.
In this blog, we will look at how to send logs from a Kubernetes persistent volume to Elasticsearch using Fluent Bit. Once logs are sent to Elasticsearch, we can use Kibana to visualize and create dashboards using application logs and metrics.
PREREQUISITES:
First, we need to have a running Kubernetes cluster. You can use our guides below to set up one if you do not have one yet:
Install Kubernetes Cluster on Ubuntu with kubeadm
Install Kubernetes Cluster on CentOS 7 with kubeadm
Install Production Kubernetes Cluster with Rancher RKE
Secondly, we will need an Elasticsearch cluster set up. You can use the elasticsearch installation guide if you don't have one in place yet. In this tutorial, we will set up a sample Elasticsearch environment using StatefulSets deployed in the Kubernetes environment. We will also need a Kibana instance to help us visualize the logs.
Deploy Elasticsearch
1. Create the manifest file. This deployment assumes that we have a storage class cephfs in our cluster. A persistent volume will be created alongside the Elasticsearch StatefulSet. Modify this configuration as per your needs.
$ vim elasticsearch-ss.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: rest
          protocol: TCP
        - containerPort: 9300
          name: inter-node
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        env:
        - name: cluster.name
          value: k8s-logs
        - name: node.name
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: discovery.seed_hosts
          value: "es-cluster-0.elasticsearch"
        - name: cluster.initial_master_nodes
          value: "es-cluster-0"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
      - name: increase-vm-max-map
        image: busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: cephfs
      resources:
        requests:
          storage: 5Gi
Apply this configuration:
$ kubectl apply -f elasticsearch-ss.yaml
2. Create an Elasticsearch service:
$ vim elasticsearch-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: rest
    - port: 9300
      name: inter-node
$ kubectl apply -f elasticsearch-svc.yaml
3. Deploy Kibana:
$ vim kibana.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.2.0
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch:9200
        ports:
        - containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
  selector:
    app: kibana
Apply this configuration:
$ kubectl apply -f kibana.yaml
4. We then need to configure an ingress route for the Kibana service as follows:
$ vim kibana-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
  name: kibana
spec:
  rules:
  - host: kibana.computingpost.com
    http:
      paths:
      - backend:
          serviceName: kibana
          servicePort: 5601
        path: /
  tls:
  - hosts:
    - kibana.computingpost.com
    secretName: ingress-secret # This can be created prior if using custom certs
$ kubectl apply -f kibana-ingress.yaml
The Kibana service should now be accessible via https://kibana.computingpost.com/
Once we have this set up, we can proceed to deploy Fluent Bit.
Step 1: Deploy Service Account, Role and Role Binding
Create a deployment file with the following contents:
$ vim fluent-bit-role.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: default
Apply the deployment config by running the command below:
kubectl apply -f fluent-bit-role.yaml
Step 2: Deploy a Fluent Bit ConfigMap
This ConfigMap allows us to configure our Fluent Bit service accordingly. Here, we define the log parsing and routing for Fluent Bit. Change this configuration to match your needs.
$ vim fluentbit-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: fluent-bit
  name: fluent-bit-config
data:
  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               *
        Kube_URL            https://kubernetes.default.svc:443
        Kube_CA_File        /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        Kube_Token_File     /var/run/secrets/kubernetes.io/serviceaccount/token
        Kube_Tag_Prefix     kube.var.log
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
  fluent-bit.conf: |
    [SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020
    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf
  input-kubernetes.conf: |
    [INPUT]
        Name              tail
        Tag               *
        Path              /var/log/*.log
        Parser            json
        DB                /var/log/flb_kube.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            ${FLUENT_ELASTICSEARCH_HOST}
        Port            ${FLUENT_ELASTICSEARCH_PORT}
        Logstash_Format On
        Replace_Dots    On
        Retry_Limit     False
  parsers.conf: |
    [PARSER]
        Name        apache
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        apache2
        Format      regex
        Regex       ^(?<host>[^ ]*) [^ ]* (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^ ]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        apache_error
        Format      regex
        Regex       ^\[[^ ]* (?<time>[^\]]*)\] \[(?<level>[^\]]*)\](?: \[pid (?<pid>[^\]]*)\])?( \[client (?<client>[^\]]*)\])? (?<message>.*)$
    [PARSER]
        Name        nginx
        Format      regex
        Regex       ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")?$
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        json
        Format      json
        Time_Key    time
        Time_Format %d/%b/%Y:%H:%M:%S %z
    [PARSER]
        Name        docker
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%d %H:%M:%S.%L
        Time_Keep   On
    [PARSER]
        # http://rubular.com/r/tjUt3Awgg4
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[^ ]*) (?<message>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
    [PARSER]
        Name        syslog
        Format      regex
        Regex       ^\<[0-9]+\>(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key    time
        Time_Format %b %d %H:%M:%S
kubectl apply -f fluentbit-configmap.yaml
Step 3: Create a Persistent Volume Claim
This is where we will write application logs.
$ vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs # Change accordingly
  resources:
    requests:
      storage: 5Gi
$ kubectl apply -f pvc.yaml
Step 4: Deploy a Kubernetes deployment using the ConfigMap
$ vim fluentbit-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: fluent-bit-logging
  name: fluent-bit
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: fluent-bit-logging
  template:
    metadata:
      annotations:
        prometheus.io/path: /api/v1/metrics/prometheus
        prometheus.io/port: "2020"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: fluent-bit-logging
        kubernetes.io/cluster-service: "true"
        version: v1
    spec:
      containers:
      - env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: elasticsearch
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        image: fluent/fluent-bit:1.5
        imagePullPolicy: Always
        name: fluent-bit
        ports:
        - containerPort: 2020
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /fluent-bit/etc/
          name: fluent-bit-config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fluent-bit
      serviceAccountName: fluent-bit
      volumes:
      - name: varlog
        persistentVolumeClaim:
          claimName: logs-pvc
      - configMap:
          defaultMode: 420
          name: fluent-bit-config
        name: fluent-bit-config
Create the objects by running the command below:
$ kubectl apply -f fluentbit-deployment.yaml
Step 5: Deploy an application
Let's test that our Fluent Bit service works as expected. We will use a test application that writes logs to our persistent volume.
$ vim testpod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: centos
    command: ["/bin/sh"]
    args: ["-c", "while true; do echo $(date -u) >> /var/log/app.log; sleep 5; done"]
    volumeMounts:
    - name: persistent-storage
      mountPath: /var/log
  volumes:
  - name: persistent-storage
    persistentVolumeClaim:
      claimName: logs-pvc
Apply with the command:
$ kubectl apply -f testpod.yaml
Check if the pod is running:
$ kubectl get pods
You should see the following output:
NAME       READY   STATUS    RESTARTS   AGE
test-pod   1/1     Running   0          107s
Once the pod is running, we can proceed to check if logs are sent to Elasticsearch.
On Kibana, we will have to create an index as shown below. Click on "Management > Index Patterns > Create index pattern". Once the index has been created, click on the discover icon to see if our logs are in place.
See more guides on Kubernetes on our site.
biointernet · 5 years
Text
Hourglass 225 green restaurant
The Hour Glass Restaurant
Hourglass 225 green restaurant – MHC Virtual Museum
My Hourglass Collection: a virtual museum about Time and Light, Space and Love, Science and Magic
The MHC Virtual Museum is based on the Hourglass Collection – MHC, the Biointernet Hub on The Fantasy Network
MHC Exhibitions – Dynamic Vision Board meta models
Philumenie, Matchbox Labels, Lucifers, Zündhölzer, Streichhölzer, Safety Matches, Zündis, Etiquettes, matchlabel, matches, matchboxes, matchbox, fosforos, tändstickor, tändstikker, fiammiferi, allumettes, allumette, d'allumettes, streichholzschachteln, zündholzschachtel, zündholz, streichholz, Briefchen, Heftchen, Zündholzbriefchen, Zündholzschachteln
Hourglass 225 matchbox
Hourglass 225 matchbox Hourglass, Sand Clock, Sand Watch, Egg Timer, Sablier, Sanduhr, Reloj de arena, الساعة الرملية, Rellotge de sorra, přesýpací hodiny, velago, itula tioata, Clessidra, 砂時計, timeglass, Zandloper, Timglas, Isikhwama, Soatglass, MHC Magic
Symbol of Time is The Hourglass
time traveling symbolism
See also:
Time symbolism
Time is… The Full History of Time
Time in physics and time Science
Symbolism of Melencolia I by Albrecht Dürer
Time and Text
DADA Time
Text, Time, MHC
Extinction Rebellion – Time against Life
The End of Time
Hourglass and Death on St Thomas' Church
Hourglass – symbol of Death
Death does not Exist
Hourglass and Skeleton
"Hourglass and Cards" Exhibition
Father and Mother of Time
Time Hub
Time Philosophy
Time synonyms
Qualia and Time Sense
Time perception and Sense of Time
The Hourglass of Emotions
Time Travel + Time Management = Time Travel Management
The Hourglass, Hourglass History
Hourglass symbolism
Hourglass Figure
Hourglass Tattoo
Symbols of Time
Beauty Bio-Net
Father Time Department
Father Time and Mother Nature
Lunar calendar and Moon's phases
Time Management
Time Management tools
Time Travel Management
MHC SM: MHC Flikr, MHC Pinterest, MHC Facebook, MHC Instagram, MHC YouTube, MHC Twitter
The Hourglass Figure:
MHC Exhibitions:
Hourglass Figure Sophia Loren by Adam Pierce
Hourglass Figure Marilyn Monroe
About Hourglass Body or Hourglass Figure
Hourglass body measurements – body shape online calculator
Hourglass Figure Celebrities on MHC
Hourglass Figure, the movie
MHC hourglass figure workout by Marten Sport
Hourglass Figure Department on MHC Virtual Museum
tak4hir0 · 4 years
Link
Hello. I'm Noriyuki Takei (@noriyukitakei), an engineer at SIOS Technology, Inc. This article introduces relatively new ways of using Kubernetes and puts them into practice. Kubernetes, as you know, is the container orchestrator whose recognition has shot up over the past few years: open source software with the various features needed to run containers in production environments (rolling updates, load balancing, and so on).
Kubernetes evolves quite quickly and its surrounding ecosystem changes constantly. "New ways of using it" spans many topics, but this article picks out two points worth highlighting: Kubernetes design drawing on The Twelve-Factor App, and making Kubernetes serverless.
Key points of Kubernetes system design based on "The Twelve-Factor App"
First, the key points of system design with Kubernetes. Recently we see more designs that follow "The Twelve-Factor App" and play to Kubernetes' characteristics.
"The Twelve-Factor App" is a collection of twelve best practices for developing modern applications, proposed by an engineer at Heroku. It was not originally defined with Kubernetes in mind, but several of the twelve contain important elements essential to building on Kubernetes, and many engineers refer to them. From the official site, the twelve factors are:
Codebase: one codebase tracked in version control, many deploys
Dependencies: explicitly declare and isolate dependencies
Config: store config in environment variables
Backing services: treat backing services as attached resources
Build, release, run: strictly separate the build, release, and run stages
Processes: execute the app as one or more stateless processes
Port binding: export services via port binding
Concurrency: scale out via the process model
Disposability: maximize robustness with fast startup and graceful shutdown
Dev/prod parity: keep development, staging, and production as similar as possible
Logs: treat logs as event streams
Admin processes: run admin tasks as one-off processes
Of the twelve, the ones indispensable to Kubernetes design are "Config", "Processes", "Disposability", and "Logs". Here are the design essentials for these four, with concrete examples.
[Config] Place configuration so as to cut down on builds
Configuration files such as database connection info should not be stored in the source code repository but supplied from outside via environment variables. If they are included in the source code, a build is needed per environment: a build for staging, a build for production, and when a new environment such as a demo is added, yet another build. Worse, merely editing a configuration file ends up requiring a build.
In actual operation, store settings in a ConfigMap or Secret as in the example below and pass them as environment variables in the manifest:
spec:
  terminationGracePeriodSeconds: 40
  containers:
    - name: wordpress
      image: wordpress:php7.4
      ports:
        - containerPort: 80
      env:
        - name: WORDPRESS_DB_USER
          valueFrom:
            configMapKeyRef:
              name: wp-user
              key: WORDPRESS_DB_USER
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: wp-secret
              key: WORDPRESS_DB_PASSWORD
[Processes] Make pods stateless to protect usability
Kubernetes pods are premised on repeated creation and destruction: scale-out and scale-in, hardware failures, and self-healing from pod failures. Session data and data bound for databases should therefore be persisted outside the pod, and the pod itself should hold no data, i.e. be stateless.
If you let a pod hold session data internally, then when load changes trigger scale-out or scale-in and a user's request is routed to a different pod, the authentication session is cut and the login screen suddenly appears.
In actual operation, persist to MySQL, Redis, and the like. With Azure Kubernetes Service, you might use managed services such as Azure Database for MySQL (managed MySQL) or Azure Cache for Redis (managed Redis).
[Disposability] Stop pods safely
To repeat, pods are premised on repeated creation and destruction. So a destroyed pod must start again immediately, and graceful shutdown is required so that pods can be started and destroyed without stopping the service.
As an operational measure for fast pod startup, keep the images in the Docker repository as small as possible. The example below reduces image size by joining the RUN commands as much as possible to cut the number of layers and by deleting the apt cache:
RUN apt-get update && apt-get install -y \
    php \
    apache2 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
Also, to stop pods safely (graceful shutdown), make use of the preStop hook.
The sequence for terminating a Kubernetes pod goes like this:
1. kubectl sends a request to terminate the pod to the API Server.
2. kubelet receives the pod-termination request via the API Server and starts the pod's termination processing.
3. Two processes start at once: "removing the pod from the Service" and "sending preStop + SIGTERM to the pod". They run completely asynchronously.
4. If step 3 does not finish within the predefined terminationGracePeriodSeconds, SIGKILL is sent to the pod and it is terminated forcibly.
As a more concrete case, consider stopping Apache gracefully (stopping safely without cutting off users' requests mid-flight). As above, SIGTERM is eventually sent, but when Apache receives SIGTERM it forcibly terminates its processes and cuts connections even while users are still sending and receiving requests.
To prevent this, use preStop. With preStop you can define the processing that runs when a pod terminates, so Apache can be shut down gracefully.
The sequence of steps is as follows. As explained at the start, when a pod receives the stop request, it first begins the processing to remove itself from the Service, after which requests stop reaching that pod. However, if Apache's graceful shutdown starts before that removal completes, users' in-flight requests may be cut off, so we first sleep for roughly the time the removal is expected to take (about 3 seconds).
[Processes] Keep Pods stateless to protect usability

Kubernetes Pods are created and destroyed repeatedly by design: through scale-out and scale-in, and through self-healing after hardware failures or failures of the Pod itself. Session data and data bound for the database should therefore be persisted outside the Pod, and the Pod itself should hold no data; in other words, it should be stateless.

If session data is kept inside a Pod, then when load changes trigger a scale-out or scale-in and a user's request is routed to a different Pod, the authentication session is lost and the user suddenly lands back on the login screen.

In real operations, such state is persisted to MySQL, Redis, and the like. On Azure Kubernetes Service, the managed services Azure Database for MySQL and Azure Cache for Redis are often used for this.

[Disposability] Stop Pods safely

To repeat: Pods are created and destroyed continuously by design. A destroyed Pod must be replaced by one that starts quickly, and graceful shutdown is required so that Pods can be started and destroyed without interrupting service.

One operational measure for fast startup is to keep images in the Docker repository as small as possible. The following example trims image size by combining as many commands as possible into a single RUN, reducing the number of layers, and by deleting the apt cache:

```dockerfile
RUN apt-get update && apt-get install -y \
    php \
    apache2 \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
```

To stop Pods safely (graceful shutdown), use the preStop hook. The sequence for terminating a Kubernetes Pod is as follows:

1. kubectl sends a request to terminate the Pod to the API Server.
2. kubelet receives the termination request via the API Server and starts the Pod's termination processing.
3. Two processes start simultaneously and run completely asynchronously: removing the Pod from the Service, and running preStop and then sending SIGTERM to the Pod.
4. If step 3 does not finish within the configured terminationGracePeriodSeconds, SIGKILL is sent to the Pod and it is terminated forcibly.

As a more concrete case, consider stopping Apache gracefully, that is, without cutting off users' in-flight requests. As noted above, SIGTERM is eventually sent, but when Apache receives SIGTERM it terminates its processes immediately, dropping connections even while a user is mid-request. To prevent this we use preStop, which lets us define what happens when the Pod is stopped, so Apache can be shut down gracefully. The sequence works like this.

As explained at the start, when a Pod receives a stop request, removal from the Service begins first, so that requests stop reaching the Pod. However, if Apache's graceful shutdown were to start before that removal completes, a user's request could be cut off mid-processing, so we first sleep for about the time the removal is expected to take (roughly 3 seconds).

Once the Service removal has taken effect and requests are no longer routed to the Pod being stopped, `apachectl -k graceful-stop` shuts Apache down gracefully. After that we wait about 30 seconds. This assumes Apache's request timeout is configured as 30 seconds: because Apache's graceful shutdown runs asynchronously, without this wait SIGTERM could be sent before users' requests finish, forcibly cutting them off mid-processing.

As for terminationGracePeriodSeconds: if the Pod's termination processing does not finish within this many seconds, SIGKILL is sent and the Pod is cut off forcibly. So we take the Service removal (about 3 seconds) plus the user request timeout (30 seconds) = 33 seconds, add some margin, and set it to 40.

With these settings, Pods can be stopped safely and disposability is achieved. The preStop hook that implements all of this is defined as follows:

```yaml
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "sleep 3; apachectl -k graceful-stop; sleep 30"]
```
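Putting the two settings together, here is a minimal sketch of a complete Deployment for this pattern. It is hypothetical, assembled from the fragments above: the name apache-graceful is made up, and the image is assumed to be an Apache httpd image with `apachectl` available on the PATH.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-graceful          # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache-graceful
  template:
    metadata:
      labels:
        app: apache-graceful
    spec:
      # Service removal (~3s) + Apache request timeout (30s) + margin = 40s
      terminationGracePeriodSeconds: 40
      containers:
        - name: apache
          image: httpd:2.4.43    # assumed; any image that ships apachectl behaves the same way
          ports:
            - containerPort: 80
          lifecycle:
            preStop:
              exec:
                # Wait out Service removal, stop Apache gracefully, then wait for in-flight requests
                command: ["sh", "-c", "sleep 3; apachectl -k graceful-stop; sleep 30"]
```

The sleep durations are not magic numbers to copy as-is; they should track your environment's actual Service removal latency and the request timeout configured in Apache.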
[Logs] Log output that survives Pod destruction

As covered in the previous section, Pods should be designed on the assumption that they are created and destroyed repeatedly. With the traditional approach of writing logs to individual files and rotating them, the logs are lost the moment the Pod is destroyed.

The best practice is therefore for Pods to write logs to standard output, and for a log collector such as fluentd to gather those logs, ship them to storage, and make them available for viewing and analysis in a log visualizer.

Concretely: when an application in a Pod writes its logs to standard output, they appear as files under /var/log/containers on the worker node. A fluentd log collector, built as a DaemonSet Pod, collects those logs and sends them to Application Insights, Azure's log management service.
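As a rough illustration of that setup, a fluentd DaemonSet might be deployed along the following lines. This is a minimal sketch, not the article's actual manifest: the image tag is an assumption, and the fluentd output configuration needed to actually ship logs to Application Insights (or any other destination) is omitted.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          # Assumed image tag; choose a variant that matches your output destination
          image: fluent/fluentd-kubernetes-daemonset:v1.11-debian-forward-1
          volumeMounts:
            - name: varlog
              mountPath: /var/log                    # /var/log/containers lives here
              readOnly: true
            - name: dockercontainers
              mountPath: /var/lib/docker/containers  # targets of the /var/log/containers symlinks
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers
```

Running the collector as a DaemonSet guarantees one Pod per worker node, so every node's /var/log/containers is tailed no matter where application Pods land.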
Solutions for "truly" serverless Kubernetes

There are now several managed Kubernetes offerings that remove the need to build and operate master and worker nodes yourself: AWS's Elastic Kubernetes Service, GCP's Google Kubernetes Engine, and Azure's Azure Kubernetes Service among them. Taking Azure Kubernetes Service (AKS) as an example: the master nodes are indeed fully managed, but the worker nodes are in substance virtual machines, and operating AKS sometimes requires you to stay aware of that fact.

For example, you must specify the VM size when creating a new cluster, and likewise set upper limits when scaling worker nodes. In practice, however, accurately predicting future workloads and pre-configuring the right VM size and count is quite difficult.

To solve this problem, several solutions have appeared that realize serverless Kubernetes in the true sense, where even worker nodes need no longer be thought about. Of these, this article introduces Virtual Kubelet.

Serverless container platforms: freeing users from VM operations

Before explaining Virtual Kubelet, let's look at the serverless container platform, the core of its architecture.

How would you run containers in the cloud? The approach that comes to mind first is to start a virtual machine, install Docker on it, and run containers on top. But that approach entails operating the virtual machine, a chore worth eliminating now that serverless is flourishing.

The serverless container platform is the technology that solves this. Users run as many containers as they need simply by defining settings, with no awareness of the infrastructure the containers run on; the cloud takes care of the entire execution platform.

Serverless container platforms are also billed by the second, only for the time containers actually run. Keeping a VM running all day for a batch job that runs one hour a day is a waste of money; on a serverless container platform you are charged for just that one hour, making effective use of limited resources.

Every major cloud now has one: AWS Fargate on AWS, Serverless containers on GCP, and Azure Container Instances on Azure.

Virtual Kubelet: connecting Kubernetes to serverless container platforms

Virtual Kubelet, one implementation of the Kubelet in Kubernetes, is the mechanism that delegates the Kubelet's actual work to such serverless container platforms. To Kubernetes it looks like just another node, but it has no substance in the way a conventional VM-based node does.

The Virtual Kubelet inside this virtual node behaves like a conventional Kubelet: it issues API calls to a serverless container platform such as Azure Container Instances or AWS Fargate to create Pods (the smallest unit for housing containers).

Suppose you want to add 100 Pods. With worker nodes made of virtual machines, running 100 Pods requires scaling the worker nodes up or out, minding VM sizes and counts as you operate. With Virtual Kubelet, you can keep adding Pods as long as the computing resources managed by the serverless container platform allow. All it takes is a single command:

```
$ kubectl scale --replicas=100 rs/hoge
```

This alone creates 100 Pods. Being able to create Pods with no awareness whatsoever of virtual machines is why this is described as serverless.

Besides the AWS Fargate and Azure Container Instances mentioned above, many other Virtual Kubelet providers (the services to which Virtual Kubelet issues its API calls) can be used, such as Alibaba Cloud Elastic Container Instance.

Hands-on: serverless Kubernetes with Azure Kubernetes Service

So how do we actually achieve serverless? As the practical part of this article, this section builds a Kubernetes cluster with AKS and then walks through making it serverless with the Virtual Kubelet introduced above.

Building the Kubernetes cluster with AKS

Let's start by building AKS. This walkthrough uses the Azure CLI, so prepare an Azure CLI environment in advance by referring to:

Azure CLI のインストール | Microsoft Docs

First, create a resource group to hold the AKS cluster:

```
$ az group create --name myAKSrg --location japaneast
```

Next, create the virtual network in which the AKS cluster will be placed:

```
$ az network vnet create \
    --resource-group myAKSrg \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefix 10.240.0.0/16
```

Create a service principal (a dedicated service account) so the AKS cluster can access the virtual network and other resources:

```
$ az ad sp create-for-rbac --skip-assignment
{
  "appId": "cef76eb3-f743-4a97-8534-03e9388811fc",
  "displayName": "azure-cli-2020-05-30-18-42-00",
  "name": "http://azure-cli-2020-05-30-18-42-00",
  "password": "1d257915-8714-4ce7-a7fb-0e5a5411df7f",
  "tenant": "73f988dd-86f2-41vf-91ad-2e7cd011eb48"
}
```

Now grant the service principal appropriate permissions so the AKS cluster can access the virtual network. First, get the resource ID of the virtual network created above:

```
$ az network vnet show --resource-group myAKSrg --name myVnet --query id -o tsv
/subscriptions/a2b379a4-5530-4d43-9360-b705cf560d75/resourceGroups/myAKSrg/providers/Microsoft.Network/virtualNetworks/myVnet
```

Grant the service principal the Contributor role on the virtual network; this takes the standard az role assignment create form, substituting the appId and the virtual network resource ID obtained above:

```
$ az role assignment create \
    --assignee <appId> \
    --scope <virtual network resource ID> \
    --role Contributor
```

The AKS cluster will be deployed into the AKS subnet created earlier, so get that subnet's ID:

```
$ az network vnet subnet show --resource-group myAKSrg --vnet-name myVnet --name myAKSSubnet --query id -o tsv
```

Create the AKS cluster with the following command, substituting the appId, password, and subnet ID obtained above; the worker nodes use the Standard_B2s VM size:

```
$ az aks create \
    --resource-group myAKSrg \
    --name myAKSCluster \
    --node-count 3 \
    --network-plugin azure \
    --service-principal <appId> \
    --client-secret <password> \
    --vnet-subnet-id <subnet ID> \
    -s Standard_B2s
```

Next, get the credentials for connecting to the AKS cluster:

```
$ az aks get-credentials --resource-group myAKSrg --name myAKSCluster
```

Confirm that the AKS cluster has been created; three worker nodes should be listed, like this:

```
$ kubectl get nodes
NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-38191134-vmss000000   Ready    agent   16m   v1.15.11
aks-nodepool1-38191134-vmss000001   Ready    agent   15m   v1.15.11
aks-nodepool1-38191134-vmss000002   Ready    agent   16m   v1.15.11
```

Next, deploy an application to this AKS cluster. Using a Deployment (whose ReplicaSet manages the Pods), we create three Pods based on the Apache Docker image. We also create a LoadBalancer Service to make them accessible from outside; creating one provisions an Azure Load Balancer internally so that requests from outside can reach the Pods.

First, create the manifest that implements the above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-sample
  template:
    metadata:
      labels:
        app: aks-sample
    spec:
      containers:
        - name: aks-sample
          image: httpd:2.4.43
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: aks-sample
spec:
  type: LoadBalancer
  selector:
    app: aks-sample
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```

Apply this manifest to the AKS cluster:

```
$ kubectl apply -f aks-sample.yml
```

Check the Service and note the load balancer's IP address (the EXTERNAL-IP of the aks-sample resource):

```
$ kubectl get service aks-sample
```

Access that EXTERNAL-IP in a browser, and Apache's default index.html (It works!) appears. The AKS cluster is now complete.

Making Kubernetes serverless with Virtual Kubelet

Now let's make the AKS cluster we just created serverless, starting with creating a virtual node with Virtual Kubelet.

If Azure Container Instances has never been used in the subscription before, its service must be registered as a resource provider. You can check whether it is registered with:

```
$ az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table
```

If the command prints the following, it is already registered:

```
Namespace                    RegistrationState    RegistrationPolicy
---------------------------  -------------------  --------------------
Microsoft.ContainerInstance  Registered           RegistrationRequired
```

If it is not registered, run the following to register the provider:

```
$ az provider register --namespace Microsoft.ContainerInstance
```

Once provider registration is done, create a subnet for the virtual node:

```
$ az network vnet subnet create \
    --resource-group myAKSrg \
    --vnet-name myVnet \
    --name myVirtualNodeSubnet \
    --address-prefixes 10.241.0.0/16
```

Enable the virtual node add-on with the following command, which creates the virtual node in the myVirtualNodeSubnet subnet just created:

```
$ az aks enable-addons \
    --resource-group myAKSrg \
    --name myAKSCluster \
    --addons virtual-node \
    --subnet-name myVirtualNodeSubnet
```

The virtual node is now created. Let's confirm it actually exists; virtual-node-aci-linux below is the newly created virtual node:

```
$ kubectl get nodes
NAME                                STATUS   ROLES   AGE     VERSION
aks-nodepool1-38191134-vmss000000   Ready    agent   84m     v1.15.11
aks-nodepool1-38191134-vmss000001   Ready    agent   83m     v1.15.11
aks-nodepool1-38191134-vmss000002   Ready    agent   84m     v1.15.11
virtual-node-aci-linux              Ready    agent   5m55s   v1.14.3-vk-azure-aci-v1.2.1.1
```

Let's deploy Pods to this virtual node. To make the check easier, first delete the Pods deployed when we built the AKS cluster:

```
$ kubectl delete deployment.apps/aks-sample
```

Create the following manifest for the Pods:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-sample
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aks-sample
  template:
    metadata:
      labels:
        app: aks-sample
    spec:
      containers:
        - name: aks-sample
          image: httpd:2.4.43
          ports:
            - containerPort: 80
      nodeSelector:
        kubernetes.io/role: agent
        beta.kubernetes.io/os: linux
        type: virtual-kubelet
      tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists
        - key: azure.com/aci
          effect: NoSchedule
```

This differs from the manifest used to build the AKS cluster in two ways.

First, a nodeSelector must be specified so that the Pods are deployed onto the virtual node that was created.

Second, the virtual node carries Taints so that critical Pods such as kube-proxy are not deployed onto it. To deploy Pods to the virtual node, you must therefore specify matching Tolerations, forcing the Pods onto the virtual node.

Now deploy the manifest:

```
$ kubectl apply -f aks-sample-virtual-nodes.yml
```

Checking the Pods' state shows the three Pods deployed on the virtual node:

```
$ kubectl describe nodes virtual-node-aci-linux
...(snip)...
Non-terminated Pods:          (3 in total)
  Namespace   Name                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------   ----                          ------------  ----------  ---------------  -------------  ---
  default     aks-sample-6d586c44b8-4vrf9   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  default     aks-sample-6d586c44b8-7n98h   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
  default     aks-sample-6d586c44b8-rwkk4   0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
...(snip)...
```

Access the Service's EXTERNAL-IP again, and Apache's default index.html (It works!) appears.

Now look at the AKS cluster's resource group in the Azure portal. The three Azure Container Instances corresponding to the replicas specified in the manifest have been created: you can see that Virtual Kubelet has issued API calls to the serverless container platform to create the Pods.

Summary

It has been a quick tour, but we have covered relatively new Kubernetes know-how, from design through hands-on serverless practice. To repeat, Kubernetes evolves at a very fast pace, and its uses keep expanding. There is no shortage of new topics, such as bringing Kubernetes to the edge in IoT (servers that sit between IoT sensors/devices and cloud services, aggregating and filtering data before sending it on to the cloud). Beyond this article, keep your antenna tuned to the latest Kubernetes developments and put them to use in your own work.

Noriyuki Takei (@noriyukitakei): Senior architect at SIOS Technology, Inc. Specializes in Java, PHP, serverless applications, containers, and Azure (Functions, Azure Kubernetes Service, and more). Active in sharing knowledge, with numerous conference talks and over 100 articles a year on the blog "SIOS Tech.Lab". Microsoft MVP for Azure.

Edited by: 馮富久 (株式会社技術評論社)