# Ceph RBD and Kubernetes
Top 5 Open Source Kubernetes Storage Solutions
Historically, Kubernetes storage was challenging to configure and required specialized knowledge to get up and running. The landscape has since evolved considerably, with many solid options that are relatively easy to implement for data stored in Kubernetes clusters. Those running Kubernetes in a home lab will also benefit from the free and open-source…

OpenShift Virtualization Architecture: Inside KubeVirt and Beyond
OpenShift Virtualization, powered by KubeVirt, enables organizations to run virtual machines (VMs) alongside containerized workloads within the same Kubernetes platform. This unified infrastructure offers seamless integration, efficiency, and scalability. Let’s delve into the architecture that makes OpenShift Virtualization a robust solution for modern workloads.
The Core of OpenShift Virtualization: KubeVirt
What is KubeVirt?
KubeVirt is an open-source project that extends Kubernetes to manage and run VMs natively. By leveraging Kubernetes' orchestration capabilities, KubeVirt bridges the gap between traditional VM-based applications and modern containerized workloads.
Key Components of KubeVirt Architecture
Virtual Machine (VM) Custom Resource Definition (CRD):
Defines the specifications and lifecycle of VMs as Kubernetes-native resources.
Enables seamless VM creation, updates, and deletion using Kubernetes APIs.
Virt-Controller:
Ensures the desired state of VMs.
Manages operations like VM start, stop, and restart.
Virt-Launcher:
A pod that hosts the VM instance.
Ensures isolation and integration with Kubernetes networking and storage.
Virt-Handler:
Runs on each node to manage VM-related operations.
Communicates with the Virt-Controller to execute tasks such as attaching disks or configuring networking.
Libvirt and QEMU/KVM:
Underlying technologies that provide VM execution capabilities.
Offer high performance and compatibility with existing VM workloads.
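To make the CRD concrete, here is a minimal VirtualMachine manifest. This is a hedged sketch using the upstream KubeVirt API (kubevirt.io/v1) and a public CirrOS demo container disk; the name, sizing, and image are illustrative assumptions rather than values taken from this article.

```yaml
# Minimal KubeVirt VirtualMachine: one core, 1 GiB RAM, a CirrOS container disk,
# and pod networking via masquerade. Apply with `oc apply -f demo-vm.yaml`.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: false                      # start later with `virtctl start demo-vm`
  template:
    metadata:
      labels:
        kubevirt.io/vm: demo-vm
    spec:
      domain:
        cpu:
          cores: 1
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}          # NAT through the pod network
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Once applied, the Virt-Controller notices the new object and, when the VM is started, schedules a virt-launcher pod that runs the guest under QEMU/KVM.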
Integration with Kubernetes Ecosystem
Networking
OpenShift Virtualization integrates with Kubernetes networking solutions, such as:
Multus: Enables multiple network interfaces for VMs and containers.
SR-IOV: Provides high-performance networking for VMs.
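As a sketch of the Multus side, a secondary network is declared as a NetworkAttachmentDefinition and then referenced by name from the VM template. The bridge name br1 and the CNI configuration below are assumptions that depend on how your worker nodes are wired.

```yaml
# A hypothetical secondary L2 network exposed to VMs and pods via Multus.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan100-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan100-net",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }
```

A VM template can then attach to it by adding an entry such as `multus: {networkName: vlan100-net}` under `networks` and a matching bridge interface under `domain.devices.interfaces`.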
Storage
Persistent storage for VMs is achieved using Kubernetes StorageClasses, ensuring that VMs have access to reliable and scalable storage solutions, such as:
Ceph RBD
NFS
GlusterFS
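For example, with the Containerized Data Importer (CDI) that ships alongside OpenShift Virtualization, a VM root disk can be imported straight onto an RBD-backed volume. The sketch below assumes a Ceph RBD StorageClass named ocs-storagecluster-ceph-rbd and a hypothetical image URL; block mode with ReadWriteMany is a common choice because it keeps live migration possible.

```yaml
# Hypothetical DataVolume: CDI downloads the image and writes it onto a
# Ceph RBD-backed PVC that a VirtualMachine can then mount as its root disk.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: rhel9-rootdisk
spec:
  source:
    http:
      url: "https://example.com/images/rhel9.qcow2"   # assumption: any reachable qcow2/raw image
  pvc:
    storageClassName: ocs-storagecluster-ceph-rbd     # assumption: your RBD-backed StorageClass
    volumeMode: Block
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 30Gi
```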
Security
Security is built into OpenShift Virtualization with:
SELinux: Enforces fine-grained access control.
RBAC: Manages access to VM resources via Kubernetes roles and bindings.
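As a hedged sketch of the RBAC piece, the namespaced Role below lets a team view VMs and use the start/stop/restart subresources; the subresource API group follows upstream KubeVirt conventions, so verify the exact names against your cluster before relying on it.

```yaml
# Hypothetical Role for developers in the dev-vms namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vm-operator
  namespace: dev-vms                      # assumption: target namespace
rules:
  - apiGroups: ["kubevirt.io"]
    resources: ["virtualmachines", "virtualmachineinstances"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["subresources.kubevirt.io"]
    resources: ["virtualmachines/start", "virtualmachines/stop", "virtualmachines/restart"]
    verbs: ["update"]
```

Bind it to a user or group with a RoleBinding in the same namespace.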
Beyond KubeVirt: Expanding Capabilities
Hybrid Workloads
OpenShift Virtualization enables hybrid workloads by allowing applications to:
Combine VM-based legacy components with containerized microservices.
Transition legacy apps into cloud-native environments gradually.
Operator Framework
OpenShift Virtualization leverages Operators to automate lifecycle management tasks like deployment, scaling, and updates for VM workloads.
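In practice this usually means installing the OpenShift Virtualization Operator and creating a single HyperConverged custom resource, which the Operator reconciles into the full KubeVirt stack. A minimal sketch, assuming the Operator is already installed in the conventional openshift-cnv namespace:

```yaml
# Defaults are acceptable for most clusters; the Operator fills in the rest.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```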
Performance Optimization
Supports GPU passthrough for high-performance workloads, such as AI/ML.
Leverages advanced networking and storage features for demanding applications.
Real-World Use Cases
Dev-Test Environments: Developers can run VMs alongside containers to test different environments and dependencies.
Data Center Consolidation: Consolidate traditional and modern workloads on a unified Kubernetes platform, reducing operational overhead.
Hybrid Cloud Strategy: Extend VMs from on-premises to cloud environments seamlessly with OpenShift.
Conclusion
OpenShift Virtualization, with its KubeVirt foundation, is a game-changer for organizations seeking to modernize their IT infrastructure. By enabling VMs and containers to coexist and collaborate, OpenShift bridges the past and future of application workloads, unlocking unparalleled efficiency and scalability.
Whether you're modernizing legacy systems or innovating with cutting-edge technologies, OpenShift Virtualization provides the tools to succeed in today’s dynamic IT landscape.
For more information visit: https://www.hawkstack.com/
The OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for cluster operations that require data persistence. As a developer, you can use persistent volume claims (PVCs) to request PV resources without specific knowledge of the underlying storage infrastructure. In this short guide you'll learn how to expand an existing PVC in OpenShift when using OpenShift Container Storage.

Before you can expand persistent volumes, the StorageClass must have the allowVolumeExpansion field set to true. Here is the list of storage classes available in my OpenShift cluster:

$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   169d
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d

I'll modify the default storage class, which is ocs-storagecluster-cephfs. Let's export its configuration to a YAML file:

oc get sc ocs-storagecluster-cephfs -o yaml > ocs-storagecluster-cephfs.yml

I'll modify the file to add the allowVolumeExpansion field:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ocs-storagecluster-cephfs
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true # Added field

Delete the currently configured StorageClass, since a StorageClass is an immutable resource:

$ oc delete sc ocs-storagecluster-cephfs
storageclass.storage.k8s.io "ocs-storagecluster-cephfs" deleted

Apply the modified storage class configuration by running the following command:

$ oc apply -f ocs-storagecluster-cephfs.yml
storageclass.storage.k8s.io/ocs-storagecluster-cephfs created

List storage classes to confirm it was indeed created:

$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   5m20s
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d

Output the YAML and confirm the new setting was applied:

$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"ocs-storagecluster-cephfs"},"parameters":{"clusterID":"openshift-storage","csi.storage.k8s.io/node-stage-secret-name":"rook-csi-cephfs-node","csi.storage.k8s.io/node-stage-secret-namespace":"openshift-storage","csi.storage.k8s.io/provisioner-secret-name":"rook-csi-cephfs-provisioner","csi.storage.k8s.io/provisioner-secret-namespace":"openshift-storage","fsName":"ocs-storagecluster-cephfilesystem"},"provisioner":"openshift-storage.cephfs.csi.ceph.com","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2020-10-31T13:33:56Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "242503097"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: 5aa95d3b-c39c-438d-85af-5c8550d6ed5b
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

How To Expand a PVC in OpenShift

List available PVCs in the namespace:

$ oc get pvc
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
data-harbor-harbor-redis-0                Bound    pvc-e516b793-60c5-431d-955f-b1d57bdb556b   1Gi        RWO            ocs-storagecluster-cephfs   169d
database-data-harbor-harbor-database-0    Bound    pvc-00a53065-9790-4291-8f00-288359c00f6c   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-chartmuseum                 Bound    pvc-405c68de-eecd-4db1-9ca1-5ca97eeab37c   5Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-jobservice                  Bound    pvc-e52f231e-0023-41ad-9aff-98ac53cecb44   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-registry                    Bound    pvc-77e159d4-4059-47dd-9c61-16a6e8b37a14   100Gi      RWX            ocs-storagecluster-cephfs   39d

Edit the PVC and change the requested capacity:

$ oc edit pvc data-harbor-harbor-redis-0
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Delete the pod using the claim:

$ oc delete pods harbor-harbor-redis-0
pod "harbor-harbor-redis-0" deleted

Recreate the deployment that was claiming the storage and it should utilize the new capacity.

Expanding a PVC on the OpenShift Web Console

You can also expand a PVC from the web console. Click on "Expand PVC" and set the desired PVC capacity.
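For readers who prefer declarative changes, the expansion above can also be expressed as a manifest. This is a sketch reusing the names from the example; during expansion the only field that changes is spec.resources.requests.storage.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-harbor-harbor-redis-0
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ocs-storagecluster-cephfs
  resources:
    requests:
      storage: 2Gi   # raised from 1Gi
```

Apply it with `oc apply -f pvc.yaml`; every other field must match the existing claim exactly.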
Open Source Definitely Changed Storage Industry With Linux and other technologies and products, it impacts all areas. By Philippe Nicolas | February 16, 2021 at 2:23 pm It’s not a breaking news but the impact of open source in the storage industry was and is just huge and won’t be reduced just the opposite. For a simple reason, the developers community is the largest one and adoption is so wide. Some people see this as a threat and others consider the model as a democratic effort believing in another approach. Let’s dig a bit. First outside of storage, here is the list some open source software (OSS) projects that we use every day directly or indirectly: Linux and FreeBSD of course, Kubernetes, OpenStack, Git, KVM, Python, PHP, HTTP server, Hadoop, Spark, Lucene, Elasticsearch (dual license), MySQL, PostgreSQL, SQLite, Cassandra, Redis, MongoDB (under SSPL), TensorFlow, Zookeeper or some famous tools and products like Thunderbird, OpenOffice, LibreOffice or SugarCRM. The list is of course super long, very diverse and ubiquitous in our world. Some of these projects initiated some wave of companies creation as they anticipate market creation and potentially domination. Among them, there are Cloudera and Hortonworks, both came public, promoting Hadoop and they merged in 2019. MariaDB as a fork of MySQL and MySQL of course later acquired by Oracle. DataStax for Cassandra but it turns out that this is not always a safe destiny … Coldago Research estimated that the entire open source industry will represent $27+ billion in 2021 and will pass the barrier of $35 billion in 2024. Historically one of the roots came from the Unix – Linux transition. In fact, Unix was largely used and adopted but represented a certain price and the source code cost was significant, even prohibitive. Projects like Minix and Linux developed and studied at universities and research centers generated tons of users and adopters with many of them being contributors. Is it similar to a religion, probably not but for sure a philosophy. Red Hat, founded in 1993, has demonstrated that open source business could be big and ready for a long run, the company did its IPO in 1999 and had an annual run rate around $3 billion. The firm was acquired by IBM in 2019 for $34 billion, amazing right. Canonical, SUSE, Debian and a few others also show interesting development paths as companies or as communities. Before that shift, software developments were essentially applications as system software meant cost and high costs. Also a startup didn’t buy software with the VC money they raised as it could be seen as suicide outside of their mission. All these contribute to the open source wave in all directions. On the storage side, Linux invited students, research centers, communities and start-ups to develop system software and especially block storage approach and file system and others like object storage software. Thus we all know many storage software start-ups who leveraged Linux to offer such new storage models. We didn’t see lots of block storage as a whole but more open source operating system with block (SCSI based) storage included. This is bit different for file and object storage with plenty of offerings. On the file storage side, the list is significant with disk file systems and distributed ones, the latter having multiple sub-segments as well. Below is a pretty long list of OSS in the storage world. 
Block Storage Linux-LIO, Linux SCST & TGT, Open-iSCSI, Ceph RBD, OpenZFS, NexentaStor (Community Ed.), Openfiler, Chelsio iSCSI, Open vStorage, CoprHD, OpenStack Cinder File Storage Disk File Systems: XFS, OpenZFS, Reiser4 (ReiserFS), ext2/3/4 Distributed File Systems (including cluster, NAS and parallel to simplify the list): Lustre, BeeGFS, CephFS, LizardFS, MooseFS, RozoFS, XtreemFS, CohortFS, OrangeFS (PVFS2), Ganesha, Samba, Openfiler, HDFS, Quantcast, Sheepdog, GlusterFS, JuiceFS, ScoutFS, Red Hat GFS2, GekkoFS, OpenStack Manila Object Storage Ceph RADOS, MinIO, Seagate CORTX, OpenStack Swift, Intel DAOS Other data management and storage related projects TAR, rsync, OwnCloud, FileZilla, iRODS, Amanda, Bacula, Duplicati, KubeDR, Velero, Pydio, Grau Data OpenArchive The impact of open source is obvious both on commercial software but also on other emergent or small OSS footprint. By impact we mean disrupting established market positions with radical new approach. It is illustrated as well by commercial software embedding open source pieces or famous largely adopted open source product that prevent some initiatives to take off. Among all these scenario, we can list XFS, OpenZFS, Ceph and MinIO that shake commercial models and were even chosen by vendors that don’t need to develop themselves or sign any OEM deal with potential partners. Again as we said in the past many times, the Build, Buy or Partner model is also a reality in that world. To extend these examples, Ceph is recommended to be deployed with XFS disk file system for OSDs like OpenStack Swift. As these last few examples show, obviously open source projets leverage other open source ones, commercial software similarly but we never saw an open source project leveraging a commercial one. This is a bit antinomic. This acts as a trigger to start a development of an open source project offering same functions. OpenZFS is also used by Delphix, Oracle and in TrueNAS. MinIO is chosen by iXsystems embedded in TrueNAS, Datera, Humio, Robin.IO, McKesson, MapR (now HPE), Nutanix, Pavilion Data, Portworx (now Pure Storage), Qumulo, Splunk, Cisco, VMware or Ugloo to name a few. SoftIron leverages Ceph and build optimized tailored systems around it. The list is long … and we all have several examples in mind. Open source players promote their solutions essentially around a community and enterprise editions, the difference being the support fee, the patches policies, features differences and of course final subscription fees. As we know, innovations come often from small agile players with a real difficulties to approach large customers and with doubt about their longevity. Choosing the OSS path is a way to be embedded and selected by larger providers or users directly, it implies some key questions around business models. Another dimension of the impact on commercial software is related to the behaviors from universities or research centers. They prefer to increase budget to hardware and reduce software one by using open source. These entities have many skilled people, potentially time, to develop and extend open source project and contribute back to communities. They see, in that way to work, a positive and virtuous cycle, everyone feeding others. Thus they reach new levels of performance gaining capacity, computing power … finally a decision understandable under budget constraints and pressure. 
Ceph was started during Sage Weil thesis at UCSC sponsored by the Advanced Simulation and Computing Program (ASC), including Sandia National Laboratories (SNL), Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL). There is a lot of this, famous example is Lustre but also MarFS from LANL, GekkoFS from University of Mainz, Germany, associated with the Barcelona Supercomputing Center or BeeGFS, formerly FhGFS, developed by the Fraunhofer Center for High Performance Computing in Germany as well. Lustre was initiated by Peter Braam in 1999 at Carnegie Mellon University. Projects popped up everywhere. Collaboration software as an extension to storage see similar behaviors. OwnCloud, an open source file sharing and collaboration software, is used and chosen by many universities and large education sites. At the same time, choosing open source components or products as a wish of independence doesn’t provide any kind of life guarantee. Rremember examples such HDFS, GlusterFS, OpenIO, NexentaStor or Redcurrant. Some of them got acquired or disappeared and create issue for users but for sure opportunities for other players watching that space carefully. Some initiatives exist to secure software if some doubt about future appear on the table. The SDS wave, a bit like the LMAP (Linux, MySQL, Apache web server and PHP) had a serious impact of commercial software as well as several open source players or solutions jumped into that generating a significant pricing erosion. This initiative, good for users, continues to reduce also differentiators among players and it became tougher to notice differences. In addition, Internet giants played a major role in open source development. They have talent, large teams, time and money and can spend time developing software that fit perfectly their need. They also control communities acting in such way as they put seeds in many directions. The other reason is the difficulty to find commercial software that can scale to their need. In other words, a commercial software can scale to the large corporation needs but reaches some limits for a large internet player. Historically these organizations really redefined scalability objectives with new designs and approaches not found or possible with commercial software. We all have example in mind and in storage Google File System is a classic one or Haystack at Facebook. Also large vendors with internal projects that suddenly appear and donated as open source to boost community effort and try to trigger some market traction and partnerships, this is the case of Intel DAOS. Open source is immediately associated with various licenses models and this is the complex aspect about source code as it continues to create difficulties for some people and entities that impact projects future. One about ZFS or even Java were well covered in the press at that time. We invite readers to check their preferred page for that or at least visit the Wikipedia one or this one with the full table on the appendix page. Immediately associated with licenses are the communities, organizations or foundations and we can mention some of them here as the list is pretty long: Apache Software Foundation, Cloud Native Computing Foundation, Eclipse Foundation, Free Software Foundation, FreeBSD Foundation, Mozilla Foundation or Linux Foundation … and again Wikipedia represents a good source to start.
Open Source Definitely Changed Storage Industry - StorageNewsletter
A Cloud-Native MySQL Solution in Ctrip's Development and Testing Environments
1. Background and Use Cases
As Kubernetes has come to dominate container-based cloud computing, the term "cloud native" comes up more and more often, and applications of every kind have moved toward containerization. Running stateless services on Kubernetes has been validated at large production scale.
Stateful applications, by contrast, have not gone nearly as smoothly, especially critical but long-established stateful services that were never designed around distributed-architecture principles. MySQL is a typical example. We made many attempts, from containerizing single MySQL instances on local storage at the start to a compute/storage-separation design, and took a few detours along the way. In the end we found a suitable entry point in development and testing scenarios and built a workable solution that separates compute from storage, with a Kubernetes Operator at its core, Ceph RBD as the backend storage, and versioned database management as its distinguishing feature.
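Ctrip's actual Operator is not shown in this excerpt, but as a rough, hypothetical sketch of the compute/storage-separation idea, a MySQL instance whose data lives on a Ceph RBD-backed volume can be expressed with a StatefulSet and a volumeClaimTemplate like this (the ceph-rbd StorageClass name, image, and credentials handling are all assumptions):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-test
spec:
  serviceName: mysql-test
  replicas: 1
  selector:
    matchLabels:
      app: mysql-test
  template:
    metadata:
      labels:
        app: mysql-test
    spec:
      containers:
        - name: mysql
          image: mysql:5.7
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: changeme              # assumption: use a Secret in anything beyond a test
          volumeMounts:
            - name: data
              mountPath: /var/lib/mysql    # database files live on the RBD volume, not the node
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ceph-rbd         # assumption: an RBD-backed StorageClass
        resources:
          requests:
            storage: 10Gi
```

Because the data sits on Ceph RBD rather than local disk, the pod can be rescheduled to another node, and features like versioned database management can presumably build on RBD snapshots and clones.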
A typical use case looks like this: a tester needs to construct a test scenario in which a batch of production order data is anomalous, …
from "A Cloud-Native MySQL Solution in Ctrip's Development and Testing Environments" via KKNEWS
Using AWS-GO-SDK in an AWS-Compatible Cloud
The other day, while looking for a Docker driver that supports EBS volumes in an AWS-compatible cloud, I came across several interesting libraries and solutions. Here they are:
blocker – the simplest driver; it lets you attach already-created EBS volumes to your Docker containers in the region where your Docker host is running
libstorage – a very interesting open-source library from EMC that really caught my attention; it lets you work with different storage types from Go. It supports volumes in:
Azure
OpenStack (Cinder)
AWS EBS
AWS EFS
Google Compute Engine
Isilon
Ceph (RBD)
S3FS
ScaleIO
VirtualBox
Vfs
Rex-Ray – the actual driver for working with EBS volumes in various cloud environments
Since all of these libraries and solutions are written in Go, I had to quickly pick up a new (to me) programming language in order to try to "bolt" them onto the CROC Cloud. I will share the results a bit later: as usual there were bugs, so some time will have to be spent working with the community and on patching.
libstorage itself turned up while I was studying the implementation of Rex-Ray, the Docker volume driver (another open-source solution from EMC). Judging by the code, it can be used to manage volumes not only in Docker but also in Kubernetes. What makes it interesting is that it can run on every compute host, or as a service on the masters, and it can create EBS volumes in the cloud on its own and attach them to your containers.
While I dig deeper into these libraries and drivers, I decided to share how to use AWS-Go-SDK with the CROC Cloud (read: with any AWS-compatible cloud), in case it is useful to someone else. So, a couple of examples.
Connecting to the Cloud
Connecting to an AWS-compatible cloud (for example, the CROC Cloud or Eucalyptus) is done as follows:

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    region := "ru-msk"
    endpoint := "api.cloud.croc.ru"
    awsKey := "Your Key"
    awsSecret := "Your Secret KEY"

    sess := session.New()
    // Build an EC2 client pointed at the cloud's own endpoint with static credentials.
    svc := ec2.New(sess, &aws.Config{
        Region:   &region,
        Endpoint: &endpoint,
        Credentials: credentials.NewChainCredentials(
            []credentials.Provider{
                &credentials.StaticProvider{
                    Value: credentials.Value{
                        AccessKeyID:     awsKey,
                        SecretAccessKey: awsSecret,
                    },
                },
            },
        ),
    })
    fmt.Println(svc)
}
Listing Volumes
I won't reprint every example available in the official documentation; I'll simply show how to use the svc object for further work, taking the listing of existing volumes as an example. You can easily do the rest yourself.

package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/awserr"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    region := "ru-msk"
    endpoint := "api.cloud.croc.ru"
    awsKey := "Your Key"
    awsSecret := "Your Secret KEY"

    sess := session.New()
    svc := ec2.New(sess, &aws.Config{
        Region:   &region,
        Endpoint: &endpoint,
        Credentials: credentials.NewChainCredentials(
            []credentials.Provider{
                &credentials.StaticProvider{
                    Value: credentials.Value{
                        AccessKeyID:     awsKey,
                        SecretAccessKey: awsSecret,
                    },
                },
            },
        ),
    })

    input := &ec2.DescribeVolumesInput{}
    result, err := svc.DescribeVolumes(input)
    if err != nil {
        if aerr, ok := err.(awserr.Error); ok {
            switch aerr.Code() {
            default:
                fmt.Println(aerr.Error())
            }
        } else {
            // Print the error, cast err to awserr.Error to get the Code and
            // Message from an error.
            fmt.Println(err.Error())
        }
        return
    }
    fmt.Println(result)
}
Conclusion
This article covered several Go libraries and drivers that can be used to manage Docker volumes on the major cloud platforms. How easy they are to use in AWS-compatible clouds remains to be seen. Stay tuned for updates.
The Dynamic volume provisioning in Kubernetes allows storage volumes to be created on-demand, without manual Administrator intervention. When developers are doing deployments without dynamic provisioning, cluster administrators have to manually make calls to their cloud or storage provider to create new storage volumes, from where the PersistentVolumes are created. This guide will discuss how you can achieve Dynamic Volume Provisioning on Kubernetes by using GlusterFS distributed storage solution and Heketi RESTful management interface. It is expected you have deployed Heketi and GlusterFS scale-out network-attached storage file system. For Ceph, check: Ceph Persistent Storage for Kubernetes with Cephfs Persistent Storage for Kubernetes with Ceph RBD How Dynamic Provisioning is configured in Kubernetes In Kubernetes, dynamic volume provisioning is based on the API object StorageClass from the API group storage.k8s.io. As a cluster administrator, you’ll define as many StorageClass objects as needed, each specifying a volume plugin ( provisioner) that provisions a volume and the set of parameters to pass to that provisioner when provisioning. So below are the steps you’ll use to configure Dynamic Volume Provisioning on Kubernetes using Gluster and Heketi API. Setup GlusterFS and Heketi It is expected you have a running Gluster and Heketi before you continue with configurations on the Kubernetes end. Refer to our guide below on setting them up. Setup GlusterFS Storage With Heketi on CentOS 8 / CentOS 7 At the moment we only have guide for CentOS, but we’re working on a deployment guide for Ubuntu/Debian systems. For containerized setup, check: Setup Kubernetes / OpenShift Dynamic Persistent Volume Provisioning with GlusterFS and Heketi Once the installation is done, proceed to step 2: Create StorageClass Object on Kubernetes We need to create a StorageClass object to enable dynamic provisioning for container platform users. The StorageClass objects define which provisioner should be used and what parameters should be passed to that provisioner when dynamic provisioning is invoked. Check your Heketi Cluster ID $ heketi-cli cluster list Clusters: Id:b182cb76b881a0be2d44bd7f8fb07ea4 [file][block] Create Kubernetes Secret Get a base64 format of your Heketi admin user password. $ echo -n "PASSWORD" | base64 Then create a secret with the password for accessing Heketi. $ vim gluster-secret.yaml apiVersion: v1 kind: Secret metadata: name: heketi-secret namespace: default type: "kubernetes.io/glusterfs" data: # echo -n "PASSWORD" | base64 key: cGFzc3dvcmQ= Where: cGFzc3dvcmQ= is the output of echo command. Create the secret by running the command: $ kubectl create -f gluster-secret.yaml Confirm secret creation. $ kubectl get secret NAME TYPE DATA AGE heketi-secret kubernetes.io/glusterfs 1 1d Create StorageClass Below is a sample StorageClass for GlusterFS using Heketi. $ cat glusterfs-sc.yaml kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: gluster-heketi provisioner: kubernetes.io/glusterfs reclaimPolicy: Delete volumeBindingMode: Immediate allowVolumeExpansion: true parameters: resturl: "http://heketiserverip:8080" restuser: "admin" secretName: "heketi-secret" secretNamespace: "default" volumetype: "replicate:2" volumenameprefix: "k8s-dev" clusterid: "b182cb76b881a0be2d44bd7f8fb07ea4" Where: gluster-heketi is the name of the StorageClass to be created. The valid options for reclaim policy are Retain, Delete or Recycle. 
The Delete policy means that a dynamically provisioned volume is automatically deleted when a user deletes the corresponding PersistentVolumeClaim. The volumeBindingMode field controls when volume binding and dynamic provisioning should occur.
Valid options are Immediate & WaitForFirstConsumer. The Immediate mode indicates that volume binding and dynamic provisioning occurs once the PersistentVolumeClaim is created. The WaitForFirstConsumer mode delays the binding and provisioning of a PersistentVolume until a Pod using the PersistentVolumeClaim is created. The resturl is the URL of your heketi endpoint heketi-secret is the secret created for Heketi credentials. default is the name of namespace where secret was created replicate:2 indicated the default replication factor for Gluster Volumes created. For more HA, use 3. volumenameprefix: By default dynamically provisioned volumes have the naming schema of vol_UUID format. We have provided a desired volume name from storageclass. So the naming scheme will be: volumenameprefix_Namespace_PVCname_randomUUID b182cb76b881a0be2d44bd7f8fb07ea4 is the ID of the cluster obtained from the command heketi-cli cluster list Another parameter that can be set is: volumeoptions: "user.heketi.zone-checking strict" The default setting/behavior is: volumeoptions: "user.heketi.zone-checking none" This forces Heketi to strictly place replica bricks in different zones. The required minimum number of nodes required to be present in different zones is 3 if the replica value is set to 3. Once the file is created, run the following command to create the StorageClass object. $ kubectl create -f gluster-sc.yaml Confirm StorageClass creation. $ kubectl get sc NAME PROVISIONER AGE glusterfs-heketi kubernetes.io/glusterfs 1d local-storage kubernetes.io/no-provisioner 30d Step 2: Create PersistentVolumeClaim Object When a user is requesting dynamically provisioned storage, a storage class should be included in the PersistentVolumeClaim. Let’s create a 1GB request for storage: $ vim glusterfs-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gluster-pvc annotations: volume.beta.kubernetes.io/storage-class: gluster-heketi spec: accessModes: - ReadWriteMany resources: requests: storage: 1Gi Create object: $ kubectl create --save-config -f glusterfs-pvc.yaml Confirm: $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE glusterfs-pvc Bound pvc-34b9b5e9-fbde-11e9-943f-00505692ee7e 1Gi RWX glusterfs-heketi 1d After creation, you can use it in your deployments. To use the volume we reference the PVC in the YAML file of any Pod/Deployment like this for example: apiVersion: v1 kind: Pod metadata: name: gluster-pod labels: name: gluster-pod spec: containers: - name: gluster-pod image: busybox command: ["sleep", "60000"] volumeMounts: - name: gluster-vol mountPath: /usr/share/busybox readOnly: false volumes: - name: gluster-vol persistentVolumeClaim: claimName: glusterfs-pvc That’s it for today. You should have a working Dynamic Volume Provisioning With Heketi & GlusterFS for your Kubernetes platform.
In our last tutorial, we discussed on how you can Persistent Storage for Kubernetes with Ceph RBD. As promised, this article will focus on configuring Kubernetes to use external Ceph Ceph File System to store Persistent data for Applications running on Kubernetes container environment. If you’re new to Ceph but have a running Ceph Cluster, Ceph File System(CephFS), is a POSIX-compliant file system built on top of Ceph’s distributed object store, RADOS. CephFS is designed to provide a highly available, multi-use, and performant file store for a variety of applications. This tutorial won’t dive deep to Kubernetes and Ceph concepts. It is to serve as an easy step-by-step guide on configuring both Ceph and Kubernetes to ensure you can provision persistent volumes automatically on Ceph backend with Cephfs. So follow steps below to get started. Ceph Persistent Storage for Kubernetes with Cephfs Before you begin this exercise, you should have a working external Ceph cluster. Most Kubernetes deployments using Ceph will involve using Rook. This guide assumes you have a Ceph storage cluster deployed with Ceph Ansible, Ceph Deploy or manually. We’ll be updating the link with other guides on the installation of Ceph on other Linux distributions. Step 1: Deploy Cephfs Provisioner on Kubernetes Login to your Kubernetes cluster and Create a manifest file for deploying RBD provisioner which is an out-of-tree dynamic provisioner for Kubernetes 1.5+. vim cephfs-provisioner.yml Add the following contents to the file. Notice our deployment uses RBAC, so we’ll create cluster role and bindings before creating service account and deploying Cephfs provisioner. --- kind: Namespace apiVersion: v1 metadata: name: cephfs --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cephfs-provisioner namespace: cephfs rules: - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: [""] resources: ["persistentvolumeclaims"] verbs: ["get", "list", "watch", "update"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "update", "patch"] - apiGroups: [""] resources: ["services"] resourceNames: ["kube-dns","coredns"] verbs: ["list", "get"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cephfs-provisioner namespace: cephfs subjects: - kind: ServiceAccount name: cephfs-provisioner namespace: cephfs roleRef: kind: ClusterRole name: cephfs-provisioner apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: cephfs-provisioner namespace: cephfs rules: - apiGroups: [""] resources: ["secrets"] verbs: ["create", "get", "delete"] - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch", "create", "update", "patch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: cephfs-provisioner namespace: cephfs roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: cephfs-provisioner subjects: - kind: ServiceAccount name: cephfs-provisioner --- apiVersion: v1 kind: ServiceAccount metadata: name: cephfs-provisioner namespace: cephfs --- apiVersion: apps/v1 kind: Deployment metadata: name: cephfs-provisioner namespace: cephfs spec: replicas: 1 selector: matchLabels: app: cephfs-provisioner strategy: type: Recreate template: metadata: labels: app: cephfs-provisioner spec: containers: - name: cephfs-provisioner image: 
"quay.io/external_storage/cephfs-provisioner:latest" env: - name: PROVISIONER_NAME value: ceph.com/cephfs - name: PROVISIONER_SECRET_NAMESPACE
value: cephfs command: - "/usr/local/bin/cephfs-provisioner" args: - "-id=cephfs-provisioner-1" serviceAccount: cephfs-provisioner Apply manifest: $ kubectl apply -f cephfs-provisioner.yml namespace/cephfs created clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created role.rbac.authorization.k8s.io/cephfs-provisioner created rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created serviceaccount/cephfs-provisioner created deployment.apps/cephfs-provisioner created Confirm that Cephfs volume provisioner pod is running. $ kubectl get pods -l app=cephfs-provisioner -n cephfs NAME READY STATUS RESTARTS AGE cephfs-provisioner-7b77478cb8-7nnxs 1/1 Running 0 84s Step 2: Get Ceph Admin Key and create Secret on Kubernetes Login to your Ceph Cluster and get the admin key for use by RBD provisioner. $ sudo ceph auth get-key client.admin Save the Value of the admin user key printed out by the command above. We’ll add the key as a secret in Kubernetes. $ kubectl create secret generic ceph-admin-secret \ --from-literal=key='' \ --namespace=cephfs Where is your Ceph admin key. You can confirm creation with the command below. $ kubectl get secrets ceph-admin-secret -n cephfs NAME TYPE DATA AGE ceph-admin-secret Opaque 1 6s Step 3: Create Ceph pool for Kubernetes & client key A Ceph file system requires at least two RADOS pools: For both: Data Metadata Generally, the metadata pool will have at most a few gigabytes of data. 64 or 128 is commonly used in practice for large clusters. For this reason, a smaller PG count is usually recommended. Let’s create Ceph OSD pools for Kubernetes: sudo ceph osd pool create cephfs_data 128 128 sudo ceph osd pool create cephfs_metadata 64 64 Create ceph file system on the pools: sudo ceph fs new cephfs cephfs_metadata cephfs_data Confirm creation of Ceph File System: $ sudo ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ] UI Dashboard confirmation: Step 4: Create Cephfs Storage Class on Kubernetes A StorageClass provides a way for you to describe the “classes” of storage you offer in Kubernetes. We’ll create a storageclass called cephfs. vim cephfs-sc.yml The contents to be added to file: --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: cephfs namespace: cephfs provisioner: ceph.com/cephfs parameters: monitors: 10.10.10.11:6789,10.10.10.12:6789,10.10.10.13:6789 adminId: admin adminSecretName: ceph-admin-secret adminSecretNamespace: cephfs claimRoot: /pvc-volumes Where: cephfs is the name of the StorageClass to be created. 10.10.10.11, 10.10.10.12 & 10.10.10.13 are the IP address of Ceph Monitors. You can list them with the command: $ sudo ceph -s cluster: id: 7795990b-7c8c-43f4-b648-d284ef2a0aba health: HEALTH_OK services: mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h) mgr: cephmon01(active, since 30h), standbys: cephmon02 mds: cephfs:1 0=cephmon01=up:active 1 up:standby osd: 9 osds: 9 up (since 32h), 9 in (since 32h) rgw: 3 daemons active (cephmon01, cephmon02, cephmon03) data: pools: 8 pools, 618 pgs objects: 250 objects, 76 KiB usage: 9.6 GiB used, 2.6 TiB / 2.6 TiB avail pgs: 618 active+clean After modifying the file with correct values of Ceph monitors, use kubectl command to create the StorageClass. 
$ kubectl apply -f cephfs-sc.yml storageclass.storage.k8s.io/cephfs created List available StorageClasses: $ kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE ceph-rbd ceph.com/rbd Delete Immediate false 25h cephfs ceph.com/cephfs Delete Immediate false 2m23s
Step 5: Create a test Claim and Pod on Kubernetes To confirm everything is working, let’s create a test persistent volume claim. $ vim cephfs-claim.yml --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: cephfs-claim1 spec: accessModes: - ReadWriteOnce storageClassName: cephfs resources: requests: storage: 1Gi Apply manifest file. $ kubectl apply -f cephfs-claim.yml persistentvolumeclaim/cephfs-claim1 created If it was successful in binding, it should show Bound status. $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ceph-rbd-claim1 Bound pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304 1Gi RWO ceph-rbd 25h cephfs-claim1 Bound pvc-1bfa81b6-2c0b-47fa-9656-92dc52f69c52 1Gi RWO cephfs 87s We can then deploy a test pod using the claim we created. First create a file to hold the data: vim cephfs-test-pod.yaml Add contents below: kind: Pod apiVersion: v1 metadata: name: test-pod spec: containers: - name: test-pod image: gcr.io/google_containers/busybox:latest command: - "/bin/sh" args: - "-c" - "touch /mnt/SUCCESS && exit 0 || exit 1" volumeMounts: - name: pvc mountPath: "/mnt" restartPolicy: "Never" volumes: - name: pvc persistentVolumeClaim: claimName: claim1 Create pod: $ kubectl apply -f cephfs-test-pod.yaml pod/test-pod created Confirm the pod is in the running state: $ kubectl get pods test-pod NAME READY STATUS RESTARTS AGE test-pod 0/1 Completed 0 2m28s Enjoy using Cephfs for Persistent volume provisioning on Kubernetes.
How can I use Ceph RBD for Kubernetes Dynamic persistent volume provisioning?. Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. One of the key requirements when deploying stateful applications in Kubernetes is data persistence. In this tutorial, we’ll look at how you can create a storage class on Kubernetes which provisions persistent volumes from an external Ceph Cluster using RBD (Ceph Block Device). Ceph block devices are thin-provisioned, resizable and store data striped over multiple OSDs in a Ceph cluster. Ceph block devices leverage RADOS capabilities such as snapshotting, replication and consistency. The Ceph’s RADOS Block Devices (RBD) interact with OSDs using kernel modules or the librbd library. Before you begin this exercise, you should have a working external Ceph cluster. Most Kubernetes deployments using Ceph will involve using Rook. This guide assumes you have a Ceph storage cluster deployed with Ceph Ansible, Ceph Deploy or manually. Step 1: Deploy Ceph Provisioner on Kubernetes Login to your Kubernetes cluster and Create a manifest file for deploying RBD provisioner which is an out-of-tree dynamic provisioner for Kubernetes 1.5+. vim ceph-rbd-provisioner.yml Add the following contents to the file. Notice our deployment uses RBAC, so we’ll create cluster role and bindings before creating service account and deploying Ceph RBD provisioner. --- kind: ServiceAccount apiVersion: v1 metadata: name: rbd-provisioner namespace: kube-system --- kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-provisioner namespace: kube-system rules: - apiGroups: [""] resources: ["persistentvolumes"] verbs: ["get", "list", "watch", "create", "delete"] - apiGroups: [""] resources: ["persistentvolumeclaims"] verbs: ["get", "list", "watch", "update"] - apiGroups: ["storage.k8s.io"] resources: ["storageclasses"] verbs: ["get", "list", "watch"] - apiGroups: [""] resources: ["events"] verbs: ["create", "update", "patch"] - apiGroups: [""] resources: ["services"] resourceNames: ["kube-dns","coredns"] verbs: ["list", "get"] - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch", "create", "update", "patch"] --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: rbd-provisioner namespace: kube-system subjects: - kind: ServiceAccount name: rbd-provisioner namespace: kube-system roleRef: kind: ClusterRole name: rbd-provisioner apiGroup: rbac.authorization.k8s.io --- apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: rbd-provisioner namespace: kube-system rules: - apiGroups: [""] resources: ["secrets"] verbs: ["get"] - apiGroups: [""] resources: ["endpoints"] verbs: ["get", "list", "watch", "create", "update", "patch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: rbd-provisioner namespace: kube-system roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: rbd-provisioner subjects: - kind: ServiceAccount name: rbd-provisioner namespace: kube-system --- apiVersion: apps/v1 kind: Deployment metadata: name: rbd-provisioner namespace: kube-system spec: replicas: 1 selector: matchLabels: app: rbd-provisioner strategy: type: Recreate template: metadata: labels: app: rbd-provisioner spec: containers: - name: rbd-provisioner image: "quay.io/external_storage/rbd-provisioner:latest" env: - name: PROVISIONER_NAME value: ceph.com/rbd serviceAccount: rbd-provisioner Apply the file to create the resources. 
$ kubectl apply -f ceph-rbd-provisioner.yml clusterrole.rbac.authorization.k8s.io/rbd-provisioner created clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created rolebinding.rbac.authorization.k8s.io/rbd-provisioner created deployment.apps/rbd-provisioner created Confirm that RBD volume provisioner pod is running. $ kubectl get pods -l app=rbd-provisioner -n kube-system NAME READY STATUS RESTARTS AGE rbd-provisioner-75b85f85bd-p9b8c 1/1 Running 0 3m45s Step 2: Get Ceph Admin Key and create Secret on Kubernetes Login to your Ceph Cluster and get the admin key for use by RBD provisioner. sudo ceph auth get-key client.admin Save the Value of the admin user key printed out by the command above. We’ll add the key as a secret in Kubernetes. kubectl create secret generic ceph-admin-secret \ --type="kubernetes.io/rbd" \ --from-literal=key='' \ --namespace=kube-system Where is your ceph admin key. You can confirm creation with the command below. $ kubectl get secrets ceph-admin-secret -n kube-system NAME TYPE DATA AGE ceph-admin-secret kubernetes.io/rbd 1 5m Step 3: Create Ceph pool for Kubernetes & client key Next is to create a new Ceph Pool for Kubernetes. $ sudo ceph ceph osd pool create # Example $ sudo ceph ceph osd pool create k8s 100 For more details, check our guide: Create a Pool in Ceph Storage Cluster Then create a new client key with access to the pool created. $ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=' # Example $ sudo ceph auth add client.kube mon 'allow r' osd 'allow rwx pool=k8s' Where k8s is the name of pool created in Ceph. You can then associate the pool with an application and initialize it. sudo ceph osd pool application enable rbd sudo rbd pool init Get the client key on Ceph. sudo ceph auth get-key client.kube Create client secret on Kubernetes kubectl create secret generic ceph-k8s-secret \ --type="kubernetes.io/rbd" \ --from-literal=key='' \ --namespace=kube-system Where is your Ceph client key. Step 4: Create a RBD Storage Class A StorageClass provides a way for you to describe the “classes” of storage you offer in Kubernetes. We’ll create a storageclass called ceph-rbd. vim ceph-rbd-sc.yml The contents to be added to file: --- kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: ceph-rbd provisioner: ceph.com/rbd parameters: monitors: 10.10.10.11:6789, 10.10.10.12:6789, 10.10.10.13:6789 pool: k8s-uat adminId: admin adminSecretNamespace: kube-system adminSecretName: ceph-admin-secret userId: kube userSecretNamespace: kube-system userSecretName: ceph-k8s-secret imageFormat: "2" imageFeatures: layering Where: ceph-rbd is the name of the StorageClass to be created. 10.10.10.11, 10.10.10.12 & 10.10.10.13 are the IP address of Ceph Monitors. 
You can list them with the command: $ sudo ceph -s cluster: id: 7795990b-7c8c-43f4-b648-d284ef2a0aba health: HEALTH_OK services: mon: 3 daemons, quorum cephmon01,cephmon02,cephmon03 (age 32h) mgr: cephmon01(active, since 30h), standbys: cephmon02 mds: cephfs:1 0=cephmon01=up:active 1 up:standby osd: 9 osds: 9 up (since 32h), 9 in (since 32h) rgw: 3 daemons active (cephmon01, cephmon02, cephmon03) data: pools: 8 pools, 618 pgs objects: 250 objects, 76 KiB usage: 9.6 GiB used, 2.6 TiB / 2.6 TiB avail pgs: 618 active+clean After modifying the file with correct values of Ceph monitors, apply config: $ kubectl apply -f ceph-rbd-sc.yml storageclass.storage.k8s.io/ceph-rbd created List available StorageClasses: $ kubectl get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE ceph-rbd ceph.com/rbd Delete Immediate false 17s cephfs ceph.com/cephfs Delete Immediate false 18d Step 5: Create a test Claim and Pod on Kubernetes
To confirm everything is working, let’s create a test persistent volume claim. $ vim ceph-rbd-claim.yml kind: PersistentVolumeClaim apiVersion: v1 metadata: name: ceph-rbd-claim1 spec: accessModes: - ReadWriteOnce storageClassName: ceph-rbd resources: requests: storage: 1Gi Apply manifest file to create claim. $ kubectl apply -f ceph-rbd-claim.yml persistentvolumeclaim/ceph-rbd-claim1 created If it was successful in binding, it should show Bound status. $ kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ceph-rbd-claim1 Bound pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304 1Gi RWO ceph-rbd 43s Nice!.. We are able create dynamic Persistent Volume Claims on Ceph RBD backend. Notice we didn’t have to manually create a Persistent Volume before a Claim. How cool is that?.. We can then deploy a test pod using the claim we created. First create a file to hold the data: vim rbd-test-pod.yaml Add: --- kind: Pod apiVersion: v1 metadata: name: rbd-test-pod spec: containers: - name: rbd-test-pod image: busybox command: - "/bin/sh" args: - "-c" - "touch /mnt/RBD-SUCCESS && exit 0 || exit 1" volumeMounts: - name: pvc mountPath: "/mnt" restartPolicy: "Never" volumes: - name: pvc persistentVolumeClaim: claimName: ceph-rbd-claim1 Create pod: $ kubectl apply -f rbd-test-pod.yaml pod/rbd-test-pod created If you describe the Pod, you’ll see successful attachment of the Volume. $ kubectl describe pod rbd-test-pod ..... vents: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled default-scheduler Successfully assigned default/rbd-test-pod to rke-worker-02 Normal SuccessfulAttachVolume 3s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c6f4399d-43cf-4fc1-ba14-cc22f5c85304" If you have Ceph Dashboard, you can see a new block image created. Our next guide will cover use of Ceph File System on Kubernetes for Dynamic persistent volume provisioning.