#Rook storage in Kubernetes
Top 5 Open Source Kubernetes Storage Solutions
Historically, Kubernetes storage has been challenging to configure and required specialized knowledge to get up and running. However, the landscape of K8s data storage has evolved greatly, with many options that are relatively easy to implement for data stored in Kubernetes clusters. Those running Kubernetes in a home lab will also benefit from the free and open-source…

Architecture Overview and Deployment of OpenShift Data Foundation Using Internal Mode
Introduction
OpenShift Data Foundation (ODF), formerly known as OpenShift Container Storage (OCS), is Red Hat’s unified and software-defined storage solution for OpenShift environments. It enables persistent storage for containers, integrated backup and disaster recovery, and multicloud data management.
One of the most common deployment methods for ODF is Internal Mode, where the storage devices are hosted within the OpenShift cluster itself — ideal for small to medium-scale deployments.
Architecture Overview: Internal Mode
In Internal Mode, OpenShift Data Foundation relies on Ceph — a highly scalable storage system — and utilizes three core components:
Rook Operator: Handles deployment and lifecycle management of Ceph clusters inside Kubernetes.
Ceph Cluster (MON, OSD, MGR, etc.): Provides object, block, and file storage using the available storage devices on OpenShift nodes.
NooBaa: Manages object storage interfaces (S3-compatible) and acts as a data abstraction layer for multicloud object storage.
Core Storage Layers:
Object Storage Daemons (OSDs): Store actual data and replicate across nodes for redundancy.
Monitor (MON): Ensures consistency and cluster health.
Manager (MGR): Provides metrics, dashboard, and cluster management.
📦 Key Benefits of Internal Mode
No need for external storage infrastructure.
Faster to deploy and manage via OpenShift Console.
Built-in replication and self-healing mechanisms.
Ideal for lab environments, edge, or dev/test clusters.
🚀 Deployment Prerequisites
An OpenShift 4.10+ cluster with a minimum of 3 worker nodes, each with:
At least 16 CPU cores and 64 GB RAM.
At least one unused raw block device (no partitions or file systems).
Internet connectivity or local OperatorHub mirror.
Persistent worker node roles (not shared with infra/control plane).
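To sanity-check the raw-device prerequisite from the CLI before installing anything, you can list nodes and inspect block devices from a node debug pod. A minimal sketch, assuming a worker node named worker-1 (replace with your own node names):

oc get nodes -o wide
# Inspect block devices on a worker from a debug pod; the candidate device
# should show no partitions and an empty FSTYPE column
oc debug node/worker-1 -- chroot /host lsblk -f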
🔧 Steps to Deploy ODF in Internal Mode
1. Install ODF Operator
Go to OperatorHub in the OpenShift Console.
Search and install OpenShift Data Foundation Operator in the appropriate namespace.
2. Create StorageCluster
Use the ODF Console to create a new StorageCluster.
Select Internal Mode.
Choose eligible nodes and raw devices.
Validate and apply.
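Behind the console workflow, the operator creates a StorageCluster custom resource. The sketch below is illustrative only; the device set count, capacity, and backing storage class are placeholders rather than guaranteed defaults:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset          # placeholder device set name
      count: 1                     # number of device sets
      replica: 3                   # one replica per selected node
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi       # placeholder capacity per device
          storageClassName: localblock   # assumes a local block storage class backs the raw devices
          volumeMode: Block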
3. Monitor Cluster Health
Access the ODF dashboard from the OpenShift Console.
Verify the status of MON, OSD, and MGR components.
Monitor used and available capacity.
4. Create Storage Classes
Default storage classes (like ocs-storagecluster-ceph-rbd, ocs-storagecluster-cephfs) are auto-created.
Use these classes in PVCs for your applications.
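For example, a claim against the auto-created RBD class might look like the sketch below (the claim name and size are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data                    # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                 # placeholder size
  storageClassName: ocs-storagecluster-ceph-rbd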
Use Cases Supported
Stateful Applications: Databases (PostgreSQL, MongoDB), Kafka, ElasticSearch.
CI/CD Pipelines requiring persistent storage.
Backup and Disaster Recovery via ODF and ACM.
AI/ML Workloads needing large-scale data persistence.
📌 Best Practices
Label nodes intended for storage to prevent scheduling other workloads.
Always monitor disk health and usage via the dashboard.
Regularly test failover and recovery scenarios.
For production, consider External Mode or Multicloud Gateway for advanced scalability.
🎯 Conclusion
Deploying OpenShift Data Foundation in Internal Mode is a robust and simplified way to bring storage closer to your workloads. It ensures seamless integration with OpenShift, eliminates the need for external SAN/NAS, and supports a wide range of use cases — all while leveraging Ceph’s proven resilience.
Whether you're running apps at the edge, in dev/test, or need flexible persistent storage, ODF with Internal Mode is a solid choice.
For more info, Kindly follow: Hawkstack Technologies
Enterprise Kubernetes Storage with Red Hat OpenShift Data Foundation (DO370)
In the era of cloud-native transformation, data is the fuel powering everything from mission-critical enterprise apps to real-time analytics platforms. However, as Kubernetes adoption grows, many organizations face a new set of challenges: how to manage persistent storage efficiently, reliably, and securely across distributed environments.
To solve this, Red Hat OpenShift Data Foundation (ODF) emerges as a powerful solution — and the DO370 training course is designed to equip professionals with the skills to deploy and manage this enterprise-grade storage platform.
🔍 What is Red Hat OpenShift Data Foundation?
OpenShift Data Foundation is an integrated, software-defined storage solution that delivers scalable, resilient, and cloud-native storage for Kubernetes workloads. Built on Ceph and Rook, ODF supports block, file, and object storage within OpenShift, making it an ideal choice for stateful applications like databases, CI/CD systems, AI/ML pipelines, and analytics engines.
🎯 Why Learn DO370?
The DO370: Red Hat OpenShift Data Foundation course is specifically designed for storage administrators, infrastructure architects, and OpenShift professionals who want to:
✅ Deploy ODF on OpenShift clusters using best practices.
✅ Understand the architecture and internal components of Ceph-based storage.
✅ Manage persistent volumes (PVs), storage classes, and dynamic provisioning.
✅ Monitor, scale, and secure Kubernetes storage environments.
✅ Troubleshoot common storage-related issues in production.
🛠️ Key Features of ODF for Enterprise Workloads
1. Unified Storage (Block, File, Object)
Eliminate silos with a single platform that supports diverse workloads.
2. High Availability & Resilience
ODF is designed for fault tolerance and self-healing, ensuring business continuity.
3. Integrated with OpenShift
Full integration with the OpenShift Console, Operators, and CLI for seamless Day 1 and Day 2 operations.
4. Dynamic Provisioning
Simplifies persistent storage allocation, reducing manual intervention (see the sketch after this list).
5. Multi-Cloud & Hybrid Cloud Ready
Store and manage data across on-prem, public cloud, and edge environments.
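To illustrate the dynamic provisioning point above, a StatefulSet can request storage through volumeClaimTemplates so that each replica gets its own automatically provisioned volume. This is a minimal sketch, assuming the default ODF block storage class name; the workload name, image, and size are placeholders:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db                         # hypothetical name
spec:
  serviceName: demo-db
  replicas: 3
  selector:
    matchLabels:
      app: demo-db
  template:
    metadata:
      labels:
        app: demo-db
    spec:
      containers:
        - name: db
          image: registry.example.com/demo-db:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: ocs-storagecluster-ceph-rbd  # assumed ODF block class
        resources:
          requests:
            storage: 20Gi                               # placeholder size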
📘 What You Will Learn in DO370
Installing and configuring ODF in an OpenShift environment.
Creating and managing storage resources using the OpenShift Console and CLI.
Implementing security and encryption for data at rest.
Monitoring ODF health with Prometheus and Grafana.
Scaling the storage cluster to meet growing demands.
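As a quick complement to the console and Prometheus/Grafana monitoring mentioned above, cluster health can also be spot-checked from the CLI. A minimal sketch, assuming ODF is installed in the default openshift-storage namespace:

oc get storagecluster -n openshift-storage    # overall StorageCluster phase
oc get cephcluster -n openshift-storage       # underlying Ceph cluster health
oc get pods -n openshift-storage              # MON, OSD, MGR and operator pods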
🧠 Real-World Use Cases
Databases: PostgreSQL, MySQL, MongoDB with persistent volumes.
CI/CD: Jenkins with persistent pipelines and storage for artifacts.
AI/ML: Store and manage large datasets for training models.
Kafka & Logging: High-throughput storage for real-time data ingestion.
👨🏫 Who Should Enroll?
This course is ideal for:
Storage Administrators
Kubernetes Engineers
DevOps & SRE teams
Enterprise Architects
OpenShift Administrators aiming to become RHCA in Infrastructure or OpenShift
🚀 Takeaway
If you’re serious about building resilient, performant, and scalable storage for your Kubernetes applications, DO370 is the must-have training. With ODF becoming a core component of modern OpenShift deployments, understanding it deeply positions you as a valuable asset in any hybrid cloud team.
🧭 Ready to transform your Kubernetes storage strategy? Enroll in DO370 and master Red Hat OpenShift Data Foundation today with HawkStack Technologies – your trusted Red Hat Certified Training Partner. For more details www.hawkstack.com
🌐 Cloud Native Storage to Hit $95.8B by 2034 – Here's Why
Cloud Native Storage Market is undergoing a transformative evolution, growing from $14.2 billion in 2024 to a projected $95.8 billion by 2034, at a CAGR of 21%. This remarkable growth is driven by the rising demand for scalable, resilient, and flexible data storage solutions that align with modern cloud-native architectures. Cloud native storage leverages containerization, microservices, and orchestration platforms like Kubernetes to meet the agility and performance needs of today’s enterprises. It plays a crucial role in supporting digital transformation initiatives by enabling automation, dynamic provisioning, and streamlined data management across public, private, and hybrid cloud environments.
Market Dynamics
The market’s expansion is deeply tied to the surge in cloud-native application development, particularly across industries such as finance, healthcare, e-commerce, and telecommunications. Kubernetes-based storage has emerged as the front-runner due to its deep integration with container ecosystems and the seamless experience it offers developers and IT teams. Object storage, known for its efficiency with unstructured data, follows closely behind.
Click to Request a Sample of this Report for Additional Market Insights: https://www.globalinsightservices.com/request-sample/?id=GIS26224
Key drivers include the increasing adoption of DevOps, the shift towards microservices, and the need for rapid data access and scalability. Challenges such as integration with legacy systems, regulatory compliance (like GDPR and CCPA), data security, and a shortage of skilled professionals remain significant. Despite these barriers, the market is witnessing innovation in encryption, AI-driven analytics, and energy-efficient infrastructure, all of which are shaping the future of cloud native storage.
Key Players Analysis
Leading the competitive landscape are tech giants such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, all of whom are continuously enhancing their cloud-native storage portfolios with advanced capabilities like low-latency access, cross-region replication, and intelligent tiering.
In addition, innovative solutions from Portworx, Ceph, MinIO, Diamanti, and Rook are disrupting traditional models by offering container-native storage solutions that are flexible and developer-friendly. Emerging startups such as Storlytics, Sky Vault, and Zephyr Data are also gaining traction by providing nimble, open-source-aligned solutions that focus on edge storage and cost-efficiency.
Regional Analysis
North America remains the largest market, with the United States leading due to its robust IT infrastructure, widespread adoption of cloud technologies, and high levels of investment in digital transformation. Canada is catching up, supported by favorable government policies and a growing tech ecosystem.
In Europe, countries like Germany, the UK, and France are advancing rapidly with strong regulatory frameworks and strategic focus on cloud migration. Asia-Pacific, led by China, India, and Japan, is showing the fastest growth, fueled by the explosion of digital services and government-backed initiatives to bolster cloud infrastructure. Meanwhile, Latin America and the Middle East & Africa are emerging markets with promising growth, especially in nations like Brazil, Mexico, UAE, and South Africa, where cloud adoption is on the rise.
Recent News & Developments
Recent years have seen accelerated investments in containerization and microservices, prompting a shift from monolithic to distributed storage systems. Open-source initiatives like OpenEBS and GlusterFS are gaining popularity, providing budget-friendly alternatives to proprietary solutions.
Furthermore, the rise of edge computing is influencing storage strategies, requiring robust storage architectures closer to the data source. Companies are also becoming increasingly aware of the environmental impact of data storage and are turning to sustainable and energy-efficient storage infrastructures to reduce carbon footprints and align with ESG goals.
Browse Full Report :https://www.globalinsightservices.com/reports/cloud-native-storage-market/
Scope of the Report
This report covers a comprehensive analysis of the Cloud Native Storage Market, spanning historical data from 2018 to 2023 and forecast insights from 2025 to 2034. It delves into various market segments, including type (block, file, object), deployment (public, private, hybrid), applications (DevOps, analytics, backup), and end-user industries (BFSI, healthcare, manufacturing, and more).
The report also evaluates market dynamics such as drivers, restraints, opportunities, and threats, supported by tools like SWOT, PESTLE, and value-chain analysis. Additionally, it profiles leading companies and emerging players, and highlights key trends shaping the future of cloud-native storage solutions.
Discover Additional Market Insights from Global Insight Services:
Commercial Drone Market : https://www.globalinsightservices.com/reports/commercial-drone-market/
Push to Talk Market: https://www.globalinsightservices.com/reports/push-to-talk-market/
Retail Analytics Market : https://www.globalinsightservices.com/reports/retail-analytics-market/
Cloud Based Contact Center Market : https://www.globalinsightservices.com/reports/cloud-based-contact-center-market/
Digital Content Creation Market : https://www.globalinsightservices.com/reports/digital-content-creation-market/
About Us:
Global Insight Services (GIS) is a leading multi-industry market research firm headquartered in Delaware, US. We are committed to providing our clients with highest quality data, analysis, and tools to meet all their market research needs. With GIS, you can be assured of the quality of the deliverables, robust & transparent research methodology, and superior service.
Contact Us:
Global Insight Services LLC 16192, Coastal Highway, Lewes DE 19958 E-mail: [email protected] Phone: +1–833–761–1700 Website: https://www.globalinsightservices.com/
Cloud Native Storage Market Insights: Industry Share, Trends & Future Outlook 2032
The Cloud Native Storage Market was valued at USD 16.19 billion in 2023 and is expected to reach USD 100.09 billion by 2032, growing at a CAGR of 22.5% over the forecast period 2024-2032.
The cloud native storage market is experiencing rapid growth as enterprises shift towards scalable, flexible, and cost-effective storage solutions. The increasing adoption of cloud computing and containerization is driving demand for advanced storage technologies.
The cloud native storage market continues to expand as businesses seek high-performance, secure, and automated data storage solutions. With the rise of hybrid cloud, Kubernetes, and microservices architectures, organizations are investing in cloud native storage to enhance agility and efficiency in data management.
Get Sample Copy of This Report: https://www.snsinsider.com/sample-request/3454
Market Key Players:
Microsoft (Azure Blob Storage, Azure Kubernetes Service (AKS))
IBM (IBM Cloud Object Storage, IBM Spectrum Scale)
AWS (Amazon S3, Amazon EBS (Elastic Block Store))
Google (Google Cloud Storage, Google Kubernetes Engine (GKE))
Alibaba Cloud (Alibaba Object Storage Service (OSS), Alibaba Cloud Container Service for Kubernetes)
VMWare (VMware vSAN, VMware Tanzu Kubernetes Grid)
Huawei (Huawei FusionStorage, Huawei Cloud Object Storage Service)
Citrix (Citrix Hypervisor, Citrix ShareFile)
Tencent Cloud (Tencent Cloud Object Storage (COS), Tencent Kubernetes Engine)
Scality (Scality RING, Scality ARTESCA)
Splunk (Splunk SmartStore, Splunk Enterprise on Kubernetes)
Linbit (LINSTOR, DRBD (Distributed Replicated Block Device))
Rackspace (Rackspace Object Storage, Rackspace Managed Kubernetes)
Robin.io (Robin Cloud Native Storage, Robin Multi-Cluster Automation)
MayaData (OpenEBS, Data Management Platform (DMP))
Diamanti (Diamanti Ultima, Diamanti Spektra)
Minio (MinIO Object Storage, MinIO Kubernetes Operator)
Rook (Rook Ceph, Rook EdgeFS)
Ondat (Ondat Persistent Volumes, Ondat Data Mesh)
Ionir (Ionir Data Services Platform, Ionir Continuous Data Mobility)
Trilio (TrilioVault for Kubernetes, TrilioVault for OpenStack)
Upcloud (UpCloud Object Storage, UpCloud Managed Databases)
Arrikto (Kubeflow Enterprise, Rok (Data Management for Kubernetes))
Market Size, Share, and Scope
The market is witnessing significant expansion across industries such as IT, BFSI, healthcare, retail, and manufacturing.
Hybrid and multi-cloud storage solutions are gaining traction due to their flexibility and cost-effectiveness.
Enterprises are increasingly adopting object storage, file storage, and block storage tailored for cloud native environments.
Key Market Trends Driving Growth
Rise in Cloud Adoption: Organizations are shifting workloads to public, private, and hybrid cloud environments, fueling demand for cloud native storage.
Growing Adoption of Kubernetes: Kubernetes-based storage solutions are becoming essential for managing containerized applications efficiently.
Increased Data Security and Compliance Needs: Businesses are investing in encrypted, resilient, and compliant storage solutions to meet global data protection regulations.
Advancements in AI and Automation: AI-driven storage management and self-healing storage systems are revolutionizing data handling.
Surge in Edge Computing: Cloud native storage is expanding to edge locations, enabling real-time data processing and low-latency operations.
Integration with DevOps and CI/CD Pipelines: Developers and IT teams are leveraging cloud storage automation for seamless software deployment.
Hybrid and Multi-Cloud Strategies: Enterprises are implementing multi-cloud storage architectures to optimize performance and costs.
Increased Use of Object Storage: The scalability and efficiency of object storage are driving its adoption in cloud native environments.
Serverless and API-Driven Storage Solutions: The rise of serverless computing is pushing demand for API-based cloud storage models.
Sustainability and Green Cloud Initiatives: Energy-efficient storage solutions are becoming a key focus for cloud providers and enterprises.
Enquiry of This Report: https://www.snsinsider.com/enquiry/3454
Market Segmentation:
By Component
Solution
Object Storage
Block Storage
File Storage
Container Storage
Others
Services
System Integration & Deployment
Training & Consulting
Support & Maintenance
By Deployment
Private Cloud
Public Cloud
By Enterprise Size
SMEs
Large Enterprises
By End Use
BFSI
Telecom & IT
Healthcare
Retail & Consumer Goods
Manufacturing
Government
Energy & Utilities
Media & Entertainment
Others
Market Growth Analysis
Factors Driving Market Expansion
The growing need for cost-effective and scalable data storage solutions
Adoption of cloud-first strategies by enterprises and governments
Rising investments in data center modernization and digital transformation
Advancements in 5G, IoT, and AI-driven analytics
Industry Forecast 2032: Size, Share & Growth Analysis
The cloud native storage market is projected to grow significantly over the next decade, driven by advancements in distributed storage architectures, AI-enhanced storage management, and increasing enterprise digitalization.
North America leads the market, followed by Europe and Asia-Pacific, with China and India emerging as key growth hubs.
The demand for software-defined storage (SDS), container-native storage, and data resiliency solutions will drive innovation and competition in the market.
Future Prospects and Opportunities
1. Expansion in Emerging Markets
Developing economies are expected to witness increased investment in cloud infrastructure and storage solutions.
2. AI and Machine Learning for Intelligent Storage
AI-powered storage analytics will enhance real-time data optimization and predictive storage management.
3. Blockchain for Secure Cloud Storage
Blockchain-based decentralized storage models will offer improved data security, integrity, and transparency.
4. Hyperconverged Infrastructure (HCI) Growth
Enterprises are adopting HCI solutions that integrate storage, networking, and compute resources.
5. Data Sovereignty and Compliance-Driven Solutions
The demand for region-specific, compliant storage solutions will drive innovation in data governance technologies.
Access Complete Report: https://www.snsinsider.com/reports/cloud-native-storage-market-3454
Conclusion
The cloud native storage market is poised for exponential growth, fueled by technological innovations, security enhancements, and enterprise digital transformation. As businesses embrace cloud, AI, and hybrid storage strategies, the future of cloud native storage will be defined by scalability, automation, and efficiency.
About Us:
SNS Insider is one of the leading market research and consulting agencies that dominates the market research industry globally. Our company's aim is to give clients the knowledge they require in order to function in changing circumstances. In order to give you current, accurate market data, consumer insights, and opinions so that you can make decisions with confidence, we employ a variety of techniques, including surveys, video talks, and focus groups around the world.
Contact Us:
Jagney Dave - Vice President of Client Engagement
Phone: +1-315 636 4242 (US) | +44- 20 3290 5010 (UK)
Ensuring Data Resilience: Top 10 Kubernetes Backup Solutions
In the dynamic landscape of container orchestration, Kubernetes has emerged as a leading platform for managing and deploying containerized applications. As organizations increasingly rely on Kubernetes for their containerized workloads, the need for robust data resilience strategies becomes paramount. One crucial aspect of ensuring data resilience in Kubernetes environments is implementing reliable backup solutions. In this article, we will explore the top 10 Kubernetes backup solutions that organizations can leverage to safeguard their critical data.
1. Velero
Velero, an open-source backup and restore tool, is designed specifically for Kubernetes clusters. It provides snapshot and restore capabilities, allowing users to create backups of their entire cluster or selected namespaces.
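As a taste of the workflow, a namespace-scoped backup and restore with the Velero CLI looks roughly like this; the backup and namespace names are placeholders, and the cluster-side install and backup storage location are assumed to be configured already:

# Create a backup of a single namespace
velero backup create my-app-backup --include-namespaces my-app

# List backups and their status
velero backup get

# Restore from the backup if the namespace is lost
velero restore create --from-backup my-app-backup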
2. Kasten K10
Kasten K10 is a data management platform for Kubernetes that offers backup, disaster recovery, and mobility functionalities. It supports various cloud providers and on-premises deployments, ensuring flexibility for diverse Kubernetes environments.
3. Stash
Stash, another open-source project, focuses on backing up Kubernetes volumes and custom resources. It supports scheduled backups, retention policies, and encryption, providing a comprehensive solution for data protection.
4. TrilioVault
TrilioVault specializes in protecting stateful applications in Kubernetes environments. With features like incremental backups and point-in-time recovery, it ensures that organizations can recover their applications quickly and efficiently.
5. Ark
Heptio Ark, now part of VMware and since renamed Velero, offers a simple and robust solution for Kubernetes backup and recovery. It supports both on-premises and cloud-based storage, providing flexibility for diverse storage architectures.
6. KubeBackup
KubeBackup is a lightweight and easy-to-use backup solution that supports scheduled backups and incremental backups. It is designed to be simple yet effective in ensuring data resilience for Kubernetes applications.
7. Rook
Rook extends Kubernetes to provide a cloud-native storage orchestrator. While not a backup solution per se, it enables the creation of distributed storage systems that can be leveraged for reliable data storage and retrieval.
8. Backupify
Backupify focuses on protecting cloud-native applications, including those running on Kubernetes. It provides automated backups, encryption, and a user-friendly interface for managing backup and recovery processes.
9. StashAway
StashAway is an open-source project that offers both backup and restore capabilities for Kubernetes applications. It supports volume backups, making it a suitable choice for organizations with complex storage requirements.
10. Duplicity
Duplicity, though not Kubernetes-specific, is a versatile backup tool that can be integrated into Kubernetes environments. It supports encryption and incremental backups, providing an additional layer of data protection.
In conclusion, selecting the right Kubernetes backup solution is crucial for ensuring data resilience in containerized environments. The options mentioned here offer a range of features and capabilities, allowing organizations to choose the solution that best fits their specific needs. By incorporating these backup solutions into their Kubernetes strategy, organizations can mitigate risks and ensure the availability and integrity of their critical data.
Top 5 Open Source Kubernetes Storage Solutions - Virtualization Howto
The OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for cluster operations that require data persistence. As a developer, you can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. In this short guide you'll learn how to expand an existing PVC in OpenShift when using OpenShift Container Storage.
Before you can expand persistent volumes, the StorageClass must have the allowVolumeExpansion field set to true. Here is a list of storage classes available in my OpenShift cluster.
$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              false                  169d
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d
I'll change the default storage class, which is ocs-storagecluster-cephfs. Let's export the configuration to a YAML file:
oc get sc ocs-storagecluster-cephfs -o yaml > ocs-storagecluster-cephfs.yml
I'll modify the file to add the allowVolumeExpansion field.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: ocs-storagecluster-cephfs
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true # Added field
Delete the currently configured storage class, since a SC is an immutable resource.
$ oc delete sc ocs-storagecluster-cephfs
storageclass.storage.k8s.io "ocs-storagecluster-cephfs" deleted
Apply the modified storage class configuration by running the following command:
$ oc apply -f ocs-storagecluster-cephfs.yml
storageclass.storage.k8s.io/ocs-storagecluster-cephfs created
List storage classes to confirm it was indeed created.
$ oc get sc
NAME                                  PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                            kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
localfile                             kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  186d
ocs-storagecluster-ceph-rbd           openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  169d
ocs-storagecluster-cephfs (default)   openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   5m20s
openshift-storage.noobaa.io           openshift-storage.noobaa.io/obc         Delete          Immediate              false                  169d
thin                                  kubernetes.io/vsphere-volume            Delete          Immediate              false                  169d
unused                                kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  190d
Output the YAML and confirm the new setting was applied.
$ oc get sc ocs-storagecluster-cephfs -o yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"ocs-storagecluster-cephfs"},"parameters":{"clusterID":"openshift-storage","csi.storage.k8s.io/node-stage-secret-name":"rook-csi-cephfs-node","csi.storage.k8s.io/node-stage-secret-namespace":"openshift-storage","csi.storage.k8s.io/provisioner-secret-name":"rook-csi-cephfs-provisioner","csi.storage.k8s.io/provisioner-secret-namespace":"openshift-storage","fsName":"ocs-storagecluster-cephfilesystem"},"provisioner":"openshift-storage.cephfs.csi.ceph.com","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
    storageclass.kubernetes.io/is-default-class: "true"
  creationTimestamp: "2020-10-31T13:33:56Z"
  name: ocs-storagecluster-cephfs
  resourceVersion: "242503097"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs
  uid: 5aa95d3b-c39c-438d-85af-5c8550d6ed5b
parameters:
  clusterID: openshift-storage
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
  fsName: ocs-storagecluster-cephfilesystem
provisioner: openshift-storage.cephfs.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
How To Expand a PVC in OpenShift
List available PVCs in the namespace.
$ oc get pvc
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
data-harbor-harbor-redis-0                Bound    pvc-e516b793-60c5-431d-955f-b1d57bdb556b   1Gi        RWO            ocs-storagecluster-cephfs   169d
database-data-harbor-harbor-database-0    Bound    pvc-00a53065-9790-4291-8f00-288359c00f6c   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-chartmuseum                 Bound    pvc-405c68de-eecd-4db1-9ca1-5ca97eeab37c   5Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-jobservice                  Bound    pvc-e52f231e-0023-41ad-9aff-98ac53cecb44   2Gi        RWO            ocs-storagecluster-cephfs   169d
harbor-harbor-registry                    Bound    pvc-77e159d4-4059-47dd-9c61-16a6e8b37a14   100Gi      RWX            ocs-storagecluster-cephfs   39d
Edit the PVC and change its capacity:
$ oc edit pvc data-harbor-harbor-redis-0
...
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Delete the pod using the claim.
$ oc delete pods harbor-harbor-redis-0
pod "harbor-harbor-redis-0" deleted
Recreate the deployment that was claiming the storage and it should utilize the new capacity.
Expanding a PVC on the OpenShift Web Console
You can also expand a PVC from the web console. Click on "Expand PVC" and set the desired PVC capacity.
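Instead of editing the PVC interactively as shown above, the same capacity change can be applied non-interactively with a patch. A minimal sketch using the same PVC name from this cluster:

oc patch pvc data-harbor-harbor-redis-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"2Gi"}}}}'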
Rook: Storage Orchestration for Kubernetes
An open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments. https://github.com/rook/rook

Data wrangling for Kubernetes-orchestrated containers is the new storage frontier. Pure Storage’s Portworx, Veeam’s Kasten, Commvault’s Hedvig, Robin.io, StorageOS, SUSE’s Rancher, MayaData’s Mayastor, Rook and many others are building products and services to mine this very rich, new seam.
Kubernetes Persistent Volume Setup with Microk8s Rook and Ceph
Kubernetes persistent volume management is a cornerstone of modern container orchestration. Utilizing persistent storage can lead to more resilient and scalable applications. This guide delves into an experiment using Microk8s, Ceph, and Rook to create a robust storage solution for your Kubernetes cluster.

Enterprise Kubernetes Storage With Red Hat OpenShift Data Foundation
In today’s enterprise IT environments, the adoption of containerized applications has grown exponentially. While Kubernetes simplifies application deployment and orchestration, it poses a unique challenge when it comes to managing persistent data. Stateless workloads may scale with ease, but stateful applications require a robust, scalable, and resilient storage backend — and that’s where Red Hat OpenShift Data Foundation (ODF) plays a critical role.
🌐 Why Enterprise Kubernetes Storage Matters
Kubernetes was originally designed for stateless applications. However, modern enterprise applications — databases, analytics engines, monitoring tools — often need to store data persistently. Enterprises require:
High availability
Scalable performance
Data protection and recovery
Multi-cloud and hybrid-cloud compatibility
Standard storage solutions often fall short in a dynamic, containerized environment. That’s why a storage platform designed for Kubernetes, within Kubernetes, is crucial.
🔧 What is Red Hat OpenShift Data Foundation?
Red Hat OpenShift Data Foundation is a Kubernetes-native, software-defined storage solution integrated with Red Hat OpenShift. It provides:
Block, file, and object storage
Dynamic provisioning for persistent volumes
Built-in data replication, encryption, and disaster recovery
Unified management across hybrid cloud environments
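In practice this surfaces as a set of storage classes on the cluster. A quick way to confirm what is available; the class names in the comment are the usual ODF defaults seen elsewhere on this page and may differ in your cluster:

oc get storageclass
# Typical ODF classes: ocs-storagecluster-ceph-rbd (block),
# ocs-storagecluster-cephfs (file), openshift-storage.noobaa.io (object bucket claims)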
ODF is built on Ceph, a battle-tested distributed storage system, and uses Rook to orchestrate storage on Kubernetes.
Key Capabilities
1. Persistent Storage for Containers
ODF provides dynamic, persistent storage for stateful workloads like PostgreSQL, MongoDB, Kafka, and more, enabling them to run natively on OpenShift.
2. Multi-Access and Multi-Tenancy
Supports file sharing between pods and secure data isolation between applications or business units.
3. Elastic Scalability
Storage scales with compute, ensuring performance and capacity grow as application needs increase.
4. Built-in Data Services
Includes snapshotting, backup and restore, mirroring, and encryption, all critical for enterprise-grade reliability.
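For instance, the snapshot capability is consumed through the standard Kubernetes snapshot API. A minimal sketch, assuming a PVC named db-data and the commonly used ODF RBD snapshot class name (verify the class name in your cluster):

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snapshot                     # hypothetical snapshot name
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass   # assumed default class name
  source:
    persistentVolumeClaimName: db-data       # hypothetical PVC to snapshot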
Integration with OpenShift
ODF integrates seamlessly into the OpenShift Console, offering a native, operator-based deployment model. Storage is provisioned and managed using familiar Kubernetes APIs and Custom Resources, reducing the learning curve for DevOps teams.
🔐 Enterprise Benefits
Operational Consistency: Unified storage and platform management
Security and Compliance: End-to-end encryption and audit logging
Hybrid Cloud Ready: Runs consistently across on-premises, AWS, Azure, or any cloud
Cost Efficiency: Optimize storage usage through intelligent tiering and compression
✅ Use Cases
Running databases in Kubernetes
Storing logs and monitoring data
AI/ML workloads needing high-throughput file storage
Object storage for backups or media repositories
📈 Conclusion
Enterprise Kubernetes storage is no longer optional — it’s essential. As businesses migrate more critical workloads to Kubernetes, solutions like Red Hat OpenShift Data Foundation provide the performance, flexibility, and resilience needed to support stateful applications at scale.
ODF helps bridge the gap between traditional storage models and cloud-native innovation — making it a strategic asset for any organization investing in OpenShift and modern application architectures.
For more info, Kindly follow: Hawkstack Technologies
Enterprise Kubernetes Storage with Red Hat OpenShift Data Foundation (DO370)
As organizations continue their journey into cloud-native and containerized applications, the need for robust, scalable, and persistent storage solutions has never been more critical. Red Hat OpenShift, a leading Kubernetes platform, addresses this need with Red Hat OpenShift Data Foundation (ODF)—an integrated, software-defined storage solution designed specifically for OpenShift environments.
In this blog post, we’ll explore how the DO370 course equips IT professionals to manage enterprise-grade Kubernetes storage using OpenShift Data Foundation.
What is OpenShift Data Foundation?
Red Hat OpenShift Data Foundation (formerly OpenShift Container Storage) is a unified and scalable storage solution built on Ceph, NooBaa, and Rook. It provides:
Block, file, and object storage
Persistent volumes for containers
Data protection, encryption, and replication
Multi-cloud and hybrid cloud support
ODF is deeply integrated with OpenShift, allowing for seamless deployment, management, and scaling of storage resources within Kubernetes workloads.
Why DO370?
The DO370: Enterprise Kubernetes Storage with Red Hat OpenShift Data Foundation course is designed for OpenShift administrators and storage specialists who want to gain hands-on expertise in deploying and managing ODF in enterprise environments.
Key Learning Outcomes:
Understand ODF Architecture: Learn how ODF components work together to provide high availability and performance.
Deploy ODF on OpenShift Clusters: Hands-on labs walk through setting up ODF in a variety of topologies, from internal mode (hyperconverged) to external Ceph clusters.
Provision Persistent Volumes: Use Kubernetes StorageClasses and dynamic provisioning to provide storage for stateful applications (see the example after this list).
Monitor and Troubleshoot Storage Issues: Utilize tools like Prometheus, Grafana, and the OpenShift Console to monitor health and performance.
Data Resiliency and Disaster Recovery: Configure mirroring, replication, and backup for critical workloads.
Manage Multi-cloud Object Storage: Integrate NooBaa for managing object storage across AWS S3, Azure Blob, and more.
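A minimal example of the provisioning workflow referenced above: a shared (RWX) claim against the CephFS storage class, with placeholder name and size:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-assets                 # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany                   # CephFS allows shared access across pods
  resources:
    requests:
      storage: 50Gi                   # placeholder size
  storageClassName: ocs-storagecluster-cephfs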
Enterprise Use Cases for ODF
Stateful Applications: Databases like PostgreSQL, MongoDB, and Cassandra running in OpenShift require reliable persistent storage.
AI/ML Workloads: High throughput and scalable storage for datasets and model checkpoints.
CI/CD Pipelines: Persistent storage for build artifacts, logs, and containers.
Data Protection: Built-in snapshot and backup capabilities for compliance and recovery.
Real-World Benefits
Simplicity: Unified management within OpenShift Console.
Flexibility: Run on-premises, in the cloud, or in hybrid configurations.
Security: Native encryption and role-based access control (RBAC).
Resiliency: Automatic healing and replication for data durability.
Who Should Take DO370?
OpenShift Administrators
Storage Engineers
DevOps Engineers managing persistent workloads
RHCSA/RHCE certified professionals looking to specialize in OpenShift storage
Prerequisite Skills: Familiarity with OpenShift (DO180/DO280) and basic Kubernetes concepts is highly recommended.
Final Thoughts
As containers become the standard for deploying applications, storage is no longer an afterthought—it's a cornerstone of enterprise Kubernetes strategy. Red Hat OpenShift Data Foundation ensures your applications are backed by scalable, secure, and resilient storage.
Whether you're modernizing legacy workloads or building cloud-native applications, DO370 is your gateway to mastering Kubernetes-native storage with Red Hat.
Interested in Learning More?
📘 Join HawkStack Technologies for instructor-led or self-paced training on DO370 and other Red Hat courses.
Visit our website for more details - www.hawkstack.com
The Cloud Native Computing Foundation adds etcd to its open source stable
The Cloud Native Computing Foundation (CNCF), the open source home of projects like Kubernetes and Vitess, today announced that its technical committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.
Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open source projects that use etcd include Cloud Foundry and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”
Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 different projects that fall under its ‘incubated technologies’ umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.
That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone, about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).
Rook 101: Building software-defined containerised storage in Kubernetes
From ComputerWeekly.com: https://ift.tt/2PG8eOy
This article covers in detail the installation and configuration of Rook, and how to integrate a highly available Ceph storage cluster into an existing Kubernetes cluster. I'm performing this process on a recent deployment of Kubernetes on Rocky Linux 8 servers, but it can be used with any other Kubernetes cluster deployed with kubeadm or automation tools such as Kubespray and Rancher.
In the initial days of Kubernetes, most applications deployed were stateless, meaning there was no need for data persistence. However, as Kubernetes became more popular, there was a concern around reliability when scheduling stateful services. Currently, you can use many types of storage volumes, including vSphere Volumes, Ceph, AWS Elastic Block Store, GlusterFS, NFS and GCE Persistent Disk, among many others. This gives us the comfort of running stateful services that require a robust storage backend.
What is Rook / Ceph?
Rook is a free-to-use and powerful cloud-native open source storage orchestrator for Kubernetes. It provides support for a diverse set of storage solutions to natively integrate with cloud-native environments. More details about the storage solutions currently supported by Rook are captured in the project status section.
Ceph is a distributed storage system that provides file, block and object storage and is deployed in large-scale production clusters. Rook will enable us to automate deployment, bootstrapping, configuration, scaling and upgrading of a Ceph cluster within a Kubernetes environment. Ceph is widely used in in-house infrastructure where a managed storage solution is rarely an option. Rook uses Kubernetes primitives to run and manage software-defined storage on Kubernetes.
Key components of the Rook storage orchestrator:
Custom resource definitions (CRDs) – used to create and customize storage clusters. The CRDs are implemented in Kubernetes during its deployment process.
Rook Operator for Ceph – automates the whole configuration of storage components and monitors the cluster to ensure it is healthy and available.
DaemonSet called rook-discover – starts a pod running a discovery agent on every node of your Kubernetes cluster to discover any raw disk devices / partitions that can be used as Ceph OSD disks.
Monitoring – Rook enables the Ceph Dashboard and provides metrics collectors/exporters and monitoring dashboards.
Features of Rook
Rook enables you to provision block, file, and object storage with multiple storage providers
Capability to efficiently distribute and replicate data to minimize potential loss
Rook is designed to manage open-source storage technologies – NFS, Ceph, Cassandra
Rook is open source software released under the Apache 2.0 license
With Rook you can hyper-scale or hyper-converge your storage clusters within a Kubernetes environment
Rook allows system administrators to easily enable elastic storage in your datacenter
By adopting Rook as your storage orchestrator you are able to optimize workloads on commodity hardware
Deploy Rook & Ceph Storage on Kubernetes Cluster
These are the minimal setup requirements for the deployment of Rook and Ceph storage on a Kubernetes cluster:
A cluster with a minimum of three nodes
Available raw disk devices (with no partitions or formatted filesystems)
Or raw partitions (without a formatted filesystem)
Or persistent volumes available from a storage class in block mode
Step 1: Add raw devices/partitions to nodes that will be used by Rook
List all the nodes in your Kubernetes cluster and decide which ones will be used in building the Ceph storage cluster. I recommend you use worker nodes and not the control plane machines.
[root@k8s-bastion ~]# kubectl get nodes
NAME                                STATUS   ROLES                  AGE   VERSION
k8smaster01.hirebestengineers.com   Ready    control-plane,master   28m   v1.22.2
k8smaster02.hirebestengineers.com   Ready    control-plane,master   24m   v1.22.2
k8smaster03.hirebestengineers.com   Ready    control-plane,master   23m   v1.22.2
k8snode01.hirebestengineers.com     Ready                           22m   v1.22.2
k8snode02.hirebestengineers.com     Ready                           21m   v1.22.2
k8snode03.hirebestengineers.com     Ready                           21m   v1.22.2
k8snode04.hirebestengineers.com     Ready                           21m   v1.22.2
In my lab environment, each of the worker nodes will have one raw device – /dev/vdb, which we'll add later.
[root@k8s-worker-01 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0   40G  0 disk
├─vda1 253:1    0    1M  0 part
├─vda2 253:2    0    1G  0 part /boot
├─vda3 253:3    0  615M  0 part
└─vda4 253:4    0 38.4G  0 part /

[root@k8s-worker-01 ~]# free -h
       total   used   free   shared   buff/cache   available
Mem:    15Gi  209Mi   14Gi    8.0Mi        427Mi        14Gi
Swap:  614Mi     0B  614Mi
The following list of nodes will be used to build the storage cluster.
[root@kvm-private-lab ~]# virsh list | grep k8s-worker
 31   k8s-worker-01-server   running
 36   k8s-worker-02-server   running
 38   k8s-worker-03-server   running
 41   k8s-worker-04-server   running
Add secondary storage to each node
If using the KVM hypervisor, start by listing storage pools:
$ sudo virsh pool-list
 Name     State    Autostart
------------------------------
 images   active   yes
I'll add a 40GB volume on the default storage pool. This can be done with a for loop:
for domain in k8s-worker-0{1..4}-server; do
  sudo virsh vol-create-as images $domain-disk-2.qcow2 40G
done
Command execution output:
Vol k8s-worker-01-server-disk-2.qcow2 created
Vol k8s-worker-02-server-disk-2.qcow2 created
Vol k8s-worker-03-server-disk-2.qcow2 created
Vol k8s-worker-04-server-disk-2.qcow2 created
You can check image details, including size, using the qemu-img command:
$ qemu-img info /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2
image: /var/lib/libvirt/images/k8s-worker-01-server-disk-2.qcow2
file format: raw
virtual size: 40 GiB (42949672960 bytes)
disk size: 40 GiB
To attach the created volume(s) above to the Virtual Machine, run:
for domain in k8s-worker-0{1..4}-server; do
  sudo virsh attach-disk --domain $domain \
    --source /var/lib/libvirt/images/$domain-disk-2.qcow2 \
    --persistent --target vdb
done
--persistent: Make live change persistent
--target vdb: Target of a disk device
Confirm the attach is successful:
Disk attached successfully
Disk attached successfully
Disk attached successfully
Disk attached successfully
You can confirm that the volume was added to the VM as a block device /dev/vdb:
[root@k8s-worker-01 ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    253:0    0   40G  0 disk
├─vda1 253:1    0    1M  0 part
├─vda2 253:2    0    1G  0 part /boot
├─vda3 253:3    0  615M  0 part
└─vda4 253:4    0 38.4G  0 part /
vdb    253:16   0   40G  0 disk
Step 2: Deploy Rook Storage Orchestrator
Clone the rook project from GitHub using the git command. This should be done on a machine with kubeconfig configured and confirmed to be working. You can also clone a specific Rook branch, as in a release tag, for example:
cd ~/
git clone --single-branch --branch release-1.8 https://github.com/rook/rook.git
All nodes with available raw devices will be used for the Ceph cluster. As stated earlier, at least three nodes are required.
cd rook/deploy/examples/
Deploy the Rook Operator
The first step when deploying the Rook operator is to create the CRDs and common resources it needs from the example manifests.
Create required CRDs as specified in the crds.yaml manifest:
[root@k8s-bastion ceph]# kubectl create -f crds.yaml
customresourcedefinition.apiextensions.k8s.io/cephblockpools.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclients.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephclusters.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystemmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephfilesystems.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephnfses.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectrealms.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstores.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectstoreusers.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzonegroups.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephobjectzones.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/cephrbdmirrors.ceph.rook.io created
customresourcedefinition.apiextensions.k8s.io/objectbucketclaims.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/objectbuckets.objectbucket.io created
customresourcedefinition.apiextensions.k8s.io/volumereplicationclasses.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumereplications.replication.storage.openshift.io created
customresourcedefinition.apiextensions.k8s.io/volumes.rook.io created
Create common resources as in the common.yaml file:
[root@k8s-bastion ceph]# kubectl create -f common.yaml
namespace/rook-ceph created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-admission-controller created
clusterrole.rbac.authorization.k8s.io/rook-ceph-admission-controller-role created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-admission-controller-rolebinding created
clusterrole.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
clusterrole.rbac.authorization.k8s.io/rook-ceph-system created
role.rbac.authorization.k8s.io/rook-ceph-system created
clusterrole.rbac.authorization.k8s.io/rook-ceph-global created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrole.rbac.authorization.k8s.io/rook-ceph-object-bucket created
serviceaccount/rook-ceph-system created
rolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-global created
serviceaccount/rook-ceph-osd created
serviceaccount/rook-ceph-mgr created
serviceaccount/rook-ceph-cmd-reporter created
role.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-osd created
clusterrole.rbac.authorization.k8s.io/rook-ceph-mgr-system created
role.rbac.authorization.k8s.io/rook-ceph-mgr created
role.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cluster-mgmt created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-system created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-cluster created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/00-rook-privileged created
clusterrole.rbac.authorization.k8s.io/psp:rook created
clusterrolebinding.rbac.authorization.k8s.io/rook-ceph-system-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-default-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-osd-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-mgr-psp created
rolebinding.rbac.authorization.k8s.io/rook-ceph-cmd-reporter-psp created
serviceaccount/rook-csi-cephfs-plugin-sa created
serviceaccount/rook-csi-cephfs-provisioner-sa created
role.rbac.authorization.k8s.io/cephfs-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/cephfs-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-cephfs-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-csi-provisioner-role created
serviceaccount/rook-csi-rbd-plugin-sa created
serviceaccount/rook-csi-rbd-provisioner-sa created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-plugin-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rook-csi-rbd-provisioner-sa-psp created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rook-ceph-purge-osd created
rolebinding.rbac.authorization.k8s.io/rook-ceph-purge-osd created
serviceaccount/rook-ceph-purge-osd created
Finally, deploy the Rook Ceph operator from the operator.yaml manifest file:
[root@k8s-bastion ceph]# kubectl create -f operator.yaml
configmap/rook-ceph-operator-config created
deployment.apps/rook-ceph-operator created
After a few seconds the Rook components should be up and running, as seen below:
[root@k8s-bastion ceph]# kubectl get all -n rook-ceph
NAME                                     READY   STATUS    RESTARTS   AGE
pod/rook-ceph-operator-9bf8b5959-nz6hd   1/1     Running   0          45s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/rook-ceph-operator   1/1     1            1           45s

NAME                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/rook-ceph-operator-9bf8b5959   1         1         1       45s
Verify the rook-ceph-operator is in the Running state before proceeding:
[root@k8s-bastion ceph]# kubectl -n rook-ceph get pod
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-76dc868c4b-zk2tj   1/1     Running   0          69s
Step 3: Create a Ceph Storage Cluster on Kubernetes using Rook
Now that we have prepared worker nodes by adding raw disk devices and deployed the Rook operator, it is time to deploy the Ceph storage cluster.
Let's set the default namespace to rook-ceph:
# kubectl config set-context --current --namespace rook-ceph
Context "kubernetes-admin@kubernetes" modified.
Considering that Rook Ceph clusters can discover raw partitions by themselves, it is okay to use the default cluster deployment manifest file without any modifications.
[root@k8s-bastion ceph]# kubectl create -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
For any further customizations on the Ceph cluster, check the Ceph Cluster CRD documentation.
When not using all the nodes, you can explicitly define the nodes and raw devices to be used, as seen in the example below:
storage: # cluster level storage configuration and selection
  useAllNodes: false
  useAllDevices: false
  nodes:
    - name: "k8snode01.hirebestengineers.com"
      devices: # specific devices to use for storage can be specified for each node
        - name: "sdb"
    - name: "k8snode03.hirebestengineers.com"
      devices:
        - name: "sdb"
To view all the resources created, run the following command:
kubectl get all -n rook-ceph
To watch Pod creation in the rook-ceph namespace:
[root@k8s-bastion ceph]# kubectl get pods -n rook-ceph -w
This is a list of Pods running in the namespace after a successful deployment:
[root@k8s-bastion ceph]# kubectl get pods -n rook-ceph
NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-8vrgj                                            3/3     Running     0          5m39s
csi-cephfsplugin-9csbp                                            3/3     Running     0          5m39s
csi-cephfsplugin-lh42b                                            3/3     Running     0          5m39s
csi-cephfsplugin-provisioner-b54db7d9b-kh89q                      6/6     Running     0          5m39s
csi-cephfsplugin-provisioner-b54db7d9b-l92gm                      6/6     Running     0          5m39s
csi-cephfsplugin-xc8tk                                            3/3     Running     0          5m39s
csi-rbdplugin-28th4                                               3/3     Running     0          5m41s
csi-rbdplugin-76bhw                                               3/3     Running     0          5m41s
csi-rbdplugin-7ll7w                                               3/3     Running     0          5m41s
csi-rbdplugin-provisioner-5845579d68-5rt4x                        6/6     Running     0          5m40s
csi-rbdplugin-provisioner-5845579d68-p6m7r                        6/6     Running     0          5m40s
csi-rbdplugin-tjlsk                                               3/3     Running     0          5m41s
rook-ceph-crashcollector-k8snode01.hirebestengineers.com-7ll2x6   1/1     Running     0          3m3s
rook-ceph-crashcollector-k8snode02.hirebestengineers.com-8ghnq9   1/1     Running     0          2m40s
rook-ceph-crashcollector-k8snode03.hirebestengineers.com-7t88qp   1/1     Running     0          3m14s
rook-ceph-crashcollector-k8snode04.hirebestengineers.com-62n95v   1/1     Running     0          3m14s
rook-ceph-mgr-a-7cf9865b64-nbcxs                                  1/1     Running     0          3m17s
rook-ceph-mon-a-555c899765-84t2n                                  1/1     Running     0          5m47s
rook-ceph-mon-b-6bbd666b56-lj44v                                  1/1     Running     0          4m2s
rook-ceph-mon-c-854c6d56-dpzgc                                    1/1     Running     0          3m28s
rook-ceph-operator-9bf8b5959-nz6hd                                1/1     Running     0          13m
rook-ceph-osd-0-5b7875db98-t5mdv                                  1/1     Running     0          3m6s
rook-ceph-osd-1-677c4cd89-b5rq2                                   1/1     Running     0          3m5s
rook-ceph-osd-2-6665bc998f-9ck2f                                  1/1     Running     0          3m3s
rook-ceph-osd-3-75d7b47647-7vfm4                                  1/1     Running     0          2m40s
rook-ceph-osd-prepare-k8snode01.hirebestengineers.com--1-6kbkn    0/1     Completed   0          3m14s
rook-ceph-osd-prepare-k8snode02.hirebestengineers.com--1-5hz49    0/1     Completed   0          3m14s
rook-ceph-osd-prepare-k8snode03.hirebestengineers.com--1-4b45z    0/1     Completed   0          3m14s
rook-ceph-osd-prepare-k8snode04.hirebestengineers.com--1-4q8cs    0/1     Completed   0          3m14s
Each worker node will have a Job to add OSDs into the Ceph cluster:
[root@k8s-bastion ceph]# kubectl get -n rook-ceph jobs.batch
NAME                                                    COMPLETIONS   DURATION   AGE
rook-ceph-osd-prepare-k8snode01.hirebestengineers.com   1/1           11s        3m46s
rook-ceph-osd-prepare-k8snode02.hirebestengineers.com   1/1           34s        3m46s
rook-ceph-osd-prepare-k8snode03.hirebestengineers.com   1/1           10s        3m46s
rook-ceph-osd-prepare-k8snode04.hirebestengineers.com   1/1           9s         3m46s
[root@k8s-bastion ceph]# kubectl describe jobs.batch rook-ceph-osd-prepare-k8snode01.hirebestengineers.com
Verify that the cluster CR has been created and is active:
[root@k8s-bastion ceph]# kubectl -n rook-ceph get cephcluster
NAME        DATADIRHOSTPATH   MONCOUNT   AGE     PHASE   MESSAGE                        HEALTH      EXTERNAL
rook-ceph   /var/lib/rook     3          3m50s   Ready   Cluster created successfully   HEALTH_OK
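If you are scripting the deployment, you can also block until the CephCluster reports the Ready phase instead of polling by hand. The one-liner below is a minimal sketch; it assumes kubectl v1.23 or newer (which added jsonpath support to kubectl wait), and the timeout should be adjusted to your hardware.

# Wait until the CephCluster phase becomes Ready (requires kubectl >= 1.23)
kubectl -n rook-ceph wait cephcluster/rook-ceph \
  --for=jsonpath='{.status.phase}'=Ready --timeout=15m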
Step 4: Deploy Rook Ceph toolbox in Kubernetes
The Rook Ceph toolbox is a container with common tools used for Rook debugging and testing. The toolbox is based on CentOS, and any additional tools can be easily installed via yum.
We will start a toolbox pod in interactive mode so we can connect and execute Ceph commands from a shell.
Change to the examples directory:
cd ~/
cd rook/deploy/examples
Apply the toolbox.yaml manifest file to create the toolbox pod:
[root@k8s-bastion ceph]# kubectl apply -f toolbox.yaml
deployment.apps/rook-ceph-tools created
Connect to the pod using the kubectl exec command:
[root@k8s-bastion ~]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-96c99fbf-qb9cj /]#
Check the Ceph storage cluster status. Pay close attention to the value of cluster.health; it should be HEALTH_OK.
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph status
  cluster:
    id:     470b7cde-7355-4550-bdd2-0b79d736b8ac
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum a,b,c (age 5m)
    mgr: a(active, since 4m)
    osd: 4 osds: 4 up (since 4m), 4 in (since 5m)
  data:
    pools:   1 pools, 128 pgs
    objects: 0 objects, 0 B
    usage:   25 MiB used, 160 GiB / 160 GiB avail
    pgs:     128 active+clean
List all OSDs to check their current status. They should exist and be up.
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd status
ID  HOST                              USED   AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  k8snode04.hirebestengineers.com   6776k  39.9G      0        0       0        0   exists,up
 1  k8snode03.hirebestengineers.com   6264k  39.9G      0        0       0        0   exists,up
 2  k8snode01.hirebestengineers.com   6836k  39.9G      0        0       0        0   exists,up
 3  k8snode02.hirebestengineers.com   6708k  39.9G      0        0       0        0   exists,up
Check raw storage and pools:
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph df
--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    160 GiB  160 GiB  271 MiB  271 MiB   0.17
TOTAL  160 GiB  160 GiB  271 MiB  271 MiB   0.17
--- POOLS ---
POOL                   ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics  1   32   0 B      0        0 B      0      51 GiB
replicapool            3   32   35 B     8        24 KiB   0      51 GiB
k8fs-metadata          8   128  91 KiB   24       372 KiB  0      51 GiB
k8fs-data0             9   32   0 B      0        0 B      0      51 GiB
[root@rook-ceph-tools-96c99fbf-qb9cj /]# rados df
POOL_NAME              USED     OBJECTS  CLONES  COPIES  MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS  RD       WR_OPS  WR       USED COMPR  UNDER COMPR
device_health_metrics  0 B      0        0       0       0                   0        0         0       0 B      0       0 B      0 B         0 B
k8fs-data0             0 B      0        0       0       0                   0        0         1       1 KiB    2       1 KiB    0 B         0 B
k8fs-metadata          372 KiB  24       0       72      0                   0        0         351347  172 MiB  17      26 KiB   0 B         0 B
replicapool            24 KiB   8        0       24      0                   0        0         999     6.9 MiB  1270    167 MiB  0 B         0 B
total_objects    32
total_used       271 MiB
total_avail      160 GiB
total_space      160 GiB
Step 5: Working with Ceph Cluster Storage Modes
You have three types of storage exposed by Rook:
Shared Filesystem: Create a filesystem to be shared across multiple pods (RWX)
Block: Create block storage to be consumed by a pod (RWO)
Object: Create an object store that is accessible inside or outside the Kubernetes cluster
All the necessary manifest files for these storage modes are available in the rook/deploy/examples/ directory.
cd ~/
cd rook/deploy/examples
1. Cephfs
CephFS is used to enable a shared filesystem which can be mounted with read/write permissions from multiple pods.
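To make the read/write sharing concrete, the sketch below shows one RWX claim mounted by two replicas of the same Deployment. It assumes the rook-cephfs StorageClass that is created in the steps that follow; the names shared-data and shared-web as well as the nginx image are illustrative only.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data            # illustrative name
spec:
  accessModes:
    - ReadWriteMany            # RWX: several pods mount the same volume
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shared-web             # illustrative name
spec:
  replicas: 2                  # both replicas mount the same CephFS volume
  selector:
    matchLabels:
      app: shared-web
  template:
    metadata:
      labels:
        app: shared-web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          volumeMounts:
            - name: content
              mountPath: /usr/share/nginx/html
      volumes:
        - name: content
          persistentVolumeClaim:
            claimName: shared-data
EOF

Both replicas see the same files under /usr/share/nginx/html, which is exactly what the block storage (RWO) mode in the next section cannot offer.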
Update the filesystem.yaml file by setting the data pool name, replication size, etc.
[root@k8s-bastion ceph]# vim filesystem.yaml
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: k8sfs
  namespace: rook-ceph # namespace:cluster
Once done with the modifications, let the Rook operator create all the pools and other resources necessary to start the service:
[root@k8s-bastion ceph]# kubectl create -f filesystem.yaml
cephfilesystem.ceph.rook.io/k8sfs created
Access the Rook toolbox pod and confirm that the metadata and data pools have been created.
[root@k8s-bastion ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph fs ls
name: k8sfs, metadata pool: k8sfs-metadata, data pools: [k8sfs-data0 ]
[root@rook-ceph-tools-96c99fbf-qb9cj /]# ceph osd lspools
1 device_health_metrics
3 replicapool
8 k8fs-metadata
9 k8fs-data0
[root@rook-ceph-tools-96c99fbf-qb9cj /]# exit
Update the fsName and pool name in the CephFS StorageClass configuration file:
$ vim csi/cephfs/storageclass.yaml
parameters:
  clusterID: rook-ceph # namespace:cluster
  fsName: k8sfs
  pool: k8fs-data0
Create the StorageClass using the command:
[root@k8s-bastion csi]# kubectl create -f csi/cephfs/storageclass.yaml
storageclass.storage.k8s.io/rook-cephfs created
List the available storage classes in your Kubernetes cluster:
[root@k8s-bastion csi]# kubectl get sc
NAME          PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-cephfs   rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   97s
Create a test PVC and Pod to verify usage of the Persistent Volume.
[root@k8s-bastion csi]# kubectl create -f csi/cephfs/pvc.yaml
persistentvolumeclaim/cephfs-pvc created
[root@k8s-bastion ceph]# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cephfs-pvc   Bound    pvc-fd024cc0-dcc3-4a1d-978b-a166a2f65cdb   1Gi        RWO            rook-cephfs    4m42s
[root@k8s-bastion csi]# kubectl create -f csi/cephfs/pod.yaml
pod/csicephfs-demo-pod created
PVC creation manifest file contents:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs
Check the PV creation logs as captured by the provisioner pod:
[root@k8s-bastion csi]# kubectl logs deploy/csi-cephfsplugin-provisioner -f -c csi-provisioner
[root@k8s-bastion ceph]# kubectl get pods | grep csi-cephfsplugin-provision
csi-cephfsplugin-provisioner-b54db7d9b-5dpt6   6/6     Running   0          4m30s
csi-cephfsplugin-provisioner-b54db7d9b-wrbxh   6/6     Running   0          4m30s
If you made an update and the provisioner didn't pick it up, you can always restart the CephFS provisioner pods:
# Gracefully
$ kubectl delete pod -l app=csi-cephfsplugin-provisioner
# Forcefully
$ kubectl delete pod -l app=csi-cephfsplugin-provisioner --grace-period=0 --force
2. RBD
Block storage allows a single pod to mount storage (RWO mode).
Before Rook can provision storage, a StorageClass and a CephBlockPool need to be created:
[root@k8s-bastion ~]# cd
[root@k8s-bastion ~]# cd rook/deploy/examples
[root@k8s-bastion csi]# kubectl create -f csi/rbd/storageclass.yaml
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created
[root@k8s-bastion csi]# kubectl create -f csi/rbd/pvc.yaml
persistentvolumeclaim/rbd-pvc created
List StorageClasses and PVCs:
[root@k8s-bastion csi]# kubectl get sc
NAME              PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   49s
rook-cephfs       rook-ceph.cephfs.csi.ceph.com   Delete          Immediate           true                   6h17m
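The examples directory also ships a csi/rbd/pod.yaml that mounts this claim (it is referenced again in the cleanup step later). A minimal equivalent is sketched below; the pod name and busybox image are illustrative, while the claim name rbd-pvc matches the PVC created above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: rbd-demo-pod            # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data      # RWO block volume mounted by this single pod
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc      # PVC created from csi/rbd/pvc.yaml above
EOF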
[root@k8s-bastion csi]# kubectl get pvc rbd-pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
rbd-pvc   Bound    pvc-c093e6f7-bb4e-48df-84a7-5fa99fe81138   1Gi        RWO            rook-ceph-block   43s
Deploying multiple apps
We will create a sample application to consume the block storage provisioned by Rook with the classic WordPress and MySQL apps. Both of these apps will make use of block volumes provisioned by Rook.
[root@k8s-bastion ~]# cd
[root@k8s-bastion ~]# cd rook/deploy/examples
[root@k8s-bastion kubernetes]# kubectl create -f mysql.yaml
service/wordpress-mysql created
persistentvolumeclaim/mysql-pv-claim created
deployment.apps/wordpress-mysql created
[root@k8s-bastion kubernetes]# kubectl create -f wordpress.yaml
service/wordpress created
persistentvolumeclaim/wp-pv-claim created
deployment.apps/wordpress created
Both of these apps create a block volume and mount it to their respective pod. You can see the Kubernetes volume claims by running the following:
[root@k8smaster01 kubernetes]# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
cephfs-pvc       Bound    pvc-aa972f9d-ab53-45f6-84c1-35a192339d2e   1Gi        RWO            rook-cephfs       2m59s
mysql-pv-claim   Bound    pvc-4f1e541a-1d7c-49b3-93ef-f50e74145057   20Gi       RWO            rook-ceph-block   10s
rbd-pvc          Bound    pvc-68e680c1-762e-4435-bbfe-964a4057094a   1Gi        RWO            rook-ceph-block   47s
wp-pv-claim      Bound    pvc-fe2239a5-26c0-4ebc-be50-79dc8e33dc6b   20Gi       RWO            rook-ceph-block   5s
Check the deployment of the MySQL and WordPress services:
[root@k8s-bastion kubernetes]# kubectl get deploy wordpress wordpress-mysql
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
wordpress         1/1     1            1           2m46s
wordpress-mysql   1/1     1            1           3m8s
[root@k8s-bastion kubernetes]# kubectl get svc wordpress wordpress-mysql
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
wordpress         LoadBalancer   10.98.120.112                 80:32046/TCP   3m39s
wordpress-mysql   ClusterIP      None                          3306/TCP       4m1s
Retrieve the WordPress service NodePort and test the URL using the LB IP address and the port.
NodePort=$(kubectl get service wordpress -o jsonpath='{.spec.ports[0].nodePort}')
echo $NodePort
Clean up the storage test PVCs and pods:
[root@k8s-bastion kubernetes]# kubectl delete -f mysql.yaml
service "wordpress-mysql" deleted
persistentvolumeclaim "mysql-pv-claim" deleted
deployment.apps "wordpress-mysql" deleted
[root@k8s-bastion kubernetes]# kubectl delete -f wordpress.yaml
service "wordpress" deleted
persistentvolumeclaim "wp-pv-claim" deleted
deployment.apps "wordpress" deleted
# Cephfs cleanup
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pod.yaml
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/cephfs/pvc.yaml
# RBD Cleanup
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pod.yaml
[root@k8s-bastion kubernetes]# kubectl delete -f ceph/csi/rbd/pvc.yaml
Step 6: Accessing Ceph Dashboard
The Ceph dashboard gives you an overview of the status of your Ceph cluster:
The overall health
The status of the mon quorum
The status of the mgr and osds
The status of other Ceph daemons
View of pools and PG status
Logs for the daemons, and much more.
List the services in the rook-ceph namespace:
[root@k8s-bastion ceph]# kubectl get svc -n rook-ceph
NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
csi-cephfsplugin-metrics   ClusterIP   10.105.10.255                  8080/TCP,8081/TCP   9m56s
csi-rbdplugin-metrics      ClusterIP   10.96.5.0                      8080/TCP,8081/TCP   9m57s
rook-ceph-mgr              ClusterIP   10.103.171.189                 9283/TCP            7m31s
rook-ceph-mgr-dashboard    ClusterIP   10.102.140.148                 8443/TCP            7m31s
rook-ceph-mon-a            ClusterIP   10.102.120.254                 6789/TCP,3300/TCP   10m
rook-ceph-mon-b            ClusterIP   10.97.249.82                   6789/TCP,3300/TCP   8m19s
rook-ceph-mon-c            ClusterIP   10.99.131.50                   6789/TCP,3300/TCP   7m46s
From the output we can confirm that port 8443 was configured.
Use port forwarding to access the dashboard:
$ kubectl port-forward service/rook-ceph-mgr-dashboard 8443:8443 -n rook-ceph
Forwarding from 127.0.0.1:8443 -> 8443
Forwarding from [::1]:8443 -> 8443
The dashboard should now be accessible at https://localhost:8443
The login username is admin and the password can be extracted using the following command:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
Access Dashboard with Node Port
To create a service with a NodePort, save this YAML as dashboard-external-https.yaml:
# cd
# vim dashboard-external-https.yaml
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard-external-https
  namespace: rook-ceph
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
spec:
  ports:
    - name: dashboard
      port: 8443
      protocol: TCP
      targetPort: 8443
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
Create the service that listens on a NodePort:
[root@k8s-bastion ~]# kubectl create -f dashboard-external-https.yaml
service/rook-ceph-mgr-dashboard-external-https created
Check the newly created service:
[root@k8s-bastion ~]# kubectl -n rook-ceph get service rook-ceph-mgr-dashboard-external-https
NAME                                      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
rook-ceph-mgr-dashboard-external-https    NodePort   10.103.91.41                 8443:32573/TCP   2m43s
In this example, port 32573 will be opened to expose port 8443 from the ceph-mgr pod. Now you can enter a URL in your browser such as https://[clusternodeip]:32573 and the dashboard will appear.
Log in with the admin username and the password decoded from the rook-ceph-dashboard-password secret:
kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo
Ceph dashboard view:
Hosts list:
Bonus: Tearing Down the Ceph Cluster
If you want to tear down the cluster and bring up a new one, be aware of the following resources that will need to be cleaned up:
rook-ceph namespace: the Rook operator and cluster created by operator.yaml and cluster.yaml (the cluster CRD)
/var/lib/rook: the path on each host in the cluster where configuration is cached by the Ceph mons and OSDs
All CRDs in the cluster.
[root@k8s-bastion ~]# kubectl get crds
NAME                                          CREATED AT
apiservers.operator.tigera.io                 2021-09-24T18:09:12Z
bgpconfigurations.crd.projectcalico.org       2021-09-24T18:09:12Z
bgppeers.crd.projectcalico.org                2021-09-24T18:09:12Z
blockaffinities.crd.projectcalico.org         2021-09-24T18:09:12Z
cephclusters.ceph.rook.io                     2021-09-30T20:32:10Z
clusterinformations.crd.projectcalico.org     2021-09-24T18:09:12Z
felixconfigurations.crd.projectcalico.org     2021-09-24T18:09:12Z
globalnetworkpolicies.crd.projectcalico.org   2021-09-24T18:09:12Z
globalnetworksets.crd.projectcalico.org       2021-09-24T18:09:12Z
hostendpoints.crd.projectcalico.org           2021-09-24T18:09:12Z
imagesets.operator.tigera.io                  2021-09-24T18:09:12Z
installations.operator.tigera.io              2021-09-24T18:09:12Z
ipamblocks.crd.projectcalico.org              2021-09-24T18:09:12Z
ipamconfigs.crd.projectcalico.org             2021-09-24T18:09:12Z
ipamhandles.crd.projectcalico.org             2021-09-24T18:09:12Z
ippools.crd.projectcalico.org                 2021-09-24T18:09:12Z
kubecontrollersconfigurations.crd.projectcalico.org   2021-09-24T18:09:12Z
networkpolicies.crd.projectcalico.org                 2021-09-24T18:09:12Z
networksets.crd.projectcalico.org                     2021-09-24T18:09:12Z
tigerastatuses.operator.tigera.io                     2021-09-24T18:09:12Z
Edit the CephCluster and add the cleanupPolicy:
kubectl -n rook-ceph patch cephcluster rook-ceph --type merge -p '{"spec":{"cleanupPolicy":{"confirmation":"yes-really-destroy-data"}}}'
Delete the block storage and file storage:
cd ~/
cd rook/deploy/examples
kubectl delete -n rook-ceph cephblockpool replicapool
kubectl delete -f csi/rbd/storageclass.yaml
kubectl delete -f filesystem.yaml
kubectl delete -f csi/cephfs/storageclass.yaml
Delete the CephCluster Custom Resource:
[root@k8s-bastion ~]# kubectl -n rook-ceph delete cephcluster rook-ceph
cephcluster.ceph.rook.io "rook-ceph" deleted
Verify that the cluster CR has been deleted before continuing to the next step.
kubectl -n rook-ceph get cephcluster
Delete the operator and related resources:
kubectl delete -f operator.yaml
kubectl delete -f common.yaml
kubectl delete -f crds.yaml
Zapping Devices
# Set the raw disk / raw partition path
DISK="/dev/vdb"
# Zap the disk to a fresh, usable state (zap-all is important, b/c MBR has to be clean)
# Install gdisk first: yum install gdisk -y  (or: apt install gdisk)
sgdisk --zap-all $DISK
# Clean hdds with dd
dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
# Clean disks such as ssd with blkdiscard instead of dd
blkdiscard $DISK
# These steps only have to be run once on each node
# If rook sets up osds using ceph-volume, teardown leaves some devices mapped that lock the disks.
ls /dev/mapper/ceph-* | xargs -I% -- dmsetup remove %
# ceph-volume setup can leave ceph- directories in /dev and /dev/mapper (unnecessary clutter)
rm -rf /dev/ceph-*
rm -rf /dev/mapper/ceph--*
# Inform the OS of partition table changes
partprobe $DISK
Removing the Cluster CRD Finalizer:
for CRD in $(kubectl get crd -n rook-ceph | awk '/ceph.rook.io/ {print $1}'); do
  kubectl get -n rook-ceph "$CRD" -o name | \
    xargs -I {} kubectl patch -n rook-ceph {} --type merge -p '{"metadata":{"finalizers": [null]}}'
done
If the namespace is still stuck in the Terminating state, as seen in the output below:
$ kubectl get ns rook-ceph
NAME        STATUS        AGE
rook-ceph   Terminating   23h
you can check which resources are holding up the deletion, remove their finalizers, and delete those resources.
kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph
From my output, the resource is a configmap named rook-ceph-mon-endpoints:
NAME                                DATA   AGE
configmap/rook-ceph-mon-endpoints   4      23h
Delete the resource manually:
# kubectl delete configmap/rook-ceph-mon-endpoints -n rook-ceph
configmap "rook-ceph-mon-endpoints" deleted
Recommended reading:
Rook Best Practices for Running Ceph on Kubernetes
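As a final sanity check after the teardown, it helps to confirm that nothing was left behind. The commands below are a minimal sketch: the grep pattern covers the CRD API groups used in this guide, and the last check has to be run on every worker node.

kubectl get ns rook-ceph                                   # should eventually return NotFound
kubectl get crd | grep -E 'rook\.io|objectbucket\.io' || echo "No Rook CRDs left"
# On every worker node:
ls /var/lib/rook 2>/dev/null || echo "/var/lib/rook removed"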