#cephFS
Text
Architecture Overview and Deployment of OpenShift Data Foundation Using Internal Mode
Introduction
OpenShift Data Foundation (ODF), formerly known as OpenShift Container Storage (OCS), is Red Hat’s unified and software-defined storage solution for OpenShift environments. It enables persistent storage for containers, integrated backup and disaster recovery, and multicloud data management.
One of the most common deployment methods for ODF is Internal Mode, where the storage devices are hosted within the OpenShift cluster itself — ideal for small to medium-scale deployments.
Architecture Overview: Internal Mode
In Internal Mode, OpenShift Data Foundation relies on Ceph — a highly scalable storage system — and utilizes three core components:
Rook Operator: Handles deployment and lifecycle management of Ceph clusters inside Kubernetes.
Ceph Cluster (MON, OSD, MGR, etc.): Provides object, block, and file storage using the available storage devices on OpenShift nodes.
NooBaa: Manages object storage interfaces (S3-compatible) and acts as a data abstraction layer for multicloud object storage.
Core Storage Layers:
Object Storage Daemons (OSDs): Store actual data and replicate across nodes for redundancy.
Monitor (MON): Ensures consistency and cluster health.
Manager (MGR): Provides metrics, dashboard, and cluster management.
📦 Key Benefits of Internal Mode
No need for external storage infrastructure.
Faster to deploy and manage via OpenShift Console.
Built-in replication and self-healing mechanisms.
Ideal for lab environments, edge, or dev/test clusters.
🚀 Deployment Prerequisites
OpenShift 4.10+ cluster with a minimum of 3 worker nodes, each with:
At least 16 CPU cores and 64 GB RAM.
At least one unused raw block device (no partitions or file systems).
Internet connectivity or local OperatorHub mirror.
Persistent worker node roles (not shared with infra/control plane).
🔧 Steps to Deploy ODF in Internal Mode
1. Install ODF Operator
Go to OperatorHub in the OpenShift Console.
Search and install OpenShift Data Foundation Operator in the appropriate namespace.
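If you prefer to drive the installation from the CLI rather than the console, the operator can also be installed declaratively. The sketch below is only an illustration: the channel name, catalog source, and namespace conventions are assumptions and should be checked against the OperatorHub entry in your cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-storage
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-storage
spec:
  targetNamespaces:
    - openshift-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: odf-operator
  namespace: openshift-storage
spec:
  channel: stable-4.12          # assumed channel; match your ODF version
  name: odf-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

$ oc apply -f odf-operator-install.yaml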
2. Create StorageCluster
Use the ODF Console to create a new StorageCluster.
Select Internal Mode.
Choose eligible nodes and raw devices.
Validate and apply.
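Behind the scenes the console generates a StorageCluster custom resource. For reference, a minimal sketch of such a CR is shown below, assuming local raw block devices exposed through a localblock storage class; the device set count, device size, and storage class name are assumptions that must match your environment.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset
      count: 1                 # number of device sets
      replica: 3               # one device per replica, spread across 3 nodes
      portable: false
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 1Ti     # assumed raw device size
          storageClassName: localblock
          volumeMode: Block

$ oc apply -f storagecluster.yaml
$ oc get storagecluster -n openshift-storage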
3. Monitor Cluster Health
Access the ODF dashboard from the OpenShift Console.
Verify the status of MON, OSD, and MGR components.
Monitor used and available capacity.
4. Create Storage Classes
Default storage classes (like ocs-storagecluster-ceph-rbd, ocs-storagecluster-cephfs) are auto-created.
Use these classes in PVCs for your applications.
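For example, a claim for shared (RWX) storage from the CephFS class could look like the sketch below; the PVC name, namespace, and size are placeholders. For a database that needs fast block storage, swap the storage class for ocs-storagecluster-ceph-rbd and use ReadWriteOnce.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
  namespace: my-app            # placeholder namespace
spec:
  accessModes:
    - ReadWriteMany            # CephFS supports shared access across pods
  resources:
    requests:
      storage: 10Gi
  storageClassName: ocs-storagecluster-cephfs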
Use Cases Supported
Stateful Applications: Databases (PostgreSQL, MongoDB), Kafka, Elasticsearch.
CI/CD Pipelines requiring persistent storage.
Backup and Disaster Recovery via ODF and ACM.
AI/ML Workloads needing large-scale data persistence.
📌 Best Practices
Label nodes intended for storage so that other workloads are not scheduled on them (example commands follow this list).
Always monitor disk health and usage via the dashboard.
Regularly test failover and recovery scenarios.
For production, consider External Mode or Multicloud Gateway for advanced scalability.
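As referenced in the first practice above, nodes can be labeled and optionally tainted from the CLI. The node name below is a placeholder; the label and taint keys shown are the ones commonly used with ODF/OCS, so verify them against your version's documentation.
$ oc label node worker-1 cluster.ocs.openshift.io/openshift-storage=""
$ oc adm taint nodes worker-1 node.ocs.openshift.io/storage=true:NoSchedule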
🎯 Conclusion
Deploying OpenShift Data Foundation in Internal Mode is a robust and simplified way to bring storage closer to your workloads. It ensures seamless integration with OpenShift, eliminates the need for external SAN/NAS, and supports a wide range of use cases — all while leveraging Ceph’s proven resilience.
Whether you're running apps at the edge, in dev/test, or need flexible persistent storage, ODF with Internal Mode is a solid choice.
For more info, kindly follow: Hawkstack Technologies
Text
Ceph Backup: Don't Lose Your HCI Data
Ceph Backups: Don't Lose Your HCI Data @vexpert #vmwarecommunities #hcistorage #ceph #cephfs #veeam #nakivo #agentbasedbackup #filelevelbackups #virtualization #docker #dockerswarm #kubernetes
This is more of a public service announcement than a technical blog post, but we will take a high-level look at a warning I want to pass on: if you are using Ceph, or CephFS on top of Ceph, as HCI storage for your virtual machines or your files, make sure that you don't just trust your hypervisor-based virtual machine backups. Why? Well, let's take a look. Table of…
Text
Fluent bit is an open source, light-weight log processing and forwarding service. Fluent bit allows to collect logs, events or metrics from different sources and process them. These data can then be delivered to different backends such as Elastic search, Splunk, Kafka, Data dog, InfluxDB or New Relic. Fluent bit is easy to setup and configure. It gives you full control of what data to collect, parsing the data to provide a structure to the data collected. It allows one to remove unwanted data, filter data and push to an output destination. Therefore, it provides an end to end solution for data collection. Some wonderful features of fluent bit are: High Performance It is super Lightweight and fast, requires less resource and memory It supports multiple data formats. The configuration file for Fluent Bit is very easy to understand and modify. Fluent Bit has built-in TLS/SSL support. Communication with the output destination is secured. Asynchronous I/O Fluent Bit is compatible with docker and kubernetes and can therefore be used to aggregate application logs. There are several ways to log in kubernetes. One way is the default stdout logs that are written to a host path”/var/log/containers” on the nodes in a cluster. This method requires a fluent bit DaemonSet to be deployed. A daemon sets deploys a fluent bit container on each node in the cluster. The second way of logging is the use of a persistent volume. This allows logs to be written and persistent in an internal or external storage such as Cephfs. Fluent bit can be setup as a deployment to read logs from a persistent Volume. In this Blog, we will look at how to send logs from a Kubernetes Persistent Volume to Elastic search using fluent bit. Once logs are sent to elastic search, we can use kibana to visualize and create dashboards using application logs and metrics. PREREQUISITES: First, we need to have a running Kubernetes Cluster. You can use our guides below to setup one if you do not have one yet: Install Kubernetes Cluster on Ubuntu with kubeadm Install Kubernetes Cluster on CentOS 7 with kubeadm Install Production Kubernetes Cluster with Rancher RKE Secondly, we will need an elastic search cluster setup. You can use elasticsearch installation guide if you don’t have one in place yet. In this tutorial, we will setup a sample elastic search environment using stateful sets deployed in the kubernetes environment. We will also need a kibana instance to help us visualize this logs. Deploy Elasticsearch Create the manifest file. This deployment assumes that we have a storage class cephfs in our cluster. A persistent volume will be created along side the elastic search stateful set. Modify this configuration as per your needs. $ vim elasticsearch-ss.yaml apiVersion: apps/v1 kind: StatefulSet metadata: name: es-cluster spec: serviceName: elasticsearch replicas: 1 selector: matchLabels: app: elasticsearch template: metadata: labels: app: elasticsearch spec: containers: - name: elasticsearch image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0 resources: limits: cpu: 1000m requests: cpu: 100m ports: - containerPort: 9200 name: rest protocol: TCP - containerPort: 9300 name: inter-node protocol: TCP volumeMounts: - name: data mountPath: /usr/share/elasticsearch/data env: - name: cluster.name value: k8s-logs - name: node.name valueFrom: fieldRef: fieldPath: metadata.name - name: discovery.seed_hosts value: "es-cluster-0.elasticsearch" - name: cluster.initial_master_nodes value: "es-cluster-0" - name: ES_JAVA_OPTS value: "-Xms512m -Xmx512m"
initContainers: - name: fix-permissions image: busybox command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"] securityContext: privileged: true volumeMounts: - name: data mountPath: /usr/share/elasticsearch/data - name: increase-vm-max-map image: busybox command: ["sysctl", "-w", "vm.max_map_count=262144"] securityContext: privileged: true - name: increase-fd-ulimit image: busybox command: ["sh", "-c", "ulimit -n 65536"] securityContext: privileged: true volumeClaimTemplates: - metadata: name: data labels: app: elasticsearch spec: accessModes: [ "ReadWriteOnce" ] storageClassName: cephfs resources: requests: storage: 5Gi Apply this configuration $ kubectl apply -f elasticsearch-ss.yaml 2. Create an elastic search service $ vim elasticsearch-svc.yaml kind: Service apiVersion: v1 metadata: name: elasticsearch labels: app: elasticsearch spec: selector: app: elasticsearch clusterIP: None ports: - port: 9200 name: rest - port: 9300 name: inter-node $ kubectl apply -f elasticsearch.svc 3. Deploy Kibana $ vim kibana.yaml --- apiVersion: apps/v1 kind: Deployment metadata: name: kibana labels: app: kibana spec: replicas: 1 selector: matchLabels: app: kibana template: metadata: labels: app: kibana spec: containers: - name: kibana image: docker.elastic.co/kibana/kibana:7.2.0 resources: limits: cpu: 1000m requests: cpu: 100m env: - name: ELASTICSEARCH_URL value: http://elasticsearch:9200 ports: - containerPort: 5601 --- apiVersion: v1 kind: Service metadata: name: kibana labels: app: kibana spec: ports: - port: 5601 selector: app: kibana Apply this configuration: $ kubectl apply -f kibana.yaml 4. We then need to configure and ingress route for the kibana service as follows: $ vim kibana-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/tls-acme: "true" ingress.kubernetes.io/force-ssl-redirect: "true" name: kibana spec: rules: - host: kibana.computingpost.com http: paths: - backend: serviceName: kibana servicePort: 5601 path: / tls: - hosts: - kibana.computingpost.com secretName: ingress-secret // This can be created prior if using custom certs $ kubectl apply -f kibana-ingress.yaml Kibana service should now be accessible via https://kibana.computingpost.com/ Once we have this setup, We can proceed to deploy fluent Bit. Step 1: Deploy Service Account, Role and Role Binding Create a deployment file with the following contents: $ vim fluent-bit-role.yaml --- apiVersion: v1 kind: ServiceAccount metadata: name: fluent-bit --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: fluent-bit-read rules: - apiGroups: [""] resources: - namespaces - pods verbs: ["get", "list", "watch"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: fluent-bit-read roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: fluent-bit-read subjects: - kind: ServiceAccount name: fluent-bit namespace: default Apply deployment config by running the command below. kubectl apply -f fluent-bit-role.yaml Step 2: Deploy a Fluent Bit configMap This config map allows us to be able to configure our fluent Bit service accordingly. Here, we define the log parsing and routing for Fluent Bit. Change this configuration to match your needs. $ vim fluentbit-configmap.yaml
apiVersion: v1 kind: ConfigMap metadata: labels: k8s-app: fluent-bit name: fluent-bit-config data: filter-kubernetes.conf: | [FILTER] Name kubernetes Match * Kube_URL https://kubernetes.default.svc:443 Kube_CA_File /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Kube_Token_File /var/run/secrets/kubernetes.io/serviceaccount/token Kube_Tag_Prefix kube.var.log Merge_Log On Merge_Log_Key log_processed K8S-Logging.Parser On K8S-Logging.Exclude Off fluent-bit.conf: | [SERVICE] Flush 1 Log_Level info Daemon off Parsers_File parsers.conf HTTP_Server On HTTP_Listen 0.0.0.0 HTTP_Port 2020 @INCLUDE input-kubernetes.conf @INCLUDE filter-kubernetes.conf @INCLUDE output-elasticsearch.conf input-kubernetes.conf: | [INPUT] Name tail Tag * Path /var/log/*.log Parser json DB /var/log/flb_kube.db Mem_Buf_Limit 5MB Skip_Long_Lines On Refresh_Interval 10 output-elasticsearch.conf: | [OUTPUT] Name es Match * Host $FLUENT_ELASTICSEARCH_HOST Port $FLUENT_ELASTICSEARCH_PORT Logstash_Format On Replace_Dots On Retry_Limit False parsers.conf: | [PARSER] Name apache Format regex Regex ^(?[^ ]*) [^ ]* (?[^ ]*) \[(?[^\]]*)\] "(?\S+)(?: +(?[^\"]*?)(?: +\S*)?)?" (?[^ ]*) (?[^ ]*)(?: "(?[^\"]*)" "(?[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache2 Format regex Regex ^(?[^ ]*) [^ ]* (?[^ ]*) \[(?[^\]]*)\] "(?\S+)(?: +(?[^ ]*) +\S*)?" (?[^ ]*) (?[^ ]*)(?: "(?[^\"]*)" "(?[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name apache_error Format regex Regex ^\[[^ ]* (?[^\]]*)\] \[(?[^\]]*)\](?: \[pid (?[^\]]*)\])?( \[client (?[^\]]*)\])? (?.*)$ [PARSER] Name nginx Format regex Regex ^(?[^ ]*) (?[^ ]*) (?[^ ]*) \[(?[^\]]*)\] "(?\S+)(?: +(?[^\"]*?)(?: +\S*)?)?" (?[^ ]*) (?[^ ]*)(?: "(?[^\"]*)" "(?[^\"]*)")?$ Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name json Format json Time_Key time Time_Format %d/%b/%Y:%H:%M:%S %z [PARSER] Name docker Format json Time_Key time Time_Format %Y-%m-%d %H:%M:%S.%L Time_Keep On [PARSER] # http://rubular.com/r/tjUt3Awgg4 Name cri Format regex Regex ^(?[^ ]+) (?stdout|stderr) (?[^ ]*) (?.*)$ Time_Key time Time_Format %Y-%m-%dT%H:%M:%S.%L%z [PARSER] Name syslog Format regex Regex ^\(?[^ ]* 1,2[^ ]* [^ ]*) (?[^ ]*) (?[a-zA-Z0-9_\/\.\-]*)(?:\[(?[0-9]+)\])?(?:[^\:]*\:)? *(?.*)$ Time_Key time Time_Format %b %d %H:%M:%S kubectl apply -f fluentbit-configmap.yaml Step 3: Create a Persistent Volume Claim This is where we will write application logs. $ vim pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: logs-pvc spec: accessModes: - ReadWriteMany storageClassName: cephfs #Change accordingly resources: requests: storage: 5Gi $ kubectl apply -f pvc.yaml Step 4: Deploy a kubernetes deployment using the config map in a file $ vim fluentbit-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: labels: k8s-app: fluent-bit-logging name: fluent-bit spec: replicas: 1 selector: matchLabels:
k8s-app: fluent-bit-logging template: metadata: annotations: prometheus.io/path: /api/v1/metrics/prometheus prometheus.io/port: "2020" prometheus.io/scrape: "true" labels: k8s-app: fluent-bit-logging kubernetes.io/cluster-service: "true" version: v1 spec: containers: - env: - name: FLUENT_ELASTICSEARCH_HOST value: elasticsearch - name: FLUENT_ELASTICSEARCH_PORT value: "9200" image: fluent/fluent-bit:1.5 imagePullPolicy: Always name: fluent-bit ports: - containerPort: 2020 protocol: TCP resources: terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/log name: varlog - mountPath: /fluent-bit/etc/ name: fluent-bit-config dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: serviceAccount: fluent-bit serviceAccountName: fluent-bit volumes: - name: varlog persistentVolumeClaim: claimName: logs-pvc - configMap: defaultMode: 420 name: fluent-bit-config name: fluent-bit-config Create objects by running the command below: $ kubectl apply -f fluentbit-deployment.yaml Step 5: Deploy an application Let’s test that our fluent bit service works as expected. We will use an test application that writes logs to our persistent volume. $ vim testpod.yaml apiVersion: v1 kind: Pod metadata: name: test-pod spec: containers: - name: app image: centos command: ["/bin/sh"] args: ["-c", "while true; do echo $(date -u) >> /var/log/app.log; sleep 5; done"] volumeMounts: - name: persistent-storage mountPath: /var/log volumes: - name: persistent-storage persistentVolumeClaim: claimName: logs-pvc Apply with the command: $ kubectl apply -f testpod.yaml Check if the pod is running. $ kubectl get pods You should see the following output: NAME READY STATUS RESTARTS AGE test-pod 1/1 Running 0 107s Once the pod is running, We can proceed to check if logs are sent to Elastic search. On Kibana, we will have to create an index as shown below. Click on “Management > Index Patterns> Create index pattern” Once the index has been created. Click on the discover icon to see if our logs are in place: See more guides on Kubernetes on our site.
Quote
Open Source Definitely Changed the Storage Industry With Linux and other technologies and products, it impacts all areas. By Philippe Nicolas | February 16, 2021 at 2:23 pm It's not breaking news, but the impact of open source on the storage industry was and is huge, and it won't shrink; quite the opposite. The reason is simple: the developer community is the largest one and adoption is very wide. Some people see this as a threat, while others consider the model a democratic effort and believe in this alternative approach. Let's dig in a bit. First, outside of storage, here is a list of some open source software (OSS) projects that we use every day, directly or indirectly: Linux and FreeBSD of course, Kubernetes, OpenStack, Git, KVM, Python, PHP, HTTP server, Hadoop, Spark, Lucene, Elasticsearch (dual license), MySQL, PostgreSQL, SQLite, Cassandra, Redis, MongoDB (under SSPL), TensorFlow, Zookeeper, and some famous tools and products like Thunderbird, OpenOffice, LibreOffice or SugarCRM. The list is of course very long, diverse, and ubiquitous in our world. Some of these projects set off waves of company creation, as founders anticipated market creation and, potentially, domination. Among them are Cloudera and Hortonworks, which both went public promoting Hadoop and merged in 2019; MariaDB as a fork of MySQL, with MySQL itself later acquired by Oracle; and DataStax for Cassandra, though it turns out that this is not always a safe destiny… Coldago Research estimated that the entire open source industry will represent $27+ billion in 2021 and will pass the $35 billion barrier in 2024. Historically, one of the roots was the Unix-to-Linux transition. Unix was widely used and adopted, but it came at a certain price, and the cost of the source code was significant, even prohibitive. Projects like Minix and Linux, developed and studied at universities and research centers, generated tons of users and adopters, many of whom became contributors. Is it similar to a religion? Probably not, but it is certainly a philosophy. Red Hat, founded in 1993, demonstrated that an open source business could be big and ready for the long run; the company did its IPO in 1999 and reached an annual run rate of around $3 billion. The firm was acquired by IBM in 2019 for $34 billion. Amazing, right? Canonical, SUSE, Debian and a few others also show interesting development paths, as companies or as communities. Before that shift, software development was essentially about applications, since system software meant cost, and high cost at that. A startup also didn't buy software with the VC money it raised, as that could be seen as suicide outside of its mission. All of this contributed to the open source wave in all directions. On the storage side, Linux invited students, research centers, communities and start-ups to develop system software, especially block storage, file systems, and other approaches such as object storage software. Thus we all know many storage software start-ups that leveraged Linux to offer such new storage models. We didn't see a lot of standalone block storage, but rather open source operating systems with block (SCSI-based) storage included. This is a bit different for file and object storage, where there are plenty of offerings. On the file storage side, the list is significant, with disk file systems and distributed ones, the latter having multiple sub-segments as well. Below is a pretty long list of OSS in the storage world.
Block Storage: Linux-LIO, Linux SCST & TGT, Open-iSCSI, Ceph RBD, OpenZFS, NexentaStor (Community Ed.), Openfiler, Chelsio iSCSI, Open vStorage, CoprHD, OpenStack Cinder
File Storage:
Disk file systems: XFS, OpenZFS, Reiser4 (ReiserFS), ext2/3/4
Distributed file systems (including cluster, NAS and parallel, to simplify the list): Lustre, BeeGFS, CephFS, LizardFS, MooseFS, RozoFS, XtreemFS, CohortFS, OrangeFS (PVFS2), Ganesha, Samba, Openfiler, HDFS, Quantcast, Sheepdog, GlusterFS, JuiceFS, ScoutFS, Red Hat GFS2, GekkoFS, OpenStack Manila
Object Storage: Ceph RADOS, MinIO, Seagate CORTX, OpenStack Swift, Intel DAOS
Other data management and storage related projects: TAR, rsync, OwnCloud, FileZilla, iRODS, Amanda, Bacula, Duplicati, KubeDR, Velero, Pydio, Grau Data OpenArchive
The impact of open source is obvious on commercial software, but also on other emergent or small-footprint OSS. By impact we mean disrupting established market positions with a radical new approach. It is also illustrated by commercial software embedding open source pieces, or by famous, widely adopted open source products that prevent some initiatives from taking off. Among all these scenarios, we can list XFS, OpenZFS, Ceph and MinIO, which shake commercial models and were even chosen by vendors that don't need to develop themselves or sign any OEM deal with potential partners. Again, as we have said many times in the past, the build, buy or partner model is also a reality in that world. To extend these examples, Ceph is recommended to be deployed with the XFS disk file system for OSDs, as is OpenStack Swift. As these last few examples show, open source projects obviously leverage other open source ones, and commercial software does as well, but we never saw an open source project leveraging a commercial one. That would be a bit antinomic; it instead acts as a trigger to start development of an open source project offering the same functions. OpenZFS is also used by Delphix, Oracle and in TrueNAS. MinIO is chosen by iXsystems (embedded in TrueNAS), Datera, Humio, Robin.IO, McKesson, MapR (now HPE), Nutanix, Pavilion Data, Portworx (now Pure Storage), Qumulo, Splunk, Cisco, VMware or Ugloo, to name a few. SoftIron leverages Ceph and builds optimized, tailored systems around it. The list is long… and we all have several examples in mind. Open source players promote their solutions essentially around community and enterprise editions, the differences being the support fee, patch policies, feature differences and, of course, final subscription fees. As we know, innovations often come from small, agile players that have real difficulty approaching large customers and face doubts about their longevity. Choosing the OSS path is a way to be embedded and selected by larger providers or by users directly, and it implies some key questions around business models. Another dimension of the impact on commercial software relates to the behavior of universities and research centers. They prefer to shift budget to hardware and reduce software spending by using open source. These entities have many skilled people, and potentially the time, to develop and extend open source projects and contribute back to the communities. They see in that way of working a positive and virtuous cycle, everyone feeding others. Thus they reach new levels of performance, gaining capacity and computing power… finally, a decision that is understandable under budget constraints and pressure.
Ceph was started during Sage Weil's thesis at UCSC, sponsored by the Advanced Simulation and Computing Program (ASC), including Sandia National Laboratories (SNL), Lawrence Livermore National Laboratory (LLNL) and Los Alamos National Laboratory (LANL). There is a lot of this: a famous example is Lustre, but also MarFS from LANL, GekkoFS from the University of Mainz, Germany, associated with the Barcelona Supercomputing Center, or BeeGFS, formerly FhGFS, developed by the Fraunhofer Center for High Performance Computing in Germany as well. Lustre was initiated by Peter Braam in 1999 at Carnegie Mellon University. Projects popped up everywhere. Collaboration software, as an extension to storage, sees similar behavior. OwnCloud, open source file sharing and collaboration software, is used and chosen by many universities and large education sites. At the same time, choosing open source components or products out of a wish for independence doesn't provide any kind of life guarantee. Remember examples such as HDFS, GlusterFS, OpenIO, NexentaStor or Redcurrant. Some of them were acquired or disappeared, creating issues for users but certainly opportunities for other players watching that space carefully. Some initiatives exist to secure the software if doubts about its future appear on the table. The SDS wave, a bit like the LAMP stack (Linux, Apache web server, MySQL and PHP), had a serious impact on commercial software, as several open source players and solutions jumped in and generated significant pricing erosion. This movement, good for users, also continues to reduce the differentiators among players, and it has become tougher to notice differences. In addition, internet giants played a major role in open source development. They have talent, large teams, time and money, and can spend time developing software that fits their needs perfectly. They also control communities, acting in such a way that they put seeds in many directions. The other reason is the difficulty of finding commercial software that can scale to their needs. In other words, a commercial product can scale to the needs of a large corporation but reaches its limits for a large internet player. Historically, these organizations really redefined scalability objectives with new designs and approaches not found or possible with commercial software. We all have examples in mind; in storage, Google File System is a classic one, as is Haystack at Facebook. There are also large vendors with internal projects that suddenly appear and are donated as open source to boost community effort and try to trigger market traction and partnerships; this is the case of Intel DAOS. Open source is immediately associated with various license models, and this is the complex aspect of source code, as it continues to create difficulties for some people and entities and impacts projects' futures. The disputes about ZFS and even Java were well covered in the press at the time. We invite readers to check their preferred page for that, or at least visit the Wikipedia one or the page with the full table in the appendix. Immediately associated with licenses are the communities, organizations and foundations, and we can mention some of them here, as the list is pretty long: Apache Software Foundation, Cloud Native Computing Foundation, Eclipse Foundation, Free Software Foundation, FreeBSD Foundation, Mozilla Foundation or Linux Foundation… and again Wikipedia is a good place to start.
Open Source Definitely Changed Storage Industry - StorageNewsletter
Text
Canonical Releases OpenStack Charms 20.02 with CephFS Support
Canonical today announced the availability of OpenStack Charms 20.02, a major release of the powerful tool for designing, building, and managing private OpenStack clouds on Ubuntu. OpenStack Charms 20.02 is an exciting release that adds new features and improvements over previous versions. The biggest change appears to be support for the Ceph file system. Thus, the so-called Ceph…
Text
Kubernetes Config & Storage Resources (Part 2)
from https://thinkit.co.jp/article/14195
Config & Storage Resources
In the previous article, we introduced the three kinds of Config & Storage resources that users work with directly and covered two of them, Secret and ConfigMap. This time we cover the remaining one, PersistentVolumeClaim, and also look at PersistentVolume and Volume, which you need to understand before tackling PersistentVolumeClaim.
The five broad categories of Kubernetes resources
Workloads resources: resources related to running containers
Discovery & LB resources: resources that provide endpoints for exposing containers externally
Config & Storage resources: resources related to configuration, secrets, and persistent volumes
Cluster resources: resources related to security, quotas, and the like
Metadata resources: resources used to operate on other resources
The difference between Volume, PersistentVolume, and PersistentVolumeClaim
A Volume makes an existing volume (host storage, NFS, Ceph, a GCP volume, and so on) available by specifying it directly in a YAML manifest. Users therefore cannot create new volumes or delete existing ones through it, and no Volume resource is created from a YAML manifest.
A PersistentVolume, on the other hand, integrates with an external system that provides persistent volumes, so new volumes can be created and existing ones deleted. Concretely, a separate PersistentVolume resource is created from a YAML manifest.
The same plugins exist for both PersistentVolume and Volume. For the GCP and AWS volume services, for example, both a PersistentVolume plugin and a Volume plugin are available. The PersistentVolume plugin can handle the volume lifecycle, that is, creation and deletion (and Dynamic Provisioning when combined with PersistentVolumeClaim), whereas the Volume plugin can only consume volumes that already exist.
A PersistentVolumeClaim, as the name suggests, is the resource used to request an assignment from the PersistentVolume resources that have been created. A PersistentVolume only registers a volume with the cluster, so to actually use it from a Pod you must define a PersistentVolumeClaim. With the Dynamic Provisioning feature (explained later), a PersistentVolume can be created dynamically at the moment a PersistentVolumeClaim is used, so the order may feel reversed.
Volume
Kubernetes abstracts Volumes and defines them as resources loosely coupled to Pods. A variety of Volume plugins are provided, such as the ones below. There are more than those in this list; see https://kubernetes.io/docs/concepts/storage/volumes/ for details.
EmptyDir
HostPath
nfs
iscsi
cephfs
GCPPersistentVolume
gitRepo
Unlike a PersistentVolume, storage is assigned to a Pod statically, so be careful about conflicts.
EmptyDir
EmptyDir can be used as temporary disk space for a Pod. It is deleted when the Pod is terminated.
(Figure: EmptyDir conceptual diagram)
Listing 1: emptydir-sample.yml, specifying an EmptyDir volume
apiVersion: v1
kind: Pod
metadata:
  name: sample-emptydir
spec:
  containers:
    - image: nginx:1.12
      name: nginx-container
      volumeMounts:
        - mountPath: /cache
          name: cache-volume
  volumes:
    - name: cache-volume
      emptyDir: {}

$ kubectl apply -f emptydir-sample.yml
HostPath
HostPath is a plugin that maps an area on the Kubernetes node into the container. The type is chosen from Directory, DirectoryOrCreate, File, Socket, BlockDevice, and so on. The difference between DirectoryOrCreate and Directory is whether the directory is created at startup when it does not already exist.
(Figure: HostPath conceptual diagram)
Listing 2: hostpath-sample.yml, using HostPath
apiVersion: v1
kind: Pod
metadata:
  name: sample-hostpath
spec:
  containers:
    - image: nginx:1.12
      name: nginx-container
      volumeMounts:
        - mountPath: /srv
          name: hostpath-sample
  volumes:
    - name: hostpath-sample
      hostPath:
        path: /data
        type: DirectoryOrCreate

$ kubectl apply -f hostpath-sample.yml
PersistentVolume (PV)
A PersistentVolume is a Volume reserved as a persistent storage area. While the Volumes described above are attached by writing them directly into a Pod definition, a PersistentVolume is created separately as its own resource before being used; in other words, you must create a PersistentVolume resource from a YAML manifest.
Strictly speaking, PersistentVolume is classified as a Cluster resource rather than a Config & Storage resource, but for convenience it is explained in this chapter.
Types of PersistentVolume
A PersistentVolume is basically a disk attached over the network. HostPath is provided for single-node testing, but it is not practical as a PersistentVolume. PersistentVolumes have a pluggable structure; some examples are listed below. There are more than those in this list; see https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for details.
GCE Persistent Disk
AWS Elastic Block Store
NFS
iSCSI
Ceph
OpenStack Cinder
GlusterFS
Creating a PersistentVolume
When creating a PersistentVolume, you configure items such as the following.
Labels
Capacity
Access modes
Reclaim Policy
Mount options
Storage Class
Settings specific to each PersistentVolume
Listing 3: pv_sample.yml, creating a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-pv
  labels:
    type: nfs
    environment: stg
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
  nfs:
    server: xxx.xxx.xxx.xxx
    path: /nfs/sample

$ kubectl create -f pv_sample.yml
Checking after creation, you can see that it was created successfully and is in the Bound status.
Listing 4: Checking the state of the PersistentVolume
$ kubectl get pv
NAME        CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS   CLAIM   STORAGECLASS   REASON   AGE
sample-pv   10Gi       RWX           Retain          Bound                                    6s
The PersistentVolume settings are explained below.
Labels
When creating PersistentVolumes without Dynamic Provisioning, it becomes hard to tell what kind each PersistentVolume is, so we recommend attaching labels such as type, environment, and speed. Without labels, volumes have to be assigned automatically from the existing PersistentVolumes, which makes it difficult for users to attach the disk they actually intend.
(Figure: without labels)
With labels, on the other hand, a PersistentVolumeClaim can specify the labels of the volume it wants, so scheduling can be done flexibly.
(Figure: with labels)
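For example, a PersistentVolumeClaim that selects the labeled PersistentVolume created in Listing 3 could look like the following sketch (the claim name and requested size are arbitrary):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sample-pvc
spec:
  selector:
    matchLabels:
      type: nfs
      environment: stg
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 3G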
Capacity
Specify the capacity. The point to watch here is that, in environments where Dynamic Provisioning is not available, you should also prepare smaller PersistentVolumes. For example, in the situation shown in the figure below, if a PersistentVolumeClaim requests 3 GB, the 10 GB volume, being the closest capacity among the prepared PersistentVolumes, will be assigned.
(Figure: difference between the capacity requested by the PersistentVolumeClaim and the actual capacity of the PersistentVolume)
Access modes
There are three access modes.
PersistentVolume access modes
ReadWriteOnce (RWO): read/write from a single node
ReadOnlyMany (ROX): write from a single node, read from multiple nodes
ReadWriteMany (RWX): read/write from multiple nodes
The supported access modes differ by PersistentVolume type. The block storage services offered by GCP, AWS, and OpenStack support only ReadWriteOnce. See https://kubernetes.io/docs/concepts/storage/persistent-volumes/ for details.
Reclaim Policy
The Reclaim Policy controls what happens to a PersistentVolume after it is no longer in use (whether it is destroyed, reused, and so on).
Retain
The data on the PersistentVolume is kept, not erased.
The PersistentVolume will not be mounted again by another PersistentVolumeClaim.
Recycle
The data on the PersistentVolume is deleted (rm -rf ./*) and the volume is returned to a reusable state.
It can then be mounted again by other PersistentVolumeClaims.
Delete
The PersistentVolume itself is deleted.
This is used for external volumes allocated on GCE, AWS, OpenStack, and the like.
Mount options
Depending on the PersistentVolume type, additional mount options can be specified. Check the specification of each PersistentVolume plugin for details.
Storage Class
With Dynamic Provisioning, the Storage Class is used to specify what kind of disk a user wants when requesting a PersistentVolume through a PersistentVolumeClaim. Choosing a Storage Class amounts to choosing the type of external volume.
With OpenStack Cinder, for example, you can choose which backend (Ceph, ScaleIO, Xtremio, and so on) and which zone the volume is carved out of.
Listing 5: storageclass_sample.yml, defining a Storage Class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sample-storageclass
parameters:
  availability: test-zone-1a
  type: scaleio
provisioner: kubernetes.io/cinder
Settings specific to each PersistentVolume plugin
The example here used NFS, but the actual settings differ for each PersistentVolume plugin. For example, spec.nfs is not used when the GlusterFS plugin is used.
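As a comparison, a PersistentVolume backed by the legacy in-tree GlusterFS plugin uses spec.glusterfs instead of spec.nfs. The endpoints object name and volume path below are assumptions for illustration only:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sample-gluster-pv
spec:
  capacity:
    storage: 10G
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster    # Endpoints object listing the Gluster nodes
    path: sample-volume
    readOnly: false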
Text
CephFS + Kubernetes in Practice on the NetEase Qingzhou Container Platform
Within the NetEase group, the NetEase Qingzhou cloud-native software productivity platform, built on Kubernetes, plays the key role of supporting fast and efficient innovation for digital businesses, helping business teams deliver cloud-native applications quickly, improving R&D efficiency, and saving operations costs.
As the storage backend of the NetEase Qingzhou cloud-native platform, CephFS mainly solves the problem of shared storage between containers for the NetEase Qingzhou container platform (NCS). It is used especially widely in the currently popular AI training scenarios, with storage already reaching the multi-petabyte scale, so work such as CephFS performance optimization is very important.
An introduction to Ceph and CephFS
Ceph uses RADOS as its foundation and provides object, block, and file interfaces on top of it. RADOS consists of MON, OSD, and MGR components: MON manages the cluster's various maps (osdmap, pgmap, and so on) and its health state; MGR provides rich system-information queries and supports third-party module integration (Zabbix, Prometheus, Dashboard, and so on); OSD handles the actual data storage, with one OSD generally corresponding to one disk.
On top of this architecture, CephFS adds the MDS and the client. The MDS is responsible for managing and persisting the file system metadata, while the client provides a POSIX-compatible file system client that can be mounted with the mount command.
Typical CephFS practice
Deployment
The overall deployment architecture of CephFS in the Qingzhou Kubernetes environment is as follows:
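The deployment diagram itself is not included in this excerpt. As a rough illustration of how CephFS is typically consumed from Kubernetes, a CSI StorageClass plus an RWX claim might look like the sketch below; this assumes the Ceph CSI driver is installed, and the driver name, cluster ID, file system name, and secret names are all placeholders that must match the actual deployment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cephfs
provisioner: cephfs.csi.ceph.com           # assumed Ceph CSI CephFS driver name
parameters:
  clusterID: my-ceph-cluster               # placeholder cluster ID
  fsName: myfs                             # CephFS file system name
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: training-data
spec:
  accessModes:
    - ReadWriteMany          # shared across training pods
  resources:
    requests:
      storage: 100Gi
  storageClassName: cephfs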
Original article: https://www.infoq.cn/article/h0mFSrDhSryoxBwrtBZK. Reproduction without the author's permission is prohibited.
…
from "CephFS + Kubernetes in Practice on the NetEase Qingzhou Container Platform" via KKNEWS
Text
🔄 Backing Up and Restoring Kubernetes Block and File Volumes – No-Code Guide
Kubernetes has become a foundational platform for deploying containerized applications. But as more stateful workloads enter the cluster — like databases and shared storage systems — ensuring data protection becomes critical.
This no-code guide explores how to back up and restore Kubernetes block and file volumes, the differences between storage types, and best practices for business continuity and disaster recovery.
📌 What Is Kubernetes Volume Backup & Restore?
In Kubernetes, Persistent Volumes (PVs) store data used by pods. These volumes come in two main types:
Block Storage: Raw devices formatted by applications (e.g., for databases).
File Storage: File systems shared between pods (e.g., for media files or documents).
Backup and restore in this context means protecting this stored data from loss, corruption, or accidental deletion — and recovering it when needed.
Block vs 📂 File Storage: What's the Difference?
Use case: block storage suits databases and apps needing low latency; file storage suits media, documents, and logs.
Access: block storage is attached to a single node; file storage allows multi-node/shared access.
Examples: block storage includes Amazon EBS and OpenStack Cinder; file storage includes NFS, CephFS, and GlusterFS.
Understanding your storage type helps decide the right backup tool and strategy.
🔒 Why Backing Up Volumes Is Essential
🛡️ Protects critical business data
💥 Recovers from accidental deletion or failure
📦 Enables migration between clusters or cloud providers
🧪 Supports safe testing using restored copies
🔧 Common Backup Methods (No Code Involved)
1. Snapshots (for Block Volumes)
Most cloud providers and storage backends support volume snapshots, which are point-in-time backups of storage volumes. These can be triggered through the Kubernetes interface using storage plugins called CSI drivers.
Benefits:
Fast and efficient
Cloud-native and infrastructure-integrated
Easy to automate with backup tools
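As an illustration, with a CSI driver that supports snapshots, a point-in-time copy of a PVC can be requested declaratively. The snapshot class and PVC names below are placeholders:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-data-snap
spec:
  volumeSnapshotClassName: csi-snapclass   # assumed VolumeSnapshotClass
  source:
    persistentVolumeClaimName: db-data     # PVC to snapshot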
2. File Backups (for File Volumes)
For file-based volumes like NFS or CephFS, the best approach is to regularly copy file contents to a secure external storage location — such as object storage or an offsite file server.
Benefits:
Simple to implement
Granular control over which files to back up
Works well with shared volumes
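One simple pattern, sketched here with an assumed image and an assumed offsite NFS target, is a Kubernetes CronJob that mounts the shared volume read-only and copies its contents out on a schedule:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: file-volume-backup
spec:
  schedule: "0 2 * * *"                     # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: alpine:3.19            # assumed base image; rsync installed at runtime
              command: ["/bin/sh", "-c"]
              args:
                - apk add --no-cache rsync && rsync -a /data/ /backup/
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
                - name: backup-target
                  mountPath: /backup
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: shared-files     # placeholder PVC name
            - name: backup-target
              nfs:
                server: backup.example.com  # assumed offsite NFS server
                path: /exports/k8s-backups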
3. Backup Tools (All-in-One Solutions)
Several tools offer full platform support to handle Kubernetes volume backup and restore — with user-friendly interfaces and no need to touch code:
Velero: Popular open-source tool that supports scheduled backups, volume snapshots, and cloud storage.
Kasten K10: Enterprise-grade solution with dashboards, policy management, and compliance features.
TrilioVault, Portworx PX-Backup, and Rancher Backup: Also offer graphical UIs and seamless Kubernetes integration.
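For instance, with Velero installed and configured against an object storage location, typical operations are driven entirely from its CLI; the namespace and schedule values below are placeholders:
# Back up one namespace, including volume snapshots
$ velero backup create my-app-daily --include-namespaces my-app --snapshot-volumes

# Schedule the same backup every night at 01:00
$ velero schedule create my-app-nightly --schedule "0 1 * * *" --include-namespaces my-app

# Restore from an existing backup
$ velero restore create --from-backup my-app-daily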
✅ Backup Best Practices for Kubernetes Volumes
🔁 Automate backups on a regular schedule (daily/hourly)
🔐 Encrypt data at rest and in transit
🌍 Store backups in a different location/region from the primary cluster
📌 Use labels to categorize backups by application or environment
🧪 Periodically test restore processes to validate recoverability
♻️ How Restoration Works (No Coding Required)
Restoring volumes in Kubernetes depends on the type of backup:
For snapshots, simply point new volumes to an existing snapshot when creating them again.
For file backups, use backup tools to restore contents back into the volume or re-attach to new pods.
For full-platform backup tools, use the interface to select a backup and restore it — including associated volumes, pods, and configurations.
Many solutions provide dashboards, logs, and monitoring to confirm that restoration was successful.
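For the snapshot case, the restore is itself just a new PVC that uses the snapshot as its data source; the names, size, and storage class below are placeholders:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data-restored
spec:
  dataSource:
    name: db-data-snap                      # the VolumeSnapshot created earlier
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi                         # must be at least the snapshot size
  storageClassName: my-block-storageclass   # placeholder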
🚀 Summary: Protect What Matters
As Kubernetes powers more business-critical applications, backing up your block and file volumes is no longer optional — it’s essential. Whether using built-in snapshots, file-based backups, or enterprise tools, ensure you have a backup and recovery plan that’s tested, automated, and production-ready.
Your Kubernetes environment can be resilient and disaster-proof — with zero code required.
For more info, kindly follow: Hawkstack Technologies
#Kubernetes #K8s #DevOps #CloudNative #PersistentStorage #StatefulApps #KubernetesStorage #VolumeBackup #DisasterRecovery #DataProtection #PlatformEngineering #SRE #CloudSecurity #OpenSourceTools #NoCodeOps
Text
GlusterFS vs Ceph: Two Different Storage Solutions with Pros and Cons
GlusterFS vs Ceph: Two Different Storage Solutions with Pros and Cons @vexpert #vmwarecommunities #ceph #glusterfs #glusterfsvsceph #cephfs #containerstorage #kubernetesstorage #virtualization #homelab #homeserver #docker #kubernetes #hci
I have been trying out various storage solutions in my home lab environment over the past couple of months. Two that I have been testing extensively are GlusterFS and Ceph, and more specifically GlusterFS vs CephFS, which is Ceph's file system running on top of the underlying Ceph storage. I wanted to give you a list of the pros and cons of GlusterFS vs Ceph that I have seen while working with…
Text
Red Hat OpenStack Platform 13 is here! - Rece computer service
Accelerate. Innovate. Empower.
In the digital economy, IT organizations can be expected to deliver services anytime, anywhere, and to any device. IT speed, agility, and innovation can be critical to help stay ahead of your competition. Red Hat OpenStack Platform lets you build an on-premise cloud environment designed to accelerate your business, innovate faster, and empower your IT teams.
Accelerate. Red Hat OpenStack Platform can help you accelerate IT activities and speed time to market for new products and services. Red Hat OpenStack Platform helps simplify application and service delivery using an automated self-service IT operating model, so you can provide users with more rapid access to resources. Using Red Hat OpenStack Platform, you can build an on-premises cloud architecture that can provide resource elasticity, scalability, and increased efficiency to launch new offerings faster.
Innovate. Red Hat OpenStack Platform enables you to differentiate your business by helping to make new technologies more accessible without sacrificing current assets and operations. Red Hat’s open source development model combines faster-paced, cross-industry community innovation with production-grade hardening, integrations, support, and services. Red Hat OpenStack Platform is designed to provide an open and flexible cloud infrastructure ready for modern, containerized application operations while still supporting the traditional workloads your business relies on.
Empower. Red Hat OpenStack Platform helps your IT organization deliver new services with greater ease. Integrations with Red Hat’s open software stack let you build a more flexible and extensible foundation for modernization and digital operations. A large partner ecosystem helps you customize your environment with third-party products, with greater confidence that they will be interoperable and stable.
With Red Hat OpenStack Platform 13, Red Hat continues to bring together community-powered innovation with the stability, support, and services needed for production deployment. Red Hat OpenStack Platform 13 is a long-life release with up to three years of standard support and an additional, optional two years of extended life-cycle support (ELS). This release includes many features to help you adopt cloud technologies more easily and support digital transformation initiatives.
Fast forward upgrades
With both standard and long-life releases, Red Hat OpenStack Platform lets you choose when to implement new features in your cloud environment:
Upgrade every six months and benefit from one year of support on each release.
Upgrade every 18 months with long-life releases and benefit from 3 years of support on that release, with an optional ELS totalling to up to 5 years of support. Long life releases include innovations from all previous releases.
Now, with the fast forward upgrade feature, you can skip between long-life releases on an 18-month upgrade cadence. Fast forward upgrades fully containerize Red Hat OpenStack Platform deployment to simplify the process of upgrading between long-life releases. This means that customers who are currently using Red Hat OpenStack Platform 10 have an easier upgrade path to Red Hat OpenStack Platform 13—with fewer interruptions and no need for additional hardware.
Containerized OpenStack services
Red Hat OpenStack Platform now supports containerization of all OpenStack services. This means that OpenStack services can be independently managed, scaled, and maintained throughout their life cycle, giving you more control and flexibility. As a result, you can simplify service deployment and upgrades and allocate resources more quickly, efficiently, and at scale.
Red Hat stack integrations
The combination of Red Hat OpenStack Platform with Red Hat OpenShift provides a modern, container-based application development and deployment platform with a scalable hybrid cloud foundation. Kubernetes-based orchestration simplifies application portability across scalable hybrid environments, designed to provide a consistent, more seamless experience for developers, operations, and users.
Red Hat OpenStack Platform 13 delivers several new integrations with Red Hat OpenShift Container Platform:
Integration of openshift-ansible into Red Hat OpenStack Platform director eases troubleshooting and deployment.
Network integration using the Kuryr OpenStack project unifies network services between the two platforms, designed to eliminate the need for multiple network overlays and reduce performance and interoperability issues.
Load Balancing-as-a-Service with Octavia provides highly available cloud-scale load balancing for traditional or containerized workloads.
Additionally, support for the Open Virtual Networking (OVN) networking stack supplies consistency between Red Hat OpenStack Platform, Red Hat OpenShift, and Red Hat Virtualization.
Security features and compliance focus
Security and compliance are top concerns for organizations deploying clouds. Red Hat OpenStack Platform includes integrated security features to help protect your cloud environment. It encrypts control flows and, optionally, data stores and flows, enhancing the privacy and integrity of your data both at rest and in motion.
Red Hat OpenStack Platform 13 introduces several new, hardened security services designed to help further safeguard enterprise workloads:
Programmatic, API-driven secrets management through Barbican
Encrypted communications between OpenStack services using Transport Layer Security (TLS) and Secure Sockets Layer (SSL)
Cinder volume encryption and Glance image signing and verification
Additionally, Red Hat OpenStack Platform 13 can help your organization meet relevant technical and operational controls found in risk management frameworks globally. Red Hat can help support compliance guidance provided by government standards organizations, including:
The Federal Risk and Authorization Management Program (FedRAMP) is a U.S. government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.
Agence nationale de la sécurité des systèmes d’information (ANSSI) is the French national authority for cyber-defense and network and information security (NIS).
An updated security guide is also available to help you when deploying a cloud environment.
Storage and hyperconverged infrastructure options
Red Hat Ceph Storage provides unified, highly scalable, software-defined block, object, and file storage for Red Hat OpenStack Platform deployments and services. Integration between the two enables you to deploy, scale, and manage your storage back end just like your cloud infrastructure. New storage integrations included in Red Hat OpenStack Platform 13 give you more choice and flexibility. With support for the OpenStack Manila project, you can use the CephFS NFS file share as a service to better support applications using file storage. As a result, you can choose the type of storage for each workload, from a unified storage platform.
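As a rough illustration of that Manila workflow (the share type and names are assumptions, and the exact options depend on how Manila is configured in your deployment), a CephFS-backed NFS share can be created and exported with the Manila CLI:
# Create a 10 GB NFS share backed by an assumed CephFS/NFS share type
$ manila create NFS 10 --name app-share --share-type cephfsnfstype

# Allow a client network to mount the share
$ manila access-allow app-share ip 192.168.10.0/24

# Show the export location to use in the mount command
$ manila share-export-location-list app-share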
Red Hat Hyperconverged Infrastructure for Cloud combines Red Hat OpenStack Platform and Red Hat Ceph Storage into a single offering with a common life cycle and support. Both Red Hat OpenStack Platform compute and Red Hat Ceph Storage functions are run on the same host, enabling consolidation and efficiency gains. NFV use cases for Red Hat Hyperconverged Infrastructure for Cloud include:
Core datacenters
Central office datacenters
Edge and remote point of presence (POP) environments
Virtual radio access networks (vRAN)
Content delivery networks (CDN)
You can also add hyperconverged capabilities to your current Red Hat OpenStack Platform subscriptions using an add-on SKU.
Telecommunications optimizations
Red Hat OpenStack Platform 13 delivers new telecommunications-specific features that allow CSPs to build innovative, cloud-based network infrastructure more easily:
OpenDaylight integration lets you connect your OpenStack environment with the OpenDaylight software-defined networking (SDN) controller, giving it greater visibility into and control over OpenStack networking, utilization, and policies.
Real-time Kernel-based Virtual Machine (KVM) support designed to deliver ultra-low latency for performance-sensitive environments.
Open vSwitch (OVS) offload support (tech preview) lets you implement single root input/output virtualization (SR-IOV) to help reduce the performance impact of virtualization and deliver better performance for high IOPS applications.
Learn more
Red Hat OpenStack Platform combines community-powered innovation with enterprise-grade features and support to help your organization build a production-ready private cloud. With it, you can accelerate application and service delivery, innovate faster to differentiate your business, and empower your IT teams to support digital initiatives.
Learn more about Red Hat OpenStack Platform:
Red Hat OpenStack Platform product page
Online documentation
Free 60-day evaluation
Text
The OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for cluster operations that require data persistence. As a developer you can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. In this short guide you’ll learn how to expand an existing PVC in OpenShift when using OpenShift Container Storage. Before you can expand persistent volumes, the StorageClass must have the allowVolumeExpansion field set to true. Here is a list of Storage classes available in my OpenShift cluster. $ oc get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE localblock kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 186d localfile kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 186d ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate false 169d ocs-storagecluster-cephfs (default) openshift-storage.cephfs.csi.ceph.com Delete Immediate true 169d openshift-storage.noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 169d thin kubernetes.io/vsphere-volume Delete Immediate false 169d unused kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 190d I’ll change the default storage class which is ocs-storagecluster-cephfs. Let’s export the configuration to yaml file: oc get sc ocs-storagecluster-cephfs -o yaml >ocs-storagecluster-cephfs.yml I’ll modify the file to add allowVolumeExpansion field. allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: storageclass.kubernetes.io/is-default-class: "true" name: ocs-storagecluster-cephfs parameters: clusterID: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem provisioner: openshift-storage.cephfs.csi.ceph.com reclaimPolicy: Delete volumeBindingMode: Immediate allowVolumeExpansion: true # Added field Delete current configured storageclass since a SC is immutable resource. $ oc delete sc ocs-storagecluster-cephfs storageclass.storage.k8s.io "ocs-storagecluster-cephfs" deleted Apply modified storage class configuration by running the following command: $ oc apply -f ocs-storagecluster-cephfs.yml storageclass.storage.k8s.io/ocs-storagecluster-cephfs created List storage classes to confirm it was indeed created. $ oc get sc NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE localblock kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 186d localfile kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 186d ocs-storagecluster-ceph-rbd openshift-storage.rbd.csi.ceph.com Delete Immediate false 169d ocs-storagecluster-cephfs (default) openshift-storage.cephfs.csi.ceph.com Delete Immediate true 5m20s openshift-storage.
noobaa.io openshift-storage.noobaa.io/obc Delete Immediate false 169d thin kubernetes.io/vsphere-volume Delete Immediate false 169d unused kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 190d Output yalm and confirm the new setting was applied. $ oc get sc ocs-storagecluster-cephfs -o yaml allowVolumeExpansion: true apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | "allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":"annotations":"storageclass.kubernetes.io/is-default-class":"true","name":"ocs-storagecluster-cephfs","parameters":"clusterID":"openshift-storage","csi.storage.k8s.io/node-stage-secret-name":"rook-csi-cephfs-node","csi.storage.k8s.io/node-stage-secret-namespace":"openshift-storage","csi.storage.k8s.io/provisioner-secret-name":"rook-csi-cephfs-provisioner","csi.storage.k8s.io/provisioner-secret-namespace":"openshift-storage","fsName":"ocs-storagecluster-cephfilesystem","provisioner":"openshift-storage.cephfs.csi.ceph.com","reclaimPolicy":"Delete","volumeBindingMode":"Immediate" storageclass.kubernetes.io/is-default-class: "true" creationTimestamp: "2020-10-31T13:33:56Z" name: ocs-storagecluster-cephfs resourceVersion: "242503097" selfLink: /apis/storage.k8s.io/v1/storageclasses/ocs-storagecluster-cephfs uid: 5aa95d3b-c39c-438d-85af-5c8550d6ed5b parameters: clusterID: openshift-storage csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage fsName: ocs-storagecluster-cephfilesystem provisioner: openshift-storage.cephfs.csi.ceph.com reclaimPolicy: Delete volumeBindingMode: Immediate How To Expand a PVC in OpenShift List available PVCs in the namespace. $ oc get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-harbor-harbor-redis-0 Bound pvc-e516b793-60c5-431d-955f-b1d57bdb556b 1Gi RWO ocs-storagecluster-cephfs 169d database-data-harbor-harbor-database-0 Bound pvc-00a53065-9790-4291-8f00-288359c00f6c 2Gi RWO ocs-storagecluster-cephfs 169d harbor-harbor-chartmuseum Bound pvc-405c68de-eecd-4db1-9ca1-5ca97eeab37c 5Gi RWO ocs-storagecluster-cephfs 169d harbor-harbor-jobservice Bound pvc-e52f231e-0023-41ad-9aff-98ac53cecb44 2Gi RWO ocs-storagecluster-cephfs 169d harbor-harbor-registry Bound pvc-77e159d4-4059-47dd-9c61-16a6e8b37a14 100Gi RWX ocs-storagecluster-cephfs 39d Edit PVC and change capacity $ oc edit pvc data-harbor-harbor-redis-0 ... spec: accessModes: - ReadWriteOnce resources: requests: storage: 2Gi Delete pod with claim. $ oc delete pods harbor-harbor-redis-0 pod "harbor-harbor-redis-0" deleted Recreate the deployment that was claiming the storage and it should utilize the new capacity. Expanding PVC on OpenShift Web Console You can also expand a PVC from the web console. Click on “Expand PVC” and set the desired PVC capacity.
Text
Portworx also looks promising, so I added it to the comparison. See the interest-over-time trend on Google Trends for "cephfs, scaleIO, storageOS, portworx" (all countries, past 5 years): https://t.co/rGew630IGF
— m (@m3816) October 24, 2018
from Twitter https://twitter.com/m3816 October 24, 2018 at 09:31AM
Link
Latest version of open source OpenStack-targeted software-defined storage adds CephFS, iSCSI and storage that can be deployed in containers to save hardware. From ComputerWeekly: Latest IT News http://ift.tt/2hg93fq
Text
Red Hat Ceph Storage 3 brings file, iSCSI and container storage
Latest version of open source OpenStack-targeted software-defined storage adds CephFS, iSCSI and storage that can be deployed in containers to save hardware. From ComputerWeekly: Latest IT News http://www.computerweekly.com/news/450429722/Red-Hat-Ceph-Storage-3-brings-file-iSCSI-and-container-storage