#StoragePool
govindhtech · 8 months ago
Principal Advantages of Storage Pools + Hyperdisk on GKE
Do you want to pay less for GKE block storage? Storage Pools for Hyperdisk may help. Whether you're managing GKE clusters, conventional virtual machines, or both, it's critical to automate as many of your operational chores as you can in a cost-effective way.
Storage Pools
A Hyperdisk Storage Pool is a pre-purchased collection of capacity, throughput, and IOPS that you can then provision to your applications as required. Hyperdisk is a next-generation network-attached block storage solution. Placing Hyperdisk block storage disks in storage pools lets you optimize operations and costs by sharing capacity and performance across all the disks in a pool. Hyperdisk Storage Pools can reduce the Total Cost of Ownership (TCO) of your storage by up to 30–50%, and as of Google Kubernetes Engine (GKE) 1.29.2, they can be used on GKE!
Thin provisioning in Storage Pools makes this possible: capacity allocated to the pool is consumed only when data is written, not when pool disks are provisioned. Rather than provisioning each disk for peak demand regardless of whether it ever experiences that load, you purchase capacity, IOPS, and throughput at the pool level, and the disks in the pool consume them on an as-needed basis, sharing resources where they are needed.
Why is Hyperdisk used?
Hyperdisk, the next generation of Google Cloud persistent block storage, differs from conventional Persistent Disks in that it lets you control throughput and IOPS in addition to capacity. You can also adjust disk performance after initial provisioning to match your application's requirements, eliminating excess capacity and enabling cost savings.
(Image credit: Google Cloud)
What about Storage Pools?
"Advanced Capacity" Storage Pools, by contrast, let you share a thinly provisioned pool of capacity across many Hyperdisks in a single project, all located in the same zone. Rather than paying for capacity as it is provisioned, you buy the capacity up front and consume it only as data is written. IOPS and throughput can be pooled the same way in what is called an "Advanced Capacity & Advanced Performance" Storage Pool.
Combining Hyperdisk with Storage Pools reduces the total cost of ownership (TCO) of block storage by shifting management from the disk level to the pool level, where changes are absorbed across all disks in the pool. A Storage Pool is a zonal resource with a minimum capacity of 10 TB, and all disks in a pool must be Hyperdisks of the same kind (Throughput or Balanced).
Storage Pool + Hyperdisk on GKE
As of GKE 1.29.2, you can create Hyperdisk Balanced boot disks and Hyperdisk Balanced or Hyperdisk Throughput attached disks on GKE nodes inside Storage Pools.
Let's imagine you're running a demanding stateful application in us-central1-a and want to tune block storage performance to the workload. You choose Hyperdisk Balanced for the workload's block storage. Instead of trying to right-size each disk in your application, you use a Hyperdisk Balanced Advanced Capacity, Advanced Performance Storage Pool, paying for capacity and performance up front.
Pool performance is consumed when disks in the storage pool see increased IOPS or throughput, while pool capacity is consumed only when your application writes data to the disks. The Storage Pool must be created before the Hyperdisks that go inside it.
Use the following gcloud command to create an Advanced Capacity, Advanced Performance Storage Pool:

gcloud compute storage-pools create pool-us-central1-a \
    --provisioned-capacity=10tb \
    --storage-pool-type=hyperdisk-balanced \
    --zone=us-central1-a \
    --project=my-project-id \
    --capacity-provisioning-type=advanced \
    --performance-provisioning-type=advanced \
    --provisioned-iops=10000 \
    --provisioned-throughput=1024
You can also create Storage Pools in the Google Cloud console.
If your GKE nodes use Hyperdisk Balanced as their boot disks, you can provision the node boot disks in the storage pool as well. This can be configured at cluster or node-pool creation, as well as during node-pool updates. You can use the Google Cloud console or gcloud to place your Hyperdisk Balanced node boot disks in your Storage Pool at cluster creation, as sketched below. Keep in mind that the Storage Pool must be created in the same zone as your cluster and that the machine type of the nodes must support Hyperdisk Balanced.
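The original post's command was attached as an image that has not survived; here is a minimal sketch of what it could look like, assuming the --storage-pools flag available in recent gcloud releases and reusing the pool, project, and zone from the command above (the cluster name and machine type are illustrative):

gcloud container clusters create my-cluster \
    --zone=us-central1-a \
    --machine-type=c3-standard-4 \
    --disk-type=hyperdisk-balanced \
    --storage-pools=projects/my-project-id/zones/us-central1-a/storagePools/pool-us-central1-a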
To deploy your stateful application's Hyperdisk Balanced disks in the Storage Pool, specify the pool with the storage-pools StorageClass parameter. A PersistentVolumeClaim (PVC) that uses the StorageClass then provisions the Hyperdisk Balanced volume your application will use.
The provisioned-throughput-on-create and provisioned-iops-on-create parameters of the StorageClass are optional. If they are left unset, the volume defaults to 3000 IOPS and 140Mi of throughput. Only IOPS and throughput provisioned above these baseline values consume IOPS and throughput from the Storage Pool.
The allowed IOPS and throughput figures vary based on the size of the drive.
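The StorageClass itself appeared as an image in the original post; the following is a reconstruction consistent with the figures quoted below, assuming the GKE Persistent Disk CSI driver and illustrative performance values (4000 IOPS and 180Mi, i.e. 1000 IOPS and 40 MiB/s above the defaults):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: storage-pools-sc
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-iops-on-create: "4000"
  provisioned-throughput-on-create: "180Mi"
  storage-pools: projects/my-project-id/zones/us-central1-a/storagePools/pool-us-central1-a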
Volumes provisioned with this StorageClass will consume only 1000 IOPS and 40 MiB/s of throughput from the Storage Pool.
Next, create a PVC with a reference to the StorageClass storage-pools-sc.
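A minimal PVC sketch referencing that StorageClass (the claim name and size are illustrative, not from the original post):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  storageClassName: storage-pools-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Gi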
The storage-pools-sc StorageClass uses volumeBindingMode: WaitForFirstConsumer, so the binding and provisioning of a PersistentVolume are delayed until a Pod using the PVC is created.
Finally, reference the PVC above in your stateful application to attach the Hyperdisk volumes. Your application must be scheduled to a node pool whose machines can attach Hyperdisk Balanced.
The Postgres deployment uses a nodeSelector to ensure its pods are scheduled onto nodes that can attach Hyperdisk Balanced, such as C3 machine types, as sketched below.
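The deployment manifest was likewise an image in the original; a sketch of the relevant parts, assuming the standard cloud.google.com/machine-family node label and the hypothetical postgres-pvc claim from above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      # Schedule onto C3 nodes, which support attaching Hyperdisk Balanced
      nodeSelector:
        cloud.google.com/machine-family: c3
      containers:
        - name: postgres
          image: postgres:16
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc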
You should now be able to see your Hyperdisk Balanced volume provisioned inside your storage pool.
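One way to check is to list the disks placed in the pool; a sketch, assuming the list-disks subcommand available in current gcloud releases:

gcloud compute storage-pools list-disks pool-us-central1-a \
    --zone=us-central1-a --project=my-project-id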
Next steps
A Storage Pools + Hyperdisk approach on GKE can maximize storage cost savings and efficiency for your stateful applications.
Read more on Govindhtech.com
computingpostcom · 3 years ago
In the previous article, we discussed how to set up a self-hosted Gitea private repository on a Kubernetes cluster. This article discusses how to install a Jenkins server on a Kubernetes/OpenShift cluster. Kubernetes/OpenShift adds an automation layer on top of the Jenkins server that ensures the resources allocated to the Jenkins deployment are used efficiently: with an orchestration layer such as Kubernetes/OpenShift, resources can be scaled up or down depending on consumption. This article covers the available methods of setting up a Jenkins server on a Kubernetes/OpenShift cluster.

Install Jenkins on Kubernetes using Helm3

In our guide we assume that you have a fully functional Kubernetes/OpenShift cluster. You also need access to the control plane, either natively or remotely. Helm is a package manager for Kubernetes/OpenShift that packages deployments in a format called a chart. The installation of Helm3 is covered in the article below:

Install and Use Helm 3 on Kubernetes Cluster

Step 1. Create Jenkins namespace

Create the jenkins namespace that will be used for this deployment.

kubectl create ns jenkins

Once you have Helm3 installed, add the Jenkins repo to your environment:

$ helm repo add jenkinsci https://charts.jenkins.io
$ helm repo update

Step 2. Create a Persistent Volume

Once the Jenkins repo is added, we need to configure a persistent volume, since Jenkins is a stateful application that needs to store persistent data on a volume.

Creating a Persistent Volume from a host path

Create a PersistentVolume on your Kubernetes cluster:

$ vim jenkins-localpv.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: jenkins-sc
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: jenkins-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  storageClassName: jenkins-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the configuration:

kubectl apply -f jenkins-localpv.yaml

The above configuration creates a PersistentVolume and a PersistentVolumeClaim using hostPath. The volume data will be saved under the /mnt path of the node.

Dynamic Persistent Volume creation using a StorageClass

If you have a StorageClass provided by a custom storage solution, create a new file called jenkins-pvc.yaml:

vim jenkins-pvc.yaml

Modify the configuration below to provide your StorageClass name:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: storage-class-name
  resources:
    requests:
      storage: 10Gi

Then apply the configuration after modification:

kubectl apply -f jenkins-pvc.yaml

Using openEBS Storage

You can use dynamic storage provisioning with tools such as openEBS to provision StorageClasses.
For dynamic storage, create a storageClass config:

$ vim jenkins-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: jenkins-sc
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "2"
      - name: StoragePool
        value: gpdpool
provisioner: openebs.io/provisioner-iscsi

Apply the configuration:

kubectl apply -f jenkins-sc.yaml

Create a PersistentVolume and PersistentVolumeClaim based on the above StorageClass:

$ vim dynamic-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
  labels:
    type: local
spec:
  storageClassName: jenkins-sc
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/jenkins-volume"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pvc
spec:
  storageClassName: jenkins-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Apply the configuration:

kubectl apply -f dynamic-pv.yaml

More about persistent volumes on Kubernetes/OpenShift is covered in the article below:

Deploy and Use OpenEBS Container Storage on Kubernetes

Step 3. Create a service account

Create a service account for the pods to communicate with the API server. We will also create the ClusterRole and the permissions.

kubectl apply -f -
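The manifest that was piped into this command is truncated in this copy of the article. A minimal sketch of the kind of ServiceAccount and RBAC objects the step describes (names and permissions are illustrative, not the article's originals):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins
rules:
  # Permissions Jenkins typically needs to manage agent pods
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/log", "persistentvolumeclaims", "secrets", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: jenkins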
fick-dich · 5 years ago
# List storage pools
Get-StoragePool
# Repair a degraded virtual disk
Repair-VirtualDisk
# Find physical disks that are not healthy
Get-PhysicalDisk | Where {$_.HealthStatus -ne "Healthy"}
# Reset a misbehaving physical disk
Reset-PhysicalDisk
- Powershell-cmd
dainasea-blog · 10 years ago
Forcing the media type of physical disks in a storage pool

When HDDs/SSDs are attached via a RAID card or similar, the OS sometimes cannot determine their media type, and the media type shows up as Unknown. In that state, features such as storage tiering in Windows Server 2012 R2 cannot be used, so if you want them, force the media type with PowerShell. You can even make an HDD be recognized as an SSD, which may be useful for testing.

Launch PowerShell as administrator and change the media type with:

Set-PhysicalDisk -FriendlyName "hogehoge" -MediaType SSD
Set-PhysicalDisk -FriendlyName "hogehoge" -MediaType HDD

After the change, the new value may not show up until you restart Server Manager.
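To confirm the change took effect, you can list the disks and their media types (a quick check, not part of the original post):

Get-PhysicalDisk | Select-Object FriendlyName, MediaType, HealthStatus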
govindhtech · 8 months ago
Google Cloud Advanced Performance Storage Pools Start Today
Hyperdisk Advanced Performance Storage Pools
Hyperdisk Storage Pools with Advanced Capacity, which help you reduce the Total Cost of Ownership (TCO) of your block storage capacity, became generally available earlier this year. Today, Google is bringing that same breakthrough to block storage performance with Hyperdisk Storage Pools with Advanced Performance. You can now provision IOPS and throughput in aggregate, and Hyperdisk Storage Pools will dynamically allocate them as your applications read and write data, dramatically simplifying performance planning and management and greatly boosting resource utilization.
With the Google Cloud console, you can begin using Advanced Performance Storage Pools right now.
The challenge of allocating the right amount of performance resources
Customers have reported difficulty striking a balance between fully utilizing their block storage performance resources and guaranteeing that their workloads have the resources they need to succeed. This is the "sum of peaks" problem: to guarantee their workloads are never starved for performance, customers provision their block storage at the maximum performance they have ever observed, yet most of the time their disks consume far less than that. The result is consistently low performance utilization across disks.
Reduce TCO by 40–55% with Advanced Performance Storage Pools
To address this issue, Google created Advanced Performance for Hyperdisk Storage Pools. With Advanced Performance Storage Pools, you can achieve high performance utilization and successful workloads at the same time. When you provision performance in aggregate in a Hyperdisk Storage Pool with Advanced Performance, the Storage Pool intelligently divides those resources among the disks in the pool as required, all without altering disk behavior, so your applications function normally. You can now plan for your performance requirements more easily and reduce your total cost of ownership without compromising application success or resource efficiency.
To show how Hyperdisk Storage Pools with Advanced Performance can reduce your total cost of ownership, consider two workloads: a database workload that needs 75K IOPS at peak (such as when quarterly reports are due) but only 35K at steady state, and an enterprise application suite that needs 25K IOPS at peak (such as when all users are signed in simultaneously) but only 10K at steady state.
Since these workloads' steady-state performance is around half of what they provision, they would run at roughly 40–45% performance utilization outside of a pool. But because the dynamic resource allocation of a Hyperdisk Storage Pool with Advanced Performance keeps these workloads at about 80% utilization, the customer can provision far fewer performance resources and reduce TCO by 40–55% without modifying their applications.

Workload: Enterprise applications
Performance requirements: peak IOPS 25K, avg IOPS 10K; peak throughput 400 MiB/s, avg throughput 160 MiB/s
GCP Storage Pools: $33K/mo. | Alternative cloud provider pools: $74K/mo. | TCO savings: 55%

Workload: Databases (e.g. SQL Server)
Performance requirements: peak IOPS 75K, avg IOPS 35K; peak throughput 1.2 GiB/s, avg throughput 560 MiB/s
GCP Storage Pools: $15K/mo. | Alternative cloud provider pools: $25K/mo. | TCO savings: 40%

Pricing used: IOPS $0.0095, throughput $0.076, and $81.92/TiB (includes 1 TiB capacity, 5K IOPS, and 200 MiB/s).
Plan ahead and simplify block storage performance
Advanced Performance Storage Pools are a major advancement in performance resource provisioning. Historically, infrastructure managers had to choose between accepting poor utilization and risking application failure in the name of operational efficiency, taking on significant management effort and actively overseeing each volume's performance.
Hyperdisk Advanced Performance Storage Pools make block storage performance management easy. Disk IOPS and throughput in an Advanced Performance Storage Pool are "thin-provisioned": you can provision up to five times the pool's IOPS and throughput to the disks in the pool, while each disk behaves exactly as a standalone Hyperdisk would. For example, a pool provisioned with 10K IOPS can hold disks provisioned with up to 50K IOPS in aggregate.
This results in great efficiency (you no longer need to actively manage the IOPS and throughput allocated to each disk) and eases deployment planning (you no longer need to guess precisely what performance each disk needs). Just create the disks you need, as performant as necessary, and let the Advanced Performance Storage Pool allocate resources on demand.
Early adopters of Advanced Performance Storage Pools, like REWE, have recognized the benefits of the product.
Start now
You create Hyperdisk Advanced Performance Storage Pools with the same process used for other Hyperdisk Storage Pools. Log in to the Google Cloud console, go to Compute Engine, and select Storage. Create your Storage Pool, choose the volume type (Balanced or Throughput), select Advanced Performance, and enter the total capacity and performance that the pool will require. Then, as you create new Hyperdisk volumes in the pool, you can start using the pool's capacity along with the added benefit of dynamically shared performance across your resources.
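The console flow above also has a gcloud equivalent; a sketch mirroring the command from the earlier post, with illustrative names and values:

gcloud compute storage-pools create my-advanced-pool \
    --zone=us-central1-a \
    --storage-pool-type=hyperdisk-balanced \
    --capacity-provisioning-type=advanced \
    --performance-provisioning-type=advanced \
    --provisioned-capacity=50tb \
    --provisioned-iops=50000 \
    --provisioned-throughput=2048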
We believe that Advanced Performance Storage Pools will significantly improve how you manage your applications and obtain optimal performance from them. Hyperdisk Advanced Performance Storage Pools are now serving in selected regions and zones. To get started, open the Google Cloud console and follow Google Cloud's documentation to create, use, and maintain your pools.
Read more on govindhtech.com