#CustomResourceDefinitions
Explore tagged Tumblr posts
govindhtech · 2 days ago
Text
Apigee APIM Operator for API Administration On Any Gateway
Google Cloud now provides the Apigee APIM Operator, a lightweight API management and API gateway tool for GKE environments. This release is a critical step towards making Apigee API management available on any gateway, anywhere.
The Kubernetes-based Apigee APIM Operator lets you build and manage API offerings. Cloud-native developers benefit from a command-line workflow built around standard Kubernetes tooling such as kubectl, and APIM resources let the operator keep your Google Kubernetes Engine cluster in sync with Apigee.
Advantages
For your business, the APIM Operator offers:
With the APIM Operator, API producers can manage and protect their APIs using Kubernetes resource definitions, applying the same tools and methods they already use for other Kubernetes resources.
API regulation at the load-balancer level streamlines networking configuration and simplifies API security and access control.
Kubernetes role-based access control (RBAC) and Apigee custom resource definitions enable fine-grained access control for platform administrators, infrastructure administrators, and API developers (a sketch of such a Role follows this list).
Integration with Kubernetes: The operator integrates Helm charts and Custom Resource Definitions to make cloud-native development easy.
Reduced Context Switching: The APIM Operator lets developers administer APIs from Kubernetes, eliminating the need to switch tools.
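To illustrate the RBAC point above, a platform administrator could grant an API-developer group access only to the operator's custom resources. The sketch below uses standard Kubernetes RBAC; the apim.googleapis.com API group and the resource names are assumptions made for illustration, so check the CRDs actually installed by the operator before using anything like it.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: apim-api-developer
  namespace: apps                       # hypothetical application namespace
rules:
- apiGroups: ["apim.googleapis.com"]    # assumed API group of the operator's CRDs
  resources: ["apiproducts", "apimextensionpolicies"]   # illustrative resource names
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: apim-api-developer-binding
  namespace: apps
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: apim-api-developer
subjects:
- kind: Group
  name: api-developers                  # hypothetical developer group
  apiGroup: rbac.authorization.k8s.io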
When to use the APIM Operator
API producers who want Kubernetes-native API management should use the APIM Operator. It is especially useful for cloud-native Kubernetes developers who want to manage their APIs with the same tools and methods they use for the rest of their workloads, and it lets Apigee customers add Cloud Native Computing Foundation (CNCF)-based API management features.
Limitations
The APIM Operator's Public Preview has certain restrictions:
Support is limited to REST APIs. Public Preview doesn't support GraphQL or gRPC.
The Public Preview edition supports 25 regional or global GKE Gateway resources and API management policies.
A single environment can have 25 APIM extension policies. Add extra APIM extension policies by creating a new environment.
API management policies can be attached to Gateway resources, but not to HTTPRoutes.
Public Preview does not support region extension; an APIM Operator that has already been set up cannot be moved to a different region.
What does this mean for you?
With Kubernetes-style YAML, the many cloud-native enterprises that use CNCF-standardized tooling can configure API management without switching tools.
APIM's integration with Kubernetes and CNCF toolchains reduces conceptual and operational complexity for platform administrators and service developers on Google Cloud.
Policy Management: administrators can use RBAC and APIM templates to control which policies different groups may apply, based on their needs. Adding Apigee policies to APIM templates gives users and administrators capabilities similar to those of Apigee Hybrid.
Key Features and Capabilities
The GA version lets users set up a GKE cluster and GKE Gateway to use an Apigee Hybrid instance for API management via a traffic extension (ext-proc callout). It supports ready-made day-zero configurations with workload customization and manages the API lifecycle through YAML policies that fit Kubernetes/CNCF toolchains.
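As a rough illustration of the pattern rather than the operator's exact schema, an API management policy is attached to a GKE Gateway by referencing the Gateway from a custom resource. The Gateway below uses the standard Gateway API; the policy's apiVersion, kind, and field names are assumptions made for this sketch, so consult the operator's installed CRDs for the real ones.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: prod-gateway
  namespace: apps
spec:
  gatewayClassName: gke-l7-global-external-managed   # GKE-managed external Application Load Balancer
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# Hypothetical policy resource: group, kind, and fields are placeholders for whatever the APIM Operator installs
apiVersion: apim.googleapis.com/v1alpha1
kind: APIMExtensionPolicy
metadata:
  name: apigee-apim
  namespace: apps
spec:
  targetRef:                     # Gateway API-style policy attachment to the Gateway above
    group: gateway.networking.k8s.io
    kind: Gateway
    name: prod-gateway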
Meeting Customer Needs
This functionality addresses the growing demand for developer-friendly API management solutions. Apigee was sometimes considered less agile owing to its complexity and the need to switch from kubectl to other tools. In response to this feedback, Google Cloud created the APIM Operator, which simplifies and improves API management.
Looking Ahead
Google Cloud is exploring gRPC and GraphQL support to cover more API types, building on the current GA version's robust foundation, and will notify the community as features and support are added. It is also considering raising the Gateway resource and policy attachment limits.
Google Cloud believes the APIM Operator will improve the developer experience and simplify API management for customers, and it looks forward to seeing how creatively you use this functionality in your apps.
0 notes
computingpostcom · 3 years ago
Text
Red Hat OpenShift Container Platform is a powerful platform created to provide IT organizations and developers with a hybrid cloud application platform. With this secure and scalable platform, you are able to deploy containerized applications with minimal configuration and management overhead. In this article we look at how you can perform an upgrade from OpenShift 4.8 to OpenShift 4.9.

OpenShift Container Platform 4.9 is supported on Red Hat Enterprise Linux CoreOS (RHCOS) 4.9, as well as on Red Hat Enterprise Linux (RHEL) 8.4 and 7.9. Red Hat recommends you run RHCOS machines on the control plane nodes, and either RHCOS or RHEL 8.4+/7.9 on the compute machines. The Kubernetes version used in OpenShift Container Platform 4.9 is 1.22, and in Kubernetes 1.22 a significant number of deprecated v1beta1 APIs were removed. OpenShift Container Platform 4.8.14 introduced a requirement that an administrator provide a manual acknowledgment before the cluster can be upgraded from OpenShift Container Platform 4.8 to 4.9. This helps prevent issues after an upgrade to OpenShift Container Platform 4.9 where removed APIs are still in use by workloads and components in the cluster.

Removed Kubernetes APIs in OpenShift 4.9
The following v1beta1 APIs are removed in OpenShift Container Platform 4.9, which uses Kubernetes 1.22:

Resource | API | Notable changes
APIService | apiregistration.k8s.io/v1beta1 | No
CertificateSigningRequest | certificates.k8s.io/v1beta1 | Yes
ClusterRole | rbac.authorization.k8s.io/v1beta1 | No
ClusterRoleBinding | rbac.authorization.k8s.io/v1beta1 | No
CSIDriver | storage.k8s.io/v1beta1 | No
CSINode | storage.k8s.io/v1beta1 | No
CustomResourceDefinition | apiextensions.k8s.io/v1beta1 | Yes
Ingress | extensions/v1beta1 | Yes
Ingress | networking.k8s.io/v1beta1 | Yes
IngressClass | networking.k8s.io/v1beta1 | No
Lease | coordination.k8s.io/v1beta1 | No
LocalSubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
MutatingWebhookConfiguration | admissionregistration.k8s.io/v1beta1 | Yes
PriorityClass | scheduling.k8s.io/v1beta1 | No
Role | rbac.authorization.k8s.io/v1beta1 | No
RoleBinding | rbac.authorization.k8s.io/v1beta1 | No
SelfSubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
StorageClass | storage.k8s.io/v1beta1 | No
SubjectAccessReview | authorization.k8s.io/v1beta1 | Yes
TokenReview | authentication.k8s.io/v1beta1 | No
ValidatingWebhookConfiguration | admissionregistration.k8s.io/v1beta1 | Yes
VolumeAttachment | storage.k8s.io/v1beta1 | No

You are required to migrate manifests and API clients to use the v1 API version. More information on migrating deprecated APIs can be found in the official Kubernetes documentation.

1) Evaluate OpenShift 4.8 Cluster for removed APIs
It is the responsibility of the Kubernetes/OpenShift administrator to properly evaluate all workloads and other integrations and identify where APIs removed in Kubernetes 1.22 are in use. Ensure you are on OpenShift 4.8 before trying to perform an upgrade to 4.9. The following command helps you identify the OCP release:

$ oc get clusterversions.config.openshift.io
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.15    True        False         5h45m   Cluster version is 4.8.15

You can then use the APIRequestCount API to track API requests and check whether any of them use one of the removed APIs. The following command can be used to identify APIs that will be removed in a future release but are currently in use. Focus on the REMOVEDINRELEASE output column.
$ oc get apirequestcounts
NAME                           REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
ingresses.v1beta1.extensions   1.22               2                       364

The results can be filtered further by using the -o jsonpath option:

oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'

Example output:

$ oc get apirequestcounts -o jsonpath='{range .items[?(@.status.removedInRelease!="")]}{.status.removedInRelease}{"\t"}{.metadata.name}{"\n"}{end}'
1.22    ingresses.v1beta1.extensions

With the API identified, you can examine the APIRequestCount resource for a given API version to help identify which workloads are using the API:

oc get apirequestcounts <resource>.<version>.<group> -o yaml

Example for the ingresses.v1beta1.extensions API:

$ oc get apirequestcounts ingresses.v1beta1.extensions
NAME                           REMOVEDINRELEASE   REQUESTSINCURRENTHOUR   REQUESTSINLAST24H
ingresses.v1beta1.extensions   1.22               3                       365

Migrate instances of removed APIs
For more information on migrating removed Kubernetes APIs, see the Deprecated API Migration Guide in the Kubernetes documentation.

2) Acknowledge Upgrade to OpenShift Container Platform 4.9
After the evaluation and the migration of removed Kubernetes APIs to v1 are complete, you can, as an OpenShift administrator, provide the acknowledgment required to proceed.
WARNING: It is the sole responsibility of the administrator to ensure that all uses of removed APIs have been resolved and the necessary migrations performed before providing this acknowledgment.
Run the following command to acknowledge that you have completed the evaluation and that the cluster can be upgraded to OpenShift Container Platform 4.9:

oc -n openshift-config patch cm admin-acks --patch '{"data":{"ack-4.8-kube-1.22-api-removals-in-4.9":"true"}}' --type=merge

Expected command output:

configmap/admin-acks patched

3) Begin Upgrade from OpenShift 4.8 To 4.9
Log in to the OpenShift web console and navigate to Administration → Cluster Settings → Details. Click on "Channel" and update the channel to fast-4.9 or stable-4.9, selecting it from the list of available channels.

The channel can also be changed from the command line with the below command syntax:

oc patch clusterversion version --type json -p '[{"op": "add", "path": "/spec/channel", "value": "<channel>"}]'

Where <channel> is replaced with either:
stable-4.9
fast-4.9
candidate-4.9

New cluster updates should be visible now. Use the "Update" link to initiate the upgrade from OpenShift 4.8 to OpenShift 4.9 and select the new version of OpenShift 4.9 that you will be upgrading to. The update process to OpenShift Container Platform 4.9 should begin shortly. You can also check the upgrade status from the CLI:

$ oc get clusterversions.config.openshift.io
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.15    True        True          2m26s   Working towards 4.9.0: 71 of 735 done (9% complete)

Once all the upgrades are completed, the output shows that you now have OpenShift cluster version 4.9. List all the available MachineHealthChecks to ensure everything is in a healthy state:

$ oc get machinehealthcheck -n openshift-machine-api
NAME                              MAXUNHEALTHY   EXPECTEDMACHINES   CURRENTHEALTHY
machine-api-termination-handler   100%

Check cluster components:

$ oc get cs
W1110 20:39:28.838732 2592723 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-3               Healthy   {"health":"true","reason":""}
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
etcd-2               Healthy   {"health":"true","reason":""}

List all cluster operators and review the current versions:

$ oc get co
NAME   VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication   4.9.5   True   False   False   32h
baremetal   4.9.5   True   False   False   84d
cloud-controller-manager   4.9.5   True   False   False   20d
cloud-credential   4.9.5   True   False   False   84d
cluster-autoscaler   4.9.5   True   False   False   84d
config-operator   4.9.5   True   False   False   84d
console   4.9.5   True   False   False   37d
csi-snapshot-controller   4.9.5   True   False   False   84d
dns   4.9.5   True   False   False   84d
etcd   4.9.5   True   False   False   84d
image-registry   4.9.5   True   False   False   84d
ingress   4.9.5   True   False   False   84d
insights   4.9.5   True   False   False   84d
kube-apiserver   4.9.5   True   False   False   84d
kube-controller-manager   4.9.5   True   False   False   84d
kube-scheduler   4.9.5   True   False   False   84d
kube-storage-version-migrator   4.9.5   True   False   False   8d
machine-api   4.9.5   True   False   False   84d
machine-approver   4.9.5   True   False   False   84d
machine-config   4.9.5   True   False   False   8d
marketplace   4.9.5   True   False   False   84d
monitoring   4.9.5   True   False   False   20d
network   4.9.5   True   False   False   84d
node-tuning   4.9.5   True   False   False   8d
openshift-apiserver   4.9.5   True   False   False   20d
openshift-controller-manager   4.9.5   True   False   False   7d8h
openshift-samples   4.9.5   True   False   False   8d
operator-lifecycle-manager   4.9.5   True   False   False   84d
operator-lifecycle-manager-catalog   4.9.5   True   False   False   84d
operator-lifecycle-manager-packageserver   4.9.5   True   False   False   20d
service-ca   4.9.5   True   False   False   84d
storage   4.9.5   True   False   False   84d

Check that all nodes are available and in a healthy state:

$ oc get nodes

Conclusion
In this blog post we have been able to perform an upgrade of OpenShift from version 4.8 to 4.9. Ensure all Operators previously installed through Operator Lifecycle Manager (OLM) are updated to their latest version in the latest channel selected during the upgrade. Also ensure that all machine config pools (MCPs) are running and not paused. For any issues experienced during the upgrade, you can reach out to us through our comments section.
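For example, those last two checks can be done quickly from the CLI. This is a minimal sketch using standard OpenShift commands; the exact columns vary by version:

$ oc get csv -A
$ oc get machineconfigpools
$ oc get machineconfigpools -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.paused}{"\n"}{end}'

The first command lists the ClusterServiceVersions installed by OLM so you can confirm operator versions, the second shows whether each machine config pool is updated and not degraded, and the jsonpath query prints the paused flag per pool (it should be empty or false).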
0 notes
foxutech · 3 years ago
Text
Kubernetes Custom Resource Definition (CRDs)
Kubernetes Custom Resource Definition (CRDs) #kubernetes #k8s #microservices #crd #crds #customresource ##customresourcedefinition
In the IT world, we mayn’t get always what we are want, especially with opensource, as there will be some feature still missing. If it is enterprise application or in-house, we have some option to get the feature enabled by request. With opensource, we should customize what we are looking for (if the tool/software supports). Like that even in Kubernetes, though it gives wide range of solutions,…
View On WordPress
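Before following the link to the full article, here is a minimal sketch of what a CustomResourceDefinition looks like; the group, kind, and field names are illustrative and not taken from the article:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com            # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames: ["ct"]
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              replicas:
                type: integer

Once applied with kubectl apply, the API server starts serving /apis/example.com/v1/namespaces/*/crontabs, and objects of kind CronTab can be created, listed, and watched like any built-in resource.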
0 notes
maximopro · 3 years ago
Text
How to fix garbled characters in logs and file names in Maximo Manage 8
In MAS and Maximo Manage, Japanese log messages, file names, and similar text in Pods are sometimes displayed as ???. In that case, you can fix it with the following steps: 1. Log in to the OpenShift console
2. From the menu, open Administration > CustomResourceDefinitions
3. Open ManageWorkspace
4. Open the Instances tab and select the target instance
5. Open the YAML tab and search for serverBundles:
6. Add jvmOptions: '-Dfile.encoding=UTF-8' to every bundle, or edit the existing jvmOptions
7. The Pods will start updating; wait until they have all finished. The garbled characters should now be fixed.
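For orientation, the edited fragment of the ManageWorkspace YAML might look roughly like the sketch below. The bundle names and the surrounding structure vary by installation and version (the path is assumed here to be spec.settings.deployment.serverBundles), so treat this as an illustration rather than the exact schema:

spec:
  settings:
    deployment:
      serverBundles:
      - name: all                  # example bundle name; repeat for every bundle in the list
        bundleType: all
        jvmOptions: '-Dfile.encoding=UTF-8'
      - name: cron
        bundleType: cron
        jvmOptions: '-Dfile.encoding=UTF-8'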
0 notes
for-the-user · 7 years ago
Text
heptio ark k8s cluster backups
How do we use it? (In this example, I am using Microsoft Azure's cloud.)
Prepare some cloud storage
Create a storage account and a blob container in the same subscription and resource group as the k8s cluster you want to be running backups on.
$ az storage account create \
    --name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $RESOURCE_GROUP \
    --sku Standard_LRS \
    --encryption-services blob \
    --https-only true \
    --kind BlobStorage \
    --access-tier Cool \
    --subscription $SUBSCRIPTION
$ az storage container create \
    -n $STORAGE_RESOURCE_NAME \
    --public-access off \
    --account-name $AZURE_STORAGE_ACCOUNT_ID \
    --subscription $SUBSCRIPTION
Get the storage account access key.
$ AZURE_STORAGE_KEY=`az storage account keys list \
    --account-name $AZURE_STORAGE_ACCOUNT_ID \
    --resource-group $RESOURCE_GROUP \
    --query '[0].value' \
    --subscription $SUBSCRIPTION \
    -o tsv`
Create a service principle with appropriate permissions for heptio ark to use to read and write to the storage account.
$ az ad sp create-for-rbac \
    --name "heptio-ark" \
    --role "Contributor" \
    --password $AZURE_CLIENT_SECRET \
    --subscription $SUBSCRIPTION
Finally get the service principle's id called a client id.
$ AZURE_CLIENT_ID=`az ad sp list \
    --display-name "heptio-ark" \
    --query '[0].appId' \
    --subscription $SUBSCRIPTION \
    -o tsv`
Provision ark
Next we provision an ark instance to our kubernetes cluster with a custom namespace. First clone the ark repo
$ git clone https://github.com/heptio/ark.git
You will need to edit 3 files.
ark/examples/common/00-prereqs.yaml ark/examples/azure/00-ark-deployment.yaml ark/examples/azure/10.ark-config.yaml
In these yamls, it tries to create a namespace called "heptio-ark" and then put things into that namespace. Change all of these references to a namespace you prefer. I called it "my-groovy-system".
In 10.ark-config.yaml, you also need to replace the placeholders YOUR_TIMEOUT and YOUR_BUCKET with actual values. In our case we use 15m and the value of $STORAGE_RESOURCE_NAME, which here is ark-backups.
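From memory of the Ark 0.9-era Azure example, the resulting config looked roughly like the sketch below; the file in the cloned repo is the authoritative version, so treat this only as a guide to where the placeholders belong:

apiVersion: ark.heptio.com/v1
kind: Config
metadata:
  namespace: my-groovy-system        # the namespace chosen above
  name: default
persistentVolumeProvider:
  name: azure
  config:
    apiTimeout: 15m                  # was YOUR_TIMEOUT
backupStorageProvider:
  name: azure
  bucket: ark-backups                # was YOUR_BUCKET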
Create the pre-requisites.
$ kubectl apply -f examples/common/00-prereqs.yaml
customresourcedefinition "backups.ark.heptio.com" created
customresourcedefinition "schedules.ark.heptio.com" created
customresourcedefinition "restores.ark.heptio.com" created
customresourcedefinition "configs.ark.heptio.com" created
customresourcedefinition "downloadrequests.ark.heptio.com" created
customresourcedefinition "deletebackuprequests.ark.heptio.com" created
customresourcedefinition "podvolumebackups.ark.heptio.com" created
customresourcedefinition "podvolumerestores.ark.heptio.com" created
customresourcedefinition "resticrepositories.ark.heptio.com" created
namespace "my-groovy-system" created
serviceaccount "ark" created
clusterrolebinding "ark" created
Create a secret object, which contains all of the azure ids we gathered in part 1.
$ kubectl create secret generic cloud-credentials \
    --namespace my-groovy-system \
    --from-literal AZURE_SUBSCRIPTION_ID=$SUBSCRIPTION \
    --from-literal AZURE_TENANT_ID=$TENANT_ID \
    --from-literal AZURE_RESOURCE_GROUP=$RESOURCE_GROUP \
    --from-literal AZURE_CLIENT_ID=$AZURE_CLIENT_ID \
    --from-literal AZURE_CLIENT_SECRET=$AZURE_CLIENT_SECRET \
    --from-literal AZURE_STORAGE_ACCOUNT_ID=$AZURE_STORAGE_ACCOUNT_ID \
    --from-literal AZURE_STORAGE_KEY=$AZURE_STORAGE_KEY
secret "cloud-credentials" created
Provision everything.
$ kubectl apply -f examples/azure/
$ kubectl get deployments -n my-groovy-system
NAME   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
ark    1         1         1            1           1h
$ kubectl get pods -n my-groovy-system
NAME                   READY   STATUS    RESTARTS   AGE
ark-7b86b4d5bd-2w5x7   1/1     Running   0          1h
$ kubectl get rs -n my-groovy-system
NAME             DESIRED   CURRENT   READY   AGE
ark-7b86b4d5bd   1         1         1       1h
$ kubectl get secrets -n my-groovy-system
NAME                  TYPE                                  DATA   AGE
ark-token-b5nm8       kubernetes.io/service-account-token   3      1h
cloud-credentials     Opaque                                7      1h
default-token-xg6x4   kubernetes.io/service-account-token   3      1h
At this point the ark server is running. To interact with it, we need to use a client.
Install the Ark client locally
Download one from here and unzip it and add it to your path. Here's a mac example:
$ wget https://github.com/heptio/ark/releases/download/v0.9.3/ark-v0.9.3-darwin-amd64.tar.gz
$ tar -xzvf ark-v0.9.3-darwin-amd64.tar.gz
$ mv ark /Users/mygroovyuser/bin/ark
$ ark --help
Take this baby for a test drive
Deploy an example thing. Ark provides something to try with.
$ kubectl apply -f examples/nginx-app/base.yaml
This creates a namespace called nginx-example and creates a deployment and service inside with a couple of nginx pods.
Take a backup.
$ ark backup create nginx-backup --include-namespaces nginx-example --namespace my-groovy-system
Backup request "nginx-backup" submitted successfully.
Run `ark backup describe nginx-backup` for more details.
$ ark backup get nginx-backup --namespace my-groovy-system
NAME           STATUS      CREATED                          EXPIRES   SELECTOR
nginx-backup   Completed   2018-08-21 15:57:59 +0200 CEST   29d
We can see in our Azure storage account container a backup has been created by heptio ark.
If we look inside the folder, we see some json and some gzipped stuff
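Backups can also run on a recurring schedule instead of being taken ad hoc, using the schedules.ark.heptio.com resource created by the prerequisites. A hedged example with the Ark 0.9-era CLI (cron syntax; check ark schedule create --help for the exact flags):

$ ark schedule create nginx-daily --schedule "0 1 * * *" --include-namespaces nginx-example --namespace my-groovy-system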
Let's simulate a disaster.
$ kubectl delete namespace nginx-example namespace "nginx-example" deleted
And try to restore from the Ark backup.
$ ark restore create --from-backup nginx-backup --namespace my-groovy-system
Restore request "nginx-backup-20180821160537" submitted successfully.
Run `ark restore describe nginx-backup-20180821160537` for more details.
$ ark restore get --namespace my-groovy-system
NAME                          BACKUP         STATUS      WARNINGS   ERRORS   CREATED                          SELECTOR
nginx-backup-20180821160537   nginx-backup   Completed   0          0        2018-08-21 16:05:38 +0200 CEST
Nice.
And to delete backups...
$ ark backup delete nginx-backup --namespace my-groovy-system
Are you sure you want to continue (Y/N)? Y
Request to delete backup "nginx-backup" submitted successfully.
The backup will be fully deleted after all associated data (disk snapshots, backup files, restores) are removed.
$ ark backup get nginx-backup --namespace my-groovy-system
An error occurred: backups.ark.heptio.com "nginx-backup" not found
And its gone.
2 notes · View notes
musiccosmosru · 7 years ago
Link
The widespread misconception that Kubernetes was not ready for stateful applications such as MySQL and MongoDB has had a surprisingly long half-life. This misconception has been driven by a combination of the initial focus on stateless applications within the community and the relatively late addition of support for persistent storage to the platform.
Further, even after initial support for persistent storage, the kinds of higher-level platform primitives that brought ease of use and flexibility to stateless applications were missing for stateful workloads. However, not only has this shortcoming been addressed, but Kubernetes is fast becoming the preferred platform for stateful cloud-native applications.
Today, one can find first-class Kubernetes storage support for all of the major public cloud providers and for the leading storage products for on-premises or hybrid environments. While the availability of Kubernetes-compatible storage has been a great enabler, Kubernetes support for the Container Storage Interface (CSI) specification is even more important.
The CSI initiative not only introduces a uniform interface for storage vendors across container orchestrators, but it also makes it much easier to provide support for new storage systems, to encourage innovation, and, most importantly, to provide more options for developers and operators.
While increasing storage support for Kubernetes is a welcome trend, it is neither a sufficient nor primary reason why stateful cloud-native applications will be successful. To step back for a second, the driving force behind the success of a platform like Kubernetes is that it is focused on developers and applications, and not on vendors or infrastructure. In response, the Kubernetes development community has stepped in with significant contributions to create appropriate abstractions that bridge the gap between raw infrastructure such as disks and volumes and the applications that use that infrastructure.
Kubernetes StatefulSets, Operators, and Helm charts
First, to make it much simpler to build stateful applications, support for orchestration was added in the form of building blocks such as StatefulSets. StatefulSets automatically handle the hard problems of gracefully scaling and upgrading stateful applications, and of preserving network identity across container restarts. StatefulSets provide a great foundation to build, automate, and operate highly available applications such as databases.
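For concreteness, a minimal sketch of the StatefulSet pattern is shown below. The names, image, and sizes are placeholders, and a production database would also need environment variables, probes, and the headless Service that backs the stable network identities:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                  # headless Service giving each pod a stable DNS name (db-0.db, db-1.db, ...)
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: mysql:8.0           # illustrative image
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
  volumeClaimTemplates:            # each replica gets its own PersistentVolumeClaim (data-db-0, data-db-1, ...)
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi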
Second, to make it easier to manage stateful applications at scale and without human intervention, the “Operator” concept was introduced. A Kubernetes Operator encodes, in software, the manual playbooks that go into operating complex applications. The benefits of these operators can be clearly seen in the operators published for MySQL, Couchbase, and multi-database environments.
In conjunction with these orchestration advances, the flourishing of Helm, the equivalent of a package manager for Kubernetes, has made it simple to deploy not only different databases but also higher-level applications such as GitLab that draw on multiple data stores. Helm uses a packaging format called “charts” to describe applications and their Kubernetes resources. A single-line command gets you started, and Helm charts can be easily embedded in larger applications to provide the persistence for any stack. In addition, multiple reference examples are available in the form of open source charts that can be easily customized for the needs of custom applications.
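To illustrate that single-line start, with the Helm 2 CLI of the period and the community stable charts repository (release and chart names here are just examples):

$ helm install --name my-database stable/mysql
$ helm status my-database

Chart values, supplied with --set flags or a values file, typically control storage size, credentials, and replication settings, so the same chart can back anything from a throwaway test database to a persistent production instance.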
Kanister and the K10 Platform
At Kasten, we have been working on two projects, Kanister and K10, that make it dramatically easier for both developers and operators to consume all of the above advancements. Driven by extensive customer input, these projects don’t just abstract away some of the technical complexity inherent in Kubernetes but also present a homogeneous operational experience across applications and clouds at scale.
Kanister, an open-source project, has been driven by the increasing need for a universal and application-aware data management plane—one that supports multiple data services and performs data management tasks at the application level. Developers today frequently draw on multiple data sources for a single app (polyglot persistence), consume data services that are eventually consistent (e.g., Cassandra), and have complex requirements including consistent data capture, custom data masking, and application-centric backup and recovery.
Kanister addresses these challenges by providing a uniform control plane API for data-related actions such as backup, restore, masking, etc. At the same time, Kanister allows domain experts to capture application-specific data management actions in blueprints or recipes that can be easily shared and extended. While Kanister is based on the Kubernetes Operator pattern and Kubernetes CustomResourceDefinitions, those details are hidden from developers, allowing them to focus on their application’s requirements for these data APIs. Instead of learning how to write a Kubernetes Controller, they simply author actions for their data service in whatever language they prefer, ranging from Bash scripts to Go. Today, public examples cover everything from MongoDB backups to deep integration with PostgreSQL’s Point-in-Time-Recovery functionality.
Whereas Kanister handles data at an application level, significant operator challenges also exist for managing data within multiple applications and microservices spread across clusters, clouds, and development environments. We at Kasten introduced the K10 Platform to make it easy for enterprises to build, deploy, and manage stateful containerized applications at scale. With a unique application-centric view, K10 uses policy-driven automation to deliver capabilities such as compliance, data mobility, data manipulation, auditing, and global visibility for your cloud-native applications. For stateful applications, K10 takes the complexity out of a number of use cases including backup and recovery, cross-cluster and multi-cloud application migration, and disaster recovery.
The state of stateful Kubernetes
The need for products such as Kanister and the K10 Platform is being driven by the accelerating growth in the use of stateful container-based applications. A recent survey run by the Kubernetes Special Interest Group on Applications showed that more than 50 percent of users were running some kind of relational database or NoSQL system in their Kubernetes clusters. This number will only go up.
Further, we not only see the use of traditional database systems in cloud-native environments but also the growth of database systems that are built specifically for resiliency, manageability, and observability in a true cloud-native manner. As next-generation systems like Vitess, YugaByte, and CockroachDB mature, expect to see even more innovation in this space.
As we turn the page on this first chapter of the evolution of stateful cloud-native applications, the future holds both a number of opportunities as well as challenges. Given the true cloud portability being offered by cloud-native platforms such as Kubernetes, moving application data around multi-cluster, multi-cloud, and even planet-scale environments will require a new category of distributed systems to be developed.
Data gravity is a major challenge that will need to be overcome. New efficient distribution and transfer algorithms will be needed to work around the speed of light. Allowing enterprise platform operators to work at the unprecedented scale that these new cloud-native platforms enable will require a fundamental, application-centric rethinking of how the data in these environments is managed. What we are doing at Kasten with our K10 enterprise platform and Kanister not only tackles these issues but also sets the stage for true cloud-native data management.
Niraj Tolia is co-founder and CEO of Kasten, an early-stage startup working on cloud-native storage infrastructure. Previously, he was the senior director of software engineering at EMC/Maginatics where he was responsible for the CloudBoost family of products that focused on in-cloud data protection. Prior to EMC’s acquisition of Maginatics, he was a founding member of the Maginatics team and played multiple roles within the company including vice president of engineering, chief architect, and staff engineer. Niraj received his PhD, MS, and BS degrees in computer engineering from Carnegie Mellon University.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected].
<a style="display:none" rel="follow" href="http://megatheme.ir/" title="قالب وردپرس">قالب وردپرس</a>
The post How Kubernetes conquers stateful cloud-native applications appeared first on MusicCosmoS.
0 notes
computingpostcom · 3 years ago
Text
The Kubernetes Dashboard is a web-based user interface that allows users to easily interact with a Kubernetes cluster. It lets users manage, monitor and troubleshoot applications as well as the cluster itself. We already looked at how to deploy the dashboard in this tutorial. In this guide, we are going to explore integrating the Kubernetes Dashboard with Active Directory to ease user and password management.

Kubernetes supports two categories of users:
Service Accounts: This is the default method supported by Kubernetes. One uses service account tokens to access the dashboard.
Normal Users: Any other authentication method configured in the cluster. For this, we will use a project called Dex. Dex is an OpenID Connect provider created by CoreOS. It takes care of the translation between Kubernetes tokens and Active Directory users.

Setup Requirements:
You will need an IP on your network for the Active Directory server. In my case, this IP will be 172.16.16.16.
You will also need a working Kubernetes cluster. The nodes of this cluster should be able to communicate with the Active Directory IP. Take a look at how to create a Kubernetes cluster using kubeadm or rke if you don't have one yet.
You will also need a domain name that supports a wildcard DNS entry. I will use the wildcard DNS "*.kubernetes.mydomain.com" to route external traffic to my Kubernetes cluster.

Step 1: Deploy Dex on the Kubernetes Cluster
We will first need to create a namespace and a service account for Dex, then configure RBAC rules for the Dex service account before we deploy it. This is to ensure that the application has the proper permissions.

1. Create a dex-namespace.yaml file.
$ vim dex-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: auth-system

2. Create the namespace for Dex.
$ kubectl apply -f dex-namespace.yaml

3. Create a dex-rbac.yaml file.
$ vim dex-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dex
  namespace: auth-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: dex
  namespace: auth-system
rules:
- apiGroups: ["dex.coreos.com"]
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apiextensions.k8s.io"]
  resources: ["customresourcedefinitions"]
  verbs: ["create"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dex
  namespace: auth-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: dex
subjects:
- kind: ServiceAccount
  name: dex
  namespace: auth-system

4. Create the permissions for Dex.
$ kubectl apply -f dex-rbac.yaml

5. Create a dex-configmap.yaml file. Make sure you modify the issuer URL, the redirect URIs, the client secret and the Active Directory configuration accordingly.
$ vim dex-configmap.yaml kind: ConfigMap apiVersion: v1 metadata: name: dex namespace: auth-system data: config.yaml: | issuer: https://auth.kubernetes.mydomain.com/ web: http: 0.0.0.0:5556 frontend: theme: custom telemetry: http: 0.0.0.0:5558 staticClients: - id: oidc-auth-client redirectURIs: - https://kubectl.kubernetes.mydomain.com/callback - http://dashtest.kubernetes.mydomain.com/oauth2/callback name: oidc-auth-client secret: secret connectors: - type: ldap id: ldap name: LDAP config: host: 172.16.16.16:389 insecureNoSSL: true insecureSkipVerify: true bindDN: ldapadmin bindPW: 'KJZOBwS9DtB' userSearch: baseDN: OU=computingpost departments,DC=computingpost ,DC=net username: sAMAccountName idAttr: sn nameAttr: givenName emailAttr: mail groupSearch: baseDN: CN=groups,OU=computingpost,DC=computingpost,DC=net userMatchers: - userAttr: sAMAccountName
groupAttr: memberOf nameAttr: givenName oauth2: skipApprovalScreen: true storage: type: kubernetes config: inCluster: true 6. Configure Dex. $ kubectl apply -f dex-configmap.yaml 7. Create the dex-deployment.yaml file. $ vim dex-deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: dex name: dex namespace: auth-system spec: replicas: 1 selector: matchLabels: app: dex strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: labels: app: dex revision: "1" spec: containers: - command: - /usr/local/bin/dex - serve - /etc/dex/cfg/config.yaml image: quay.io/dexidp/dex:v2.17.0 imagePullPolicy: IfNotPresent name: dex ports: - containerPort: 5556 name: http protocol: TCP resources: terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /etc/dex/cfg name: config - mountPath: /web/themes/custom/ name: theme dnsPolicy: ClusterFirst serviceAccountName: dex restartPolicy: Always schedulerName: default-scheduler securityContext: terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 items: - key: config.yaml path: config.yaml name: dex name: config - name: theme emptyDir: 8. Deploy Dex.   $ kubectl apply -f dex-deployment.yaml 9. Create a dex-service.yaml file. $ vim dex-service.yaml apiVersion: v1 kind: Service metadata: name: dex namespace: auth-system spec: selector: app: dex ports: - name: dex port: 5556 protocol: TCP targetPort: 5556 10. Create a service for the Dex deployment.   $ kubectl apply -f dex-service.yaml 11. Create a dex-ingress secret. Make sure the certificate data for the cluster is at the location specified or change this path to point to it. If you have a Certificate Manager installed in your cluster, You can skip this step. $ kubectl create secret tls dex --key /data/Certs/ kubernetes.mydomain.com.key --cert /data/Certs/ kubernetes.mydomain.com.crt -n auth-system 12. Create a dex-ingress.yaml file. Change the host parameters and your certificate issuer name accordingly. $ vim dex-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: dex namespace: auth-system annotations: kubernetes.io/tls-acme: "true" ingress.kubernetes.io/force-ssl-redirect: "true" spec: tls: - secretName: dex hosts: - auth.kubernetesuat.mydomain.com rules: - host: auth.kubernetes.mydomain.com http: paths: - backend: serviceName: dex servicePort: 5556 13. Create the ingress for the Dex service. $ kubectl apply -f dex-ingress.yaml Wait a couple of minutes until the cert manager generates a certificate for Dex. You can check if Dex is deployed properly by browsing to: https://auth.kubernetesuat.mydomain.com/.well-known/openid-configuration Step 2: Configure the Kubernetes API to access Dex as OpenID connect provider Next, We will look at how to configure the API server for both a RKE and Kubeadm Cluster. To enable the OIDC plugin, we need to configure the several flags on the API server as shown here: A. RKE CLUSTER 1. SSH to your rke node. $ ssh [email protected] 2. Edit the Kubernetes API configuration. Add the OIDC parameters and modify the issuer URL accordingly. $ sudo vim ~/Rancher/cluster.yml kube-api: service_cluster_ip_range: 10.43.0.0/16 # Expose a different port range for NodePort services service_node_port_range: 30
000-32767 extra_args: # Enable audit log to stdout audit-log-path: "-" # Increase number of delete workers delete-collection-workers: 3 # Set the level of log output to debug-level v: 4 #ADD THE FOLLOWING LINES oidc-issuer-url: https://auth.kubernetes.mydomain.com/ oidc-client-id: oidc-auth-client oidc-ca-file: /data/Certs/kubernetes.mydomain.com.crt oidc-username-claim: email oidc-groups-claim: groups extra_binds: - /data/Certs:/data/Certs ##ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS FILE PATH IN ALL MASTER NODES 3. The Kubernetes API will restart by itself once you run an RKE UP. $ rke up B. KUBEADM CLUSTER 1. SSH to your node. $ ssh [email protected] 2. Edit the Kubernetes API configuration. Add the OIDC parameters and modify the issuer URL accordingly. $ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ... command: - /hyperkube - apiserver - --advertise-address=10.10.40.30 #ADD THE FOLLOWING LINES: ... - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/ - --oidc-client-id=oidc-auth-client ##ENSURE THE WILDCARD CERTIFICATES ARE PRESENT IN THIS FILE PATH IN ALL MASTER NODES: - --oidc-ca-file=/etc/ssl/kubernetes/kubernetes.mydomain.com.crt - --oidc-username-claim=email - --oidc-groups-claim=groups ... 3. The Kubernetes API will restart by itself. STEP 3: Deploy the Oauth2 proxy and configure the kubernetes dashboard ingress 1. Generate a secret for the Oauth2 proxy. python -c 'import os,base64; print base64.urlsafe_b64encode(os.urandom(16))' 2. Copy the generated secret and use it for the OAUTH2_PROXY_COOKIE_SECRET value in the next step. 3. Create an oauth2-proxy-deployment.yaml file. Modify the OIDC client secret, the OIDC issuer URL, and the Oauth2 proxy cookie secret accordingly. $ vim oauth2-proxy-deployment.yaml apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: k8s-app: oauth2-proxy name: oauth2-proxy namespace: auth-system spec: replicas: 1 selector: matchLabels: k8s-app: oauth2-proxy template: metadata: labels: k8s-app: oauth2-proxy spec: containers: - args: - --cookie-secure=false - --provider=oidc - --client-id=oidc-auth-client - --client-secret=*********** - --oidc-issuer-url=https://auth.kubernetes.mydomain.com/ - --http-address=0.0.0.0:8080 - --upstream=file:///dev/null - --email-domain=* - --set-authorization-header=true env: # docker run -ti --rm python:3-alpine python -c 'import secrets,base64; print(base64.b64encode(base64.b64encode(secrets.token_bytes(16))));' - name: OAUTH2_PROXY_COOKIE_SECRET value: *********** image: sguyennet/oauth2-proxy:header-2.2 imagePullPolicy: Always name: oauth2-proxy ports: - containerPort: 8080 protocol: TCP 4. Deploy the Oauth2 proxy. $ kubectl apply -f oauth2-proxy-deployment.yaml 5. Create an oauth2-proxy-service.yaml file. $ vim oauth2-proxy-service.yaml apiVersion: v1 kind: Service metadata: labels: k8s-app: oauth2-proxy name: oauth2-proxy namespace: auth-system spec: ports: - name: http port: 8080 protocol: TCP targetPort: 8080 selector: k8s-app: oauth2-proxy 6. Create a service for the Oauth2 proxy deployment. $ kubectl apply -f oauth2-proxy-service.yaml 7. Create a dashboard-ingress.yaml file. Modify the dashboard URLs and the host parameter accordingly. $ vim dashboard-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: name: kubernetes-dashboard namespace: kube-system annotations: nginx.ingress.kubernetes.io/auth-url: "https://dashboard.kubernetes.mydomain.com/oauth2/auth" nginx.ingress.k
ubernetes.io/auth-signin: "https://dashboard.kubernetes.mydomain.com/oauth2/start?rd=https://$host$request_uri$is_args$args" nginx.ingress.kubernetes.io/secure-backends: "true" nginx.ingress.kubernetes.io/configuration-snippet: | auth_request_set $token $upstream_http_authorization; proxy_set_header Authorization $token; spec: rules: - host: dashboard.kubernetes.mydomain.com http: paths: - backend: serviceName: kubernetes-dashboard servicePort: 443 path: /

8. Create the ingress for the dashboard service.
$ kubectl apply -f dashboard-ingress.yaml

9. Create a kubernetes-dashboard-external-tls ingress secret. Make sure the certificate data for the cluster is at the location specified, or change this path to point to it. Skip this step if you are using a certificate manager.
$ kubectl create secret tls kubernetes-dashboard-external-tls --key /data/Certs/kubernetes.mydomain.com.key --cert /data/Certs/kubernetes.mydomain.com.crt -n auth-system

10. Create an oauth2-proxy-ingress.yaml file. Modify the certificate manager issuer and the host parameters accordingly.
$ vim oauth2-proxy-ingress.yaml apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/tls-acme: "true" ingress.kubernetes.io/force-ssl-redirect: "true" name: oauth-proxy namespace: auth-system spec: rules: - host: dashboard.kubernetes.mydomain.com http: paths: - backend: serviceName: oauth2-proxy servicePort: 8080 path: /oauth2 tls: - hosts: - dashboard.kubernetes.mydomain.com secretName: kubernetes-dashboard-external-tls

11. Create the ingress for the Oauth2 proxy service.
$ kubectl apply -f oauth2-proxy-ingress.yaml

12. Create the role binding. The general syntax is:
$ kubectl create rolebinding <username>-rolebinding-<namespace> --clusterrole=admin --user=<username> -n <namespace>
e.g. kubectl create rolebinding mkemei-rolebinding-default --clusterrole=admin --user=[email protected] -n default
// Note that usernames are case sensitive and we need to confirm the correct format before applying the rolebinding.

13. Wait a couple of minutes and browse to https://dashboard.kubernetes.mydomain.com.

14. Log in with your Active Directory user. As you can see below, [email protected] should be able to see and modify the default namespace.
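Because the API server was started with --oidc-groups-claim=groups, access can also be granted to a whole Active Directory group rather than user by user. A hedged sketch follows; the group name that actually appears in the token depends on the Dex groupSearch configuration (its nameAttr in the ConfigMap above), so verify it against a real login before relying on it:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ad-developers-admin
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: Group
  name: developers                 # hypothetical AD group name as surfaced by Dex
  apiGroup: rbac.authorization.k8s.io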
0 notes