codecraftshop · 2 years
How to deploy web application in openshift command line
To deploy a web application in OpenShift using the command-line interface (CLI), follow these steps: Create a new project: Before deploying your application, you need to create a new project. You can do this using the oc new-project command. For example, to create a project named “myproject”, run the following command:

oc new-project myproject

Create an application: Use the oc…
qcs01 · 3 months
Deploying Your First Application on OpenShift
Deploying an application on OpenShift can be straightforward with the right guidance. In this tutorial, we'll walk through deploying a simple "Hello World" application on OpenShift. We'll cover creating an OpenShift project, deploying the application, and exposing it to the internet.
Prerequisites
OpenShift CLI (oc): Ensure you have the OpenShift CLI installed. You can download it from the OpenShift CLI Download page.
OpenShift Cluster: You need access to an OpenShift cluster. You can set up a local cluster using Minishift or use an online service like OpenShift Online.
Step 1: Log In to Your OpenShift Cluster
First, log in to your OpenShift cluster using the oc command.
oc login https://<your-cluster-url> --token=<your-token>
Replace <your-cluster-url> with the URL of your OpenShift cluster and <your-token> with your OpenShift token.
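The token is usually copied from the web console's "Copy Login Command" option, or printed from an already-authenticated session with `oc whoami -t`. As a minimal sketch (the cluster URL and token below are placeholders, not real values), the login command can be composed from variables:

```shell
# Placeholder values -- substitute your real cluster URL and token.
CLUSTER_URL="https://api.mycluster.example.com:6443"
TOKEN="sha256~abc123"

# Print the command rather than running it, since no live cluster is assumed here.
echo "oc login ${CLUSTER_URL} --token=${TOKEN}"
```

Keeping the values in variables makes it easy to reuse them in scripts without retyping the URL.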
Step 2: Create a New Project
Create a new project to deploy your application.
oc new-project hello-world-project
Step 3: Create a Simple Hello World Application
For this tutorial, we'll use a simple Node.js application. Create a new directory for your project and initialize a new Node.js application.
mkdir hello-world-app
cd hello-world-app
npm init -y
Create a file named server.js and add the following content:
const express = require('express');
const app = express();
const port = 8080;

app.get('/', (req, res) => res.send('Hello World from OpenShift!'));

app.listen(port, () => {
  console.log(`Server running at http://localhost:${port}/`);
});
Install the necessary dependencies.
npm install express
Step 4: Create a Dockerfile
Create a Dockerfile in the same directory with the following content:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
Step 5: Build and Push the Docker Image
Log in to your Docker registry (e.g., Docker Hub) and push the Docker image.
docker login
docker build -t <your-dockerhub-username>/hello-world-app .
docker push <your-dockerhub-username>/hello-world-app
Replace <your-dockerhub-username> with your Docker Hub username.
Step 6: Deploy the Application on OpenShift
Create a new application in your OpenShift project using the Docker image.
oc new-app <your-dockerhub-username>/hello-world-app
OpenShift will automatically create the necessary deployment configuration, service, and pod for your application.
Step 7: Expose the Application
Expose your application to create a route, making it accessible from the internet.
oc expose svc/hello-world-app
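Behind the scenes, `oc expose` generates a Route object pointing at the service. A rough sketch of the equivalent YAML (the names follow the service created earlier in this tutorial; the target port is an assumption based on the app listening on 8080):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-world-app
spec:
  to:
    kind: Service
    name: hello-world-app
  port:
    targetPort: 8080
```

Writing the route as YAML instead of using `oc expose` is useful when you want to version-control your deployment manifests.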
Step 8: Access the Application
Get the route URL for your application.
oc get routes
Open the URL in your web browser. You should see the message "Hello World from OpenShift!".
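Since `oc get routes` prints a table, the hostname can be extracted with standard text tools. The snippet below runs against a captured sample of that output (the hostname is hypothetical, not from a real cluster):

```shell
# Illustrative only: parse the HOST column from sample `oc get routes` output.
sample='NAME            HOST/PORT
hello-world-app hello-world-app-hello-world-project.apps.example.com'

# Print the second column of the row whose first column matches the route name.
echo "$sample" | awk '$1 == "hello-world-app" { print $2 }'
```

In a live session the same filter can be piped directly (`oc get routes | awk ...`), or you can use `oc get route hello-world-app -o jsonpath='{.spec.host}'` for a script-friendly form.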
Conclusion
Congratulations! You've successfully deployed a simple "Hello World" application on OpenShift. This tutorial covered the basic steps, from setting up your project and application to exposing it on the internet. OpenShift offers many more features for managing applications, so feel free to explore its documentation for more advanced topics.
For more details, visit www.qcsdclabs.com
computingpostcom · 2 years
You have a running OpenShift cluster powering your production microservices and are worried about etcd data backup? In this guide we show you how to easily back up etcd and push the backup data to an AWS S3 object store. etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects.

In any OpenShift cluster administration, it is a recommended practice to back up your cluster's etcd data regularly and store it in a secure location, ideally outside the OpenShift Container Platform environment. This can be an NFS server share, a secondary server in your infrastructure, or a cloud environment. It is also recommended to take etcd backups during non-peak usage hours, as the operation is blocking in nature. Ensure an etcd backup is performed after any OpenShift cluster upgrade: during cluster restoration, an etcd backup taken from the same z-stream release must be used. As an example, an OpenShift Container Platform 4.6.3 cluster must use an etcd backup that was taken from 4.6.3.

Step 1: Login to one Master Node in the Cluster

The etcd cluster backup has to be performed on a single invocation of the backup script on a master host. Do not take a backup from each master host. Log in to one master node either through SSH or a debug session:

# SSH Access
$ ssh core@

# Debug session
$ oc debug node/

For a debug session you need to change your root directory to the host:

sh-4.6# chroot /host

If the cluster-wide proxy is enabled, be sure that you have exported the NO_PROXY, HTTP_PROXY, and HTTPS_PROXY environment variables.

Step 2: Perform etcd Backup on OpenShift 4.x

OpenShift cluster access as a user with the cluster-admin role is required to perform this operation. Before you proceed, check whether the proxy is enabled:

$ oc get proxy cluster -o yaml

If you have a proxy enabled, the httpProxy, httpsProxy, and noProxy fields will have values set.
Run the cluster-backup.sh script to initiate the etcd backup process. You should pass a path where the backup is saved:

$ mkdir /home/core/etcd_backups
$ sudo /usr/local/bin/cluster-backup.sh /home/core/etcd_backups

Here is my command execution output:

3e53f83f3c02b43dfa8d282265c1b0f9789bcda827c4e13110a9b6f6612d447c
etcdctl version: 3.3.18
API version: 3.3
found latest kube-apiserver-pod: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-115
found latest kube-controller-manager-pod: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-24
found latest kube-scheduler-pod: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-26
found latest etcd-pod: /etc/kubernetes/static-pod-resources/etcd-pod-11
Snapshot saved at /home/core/etcd_backups/snapshot_2021-03-16_134036.db
snapshot db and kube resources are successfully saved to /home/core/etcd_backups

List the files in the backup directory:

$ ls -1 /home/core/etcd_backups/
snapshot_2021-03-16_134036.db
static_kuberesources_2021-03-16_134036.tar.gz

$ du -sh /home/core/etcd_backups/*
1.5G /home/core/etcd_backups/snapshot_2021-03-16_134036.db
76K  /home/core/etcd_backups/static_kuberesources_2021-03-16_134036.tar.gz

There will be two files in the backup:

snapshot_<timestamp>.db: This file is the etcd snapshot.
static_kuberesources_<timestamp>.tar.gz: This file contains the resources for the static pods. If etcd encryption is enabled, it also contains the encryption keys for the etcd snapshot.

Step 3: Push the Backup to AWS S3 (From Bastion Server)

Log in from the Bastion server and copy the backup files:

scp -r core@serverip:/home/core/etcd_backups ~/

Download the AWS CLI tool:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Install the unzip tool:

sudo yum -y install unzip

Extract the downloaded file:

unzip awscliv2.zip

Install the AWS CLI:

$ sudo ./aws/install
You can now run: /usr/local/bin/aws --version

Confirm the installation by checking the version:
$ aws --version
aws-cli/2.1.30 Python/3.8.8 Linux/3.10.0-957.el7.x86_64 exe/x86_64.rhel.7 prompt/off

Create an OpenShift backups bucket:

$ aws s3 mb s3://openshiftbackups
make_bucket: openshiftbackups

Create an IAM user:

$ aws iam create-user --user-name backupsonly

Create an AWS policy for the backups user – a user able to write to S3 only:

cat >aws-s3-uploads-policy.json
lakshya01 · 3 years
Industry use cases of OpenShift
OpenShift is a cloud development Platform as a Service (PaaS) developed by Red Hat. It is an open-source development platform, which enables developers to develop and deploy their applications on cloud infrastructure. It is very helpful in developing cloud-enabled services. This tutorial will help you understand OpenShift and how it can be used in the existing infrastructure. All the examples and code snippets used in this tutorial are tested and working code, which can be simply used in any OpenShift setup by changing the current defined names and variables.
OpenShift
OpenShift is a cloud-enabled application Platform as a Service (PaaS). It’s an open-source technology that helps organizations move their traditional application infrastructure and platform from physical, virtual mediums to the cloud.
OpenShift supports a very large variety of applications, which can be easily developed and deployed on the OpenShift cloud platform. OpenShift basically supports three kinds of platforms for the developers and users.
Infrastructure as a Service (IaaS)
In this format, the service provider provides hardware-level virtual machines with some pre-defined virtual hardware configuration. There are multiple competitors in this space, including AWS, Google Cloud, Rackspace, and many more.
The main drawback of having IaaS after a long procedure of setup and investment is that one is still responsible for installing and maintaining the operating system and server packages, managing the network of infrastructure, and taking care of the basic system administration.
Software as a Service (SaaS)
With SaaS, one has the least worry about the underlying infrastructure. It is as simple as plug-and-play, wherein the user just has to sign up for the services and start using them. The main drawback with this setup is that one can only perform the minimal amount of customization allowed by the service provider. One of the most common examples of SaaS is Gmail, where the user just needs to log in and start using it. The user can also make some minor modifications to his account. However, it is not very useful from the developer's point of view.
Platform as a Service (PaaS)
It can be considered as a middle layer between SaaS and IaaS. The primary target of PaaS is developers, for whom a development environment can be spun up with a few commands. These environments are designed in such a way that they can satisfy all development needs, right from having a web application server with a database. To do this, you just require a single command and the service provider does the rest for you.
Why Use OpenShift?
OpenShift provides a common platform for enterprise units to host their applications on the cloud without worrying about the underlying operating system. This makes it very easy to use, develop, and deploy applications on the cloud. One of its key features is that it provides managed hardware and network resources for all kinds of development and testing. With OpenShift, the PaaS developer has the freedom to design their required environment to specification.
OpenShift provides different kinds of service level agreements when it comes to service plans.
Free − This plan is limited to three gears with 1GB space for each.
Bronze − This plan includes 3 gears and expands up to 16 gears with 1GB space per gear.
Silver − This is the 16-gear plan of Bronze, however, it has a storage capacity of 6GB with no additional cost.
Other than the above features, OpenShift also offers an on-premises version known as OpenShift Enterprise. In OpenShift, developers have the leverage to design scalable and non-scalable applications, and these designs are implemented using HAProxy servers.
Features
There are multiple features supported by OpenShift. Few of them are −
Multiple Language Support
Multiple Database Support
Extensible Cartridge System
Source Code Version Management
One-Click Deployment
Multi Environment Support
Standardized Developers’ workflow
Dependency and Build Management
Automatic Application Scaling
Responsive Web Console
Rich Command-line Toolset
Remote SSH Login to Applications
Rest API Support
Self-service On-Demand Application Stack
Built-in Database Services
Continuous Integration and Release Management
IDE Integration
Remote Debugging of Applications
Types of OpenShift

1. OpenShift Online
OpenShift Online is the OpenShift community offering with which one can quickly build, deploy, and scale containerized applications on the public cloud. It is Red Hat's public cloud application development and hosting platform, which enables automated provisioning, management, and scaling of applications, letting the developer focus on writing application logic.
2. OpenShift Container Platform
OpenShift Container Platform is an enterprise platform that helps multiple teams, such as development and IT operations teams, build and deploy containerized infrastructure. All containers built in OpenShift use the very reliable Docker containerization technology, which can be deployed in any data center or on publicly hosted cloud platforms.
3. OpenShift Dedicated
This is another offering added to the portfolio of OpenShift, wherein there is a customer choice of hosting a containerized platform on any of the public cloud of their choice. This gives the end user a true sense of multi-cloud offering, where they can use OpenShift on any cloud which satisfies their needs.
karonbill · 2 years
IBM C1000-150 Practice Test Questions
C1000-150 IBM Cloud Pak for Business Automation v21.0.3 Administration is the new replacement for the C1000-091 exam. PassQuestion has designed the C1000-150 Practice Test Questions to ensure first-attempt success in the IBM Cloud Pak for Business Automation v21.0.3 Administration exam. You just need to learn all the IBM C1000-150 exam questions and answers carefully, and you will be fully ready to attempt your IBM C1000-150 exam confidently. The best part is that the C1000-150 Practice Test Questions include the authentic and accurate answers you need to learn to clear the IBM C1000-150 exam.
IBM Cloud Pak for Business Automation v21.0.3 Administration (C1000-150)
The IBM Certified Administrator on IBM Cloud Pak for Business Automation v21.0.3 is an intermediate-level certification for an experienced system administrator who has extensive knowledge and experience of IBM Cloud Pak for Business Automation v21.0.3. This administrator can perform tasks related to Day 1 activities (installation and configuration). The administrator also handles Day 2 management and operation, security, performance, updates (including installation of fix packs and patches), customization, and/or problem determination. This exam does not cover installation of Red Hat OpenShift.
Recommended Skills
Basic concepts of Docker and Kubernetes
Ability to write scripts in YAML
Working knowledge of Linux
Working knowledge of the OpenShift command-line interface, web GUI, and monitoring
Basic knowledge of Kafka, Elasticsearch, Kibana, and HDFS
Working knowledge of relational databases and LDAP
Basic knowledge of event-driven architecture
Exam Information
Exam Code: C1000-150
Number of questions: 60
Number of questions to pass: 39
Time allowed: 90 minutes
Languages: English
Certification: IBM Certified Administrator - IBM Cloud Pak for Business Automation v21.0.3
Exam Sections
Section 1: Planning and Install  26%
Section 2: Troubleshooting  27%
Section 3: Security  17%
Section 4: Resiliency  10%
Section 5: Management  20%
View Online IBM Cloud Pak for Business Automation v21.0.3 C1000-150 Free Questions
1. Which statement is true when installing Cloud Pak for Business Automation via the Operator Hub and Form view?
A. Ensure the Persistent Volume Claim (PVC) is defined in the namespace.
B. Use a login install ID that has at minimum Editor permission.
C. The cluster can only be set up using silent mode.
D. The secret key for admin.registrykey is automatically generated.
Answer: A

2. After installing a starter deployment of the Cloud Pak for Business Automation, which statement is true about using the LDAP user registry?
A. Only three users are predefined: cp4admin, user1, and user2, but others can be added manually.
B. Predefined users’ passwords can be modified by updating the icp4adeploy-openldap-customldif secret.
C. New users can be added by using the route to the openldap pod from an OpenLDAP browser.
D. New users can be added by the predefined cp4admin user through the admin console of ZenUI.
Answer: B

3. What might cause OpenShift to delete a pod and try to redeploy it again?
A. Liveness probe detects an unhealthy state.
B. Readiness probe returns a failed state.
C. Pod accessed in debug mode.
D. Unauthorized access attempted.
Answer: A

4. After the root CA is replaced, what is the first item that must be completed in order to reload services?
A. Delete the default token.
B. Replace helm certificates.
C. Delete old certificates.
D. Restart related services.
Answer: A

5. While not recommended, if other pods are deployed in the same namespace that is used for the Cloud Pak for Business Automation deployment, what default network policy is used?
A. deny-all
B. allow-all
C. allow-same-namespace
D. restricted
Answer: B

6. What feature of a Kubernetes deployment of CP4BA contributes to high availability?
A. Dynamic Clustering through WebSphere
B. WebSphere Network Deployment application clustering
C. Usage of EJB protocol
D. Crashed pod restart managed by Kubernetes kubelet
Answer: D

7. How are Business Automation Insights business events processed and stored for dashboards?
A. Kafka is responsible for aggregating and storing summary events.
B. Flink jobs write data to Elasticsearch.
C. Business Automation Insights uses a custom Cloudant database to store events.
D. The HDFS datalake serves this purpose.
Answer: B
chrisshort · 4 years
If you work in a restricted network environment, you may encounter some problems when using the Red Hat Openshift command line to connect to a Red Hat cluster. One possible issue is a TLSHandshake error when you use the oc login command. This problem can occur with Kubernetes, as well.
sandlerresearch · 4 years
Macquarie Bank: Enterprise Tech Ecosystem Series published on
https://www.sandlerresearch.org/macquarie-bank-enterprise-tech-ecosystem-series.html
Macquarie Bank: Enterprise Tech Ecosystem Series
Summary
Macquarie Bank, a subsidiary of Macquarie Group Limited, offers retail financial and commercial banking services in Australia and select financial services at offshore locations. Macquarie Bank comprises two operating segments: Banking and Financial Services, and Commodities and Global Markets. Banking and Financial Services offers retail banking and financial services including personal banking (mortgages, savings and transaction accounts, credit cards, and vehicle finance), wealth management (cash management, financial advice, and private banking), and business banking.
Commodities and Global Markets includes businesses such as cash equities, credit markets, commodity markets and finance, and equity derivatives and trading. Macquarie Bank also comprises Corporate, a non-operating segment that includes head office and central service groups, including Treasury. Macquarie Bank also acts as an investment intermediary for corporate, institutional, retail, and government clients across the globe.
This report explores Macquarie Bank’s digital transformation strategies. It also provides extensive insight into its technology initiatives, covering partnerships and product launches. In addition, the report includes details of the company’s estimated ICT budget for 2020.
Scope
– Macquarie rolled out an open banking platform in 2017 to allow customers to securely link and share their banking data with fintech startups and technology companies, in order to receive personalized services through third-party applications. Since customers log in via third-party APIs, they do not need to share bank login details, thus ensuring data security.
– In 2017, Macquarie introduced an upgraded mobile banking app that allows users to efficiently access and manage their finances.
– The bank is migrating and deploying its applications to the cloud to run in containers hosted on Google and Amazon Web Services cloud platforms. The bank selected Red Hat’s OpenShift Container Platform as a platform-as-a-service solution to run its retail digital banking platforms.
Reasons to Buy
– Learn about Macquarie Bank’s fintech operations, including investments, product launches, partnerships, and acquisitions.
– Gain insight into its fintech strategies and innovation initiatives.
– Discover which technology themes are under the group’s focus.
codecraftshop · 2 years
Create project in openshift webconsole and command line tool
To create a project in OpenShift, you can use either the web console or the command-line interface (CLI). Create Project using Web Console: Login to the OpenShift web console. In the top navigation menu, click on the “Projects” dropdown menu and select “Create Project”. Enter a name for your project and an optional display name and description. Select an optional project template and click…
qcs01 · 3 months
Deploying a Containerized Application with Red Hat OpenShift
Introduction
In this post, we'll walk through the process of deploying a containerized application using Red Hat OpenShift, a powerful Kubernetes-based platform for managing containerized workloads.
What is Red Hat OpenShift?
Red Hat OpenShift is an enterprise Kubernetes platform that provides developers with a full set of tools to build, deploy, and manage applications. It integrates DevOps automation tools to streamline the development lifecycle.
Prerequisites
Before we begin, ensure you have the following:
A Red Hat OpenShift cluster
Access to the OpenShift command-line interface (CLI)
A containerized application (Docker image)
Step 1: Setting Up Your OpenShift Environment
First, log in to your OpenShift cluster using the CLI:
oc login https://your-openshift-cluster:6443
Step 2: Creating a New Project
Create a new project for your application:
oc new-project my-app
Step 3: Deploying Your Application
Deploy your Docker image using the oc new-app command:
oc new-app my-docker-image
Step 4: Exposing Your Application
Expose your application to create a route and make it accessible:
oc expose svc/my-app
Use Cases
OpenShift is ideal for deploying microservices architectures, CI/CD pipelines, and scalable web applications — scenarios where its built-in routing, build automation, and scaling features are most useful.
Best Practices
Use health checks to ensure your applications are running smoothly.
Implement resource quotas to prevent any single application from consuming too many resources.
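As a sketch of the two practices above, a container spec can declare liveness and readiness probes, and a project can carry a ResourceQuota. The paths, names, and limits below are illustrative assumptions, not values from this tutorial:

```yaml
# Probe snippet for a container spec (illustrative endpoints and thresholds)
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  initialDelaySeconds: 5
---
# Project-level quota capping total CPU and memory requests
apiVersion: v1
kind: ResourceQuota
metadata:
  name: app-quota
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
```

A failing liveness probe causes the pod to be restarted, while a failing readiness probe merely removes it from service endpoints; the quota prevents one project from starving its neighbors.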
Performance and Scalability
To optimize performance, consider using horizontal pod autoscaling. This allows OpenShift to automatically adjust the number of pods based on CPU or memory usage.
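A horizontal pod autoscaler can be created imperatively (`oc autoscale deployment/my-app --min=2 --max=10 --cpu-percent=75`) or declared as YAML. A minimal sketch, assuming a Deployment named my-app and illustrative replica bounds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75
```

Note that CPU-based autoscaling only works when the pods declare CPU requests, since utilization is computed relative to the requested amount.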
Security Considerations
Ensure your images are scanned for vulnerabilities before deployment. OpenShift provides built-in tools for image scanning and compliance checks.
Troubleshooting
If you encounter issues, check the logs of your pods:
oc logs pod-name
Conclusion
Deploying applications with Red Hat OpenShift is straightforward and powerful. By following best practices and utilizing the platform's features, you can ensure your applications are scalable, secure, and performant.
computingpostcom · 2 years
Red Hat recently announced the general availability of Red Hat Advanced Cluster Management for Kubernetes (RHACM) v2.2. The RHACM tool provides a central management console from which you can manage multiple Kubernetes-based clusters across data centers, public clouds, and private clouds. You can easily use the multicluster hub to create Red Hat OpenShift Container Platform clusters on selected providers, or import existing Kubernetes-based clusters. It becomes easy to take control of your application deployments with management capabilities for cluster creation and application lifecycle, and to provide security and compliance for all of them across data centers and hybrid cloud environments.

With Red Hat Advanced Cluster Management for Kubernetes, clusters and applications are all visible and managed from a single console, with built-in security policies. It becomes easy to run your operations from anywhere that Red Hat OpenShift runs, and to manage any Kubernetes cluster in your fleet. The new release includes the Open Policy Agent (OPA) operator for tighter integration, adds a new Argo CD integration, and more to help you manage and automate your Kubernetes clusters at scale. Below are some of the key features in the v2.2 release:

Import and manage OpenShift clusters such as Azure Red Hat OpenShift, OpenShift Dedicated, OpenShift on OpenStack, and OpenShift on IBM Z.
Customized metrics and dashboards: customization of Grafana dashboards based on metrics you define, along with the predefined metrics, to create personalized views of what is important to you.
Contribute to and ship Open Policy Agent (OPA) as part of ACM: support of OPA policies by distributing the OPA operator to the fleet.
Compliance Operator support: run OpenSCAP scans (via the Compliance Operator) against the fleet, and surface the compliance results in ACM.
Argo CD integration: utilize the fleet information from ACM and provide it to Argo CD, ensuring your applications are compliant and secure.
Install Red Hat Advanced Cluster Management on OpenShift 4.x

In the next steps we walk you through the process of installing Red Hat Advanced Cluster Management on OpenShift 4.x. You should have a working OpenShift 4.x cluster before you proceed with the installation steps.

Step 1: Create rhacm project

Let's start by creating a new project for the Red Hat Advanced Cluster Management deployment. From the CLI:

oc new-project rhacm

For UI project creation, it is done under Home > Projects > Create Project. Confirm the current working project is the one created.

Step 2: Install Red Hat Advanced Cluster Management Operator

Log in to the OpenShift web console, navigate to Operators > OperatorHub, and search for "Advanced Cluster Management". Click the Install button to begin installation of the operator. Use the Operator-recommended namespace or the namespace we created in the first step. Choose the "Update Channel" and "Approval Strategy", then hit the "Subscribe" button. The Operator installation status can be checked under the "Installed Operators" section.

Step 3: Create the MultiClusterHub custom resource

In the OpenShift Container Platform console navigation, select Installed Operators > Advanced Cluster Management for Kubernetes. Select the MultiClusterHub tab, select Create MultiClusterHub, then update the default values in the YAML file according to your needs. Wait for the installation to complete; upon completion the state should change to "Running".

Step 4: Access Advanced Cluster Management for Kubernetes Console

Check the route for Advanced Cluster Management for Kubernetes under "Networking" > "Routes". Open the URL of your hub in a new tab and log in with your OpenShift user credentials. You should be presented with a dashboard. To access the local cluster, use the "Go to Clusters" link:
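For reference, the MultiClusterHub resource created in Step 3 is small; a minimal sketch (assuming the rhacm namespace from Step 1, with an empty spec taking the operator defaults) looks like:

```yaml
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: rhacm
spec: {}
```

The same resource can be applied from the CLI with `oc apply -f` instead of the console form, which is handy for GitOps-style installs.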
Important: The local-cluster namespace is used for the imported self-managed hub cluster. You must not have a local-cluster namespace on your cluster prior to installing. After the local-cluster namespace is created for the hub cluster, anyone who has access to the local-cluster namespace is automatically granted cluster administrator access. For security reasons, do not give anyone access to the local-cluster namespace who does not already have cluster-administrator access.

You can click on a listed cluster to view more details. We have successfully installed and configured Red Hat Advanced Cluster Management on OpenShift 4.x. In our next guides we'll discuss managing clusters, applications, and security, along with troubleshooting tips that will come in handy during cluster lifecycle management.
netmetic · 5 years
Deploying a Single Solace PubSub+ Event Broker on OpenShift Origin
“OpenShift is an open source container application platform by Red Hat based on the Kubernetes container orchestrator for enterprise app development and deployment.” It is powered by OKD, the origin community distribution of Kubernetes.
In this post, I will show you how you can easily deploy a Solace PubSub+ Event Broker on OpenShift Origin 3.11. Needless to say, to follow along, you will need to have an OpenShift deployment handy.
As you may already know, Solace is a messaging company, known for its PubSub+ Event Broker. PubSub+ Event Broker can be deployed on premise, in cloud, and on several PaaS platforms such as OpenShift. Solace makes it easy for you to deploy PubSub+ Event Broker with different types of configurations (single node deployment, multi-node high availability deployment, etc.) via OpenShift templates. Today, we will focus on single node deployment.
Note that Solace has detailed instructions on different ways to deploy PubSub+ Event Broker on OpenShift on its GitHub page. Additionally, Solace also has some quickstart samples in a different repository, but for this blog post, we will be using the more detailed version. The following steps are meant to show you how to easily follow those instructions.
Also, Solace recently released PubSub+ 9.4.0EA (Early Access). To deploy that specific version, it has created a separate branch called ‘SecurityEnhancements’ in which the message broker gets deployed in an unprivileged container without any additional Linux capabilities required. I will be deploying 9.4.0EA in this post.
Let’s begin!
Step 1: Log in to OpenShift
Run the commands below to connect via SSH to your OpenShift master node and log in to OpenShift:
[centos@ ~]$ oc login
Authentication required for (openshift)
Username: admin
Password:
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project
Step 2: Download OpenShift Template
Run the commands below to download the OpenShift template from Solace’s GitHub page:
[centos@ ~]$ mkdir ~/workspace
[centos@ ~]$ cd ~/workspace
[centos@ workspace]$ git clone https://github.com/SolaceProducts/solace-openshift-quickstart.git -b SecurityEnhancements
Cloning into 'solace-openshift-quickstart'...
remote: Enumerating objects: 19, done.
remote: Counting objects: 100% (19/19), done.
remote: Compressing objects: 100% (18/18), done.
remote: Total 232 (delta 5), reused 4 (delta 0), pack-reused 213
Receiving objects: 100% (232/232), 1.91 MiB | 668.00 KiB/s, done.
Resolving deltas: 100% (104/104), done.
[centos@ workspace]$ cd solace-openshift-quickstart
Step 3: Create an OpenShift Project
Next, we will create and configure an OpenShift project called solace-pubsub that meets the requirements of Solace’s event broker deployment using the prepareProject.sh script:
[centos@ solace-openshift-quickstart]$ sudo ~/workspace/solace-openshift-quickstart/scripts/prepareProject.sh solace-pubsub
Already logged into OpenShift as system:admin
Now using project "solace-pubsub" on server .
You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
role "admin" added: "admin"
Granting the solace-pubsub project policies and SCC privileges for correct operation...
role "edit" added: "system:serviceaccount:solace-pubsub:default"
cluster role "storage-admin" added: "admin"
Once the project has been created, run the commands below to select or enter it:
[centos@ solace-openshift-quickstart]$ oc project solace-pubsub
Now using project "solace-pubsub" on server .
Step 4: Deploy PubSub+ Event Broker
Great, now all we have to do is start the necessary services to spin up our broker using the template that Solace has provided. We will use the messagebroker_singlenode_template.yaml template in ~/workspace/solace-openshift-quickstart/templates/.
MESSAGEBROKER_ADMIN_PASSWORD, one of the arguments required by this template, is the Base64-encoded password for your admin username. You can generate the Base64-encoded password with the following command:

[centos@ templates]$ echo -n 'admin' | base64
YWRtaW4=
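If you want to double-check the value before passing it to the template, you can round-trip the encoding with plain coreutils; decoding the string should give back exactly the password you started with:

```shell
# Encode the admin password; -n keeps the trailing newline out of the encoding.
encoded=$(echo -n 'admin' | base64)
echo "$encoded"     # YWRtaW4=

# Decode it again to verify the round trip.
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"     # admin
```

The same check works for any password you choose; the -n flag is the important part, since a stray newline would change the encoded value.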
Alright, it is finally time to start the services with the commands below:
[centos@ templates]$ oc process -f messagebroker_singlenode_template.yaml DEPLOYMENT_NAME=test-singlenode MESSAGEBROKER_STORAGE_SIZE=30Gi MESSAGEBROKER_ADMIN_PASSWORD=YWRtaW4= | oc create -f -
secret/test-singlenode-solace-secrets created
configmap/test-singlenode-solace created
service/test-singlenode-solace-discovery created
service/test-singlenode-solace created
statefulset.apps/test-singlenode-solace created
Give it about a minute and then run the following command to get the external IP:
[centos@ templates]$ oc get svc
NAME                               TYPE           CLUSTER-IP   EXTERNAL-IP                                                                    PORT(S)                                                                                                                 AGE
test-singlenode-solace             LoadBalancer                aa4ca731ff6a711e9b11706a37272a39-1081382337.ap-northeast-1.elb.amazonaws.com   22:31508/TCP,8080:31300/TCP,55555:31135/TCP,55003:32629/TCP,55443:31725/TCP,943:30512/TCP,80:32479/TCP,443:32489/TCP   8m
test-singlenode-solace-discovery   ClusterIP      None                                                                                        8080/TCP                                                                                                                8m
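If you would rather capture that external hostname in a script than read it off the table, a small awk filter over the oc get svc output does the trick. The sketch below runs against a condensed, hypothetical copy of the output (columns shortened and a placeholder cluster IP added so the lines stay readable); in a live session you would pipe oc get svc straight into awk:

```shell
# Condensed, hypothetical sample of 'oc get svc' output (columns shortened).
svc_output='NAME                               TYPE           CLUSTER-IP   EXTERNAL-IP                                                                    PORT(S)          AGE
test-singlenode-solace             LoadBalancer   172.30.1.1   aa4ca731ff6a711e9b11706a37272a39-1081382337.ap-northeast-1.elb.amazonaws.com   8080:31300/TCP   8m
test-singlenode-solace-discovery   ClusterIP      None         <none>                                                                         8080/TCP         8m'

# Print the EXTERNAL-IP column (the 4th) of the LoadBalancer row.
external=$(printf '%s\n' "$svc_output" | awk '$2 == "LoadBalancer" { print $4 }')
echo "$external"
```

In practice you would replace the captured sample with a live call, e.g. `oc get svc | awk '$2 == "LoadBalancer" { print $4 }'`.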
Now, use the Load Balancer's external address at port 8080 to access these services. In my case, that would be: http://aa4ca731ff6a711e9b11706a37272a39-1081382337.ap-northeast-1.elb.amazonaws.com:8080
You should now see Solace's PubSub+ Standard login page, where you can enter your username and password and click Login. That will lead you to Solace PubSub+ Manager, where you can see your default VPN. Click the default VPN to see more details about it.
And that’s it! Your single PubSub+ Event Broker is up and running on OpenShift!
Step 5: Terminating Your PubSub+ Event Broker
Finally, you might want to terminate the broker if you no longer need it. To do so, you will first need to stop your services:
[centos@ templates]$ oc process -f messagebroker_singlenode_template.yaml DEPLOYMENT_NAME=test-singlenode | oc delete -f -
secret "test-singlenode-solace-secrets" deleted
configmap "test-singlenode-solace" deleted
Then, delete your Persistent Volume by deleting your Persistent Volume Claim (PVC):
[centos@ ~]$ oc get pvc
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-test-singlenode-solace-0   Bound    pvc-a4cfb5cd-f6a7-11e9-b117-06a37272a390   30Gi       RWO            gp2            1h
[centos@ ~]$ oc delete pvc data-test-singlenode-solace-0
persistentvolumeclaim "data-test-singlenode-solace-0" deleted
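The PVC deletion is asynchronous, so scripted teardowns usually poll until the resource is actually gone. The helper below is a generic sketch (wait_until_fails is a hypothetical name, not an oc feature); it is demonstrated with a plain file, but the check command could just as well be `oc get pvc data-test-singlenode-solace-0`:

```shell
# Retry a command until it fails (i.e. until the resource it checks is gone).
wait_until_fails() {
    tries=$1; shift
    i=0
    while [ "$i" -lt "$tries" ]; do
        if ! "$@" >/dev/null 2>&1; then
            return 0        # check failed => resource is gone
        fi
        sleep 0.1
        i=$((i + 1))
    done
    return 1                # still present after all tries
}

# Demonstration with a temporary file standing in for the PVC.
tmp=$(mktemp)
( sleep 0.3; rm -f "$tmp" ) &       # "deletion" completes shortly
wait_until_fails 50 test -e "$tmp" && echo "gone"
```

Swapping `test -e "$tmp"` for the oc check turns this into a teardown guard before deleting the project.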
The delete may take a few seconds, so be patient. Once it is done, you can delete your OpenShift project.
[centos@ ~]$ oc delete project solace-pubsub
project.project.openshift.io "solace-pubsub" deleted
Your broker has now been terminated!
I hope you found this post useful. For more information, visit PubSub+ for Developers. If you have any questions, post them to the Solace Developer Community.
The post Deploying a Single Solace PubSub+ Event Broker on OpenShift Origin appeared first on Solace.
karonbill · 2 years
IBM C1000-130 Questions and Answers
If you are worried about preparing for the C1000-130 IBM Cloud Pak for Integration V2021.2 Administration exam, consider PassQuestion. It provides real C1000-130 questions and answers that will help you get remarkable results. The questions are designed on the pattern of the real exam, so you can sit the IBM Cloud Pak for Integration V2021.2 Administration exam more confidently, with a clear idea of the real exam scenario. Make sure you go through the questions and answers multiple times to ensure your success.
C1000-130 IBM Cloud Pak for Integration V2021.2 Administration
An IBM Certified Administrator on IBM Cloud Pak for Integration V2021.2 is an experienced system administrator who has extensive knowledge and experience with IBM Cloud Pak for Integration V2021.2 in multi-cloud environments. This administrator can perform the intermediate to advanced tasks related to daily management and operation, security, performance, configuration of enhancements (including fix packs and patches), customization and/or problem determination.
C1000-130 Exam Details
Exam Code: C1000-130
Exam Name: IBM Cloud Pak for Integration V2021.2 Administration
Number of questions: 62
Number of questions to pass: 42
Time allowed: 90 minutes
Languages: English
Price: $200 USD
C1000-130 Exam Topics
Section 1: Planning and Installation - 20%
Section 2: Configuration - 19%
Section 3: Platform Administration - 25%
Section 4: Product capabilities, licensing and governance - 13%
Section 5: Product Administration and Troubleshooting - 23%
View Online IBM Cloud Pak for Integration V2021.2 Administration C1000-130 Free Questions
In Cloud Pak for Integration, which user role can replace default Keys and Certificates?
A. Cluster Manager
B. Super-user
C. System user
D. Cluster Administrator
Answer: D
An account lockout policy can be created when setting up an LDAP server for the Cloud Pak for Integration platform. What is this policy used for?
A. It warns the administrator if multiple login attempts fail.
B. It prompts the user to change the password.
C. It deletes the user account.
D. It restricts access to the account if multiple login attempts fail.
Answer: D
Which two Red Hat OpenShift Operators should be installed to enable OpenShift Logging?
A. OpenShift Console Operator
B. OpenShift Logging Operator
C. OpenShift Log Collector
D. OpenShift Centralized Logging Operator
E. OpenShift Elasticsearch Operator
Answer: B, E
Which diagnostic information must be gathered and provided to IBM Support for troubleshooting the Cloud Pak for Integration instance?
A. Standard OpenShift Container Platform logs.
B. Platform Navigator event logs.
C. Cloud Pak for Integration activity logs.
D. Integration tracing activity reports.
Answer: C
An administrator has just installed the OpenShift cluster as the first step of installing Cloud Pak for Integration. What is an indication of successful completion of the OpenShift cluster installation, prior to any other cluster operation?
A. The command "which oc" shows that the OpenShift Command Line Interface (oc) is successfully installed.
B. The cluster credentials are included at the end of the .openshift_install.log file.
C. The command "oc get nodes" returns the list of nodes in the cluster.
D. The OpenShift Admin console can be opened with the default user and will display the cluster statistics.
Answer: D
Which capability describes and catalogs the APIs of Kafka event sources and socializes those APIs with application developers?
A. Gateway Endpoint Management
B. REST Endpoint Management
C. Event Endpoint Management
D. API Endpoint Management
Answer: C
magicsoma · 7 years
New Post has been published on My Own World~!
New Post has been published on http://blog.seabow.pe.kr/?p=7356
Red Hat - OpenShift Online: A First Taste
This post is about OpenShift Online, the PaaS solution that is hot these days. Some of you may already know this, but...

OpenShift Online has a trial-tier offering that you can use for free. (For small workloads, of course.)
First, click the link below. Link: https://www.openshift.com/devpreview/register.html

When the web page opens, click "LOGIN WITH RED HAT". (We need to create an account first, right?)

As you can see above, you can create a regular account, but linking with the social (SNS) accounts shown is also possible. If you have one of those accounts, try linking it.

Once account creation is complete and your email is verified, you can sign up for a product.
There are two options: a Starter version and a Pro version. As you can see, Starter is the FREE version, while Pro is the product for running real services. Since this is just a taste test, I select FREE.

Then select a Cluster/Region. A Korea region would be nice, but there are only US East and US West. I select US East (Virginia).

Finally, review the details of your request and click Confirm. (Some of this has changed since then.) Account-activation information will then arrive by email.
Once your account is activated, you can log in to the console.

After that, you can create a project and deploy services. (It is very similar to regular OpenShift.) The resources are small, so you cannot run large services,

but I think it is more than enough for testing or for getting a feel for OpenShift.
gamemodustk-blog · 7 years
GETTING STARTED WITH CONTAINERS
1.1. Overview
Docker has rapidly become one of the premier projects for containerizing applications. This topic provides a hands-on approach to getting started with Docker in Red Hat Enterprise Linux 7 and RHEL Atomic Host by obtaining and using Docker images and working with Docker containers.
1.2. Background
The Docker project provides the means of packaging applications in lightweight containers. Running applications inside Docker containers offers the following advantages:

Smaller than virtual machines: Because Docker images contain only the content needed to run an application, saving and sharing is much more efficient with Docker containers than it is with virtual machines (which include entire operating systems).

Improved performance: Likewise, since you are not running an entirely separate operating system, a container will typically run faster than an application that carries with it the overhead of a whole new virtual machine.

Secure: Because a Docker container typically has its own network interfaces, file system, and memory, the application running in that container can be isolated and secured from other activities on the host computer.

Flexible: With an application's run-time requirements included with the application in the container, a Docker container is capable of being run in multiple environments.

Currently, you can run Docker containers on Red Hat Enterprise Linux 7 (RHEL 7) Server and Red Hat Enterprise Linux Atomic Host (based on RHEL 7) systems. If you are new to RHEL Atomic Host, you can learn more about it from the RHEL Atomic Host 7 Installation and Configuration Guide or the upstream Project Atomic site. Project Atomic produces smaller derivatives of RPM-based Linux distributions (RHEL, Fedora, and CentOS) that are made specifically to run Docker containers in OpenStack, VirtualBox, Linux KVM, and several different cloud environments.
This topic will help you get started with Docker in RHEL 7 and RHEL Atomic Host. Besides offering you some hands-on ways of trying out Docker, it also describes how to:

Access RHEL-based Docker images from the Red Hat Registry

Incorporate RHEL-entitled software into your containers

If you are interested in more details on how Docker works, refer to the following:

Release Notes: Refer to the Atomic Host and Containers section of the RHEL 7 Release Notes for an overview of Docker and related features in RHEL 7.

Docker Project Site: From the Docker site, you can learn about Docker from the "What is Docker?" page and the "Getting Started" page. There is also a Docker Documentation page you can refer to.

Docker README: After you install the docker package, refer to the README.md file in the /usr/share/doc/docker-1* directory.

Docker man pages: Again, with docker installed, type man docker to learn about the docker command. Then refer to separate man pages for each docker option (for example, type man docker-image to read about the docker image option).
NOTE
Currently, to run the docker command in RHEL 7 and RHEL Atomic Host you must have root privileges. In the procedures, this is indicated by the command prompt appearing as a hash sign (#). Configuring sudo will also work, if you prefer not to log in directly to the root user account.
1.3. GETTING DOCKER IN RHEL 7
To get an environment where you can develop Docker containers, you can install a Red Hat Enterprise Linux 7 system to act as a development system as well as a container host. The docker package itself is stored in a RHEL Extras repository (see the Red Hat Enterprise Linux Extras Life Cycle article for a description of support policies and life-cycle information for the Red Hat Enterprise Linux Extras channel).

Using the RHEL 7 subscription model, if you want to create Docker images or containers, you must properly register and entitle the host computer on which you build them. When you use yum install within a container to add packages, the container automatically has access to the entitlements available from the RHEL 7 host, so it can get RPM packages from any repository enabled on that host.

Note: The docker packages and other container-related packages are available for the RHEL Server and RHEL Atomic Host editions. They are not available for Workstation or other variants of RHEL.

Install the RHEL Server edition: If you are ready to begin, you can start by installing a Red Hat Enterprise Linux system (Server edition) as described in the Red Hat Enterprise Linux 7 Installation Guide.

Register RHEL: Once RHEL 7 is installed, register the system. You will be prompted to enter your user name and password. Note that the user name and password are the same as your login credentials for the Red Hat Customer Portal.
# subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: ********
Password: **********

Choose pool ID: Determine the pool ID of a subscription that includes Red Hat Enterprise Linux Server. Type the following at a shell prompt to display a list of all subscriptions that are available for your system, then attach the pool ID of one that meets that requirement:

# subscription-manager list --available
    (find a valid RHEL pool ID)
# subscription-manager attach --pool=pool_id
Enable repositories: Enable the following repositories, which will allow you to install the docker package and related software:

# subscription-manager repos --enable=rhel-7-server-rpms
# subscription-manager repos --enable=rhel-7-server-extras-rpms
# subscription-manager repos --enable=rhel-7-server-optional-rpms
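Since those three commands differ only in the repository name, provisioning scripts often drive them from a list. The sketch below is a dry run: it only prints the commands it would execute, so it is safe to try on any machine:

```shell
# Repositories needed for docker and related packages on RHEL 7.
repos="rhel-7-server-rpms rhel-7-server-extras-rpms rhel-7-server-optional-rpms"

for repo in $repos; do
    # Dry run: print the command instead of executing subscription-manager.
    echo "subscription-manager repos --enable=$repo"
done
```

Dropping the echo turns the loop into the real thing on a registered host.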
It is possible that some Red Hat subscriptions include enabled repositories that can conflict with one another. If you believe that has happened, before enabling the repos shown above you can disable all repos. See the "How are repositories enabled…" solution for information on how to disable unwanted repositories.

NOTE: For information on the channel names required to get docker packages for Red Hat Satellite 5, refer to "Satellite 5 repo to install Docker on Red Hat Enterprise Linux 7".
Install Docker: The current release of RHEL and RHEL Atomic Host includes two different versions of Docker. Here are the Docker packages you can choose from:

docker: This package includes the version of Docker that is the default for the current release of RHEL. Install this package if you want a more stable version of Docker that is compatible with the current versions of Kubernetes and OpenShift available with Red Hat Enterprise Linux.

docker-latest: This package includes a later version of Docker that you can use if you want to work with newer features of Docker. This version is not compatible with the versions of Kubernetes and OpenShift that are available with the current release of Red Hat Enterprise Linux.

NOTE: For more information on the contents of the docker and docker-latest packages, see the Atomic Host and Containers section of the Red Hat Enterprise Linux Release Notes for details on the differences between the two packages and how to enable the docker-latest package.
To install and use the default docker package (along with a couple of dependent packages if they are not yet installed), type the following:

# yum install docker device-mapper-libs device-mapper-event-libs

Start docker:

# systemctl start docker.service

Enable docker:

# systemctl enable docker.service

Check docker status:

# systemctl status docker.service
docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: active (running) since Thu 2016-05-09 22:38:47 EDT; 14s ago
     Docs: http://docs.docker.com
 Main PID: 13495 (sh)
   CGroup: /system.slice/docker.service
           └─13495 /bin/sh -c /usr/bin/docker-current daemon $OPTIONS
...

With the docker service running, you can obtain some Docker images and use the docker command to start working with Docker images in RHEL 7.
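In scripts it is usually simpler to ask systemd directly with `systemctl is-active docker.service`, but if all you have is captured status text, the Active: line is easy to pick out. A quick sketch over a trimmed sample of the output above:

```shell
# Trimmed sample of the 'systemctl status' output shown above.
status_output='docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2016-05-09 22:38:47 EDT; 14s ago'

# Grab the word right after "Active:".
state=$(printf '%s\n' "$status_output" | awk '/^[[:space:]]*Active:/ { print $2 }')
echo "$state"       # active
```

Against a live system you would pipe `systemctl status docker.service` into the same awk filter instead of using a captured string.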
1.4. GETTING DOCKER IN RHEL ATOMIC HOST
RHEL Atomic Host is a light-weight Linux operating system distribution that was designed specifically to run containers. It contains two different versions of the docker service, as well as several services that can be used to orchestrate and manage Docker containers, such as Kubernetes. Only one version of the docker service can be running at a time.

Because RHEL Atomic Host is more like an appliance than a full-featured Linux system, it is not made for you to install RPM packages or other software on. Software is added to Atomic Host systems by running container images.

RHEL Atomic Host has a mechanism for updating existing packages, but not for allowing users to add new packages. Therefore, you should consider using a standard RHEL 7 Server system to develop your applications (so you can add a full complement of development and debugging tools), then use RHEL Atomic Host to deploy your containers into a variety of virtualization and cloud environments.

That said, you can install a RHEL Atomic Host system and use it to run, build, stop, start, and otherwise work with containers using the examples shown in this topic. To do that, use the following procedure to get and install RHEL Atomic Host.

Get RHEL Atomic Host: RHEL Atomic Host is available from the Red Hat Customer Portal. You have the option of running RHEL Atomic Host as a live image (in .qcow2 format) or installing RHEL Atomic Host from installation media (in .iso format). You can get RHEL Atomic in those (and other) formats from here:

RHEL Atomic Host Downloads

Then follow the Red Hat Enterprise Linux Atomic Host Installation and Configuration Guide instructions for setting up Atomic Host to run in one of those environments.
Running Containers in OpenShift
OpenShift is Red Hat's container application platform. It is based on Kubernetes, and to keep things short we will call it a PaaS. The new OpenShift v3 represents a big bet by Red Hat to rewrite the software entirely in Go and leverage Kubernetes. Indeed, when you use OpenShift you get a Red Hat distribution of Kubernetes plus the OpenShift functionality around code deployment, automated builds and so on that you are used to with a typical PaaS.

What stands out with OpenShift, and what Red Hat touts frequently, is the focus on security.

I am not a security expert, but when you look at Kubernetes, and if you avoid the debate around secrets, you will find that Kubernetes has a lot of nice security features. RBAC is now the default setup when you use kubeadm to bootstrap your cluster, network isolation can be enforced with network policies, and Pods can be controlled tightly with Pod security policies. Add to that the authentication mechanisms, TLS configuration for all components and admission controls, and it is looking like a pretty solid and secure system, at least to me.

However, security sometimes comes at a price for early adopters. In OpenShift's case, Kubernetes Pod security policies are used by default; they are called Security Context Constraints (i.e. scc). The most noticeable consequence of using scc by default is that containers that run their processes as ROOT will not run in OpenShift.

So if you start with minishift and try to run a basic Docker image that does not specify a non-ROOT user, it will fail. This unfortunately means that right now our Bitnami images will not run on OpenShift. While we are working on a long-term fix, namely using a non-privileged user in our Dockerfiles, I will show you how to temporarily work around this issue.
Getting Started with OpenShift Using minishift

To get started with OpenShift, use minishift. It is a custom minikube, meaning it will start a virtual machine on your local desktop/laptop, and in this VM OpenShift (i.e. k8s) will be set up. The client oc will also be easily configured, as it is a wrapper around kubectl.

Download and start it like minikube (except that the default driver is xhyve, so if you use VirtualBox…):

minishift start --vm-driver virtualbox
Set up the OpenShift client oc in your shell:

$ minishift oc-env
export PATH="/Users/sebgoa/.minishift/cache/oc/v1.5.0:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)
$ eval $(minishift oc-env)
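The eval line above is a generic shell pattern: a command prints export statements, and eval executes them in the current shell so the PATH change actually sticks. A minimal stand-in, using a hypothetical fake_oc_env function instead of the real minishift output:

```shell
# Stand-in for 'minishift oc-env': it just prints an export statement.
fake_oc_env() {
    printf 'export PATH="/tmp/fake-oc/v1.5.0:%s"\n' "$PATH"
}

# Without eval the output is only text; with eval, PATH actually changes.
eval "$(fake_oc_env)"
case $PATH in
    /tmp/fake-oc/v1.5.0:*) echo "oc directory is now first on PATH" ;;
esac
```

This is also why `minishift oc-env` alone does nothing to your shell: the exports have to be evaluated, not merely printed.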
Modify Security Context Constraints

To modify an scc you need to be an admin on OpenShift, so log in as the admin of your minishift (this just switches the k8s context):

oc login -u system:admin

Then edit the scc called anyuid:

oc edit scc anyuid

If you do not log in as the admin user, the RBAC rules in place will not let you edit the security constraints. Then add a users section that looks like this:
users:
- system:serviceaccount:default:ci
- system:serviceaccount:ci:default
- system:serviceaccount:myproject:default
Basically you are letting the default service account in the myproject namespace run Pods whose containers run processes as any uid, including the ROOT user. Note that myproject is the namespace that your regular user has access to. It was generated by minishift automatically, and you can find it in your k8s context with oc config view.

Switch back to minishift as a regular user (oc config works just like kubectl config):

oc config use-context minishift
Create an Application Using a Container Running Processes as ROOT

You are now ready to create an application using a container where the process runs as ROOT, since you have relaxed the Pod security policy. This is not ideal, as you should always follow the principle of least privilege. oc new-app looks a lot like kubectl run:

oc new-app --name foobar \
    --docker-image bitnami/mariadb \
    --env MARIADB_ROOT_PASSWORD=root

And you now have a running Bitnami MariaDB on OpenShift (oc logs works like kubectl logs):

$ oc logs foobar-1-zft17
Welcome to the Bitnami mariadb container
Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb
Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb/issues
Send us your feedback at [email protected]

nami    INFO  Initializing mariadb
mariadb INFO  ==> Configuring permissions…
mariadb INFO  ==> Validating inputs…
mariadb INFO  ==> Initializing database…
...<snip>
Conclusions
While you can work around the default Security Context Constraints in OpenShift, this is not ideal. That is why at Bitnami we are now working to adapt our container application templates to harden them and ensure they work nicely with OpenShift out of the box.
@sebgoa
@bitnami
References
Kubernetes Pod security policies

Running your docker image on minishift

Managing Security Context Constraints
codecraftshop · 2 years
Login to openshift cluster in different ways | openshift 4
There are several ways to log in to an OpenShift cluster, depending on your needs and preferences. Here are some of the most common ways to log in to an OpenShift 4 cluster: Using the Web Console: OpenShift provides a web-based console that you can use to manage your cluster and applications. To log in to the console, open your web browser and navigate to the URL for the console. You will be…