#codeready containers ubuntu setup
efxtv · 4 years ago
Here I've created a list of the most popular websites a programmer should visit. You'll be amazed by all the pages and blogs that can help you gain and share knowledge for free. Read more: http://bit.ly/37rLdbo
computingpostcom · 3 years ago
Are you looking for an easy way to set up a local OpenShift 4 cluster on your laptop? Red Hat CodeReady Containers lets you run a minimal OpenShift 4.2 or newer cluster on your local laptop or desktop computer. It should only be used for development and testing purposes; we'll provide a separate guide for setting up a production OpenShift 4 cluster.

Red Hat CodeReady Containers is a regular OpenShift installation with the following notable differences:

It uses a single node which behaves both as a master and as a worker node.
The machine-config and monitoring Operators are disabled by default. These disabled Operators cause the corresponding parts of the web console to be non-functional.
For the same reason, there is currently no upgrade path to newer OpenShift versions.
Due to technical limitations, the CodeReady Containers cluster is ephemeral and needs to be recreated from scratch once a month using a newer release.
The OpenShift instance runs in a virtual machine, which may cause some other differences, in particular in relation to external networking.

Minimum system requirements
CodeReady Containers requires the following minimum hardware and operating system resources:

4 virtual CPUs (vCPUs)
8 GB of memory
35 GB of storage space

CodeReady Containers can run on Linux, Windows, and macOS, but this setup has been tested on CentOS 7/8 and Fedora 31. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Microsoft Windows 10.

Step 1: Install required software packages
CodeReady Containers requires the libvirt and NetworkManager packages to be installed on the host system prior to its setup.

### Fedora ###
sudo dnf install NetworkManager qemu-kvm libvirt virt-install
sudo systemctl enable --now libvirtd

### CentOS / Rocky Linux ###
sudo yum -y install qemu-kvm libvirt virt-install bridge-utils NetworkManager
sudo systemctl enable --now libvirtd

### Ubuntu / Debian ###
sudo apt install qemu-kvm libvirt-daemon libvirt-daemon-system network-manager

Step 2: Install CodeReady Containers
Download the latest CRC binary archive from the URLs below.

# Linux
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz

# macOS
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-macos-amd64.pkg

Extract the downloaded CodeReady Containers archive and place the crc binary in your $PATH.

# Linux
tar xvf crc-linux-amd64.tar.xz
sudo mv crc*/crc /usr/local/bin

On macOS, double-click the downloaded package or install it with the open command:

# macOS
open crc-macos-amd64.pkg
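Before running crc setup in Step 3, it can save time to confirm that the host actually exposes hardware virtualization, since the CodeReady Containers VM will not start without it. A quick check on Linux (virt-host-validate ships with the libvirt client tools installed in Step 1 on most distributions):

# A result greater than 0 means the CPU exposes Intel VT-x or AMD-V
grep -Ec '(vmx|svm)' /proc/cpuinfo

# Validate KVM device access and related host features for libvirt guests
sudo virt-host-validate qemu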
Confirm the installation by checking the software version.

$ crc version
CodeReady Containers version: 2.0.1+bf3b1a6
OpenShift version: 4.10.3
Podman version: 3.4.4

To view the crc commands help page, run:

$ crc --help
CodeReady Containers is a tool that manages a local OpenShift 4.x cluster optimized for testing and development purposes

Usage:
  crc [flags]
  crc [command]

Available Commands:
  bundle      Manage CRC bundles
  cleanup     Undo config changes
  config      Modify crc configuration
  console     Open the OpenShift Web Console in the default browser
  delete      Delete the OpenShift cluster
  help        Help about any command
  ip          Get IP address of the running OpenShift cluster
  oc-env      Add the 'oc' executable to PATH
  podman-env  Setup podman environment
  setup       Set up prerequisites for the OpenShift cluster
  start       Start the OpenShift cluster
  status      Display status of the OpenShift cluster
  stop        Stop the OpenShift cluster
  version     Print version information

Flags:
  -h, --help               help for crc
      --log-level string   log level (e.g. "debug | info | warn | error") (default "info")

Use "crc [command] --help" for more information about a command.
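If your machine has more resources than the minimums listed earlier, you can give the CRC virtual machine a larger footprint before the cluster is first created. A small sketch using the config subcommand from the help output above; the values below are only examples:

# Allocate more resources to the CRC virtual machine (run before 'crc start')
crc config set cpus 6
crc config set memory 16384      # memory in MiB
crc config set disk-size 60      # disk size in GiB
crc config view                  # confirm the stored settings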
Step 3: Deploy CodeReady Containers virtual machine
Run the crc setup command to set up your host operating system for the CodeReady Containers virtual machine.

$ crc setup

The installer will check for setup requirements before installation.

INFO Checking if running as non-root
INFO Caching oc binary
INFO Setting up virtualization
INFO Setting up KVM
INFO Installing libvirt service and dependencies
INFO Adding user to libvirt group
INFO Enabling libvirt
INFO Starting libvirt service
INFO Will use root access: start libvirtd service
INFO Checking if a supported libvirt version is installed
INFO Installing crc-driver-libvirt
INFO Removing older system-wide crc-driver-libvirt
INFO Setting up libvirt 'crc' network
INFO Starting libvirt 'crc' network
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Writing Network Manager config for crc
INFO Will use root access: write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
INFO Writing dnsmasq config for crc
INFO Will use root access: write dnsmasq configuration in /etc/NetworkManager/dnsmasq.d/crc.conf
INFO Will use root access: execute systemctl daemon-reload command
INFO Will use root access: execute systemctl stop/start command
INFO Unpacking bundle from the CRC binary

Once the setup is complete, run the command below to start the OpenShift cluster on your laptop.

$ crc start
INFO Checking if running as non-root
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
? Image pull secret [? for help] *

Please note that a valid OpenShift user pull secret is required during installation. The pull secret can be copied or downloaded from the Pull Secret section of the Install on Laptop: Red Hat CodeReady Containers page on cloud.redhat.com. Paste the pull secret when prompted, and the cluster setup will continue.

INFO Extracting bundle: crc_libvirt_4.10.3_amd64...
INFO Creating CodeReady Containers VM for OpenShift 4.10.3...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Copying kubeconfig file to instance dir ...
INFO Adding user's pull secret and cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster

WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
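If you would rather not paste the pull secret at the interactive prompt, recent crc releases can also read it from a file. A short sketch, assuming the secret downloaded from cloud.redhat.com was saved as ~/pull-secret.txt (the file name is just an example):

# Supply the pull secret non-interactively at start time
crc start --pull-secret-file ~/pull-secret.txt

# Or store it in the crc configuration so future starts pick it up automatically
crc config set pull-secret-file ~/pull-secret.txt
crc start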
Access details and credentials are printed after a successful setup.

INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console

To be able to access your cluster, first set up your environment by running:

$ crc oc-env
export PATH="/home/jmutai/.crc/bin:$PATH"
eval $(crc oc-env)

Run the commands printed in your terminal or add them to your ~/.bashrc or ~/.zshrc file, then source it.

$ vim ~/.bashrc
export PATH="$HOME/.crc/bin:$PATH"
eval $(crc oc-env)

### Then source ###
source ~/.bashrc

Log in as admin using the command printed out:

$ oc login -u kubeadmin -p UMeRe-hBQAi-JJ4Bi-8ynRD https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Login successful.

You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

Confirm cluster setup.

$ oc cluster-info
Kubernetes master is running at https://api.crc.testing:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

$ oc get nodes
NAME                 STATUS   ROLES           AGE     VERSION
crc-2n9vw-master-0   Ready    master,worker   5d13h   v1.22.3+fdba464

$ oc config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://api.crc.testing:6443
  name: api-crc-testing:6443
- cluster:
    certificate-authority: /home/jmutai/.minikube/ca.crt
    server: https://192.168.39.35:8443
  name: minikube
contexts:
- context:
    cluster: api-crc-testing:6443
    user: developer/api-crc-testing:6443
  name: /api-crc-testing:6443/developer
- context:
    cluster: api-crc-testing:6443
    namespace: default
    user: kube:admin/api-crc-testing:6443
  name: default/api-crc-testing:6443/kube:admin
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: default/api-crc-testing:6443/kube:admin
kind: Config
preferences: {}
users:
- name: developer/api-crc-testing:6443
  user:
    token: Pvqjq-b5HkV9UQtOYH8P9yOtm17MrOUVs-eaiSeQqXA
- name: kube:admin/api-crc-testing:6443
  user:
    token: LDrdGJMUpPUAxtg0IvWynedbtSBLjs8S2S6kdpvbMU8
- name: minikube
  user:
    client-certificate: /home/jmutai/.minikube/client.crt
    client-key: /home/jmutai/.minikube/client.key
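As a quick end-to-end check that the cluster can actually run workloads, you can deploy a throwaway web server. This is only a sketch; the project name, deployment name, and container image below are illustrative choices, not part of the CRC setup itself:

oc new-project demo
oc create deployment hello-web --image=registry.access.redhat.com/ubi8/httpd-24
oc expose deployment hello-web --port=8080   # creates a Service for the deployment
oc expose service hello-web                  # creates a Route for external access
oc get route hello-web                       # the printed host should answer over HTTP once the pod is Ready
oc delete project demo                       # clean up when done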
To view cluster operators:

$ oc get clusteroperators
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.10.3    True        False         False      2d
baremetal                                  4.10.3    True        False         False      26d
cloud-credential                           4.10.3    True        False         False      26d
cluster-autoscaler                         4.10.3    True        False         False      26d
config-operator                            4.10.3    True        False         False      26d
console                                    4.10.3    True        False         False      45h
csi-snapshot-controller                    4.10.3    True        False         False      26d
dns                                        4.10.3    True        False         False      26d
etcd                                       4.10.3    True        False         False      26d
image-registry                             4.10.3    True        False         False      26d
ingress                                    4.10.3    True        False         False      26d
insights                                   4.10.3    True        False         False      26d
kube-apiserver                             4.10.3    True        False         False      26d
kube-controller-manager                    4.10.3    True        False         False      26d
kube-scheduler                             4.10.3    True        False         False      26d
kube-storage-version-migrator              4.10.3    True        False         False      45h
machine-api                                4.10.3    True        False         False      26d
machine-approver                           4.10.3    True        False         False      26d
machine-config                             4.10.3    True        False         False      26d
marketplace                                4.10.3    True        False         False      26d
monitoring                                 4.10.3    True        False         False      46h
network                                    4.10.3    True        False         False      26d
node-tuning                                4.10.3    True        False         False      46h
openshift-apiserver                        4.10.3    True        False         False      3d7h
openshift-controller-manager               4.10.3    True        False         False      25d
openshift-samples                          4.10.3    True        False         False      46h
operator-lifecycle-manager                 4.10.3    True        False         False      26d
operator-lifecycle-manager-catalog         4.10.3    True        False         False      26d
operator-lifecycle-manager-packageserver   4.10.3    True        False         False      9d
service-ca                                 4.10.3    True        False         False      26d
storage                                    4.10.3    True        False         False      26d

Step 4: Access OpenShift Cluster
You can access the locally deployed OpenShift cluster from the CLI or by opening the OpenShift 4.x console in your web browser.

$ oc login -u developer -p developer https://api.crc.testing:6443
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Login successful.

You don't have any projects. You can try to create a new project, by running oc new-project

Access as admin:

$ oc login -u kubeadmin -p UMeRe-hBQAi-JJ4Bi-8ynRD https://api.crc.testing:6443
Login successful.

You have access to 51 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

To open the console from your default web browser, run:

$ crc console

You can also view the password for the developer and kubeadmin users by running the following command:

crc console --credentials

Log in with the credentials printed earlier. There you have it: a cluster running locally.

Step 5: Stopping OpenShift Cluster
To stop your OpenShift cluster, run the command:

$ crc stop
Stopping the OpenShift cluster, this may take a few minutes...
Stopped the OpenShift cluster
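Between starts and stops, a few read-only subcommands from the help output above make it easy to see what state the VM and cluster are in:

crc status           # reports the VM state, disk/cache usage and whether OpenShift is running
crc ip               # prints the IP address of the running CRC virtual machine
crc console --url    # prints the web console URL without opening a browser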
The virtual machine can be started at any time by running the command:

$ crc start
INFO Checking if running as non-root
INFO Checking if oc binary is cached
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Starting CodeReady Containers VM for OpenShift 4.10.3...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is UMeRe-hBQAi-JJ4Bi-8ynRD
INFO
...

Deleting CodeReady Containers virtual machine
If you want to delete an existing CodeReady Containers virtual machine, run:

$ crc delete

This command will delete the CodeReady Containers virtual machine. To also undo the host configuration changes made by crc setup, see the note after the reference below.

Reference: CRC Documentation
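As noted above, crc delete only removes the cluster virtual machine. The host-level changes made by crc setup (the libvirt 'crc' network and the NetworkManager/dnsmasq drop-in files) can be reverted with the cleanup subcommand listed in the help output earlier:

crc delete --force    # remove the cluster VM without prompting
crc cleanup           # undo the host configuration changes performed by 'crc setup'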
computingpostcom · 3 years ago
Project Quay is a scalable container image registry that enables you to build, organize, distribute, and deploy containers. With Quay you can create image repositories, perform image vulnerability scanning, and enforce robust access controls. We have already covered installing Quay on a Linux distribution using Docker: How To Setup Red Hat Quay Registry on CentOS / RHEL / Ubuntu.

In this guide, we will review how you can deploy the Quay container registry on OpenShift Container Platform using an Operator. The Operator we'll use is provided in OperatorHub. If you don't have an OpenShift / OKD cluster running and would like to try this article, check out our guides below.

Setup Local OpenShift 4.x Cluster with CodeReady Containers
How to Setup OpenShift Origin (OKD) 3.11 on Ubuntu
How To run Local Openshift Cluster with Minishift

Project Quay is made up of several core components:

Database: Used by Red Hat Quay as its primary metadata storage (not for image storage).
Redis (key-value store): Stores live builder logs and the Red Hat Quay tutorial.
Quay (container registry): Runs the quay container as a service, consisting of several components in the pod.
Clair: Scans container images for vulnerabilities and suggests fixes.

Step 1: Create new project for Project Quay
Let's begin by creating a new project for the Quay registry.

$ oc new-project quay-enterprise
Now using project "quay-enterprise" on server "https://api.crc.testing:6443".
.....

You can also create a project from the OpenShift web console. Click the Create button and confirm the project is created and running.

Step 2: Install Red Hat Quay Setup Operator
The Red Hat Quay Setup Operator provides a simple method to deploy and manage a Red Hat Quay cluster. Log in to the OpenShift console and select Operators → OperatorHub, then select the Red Hat Quay Operator. Select Install and the Operator Subscription page will appear. Choose the following, then select Subscribe:

Installation Mode: Select a specific namespace to install to
Update Channel: Choose the update channel (only one may be available)
Approval Strategy: Choose to approve automatic or manual updates

Step 3: Deploy a Red Hat Quay ecosystem
Certain credentials are required for accessing the Quay.io registry. Create a new file with the details below.

$ vim docker_quay.json
{
  "auths": {
    "quay.io": {
      "auth": "cmVkaGF0K3F1YXk6TzgxV1NIUlNKUjE0VUFaQks1NEdRSEpTMFAxVjRDTFdBSlYxWDJDNFNEN0tPNTlDUTlOM1JFMTI2MTJYVTFIUg==",
      "email": ""
    }
  }
}

Then create a secret on OpenShift that will be used:

oc project quay-enterprise
oc create secret generic redhat-pull-secret --from-file=".dockerconfigjson=docker_quay.json" --type='kubernetes.io/dockerconfigjson'

Create the Quay superuser credentials secret:

oc create secret generic quay-admin \
  --from-literal=superuser-username=quayadmin \
  --from-literal=superuser-password=StrongAdminPassword \
  --from-literal=superuser-email=quayadmin@example.com

Where:
quayadmin is the Quay admin username
StrongAdminPassword is the password for the admin user
quayadmin@example.com is the email address of the admin user to be created

Create Quay Configuration Secret
A dedicated deployment of Quay Enterprise is used to manage the configuration of Quay. Access to the configuration interface is secured and requires authentication.

oc create secret generic quay-config --from-literal=config-app-password=StrongPassword

Replace StrongPassword with your desired password.
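At this point several secrets exist in the quay-enterprise project. A quick way to confirm they were all created before deploying the ecosystem (the names below are the ones used in the commands above):

oc get secret redhat-pull-secret quay-admin quay-config -n quay-enterprise

# Optionally inspect the keys (not the values) stored in one of them
oc describe secret quay-admin -n quay-enterprise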
Create Database credentials secret – PostgreSQL

oc create secret generic postgres-creds \
  --from-literal=database-username=quay \
  --from-literal=database-password=StrongUserPassword \
  --from-literal=database-root-password=StrongRootPassword \
  --from-literal=database-name=quay

These are the credentials for accessing the database server:
quay – database name and database username
StrongUserPassword – quay DB user password
StrongRootPassword – root user database password
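If you later need to double-check what was stored without decoding base64 by hand, oc can print a single key from the secret. A small sketch (this prints the password to stdout, so use with care):

oc extract secret/postgres-creds --keys=database-password --to=- -n quay-enterprise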
Create Redis Password Credential
By default, the operator-managed Redis instance is deployed without a password. A password can be specified by creating a secret containing the password in the key password.

oc create secret generic redis-password --from-literal=password=StrongRedisPassword

Create Quay Ecosystem Deployment Manifest
My Red Hat Quay ecosystem configuration file looks like below:

apiVersion: redhatcop.redhat.io/v1alpha1
kind: QuayEcosystem
metadata:
  name: quay-ecosystem
spec:
  clair:
    enabled: true
    imagePullSecretName: redhat-pull-secret
    updateInterval: "60m"
  quay:
    imagePullSecretName: redhat-pull-secret
    superuserCredentialsSecretName: quay-admin
    configSecretName: quay-config
    deploymentStrategy: RollingUpdate
    skipSetup: false
    redis:
      credentialsSecretName: redis-password
    database:
      volumeSize: 10Gi
      credentialsSecretName: postgres-creds
    registryStorage:
      persistentVolumeSize: 20Gi
      persistentVolumeAccessModes:
        - ReadWriteMany
    livenessProbe:
      initialDelaySeconds: 120
      httpGet:
        path: /health/instance
        port: 8443
        scheme: HTTPS
    readinessProbe:
      initialDelaySeconds: 10
      httpGet:
        path: /health/instance
        port: 8443
        scheme: HTTPS

Modify it to fit your use case. When done, apply the configuration:

oc apply -f quay-ecosystem.yaml

Using Custom SSL Certificates
If you want to use custom SSL certificates with Quay, you need to create a secret with the key and the certificate:

oc create secret generic custom-quay-ssl \
  --from-file=ssl.key=example.key \
  --from-file=ssl.cert=example.crt

Then modify your ecosystem file to use the custom certificate secret:

  quay:
    imagePullSecretName: redhat-pull-secret
    sslCertificatesSecretName: custom-quay-ssl
    .......

Wait a few minutes, then confirm the deployment:

$ oc get deployments
NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
quay-ecosystem-clair              1/1     1            1           2m35s
quay-ecosystem-clair-postgresql   1/1     1            1           2m57s
quay-ecosystem-quay               1/1     1            1           3m45s
quay-ecosystem-quay-postgresql    1/1     1            1           5m8s
quay-ecosystem-redis              1/1     1            1           5m57s
quay-operator                     1/1     1            1           70m

$ oc get svc
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
quay-ecosystem-clair              ClusterIP   172.30.66.1     <none>        6060/TCP,6061/TCP   4m
quay-ecosystem-clair-postgresql   ClusterIP   172.30.10.126   <none>        5432/TCP            3m58s
quay-ecosystem-quay               ClusterIP   172.30.47.147   <none>        443/TCP             5m38s
quay-ecosystem-quay-postgresql    ClusterIP   172.30.196.61   <none>        5432/TCP            6m15s
quay-ecosystem-redis              ClusterIP   172.30.48.112   <none>        6379/TCP            6m58s
quay-operator-metrics             ClusterIP   172.30.81.233   <none>        8383/TCP,8686/TCP   70m

Running pods in the project:

$ oc get pods
NAME                                              READY   STATUS    RESTARTS   AGE
quay-ecosystem-clair-84b4d77654-cjwcr             1/1     Running   0          2m57s
quay-ecosystem-clair-postgresql-7c47b5955-qbc4s   1/1     Running   0          3m23s
quay-ecosystem-quay-66584ccbdb-8szts              1/1     Running   0          4m8s
quay-ecosystem-quay-postgresql-74bf8db7f8-vnrx9   1/1     Running   0          5m34s
quay-ecosystem-redis-7dcd5c58d6-p7xkn             1/1     Running   0          6m23s
quay-operator-764c99dcdb-k44cq                    1/1     Running   0          70m

Step 4: Access Quay Dashboard
Get the route URL for the deployed Quay:

$ oc get route
NAME                  HOST/PORT                                              SERVICES              PORT   TERMINATION            WILDCARD
quay-ecosystem-quay   quay-ecosystem-quay-quay-enterprise.apps.example.com   quay-ecosystem-quay   8443   passthrough/Redirect   None
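Once the route is up, a practical smoke test is to push an image into the new registry. This is a sketch only: the route host below is the example one from the output above, and the credentials are the quayadmin superuser defined in Step 3.

# Log in to the registry through its route (self-signed certificates may require --tls-verify=false)
podman login --tls-verify=false quay-ecosystem-quay-quay-enterprise.apps.example.com

# Tag and push a small test image into the quayadmin namespace
podman pull docker.io/library/alpine:latest
podman tag docker.io/library/alpine:latest \
  quay-ecosystem-quay-quay-enterprise.apps.example.com/quayadmin/alpine:test
podman push --tls-verify=false \
  quay-ecosystem-quay-quay-enterprise.apps.example.com/quayadmin/alpine:test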
Open the URL on a machine with access to the cluster domain and use the credentials you configured earlier to log in to the Quay registry. And there you have it: Quay is now running on OpenShift, deployed with the Operator. Refer to the documentation below for more help.

Quay Operator GitHub page
Red Hat Quay documentation
Project Quay documentation