#Install docker on rocky linux 8
Text
How to Install Docker on Linux Mint 20.
Hi, hope you are doing well. Let's learn about "How to Setup and Install Docker on Linux Mint 20". Docker is one of the fastest-growing technologies in the IT market. Docker is a container technology, and many industries are moving towards Docker from plain EC2 instances. It is a PaaS (Platform as a Service) that uses OS-level virtualisation to deliver software in packages called containers. The…
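The excerpt stops before the actual commands, so here is a minimal sketch of a typical Docker CE installation on Linux Mint 20, assuming the official Docker APT repository and Mint 20's Ubuntu 20.04 ("focal") base; the full post may differ in the details.
# Add Docker's official repository (Mint 20 is based on Ubuntu 20.04 "focal")
sudo apt update
sudo apt install -y ca-certificates curl gnupg
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu focal stable" | sudo tee /etc/apt/sources.list.d/docker.list
# Install and start Docker CE
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
Pinning the codename to focal matters here because Docker publishes packages for Ubuntu codenames, not for Mint's own release names.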

View On WordPress
#docker hub#docker install rocky linux#install docker ce on Linux Mint 20#Install Docker CE on Rocky Linux#install docker engine on ubuntu#Install docker in rocky linux 8#install docker on Linux Mint 20#Install docker on rocky linux 8#Install docker on ubuntu#Install docker on ubuntu 20.04#install mongodb docker
0 notes
Text
This guide demonstrates how to run Ubuntu Virtual Machines on Linux and macOS using Multipass. But before we dive into the crux of this tool, let us get to know what it is.
What is Multipass? There are many virtualization tools available to deploy VMs for testing and learning purposes. These include VirtualBox, VMware, LXD, KVM, Docker, LXC, Proxmox, Vagrant, etc. I use VirtualBox and VMware regularly for testing various Linux applications on Linux distributions. In this guide, we are going to look at yet another virtualization tool known as Multipass. This tool makes it easy to create and launch Ubuntu Virtual Machines for regular users, developers, and system admins.
Multipass is a lightweight virtual machine manager developed by the Canonical team to create and launch Ubuntu instances on your local machine. It is developed to run on macOS, Windows, and GNU/Linux systems. Multipass uses KVM on Linux, HyperKit on macOS, and Hyper-V on Windows to run the virtual machines with minimal overhead. With Multipass, one can run commands directly in the VM's shell from the local computer. Moreover, it is possible to mount directories of your host system and share files with the VM. With the above knowledge, we are now set to dive into the installation of Multipass.
Step 1: Install Multipass On Linux and macOS
1. Install Multipass on Linux
On Linux, Multipass is available as a snap package. It can easily be installed on any Linux distribution that supports snapd. In some distributions, such as Zorin OS, Solus 3, and Ubuntu releases from 16.04 LTS onward, snap comes pre-installed. You can install snapd as below:
###On Debian/Ubuntu
sudo apt install snapd
###On RHEL 7/CentOS 7
sudo yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum -y upgrade
sudo yum -y install snapd
###On RHEL 8/CentOS 8/Rocky Linux 8/Fedora
sudo dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
sudo dnf -y upgrade
sudo dnf -y install snapd
Then enable snapd as below:
sudo systemctl enable --now snapd.socket
sudo ln -s /var/lib/snapd/snap /snap
With snap installed and started, we are set to install Multipass on any Linux distribution. First, update and upgrade your system, then install Multipass as below:
sudo snap install multipass
If the above fails to run for any reason, try to install Multipass with this command:
sudo snap install multipass --classic
With Multipass successfully installed, you will see this output:
multipass 1.10.1 from Canonical✓ installed
2. Install Multipass on macOS
On macOS, there are multiple ways to get Multipass installed on your system.
1. Using the Multipass installer: Download the Multipass installer from the official downloads page, then install it. With the .pkg package downloaded, install it on your macOS system by opening it and following through the steps given, using administrator privileges.
2. Using Brew: With brew, you can easily install Multipass on macOS using the command below:
brew install --cask multipass
On macOS, Multipass also supports VirtualBox as a virtualization provider. If you would like to use VirtualBox, issue the below command:
sudo multipass set local.driver=virtualbox
Verify your Multipass installation using the command:
$ multipass version
multipass 1.10.1+mac
multipassd 1.10.1+mac
Step 2: Create and launch Ubuntu VMs with Multipass on Linux and macOS
With Multipass successfully installed on your system, running Ubuntu VMs is incredibly easy.
To launch an Ubuntu Instance use the command: multipass launch --name test-instance alternatively use: multipass launch -n test-instance In the above code, replace test-instance with your desired Ubuntu instance name. The latest minimal Ubuntu LTS instance will be downloaded and automatically started as below: You can now list your available VMs using the command: multipass list Sample Output:
Name State IPv4 Image test-instance Running 10.14.155.56 Ubuntu 20.04 LTS From the above output, we have an Ubuntu instance with the name test-instance with Ubuntu 20.04 LTS and also the IP is provided. Execute commands for your VM from the Local System. One of the amazing features of Multipass is that it allows one to run commands for the Ubuntu instance from the local machine. To find the system’s details for a running VM use: multipass exec test-instance -- lsb_release -a In the code, test-instance is the name for the VM we want the details for. Sample Output: Launch Ubuntu VM’s shell. Aside from running commands from the local system’s shell, you can launch the Ubuntu VM’s shell and directly run the commands on it. The Shell for the Ubuntu VM is launched with the command: multipass shell test-instance Sample Output: Welcome to Ubuntu 20.04.3 LTS (GNU/Linux 5.4.0-81-generic x86_64) * Documentation: https://help.ubuntu.com * Management: https://landscape.canonical.com * Support: https://ubuntu.com/advantage System information as of Mon Aug 30 14:32:29 EAT 2021 System load: 0.08 Processes: 104 Usage of /: 27.4% of 4.67GB Users logged in: 0 Memory usage: 18% IPv4 address for ens4: 10.14.155.56 Swap usage: 0% 1 update can be applied immediately. To see these additional updates run: apt list --upgradable Last login: Mon Aug 30 14:31:40 2021 from 10.14.155.1 To run a command as administrator (user "root"), use "sudo ". See "man sudo_root" for details. ubuntu@test-instance:~$ From the shell, you can execute the normal Ubuntu command such as: sudo apt update To logout from the shell use: exit Find other instances to Launch As we already saw earlier, Multipass finds and downloads the current LTS version of Ubuntu for the VM. But you still can find other available versions you want to run using the command: $ multipass find Image Aliases Version Description snapcraft:core18 18.04 20201111 Snapcraft builder for Core 18 snapcraft:core20 20.04 20210921 Snapcraft builder for Core 20 snapcraft:core22 22.04 20220426 Snapcraft builder for Core 22 snapcraft:devel 20220913 Snapcraft builder for the devel series core core16 20200818 Ubuntu Core 16 core18 20211124 Ubuntu Core 18 18.04 bionic 20220901 Ubuntu 18.04 LTS 20.04 focal,lts 20220824 Ubuntu 20.04 LTS 22.04 jammy 20220902 Ubuntu 22.04 LTS daily:22.10 devel,kinetic 20220910 Ubuntu 22.10 appliance:adguard-home 20200812 Ubuntu AdGuard Home Appliance appliance:mosquitto 20200812 Ubuntu Mosquitto Appliance appliance:nextcloud 20200812 Ubuntu Nextcloud Appliance appliance:openhab 20200812 Ubuntu openHAB Home Appliance appliance:plexmediaserver 20200812 Ubuntu Plex Media Server Appliance anbox-cloud-appliance latest Anbox Cloud Appliance charm-dev latest A development and testing environment for charmers docker latest A Docker environment with Portainer and related tools jellyfin latest Jellyfin is a Free Software Media System that puts you in control of managing and streaming your media. minikube latest minikube is local Kubernetes
From the output, there are several Ubuntu LTS versions. You can launch an instance from the list using the syntax below:
$ multipass launch --name test1-instance 22.04
This command will launch an instance for Ubuntu 22.04.
Create an Instance with Custom Specifications
Multipass by default will create a VM with a 5 GB hard disk, 1 CPU, and 1 GB RAM. However, this can be altered by making custom settings for the VM you want. This helps one create a VM that meets the desired specifications and needs. For example, in the below command, I will demonstrate how to create a VM with 2 CPUs, 4 GB RAM, and 15 GB storage space:
multipass launch -c 2 -m 4G -d 15G -n test2-instance
View info about the instance:
$ multipass info test2-instance
Name: test2-instance
State: Running
IPv4: 10.14.155.175
Release: Ubuntu 20.04.3 LTS
Image hash: 97bb9f79af52 (Ubuntu 20.04 LTS)
Load: 0.47 0.31 0.12
Disk usage: 1.3G out of 14.4G
Memory usage: 149.0M out of 3.8G
Mounts: --
Remember, the minimum allowed requirements are: CPU - 1, Memory - 128 MB, Hard disk - 512 MB.
Launch with a custom network interface
List available networks:
$ multipass networks
Name Type Description
bridge0 bridge Network bridge with en1, en2, en3, en4
en0 wifi Wi-Fi (Wireless)
en1 thunderbolt Thunderbolt 1
en2 thunderbolt Thunderbolt 2
en3 thunderbolt Thunderbolt 3
en4 thunderbolt Thunderbolt 4
Launch an instance with a specified network interface:
multipass launch -c 2 -m 4G -d 15G --network name=en0 -n test2-instance
Wait for the instance to start, then check the available interfaces:
$ multipass shell test2-instance
ubuntu@test2-instance:~$ ip ad
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:7f:cb:cf brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic enp0s3 valid_lft 85884sec preferred_lft 85884sec inet6 fe80::5054:ff:fe7f:cbcf/64 scope link valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:43:43:b4 brd ff:ff:ff:ff:ff:ff inet 192.168.100.164/24 brd 192.168.100.255 scope global dynamic enp0s8 valid_lft 85890sec preferred_lft 85890sec inet6 fe80::5054:ff:fe43:43b4/64 scope link valid_lft forever preferred_lft forever
Suspend running instances
To suspend an instance on Multipass, run the command:
multipass suspend test-instance
Verify that the instance is suspended:
$ multipass info test-instance
Name: test-instance
State: Suspended
IPv4: --
Release: --
Image hash: 97bb9f79af52 (Ubuntu 20.04 LTS)
Load: --
Disk usage: --
Memory usage: --
Mounts: --
The command multipass info test-instance is generally used to get information about an instance.
Step 3: Manage Ubuntu VMs on Multipass
You can start and stop Ubuntu VMs on Multipass using the below commands:
###Stop a VM
multipass stop test-instance
###Start a VM
multipass start test-instance
Alternatively, you can manage your VMs using the Multipass tray icon. This is done by launching the Multipass GUI from the App Menu on the host system. From the tray icon, one can stop/start a VM, open its shell, disable and enable autostart of a VM, and also quit Multipass.
Delete VMs
With the intended tasks for the VM achieved, you can delete the VM if you no longer need it. First, you need to stop the VM.
multipass stop test-instance Then delete it as below: multipass delete test-instance multipass purge Mount and Unmount a local directory To mount a local directory use the following command syntax:
multipass mount [ ...] Example: $ multipass list Name State IPv4 Image ubuntu-focal Running N/A Ubuntu 20.04 LTS $ multipass mount ~/Downloads ubuntu-focal $ multipass info ubuntu-focal Name: ubuntu-focal State: Running IPv4: N/A Release: Ubuntu 20.04.3 LTS Image hash: 10f8ae579fbf (Ubuntu 20.04 LTS) Load: 0.00 0.01 0.05 Disk usage: 1.3G out of 19.2G Memory usage: 164.0M out of 1.9G Mounts: /Users/jmutai/Downloads => /Users/jmutai/Downloads UID map: 501:default GID map: 20:default $ multipass ssh ubuntu-focal $ ubuntu@ubuntu-focal:~$ df -hT Filesystem Type Size Used Avail Use% Mounted on udev devtmpfs 977M 0 977M 0% /dev tmpfs tmpfs 199M 968K 198M 1% /run /dev/sda1 ext4 20G 1.3G 18G 7% / tmpfs tmpfs 994M 0 994M 0% /dev/shm tmpfs tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs tmpfs 994M 0 994M 0% /sys/fs/cgroup /dev/loop0 squashfs 56M 56M 0 100% /snap/core18/2128 /dev/loop1 squashfs 71M 71M 0 100% /snap/lxd/21029 /dev/loop2 squashfs 33M 33M 0 100% /snap/snapd/12883 /dev/sda15 vfat 105M 5.2M 100M 5% /boot/efi tmpfs tmpfs 199M 0 199M 0% /run/user/1000 /dev/loop3 squashfs 128K 128K 0 100% /snap/bare/5 /dev/loop4 squashfs 1.2M 1.2M 0 100% /snap/multipass-sshfs/145 :/Users/jmmutai/Downloads fuse.sshfs 1000G 0 1000G 0% /Users/jmutai/Downloads To unmount use the command: $ multipass umount ubuntu-focal $ multipass info ubuntu-focal Name: ubuntu-focal State: Running IPv4: N/A Release: Ubuntu 20.04.3 LTS Image hash: 10f8ae579fbf (Ubuntu 20.04 LTS) Load: 0.00 0.00 0.04 Disk usage: 1.3G out of 19.2G Memory usage: 159.8M out of 1.9G Mounts: -- In case you get stuck when using Multipass, there is a way out by getting help using the command: $ multipass help Usage: multipass [options] Create, control and connect to Ubuntu instances. This is a command line utility for multipass, a service that manages Ubuntu instances. Options: -h, --help Display this help -v, --verbose Increase logging verbosity. Repeat the 'v' in the short option for more detail. Maximum verbosity is obtained with 4 (or more) v's, i.e. -vvvv. Available commands: delete Delete instances exec Run a command on an instance find Display available images to create instances from get Get a configuration setting help Display help about a command info Display information about instances launch Create and start an Ubuntu instance list List all available instances mount Mount a local directory in the instance networks List available network interfaces purge Purge all deleted instances permanently recover Recover deleted instances restart Restart instances set Set a configuration setting shell Open a shell on a running instance start Start instances stop Stop running instances suspend Suspend running instances transfer Transfer files between the host and instances umount Unmount a directory from an instance version Show version details Conclusion. Congratulations! That marks the end of this guide on how to run Ubuntu Virtual Machines on Linux and macOS using Multipass. We have seen how easy it is to create and run Ubuntu instances with Multipass. I hope this was helpful
0 notes
Text
Usb network gate waiting for daemon to launch mac

How To Install Memcached on AlmaLinux 8 – idroot
In this tutorial, we will show you how to install Memcached on AlmaLinux 8. For those of you who didn't know, Memcached is a free and open-source high-performance distributed memory caching system. Memcached is used to speed up dynamic database-driven websites by caching data and objects in RAM. This reduces the number of times an external data source must be read, which lowers overheads and speeds up response times. This article assumes you have at least basic knowledge of Linux, know how to use the shell, and most importantly, you host your site on your own VPS. The installation is quite simple and assumes you are running in the root account; if not, you may need to add 'sudo' to the commands to get root privileges. I will show you the step-by-step installation of the Memcached distributed memory object caching system on AlmaLinux 8. You can follow the same instructions for CentOS and Rocky Linux.

How to Install VMware Workstation Pro 16 on Linux
The latest version of VMware Workstation Pro is 16. VMware Workstation Pro 16 is one of the best Type-2 hypervisors. It has outstanding 3D acceleration support for both Windows and Linux virtual machines: DirectX 11 3D acceleration for Windows virtual machines and OpenGL 4.1 3D acceleration for Linux virtual machines. So, technically you can play games on your VMware Workstation Pro 16 virtual machines. The user interface of the virtual machine will also be very responsive. You will have an amazing experience running virtual machines on VMware Workstation Pro 16. In this article, I am going to show you how to install VMware Workstation Pro 16 on some common Linux distributions, such as Ubuntu 20.04 LTS, Debian 11, and Fedora 34.

Getting Started with Docker: Docker Images – LinuxLinks
Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Docker containers are built from Docker images. These images are retrieved from Docker Hub, a registry managed by the Docker project. It's the most popular place to grab images, with more than 100,000 container images. However, a huge number of these images are unmaintained, so you need to be selective about which images to explore.

The 9 Best Docker Alternatives for Container Management
Containers are highly beneficial for software development, deployment, and management in a virtual environment. Docker is useful in the containerization process, but it's not the only platform around. If you're searching for some alternatives to Docker, look no further. This list showcases some feature-rich and efficient Docker alternatives to use in your next project.

AppImageLauncher to Integrate AppImages in Your Distro
About a week ago, through the discussions we have in our Boiling Steam Matrix room, someone mentioned in passing that they were using AppImageLauncher. This was my first time coming across this application. Following further investigation and testing, let me now share it with you, as you might just have a use for it. In case you are not too familiar with AppImages, it's one of the ways on Linux to encapsulate a whole application and its dependencies into a single file. For example, you can download an AppImage for the HeroicLauncher, ensure the file can be executed (chmod +x in a terminal, for example), launch it… et voila, no need to install it, the application will launch before your eyes. It's as simple as it can get and is proving to be a very robust method to distribute applications that work across numerous distros on Linux. AppImages, being single-file executables, are not integrated into your system. They won't appear in your launchers or menus – instead you will have to find them and launch them one by one, or create your own launcher scripts.

USB Additions For Linux 5.16 Include AMD Yellow Carp PM, Apple CD321X – Phoronix
The USB and Thunderbolt updates for the Linux 5.16 kernel have arrived. This time around the changes are on the smaller side, but there are two additions worth mentioning. In the USB/Thunderbolt pull request, Greg Kroah-Hartman summarizes this cycle's work as "nothing major in here, just lots of little cleanups and additions for new hardware."

System76 is Building its Own Desktop Environment
System76 has revealed it is working on a new desktop environment that is not based on GNOME Shell. The US-based company already maintains its own Ubuntu-based Linux distro called Pop!_OS. Presently, that distro ships with a modified version of the GNOME desktop called 'COSMIC' (all caps, not me shouting). Word of the new project comes by way of System76's Michael Murphy, who shared some of the rationale and motivation behind the new DE in a series of comments posted on the Pop!_OS subreddit at the weekend. And all told: they make for a pretty exciting read.

0 notes
Text
How to install Docker CE on AlmaLinux 8 or Rocky Linux 8
Docker is an open-source project that allows you to create, test, and deploy applications quickly and easily. Docker organizes software in containers that hold everything the software needs to run, e.g. libraries, system tools, code, and a runtime. With Docker, you can quickly deploy and scale applications in any environment. Developers can use the development environments on Windows, Linux or…
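As a rough companion to the excerpt, below is a hedged sketch of the usual Docker CE setup on AlmaLinux 8 / Rocky Linux 8 using the upstream CentOS repository; the original article's exact steps may vary.
# Add the upstream Docker CE repository (shared by CentOS, AlmaLinux and Rocky Linux 8)
sudo dnf install -y yum-utils
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install the engine, CLI and containerd, then start the service
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker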
View On WordPress
0 notes
Text
Install Docker on AlmaLinux and Rocky Linux
In this short guide, we will see how to install Docker CE on AlmaLinux OS 8 and also on Rocky Linux, using repositories; with a few commands it will be ready to use. 1- Required package sudo dnf install -y yum-utils 2- Repositories With YUM sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo With DNF sudo dnf config-manager…
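The excerpt is cut off at the DNF repository step; purely as an assumption about how such guides typically continue (the original post may differ), the remaining steps usually look like this:
# Assumed continuation: add the repo with DNF, then install, enable and verify Docker CE
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
docker --version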
View On WordPress
0 notes
Text
How to Install Docker on Ubuntu 20.04.
Hi, hope you are doing well. Let's learn about "How to Setup and Install Docker on Ubuntu 20.04". Docker is one of the fastest-growing technologies in the IT market. Docker is a container technology, and many industries are moving towards Docker from plain EC2 instances. It is a PaaS (Platform as a Service) that uses OS-level virtualisation to deliver software in packages called containers. The…
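For quick reference, here is one minimal way to get Docker running on Ubuntu 20.04 using Ubuntu's own docker.io package; this is only a sketch, and the full post most likely walks through the upstream Docker CE repository instead.
# Quick route via Ubuntu's own repositories (simpler, though older than upstream Docker CE)
sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker
sudo usermod -aG docker $USER   # optional: run docker without sudo (log out and back in)
docker --version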

View On WordPress
#docker hub#docker install rocky linux#Install Docker CE on Rocky Linux#install docker engine on ubuntu#Install docker in rocky linux 8#Install docker on rocky linux 8#Install docker on ubuntu#Install docker on ubuntu 20.04#install mongodb docker#setup docker on rocky linux
0 notes
Text
How to Install Docker on Amazon Linux 2 AWS EC2.
Hi, hope you are doing well. Let's learn about "How to Setup and Install Docker on Amazon Linux 2 AWS EC2". Docker is one of the fastest-growing technologies in the IT market. Many industries are moving towards Docker from plain EC2 instances. Docker is a container technology. It is a PaaS (Platform as a Service) that uses OS-level virtualisation to deliver software in packages called…
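As a hedged sketch of the usual approach on Amazon Linux 2 (the post itself may add more detail), Docker comes from the distribution's extras channel rather than an external repository:
# Install Docker from the Amazon Linux 2 extras channel
sudo yum update -y
sudo amazon-linux-extras install docker
sudo systemctl enable --now docker
sudo usermod -aG docker ec2-user   # let the default EC2 user run docker without sudo
docker info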

View On WordPress
#docker hub#install docker ce on centos 8#Install Docker CE on Rocky Linux#install docker centos 8 dnf#Install docker in rocky linux 8#install docker on amazon linux 2#install docker on aws ec2#install docker on aws ec2 ubuntu#install docker on centos 8#install docker on centos 8 step by step#install docker on centos 8.4#Install docker on rocky linux 8#setting up docker on centos 8#setup docker on centos 8
0 notes
Text
Install Docker and Docker Compose on Rocky Linux 8.
Hi, hope you are doing well. Let's learn about "How to Setup and Install Docker and Docker Compose on Rocky Linux 8". Docker is one of the fastest-growing technologies in the IT market. Many industries are moving towards Docker from plain EC2 instances. Docker is a container technology. It is a PaaS (Platform as a Service) that uses OS-level virtualisation to deliver software in packages called…
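Alongside the Docker engine itself, the Compose part of such a guide is commonly a standalone binary download; a minimal sketch follows, using the well-known 1.29.2 release purely as an example. Newer guides may instead install the docker-compose-plugin package, so treat both the version and the method as assumptions.
# Download the docker-compose binary (1.29.2 used here only as an example version)
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
docker-compose --version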

View On WordPress
#docker hub#docker install rocky linux#Install Docker CE on Rocky Linux#install docker compose#install docker compose on rocky linux#Install docker in rocky linux 8#Install docker on rocky linux 8#setup docker compose on rocky linux#setup docker compose rocky linux#setup docker on rocky linux
0 notes
Text
How to Install Docker on Rocky Linux 8.
Hi, hope you are doing well. Let's learn about "How to Setup and Install Docker on Rocky Linux 8". Docker is one of the fastest-growing technologies in the IT market. Many industries are moving towards Docker from plain EC2 instances. Docker is a container technology. It is a PaaS (Platform as a Service) that uses OS-level virtualisation to deliver software in packages called containers. The…
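To complement the excerpt (which repeats the introduction of the earlier Rocky Linux posts), here is a hedged sketch of the post-install checks that typically follow a Docker CE installation on Rocky Linux 8:
# Typical post-install steps after installing docker-ce on Rocky Linux 8
sudo systemctl enable --now docker
sudo systemctl status docker --no-pager
sudo usermod -aG docker $USER    # optional: manage Docker as a non-root user (re-login needed)
docker run --rm hello-world      # pulls a test image and prints a confirmation message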

View On WordPress
#docker hub#docker install rocky linux#Install Docker CE on Rocky Linux#Install docker in rocky linux 8#Install docker on rocky linux 8#setup docker on rocky linux
0 notes
Text
In this guide we will look at the steps that you can use to generate Rocky Linux 8 Vagrant Boxes Using Packer templates. Vagrant is an open source solution that allows you to build and maintain portable virtual software development environments for your Virtualization Providers – KVM, VirtualBox, Hyper-V, VMware, Docker Containers, and AWS. We’ll use Packer, which is is a tool for creating identical machine images for multiple platforms from a single source configuration. For this task we’ll use Bento project which encapsulates Packer templates for building Vagrant base boxes. This simplifies Vagrant Boxes creation without the need to write your own templates; which is time consuming and difficult. For some templates the Vagrant Boxes have been build for you and published to the bento org on Vagrant Cloud. So follow the next steps to create your own Rocky Linux 8 Vagrant Boxes Using Packer and Bento Project templates. Step 1: Install Packer and Vagrant Tools The first step is installation of Packer and Vagrant which are dependencies for the next sections. The good news is that HashiCorp now maintains the package repositories for various Linux Distributions. Install Packer and Vagrant on Ubuntu / Debian Install required tools for repository addition into your system: sudo apt update sudo apt install wget apt-transport-https gnupg2 Import repository GPG key: curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add - Add HashiCorp APT repository by running the commands below in your terminal: sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main" sudo apt update Once the repository is added install Packager: $ sudo apt install packer Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: packer 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. Need to get 31.7 MB of archives. After this operation, 145 MB of additional disk space will be used. Get:1 https://apt.releases.hashicorp.com focal/main amd64 packer amd64 1.7.3 [31.7 MB] Fetched 31.7 MB in 1s (43.0 MB/s) Selecting previously unselected package packer. (Reading database ... 64995 files and directories currently installed.) Preparing to unpack .../packer_1.7.3_amd64.deb ... Unpacking packer (1.7.3) ... Setting up packer (1.7.3) ... Do the same for Vagrant: $ sudo apt install vagrant Reading package lists... Done Building dependency tree Reading state information... Done The following NEW packages will be installed: vagrant 0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded. Need to get 40.9 MB of archives. After this operation, 115 MB of additional disk space will be used. Get:1 https://apt.releases.hashicorp.com focal/main amd64 vagrant amd64 2.2.16 [40.9 MB] Fetched 40.9 MB in 1s (64.4 MB/s) Selecting previously unselected package vagrant. (Reading database ... 64998 files and directories currently installed.) Preparing to unpack .../vagrant_2.2.16_amd64.deb ... Unpacking vagrant (2.2.16) ... Setting up vagrant (2.2.16) ... 
Confirm installation by checking software versions: $ packer version Packer v1.7.3 $ vagrant version Installed Version: 2.2.16 Latest Version: 2.2.16 Install Packer and Vagrant on CentOS / Fedora / RHEL / Amazon Linux Add the repository and install the packages: CentOS / RHEL: sudo yum install -y yum-utils sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo sudo yum -y install packer vagrant For Fedora run the commands below: sudo dnf install -y dnf-plugins-core sudo dnf config-manager --add-repo https://rpm.releases.hashicorp.com/fedora/hashicorp.repo sudo dnf -y install packer vagrant Amazon Linux: sudo yum install -y yum-utils sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo sudo yum -y install packer vagrant Install Vagrant and Packer on macOS
For macOS users the installation is done as shown below: brew tap hashicorp/tap brew install hashicorp/tap/packer brew install vagrant Step 2: Install Virtualization providers of your choice You have to install at least one of the following virtualization providers: VirtualBox VMware Fusion VMware Workstation Parallels Desktop also requires Parallels Virtualization SDK KVM Hyper-V You can start with the Virtualization provider for which you want to run Vagrant machine on. Step 3: Clone Bento Project on Github Install git tool if doesn’t exist in your machine: # CentOS / RHEL / Fedora sudo yum -y install git # Ubuntu / Debian sudo apt update sudo apt install git # macOS brew install git Once git package is installed use it to download Bento source from Github: git clone https://github.com/chef/bento.git The folder bento has a number of files and directories inside: $ ls -1 bento/ bento.gemspec bin builds builds.yml CHANGELOG.md CODE_OF_CONDUCT.md Gemfile lib LICENSE MAINTAINERS.md NOTICE.md packer_templates Rakefile README.md test_templates Step 4: Generate Rocky Linux 8 Vagrant Box for your Virtualization Provider Packer template for Rocky Linux is located in the bento/packer_templates/rockylinux/ directory. Let’s switch to this dir: cd bento/packer_templates/rockylinux/ As of this article writing the available template is for Rocky Linux 8.5. $ ls rockylinux-8.5-x86_64.json You can then build Rocky Linux 8 box for only Virtualization Provider: # Virtualbox provider packer build -only=virtualbox-iso rockylinux-8.5-x86_64.json # VMware packer build -only=vmware-iso rockylinux-8.5-x86_64.json # Parallels packer build -only=parallels-iso rockylinux-8.5-x86_64.json # KVM packer build -only=qemu rockylinux-8.5-x86_64.json VirtualBox Example Here is the output of Vagrant box creation with VirtualBox provider: virtualbox-iso: output will be in this color. ==> virtualbox-iso: Retrieving Guest additions ==> virtualbox-iso: Trying /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso ==> virtualbox-iso: Trying /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso ==> virtualbox-iso: /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso => /Applications/VirtualBox.app/Contents/MacOS/VBoxGuestAdditions.iso ==> virtualbox-iso: Retrieving ISO ==> virtualbox-iso: Trying http://download.rockylinux.org/pub/rocky/8.5/isos/x86_64/Rocky-8.5-x86_64-dvd1.iso ==> virtualbox-iso: Trying http://download.rockylinux.org/pub/rocky/8.5/isos/x86_64/Rocky-8.5-x86_64-dvd1.iso?checksum=sha256%3A4be83f5edf28209ce5caa06995c1c3fc5112d0d260b9e8c1cc2fecd384abcee0 virtualbox-iso: Rocky-8.5-x86_64-dvd1.iso 376.07 MiB / 9.23 GiB [==>--------------------------------------------------------------------] 3.98% 33m08s Once the ISO file is downloaded an VM instance is created that is later converted to a box file. ==> virtualbox-iso: Starting HTTP server on port 8793 ==> virtualbox-iso: Creating virtual machine... ==> virtualbox-iso: Creating hard drive ../../builds/packer-rockylinux-8.5-x86_64-virtualbox/rockylinux-8.5-x86_64.vdi with size 65536 MiB... ==> virtualbox-iso: Mounting ISOs... virtualbox-iso: Mounting boot ISO... ==> virtualbox-iso: Creating forwarded port mapping for communicator (SSH, WinRM, etc) (host port 3183) ==> virtualbox-iso: Starting the virtual machine... ==> virtualbox-iso: Waiting 5s for boot... ==> virtualbox-iso: Typing the boot command... ==> virtualbox-iso: Using ssh communicator to connect: 127.0.0.1 ==> virtualbox-iso: Waiting for SSH to become available... 
Screenshot: A VM instance created on VirtualBox but deleted after box generation. Since the installer uses kickstart to automate the installation no user action is required: Expected output from a successful build. .... ==> virtualbox-iso: ++ readlink -f /dev/disk/by-uuid/6a6aa6d9-8698-4e77-93b8-55809cf8e3a6 ==> virtualbox-iso: + swappart=/dev/sda1
==> virtualbox-iso: + /sbin/swapoff /dev/sda1 ==> virtualbox-iso: + dd if=/dev/zero of=/dev/sda1 bs=1M ==> virtualbox-iso: dd: error writing '/dev/sda1': No space left on device ==> virtualbox-iso: 2197+0 records in ==> virtualbox-iso: 2196+0 records out virtualbox-iso: dd exit code 1 is suppressed ==> virtualbox-iso: 2302672896 bytes (2.3 GB, 2.1 GiB) copied, 1.51517 s, 1.5 GB/s ==> virtualbox-iso: + echo 'dd exit code 1 is suppressed' ==> virtualbox-iso: + /sbin/mkswap -U 6a6aa6d9-8698-4e77-93b8-55809cf8e3a6 /dev/sda1 virtualbox-iso: Setting up swapspace version 1, size = 2.1 GiB (2302668800 bytes) virtualbox-iso: no label, UUID=6a6aa6d9-8698-4e77-93b8-55809cf8e3a6 ==> virtualbox-iso: + sync ==> virtualbox-iso: Gracefully halting virtual machine... ==> virtualbox-iso: Preparing to export machine... virtualbox-iso: Deleting forwarded port mapping for the communicator (SSH, WinRM, etc) (host port 3183) ==> virtualbox-iso: Exporting virtual machine... virtualbox-iso: Executing: export rockylinux-8.5-x86_64 --output ../../builds/packer-rockylinux-8.5-x86_64-virtualbox/rockylinux-8.5-x86_64.ovf ==> virtualbox-iso: Cleaning up floppy disk... ==> virtualbox-iso: Deregistering and deleting VM... ==> virtualbox-iso: Running post-processor: vagrant ==> virtualbox-iso (vagrant): Creating a dummy Vagrant box to ensure the host system can create one correctly ==> virtualbox-iso (vagrant): Creating Vagrant box for 'virtualbox' provider virtualbox-iso (vagrant): Copying from artifact: ../../builds/packer-rockylinux-8.5-x86_64-virtualbox/rockylinux-8.5-x86_64-disk001.vmdk virtualbox-iso (vagrant): Copying from artifact: ../../builds/packer-rockylinux-8.5-x86_64-virtualbox/rockylinux-8.5-x86_64.ovf virtualbox-iso (vagrant): Renaming the OVF to box.ovf... virtualbox-iso (vagrant): Compressing: Vagrantfile virtualbox-iso (vagrant): Compressing: box.ovf virtualbox-iso (vagrant): Compressing: metadata.json virtualbox-iso (vagrant): Compressing: rockylinux-8.5-x86_64-disk001.vmdk Build 'virtualbox-iso' finished after 12 minutes 15 milliseconds. ==> Wait completed after 12 minutes 15 milliseconds ==> Builds finished. The artifacts of successful builds are: --> virtualbox-iso: 'virtualbox' provider box: ../../builds/rockylinux-8.5.virtualbox.box List builds directory: $ ls ../../builds/ rockylinux-8.5.virtualbox.box uploaded Import the Vagrant Box $ vagrant box add rockylinux-8.5 file://../../builds/rockylinux-8.5.virtualbox.box ==> box: Box file was not detected as metadata. Adding it directly... ==> box: Adding box 'rockylinux-8.5' (v0) for provider: box: Unpacking necessary files from: file:///Users/jkmutai/Downloads/bento/builds/rockylinux-8.5.virtualbox.box ==> box: Successfully added box 'rockylinux-8.5' (v0) for 'virtualbox'! Confirm the Box is available: $ vagrant box list| grep rockylinux-8.5 rockylinux-8.5 (virtualbox, 0) VMware Box creation Successful box creation on VMware: .... 
vmware-iso (vagrant): Compressing: disk-s001.vmdk vmware-iso (vagrant): Compressing: disk-s002.vmdk vmware-iso (vagrant): Compressing: disk-s003.vmdk vmware-iso (vagrant): Compressing: disk-s004.vmdk vmware-iso (vagrant): Compressing: disk-s005.vmdk vmware-iso (vagrant): Compressing: disk-s006.vmdk vmware-iso (vagrant): Compressing: disk-s007.vmdk vmware-iso (vagrant): Compressing: disk-s008.vmdk vmware-iso (vagrant): Compressing: disk-s009.vmdk vmware-iso (vagrant): Compressing: disk-s010.vmdk vmware-iso (vagrant): Compressing: disk-s011.vmdk vmware-iso (vagrant): Compressing: disk-s012.vmdk vmware-iso (vagrant): Compressing: disk-s013.vmdk vmware-iso (vagrant): Compressing: disk-s014.vmdk vmware-iso (vagrant): Compressing: disk-s015.vmdk vmware-iso (vagrant): Compressing: disk-s016.vmdk vmware-iso (vagrant): Compressing: disk-s017.vmdk vmware-iso (vagrant): Compressing: disk.vmdk
vmware-iso (vagrant): Compressing: metadata.json vmware-iso (vagrant): Compressing: rockylinux-8.5-x86_64.nvram vmware-iso (vagrant): Compressing: rockylinux-8.5-x86_64.vmsd vmware-iso (vagrant): Compressing: rockylinux-8.5-x86_64.vmx vmware-iso (vagrant): Compressing: rockylinux-8.5-x86_64.vmxf Build 'vmware-iso' finished after 12 minutes 31 seconds. ==> Wait completed after 12 minutes 31 seconds ==> Builds finished. The artifacts of successful builds are: --> vmware-iso: 'vmware' provider box: ../../builds/rockylinux-8.5.vmware.box You can import the box in similar way: $ vagrant box add rockylinux-8.5 file://../../builds/rockylinux-8.5.vmware.box ==> box: Box file was not detected as metadata. Adding it directly... ==> box: Adding box 'rockylinux-8.5' (v0) for provider: box: Unpacking necessary files from: file:///Users/jkmutai/Downloads/bento/builds/rockylinux-8.5.vmware.box ==> box: Successfully added box 'rockylinux-8.5' (v0) for 'vmware_desktop'! Check if box import was successful: $ vagrant box list| grep rockylinux-8.5 rockylinux-8.5 (virtualbox, 0) rockylinux-8.5 (vmware_desktop, 0) Step 5: Create VM instance using Vagrant Box created We can then create a Vagrantfile for our box: mkdir -p ~/vagrantboxes/rocky-linux cd ~/vagrantboxes/rocky-linux Create a Vagrantfile file: vim Vagrantfile For VirtualBox provider paste below contents: # -*- mode: ruby -*- # vi: set ft=ruby : ENV['VAGRANT_DEFAULT_PROVIDER'] = 'virtualbox' Vagrant.configure("2") do |config| config.vm.box = "rockylinux-8.5" config.vm.hostname = "rocky-linux-8" config.vm.box_check_update = false config.vm.provider "virtualbox" do |vm| vm.name = "rocky-linux-8" vm.memory = "2048" vm.cpus = 2 # Display the VirtualBox GUI when booting the machine vm.gui = false end end Create the vagrant environment using vagrant up command: $ vagrant up Bringing machine 'rockylinux-8' up with 'virtualbox' provider... ==> rockylinux-8: Importing base box 'rockylinux-8.5'... ==> rockylinux-8: Matching MAC address for NAT networking... ==> rockylinux-8: Setting the name of the VM: rocky-linux-8_1625237224117_71022 ==> rockylinux-8: Clearing any previously set network interfaces... ==> rockylinux-8: Preparing network interfaces based on configuration... rockylinux-8: Adapter 1: nat ==> rockylinux-8: Forwarding ports... rockylinux-8: 22 (guest) => 2222 (host) (adapter 1) ==> rockylinux-8: Running 'pre-boot' VM customizations... ==> rockylinux-8: Booting VM... ==> rockylinux-8: Waiting for machine to boot. This may take a few minutes... rockylinux-8: SSH address: 127.0.0.1:2222 rockylinux-8: SSH username: vagrant rockylinux-8: SSH auth method: private key rockylinux-8: rockylinux-8: Vagrant insecure key detected. Vagrant will automatically replace rockylinux-8: this with a newly generated keypair for better security. rockylinux-8: rockylinux-8: Inserting generated public key within guest... rockylinux-8: Removing insecure key from the guest if it's present... rockylinux-8: Key inserted! Disconnecting and reconnecting using new SSH key... ==> rockylinux-8: Machine booted and ready! ==> rockylinux-8: Checking for guest additions in VM... ==> rockylinux-8: Setting hostname... ==> rockylinux-8: Mounting shared folders... 
rockylinux-8: /vagrant => /Users/jkmutai/myhacks/vagrant/rocky-linux Access Vagrant environment VM Shell: $ vagrant ssh This system is built by the Bento project by Chef Software More information can be found at https://github.com/chef/bento [vagrant@rocky-linux-8 ~]$ cat /etc/redhat-release Rocky Linux release 8.5 (Green Obsidian) You can also generate SSH Config: $ vagrant ssh-config rockylinux-8 Host rockylinux-8 HostName 127.0.0.1 User vagrant Port 2222 UserKnownHostsFile /dev/null StrictHostKeyChecking no PasswordAuthentication no IdentityFile /Users/jmutai/vagrantboxes/rocky-linux/.
vagrant/machines/rockylinux-8/virtualbox/private_key IdentitiesOnly yes LogLevel FATAL These can be added to ~/.ssh/config file To suspends the the rocky-linux-8 vagrant environment run: vagrant suspend When done you can destroy vagrant environment anytime with the command: $ vagrant destroy rockylinux-8: Are you sure you want to destroy the 'rockylinux-8' VM? [y/N] y ==> rockylinux-8: Forcing shutdown of VM... ==> rockylinux-8: Destroying VM and associated drives... Conclusion Vagrant simplifies developer actions required to have a running instance to test application code. By using Chef Bento templates we were able to quickly create our own Box and started a Vagrant environment from the box generated. We hope this article was helpful. You can drop a comment as feedback or with any issue encountered.
0 notes
Text
Welcome to this exhilarating tutorial on how to deploy and use Quarkus in Kubernetes. Kubernetes is one of the open-source tools currently preferred for automating system deployments. It makes it easy to scale and manage containerized applications. Kubernetes works by distributing workloads across the cluster and automating the container networking needs. Storage and persistent volumes are also allocated, and by doing so the desired state of container applications is continuously maintained.
Quarkus provides an easy way to automatically generate the Kubernetes resources based on some defaults and the user-provided configuration. This Kubernetes-native Java framework also provides an extension used to build and push container images to a registry before the application is deployed to the target. Another feature of Quarkus is that it enables one to use a Kubernetes ConfigMap as a configuration source without mounting it on the pod.
The cool features associated with Quarkus are:
Community and Standards: It provides a cohesive and fun-to-use full-stack framework by leveraging a growing list of over fifty best-of-breed libraries that you love and use.
Container First: It offers amazingly fast boot time and incredibly low RSS memory (not just heap size!), giving near-instant scale-up and high-density memory utilization in container orchestration platforms like Kubernetes.
Unifies imperative and reactive: It allows developers to combine both the familiar imperative code and the reactive style when developing applications.
Kube-Native: The combination of Quarkus and Kubernetes provides an ideal environment for creating scalable, fast, and lightweight applications. It highly increases developer productivity with tooling, pre-built integrations, application services, etc.
By following this guide to the end, you will learn how to:
Use the Quarkus Dekorate extension to automatically generate Kubernetes manifests based on the source code and configuration
Build and push images to a Docker registry with the Jib extension
Deploy an application on Kubernetes without any manually created YAML in one click
Use Quarkus Kubernetes Config to inject configuration properties from a ConfigMap
Let's dive in!
Setup Pre-requisites
For this guide, you will require:
Quarkus CLI
Apache Maven 3.8.1+
(Optional) Access to a Kubernetes cluster
A Kubernetes cluster can be deployed with the aid of the guides below:
Run Kubernetes on Debian with Minikube
Deploy Kubernetes Cluster on Linux With k0s
Install Kubernetes Cluster on Ubuntu using K3s
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Once the cluster is running, install kubectl:
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
Ensure that you can access the cluster:
# For k0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
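Before installing the Quarkus CLI, it may help to confirm the cluster is actually reachable from your workstation; a quick check with standard kubectl commands (your node names and addresses will of course differ) could look like this:
# Confirm the API server responds and the nodes are Ready
kubectl cluster-info
kubectl get nodes -o wide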
1. Install Quarkus CLI
The Quarkus CLI can be installed on Linux, macOS, and Windows (using WSL or a bash-compatible shell like Cygwin or MinGW) by running the commands below:
curl -Ls https://sh.jbang.dev | bash -s - trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/
curl -Ls https://sh.jbang.dev | bash -s - app install --fresh --force quarkus@quarkusio
You can install it on Windows systems using PowerShell:
iex "& $(iwr https://ps.jbang.dev) trust add https://repo1.maven.org/maven2/io/quarkus/quarkus-cli/"
iex "& $(iwr https://ps.jbang.dev) app install --fresh --force quarkus@quarkusio"
Once installed, restart your shell. The Quarkus CLI can also be installed using SDKMAN as below:
sdk install quarkus
2. Create a Project
Use the Quarkus command-line interface (CLI) to create a new project. The below command adds the resteasy-reactive, jib, and kubernetes extensions:
quarkus create app quarkus-example --extension=resteasy-reactive,kubernetes,jib
cd quarkus-example Sample Output: After this, you will have several files generated, among these files is the pom.xml file bearing dependencies to the build file: ......... io.quarkus quarkus-resteasy-reactive io.quarkus quarkus-kubernetes io.quarkus quarkus-container-image-jib ...... The good thing with Quarkus is that it generates Deployment/StatefulSet resources that it use your registry_username/test-quarkus-app:tag as the container image of the Pod. The image here is controlled by the Jib extension and can be customized using the application.properties as shown: Open the file for editing: vim src/main/resources/application.properties Add the following lines replacing where required. quarkus.container-image.group=registry_username quarkus.container-image.name=tutorial-app quarkus.container-image.tag=latest quarkus.container-image.username=registry_username quarkus.container-image.password=Your_registry -Password If no registry has not been specified, the default, docker.io registry will be used. A detailed demonstration on specifying a registry has been captured elsewhere in this guide. 3. Build and Deploy your Application Jib is used to build optimized images for Java applications without a Docker daemon and no need for the mastery of deep docker practices. Dekorate is a Java library that makes it simple to generate and decorate Kubernetes manifests. It generates manifests based on the annotations, source code, and configuration variables. Now build and deploy your application using Quarkus CLI: quarkus build -Dquarkus.container-image.push=true Sample Output: After the build process, you will have two files named kubernetes.json and kubernetes.yml under the target/kubernetes/ directory. # ls target/kubernetes kubernetes.json kubernetes.yml Both files contain both the Kubernetes Deployment and Service. For example, the kubernetes.yml file looks like this: # cat target/kubernetes/kubernetes.yml --- apiVersion: v1 kind: Service metadata: annotations: app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000 labels: app.kubernetes.io/name: tutorial-app app.kubernetes.io/version: latest name: tutorial-app spec: ports: - name: http port: 80 targetPort: 8080 selector: app.kubernetes.io/name: tutorial-app app.kubernetes.io/version: latest type: LoadBalancer --- apiVersion: apps/v1 kind: Deployment metadata: annotations: app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000 labels: app.kubernetes.io/version: latest app.kubernetes.io/name: tutorial-app name: tutorial-app spec: replicas: 1 selector: matchLabels: app.kubernetes.io/version: latest app.kubernetes.io/name: tutorial-app template: metadata: annotations: app.quarkus.io/build-timestamp: 2022-07-09 - 10:55:08 +0000 labels: app.kubernetes.io/version: latest app.kubernetes.io/name: tutorial-app spec: containers: - env: - name: KUBERNETES_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace image: registry_username/tutorial-app:latest imagePullPolicy: Always name: tutorial-app ports: - containerPort: 8080 name: http protocol: TCP You will also have the image pushed to your registry. DockerHub for this example: It is possible to generate a StatefulSet resource instead of the default Deployment resource via the application.properties; quarkus.kubernetes.deployment-kind=StatefulSet Now deploy the application to your Kubernetes cluster using any of the two manifests. 
For example: kubectl apply -f target/kubernetes/kubernetes.yml Verify if the deployment is up: # kubectl get deploy NAME READY UP-TO-DATE AVAILABLE AGE tutorial-app 1/1 1 1 13s
# kubectl get pods NAME READY STATUS RESTARTS AGE tutorial-app-bc774dc8d-k494g 1/1 Running 0 19s Check if the service is running: # kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 443/TCP 31m tutorial-app LoadBalancer 10.102.87.114 80:30400/TCP 4m53s Access the deployment using the provided port 30400. This can be done using a web browser with the URL http://IP_Address:30400/hello Or from the terminal as shown: $ curl 192.168.205.4:30400/hello Hello from RESTEasy Reactive This is the output of the file at src/main/java/org/acme/GreetingResource.java 4. Tuning the generated resources using application.properties Quarkus allows one to tune the generated manifest using the application.properties file. Through this file, several configurations can be made. These include: A. Namespace Quarkus allows one to run the application in a chosen namespace. It omits the namespace in the generated manifest rather than enforcing it in the default namespace. Therefore, you can run the application in the desired namespace say test using the command: kubectl apply -f target/kubernetes/kubernetes.yml -n=test Aside from specifying the namespace when running the Kubernetes command, you can still capture the namespace in the application.properties as shown: quarkus.kubernetes.namespace=mynamespace Replace mynamespace with the desired namespace for the application. B. Defining a Docker registry There are several other registries that can be defined. If left undefined, docker.io is used. If you want ot use another registry such as quay.io, then you need to specify it: quarkus.container-image.registry=my.docker-registry.net my.docker-registry.net is the registry you want to use. C. Environment variables There are several ways of defining variables on Kubernetes. These includes: key/value pairs import all values from a Secret or ConfigMap interpolate a single value identified by a given field in a Secret or ConfigMap interpolate a value from a field within the same resource Environment variables from key/value pairs To add environment variables from key/value pairs, use the below syntax: quarkus.kubernetes.env.vars.my-env-var=foobar This adds MY_ENV_VAR=foobar as an environment variable. my-env-var is converted to uppercase and the dashes are replaced with underscores to result in MY_ENV_VAR. Environment variables from Secret To add key/value pairs of Secret as environment variables, add the lines below to application.properties: quarkus.kubernetes.env.secrets=my-secret,my-other-secret This will result in the following in the container environment: envFrom: - secretRef: name: my-secret optional: false - secretRef: name: my-other-secret optional: false You can set the variable by extracting a value defined by keyName form the my-secret: quarkus.kubernetes.env.mapping.foo.from-secret=my-secret quarkus.kubernetes.env.mapping.foo.with-key=keyName Resulting into: - env: - name: FOO valueFrom: secretKeyRef: key: keyName name: my-secret optional: false Environment variables from ConfigMap Quarkus can be used to add key/value pairs from ConfigMap as environment variables. To achieve this, you need to add the lines below separating the ConfigMap to be used as a source by a comma. 
For example: quarkus.kubernetes.env.configmaps=my-config-map,another-config-map This will result into: envFrom: - configMapRef: name: my-config-map optional: false - configMapRef: name: another-config-map optional: false It is also possible to extract keyName field from the my-config-map by using: quarkus.kubernetes.env.mapping.foo.from-configmap=my-configmap quarkus.kubernetes.env.mapping.foo.with-key=keyName This will generate a manifest with the below lines:
- env: - name: FOO valueFrom: configMapRefKey: key: keyName name: my-configmap optional: false That is it! Closing Thoughts That summarizes this guide on how to deploy and use Quarkus in Kubernetes. I am sure that you are now in a position to generate the Kubernetes resources based on some defaults and the user-provided configuration using Quarkus. I hope this was valuable.
0 notes
Text
Many people around the world look for ways to build container images in Kubernetes without the need to mount the docker socket or perform any other action that compromises security on your cluster. With the increased need, a famous software engineer, Jessie Frazelle saw the need to introduce Img image builder. This is an open-source, daemon-less, and unprivileged Dockerfile and OCI compatible container image builder. Img is a wrapper around the open-source BuildKit, a building technology embedded within Img. There are many features associated with the img image builder. Some of them are: Img CLI, a responsive CLI that provides a set of commands similar to Docker CLI commands when dealing with container image building, distribution, and image manipulation. Rootless Builds: img can be run without requiring the –privileged Docker flag or the equivalent privileged: true security context in Kubernetes. BuildKit: defined as one of the next generation build engines for container images. Parallel Build Execution: BuildKit assembles an internal representation of the build steps as a Directed Acyclic Graph (DAG), which enables it to determine which build steps can be executed in parallel. Cross-Platform/OS Builds: it’s possible to build images for different combinations of architecture and OS on a completely different platform In this guide, we will take a deep dive into how to build container images on Kubernetes using img image builder. Setup Pre-requisites This guide will work best if you have a Kubernetes cluster set up. Below is a list of dedicated guides to help you achieve this: Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O Install Kubernetes Cluster on Ubuntu using K3s Deploy Kubernetes Cluster on Linux With k0s Run Kubernetes on Debian with Minikube This guide will demonstrate how to build container images from Dockerfile using img image builder in Kubernetes with Github. So, you will also need: Access to Kubernetes cluster with permissions to create, list, update and delete pods, jobs, and services Github repository with a Dockerfile: we will use the repo URL as the path of the Dockerfile Dockerhub account: to be able to authenticate and push the Docker image. #1. Configure Build Contexts For this guide, we will use a private GitHub repository as our build context. We need to configure it with the required Dockerfile. The URL to my private git repository used in this article is: https://github.com/computingforgeeks/kubernetes-demo In the repository, I will create a Dockerfile with the contents below: FROM ubuntu ENTRYPOINT ["/bin/bash", "-c", "echo hello"] Now obtain a Personal Access Token to your git account. #2. Create the Img Pod Manifest We will have two containers: Git-sync: an init container to clone the private git repository img: that builds the docker image and pushes it to docker hub These two containers share a volume git-repo mounted as emptyDir at /repo Create a manifest for the pod. 
vim pod.yml Add the below lines to the manifest: apiVersion: v1 kind: Pod metadata: labels: run: img name: img annotations: container.apparmor.security.beta.kubernetes.io/img: unconfined spec: securityContext: runAsUser: 1000 initContainers: - name: git-sync image: k8s.gcr.io/git-sync:v3.1.5 volumeMounts: - name: git-repo mountPath: /repo env: - name: GIT_SYNC_REPO value: "https://github.com/computingforgeeks/kubernetes-demo.git" ##Private repo-path-you-want-to-clone - name: GIT_SYNC_USERNAME value: "computingforgeeks" ##The username for the Git repository - name: GIT_SYNC_PASSWORD value: "ghp_JilxkjTT5EIgJCV........" ##The Personal Access Token for the Git repository - name: GIT_SYNC_BRANCH value: "master" ##repo-branch - name: GIT_SYNC_ROOT value: /repo - name: GIT_SYNC_DEST value: "hello" ##path-where-you-want-to-clone
- name: GIT_SYNC_ONE_TIME value: "true" securityContext: runAsUser: 0 containers: - image: r.j3ss.co/img imagePullPolicy: Always name: img resources: workingDir: /repo/hello command: ["/bin/sh"] args: - -c - >- img build -t docker.io//helloworld . && img login -u -p && img push docker.io//helloworld volumeMounts: - name: cache-volume mountPath: /tmp - name: git-repo mountPath: /repo volumes: - name: cache-volume emptyDir: - name: git-repo emptyDir: restartPolicy: Never In the above file, replace the values appropriately. You can also notice that the destination folder for git-sync is the working directory for img. If you are using a public git repository, you may not be required to provide the Personal Access Token for the Git repository. #3. Run img image builder in Kubernetes Using the manifest, run the pod using the command: kubectl apply -f pod.yml Now follow the image build and push process with the command: kubectl logs img --follow Output: From the above output, we are safe to conclude that the image has been successfully pushed to DockerHub #4. Pull and Test the Image You can now pull and test the image using: 1. Docker Ensure that Docker is installed on your system. The below guide can help you achieve this: How To Install Docker CE on Linux Systems Now run a container with the image using the command: docker run -it / For example: docker run -it klinsmann1/helloworld:latest Sample output: 2. Kubernetes The image pushed can still be used on Kubernetes. Pull and test the image as below; $ vim deploy.yml apiVersion: apps/v1 kind: Deployment metadata: name: hello-world spec: selector: matchLabels: app: hello replicas: 1 template: metadata: labels: app: hello spec: containers: - name: hello-world image: klinsmann1/helloworld:latest Apply the manifest: kubectl apply -f deploy.yml Check the status of the deployment: $ kubectl get pods NAME READY STATUS RESTARTS AGE hello-world-7f68776d79-h4h4z 0/1 Completed 1 (4s ago) 6s img 0/1 Completed 0 13m Verify if the execution was successful. $ kubectl logs hello-world-7f68776d79-h4h4z --follow hello The end! We have successfully walked through how to build container images on Kubernetes using img image builder. I hope this was significant to you.
Kubernetes, usually abbreviated as k8s or Kube, is a container orchestration system used for automating the deployment, scaling, and management of containerized applications. It was initially developed by Google but is currently a community project. Recently, the popularity of Kubernetes and its ecosystem has grown immensely due to the flexibility of its workload types, design patterns, and behavior.
The basic resource in Kubernetes is called a pod. This is the smallest deployable unit and is a wrapper around one or more containers, each bearing its own configuration. There are 3 different resources provided when deploying pods. These are:
Deployments: the easiest and most used resource, usually used for stateless applications. However, an application can be made stateful by attaching a persistent volume to it.
StatefulSets: this resource is used to manage the deployment and scaling of a set of Pods. It provides guarantees about the ordering and uniqueness of these Pods.
DaemonSets: this controller ensures a copy of the pod runs on all the nodes of the cluster. When a node is added to or removed from the cluster, the DaemonSet automatically adds or removes the pod.
Updating deployments can be done manually by:
Updating the image tag in deployment.yaml
Applying the manifest: kubectl apply -f deployment.yaml
This method is time-consuming and does not give the user ample time to focus on other important tasks like writing code, tests e.t.c. Today, there are many tools that can be used to build container images, including Kaniko, the img image builder e.t.c, but there has been a gap in how to automatically apply these updates when new images are available. Keel aims to fill that gap by providing a simple, robust, background service that automatically updates deployments, allowing users to focus on other tasks. This is important especially in situations where your application is re-built and needs to be deployed each time the code changes. The image below will help you understand how Keel works.
In this guide, we will discuss how to automate app deployment updates on Kubernetes using Keel.
Getting Started.
Since this guide demonstrates how to automatically update deployments on Kubernetes, you need to be familiar with how to build and push images to a container registry such as Docker Hub. Therefore you will require:
A GitHub repository with a Dockerfile: we will use the repo URL as the path of the Dockerfile
A Docker Hub account: to be able to authenticate and push the Docker image.
Access to a Kubernetes cluster: to be able to deploy the Kaniko pod and create the docker registry secret.
Webhook Relay – will relay public webhooks to our internal Kubernetes environment so we don’t have to expose Keel to the public internet.
You also need a Kubernetes cluster set up. This page provides several options on how to set up a Kubernetes cluster:
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Install Kubernetes Cluster on Ubuntu using K3s
Deploy Kubernetes Cluster on Linux With k0s
Run Kubernetes on Debian with Minikube
You need kubectl as well.
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin
Enable access to the cluster:
# For k0s
export KUBECONFIG=/var/lib/k0s/pki/admin.conf
# For Vanilla Kubernetes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Confirm kubectl works by listing available namespaces:
$ kubectl get ns
NAME              STATUS   AGE
default           Active   18d
demo              Active   2m3s
kube-flannel      Active   18d
kube-node-lease   Active   18d
kube-public       Active   18d
kube-system       Active   18d
metallb-system    Active   18d
web               Active   18d
Step 1 – Set up GitHub repository
For this guide, we will have a simple Go application that will be built and updated automatically. The private git repository will be created with the 2 files below.
Dockerfile:
FROM golang:1.17-alpine as build-env
# Set environment variable
ENV APP_NAME sample-dockerize-app
ENV CMD_PATH main.go
# Copy application data into image
COPY . $GOPATH/src/$APP_NAME
WORKDIR $GOPATH/src/$APP_NAME
# Build application
RUN CGO_ENABLED=0 go build -v -o /$APP_NAME $GOPATH/src/$APP_NAME/$CMD_PATH
# Run Stage
FROM alpine:3.14
# Set environment variable
ENV APP_NAME sample-dockerize-app
# Copy only required data into this image
COPY --from=build-env /$APP_NAME .
# Expose application port
EXPOSE 8081
# Start app
CMD ./$APP_NAME
main.go:
package main

import (
	"fmt"
	"log"
	"net/http"
)

var version = "version1"

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Welcome to my website! Version %s", version)
	})
	fmt.Printf("App is starting, version: %s \n", version)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
This application prints its version, which will help us identify the automatic image update. The files should be created on GitHub. An example showing how the files should appear:
Step 2 – Enable Webhook Relay forwarding
Triggers are entry points into Keel. They collect information regarding updated images and send events to providers. Creating a Webhook Relay operator allows us to create a public endpoint and a destination to which webhooks are forwarded. To keep things isolated, we will run them in a separate namespace. Create the namespace with the command:
kubectl create namespace push-workflow
Now set the created namespace as the current context.
kubectl config set-context $(kubectl config current-context) --namespace=push-workflow
Install Helm using the aid of the guide below:
Install and Use Helm 3 on Kubernetes Cluster
Add the repository:
helm repo add webhookrelay https://charts.webhookrelay.com
helm repo update
Obtain access tokens from the Webhook Relay page. While on this page, navigate to your profile and click ‘Create Token‘. You will have the token generated. Export the tokens as variables:
export RELAY_KEY=0***9cc-1765-4***-b968-397***1c6
export RELAY_SECRET=Qsr1R****U3
Install the tokens using Helm:
helm upgrade --install webhookrelay-operator --namespace=push-workflow webhookrelay/webhookrelay-operator \
  --set credentials.key=$RELAY_KEY --set credentials.secret=$RELAY_SECRET
Check if the application has been installed.
$ helm list
NAME                    NAMESPACE       REVISION   UPDATED                                   STATUS     CHART                         APP VERSION
webhookrelay-operator   push-workflow   1          2022-06-17 11:10:18.377937954 +0000 UTC   deployed   webhookrelay-operator-0.3.1   0.5.1
Now create a manifest that will allow you to receive and forward hooks.
vim webhookrelay_cr.yaml
Add the below lines to the file.
apiVersion: forward.webhookrelay.com/v1
kind: WebhookRelayForward
metadata:
  name: keel-forward
spec:
  buckets:
  - name: dockerhub-to-keel
    inputs:
    - name: dockerhub-endpoint
      description: "Public endpoint"
      responseBody: "OK"
      responseStatusCode: 200
    outputs:
    - name: keel
      destination: http://keel:9300/v1/webhooks/dockerhub
      internal: true
Apply the manifest:
kubectl apply -f webhookrelay_cr.yaml
You should have the 2 pods running;
# kubectl get pods
NAME                                           READY   STATUS    RESTARTS   AGE
keel-forward-whr-deployment-5d8db5b4b7-lwkbr   1/1     Running   0          21s
webhookrelay-operator-c694f6c8b-msv86          1/1     Running   0          112s
Obtain the endpoint by describing the pod, say:
$ kubectl describe webhookrelayforwards.forward.webhookrelay.com keel-forward
...
...
Status:
  Agent Status:      Running
  Public Endpoints:  https://n5jmxkbviuucl395nwjxr7.hooks.webhookrelay.com
  Ready:             true
  Routing Status:    Configured
Alternatively, obtain the public endpoint from the buckets page.
Step 3 – Configure DockerHub
Now create a new repository on Docker Hub. This can be public or private depending on your preference. Then configure the repository Webhooks by creating a new one (dockerhub-to-keel) using the public endpoint.
Step 4 – Build and Push Container Image
There are several guides to help you build and push Docker images to your container registry. In this guide, we will use Kaniko, deployed with the aid of the guide below:
Build container images from Dockerfile using kaniko in Kubernetes
You can use the guide to deploy the Docker Hub secret and proceed as below. Create the manifest:
vim pod.yml
The file will contain the lines:
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=git://<GIT-TOKEN>@github.com/computingpost/kubernetes-kaniko.git#refs/heads/master"
    - "--destination=<dockerhub-username>/keel-demo-image:latest"
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: dockercred
      items:
      - key: .dockerconfigjson
        path: config.json
This manifest will help us create and push the image to Docker Hub with the tag keel-demo-image:latest. Apply the manifest.
kubectl apply -f pod.yml
Follow the build process.
$ kubectl logs kaniko --follow
Enumerating objects: 28, done.
Counting objects: 100% (28/28), done.
Compressing objects: 100% (25/25), done.
Total 28 (delta 4), reused 0 (delta 0), pack-reused 0
INFO[0001] Resolved base name golang:1.17-alpine to build-env
INFO[0001] Retrieving image manifest golang:1.17-alpine
INFO[0001] Retrieving image golang:1.17-alpine from registry index.docker.io
INFO[0002] Retrieving image manifest alpine:3.14
INFO[0002] Retrieving image alpine:3.14 from registry index.docker.io
INFO[0004] Built cross stage deps: map[0:[/sample-dockerize-app]]
INFO[0004] Retrieving image manifest golang:1.17-alpine
INFO[0004] Returning cached image manifest
INFO[0004] Executing 0 build triggers
.......
INFO[0019] ENV APP_NAME sample-dockerize-app
INFO[0019] COPY --from=build-env /$APP_NAME .
INFO[0019] Taking snapshot of files...
INFO[0019] EXPOSE 8081
INFO[0019] cmd: EXPOSE
INFO[0019] Adding exposed port: 8081/tcp
INFO[0019] CMD ./$APP_NAME
INFO[0019] Pushing image to klinsmann1/keel-demo-image:latest
INFO[0023] Pushed index.docker.io/klinsmann1/keel-demo-image@sha256:51d44171d79919df45e27c61ef929bf48fb2cabcade724c5a76cda501ad4303b
Now you should have the image pushed to Docker Hub and tagged as latest.
Step 5 – Deploy Keel on Kubernetes
Using Helm charts, we can easily deploy Keel with the below steps. Add the chart repo:
helm repo add keel https://charts.keel.sh
helm repo update
If you work with regular Kubernetes manifests (rather than Helm releases), install Keel without Helm provider support as shown:
helm upgrade --install keel --namespace=push-workflow keel/keel --set helmProvider.enabled="false" --set service.enabled="true" --set service.type="ClusterIP"
Step 6 – Using Keel to Automate App Deployment updates
When using Keel, policies are used to define how and when you want the application update to occur. Although providers may have different methods of getting the configuration for your applications, policies are similar across them all.
The available Keel policies are:
all: initiates updates whenever a new version bump or a new prerelease is created, for example 1.0.0 -> 1.0.1-rc1
major: updates major, minor, and patch versions
minor: updates minor and patch versions but ignores major versions
patch: updates patch versions only and ignores minor and major versions
force: forces the update even when the tag is not semver, for example latest. You can also enforce a tag match with keel.sh/match-tag=true
glob: use wildcards to match versions
Policies are specified with a special annotation. For example:
...........
  annotations:
    keel.sh/policy: minor # update policy (available: patch, minor, major, all, force)
    keel.sh/trigger: poll # enable active repository checking (webhooks and GCR would still work)
    keel.sh/approvals: "1" # required approvals to update
    keel.sh/match-tag: "true" # only makes a difference when used with 'force' policy, will only update if tag matches :dev->:dev, :prod->:prod
    keel.sh/pollSchedule: "@every 1m"
    keel.sh/notify: chan1,chan2 # chat channels to send notifications to
Deploy Your Application
Create a manifest to use the Docker image pushed to Docker Hub:
vim deployment.yaml
Add the below lines to the file, specifying the Keel policies as required.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    name: "demo"
  annotations:
    # force policy will ensure that deployment is updated
    # even when tag is unchanged (latest remains)
    keel.sh/policy: force
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      name: demo
      labels:
        app: demo
    spec:
      containers:
      - image: klinsmann1/keel-demo-image:latest
        imagePullPolicy: Always # this is required to force pull image
        name: demo
        ports:
        - containerPort: 8081
        livenessProbe:
          httpGet:
            path: /
            port: 8081
          initialDelaySeconds: 10
          timeoutSeconds: 5
Apply the manifest.
kubectl create -f deployment.yaml
Verify if everything is okay.
$ kubectl get pods
NAME                                           READY   STATUS      RESTARTS   AGE
demo-66db46dfd-7x6bw                           1/1     Running     0          9s
kaniko                                         0/1     Completed   0          103s
keel-65fcb79c6d-n7vz4                          1/1     Running     0          6m23s
keel-forward-whr-deployment-5d8db5b4b7-kzdzn   1/1     Running     0          11m
webhookrelay-operator-c694f6c8b-4vrpc          1/1     Running     0          13m
$ kubectl logs demo-66db46dfd-7x6bw
App is starting, version: version1
Step 7 – Update and Push Container Image
Now we will edit our main.go code on GitHub, changing the program’s version string to version2, and build and push a newer image:
package main

import (
	"fmt"
	"log"
	"net/http"
)

var version = "version2"

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "Welcome to my website! Version %s", version)
	})
	fmt.Printf("App is starting, version: %s \n", version)
	log.Fatal(http.ListenAndServe(":8081", nil))
}
Build and push the image by running the same Kaniko manifest from Step 4. The commands used here are:
kubectl delete pod kaniko
kubectl apply -f pod.yml
The new image will be pushed to Docker Hub. You can follow the process with the command:
kubectl logs kaniko --follow
Step 8 – Verify Updates on your Kubernetes App
Once a new version is built and pushed, Keel automatically updates the application. To verify this, view the deployment history of the application.
kubectl rollout history deployment/demo
Sample Output:
Check if the pod is running.
$ kubectl get pods
NAME                                           READY   STATUS      RESTARTS   AGE
demo-86df5f4b76-p76th                          1/1     Running     0          103s
kaniko                                         0/1     Completed   0          2m9s
keel-65fcb79c6d-n7vz4                          1/1     Running     0          11m
keel-forward-whr-deployment-5d8db5b4b7-kzdzn   1/1     Running     0          16m
webhookrelay-operator-c694f6c8b-4vrpc          1/1     Running     0          18m
Now verify if the update was automated.
kubectl logs demo-86df5f4b76-p76th
Conclusion.
It is that simple! Now you can easily automate app deployment updates on Kubernetes using Keel. I hope this was of value.
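If Docker Hub webhooks cannot reach your cluster, a similar result can be achieved with Keel's polling trigger instead of the Webhook Relay path. This is a minimal sketch based on the annotations listed in Step 6; the poll schedule value is only an example:
  annotations:
    keel.sh/policy: force              # redeploy even though the tag stays :latest
    keel.sh/trigger: poll              # Keel actively checks the registry instead of waiting for webhooks
    keel.sh/pollSchedule: "@every 2m"  # how often to check for a new image
With polling enabled, Keel periodically inspects the repository and rolls out the deployment when it detects that a new image has been pushed.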
In the recent past, organizations used to build Docker images outside the Kubernetes cluster. With more companies adopting Kubernetes, the idea of running continuous integration builds within the cluster has gained traction. However, building and running containers within a cluster poses a security threat, since the build container has to access the file system of the worker node to connect with the Docker daemon. In addition to that, the containers need to run in privileged mode, which exposes the node to innumerable security threats.
This problem has been solved by Google with the introduction of Kaniko. This tool helps one build container images from a Dockerfile without any access to the Docker daemon. It executes all the commands within the Dockerfile completely in userspace, without allowing any access to the host file system. Kaniko works in the following way:
It reads the specified Dockerfile from the build context or a remote Docker registry
It proceeds to extract the base image into the container filesystem
It runs the commands in the Dockerfile individually; after every run, a snapshot of the userspace filesystem is taken
After every snapshot, it appends only the changed image layers to the base image and updates the image metadata
Then the image is pushed to the registry
The illustration below will help you understand how Kaniko works.
Before You Begin
This guide requires one to have a Kubernetes cluster with permissions to create, update, list, and delete jobs, pods, services, and secrets. There are several guides to help you deploy a Kubernetes cluster:
Install Kubernetes Cluster on Rocky Linux 8 with Kubeadm & CRI-O
Install Kubernetes Cluster on Ubuntu using K3s
Deploy Kubernetes Cluster on Linux With k0s
Run Kubernetes on Debian with Minikube
Install Kubernetes Cluster on Ubuntu with kubeadm
In this guide, we will build container images from a Dockerfile using Kaniko in Kubernetes with GitHub and the Docker registry. So, you need the following:
A GitHub repository with a Dockerfile: we will use the repo URL as the path of the Dockerfile
A Docker Hub account: to be able to authenticate and push the Docker image.
Access to a Kubernetes cluster: to be able to deploy the Kaniko pod and create the docker registry secret.
Step 1 – Create the Container Registry Secret
We will start off by setting up the container registry secret. This is used when pushing the built image to the registry. There are several registries supported by Kaniko. These are:
Docker Hub
Google GCR
Amazon ECR
Azure Container Registry
JFrog Container Registry/JFrog Artifactory
These registries are declared using the --destination flag in the manifest. In this guide, we will push the image to the Docker Hub registry. I assume that you already have a Docker Hub account. Now create the registry secret using the command:
kubectl create secret docker-registry dockercred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<dockerhub-username> \
  --docker-password=<dockerhub-password> \
  --docker-email=<dockerhub-email>
After this, you should have the secret deployed:
$ kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-ncn6w   kubernetes.io/service-account-token   3      3m48s
dockercred            kubernetes.io/dockerconfigjson        1      15s
Step 2 – Configure Kaniko Build Contexts
The build context represents the directory containing the Dockerfile to be used to build your image. The following build context types are supported by Kaniko:
S3 Bucket
Local Directory
GCS Bucket
Azure Blob Storage
Local Tar
Git Repository
Standard Input
You can use any of the supported types by specifying them in the manifest using the --context flag. The prefixes to be used are:
Local Directory: prefix dir://[path to a directory in the kaniko container], e.g. dir:///workspace
Local Tar Gz: prefix tar://[path to a .tar.gz in the kaniko container], e.g. tar://path/to/context.tar.gz
GCS Bucket: prefix gs://[bucket name]/[path to .tar.gz], e.g. gs://kaniko-bucket/path/to/context.tar.gz
Standard Input: prefix tar://[stdin], e.g. tar://stdin
Azure Blob Storage: prefix https://[account].[azureblobhostsuffix]/[container]/[path to .tar.gz], e.g. https://myaccount.blob.core.windows.net/container/path/to/context.tar.gz
Git Repository: prefix git://[repository url][#reference][#commit-id], e.g. git://github.com/acme/myproject.git#refs/heads/mybranch#
S3 Bucket: prefix s3://[bucket name]/[path to .tar.gz], e.g. s3://kaniko-bucket/path/to/context.tar.gz
In this guide, we will use a GitHub repository as our build context, so we need to configure it with the required Dockerfile. For this guide, we will use a private Git repository with a Dockerfile. The repo URL is: https://github.com/computingforgeeks/kubernetes-kaniko
The repository will have a Dockerfile with the below content:
FROM ubuntu
ENTRYPOINT ["/bin/bash", "-c", "echo hello"]
The repository will have the file as shown.
Step 3 – Create the Kaniko Pod Manifest
So we will create our manifest below.
vim pod.yaml
Remember to replace the placeholder values appropriately.
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - "--context=git://<GIT-TOKEN>@github.com/computingforgeeks/kubernetes-kaniko.git#refs/heads/master"
    - "--destination=<dockerhub-username>/kaniko-demo-image:1.0"
    volumeMounts:
    - name: kaniko-secret
      mountPath: /kaniko/.docker
  restartPolicy: Never
  volumes:
  - name: kaniko-secret
    secret:
      secretName: dockercred
      items:
      - key: .dockerconfigjson
        path: config.json
In the above file, we have used the Git API token to authenticate to the private git repository. This can be avoided if you are using a public repository. The --destination is the location we want to push the image to. For example, in my case the location will be klinsmann1/kaniko-demo-image:1.0.
Step 4 – Run Kaniko in Kubernetes
With the manifest created as desired, run Kaniko with the command:
kubectl apply -f pod.yaml
Follow the image build and push process.
kubectl logs kaniko --follow
Sample output:
After this, I will have my image pushed to Docker Hub.
Step 5 – Pull and Test the Image
Now the image can be used to run containers on both Kubernetes and Docker.
On Docker
Begin by installing Docker on your system. The below guide can help you achieve this:
How To Install Docker CE on Linux Systems
Now run a container with the image using the command:
docker run -it <dockerhub-username>/<image-name>
For example:
docker run -it klinsmann1/kaniko-demo-image:1.0
Sample Output:
On Kubernetes
The image can also be pulled and tested on Kubernetes. Create a deployment file as shown.
$ vim deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello-world
        image: klinsmann1/kaniko-demo-image:1.0
Apply the manifest:
kubectl apply -f deploy.yml
Check the status:
$ kubectl get pods
NAME                           READY   STATUS      RESTARTS      AGE
hello-world-7f67c97454-xv8xs   0/1     Completed   2 (18s ago)   19s
kaniko                         0/1     Completed   0             32m
Check if the execution was successful.
$ kubectl logs hello-world-7f67c97454-xv8xs --follow
hello
Voila! We have triumphantly walked through how to build container images from a Dockerfile using Kaniko in Kubernetes. Now proceed and build your images and use them within your cluster.
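For repeated builds of the same Dockerfile, Kaniko's layer caching can shorten build times. Below is a hedged sketch of the same args block with caching turned on; the cache repository name is an assumption and must be a registry path that the dockercred secret is allowed to push to:
    args:
    - "--context=git://<GIT-TOKEN>@github.com/computingforgeeks/kubernetes-kaniko.git#refs/heads/master"
    - "--destination=<dockerhub-username>/kaniko-demo-image:1.1"
    - "--cache=true"                                     # enable layer caching
    - "--cache-repo=<dockerhub-username>/kaniko-cache"   # repository used to store cached layers
Cached layers are pushed to the cache repository and reused on later builds whenever the corresponding Dockerfile instructions have not changed.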
Redis, an acronym for REmote DIctionary Server, is an open-source, in-memory key-value NoSQL database written in ANSI C. It is a data structure store that can be used as a primary database, message broker, session store, or as a cache for web and gaming applications. This in-memory database is optimized for speed, with both high read and write throughput, since all the data in Redis is stored in RAM. It also supports graphs, search, analytics, real-time streaming, and many more features than those of a simple data store. To give maximum CPU optimization, Redis is designed around a single-threaded event loop model, and the data structures used internally are implemented for maximum performance. Other features associated with Redis are:
High availability and scalability: with the primary-replica architecture, you can build highly available solutions providing consistent performance and reliability. It can be scaled both vertically and horizontally.
Data persistence: saved data survives even if a server failure occurs. For data persistence, Redis must write to permanent storage such as a hard disk.
Rich data structures: it offers an innumerable variety of data structures to meet the desired application needs.
Simplicity: it is simple in design, requiring very few lines of code to be integrated in order to store, access, and use data.
In-memory datastore: in contrast to conventional relational databases such as SQL, Oracle, e.t.c that store most data on disk, Redis and other in-memory datastores do not suffer the same penalty of accessing disks; this in turn gives applications super-fast performance and support for innumerable operations per second.
Redis can be deployed on clouds, on-premises, in hybrid environments, and over Edge devices. This guide offers an in-depth illustration of how to run Redis in a Podman / Docker container.
Step 1 – Install Podman|Docker on your system
We will begin by installing the desired container engine on our system. Install Docker using the aid of the guide below:
How To Install Docker CE on Linux Systems
For Podman, proceed using the commands below.
#On CentOS/Rocky Linux
sudo yum install podman
#On Debian
sudo apt-get install podman
#On Ubuntu
. /etc/os-release
echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list
curl -L "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_$VERSION_ID/Release.key" | sudo apt-key add -
sudo apt update
sudo apt -y install podman
#On Fedora
sudo dnf install podman
#On RHEL 7
sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
sudo yum -y install podman
#On RHEL 8
sudo yum module enable -y container-tools:rhel8
sudo yum module install -y container-tools:rhel8
Verify the installation as below.
$ podman info
host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+643+525e162a.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ce0221c919d8326c218a7d4d355d11848e8dd21f'
  cpus: 2
  distribution:
    distribution: '"rocky"'
    version: "8.4"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
.....
For Debian/Ubuntu systems, you may be required to make the below configuration to work with OCI registries.
$ sudo vim /etc/containers/registries.conf
unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "registry.centos.org", "docker.io"]
Once the desired container engine has been installed, proceed to the step below.
Step 2 – Create a Persistent Volume for the Redis Container
Persistent volumes allow data to survive after the container's main process has ended. To achieve this, we need to create directories on the hard disk to store the data.
sudo mkdir -p /var/redis/data
sudo mkdir $PWD/redis-data
sudo chmod 775 -R /var/redis/data
sudo chmod 775 -R $PWD/redis-data
On RHEL-based systems, you are required to set SELinux in permissive mode; otherwise, the created path will be inaccessible.
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Step 3 – Provision the Redis Container
First, pull the Redis container image.
##For Podman
podman pull docker.io/redis
##For Docker
docker pull docker.io/redis
Sample output:
Using default tag: latest
latest: Pulling from library/redis
5eb5b503b376: Pull complete
6530a7ea3479: Pull complete
91f5202c6d9b: Pull complete
9f1ac212e389: Pull complete
82c311187b72: Pull complete
da84aa65ce64: Pull complete
Digest: sha256:0d9c9aed1eb385336db0bc9b976b6b49774aee3d2b9c2788a0d0d9e239986cb3
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest
Once pulled, verify that the image exists in your local registry.
##For Podman
$ podman images
REPOSITORY                TAG      IMAGE ID       CREATED       SIZE
redis                     latest   f1b6973564e9   3 weeks ago   113MB
##For Docker
$ docker images
REPOSITORY                TAG      IMAGE ID       CREATED       SIZE
docker.io/library/redis   latest   f1b6973564e9   3 weeks ago   116 MB
Step 4 – Run the Redis Container
With the image available in the local registry, we can now spin up the Redis container with Podman|Docker or with Podman-Compose|Docker-Compose.
1. Using Podman|Docker
Using Podman:
podman run -d \
  --name redis_server \
  -v $PWD/redis-data:/var/redis/data \
  -p 6379:6379 \
  redis --requirepass StrongPassword
Using Docker:
docker run -d \
  --name redis_server \
  -v $PWD/redis-data:/var/redis/data \
  -p 6379:6379 \
  docker.io/library/redis --requirepass StrongPassword
2. Using Podman-Compose|Docker-Compose
You can as well use Podman-Compose|Docker-Compose to spin up the container. All you need is to have Podman-Compose|Docker-Compose installed. Install Podman-Compose using the commands below. First, install Python and PIP.
# Install Python3 on CentOS 7
sudo yum -y install epel-release
sudo yum -y install python3 python3-pip python3-devel
# Install Python3 on Rocky Linux 8 / CentOS Stream 8 / AlmaLinux 8
sudo yum -y install python3 python3-pip python3-devel
# Install Python3 on Debian / Ubuntu
sudo apt update
sudo apt install python3 python3-pip
Now install dotenv and podman-compose as below.
sudo pip3 install python-dotenv
sudo curl -o /usr/local/bin/podman-compose https://raw.githubusercontent.com/containers/podman-compose/devel/podman_compose.py
sudo chmod +x /usr/local/bin/podman-compose
Install docker-compose with the commands:
curl -s https://api.github.com/repos/docker/compose/releases/latest | grep browser_download_url | grep docker-compose-linux-x86_64 | cut -d '"' -f 4 | wget -qi -
chmod +x docker-compose-linux-x86_64
sudo mv docker-compose-linux-x86_64 /usr/local/bin/docker-compose
Now create the YAML file to be used when running the container.
vim docker-compose.yml
In the file, add the lines below.
version: '3'
services:
  cache:
    image: redis
    container_name: redis_server
    restart: always
    ports:
      - '6379:6379'
    command: redis-server --requirepass StrongPassword
    volumes:
      - $PWD/redis-data:/var/redis/data
      - $PWD/redis.conf:/usr/local/etc/redis/redis.conf
In the file above, the --requirepass option has been used to specify a password for our Redis server. Now start the container using the command:
##For Podman
podman-compose up -d
##For Docker
docker-compose up -d
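Optionally, the compose file can also declare a healthcheck so that Podman-Compose|Docker-Compose reports whether Redis is actually answering. This is a sketch rather than part of the original file; it assumes the same StrongPassword value set above and is added under the cache service, at the same level as ports and volumes:
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "StrongPassword", "ping"]   # expects PONG from the authenticated server
      interval: 30s
      timeout: 5s
      retries: 3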
With either of the above methods, the container will start and can be checked using the command:
##For Podman
$ podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS                   NAMES
cee0b9192ccb  docker.io/library/redis:latest  --requirepass Str...  7 seconds ago  Up 8 seconds ago  0.0.0.0:6379->6379/tcp  redis_server
##For Docker
$ docker ps
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                                       NAMES
90775de4796b   redis   "docker-entrypoint.s…"   32 seconds ago   Up 30 seconds   0.0.0.0:6379->6379/tcp, :::6379->6379/tcp   redis_server
To start/stop the container, issue the commands:
##For Podman
podman stop redis_server
podman start redis_server
##For Docker
docker stop redis_server
docker start redis_server
Step 5 – Run the Redis Container as a systemd service
The container can be managed like any other systemd service. We will create a systemd service file for the container as below.
sudo vim /etc/systemd/system/redis-container.service
In the file, add the content below, replacing the name of your container engine. For example, for Docker:
[Unit]
Description=Redis container

[Service]
Restart=always
ExecStart=/usr/bin/docker start -a redis_server
ExecStop=/usr/bin/docker stop -t 2 redis_server

[Install]
WantedBy=multi-user.target
With Podman, you can also generate the service file as below:
podman generate systemd redis_server
Copy the generated file to /etc/systemd/system/redis-container.service and proceed as below. Reload the systemd daemon.
sudo systemctl daemon-reload
Now start and enable the service.
sudo systemctl start redis-container.service
sudo systemctl enable redis-container.service
Once started, check the status as below.
$ systemctl status redis-container.service
● redis-container.service - Redis container
   Loaded: loaded (/etc/systemd/system/redis-container.service; disabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-02-20 05:15:00 EST; 8s ago
 Main PID: 5880 (docker)
    Tasks: 7 (limit: 7075)
   Memory: 18.5M
      CPU: 29ms
   CGroup: /system.slice/redis-container.service
           └─5880 /usr/bin/docker start -a redis_server
In case you get errors such as “restarted too quickly” when starting the Redis container, it is usually a permissions issue; you can correct this by running the Redis container with sudo or with elevated (root) privileges.
Step 6 – Connect to the Redis container
You can now connect to the Redis container locally or remotely using redis-cli. Locally, you access the container as below:
##For Docker
docker exec -it redis_server redis-cli
##For Podman
podman exec -it redis_server redis-cli
Sample Output:
Remotely, you need to have redis-tools installed, then proceed as below.
sudo redis-cli -h [host IP or domain name] -p 6379
For example, for this guide the command will be:
sudo redis-cli -h 192.168.205.4 -p 6379
Provide the password for the Redis server.
Voila! That was enough learning! I hope this guide has been of great importance to you. You can also give any feedback pertaining to this guide in the comments below.
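Once connected with redis-cli, a quick write/read test confirms that authentication and the data store are working. The key name used here is purely illustrative:
AUTH StrongPassword
SET greeting "hello from the container"
GET greeting
PING
AUTH should return OK, SET and GET should store and echo the value back, and PING should answer with PONG.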
The Vagrant Podman provisioner can be used to automatically install Podman, a daemon-less container engine used to develop, manage and run OCI containers. Podman, built and officially supported by Red Hat, acts as a drop-in replacement for Docker. Just like Docker, Podman has the ability to pull container images and configure containers to run automatically on boot. Podman is highly preferred when running containers since it allows one to run containers on Kubernetes as long as the images are OCI-compliant. Podman can also be used along with other provisioners such as the Puppet provisioner e.t.c. In this setup, we will use Vagrant to set up a working environment for the Podman provisioner. This guide takes a deep dive into how to manage Podman containers with Vagrant.
Getting Started.
For this guide, you need to have Vagrant installed on your system. Below are dedicated guides to help you install Vagrant on your system.
On Debian/Ubuntu/Kali Linux: Install Vagrant on Ubuntu / Debian / Kali Linux
On RHEL 8/CentOS 8/Rocky Linux 8/Alma Linux: How To Install Vagrant on CentOS 8 / RHEL 8
On Fedora: Install Vagrant and VirtualBox on Fedora
On Ubuntu, you will also have to install the below package:
sudo apt-get install libarchive-tools
For this guide, I will use VirtualBox as my hypervisor. You can install it on your system using the guides below:
On Debian: Install VirtualBox on Debian
On Ubuntu/Kali Linux/Linux Mint: How To Install VirtualBox on Kali Linux / Linux Mint
On RHEL/CentOS/Rocky Linux: How To Install VirtualBox on CentOS 8 / RHEL 8
With Vagrant and VirtualBox successfully installed using the aid of the above guides, you are now set to proceed as below.
Step 1 – Create a Vagrant Box
For this guide, we will create a CentOS 7 vagrant box using VirtualBox as the provider.
$ vagrant box add centos/7 --provider=virtualbox
You can as well use other hypervisors as below.
##For KVM
vagrant box add centos/7 --provider=libvirt
##For VMware
vagrant box add generic/centos7 --provider=vmware_desktop
##For Docker
vagrant box add generic/centos7 --provider=docker
##For Parallels
vagrant box add generic/centos7 --provider=parallels
Create a Vagrantfile for CentOS 7 as below:
mkdir ~/vagrant-vms
cd ~/vagrant-vms
touch ~/vagrant-vms/Vagrantfile
Now we are set to edit the Vagrantfile depending on our preferences as below.
Step 2 – Manage Podman Containers With Vagrant
Now manage your Podman containers with Vagrant as below. Stop the running instance:
$ vagrant halt
==> default: Attempting graceful shutdown of VM...
1. Build Podman Images with Vagrant
The Vagrant Podman provisioner can be used to automatically build images. Container images can be built before running them or prior to any configured containers, as below.
$ vim ~/vagrant-vms/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    d.build_image "~/vagrant-vms/app"
  end
end
Above is a sample of how to build an image from a Dockerfile; remember that the path ~/vagrant-vms/app must exist on your guest machine:
vagrant ssh
mkdir ~/vagrant-vms/app
chmod 755 ~/vagrant-vms/app
Stop the running instance as below:
vagrant halt
Start Vagrant as below:
vagrant up --provision
2. Pull Podman Images with Vagrant
Images can also be pulled from Docker registries. There are two methods you can use to specify the Docker image to be pulled.
The first method is by using the images argument. For example, to pull an Ubuntu image use:
$ vim ~/vagrant-vms/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman", images: ["ubuntu"]
end
The second option is by using the pull_images function as below.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    d.pull_images "ubuntu"
    d.pull_images "alpine"
  end
end
Sample output:
$ vagrant up --provision
.........
==> default: Rsyncing folder: /home/thor/vagrant-vms/ => /vagrant
==> default: Running provisioner: podman...
    default: Podman installing
==> default: Pulling Docker images...
==> default: -- Image: ubuntu
==> default: Trying to pull registry.access.redhat.com/ubuntu...
==> default: name unknown: Repo not found
==> default: Trying to pull registry.redhat.io/ubuntu...
==> default: unable to retrieve auth token: invalid username/password: unauthorized: Please login to the Red Hat Registry using your Customer Portal credentials. Further instructions can be found here: https://access.redhat.com/RegistryAuthentication
==> default: Trying to pull docker.io/library/ubuntu...
==> default: Getting image source signatures
==> default: Copying blob sha256:7b1a6ab2e44dbac178598dabe7cff59bd67233dba0b27e4fbd1f9d4b3c877a54
==> default: Copying config sha256:ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1
==> default: Writing manifest to image destination
==> default: Storing signatures
==> default: ba6acccedd2923aee4c2acc6a23780b14ed4b8a5fa4e14e252a23b846df9b6c1
3. Run Podman Containers with Vagrant
In addition to building and pulling Podman images, Vagrant can also be used to run Podman containers. Running containers is done using the normal Ruby block syntax with do…end blocks. For example, to run a RabbitMQ Podman container, use the below syntax:
$ vim ~/vagrant-vms/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    d.run "rabbitmq"
  end
end
Start the instance:
$ vagrant up --provision
........
==> default: Rsyncing folder: /home/thor/vagrant-vms/ => /vagrant
==> default: Running provisioner: podman...
==> default: Starting Docker containers...
==> default: -- Container: rabbitmq
Verify that the container has been started.
$ vagrant ssh
$ podman ps
CONTAINER ID  IMAGE                              COMMAND          CREATED             STATUS                 PORTS  NAMES
245e7d8cf138  docker.io/library/rabbitmq:latest  rabbitmq-server  About a minute ago  Up About a minute ago         rabbitmq
You can as well run multiple containers using the same image. Here, you have to specify the name of each container. For example, to run multiple MariaDB instances, you can use the code below:
$ vim ~/vagrant-vms/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    d.run "db-1", image: "mariadb"
    d.run "db-2", image: "mariadb"
  end
end
$ vagrant up --provision
==> default: Rsyncing folder: /home/thor/vagrant-vms/ => /vagrant
==> default: Running provisioner: podman...
    default: Podman installing
==> default: Starting Docker containers...
==> default: -- Container: db-1
==> default: -- Container: db-2
Furthermore, a container can be configured to run with a shared directory mounted in it. For example, to run an Ubuntu container with the shared directory /var/www, use the below code:
$ vim ~/vagrant-vms/Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    d.run "ubuntu", cmd: "bash -l", args: "-v '/vagrant:/var/www'"
  end
end
Sample output:
$ vagrant up --provision
==> default: Rsyncing folder: /home/thor/vagrant-vms/ => /vagrant
==> default: Running provisioner: podman...
    default: Podman installing
==> default: Starting Docker containers...
==> default: -- Container: ubuntu
You can verify your created Podman containers as below:
$ vagrant ssh
$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND  CREATED        STATUS                             PORTS  NAMES
ec976aa0ae53  docker.io/library/ubuntu:latest  bash -l  2 minutes ago  Exited (0) Less than a second ago         ubuntu
You can pause/hibernate the vagrant instance as below.
vagrant suspend
Delete the Vagrant instance:
vagrant destroy
Conclusion.
That marks the end of this guide! We have successfully gone through how to manage Podman containers with Vagrant. I hope you found this guide important.
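The build and run steps above can also be combined into one Vagrantfile so that a single vagrant up --provision builds the image and starts a container from it. This is a sketch under two assumptions: that ~/vagrant-vms/app contains a valid Dockerfile, and that the Podman provisioner accepts the same args option for build_image as the Docker provisioner (used here to tag the image as myapp):
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "podman" do |d|
    # Build the image from the Dockerfile in ~/vagrant-vms/app and tag it (tag name is illustrative)
    d.build_image "~/vagrant-vms/app", args: "-t myapp"
    # Start a container from the freshly built image
    d.run "myapp"
  end
end
If the args option is not available in your Vagrant version, keep the two-pass approach shown earlier: build the image first, then add the d.run block and re-provision.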