In this guide we will perform an installation of Red Hat OpenShift Container Platform 4.11 on KVM virtual machines. OpenShift is a powerful, platform-agnostic, enterprise-grade Kubernetes distribution focused on developer experience and application security. The project is developed and owned by Red Hat. OpenShift Container Platform is built around containers, orchestrated and managed by Kubernetes, on a foundation of Red Hat Enterprise Linux.
The OpenShift platform offers automated installation, upgrades, and lifecycle management throughout the container stack – from the operating system, Kubernetes and cluster services, to deployed applications. The operating system used on both the control plane and worker machines is Red Hat Enterprise Linux CoreOS (RHCOS). RHCOS includes the kubelet, the Kubernetes node agent, and the CRI-O container runtime, which is optimized for Kubernetes workloads.
In my installation the deployment is performed on a single-node KVM compute server. This is not a production setup with high availability and should only be used for proof-of-concept and demo purposes.
Red Hat’s recommended minimum hardware requirements for each cluster virtual machine are shown in the table below:

| Virtual Machine | Operating System | vCPU | Virtual RAM | Storage |
|-----------------|------------------|------|-------------|---------|
| Bootstrap       | RHCOS            | 4    | 16 GB       | 120 GB  |
| Control plane   | RHCOS            | 4    | 16 GB       | 120 GB  |
| Compute         | RHCOS            | 2    | 8 GB        | 120 GB  |
But the preferred requirements for each cluster virtual machine are:

| Virtual Machine | Operating System | vCPU | Virtual RAM | Storage |
|-----------------|------------------|------|-------------|---------|
| Bootstrap       | RHCOS            | 4    | 16 GB       | 120 GB  |
| Control plane   | RHCOS            | 8    | 16 GB       | 120 GB  |
| Compute         | RHCOS            | 6    | 8 GB        | 120 GB  |
These hardware requirements are only a baseline: actual sizing depends on your workloads and the desired cluster size when running in production, so size the machines as you deem fit.
My Lab environment variables
OpenShift 4 Cluster base domain: example.com (substitute accordingly)
OpenShift 4 Cluster name: ocp4 (substitute accordingly)
OpenShift KVM network bridge: openshift4
OpenShift Network Block: 192.168.100.0/24
OpenShift Network gateway address: 192.168.100.1
Bastion / Helper node IP Address (Runs DHCP, Apache httpd, HAProxy, PXE, DNS) – 192.168.100.254
NTP server used: time.google.com
Used MAC addresses and IP addresses:

| Machine Name | MAC Address (generate yours and use) | DHCP Reserved IP Address |
|--------------|--------------------------------------|--------------------------|
| bootstrap.ocp4.example.com | 52:54:00:a4:db:5f | 192.168.100.10 |
| master01.ocp4.example.com  | 52:54:00:8b:a1:17 | 192.168.100.11 |
| master02.ocp4.example.com  | 52:54:00:ea:8b:9d | 192.168.100.12 |
| master03.ocp4.example.com  | 52:54:00:f8:87:c7 | 192.168.100.13 |
| worker01.ocp4.example.com  | 52:54:00:31:4a:39 | 192.168.100.21 |
| worker02.ocp4.example.com  | 52:54:00:6a:37:32 | 192.168.100.22 |
| worker03.ocp4.example.com  | 52:54:00:95:d4:ed | 192.168.100.23 |
Step 1: Setup KVM Infrastructure (On Hypervisor Node)
Install KVM on your hypervisor node using any of the guides in the links below:
Install KVM Hypervisor on Ubuntu
How To Install KVM Hypervisor on Debian
Install KVM on RHEL 8 / CentOS 8 / Rocky Linux
After installation verify your server CPU has support for Intel VT or AMD-V Virtualization extensions:
cat /proc/cpuinfo | egrep "vmx|svm"
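If the libvirt client tools are already installed, you can also run virt-host-validate for a fuller readiness check of the KVM stack; it verifies hardware virtualization, /dev/kvm, cgroup support and IOMMU, printing PASS or FAIL for each check:
$ virt-host-validate qemu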
Creating Virtual Network (optional, you can use existing network)
Create a new virtual network configuration file
vim virt-net.xml
File contents: below is a minimal NAT network definition consistent with the lab addressing (bridge name openshift4, network 192.168.100.0/24, gateway 192.168.100.1); libvirt's own DHCP range is left out because the bastion VM will provide DHCP:
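<network>
  <name>openshift4</name>
  <forward mode='nat'/>
  <bridge name='openshift4' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>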
Create a virtual network using the file just created; modify it if need be:
$ sudo virsh net-define --file virt-net.xml
Network openshift4 defined from virt-net.xml
Set the network to autostart on boot
$ sudo virsh net-autostart openshift4
Network openshift4 marked as autostarted
$ sudo virsh net-start openshift4
Network openshift4 started
Confirm that the bridge is available and active:
$ brctl show
bridge name bridge id STP enabled interfaces
openshift4 8000.5254002b479a yes
virbr0 8000.525400ad641d yes
Step 2: Create Bastion / Helper Virtual Machine
Create a virtual machine that will host some key services, using the officially provided virt-builder images. The virtual machine will run the following services:
DNS Server (Bind)
Apache httpd web server
HAProxy Load balancer
DHCP & PXE/TFTP services
It will also be our bastion server for deploying and managing the OpenShift platform (oc, openshift-install, kubectl, ansible).
Let’s first display the available OS templates with the command below:
$ virt-builder -l
I’ll create a VM image from the fedora-36 template; you can also choose a CentOS template (8 or 7):
sudo virt-builder fedora-36 --format qcow2 \
--size 20G -o /var/lib/libvirt/images/ocp-bastion-server.qcow2 \
--root-password password:StrongRootPassw0rd
Where:
fedora-36 is the template used to create a new virtual machine
/var/lib/libvirt/images/ocp-bastion-server.qcow2 is the path to VM qcow2 image
StrongRootPassw0rd is the root user password
VM image creation progress will be shown on your screen:
[ 1.0] Downloading: http://builder.libguestfs.org/fedora-36.xz
########################################################################################################################################################### 100.0%
[ 15.3] Planning how to build this image
[ 15.3] Uncompressing
[ 18.2] Resizing (using virt-resize) to expand the disk to 20.0G
[ 39.7] Opening the new disk
[ 44.1] Setting a random seed
[ 44.1] Setting passwords
[ 45.1] Finishing off
Output file: /var/lib/libvirt/images/ocp-bastion-server.qcow2
Output size: 20.0G
Output format: qcow2
Total usable space: 20.0G
Free space: 19.0G (94%)
Now create a Virtual Machine to be used as DNS and DHCP server with virt-install
Using Linux bridge:
sudo virt-install \
--name ocp-bastion-server \
--ram 4096 \
--vcpus 2 \
--disk path=/var/lib/libvirt/images/ocp-bastion-server.qcow2 \
--os-type linux \
--os-variant rhel8.0 \
--network bridge=openshift4 \
--graphics none \
--serial pty \
--console pty \
--boot hd \
--import
Using an Open vSwitch bridge (ref: How To Use Open vSwitch Bridge on KVM Virtual Machines):
sudo virt-install \
--name ocp-bastion-server \
--ram 4096 \
--disk path=/var/lib/libvirt/images/ocp-bastion-server.qcow2 \
--vcpus 2 \
--os-type linux \
--os-variant rhel8.0 \
--network=bridge:openshift4,model=virtio,virtualport_type=openvswitch \
--graphics none \
--serial pty \
--console pty \
--boot hd \
--import
When your VM is created and running, log in as the root user with the password set initially:
Fedora 36 (Thirty Six)
Kernel 5.xx.fc36.x86_64 on an x86_64 (ttyS0)
fedora login: root
Password: StrongRootPassw0rd
You can reset the root password after installation if desired:
[root@fedora ~]# passwd
Changing password for user root.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
If the server didn’t get an IP address from a DHCP server, you can set a static IP manually on the primary interface:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:21:fb:33 brd ff:ff:ff:ff:ff:ff
# vi /etc/sysconfig/network-scripts/ifcfg-enp1s0
NAME="enp1s0" # Set network name, usually same as device name
DEVICE="enp1s0" # Set your interface name as shown while running ip link show command
ONBOOT="yes"
NETBOOT="yes"
BOOTPROTO="none"
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
DEFROUTE="yes"
IPADDR=192.168.100.254 # Set your VM IP address
PREFIX=24 # Set netmask prefix (the lab network is 192.168.100.0/24)
GATEWAY=192.168.100.1 # Set network gateway IP address
DNS1=8.8.8.8 # Set first DNS server to be used
DNS2=8.8.4.4 # Set secondary DNS server to be used
# Once configured bring up the interface using ifup command
# ifup enp1s0
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/7)
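Note that the legacy network-scripts shown above are deprecated on recent Fedora releases. Assuming NetworkManager manages the interface, the same static configuration can be applied with nmcli (a sketch using the enp1s0 interface name from above; adjust to your environment):
nmcli connection add con-name enp1s0 ifname enp1s0 type ethernet \
  ipv4.method manual \
  ipv4.addresses 192.168.100.254/24 \
  ipv4.gateway 192.168.100.1 \
  ipv4.dns "8.8.8.8 8.8.4.4"
nmcli connection up enp1s0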
Test external connectivity from the VM:
# ping -c 2 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=117 time=4.98 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=117 time=5.14 ms
--- 8.8.8.8 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 4.981/5.061/5.142/0.080 ms
# ping -c 2 google.com
PING google.com (172.217.18.110) 56(84) bytes of data.
64 bytes from zrh04s05-in-f110.1e100.net (172.217.18.110): icmp_seq=1 ttl=118 time=4.97 ms
64 bytes from fra16s42-in-f14.1e100.net (172.217.18.110): icmp_seq=2 ttl=118 time=5.05 ms
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 4.971/5.008/5.045/0.037 ms
Perform OS upgrade before deploying other services.
sudo dnf -y upgrade
sudo dnf -y install git vim wget curl bash-completion tree tar libselinux-python3 firewalld
Reboot the server after the upgrade is done.
sudo reboot
Confirm you can access the VM through the virsh console or ssh:
$ sudo virsh list
Id Name State
-------------------------------------
1 ocp-bastion-server running
$ sudo virsh console ocp-bastion-server
Connected to domain 'ocp-bastion-server'
Escape character is ^] (Ctrl + ])
fedora login:
Enable domain autostart:
sudo virsh autostart ocp-bastion-server
Step 3: Install Ansible and Configure variables on Bastion / Helper node
Install the Ansible configuration management tool on the bastion machine:
# Fedora
sudo dnf -y install git ansible vim wget curl bash-completion tree tar libselinux-python3
# CentOS 8 / Rocky Linux 8
sudo yum -y install epel-release
sudo yum -y install git ansible vim wget curl bash-completion tree tar libselinux-python3
# CentOS 7
sudo yum -y install epel-release
sudo yum -y install git ansible vim wget curl bash-completion tree tar libselinux-python
We have a GitHub repository with all the tasks and templates used in this guide. Clone the project to the ~/ocp4_ansible directory:
cd ~/
git clone https://github.com/jmutai/ocp4_ansible.git
cd ~/ocp4_ansible
You can view the directory structure using tree command:
$ tree
.
├── ansible.cfg
├── files
│ └── set-dns-serial.sh
├── handlers
│ └── main.yml
├── inventory
├── LICENSE
├── README.md
├── tasks
│ ├── configure_bind_dns.yml
│ ├── configure_dhcpd.yml
│ ├── configure_haproxy_lb.yml
│ └── configure_tftp_pxe.yml
├── templates
│ ├── default.j2
│ ├── dhcpd.conf.j2
│ ├── dhcpd-uefi.conf.j2
│ ├── haproxy.cfg.j2
│ ├── named.conf.j2
│ ├── pxe-bootstrap.j2
│ ├── pxe-master.j2
│ ├── pxe-worker.j2
│ ├── reverse.j2
│ └── zonefile.j2
└── vars
└── main.yml
5 directories, 21 files
Edit the ansible configuration file and modify it to suit your use:
$ vim ansible.cfg
[defaults]
inventory = inventory
command_warnings = False
filter_plugins = filter_plugins
host_key_checking = False
deprecation_warnings=False
retry_files = false
When not executing ansible as the root user you can add a privilege_escalation section.
[privilege_escalation]
become = true
become_method = sudo
become_user = root
become_ask_pass = false
If running on the localhost the inventory can be set as below:
$ vim inventory
[vmhost]
localhost ansible_connection=local
These are the service handlers that will be referenced by the bastion setup tasks:
$ vim handlers/main.yml
---
- name: restart tftp
  service:
    name: tftp
    state: restarted

- name: restart bind
  service:
    name: named
    state: restarted

- name: restart haproxy
  service:
    name: haproxy
    state: restarted

- name: restart dhcpd
  service:
    name: dhcpd
    state: restarted

- name: restart httpd
  service:
    name: httpd
    state: restarted
Modify the default variables file inside vars folder:
vim vars/main.yml
Define all the required variables correctly; wrong values will cause issues during the OpenShift installation.
---
ppc64le: false
uefi: false
disk: vda # disk where you are installing RHCOS on the masters/workers
helper:
  name: "bastion" # hostname for your helper node
  ipaddr: "192.168.100.254" # current IP address of the helper
  networkifacename: "enp1s0" # interface of the helper node, ACTUAL name of the interface, NOT the NetworkManager name
dns:
  domain: "example.com" # DNS server domain. Should match baseDomain inside the install-config.yaml file.
  clusterid: "ocp4" # needs to match what you set for metadata.name inside the install-config.yaml file
  forwarder1: "8.8.8.8" # DNS forwarder
  forwarder2: "1.1.1.1" # second DNS forwarder
  lb_ipaddr: "{{ helper.ipaddr }}" # Load balancer IP; optional, defaults to helper.ipaddr
dhcp:
  router: "192.168.100.1" # default gateway of the network assigned to the masters/workers
  bcast: "192.168.100.255" # broadcast address for your network
  netmask: "255.255.255.0" # netmask that gets assigned to your masters/workers
  poolstart: "192.168.100.10" # first address in your dhcp address pool
  poolend: "192.168.100.50" # last address in your dhcp address pool
  ipid: "192.168.100.0" # IP network id for the range
  netmaskid: "255.255.255.0" # network mask id for the range
  ntp: "time.google.com" # NTP server address
  dns: "" # domain name server; optional, defaults to helper.ipaddr
bootstrap:
  name: "bootstrap" # hostname (WITHOUT the fqdn) of the bootstrap node
  ipaddr: "192.168.100.10" # IP address that you want set for the bootstrap node
  macaddr: "52:54:00:a4:db:5f" # MAC address for the dhcp reservation
masters:
  - name: "master01" # hostname (WITHOUT the fqdn) of the master node (x of 3)
    ipaddr: "192.168.100.11" # IP address (x of 3) that you want set
    macaddr: "52:54:00:8b:a1:17" # MAC address for the dhcp reservation
  - name: "master02"
    ipaddr: "192.168.100.12"
    macaddr: "52:54:00:ea:8b:9d"
  - name: "master03"
    ipaddr: "192.168.100.13"
    macaddr: "52:54:00:f8:87:c7"
workers:
  - name: "worker01" # hostname (WITHOUT the fqdn) of the worker node
    ipaddr: "192.168.100.21" # IP address that you want set (1st node)
    macaddr: "52:54:00:31:4a:39" # MAC address for the dhcp reservation (1st node)
  - name: "worker02"
    ipaddr: "192.168.100.22"
    macaddr: "52:54:00:6a:37:32"
  - name: "worker03"
    ipaddr: "192.168.100.23"
    macaddr: "52:54:00:95:d4:ed"
Generating unique MAC addresses for the bootstrap, master and worker nodes
You can generate all the required MAC addresses using the command below:
date +%s | md5sum | head -c 6 | sed -e 's/\([0-9A-Fa-f]\{2\}\)/\1:/g' -e 's/\(.*\):$/\1/' | sed -e 's/^/52:54:00:/'
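The command above prints a single address; a small loop prints one per node in one go (a sketch, any generator that keeps the QEMU/KVM 52:54:00 prefix works):
for i in $(seq 1 7); do
    # Three random octets appended to the QEMU/KVM locally administered prefix
    printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
done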
Step 4: Install and Configure DHCP server on Bastion / Helper node
Install dhcp-server rpm package using dnf or yum package manager.
sudo yum -y install dhcp-server
Enable the dhcpd service to start on system boot:
$ sudo systemctl enable dhcpd
Created symlink /etc/systemd/system/multi-user.target.wants/dhcpd.service → /usr/lib/systemd/system/dhcpd.service.
Backup the current dhcpd configuration file. If the server is not new you can modify the existing configuration instead.
sudo mv /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.conf.bak
The task to configure the DHCP server on the bastion server:
$ vim tasks/configure_dhcpd.yml
---
# Setup OCP4 DHCP Server on Helper Node
- hosts: all
  vars_files:
    - ../vars/main.yml
  handlers:
    - import_tasks: ../handlers/main.yml
  tasks:
    - name: Write out dhcp file
      template:
        src: ../templates/dhcpd.conf.j2
        dest: /etc/dhcp/dhcpd.conf
      notify:
        - restart dhcpd
      when: not uefi

    - name: Write out dhcp file (UEFI)
      template:
        src: ../templates/dhcpd-uefi.conf.j2
        dest: /etc/dhcp/dhcpd.conf
      notify:
        - restart dhcpd
      when: uefi
Configure the DHCP server using ansible with the variables and templates defined earlier:
$ ansible-playbook tasks/configure_dhcpd.yml
PLAY [all] *******************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [localhost]
TASK [Write out dhcp file] ***************************************************************************************************************************************
changed: [localhost]
TASK [Write out dhcp file (UEFI)] ********************************************************************************************************************************
skipping: [localhost]
RUNNING HANDLER [restart dhcpd] **********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *******************************************************************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Confirm that dhcpd service is in running state:
$ systemctl status dhcpd
● dhcpd.service - DHCPv4 Server Daemon
Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-08-17 19:35:06 EDT; 2min 42s ago
Docs: man:dhcpd(8)
man:dhcpd.conf(5)
Main PID: 24958 (dhcpd)
Status: "Dispatching packets..."
Tasks: 1 (limit: 4668)
Memory: 9.7M
CPU: 17ms
CGroup: /system.slice/dhcpd.service
└─24958 /usr/sbin/dhcpd -f -cf /etc/dhcp/dhcpd.conf -user dhcpd -group dhcpd --no-pid
...
You can also check the generated configuration file:
$ cat /etc/dhcp/dhcpd.conf
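The rendered file should look roughly like the sketch below, built from the dhcp variables defined earlier (the exact layout depends on the dhcpd.conf.j2 template; next-server and filename point PXE clients at the bastion's TFTP service):
authoritative;
subnet 192.168.100.0 netmask 255.255.255.0 {
    range 192.168.100.10 192.168.100.50;
    option routers 192.168.100.1;
    option broadcast-address 192.168.100.255;
    option domain-name-servers 192.168.100.254;
    next-server 192.168.100.254;
    filename "pxelinux.0";
    # One reservation per node so each always gets its expected address
    host bootstrap { hardware ethernet 52:54:00:a4:db:5f; fixed-address 192.168.100.10; }
    host master01 { hardware ethernet 52:54:00:8b:a1:17; fixed-address 192.168.100.11; }
    # ...and likewise for master02/03 and worker01-03 from vars/main.yml
}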
Step 5: Configure OCP Zone on Bind DNS Server on Bastion / Helper node
We can now install the Bind DNS server packages required to run OpenShift Container Platform on KVM.
sudo yum -y install bind bind-utils
Enable the service to start at system boot up
sudo systemctl enable named
Install the DNS serial number generator script:
$ sudo vim /usr/local/bin/set-dns-serial.sh
#!/bin/bash
# Increment the DNS zone serial number, or seed it from today's date on first run.
dnsserialfile=/usr/local/src/dnsserial-DO_NOT_DELETE_BEFORE_ASKING_CHRISTIAN.txt
zonefile=/var/named/zonefile.db
if [ -f $zonefile ] ; then
    # Zone file exists: read the current serial from it and increment by one
    echo $[ $(grep serial $zonefile | tr -d "\t"" ""\n" | cut -d';' -f 1) + 1 ] | tee $dnsserialfile
else
    if [ ! -f $dnsserialfile ] || [ ! -s $dnsserialfile ]; then
        # First run: seed the serial with YYYYMMDD00
        echo $(date +%Y%m%d00) | tee $dnsserialfile
    else
        echo $[ $(< $dnsserialfile) + 1 ] | tee $dnsserialfile
    fi
fi
##
##-30-
Make the script executable:
sudo chmod a+x /usr/local/bin/set-dns-serial.sh
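Running the script prints the serial that will be stamped into the zone file; on a first run it is seeded from today's date, for example:
$ sudo /usr/local/bin/set-dns-serial.sh
2021081700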
This is the DNS Configuration task to be used:
$ vim tasks/configure_bind_dns.yml
---
# Configure OCP4 DNS Server on Helper Node
- hosts: all
  vars_files:
    - ../vars/main.yml
  handlers:
    - import_tasks: ../handlers/main.yml
  tasks:
    - name: Setup named configuration files
      block:
        - name: Write out named file
          template:
            src: ../templates/named.conf.j2
            dest: /etc/named.conf
          notify:
            - restart bind

        - name: Set zone serial number
          shell: "/usr/local/bin/set-dns-serial.sh"
          register: dymanicserialnumber

        - name: Setting serial number as a fact
          set_fact:
            serialnumber: "{{ dymanicserialnumber.stdout }}"

        - name: Write out "{{ dns.domain | lower }}" zone file
          template:
            src: ../templates/zonefile.j2
            dest: /var/named/zonefile.db
            mode: '0644'
          notify:
            - restart bind

        - name: Write out reverse zone file
          template:
            src: ../templates/reverse.j2
            dest: /var/named/reverse.db
            mode: '0644'
          notify:
            - restart bind
Run ansible playbook to configure bind dns server for OpenShift deployment.
$ ansible-playbook tasks/configure_bind_dns.yml
PLAY [all] *******************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [localhost]
TASK [Write out named file] **************************************************************************************************************************************
changed: [localhost]
TASK [Set zone serial number] ************************************************************************************************************************************
changed: [localhost]
TASK [Setting serial number as a fact] ***************************************************************************************************************************
changed: [localhost]
TASK [Write out "example.com" zone file] **********************************************************************************************************************
changed: [localhost]
TASK [Write out reverse zone file] *******************************************************************************************************************************
changed: [localhost]
RUNNING HANDLER [restart bind] ***********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *******************************************************************************************************************************************************
localhost : ok=7 changed=6 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The forward DNS zone file is created at /var/named/zonefile.db and the reverse lookup zone file at /var/named/reverse.db.
Check that the service is running:
$ systemctl status named
● named.service - Berkeley Internet Name Domain (DNS)
Loaded: loaded (/usr/lib/systemd/system/named.service; disabled; vendor preset: disabled)
Active: active (running) since Wed 2021-08-11 16:19:38 EDT; 4s ago
Process: 1340 ExecStartPre=/bin/bash -c if [ ! "$DISABLE_ZONE_CHECKING" == "yes" ]; then /usr/sbin/named-checkconf -z "$NAMEDCONF"; else echo "Checking of zo>
Process: 1342 ExecStart=/usr/sbin/named -u named -c $NAMEDCONF $OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 1344 (named)
Tasks: 6 (limit: 4668)
Memory: 26.3M
CPU: 53ms
CGroup: /system.slice/named.service
└─1344 /usr/sbin/named -u named -c /etc/named.conf
Aug 11 16:19:38 fedora named[1344]: network unreachable resolving './NS/IN': 2001:500:1::53#53
Aug 11 16:19:38 fedora named[1344]: network unreachable resolving './NS/IN': 2001:500:200::b#53
Aug 11 16:19:38 fedora named[1344]: network unreachable resolving './NS/IN': 2001:500:9f::42#53
Aug 11 16:19:38 fedora named[1344]: network unreachable resolving './NS/IN': 2001:7fe::53#53
Aug 11 16:19:38 fedora named[1344]: network unreachable resolving './NS/IN': 2001:503:c27::2:30#53
Aug 11 16:19:38 fedora named[1344]: zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
Aug 11 16:19:38 fedora named[1344]: all zones loaded
Aug 11 16:19:38 fedora named[1344]: managed-keys-zone: Initializing automatic trust anchor management for zone '.'; DNSKEY ID 20326 is now trusted, waiving the n>
Aug 11 16:19:38 fedora named[1344]: running
Aug 11 16:19:38 fedora systemd[1]: Started Berkeley Internet Name Domain (DNS).
To test our DNS server we just execute:
$ dig @127.0.0.1 -t srv _etcd-server-ssl._tcp.ocp4.example.com
; <<>> DiG 9.16.19-RH <<>> @127.0.0.1 -t srv _etcd-server-ssl._tcp.ocp4.example.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
...
Next, set up the TFTP / PXE boot services that the cluster nodes will boot from, using the task file provided in the cloned repository (install the tftp-server and syslinux packages first if they are not present):
$ ansible-playbook tasks/configure_tftp_pxe.yml
...
TASK [Set the master specific tftp files] ************************************************************************************************************************
changed: [localhost] => (item={'name': 'master01', 'ipaddr': '192.168.100.11', 'macaddr': '52:54:00:8b:a1:17'})
changed: [localhost] => (item={'name': 'master02', 'ipaddr': '192.168.100.12', 'macaddr': '52:54:00:ea:8b:9d'})
changed: [localhost] => (item={'name': 'master03', 'ipaddr': '192.168.100.13', 'macaddr': '52:54:00:f8:87:c7'})
TASK [Set the worker specific tftp files] ************************************************************************************************************************
changed: [localhost] => (item={'name': 'worker01', 'ipaddr': '192.168.100.21', 'macaddr': '52:54:00:31:4a:39'})
changed: [localhost] => (item={'name': 'worker02', 'ipaddr': '192.168.100.22', 'macaddr': '52:54:00:6a:37:32'})
changed: [localhost] => (item={'name': 'worker03', 'ipaddr': '192.168.100.23', 'macaddr': '52:54:00:95:d4:ed'})
RUNNING HANDLER [restart tftp] ***********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *******************************************************************************************************************************************************
localhost : ok=5 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Headless environment considerations
Since we are working in a headless environment (a minimal KVM setup without a graphical interface), we need to ensure that each CoreOS VM booted over PXE automatically picks the correct image and ignition file for its OS installation.
PXE Boot files are created inside the directory /var/lib/tftpboot/pxelinux.cfg
NOTE: Each file created must be prefixed with 01- followed by the node's MAC address, with the colons replaced by dashes. See the example for the bootstrap node below.
Bootstrap node
Mac Address:
52:54:00:a4:db:5f
The file created will be
cat /var/lib/tftpboot/pxelinux.cfg/01-52-54-00-a4-db-5f
With contents:
default menu.c32
prompt 1
timeout 9
ONTIMEOUT 1
menu title ######## PXE Boot Menu ########
label 1
menu label ^1) Install Bootstrap Node
menu default
kernel rhcos/kernel
append initrd=rhcos/initramfs.img nomodeset rd.neednet=1 console=tty0 console=ttyS0 ip=dhcp coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url=http://192.168.100.254:8080/rhcos/rootfs.img coreos.inst.ignition_url=http://192.168.100.254:8080/ignition/bootstrap.ign
Master nodes
The file for each master has contents similar to this:
default menu.c32
prompt 1
timeout 9
ONTIMEOUT 1
menu title ######## PXE Boot Menu ########
label 1
menu label ^1) Install Master Node
menu default
kernel rhcos/kernel
append initrd=rhcos/initramfs.img nomodeset rd.neednet=1 console=tty0 console=ttyS0 ip=dhcp coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url=http://192.168.100.254:8080/rhcos/rootfs.img coreos.inst.ignition_url=http://192.168.100.254:8080/ignition/master.ign
Worker nodes
The file for each worker node looks similar to this:
default menu.c32
prompt 1
timeout 9
ONTIMEOUT 1
menu title ######## PXE Boot Menu ########
label 1
menu label ^1) Install Worker Node
menu default
kernel rhcos/kernel
append initrd=rhcos/initramfs.img nomodeset rd.neednet=1 console=tty0 console=ttyS0 ip=dhcp coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url=http://192.168.100.254:8080/rhcos/rootfs.img coreos.inst.ignition_url=http://192.168.100.254:8080/ignition/worker.ign
You can list all the files created using the following command:
$ ls -1 /var/lib/tftpboot/pxelinux.cfg
01-52-54-00-31-4a-39
01-52-54-00-6a-37-32
01-52-54-00-8b-a1-17
01-52-54-00-95-d4-ed
01-52-54-00-a4-db-5f
01-52-54-00-ea-8b-9d
01-52-54-00-f8-87-c7
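Once the Apache web server is serving the RHCOS images and ignition files (prepared in the later steps), you can confirm that the URLs referenced in the PXE entries respond, for example:
curl -I http://192.168.100.254:8080/rhcos/rootfs.img
curl -I http://192.168.100.254:8080/ignition/bootstrap.ign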
Step 6: Configure HAProxy as Load balancer on Bastion / Helper node
In this setup we’re using a software load balancer solution – HAProxy. A production OpenShift Container Platform deployment requires a highly available load balancer solution, whether hardware or software.
Install the package
sudo yum install -y haproxy
Set the SELinux boolean that allows HAProxy to connect to any port:
sudo setsebool -P haproxy_connect_any 1
Backup the default HAProxy configuration
sudo mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.default
Here is HAProxy configuration ansible task:
$ vim tasks/configure_haproxy_lb.yml
---
# Configure OCP4 HAProxy Load balancer on Helper Node
- hosts: all
  vars_files:
    - ../vars/main.yml
  tasks:
    - name: Write out haproxy config file
      template:
        src: ../templates/haproxy.cfg.j2
        dest: /etc/haproxy/haproxy.cfg
      notify:
        - restart haproxy
  handlers:
    - name: restart haproxy
      ansible.builtin.service:
        name: haproxy
        state: restarted
Run ansible-playbook with the created task to configure the HAProxy load balancer for OpenShift:
$ ansible-playbook tasks/configure_haproxy_lb.yml
PLAY [all] *******************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************
ok: [localhost]
TASK [Write out haproxy config file] *****************************************************************************************************************************
changed: [localhost]
RUNNING HANDLER [restart haproxy] ********************************************************************************************************************************
changed: [localhost]
PLAY RECAP *******************************************************************************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
The generated configuration is placed in /etc/haproxy/haproxy.cfg. You can open the file to review or edit it:
sudo vim /etc/haproxy/haproxy.cfg
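For reference, a minimal TCP pass-through configuration of the kind the haproxy.cfg.j2 template renders is sketched below for the Kubernetes API on port 6443 (backend names and the balance option here are illustrative; the rendered file also carries frontends/backends for ports 80, 443 and 22623):
frontend openshift-api-server
    bind *:6443
    mode tcp
    default_backend openshift-api-server

backend openshift-api-server
    mode tcp
    balance source
    # The bootstrap entry is removed once the control plane is up
    server bootstrap 192.168.100.10:6443 check
    server master01 192.168.100.11:6443 check
    server master02 192.168.100.12:6443 check
    server master03 192.168.100.13:6443 check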
Configure SELinux to allow HAProxy to use the custom ports configured:
sudo semanage port -a -t http_port_t -p tcp 6443
sudo semanage port -a -t http_port_t -p tcp 22623
sudo semanage port -a -t http_port_t -p tcp 32700
Open ports on the firewall
sudo firewall-cmd --add-service=http --add-service=https --permanent
sudo firewall-cmd --add-port=6443/tcp --add-port=22623/tcp --permanent
sudo firewall-cmd --reload
Step 7: Install OpenShift installer and CLI binaries on Bastion / Helper node
Download and install the OpenShift installer and client
OpenShift Client binary:
# Linux
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz
tar xvf openshift-client-linux.tar.gz
sudo mv oc kubectl /usr/local/bin
rm -f README.md LICENSE openshift-client-linux.tar.gz
# macOS
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-mac.tar.gz
tar xvf openshift-client-mac.tar.gz
sudo mv oc kubectl /usr/local/bin
rm -f README.md LICENSE openshift-client-mac.tar.gz
OpenShift installer binary:
# Linux
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-linux.tar.gz
tar xvf openshift-install-linux.tar.gz
sudo mv openshift-install /usr/local/bin
rm -f README.md LICENSE openshift-install-linux.tar.gz
# macOS
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-mac.tar.gz
tar xvf openshift-install-mac.tar.gz
sudo mv openshift-install /usr/local/bin
rm -f README.md LICENSE openshift-install-mac.tar.gz
Check that you can run the binaries:
$ openshift-install version
openshift-install 4.10.18
built from commit 25b4d09c94dc4bdc0c79d8668369aeb4026b52a4
release image quay.io/openshift-release-dev/ocp-release@sha256:195de2a5ef3af1083620a62a45ea61ac1233ffa27bbce7b30609a69775aeca19
release architecture amd64
$ oc version
Client Version: 4.10.18
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.0", GitCommit:"878f5a8fe0d04ea70c5e5de11fa9cc7a49afb86e", GitTreeState:"clean", BuildDate:"2022-06-01T00:19:52Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
Create SSH Key Pairs
Now we need to create an SSH key pair that we will later use to access the CoreOS nodes:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
Step 8: Generate ignition files on Bastion / Helper node
We need to create the ignition files used for the installation of the CoreOS machines.
Download Pull Secret
We can store our pull secret in ~/.openshift directory:
mkdir ~/.openshift
Visit cloud.redhat.com and download your pull secret, then save it as ~/.openshift/pull-secret:
$ vim ~/.openshift/pull-secret
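The pull secret must be valid JSON; if jq is installed you can sanity-check the file before using it:
jq . ~/.openshift/pull-secret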
Create ocp4 directory
mkdir -p ~/ocp4
cd ~/
We can now create the OpenShift installation configuration file install-config-base.yaml, as sketched below.
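The sketch below is a typical bare-metal (platform "none") install-config for this lab, not the article's literal file: baseDomain must match dns.domain and metadata.name must match dns.clusterid from vars/main.yml, worker replicas is 0 because the workers PXE-boot and join on their own, and the clusterNetwork/serviceNetwork CIDRs are the upstream defaults. The pull secret and SSH public key are read from the files created earlier:
cat <<EOF > ~/ocp4/install-config-base.yaml
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '$(< ~/.openshift/pull-secret)'
sshKey: '$(< ~/.ssh/id_rsa.pub)'
EOF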