#Proxmox VE kernel
techdirectarchive · 4 months ago
Create a bootable USB on Mac: Proxmox VE Setup
nksistemas · 2 years ago
Proxmox VE 8.1 Introduces Secure Boot Support
Proxmox VE 8.1 debuts on a Debian 12.2 base with Linux kernel 6.5, QEMU 8.1.2 and LXC 5.0.2, improving virtual environments. Proxmox VE (Virtual Environment) is an open-source virtualization platform for managing virtual machines and containers. It offers a complete solution for both virtualized data centers and cloud infrastructures. Built on Debian, it integrates…
computingpostcom · 3 years ago
The Network File System (NFS) is a well-proven, widely supported and standardized network protocol used to share files between separate computer systems. The Network Information Service (NIS) is commonly used to provide centralized user management in the network. When NFS is combined with NIS, you can apply file and directory permissions for access control across the network. The default NFS configuration completely trusts the network: any machine connected to a trusted network is able to access any files that the server makes available.

In Proxmox Virtualization Environment you can use local directories or locally mounted shares for storage. A directory is file-level storage that can hold content types such as containers, virtual disk images, ISO images, templates, or backup files. In this post we discuss how to configure an NFS share on Proxmox VE for ISO images. The same process applies to any other storage purpose, such as storage of virtual disk images and templates.

In Proxmox VE, storage configurations are located in the file /etc/pve/storage.cfg. You can list the contents of the /var/lib/vz/ directory:

$ ls /var/lib/vz/
dump images template

Within the template directory we can see the iso and cache sub-directories:

$ ls /var/lib/vz/template/
cache iso

The table below shows the predefined directory layout used to store different content types in different sub-directories:

Content type          Subdir
VM images             images/<VMID>/
ISO images            template/iso/
Container templates   template/cache/
Backup files          dump/
Snippets              snippets/

Configure NFS server share

Log in to the server that will act as the NFS server and configure an export for ISO contents. If you are using a ready-made solution with an NFS feature, you can skip these steps. Install the NFS server package on your Linux system:

### RHEL based systems ###
sudo yum -y install nfs-utils
sudo systemctl enable --now rpcbind nfs-server
sudo firewall-cmd --add-service=nfs --permanent
sudo firewall-cmd --reload
# If using NFSv3, also allow the following
sudo firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
sudo firewall-cmd --reload

### Debian based systems ###
sudo apt -y install nfs-kernel-server

Let's now configure the NFS export by editing the file below:

$ sudo vim /etc/exports
/nfs/isos *(rw,no_root_squash,no_subtree_check)

In this example we are exporting /nfs/isos as the NFS share for ISO images on Proxmox VE. Confirm it works after the changes by re-exporting the shares:

$ sudo exportfs -rrv
exporting *:/nfs/isos

Mount the NFS ISO share on the Proxmox VE server

Install the NFS utility packages on your Debian / Ubuntu system:

sudo apt -y install nfs-common

Our NFS server setup is:

NFS server IP address: 172.20.30.3
ISO export path on the NFS server: /nfs/isos

Ensure you don't have any data inside the isos directory:

ls /var/lib/vz/template/iso/

On Proxmox VE, which acts as the NFS client, execute the following to display RPC information of the remote NFS server.
$ rpcinfo -p 172.20.30.3
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100024    1   udp  46068  status
    100024    1   tcp  43599  status
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049
    100003    3   udp   2049  nfs
    100227    3   udp   2049
    100021    1   udp  44453  nlockmgr
    100021    3   udp  44453  nlockmgr
    100021    4   udp  44453  nlockmgr
    100021    1   tcp  35393  nlockmgr
    100021    3   tcp  35393  nlockmgr
    100021    4   tcp  35393  nlockmgr
Run showmount to display all active exports on the NFS server:

$ showmount -e 172.20.30.3
Export list for 172.20.30.3:
/nfs/isos *

Option 1: Configure the mount using Proxmox pvesm (recommended)

pvesm is the powerful Proxmox VE Storage Manager command line tool. Use it to scan for NFS shares on the server we just configured:

$ sudo pvesm scan nfs 172.20.30.3
/nfs/isos *

Run the command below to configure NFS storage for ISO images in our Proxmox environment:

sudo pvesm add nfs NFS-iso --server 172.20.30.3 --path /var/lib/vz/template/iso/ --export /nfs/isos --content iso

Where:

172.20.30.3 is the IP address of the NFS server
/nfs/isos is the path to the iso folder on the NFS server (NFS export path)
/var/lib/vz/template/iso/ is the path on the Proxmox server where the NFS share is mounted

Listing the contents of /etc/pve/storage.cfg after the command execution:

$ cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

nfs: NFS-iso
        export /nfs/isos
        path /var/lib/vz/template/iso/
        server 172.20.30.3
        content iso

We can confirm new lines were added to the file. With the df command you can check whether mounting was successful:

$ df -hT /var/lib/vz/template/iso
Filesystem             Type  Size  Used Avail Use% Mounted on
172.20.30.3:/nfs/isos  nfs4  400G   39G  341G  11% /var/lib/vz/template/iso

Option 2: Configure the mount using /etc/fstab

You can also use the mount command for runtime testing, to verify that the Proxmox server can access the NFS server and the exported directory:

sudo mount -t nfs 172.20.30.3:/nfs/isos /var/lib/vz/template/iso/

To persist the configuration, use the /etc/fstab file:

$ sudo vim /etc/fstab
# Add NFS ISO share mount
172.20.30.3:/nfs/isos /var/lib/vz/template/iso nfs defaults 0 0

Unmount before testing:

sudo umount /var/lib/vz/template/iso

Validate that mounting succeeds:

sudo mount /var/lib/vz/template/iso

Check with the df command:

$ df -hT /var/lib/vz/template/iso/
Filesystem             Type  Size  Used Avail Use% Mounted on
172.20.30.3:/nfs/isos  nfs4  400G   20G  360G   6% /var/lib/vz/template/iso

Log in to the Proxmox web dashboard and check the status of your mount. We can see a list of images available on the NFS share. From here, VM installation with the ISOs can begin.
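As a quick follow-up (not part of the original post), a minimal sketch of how the new storage can be verified and managed from the Proxmox shell, assuming it was added with the name NFS-iso as above:

# Show status (type, total/used space, active flag) of all configured storages
sudo pvesm status

# List the content of the NFS-iso storage (the uploaded ISO images)
sudo pvesm list NFS-iso

# Remove the storage definition again if it is no longer needed
# (this only removes the Proxmox configuration, not the data on the NFS server)
sudo pvesm remove NFS-iso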
bulletinwave · 4 years ago
Virtualization platform Proxmox VE 6.4 changes to OpenZFS 2.0
The new Proxmox VE 6.4 is based on Debian GNU/Linux 10.9 "Buster" with Linux kernel 5.4 (LTS) or, optionally, 5.11. The HA-capable virtualization solution manages both virtual machines and Linux containers via a clean web GUI. Virtual machines are provided by a combination of QEMU 5.2 and KVM, and VMs can be pinned to specific QEMU machine versions. Linux containers, provided by…
saggiosguardo · 6 years ago
Configuring a NUC with Proxmox and Hassio (also transferring the configuration from a Raspberry Pi)
It has been almost a year since I wrote the guide to installing Home Assistant on a Raspberry Pi 3. In that time my configuration (which the curious can browse on GitHub) has grown richer and richer with packages, scripts and automations, and little by little I felt the urge to move my Home Assistant configuration to a more capable device. I chose the cheapest Intel NUC with a Celeron processor I could find on Amazon, added 8 GB of DDR3L RAM and a 120 GB Crucial SSD, the kind that is often flagged on the @SaggeOfferte Telegram channel these days.

As you can imagine, a setup like this just for Home Assistant would be wasted, which is why the key element of our new installation will be Proxmox Virtual Environment. It is a Debian-based Linux distribution with a modified Ubuntu LTS kernel that allows you to deploy and manage virtual machines and containers. Proxmox VE is managed simply through a web interface and also provides a shell for more advanced configuration from the command line. Proxmox will let us run many containers and virtual machines on the same box, starting with the VM for Home Assistant.
Installing Proxmox

Once the Intel NUC is assembled (you only need to fit the RAM and the SSD), we are ready to install the operating system. First, download the ISO of the latest Proxmox VE version from the official website and flash it to a USB stick with the free tool Balena Etcher. Then connect a monitor and a keyboard to the NUC (only temporarily), boot from the USB drive you just created, run the installer and configure the various parameters as shown in the images that follow.

Set the root password and your email address (it will be used to receive any alerts), and above all pay close attention to the network settings: choose a static IP for the NUC (taking care to also reserve it on your modem/router to avoid conflicts) and set the gateway and DNS server appropriately, pointing them to the router's IP (in my case 192.168.1.254).

Once the installation is complete we can disconnect the monitor and keyboard, since from now on we will connect through the web interface from a browser at https://IP_of_PROXMOX:8006. The next step is to download the Debian Netinst ISO on another computer and upload it to the Proxmox disk through the web interface: select the local (nuc) disk in the left column and you will find the Upload button in the right one. At this point we are ready to create the first virtual machine, which will host Home Assistant.

Click Create VM at the top right (the blue button with the monitor icon), give the VM a name and a number, set it to boot from the Debian ISO (uploaded earlier), and choose the size of the virtual disk (I suggest 25-32 GB), the CPU and the RAM (at least 2 GB). All of these parameters can be changed later, so do not agonize over the initial values. Leave everything else unchanged and continue until the VM is created and started.

If you chose to start the VM at the end of the wizard, it will be up shortly afterwards (the log window at the bottom will show the progress); otherwise press the Start button at the top right. Then open the Console (again at the top right, or from the options in the middle column). If everything went well, the console will show the Debian installer. Proceed, setting the root password and making sure not to install the graphical environment or the other components offered at the end of the guided procedure.
We are ready to install Docker and Hass.io in the Debian VM

Now that we have the Debian VM, we just need to connect to it through the console, which we find by selecting the VM in the left column. If you prefer, you can also connect via SSH once you know the IP address assigned to the VM.

Once logged in, run the following commands one at a time to install Docker (docker-ce) and its prerequisites:
apt-get install jq curl dbus socat bash avahi-daemon
apt-get install -y apt-transport-https ca-certificates wget software-properties-common
wget https://download.docker.com/linux/debian/gpg
sudo apt-key add gpg
echo "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | sudo tee -a /etc/apt/sources.list.d/docker.list
apt-get update
apt-get -y install docker-ce
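Before moving on, it may be worth checking that Docker is actually up and running. A small optional check, not part of the original guide:

# Make sure the Docker daemon starts now and on every boot
systemctl enable --now docker

# Quick smoke test: print the version and run a throwaway container
docker --version
docker run --rm hello-world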
and finally the last command to install Hass.io (that is, Home Assistant):
curl -sL https://raw.githubusercontent.com/home-assistant/hassio-build/master/install/hassio_install | bash -s -- -m intel-nuc
Connecting any USB devices

Wait for the installation to finish; if you have a Z-Wave or ConBee stick, it must be identified and assigned to the VM from the Hardware menu, as shown in the figure. I also recommend passing through the USB device with ID 8087:0aa7 (it may differ on your hardware): it assigns the NUC's Bluetooth to Home Assistant and may come in handy later. Once this is done, shut down the VM (you can use the shutdown button at the top) and then power it back on with the Start button.

We are ready to restore our Home Assistant backup

That's it: Home Assistant is installed and we can reach it by entering http://IP-VM:8123 in the browser, then either restore a snapshot of our previous configuration or start a new one from scratch by following our guide.

In the first case, everything from the previous Raspberry Pi setup can be moved into the VM on the new NUC in a matter of minutes. Going down this road, you will notice that the smart home immediately becomes much more responsive. And since you now have a spare Raspberry Pi, why not use it to build a retrogaming console?

How to use the NUC's Bluetooth in the Home Assistant VM

One thing that drove me crazy when I moved to the NUC was using its internal Bluetooth, as I did on the Raspberry Pi, to connect to some BLE devices and read their data. It is actually quite simple once you know how, but I assure you I wasted a lot of time on it. If you already passed the Bluetooth device through in the previous step, run these commands inside the virtual machine's console:
apt-get install bluetooth
systemctl start bluetooth
Verify that Bluetooth works by starting a device scan with the command hcitool scan; if there are no errors, reboot the VM and you will be ready to use the NUC's Bluetooth with Home Assistant.

How to set up periodic VM backups to an external USB drive

The last thing I recommend is taking advantage of Proxmox's excellent backup system. Open the Proxmox shell and plug in a USB stick (I use a 64 GB one, but a smaller one is perfectly fine). Identify the USB drive with the lsblk command.

The result will be a screen like the one above, where my USB stick shows up as sdc. At this point we need to format it, naming it for example Backup and using the ext4 filesystem, with the first command below, being careful to point it at the correct pendrive partition. The second command mounts the freshly formatted disk:
mkfs.ext4 -L "Backup" /dev/sdc1
mount -a
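Note that mount -a only mounts filesystems listed in /etc/fstab, so the new Backup disk needs an entry there first. A minimal sketch, assuming the partition is /dev/sdc1 and a mount point of /mnt/backup (both the path and the options are illustrative, not from the original post):

# /etc/fstab: mount the backup pendrive by label at every boot
# (create the mount point first with: mkdir -p /mnt/backup)
LABEL=Backup  /mnt/backup  ext4  defaults,nofail  0  2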
Now let's go back to the web interface to continue configuring the automatic backups. Select Datacenter in the left column, then choose the backup disk, the days of the week on which the backup should run, the Snapshot mode, and tick the VMs to include.

The last thing left to do is tell Proxmox how many backups to keep on the USB disk, so that it never runs out of space. To do so, select Datacenter in the left column, then Storage, and finally the backup disk. Click Edit and, in the pop-up that opens, set the maximum number of backups to keep.

How to remove the 401 error message from Proxmox

Finally, here are three last commands you can run in the Proxmox terminal: the first disables the enterprise repository, removing the annoying 401 error that would otherwise show up in the tasks; the second enables the community (no-subscription) repository; and the last one removes the nagging pop-up shown at login that invites you to purchase a commercial subscription.
sed -i "s/^deb/\#deb/" /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" > /etc/apt/sources.list.d/pve-no-enterprise.list
sed -i.bak "s/data.status !== 'Active'/false/g" /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js && systemctl restart pveproxy.service
After running these three commands, you need to log out and log back in (first in another language and then back in Italian) before the pop-up disappears.

Author: Davide (@daxda)

Reviewers: Massimiliano Latella (@lamax) and Maurizio Natali (@simplemal)

The article Configurare un NUC con Proxmox ed Hassio (trasferendo anche la configurazione da un Raspberry) was originally published on SaggiaMente.

Related articles:

Guide: how to enable HomeKit on Home Assistant. One of the most interesting features of Home Assistant (installation guide) is...

Guide: accessing Home Assistant from outside via HTTPS with a valid SSL certificate. After a long wait due to many commitments, finally two...

What do we use home automation for? After a few years of private experimentation and months of sharing...
techdirectarchive · 1 year ago
Install Proxmox VE on Bare Metal [Beelink EQ12]
computingpostcom · 3 years ago
Proxmox is a powerful open-source, enterprise-grade virtualization platform built on Debian with a modified Linux kernel, used to deploy and manage multiple virtualized environments on bare-metal server hardware. The Proxmox Virtualization Environment exists to help businesses run and manage virtual machines, containers, and physical hypervisors. In this short article we look at how to prevent accidental virtual machine deletion on the Proxmox Virtualization Environment. We will use a setting in the web interface to protect a VM on Proxmox from accidental deletion until the flag is turned off:

VM > Options > Protection: (default = 0)

Prevent accidental VM deletion on Proxmox VE

Log in to your Proxmox VE web dashboard. Click on the virtual machine you would like to protect from accidental deletion and go to Options > Protection. Change the flag from No (0) to Yes (1). The setting should look like below after the change. From this point on, you will not be able to delete the virtual machine while the setting is enabled.

Let's demonstrate this with the virtual machine we just enabled delete protection on. We can input the VM ID and attempt the deletion. Here is the error message encountered. From the message we can conclude that our virtual machine is now protected from accidental deletion. Until you manually disable the setting, you will not be able to remove the virtual machine from Proxmox VE.
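The same protection flag can also be toggled from the command line with the qm tool. A small sketch, assuming a VM with ID 100 (substitute your own VM ID):

# Enable deletion protection for VM 100
qm set 100 --protection 1

# Check the current value in the VM configuration
qm config 100 | grep protection

# Disable it again when you really do want to remove the VM
qm set 100 --protection 0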
computingpostcom · 3 years ago
Virtualization is the foundation of cloud computing, as it allows more efficient use of physical computer hardware. In virtualization, a software layer creates an abstraction over the hardware elements of a computer (processors, memory, storage, network and more) so they can be divided into multiple virtual machines (VMs). Proxmox Virtual Environment (VE) is a virtualization solution based on the Debian Linux distribution with a modified LTS kernel. It enables you to deploy and manage both virtual machines and containers, with unified storage for better efficiency.

In this guide, we cover a step-by-step installation of the Proxmox VE 7 virtualization software on a Debian 11 (Bullseye) Linux system. It is recommended to deploy a Proxmox VE server from the bare-metal ISO installer, but it is sometimes inevitable to deploy it on a running instance of a Debian 11 (Bullseye) server.

Setup pre-requisites

For the installation of Proxmox VE 7 on Debian 11 (Bullseye), the following requirements need to be met:

A running instance of Debian Bullseye
A 64-bit processor with support for the Intel 64 or AMD64 CPU extensions
Access to the Debian server terminal as root or a standard user with sudo
Internet access on the server
Enough hardware resources to virtualize other operating systems

We have a guide that helps with the installation of the Debian 11 (Bullseye) operating system: Install Debian 11 Bullseye – Step by Step With Screenshots. With all the requirements satisfied, proceed with the installation of Proxmox VE 7 on Debian 11 (Bullseye) using the steps discussed in the next sections. For Proxmox VE 6, check out: How To Install Proxmox VE 6 on Debian 10 (Buster).

Step 1: Update Debian OS

Ensure your Debian 11 (Bullseye) operating system is upgraded:

sudo apt -y update && sudo apt -y upgrade

Once the upgrade process is complete, reboot the server if needed:

[ -f /var/run/reboot-required ] && sudo reboot -f

Step 2: Set the Proxmox server hostname

Let's set a hostname on the server:

sudo hostnamectl set-hostname proxmox7node01.example.com --static

Replace proxmox7node01.example.com with the correct hostname you are setting on your system. Get the IP address of the primary interface:

$ ip ad
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:ef:22:c5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.50/24 brd 192.168.200.255 scope global dynamic noprefixroute enp1s0
       valid_lft 1982sec preferred_lft 1982sec
    inet6 fe80::5054:ff:feef:22c5/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Update the /etc/hosts file with the hostname and matching IP address for local resolution without a DNS server:

$ sudo vim /etc/hosts
192.168.200.50 proxmox7node01.example.com proxmox7node01

Log out and back in to use the new hostname:

$ logout

Test that the configured hostname resolves correctly using the hostname command:

$ hostname --ip-address
192.168.200.50

Step 3: Add the Proxmox VE repository

The Proxmox server packages are distributed in an APT repository.
Add the repository to your Debian 11 system by running the command below:

echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve-install-repo.list

Then import the GPG package signing key:

wget http://download.proxmox.com/debian/proxmox-release-bullseye.gpg
sudo mv proxmox-release-bullseye.gpg /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
chmod +r /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg

Update your APT sources list:

$ sudo apt update
Hit:1 http://deb.debian.org/debian bullseye InRelease
Hit:2 http://deb.debian.org/debian bullseye-updates InRelease
Hit:3 http://security.debian.org/debian-security bullseye-security InRelease
Get:4 http://download.proxmox.com/debian/pve bullseye InRelease [3053 B]
Hit:5 http://deb.debian.org/debian bullseye-backports InRelease
Get:6 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 Packages [186 kB]
Fetched 189 kB in 0s (435 kB/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
1 package can be upgraded. Run 'apt list --upgradable' to see it.

You can see we have an upgrade available after adding the repo. Let's run the system upgrade command:

$ sudo apt full-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
  ifupdown
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 82.0 kB of archives.
After this operation, 2048 B disk space will be freed.
Do you want to continue? [Y/n] y
Get:1 http://download.proxmox.com/debian/pve bullseye/pve-no-subscription amd64 ifupdown amd64 0.8.36+pve1 [82.0 kB]
Fetched 82.0 kB in 0s (2558 kB/s)
Reading changelogs... Done
(Reading database ... 137105 files and directories currently installed.)
Preparing to unpack .../ifupdown_0.8.36+pve1_amd64.deb ...
Unpacking ifupdown (0.8.36+pve1) over (0.8.36) ...
Setting up ifupdown (0.8.36+pve1) ...
Processing triggers for man-db (2.9.4-2) ...

Adding the Proxmox VE Ceph repository: this is Proxmox VE's main Ceph repository and holds the Ceph packages for production use. You can also use this repository to update only the Ceph client.

echo "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" | sudo tee /etc/apt/sources.list.d/ceph.list

Step 4: Install Proxmox VE 7 packages

With the repository added, we can now install the Proxmox VE packages on the Debian 11 (Bullseye) system:

sudo apt update
sudo apt install proxmox-ve postfix open-iscsi

The installation time will depend on variables such as internet connectivity and hard disk write speed:

....
The following packages will be REMOVED: firmware-linux-free ifupdown The following NEW packages will be installed: attr bridge-utils ceph-common ceph-fuse cifs-utils corosync criu cstream curl dmeventd dtach ebtables faketime fonts-font-awesome fonts-glyphicons-halflings genisoimage glusterfs-client glusterfs-common gnutls-bin hdparm ibverbs-providers idn ifupdown2 ipset keyutils libaio1 libanyevent-http-perl libanyevent-perl libappconfig-perl libapt-pkg-perl libasync-interrupt-perl libauthen-pam-perl libbabeltrace1 libboost-context1.74.0 libboost-coroutine1.74.0 libboost-program-options1.74.0 libbytes-random-secure-perl libcephfs2 libcfg7 libcmap4 libcommon-sense-perl libconvert-asn1-perl libcorosync-common4 libcpg4 libcrypt-openssl-bignum-perl libcrypt-openssl-random-perl libcrypt-openssl-rsa-perl libcrypt-random-seed-perl libcrypt-ssleay-perl libdbi1 libdevel-cycle-perl libdevmapper-event1.02.1 libdigest-bubblebabble-perl libdigest-hmac-perl libev-perl libfaketime libfdt1 libfile-chdir-perl libfile-readbackwards-perl libfilesys-df-perl libgfapi0 libgfchangelog0 libgfrpc0 libgfxdr0 libglusterd0 libglusterfs0 libgnutls-dane0 libgnutlsxx28 libgoogle-perftools4 libgssapi-perl libguard-perl libibverbs1 libinih1 libio-multiplex-perl libipset13 libiscsi7 libisns0 libjemalloc2 libjs-bootstrap libjs-extjs libjs-jquery libjs-qrcodejs libjson-perl libjson-xs-perl libknet1 libleveldb1d liblinux-inotify2-perl liblvm2cmd2.03 libmath-random-isaac-perl libmath-random-isaac-xs-perl libmime-base32-perl libnet-dns-perl libnet-dns-sec-perl libnet-ip-perl libnet-ldap-perl libnet-libidn-perl libnet1 libnetaddr-ip-perl libnetfilter-log1 libnfsidmap2 libnozzle1 libnvpair3linux liboath0 libopeniscsiusr libopts25 libposix-strptime-perl libproxmox-acme-perl libproxmox-acme-plugins libproxmox-backup-qemu0 libpve-access-control libpve-apiclient-perl libpve-cluster-api-perl
libpve-cluster-perl libpve-common-perl libpve-guest-common-perl libpve-http-server-perl libpve-rs-perl libpve-storage-perl libpve-u2f-server-perl libqb100 libqrencode4 libquorum5 librados2 librados2-perl libradosstriper1 librbd1 librdmacm1 librrd8 librrds-perl libsdl1.2debian libsocket6-perl libspice-server1 libstatgrab10 libstring-shellquote-perl libtcmalloc-minimal4 libtemplate-perl libterm-readline-gnu-perl libtpms0 libtypes-serialiser-perl libu2f-server0 libunbound8 liburcu6 libusbredirparser1 libuuid-perl libuutil3linux libvotequorum8 libxml-libxml-perl libxml-namespacesupport-perl libxml-sax-base-perl libxml-sax-expat-perl libxml-sax-perl libyaml-libyaml-perl libzfs4linux libzpool5linux lvm2 lxc-pve lxcfs lzop nfs-common novnc-pve numactl open-iscsi postfix powermgmt-base proxmox-archive-keyring proxmox-backup-client proxmox-backup-file-restore proxmox-backup-restore-image proxmox-mini-journalreader proxmox-ve proxmox-widget-toolkit pve-cluster pve-container pve-docs pve-edk2-firmware pve-firewall pve-firmware pve-ha-manager pve-i18n pve-kernel-5.13 pve-kernel-5.13.19-2-pve pve-kernel-helper pve-lxc-syscalld pve-manager pve-qemu-kvm pve-xtermjs python3-ceph-argparse python3-cephfs python3-cffi-backend python3-cryptography python3-gpg python3-jwt python3-prettytable python3-protobuf python3-rados python3-rbd python3-samba python3-tdb qemu-server qrencode rpcbind rrdcached rsync samba-common samba-common-bin samba-dsdb-modules smartmontools smbclient socat spiceterm sqlite3 ssl-cert swtpm swtpm-libs swtpm-tools thin-provisioning-tools uidmap vncterm xfsprogs xsltproc zfs-zed zfsutils-linux zstd
0 upgraded, 223 newly installed, 2 to remove and 0 not upgraded.
Need to get 302 MB of archives.
After this operation, 1780 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

If you have a mail server in your network, you should configure postfix as a satellite system; your existing mail server will then be the 'relay host' that routes the emails sent by the Proxmox server to the end recipients. If you don't know what to enter here, choose local only. Confirm the system mail name and update it accordingly.

Confirm the installation completes without any errors:

......
Created symlink /etc/systemd/system/multi-user.target.wants/pvedaemon.service → /lib/systemd/system/pvedaemon.service.
Created symlink /etc/systemd/system/multi-user.target.wants/pveproxy.service → /lib/systemd/system/pveproxy.service.
Created symlink /etc/systemd/system/multi-user.target.wants/spiceproxy.service → /lib/systemd/system/spiceproxy.service.
Created symlink /etc/systemd/system/multi-user.target.wants/pvestatd.service → /lib/systemd/system/pvestatd.service.
Created symlink /etc/systemd/system/getty.target.wants/pvebanner.service → /lib/systemd/system/pvebanner.service.
Created symlink /etc/systemd/system/multi-user.target.wants/pvescheduler.service → /lib/systemd/system/pvescheduler.service.
Created symlink /etc/systemd/system/timers.target.wants/pve-daily-update.timer → /lib/systemd/system/pve-daily-update.timer.
Created symlink /etc/systemd/system/sysinit.target.wants/pvenetcommit.service → /lib/systemd/system/pvenetcommit.service.
Created symlink /etc/systemd/system/pve-manager.service → /lib/systemd/system/pve-guests.service.
Created symlink /etc/systemd/system/multi-user.target.wants/pve-guests.service → /lib/systemd/system/pve-guests.service.
Backing up lvm.conf before setting pve-manager specific settings..
'/etc/lvm/lvm.conf' -> '/etc/lvm/lvm.conf.bak'
Setting 'global_filter' in /etc/lvm/lvm.conf to prevent zvols from being scanned:
global_filter=["a|.*|"] => global_filter=["r|/dev/zd.*|"]
Setting up proxmox-ve (7.1-1) ...
Processing triggers for mailcap (3.69) ...
Processing triggers for fontconfig (2.13.1-4.2) ...
Processing triggers for desktop-file-utils (0.26-1) ...
Processing triggers for initramfs-tools (0.140) ...
update-initramfs: Generating /boot/initrd.img-5.13.19-2-pve
Running hook script 'zz-proxmox-boot'..
Re-executing '/etc/kernel/postinst.d/zz-proxmox-boot' in new private mount namespace..
No /etc/kernel/proxmox-boot-uuids found, skipping ESP sync.
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1) ...
Processing triggers for libc-bin (2.31-13+deb11u2) ...
Processing triggers for rsyslog (8.2102.0-2) ...
Processing triggers for man-db (2.9.4-2) ...
Processing triggers for proxmox-backup-file-restore (2.1.2-1) ...
Updating file-restore initramfs...
11292 blocks
Processing triggers for pve-ha-manager (3.3-1) ...
root@proxmox7node01:~$

Reboot your Debian system after the installation to boot with the Proxmox VE kernel:

sudo systemctl reboot

Check that port 8006 is bound to the Proxmox VE proxy service:

$ ss -tunelp | grep 8006
tcp LISTEN 0 4096 *:8006 *:* uid:33 ino:25414 sk:18 cgroup:/system.slice/pveproxy.service v6only:0

Step 5: Access the Proxmox VE web interface

From your workstation, connect to the Proxmox VE admin web console at https://youripaddress:8006. Select "PAM Authentication" and authenticate with the server's root user password to access the Proxmox VE dashboard, which looks like the screenshot below. To change the Proxmox VE UI theme, see the guide: How To Customize Proxmox VE Web UI With dark theme.

Once logged in, create a Linux bridge called vmbr0 and add the first network interface to the bridge being created. For a private bridge using NAT, check the article: Create Private Network Bridge on Proxmox VE with NAT. The official Proxmox documentation has more guides on advanced configurations and Proxmox VE administration.
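For reference, a Linux bridge can also be defined directly in /etc/network/interfaces instead of through the GUI. A minimal sketch, assuming the physical NIC is enp1s0 and the addressing used in Step 2; the gateway address is an assumption, so adjust names and addresses to your own network:

auto lo
iface lo inet loopback

iface enp1s0 inet manual

auto vmbr0
iface vmbr0 inet static
        # Node address from Step 2; gateway below is assumed, set your router's IP
        address 192.168.200.50/24
        gateway 192.168.200.1
        bridge-ports enp1s0
        bridge-stp off
        bridge-fd 0

Apply the change with ifreload -a (provided by the ifupdown2 package installed above) or by rebooting the node.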
computingpostcom · 3 years ago
In this guide, we discuss the installation of a Proxmox VE 6 server on a Debian 10 (Buster) Linux system. The recommended and supported Proxmox VE server installation is usually done via the bare-metal ISO installer, but there are scenarios where it makes sense to install on a running Debian server. Proxmox Virtual Environment (VE) is an enterprise-grade open-source server virtualization solution based on the Debian Linux distribution with a modified Ubuntu LTS kernel. It allows you to deploy and manage both virtual machines and containers.

This setup presumes you have a running Debian 10 Buster Linux server. If you don't have one, follow our guide to install Debian 10 on a dedicated server that will be used as a hypervisor. Please note that you need a 64-bit processor with support for the Intel 64 or AMD64 CPU extensions. Below are the steps you'll follow to install Proxmox VE 6 on a Debian 10 (Buster) server. For Proxmox VE 7, check out: How To Install Proxmox VE 7 on Debian 11 (Bullseye).

Step 1: Update Debian OS

Update the apt package index before getting started:

sudo apt -y update
sudo apt -y upgrade
sudo reboot

Step 2: Set the system hostname

We need to set the hostname and make sure it is resolvable via /etc/hosts:

sudo hostnamectl set-hostname proxmox6node01.example.com --static

example.com should be replaced with a valid domain name.

$ ip ad
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether da:f6:59:66:ed:ea brd ff:ff:ff:ff:ff:ff
    inet 10.116.0.2/20 brd 10.116.15.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d8f6:59ff:fe66:edea/64 scope link
       valid_lft forever preferred_lft forever

Then update the /etc/hosts file with the hostname and matching IP address for local resolution without a DNS server:

$ sudo vim /etc/hosts
10.116.0.2 proxmox6node01.example.com proxmox6node01

Test that it resolves correctly:

$ hostname --ip-address
10.116.0.2

Step 3: Add the Proxmox VE repository

All Proxmox packages will be pulled from the matching upstream repository, which is added manually to the system. Here we add the Proxmox VE No-Subscription repository. Import the GPG key:

### Using apt-key ###
wget -qO - http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg | sudo apt-key add -

### OR manually ###
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg
sudo mv proxmox-ve-release-6.x.gpg /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
chmod +r /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

Then add the Proxmox VE repository:

echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" | sudo tee /etc/apt/sources.list.d/pve-install-repo.list

You can now update your repository and system by running:

sudo apt update && sudo apt dist-upgrade

Adding the Proxmox VE Ceph repository: this is Proxmox VE's main Ceph repository and holds the Ceph packages for production use. You can also use this repository to update only the Ceph client.

echo "deb http://download.proxmox.com/debian/ceph-nautilus buster main" | sudo tee /etc/apt/sources.list.d/ceph.list

Step 4: Install Proxmox VE packages

These are the commands executed to install the Proxmox VE packages.
sudo apt update
sudo apt install proxmox-ve postfix open-iscsi

If you have a mail server in your network, you should configure postfix as a satellite system; your existing mail server will then be the 'relay host' that routes the emails sent by the Proxmox server to the end recipients. If you don't know what to enter here, choose local only.

Reboot your Debian system after the installation to boot with the Proxmox VE kernel:

sudo reboot

Step 5: Access the Proxmox VE web interface
Connect to the Proxmox VE admin web interface at https://youripaddress:8006. The Proxmox VE dashboard looks like the screenshot below. Select "PAM Authentication" and authenticate with the server's root user password.

Once logged in, create a Linux bridge called vmbr0 and add your first network interface to it. For a private bridge using NAT, check the article: Create Private Network Bridge on Proxmox VE 6 with NAT. To change the Proxmox VE UI theme, see the guide: How To Customize Proxmox VE Web UI With dark theme. Visit the Proxmox documentation website for advanced configurations and to master Proxmox VE administration.
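Once the node is back up, it can also be useful to confirm which Proxmox VE and kernel versions are actually running. A small check, not part of the original walkthrough:

# Summary of the installed Proxmox VE version
pveversion

# Detailed list of all PVE-related package versions (kernel, qemu, lxc, ...)
pveversion -v

# Confirm the running kernel is the PVE kernel
uname -r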
computingpostcom · 3 years ago
This guide has been written to help Linux and cloud users install and configure Proxmox VE 7 on a Hetzner root server. A root server at Hetzner is a dedicated server, completely isolated from others, which gives you full access and control to configure the server any way you want without affecting other users. Hetzner Online GmbH runs auctions for dedicated server hardware at very competitive rates with a monthly payment model. Visit the Hetzner server auction page to bid on servers and save money.

Proxmox Virtual Environment (VE) is a very powerful, enterprise-grade server virtualization platform that uses Debian Linux as its base with a modified Linux kernel. With Proxmox you can run both virtual machines and containers, powered by KVM and LXC respectively. The Proxmox VE source code is free, released under the GNU Affero General Public License, v3 (GNU AGPL, v3).

This guide is intended for personal labs only. We will do a single-node installation of the Proxmox VE server on the Debian 11 (Bullseye) operating system. Proxmox VE comes with an integrated graphical user interface (GUI) for management, so there is no need to install a separate management tool. For a multi-node Proxmox VE cluster setup, read the official Proxmox VE High Availability documentation if you are interested in that solution.

In this article we perform the installation of Proxmox VE 7 on a Hetzner root server with the following hardware specifications:

CPU: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (12 cores)
Memory: 256 GB RAM
Disk: 2 x 480 GB SSD
Network: 1 Gbit
IPv4 addresses: 1 x public IPv4 address

When you order a Hetzner root server you get a single public IPv4 address by default. If you need more public addresses, you have to order them separately.

Step 1 – Boot the server into rescue mode

Log in to your Hetzner root server console and go to Main functions > Servers > Server Label > Rescue to boot your server into rescue mode. On the page shown, select the operating system, CPU architecture, and public key or password, then click "Activate rescue system". After activating the rescue system, the server has to be rebooted. This is done in the Server > ServerName > Reset section of the console.

Step 2 – Create the root server configuration

SSH to the server in rescue mode using the root user and the password shown during rescue activation:

$ ssh root@serverip

Welcome to the Hetzner Rescue System.

This Rescue System is based on Debian 9 (stretch) with a newer kernel. You can install software as in a normal system.

To install a new operating system from one of our prebuilt images, run 'installimage' and follow the instructions.

More information at http://wiki.hetzner.de

Rescue System up since 2021-12-03 21:01 +02:00

Hardware data:

   CPU1: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz (Cores 12)
   Memory:  257653 MB
   Disk /dev/sda: 480 GB (=> 447 GiB) doesn't contain a valid partition table
   Disk /dev/sdb: 480 GB (=> 447 GiB) doesn't contain a valid partition table
   Total capacity 894 GiB with 2 Disks

Network data:
   eth0  LINK: yes
         MAC:  b4:2e:99:47:fa:5c
         IP:   xxxxxxxxxxxxxxxxxx
         IPv6: xxxxxxxxxxxxxxxxxx
         Intel(R) Gigabit Ethernet Network Driver

root@rescue ~ #

As seen from the output, the server we are using has two 480 GB disks. We will configure them in a non-redundant RAID 0 configuration.

Disk /dev/sda: 480 GB (=> 447 GiB)
Disk /dev/sdb: 480 GB (=> 447 GiB)

Next we create our Hetzner server installer configuration file.
We will name it install-config.txt:

vim install-config.txt

This is the data that will be used by installimage to install the Debian 11 (Bullseye) operating system from a prebuilt image:

DRIVE1 /dev/sda
DRIVE2 /dev/sdb
SWRAID 1
SWRAIDLEVEL 0   # Use 1 for RAID 1
BOOTLOADER grub
HOSTNAME proxmox7.example.com   # Set correct hostname
PART /boot ext4 512M
PART lvm vg0 all
LV vg0 root /    ext4 50G
LV vg0 swap swap swap 8G
LV vg0 var  /var ext4 300G

# List available images with: ls /root/.oldroot/nfs/install/../images
IMAGE /root/images/Debian-1101-bullseye-amd64-base.tar.gz

Configure your own partitioning scheme depending on the storage hardware and usable space.

Step 3 – Install Debian 11 (Bullseye) on the Hetzner root server

With the installer configuration in place, initiate the installation of Debian Linux on the Hetzner root server by running the following command:

# installimage -a -c install-config.txt

The installation process starts immediately after the command execution:

Found AUTOSETUP file '/autosetup'
Running unattended installimage installation ...

DRIVE1 /dev/sda
DRIVE2 /dev/sdb
SWRAID 1
SWRAIDLEVEL 0 # Use 1 for Raid 1
BOOTLOADER grub
HOSTNAME myrootserver.computingpost.com
PART /boot ext3 512M
PART lvm vg0 all
LV vg0 root / ext4 50G
LV vg0 swap swap swap 8G
LV vg0 var /var ext4 300G
IMAGE /root/.oldroot/nfs/install/../images/Debian-1101-bullseye-amd64-base.tar.gz

WARNING: Starting installation in 20 seconds ...
         Press X to continue immediately ...
         Installation will DELETE ALL DATA ON DISK(s)!
         Press CTRL-C to abort now!

The script will prepare the disks and install the Debian server for you. Just sit back and relax as the magic happens!

                Hetzner Online GmbH - installimage

  Your server will be installed now, this will take some minutes
             You can abort at any time with CTRL+C ...

         : Reading configuration                           done
         : Loading image file variables                    done
         : Loading debian specific functions               done
   1/17  : Deleting partitions                             done
   2/17  : Test partition size                             done
   3/17  : Creating partitions and /etc/fstab              done
   4/17  : Creating software RAID level 0                  done
   5/17  : Creating LVM volumes                            done
   6/17  : Formatting partitions
         :   formatting /dev/md/0 with ext4                done
         :   formatting /dev/vg0/root with ext4            done
         :   formatting /dev/vg0/swap with swap            done
         :   formatting /dev/vg0/var with ext4             done
   7/17  : Mounting partitions                             done
   8/17  : Sync time via ntp                               done
         : Importing public key for image validation       done
   9/17  : Validating image before starting extraction     done
  10/17  : Extracting image (local)                        done
  11/17  : Setting up network config                       done
  12/17  : Executing additional commands
         :   Setting hostname                              done
         :   Generating new SSH keys                       done
         :   Generating mdadm config                       done
         :   Generating ramdisk                            done
         :   Generating ntp config                         done
  13/17  : Setting up miscellaneous files                  done
  14/17  : Configuring authentication
         :   Fetching SSH keys                             done
         :   Disabling root password                       done
         :   Disabling SSH root login without password     done
         :   Copying SSH keys                              done
  15/17  : Installing bootloader grub                      done
  16/17  : Running some debian specific functions          done
  17/17  : Clearing log files                              done

                  INSTALLATION COMPLETE
   You can now reboot and log in to your new system with the
  same credentials that you used to log into the rescue system.

When the installation is done, reboot into the Debian 11 (Bullseye) environment:

# shutdown -r now

SSH to the server as the root user with the password or SSH public key if set:

$ ssh root@serverip
Linux proxmox7.example.com 5.13.19-2-pve #1 SMP PVE 5.13.19-4 (Mon, 29 Nov 2021 12:10:09 +0100) x86_64

The programs included with the Debian GNU/Linux system are free software; the exact distribution terms for each program are described in the individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent permitted by applicable law.

We can review the current partition layout on the server. If you used LVM and still have free space in the volume group, you can adjust the logical volume capacities.

# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 447.1G  0 disk
├─sda1           8:1    0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─sda2           8:2    0 446.6G  0 part
  └─md1          9:1    0   893G  0 raid0
    ├─vg0-root 253:0    0    50G  0 lvm   /
    ├─vg0-swap 253:1    0     8G  0 lvm   [SWAP]
    └─vg0-var  253:2    0   300G  0 lvm   /var
sdb              8:16   0 447.1G  0 disk
├─sdb1           8:17   0   512M  0 part
│ └─md0          9:0    0   511M  0 raid1 /boot
└─sdb2           8:18   0 446.6G  0 part
  └─md1          9:1    0   893G  0 raid0
    ├─vg0-root 253:0    0    50G  0 lvm   /
    ├─vg0-swap 253:1    0     8G  0 lvm   [SWAP]
    └─vg0-var  253:2    0   300G  0 lvm   /var

# pvs
  PV       VG  Fmt  Attr PSize   PFree
  /dev/md1 vg0 lvm2 a--  893.00g 535.00g

# vgs
  VG  #PV #LV #SN Attr   VSize   VFree
  vg0   1   3   0 wz--n- 893.00g 535.00g

# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg0 -wi-ao----  50.00g
  swap vg0 -wi-ao----   8.00g
  var  vg0 -wi-ao---- 300.00g

See the example below, which adds an extra 50 GB to the /dev/vg0/var logical volume:

# lvextend -r -L +50G /dev/vg0/var

Confirm the Debian installation was successful by querying the OS release info:

root@proxmox ~ # cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
NAME="Debian GNU/Linux"
VERSION_ID="11"
VERSION="11 (bullseye)"
VERSION_CODENAME=bullseye
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"

Update and upgrade your Debian 11 (Bullseye) system:

apt update
apt -y full-upgrade
apt install wget vim sudo bash-completion
[ -f /var/run/reboot-required ] && reboot -f

Step 4 – Install Proxmox VE 7 on Debian 11 (Bullseye)

Now that our cloud server is ready, we can move on to the actual installation of Proxmox VE. Refer to our guide to proceed with the setup: How To Install Proxmox VE 7 on Debian 11 (Bullseye). To change the Proxmox VE UI theme, see the guide: How To Customize Proxmox VE Web UI With dark theme.
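Since the layout above relies on Linux software RAID, it can be worth confirming the array state before putting the node into service. A quick optional check, not part of the original article:

# Kernel view of all md arrays and their sync state
cat /proc/mdstat

# Detailed status of the RAID0 data array and the RAID1 /boot array
mdadm --detail /dev/md1
mdadm --detail /dev/md0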