#proxmox cluster
virtualizationhowto · 1 year ago
Proxmox Remove Node from Cluster Including Ceph
If you are learning Proxmox and using it in your home lab environment, one of the things you will likely want to do is build a Proxmox cluster. Doing this provides high availability for your virtual machines and containers. If you are building clusters, you may need to remove nodes from cluster configurations in Proxmox. In a home lab, you may have power considerations, want less noise, or…
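For reference, the usual removal sequence looks something like the following — a sketch only, with `pve3` as a placeholder node name (don't run this against a node you still need):

```shell
# On the node being removed: migrate or delete its guests first,
# then shut the node down permanently.

# On any remaining cluster node, confirm quorum and membership:
pvecm status
pvecm nodes

# Remove the (powered-off) node from the cluster:
pvecm delnode pve3

# If the node ran Ceph, remove its services beforehand, e.g. with
# `pveceph osd destroy <id>` and `pveceph mon destroy pve3`.
```

The node must never be powered back on with its old cluster configuration, or it can confuse the remaining members.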
techdirectarchive · 4 months ago
Create a bootable USB on Mac: Proxmox VE Setup
purplest-puppyweed · 1 month ago
Terrible excuse for a bio
yeah so this is gonna go great I know exactly who I am
No minors! or Nazis, or TERFs. go away
WHOMST: call me Ivy! she/they/it, in no particular order. I'm either older than the first steam engines or in my early 20s, they haven't figured out how to carbon date living creatures yet and I sure as hell don't remember. WHOD'VE: I'm poly! I'm bi? gay? sapphic? idk it's never been a problem
THINGS: I like star trek, M*A*S*H, and some other shows! My hobbies include open source microelectronic hardware design, general linux nonsense (ask me about my proxmox cluster), photography, and really whatever my brain points me at in a given moment! I play Satisfactory sometimes and I am frequently high
also shibari, petplay, sensdep, praise, degradation, and generally associated portions of BDSM are things that belong here. and HDG, that too. THINGS'NT: No watersports, scat, ageplay, vomit, or incest if at all possible!
WHY???? I am here for shenanigans purposes only! this is my single purpose in life. I will yap, I will post things of assorted natures that I'm sure even I don't comprehend! welcome to the disaster y'all
rootresident · 2 months ago
Self Hosting
I haven't posted here in quite a while, but the last year+ for me has been a journey of learning a lot of new things. This is a kind of 'state-of-things' post about what I've been up to for the last year.
I put together a small home lab with 3 HP EliteDesk SFF PCs, an old gaming desktop running an i7-6700k, and my new gaming desktop running an i7-11700k and an RTX-3080 Ti.
"Using your gaming desktop as a server?" Yep, sure am! It's running Unraid with ~7TB of storage, and I'm passing the GPU through to a Windows VM for gaming. I use Sunshine/Moonlight to stream from the VM to my laptop in order to play games, though I've definitely been playing games a lot less...
On to the good stuff: I have 3 Proxmox nodes in a cluster, running the majority of my services. Jellyfin, Audiobookshelf, Calibre Web Automated, etc. are all running on Unraid to have direct access to the media library on the array. All told there are 23 Docker containers running on Unraid, most of which are media management and streaming services. Across my lab, I have a whopping 57 containers running. Some of them are for things like monitoring which I wouldn't really count, but hey, I'm not going to bother making the effort to count properly.
The Proxmox nodes each have a VM for docker which I'm managing with Portainer, though that may change at some point as Komodo has caught my eye as a potential replacement.
All the VMs and LXC containers on Proxmox get backed up daily and stored on the array, and physical hosts are backed up with Kopia and also stored on the array. I haven't quite figured out backups for the main storage array yet (redundancy != backups), because cloud solutions are kind of expensive.
You might be wondering what I'm doing with all this, and the answer is not a whole lot. I make some things available for my private discord server to take advantage of, the main thing being game servers for Minecraft, Valheim, and a few others. For all that stuff I have to try and do things mostly the right way, so I have users managed in Authentik and all my other stuff connects to that. I've also written some small things here and there to automate tasks around the lab, like SSL certs which I might make a separate post on, and a custom dashboard to view and start the various game servers I host. Otherwise it's really just a few things here and there to make my life a bit nicer, like RSSHub to collect all my favorite art accounts in one place (fuck you Instagram, piece of shit).
It's hard to go into detail on a whim like this so I may break it down better in the future, but assuming I keep posting here everything will probably be related to my lab. As it's grown it's definitely forced me to be more organized, and I promise I'm thinking about considering maybe working on documentation for everything. Bookstack is nice for that, I'm just lazy. One day I might even make a network map...
puppy-linux-official · 3 months ago
Let’s see, in my home office I have 6 - one as a storage appliance, three to run my proxmox cluster, a gaming desktop, and a laptop.
In my living room there are three more laptops, a Macintosh Plus, and a Xeon workstation I need to take back to storage where there are probably another dozen computers at least, including three different revisions of the Apple IIgs.
Who are these people who have one computer. Sickos, degenerates, luddites. Embrace the singularity, become supercomputers georg.
i am having a realization i may be 'computers georg' - computer engineer, has 6 computers and wants another. do i have a problem
gateway-official · 2 months ago
Routing Mess
Well, I got a router to get better control over my network. I have an ISP that shall not be named that wouldn't let me get certain perks unless I use their router/modem hybrid motherfucker. It has a disgusting lack of configuration, so I bit the bullet and got a TP-Link AX1800 router from Wal-Mart. I hear these things die after a few years BUT it already has granted so much more control over my network than the other thing. I can finally route all DNS through Lenny (Raspberry Pi) so I'm utilizing Pi-hole to its fullest.
UNFORTUNATELY I did not prepare properly for the move, so I ended up blowing up my Proxmox cluster. I just acquired a very old Gateway PC from like, 2012, and I've been using it as a second member of my cluster (the Nuclear cluster). His name is Nicholas and he's got a 5 dollar terabyte drive that's used but passes the SMART check. However, after switching over to the new router and following some instructions improperly for the Proxmox install on Julian (Gateway PC) and Adelle (Dell PC), the Nuclear Cluster royally broke and I had to reinstall on Julian and remove Adelle from the cluster. I also had to update Caddy and a bunch of other services to make everything work again because I moved from one IP address scheme to another.
Anyway, let me tell you about getting Julian (Gateway). I found this computer part store that's just full of computer junk. Anything and everything. My boyfriend drove me over there and I went in with him and the place is LINED with COMPUTERS and computer parts and computers running without cases and it was just BEAUTIFUL. I'm poking through PCs, trying to find a cheap one I can make into a NAS, and it's kinda hard because I'm in a wheelchair and all the PCs are on the ground and it's a small place, so my boyfriend starts poking around too. And then he goes, "HEY LOOK", and he rotates a desktop PC around and IT'S A GATEWAY! An old-ass Gateway. And I just had to bring it home!
Then today I found a PC for like 10 bucks, but it doesn't have RAM. It's once again a Dell.
I also brought home another Dell that I plan to make my media server. Any ideas on names?
hawkstack · 3 months ago
Top Ansible Modules for Cloud Automation in 2025
Introduction
As cloud adoption continues to surge in 2025, IT teams are increasingly turning to Ansible to automate infrastructure provisioning, configuration management, and application deployment. With its agentless architecture and extensive module library, Ansible simplifies cloud automation across multiple providers like AWS, Azure, Google Cloud, and more. In this blog, we will explore the top Ansible modules that are shaping cloud automation in 2025.
1. AWS Cloud Automation Modules
Amazon Web Services (AWS) remains a dominant force in cloud computing. Ansible provides several modules to automate AWS infrastructure, making it easier for DevOps teams to manage cloud resources. Some key AWS Ansible modules include:
amazon.aws.ec2_instance – Automates EC2 instance provisioning and configuration.
amazon.aws.s3_bucket – Manages AWS S3 bucket creation and permissions.
amazon.aws.rds_instance – Simplifies AWS RDS database provisioning.
amazon.aws.elb_application_lb – Automates Elastic Load Balancer (ALB) management.
amazon.aws.iam_role – Helps in managing AWS IAM roles and permissions.
These modules enhance infrastructure-as-code (IaC) practices, reducing manual efforts and increasing consistency.
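As a quick illustration, a minimal play using one of these modules might look like the following — the instance name, AMI ID, and region are placeholders, not recommendations:

```yaml
# Hypothetical play: provisions a single EC2 instance.
- name: Provision a web server in AWS
  hosts: localhost
  connection: local
  tasks:
    - name: Launch an EC2 instance
      amazon.aws.ec2_instance:
        name: web-01
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0  # placeholder AMI
        region: us-east-1
        state: running
```

Run with valid AWS credentials in the environment, the same play is idempotent: re-running it leaves an already-running instance untouched.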
2. Microsoft Azure Cloud Automation Modules
Microsoft Azure continues to grow with its enterprise-friendly cloud solutions. Ansible supports Azure cloud automation through the following modules:
azure.azcollection.azure_rm_virtualmachine – Automates the deployment of Azure virtual machines.
azure.azcollection.azure_rm_storageaccount – Manages Azure Storage accounts.
azure.azcollection.azure_rm_networkinterface – Handles network configurations in Azure.
azure.azcollection.azure_rm_kubernetescluster – Automates AKS (Azure Kubernetes Service) cluster deployment.
azure.azcollection.azure_rm_roleassignment – Assigns and manages user roles in Azure.
These modules provide a seamless way to manage Azure infrastructure with Ansible playbooks.
3. Google Cloud Platform (GCP) Automation Modules
Google Cloud has gained traction in AI, ML, and Kubernetes-based workloads. Ansible supports Google Cloud automation with these modules:
google.cloud.gcp_compute_instance – Provisions and manages Google Compute Engine instances.
google.cloud.gcp_storage_bucket – Automates Google Cloud Storage bucket management.
google.cloud.gcp_sql_instance – Manages Cloud SQL databases.
google.cloud.gcp_container_cluster – Deploys Kubernetes clusters in GKE (Google Kubernetes Engine).
google.cloud.gcp_firewall_rule – Configures firewall rules for Google Cloud networks.
Using these modules, DevOps teams can create scalable and secure Google Cloud environments.
4. Kubernetes and Containerization Modules
Kubernetes has become a critical component of modern cloud applications. Ansible supports container and Kubernetes automation with:
kubernetes.core.k8s – Manages Kubernetes resources, including deployments, services, and config maps.
kubernetes.core.helm – Automates Helm chart deployments.
community.docker.docker_container – Deploys and manages Docker containers.
kubernetes.core.k8s_auth – Manages Kubernetes authentication and role-based access control (RBAC).
kubernetes.core.k8s_scale – Dynamically scales Kubernetes deployments.
These modules make it easier to orchestrate containerized workloads efficiently.
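For instance, a sketch of `kubernetes.core.k8s` applying a Deployment — the names and image here are illustrative, and the play assumes a reachable kubeconfig:

```yaml
# Hypothetical play: ensures an nginx Deployment exists.
- name: Manage a Kubernetes deployment
  hosts: localhost
  tasks:
    - name: Ensure the deployment exists
      kubernetes.core.k8s:
        state: present
        definition:
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: demo-nginx
            namespace: default
          spec:
            replicas: 2
            selector:
              matchLabels: {app: demo-nginx}
            template:
              metadata:
                labels: {app: demo-nginx}
              spec:
                containers:
                  - name: nginx
                    image: nginx:1.27
```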
5. Multi-Cloud and Hybrid Cloud Automation Modules
With enterprises adopting multi-cloud and hybrid cloud strategies, Ansible provides modules that help manage cloud-agnostic workloads, such as:
community.general.proxmox – Automates virtualization tasks in Proxmox.
community.vmware.vmware_guest – Manages VMware virtual machines.
community.general.terraform – Integrates Ansible with Terraform for multi-cloud deployments.
community.hashi_vault – Retrieves secrets from HashiCorp Vault securely.
community.general.consul – Automates Consul-based service discovery.
These modules help enterprises unify cloud operations across different providers.
Conclusion
Ansible remains at the forefront of cloud automation in 2025, offering an extensive range of modules to manage infrastructure seamlessly across AWS, Azure, GCP, Kubernetes, and hybrid cloud environments. Whether you are provisioning VMs, managing storage, or orchestrating containers, these top Ansible modules can simplify your cloud automation workflows.
By leveraging Ansible's capabilities, organizations can reduce complexity, improve efficiency, and accelerate cloud-native adoption. If you haven’t explored Ansible for cloud automation yet, now is the time to get started!
For more details, visit www.hawkstack.com
What’s your favorite Ansible module for cloud automation? Let us know in the comments!
digitaltechdev · 10 months ago
Why Your Deleted Proxmox Cluster Node Won’t Disappear – and How to Resolve It
If you are learning Proxmox and running it in your home lab, building a Proxmox cluster is probably one of the things you will want to accomplish. Doing so gives you high availability for your containers and virtual machines. While working with clusters, you may also need to resolve a deleted cluster node that still shows up in the Proxmox interface. In a home lab, you might want to reduce noise, save space, or cut power consumption. This article walks through the procedures, including the commands, for removing a node from the Proxmox cluster manager.
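For context, the usual fix for a lingering node entry is to delete its leftover directory from the cluster filesystem on a surviving node — a sketch, assuming the removed node was named `pve2` and is gone for good:

```shell
# On a surviving cluster node — the stale entry lives under /etc/pve.
pvecm delnode pve2          # if not already done
rm -rf /etc/pve/nodes/pve2  # clears the leftover entry from the web UI
```

Only do this after the node has been removed from the cluster; deleting the directory of a live member will cause problems.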
cloudlodge · 1 year ago
Proxmox Kubernetes Install with Talos Linux - Virtualization Howto
jebadeiah · 2 years ago
(R6D033) 100 Days of Code - Tearing Down My Network and (attempting) Rebuilding It
Today, my plan was to get Proxmox running on my system necrosys and then cluster it with my system puc. Then, I’d work on my hour of code. I’ve gotten Proxmox running on necrosys, but in trying to play around with some naming conventions I somehow, in a single moment, completely destroyed my Nginx engine… where? Not sure. Couldn’t I just undo what I’d done immediately prior to it breaking? Nope.…
virtualizationhowto · 1 year ago
Ceph Storage Best Practices for Ultimate Performance in Proxmox VE
Ceph is a scalable storage solution that is free and open-source. It is a great storage solution when integrated within Proxmox Virtual Environment (VE) clusters, providing reliable and scalable storage for virtual machines, containers, etc. In this post, we will look at Ceph storage best practices for Ceph storage clusters and look at insights from Proxmox VE Ceph configurations with a Ceph…
techdirectarchive · 19 days ago
Resolve the Update Package Database failure on Proxmox VE
Proxmox Virtual Environment provides a complete open-source platform for enterprise virtualization. Its built-in web interface enables you to manage VMs and containers, configure software-defined storage and networking, set up high-availability clustering, and use multiple out-of-the-box tools all in a single solution. In this guide, we shall discuss the steps to resolve the Update Package…
slumberersentinels · 2 years ago
Continuing Under Global Upheaval
Hi there. Thank you for your continued support. While I have a lot I can talk about, it's going to be hard to tackle it all at the depths I want with the time I have. So I would like to do what I can to keep everybody updated.
Here we go.
OpenStack and Sunbeam on Ubuntu Linux, planned for Run #4
A SansIsSleeping Run Cannot last forever; 3000 Years At Most before KillScreen imminent. (Edit: 120? Edit 2: Wait, Nevermind)
2+ Years in SansIsSleeping as of September 16, 2023.
2+ Years in SansIsSleeping run #2, as of October 21st, 2023.
Unauthorized Shoutout Event in June 2023?
International Historical Revisionism, Genocide
So. Let's take it as best as I can. Thank you all for your support and interest.
Sunbeam, OpenStack, Ubuntu Linux
Sunbeam is a quick way of installing OpenStack on Ubuntu GNU/Linux. Let's take that from the top. To run a multifeatured hypervisor with a web access panel, networking, data storage systems, users, user groups within domains, and virtual machines all in one system, you use OpenStack. This system can do even more than what I list, and it's all free and open-source software. It's big. Big corporations use it. But anybody can use it, too, and that's especially helpful because Canonical--the folk who maintain and keep Ubuntu GNU/Linux up-to-date--developed Sunbeam, which is like "OpenStack Essentials."
I've considered other hypervising systems to manage the variety of operating systems and mini systems needed to build what I think is the future of Slumberer Sentinels and SansIsSleeping. I want the ability for any user to collect a 'drop' of the current live run and see it replicated on their own device. That's quite a ways off, and I'd rather just have a run I can run, load, and save the state of the whole system so I won't lose progress. You need a hypervising system to run this computer, virtually, or else it will be like run #2: when the computer reboots, shuts down the whole operating system, or shuts down the game, the run is over. A hypervising system can run the computer like console emulation: it processes a simulation of a smaller computer with less resources within your current one. With this, I can save the whole operating system--not just the libTAS, tool-assisted UNDERTALE state. Several were considered: OpenVMS, a classic mainframe system built to cluster across computers and hypervise them all, which has been ported to x86 systems; Archipel, a defunct free and open-source hypervisor; and Proxmox, which is also free but lacks a few LXC/LXD hypervisor container features (I think, such as migrating them). If it isn't free, it isn't on this list.
Sunbeam seemed to be delivered within the last year, which feels serendipitous to me. Canonical, developers of Ubuntu, created "AnBox Cloud," which is amazing and would add a layer of safety to prevent any of our own Android phones from being hacked. This is why I was familiar with some of the systems already in place for OpenStack on Ubuntu. Sunbeam may make this giant software ecosystem palatable for someone with constrained needs, such as myself or any of you: hypervised computers are the future of safe and networked computing.
Do I know how to use Sunbeam? Not very much. But I'll hack away at it once I get it installed on my Ubuntu on a computer I've been using personally until I knew how to avoid how run #1 ended on TrueNAS. Due to the global economic crunch, I will have more time to do this once I get a windfall of cash, like a MacArthur grant. XD
SansIsSleeping Runs Cannot Last Forever !!!!
EDIT 2: This whole below section is outdated! This whole sleeping scene may be loopable forever. Yeeyyy :D See the link at the end.
This is obvious to people who know UNDERTALE very well. Unfortunately, I needed this explained to me by two kindly members of the tasvideos.org community. When I brought this 'droplet sharing' idea to TASVideos, I imagined I would be at a blockade: if I was so busy, how could I develop all I needed? Wouldn't I need some software? Was anyone familiar with running UNDERTALE with libTAS tools? I thought I would need to have the machine running UNDERTALE also stream that video out instead of have its video be captured, and I thought from an old YouTube video's theory that the only event that takes place while Sans is sleeping is a loop with an iterator. I was wrong--the only way for this run to last 'eternity' is for Sans' Zs to be recycled objects, and they aren't.
OceanBagel pointed out to me that there's a limit to how many objects can be created and destroyed in GameMaker games, and if that ecosystem isn't safely maintained and monitored, new objects will take the unique number addressed to older objects once the incredibly large iterating number loops around. This scene in UNDERTALE has this delightful limitation, as every Z of Sans is created as unique. ( <3 ). Computers, to avoid corrupting other pieces of data in memory, will limit the size of numbers and strings of text characters to a predefined limit of bytesize. In this case, the integer is giant. D1firehail suggested this giant integer would hit its upper limit, cycle back to its bottom limit and back to 0 after about 2997 years. You heard that right. This is a pretty reasonable 'eternity.' :)
Two Years of Sans Is Sleeping; Two Years in Run #2.
This is pretty astounding. In my research to possibly get my CISSP certification, I came across the term "Mean Time Between Failures." I never looked this up on the laptop running Sans Run #2, so it must be greater than two years--the laptop hasn't failed yet. It's been me who has! I accidentally shut down the desktop service, which luckily rebooted after my accident. I've messed up my home network many times trying to change DNS servers, trying to move Ethernet cables and update routers, and sometimes I've lost internet just because my ISP decided to randomly cancel my contract.
Despite all my hardware and network juggling, the laptop has held on. If I am lucky enough to move, I will need to research getting a portable hotspot so I could move to a new home, and keep the stream running via this mobile hotspot. I hope our internet would be hooked up fast--I'm not about to colocate this laptop in a hosting facility. :')
Thank you for everything. We are almost at 1,700 followers. In November, I expect to have an appointment time with a legal aid group that runs a day full of free appointments for non-profits. I expect they might give me ideas of legal structures for SansIsSleeping, Slumberer Sentinels, and other non-business entities like unions, trusts, and whatnot. Desert Bus for Hope started out as something like this, I figure. Would they need to hire a trucking union to run the bus 24/7? :)
Unauthorized Shoutout Event, June 2023?
So, for those of you who don't know, in the beginning of June I had the inspiration to change my home network setup. This was before I learned how to do a 'demilitarized zone,' and before I understood NAT, and technically, let's say I still don't really understand how deep this rabbit hole will go. In the middle of this, I thought it would be easy to change my home network, so I could have router with trusted connections, and a router simply to untrusted connections and as gateway to the whole internet. This is when I went to one of the computers still running the stream to check uptime, and noticed a strange, unauthorized Twitch shoutout.
Perhaps a user I gave mod privileges was playing a prank on me. Perhaps my laptop or my phone was hijacked in the past without my knowing, and this moment was when the network presence revealed itself. Perhaps it's the shock that troubled me on April 28th, 2023, with a growing acquaintance announcing dark wishes of being as 'evil' as it might be possibly recognized, and this person's statements coinciding with bad accidents simultaneously happening in real life. Maybe they were wanting to prank me as a one-year-old revert to Islam. Do I know? Unfortunately, I don't.
As a result, I pulled all my trusted home data servers offline. I updated my router softwares. I began the installation of intrusion detection systems (IDS), and just recently began deciding how to securely partition my home network into a DMZ (Demilitarized Zone) and a Trusted zone.
This is not my day job, and might be a distant relative of the web developer work I professionally did nine years ago.
International Historical Revisionism, Genocide
It's for all the above that I've tried to take things slowly, even as there is incredible violence internationally. I'm sure other users have had their Twitch accounts hacked. What have they done? And how involved can I be in such a giant Red Team VS Blue Team / Cybersecurity warzone as a single person?
I'm pretty sure Toby Fox might feel something like this, it's just I'm grateful he's likely had 7 to 8 years to meet the people who might help him protect himself and his works. I'm still meeting folk who are helping me, like Alex, Slab, Taingel, Ali (born4ready), and those who help me just survive in personal matters. Thank you for all you've done to help me, including those of you I have not listed.
There is so much going on that I've done my best to update the title of the stream with the latest vulnerability of human beings to systemic violence--like from coordinated groups, agencies, or policies beyond just one non-plural identity. It's hard to face all this and avoid "compassion fatigue." So I hope you show self-compassion.
It's easier for me to pray to G*d than it is to pray to any group I'd tokenize somehow. I pray for the good to be brought out in anything and everything, for it to shine, and for the end of elevating one nonconsentual violence over another.
Peace be upon you.
====
Edit 1: OceanBagel updated me to think that these numbers (his calculation/formula) would be *much* less than 3000 years. 121? Thank you, OceanBagel. ^-^
4,294,967,295 instances * (80 frames / 3 instances) * (1 second / 30 frames) * (1 minute / 60 seconds) * (1 hour / 60 minutes) * (1 day / 24 hours) * (1 year / 365.25 days)
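That formula is easy to sanity-check with a few lines of code, assuming the post's figures of 3 new instances every 80 frames at 30 fps:

```python
# Reproduce OceanBagel's estimate: how long until GameMaker's
# 32-bit instance counter wraps while Sans naps.
MAX_INSTANCES = 4_294_967_295           # 2**32 - 1
frames = MAX_INSTANCES * 80 / 3         # frames to exhaust the counter
seconds = frames / 30                   # at 30 frames per second
years = seconds / (60 * 60 * 24 * 365.25)
print(round(years))                     # prints 121
```

So "about 121 years," not 3000 — still a respectable eternity.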
Edit 2: Eternity's BACK ON, baby!!
OceanBagel has suggested that no problem with instance collisions will happen unless you actually interface with the battle. I guess we'll see in a few tens of decades! :)
rootresident · 2 months ago
SSL Cert Automation
SSL/TLS certificates are absolutely vital to the web. Yes, even your homelab, even if everything is local-only. I wholeheartedly recommend buying a domain for your homelab, as they can be had for ~$5/yr or less depending on the TLD (top-level domain) you choose. Obviously a .com domain is going to be more expensive, but others like .xyz are super affordable, and it makes a lot of things a whole lot easier. I recommend Cloudflare or Porkbun as your registrar; I've also used Namecheap and they're good but lack API access for small accounts. And please, PLEASE for the love of god DO NOT USE GODADDY. EVER.
First of all, why is cert automation even important? Most certificates you will purchase are issued for a one year period, so you only need to worry about renewal once a year, that's not too bad right? Well, that's all changing very soon. With issuers like Let's Encrypt ending expiry emails, and the push to further shorten cert lifetimes, automation is all the more needed. Not to mention Let's Encrypt is free so there is very little reason not to use them (or a similar issuer).
"Okay, you've convinced me. But how???" Well, I'm glad you asked. By far the absolute easiest way is to use a reverse proxy that does all the work for you. Simply set up Caddy, Traefik, Nginx Proxy Manager, etc. and the appropriate provider plugin (if you're using DNS challenge, more on that later), and you're good to go. Everything you host will go through the proxy, which handles SSL certificate provisioning, renewal, and termination for you without needing to lift a finger. This is how a lot of people do it, and there's nothing wrong with doing it this way. However, it may not be the best solution depending on the complexity of your lab.
If you know a thing or two about managing SSL certificates you might be thinking about just running your own certificate authority. That does make it easier, you can make the certs expire whenever you want! Woo, 100 year certificates! Except not really, because many browsers/devices will balk at certificates with unrealistic lifetimes. Then you also have to install the cert authority on any and all client devices, docker containers, etc. It gets to be more of a pain than it's worth, especially when getting certs from an actual trusted CA is so easy. Indeed I used to do this, but when the certs did need to be renewed it was a right pain in the ass.
My lab consists of 6 physical computers, 3 are clustered with each other and all of them talk to the others for various things. Especially for the proxmox cluster, having a good certificate strategy is important because they need to be secure and trust each other. It's not really something I can reasonably slap a proxy in front of and expect it to be reliable. But unfortunately, there's not really any good out of the box solutions for exactly what I needed, which is automatic renewal and deployment to physical machines depending on the applications on each that need the certs.
So I made one myself. It's pretty simple really, I have a modified certbot docker container which uses a DNS challenge to provision or renew a wildcard certificate for my domain. Then an Ansible playbook runs on all the physical hosts (or particularly important VMs) to update the new cert and restart the application(s) as needed. And since it's running on a schedule, it helps eliminate the chance of accidental misconfiguration if I'm messing with something else in the lab. This way I apply the same cert to everything, and the reverse proxy will also use this same certificate for anything it serves.
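A deployment play in that spirit might look like this — the host group, file paths, and the `pveproxy` handler are assumptions for illustration, not the author's actual config:

```yaml
# Sketch: copy the renewed wildcard cert out and restart the consumer.
- name: Push renewed wildcard cert to physical hosts
  hosts: homelab
  become: true
  tasks:
    - name: Copy certificate and key
      ansible.builtin.copy:
        src: "{{ item }}"
        dest: /etc/ssl/homelab/
        mode: "0600"
      loop:
        - certs/fullchain.pem
        - certs/privkey.pem
      notify: Restart services

  handlers:
    - name: Restart services
      ansible.builtin.service:
        name: pveproxy   # whatever consumes the cert on that host
        state: restarted
```

The handler only fires when the copy actually changed something, so scheduled runs are harmless no-ops between renewals.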
The DNS challenge is important, because it's required to get a wildcard cert. You could provision certs individually without it, but the server has to be exposed to the internet, which is not ideal for a lot of backend management-type stuff like Proxmox. You need to have API access to your registrar/DNS provider in order to accomplish this, otherwise you need to add the DNS challenge manually which just defeats the whole purpose. Basically, certbot requests a certificate, and the issuer says, "Oh yeah? If you really own this domain, then put this random secret in there for me to see." So it does, using API access, and the issuer trusts that you own the domain and gives you the requested certificate. This type of challenge is ideal for getting certs for things that aren't on the public internet.
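A request of that kind with certbot's Cloudflare DNS plugin looks roughly like this — domain and credentials path are placeholders:

```shell
# Wildcard cert via DNS-01; requires the certbot-dns-cloudflare plugin
# and an API token stored in the credentials file.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
  -d 'example.com' -d '*.example.com'
```

Certbot writes the TXT record through the API, waits for it to propagate, then cleans it up after issuance.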
This sure was a lot of words for a simple solution, huh. Well, more explanation never hurt anyone, probably. The point of this post is to show that while SSL certificates can be very complicated, for hobby use it's actually really easy to set up automation even for more complex environments. It might take a bit of work up front, but the comfort and security you get knowing you can sit back and not worry about anything and your systems will keep on trucking is pretty valuable.
rodrigocarran · 2 years ago
How to Upgrade from Proxmox VE 7 to Proxmox VE 8
Proxmox VE is a complete, open-source server management platform for enterprise virtualization. It is designed with strong KVM integration as the hypervisor and Linux Containers using LXC. Proxmox VE also ships with an integrated web-based user interface that lets you manage virtual machines, containers, high availability for clusters, and tools…
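In outline, the 7-to-8 upgrade boils down to a few commands — a sketch only; the exact source-list paths depend on whether you use the enterprise or no-subscription repository, so check the official upgrade guide first:

```shell
# Built-in checklist that flags upgrade blockers before you start:
pve7to8 --full

# Switch APT sources from Debian bullseye to bookworm
# (adjust the .list files to match your repo setup):
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list

apt update
apt dist-upgrade    # performs the actual 7 -> 8 upgrade
```

Re-running `pve7to8` after the upgrade is a quick way to confirm nothing was left in a broken state.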
minix-official · 5 months ago
I have a small proxmox cluster I've been meaning to turn into a home lab once I get more of the weird AF sata m.2 ssds my old ass thin clients use
not now kitten, mommy is busy worrying about the amount of self-hosted services she maintains