I occasionally get a bit of ridicule when I describe my homelab architecture. It's not Kubernetes, built on a cloud provider, or all that exciting. I have convictions about my self-hosting that greatly influence some of my choices.
In time I plan to modernize and shore up gaps with better management/automation enabled by Ansible. For now, the setup has worked so well for so many years that adding complexity for the ~15 services I run (most of which are "production" for me or my family) has been a tough pill to swallow. This may be fun to look back on 5 years from now, assuming I've delivered on my intent to modernize.
The Hardware
The stack looks like this:
Primary
- Hardware: Dell R730XD
- OS: Proxmox
- Role: Runs all services
- Role: Hosts the primary ZFS pools (an archive pool plus various performance tiers for services)
Secondary / Backup
- Hardware: Custom Supermicro (repurposed NAS)
- OS: Proxmox Backup Server
- Role: Backup target for archive and daily VM snapshots
Justification / Description
Semi-stringent requirements for my homelab services:
- All software must be open source
- Upgrade rollbacks must be simple and straightforward (I have children and little time to waste)
- Application outages should be isolated as much as feasible
There isn't much of a technical reason for the open source requirement (free as in beer, but freedom preferred when available): I simply prefer to use and support open source software as a matter of principle (consider donating to your favorite projects). That said, I do often find myself reading source code to work around poor project documentation or bugs.
The other two requirements are fulfilled by a rather simple choice: every application service runs in its own VM. Is it wasteful? Absolutely. However, it provides some key benefits:
- Application upgrades are prefaced by a shutdown and a ZFS snapshot, making rollback trivial (see the sketch below)
- Database management/security is greatly simplified
- Historical restores are trivial (click a few buttons in Proxmox Backup Server)
- Decommissioning applications is trivial with little excess cleanup
- VMs can live on one of my many ZFS pools each with different latency/speed/capacity characteristics
- Networking is simplified
- Justifies the excess ECC RAM I was gifted
- The list goes on
Some of these benefits aren't exclusive to the architecture described above, but depending on the alternative they may not be feasible.
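To make the rollback point concrete, the pre-upgrade dance looks roughly like this. The VM ID and dataset name are illustrative placeholders, not my actual layout, and Proxmox's own `qm snapshot` works just as well if you prefer to stay at that layer:

```bash
# Shut the VM down cleanly before snapshotting (VM ID 103 is illustrative)
qm shutdown 103 && qm wait 103

# Snapshot the VM's disk; the dataset name is a placeholder
SNAP="pre-upgrade-$(date +%Y%m%d)"
zfs snapshot rpool/data/vm-103-disk-0@"$SNAP"

qm start 103

# If the upgrade goes sideways: stop, roll back, start again
qm stop 103
zfs rollback rpool/data/vm-103-disk-0@"$SNAP"
qm start 103
```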
Application Architecture
Almost all VM guest operating systems are minimal Fedora Server installs. I've automated the twice-yearly major version upgrades, and they've become rather painless. Bugs that make me regret the choice are rare; I can't think of a single example.
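Under the hood this is just the standard dnf system-upgrade flow with a bit of scripting around it; a minimal sketch, with the release number as an illustrative placeholder:

```bash
# Bring the current release fully up to date first
sudo dnf upgrade --refresh -y

# Download packages for the next release (release number is illustrative)
sudo dnf install -y dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=41 -y

# Reboot into the offline upgrade
sudo dnf system-upgrade reboot
```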
I generally have 2 methods of installation:
- Podman running docker-compose started as a systemd service
- rpm “natively” installed on Fedora
I've recently begun to prefer the Podman method, using containers built directly by upstream projects so that security updates arrive quickly. The RPM method usually means a package from the Fedora repositories, which is updated too slowly for my taste, hence the shift toward containers. I'm aware of the drawbacks of the Docker approach; I plan to build my own images in time.
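As a rough sketch of the Podman method, a simple oneshot systemd unit can own the compose stack. The service name, paths, and the use of podman-compose here are illustrative rather than a drop-in recipe:

```bash
# Illustrative unit that starts a compose stack at boot; adjust names and paths
sudo tee /etc/systemd/system/myapp.service >/dev/null <<'EOF'
[Unit]
Description=myapp via podman-compose
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/podman-compose up -d
ExecStop=/usr/bin/podman-compose down

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now myapp.service
```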
Conclusion
Eventually I'll change this up to reduce the painful amount of manual setup each new service requires. Some ideas include Fedora CoreOS with Ignition and going all in on containers, or, as a smaller step, Terraform with cloud-init.
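For the Terraform/cloud-init route, the groundwork is a cloud-init-enabled template on Proxmox that Terraform can clone. A rough sketch, where the VM ID, storage name, and image filename are all illustrative:

```bash
# Build a cloud-init-enabled template from a Fedora Cloud image
# (VM ID 9000, storage "local-zfs", and the qcow2 filename are illustrative)
qm create 9000 --name fedora-cloud-template --memory 2048 --net0 virtio,bridge=vmbr0
qm importdisk 9000 Fedora-Cloud-Base-Generic.qcow2 local-zfs
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-zfs:vm-9000-disk-0
qm set 9000 --ide2 local-zfs:cloudinit
qm set 9000 --boot order=scsi0
qm set 9000 --serial0 socket --vga serial0
qm template 9000
```

Terraform, via one of the community Proxmox providers, would then clone that template and inject SSH keys and network configuration through cloud-init instead of me clicking through the installer for every new service.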