Our Homelab Infrastructure: A Complete Overview
An in-depth look at our 5-node Proxmox cluster, network architecture, and automation.
Welcome to the heart of Selfie Stack - our homelab infrastructure. This post provides a comprehensive overview of our setup, from hardware to software, networking to automation.
Hardware Overview
The Cluster
Our lab runs on a 5-node Proxmox VE cluster:
| Node | Hardware | RAM | Storage | Role |
|------|----------|-----|---------|------|
| pve01 | Dell OptiPlex 7080 | 64GB | 1TB NVMe + 4TB HDD | Compute + Storage |
| pve02 | Dell OptiPlex 7080 | 64GB | 1TB NVMe + 4TB HDD | Compute + Storage |
| pve03 | Dell OptiPlex 7080 | 32GB | 512GB NVMe | Compute |
| pve04 | Intel NUC 11 | 32GB | 512GB NVMe | Compute |
| pve05 | Intel NUC 11 | 32GB | 512GB NVMe | Compute + PBS |
Networking Equipment
- Router: OPNsense on custom build (4-port Intel NIC)
- Switch: UniFi Switch 24 PoE
- APs: 2x UniFi 6 Lite
- UPS: CyberPower 1500VA
Network Architecture
Our network is segmented into VLANs for security and organization:
VLAN 1 - Management (10.0.1.0/24)
VLAN 10 - Servers (10.0.10.0/24)
VLAN 20 - Containers (10.0.20.0/24)
VLAN 30 - IoT (10.0.30.0/24)
VLAN 40 - Guest (10.0.40.0/24)
VLAN 100 - WAN (DHCP from ISP)
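A quick sanity check worth scripting: the internal VLANs should carve out non-overlapping /24s. Here's a minimal sketch using Python's standard `ipaddress` module, with the subnets copied from the plan above (the `overlapping_pairs` helper is illustrative, not part of our tooling):

```python
# Sketch: verify the VLAN plan uses non-overlapping subnets.
import ipaddress

VLANS = {
    1: "10.0.1.0/24",    # Management
    10: "10.0.10.0/24",  # Servers
    20: "10.0.20.0/24",  # Containers
    30: "10.0.30.0/24",  # IoT
    40: "10.0.40.0/24",  # Guest
}

def overlapping_pairs(vlans):
    """Return every pair of VLAN IDs whose subnets overlap."""
    nets = {vid: ipaddress.ip_network(cidr) for vid, cidr in vlans.items()}
    ids = sorted(nets)
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
            if nets[a].overlaps(nets[b])]

print(overlapping_pairs(VLANS))  # [] -> no overlaps
```

Running this after every subnet change catches fat-fingered CIDRs before they reach the firewall.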
DNS and DHCP
We run:
- AdGuard Home - DNS filtering and local DNS
- OPNsense DHCP - IP address management
Software Stack
Virtualization
Proxmox VE 8.x manages all our VMs and containers:
- 42 LXC Containers - Lightweight services
- 8 Virtual Machines - Heavy workloads
- Ceph Storage - Distributed storage cluster
Core Services
| Service | Type | Description |
|---------|------|-------------|
| Traefik | LXC | Reverse proxy |
| Authentik | LXC | SSO/Identity provider |
| Postgres | LXC | Database server |
| Redis | LXC | Cache/Queue |
| Gitea | LXC | Self-hosted Git |
| Drone CI | LXC | CI/CD pipeline |
Media Stack
| Service | Type | Description |
|---------|------|-------------|
| Plex | VM | Media server |
| Sonarr | LXC | TV management |
| Radarr | LXC | Movie management |
| Prowlarr | LXC | Indexer manager |
| qBittorrent | LXC | Download client |
Monitoring
| Service | Type | Description |
|---------|------|-------------|
| Prometheus | LXC | Metrics collection |
| Grafana | LXC | Visualization |
| Loki | LXC | Log aggregation |
| Uptime Kuma | LXC | Uptime monitoring |
Automation
Infrastructure as Code
We manage infrastructure with:
```bash
# Ansible for configuration
ansible-playbook -i inventory site.yml

# Terraform for provisioning
terraform apply -var-file="prod.tfvars"
```
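For context, a `site.yml` run like the one above typically maps roles onto host groups. Here's a hypothetical excerpt showing the shape of such a playbook; the group and role names are illustrative placeholders, not our actual inventory:

```yaml
# Hypothetical site.yml excerpt (role/group names are examples only)
- hosts: pve_nodes
  become: true
  roles:
    - common          # baseline packages, users, SSH hardening
    - monitoring_agent  # node exporter for Prometheus
```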
CI/CD Pipeline
Our deployment flow:
1. Push code to Gitea
2. Drone CI triggers build
3. Tests run in containers
4. Docker images built and pushed
5. Watchtower updates running containers
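The Drone side of that flow lives in a `.drone.yml` in each repo. Here's a minimal sketch of what such a pipeline can look like; the images, registry hostname, and step names are placeholder assumptions, not our actual config:

```yaml
# Hypothetical .drone.yml sketch (images and registry are placeholders)
kind: pipeline
type: docker
name: build-and-publish

steps:
  - name: test
    image: golang:1.22
    commands:
      - go test ./...

  - name: publish
    image: plugins/docker
    settings:
      repo: registry.example.lan/myservice
      tags: latest
```

Once the image lands in the registry, Watchtower on the target host picks up the new tag and restarts the container.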
Backup Automation
Daily backup schedule:
```
# Proxmox Backup Schedule
02:00 - VM snapshots to PBS
03:00 - Container backups to PBS
04:00 - PBS sync to offsite storage
05:00 - ZFS scrub (weekly)
```
Power Consumption
Efficiency matters. Current measurements:
| State | Power Draw |
|-------|------------|
| Idle | ~180W |
| Normal Load | ~250W |
| Peak | ~400W |
Monthly cost: ~$25-30 USD
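That estimate checks out with simple arithmetic. A quick sketch, assuming an electricity rate of $0.15/kWh (the rate is our assumption, not stated in the post):

```python
# Back-of-the-envelope check of the monthly power cost quoted above.
AVG_DRAW_W = 250         # "Normal Load" from the table above
RATE_USD_PER_KWH = 0.15  # assumed rate; adjust for your utility

kwh_per_month = AVG_DRAW_W / 1000 * 24 * 30  # watts -> kWh over 30 days
cost = kwh_per_month * RATE_USD_PER_KWH
print(f"{kwh_per_month:.0f} kWh/month ~= ${cost:.2f}")  # 180 kWh/month ~= $27.00
```

At $0.15/kWh the normal-load draw lands at about $27/month, right inside the $25-30 range.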
Lessons Learned
After years of homelab evolution:
- Start small - Don't overbuy; grow as needed
- Document everything - Your future self will thank you
- Automate early - Manual processes don't scale
- Plan for failure - Assume hardware will fail
- Monitor proactively - Know problems before users do
Future Plans
What's next for the lab:
- [ ] 10GbE backbone upgrade
- [ ] GPU passthrough for transcoding
- [ ] Kubernetes migration (K3s)
- [ ] Solar power integration
Stay tuned for detailed posts on each component of our infrastructure!