Proxmox — A quick how-to (why-to)/getting started/setup guide for your homelab
What is it?
Proxmox is an open-source platform for virtualization management with a built-in web interface. Simply put, it lets you create & manage data storage, Virtual Machines & LXC containers with networking functionality.
Why use it?
Proxmox is primarily geared towards enterprise environments, but it makes a great fit for a homelab: it’s free, the software is mature & robust, the barrier to entry isn’t very high & it’s popular enough that there is a lot of community support & plenty of helpful guides around it.
What to use it for?
If you’re looking for inspiration on what to do with your Proxmox build, some use-cases include network storage — be it for backups, or extra storage for games so you don’t have to uninstall one to install a new one (especially the large ones; looking at you, Warzone) — a home media server (Plex), OpenVPN, testing out different OSes & web hosting, just to name a few.
How to use it?
Installation
Getting up & running with Proxmox is fairly straightforward. I won’t duplicate the already excellent instructions over at: https://pve.proxmox.com/wiki/Installation. The quick takeaway points (in my opinion, for my personal usage) are:
- Use a USB flash drive as the install media (a quick sketch for writing the ISO from a Linux machine follows this list).
- Install to an SSD (M.2 preferably, so you save your SATA ports for data storage drives, & preferably on the larger side to hold your VMs & LXCs).
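If you’re preparing that USB flash drive from a Linux machine, a minimal sketch looks like the following; the ISO filename & target device are placeholders for your own values, & writing to the wrong device will wipe it (tools like Etcher or Rufus work too if you’d rather avoid the command line):
# Write the Proxmox installer ISO to the USB stick (replace the ISO name & target device with yours)
dd if=proxmox-ve_8.1-1.iso of=/dev/sd<drive_letter> bs=4M status=progress conv=fsync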
Now that you’ve got Proxmox installed, it’s time to set up a storage pool for your data. You can use the local storage to hold your VMs & containers. For data storage, in my opinion, a ZFS pool makes the most sense. Why ZFS? See: https://itsfoss.com/what-is-zfs/. In short, it has the right set of features one would need to store data reliably on an array of disks & retrieve it with good performance (obligatory: RAID is not a backup. 2 is 1 & 1 is none).
Creating a ZFS Pool
- Ensure your disks show up in the GUI under ‘Disks’. If your drives have been shucked, they may require the 3.3V pin mod to show up. In short, these (shucked) disks are not intended to be used outside of their enclosure: the third pin on the drive’s SATA power connector doubles as a power-disable signal, & many desktop PSUs put a constant 3.3V on that pin, so the drive never powers on when connected as a regular internal drive. If they worked in their enclosure, but not when used as an internal drive, this is likely the problem. Google is your friend when it comes to fixing this with some Polyimide (Kapton) tape.
- Generally, disks you plug in will be formatted & have partition(s) on them by default. The GUI will indicate this with a ‘partitions’ value under the ‘Usage’ column for the disk(s). You will want to remove these partitions to get them ready for ZFS. Launch a shell for your Proxmox node & for each disk that needs to go into your data pool of drives, run the following:
# Bring up the fdisk utility for the disk you want to wipe:
fdisk /dev/sd<drive_letter> # e.g. fdisk /dev/sda for ‘sda’
# Then, at the interactive fdisk prompt:
p # Print the current partition(s) if you need to see them
d # Delete a partition (repeat if necessary)
g # Create a new empty GPT partition table
w # Write/commit the changes & exit
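If you’d rather not step through fdisk interactively for every disk, one alternative (not part of the flow above, just a shortcut) is to clear all partition & filesystem signatures in one go:
# Wipe all partition-table & filesystem signatures from the disk (destructive; double-check the device)
wipefs -a /dev/sd<drive_letter>
Either way, the end goal is the same: disks with no partitions left on them, ready for ZFS.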
- Once you’ve repeated this process for all your disks, they should all show ‘No’ under ‘Usage’. This is good.
- Proceed to use the GUI to create a new ZFS pool. There are some good resources out there that can explain RAIDZ & its levels in greater depth than I can. Read up on RAIDZ if you are unfamiliar with the core concept. My recommendation (& what I went with) is RAID-Z2 for 5 disks.
- Note that RAID-0 (striped) is not possible through the GUI & needs to be done via the command line, as shown below. Only do this if you want to see some raw performance numbers; it is not recommended, as when one disk dies, your entire pool dies. (A RAID-Z2 command-line equivalent follows right after, for reference.)
zpool create <pool_name> sdX sdY sdZ # striped (RAID-0) pool across the listed disks
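For reference, the RAID-Z2 equivalent from the command line (roughly what the GUI does for you) would look something like this; the pool name & device names are placeholders, & using /dev/disk/by-id paths is generally safer than sdX names since the latter can change between boots:
# Hypothetical example: a RAID-Z2 pool named ‘tank’ across five disks
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde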
Then, add the pool as a storage via Datacenter > Storage > Add > ZFS > Select your pool.
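That step can also be done from the shell with pvesm (Proxmox’s storage manager); the storage ID ‘tank’ below is just an example name:
# Add an existing ZFS pool as a storage entry for the Datacenter
pvesm add zfspool tank --pool tank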
Running zpool status will show information on the ZFS pool, including the type of RAID used.
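A few other standard ZFS commands that come in handy at this point (plain ZFS tooling, nothing Proxmox-specific):
zpool list # show pools, their sizes & how full they are
zfs list # show datasets & their space usage
zpool status -v <pool_name> # detailed health & layout info for a specific pool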
Destroying a ZFS Pool
Now that you’ve played around with a lightning-speed RAID-0 array, it’s time to destroy it so you can set up something sensible, like a RAID-Z2. In a shell launched from the Proxmox GUI (or via your preferred method of console-ing into Proxmox), unmount the pool & then destroy it.
umount -f /<pool_name> # Note: It is umount, not unmount (ZFS mounts the pool at /<pool_name> by default, not by its /dev/sdX device)
zpool destroy <pool_name>
NOTE: If you get an error on the destroy command, ensure that nothing is running that is using the pool & try again.
Lastly, remove the pool from the Datacenter using the GUI (Datacenter > Storage > <pool_name> > Remove).
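If you prefer the shell for this step as well, the equivalent is removing the storage entry with pvesm; the storage ID is whatever name you gave the entry when you added the pool:
# Remove the storage entry from the Datacenter configuration (this does not touch the pool or its data)
pvesm remove <storage_id>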
Once that’s done, it’s back to getting the disks ready for the next ZFS pool:
fdisk /dev/sda # Bring up fdisk for the disk ‘sda’ (repeat for each disk)
d # Remove partition
g # Create a new empty GPT partition table
w # Write/commit the change
Now you’ve got your storage sorted out, both for VMs/containers as well as for your data. As mentioned earlier, I opted for a combined boot drive that houses my Proxmox install as well as all the VMs & containers that I choose to create, though you can also create multiple ZFS pools to store your VMs/LXCs. For a beginner setup, that 1 TB M.2 should be plenty for my purposes, & the 54.56 TiB RAID-Z2 pool will last me a long time as the storage for all the data used by these VMs & containers.
The last thing on this quick how/why piece on Proxmox — you’ll want to upload some ISOs (for VMs) & download some templates (for containers). In the GUI, go to your local storage to upload your ISOs & download your templates. One I’d recommend as a starting point would be the latest Ubuntu template.
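Container templates can also be fetched from the shell with pveam (the Proxmox VE Appliance Manager); the template name below is only an illustration, so substitute whichever one the ‘available’ listing shows at the time:
pveam update # refresh the list of available appliance templates
pveam available --section system # list downloadable system templates (Ubuntu, Debian, etc.)
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst # example name; pick one from the list above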
That’s it for now, have fun with your Proxmox build! This is a very brief write-up that does not touch upon the many excellent features that Proxmox offers (clusters, high availability — to name a few); it’s up to the reader to delve deeper based on interest. I’m sure there are many (potentially better?) approaches to getting started with Proxmox, but this one was mine, for a beginner homelab.
My setup at the time of writing
I wanted a build that leaned more towards enterprise level hardware as opposed to consumer level hardware, though Proxmox has great hardware support overall. My components going into this were:
- Motherboard: I picked the ASRock Rack X470D4U as it was pretty much the only server-oriented motherboard I could find that would regularly go on sale, offered IPMI functionality & supported the AM4 platform.
- CPU: My original plan was to repurpose the AMD Ryzen 9 3900X from my desktop, but I ended up getting a good deal on a used 3950X. Great performance, both per-core & in core count, with a decent TDP.
- RAM: Crucial 2x32GB DDR4 2666 MHz CL19. A 2x32GB kit will provide enough memory for now while leaving 2 more slots open for future expansion. ECC memory is generally recommended for a storage server, & unbuffered ECC UDIMMs are supported on the AM4 platform when the motherboard supports them (the X470D4U does), but I went with standard non-ECC memory here.
- PSU: Super Flower Leadex Titanium 750W (80+ Titanium). Enough headroom that it will pretty much always run fanless, plus Titanium efficiency. Cons: pricey, so look out for sales or get any reputable-brand PSU with at least an 80+ Gold rating.
- Boot drive: Crucial P2 1TB 3D NAND NVMe PCIe M.2 SSD — this is decent SSD performance at a reasonable price.
- Data drives: I decided to go with consumer-level external desktop storage drives, which usually get heavily discounted during certain holidays. Picked up five 12 TB Western Digital drives when the price-per-TB was low enough (~$14/TB IIRC). These needed to be shucked (they come out as white-label drives) & the third power pin had to be taped over to make them usable as internal drives. If you can swing it, NAS-focused HDDs (WD Red/Red Pro & Seagate IronWolf) are the ones to go for.
References & more reading: