Homelab Upgrade

Thu, Nov 19, 2020 5-minute read

Self-hosting my own services and managing my own server(s) have long been a joy of mine; that is, right up until they are not. One small change can topple the carefully stacked building blocks of my infrastructure.

Take, for example, running on a dynamic IP address. All it takes is for the power to go out while I am away and the UPS to die, and when everything comes back online, the script that updates my domain records decides not to work anymore. Had I been a more astute system administrator, I might have bothered to check my domain manually every so often, or even set up one of the many health-check services out there to do it for me! But that would have taken some forethought 😅
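For the curious, the job that script has is simple enough that it is easy to forget about. Below is a minimal sketch in the same spirit, not my actual script: it looks up the current public IP, pushes it to the DNS provider only when it has changed, and pings a health-check URL so that a silent failure would actually get noticed. The update endpoint, health-check URL, and record name are all placeholders for whatever your provider and monitoring service expect.

```python
#!/usr/bin/env python3
"""Minimal dynamic-DNS updater sketch; endpoints below are placeholders."""
import json
import urllib.request

UPDATE_URL = "https://dns.example.com/api/update"  # hypothetical provider endpoint
CHECK_URL = "https://hc.example.com/ping/my-uuid"  # hypothetical health-check ping
STATE_FILE = "/var/tmp/last_ip.txt"                # remembers the last published IP


def public_ip() -> str:
    # ipify returns the caller's public IP address as plain text
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()


def main() -> None:
    ip = public_ip()
    try:
        last = open(STATE_FILE).read().strip()
    except FileNotFoundError:
        last = ""

    if ip != last:
        # Only bother the DNS provider when the address actually changes
        req = urllib.request.Request(
            UPDATE_URL,
            data=json.dumps({"record": "home.example.com", "ip": ip}).encode(),
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        urllib.request.urlopen(req, timeout=10)
        with open(STATE_FILE, "w") as f:
            f.write(ip)

    # Ping the health-check service; if this stops arriving, something is wrong
    urllib.request.urlopen(CHECK_URL, timeout=10)


if __name__ == "__main__":
    main()
```

Run something like that from cron every few minutes, and the post-outage surprise above would have been an email instead.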

Whining aside, running my own home servers feels so liberating! Services such as AWS, Azure, or DigitalOcean definitely have their place, and I make use of a few of them. But for small services that really only serve me, it just does not make sense to shell out the money. On to the homelab upgrade! I have taken the time to phase out my aging and anemic Lenovo M83 server box for something a little more purpose-built, though still running consumer hardware. Before digging into the new build, let’s reflect on what I was running:

  • VMWare ESXi 6.5
  • Intel i5-4570
  • 16GB DDR3 RAM
  • NVIDIA GeForce GT 720
  • LSI 9207-8e (Used when migrating data from Dell EMC disk shelves)
  • 4TB Western Digital Red

Not a bad little machine, and it chugged through the workloads I threw at it. I had not yet adopted a containerized mindset though, so nearly everything ran in its own virtual machine. Not the most efficient. I have since slowly begun moving most applications to Docker containers, and the difference in hardware usage really is night and day! I wanted more though. At one point, I had been running an IBM x3650 M3 with two Dell EMC disk shelves almost entirely populated with 3TB Hitachi SAS drives.

IBM x3650 M3

That was the opposite end of the spectrum; overkill. Our local power agency absolutely loved me though, as shown by my monthly bill. I was fortunate to pick up the IBM and EMC machines for next to nothing at a local liquidation, but the physical, time, and compatibility costs of enterprise equipment are just too much for a hobby. I had moved away from all that toward something more modest, and I wanted to keep it that way. So I started investigating what hardware I could use for a multi-purpose machine.

Server Build Layout

So enter the above (and excuse the mess): a custom-built abomination to handle all of my local needs, or so the hope goes. Jumping right into the details, here are the specs of the machine:

  • Windows 10 Pro w/ Hyper-V
  • Intel i3-8300 (To be replaced with a six-core 9th generation i5 soon-ishβ„’ πŸ˜†)
  • 64GB DDR4 RAM
  • NVIDIA GeForce GTX 1070 SC2
  • LSI 9200-8i
  • 6x 4TB Western Digital Red (As one pool using SnapRAID and mergerfs)
  • 120GB HP S700 M.2 system drive
  • 2x 500GB Western Digital Blue SSD (Mirrored using Microsoft Storage Spaces)
  • 1TB Western Digital Blue HDD

The two biggest changes for me were quadrupling the RAM to 64GB and much denser storage within a single case. I have to give a big shoutout to the team over at Fractal Design and their Node 804 case. It really made this build possible.

GeForce GTX 1070

Working our way through the build, starting from the operating system out, I made the decision to go with Windows 10 Pro and Hyper-V. This was primarily driven by the use of the “consumer-grade” GeForce GTX 1070 and its incompatibility with PCI passthrough under ESXi. Using Windows as the host lets me use the graphics processor for whatever workload I may need it for, while still running a few virtual machines through Hyper-V; the choice of Hyper-V goes hand-in-hand with that reasoning.

The Windows host boots from the M.2 drive on the motherboard and uses the single 1TB drive for any major data (applications, and a few games for streaming). Hyper-V was given the two 500GB SSDs as a mirrored pair to house the virtual machines. This leaves the six 4TB drives unaccounted for.

Hard drive array

Seguing into Hyper-V a bit, each of the six drives gets passed through to an OpenMediaVault (OMV) virtual machine. Within this VM I configured the drives as a single pool using SnapRAID and mergerfs, with one drive acting as parity. OMV made this a very easy process; the entire feat can be done through the provided web interface. All in all, I wound up with 20TB of usable space. The data is generally accessed through SMB/CIFS, with a portion dedicated to MinIO S3-compatible storage.
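OMV generates all of the configuration for you, but for anyone wondering what that looks like under the hood, the result is roughly the snippet below. The drive labels and mount points here are illustrative rather than copied from my machine: SnapRAID gets one parity file and a content list per data drive, and mergerfs unions the data drives into a single mount point.

```
# /etc/snapraid.conf (illustrative paths)
parity /srv/dev-disk-by-label-parity/snapraid.parity
content /srv/dev-disk-by-label-data1/snapraid.content
content /srv/dev-disk-by-label-data2/snapraid.content
data d1 /srv/dev-disk-by-label-data1
data d2 /srv/dev-disk-by-label-data2
data d3 /srv/dev-disk-by-label-data3
data d4 /srv/dev-disk-by-label-data4
data d5 /srv/dev-disk-by-label-data5

# mergerfs pools the data drives into one mount, e.g. via /etc/fstab:
/srv/dev-disk-by-label-data* /srv/pool fuse.mergerfs defaults,allow_other,use_ino,category.create=mfs 0 0
```

Five 4TB data drives behind a single 4TB parity drive is where the 20TB of usable space comes from, with a periodic `snapraid sync` keeping the parity up to date.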

In the previously mentioned push toward a more containerized workload, I have cut out the vast majority of virtual machines. With OpenMediaVault included, there are only four VMs running. The remaining instances all run Ubuntu 20.04 LTS: one for Docker, of course, plus one each for OpenFaaS and Dokku, my own personal AWS Lambda and Heroku.
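To give a flavour of the OpenFaaS side, a function built from the stock python3 template is little more than a single handler, something like the sketch below, which faas-cli then builds and deploys. The greeting logic is just a stand-in, not a function I actually run.

```python
# handler.py — the whole function when using OpenFaaS's stock python3 template
def handle(req):
    """Echo a greeting back to the caller.

    Args:
        req (str): the raw request body passed in by the OpenFaaS watchdog
    """
    name = req.strip() or "world"
    return "Hello, {}!".format(name)
```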

With the Windows 10 host and Hyper-V running these four instances, I am fortunate enough to see only 10-15% CPU usage on average. This is on an entry-level i3-8300 too! While I still plan on upgrading to a “new” 9th generation i5 when means allow, the current CPU usage shows it’s not too pressing a matter. The only time I have suffered much has been when streaming from Steam or Rainway to the living room, and it’s a bit of a stretch to call that suffering.

Rambling on is one of my specialties, so I will draw this post to a close, having covered the basics of the build. In future blog posts, I will try to cover some of the services that I have running, and how some of them compare to their commercial counterparts. Spoiler alert: it’s a mixed bag.