
Building A Home Lab. What is the point in 2024?

In this post, I attempt to explain, at a very high level, the motives behind having a home lab in a world where there are plenty of other platforms that let you pursue your own learning journey at a fraction of the cost.
Photo Credits: Alameda Free Library

I am an avid home labber and have been for years, but it's 2024, and with the proliferation of self-paced learning platforms and the wide availability of both commercial and open source tools, beginner technology practitioners already have more than enough at hand to excel in their learning journey. So what is the point of a home lab in 2024? Why should you spend your hard-earned funds turning your home office into a micro data center?

As I mentioned, it's 2024, and there is no shortage of quality content available, from the fundamental to the advanced level, on just about any topic or branch of practice in technology. There are self-paced learning platforms such as Codecademy, Udemy, Coursera, Khan Academy, edX, and LinkedIn, many of which provide a virtual lab depending on the course, as well as fully fledged virtualization and containerization platforms you can use for practice: VirtualBox, QEMU/KVM, Parallels, VMware Workstation, Vagrant, Docker, and Podman, to name a few. If you are into networking, there are plenty of free and paid network virtualization platforms such as GNS3, EVE-NG, and Cisco CML that you can use to learn without having to scour eBay for old networking gear just to hone your networking skills.

If you want to hone your skills with cloud solutions, most major vendors have made their platforms readily available: Microsoft Azure, Google Cloud Platform, Amazon Web Services, and IBM Cloud, to name a few, along with smaller but well-known cloud and virtual private server providers such as Linode, DigitalOcean, Liquid Web, DreamHost, HostGator, and Namecheap.

So if all of those options already exist, then, as we say in the Southern United States: why in God's green earth would I want this in my home office, attic, or garage?

In my case the argument was easy to validate internally at the time: to perform my role effectively as a private cloud infrastructure consultant working with OpenStack and OpenShift/Kubernetes infrastructure, it was imperative that I had exposure to the kind of commodity hardware used in the data center, even at a smaller scale. I needed a home lab environment where I could experiment, improve, and rebuild as needed to support smaller-scale reference architectures, testing procedures from design and building through troubleshooting, scaling, and upgrading that cloud infrastructure on my own, and to support other pet projects of personal interest.

These may not be the same reasons you are contemplating a home lab of your own, so let me offer another, much more fundamental reason why a home lab is beneficial despite its potential to drain your wallet further down the line.

Gaining an in-depth understanding of common compute platform architectures such as x86_64, ARM, and PPC64, and of how they relate to the OS/kernel and the underlying hardware, is another reason why I prefer to have a home lab. While operating systems, networking, and storage fundamentals can be learned on virtualized platforms, nothing beats having direct access to the hardware. If anything, it will also help you familiarize yourself with the hardware architecture and the way the hardware was built, should you ever have the will to dive deeper into hardware design. Having a fundamental grasp of modern compute architectures and designs will make a significant difference in the long term.

Here are two other examples of widely adopted technologies where you can dive right into the software stack they support, but where an in-depth understanding of each, which having the hardware facilitates, will make you a better-rounded engineer in the long term:

Virtualization Extensions

Hardware-assisted virtualization was introduced in the Intel and AMD families of processors around 2005-2006, and it needs to be enabled in the BIOS (firmware) to be able to run any virtualized OS instance on your host OS. This means that, no matter the hardware vendor, if the BIOS does not support virtualization extensions, you simply cannot run virtual OS instances within the host OS. Understanding why and how these virtualization extensions work will not only help you design and run virtualized workloads on supported OS platforms, it will also give you expert-level mastery of the subject and improve your troubleshooting skills further down the line. In the context of technology it is about as ancient as it gets (~20 years old), yet it is one of the most commonly used BIOS features today and will continue to be used for the foreseeable future, no matter the hardware vendor. Virtualization concepts rely on the same standards made possible by the hardware and firmware.
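
If you want to check this for yourself on a Linux box, here is a minimal sketch, assuming a host that exposes /proc/cpuinfo, that looks for the vmx (Intel VT-x) or svm (AMD-V) CPU flags. If neither shows up, virtualization extensions are either unsupported or disabled in the BIOS/UEFI:

```python
#!/usr/bin/env python3
"""Minimal sketch: check whether the CPU exposes hardware virtualization
extensions. Intel VT-x shows up as the 'vmx' flag, AMD-V as 'svm'.
Assumes a Linux host with /proc/cpuinfo available."""


def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Return the subset of {'vmx', 'svm'} present in the CPU flags."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vmx", "svm"} & flags
    return set()


if __name__ == "__main__":
    found = virtualization_flags()
    if found:
        print("Virtualization extensions present:", ", ".join(sorted(found)))
    else:
        print("No vmx/svm flag found; check that VT-x/AMD-V is enabled in the BIOS/UEFI.")
```

Tools like virt-host-validate or lscpu will tell you the same thing, but reading /proc/cpuinfo yourself is a nice reminder that the capability ultimately comes from the silicon and the firmware, not the hypervisor.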

Linux Kernel Namespaces and Control Groups

Linux control groups were introduced as a concept in 2004, which is also 20 years ago and ancient by technology standards.

Linux kernel namespaces have been around since as early as kernel release 2.4.19, when the mount namespace was committed to the kernel; user namespaces were committed in kernel version 2.6.23 (2007), and network namespaces were introduced in kernel version 2.6.24 (2008). This LWN.net article goes into the historical details of the implementation of kernel namespaces and their introduction to the mainline kernel.
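
As a quick way to see namespaces in action, here is a minimal sketch, assuming a reasonably modern Linux kernel, that lists the namespaces the current process belongs to by reading the symlinks under /proc/self/ns:

```python
#!/usr/bin/env python3
"""Minimal sketch: list the kernel namespaces the current process belongs to
by reading the symlinks under /proc/self/ns (mnt, uts, ipc, pid, net, user, ...).
Assumes a reasonably modern Linux kernel."""
import os


def list_namespaces(ns_dir="/proc/self/ns"):
    """Map each namespace name to its identity, e.g. 'net' -> 'net:[4026531840]'."""
    namespaces = {}
    for name in sorted(os.listdir(ns_dir)):
        # Each entry is a symlink; the number in brackets is the namespace's
        # inode, which is how the kernel identifies it.
        namespaces[name] = os.readlink(os.path.join(ns_dir, name))
    return namespaces


if __name__ == "__main__":
    for name, target in list_namespaces().items():
        print(f"{name:10s} {target}")
```

Run it once on the host and once inside a container and compare the inode numbers: the differences are the isolation.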

Why is this important? Because every single container runtime engine running on top of any Linux kernel available today depends on these "ancient" kernel features, control groups and kernel namespaces, for resource isolation. They are the most important foundation behind the concept of application containerization, which is now the de facto framework for application modernization, or dare I use the term "digital transformation," in the enterprise, and it sits behind every public- and private-facing application no matter the use case. This paradigm shift in the way we deploy applications is supported by kernel features that have been around for 20 years. Don't let anyone fool you, especially the sales guy.
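
On the control group side, here is a similar minimal sketch, assuming a host with cgroup v2 mounted at /sys/fs/cgroup (the common default on recent distributions), that shows which cgroup the current process lives in and which resource controllers are available:

```python
#!/usr/bin/env python3
"""Minimal sketch: inspect control group membership and available controllers.
Assumes a Linux host; the controllers file path assumes cgroup v2 mounted
at /sys/fs/cgroup."""
from pathlib import Path


def cgroup_membership(proc_cgroup="/proc/self/cgroup"):
    """On cgroup v2 this file contains a single line like '0::/user.slice/...'."""
    return Path(proc_cgroup).read_text().strip()


def available_controllers(controllers_file="/sys/fs/cgroup/cgroup.controllers"):
    """Return the controllers (cpu, memory, io, pids, ...) exposed by cgroup v2."""
    path = Path(controllers_file)
    return path.read_text().split() if path.exists() else []


if __name__ == "__main__":
    print("cgroup membership:", cgroup_membership())
    print("controllers:", ", ".join(available_controllers()) or "none found (cgroup v1 host?)")
```

Every CPU or memory limit you set on a container ultimately lands in files under that same /sys/fs/cgroup tree; the container engine is just a convenient front end to it.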

Both of the concepts I used as examples can be learned without a home lab or any hardware of your own. But if you wanted to dive deeper into how they work between the Linux kernel and the hardware, wouldn't having your own hardware to dig into these concepts make your learning journey more exciting? Wouldn't it satisfy your curiosity about how these low-level systems are designed and the impact they have made on the way we as technical professionals work?

It absolutely has, on many counts, in my years of experience, and personally that is where I see the value of having a home lab.

I hope you found this post informative. Enjoy your day.