Virtualization Lab with ESXi hypervisor 6.5.x

This is a somewhat dated post, but I’m going to discuss virtualization lab configuration for development and experimentation using the ESXi hypervisor 6.5.x. It’s particularly relevant now, as I’ve set up a Kubernetes cluster at home to develop orchestration scripts and cluster architecture designs for scalable applications. This is a much cheaper solution than deploying a cluster with a cloud-based provider: the only real resource cost is power, which is significantly cheaper than a cloud provider’s per-hour provisioning costs.

The bare-metal build is purely commodity, self-assembled hardware: a Shuttle SH170 cube form-factor chassis with a single-socket, quad-core Kaby Lake i7-7700 at 3.6GHz, 32GB of non-ECC DDR4 memory, and a 1TB Crucial M.2 SATA SSD. Since this is a pure development lab with long periods of idle time, spec-ing out top-tier components didn’t make a whole lot of sense. I still wanted an SSD for primary storage for fast I/O (510-530MB/s read/write), but the rest of the spec is upper-middle range. There is room to double the memory capacity to 64GB, should the need arise to provision more virtual machines.

ESXi host configuration, commodity desktop DIY hardware

I had initially built this on-premises vSphere ESXi host back in 2017 to build a cluster for NLP and ML RNN text generation, using a Debian 8 virtual machine connected to an NVIDIA GeForce GTX 1060 (6GB) via direct PCIe passthrough, which gives the virtual machine direct access to the GPU for CUDA workloads. It should be noted that for this passthrough configuration to work correctly, I had to trick the virtual machine into thinking it wasn’t virtualized (with the side effect that it wouldn’t let me install VMware Tools). The GTX 1060 driver refuses to load in a virtualized environment because NVIDIA wants you to use their Tesla GPUs in that scenario, so the only way this works is if the guest doesn’t realize it is running virtualized. VMware Tools are nice to have, but they aren’t strictly necessary, and giving the machine access to the GPU is more important in this situation.
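For reference, the common way to hide the hypervisor from the guest on ESXi is a one-line addition to the VM’s .vmx configuration. Treat the snippet below as a sketch of that general approach rather than a record of my exact setup:

    # Added to the guest's .vmx file while the VM is powered off.
    # Masks the hypervisor CPUID bit so the GeForce driver will load inside the guest;
    # side effect: the guest no longer reports as virtualized, so VMware Tools won't install.
    hypervisor.cpuid.v0 = "FALSE"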

Machine Learning VM for char RNN and data science development, connected to GeForce GTX 1060 with passthrough PCIE

Virtual machines running on the ESXi host

vSphere ESXi made the most sense as a bare-metal hypervisor for the development lab. I did look at XenServer as one possible option, but adequate PCIe passthrough support did not exist there at the time. The free version of ESXi was acceptable for a non-production development lab host.

Currently the hypervisor version is 6.5.0 (Build 4887370), but I’ll be upgrading it soon to at least 6.7.0 (Build 13006603). I just have to confirm that there are no custom VIBs that would be blown away by the upgrade; probably not, as this host is not like that time I installed vSphere on a 2012 MacBook Pro 15” via Thunderbolt Ethernet, thanks to this handy guide.
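Checking for custom VIBs before an upgrade is quick from the ESXi shell. A rough sketch (the grep filter is an assumption about vendor naming; third-party VIBs on another host may need a different filter):

    # List every installed VIB, then filter out the ones shipped by VMware;
    # anything remaining is a custom/third-party VIB an upgrade could clobber.
    esxcli software vib list
    esxcli software vib list | grep -iv vmware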

Lately I’ve been researching the possibility of using Kubernetes for application container orchestration for an ASP.NET Core + MongoDB application we have been developing here at chilitechno over the last few months. To that end, k8s looks like a great solution for scaling the application, but before launching it into the cloud, we’d like to run it in a local on-premises development configuration first. I could use a local Minikube / Kubernetes desktop solution for testing this setup, but that doesn’t lend itself to always-on access. Installing a development Kubernetes cluster on the vSphere host seems like the way to go.

Unfortunately, while the Canonical Distribution of Kubernetes (CDK) appears to support vSphere via conjure-up, it’s unclear whether the free version of ESXi is supported for CDK deployments. The conjure-up vSphere documentation is sparse at best, and the screenshots indicate an API endpoint for accessing vSphere. The problem is that the free version does not have API access; I would have to upgrade the lab to a vSphere Essentials license at a cost of about $600 USD.

That’s okay, though, I can get my hands dirty and figure out how to manually install and configure Kubernetes at a very low level on the host.

A future post will discuss the details of setting up such a cluster. As you can see in the screenshot above, the guests are provisioned as follows (a rough kubeadm sketch follows the list):

  • 1x Master: 2GB, 2 vCPU

  • 3x Worker: 3GB, 2 vCPU
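The manual install will most likely be built around kubeadm. Here is a minimal sketch, assuming a container runtime and the kubeadm/kubelet/kubectl packages are already installed on all four guests; the pod CIDR, master IP, token, and hash below are placeholders, not values from my cluster:

    # On the master (run once; prints the exact join command for the workers):
    sudo kubeadm init --pod-network-cidr=10.244.0.0/16

    # Make kubectl usable for the non-root user on the master:
    mkdir -p $HOME/.kube
    sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    # On each of the three workers, using the token/hash printed by kubeadm init:
    sudo kubeadm join <master-ip>:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>

After that, a pod network add-on still has to be applied on the master before the workers report Ready.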