Any enterprise IT professional working on a digital transformation initiative knows that modernizing traditional applications isn’t something that happens overnight. At Diamanti, we routinely address infrastructure challenges that our customers face when building production-grade Kubernetes environments for databases and other business-critical applications.
Modernizing an application comes with its own significant challenges, and there are two common approaches to address them: a complete refactoring of the application into a set of microservices (a lot of work for software developers and QA teams), or packaging the application code and running it inside a Docker container (not an ideal solution for the long term, but a lot less work up front).
But what if your application is impractical to containerize? And what if you don’t have time for a lengthy refactoring process?
There is still a viable path forward. Even if you can’t migrate your original application off of the legacy VM it currently runs on, you CAN run that same VM, packaged inside a container, on your Kubernetes cluster. This is the power of KVM.
KVM in a Container
KVM, or Kernel-based Virtual Machine, is a Linux kernel module that allows the kernel to function as a type-1 bare-metal hypervisor, enabling each VM to run as an individual process, just as a container does. In the case of KVM, the container image consists of the code that emulates the virtual machine.
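Because KVM is a kernel feature, a Linux host exposes it through the /dev/kvm device node. As a minimal sketch (the `check_kvm` helper below is ours for illustration, not part of any Diamanti tooling), you can verify whether a node is able to host KVM guests:

```shell
# Hypothetical quick check: KVM support on a Linux node is exposed
# through the /dev/kvm device node created by the kvm kernel module.
check_kvm() {
  if [ -e /dev/kvm ]; then
    echo "kvm: device node present"
  else
    echo "kvm: device node missing"
  fi
}

result="$(check_kvm)"
echo "$result"
```

On a node where the check reports the device node as missing, loading the kvm module (and the CPU-specific kvm_intel or kvm_amd module) is the usual first step.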
With support for KVM, Diamanti provides a unified deployment platform for applications running in VMs in addition to applications running as containers, leveraging Kubernetes as the orchestrator for both forms of workloads.
This addresses the immediate needs of organizations that have adopted (or want to adopt) Kubernetes but have existing VM-based workloads that cannot be containerized or are still in the process of being containerized.
Key Benefits of KVM:
- Transition virtualized workloads to containerized workloads without migrating persistent data and affecting QoS
- Run existing virtualized workloads alongside new containerized workloads in the same environment
How KVM Works
Kubernetes allows its architecture to be extended in the form of custom resources. Diamanti’s KVM solution is a Kubernetes add-on consisting of custom resource definitions (CRDs), a controller, and an operator, leveraging a range of Kubernetes extension mechanisms. Through these, the Kubernetes API can be used to manage KVM resources alongside the resources Kubernetes provides natively, such as pods. Combining Kubernetes and KVM allows us to launch containers and virtual machines on the same cluster, even on the same node, using the same network and storage infrastructure.
Diamanti exposes KVM functionality as a feature of the Diamanti Enterprise Kubernetes Platform. Once the feature is enabled, the KVM CRD and KVM controller are deployed on the Kubernetes cluster running on the Diamanti platform. You can then access KVM objects and create new ones using the kubectl command, just as you would for built-in Kubernetes resources like pods, replication controllers, and deployments.
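As a concrete sketch, declaring a KVM object looks like declaring any other Kubernetes resource. The API group, kind, and spec fields below are illustrative assumptions, not Diamanti’s published schema:

```yaml
# Hypothetical KVM custom resource; apiVersion, kind, and spec fields
# are assumptions for illustration, not Diamanti's actual schema.
apiVersion: diamanti.com/v1
kind: Kvm
metadata:
  name: legacy-app-vm
spec:
  cpu: 4                 # virtual CPUs for the QEMU process
  memory: 8Gi            # guest memory
  image: legacy-app-disk # boot disk for the guest
```

You would then manage the object with the same kubectl verbs used everywhere else, for example `kubectl apply -f kvm.yaml` to create it and `kubectl get kvm` to list KVM objects.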
The KVM controller monitors KVM CRD objects and, using the Kubernetes APIs, creates a KVM pod for each KVM object. The KVM pod is the primary container that runs the Quick Emulator (QEMU) virtual machine, and a KVM object remains associated with its KVM pod throughout its lifetime.
Since the KVM CRD and controller are deployed on top of a Kubernetes cluster, your Kubernetes-native workloads run right alongside the KVMs: enabling the KVM feature lets you add VM-based workloads to a cluster that is already running native ones.
Diamanti’s KVM solution is designed to provide virtualization capabilities while preserving the Kubernetes philosophy and semantics. It also allows virtual machines to benefit from Diamanti’s Kubernetes features such as storage classes, persistent volumes, Quality-of-Service (QoS) for storage resources, L2 networking and QoS for network resources.
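To make the storage integration concrete: a VM’s disk can be backed by a standard PersistentVolumeClaim. The manifest below uses the ordinary Kubernetes PVC API; the `storageClassName` and the claim’s use as a KVM boot disk are assumptions for illustration:

```yaml
# Standard Kubernetes PVC; the storage class name and its role as a
# VM boot disk are illustrative assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: legacy-app-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: high-perf   # a storage class carrying QoS guarantees (name assumed)
  resources:
    requests:
      storage: 100Gi
```

Because the claim goes through the platform’s storage classes, the VM’s disk inherits the same QoS guarantees that containerized workloads get.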
We’ll explore Diamanti’s KVM solution in greater detail in the next installment of this blog. Stay tuned!