The goal of this overview is to give the reader a basic understanding of state-of-the-art virtualization approaches and to highlight the similarities and differences between containers and virtual machines. Both solutions have their own uses and often intersect - for instance, when VMs are used to deploy and host containers instead of running them directly on the hardware.
Virtual machines and containers ultimately serve one purpose - to create computing environments that are isolated from the rest of the system and comprise multiple components.
In that regard, virtualization is the process of creating abstraction layers over computer hardware in order to divide the computational power of one machine among multiple virtual environments.
Traditional VM architecture
A hypervisor is software that allows multiple operating systems to run alongside each other while sharing the hardware resources of the same physical machine. When run on a server, the hypervisor decouples the operating systems and applications from the hardware, which in turn allows the server to be split into multiple independent virtual machines.
Virtual machines have been around for quite some time and are considered the basis of first-generation cloud computing.
In other words, a VM is an emulation of a physical computer with allocated CPUs, memory and storage that interacts with the hardware via a lightweight software layer - the hypervisor. When a VM issues an instruction that requires additional resources from the physical environment, the hypervisor relays the request to the physical system and caches the changes. The price of this design is duplicated application dependencies and a large OS footprint - a footprint that is rarely needed to run a single app or microservice.
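To make the resource allocation concrete, hypervisor tooling such as libvirt describes each VM's CPUs, memory and storage in an explicit definition file. The sketch below is a hypothetical, trimmed-down libvirt domain definition - the VM name, sizes and disk path are invented for illustration:

```xml
<domain type='kvm'>
  <!-- hypothetical VM name -->
  <name>demo-vm</name>
  <!-- memory and virtual CPUs carved out of the host -->
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- virtual disk backed by an image file on the host -->
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```

Everything the guest sees - CPUs, RAM, the disk - is declared here and mediated by the hypervisor, which is exactly the decoupling from physical hardware described above.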
Containers are a more streamlined way of handling virtualization. Instead of deploying an entire virtual machine with hardware emulation, a container is a silo that packages together everything needed to run a piece of software - the code, its dependencies, and the required system libraries and tools. Containers are built on top of the host operating system's kernel and contain only the application plus lightweight operating system APIs and services that run in user mode.
The lightweight nature of containers and their shared operating system make them very easy to move across multiple environments. Everything within a container is captured in a so-called image - a static file that bundles the application code with all of its libraries and dependencies.
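Such an image is commonly described by a short build file. The sketch below is a hypothetical Dockerfile for a small Python service - the file names and base image tag are assumptions for illustration, not taken from the text:

```dockerfile
# Base layer: a minimal userland plus a Python runtime.
# The host kernel itself is shared, not packaged.
FROM python:3.12-slim

# Bake the dependencies into the image so it is self-contained.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Add the application code and declare the start-up command.
COPY app.py .
CMD ["python", "app.py"]
```

Building this file (e.g. `docker build -t demo-app .`) produces an image that can be moved to any host running a compatible container runtime - the portability described above.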
When to use one over the other?
The agile nature of containers makes them truly portable across bare-metal systems and multicloud environments. They can be used to efficiently deploy and automate the management of collections of microservices; the caveat is that containers have to be compatible with the underlying OS. Compared to VMs, containers are best used to:
- build cloud-native apps
- package and manage microservices
- adopt emerging IT practices (CI/CD, DevOps)
- scale projects across a diverse footprint
VMs are full-fledged environments capable of running far more operations than a single container, which makes them more suitable for traditional monolithic workloads. However, that expanded functionality makes them less portable because of their dependence on the guest OS, applications and libraries. Compared to containers, VMs are best used for:
- traditional workloads with lots of dependencies
- better development isolation
- infrastructure resource provisioning (networks, servers, etc.)
- actual guest OS virtualization (e.g. running a Unix variant on a Linux host)