A bare metal installation puts an OS directly on hardware, while a virtual machine allows multiple virtual environments to run on a single physical machine. On bare metal the operating system has full and direct control of the hardware, whereas a VM virtualizes all of this via software such as KVM, VMware, or Hyper-V.
In a VM, each guest OS thinks it has its own hardware, but the hypervisor (e.g., QEMU/KVM) makes the virtual machine share the resources of the physical server.
The guest OS that runs in a VM will contain its own file system, services, users, and network settings. When accessing resources, the guest OS will interact with virtualized hardware - often at the expense of some performance.
VM tools
libvirt is an open-source library, daemon (libvirtd), and set of APIs that give Linux a single, consistent way to manage different hypervisors (such as QEMU/KVM). It handles starting, stopping, and migrating VMs, as well as managing their resources.
We can use virsh to interact with this API, and do stuff like create a storage pool where any guest can drop its virtual disks:
sudo virsh pool-define-as local-dir --type dir --target /var/lib/libvirt/images
sudo virsh pool-build local-dir
sudo virsh pool-start local-dir
sudo virsh pool-autostart local-dir
To manage VMs we can use virsh commands such as virsh list --all, virsh start <vm_name>, virsh reboot <vm_name>, etc.
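As a rough sketch, a typical lifecycle session might look like the following (demo-vm is just a placeholder name for an already-defined guest):
sudo virsh list --all            # show every defined VM and its state
sudo virsh start demo-vm         # boot the VM
sudo virsh shutdown demo-vm      # ask the guest OS to shut down cleanly
sudo virsh destroy demo-vm       # hard power-off if the guest is unresponsive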
virt-manager is a GUI interface for libvirt.
VM states
In Linux, a VM can be in one of several states:
- Running → actively running, consuming resources
- Paused → temporarily halted, freezing its activity without shutting it down
- Shut off → Powered down
- Suspended → Memory contents saved to a disk for later resumption
- Crashed → terminated unexpectedly, usually because of a guest or hypervisor error
Note
The difference between Paused and Suspended is that when suspended, the VM's RAM and CPU are released and the VM state is stored on disk. Paused, on the other hand, keeps the VM state in RAM; this is why pausing allows for faster resume operations than suspending.
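With virsh, these states map onto a few commands; a quick sketch (demo-vm is again a placeholder name):
sudo virsh suspend demo-vm       # pause: state stays in RAM
sudo virsh resume demo-vm        # continue a paused VM
sudo virsh managedsave demo-vm   # suspend: save memory to disk and stop the VM
sudo virsh start demo-vm         # restores the managed save automatically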
Hypervisors (QEMU / KVM)
KVM (Kernel-based Virtual Machine) provides hardware-assisted virtualization on the host. Once in place, QEMU runs the virtual machines themselves.
KVM is a built-in feature of the Linux kernel that allows the OS to act as a Type 1 hypervisor, meaning it can run VMs directly on the host’s physical hardware. Interactions with KVM are usually handled via tools like QEMU or virsh.
On its own, KVM handles the low-level virtualization, but it cannot manage user-space resources the way QEMU does.
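A quick way to check whether the host actually supports KVM is to look for the CPU virtualization flags and the kvm kernel modules (output varies by CPU and distribution):
grep -cE '(vmx|svm)' /proc/cpuinfo   # non-zero means Intel VT-x or AMD-V is present
lsmod | grep kvm                     # kvm_intel or kvm_amd should be loaded
ls -l /dev/kvm                       # the device node QEMU uses to talk to KVM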
On the other hand, QEMU is a user-space application that can emulate hardware systems and run VMs on its own (even without KVM integration). It is responsible for creating and managing virtual environments, including CPUs, drives, USB controllers, and network cards.
QEMU can be used to boot a virtual machine from a disk image, assign RAM, attach network interfaces, connect to graphical output, and use KVM in the background to speed things up.
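A minimal sketch of such an invocation (disk.qcow2 is an assumed image path; the memory and CPU values are arbitrary examples):
# Boot a guest from disk.qcow2 with 2 GiB RAM, 2 vCPUs, a VirtIO disk and NIC,
# user-mode (NAT) networking, and KVM acceleration.
qemu-system-x86_64 \
  -enable-kvm \
  -m 2048 -smp 2 \
  -drive file=disk.qcow2,format=qcow2,if=virtio \
  -netdev user,id=net0 \
  -device virtio-net-pci,netdev=net0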
Other hypervisors and related virtualization technologies include:
- Xen
- LXC (container-based)
- OpenVZ (container-based)
- VirtualBox
- VMware ESXi
VirtIO and other drivers
The performance and scalability of a VM set-up depend on how the guests can access and interact with physical resources on the host. In Linux, VMs use a framework called VirtIO, which enhances the performance of guest-hypervisor interactions. VMs using VirtIO rely on paravirtualized drivers installed within the guest operating system, such as virtio-net for networking or virtio-blk / virtio-scsi for disk access.
These drivers work by letting the guest cooperate with the hypervisor through interfaces built for virtualization, rather than going through emulated physical hardware.
The traditional virtualization method involves having the hypervisor emulate entire physical devices in software, which makes performance slow and expensive. VirtIO solves this issue by using paravirtualized interfaces, which are virtual device types purpose-built for VMs and do not require emulating actual hardware.
This means that instead of interacting with a fully emulated device, a VM might simply use virtio-net, for example, a driver specifically designed for virtualized networking.
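Inside a guest, you can usually confirm that VirtIO devices are in use with standard tools (output will vary by distribution and configuration):
lspci | grep -i virtio   # lists VirtIO PCI devices (net, block, balloon, ...)
lsmod | grep virtio      # shows the loaded virtio_* guest drivers
lsblk                    # virtio-blk disks typically show up as vda, vdb, ...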
Drives
A VM drive is typically backed by a disk image file on the host system. This means that whatever the VM sees as a full filesystem is really just a structured file (or set thereof) on the host, in formats like qcow2 (QEMU/KVM), raw, VDI (VirtualBox), or VMDK (VMware). The performance of these drives depends on the type of virtual disk interface (such as IDE, SATA, or VirtIO).
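A sketch of creating and inspecting such an image with qemu-img (the path and the 20G size are just example values):
qemu-img create -f qcow2 /var/lib/libvirt/images/demo.qcow2 20G   # thin-provisioned qcow2 image
qemu-img info /var/lib/libvirt/images/demo.qcow2                  # shows format, virtual vs. actual size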
Networks
The virtualized network in a VM makes use of vNICs, or virtual network interfaces. The vNIC in the guest is connected to virtual network devices on the host, such as virtual bridges or switches.
The type of virtual network configuration determines what a VM can reach. For example, NAT allows a VM to access external networks via the host's IP address; bridged networking lets the VM use the host's physical NIC directly, making it appear like any other device on the local network.
Internal networks (called host-only) restrict traffic to the VM and the host.
Network types for VMs:
| Type | Description | Notes |
|---|---|---|
| NAT | Shares the host’s network connection | From the outside, it looks like the host is the one reaching out, not the VM. It also means the VM isn’t directly reachable from outside. |
| Bridged | Gives the VM its own IP address on the local network | On the network it looks like another device; it is exposed to the same network-level access as any other machine. |
| Host-only | Limits communication to the VM and the host | Closed setup where the VM can only talk to the host machine or other VMs in the same configuration. There is no LAN, no internet, only a private host-only bubble. |
| Routed | Uses a custom path to reach other networks | Gives the VM access to other networks via virtual routers. |
| Open (also known as promiscuous mode) | Removes almost all barriers | Lets the VM see all traffic on the network and access anything it can find. |
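With libvirt, the stock NAT network (named default) can be inspected and enabled with virsh; a quick sketch, assuming the default network definition shipped with libvirt is present:
sudo virsh net-list --all          # list defined virtual networks and their state
sudo virsh net-start default       # start the built-in NAT network
sudo virsh net-autostart default   # bring it up automatically at boot
sudo virsh net-dumpxml default     # show its bridge, IP range, and DHCP settings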