Miniball is a minimal virtual machine manager. It is a subset of the Dragonball Sandbox, built from components of dragonball-sandbox and rust-vmm. The purpose of the Miniball project is to give anyone interested in virtualization an approachable way to learn it, and to exercise the crates from dragonball-sandbox.
Miniball contains two main crates, `vmm` and `api`, and two auxiliary crates, `vm-vcpu` and `utils`.
The `vmm` crate exports a `Vmm` struct that encapsulates, as dependencies, all the dragonball-sandbox crates that provide functionality. The `api` crate provides the configuration of vCPUs, memory, kernel, and block devices, as well as the CLI. Miniball does not support runtime configuration changes: the full VM configuration must be provided when the VM starts.
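To make the "full configuration up front" requirement concrete, here is a minimal sketch of how a caller might assemble a configuration and hand it over to the VMM. The struct names follow the configuration groups described in this document and the field names follow the CLI reference below, but the exact fields and the `Vmm::try_from`/`run` entry points are illustrative assumptions, not the verbatim Miniball API.

```rust
use std::path::PathBuf;

// Sketch only: the types below mirror the configuration groups described in this
// document; exact field names and entry points are assumptions, not the real API.
let config = Config {
    memory_config: MemoryConfig { size_mib: 1024 },
    vcpu_config: VcpuConfig { num: 1 },
    kernel_config: KernelConfig {
        path: PathBuf::from("/path/to/vmlinux"),
        cmdline: String::from("console=ttyS0 i8042.nokbd reboot=t panic=1 pci=off"),
        kernel_load_addr: 0x10_0000,
    },
    // The block device is optional; without it, the kernel must carry its own initramfs.
    block_config: Some(BlockConfig {
        path: PathBuf::from("/path/to/rootfs.ext4"),
    }),
};

// The entire configuration is handed over before the VM starts; there is no way
// to change it afterwards.
let mut vmm = Vmm::try_from(config)?;
vmm.run()?;
```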
Miniball uses the following crates from rust-vmm: `kvm-ioctls`, `kvm-bindings`, `vm-memory`, and `linux-loader`; and the following crates from dragonball-sandbox: `dbs-address-space`, `dbs-allocator`, `dbs-boot`, `dbs-arch`, `dbs-device`, `dbs-interrupt`, `dbs-legacy-devices`, `dbs-utils`, and `dbs-virtio-devices`.
The Miniball architecture diagram below shows the relationship between the modules and the crates used by each of them.
Miniball obtains the configuration for starting the VM from the CLI input, builds the `Config` (`VcpuConfig`, `MemoryConfig`, `KernelConfig`, `BlockConfig`), configures the VM components in sequence, and finally loads the kernel into guest memory and starts the virtual machine. The detailed process is as follows:
- Set up KVM. This is done through `kvm-ioctls`. It creates the KVM virtual machine in the host kernel.

  ```rust
  // src/vmm/src/vmm.rs
  let kvm = Kvm::new().map_err(Error::KvmIoctl)?;

  // Check that the KVM on the host is supported.
  let kvm_api_ver = kvm.get_api_version();
  if kvm_api_ver != KVM_API_VERSION as i32 {
      return Err(Error::KvmApiVersion(kvm_api_ver));
  }
  Vmm::check_kvm_capabilities(&kvm)?;
  ```
- Configure guest memory. This is done through the `vm-memory` and `dbs-address-space` crates. `vm-memory` creates the guest memory and registers it with KVM; `dbs-address-space` manages the guest address space. See the memory virtualization documentation for details on this part.
  - Requirements: KVM is set up
  - Inputs:
    - guest memory size

  ```rust
  // src/vmm/src/vmm.rs
  let guest_memory = Vmm::create_guest_memory(&config.memory_config)?;
  let address_space = Vmm::create_address_space(&config.memory_config)?;
  let address_allocator = Vmm::create_address_allocator(&config.memory_config)?;
  ```
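  For orientation, below is a minimal, self-contained sketch of what "create guest memory and register it with KVM" typically looks like when done directly with `vm-memory` and `kvm-ioctls`. It is not Miniball's actual implementation: the single-region layout, the 256 MiB size, and slot 0 are simplifying assumptions.

  ```rust
  use kvm_bindings::kvm_userspace_memory_region;
  use kvm_ioctls::Kvm;
  use vm_memory::{GuestAddress, GuestMemory, GuestMemoryMmap};

  fn setup_guest_memory() -> Result<(), Box<dyn std::error::Error>> {
      // Assumption: one contiguous region starting at guest physical address 0.
      let mem_size: usize = 256 << 20; // 256 MiB, the documented default
      let guest_memory = GuestMemoryMmap::from_ranges(&[(GuestAddress(0), mem_size)])?;

      let kvm = Kvm::new()?;
      let vm_fd = kvm.create_vm()?;

      // Register the backing region with KVM so the guest can use it as RAM.
      let mem_region = kvm_userspace_memory_region {
          slot: 0,
          guest_phys_addr: 0,
          memory_size: mem_size as u64,
          userspace_addr: guest_memory.get_host_address(GuestAddress(0))? as u64,
          flags: 0,
      };
      // Safety: the mmap'ed memory stays valid for the lifetime of the VM.
      unsafe { vm_fd.set_user_memory_region(mem_region)? };
      Ok(())
  }
  ```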
- Configure the vCPUs. This is done through the `vm-vcpu` crate, which is a local crate; parts of it rely on `kvm-ioctls`, `dbs-arch`, and `dbs-boot`. See the CPU virtualization documentation for details on this part.

  ```rust
  // src/vmm/src/vmm.rs
  // Create the KvmVm.
  let vm_config = VmConfig::new(&kvm, config.vcpu_config.num)?;
  ```
  - Requirements: KVM is configured, guest memory is configured
  - Inputs: vCPU register values - hardcoded / embedded in the VMM for the same reasons as the boot parameters.
  - Breakdown (`x86_64`):
    - Configure MPTables. These tables tell the guest OS what the multiprocessor configuration looks like, and are required even with a single vCPU.
      ```rust
      // src/vm-vcpu/src/vm.rs
      #[cfg(target_arch = "x86_64")]
      mptable::setup_mptable(guest_memory, vm.config.num_vcpus, vm.config.num_vcpus)
          .map_err(Error::MpTable)?;
      ```
    - Create the KVM `irqchip`. This creates the virtual IOAPIC and the virtual PIC, and sets up future vCPUs for the local APIC.

      ```rust
      // src/vm-vcpu/src/vm.rs
      #[cfg(target_arch = "x86_64")]
      vm.setup_irq_controller()?;
      ```
    - Create the vCPUs. An `fd` is registered with KVM for each vCPU.

      ```rust
      // src/vm-vcpu/src/vm.rs
      vm.create_vcpus(bus, vcpus_config, guest_memory)?;
      ```
    - Configure CPUID. Required (at least) because it's the means by which the guest finds out it's virtualized.

      ```rust
      // src/vm-vcpu/src/vcpu/mod.rs
      let base_cpuid = _kvm
          .get_supported_cpuid(kvm_bindings::KVM_MAX_CPUID_ENTRIES)
          .map_err(Error::KvmIoctl)?;

      dbs_arch::cpuid::process_cpuid(&mut cpuid, &vm_spec).map_err(|e| Error::CpuId(e))?;
      ```
    - Configure MSRs (model specific registers). These registers control (among others) the processor features. See the reference.

      ```rust
      // src/vm-vcpu/src/vcpu/mod.rs
      #[cfg(target_arch = "x86_64")]
      dbs_arch::regs::setup_msrs(&self.vcpu_fd).map_err(Error::MSRSConfiguration)
      ```
    - Configure other registers (`kvm_regs`, `kvm_sregs`, `fpu`) and the LAPICs.

      ```rust
      // src/vm-vcpu/src/vcpu/mod.rs
      #[cfg(target_arch = "x86_64")]
      {
          vcpu.configure_cpuid(&vcpu.config.cpuid)?;
          vcpu.configure_msrs()?;
          vcpu.configure_sregs(memory)?;
          vcpu.configure_lapic()?;
          vcpu.configure_fpu()?;
      }
      ```
- Create the event manager for device events. This is done through `dbs-utils::epoll_manager`.

  ```rust
  // src/vmm/src/vmm.rs
  let event_manager = EpollManager::default();
  event_manager.add_subscriber(Box::new(wrapped_exit_handler.0.clone()));
  ```
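  To illustrate what event handling through `dbs-utils::epoll_manager` looks like from a device's point of view, here is a hedged sketch of a subscriber. It assumes `dbs-utils` exposes the usual `MutEventSubscriber`/`EventOps`/`Events` interface used by rust-vmm event managers; the `DummyDevice` type and its single eventfd are purely illustrative.

  ```rust
  use dbs_utils::epoll_manager::{EpollManager, EventOps, EventSet, Events, MutEventSubscriber};
  use vmm_sys_util::eventfd::EventFd;

  // Hypothetical device that wants to be woken up when its eventfd fires.
  struct DummyDevice {
      event: EventFd,
  }

  impl MutEventSubscriber for DummyDevice {
      fn init(&mut self, ops: &mut EventOps) {
          // Register interest in read events on the device's eventfd.
          ops.add(Events::new(&self.event, EventSet::IN))
              .expect("failed to register event");
      }

      fn process(&mut self, events: Events, _ops: &mut EventOps) {
          // Called by the epoll manager when the eventfd becomes readable.
          if events.event_set().contains(EventSet::IN) {
              let _ = self.event.read();
              // ... handle the device event here ...
          }
      }
  }

  fn main() {
      let event_manager = EpollManager::default();
      let device = DummyDevice { event: EventFd::new(0).unwrap() };
      event_manager.add_subscriber(Box::new(device));
  }
  ```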
-
legacy devices need to be configured with serial console and keyboard i8042 controller, serial console emulation is done through
dbs-device
anddbs-legacy-devices
crates. Device event handling is mediated throughdbs-utils::epoll_manager
. See theDevice virtualization documentation
for details on this part.- Requirements: KVM is configured, guest memory is configured,
irqchip
is configured (x86_64
), event manager is configured - Inputs: N/A
- Breakdown:
- Create dummy speakers. The virtual speaker must be emulated, otherwise the kernel keeps accessing the speaker's port causing the KVM to continuously exit.
- Create serial console. The serial console is used to provide communication between the virtual machine and the host.
- Create i8042. The keyboard i8042 controller is used to simulate the CPU reset command, which is used to notify the VMM Guest to shut down.
// src/vmm/src/lib.rs let serial = vmm.create_serial_console()?; vmm.init_serial_console(serial)?; #[cfg(target_arch = "x86_64")] vmm.add_i8042_device()?;
- Configure the root block device. This is done through `dbs-virtio-devices`. Device event handling is mediated through `dbs-utils::epoll_manager`. See the device virtualization documentation for details on this part.
  - Requirements: KVM is configured, guest memory is configured, the `irqchip` is configured (`x86_64`), the event manager is configured

  ```rust
  // src/vmm/src/lib.rs
  if let Some(cfg) = config.block_config.as_ref() {
      vmm.add_block_device(cfg)?;
  }
  ```
- Load the guest kernel into guest memory. This is done through the `linux-loader` and `dbs-boot` crates. See the memory virtualization documentation for details on this part.
  - Requirements: guest memory is configured
  - Inputs:
    - path to the kernel file
    - start of high memory (`x86_64`)
    - kernel command line
    - boot parameters - embedded in the VMM
      - Too complex to pass through the command line / other inputs: these are arch-dependent structs, built with `bindgen` and exported by `linux-loader`, that the user fills in outside `linux-loader` with arch- and use-case-specific values.
      - Some can be constants and can be externally specified, unless they make the UI unusable. Examples: kernel loader type, kernel boot flags, dedicated address for the kernel command line, etc.
  ```rust
  // src/vmm/src/lib.rs
  let load_result = self.load_kernel()?;

  #[cfg(target_arch = "x86_64")]
  let kernel_load_addr = self.compute_kernel_load_addr(&load_result)?;
  ```
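  For comparison, the following is a minimal sketch, not Miniball's actual `load_kernel`, of loading an ELF kernel image with `linux-loader` directly. The `HIMEM_START` constant and the omission of command line and boot-parameter setup are simplifying assumptions.

  ```rust
  use std::fs::File;
  use std::path::Path;

  use linux_loader::loader::{elf::Elf, KernelLoader};
  use vm_memory::{GuestAddress, GuestMemoryMmap};

  // Illustrative load floor; the documented default high-memory start is 0x100000 (1 MiB).
  const HIMEM_START: u64 = 0x0010_0000;

  fn load_kernel_sketch(
      guest_memory: &GuestMemoryMmap,
      kernel_path: &Path,
  ) -> Result<GuestAddress, Box<dyn std::error::Error>> {
      let mut kernel_file = File::open(kernel_path)?;

      // Load the ELF image into guest memory, no lower than HIMEM_START.
      let load_result = Elf::load(
          guest_memory,
          None,                            // no extra offset on top of the image's own addresses
          &mut kernel_file,
          Some(GuestAddress(HIMEM_START)), // high memory start address
      )?;

      // Writing the kernel command line and the boot parameters into guest memory
      // (see the inputs listed above) is omitted from this sketch.
      Ok(load_result.kernel_load)
  }
  ```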
The command line accepts the following VM configuration arguments:

- `memory` - guest memory configuration
  - `size_mib` - `u32`, guest memory size in MiB (decimal)
    - default: 256 MiB
- `kernel` - guest kernel configuration
  - `path` - `String`, path to the guest kernel image
  - `cmdline` - `String`, kernel command line
    - default: "console=ttyS0 i8042.nokbd reboot=t panic=1 pci=off"
  - `kernel_load_addr` - `u64`, start address for high memory (decimal)
    - default: 0x100000
- `vcpus` - vCPU configuration
  - `num` - `u8`, number of vCPUs (decimal)
    - default: 1
- `block` - block device configuration
  - `path` - `String`, path to the root filesystem
Note: For now, only the path to the root block device can be configured via the command line. The block device will implicitly be read-write and support the `cache flush` command. Passing the `block` argument is optional; if you want to skip it, make sure the `path` argument of the `kernel` configuration points to a suitable image (for example a Busybox one).

We plan on extending the API to be able to configure more block devices and more of their parameters (not just the `path`). We also want to offer the same support in the near future for network and vsock devices.
```bash
dbs-miniball \
    --kernel path=/path/to/kernel/image,cmdline="reboot=t panic=1 pci=off"
```

```bash
dbs-miniball \
    --memory size_mib=1024 \
    --vcpu num=2 \
    --kernel path=/path/to/kernel/image
```
Currently, the Miniball runs on Linux x86_64 hosts, using the KVM hypervisor. To make sure KVM is accessible to your user, run:
```bash
[ -r /dev/kvm ] && [ -w /dev/kvm ] && echo "OK" || echo "FAIL"
```
To grant your user access to KVM, either:
- If you have the ACL package for your distro installed:

  ```bash
  sudo setfacl -m u:${USER}:rw /dev/kvm
  ```

  or

- If your distribution uses the `kvm` group to manage access to `/dev/kvm`:

  ```bash
  [ $(stat -c "%G" /dev/kvm) = kvm ] && sudo usermod -aG kvm ${USER}
  ```

  Then log out and back in.
To build the Miniball from source, you need to have the Rust compiler and `cargo` installed on your system. The following toolchains are supported:

- `x86_64-unknown-linux-gnu` (Linux with `glibc`, default)
- `x86_64-unknown-linux-musl` (Linux with `musl libc`)
As the Miniball does not yet have any compile-time features, building it is as simple as:
```bash
cargo build [--release]
```
This will produce a binary called `dbs-miniball` in the `cargo` build directory (default: `target/${toolchain}/${mode}`, where `mode` can be `debug` or `release`).
To build a kernel for the Miniball to boot, check out the scripts in resources/kernel.
`make_kernel_busybox_image.sh` builds an ELF or bzImage kernel with a baked-in initramfs running Busybox. It uses a stripped-down kernel config and a statically linked config for the Busybox initramfs.
Example:
```bash
sudo ./make_kernel_busybox_image.sh -f elf -k vmlinux-hello-busybox -w /tmp/kernel
```

produces a binary image called `vmlinux-hello-busybox` in the `/tmp/kernel` directory. Root privileges are needed to create device nodes. Run `./make_kernel_busybox_image.sh` with no arguments to see the help.
`make_kernel_image_deb.sh` builds an ELF or bzImage kernel compatible with Ubuntu 20.04 from a stripped-down kernel config, as well as `.deb` packages containing the Linux kernel image and modules, to be installed in the guest. By default, the script downloads the `.deb` packages from an official Ubuntu mirror, but it can build them from the same sources as the kernel instead. Users can opt in to this behavior by setting the `MAKEDEB` environment variable before running the script.
Example:
```bash
./make_kernel_image_deb.sh -f bzimage -j 2 -k bzimage-focal -w /tmp/ubuntu-focal
```

produces a binary image called `bzimage-focal` in the `/tmp/ubuntu-focal` directory. It downloads the `linux-modules` and `linux-image-unsigned` packages and places them inside the kernel source directory within `/tmp/ubuntu-focal` (the exact location is displayed at the end). Run `./make_kernel_image_deb.sh` with no arguments to see the help.
The Miniball only supports a serial console device for now. This section will be expanded as other devices are added. Block devices are in the works.
To build a block device with a root filesystem in it containing an OS for the Miniball, check out the scripts in resources/disk.
`make_rootfs.sh` builds a 1 GiB disk image containing an ext4 filesystem with an Ubuntu 20.04 image.
Example:
```bash
sudo resources/disk/make_rootfs.sh -d /tmp/ubuntu-focal/deb -w /tmp/ubuntu-focal
```

produces a file called `rootfs.ext4` inside `/tmp/ubuntu-focal` containing the Ubuntu 20.04 image and the kernel image installed from the `.deb` packages expected in `/tmp/ubuntu-focal/deb`. At the very least, the OS needs the `linux-image` and `linux-modules` packages. These can either be downloaded or built from sources; see the kernel section above for examples of how to acquire them using scripts from this repo. Root privileges are needed to manage mountpoints.
Once all the prerequisites are met, the Miniball can be run either directly through `cargo`, passing on its specific command line arguments, or after building it with `cargo build`.
```bash
cargo run --release -- \
    --memory size_mib=1024 \
    --kernel path=${KERNEL_PATH} \
    --vcpu num=1
```

```bash
cargo build --release
target/release/dbs-miniball \
    --memory size_mib=1024 \
    --kernel path=${KERNEL_PATH} \
    --vcpu num=1
```
Examples:
```bash
cargo run --release -- \
    --memory size_mib=1024 \
    --kernel path=/tmp/kernel/linux-5.4.81/vmlinux-hello-busybox \
    --vcpu num=1 \
    --block path=/tmp/ubuntu-focal/rootfs.ext4
```

```bash
cargo build --release
target/release/dbs-miniball \
    --memory size_mib=1024 \
    --kernel path=/tmp/kernel/linux-5.4.81/vmlinux-hello-busybox \
    --vcpu num=1 \
    --block path=/tmp/ubuntu-focal/rootfs.ext4
```
Currently, this intersection resolves into Linux hosts and the KVM hypervisor. The first iteration of the Miniball supports only this configuration, returning errors when users attempt to run it on anything else.

Long term, the Miniball will run on `x86_64` and `aarch64` platforms. Currently, only Intel `x86_64` CPUs are supported.
Rust 1.59.0
The Miniball will support both `glibc` and `musl libc` (toolchains: `x86_64-unknown-linux-gnu`, `x86_64-unknown-linux-musl`), with `glibc` being the default due to `x86_64-unknown-linux-gnu` being Tier 1 supported by Rust. Future extensions to `aarch64` support will introduce the `aarch64-unknown-linux-gnu` and `aarch64-unknown-linux-musl` toolchains, defaulting (probably) to `aarch64-unknown-linux-gnu` on ARM, because it is also Tier 1 supported since Rust 1.49.
The Miniball is inspired by the vmm-reference project, and part of its code is derived from vmm-reference.
This project is licensed under either of:
- Apache License, Version 2.0
- BSD-3-Clause License