Step-by-Step for creating a NixOS Test VM
This page walks you through creating a simple NixOS Test VM that runs SoTest.
The code snippets and the example files in this tutorial are available within
the SoTest source code at docs/content/snippets.
Step 1 - NixOS configuration
The first step is creating a NixOS configuration file. We use the example file provided below; adapt it to your needs.
Check out the NixOS module guide for detailed information on all module configuration options.
When modifying the configuration, taking a look at the integration tests and the arion config can help:
nix/tests/lib/single_host_test_config.nix
nix/tests/lib/postgres_setup_for_sotest_db.nix
nix/arion/arion-compose.nix
When starting out with a new installation, it's easiest not to use real hardware for running the tests but instead to work with virtualized hardware (e.g. QEMU). You can add real hardware once everything else is up and running.
Step 2 - Build the VM
Build the VM by calling nix-build with your NixOS config file (the script also determines the correct nixpkgs src path to use beforehand):
#!/usr/bin/env bash
# File: build_nixos_vm.sh
set -euo pipefail
# Find script parent dir so we make paths relative to this location
script_dir=$(
cd "$(dirname "${BASH_SOURCE[0]}")"
pwd -P
)
# Edit to point to your own configuration if desired
CONFIG_FILE=${script_dir}/example_nixos_configuration.nix
# Determine the nixpkgs src path
nixpkgs_src=$(nix-instantiate --eval -E "builtins.toString (import ${script_dir}/../../../nix/sources.nix {}).nixpkgs" | tr -d '"')
export NIX_PATH=nixpkgs=$nixpkgs_src
# Build the VM
nix-build '<nixpkgs/nixos>' -A vm -I nixos-config="$CONFIG_FILE"
# Remove any old QEMU image file
# Otherwise the old file will be reused when the VM is started and
# configuration changes won't take effect
rm -f nixos.qcow2
Check out the generated file ./result/bin/run-nixos-vm.
It starts the VM with a set of QEMU command line parameters, some of which
allow for further configuration.
Step 3 - Run the VM
You can start the VM using the following wrapper script:
#!/usr/bin/env bash
# File: run_nixos_vm.sh
set -euo pipefail
# Enable forwarding the port to the SoTest Web UI
# Enable forwarding the port to the webserver
# (ports have to match the ones configured in the NixOS configuration)
export QEMU_NET_OPTS="hostfwd=tcp::5000-:5000,hostfwd=tcp::8080-:80"
# Add another network card
# We're pretending this is the network the test machines are connected to
export QEMU_OPTS="-net nic,netdev=user.1,model=virtio -netdev user,id=user.1"
# Set relative directory for the temporary directory to be used by QEMU
export TMPDIR=/tmp/sotest-nixos-vm
mkdir -p "$TMPDIR/xchg"
export USE_TMPDIR=0
# Run the VM (script has to be executed from same directory as build_nixos_vm.sh)
./result/bin/run-nixos-vm
The script creates a TMPDIR for data exchange with the host at
/tmp/sotest-nixos-vm/xchg and forwards the ports for the SoTest Web UI and
for the configured webserver to the host.
You should now be able to access the SoTest Web UI at http://localhost:5000.
Check the list of available machines in the Web UI at
http://localhost:5000/machines and verify that the machines configured in
your Controller configuration are present. If the machine list is empty, the
SoTest Controller might be down due to an incorrect configuration. In that
case, log into the VM and check the journalctl output for hints on what went
wrong.
When everything is correctly configured, the SoTest API returns output as well:
$ curl localhost:5000/machines
[...]
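For scripted checks, you can wrap this call in a small helper. This is a sketch under the assumption that the endpoint answers with a JSON array (`[]` when no machines are registered); the function name is ours, not part of SoTest:

```shell
# machines_present: succeed if the machines endpoint reports at least one
# machine. Assumes the API answers with a JSON array ("[]" when empty).
machines_present() {
  local body
  body=$(curl -sf localhost:5000/machines) || return 1
  [ -n "$body" ] && [ "$body" != "[]" ]
}
```

With the VM running, `machines_present && echo ok` prints `ok` once the Controller has registered your machines.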
The webserver at http://localhost:8080 should show some output using curl or
a browser. It is configured to serve the contents of $TMPDIR/xchg, so any
files placed in this directory on the host will be available through the
webserver. Due to the QEMU_NET_OPTS from the example, the webserver is
available at port 8080 on the host and at port 80 on the guest.
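To see the file exchange in action, drop a file into the xchg directory on the host; the path and port follow the wrapper script above:

```shell
# The host directory served by the VM's webserver (set up by run_nixos_vm.sh)
XCHG_DIR=/tmp/sotest-nixos-vm/xchg
mkdir -p "$XCHG_DIR"

# Any file placed here is served by the guest's httpd
echo "hello from the host" > "$XCHG_DIR/hello.txt"

# With the VM running, fetch it through the forwarded port:
#   curl http://localhost:8080/hello.txt
```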
Step 4 - Start a simple Test Run
Create a simple Test Run for the virtualized hardware. The User Tutorial has more in-depth information on this topic.
You'll need to provide a URL to some test binaries (e.g. the SoTest tutorial
binaries) or copy the binaries to the xchg directory so they are served by
the VM's webserver. You can download the binaries from the tutorial and copy
them to the xchg directory set up earlier, which is served by httpd. This
way, they'll show up as http://localhost/tutorial-binaries.zip on the VM.
cp tutorial-binaries.zip /tmp/sotest-nixos-vm/xchg
(Alternatively, you can run nix-build nix/release.nix -A tutorial-binaries to
obtain the binaries.)
You also need to provide the project configuration file.
Then you can run:
#!/usr/bin/env bash
# File: start_testrun_example.sh
set -euo pipefail
# Change directory to `snippets`, so we can access the project config file
cd "$(dirname "${BASH_SOURCE[0]}")"
# SoTest API call with some test data
curl \
-F 'boot_files_url=http://localhost/tutorial-binaries.zip' \
-F 'url=https://docs.sotest.io/tutorial' \
-F 'name=Tutorial Test Run' \
-F 'user=Test User' \
-F 'config=@./example_nixos_project_configuration.yaml' \
-X POST \
localhost:5000/test_runs
The API create call yields the ID of your Test Run. Check the SoTest Web UI for detailed information on your Test Run.
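If you script Test Run creation, a small wrapper makes it easy to capture that ID. This is a sketch; the function name and the assumption that the response body is printed as-is are ours:

```shell
# start_test_run BOOT_FILES_URL CONFIG_FILE
# Posts a new Test Run and prints the API response (the new Test Run ID).
start_test_run() {
  curl -sf \
    -F "boot_files_url=$1" \
    -F 'name=Tutorial Test Run' \
    -F 'user=Test User' \
    -F "config=@$2" \
    -X POST \
    localhost:5000/test_runs
}

# Example (VM must be running):
#   run_id=$(start_test_run http://localhost/tutorial-binaries.zip \
#     ./example_nixos_project_configuration.yaml)
#   echo "Created Test Run $run_id"
```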
Example SoTest Configuration
# File: example_nixos_configuration.nix
{ config, pkgs, lib, ... }:
let
tftpFolder = "/tmp/tftp_root";
testDataFolder = "/tmp/testData";
machinesInterfaceIp = "192.168.1.1";
iscsiPort = 3260;
learning-pkg = pkgs.callPackage ../../../optimization/learning_pkg { };
targetIqnPrefix = "iqn.2017-09.de.example.iscsi:";
machineIqnPrefix = "iqn.2017-09.de.example:";
in
{
imports = [
# Adapt this path to point to your SoTest installation
../../../nix/modules/sotest.nix
];
# Required for including the SoTest module
# You can also use this when building the config: export NIXPKGS_ALLOW_UNFREE=1
nixpkgs.config.allowUnfree = true;
networking = {
firewall = {
# Used for webserver
allowedTCPPorts = [ 80 ];
allowedUDPPorts = [ ];
};
interfaces.eth1 = {
useDHCP = false;
ipv4.addresses = [{ address = machinesInterfaceIp; prefixLength = 24; }];
};
firewall.interfaces.eth1 = {
allowedUDPPorts = [
67 # dhcp/dnsmasq
69 # tftp
iscsiPort # iSCSI
];
allowedTCPPorts = [
80 # webserver
iscsiPort # iSCSI
5432 # db
];
};
};
services.dnsmasq = {
enable = true;
resolveLocalQueries = false;
extraConfig = builtins.replaceStrings [ "\${tftpFolder}" ] [ "${tftpFolder}" ] (builtins.readFile ./dnsmasq_example_configuration.conf);
};
systemd.services.dnsmasq = {
# Required so that dnsmasq can access the tftpFolder
serviceConfig.PrivateTmp = lib.mkForce false;
preStart = ''
# Create tftpFolder here as otherwise it'll only be created when
# the SoTest Controller starts
mkdir -m 0777 -p ${tftpFolder}
'';
};
# tgt is the iscsi service we're using
environment.systemPackages = with pkgs; [
tgt
];
# pkgs.tgt already provides a systemd service file
# systemd.packages allows systemd to find the .service file for tgt
systemd.packages = [ pkgs.tgt ];
# Enable iscsi service to start automatically
systemd.services.tgtd = {
enable = true;
# Additional "wantedBy" required because the default puts "wantedBy" into
# the "[Install]" section
# See: https://search.nixos.org/options?channel=unstable&show=systemd.services.%3Cname%3E.wantedBy&query=systemd.services.%3Cname%3E.wantedBy
wantedBy = [ "multi-user.target" ];
};
# The tgt service expects to find its config file at /etc/tgt/targets.conf
environment.etc = {
"tgt/targets.conf" = {
text = ''
<target ${targetIqnPrefix}50-7b-9d-27-ab-77-win10-abc>
backing-store /media/storage/iscsi/win10-abc/50-7b-9d-27-ab-77.img
write-cache off
incominguser testUser testPassword
</target>
'';
mode = "0440";
};
};
# Create dummy iscsi image that can be checked for availability by iscsi service
environment.extraInit = ''
mkdir -p /media/storage/iscsi/win10-abc/
echo "test image" > /media/storage/iscsi/win10-abc/50-7b-9d-27-ab-77.img
'';
# This is the database that's used by SoTest. You may exchange this for
# another database of your choice.
services.postgresql = {
enable = true;
package = pkgs.postgresql_10;
inherit (config.services.sotest-db) port;
enableTCPIP = true;
authentication = "host all all 0.0.0.0/0 md5";
initialScript = with config.services.sotest-db; pkgs.writeText "postgres-initScript" ''
CREATE ROLE ${user} WITH LOGIN PASSWORD '${password}' CREATEDB;
CREATE DATABASE ${database};
GRANT ALL PRIVILEGES ON DATABASE ${database} TO ${user};
'';
};
services = {
# The database configuration used by the sotest-webui and the
# sotest-controller. Please make sure to adapt the values.
sotest-db = {
host = "localhost";
port = 5432;
user = "myuser";
password = "mypw";
database = "mydb";
};
# The SoTest Web UI
sotest-webui = {
enable = true;
openFirewall = true;
port = 5000;
# Set to true to automatically migrate the DB schema before the service is
# started
autoMigration = true;
# Set to true to enable failure probability prediction. Either enable
# sotest-model-generation or provide your own `predictionCommand`.
prediction = true;
storageDir = testDataFolder;
storageUrl = "http://localhost";
storageGCSizeStopMB = 0;
storageGCSizeTriggerMB = 1;
};
sotest-db-filler = {
# Enable the manual insertion of arbitrary test data into the database.
enable = true;
};
# Enable to regularly generate a prediction model from the database.
sotest-model-generation = {
enable = true;
nJobs = 1;
timer = {
enable = true;
interval = "*-*-* *:*:00";
};
};
# The SoTest Controller
# Use a recursive set so we don't have to retype tftpFolder and mac
sotest-controller = rec {
enable = true;
binaryTmpDir = "/tmp/sotest-binaries";
inherit tftpFolder;
authMethods = {
"https://gitlab.example.de/" = {
gitlab_private_token = "xxxxxxxxxxxxxxxxxxxx";
};
};
machines = [
rec {
id = "qemu_1";
name = "qemu_machine";
tags = [ "qemu" "no_power" ];
power_interface = "none";
mac = "02-03-04-05-06-07";
config_vars = [ "linux_terminal = ttyS0" ];
serial_interfaces = [
{
type = "custom";
cmd = "qemu-system-x86_64 -boot n -net user,tftp=${tftpFolder},bootfile=ipxe.kpxe -net nic,macaddr=${mac} -nographic";
}
];
updated_at = "2022-08-04T00:00:00Z";
}
];
iscsiConfig = {
targetIp = machinesInterfaceIp;
targetPort = iscsiPort;
inherit targetIqnPrefix machineIqnPrefix;
};
cacheVersion = 1;
};
# Webserver for serving own SoTest test binaries
# The NixOS QEMU VM mounts a provided host $TMPDIR at /tmp/xchg in the guest
# Use apache because nginx always returns 404 although files are present
httpd = {
enable = true;
adminAddr = "test@example.org";
virtualHosts.localhost = {
documentRoot = testDataFolder;
servedDirs = [
{ urlPath = "/"; dir = "/tmp/xchg"; }
{ urlPath = "/"; dir = testDataFolder; }
];
};
};
};
# When using a test VM, you may set an empty root password for easy access
users.users.root.initialHashedPassword = "";
# Give both the webserver and the Web UI access to the test data storage
users.extraGroups.TestDataAccessors.members = [
config.services.httpd.user
config.systemd.services.sotest-webui.serviceConfig.User
];
systemd.tmpfiles.rules = [ "d ${testDataFolder} 0070 - TestDataAccessors" ];
system.stateVersion = "22.05";
}