Configuration Files¶
This page describes the configuration files required by each application binary that a full SoTest deployment consists of. For each file, its options and constraints are described.

You can largely ignore this page if you deploy SoTest using the NixOS modules.

In total, the following admin configuration files exist:
controller.yaml
: Configuration of the SoTest Controller. Holds information on connected machines, their interfaces, and the iSCSI config.

web_app.yaml
: Configuration for the Web UI. Stores information on user logins, additional pages to be displayed in the UI, and configuration for test failure predictions using machine learning.

migrate.yaml
: Configuration for the database migration tool.
The per-project configuration is done in project_config.yaml and can be carried out by a SoTest user. The file holds information on the boot items to be executed, additional dependencies required for running the boot items, and which machines to run them on (machine names only). It is described in detail here.
Common Database Configuration¶
All following application configurations require a database configuration, which is implemented via a TCP/IP connection to a PostgreSQL database instance.
Therefore, each configuration file contains the following database
section:
database:
  host: myhostname
  port: 5432
  user: johndoe
  password: johnspassword
  database: sotestdb
SoTest Controller - controller.yaml¶
binary_tmp_dir
: Path to a directory to temporarily store binary files during a Test Run. The user account that runs the Controller process requires read/write access to this directory.

database
: See the database configuration earlier on this page.

hardlinks_instead_of_copies
: Whether to use hard links instead of copying the files necessary for a boot. By default, selected binaries are copied from the binary_tmp_dir into the TFTP root for each test. To make this faster, hard links can be used instead of copies by setting this field to true. This does not work if the two paths are on different partitions.

http_boot
: Whether to generate iPXE configs that boot from HTTP instead of TFTP. HTTP can be faster than TFTP, but requires the TFTP root to also be served via HTTP.

iscsi_config
: Optional config that includes iSCSI-specific data, see iSCSI Configuration.

machines
: List of machines that are maintained by the Controller, see Machine Configuration.

tftproot
: TFTP directory that is used for PXE boots by the test machines. This directory must be writable by the Controller process.

update_interval
: Time interval in seconds at which the database is updated.

cache_version
: Number that should be incremented whenever the result cache should be invalidated.
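For instance, the two optional speed-ups above could be enabled with the following fragment (values are illustrative; note the partition and HTTP-serving constraints):

hardlinks_instead_of_copies: true  # binary_tmp_dir and tftproot must be on the same partition
http_boot: true                    # the TFTP root must additionally be served over HTTP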
Machine Configuration¶
config_vars
: Optional list of machine-specific variables, which can be used in boot items, see Machine-Specific Variables and Extra Files.

id
: Unique identifier. No two machines can have the same id.

mac
: Unique hardware MAC address for iPXE and AMT. Must be the exact MAC address of the network jack used for booting.

name
: Machine name, not necessarily unique. It is added to the list of tags.

power_interface
: Power cycling interface, see Power Configuration.

serial_interfaces
: List of serial communication interfaces, see Serial Configuration.

tags
: List of machine-specific tags, see Tags.

retired
: Flag to retire machines. Default is false. If set, the machine not only acts like it is inactive (not executing tests, ignored in Test Run creation), but is also hidden from users (not shown in the Infrastructure page). Retired machines are always Offline.

updated_at
: Date and time of the last update on that machine.
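For orientation, a minimal machine entry combining these fields could look like the following sketch (all values are illustrative; the Example section below shows more variants):

machines:
  - id: example_1            # must be unique across all machines
    name: example            # added to the tags list
    mac: 00-11-22-33-44-55   # MAC of the network jack used for booting
    power_interface: custom
    power_cmd: ./power_script.sh
    serial_interfaces:
      - type: native
        display_name: native
        serial_file: /dev/ttyUSB0
        serial_baudrate: 115200
    tags:
      - example
    retired: false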
Machine-Specific Variables¶
Machine-specific variables are created using the config_vars key. Variable names can be any combination of a-z, A-Z, 0-9, and _. Variable values can be anything, including spaces, but no line breaks.

Once a variable varName = 123 has been declared in a machine configuration, every @{varName} token in the boot-item sections of the project configuration is substituted with its value. Tests can thus be defined once for all machines and still work with machine-specific information such as the MAC address.
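For illustration, a sketch of both sides of the substitution, assuming a hypothetical variable name mac_addr (the boot-item syntax belongs to the project configuration and is simplified here):

# controller.yaml - declared per machine:
config_vars:
  - mac_addr = fc-15-b4-eb-c6-ca

# project_config.yaml - any boot-item text containing @{mac_addr} is
# substituted with that machine's value, e.g. a kernel command line
# fragment (the parameter name is made up for illustration):
#   ... netdev_mac=@{mac_addr} ...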
To let users work with extra files, the admin needs to:

- add a machine variable similar to extra_files_http_url = http://${server}/sotest/94-13-f9-25-a5-2f to every machine (see the sketch below)
- set up the infrastructure for downloads, e.g., an HTTP server that serves the TFTP directory.
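A sketch of such a variable inside a machine entry (${server} stands for the host serving the TFTP directory; the per-machine directory name is illustrative):

config_vars:
  - extra_files_http_url = http://${server}/sotest/94-13-f9-25-a5-2f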
Power Configuration¶
Supported interfaces for power cycling machines are: AMT, IPMI, Antrax socket, GPI (Raspi powerswitch interface), and a custom interface via an arbitrary command that accepts the inputs On and Off for power changes. Each interface must set the field power_interface plus its respective interface-specific fields. Example configurations for each interface are listed below. Details on GPI configuration options can be found here.
The antrax_switch_off_delay_ms setting is necessary so that machines connected to the socket have enough time to switch off properly. The number of milliseconds required for that differs from machine to machine. The admin needs to make sure that the right amount is configured before enabling the machine.
power_interface: amt
amt_ip: 192.168.1.23
amt_user: admin
amt_pass: Password123

power_interface: ipmi
ipmi_ip: 192.168.1.23
ipmi_user: admin
ipmi_pass: Password123

power_interface: antrax
antrax_host: 192.168.1.23
antrax_port: 1
antrax_switch_off_delay_ms: 1000

power_interface: raspi
raspi_ip: 192.168.1.23
raspi_id: 1
raspi_switch_on_duration_ms: 6000
raspi_switch_off_duration_ms: 100
raspi_cmd_delay_ms: 1000

power_interface: custom
power_cmd: ./power_script.sh
Serial Configuration¶
Supported interfaces for serial communication are AMT, IPMI, a native serial character device, and custom serial output via command line standard output. Each interface must define type and its respective interface-specific fields, and should preferably also set the optional display_name field. Example configurations for each interface are listed below.
type: amt
display_name: amt 23
amt_ip: 192.168.1.23
amt_user: admin
amt_pass: Password123

type: ipmi
display_name: ipmi 23
ipmi_ip: 192.168.1.23
ipmi_user: admin
ipmi_pass: Password123

type: native
display_name: native (standard)
serial_file: /dev/ttyUSB0
serial_baudrate: 115200

type: custom
display_name: serial console server port 8001
cmd: nc 192.168.1.100 8001
Tags¶
Machines in the controller configuration can be tagged with multiple different tags. Tags can describe certain features the machine provides (such as log:legacy-serial or gvt:supported), which project a machine belongs to (project:elrond), or how a machine is booted (boot:uefi, boot:bios).

On the other hand, a boot item description in the project configuration can define tags for which machine execution items should be created. Be aware that a machine must have all tags defined in the boot item (but may have more) in order to be chosen, as sketched below. See Project Configuration for further information from a user perspective.

The machine-tags relation is displayed in the Infrastructure view of the web UI: http://<hostname>:<port>/machines.
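As a sketch of the matching rule (tag names taken from the examples above; the boot-item side is simplified and not authoritative):

# controller.yaml - this machine offers:
tags:
  - boot:uefi
  - project:elrond
  - gvt:supported

# A boot item requiring [boot:uefi, project:elrond] matches this machine:
# the machine has all required tags (and may have more).
# A boot item requiring [boot:bios] does not match.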
iSCSI Configuration¶
Describes the target server IP and port, as well as prefixes for the target and machine IQNs; these are needed to connect to the target server. The iSCSI Qualified Name (IQN) format is used to address the iSCSI initiator (here, a machine) and the target. The fields are composed of:

- the literal iqn.
- a date code (yyyy-mm.) during which the naming authority owned the domain name
- the reversed domain name of the naming authority (e.g. de.cbs)
- an optional : prefixing a storage target name

See IQN addressing on Wikipedia for further information.
iscsi_config:
  target_server: 192.168.1.23
  target_port: 3260
  target_iqn_prefix: "iqn.2021-06.de.cbs.iscsi:"
  machine_iqn_prefix: "iqn.2021-06.de.cbs:"
Web Authentication Configuration¶
Optional authentication tokens for the artifact server. Currently, only GitLab tokens are supported. Each key is the URL prefix that all download links have in common, and its value is a mapping with the key gitlab_private_token and a working authentication token as its value.
auth_methods:
  "http://my-gitlab-instance.com/":
    gitlab_private_token: "abc"
Example¶
machines:
  - id: hpbook_1
    name: hpbook
    mac: fc-15-b4-eb-c6-ca
    power_interface: amt
    # AMT interfaces need additional info like the following:
    amt_ip: 192.168.1.157
    amt_user: admin
    amt_pass: Password123!
    serial_interfaces:
      - type: amt
        display_name: amt 157
        amt_ip: 192.168.1.157
        amt_user: admin
        amt_pass: Password123
    config_vars:
      - variable = value
      - var2 = val2
    tags:
      - laptop
      - amt
  - id: robobox_1
    name: robobox
    mac: 00-25-90-e8-dd-4c
    power_interface: ipmi
    # Analogous to AMT machines, we need IPMI info here:
    display_name: ipmi 152
    ipmi_ip: 192.168.1.152
    ipmi_user: ADMIN
    ipmi_pass: ADMIN
    serial_interfaces:
      - type: native
        serial_file: /dev/tty.usbserial
    tags:
      - ipmi
      - native serial
  - id: intel_nuke_1
    name: intel_nuke
    mac: b8:ae:ed:75:8a:bd
    power_interface: raspi
    raspi_ip: 192.168.1.162
    raspi_id: 42
    raspi_switch_off_duration_ms: 6000
    raspi_switch_on_duration_ms: 100
    raspi_cmd_delay_ms: 1000
    serial_interfaces:
      - type: native
        display_name: native-interface
        serial_file: /dev/tty.usbserial
        serial_baudrate: 115200
  - id: machine_1
    name: machine
    mac: 11-11-11-11-11-11
    power_interface: none
    serial_interfaces:
      - type: custom
        display_name: qemu
        cmd: qemu-system-x86_64 -boot n -net user,tftp=/tmp/tftpFolder,bootfile=ipxe.kpxe -net nic,macaddr=11:11:11:11:11:11 -nographic
    tags:
      - no_power
      - qemu

iscsi_config:
  target_server: 192.168.1.1
  target_port: 3260
  target_iqn_prefix: "iqn.2017-09.de.cbs.iscsi:"
  machine_iqn_prefix: "iqn.2017-09.de.cbs:"

tftproot: /opt/tftproot
binary_tmp_dir: /tmp/sotest_binaries
# Interval time in seconds
update_interval: 2

database:
  host: localhost
  port: 5432
  user: postgres
  password: postgres
  database: postgres

auth_methods:
  "https://my_gitlab.com/":
    gitlab_private_token: "abc"

cache_version: 1
SoTest Web UI - web_app.yaml¶
The web app config holds parameters for the SoTest Web UI service app.
database
: See the database configuration earlier on this page.

login
: Optional list of logins (username, password) for user management.

extra_pages
: Optional list of extra pages to be added to the footer in the Web UI.

prediction_command
: Optional command for failure prediction.

prediction_cwd
: Optional working directory for failure prediction.

storageDir
: The directory where uploaded test data is stored.

storageUrl
: The URL under which the uploaded test data is served.
The TCP port on which the Web UI listens for HTTP requests is a command line parameter (default: 8000). It is used like this:

serve_gui --port 3000
User Management¶
It is possible to protect some features against unauthorized users.

- If the login field is empty or not present, all features are available to every user.
- If a set of (username, password) tuples is given, only logged-in users can
  - use the API to start Test Runs
  - abort Test Runs and executions
  - reset Test Runs and executions
  - change the activity state of machines in the infrastructure interface.
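A minimal sketch of such a login section (credentials are placeholders):

login:
  - [alice, firstSecret]
  - [bob, secondSecret]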
Extra Pages¶
Important pages like legal or privacy policy information can be included via links in the footer. Each extra page consists of a display_name, which will be shown in the footer, a content_file path to a static HTML file, and a route_identifier that defines the URI the page can later be found at. For example, when using the string abc as the identifier, users can reach the extra page at http(s)://my-hostname/abc.
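For example, a hypothetical imprint page served at http(s)://my-hostname/imprint:

extra_pages:
  - display_name: Imprint
    content_file: /var/www/imprint.html
    route_identifier: imprint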
Machine Learning Configuration¶
The SoTest Web UI supports a machine learning feature to optimize the scheduling of execution items. An external script must be provided to enable this feature.
- The default script orders executions per machine in such a way that the items that are likely to fail fastest are executed first. The default script expects a pre-trained model and input via standard input. To use the default script, prediction_command should be ./path/to/failure_prediction -m model.pkl -i -, where model.pkl is the pre-trained model and prediction_cwd is the working directory in which this command is executed. The model can be generated with the generate_history app and the default training script, e.g. ./path/generate_history -c database.yaml -p - | ./path/model_training -p -. It is recommended to update the model regularly, e.g., once a day with a cronjob (see the sketch after this list). Most of these steps are done automatically when the deployment is done with NixOS; you can find more info in the Deployment with NixOS Module section.
- Custom scripts must handle input on standard input: a list of execution item information separated by \n, e.g. trName,biName,machine,retries,execItemId\nmyTestRun,someBootItem,machine1,0,123\n.... The output should be one floating point number per execution item, in unchanged order, on standard output, separated by \n or whitespace, e.g. 0.983\n0.002\n0.123\n.... Execution items are then ordered by their numbers in ascending order. Give the command as prediction_command and the working directory in which it shall be executed as prediction_cwd.
- If prediction_command and prediction_cwd are not present in the configuration, all executions are scheduled in first-come-first-serve order.
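As a hedged sketch of the recommended daily retraining, using the pipeline quoted above (the working directory and schedule are placeholder assumptions):

# Illustrative crontab entry: retrain the prediction model at 03:00 every night.
# /var/lib/sotest is an assumed directory containing database.yaml and the apps.
0 3 * * * cd /var/lib/sotest && ./path/generate_history -c database.yaml -p - | ./path/model_training -p -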
Example Web UI config - web_app.yaml¶
database:
  host: myhostname
  port: 5432
  user: johndoe
  password: johnspassword
  database: sotestdb
login:
  - [<user>,<password>]
  - [<user>,<password>]
extra_pages:
  - display_name: Data Protection
    content_file: /var/www/gdpr.html
    route_identifier: gdpr
prediction_command: ./path/prediction_script
prediction_cwd: /cwd/
Database Migration Tool - migrate.yaml¶
The DB migration tool needs to be run once for every database schema change, which may happen with a new SoTest release (see the release notes). Please make sure that your database is backed up before performing a migration.
Forgetting to run the DB migration tool between update and service restart will lead to the Web UI and Controller services exiting with an error.
Database migrations can potentially take a long time if the changes are complex and your database has grown very large.
Note that the SoTest NixOS Module provides an autoMigration setting that performs migrations automatically.
Configuration fields¶
The migrate config holds configuration parameters for the migration app.
database
: See the database configuration earlier on this page.
Example migration app config¶
database:
  host: <host>
  port: <port>
  user: <user>
  password: <password>
  database: <database_name>