System integration and automated testing
Genode's portability across kernels and hardware platforms is one of the prime features of the framework. However, each kernel or hardware platform requires different considerations when it comes to system configuration, integration, and booting. When using a particular kernel, profound knowledge of the boot concept and the kernel-specific tools is required. To streamline the testing of system scenarios across the many supported kernels and hardware platforms, the framework is equipped with tools that relieve the system integrator of these peculiarities.
Run tool
The centerpiece of the system-integration infrastructure is the so-called run tool. Directed by a script (run script), it performs all the steps necessary to test a system scenario. Those steps are:
- Building the components of a scenario
- Configuration of the init component
- Assembly of the boot directory
- Creation of the boot image
- Powering-on the test machine
- Loading of the boot image
- Capturing the log output
- Validation of the scenario's behavior
- Powering-off the test machine
Each of those steps depends on various parameters such as the kernel used, the hardware platform that executes the scenario, the way the test hardware is connected to the test infrastructure (e.g., UART, AMT, JTAG, network), the way the test hardware is powered or reset, and the way the scenario is loaded into the test hardware. To accommodate the variety of combinations of these parameters, the run tool consists of an extensible library of modules. The selection and configuration of the modules is expressed in the run-tool configuration. The following types of modules exist:
- boot-dir modules
  These modules contain the functionality to populate the boot directory and are specific to each kernel. It is mandatory to include the module corresponding to the kernel used.
  (the available modules are: linux, hw, okl4, fiasco, pistachio, nova, sel4, foc)
- image modules
  These modules wrap up all components used by the run script in a specific format and thereby prepare them for execution. Depending on the kernel used, different formats come into play. These modules also handle the creation of ISO and disk images.
  (the available modules are: uboot, disk, iso)
- load modules
  These modules handle the way the components are transferred to the target system. Depending on the kernel used, there are various options for passing on the components. For example, loading via TFTP or JTAG is handled by the modules of this category.
  (the available modules are: tftp, jtag, fastboot, ipxe)
- log modules
  These modules handle how the output of the currently executed run script is captured.
  (the available modules are: qemu, linux, serial, amt)
- power_on modules
  These modules are used for bringing the target system into a defined state, e.g., by starting or rebooting the system.
  (the available modules are: qemu, linux, softreset, amt, netio)
- power_off modules
  These modules are used for turning the target system off after the execution of a run script.
Each module has the form of a script snippet located under the tool/run/<step>/ directory where <step> is a subdirectory named after the module type. Further instructions about the use of each module (e.g., additional configuration arguments) can be found in the form of comments inside the respective script snippets. Thanks to this modular structure, an extension of the tool kit comes down to adding a file to the corresponding module-type subdirectory. This way, custom work flows (such as tunneling JTAG over SSH) can be accommodated fairly easily.
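For illustration, a custom power_on module could be added as the file tool/run/power_on/relay. The module name, the relayctl command, and its arguments are made up for this sketch; the get_cmd_arg helper is used in the same way as by the existing modules, and the overall structure follows the conventions described in the section Run modules below.

# hypothetical module tool/run/power_on/relay: reset the target machine
# by power-cycling a network-controlled relay via a fictitious relayctl tool

# accessor for the value of the --power-on-relay-host argument
proc power_on_relay_host { } { return [get_cmd_arg --power-on-relay-host ""] }

proc run_power_on { } {
	# power-cycle the relay to force a cold boot of the target
	if {[catch {
		exec relayctl --host [power_on_relay_host] off
		sleep 1
		exec relayctl --host [power_on_relay_host] on
	}]} {
		return false }
	return true
}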
Run-tool configuration examples
To execute a run script, a combination of modules may be used. The combination is controlled via the RUN_OPT declaration contained in the build directory's etc/build.conf file. The following examples illustrate the selection and configuration of different run modules:
Executing NOVA in Qemu
RUN_OPT = --include boot_dir/nova \
          --include power_on/qemu --include log/qemu --include image/iso
By including boot_dir/nova, the run tool assembles a boot directory equipped with a boot loader and a boot-loader configuration that is able to bootstrap the NOVA kernel. The combination of the modules power_on/qemu and log/qemu prompts the run tool to spawn the Qemu emulator with the generated boot image and fetch the log output of the emulated machine from its virtual comport. The specification of image/iso tells the run tool to use a bootable ISO image as a boot medium as opposed to a disk image.
Executing NOVA on a real x86 machine using AMT
The following example uses Intel's Advanced Management Technology (AMT) to remotely reset a physical target machine (power_on/amt) and to capture the serial output over the network (log/amt). In contrast to the example above, the system scenario is supplied via TFTP (load/tftp). Note that the example requires a working network-boot setup including a TFTP server, a DHCP server, and a PXE boot loader.
RUN_OPT = --include boot_dir/nova \
          --include power_on/amt \
                    --power-on-amt-host 10.23.42.13 \
                    --power-on-amt-password 'foo!' \
          --include load/tftp \
                    --load-tftp-base-dir /var/lib/tftpboot \
                    --load-tftp-offset-dir /x86 \
          --include log/amt \
                    --log-amt-host 10.23.42.13 \
                    --log-amt-password 'foo!'
If the test machine has a comport connection to the machine where the run tool is executed, the log/serial module may be used instead of log/amt:
--include log/serial --log-serial-cmd 'picocom -b 115200 /dev/ttyUSB0'
Executing base-hw on a Raspberry Pi
The following example boots a system scenario based on the base-hw kernel on a Raspberry Pi that is powered via a network-controllable power plug (netio). The Raspberry Pi is connected to a JTAG debugger, which is used to load the system image onto the device.
RUN_OPT = --include boot_dir/hw \
          --include power_on/netio \
                    --power-on-netio-ip 10.23.42.5 \
                    --power-on-netio-user admin \
                    --power-on-netio-password secret \
                    --power-on-netio-port 1 \
          --include power_off/netio \
                    --power-off-netio-ip 10.23.42.5 \
                    --power-off-netio-user admin \
                    --power-off-netio-password secret \
                    --power-off-netio-port 1 \
          --include load/jtag \
                    --load-jtag-debugger \
                        /usr/share/openocd/scripts/interface/flyswatter2.cfg \
                    --load-jtag-board \
                        /usr/share/openocd/scripts/interface/raspberrypi.cfg \
          --include log/serial \
                    --log-serial-cmd 'picocom -b 115200 /dev/ttyUSB0'
Meaningful default behavior
The create_builddir tool introduced in Section Using the build system equips a freshly created build directory with a meaningful default configuration that depends on the selected platform and the used kernel. For example, when creating a build directory for the x86_64 base platform and building a scenario with KERNEL=linux, RUN_OPT is automatically defined as
RUN_OPT = --include boot_dir/linux \
          --include power_on/linux --include log/linux
Run scripts
Using run scripts, complete system scenarios can be described in a concise and kernel-independent way. As described in Section A simple system scenario, a run script can be used to integrate and test-drive the scenario directly from the build directory. The best way to get acquainted with the concept is by reviewing the run script for the hello-world example presented in Section Defining a system scenario. It performs the following steps, which are condensed into a short sketch after the list:
- Building the components needed for the system using the build command. This command instructs the build system to compile the targets listed in the brace block. It has the same effect as manually invoking make with the specified argument from within the build directory.
- Creating a new boot directory using the create_boot_directory command. The integration of the scenario is performed in a dedicated directory at <build-dir>/var/run/<run-script-name>/. When the run script is finished, this boot directory will contain all components of the final system.
- Installing the configuration for the init component into the boot directory using the install_config command. The argument to this command is written to a file called config within the boot directory. It will eventually be loaded as a boot module and made available to the init component by core's ROM service. The configuration of init is explained in Chapter System configuration.
- Creating a bootable system image using the build_boot_image command. This command copies the specified list of files from the <build-dir>/bin/ directory to the boot directory and executes the steps needed to transform the content of the boot directory into a bootable form. In the most common case, the arguments of build_boot_image correspond to the results of the prior build step. To avoid the need to manually maintain the consistency between the arguments of both steps, the build_artifacts function provides a handy way to express the common case:

  build_boot_image [build_artifacts]

  Under the hood, the run tool invokes the run-module types boot_dir and image. Depending on the run-tool configuration, the resulting boot image may have the form of an ISO image, a disk image, or a bootable ELF image.
- Executing the system image using the run_genode_until command. Depending on the run-tool configuration, the system image is executed using an emulator or a physical machine. Under the hood, this step invokes the run modules of the types power_on, load, log, and power_off. For most platforms, Qemu is used by default. On Linux, the scenario is executed by starting core directly from the boot directory. The run_genode_until command takes a regular expression as argument. If the log output of the scenario matches the specified pattern, the command returns. If forever is specified as argument, the command will never return. If a regular expression is specified, an additional argument determines a timeout in seconds. If the regular expression does not match within this timeout, the run script aborts.
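Condensed to its essence, a run script in the spirit of the hello example combines these commands as sketched below. The build targets, the placeholder init configuration, and the expected log pattern are merely illustrative and may deviate from the actual hello.run script.

# build the components of the scenario
build { core init hello }

create_boot_directory

# configuration for the init component, written to the boot module "config"
install_config {
<config>
	<!-- init configuration as explained in Chapter "System configuration",
	     including a start node for the hello component -->
</config>
}

# copy the build results to the boot directory and create the boot image
build_boot_image [build_artifacts]

# wait for the expected log output, abort after 10 seconds otherwise
run_genode_until {Hello world} 10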
After the successful completion of a run script, the run tool prints the message "Run script execution successful.".
Note that the hello.run script does not contain kernel-specific information. Therefore it can be executed from the build directory of any base platform via the command make run/hello KERNEL=<kernel> BOARD=<board>. When invoking make with an argument of the form run/<run-script>, the build system searches all repositories for a run script with the specified name. The run script must be located in one of the repositories' run/ subdirectories and have the file extension .run.
The run mechanism explained
The run tool is based on expect, which is an extension of the Tcl scripting language that allows for the scripting of interactive command-line-based programs. When the user invokes a run script via make run/<run-script>, the build system invokes the run tool at <genode-dir>/tool/run/run with the run script and the content of the RUN_OPT definition as arguments. The run tool is an expect script that has no other purpose than defining several commands used by run scripts and including the run modules as specified by the run-tool configuration. Whereas tool/run/run provides the generic commands, the run modules under tool/run/<module>/ contain all the peculiarities of the various kernels and boot strategies. The run modules thereby document precisely how the integration and boot concept works for each kernel platform.
Run modules
Each module consists of an expect source file located in one of the existing directories of a category. It is named implicitly by its location and the name of the source file, e.g., image/iso is the name of the image module that creates an ISO image. The source file contains one mandatory function:
run_<module> { <module-args> }
The function is called when the corresponding step is executed by the run tool. It returns true if its execution was successful and false otherwise. Certain modules may also call exit on failure.
A module may have arguments, which are, by convention, prefixed with the name of the module. For example, power_on/amt has an argument called --power-on-amt-host. By convention, the modules contain accessor functions for argument values. For example, the function power_on_amt_host in the run module power_on/amt returns the value supplied to the argument --power-on-amt-host. Thereby, a run script can access the value of such an argument in a defined way by calling power_on_amt_host. Arguments without a value are treated similarly. For example, for querying the presence of the argument --image-uboot-no-gzip, the run module image/uboot provides the corresponding function image_uboot_use_no_gzip. In addition to these functions, a module may have further public functions. Those functions may be used by run scripts or other modules. To enable a run script or module to query the presence of another module, the run tool provides the function have_include. For example, the presence of the load/tftp module can be checked by calling have_include with the argument "load/tftp".
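As a brief illustration of these conventions, the following hypothetical excerpt shows how a run script or module may evaluate arguments and probe for other modules. The printed messages are placeholders.

# value of the --power-on-amt-host argument as supplied via RUN_OPT
set amt_host [power_on_amt_host]
puts "resetting the target machine via AMT host $amt_host"

# presence of the valueless --image-uboot-no-gzip argument
if {[image_uboot_use_no_gzip]} {
	puts "the uImage will not be compressed" }

# presence of another run module
if {[have_include "load/tftp"]} {
	puts "boot modules will be loaded via TFTP" }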
Using run scripts to implement integration tests
Because run scripts are actually expect scripts, the whole arsenal of language features of the Tcl scripting language is available to them. This turns run scripts into powerful tools for the automated execution of test cases. A good example is the run script at repos/libports/run/lwip.run, which tests the lwIP stack by running a simple Genode-based HTTP server on the test machine. It fetches and validates an HTML page from this server. The run script uses a regular expression as argument to the run_genode_until command to detect the state when the web server becomes ready, subsequently executes the lynx shell command to fetch the web page, and employs Tcl's support for regular expressions to validate the result. The run script works across all platforms that have network support. To accommodate the diversity of platforms, parts of the run script depend on the spec values as defined for the build directory. The spec values are probed via the have_spec function. Depending on the probed spec values, the run script uses the append_if and lappend_if commands to conditionally assemble the init configuration and the list of boot modules.
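The following stripped-down fragment illustrates this pattern. It does not reproduce the actual lwip.run script; the spec value, the component and module names, the port, and the match patterns are placeholders.

# assemble the init configuration depending on the platform
set config {
<config>
	<!-- platform-independent start nodes, including the HTTP server -->
}
append_if [have_spec x86] config {
	<!-- start node of an x86-specific network driver -->
}
append config { </config> }
install_config $config

# list of boot modules, extended conditionally
set boot_modules { core init test-http_server }
lappend_if [have_spec x86] boot_modules x86_nic_driver
build_boot_image $boot_modules

# wait until the server reports readiness, then fetch and validate the page
run_genode_until {.*server started.*\n} 60
set page [exec lynx -dump http://localhost:5555]
if {![regexp {Hello} $page]} {
	puts stderr "unexpected page content"
	exit 1
}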
To use the run mechanism efficiently, a basic understanding of the Tcl scripting language is required. Furthermore, the functions provided by tool/run/run and the run modules under tool/run/ should be studied.
Automated testing across base platforms
To execute one or multiple test cases on more than one base platform, there exists a dedicated tool at tool/autopilot. Its primary purpose is the nightly execution of test cases. The tool takes a list of platforms and a list of run scripts as arguments and executes each run script on each platform. A platform is a triplet of CPU architecture, board, and kernel. For example, the following command instructs autopilot to generate a build directory for the x86_64 architecture and to execute the log.run script for the board-kernel combinations NOVA on a PC and seL4 on a PC.
autopilot -t x86_64-pc-sel4 -t x86_64-pc-nova -r log
The build directory for each architecture is created at /tmp/autopilot.<username>/<architecture> and the output of each run script is written to a file called <architecture>.<board>.<kernel>.<run-script>.log. On stderr, autopilot prints statistics about whether or not each run script executed successfully on each platform. If at least one run script failed, autopilot returns a non-zero exit code, which makes it straightforward to include autopilot in an automated build-and-test environment.