Lava-Test Test Definition 1.0¶
The lava_test_shell action provides a way to employ a black-box style testing approach with the target device. It does this by deploying an overlay to the target device; it requires a POSIX system to be running on the target. The test definition format is designed to be flexible, allowing many options on how to do things.
Quick start to Test Definition 1.0¶
A minimal test definition looks like this:
metadata:
  name: passfail
  format: "Lava-Test-Shell Test Definition 1.0"
  description: "A simple passfail test for demo."

run:
  steps:
    - lava-test-case test-1 --result pass
    - lava-test-case test-2 --result fail
Only the mandatory metadata parameters have been included (name, format, description).
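Optional metadata can be added alongside the mandatory fields. A minimal sketch follows; the os and scope fields are optional entries commonly seen in existing Linaro test definitions, and all values here are illustrative:

metadata:
  name: passfail
  format: "Lava-Test-Shell Test Definition 1.0"
  description: "A simple passfail test for demo."
  maintainer:
    - user@example.com
  os:
    - debian
  scope:
    - functional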
Versioned test definitions¶
If your test definition is not part of a git repository, it must include a version parameter in the metadata section, as in the following example.
metadata:
  name: passfail
  format: "Lava-Test-Shell Test Definition 1.0"
  description: "A simple passfail test for demo."
  version: "1.0"
How a lava test shell is run¶
A lava-test-shell is run by:

- building the test definition into a shell script.

  Note

  This shell script will include set -e, so a failing step will abort the entire test run. If you need to specify a step that might fail, finish the command with || true to make that failure not abort the test run (see the sketch after this list).

- copying an overlay onto the device. The overlay contains the test script, the rest of the LAVA Test Helpers, and the setup code which runs the test script when the device boots.
- booting the device and letting the test run.
- retrieving the output from the device and turning it into a test result.
- running subsequent test definitions, if any.
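For instance, a minimal sketch of a step that is allowed to fail; the file path and test-case name are placeholders:

run:
  steps:
    # without || true, a missing file would abort the whole run due to set -e
    - test -e /etc/example.conf || true
    - lava-test-case after-optional-step --result pass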
Writing a test for lava-test-shell¶
For the majority of cases, the above approach is the easiest thing to do: write shell code that outputs “test-case-id: result” for each test case you are interested in. See the Test Developer Guide.
Warning
Older support for parse patterns and fixup dictionaries
is deprecated because the support has proven too difficult to
use and very hard to debug. The syntax is Python but converted
through YAML and the scope is global. The support remains only for
compatibility with existing Lava Test Shell Definitions. In future,
any desired parsing should be moved into a custom script contained within the test definition
repository. This script can simply call lava-test-case
directly
with the relevant options once the data is parsed. This has the
advantage that the log output from LAVA can be tested directly as
input for the script.
When a test runs, $PATH is arranged so that some LAVA-specific utilities are available, as shown in the sketch below.
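For example, a minimal sketch verifying from within the test shell that the helpers are on $PATH; the test-case name is a placeholder:

run:
  steps:
    # the LAVA helper scripts are on $PATH inside the test shell
    - which lava-test-case
    - lava-test-case helpers-on-path --result pass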
lava-test-case¶
lava-test-case records the results of a single test case. For example:
steps:
  - "lava-test-case simpletestcase --result pass"
  - "lava-test-case fail-test --shell false"
It has two forms. One takes arguments to describe the outcome of the test case. The other takes the shell command to run, and the exit code of this shell command is used to produce the test result.
Both forms take the name of the testcase as the first argument.
Specifying results directly¶
The first form takes these additional arguments:
- --result $RESULT: $RESULT should be one of pass/fail/skip/unknown
- --measurement $MEASUREMENT: a numerical measurement associated with the test result
- --units $UNITS: the units of $MEASUREMENT

--result must always be specified. For example:
run:
  steps:
    - "lava-test-case simpletestcase --result pass"
    - "lava-test-case bottle-count --result pass --measurement 99 --units bottles"
If --measurement
is used, --units
must also be specified, even
if the unit is just a count.
The most useful way to produce lava-test-case results is described in Writing custom scripts to support tests, which allows preparation of LAVA results from other sources, complete with measurements. This involves calling lava-test-case from scripts executed by the YAML file:
#!/usr/bin/env python
from subprocess import call


def test_case():
    """
    Calculate something based on a test
    and return the data.
    """
    return {"name": "test-rate", "result": "pass",
            "units": "Mb/s", "measurement": 4.23}


def main():
    data = test_case()
    # every argument passed to call() must be a string,
    # so convert the numeric measurement
    call(
        ['lava-test-case',
         data['name'],
         '--result', data['result'],
         '--measurement', str(data['measurement']),
         '--units', data['units']])
    return 0


if __name__ == '__main__':
    main()
The custom scripts themselves can be called from a lava-test-case using the --shell argument, to test whether failures from the tests caused a subsequent failure in the custom script.
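For instance, a sketch which assumes the Python script above is shipped in the test repository as scripts/rate.py (a hypothetical path):

run:
  steps:
    # a non-zero exit from the custom script records check-rate as a failure
    - "lava-test-case check-rate --shell ./scripts/rate.py"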
Using the exit status of a command¶
The second form of lava-test-case
is indicated by the --shell
argument, for example:
run:
  steps:
    - "lava-test-case fail-test --shell false"
    - "lava-test-case pass-test --shell true"
The result of a shell call will only be recorded as a pass or fail, depending on the exit code of the command.
Using parameters in the job to update the definition¶
Parameters used in the test definition YAML can be controlled from the YAML job file. See the following YAML test definition for an example of how it works.
metadata:
  format: Lava-Test Test Definition 1.0
  name: params-test
  description: "test commands for Linux POSIX images with params"
  version: "1.0"
  maintainer:
    - neil.williams@linaro.org

params:
  VARIABLE_NAME_1: value_1
  VARIABLE_NAME_2: value_2

run:
  steps:
    - lava-test-case test3 --result pass
    - lava-test-case test4 --result fail
    - lava-test-case test5 --result pass --measurement 99 --units bottles
    - lava-test-case test6 --result fail --measurement 0 --units mugs
    - echo $VARIABLE_NAME_1
    - echo $VARIABLE_NAME_2
    - echo $SPACED_VAR
    - echo $PUB_KEY
Download or view params.yaml: examples/test-definitions/params.yaml
This Lava-Test Test Definition 1.0 can be used in a simple QEMU test job:
- test:
    timeout:
      minutes: 5
    definitions:
      - repository: https://gitlab.com/lava/functional-tests.git
        from: git
        path: posix/parameters.yaml
        name: parse-params
        parameters:
          VARIABLE_NAME_1: "first variable value"
          VARIABLE_NAME_2: "second variable value"
Download or view the test job: examples/test-jobs/qemu-stretch-params.yaml
lava-background-process-start¶
This starts a process in the background, for example:
steps:
  - lava-background-process-start MEM --cmd "free -m | grep Mem | awk '{print $3}' >> /tmp/memusage"
  - lava-background-process-start CPU --cmd "grep 'cpu ' /proc/stat"
  - uname -a
  - lava-background-process-stop CPU
  - lava-background-process-stop MEM --attach /tmp/memusage text/plain --attach /proc/meminfo application/octet-stream
The arguments are:
- The name that is used to identify the process later in lava-background-process-stop
- The command line for the process to be run in the background (given with --cmd)
lava-background-process-stop¶
This stops a process previously started in the background using lava-background-process-start. The user can attach files to the test run if there is a need.
For example:
steps:
  - lava-background-process-start MEM --cmd "free -m | grep Mem | awk '{print $3}' >> /tmp/memusage"
  - lava-background-process-start CPU --cmd "grep 'cpu ' /proc/stat"
  - uname -a
  - lava-background-process-stop CPU
  - lava-background-process-stop MEM --attach /tmp/memusage text/plain --attach /proc/meminfo application/octet-stream
The arguments are:
- The name that was specified in lava-background-process-start
- (optional) An indication that you want to attach file(s) to the test run with a specified MIME type, given with --attach. See Recording test case data.
Handling test attachments¶
Handling of attachments is under the control of the test writer. A separate publishing location can be configured, or, for text-based data, the contents can simply be output into the log file.
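For instance, a minimal sketch of the log-file route, reusing the /tmp/memusage file from the example above:

run:
  steps:
    # text-based data can simply be written into the job log
    - cat /tmp/memusage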
Deprecated elements¶
Handling Dependencies (Debian)¶
Warning
The install element of Lava-Test Test Definition 1.0 is DEPRECATED. See Write portable test definitions. Newly written Lava-Test Test Definition 1.0 files should not use install.
If your test requires some packages to be installed before it runs, it can express that in the install section with:
install:
  deps:
    - linux-libc-dev
    - build-essential
Adding Git Repositories¶
If the test needs code from a shared repository, the action can clone this data on your behalf with:
install:
  git-repos:
    - git://git.linaro.org/people/davelong/lt_ti_lava.git

run:
  steps:
    - cd lt_ti_lava
    - echo "now in the git cloned directory"
git-repos¶
There are several options for customizing git repository handling in the git-repos action, for example:
install:
  git-repos:
    - url: https://gitlab.com/lava/lava.git
      skip_by_default: False
    - url: https://gitlab.com/lava/lava.git
      destination: lava-d-r
      branch: release
    - url: https://gitlab.com/lava/lava.git
      destination: lava-d-s
      branch: staging
- url is the git repository URL.
- skip_by_default (optional) accepts True or False. Repositories can be skipped by default in the test definition YAML and enabled for particular jobs directly in the job submission YAML, and vice versa.
- destination (optional) is the directory into which the git repository given in url should be cloned, overriding normal git behavior.
- branch (optional) is the branch within the git repository given in url that should be checked out after cloning.
Install Steps¶
Warning
The install element of Lava-Test Test Definition 1.0 is DEPRECATED. See Write portable test definitions. Newly written Lava-Test Test Definition 1.0 files should not use install.
Before the test shell code is executed, any install work is optionally done first. For example, if you needed to build some code from a git repository, you could do:
install:
  git-repos:
    - git://git.linaro.org/people/davelong/lt_ti_lava.git
  steps:
    - cd lt_ti_lava
    - make
Note
The repo steps are done in the dispatcher itself. The install steps are run directly on the target.
Parse patterns¶
Warning
Parse patterns and fixup dictionaries are confusing and hard to
debug. The syntax is Python and the support remains for compatibility with
existing Lava Test Shell Definitions. With LAVA V2, it is recommended to
move parsing into a custom script contained within
the test definition repository. The script can simply call
lava-test-case
directly with the relevant options once the data is
parsed. This has the advantage that the log output from LAVA can be tested
directly as input for the script.
You may need to incorporate an existing test that doesn’t output results in the pass/fail/skip/unknown format required by LAVA.
The parse section has a fixup mechanism that can help:
parse:
  pattern: "(?P<test_case_id>.*-*)\\s+:\\s+(?P<result>(PASS|FAIL))"
  fixupdict:
    PASS: pass
    FAIL: fail
Note
The pattern can be double-quoted or single-quoted. If it is double-quoted, special characters need to be escaped; otherwise, no escaping is necessary.
Single quote example:
parse:
  pattern: '(?P<test_case_id>.*-*)\s+:\s+(?P<result>(PASS|FAIL))'
  fixupdict:
    PASS: pass
    FAIL: fail
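As a sketch of the recommended custom-script alternative, assuming a hypothetical parse-results.sh helper shipped in the test definition repository:

run:
  steps:
    # legacy-test and scripts/parse-results.sh are hypothetical; the helper
    # reads the legacy output and calls lava-test-case for each result line
    - ./legacy-test > /tmp/legacy-output.log
    - ./scripts/parse-results.sh /tmp/legacy-output.log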