Using the EESSI test suite¶
This page covers the usage of the EESSI test suite.
We assume you have already installed and configured the EESSI test suite on your system.
Listing available tests¶
To list the tests that are available in the EESSI test suite,
use reframe --list (or reframe -L for short).
If you have properly configured ReFrame, you should see a (potentially long) list of checks in the output:
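The exact output depends on your ReFrame version and configuration, but it should look roughly like this (the test names shown are illustrative):
$ reframe --list
...
[List of matched checks]
- GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f
...
Found 1234 check(s)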
Note
When using --list, checks are only generated based on modules that are available in the system where the reframe command is invoked.
The system partitions specified in your ReFrame configuration file are not taken into account when using --list.
So, if --list produces an overview of 50 checks, and you have 4 system partitions in your configuration file, actually running the test suite may result in (up to) 200 checks being executed.
Performing a dry run¶
To perform a dry run of the EESSI test suite, use reframe --dry-run:
$ reframe --dry-run
...
[==========] Running 1234 check(s)
[----------] start processing checks
[ DRY ] GROMACS_EESSI ...
...
[----------] all spawned checks have finished
[ PASSED ] Ran 1234/1234 test case(s) from 1234 check(s) (0 failure(s), 0 skipped, 0 aborted)
Note
When using --dry-run, the system partitions listed in your ReFrame configuration file are also taken into account when generating checks, next to available modules and test parameters, which is not the case when using --list.
Running the (full) test suite¶
To actually run the (full) EESSI test suite and let ReFrame produce a performance report, use reframe --run --performance-report.
We strongly recommend filtering the checks that will be run by using additional options like --system, --name, --tag (see the 'Filtering tests' section below), and doing a dry run first to make sure that the generated checks correspond to what you have in mind.
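For example, a filtered run on a single system partition might look like this (the system and partition names here are illustrative; substitute the ones from your own configuration file):
reframe --run --performance-report --system example:gpu --name GROMACS --tag 1_node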
ReFrame output and log files¶
ReFrame will generate various output and log files:
- a general ReFrame log file with debug logging on the ReFrame run (incl. selection of tests, generating checks, test results, etc.);
- stage directories for each generated check, in which the checks are run;
- output directories for each generated check, which include the test output;
- performance log files for each test, which include performance results for the test runs;
We strongly recommend controlling where these files go by using the common logging configuration that is provided by the EESSI test suite in your ReFrame configuration file, and by setting $RFM_PREFIX (avoid using the command line option --prefix).
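For example, to collect all ReFrame output under a single directory (the path below is just an illustration; pick any location you like):
export RFM_PREFIX=$HOME/reframe_runs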
If you do, and if you use ReFrame v4.3.3 or newer, you should find the output and log files at:
- general ReFrame log file at $RFM_PREFIX/logs/reframe_<datestamp>_<timestamp>.log;
- stage directories in $RFM_PREFIX/stage/<system>/<partition>/<environment>/;
- output directories in $RFM_PREFIX/output/<system>/<partition>/<environment>/;
- performance log files in $RFM_PREFIX/perflogs/<system>/<partition>/<environment>/.
In the stage and output directories, there will be a subdirectory for each check that was run, tagged with a unique hash (like d3adb33f) that is determined based on the specific parameters for that check (see the ReFrame documentation for more details on the test naming scheme).
Filtering tests¶
By default, ReFrame will automatically generate checks for each system partition, based on the tests available in the EESSI test suite, available software modules, and tags defined in the EESSI test suite.
To avoid being overwhelmed by checks, it is recommended to apply filters so ReFrame only generates the checks you are interested in.
Filtering by test name¶
You can filter checks based on the full test name using the --name option (or -n), which includes the value for all test parameters.
Here's an example of a full test name:
GROMACS_EESSI %benchmark_info=HECBioSim/Crambin %nb_impl=cpu %scale=1_node %module_name=GROMACS/2023.1-foss-2022a /d3adb33f @example:gpu+default
To let ReFrame only generate checks for GROMACS, you can use:
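reframe --name GROMACS
(or, equivalently, reframe -n GROMACS)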
To only run GROMACS checks with a particular version of GROMACS, you can use --name to only retain specific GROMACS modules:
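reframe --name %module_name=GROMACS/2023.1
This matches on the %module_name=... part of the full test name shown above; adjust the version string to the GROMACS module you want to test.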
Likewise, you can filter on any part of the test name.
You can also select one specific check using the corresponding test hash, which is also part of the full test name (see /d3adb33f in the example above), for example:
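reframe --name /d3adb33f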
The argument passed to --name is interpreted as a Python regular expression, so you can use wildcards like .*, character ranges like [0-9], use ^ to specify that the pattern should match from the start of the test name, etc.
Use --list or --dry-run to check the impact of using the --name option.
Filtering by system (partition)¶
By default, ReFrame will generate checks for each system partition that is listed in your configuration file.
To let ReFrame only generate checks for a particular system or system partition, you can use the --system option.
For example:
- To let ReFrame only generate checks for the system named example, use the first command shown below.
- To let ReFrame only generate checks for the gpu partition of the system named example, use the second command shown below.
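--system accepts either a system name or a <system>:<partition> pair, as defined in your ReFrame configuration file:
reframe --system example
reframe --system example:gpu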
Use --dry-run to check the impact of using the --system option.
Filtering by tags¶
To filter tests using one or more tags, you can use the --tag option.
Using --list-tags you can get a list of known tags.
To check the impact of this on generated checks by ReFrame, use --list or --dry-run.
CI tag¶
For each software that is included in the EESSI test suite, a small test is tagged with CI to indicate it can be used in a Continuous Integration (CI) environment.
Hence, you can use this tag to let ReFrame only generate checks for small test cases, for example:
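reframe --tag CI
(Combine this with --list, --dry-run, or --run as needed.)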
scale tags¶
The EESSI test suite defines a set of custom tags that control the scale of checks, which specify how many cores/GPUs/nodes should be used for running a check. The number of cores and GPUs serves as an upper limit; the actual count depends on the specific configuration of cores, GPUs, and sockets within the node, as well as the specific test being carried out.
tag name | description |
---|---|
1_core | using 1 CPU core and 1 GPU |
2_cores | using 2 CPU cores and 1 GPU |
4_cores | using 4 CPU cores and 1 GPU |
1_cpn_2_nodes | using 1 CPU core per node, 1 GPU per node, and 2 nodes |
1_cpn_4_nodes | using 1 CPU core per node, 1 GPU per node, and 4 nodes |
1_8_node | using 1/8th of a node (12.5% of available cores/GPUs, 1 at minimum) |
1_4_node | using a quarter of a node (25% of available cores/GPUs, 1 at minimum) |
1_2_node | using half of a node (50% of available cores/GPUs, 1 at minimum) |
1_node | using a full node (all available cores/GPUs) |
2_nodes | using 2 full nodes |
4_nodes | using 4 full nodes |
8_nodes | using 8 full nodes |
16_nodes | using 16 full nodes |
Using multiple tags¶
To filter tests using multiple tags, you can:
- use | as separator to indicate that one of the specified tags must match (logical OR, for example --tag='1_core|2_cores');
- use the --tag option multiple times to indicate that all specified tags must match (logical AND, for example --tag CI --tag 1_core).
Example commands¶
Running all GROMACS tests on 4 cores on the cpu partition:
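(Assuming your configuration defines a system named example with a cpu partition; substitute your own system and partition names.)
reframe --run --performance-report --system example:cpu --name GROMACS --tag 4_cores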
List all checks for TensorFlow 2.11 using a single node:
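(Assuming the TensorFlow checks expose a module_name parameter in their full test name, like the GROMACS example above; adjust the version to the TensorFlow module available on your system.)
reframe --list --name %module_name=TensorFlow/2.11 --tag 1_node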
Dry run of TensorFlow CI checks on a quarter (1/4) of a node (on all system partitions):
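(Here TensorFlow is simply used as a name pattern matched against the full test name.)
reframe --dry-run --name TensorFlow --tag CI --tag 1_4_node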
Overriding test parameters (advanced)¶
You can override test parameters using the --setvar option (or -S).
This can be done either globally (for all tests), or only for specific tests (which is recommended when using --setvar).
For example, to run all GROMACS checks with a specific GROMACS module, you can use:
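reframe --run --setvar GROMACS_EESSI.module_name=GROMACS/2023.1-foss-2022a
This pins the module_name parameter of the GROMACS_EESSI test (see the full test name example above) to a single value; prefixing the variable with the test name ensures the override only applies to that test rather than to all tests. The exact parameter name to override depends on the test you are targeting.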