
Test Framework

The Zephyr Test Framework (Ztest) provides a simple testing framework intended to be used during development. It provides basic assertion macros and a generic test structure.

The framework can be used in two ways, either as a generic framework for integration testing, or for unit testing specific modules.

Quick start - Integration testing

A simple working base is located at samples/subsys/testsuite/integration. Just copy the files to tests/ and edit them for your needs. The test will then be automatically built and run by the twister script. If you are testing the bar component of foo, you should copy the sample folder to tests/foo/bar. It can then be tested with:

./scripts/twister -s tests/foo/bar/test-identifier

In the example above, tests/foo/bar signifies the path to the test, and test-identifier references a test defined in the testcase.yaml file.

To run all tests defined in a test project, run:

./scripts/twister -T tests/foo/bar/

The sample contains the following files:

CMakeLists.txt

# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.20.0)
find_package(Zephyr REQUIRED HINTS $ENV{ZEPHYR_BASE})
project(integration)

FILE(GLOB app_sources src/*.c)
target_sources(app PRIVATE ${app_sources})

testcase.yaml

tests:
  # section.subsection
  testing.ztest:
    build_only: true
    platform_allow: native_posix
    tags: testing

prj.conf

CONFIG_ZTEST=y
CONFIG_ZTEST_NEW_API=y

src/main.c (see best practices)

/*
 * Copyright (c) 2016 Intel Corporation
 *
 * SPDX-License-Identifier: Apache-2.0
 */

#include <zephyr/ztest.h>

ZTEST_SUITE(framework_tests, NULL, NULL, NULL, NULL, NULL);

/**
 * @brief Test Asserts
 *
 * This test verifies various assert macros provided by ztest.
 *
 */
ZTEST(framework_tests, test_assert)
{
	zassert_true(1, "1 was false");
	zassert_false(0, "0 was true");
	zassert_is_null(NULL, "NULL was not NULL");
	zassert_not_null("foo", "\"foo\" was NULL");
	zassert_equal(1, 1, "1 was not equal to 1");
	zassert_equal_ptr(NULL, NULL, "NULL was not equal to NULL");
}

A test project may consist of multiple sub-tests or smaller tests that test either functionality or APIs. Functions implementing a test should follow the guidelines below:

  • Test case function names should be prefixed with test_

  • Test cases should be documented using doxygen

  • Test function names should be unique within the section or component being tested

An example can be seen below:

/**
 * @brief Test Asserts
 *
 * This test verifies the zassert_true macro.
 */
static void test_assert(void)
{
        zassert_true(1, "1 was false");
}

The above test is then enabled as part of the testsuite using:

ztest_unit_test(test_assert)

Listing Tests

Tests (test projects) in the Zephyr tree consist of many testcases that run as part of a project and test similar functionality, for example an API or a feature. The twister script can parse the testcases in all test projects or a subset of them, and can generate reports on a granular level, i.e. if cases have passed or failed or if they were blocked or skipped.

Twister parses the source files looking for test case names, so you can list all kernel test cases, for example, by entering:

twister --list-tests -T tests/kernel

Skipping Tests

Special- or architecture-specific tests cannot run on all platforms and architectures, however we still want to count those and report them as being skipped. Because the test inventory and the list of tests is extracted from the code, adding conditionals inside the test suite is sub-optimal. Tests that need to be skipped for a certain platform or feature need to explicitly report a skip using ztest_test_skip(). If the test runs, it needs to report either a pass or fail. For example:

#ifdef CONFIG_TEST1
void test_test1(void)
{
        zassert_true(1, "true");
}
#else
void test_test1(void)
{
        ztest_test_skip();
}
#endif


void test_main(void)
{
        ztest_test_suite(common,
                         ztest_unit_test(test_test1),
                         ztest_unit_test(test_test2)
                         );
        ztest_run_test_suite(common);
}

Use the following macro at the start of your test to skip it when the given Kconfig option is enabled.

#define Z_TEST_SKIP_IFDEF(config)

For example:

void test_test1(void)
{
        Z_TEST_SKIP_IFDEF(CONFIG_BUGxxxxx);
        zassert_equal(1, 0, NULL);
}

Quick start - Unit testing

Ztest can be used for unit testing. This means that rather than including the entire Zephyr OS for testing a single function, you can focus the testing effort on the specific module in question. This speeds up testing since only the module has to be compiled in, and the tested functions are called directly.

Since you won’t be including basic kernel data structures that most code depends on, you have to provide function stubs in the test. Ztest provides some helpers for mocking functions, as demonstrated below.

In a unit test, mock objects can simulate the behavior of complex real objects and are used to decide whether a test failed or passed by verifying whether an interaction with an object occurred, and if required, to assert the order of that interaction.
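
For example, a stub for a hypothetical dependency of the module under test might look like the sketch below; hardware_read_register, its canned return value, and the call counter are invented for the illustration:

/* Stub replacing a hypothetical dependency of the module under test.
 * In a unit test build the real implementation is not linked in, so
 * the test provides its own minimal version. */
static int stub_call_count;

int hardware_read_register(uint32_t addr)
{
	(void)addr;		/* the address is irrelevant for this stub */
	stub_call_count++;	/* record the interaction for later asserts */
	return 0x5A;		/* canned value the test expects */
}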

Best practices for declaring the test suite

twister and other validation tools need to obtain the list of subcases that a Zephyr ztest test image will expose.

Rationale

All of this is for the purpose of traceability. It’s not enough to have only a semaphore test project. We also need to show that we have test points for all APIs and functionality, and that we can trace back to documentation of the API and to functional requirements.

The idea is that test reports show results for every sub-testcase as passed, failed, blocked, or skipped. Reporting on only the high-level test project level, particularly when tests do too many things, is too vague.

There are two alternative approaches to writing tests. The first, and more verbose, approach is to directly declare and run the test suites. Here is a generic template for a test showing the expected use of ztest_test_suite():

#include <zephyr/ztest.h>

extern void test_sometest1(void);
extern void test_sometest2(void);
#ifndef CONFIG_WHATEVER              /* Conditionally skip test_sometest3 */
void test_sometest3(void)
{
     ztest_test_skip();
}
#else
extern void test_sometest3(void);
#endif
extern void test_sometest4(void);
...

void test_main(void)
{
     ztest_test_suite(common,
                         ztest_unit_test(test_sometest1),
                         ztest_unit_test(test_sometest2),
                         ztest_unit_test(test_sometest3),
                         ztest_unit_test(test_sometest4)
                );
     ztest_run_test_suite(common);
}

Alternatively, it is possible to split tests across multiple files using ztest_register_test_suite() which bypasses the need for extern:

#include <zephyr/ztest.h>

void test_sometest1(void) {
      zassert_true(1, "true");
}

ztest_register_test_suite(common, NULL,
                          ztest_unit_test(test_sometest1)
                          );

The above sample simply registers the test suite and uses a NULL pragma function (more on that later). It is important to note that the test suite isn’t directly run in this file. Instead, two alternatives exist for running the suite. The first is to do nothing: a default test_main function is provided by ztest. This is the preferred approach if the test doesn’t involve a state and doesn’t require use of the pragma.

In the case of an integration test, it is possible that some general state needs to be set between test suites. This can be thought of as a state diagram in which test_main simply goes through various actions that modify the board’s state, and different test suites need to run. This is achieved in the following example:

#include <zephyr/ztest.h>

struct state {
      bool is_hibernating;
      bool is_usb_connected;
};

static bool pragma_always(const void *state)
{
      return true;
}

static bool pragma_not_hibernating_not_connected(const void *s)
{
      const struct state *state = s;
      return !state->is_hibernating && !state->is_usb_connected;
}

static bool pragma_usb_connected(const void *s)
{
      return ((const struct state *)s)->is_usb_connected;
}

ztest_register_test_suite(baseline, pragma_always,
                          ztest_unit_test(test_case0));
ztest_register_test_suite(before_usb, pragma_not_hibernating_not_connected,
                          ztest_unit_test(test_case1),
                          ztest_unit_test(test_case2));
ztest_register_test_suite(with_usb, pragma_usb_connected,
                          ztest_unit_test(test_case3),
                          ztest_unit_test(test_case4));

void test_main(void)
{
      struct state state = { .is_hibernating = true }; /* start hibernating so only `baseline` runs first */

      /* Should run `baseline` test suite only. */
      ztest_run_registered_test_suites(&state);

      /* Simulate power on and update state. */
      emulate_power_on();
      /* Should run `baseline` and `before_usb` test suites. */
      ztest_run_registered_test_suites(&state);

      /* Simulate plugging in a USB device. */
      emulate_plugging_in_usb();
      /* Should run `baseline` and `with_usb` test suites. */
      ztest_run_registered_test_suites(&state);

      /* Verify that all the registered test suites actually ran. */
      ztest_verify_all_registered_test_suites_ran();
}

For twister to parse source files and create a list of subcases, the declarations of ztest_test_suite() and ztest_register_test_suite() must follow a few rules:

What to avoid:

  • packing multiple testcases in one source file

    void test_main(void)
    {
    #ifdef TEST_feature1
            ztest_test_suite(feature1,
                             ztest_unit_test(test_1a),
                             ztest_unit_test(test_1b),
                             ztest_unit_test(test_1c)
                             );
            ztest_run_test_suite(feature1);
    #endif
    
    #ifdef TEST_feature2
            ztest_test_suite(feature2,
                             ztest_unit_test(test_2a),
                             ztest_unit_test(test_2b)
                             );
            ztest_run_test_suite(feature2);
    #endif
    }
    
  • Do not use #if

            ztest_test_suite(common,
                             ztest_unit_test(test_sometest1),
                             ztest_unit_test(test_sometest2),
    #ifdef CONFIG_WHATEVER
                             ztest_unit_test(test_sometest3),
    #endif
                             ztest_unit_test(test_sometest4),
            ...
    
  • Do not add comments on lines with a call to ztest_unit_test():

    ztest_test_suite(common,
                     ztest_unit_test(test_sometest1),
                     ztest_unit_test(test_sometest2) /* will fail */,
    /* will fail! */ ztest_unit_test(test_sometest3),
                     ztest_unit_test(test_sometest4),
    ...
    
  • Do not define multiple unit / user unit test cases per line

    ztest_test_suite(common,
                     ztest_unit_test(test_sometest1), ztest_unit_test(test_sometest2),
                     ztest_unit_test(test_sometest3),
                     ztest_unit_test(test_sometest4),
    ...
    

Other questions:

  • Why not pre-scan with the C preprocessor and then parse? Or post-scan the ELF file?

    If C pre-processing or the build fails for any reason, we won’t be able to list the subcases.

  • Why not declare them in the YAML testcase description?

    A separate testcase description file would be harder to maintain than just keeping the information in the test source files themselves – only one file to update when changes are made eliminates duplication.

Stress test framework

Zephyr stress test framework (Ztress) provides an environment for executing user functions in multiple priority contexts. It can be used to validate that code is resilient to preemptions. The framework tracks the number of executions and preemptions for each context. Execution can have various completion conditions like timeout, number of executions or number of preemptions.

The framework sets up the environment by creating the requested number of threads (each at a different priority) and, optionally, starting a timer. For each context, a user function (different for each context) is called and then the context sleeps for a randomized number of system ticks. The framework tracks CPU load and adjusts the sleep periods to achieve a higher CPU load. To increase the probability of preemptions, the system clock frequency should be relatively high. The default 100 Hz on QEMU x86 is much too low; it is recommended to increase it to 100 kHz.

The stress test environment is set up and executed using ZTRESS_EXECUTE, which accepts a variable number of arguments. Each argument is a context that is specified by the ZTRESS_TIMER or ZTRESS_THREAD macros. Contexts are specified in descending priority order. Each context specifies its completion conditions by providing the minimum number of executions and preemptions. When all conditions are met and the execution has completed, an execution report is printed and the macro returns. Note that while the test is executing, a progress report is periodically printed.

Execution can be prematurely completed by specifying a test timeout (ztress_set_timeout()) or an explicit abort (ztress_abort()).

The user function’s parameters contain an execution counter and a flag indicating whether this is the last execution.

The example below shows how to set up and run 3 contexts (one of which is the k_timer interrupt handler context). The completion criteria are set to at least 10000 executions of each context and 1000 preemptions of the lowest-priority context. Additionally, the timeout is configured to complete the test after 10 seconds if those conditions are not met. The last argument of each context is the initial sleep time, which will be adjusted throughout the test to achieve the highest CPU load.

ztress_set_timeout(K_MSEC(10000));
ZTRESS_EXECUTE(ZTRESS_TIMER(foo_0, user_data_0, 10000, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_1, user_data_1, 10000, 0, Z_TIMEOUT_TICKS(20)),
               ZTRESS_THREAD(foo_2, user_data_2, 10000, 1000, Z_TIMEOUT_TICKS(20)));
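
The user functions (foo_0, foo_1 and foo_2 above) share a common handler shape. The sketch below assumes the ztress_handler signature of bool (*)(void *user_data, uint32_t cnt, bool last, int prio), and poke_shared_state is a hypothetical function under test:

static bool foo_1(void *user_data, uint32_t cnt, bool last, int prio)
{
	/* Exercise the code under test from this priority context. */
	poke_shared_state();	/* hypothetical function under test */

	if (last) {
		/* cnt holds the number of executions performed in this context. */
	}

	return true;	/* keep executing until the completion conditions are met */
}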

Configuration

Static configuration of Ztress contains:

  • ZTRESS_MAX_THREADS - number of supported threads.

  • ZTRESS_STACK_SIZE - Stack size of created threads.

  • ZTRESS_REPORT_PROGRESS_MS - Test progress report interval.

API reference

Running tests

group ztest_test

This module eases the testing process by providing helpful macros and other testing structures.

Defines

ZTEST(suite, fn)

Create and register a new unit test.

Calling this macro will create a new unit test and attach it to the declared suite. The suite does not need to be defined in the same compilation unit.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_USER(suite, fn)

Define a test function that should run as a user thread.

This macro behaves exactly the same as ZTEST(), but calls the test function in user space if CONFIG_USERSPACE is enabled.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_F(suite, fn)

Define a test function.

This macro behaves exactly the same as ZTEST(), but the function takes an argument for the fixture of type struct suite##_fixture* named this.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.
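
A minimal sketch of fixture usage; my_suite, its fixture type, and the setup function are hypothetical. The suite’s setup function returns the fixture pointer, which ztest passes to each ZTEST_F test as this:

struct my_suite_fixture {
	int initial_count;
};

static void *my_suite_setup(void)
{
	static struct my_suite_fixture fixture = { .initial_count = 42 };

	return &fixture;
}

ZTEST_SUITE(my_suite, NULL, my_suite_setup, NULL, NULL, NULL);

ZTEST_F(my_suite, test_fixture_value)
{
	/* `this` has type struct my_suite_fixture * here. */
	zassert_equal(this->initial_count, 42, NULL);
}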

ZTEST_USER_F(suite, fn)

Define a test function that should run as a user thread.

If CONFIG_USERSPACE is not enabled, this is functionally identical to ZTEST_F(). The test function takes a single fixture argument of type struct suite##_fixture* named this.

Parameters
  • suite – The name of the test suite to attach this test

  • fn – The test function to call.

ZTEST_RULE(name, before_each_fn, after_each_fn)

Define a test rule that will run before/after each unit test.

Functions defined here will run before/after each unit test for every test suite. Along with the callback, the test functions are provided a pointer to the test being run, and the data. This provides a mechanism for tests to perform custom operations depending on the specific test or the data (for example logging may use the test’s name).

Ordering:

  • Test rule’s before function will run before the suite’s before function. This is done to allow the test suite’s customization to take precedence over the rule which is applied to all suites.

  • Test rule’s after function is not guaranteed to run in any particular order.

Parameters
  • name – The name for the test rule (must be unique within the compilation unit)

  • before_each_fn – The callback function to call before each test (may be NULL)

  • after_each_fn – The callback function to call after each test (may be NULL)
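
A sketch of a rule that logs each test as it starts, assuming struct ztest_unit_test exposes test_suite_name and name fields:

static void log_test_start(const struct ztest_unit_test *test, void *data)
{
	ARG_UNUSED(data);
	printk("Running %s.%s\n", test->test_suite_name, test->name);
}

/* NULL after_each_fn: nothing to do after each test. */
ZTEST_RULE(test_logger, log_test_start, NULL);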

ZTEST_DMEM
ZTEST_BMEM
ZTEST_SECTION
ztest_run_test_suite(suite)

Run the specified test suite.

Parameters
  • suite – Test suite to run.

Typedefs

typedef void (*ztest_rule_cb)(const struct ztest_unit_test *test, void *data)

Test rule callback function signature.

The function signature that can be used to register a test rule’s before/after callback. This provides access to the test and the fixture data (if provided).

Param test

Pointer to the unit test in context

Param data

Pointer to the test’s fixture data (may be NULL)

Functions

void ztest_test_fail(void)

Fail the currently running test.

This is the function called from failed assertions and the like. You probably don’t need to call it yourself.

void ztest_test_pass(void)

Pass the currently running test.

Normally a test passes just by returning without an assertion failure. However, if the success case for your test involves a fatal fault, you can call this function from k_sys_fatal_error_handler to indicate that the test passed before aborting the thread.
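
A sketch of that pattern, assuming the test intentionally triggers a stack check failure (the reason check is illustrative):

void k_sys_fatal_error_handler(unsigned int reason, const z_arch_esf_t *esf)
{
	ARG_UNUSED(esf);

	if (reason == K_ERR_STACK_CHK_FAIL) {
		/* The fault was the expected outcome: mark the test passed. */
		ztest_test_pass();
	}

	ztest_test_fail();
}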

void ztest_test_skip(void)

Skip the current test.

static inline void unit_test_noop(void)

Do nothing, successfully.

Unit test / setup function / teardown function that does nothing, successfully. Can be used as a parameter to ztest_unit_test_setup_teardown().
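
For example, in the older API a test that needs no setup or teardown might be declared as follows (test_something is hypothetical):

void test_main(void)
{
	ztest_test_suite(noop_suite,
			 ztest_unit_test_setup_teardown(test_something,
							unit_test_noop,
							unit_test_noop));
	ztest_run_test_suite(noop_suite);
}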

void ztest_simple_1cpu_before(void *data)

A ‘before’ function to use in test suites that just need to start 1cpu.

Ignores data, and calls z_test_1cpu_start()

Parameters
  • data – The test suite’s data

void ztest_simple_1cpu_after(void *data)

An ‘after’ function to use in test suites that just need to stop 1cpu.

Ignores data, and calls z_test_1cpu_stop()

Parameters
  • data – The test suite’s data

Variables

struct k_mem_partition ztest_mem_partition
struct ztest_test_rule
#include <ztest_test_new.h>

Assertions

These macros will instantly fail the test if the related assertion fails. When an assertion fails, it will print the current file, line and function, alongside a reason for the failure and an optional message. If the Kconfig option CONFIG_ZTEST_ASSERT_VERBOSE is 0, the assertions will only print the file and line numbers, reducing the binary size of the test.

Example output for a failed macro from zassert_equal(buf->ref, 2, "Invalid refcount"):

Assertion failed at main.c:62: test_get_single_buffer: Invalid refcount (buf->ref not equal to 2)
Aborted at unit test function
group ztest_assert

This module provides assertions when using Ztest.

Defines

zassert(cond, default_msg, msg, ...)

Fail the test, if cond is false.

You probably don’t need to call this macro directly. You should instead use zassert_{condition} macros below.

Note that when CONFIG_MULTITHREADING=n, the macro returns from the calling function on failure; in that case, ztest asserts may only be used in the context of the test function.

Parameters
  • cond – Condition to check

  • msg – Optional, can be NULL. Message to print if cond is false.

  • default_msg – Message to print if cond is false

zassert_unreachable(msg, ...)

Assert that this function call won’t be reached.

Parameters
  • msg – Optional message to print if the assertion fails

zassert_true(cond, msg, ...)

Assert that cond is true.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails

zassert_false(cond, msg, ...)

Assert that cond is false.

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails

zassert_ok(cond, msg, ...)

Assert that cond is 0 (success)

Parameters
  • cond – Condition to check

  • msg – Optional message to print if the assertion fails
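
For example, checking a Zephyr API that returns 0 on success:

struct k_sem my_sem;
int ret = k_sem_init(&my_sem, 0, 1);

zassert_ok(ret, "k_sem_init failed (ret %d)", ret);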

zassert_is_null(ptr, msg, ...)

Assert that ptr is NULL.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assertion fails

zassert_not_null(ptr, msg, ...)

Assert that ptr is not NULL.

Parameters
  • ptr – Pointer to compare

  • msg – Optional message to print if the assertion fails

zassert_equal(a, b, msg, ...)

Assert that a equals b.

a and b won’t be converted and will be compared directly.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_not_equal(a, b, msg, ...)

Assert that a does not equal b.

a and b won’t be converted and will be compared directly.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_equal_ptr(a, b, msg, ...)

Assert that a equals b.

a and b will be converted to void * before comparing.

Parameters
  • a – Value to compare

  • b – Value to compare

  • msg – Optional message to print if the assertion fails

zassert_within(a, b, d, msg, ...)

Assert that a is within b with delta d.

Parameters
  • a – Value to compare

  • b – Value to compare

  • d – Delta

  • msg – Optional message to print if the assertion fails
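
For example, to accept a reading within ±5 of the expected value (read_temperature is hypothetical):

int temp = read_temperature();	/* hypothetical reading under test */

/* Passes if temp is in the range [15, 25]. */
zassert_within(temp, 20, 5, "temperature %d out of range", temp);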

zassert_between_inclusive(a, l, u, msg, ...)

Assert that a is greater than or equal to l and less than or equal to u.

Parameters
  • a – Value to compare

  • l – Lower limit

  • u – Upper limit

  • msg – Optional message to print if the assertion fails

zassert_mem_equal(...)

Assert that 2 memory buffers have the same contents.

This macro calls the final memory comparison assertion macro. Using double expansion allows providing some arguments by macros that would expand to more than one value (ANSI C99 requires all macro arguments to be expanded before the macro call).

Parameters
  • buf – Buffer to compare

  • exp – Buffer with expected contents

  • size – Size of buffers

  • msg – Optional message to print if the assertion fails

zassert_mem_equal__(buf, exp, size, msg, ...)

Internal assert that 2 memory buffers have the same contents.

Note

This is an internal macro, to be used as a second expansion. See zassert_mem_equal.

Parameters
  • buf – Buffer to compare

  • exp – Buffer with expected contents

  • size – Size of buffers

  • msg – Optional message to print if the assertion fails
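
Typical usage of the public zassert_mem_equal() macro (fill_buffer is a hypothetical function under test):

static const uint8_t expected[] = {0x01, 0x02, 0x03, 0x04};
uint8_t actual[sizeof(expected)];

fill_buffer(actual, sizeof(actual));	/* hypothetical function under test */
zassert_mem_equal(actual, expected, sizeof(expected), "buffer contents differ");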

Mocking

These functions allow abstracting callbacks and related functions and controlling them from specific tests. You can enable the mocking framework by setting CONFIG_ZTEST_MOCKING to “y” in the configuration file of the test. The number of concurrent return values and expected parameters is limited by CONFIG_ZTEST_PARAMETER_COUNT.

Here is an example for configuring the function expect_two_parameters to expect the values a=2 and b=3, and telling returns_int to return 5:

#include <zephyr/ztest.h>

static void expect_two_parameters(int a, int b)
{
	ztest_check_expected_value(a);
	ztest_check_expected_value(b);
}

static void parameter_tests(void)
{
	ztest_expect_value(expect_two_parameters, a, 2);
	ztest_expect_value(expect_two_parameters, b, 3);
	expect_two_parameters(2, 3);
}

static int returns_int(void)
{
	return ztest_get_return_value();
}

static void return_value_tests(void)
{
	ztest_returns_value(returns_int, 5);
	zassert_equal(returns_int(), 5, NULL);
}

void test_main(void)
{
	ztest_test_suite(mock_framework_tests,
		ztest_unit_test(parameter_tests),
		ztest_unit_test(return_value_tests)
	);

	ztest_run_test_suite(mock_framework_tests);
}
group ztest_mock

This module provides simple mocking functions for unit testing. These need CONFIG_ZTEST_MOCKING=y.

Defines

ztest_expect_value(func, param, value)

Tell function func to expect the value value for param.

When ztest_check_expected_value() is later called, the value of param will be checked against value. The value is internally stored as a uintptr_t.

Parameters
  • func – Function in question

  • param – Parameter for which the value should be set

  • value – Value for param

ztest_check_expected_value(param)

If param doesn’t match the value set by ztest_expect_value(), fail the test.

This will first check that param has an expected value, and then check whether the value of the parameter is equal to the expected value. If either of these checks fails, the current test will fail. This must be called from the called function.

Parameters
  • param – Parameter to check

ztest_expect_data(func, param, data)

Tell function func to expect the data data for param.

When ztest_check_expected_data() is later called, the data pointed to by param will be checked against this data. Only the data pointer is stored by this function, so the data must still be valid when ztest_check_expected_data() is called.

Parameters
  • func – Function in question

  • param – Parameter for which the data should be set

  • data – pointer for the data for parameter param
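
A sketch pairing ztest_expect_data() with ztest_check_expected_data(); consume_buffer and the test function are hypothetical:

static void consume_buffer(const uint8_t *buf)
{
	ztest_check_expected_data(buf, 4);
}

static void test_buffer_contents(void)
{
	static const uint8_t expected[4] = {1, 2, 3, 4};

	ztest_expect_data(consume_buffer, buf, (void *)expected);
	consume_buffer(expected);	/* passes; different contents would fail */
}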

ztest_check_expected_data(param, length)

If the data pointed to by param doesn’t match the data set by ztest_expect_data(), fail the test.

This will first check whether param is expected to be null or non-null, and then check whether the data pointed to by the parameter is equal to the expected data. If either of these checks fails, the current test will fail. This must be called from the called function.

Parameters
  • param – Parameter to check

  • length – Length of the data to compare

ztest_return_data(func, param, data)

Tell function func to return the data data for param.

When ztest_copy_return_data() is later called, the data pointed to by data will be copied into param. Only the data pointer is stored by this function, so the data must still be valid when ztest_copy_return_data() is called.

Parameters
  • func – Function in question

  • param – Parameter for which the data should be set

  • data – pointer for the data for parameter param

ztest_copy_return_data(param, length)

Copy the data set by ztest_return_data to the memory pointed to by param.

This will first check that param is not null and then copy the data. This must be called from the called function.

Parameters
  • param – Parameter to return data for

  • length – Length of the data to return
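
A sketch pairing ztest_return_data() with ztest_copy_return_data(); read_id and the test function are hypothetical:

static void read_id(uint8_t *id)
{
	/* Mocked output parameter: filled from the data registered below. */
	ztest_copy_return_data(id, 4);
}

static void test_read_id(void)
{
	static const uint8_t fake_id[4] = {0xDE, 0xAD, 0xBE, 0xEF};
	uint8_t id[4];

	ztest_return_data(read_id, id, (void *)fake_id);
	read_id(id);
	zassert_mem_equal(id, fake_id, sizeof(id), NULL);
}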

ztest_returns_value(func, value)

Tell func that it should return value.

Parameters
  • func – Function that should return value

  • value – Value to return from func

ztest_get_return_value()

Get the return value for current function.

The return value must have been set previously with ztest_returns_value(). If no return value exists, the current test will fail.

Returns

The value the current function should return

ztest_get_return_value_ptr()

Get the return value as a pointer for current function.

The return value must have been set previously with ztest_returns_value(). If no return value exists, the current test will fail.

Returns

The value the current function should return as a void *
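
A sketch of the pointer variant; get_object, fake_obj, and the test function are hypothetical:

static int fake_obj;

static void *get_object(void)
{
	return ztest_get_return_value_ptr();
}

static void test_get_object(void)
{
	ztest_returns_value(get_object, (uintptr_t)&fake_obj);
	zassert_equal_ptr(get_object(), &fake_obj, NULL);
}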

Customizing Test Output

The way output is presented when running tests can be customized. An example can be found in tests/ztest/custom_output.

Customization is enabled by setting CONFIG_ZTEST_TC_UTIL_USER_OVERRIDE to “y” and adding a file tc_util_user_override.h with your overrides.

Add the line zephyr_include_directories(my_folder) to your project’s CMakeLists.txt to let Zephyr find your header file during builds.

See the file subsys/testsuite/include/tc_util.h to see which macros and/or defines can be overridden. These will be surrounded by blocks such as:

#ifndef SOMETHING
#define SOMETHING <default implementation>
#endif /* SOMETHING */

Shuffling Test Sequence

By default the tests are sorted and run in alphanumerical order. Test cases may inadvertently depend on this sequence. Enable ZTEST_SHUFFLE to randomize the order. The output from the test will display the seed for failed tests. For native_posix builds you can provide the seed as an argument to twister with --seed.
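
For example, in the test’s prj.conf (assuming the usual CONFIG_ prefix for the option):

CONFIG_ZTEST_SHUFFLE=y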

Static configuration of ZTEST_SHUFFLE contains:

  • ZTEST_SHUFFLE_SUITE_REPEAT_COUNT - Number of iterations the test suite will run.

  • ZTEST_SHUFFLE_TEST_REPEAT_COUNT - Number of iterations the test will run.