.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC

Writing tests
=============

This describes how to write tests in U-Boot and the possible options.

Test types
----------

There are two basic types of test in U-Boot:

- Python tests, in test/py/tests
- C tests, in test/ and its subdirectories

(there are also UEFI tests in lib/efi_selftest/ not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and there
being two separate processes. Python tests are fairly easy to write. They can
be a little tricky to debug sometimes due to the voluminous output of pytest.
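
As a rough sketch of this interaction model (illustrative Python only, not the
real framework, which lives under test/py), a test writes a command down a pipe
to the U-Boot process and reads the output back, paying a context switch each
way::

    import subprocess

    # A child process stands in for U-Boot here; this only illustrates the
    # command/response round trip over a pipe.
    child = subprocess.Popen(
        ['python3', '-c',
         'import sys\n'
         'for line in sys.stdin:\n'
         '    sys.stdout.write("=> ran " + line)\n'
         '    sys.stdout.flush()'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    child.stdin.write('version\n')      # like run_command('version')
    child.stdin.flush()
    response = child.stdout.readline()  # wait for the console output
    assert 'version' in response

    child.stdin.close()
    child.wait()
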

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot code size.
C tests are easy to write so long as the required facilities exist. Where they
do not it can involve refactoring or adding new features to sandbox. They are
fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
does not.

This table shows how to decide whether to write a C or Python test:

===================== =========================== ==============================
Attribute             C test                      Python test
===================== =========================== ==============================
Fast to run?          Yes                         No (two separate processes)
Easy to write?        Yes, if required test       Yes
                      features exist in sandbox
Needs code in U-Boot? Yes                         No, provided the test can be
                                                  executed and the result
                                                  determined using the command
                                                  line
Easy to debug?        Yes                         No, since access to the U-Boot
                                                  state is not available and the
                                                  amount of output can
                                                  sometimes require a bit of
                                                  digging
Can use gdb?          Yes, directly               Yes, with --gdbserver
Can run on boards?    Some can, but only if       Some
                      not dependent on sandbox
===================== =========================== ==============================

Typically in U-Boot we encourage C tests using sandbox for all features. This
allows fast testing and easy development, and allows contributors to make
changes without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as
generating images and then running U-Boot to check that they can be loaded), or
cannot be run on sandbox, Python tests should be used. These should typically
NOT rely on running with sandbox, but instead should function correctly on any
board supported by U-Boot.

The best of both worlds is sometimes to have a Python test set things up and
perform some operations, with a 'checker' C unit test doing the checks
afterwards. This can be achieved with these steps:

- Add the `UTF_MANUAL` flag to the checker test so that the `ut` command
  does not run it by default
- Add a `_norun` suffix to the name so that pytest knows to skip it too

In your Python test use the `-f` flag to the `ut` command to force the checker
test to run, e.g.::

    # Run the checker to make sure that everything worked
    ut -f bootstd vbe_test_fixup_norun

Note that apart from the `UTF_MANUAL` flag, the code in a 'manual' C test
is just like any other C test. It still uses ut_assert...() and other such
constructs, in this case to check that the expected things happened in the
earlier phase of the test.
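
In the Python half, the flow might look something like this sketch (hypothetical
test and suite names; a stub class stands in for the real `u_boot_console`
fixture)::

    # Hypothetical sketch: setup and operations happen first, then 'ut -f'
    # forces the manual checker test to run.
    class StubConsole:
        """Stands in for u_boot_console; the real fixture talks to U-Boot."""
        def run_command(self, cmd):
            if cmd.startswith('ut -f'):
                return 'Failures: 0'
            return ''

    def test_vbe_flow(cons):
        cons.run_command('bootflow scan')    # setup and operations
        out = cons.run_command('ut -f bootstd vbe_test_fixup_norun')
        assert 'Failures: 0' in out          # the checker ran and passed

    test_vbe_flow(StubConsole())
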

How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a sandbox
test and connecting to it via a pipe. Each interaction with the U-Boot process
requires at least a context switch to handle the pipe interaction. The test
sends a command to U-Boot, which then reacts and shows some output, then the
test sees that and continues. Of course on real hardware, communications delays
(e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command. All
times below are approximate, as measured on an AMD 2950X system. Here is the
test itself::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be modified
        using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it takes
19 seconds, or 19ms per run. Of course 1000 runs is not that useful since we
only want to run it once.
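
The per-run figure can be reproduced with a simple measurement loop. Here is an
illustrative sketch (not U-Boot code; a child process that echoes lines stands
in for U-Boot)::

    import subprocess
    import time

    # Time many command/response round trips through a pipe and average them.
    child = subprocess.Popen(
        ['python3', '-c',
         'import sys\n'
         'for line in sys.stdin:\n'
         '    sys.stdout.write(line)\n'
         '    sys.stdout.flush()'],
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    runs = 1000
    start = time.perf_counter()
    for _ in range(runs):
        child.stdin.write('md 0 10\n')   # send a 'command'
        child.stdin.flush()
        child.stdout.readline()          # wait for the 'output'
    per_run_ms = (time.perf_counter() - start) * 1000 / runs
    print('%.3f ms per interaction' % per_run_ms)

    child.stdin.close()
    child.wait()
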

There is no exactly equivalent C test, but here is a similar one that tests 'ms'
(memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
            u8 *buf;

            buf = map_sysmem(0, BUF_SIZE + 1);
            memset(buf, '\0', BUF_SIZE);
            buf[0x0] = 0x12;
            buf[0x31] = 0x12;
            buf[0xff] = 0x12;
            buf[0x100] = 0x12;
            run_command("ms.b 1 ff 12", 0);
            ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................");
            ut_assert_nextline("--");
            ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12 ................");
            ut_assert_nextline("2 matches");
            ut_assert_console_end();

            ut_asserteq(2, env_get_hex("memmatches", 0));
            ut_asserteq(0xff, env_get_hex("memaddr", 0));
            ut_asserteq(0xfe, env_get_hex("mempos", 0));

            unmap_sysmem(buf);

            return 0;
    }
    MEM_TEST(mem_test_ms_b, UTF_CONSOLE);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes 100ms. For 1000 runs it takes
660ms, or 0.66ms per run.

So overall running a C test is perhaps 8 times faster individually and the
interactions are perhaps 25 times faster.

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single step to see what might be wrong, etc. That is also possible with
pytest, but requires two terminals and --gdbserver.

Why does speed matter?
----------------------

Many development activities rely on running tests:

- 'git bisect run make qcheck' can be used to find a failing commit
- test-driven development relies on quick iteration of build/test
- U-Boot's continuous integration (CI) systems make use of tests. Running
  all sandbox tests typically takes 90 seconds and running each qemu test
  takes about 30 seconds. This is currently dwarfed by the time taken to
  build

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.

Writing C tests
---------------

C tests are arranged into suites which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to accomplish
some common test tasks.

(there are also UEFI C tests in lib/efi_selftest/ not considered here.)

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new operations
or features to a uclass, adding new ofnode or dev_read_() functions, or anything
else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test some aspect of the uclass */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    DM_TEST(dm_test_uclassname_what, UTF_SCAN_FDT);

Note that the convention is to NOT add a blank line before the macro, so that
the function it relates to is more obvious.

Replace 'uclassname' with the name of your uclass, if applicable. Replace 'what'
with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UTF_SCAN_FDT so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UTF_DM automatically so that
the test runner knows it is a driver model test.

Driver model tests are special in that the entire driver model state is
recreated anew for each test. This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests. Driver model tests
also run both with livetree and flattree, to ensure that both devicetree
implementations work as expected.
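
The isolation this provides can be illustrated in miniature (plain Python for
illustration only, not the real driver model code): each test receives freshly
built state, so one test's deletions cannot leak into the next::

    # Illustration only: make_state() plays the role of driver model re-init.
    def make_state():
        return {'devices': {'serial0': 'bound', 'i2c0': 'bound'}}

    def test_remove_device(state):
        del state['devices']['i2c0']         # this test deletes a device
        assert 'i2c0' not in state['devices']

    def test_device_present(state):
        assert 'i2c0' in state['devices']    # unaffected by the test above

    for test in (test_remove_device, test_device_present):
        test(make_state())                   # fresh state for every test
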

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4

Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside driver
model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
            /* test code here */

            return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UTF_CONSOLE);

Note that the MEM_TEST() macro is defined at the top of the file.

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2

Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register the
suite. For example::

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */

    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
            struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
            const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

            return cmd_ut_category("cmd_wibble", "wibble_test_", tests, n_ents, argc, argv);
    }

Then add new tests to it as above.

Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */
    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and adding new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig, then you can use #ifdef
to control that.

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(XPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present in
U-Boot proper. See below for how to do SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(XPL_)WIBBLE),)
    obj-$(CONFIG_$(XPL_)CMDLINE) += wibble.o
    endif

Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64

Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run. The same applies for driver
model tests since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.
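
Conceptually the collection step looks something like this simplified sketch
(hypothetical symbol names; the real logic lives in the test/py framework)::

    # Simplified sketch: filter test names scanned from the U-Boot executable
    # down to one suite, so each can become its own pytest case.
    def collect_suite(symbols, suite):
        prefix = suite + '_test_'
        return sorted(s for s in symbols if s.startswith(prefix))

    # Hypothetical names, as might be found in the unit-test linker lists
    symbols = ['dm_test_gpio', 'dm_test_i2c', 'mem_test_ms_b', 'env_test_fdt']
    print(collect_suite(symbols, 'dm'))
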

Writing SPL tests
~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is currently
no mechanism in other boards to run existing SPL tests even if they are built
into the image.

SPL tests cannot be run from the 'ut' command since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
the -u flag is given. This runs the available unit tests, no matter what suite
they are in.

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().

Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You
should be able to use the existing tests in test/py/tests as examples.