From mboxrd@z Thu Jan 1 00:00:00 1970
From: Stephen Warren
Date: Tue, 2 Feb 2016 12:43:15 -0700
Subject: [U-Boot] [PATCH] test/py: make each unit test a pytest
In-Reply-To:
References: <1454024708-867-1-git-send-email-swarren@wwwdotorg.org>
 <56AAF3C6.1030605@wwwdotorg.org> <56ABB407.4060704@wwwdotorg.org>
Message-ID: <56B106D3.2000005@wwwdotorg.org>
List-Id:
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
To: u-boot@lists.denx.de

On 01/29/2016 01:11 PM, Simon Glass wrote:
> Hi Stephen,
>
> On 29 January 2016 at 11:48, Stephen Warren wrote:
>> On 01/29/2016 11:23 AM, Simon Glass wrote:
>>>
>>> Hi Stephen,
>>>
>>> On 28 January 2016 at 22:08, Stephen Warren wrote:
>>>>
>>>> On 01/28/2016 08:52 PM, Simon Glass wrote:
>>>>>
>>>>> Hi Stephen,
>>>>>
>>>>> On 28 January 2016 at 16:45, Stephen Warren wrote:
>>>>>>
>>>>>> From: Stephen Warren
>>>>>>
>>>>>> A custom fixture named ut_subtest is implemented which is
>>>>>> parametrized with the names of all unit tests that the U-Boot
>>>>>> binary supports. This causes each U-Boot unit test to be exposed
>>>>>> as a separate pytest. In turn, this allows more fine-grained
>>>>>> pass/fail counts and test selection, e.g.:
>>>>>>
>>>>>> test.py --bd sandbox -k ut_dm_usb
>>>>>>
>>>>>> ... will run about 8 tests at present.
>>>>>>
>>>>>> Signed-off-by: Stephen Warren
>>>>>> ---
>>>>>> This depends on at least my recently sent "test/py: run C-based
>>>>>> unit tests".
>>>>>>
>>>>>>  test/py/conftest.py      | 105 ++++++++++++++++++++++++++++++++++++-----------
>>>>>>  test/py/tests/test_ut.py |  14 +++----
>>>>>>  2 files changed, 86 insertions(+), 33 deletions(-)
>>>>>
>>>>> This seems a bit extreme. It might be better to move the remaining
>>>>> three commands under the 'ut' subcommand. Then all unit tests would
>>>>> be visible from the 'ut' help...
>>>>
>>>> I'm not sure what you mean by "extreme"? Do you mean you don't want
>>>> each unit test exposed as a separate pytest? I thought based on our
>>>> previous conversation that was exactly what you wanted. If not, I'm
>>>> not sure what the deficiency in the current code is; either all the
>>>> dm subtests are executed at once by a single pytest with a single
>>>> overall status, or they're each a separate pytest with individual
>>>> status. Any grouping that's in between those seems like it would be
>>>> entirely arbitrary?
>>>
>>> I mean that there might be a simpler way to find out what unit tests
>>> are available in U-Boot than using objdump! Can the 'ut' command
>>> itself report this?
>>
>> Well, the Python code could parse the ELF binary itself... :-)
>
> Eek!
>
>> We can't parse the source code to determine the test list, since it'd
>> be hard to determine which tests were actually compiled in (based on
>> .config feature support) vs. which were simply written but not
>> compiled.
>>
>> Perhaps we could add a new command-line option to U-Boot that /only/
>> prints out the list of supported unit tests. That would mean executing
>> the U-Boot binary on the host to determine the list though, which
>> would limit the approach to sandbox; it couldn't ever work if we
>> enabled unit tests on real HW. objdump should work in that scenario.
>
> That was what I was thinking actually. The 'ut' command already prints
> a list when given no args, but you could add 'ut list'.
>
> I'm not quite clear how useful the 'ut' tests are on real hardware.
> They are seldom enabled. Do you actually parse the U-Boot binary for
> the board?

There's no other place in the test system that parses the U-Boot binary.

>> Or perhaps the build process could dump out a list of enabled unit
>> tests, so test/py could simply read that file. At least that would
>> push the objdump usage into the build process, where it's basically
>> guaranteed we have an objdump binary, plus we can use
>> $(CROSS_COMPILE)objdump, which would be better for cross-compiled
>> binaries...
>
> Or do you think it would be acceptable to just have a hard-coded list
> of tests and try each one?
>
> Or maybe your current approach is better than the alternatives...

Overall, I still like the idea of simply parsing the binary best.

I see no reason why (at least some subset of) unit tests could not ever
be enabled in non-sandbox builds, so I'd like the test system not to
assume that they won't be. This means we can't execute the binary to
find out the list of enabled tests: the list has to be determined before
any tests run, and hence by code on the host machine, which can't run
target binaries.

One improvement I can make is to run objdump during the build process.
As part of generating u-boot and u-boot.map, we can also call objdump to
generate u-boot.syms. This isolates the use of "compiler" tools like
objdump to the Makefile, a more typical place to run them. Parsing
u-boot.syms can be left up to the test scripts.
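For the archives, the build-time u-boot.syms approach described above can
be sketched roughly as below. This is illustrative only: the file names,
helper names, and especially the linker-list symbol pattern are my
assumptions here, not the actual naming the final patch uses.

```python
import re
import subprocess

# Assumed build output and symbol pattern; the real linker-list symbol
# naming in the U-Boot binary may differ from this sketch.
SYMS_FILE = 'u-boot.syms'
UT_SYM_RE = re.compile(r'_u_boot_list_2_(ut_\S+)_test_2_(\S+)')

def generate_syms_file(binary='u-boot', out=SYMS_FILE, objdump='objdump'):
    """Build-time step: dump the symbol table once. The Makefile can pass
    $(CROSS_COMPILE)objdump here for cross-compiled binaries."""
    with open(out, 'w') as f:
        subprocess.check_call([objdump, '-t', binary], stdout=f)

def list_ut_subtests(syms_path=SYMS_FILE):
    """Test-time step: scan u-boot.syms for linker-list entries that name
    compiled-in unit tests, without ever executing the target binary."""
    subtests = set()
    with open(syms_path) as f:
        for line in f:
            m = UT_SYM_RE.search(line)
            if m:
                subtests.add('%s %s' % (m.group(1), m.group(2)))
    return sorted(subtests)
```

A conftest.py could then parametrize the ut_subtest fixture from
list_ut_subtests() (e.g. via pytest_generate_tests), which is what gives
each subtest its own pass/fail status and makes `-k` selection work.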