public inbox for intel-gfx@lists.freedesktop.org
* IGT conventions
@ 2014-01-15 23:26 Jeff McGee
  2014-01-15 23:55 ` Daniel Vetter
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff McGee @ 2014-01-15 23:26 UTC (permalink / raw)
  To: intel-gfx

I have a few questions about conventions observed in writing IGT tests.

I don't see any standard wrapper for logging other than the logging that goes
with certain igt_ control flow functions. Is it recommended to limit logging to
just these? I see some different approaches to supporting verbose modes. Is
it just up to each test?

Any recommendations on subtest granularity? Testing boils down to repeated
cycles of 'do something' then 'assert something'. Just wondering if there is a
guideline on how many of those cycles each subtest should contain. Probably
this is very case specific.

Also wondering if something like an igt_warn function to go with igt_require
and igt_assert has been considered. There might be a case where some condition
is not met which causes the test to become limited in its effectiveness but
still valid. We might still want to run the test and let it pass but attach a
caveat. Or would adding this gray area just be too complicated?

Thanks
Jeff

^ permalink raw reply	[flat|nested] 6+ messages in thread

* Re: IGT conventions
  2014-01-15 23:26 IGT conventions Jeff McGee
@ 2014-01-15 23:55 ` Daniel Vetter
  2014-01-16  9:27   ` Daniel Vetter
  0 siblings, 1 reply; 6+ messages in thread
From: Daniel Vetter @ 2014-01-15 23:55 UTC (permalink / raw)
  To: intel-gfx

On Wed, Jan 15, 2014 at 05:26:28PM -0600, Jeff McGee wrote:
> I have a few questions about conventions observed in writing IGT tests.
> 
> I don't see any standard wrapper for logging other than the logging that goes
> with certain igt_ control flow functions. Is it recommended to limit logging to
> just these? I see some different approaches to supporting verbose modes. Is
> it just up to each test?

As long as you only print stuff to stdout you can be rather excessive imo.
Some tests are rather loud by default, others are not. Atm we don't really
have a concept of verbose mode, but some engineers have implemented it
since it helped them with developing their feature. Personally I don't
care since I run igts through piglit, which captures all the output anyway
(well, as long as you don't go ahead and dump a few megabytes of noise
ofc).

> Any recommendations on subtest granularity? Testing boils down to repeated
> cycles of 'do something' then 'assert something'. Just wondering if there is a
> guideline on how many of those cycles each subtest should contain. Probably
> this is very case specific.

Whatever looks suitable. Personally I think for very specific interface
tests it makes a lot of sense to split them all up into subtests, but for
more generic stress testing bigger tests also make sense. Also note that
we still have a pile of testcases that go back before the subtests
infrastructure, but I think most of them are now split up into subtests
where it makes sense.

> Also wondering if something like an igt_warn function to go with igt_require
> and igt_assert has been considered. There might be a case where some condition
> is not met which causes the test to become limited in its effectiveness but
> still valid. We might still want to run the test and let it pass but attach a
> caveat. Or would adding this gray area just be too complicated?

Anything you put out to stderr will be tracked as a "warn" in piglit. Atm
we don't have any such use-case though I think, mostly since keeping
unbuffered stderr and buffered stdout in sync is a pain ;-) But I guess we
could formalize this a bit if you see it useful for you with a

#define igt_warn(a...) fprintf(stderr, a)

or something like that. Some of the checks in kms_flip.c might benefit
from this, since on a lot of our platforms the rather stringent timing
checks often fail randomly. But besides such corner-cases I kinda prefer
if we just split up testcases more instead of trying to be really clever
with the level of failure encountered and reported.

On that topic: A lot of the tests also depend upon in-kernel checks. With
piglit we capture dmesg, and as a rule anything above the info log level
is counted as a failure. At least that is how piglit treats dmesg output
for i-g-t testcases, and this is also what our QA reports. So if a
testcase hits upon a debug DRM_ERROR then that's considered a bug (and we
have a steady flow of patches to demote such debug leftovers from
development to DRM_DEBUG/INFO as appropriate).

Hopefully this clarifies things a bit. Comments and suggestions highly
welcome, especially if you see some need with your own infrastructure for
structured output/data from tests.

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: IGT conventions
  2014-01-15 23:55 ` Daniel Vetter
@ 2014-01-16  9:27   ` Daniel Vetter
  2014-01-16 16:43     ` Jeff McGee
  0 siblings, 1 reply; 6+ messages in thread
From: Daniel Vetter @ 2014-01-16  9:27 UTC (permalink / raw)
  To: intel-gfx

On Thu, Jan 16, 2014 at 12:55 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> Anything you put out to stderr will be tracked as a "warn" in piglit. Atm
> we don't have any such use-case though I think, mostly since keeping
> unbuffered stderr and buffered stdout in sync is a pain ;-) But I guess we
> could formalize this a bit if you see it useful for you with a
>
> #define igt_warn(a...) fprintf(stderr, a)
>
> or something like that. Some of the checks in kms_flip.c might benefit
> from this, since on a lot of our platforms the rather stringent timing
> checks often fail randomly. But besides such corner-cases I kinda prefer
> if we just split up testcases more instead of trying to be really clever
> with the level of failure encountered and reported.

Actually if we put an fflush(stdout); before the fprintf then we would
not have any issues with buffered vs. unbuffered. And for consistency
maybe we could define igt_warn as just fputs and igt_warn_f as the
full printf thing.

If you think this is useful for your tests then I'll happily merge a
patch to add igt_warn*

Cheers, Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch


* Re: IGT conventions
  2014-01-16  9:27   ` Daniel Vetter
@ 2014-01-16 16:43     ` Jeff McGee
  2014-01-22 20:40       ` Jeff McGee
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff McGee @ 2014-01-16 16:43 UTC (permalink / raw)
  To: Daniel Vetter; +Cc: intel-gfx

On Thu, Jan 16, 2014 at 10:27:03AM +0100, Daniel Vetter wrote:
> On Thu, Jan 16, 2014 at 12:55 AM, Daniel Vetter <daniel@ffwll.ch> wrote:
> > Anything you put out to stderr will be tracked as a "warn" in piglit. Atm
> > we don't have any such use-case though I think, mostly since keeping
> > unbuffered stderr and buffered stdout in sync is a pain ;-) But I guess we
> > could formalize this a bit if you see it useful for you with a
> >
> > #define igt_warn(a...) fprintf(stderr, a)
> >
> > or something like that. Some of the checks in kms_flip.c might benefit
> > from this, since on a lot of our platforms the rather stringent timing
> > checks often fail randomly. But besides such corner-cases I kinda prefer
> > if we just split up testcases more instead of trying to be really clever
> > with the level of failure encountered and reported.
> 
> Actually if we put an fflush(stdout); before the fprintf then we would
> not have any issues with buffered vs. unbuffered. And for consistency
> maybe we could define igt_warn as just fputs and igt_warn_f as the
> full printf thing.
> 
> If you think this is useful for your tests then I'll happily merge a
> patch to add igt_warn*

Thanks for all the feedback. I'll see if something like this is really
necessary for my cases. Agree that the simplicity of pass, skip, or fail may
be best.

Jeff


* Re: IGT conventions
  2014-01-16 16:43     ` Jeff McGee
@ 2014-01-22 20:40       ` Jeff McGee
  2014-01-22 20:53         ` Daniel Vetter
  0 siblings, 1 reply; 6+ messages in thread
From: Jeff McGee @ 2014-01-22 20:40 UTC (permalink / raw)
  To: Daniel Vetter, intel-gfx

There doesn't seem to be an equivalent of the exit handlers that runs when
a subtest exits. I need a failed subtest to be able to clean up after
itself to avoid contaminating subsequent subtests. Have I missed something?
Perhaps this is not a problem when running subtests individually through
piglit?

I guess one simple approach is to wrap igt_assert with the cleanup
function, like the restore_assert that was originally used in pm_rps.

Jeff


* Re: IGT conventions
  2014-01-22 20:40       ` Jeff McGee
@ 2014-01-22 20:53         ` Daniel Vetter
  0 siblings, 0 replies; 6+ messages in thread
From: Daniel Vetter @ 2014-01-22 20:53 UTC (permalink / raw)
  To: Daniel Vetter, intel-gfx

On Wed, Jan 22, 2014 at 02:40:42PM -0600, Jeff McGee wrote:
> There doesn't seem to be an equivalent of the exit handlers that runs when
> a subtest exits. I need a failed subtest to be able to clean up after
> itself to avoid contaminating subsequent subtests. Have I missed something?
> Perhaps this is not a problem when running subtests individually through
> piglit?

Yeah, the exit handlers only clean up when the entire test goes down, not
after each failed subtest. If you want more reliability you need to run
all the subtests individually (and maybe you even need to restart the
machine if something particularly bad happens).

> I guess one simple approach is to wrap igt_assert with the cleanup
> function, like the restore_assert that was originally used in pm_rps.

tbh I'm not sure whether this is worth catering for - imo better to invest
that time into getting a real android/igt testrunner going. For
development you can just run individual subtests when hacking around,
for regression testing nothing really should fail after all ;-)
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

