From: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
To: Chris Wilson <chris@chris-wilson.co.uk>, intel-gfx@lists.freedesktop.org
Cc: igt-dev@lists.freedesktop.org, Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Subject: Re: [igt-dev] [PATCH i-g-t] i915/gem_exec_schedule: Verify that using HW semaphores doesn't block
Date: Mon, 1 Apr 2019 08:52:09 +0100
Message-ID: <f5d450df-3639-cb14-716d-c1d1bedfde0e@linux.intel.com>
In-Reply-To: <20190329095402.29697-1-chris@chris-wilson.co.uk>
On 29/03/2019 09:54, Chris Wilson wrote:
> We may use HW semaphores to schedule nearly-ready work such that they
> are already spinning on the GPU waiting for the completion on another
> engine. However, we don't want that spinning task to block
> any real work should any be scheduled.
>
> v2: No typeof autos
> v3: Don't cheat, check gen8 as well
>
> Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
> Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
> ---
> tests/i915/gem_exec_schedule.c | 87 ++++++++++++++++++++++++++++++++++
> 1 file changed, 87 insertions(+)
>
> diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
> index 4f0577b4e..3df319bcc 100644
> --- a/tests/i915/gem_exec_schedule.c
> +++ b/tests/i915/gem_exec_schedule.c
> @@ -48,6 +48,10 @@
>
> #define MAX_CONTEXTS 1024
>
> +#define LOCAL_I915_EXEC_BSD_SHIFT (13)
> +#define LOCAL_I915_EXEC_BSD_MASK (3 << LOCAL_I915_EXEC_BSD_SHIFT)
> +#define ENGINE_MASK (I915_EXEC_RING_MASK | LOCAL_I915_EXEC_BSD_MASK)
> +
> IGT_TEST_DESCRIPTION("Check that we can control the order of execution");
>
> static inline
> @@ -320,6 +324,86 @@ static void smoketest(int fd, unsigned ring, unsigned timeout)
> }
> }
>
> +static uint32_t __batch_create(int i915, uint32_t offset)
> +{
> + const uint32_t bbe = MI_BATCH_BUFFER_END;
> + uint32_t handle;
> +
> + handle = gem_create(i915, ALIGN(offset + 4, 4096));
> + gem_write(i915, handle, offset, &bbe, sizeof(bbe));
> +
> + return handle;
> +}
> +
> +static uint32_t batch_create(int i915)
> +{
> + return __batch_create(i915, 0);
> +}
> +
> +static void semaphore_userlock(int i915)
> +{
> + struct drm_i915_gem_exec_object2 obj = {
> + .handle = batch_create(i915),
> + };
> + igt_spin_t *spin = NULL;
> + unsigned int engine;
> + uint32_t scratch;
> +
> + igt_require(gem_scheduler_has_semaphores(i915));
> +
> + /*
> + * Given the use of semaphores to govern parallel submission
> + * of nearly-ready work to HW, we still want to run actually
> + * ready work immediately. Without semaphores, the dependent
> + * work wouldn't be submitted so our ready work will run.
> + */
> +
> + scratch = gem_create(i915, 4096);
> + for_each_physical_engine(i915, engine) {
> + if (!spin) {
> + spin = igt_spin_batch_new(i915,
> + .dependency = scratch,
> + .engine = engine);
> + } else {
> + uint64_t saved = spin->execbuf.flags;
> +
> + spin->execbuf.flags &= ~ENGINE_MASK;
> + spin->execbuf.flags |= engine;
> +
> + gem_execbuf(i915, &spin->execbuf);
> +
> + spin->execbuf.flags = saved;
> + }
> + }
> + igt_require(spin);
> + gem_close(i915, scratch);
> +
> + /*
> + * On all dependent engines, the request may be executing (busywaiting
> + * on a HW semaphore) but it should not prevent any real work from
> + * taking precedence.
> + */
> + scratch = gem_context_create(i915);
> + for_each_physical_engine(i915, engine) {
> + struct drm_i915_gem_execbuffer2 execbuf = {
> + .buffers_ptr = to_user_pointer(&obj),
> + .buffer_count = 1,
> + .flags = engine,
> + .rsvd1 = scratch,
> + };
> +
> + if (engine == (spin->execbuf.flags & ENGINE_MASK))
> + continue;
> +
> + gem_execbuf(i915, &execbuf);
> + }
> + gem_context_destroy(i915, scratch);
> + gem_sync(i915, obj.handle); /* to hang unless we can preempt */
> + gem_close(i915, obj.handle);
> +
> + igt_spin_batch_free(i915, spin);
> +}
> +
> static void reorder(int fd, unsigned ring, unsigned flags)
> #define EQUAL 1
> {
> @@ -1307,6 +1391,9 @@ igt_main
> igt_require(gem_scheduler_has_ctx_priority(fd));
> }
>
> + igt_subtest("semaphore-user")
> + semaphore_userlock(fd);
> +
> igt_subtest("smoketest-all")
> smoketest(fd, ALL_ENGINES, 30);
>
>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Regards,
Tvrtko