public inbox for igt-dev@lists.freedesktop.org
* [igt-dev] [PATCH i-g-t] i915/gem_exec_schedule: Trick semaphores into a GPU hang
@ 2019-04-09 13:56 Chris Wilson
  2019-04-09 15:08 ` [igt-dev] ✗ Fi.CI.BAT: failure for " Patchwork
  2019-04-09 15:39 ` [igt-dev] [Intel-gfx] [PATCH i-g-t] " Tvrtko Ursulin
  0 siblings, 2 replies; 3+ messages in thread
From: Chris Wilson @ 2019-04-09 13:56 UTC (permalink / raw)
  To: intel-gfx; +Cc: igt-dev, Tvrtko Ursulin

If we have two tasks running independently on xcs0 and xcs1, each of
which queues subsequent work onto rcs, we may insert semaphores before
the rcs work and choose unwisely which task to run first. To maximise
throughput, we want rcs to run whichever task is ready first.
Conversely, if we pick wrongly, that choice can be exploited to trigger
a GPU hang with unaware userspace.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
---
 tests/i915/gem_exec_schedule.c | 61 ++++++++++++++++++++++++++++++++++
 1 file changed, 61 insertions(+)

diff --git a/tests/i915/gem_exec_schedule.c b/tests/i915/gem_exec_schedule.c
index 3df319bcc..d6f109540 100644
--- a/tests/i915/gem_exec_schedule.c
+++ b/tests/i915/gem_exec_schedule.c
@@ -404,6 +404,65 @@ static void semaphore_userlock(int i915)
 	igt_spin_batch_free(i915, spin);
 }
 
+static void semaphore_codependency(int i915)
+{
+	struct {
+		igt_spin_t *xcs, *rcs;
+	} task[2];
+	unsigned int engine;
+	int i;
+
+	/*
+	 * Consider two tasks: task A runs on (xcs0, rcs0) and task B
+	 * on (xcs1, rcs0). That is, they must both run a dependent
+	 * batch on rcs0, after first running in parallel on separate
+	 * engines. To maximise throughput, we want the shorter xcs task
+	 * to start on rcs first. However, if we insert semaphores, we
+	 * may pick wrongly and end up running the requests in the least
+	 * optimal order.
+	 */
+
+	i = 0;
+	for_each_physical_engine(i915, engine) {
+		uint32_t ctx;
+
+		if (engine == I915_EXEC_RENDER)
+			continue;
+
+		ctx = gem_context_create(i915);
+
+		task[i].xcs =
+			__igt_spin_batch_new(i915,
+					     .ctx = ctx,
+					     .engine = engine,
+					     .flags = IGT_SPIN_POLL_RUN);
+		igt_spin_busywait_until_running(task[i].xcs);
+
+		/* Common rcs tasks will be queued in FIFO */
+		task[i].rcs =
+			__igt_spin_batch_new(i915,
+					     .ctx = ctx,
+					     .engine = I915_EXEC_RENDER,
+					     .dependency = task[i].xcs->handle);
+
+		gem_context_destroy(i915, ctx);
+
+		if (++i == ARRAY_SIZE(task))
+			break;
+	}
+	igt_require(i == ARRAY_SIZE(task));
+
+	/* Since task[0] was queued first, it will be first in queue for rcs */
+	igt_spin_batch_end(task[1].xcs);
+	igt_spin_batch_end(task[1].rcs);
+	gem_sync(i915, task[1].rcs->handle); /* to hang if task[0] hogs rcs */
+
+	for (i = 0; i < ARRAY_SIZE(task); i++) {
+		igt_spin_batch_free(i915, task[i].xcs);
+		igt_spin_batch_free(i915, task[i].rcs);
+	}
+}
+
 static void reorder(int fd, unsigned ring, unsigned flags)
 #define EQUAL 1
 {
@@ -1393,6 +1452,8 @@ igt_main
 
 		igt_subtest("semaphore-user")
 			semaphore_userlock(fd);
+		igt_subtest("semaphore-codependency")
+			semaphore_codependency(fd);
 
 		igt_subtest("smoketest-all")
 			smoketest(fd, ALL_ENGINES, 30);
-- 
2.20.1
