From: Jonathan Cavitt <jonathan.cavitt@intel.com>
To: igt-dev@lists.freedesktop.org
Cc: jonathan.cavitt@intel.com, saurabhg.gupta@intel.com,
alex.zuo@intel.com, john.c.harrison@intel.com,
nirmoy.das@intel.com, chris.p.wilson@linux.intel.com
Subject: [PATCH i-g-t] tests/intel/gem_spin_batch: RCS/CCS must share VM on DG2 due to w/a
Date: Wed, 14 Aug 2024 11:33:36 -0700
Message-ID: <20240814183336.507650-1-jonathan.cavitt@intel.com>
On DG2, both the RCS and CCS engine contexts must use the same virtual
address space when running parallel, non-preemptible work. Failure to
do so results in a GPU hang.
Suggested-by: John Harrison <john.c.harrison@intel.com>
Signed-off-by: Jonathan Cavitt <jonathan.cavitt@intel.com>
CC: Nirmoy Das <nirmoy.das@intel.com>
CC: Chris Wilson <chris.p.wilson@linux.intel.com>
---
tests/intel/gem_spin_batch.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/tests/intel/gem_spin_batch.c b/tests/intel/gem_spin_batch.c
index 682a062180..19b13f7334 100644
--- a/tests/intel/gem_spin_batch.c
+++ b/tests/intel/gem_spin_batch.c
@@ -24,6 +24,7 @@
#include "i915/gem.h"
#include "i915/gem_ring.h"
+#include "i915/gem_vm.h"
#include "igt.h"
/**
* TEST: gem spin batch
@@ -179,9 +180,20 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
const struct intel_execution_engine2 *e;
intel_ctx_cfg_t cfg = ctx->cfg;
struct igt_spin *spin, *n;
+ uint32_t shared_vm_id = 0;
uint64_t ahnd;
IGT_LIST_HEAD(list);
+ /*
+ * Wa_14014494547:DG2
+ * Both the RCS and CCS engine contexts must use the same
+ * virtual address space when running parallel,
+ * non-preemptible work. Failure to do so results in a
+ * GPU hang.
+ */
+ if (IS_DG2(intel_get_drm_devid(i915)))
+ shared_vm_id = gem_vm_create(i915);
+
for_each_ctx_cfg_engine(i915, &cfg, e) {
if (!gem_class_can_store_dword(i915, e->class))
continue;
@@ -192,8 +204,11 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
if (skip_bad_engine(i915, e))
continue;
- if (flags & PARALLEL_SPIN_NEW_CTX)
+ if (flags & PARALLEL_SPIN_NEW_CTX) {
+ if (shared_vm_id)
+ cfg.vm = shared_vm_id;
ctx = intel_ctx_create(i915, &cfg);
+ }
ahnd = get_reloc_ahnd(i915, ctx->id);
/* Prevent preemption so only one is allowed on each engine */
@@ -218,6 +233,9 @@ static void spin_all(int i915, const intel_ctx_t *ctx, unsigned int flags)
igt_spin_free(i915, spin);
put_ahnd(ahnd);
}
+
+ if (shared_vm_id)
+ gem_vm_destroy(i915, shared_vm_id);
}
static bool has_userptr(int fd)
--
2.25.1
Thread overview: 7+ messages
2024-08-14 18:33 Jonathan Cavitt [this message]
2024-08-14 19:41 ` ✓ CI.xeBAT: success for tests/intel/gem_spin_batch: RCS/CCS must share VM on DG2 due to w/a Patchwork
2024-08-14 19:50 ` ✓ Fi.CI.BAT: " Patchwork
2024-08-15 3:36 ` ✗ CI.xeFULL: failure " Patchwork
2024-08-15 7:49 ` [PATCH i-g-t] " Nirmoy Das
2024-08-15 14:01 ` Cavitt, Jonathan
2024-08-15 20:59 ` ✗ Fi.CI.IGT: failure for " Patchwork