From: Matthew Brost <matthew.brost@intel.com>
To: intel-xe@lists.freedesktop.org
Subject: [CI 3/8] drm/xe: Do not wait on TLB invalidations in page fault binds
Date: Mon, 27 Oct 2025 14:42:47 -0700 [thread overview]
Message-ID: <20251027214252.2455093-4-matthew.brost@intel.com> (raw)
In-Reply-To: <20251027214252.2455093-1-matthew.brost@intel.com>
The migrate queue is shared by all processes using a device, so it is
possible that, while one process is servicing a page fault, another
process uses the migrate queue and triggers a TLB invalidation. In the
case of page fault binds, this TLB invalidation is unrelated to the
current bind, so there is no need to wait on it. Teach the bind
pipeline to skip waits on TLB invalidations.
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
---
drivers/gpu/drm/xe/xe_vm.c | 14 ++++++++++++--
drivers/gpu/drm/xe/xe_vm_types.h | 1 +
2 files changed, 13 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
index 2f181c44b8b7..df0a44d9eb46 100644
--- a/drivers/gpu/drm/xe/xe_vm.c
+++ b/drivers/gpu/drm/xe/xe_vm.c
@@ -755,6 +755,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+ vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
for_each_tile(tile, vm->xe, id) {
vops.pt_update_ops[id].wait_vm_bookkeep = true;
vops.pt_update_ops[tile->id].q =
@@ -845,6 +846,7 @@ struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
+ vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
for_each_tile(tile, vm->xe, id) {
vops.pt_update_ops[id].wait_vm_bookkeep = true;
vops.pt_update_ops[tile->id].q =
@@ -3111,8 +3113,13 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
if (number_tiles == 0)
return ERR_PTR(-ENODATA);
- for_each_tile(tile, vm->xe, id)
- n_fence += (1 + XE_MAX_GT_PER_TILE);
+ if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT) {
+ for_each_tile(tile, vm->xe, id)
+ ++n_fence;
+ } else {
+ for_each_tile(tile, vm->xe, id)
+ n_fence += (1 + XE_MAX_GT_PER_TILE);
+ }
fences = kmalloc_array(n_fence, sizeof(*fences), GFP_KERNEL);
if (!fences) {
@@ -3153,6 +3160,9 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
collect_fences:
fences[current_fence++] = fence ?: dma_fence_get_stub();
+ if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT)
+ continue;
+
xe_migrate_job_lock(tile->migrate, q);
for_each_tlb_inval(i)
fences[current_fence++] =
diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
index 542dbe2f9310..3766dc37b3ad 100644
--- a/drivers/gpu/drm/xe/xe_vm_types.h
+++ b/drivers/gpu/drm/xe/xe_vm_types.h
@@ -466,6 +466,7 @@ struct xe_vma_ops {
#define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
#define XE_VMA_OPS_FLAG_MADVISE BIT(1)
#define XE_VMA_OPS_ARRAY_OF_BINDS BIT(2)
+#define XE_VMA_OPS_FLAG_SKIP_TLB_WAIT BIT(3)
u32 flags;
#ifdef TEST_VM_OPS_ERROR
/** @inject_error: inject error to test error handling */
--
2.34.1
Thread overview: 14+ messages
2025-10-27 21:42 [CI 0/8] Fix dma-fence-array ordering Matthew Brost
2025-10-27 21:42 ` [CI 1/8] drm/xe: Add last fence attachment to TLB invalidation job queues Matthew Brost
2025-10-27 21:42 ` [CI 2/8] drm/xe: Decouple bind queue last fence from TLB invalidations Matthew Brost
2025-10-27 21:42 ` Matthew Brost [this message]
2025-10-27 21:42 ` [CI 4/8] drm/xe: Don't allow in fences on zero batch exec or zero binds Matthew Brost
2025-10-27 21:42 ` [CI 5/8] drm/xe: Remove last fence dependecy check from binds Matthew Brost
2025-10-27 21:42 ` [CI 6/8] workqueue: Add ordered workqueue Matthew Brost
2025-10-27 21:42 ` [CI 7/8] dma-buf: Fix dma-fence-array signaling ordering Matthew Brost
2025-10-27 21:42 ` [CI 8/8] dma-buf: Fix dma-fence-chain " Matthew Brost
2025-10-27 23:48 ` ✗ CI.checkpatch: warning for Fix dma-fence-array ordering Patchwork
2025-10-27 23:49 ` ✓ CI.KUnit: success " Patchwork
2025-10-28 0:05 ` ✗ CI.checksparse: warning " Patchwork
2025-10-28 0:32 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-28 8:06 ` ✗ Xe.CI.Full: " Patchwork