Intel-XE Archive on lore.kernel.org
From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [PATCH v5 4/6] drm/xe: Skip TLB invalidation waits in page fault binds
Date: Mon, 03 Nov 2025 16:19:15 +0100	[thread overview]
Message-ID: <131569db27f56fa9ab4e3e193261e267ed2476ae.camel@linux.intel.com> (raw)
In-Reply-To: <20251029205719.2746501-5-matthew.brost@intel.com>

On Wed, 2025-10-29 at 13:57 -0700, Matthew Brost wrote:
> Avoid waiting on unrelated TLB invalidations when servicing page fault
> binds. Since the migrate queue is shared across processes, TLB
> invalidations triggered by other processes may occur concurrently but
> are not relevant to the current bind. Teach the bind pipeline to skip
> waits on such invalidations to prevent unnecessary serialization.
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

> ---
>  drivers/gpu/drm/xe/xe_vm.c       | 14 ++++++++++++--
>  drivers/gpu/drm/xe/xe_vm_types.h |  1 +
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 7a6e254996fb..6c77ff109fe4 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -755,6 +755,7 @@ struct dma_fence *xe_vma_rebind(struct xe_vm *vm, struct xe_vma *vma, u8 tile_ma
>  	xe_assert(vm->xe, xe_vm_in_fault_mode(vm));
>  
>  	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> +	vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
>  	for_each_tile(tile, vm->xe, id) {
>  		vops.pt_update_ops[id].wait_vm_bookkeep = true;
>  		vops.pt_update_ops[tile->id].q =
> @@ -845,6 +846,7 @@ struct dma_fence *xe_vm_range_rebind(struct xe_vm *vm,
>  	xe_assert(vm->xe, xe_vma_is_cpu_addr_mirror(vma));
>  
>  	xe_vma_ops_init(&vops, vm, NULL, NULL, 0);
> +	vops.flags |= XE_VMA_OPS_FLAG_SKIP_TLB_WAIT;
>  	for_each_tile(tile, vm->xe, id) {
>  		vops.pt_update_ops[id].wait_vm_bookkeep = true;
>  		vops.pt_update_ops[tile->id].q =
> @@ -3111,8 +3113,13 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
>  	if (number_tiles == 0)
>  		return ERR_PTR(-ENODATA);
>  
> -	for_each_tile(tile, vm->xe, id)
> -		n_fence += (1 + XE_MAX_GT_PER_TILE);
> +	if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT) {
> +		for_each_tile(tile, vm->xe, id)
> +			++n_fence;
> +	} else {
> +		for_each_tile(tile, vm->xe, id)
> +			n_fence += (1 + XE_MAX_GT_PER_TILE);
> +	}
>  
>  	fences = kmalloc_array(n_fence, sizeof(*fences), GFP_KERNEL);
>  	if (!fences) {
> @@ -3153,6 +3160,9 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
>  
>  collect_fences:
>  		fences[current_fence++] = fence ?: dma_fence_get_stub();
> +		if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT)
> +			continue;
> +
>  		xe_migrate_job_lock(tile->migrate, q);
>  		for_each_tlb_inval(i)
>  			fences[current_fence++] =
> diff --git a/drivers/gpu/drm/xe/xe_vm_types.h b/drivers/gpu/drm/xe/xe_vm_types.h
> index 542dbe2f9310..3766dc37b3ad 100644
> --- a/drivers/gpu/drm/xe/xe_vm_types.h
> +++ b/drivers/gpu/drm/xe/xe_vm_types.h
> @@ -466,6 +466,7 @@ struct xe_vma_ops {
>  #define XE_VMA_OPS_FLAG_HAS_SVM_PREFETCH BIT(0)
>  #define XE_VMA_OPS_FLAG_MADVISE          BIT(1)
>  #define XE_VMA_OPS_ARRAY_OF_BINDS	 BIT(2)
> +#define XE_VMA_OPS_FLAG_SKIP_TLB_WAIT	 BIT(3)
>  	u32 flags;
>  #ifdef TEST_VM_OPS_ERROR
>  	/** @inject_error: inject error to test error handling */



Thread overview: 16+ messages
2025-10-29 20:57 [PATCH v5 0/6] Fix serialization on burst of unbinds - v2 Matthew Brost
2025-10-29 20:57 ` [PATCH v5 1/6] drm/xe: Enforce correct user fence signaling order using drm_syncobjs Matthew Brost
2025-10-30  7:58   ` Thomas Hellström
2025-10-30 12:54     ` Matthew Brost
2025-10-29 20:57 ` [PATCH v5 2/6] drm/xe: Attach last fence to TLB invalidation job queues Matthew Brost
2025-10-30  8:24   ` Thomas Hellström
2025-10-29 20:57 ` [PATCH v5 3/6] drm/xe: Decouple bind queue last fence from TLB invalidations Matthew Brost
2025-10-30  9:52   ` Thomas Hellström
2025-10-29 20:57 ` [PATCH v5 4/6] drm/xe: Skip TLB invalidation waits in page fault binds Matthew Brost
2025-11-03 15:19   ` Thomas Hellström [this message]
2025-10-29 20:57 ` [PATCH v5 5/6] drm/xe: Disallow input fences on zero batch execs and zero binds Matthew Brost
2025-11-03 15:21   ` Thomas Hellström
2025-11-03 15:22     ` Thomas Hellström
2025-10-29 20:57 ` [PATCH v5 6/6] drm/xe: Remove last fence dependency check from binds Matthew Brost
2025-10-30  8:43   ` Thomas Hellström
2025-11-03 15:24   ` Thomas Hellström
