Intel-XE Archive on lore.kernel.org
From: "Thomas Hellström" <thomas.hellstrom@linux.intel.com>
To: Matthew Brost <matthew.brost@intel.com>, intel-xe@lists.freedesktop.org
Cc: francois.dugast@intel.com, himal.prasad.ghimiray@intel.com
Subject: Re: [PATCH] drm/xe: Opportunistically skip TLB invalidation on unbind
Date: Fri, 13 Jun 2025 10:24:32 +0200	[thread overview]
Message-ID: <136f645b70f1c0bfd646830d6cef2b60a0c3a22e.camel@linux.intel.com> (raw)
In-Reply-To: <20250613043645.255351-1-matthew.brost@intel.com>

On Thu, 2025-06-12 at 21:36 -0700, Matthew Brost wrote:
> If a range or VMA is invalidated and the scratch page is disabled, there
> is no reason to issue a TLB invalidation on unbind, so skip the TLB
> invalidation when this condition is true. This is an opportunistic check
> as it is done without the notifier lock, thus it is possible for the
> range or VMA to be invalidated after this check is performed.
> 
> This should improve performance of the SVM garbage collector; for
> example, xe_exec_system_allocator --r many-stride-new-prefetch went from
> ~20s to ~9.5s on a BMG.
> 
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> ---
>  drivers/gpu/drm/xe/xe_pt.c  | 18 ++++++++++++++++--
>  drivers/gpu/drm/xe/xe_svm.c |  5 ++++-
>  drivers/gpu/drm/xe/xe_vm.c  |  5 ++++-
>  3 files changed, 24 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_pt.c b/drivers/gpu/drm/xe/xe_pt.c
> index f39d5cc9f411..09c3ccc81cca 100644
> --- a/drivers/gpu/drm/xe/xe_pt.c
> +++ b/drivers/gpu/drm/xe/xe_pt.c
> @@ -1988,7 +1988,14 @@ static int unbind_op_prepare(struct xe_tile *tile,
>  					 xe_vma_end(vma));
>  	++pt_update_ops->current_op;
>  	pt_update_ops->needs_userptr_lock |= xe_vma_is_userptr(vma);
> -	pt_update_ops->needs_invalidation = true;
> +
> +	/*
> +	 * Opportunistically suppress invalidation; READ_ONCE pairs with
> +	 * WRITE_ONCE in MMU notifier or BO move
> +	 */
> +	pt_update_ops->needs_invalidation |= xe_vm_has_scratch(xe_vma_vm(vma)) ||
> +		((vma->tile_present & BIT(tile->id)) &
> +		 ~READ_ONCE(vma->tile_invalidated));
>  
>  	xe_pt_commit_prepare_unbind(vma, pt_op->entries, pt_op->num_entries);
>  
> @@ -2023,7 +2030,14 @@ static int unbind_range_prepare(struct xe_vm *vm,
>  					 range->base.itree.last + 1);
>  	++pt_update_ops->current_op;
>  	pt_update_ops->needs_svm_lock = true;
> -	pt_update_ops->needs_invalidation = true;
> +
> +	/*
> +	 * Opportunistically suppress invalidation; READ_ONCE pairs with
> +	 * WRITE_ONCE in SVM MMU notifier

To avoid having to document the pairing at every use site, perhaps add
some tile_invalidated accessors?


> +	 */
> +	pt_update_ops->needs_invalidation |= xe_vm_has_scratch(vm) ||
> +		((range->tile_present & BIT(tile->id)) &
> +		 ~READ_ONCE(range->tile_invalidated));

Would it be possible to code this repeated pattern as a function?

xe_vm_needs_invalidation(vm, tile, tile_present, tile_invalidated);

Perhaps doesn't improve much on readability. Up to you.

Otherwise LGTM.
Thomas



>  
>  	xe_pt_commit_prepare_unbind(XE_INVALID_VMA, pt_op->entries,
>  				    pt_op->num_entries);
> diff --git a/drivers/gpu/drm/xe/xe_svm.c b/drivers/gpu/drm/xe/xe_svm.c
> index 13abc6049041..5e5bf47293ad 100644
> --- a/drivers/gpu/drm/xe/xe_svm.c
> +++ b/drivers/gpu/drm/xe/xe_svm.c
> @@ -141,7 +141,10 @@ xe_svm_range_notifier_event_begin(struct xe_vm *vm, struct drm_gpusvm_range *r,
>  	for_each_tile(tile, xe, id)
>  		if (xe_pt_zap_ptes_range(tile, vm, range)) {
>  			tile_mask |= BIT(id);
> -			/* Pairs with READ_ONCE in xe_svm_range_is_valid */
> +			/*
> +			 * Pairs with READ_ONCE in xe_svm_range_is_valid or PT
> +			 * code to suppress invalidation on unbind
> +			 */
>  			WRITE_ONCE(range->tile_invalidated,
>  				   range->tile_invalidated | BIT(id));
>  		}
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index d18807b92b18..b296ac37347b 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3924,7 +3924,10 @@ int xe_vm_invalidate_vma(struct xe_vma *vma)
>  	for (id = 0; id < fence_id; ++id)
>  		xe_gt_tlb_invalidation_fence_wait(&fence[id]);
>  
> -	/* WRITE_ONCE pair with READ_ONCE in xe_gt_pagefault.c */
> +	/*
> +	 * WRITE_ONCE pairs with READ_ONCE in xe_gt_pagefault.c or PT code to
> +	 * suppress invalidation on unbind
> +	 */
>  	WRITE_ONCE(vma->tile_invalidated, vma->tile_mask);
>  
>  	return ret;


Thread overview: 5+ messages
2025-06-13  4:36 [PATCH] drm/xe: Opportunistically skip TLB invalidation on unbind Matthew Brost
2025-06-13  5:15 ` Ghimiray, Himal Prasad
2025-06-13  7:43 ` ✗ CI.KUnit: failure for " Patchwork
2025-06-13  8:24 ` Thomas Hellström [this message]
2025-06-13 16:31   ` [PATCH] " Matthew Brost
