Intel-XE Archive on lore.kernel.org
From: Matthew Brost <matthew.brost@intel.com>
To: Matt Roper <matthew.d.roper@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe/vm: Use for_each_tlb_inval() to calculate invalidation fences
Date: Tue, 18 Nov 2025 14:51:58 -0800	[thread overview]
Message-ID: <aRz4jtZku6BXs2PI@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20251118201439.3688178-2-matthew.d.roper@intel.com>

On Tue, Nov 18, 2025 at 12:14:40PM -0800, Matt Roper wrote:
> ops_execute() calculates the size of a fence array based on
> XE_MAX_GT_PER_TILE, while the code that actually fills in the fence
> array uses a for_each_tlb_inval() iterator.  This works out okay today
> since both approaches come up with the same number of invalidation
> fences (2: primary GT invalidation + media GT invalidation), but could
> be problematic in the future if there isn't a 1:1 relationship between
> TLBs needing invalidation and potential GTs on the tile.
> 
> Adjust the allocation code to use the same for_each_tlb_inval()
> counting logic as the code that fills the array to future-proof the
> code.
> 
> Signed-off-by: Matt Roper <matthew.d.roper@intel.com>

Reviewed-by: Matthew Brost <matthew.brost@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_vm.c | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_vm.c b/drivers/gpu/drm/xe/xe_vm.c
> index 7cac646bdf1c..6794f38b9340 100644
> --- a/drivers/gpu/drm/xe/xe_vm.c
> +++ b/drivers/gpu/drm/xe/xe_vm.c
> @@ -3111,12 +3111,12 @@ static struct dma_fence *ops_execute(struct xe_vm *vm,
>  	if (number_tiles == 0)
>  		return ERR_PTR(-ENODATA);
>  
> -	if (vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT) {
> -		for_each_tile(tile, vm->xe, id)
> -			++n_fence;
> -	} else {
> -		for_each_tile(tile, vm->xe, id)
> -			n_fence += (1 + XE_MAX_GT_PER_TILE);
> +	for_each_tile(tile, vm->xe, id) {
> +		++n_fence;
> +
> +		if (!(vops->flags & XE_VMA_OPS_FLAG_SKIP_TLB_WAIT))
> +			for_each_tlb_inval(i)
> +				++n_fence;
>  	}
>  
>  	fences = kmalloc_array(n_fence, sizeof(*fences), GFP_KERNEL);
> -- 
> 2.51.1
> 


Thread overview: 10+ messages
2025-11-18 20:14 [PATCH] drm/xe/vm: Use for_each_tlb_inval() to calculate invalidation fences Matt Roper
2025-11-18 20:20 ` ✗ CI.KUnit: failure for " Patchwork
2025-11-18 20:26 ` [PATCH] " Matt Roper
2025-11-19  0:24   ` Matthew Brost
2025-11-18 20:33 ` ✓ CI.KUnit: success for drm/xe/vm: Use for_each_tlb_inval() to calculate invalidation fences (rev2) Patchwork
2025-11-18 22:51 ` Matthew Brost [this message]
2025-11-19  5:49 ` ✓ CI.KUnit: success for drm/xe/vm: Use for_each_tlb_inval() to calculate invalidation fences (rev3) Patchwork
2025-11-19  6:58 ` ✓ Xe.CI.BAT: " Patchwork
2025-11-19  8:15 ` ✓ Xe.CI.Full: " Patchwork
2025-11-19 15:26   ` Matt Roper
