Intel-XE Archive on lore.kernel.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Brost,  Matthew" <matthew.brost@intel.com>
Cc: "Roper, Matthew D" <matthew.d.roper@intel.com>,
	"De Marchi, Lucas" <lucas.demarchi@intel.com>
Subject: Re: [PATCH v2 01/12] drm/xe: Add normalize_invalidation_range
Date: Thu, 6 Nov 2025 20:03:59 +0000	[thread overview]
Message-ID: <a3bcd935e0b4487585c9972cbc5c8714ae125a36.camel@intel.com> (raw)
In-Reply-To: <20251104195616.3339137-2-matthew.brost@intel.com>

On Tue, 2025-11-04 at 11:56 -0800, Matthew Brost wrote:
> Extract the code that determines the alignment of TLB invalidation
> into a helper function — normalize_invalidation_range. This will be
> useful when adding context-based invalidations to the GuC TLB
> invalidation backend.
> 
> Signed-off-by: Nirmoy Das <nirmoy.das@intel.com>
> Signed-off-by: Matthew Brost <matthew.brost@intel.com>

Reviewed-by: Stuart Summers <stuart.summers@intel.com>

> ---
>  drivers/gpu/drm/xe/xe_guc_tlb_inval.c | 71 +++++++++++++--------------
>  1 file changed, 35 insertions(+), 36 deletions(-)
> 
> diff --git a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> index a80175c7c478..61bfa0d485f6 100644
> --- a/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> +++ b/drivers/gpu/drm/xe/xe_guc_tlb_inval.c
> @@ -92,6 +92,38 @@ static int send_tlb_inval_ggtt(struct xe_tlb_inval *tlb_inval, u32 seqno)
>         return -ECANCELED;
>  }
>  
> +static u64 normalize_invalidation_range(struct xe_gt *gt, u64 *start, u64 *end)
> +{
> +       u64 orig_start = *start;
> +       u64 length = *end - *start;
> +       u64 align;
> +
> +       if (length < SZ_4K)
> +               length = SZ_4K;
> +
> +       align = roundup_pow_of_two(length);
> +       *start = ALIGN_DOWN(*start, align);
> +       *end = ALIGN(*end, align);
> +       length = align;
> +       while (*start + length < *end) {
> +               length <<= 1;
> +               *start = ALIGN_DOWN(orig_start, length);
> +       }
> +
> +       if (length >= SZ_2M) {
> +               length = max_t(u64, SZ_16M, length);
> +               *start = ALIGN_DOWN(orig_start, length);
> +       }
> +
> +       xe_gt_assert(gt, length >= SZ_4K);
> +       xe_gt_assert(gt, is_power_of_2(length));
> +       xe_gt_assert(gt, !(length & GENMASK(ilog2(SZ_16M) - 1,
> +                                           ilog2(SZ_2M) + 1)));
> +       xe_gt_assert(gt, IS_ALIGNED(*start, length));
> +
> +       return length;
> +}
> +
>  /*
>   * Ensure that roundup_pow_of_two(length) doesn't overflow.
>   * Note that roundup_pow_of_two() operates on unsigned long,
> @@ -118,47 +150,14 @@ static int send_tlb_inval_ppgtt(struct xe_tlb_inval *tlb_inval, u32 seqno,
>             length > MAX_RANGE_TLB_INVALIDATION_LENGTH) {
>                 action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_FULL);
>         } else {
> -               u64 orig_start = start;
> -               u64 align;
> -
> -               if (length < SZ_4K)
> -                       length = SZ_4K;
> -
> -               /*
> -                * We need to invalidate a higher granularity if start address
> -                * is not aligned to length. When start is not aligned with
> -                * length we need to find the length large enough to create an
> -                * address mask covering the required range.
> -                */
> -               align = roundup_pow_of_two(length);
> -               start = ALIGN_DOWN(start, align);
> -               end = ALIGN(end, align);
> -               length = align;
> -               while (start + length < end) {
> -                       length <<= 1;
> -                       start = ALIGN_DOWN(orig_start, length);
> -               }
> -
> -               /*
> -                * Minimum invalidation size for a 2MB page that the hardware
> -                * expects is 16MB
> -                */
> -               if (length >= SZ_2M) {
> -                       length = max_t(u64, SZ_16M, length);
> -                       start = ALIGN_DOWN(orig_start, length);
> -               }
> -
> -               xe_gt_assert(gt, length >= SZ_4K);
> -               xe_gt_assert(gt, is_power_of_2(length));
> -               xe_gt_assert(gt, !(length & GENMASK(ilog2(SZ_16M) - 1,
> -                                                   ilog2(SZ_2M) + 1)));
> -               xe_gt_assert(gt, IS_ALIGNED(start, length));
> +               u64 normalize_len = normalize_invalidation_range(gt, &start,
> +                                                                &end);
>  
>                 action[len++] = MAKE_INVAL_OP(XE_GUC_TLB_INVAL_PAGE_SELECTIVE);
>                 action[len++] = asid;
>                 action[len++] = lower_32_bits(start);
>                 action[len++] = upper_32_bits(start);
> -               action[len++] = ilog2(length) - ilog2(SZ_4K);
> +               action[len++] = ilog2(normalize_len) - ilog2(SZ_4K);
>         }
>  
>         xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);



Thread overview: 32+ messages
2025-11-04 19:56 [PATCH v2 00/12] Context based TLB invalidations Matthew Brost
2025-11-04 19:56 ` [PATCH v2 01/12] drm/xe: Add normalize_invalidation_range Matthew Brost
2025-11-06 20:03   ` Summers, Stuart [this message]
2025-11-04 19:56 ` [PATCH v2 02/12] drm/xe: Make usm.asid_to_vm allocation use GFP_NOWAIT Matthew Brost
2025-11-06 22:08   ` Summers, Stuart
2025-11-06 22:13   ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 03/12] drm/xe: Add xe_device_asid_to_vm helper Matthew Brost
2025-12-11 22:07   ` Matt Atwood
2025-11-04 19:56 ` [PATCH v2 04/12] drm/xe: Add vm to exec queues association Matthew Brost
2025-11-06 22:15   ` Summers, Stuart
2025-12-12 21:03   ` Summers, Stuart
2025-12-12 21:24     ` Matthew Brost
2025-12-12 21:37       ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 05/12] drm/xe: Taint TLB invalidation seqno lock with GFP_KERNEL Matthew Brost
2025-12-11 22:35   ` Matt Atwood
2025-11-04 19:56 ` [PATCH v2 06/12] drm/xe: Do not forward invalid TLB invalidation seqnos to upper layers Matthew Brost
2025-11-06 22:05   ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 07/12] drm/xe: Rename send_tlb_inval_ppgtt to send_tlb_inval_asid_ppgtt Matthew Brost
2025-11-06 20:22   ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 08/12] drm/xe: Add send_tlb_inval_ppgtt helper Matthew Brost
2025-11-06 20:25   ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 09/12] drm/xe: Add xe_tlb_inval_idle helper Matthew Brost
2025-11-10 18:48   ` Summers, Stuart
2025-12-12 22:00     ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 10/12] drm/xe: Add exec queue active vfunc Matthew Brost
2025-11-04 19:56 ` [PATCH v2 11/12] drm/xe: Add context-based invalidation to GuC TLB invalidation backend Matthew Brost
2025-11-06 21:50   ` Summers, Stuart
2025-11-07  7:01     ` Matthew Brost
2025-11-10 19:29       ` Summers, Stuart
2025-11-11  1:01         ` Matthew Brost
2025-12-12 22:30   ` Summers, Stuart
2025-11-04 19:56 ` [PATCH v2 12/12] drm/xe: Enable context TLB invalidations for CI Matthew Brost
