intel-xe.lists.freedesktop.org archive mirror
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "Brost, Matthew" <matthew.brost@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Kassabri, Farah" <farah.kassabri@intel.com>
Subject: Re: [PATCH 6/8] drm/xe: Prep TLB invalidation fence before sending
Date: Thu, 7 Aug 2025 20:22:43 +0000	[thread overview]
Message-ID: <21632c738ae43a9fafa1cf09fe0f7fcac736043c.camel@intel.com> (raw)
In-Reply-To: <aJUJ4KgWF/FjKTCZ@lstrano-desk.jf.intel.com>

On Thu, 2025-08-07 at 13:17 -0700, Matthew Brost wrote:
> On Thu, Aug 07, 2025 at 07:36:47PM +0000, stuartsummers wrote:
> > From: Matthew Brost <matthew.brost@intel.com>
> > 
> > It is a bit backwards to add a TLB invalidation fence to the pending
> > list after issuing the invalidation. Perform this step before issuing
> > the TLB invalidation in a helper function.
> > 
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > Reviewed-by: Stuart Summers <stuart.summers@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_tlb_inval.c | 105 +++++++++++++++---------------
> >  1 file changed, 52 insertions(+), 53 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_tlb_inval.c b/drivers/gpu/drm/xe/xe_tlb_inval.c
> > index 995699108bcb..967e5a261bb6 100644
> > --- a/drivers/gpu/drm/xe/xe_tlb_inval.c
> > +++ b/drivers/gpu/drm/xe/xe_tlb_inval.c
> > @@ -65,19 +65,19 @@ __inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
> >  static void
> >  inval_fence_signal(struct xe_device *xe, struct xe_tlb_inval_fence *fence)
> >  {
> > +       lockdep_assert_held(&fence->tlb_inval->pending_lock);
> > +
> >         list_del(&fence->link);
> >         __inval_fence_signal(xe, fence);
> >  }
> >  
> > -void xe_tlb_inval_fence_signal(struct xe_tlb_inval_fence *fence)
> > +static void
> > +inval_fence_signal_unlocked(struct xe_device *xe,
> > +                           struct xe_tlb_inval_fence *fence)
> >  {
> > -       struct xe_gt *gt;
> > -
> > -       if (WARN_ON_ONCE(!fence->tlb_inval))
> > -               return;
> > -
> > -       gt = fence->tlb_inval->private;
> > -       __inval_fence_signal(gt_to_xe(gt), fence);
> > +       spin_lock_irq(&fence->tlb_inval->pending_lock);
> > +       inval_fence_signal(xe, fence);
> > +       spin_unlock_irq(&fence->tlb_inval->pending_lock);
> >  }
> >  
> >  static void xe_gt_tlb_fence_timeout(struct work_struct *work)
> > @@ -208,14 +208,10 @@ static bool tlb_inval_seqno_past(struct xe_gt *gt, int seqno)
> >         return seqno_recv >= seqno;
> >  }
> >  
> > -static int send_tlb_inval(struct xe_guc *guc,
> > -                         struct xe_tlb_inval_fence *fence,
> > +static int send_tlb_inval(struct xe_guc *guc, struct xe_tlb_inval_fence *fence,
> >                           u32 *action, int len)
> >  {
> >         struct xe_gt *gt = guc_to_gt(guc);
> > -       struct xe_device *xe = gt_to_xe(gt);
> > -       int seqno;
> > -       int ret;
> >  
> >         xe_gt_assert(gt, fence);
> >  
> > @@ -225,47 +221,38 @@ static int send_tlb_inval(struct xe_guc *guc,
> >          * need to be updated.
> >          */
> >  
> > +       xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
> > +       action[1] = fence->seqno;
> > +
> > +       return xe_guc_ct_send(&guc->ct, action, len,
> > +                             G2H_LEN_DW_TLB_INVALIDATE, 1);
> > +}
> > +
> > +static void xe_tlb_inval_fence_prep(struct xe_tlb_inval_fence *fence)
> > +{
> > +       struct xe_tlb_inval *tlb_inval = fence->tlb_inval;
> > +       struct xe_gt *gt = tlb_inval->private;
> > +       struct xe_device *xe = gt_to_xe(gt);
> > +
> >         mutex_lock(&gt->tlb_inval.seqno_lock);
> 
> The seqno lock should be asserted here, with the caller taking this
> and
> holding until send_tlb_inval is called.

Ah thanks again, you're right. I'll have a new rev posted shortly...

-Stuart

> 
> Matt
> 
> > -       seqno = gt->tlb_inval.seqno;
> > -       fence->seqno = seqno;
> > +       fence->seqno = tlb_inval->seqno;
> >         trace_xe_tlb_inval_fence_send(xe, fence);
> > -       action[1] = seqno;
> > -       ret = xe_guc_ct_send(&guc->ct, action, len,
> > -                            G2H_LEN_DW_TLB_INVALIDATE, 1);
> > -       if (!ret) {
> > -               spin_lock_irq(&gt->tlb_inval.pending_lock);
> > -               /*
> > -                * We haven't actually published the TLB fence as per
> > -                * pending_fences, but in theory our seqno could have already
> > -                * been written as we acquired the pending_lock. In such a case
> > -                * we can just go ahead and signal the fence here.
> > -                */
> > -               if (tlb_inval_seqno_past(gt, seqno)) {
> > -                       __inval_fence_signal(xe, fence);
> > -               } else {
> > -                       fence->inval_time = ktime_get();
> > -                       list_add_tail(&fence->link,
> > -                                     &gt->tlb_inval.pending_fences);
> > -
> > -                       if (list_is_singular(&gt->tlb_inval.pending_fences))
> > -                               queue_delayed_work(system_wq,
> > -                                                  &gt->tlb_inval.fence_tdr,
> > -                                                  tlb_timeout_jiffies(gt));
> > -               }
> > -               spin_unlock_irq(&gt->tlb_inval.pending_lock);
> > -       } else {
> > -               __inval_fence_signal(xe, fence);
> > -       }
> > -       if (!ret) {
> > -               gt->tlb_inval.seqno = (gt->tlb_inval.seqno + 1) %
> > -                       TLB_INVALIDATION_SEQNO_MAX;
> > -               if (!gt->tlb_inval.seqno)
> > -                       gt->tlb_inval.seqno = 1;
> > -       }
> > -       mutex_unlock(&gt->tlb_inval.seqno_lock);
> > -       xe_gt_stats_incr(gt, XE_GT_STATS_ID_TLB_INVAL, 1);
> >  
> > -       return ret;
> > +       spin_lock_irq(&tlb_inval->pending_lock);
> > +       fence->inval_time = ktime_get();
> > +       list_add_tail(&fence->link, &tlb_inval->pending_fences);
> > +
> > +       if (list_is_singular(&tlb_inval->pending_fences))
> > +               queue_delayed_work(system_wq,
> > +                                  &tlb_inval->fence_tdr,
> > +                                  tlb_timeout_jiffies(gt));
> > +       spin_unlock_irq(&tlb_inval->pending_lock);
> > +
> > +       tlb_inval->seqno = (tlb_inval->seqno + 1) %
> > +               TLB_INVALIDATION_SEQNO_MAX;
> > +       if (!tlb_inval->seqno)
> > +               tlb_inval->seqno = 1;
> > +       mutex_unlock(&gt->tlb_inval.seqno_lock);
> >  }
> >  
> >  #define MAKE_INVAL_OP(type)    ((type << XE_GUC_TLB_INVAL_TYPE_SHIFT) | \
> > @@ -293,7 +280,12 @@ static int xe_tlb_inval_guc(struct xe_gt *gt,
> >         };
> >         int ret;
> >  
> > +       xe_tlb_inval_fence_prep(fence);
> > +
> >         ret = send_tlb_inval(&gt->uc.guc, fence, action,
> > ARRAY_SIZE(action));
> > +       if (ret < 0)
> > +               inval_fence_signal_unlocked(gt_to_xe(gt), fence);
> > +
> >         /*
> >          * -ECANCELED indicates the CT is stopped for a GT reset. TLB caches
> >          *  should be nuked on a GT reset so this error can be ignored.
> > @@ -420,7 +412,7 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
> >  #define MAX_TLB_INVALIDATION_LEN       7
> >         u32 action[MAX_TLB_INVALIDATION_LEN];
> >         u64 length = end - start;
> > -       int len = 0;
> > +       int len = 0, ret;
> >  
> >         xe_gt_assert(gt, fence);
> >  
> > @@ -481,7 +473,14 @@ int xe_tlb_inval_range(struct xe_tlb_inval *tlb_inval,
> >  
> >         xe_gt_assert(gt, len <= MAX_TLB_INVALIDATION_LEN);
> >  
> > -       return send_tlb_inval(&gt->uc.guc, fence, action, len);
> > +       xe_tlb_inval_fence_prep(fence);
> > +
> > +       ret = send_tlb_inval(&gt->uc.guc, fence, action,
> > +                            ARRAY_SIZE(action));
> > +       if (ret < 0)
> > +               inval_fence_signal_unlocked(xe, fence);
> > +
> > +       return ret;
> >  }
> >  
> >  /**
> > -- 
> > 2.34.1
> > 

