From: "Summers, Stuart" <stuart.summers@intel.com>
To: "Brost, Matthew" <matthew.brost@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v4 10/12] drm/xe: Add exec queue active vfunc
Date: Wed, 14 Jan 2026 21:17:18 +0000 [thread overview]
Message-ID: <34980f8931ff389d1bbee61b8b317ddaef845604.camel@intel.com> (raw)
In-Reply-To: <a3868286e8d300a50af06209feb313646440d446.camel@intel.com>
On Tue, 2026-01-13 at 22:21 +0000, Summers, Stuart wrote:
> On Tue, 2026-01-13 at 14:16 -0800, Matthew Brost wrote:
> > On Tue, Jan 13, 2026 at 02:32:58PM -0700, Summers, Stuart wrote:
> > > On Mon, 2026-01-12 at 18:52 -0800, Matthew Brost wrote:
> > > > If an exec queue is inactive (e.g., not registered or scheduling
> > > > is disabled), TLB invalidations are not issued for that queue.
> > > > Add a virtual function to determine the active state, which TLB
> > > > invalidation logic can hook into.
> > > >
> > > > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_exec_queue_types.h | 2 ++
> > > >  drivers/gpu/drm/xe/xe_execlist.c         | 7 +++++++
> > > >  drivers/gpu/drm/xe/xe_guc_submit.c       | 6 ++++++
> > > >  3 files changed, 15 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_exec_queue_types.h b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > > index d3e2789cf5bc..9cca558c5809 100644
> > > > --- a/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > > +++ b/drivers/gpu/drm/xe/xe_exec_queue_types.h
> > > > @@ -298,6 +298,8 @@ struct xe_exec_queue_ops {
> > > >  	void (*resume)(struct xe_exec_queue *q);
> > > >  	/** @reset_status: check exec queue reset status */
> > > >  	bool (*reset_status)(struct xe_exec_queue *q);
> > > > +	/** @active: check exec queue is active */
> > > > +	bool (*active)(struct xe_exec_queue *q);
> > > >  };
> > > >  
> > > >  #endif
> > > > diff --git a/drivers/gpu/drm/xe/xe_execlist.c b/drivers/gpu/drm/xe/xe_execlist.c
> > > > index 46c17a18a3f4..e4a2674f48c7 100644
> > > > --- a/drivers/gpu/drm/xe/xe_execlist.c
> > > > +++ b/drivers/gpu/drm/xe/xe_execlist.c
> > > > @@ -469,6 +469,12 @@ static bool execlist_exec_queue_reset_status(struct xe_exec_queue *q)
> > > >  	return false;
> > > >  }
> > > >  
> > > > +static bool execlist_exec_queue_active(struct xe_exec_queue *q)
> > > > +{
> > > > +	/* NIY */
> > > > +	return false;
> > > > +}
> > > > +
> > > >  static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
> > > >  	.init = execlist_exec_queue_init,
> > > >  	.kill = execlist_exec_queue_kill,
> > > > @@ -481,6 +487,7 @@ static const struct xe_exec_queue_ops execlist_exec_queue_ops = {
> > > >  	.suspend_wait = execlist_exec_queue_suspend_wait,
> > > >  	.resume = execlist_exec_queue_resume,
> > > >  	.reset_status = execlist_exec_queue_reset_status,
> > > > +	.active = execlist_exec_queue_active,
> > > >  };
> > > >  
> > > >  int xe_execlist_init(struct xe_gt *gt)
> > > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > index be8fa76baf1d..19282e5ca897 100644
> > > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > @@ -2203,6 +2203,11 @@ static bool guc_exec_queue_reset_status(struct xe_exec_queue *q)
> > > >  	return exec_queue_reset(q) || exec_queue_killed_or_banned_or_wedged(q);
> > > >  }
> > > >  
> > > > +static bool guc_exec_queue_active(struct xe_exec_queue *q)
> > > > +{
> > > > +	return exec_queue_enabled(q) && !exec_queue_pending_disable(q);
> > >
> > > So I think the requirement here is !exec_queue_pending_disable &&
> > > !exec_queue_destroyed rather than exec_queue_enabled... But if the
> >
> > If destroyed is set, the refcounting exchange in the following patch
> > will fail, so an exec_queue_destroyed check here isn't needed.
>
> Ok. Still looking over that other comment but it looks promising...
>
> >
> > > context isn't enabled (set runnable to true in GuC), it means the
> > > hardware shouldn't be touching any of the associated memory anyway,
> > > and we shouldn't be issuing any invalidations, so this still looks
> > > ok to me. It does mean we'll be doing full invalidations for those,
> > > I think, which could add some performance impact in the window
> > > between register and enable. For now though, as long as this is
> > > working (still doing some testing on my end, and I had some
> > > questions on a later patch), we can go with this.
> >
> > My reasoning here is that the context is not enabled, so we don't
> > need to issue an invalidation: the next context switch on the
> > hardware will issue one, which is the current hardware behavior. If
> > that changes, this could simply become an exec_queue_registered
> > check. I forget if this is changing in future hardware.
>
> Ok... yeah I guess we can address this if it comes up. I was more
> worried about falling back to the full invalidation if the list was
> empty here.
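[Editor's note: a rough userspace model of the behavior discussed above, under
the assumption that the invalidation path walks the VM's exec queue list and
issues per-context invalidations only for active queues, falling back to a full
invalidation otherwise. The struct and helper names below are stand-ins for
illustration, not the real xe identifiers.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct xe_exec_queue: just the state the active check
 * cares about, plus a link for the VM's exec queue list. */
struct exec_queue {
	bool enabled;
	bool pending_disable;
	struct exec_queue *next;
};

/* Mirrors the guc_exec_queue_active() check from the patch. */
static bool queue_active(const struct exec_queue *q)
{
	return q->enabled && !q->pending_disable;
}

/* Returns true if at least one per-context invalidation was issued;
 * false means the caller would fall back to a full invalidation. */
static bool invalidate_contexts(const struct exec_queue *head)
{
	bool issued = false;
	const struct exec_queue *q;

	for (q = head; q; q = q->next)
		if (queue_active(q))
			issued = true; /* send per-context inval here */
	return issued;
}
```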
>
> >
> > I also noticed I have a multi-queue bug here too. This needs to
> > check the primary queue's state for multi-queue.
>
> Ah true, good catch...
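[Editor's note: the multi-queue fix Matt describes might look roughly like the
sketch below, assuming a secondary queue holds a pointer to its primary and the
GuC enable/pending-disable state lives only on the primary. The field and
function names are illustrative, not the real xe code.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct xe_exec_queue; 'primary' is NULL when the
 * queue is itself the primary of a multi-queue group. */
struct exec_queue {
	bool enabled;
	bool pending_disable;
	struct exec_queue *primary;
};

/* Sketch of an active check that resolves a secondary queue to its
 * primary before looking at the enable/pending-disable state. */
static bool queue_active(const struct exec_queue *q)
{
	if (q->primary)
		q = q->primary;
	return q->enabled && !q->pending_disable;
}
```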
Just a quick confirmation: no other concerns from me on this patch
outside of the secondary queue check you had mentioned.
Thanks,
Stuart
>
> Thanks,
> Stuart
>
> >
> > Matt
> >
> > >
> > > Thanks,
> > > Stuart
> > >
> > > > +}
> > > > +
> > > >  /*
> > > >   * All of these functions are an abstraction layer which other parts of Xe can
> > > >   * use to trap into the GuC backend. All of these functions, aside from init,
> > > > @@ -2222,6 +2227,7 @@ static const struct xe_exec_queue_ops guc_exec_queue_ops = {
> > > >  	.suspend_wait = guc_exec_queue_suspend_wait,
> > > >  	.resume = guc_exec_queue_resume,
> > > >  	.reset_status = guc_exec_queue_reset_status,
> > > > +	.active = guc_exec_queue_active,
> > > >  };
> > > >  
> > > >  static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
> > >
>