From: Matthew Brost <matthew.brost@intel.com>
To: "Summers, Stuart" <stuart.summers@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH v2 2/3] drm/xe: Forcefully tear down exec queues in GuC submit fini
Date: Thu, 18 Dec 2025 17:15:37 -0800 [thread overview]
Message-ID: <aUSnOcL+vYicGrhw@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <b7fef20f35249525239dd287b391045f80d046f6.camel@intel.com>
On Thu, Dec 18, 2025 at 04:36:32PM -0700, Summers, Stuart wrote:
> On Thu, 2025-12-18 at 13:44 -0800, Matthew Brost wrote:
> > In GuC submit fini, forcefully tear down any exec queues by disabling
> > CTs, stopping the scheduler (which cleans up lost G2H), killing all
> > remaining queues, and resuming scheduling to allow any remaining
> > cleanup actions to complete and signal any remaining fences.
> >
> > v2:
> > - Fix VF failure (CI)
> >
> > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Zhanjun Dong <zhanjun.dong@intel.com>
> > Signed-off-by: Matthew Brost <matthew.brost@intel.com>
> >
> > ---
> >
> > This fix will not apply cleanly to any stable kernel, as it depends
> > on functions which have been added to the KMD since the original
> > commit. We will likely have to manually send out patches to stable
> > for the kernels we'd like to fix.
> > ---
> > drivers/gpu/drm/xe/xe_guc_submit.c | 27 ++++++++++++++++++++-------
> > 1 file changed, 20 insertions(+), 7 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index 071cbfec2401..58ec94439df1 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -289,6 +289,8 @@ static bool exec_queue_killed_or_banned_or_wedged(struct xe_exec_queue *q)
> >  			 EXEC_QUEUE_STATE_BANNED));
> >  }
> >  
> > +static int __xe_guc_submit_reset_prepare(struct xe_guc *guc);
> > +
> >  static void guc_submit_fini(struct drm_device *drm, void *arg)
> >  {
> >  	struct xe_guc *guc = arg;
> > @@ -296,6 +298,12 @@ static void guc_submit_fini(struct drm_device *drm, void *arg)
> >  	struct xe_gt *gt = guc_to_gt(guc);
> >  	int ret;
> >  
> > +	/* Forcefully kill any remaining exec queues */
> > +	xe_guc_ct_stop(&guc->ct);
> > +	__xe_guc_submit_reset_prepare(guc);
>
> In xe_guc_declare_wedged() we have the opposite sequence -
> reset_prepare and ct_stop after. Can we adjust to the same here? Or is
> there a reason to swap that?
>
The ordering here doesn't matter. Can swap it here.
Matt
> Thanks,
> Stuart
>
> > +	xe_guc_submit_stop(guc);
> > +	xe_guc_submit_pause_abort(guc);
> > +
> >  	ret = wait_event_timeout(guc->submission_state.fini_wq,
> >  				 xa_empty(&guc->submission_state.exec_queue_lookup),
> >  				 HZ * 5);
> > @@ -2459,16 +2467,10 @@ static void guc_exec_queue_stop(struct xe_guc *guc, struct xe_exec_queue *q)
> >  	}
> >  }
> >  
> > -int xe_guc_submit_reset_prepare(struct xe_guc *guc)
> > +static int __xe_guc_submit_reset_prepare(struct xe_guc *guc)
> >  {
> >  	int ret;
> >  
> > -	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
> > -		return 0;
> > -
> > -	if (!guc->submission_state.initialized)
> > -		return 0;
> > -
> >  	/*
> >  	 * Using an atomic here rather than submission_state.lock as this
> >  	 * function can be called while holding the CT lock (engine reset
> > @@ -2483,6 +2485,17 @@ int xe_guc_submit_reset_prepare(struct xe_guc *guc)
> >  	return ret;
> >  }
> >  
> > +int xe_guc_submit_reset_prepare(struct xe_guc *guc)
> > +{
> > +	if (xe_gt_WARN_ON(guc_to_gt(guc), vf_recovery(guc)))
> > +		return 0;
> > +
> > +	if (!guc->submission_state.initialized)
> > +		return 0;
> > +
> > +	return __xe_guc_submit_reset_prepare(guc);
> > +}
> > +
> >  void xe_guc_submit_reset_wait(struct xe_guc *guc)
> >  {
> >  	wait_event(guc->ct.wq, xe_device_wedged(guc_to_xe(guc)) ||
>
Thread overview: 17+ messages
2025-12-18 21:44 [PATCH v2 0/3] Attempt to fixup reset, wedge, unload corner cases Matthew Brost
2025-12-18 21:44 ` [PATCH v2 1/3] drm/xe: Always kill exec queues in xe_guc_submit_pause_abort Matthew Brost
2025-12-18 23:36 ` Summers, Stuart
2025-12-18 21:44 ` [PATCH v2 2/3] drm/xe: Forcefully tear down exec queues in GuC submit fini Matthew Brost
2025-12-18 23:36 ` Summers, Stuart
2025-12-19 1:15 ` Matthew Brost [this message]
2026-01-08 19:00 ` Dong, Zhanjun
2026-01-08 19:17 ` Matthew Brost
2026-01-14 22:35 ` Dong, Zhanjun
2026-02-06 5:50 ` Matthew Brost
2026-02-06 20:29 ` Dong, Zhanjun
2025-12-18 21:44 ` [PATCH v2 3/3] drm/xe: Trigger queue cleanup if not in wedged mode 2 Matthew Brost
2025-12-18 23:45 ` Summers, Stuart
2025-12-19 1:10 ` Matthew Brost
2025-12-18 23:08 ` ✓ CI.KUnit: success for Attempt to fixup reset, wedge, unload corner cases Patchwork
2025-12-18 23:44 ` ✓ Xe.CI.BAT: " Patchwork
2025-12-20 1:22 ` ✗ Xe.CI.Full: failure " Patchwork