Intel-XE Archive on lore.kernel.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
	"Lin, Shuicheng" <shuicheng.lin@intel.com>
Cc: "Brost, Matthew" <matthew.brost@intel.com>,
	"Ceraolo Spurio, Daniele" <daniele.ceraolospurio@intel.com>,
	"Auld, Matthew" <matthew.auld@intel.com>
Subject: Re: [PATCH] drm/xe/guc: Destroy LR exec queue directly if GuC is not running
Date: Wed, 15 Oct 2025 18:15:33 +0000	[thread overview]
Message-ID: <a051e041d27c98d60fed5d7368b262745024ee2d.camel@intel.com> (raw)
In-Reply-To: <DM4PR11MB5456A099DB75CBEC655B3B8EEAE8A@DM4PR11MB5456.namprd11.prod.outlook.com>

On Wed, 2025-10-15 at 18:11 +0000, Lin, Shuicheng wrote:
> On Wed, Oct 15, 2025 11:01 AM Stuart Summers wrote:
> > On Wed, 2025-10-15 at 17:07 +0000, Lin, Shuicheng wrote:
> > > Hi all,
> > > Could you please help review the patch?
> > > Thanks in advance for your time and support!
> > > 
> > > Best Regards
> > > Shuicheng
> > > 
> > > On Mon, Oct  13, 2025 8:37 PM Shuicheng Lin wrote:
> > > > During LR exec queue cleanup, if the GuC firmware is not
> > > > running,
> > > > the driver cannot communicate with the GuC to properly
> > > > deregister
> > > > the exec queue. In this case, directly destroy the exec queue
> > > > instead of attempting deregistration.
> > > > 
> > > > This prevents schedule disable failures and GuC ID resource
> > > > leaks, as shown in the dmesg log below:
> > > > "
> > > > [   50.242564] pci 0000:03:00.0: [drm] GT0: Schedule disable failed to respond, guc_id=2
> > > > [   50.242568] ------------[ cut here ]------------
> > > > [   50.242584] pci 0000:03:00.0: [drm] Assertion `ret` failed!
> > > > ...
> > > > [   50.244942] pci 0000:03:00.0: [drm] *ERROR* GT0: GUC ID manager unclean (1/65535)
> > > > [   50.244970] pci 0000:03:00.0: [drm] GT0: total 65535
> > > > [   50.245002] pci 0000:03:00.0: [drm] GT0:     used 1
> > > > [   50.245032] pci 0000:03:00.0: [drm] GT0:     range 2..2 (1)
> > > > "
> > > > 
> > > > Fixes: 8ae8a2e8dd21 ("drm/xe: Long running job update")
> > > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > > Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_guc_submit.c | 10 +++++++++-
> > > >  1 file changed, 9 insertions(+), 1 deletion(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > index 0ef67d3523a7..d2dfbdc82920 100644
> > > > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > > > @@ -47,6 +47,8 @@
> > > >  #include "xe_uc_fw.h"
> > > >  #include "xe_vm.h"
> > > > 
> > > > +static void __guc_exec_queue_destroy(struct xe_guc *guc, struct xe_exec_queue *q);
> > > > +
> > > >  static struct xe_guc *
> > > >  exec_queue_to_guc(struct xe_exec_queue *q)
> > > >  {
> > > > @@ -1060,10 +1062,15 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
> > > >          * state.
> > > >          */
> > > >         if (!wedged && exec_queue_registered(q) && !exec_queue_destroyed(q)) {
> > > > -               struct xe_guc *guc = exec_queue_to_guc(q);
> > > >                 int ret;
> > > > 
> > > >                 set_exec_queue_banned(q);
> > > > +               /* If GuC is not running, just destroy the exec queue as we can't communicate with it */
> > > > +               if (!xe_uc_fw_is_running(&guc->fw)) {
> > > > +                       __guc_exec_queue_destroy(guc, q);
> > 
> > Hey Shuicheng,
> > 
> > Matt B. had also pointed me to your series - I had missed it
> > somehow.
> > I'm seeing something similar but in the wedged path and have an
> > idea in [1].
> > Let me test your latest changes and get back to you - happy to go
> > with your change if it works (not a full review yet).
> > 
> > Thanks,
> > Stuart
> > 
> > [1] https://patchwork.freedesktop.org/series/155352/
> 
> Hi Stuart,
> This patch is specifically for LR exec queue.
> And there is another patch for normal exec queue that is already
> merged:
> https://patchwork.freedesktop.org/series/155417/

Ah, got it. I have tested with that other patch, but it didn't cover my
case: the GuC was alive at the time of the H2G communication and then
died before we received a response. My patch attempts to cover that
case. I'll take a look at both of these, though, and get back.

Thanks,
Stuart

> 
> Best Regards
> Shuicheng
> 
> > 
> > > > +                       goto skip_deregister;
> > > > +               }
> > > > +
> > > >                 disable_scheduling_deregister(guc, q);
> > > > 
> > > >                 /*
> > > > @@ -1088,6 +1095,7 @@ static void xe_guc_exec_queue_lr_cleanup(struct work_struct *w)
> > > >                 }
> > > >         }
> > > > 
> > > > +skip_deregister:
> > > >         if (!exec_queue_killed(q) && !xe_lrc_ring_is_idle(q->lrc[0]))
> > > >                 xe_devcoredump(q, NULL, "LR job cleanup, guc_id=%d", q->guc->id);
> > > > 
> > > > --
> > > > 2.49.0
> > > 
> 



Thread overview: 13+ messages
2025-10-14  3:36 [PATCH] drm/xe/guc: Destroy LR exec queue directly if GuC is not running Shuicheng Lin
2025-10-14  4:28 ` ✗ CI.checkpatch: warning for " Patchwork
2025-10-14  4:29 ` ✓ CI.KUnit: success " Patchwork
2025-10-14  5:08 ` ✓ Xe.CI.BAT: " Patchwork
2025-10-14 12:53 ` ✗ Xe.CI.Full: failure " Patchwork
2025-10-15 17:07 ` [PATCH] " Lin, Shuicheng
2025-10-15 18:01   ` Summers, Stuart
2025-10-15 18:11     ` Lin, Shuicheng
2025-10-15 18:15       ` Summers, Stuart [this message]
2025-10-15 18:02 ` Daniele Ceraolo Spurio
2025-10-15 21:07   ` Lin, Shuicheng
2025-10-16  3:24 ` Matthew Brost
2025-10-16 14:51   ` Lin, Shuicheng
