Intel-XE Archive on lore.kernel.org
From: "Summers, Stuart" <stuart.summers@intel.com>
To: "Brost, Matthew" <matthew.brost@intel.com>
Cc: "intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling
Date: Tue, 14 Oct 2025 03:10:37 +0000	[thread overview]
Message-ID: <b8b076b3602ff1effa1ad6fb3d2ab1d1430f7a67.camel@intel.com> (raw)
In-Reply-To: <aO2wvw/FIZecvvXB@lstrano-desk.jf.intel.com>

On Mon, 2025-10-13 at 19:09 -0700, Matthew Brost wrote:
> On Mon, Oct 13, 2025 at 10:31:35PM +0000, Stuart Summers wrote:
> > Currently if the GuC becomes unresponsive during a schedule
> > disable event, after we send the CT request, the driver
> > doesn't have a good way to recover. In most other cases,
> > we explicitly wait for GuC to respond by checking either
> > pending_enable, pending_disable, or some other state change
> > that we expect to be set after the response from GuC is
> > received for that particular request. Add a similar check
> > on the schedule disable side and make sure the queue state
> > for the queue being disabled is reset properly in that case.
> > 
> > v2: Only call the deregistration sequence since in this
> >     case the scheduling handler should be reset during
> >     the GT reset.
> >     By doing that, we don't need a way to track the scheduling
> >     disable request handler for that queue, making this sequence
> >     simpler. As a result, don't mark the queue as banned.
> > 
> > Signed-off-by: Stuart Summers <stuart.summers@intel.com>
> > ---
> >  drivers/gpu/drm/xe/xe_guc_submit.c | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> > 
> > diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
> > index c923f13a13ef..ca37c7a8c5ed 100644
> > --- a/drivers/gpu/drm/xe/xe_guc_submit.c
> > +++ b/drivers/gpu/drm/xe/xe_guc_submit.c
> > @@ -924,6 +924,8 @@ int xe_guc_read_stopped(struct xe_guc *guc)
> >                 GUC_CONTEXT_##enable_disable,                      
> >      \
> >         }
> >  
> > +static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q);
> > +
> >  static void disable_scheduling_deregister(struct xe_guc *guc,
> >                                           struct xe_exec_queue *q)
> >  {
> > @@ -961,6 +963,17 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
> >         xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
> >                        G2H_LEN_DW_SCHED_CONTEXT_MODE_SET +
> >                        G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
> > +
> > +       ret = wait_event_timeout(guc->ct.wq,
> > +                                !exec_queue_pending_disable(q) ||
> > +                                xe_guc_read_stopped(guc),
> > +                                HZ * 5);
> > +       if (!ret || xe_guc_read_stopped(guc)) {
> > +               xe_gt_warn(guc_to_gt(guc), "Schedule disable failed to respond");
> > +               handle_deregister_done(guc, q);
> > +               xe_gt_reset_async(q->gt);
> > +       }
> > +
> 
> I thought I already stated this in earlier reviews - this path is
> designed to be fully async (i.e., no blocking here). If an H2G gets
> lost for whatever reason, the driver should be smart enough to clean
> up the memory references associated with the lost H2G.

No, you're right, and I remembered your initial comment here. My
thinking was that since we already have a wait earlier in this
function, and similar waits for most of the other GuC communications
in the registration/submit code, having one here follows the same
pattern. I get your point, though. I'll have a more async-focused
solution in the next rev.
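
Roughly, I'm picturing something like this (untested sketch only; the
timeout worker and the helper names around it are made up for
illustration, not real API):

```c
/* Untested sketch: instead of blocking after xe_guc_ct_send(), arm a
 * per-queue deferred worker and let it reclaim the queue if no G2H
 * response ever arrives. sched_disable_timeout_to_q() and the
 * sched_disable_timeout work item are hypothetical.
 */
static void disable_scheduling_deregister(struct xe_guc *guc,
					  struct xe_exec_queue *q)
{
	...
	xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
		       G2H_LEN_DW_SCHED_CONTEXT_MODE_SET +
		       G2H_LEN_DW_DEREGISTER_CONTEXT, 2);

	/* No wait_event_timeout() here - just arm the timeout */
	queue_delayed_work(system_wq, &q->guc->sched_disable_timeout,
			   HZ * 5);
}

static void sched_disable_timeout_worker(struct work_struct *w)
{
	struct xe_exec_queue *q = sched_disable_timeout_to_q(w);
	struct xe_guc *guc = exec_queue_to_guc(q);

	/* G2H arrived in the meantime (or GuC was stopped) - done */
	if (!exec_queue_pending_disable(q) || xe_guc_read_stopped(guc))
		return;

	xe_gt_warn(guc_to_gt(guc), "Schedule disable failed to respond");
	/* Drop the references tied to the lost H2G, then let the GT
	 * reset clean up the rest asynchronously. */
	handle_deregister_done(guc, q);
	xe_gt_reset_async(q->gt);
}
```

The G2H handler would cancel the delayed work on the normal path, so
the worker only ever runs for a genuinely lost H2G.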

Thanks for the feedback!

-Stuart

> 
> Matt
> 
> >  }
> >  
> >  static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue
> > *q)
> > -- 
> > 2.34.1
> > 



Thread overview: 24+ messages
2025-10-13 22:31 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-13 22:31 ` [PATCH 1/7] drm/xe: Add additional trace points for LRCs Stuart Summers
2025-10-13 22:31 ` [PATCH 2/7] drm/xe: Add a trace point for VM close Stuart Summers
2025-10-13 22:31 ` [PATCH 3/7] drm/xe: Add the BO pointer info to the BO trace Stuart Summers
2025-10-13 22:31 ` [PATCH 4/7] drm/xe: Add new exec queue trace points Stuart Summers
2025-10-13 22:31 ` [PATCH 5/7] drm/xe: Correct migration VM teardown order Stuart Summers
2025-10-13 22:31 ` [PATCH 6/7] drm/xe: Kick start GPU scheduler on teardown Stuart Summers
2025-10-13 22:32   ` Summers, Stuart
2025-10-14  2:07     ` Matthew Brost
2025-10-13 22:31 ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Stuart Summers
2025-10-14  2:09   ` Matthew Brost
2025-10-14  3:10     ` Summers, Stuart [this message]
2025-10-14  1:04 ` ✗ CI.checkpatch: warning for Fix a couple of wedge corner-case memory leaks (rev3) Patchwork
2025-10-14  1:05 ` ✓ CI.KUnit: success " Patchwork
2025-10-14  1:50 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-14 10:02 ` ✗ Xe.CI.Full: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2025-10-13 16:24 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-13 16:25 ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Stuart Summers
2025-10-02 23:04 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-02 23:04 ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Stuart Summers
2025-10-03 18:54   ` Matthew Brost
2025-10-03 18:58     ` Summers, Stuart
2025-10-03 19:38       ` Matthew Brost
2025-10-03 19:42         ` Summers, Stuart
2025-10-03 19:49           ` Matthew Brost
2025-10-03 19:53             ` Summers, Stuart
