From: Matthew Brost <matthew.brost@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: "Upadhyay, Tejas" <tejas.upadhyay@intel.com>,
"intel-xe@lists.freedesktop.org" <intel-xe@lists.freedesktop.org>,
"Thomas Hellström" <thomas.hellstrom@linux.intel.com>,
"Tseng, William" <william.tseng@intel.com>,
"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: [PATCH v3] drm/xe/sched: stop re-submitting signalled jobs
Date: Wed, 28 May 2025 08:48:49 -0700
Message-ID: <aDcwYSHft86dfxq9@lstrano-desk.jf.intel.com>
In-Reply-To: <aabaf5db-92aa-4ec9-b0a8-6eb9694fa7c7@intel.com>
On Wed, May 28, 2025 at 03:29:17PM +0100, Matthew Auld wrote:
> On 28/05/2025 14:06, Upadhyay, Tejas wrote:
> >
> >
> > > -----Original Message-----
> > > From: Intel-xe <intel-xe-bounces@lists.freedesktop.org> On Behalf Of
> > > Matthew Auld
> > > Sent: 28 May 2025 17:03
> > > To: intel-xe@lists.freedesktop.org
> > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>; Brost, Matthew
> > > <matthew.brost@intel.com>; Tseng, William <william.tseng@intel.com>;
> > > stable@vger.kernel.org
> > > Subject: [PATCH v3] drm/xe/sched: stop re-submitting signalled jobs
> > >
> > > Customer is reporting a really subtle issue where we get random DMAR
> > > faults, hangs and other nasties for kernel migration jobs when
> > > stressing stuff like s2idle/s3/s4. The explosions seem to happen
> > > somewhere after resuming the system, with splats looking something
> > > like:
> > >
> > > PM: suspend exit
> > > rfkill: input handler disabled
> > > xe 0000:00:02.0: [drm] GT0: Engine reset: engine_class=bcs, logical_mask: 0x2, guc_id=0
> > > xe 0000:00:02.0: [drm] GT0: Timedout job: seqno=24496, lrc_seqno=24496, guc_id=0, flags=0x13 in no process [-1]
> > > xe 0000:00:02.0: [drm] GT0: Kernel-submitted job timed out
> > >
> > > The likely cause appears to be a race where suspend cancels the
> > > worker that processes the free_job() callbacks, such that we still
> > > have pending jobs to be freed after the cancel. Following from this,
> > > on resume the pending_list will now contain at least one
> > > already-complete job, but we then call drm_sched_resubmit_jobs(),
> > > which calls run_job() on everything still on the pending_list. If the
> > > job was already complete, then all the resources tied to it, like the
> > > bb itself, any memory that is being accessed, the iommu mappings
> > > etc., might be long gone, since those are usually tied to the fence
> > > signalling.
> > >
> > > This scenario can be seen in ftrace when running a slightly modified xe_pm
> > > (kernel was only modified to inject artificial latency into free_job to make the
> > > race easier to hit):
> > >
> > > xe_sched_job_run: dev=0000:00:02.0, fence=0xffff888276cc8540, seqno=0, lrc_seqno=0, gt=0, guc_id=0, batch_addr=0x000000146910 ...
> > > xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
> > > xe_exec_queue_stop: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=1, guc_state=0x0, flags=0x4
> > > xe_exec_queue_stop: dev=0000:00:02.0, 4:0x1, gt=1, width=1, guc_id=0, guc_state=0x0, flags=0x3
> > > xe_exec_queue_stop: dev=0000:00:02.0, 1:0x1, gt=1, width=1, guc_id=1, guc_state=0x0, flags=0x3
> > > xe_exec_queue_stop: dev=0000:00:02.0, 4:0x1, gt=1, width=1, guc_id=2, guc_state=0x0, flags=0x3
> > > xe_exec_queue_resubmit: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x0, flags=0x13
> > > xe_sched_job_run: dev=0000:00:02.0, fence=0xffff888276cc8540, seqno=0, lrc_seqno=0, gt=0, guc_id=0, batch_addr=0x000000146910 ...
> > > .....
> > > xe_exec_queue_memory_cat_error: dev=0000:00:02.0, 3:0x2, gt=0, width=1, guc_id=0, guc_state=0x3, flags=0x13
> > >
> > > So run_job() is clearly triggered twice for the same job, even though
> > > the first run must have already signalled to completion during
> > > suspend. We can also see a CAT error after the re-submit.
> > >
> > > To prevent this, when re-submitting on resume, skip calling run_job()
> > > on anything on the pending_list whose hw fence has already signalled.
> > >
> > > v2:
> > >   - Make sure to re-arm the fence callbacks with sched_start() (see
> > >     the sketch after this list).
> > > v3 (Matt B):
> > >   - Stop using drm_sched_resubmit_jobs(), which appears to be
> > >     deprecated, and instead open-code a simple loop that skips
> > >     calling run_job() on anything already signalled.
> > >
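> > > For context, the v2 note above is about the resume path pairing the
> > > re-submission with a start, roughly like this (caller-side sketch;
> > > sched_start() here stands for whatever wrapper around
> > > drm_sched_start() the caller uses, per the v2 note):
> > >
> > >	xe_sched_resubmit_jobs(sched);	/* re-runs only unsignalled jobs */
> > >	sched_start(sched);		/* re-arms the hw fence callbacks */
> > >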
> > > Link: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/4856
> > > Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
> > > Signed-off-by: Matthew Auld <matthew.auld@intel.com>
> > > Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> > > Cc: Matthew Brost <matthew.brost@intel.com>
> > > Cc: William Tseng <william.tseng@intel.com>
> > > Cc: <stable@vger.kernel.org> # v6.8+
> > > ---
> > > drivers/gpu/drm/xe/xe_gpu_scheduler.h | 10 +++++++++-
> > > 1 file changed, 9 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/xe/xe_gpu_scheduler.h b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > > index c250ea773491..308061f0cf37 100644
> > > --- a/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > > +++ b/drivers/gpu/drm/xe/xe_gpu_scheduler.h
> > > @@ -51,7 +51,15 @@ static inline void xe_sched_tdr_queue_imm(struct xe_gpu_scheduler *sched)
> > >
> > >  static inline void xe_sched_resubmit_jobs(struct xe_gpu_scheduler *sched)
> > >  {
> > > -	drm_sched_resubmit_jobs(&sched->base);
> > > +	struct drm_sched_job *s_job;
> > > +
> > > +	list_for_each_entry(s_job, &sched->base.pending_list, list) {
> > > +		struct drm_sched_fence *s_fence = s_job->s_fence;
> > > +		struct dma_fence *hw_fence = s_fence->parent;
> > > +
> > > +		if (hw_fence && !dma_fence_is_signaled(hw_fence))
> > > +			sched->base.ops->run_job(s_job);
> > > +	}
> >
> > While this change looks correct, what about those hanging contexts whose cancellation is indicated to waiters via dma_fence_set_error(&s_fence->finished, -ECANCELED)?
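> >
> > For reference, the pattern I mean is the scheduler conveying a
> > cancelled job's result on its finished fence, roughly (generic
> > dma_fence API, simplified):
> >
> >	/* the error must be set before the fence is signalled */
> >	dma_fence_set_error(&s_fence->finished, -ECANCELED);
> >	dma_fence_signal(&s_fence->finished);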
>
> I think a hanging context will usually be banned, so we shouldn't reach this
> point AFAICT. Can you share some more info on what your concern is here? I
> don't think we would normally want to call run_job() again on jobs from a
> hanging context. It looks like our run_job() will bail if the hw fence is
> marked with an error.
>
Also, the hw_fence will be signaled if it is in an error state - see
xe_hw_fence_signaled(). I believe we could remove that check from
run_job() now, but that can be done in a follow-up.
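
The relevant behaviour is roughly the following (a simplified sketch
from memory, not the actual xe_hw_fence_signaled() code):

	#include <linux/dma-fence.h>

	/*
	 * A hw fence that has recorded an error is reported as signaled,
	 * so the dma_fence_is_signaled(hw_fence) check in the new loop
	 * also filters out jobs from errored/hanging contexts.
	 */
	static bool hw_fence_signaled_sketch(struct dma_fence *fence, u64 hw_seqno)
	{
		return READ_ONCE(fence->error) < 0 || hw_seqno >= fence->seqno;
	}

which is why the separate error check in run_job() now looks redundant.
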
Anyways, this patch LGTM:
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
> >
> > Tejas
> > > }
> > >
> > > static inline bool
> > > --
> > > 2.49.0
> >
>