Intel-XE Archive on lore.kernel.org
From: Stuart Summers <stuart.summers@intel.com>
Cc: intel-xe@lists.freedesktop.org, matthew.brost@intel.com,
	Stuart Summers <stuart.summers@intel.com>
Subject: [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling
Date: Mon, 13 Oct 2025 22:31:35 +0000	[thread overview]
Message-ID: <20251013223135.189357-8-stuart.summers@intel.com> (raw)
In-Reply-To: <20251013223135.189357-1-stuart.summers@intel.com>

Currently, if the GuC becomes unresponsive during a schedule
disable event, after we send the CT request, the driver has no
good way to recover. In most other cases, we explicitly wait for
the GuC to respond by checking pending_enable, pending_disable,
or some other state change that we expect to be set once the
response from the GuC is received for that particular request.
Add a similar check on the schedule disable side and make sure
the queue state for the queue being disabled is reset properly
in that case.

v2: Only call the deregistration sequence since in this
    case the scheduling handler should be reset during
    the GT reset.
    By doing that, we don't need a way to track the scheduling
    disable request handler for that queue, making this sequence
    simpler. As a result, don't mark the queue as banned.

Signed-off-by: Stuart Summers <stuart.summers@intel.com>
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index c923f13a13ef..ca37c7a8c5ed 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -924,6 +924,8 @@ int xe_guc_read_stopped(struct xe_guc *guc)
 		GUC_CONTEXT_##enable_disable,				\
 	}
 
+static void handle_deregister_done(struct xe_guc *guc, struct xe_exec_queue *q);
+
 static void disable_scheduling_deregister(struct xe_guc *guc,
 					  struct xe_exec_queue *q)
 {
@@ -961,6 +963,17 @@ static void disable_scheduling_deregister(struct xe_guc *guc,
 	xe_guc_ct_send(&guc->ct, action, ARRAY_SIZE(action),
 		       G2H_LEN_DW_SCHED_CONTEXT_MODE_SET +
 		       G2H_LEN_DW_DEREGISTER_CONTEXT, 2);
+
+	ret = wait_event_timeout(guc->ct.wq,
+				 !exec_queue_pending_disable(q) ||
+				 xe_guc_read_stopped(guc),
+				 HZ * 5);
+	if (!ret || xe_guc_read_stopped(guc)) {
+		xe_gt_warn(guc_to_gt(guc), "Schedule disable failed to respond\n");
+		handle_deregister_done(guc, q);
+		xe_gt_reset_async(q->gt);
+	}
+
 }
 
 static void xe_guc_exec_queue_trigger_cleanup(struct xe_exec_queue *q)
-- 
2.34.1



Thread overview: 24+ messages
2025-10-13 22:31 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-13 22:31 ` [PATCH 1/7] drm/xe: Add additional trace points for LRCs Stuart Summers
2025-10-13 22:31 ` [PATCH 2/7] drm/xe: Add a trace point for VM close Stuart Summers
2025-10-13 22:31 ` [PATCH 3/7] drm/xe: Add the BO pointer info to the BO trace Stuart Summers
2025-10-13 22:31 ` [PATCH 4/7] drm/xe: Add new exec queue trace points Stuart Summers
2025-10-13 22:31 ` [PATCH 5/7] drm/xe: Correct migration VM teardown order Stuart Summers
2025-10-13 22:31 ` [PATCH 6/7] drm/xe: Kick start GPU scheduler on teardown Stuart Summers
2025-10-13 22:32   ` Summers, Stuart
2025-10-14  2:07     ` Matthew Brost
2025-10-13 22:31 ` Stuart Summers [this message]
2025-10-14  2:09   ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Matthew Brost
2025-10-14  3:10     ` Summers, Stuart
2025-10-14  1:04 ` ✗ CI.checkpatch: warning for Fix a couple of wedge corner-case memory leaks (rev3) Patchwork
2025-10-14  1:05 ` ✓ CI.KUnit: success " Patchwork
2025-10-14  1:50 ` ✗ Xe.CI.BAT: failure " Patchwork
2025-10-14 10:02 ` ✗ Xe.CI.Full: " Patchwork
  -- strict thread matches above, loose matches on Subject: below --
2025-10-13 16:24 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-13 16:25 ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Stuart Summers
2025-10-02 23:04 [PATCH 0/7] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-02 23:04 ` [PATCH 7/7] drm/xe: Check for GuC responses on disabling scheduling Stuart Summers
2025-10-03 18:54   ` Matthew Brost
2025-10-03 18:58     ` Summers, Stuart
2025-10-03 19:38       ` Matthew Brost
2025-10-03 19:42         ` Summers, Stuart
2025-10-03 19:49           ` Matthew Brost
2025-10-03 19:53             ` Summers, Stuart
