public inbox for intel-xe@lists.freedesktop.org
* [PATCH v2] drm/xe: Suppress reset log for killed queues
@ 2026-04-13 23:07 Daniele Ceraolo Spurio
  2026-04-13 23:10 ` Matthew Brost
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Daniele Ceraolo Spurio @ 2026-04-13 23:07 UTC (permalink / raw)
  To: intel-xe; +Cc: Daniele Ceraolo Spurio, Matthew Brost

When an app exits abruptly (for example due to the user hitting ctrl+c),
any of its queues that are still active on the HW are immediately
killed. As part of this process, the driver tells the GuC to preempt
the queues off the HW and to reset them if they don't preempt.
This can cause a reset log to be printed to dmesg, which can be confusing
to users, as resets are commonly tied to errors, while any resets performed
in this case are just done to speed up the cleanup. Also, those reset
messages are not useful for debugging, because we don't care what happens
to a queue once its app has exited.

The only case where a queue might be killed before the app that owns it
has exited is if the queue uses PXP and a PXP termination occurs. In
such a scenario a log might be useful, but rather than a reset log it is
better to log that the queue is being killed.

Therefore, we can silence the reset log for all killed queues and add a
simple debug log to record when a PXP queue is killed to cover that case.

Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Matthew Brost <matthew.brost@intel.com>
---
v2: silence for all killed queues instead of just destroyed ones (Matt),
rework commit message, add log for PXP killing.
---
 drivers/gpu/drm/xe/xe_guc_submit.c | 7 ++++---
 drivers/gpu/drm/xe/xe_pxp.c        | 6 ++++++
 2 files changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 10556156eaad..b1222b42174c 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -2968,9 +2968,10 @@ int xe_guc_exec_queue_reset_handler(struct xe_guc *guc, u32 *msg, u32 len)
 	if (unlikely(!q))
 		return -EPROTO;
 
-	xe_gt_info(gt, "Engine reset: engine_class=%s, logical_mask: 0x%x, guc_id=%d, state=0x%0x",
-		   xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id,
-		   atomic_read(&q->guc->state));
+	if (!exec_queue_killed(q))
+		xe_gt_info(gt, "Engine reset: engine_class=%s, logical_mask: 0x%x, guc_id=%d, state=0x%0x",
+			   xe_hw_engine_class_to_str(q->class), q->logical_mask, guc_id,
+			   atomic_read(&q->guc->state));
 
 	trace_xe_exec_queue_reset(q);
 
diff --git a/drivers/gpu/drm/xe/xe_pxp.c b/drivers/gpu/drm/xe/xe_pxp.c
index d570ab3717df..d9af4d6d4bb9 100644
--- a/drivers/gpu/drm/xe/xe_pxp.c
+++ b/drivers/gpu/drm/xe/xe_pxp.c
@@ -13,9 +13,11 @@
 #include "xe_device_types.h"
 #include "xe_exec_queue.h"
 #include "xe_force_wake.h"
+#include "xe_guc_exec_queue_types.h"
 #include "xe_guc_submit.h"
 #include "xe_gsc_proxy.h"
 #include "xe_gt_types.h"
+#include "xe_hw_engine.h"
 #include "xe_huc.h"
 #include "xe_mmio.h"
 #include "xe_pm.h"
@@ -755,6 +757,10 @@ static void pxp_invalidate_queues(struct xe_pxp *pxp)
 	spin_unlock_irq(&pxp->queues.lock);
 
 	list_for_each_entry_safe(q, tmp, &to_clean, pxp.link) {
+		drm_dbg(&pxp->xe->drm,
+			"Killing queue due to PXP termination: eclass=%s, guc_id=%d\n",
+			xe_hw_engine_class_to_str(q->class), q->guc->id);
+
 		xe_exec_queue_kill(q);
 
 		/*
-- 
2.43.0




Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-04-13 23:07 [PATCH v2] drm/xe: Suppress reset log for killed queues Daniele Ceraolo Spurio
2026-04-13 23:10 ` Matthew Brost
2026-04-13 23:16 ` ✓ CI.KUnit: success for " Patchwork
2026-04-14  0:09 ` ✓ Xe.CI.BAT: " Patchwork
2026-04-14  2:34 ` ✗ Xe.CI.FULL: failure " Patchwork
