Intel-XE Archive on lore.kernel.org
* [PATCH 0/6] Fix a couple of wedge corner-case memory leaks
@ 2025-10-27 18:04 Stuart Summers
  2025-10-27 18:04 ` [PATCH 1/6] drm/xe: Add additional trace points for LRCs Stuart Summers
                   ` (7 more replies)
  0 siblings, 8 replies; 19+ messages in thread
From: Stuart Summers @ 2025-10-27 18:04 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, niranjana.vishwanathapura, zhanjun.dong,
	shuicheng.lin, Stuart Summers

Most of the patches in this series just add
debug hints to help track these leaks down. I split
them up in case we want to pick and choose which ones
to include in the tree. I found them useful.

The main interesting patch is the last one in the
series, which fixes some corner cases where the
driver becomes wedged either in the middle of communication
with the DRM scheduler or when the GuC becomes
unresponsive. In both of these cases there is a chance
we leak memory tied to exec queue members such as
the LRC and the LRC BO. This patch fixes those
scenarios.

This series depends on [1].

v2: Address feedback from Matt:
    - Let the DRM scheduler handle pausing/unpausing
    - Still do the wait after scheduling disable/deregister
      as with the previous patch, but skip the intermediate
      software-based schedule disable using the "banned"
      flag and instead just jump straight to the deregister
      handling which will fully reset the queue state.
      Note that for this case I am seeing a hardware failure
      after submitting to GuC but before receiving the
      response from GuC. So even if we wedge in this case
      (monitoring the hardware state change), the queue
      itself is not wedged because of the active GuC
      submission (CT is not stalled at that point).
v3: Add back the xe pause checks and instead just kickstart
    message handling in the guc_submit_fini() routine before
    doing the async wait there.
v4: Handle the CT communication loss during wedge asynchronously.
    Also combine those last two patches into one to handle
    wedge cleanup generally.
v5: Add a new patch with a little documentation on the GuC
    submission handling stages.
    Move the scheduler kickstart and destruction call on the
    dangling queues into the wedged_fini() callback. These
    only get called now for queues which are in an error
    state - wedge was called, but these weren't fully
    cleaned up as seen by the lack of exec_queue reference
    at the time of wedging.
    Also fix the migration teardown ordering reference mistake
    pointed out by Matt in the previous series rev.
v6: Implement and test against [1] with the changes Matt suggested.

[1]: https://patchwork.freedesktop.org/series/155315/

Stuart Summers (6):
  drm/xe: Add additional trace points for LRCs
  drm/xe: Add a trace point for VM close
  drm/xe: Add the BO pointer info to the BO trace
  drm/xe: Add new exec queue trace points
  drm/xe: Correct migration VM teardown order
  drm/xe: Clean up GuC software state after a wedge

 drivers/gpu/drm/xe/xe_exec_queue.c |  4 +++
 drivers/gpu/drm/xe/xe_guc_submit.c | 17 +++++++++---
 drivers/gpu/drm/xe/xe_lrc.c        |  4 +++
 drivers/gpu/drm/xe/xe_lrc.h        |  3 +++
 drivers/gpu/drm/xe/xe_migrate.c    |  7 ++---
 drivers/gpu/drm/xe/xe_trace.h      | 22 ++++++++++++++--
 drivers/gpu/drm/xe/xe_trace_bo.h   | 12 +++++++--
 drivers/gpu/drm/xe/xe_trace_lrc.h  | 42 +++++++++++++++++++++++++++++-
 drivers/gpu/drm/xe/xe_vm.c         |  2 ++
 9 files changed, 101 insertions(+), 12 deletions(-)

-- 
2.34.1


* [PATCH 0/6] Fix a couple of wedge corner-case memory leaks
@ 2025-10-14 18:09 Stuart Summers
  2025-10-14 18:09 ` [PATCH 1/6] drm/xe: Add additional trace points for LRCs Stuart Summers
  0 siblings, 1 reply; 19+ messages in thread
From: Stuart Summers @ 2025-10-14 18:09 UTC (permalink / raw)
  Cc: intel-xe, matthew.brost, Stuart Summers

Most of the patches in this series just add
debug hints to help track these leaks down. I split
them up in case we want to pick and choose which ones
to include in the tree. I found them useful.

The two main interesting patches are the last two in the
series, which fix some corner cases where the
driver becomes wedged either in the middle of communication
with the DRM scheduler or when the GuC becomes
unresponsive. In both of these cases there is a chance
we leak memory tied to exec queue members such as
the LRC and the LRC BO. These patches fix those
scenarios.

v2: Address feedback from Matt:
    - Let the DRM scheduler handle pausing/unpausing
    - Still do the wait after scheduling disable/deregister
      as with the previous patch, but skip the intermediate
      software-based schedule disable using the "banned"
      flag and instead just jump straight to the deregister
      handling which will fully reset the queue state.
      Note that for this case I am seeing a hardware failure
      after submitting to GuC but before receiving the
      response from GuC. So even if we wedge in this case
      (monitoring the hardware state change), the queue
      itself is not wedged because of the active GuC
      submission (CT is not stalled at that point).
v3: Add back the xe pause checks and instead just kickstart
    message handling in the guc_submit_fini() routine before
    doing the async wait there.
v4: Handle the CT communication loss during wedge asynchronously.
    Also combine those last two patches into one to handle
    wedge cleanup generally.

Stuart Summers (6):
  drm/xe: Add additional trace points for LRCs
  drm/xe: Add a trace point for VM close
  drm/xe: Add the BO pointer info to the BO trace
  drm/xe: Add new exec queue trace points
  drm/xe: Correct migration VM teardown order
  drm/xe: Clean up GuC software state after a wedge

 drivers/gpu/drm/xe/xe_exec_queue.c |  4 +++
 drivers/gpu/drm/xe/xe_guc_submit.c | 26 +++++++++++++++---
 drivers/gpu/drm/xe/xe_lrc.c        |  4 +++
 drivers/gpu/drm/xe/xe_lrc.h        |  3 +++
 drivers/gpu/drm/xe/xe_migrate.c    |  2 +-
 drivers/gpu/drm/xe/xe_trace.h      | 22 ++++++++++++++--
 drivers/gpu/drm/xe/xe_trace_bo.h   | 12 +++++++--
 drivers/gpu/drm/xe/xe_trace_lrc.h  | 42 +++++++++++++++++++++++++++++-
 drivers/gpu/drm/xe/xe_vm.c         |  2 ++
 9 files changed, 107 insertions(+), 10 deletions(-)

-- 
2.34.1



end of thread, other threads:[~2025-10-28 20:29 UTC | newest]

Thread overview: 19+ messages
-- links below jump to the message on this page --
2025-10-27 18:04 [PATCH 0/6] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-27 18:04 ` [PATCH 1/6] drm/xe: Add additional trace points for LRCs Stuart Summers
2025-10-28 19:46   ` Matt Atwood
2025-10-28 20:11     ` Summers, Stuart
2025-10-27 18:04 ` [PATCH 2/6] drm/xe: Add a trace point for VM close Stuart Summers
2025-10-28 20:03   ` Matt Atwood
2025-10-28 20:10     ` Summers, Stuart
2025-10-27 18:04 ` [PATCH 3/6] drm/xe: Add the BO pointer info to the BO trace Stuart Summers
2025-10-28 20:11   ` Matt Atwood
2025-10-27 18:04 ` [PATCH 4/6] drm/xe: Add new exec queue trace points Stuart Summers
2025-10-28 20:29   ` Matt Atwood
2025-10-27 18:04 ` [PATCH 5/6] drm/xe: Correct migration VM teardown order Stuart Summers
2025-10-27 19:46   ` Matthew Brost
2025-10-27 18:04 ` [PATCH 6/6] drm/xe: Clean up GuC software state after a wedge Stuart Summers
2025-10-27 18:16 ` ✗ CI.checkpatch: warning for Fix a couple of wedge corner-case memory leaks (rev6) Patchwork
2025-10-27 18:17 ` ✗ CI.KUnit: failure " Patchwork
2025-10-27 18:18   ` Summers, Stuart
2025-10-27 20:10     ` Summers, Stuart
  -- strict thread matches above, loose matches on Subject: below --
2025-10-14 18:09 [PATCH 0/6] Fix a couple of wedge corner-case memory leaks Stuart Summers
2025-10-14 18:09 ` [PATCH 1/6] drm/xe: Add additional trace points for LRCs Stuart Summers

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox