From: Matt Roper <matthew.d.roper@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.d.roper@intel.com, Gustavo Sousa <gustavo.sousa@intel.com>
Subject: [PATCH v3 12/27] drm/xe/devcoredump: Use scope-based cleanup
Date: Fri, 14 Nov 2025 13:43:48 -0800
Message-ID: <20251114214335.2388972-41-matthew.d.roper@intel.com>
In-Reply-To: <20251114214335.2388972-29-matthew.d.roper@intel.com>
Use scope-based cleanup for forcewake and runtime PM in the devcoredump
code. This eliminates the goto-based error handling in
xe_devcoredump_deferred_snap_work() and slightly simplifies
devcoredump_snapshot().
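For context, these helpers follow the kernel's <linux/cleanup.h> idiom:
a variable declared through guard()/CLASS() has its cleanup routine run
automatically when it goes out of scope, in reverse declaration order.
A minimal sketch of what the two helpers could look like (hypothetical;
the real definitions come from earlier patches in this series, and the
names and fields here are only inferred from their usage in the diff
below):

  #include <linux/cleanup.h>

  /* struct xe_device / struct xe_force_wake come from the xe headers;
   * everything below is an illustrative sketch, not the real code.
   */

  /* Runtime PM: drop the reference automatically at end of scope. */
  DEFINE_GUARD(xe_pm_runtime, struct xe_device *,
               xe_pm_runtime_get(_T), xe_pm_runtime_put(_T))

  /* Forcewake: constructor requests the domains, destructor releases
   * whatever subset was actually acquired. The struct name is an
   * assumption; the .domains field matches the usage in this patch.
   */
  struct xe_force_wake_ref {
          struct xe_force_wake *fw;
          unsigned int domains;
  };

  DEFINE_CLASS(xe_force_wake, struct xe_force_wake_ref,
               xe_force_wake_put(_T.fw, _T.domains),
               ((struct xe_force_wake_ref){
                       .fw = fw,
                       .domains = xe_force_wake_get(fw, domains),
               }),
               struct xe_force_wake *fw, unsigned int domains)

With something along those lines in place, guard(xe_pm_runtime)(xe) and
CLASS(xe_force_wake, fw_ref)(...) release their references when the
enclosing scope exits, which is what lets the gotos below disappear.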
v2:
- Move the forcewake acquisition slightly higher in
devcoredump_snapshot() so that we maintain an easy-to-understand LIFO
cleanup order. (Gustavo)
Cc: Gustavo Sousa <gustavo.sousa@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_devcoredump.c | 30 ++++++++++++-----------------
1 file changed, 12 insertions(+), 18 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index 203e3038cc81..bf347714b5e0 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -276,7 +276,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 	struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
 	struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
 	struct xe_device *xe = coredump_to_xe(coredump);
-	unsigned int fw_ref;
 
 	/*
 	 * NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -287,15 +286,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 			      xe_devcoredump_read, xe_devcoredump_free,
 			      XE_COREDUMP_TIMEOUT_JIFFIES);
 
-	xe_pm_runtime_get(xe);
+	guard(xe_pm_runtime)(xe);
 
 	/* keep going if fw fails as we still want to save the memory and SW data */
-	fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
-	if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
-		xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
-	xe_vm_snapshot_capture_delayed(ss->vm);
-	xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
-	xe_force_wake_put(gt_to_fw(ss->gt), fw_ref);
+	xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+		if (!xe_force_wake_ref_has_domain(fw_ref.domains, XE_FORCEWAKE_ALL))
+			xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+		xe_vm_snapshot_capture_delayed(ss->vm);
+		xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+	}
 
 	ss->read.chunk_position = 0;
 
@@ -306,7 +305,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 		ss->read.buffer = kvmalloc(XE_DEVCOREDUMP_CHUNK_MAX,
 					   GFP_USER);
 		if (!ss->read.buffer)
-			goto put_pm;
+			return;
 
 		__xe_devcoredump_read(ss->read.buffer,
 				      XE_DEVCOREDUMP_CHUNK_MAX,
@@ -314,15 +313,12 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
 	} else {
 		ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
 		if (!ss->read.buffer)
-			goto put_pm;
+			return;
 
 		__xe_devcoredump_read(ss->read.buffer, ss->read.size, 0,
 				      coredump);
 		xe_devcoredump_snapshot_free(ss);
 	}
-
-put_pm:
-	xe_pm_runtime_put(xe);
 }
 
 static void devcoredump_snapshot(struct xe_devcoredump *coredump,
@@ -332,7 +328,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
 	struct xe_guc *guc = exec_queue_to_guc(q);
 	const char *process_name = "no process";
-	unsigned int fw_ref;
 	bool cookie;
 
 	ss->snapshot_time = ktime_get_real();
@@ -348,11 +343,11 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	ss->gt = q->gt;
 	INIT_WORK(&ss->work, xe_devcoredump_deferred_snap_work);
 
+	/* keep going if fw fails as we still want to save the memory and SW data */
+	CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+
 	cookie = dma_fence_begin_signalling();
-	/* keep going if fw fails as we still want to save the memory and SW data */
-	fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
-
 	ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
 	ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
 	ss->ge = xe_guc_exec_queue_snapshot_capture(q);
 
@@ -364,7 +359,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
 	queue_work(system_unbound_wq, &ss->work);
 
-	xe_force_wake_put(gt_to_fw(q->gt), fw_ref);
 	dma_fence_end_signalling(cookie);
 }
--
2.51.1