From: Matt Roper <matthew.d.roper@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: matthew.d.roper@intel.com
Subject: [PATCH 18/33] drm/xe/devcoredump: Use scope-based cleanup
Date: Fri, 7 Nov 2025 10:13:34 -0800
Message-ID: <20251107181315.631642-53-matthew.d.roper@intel.com>
In-Reply-To: <20251107181315.631642-35-matthew.d.roper@intel.com>
Use scope-based cleanup (guard() and CLASS() from <linux/cleanup.h>) for
forcewake and runtime PM in the devcoredump code. This eliminates the
goto-based error handling in xe_devcoredump_deferred_snap_work() and
slightly simplifies devcoredump_snapshot().
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
---
drivers/gpu/drm/xe/xe_devcoredump.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
index eb0b40ffffaa..3f58ee4b7c0f 100644
--- a/drivers/gpu/drm/xe/xe_devcoredump.c
+++ b/drivers/gpu/drm/xe/xe_devcoredump.c
@@ -276,7 +276,6 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
struct xe_devcoredump_snapshot *ss = container_of(work, typeof(*ss), work);
struct xe_devcoredump *coredump = container_of(ss, typeof(*coredump), snapshot);
struct xe_device *xe = coredump_to_xe(coredump);
- struct xe_force_wake_ref fw_ref;
/*
* NB: Despite passing a GFP_ flags parameter here, more allocations are done
@@ -287,15 +286,15 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
xe_devcoredump_read, xe_devcoredump_free,
XE_COREDUMP_TIMEOUT_JIFFIES);
- xe_pm_runtime_get(xe);
+ guard(xe_pm_runtime)(xe);
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(ss->gt), XE_FORCEWAKE_ALL);
- if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
- xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
- xe_vm_snapshot_capture_delayed(ss->vm);
- xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
- xe_force_wake_put(fw_ref);
+ xe_with_force_wake(fw_ref, gt_to_fw(ss->gt), XE_FORCEWAKE_ALL) {
+ if (!xe_force_wake_ref_has_domain(fw_ref, XE_FORCEWAKE_ALL))
+ xe_gt_info(ss->gt, "failed to get forcewake for coredump capture\n");
+ xe_vm_snapshot_capture_delayed(ss->vm);
+ xe_guc_exec_queue_snapshot_capture_delayed(ss->ge);
+ }
ss->read.chunk_position = 0;
@@ -306,7 +305,7 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
ss->read.buffer = kvmalloc(XE_DEVCOREDUMP_CHUNK_MAX,
GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer,
XE_DEVCOREDUMP_CHUNK_MAX,
@@ -314,15 +313,12 @@ static void xe_devcoredump_deferred_snap_work(struct work_struct *work)
} else {
ss->read.buffer = kvmalloc(ss->read.size, GFP_USER);
if (!ss->read.buffer)
- goto put_pm;
+ return;
__xe_devcoredump_read(ss->read.buffer, ss->read.size, 0,
coredump);
xe_devcoredump_snapshot_free(ss);
}
-
-put_pm:
- xe_pm_runtime_put(xe);
}
static void devcoredump_snapshot(struct xe_devcoredump *coredump,
@@ -332,7 +328,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
struct xe_devcoredump_snapshot *ss = &coredump->snapshot;
struct xe_guc *guc = exec_queue_to_guc(q);
const char *process_name = "no process";
- struct xe_force_wake_ref fw_ref;
bool cookie;
ss->snapshot_time = ktime_get_real();
@@ -351,7 +346,7 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
cookie = dma_fence_begin_signalling();
/* keep going if fw fails as we still want to save the memory and SW data */
- fw_ref = xe_force_wake_get(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
+ CLASS(xe_force_wake, fw_ref)(gt_to_fw(q->gt), XE_FORCEWAKE_ALL);
ss->guc.log = xe_guc_log_snapshot_capture(&guc->log, true);
ss->guc.ct = xe_guc_ct_snapshot_capture(&guc->ct);
@@ -364,7 +359,6 @@ static void devcoredump_snapshot(struct xe_devcoredump *coredump,
queue_work(system_unbound_wq, &ss->work);
- xe_force_wake_put(fw_ref);
dma_fence_end_signalling(cookie);
}
--
2.51.1