From: Matthew Brost <matthew.brost@intel.com>
To: Shuicheng Lin <shuicheng.lin@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH 2/2] drm/xe: Release runtime pm for error path of xe_devcoredump_read()
Date: Wed, 2 Jul 2025 23:57:54 -0700 [thread overview]
Message-ID: <aGYp8kPgCyfKFsm9@lstrano-desk.jf.intel.com> (raw)
In-Reply-To: <20250703062024.3441918-6-shuicheng.lin@intel.com>
On Thu, Jul 03, 2025 at 06:20:27AM +0000, Shuicheng Lin wrote:
> A call to xe_pm_runtime_put() is missing in the error path of
> xe_devcoredump_read().
> Also add a function description comment for xe_devcoredump_read() to
> help readers understand it.
>
> Fixes: c4a2e5f865b7 ("drm/xe: Add devcoredump chunking")
> Cc: Matthew Brost <matthew.brost@intel.com>
> Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
> ---
> drivers/gpu/drm/xe/xe_devcoredump.c | 32 +++++++++++++++++++++++------
> 1 file changed, 26 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_devcoredump.c b/drivers/gpu/drm/xe/xe_devcoredump.c
> index 94625010abc4..701ffe6c8264 100644
> --- a/drivers/gpu/drm/xe/xe_devcoredump.c
> +++ b/drivers/gpu/drm/xe/xe_devcoredump.c
> @@ -171,14 +171,29 @@ static void xe_devcoredump_snapshot_free(struct xe_devcoredump_snapshot *ss)
>
> #define XE_DEVCOREDUMP_CHUNK_MAX (SZ_512M + SZ_1G)
>
> +/**
> + * xe_devcoredump_read - Read data from the Xe device coredump snapshot
The preferred style is:
s/xe_devcoredump_read/xe_devcoredump_read()/
> + * @buffer: Destination buffer to copy the coredump data into
> + * @offset: Offset in the coredump data to start reading from
> + * @count: Number of bytes to read
> + * @data: Pointer to the xe_devcoredump structure
> + * @datalen: Length of the data (unused)
> + *
> + * Reads a chunk of the coredump snapshot data into the provided buffer,
> + * handling chunked reads for large coredumps and ensuring proper locking.
A little more descriptive would be:
Reads a chunk of the coredump snapshot data into the provided buffer. If the
devcoredump is smaller than 1.5 GB, it is read directly from a pre-written
buffer. For larger devcoredumps, the pre-written buffer must be periodically
repopulated from the snapshot state due to kmalloc size limitations.
> + *
> + * Return: Number of bytes copied on success, or a negative error code on failure.
> + */
> static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
> size_t count, void *data, size_t datalen)
> {
> struct xe_devcoredump *coredump = data;
> struct xe_devcoredump_snapshot *ss;
> - ssize_t byte_copied;
> + ssize_t byte_copied = 0;
> u32 chunk_offset;
> ssize_t new_chunk_position;
> + bool pm_acquired = false;
> + int ret = 0;
>
> if (!coredump)
> return -ENODEV;
> @@ -188,19 +203,23 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
> /* Ensure delayed work is captured before continuing */
> flush_work(&ss->work);
>
> - if (ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX)
> + if (ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX) {
> xe_pm_runtime_get(gt_to_xe(ss->gt));
> + pm_acquired = true;
> + }
I'd write this like:
pm_acquired = ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX;
if (pm_acquired)
xe_pm_runtime_get(gt_to_xe(ss->gt));
>
> mutex_lock(&coredump->lock);
>
> if (!ss->read.buffer) {
> mutex_unlock(&coredump->lock);
> - return -ENODEV;
> + ret = -ENODEV;
> + goto pm_put;
> }
>
> if (offset >= ss->read.size) {
> mutex_unlock(&coredump->lock);
> - return 0;
> + ret = 0;
You don't need to set ret = 0 here as it is initialized to zero above.
Nits aside, patch functionally looks correct.
Matt
> + goto pm_put;
> }
>
> new_chunk_position = div_u64_rem(offset,
> @@ -223,10 +242,11 @@ static ssize_t xe_devcoredump_read(char *buffer, loff_t offset,
>
> mutex_unlock(&coredump->lock);
>
> - if (ss->read.size > XE_DEVCOREDUMP_CHUNK_MAX)
> +pm_put:
> + if (pm_acquired)
> xe_pm_runtime_put(gt_to_xe(ss->gt));
>
> - return byte_copied;
> + return byte_copied ? byte_copied : ret;
> }
>
> static void xe_devcoredump_free(void *data)
> --
> 2.49.0
>
Thread overview:
2025-07-03 6:20 [PATCH 0/2] Remove unused code in devcoredump_snapshot Shuicheng Lin
2025-07-03 6:20 ` [PATCH 1/2] drm/xe: Remove unused code in devcoredump_snapshot() Shuicheng Lin
2025-07-03 6:20 ` [PATCH 2/2] drm/xe: Release runtime pm for error path of xe_devcoredump_read() Shuicheng Lin
2025-07-03 6:57 ` Matthew Brost [this message]
2025-07-03 7:01 ` Matthew Brost
2025-07-03 6:29 ` ✗ CI.checkpatch: warning for Remove unused code in devcoredump_snapshot Patchwork
2025-07-03 6:30 ` ✓ CI.KUnit: success " Patchwork
2025-07-03 7:41 ` [PATCH v2 0/2] Remove unused code in devcoredump_snapshot() Shuicheng Lin
2025-07-03 7:41 ` [PATCH v2 1/2] drm/xe: " Shuicheng Lin
2025-07-03 14:38 ` Dong, Zhanjun
2025-07-03 15:35 ` Matthew Brost
2025-07-03 7:41 ` [PATCH v2 2/2] drm/xe: Release runtime pm for error path of xe_devcoredump_read() Shuicheng Lin
2025-07-03 15:34 ` Matthew Brost
2025-07-04 5:56 ` ✗ CI.checkpatch: warning for Remove unused code in devcoredump_snapshot (rev2) Patchwork
2025-07-04 5:57 ` ✓ CI.KUnit: success " Patchwork
2025-07-04 20:01 ` ✗ Xe.CI.Full: failure for Remove unused code in devcoredump_snapshot Patchwork