From: Matthew Auld <matthew.auld@intel.com>
To: Rodrigo Vivi <rodrigo.vivi@intel.com>, intel-xe@lists.freedesktop.org
Subject: Re: [RFC 06/20] drm/xe: Convert mem_access assertion towards the runtime_pm state
Date: Tue, 9 Jan 2024 11:06:19 +0000
Message-ID: <1fdd8cef-3fe0-496e-9f0e-51fe5ea2a397@intel.com>
In-Reply-To: <20231228021232.2366249-7-rodrigo.vivi@intel.com>
On 28/12/2023 02:12, Rodrigo Vivi wrote:
> The mem_access helpers are going away and getting replaced by
> direct calls of the xe_pm_runtime_{get,put} functions. However, an
> assertion with a warning splat is desired when we hit the worst
> case of a memory access with the device really in the 'suspended'
> state.
>
> Also, this needs to be the first step. Otherwise, the upcoming
> conversion would be really noisy with warn splats of missing
> mem_access gets.
>
> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> ---
> drivers/gpu/drm/xe/xe_device.c | 13 ++++++++++++-
> drivers/gpu/drm/xe/xe_pm.c | 16 ++++++++++++++++
> drivers/gpu/drm/xe/xe_pm.h | 1 +
> 3 files changed, 29 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
> index 86867d42d5329..dc3721bb37b1e 100644
> --- a/drivers/gpu/drm/xe/xe_device.c
> +++ b/drivers/gpu/drm/xe/xe_device.c
> @@ -631,9 +631,20 @@ bool xe_device_mem_access_ongoing(struct xe_device *xe)
> return atomic_read(&xe->mem_access.ref);
> }
>
> +/**
> + * xe_device_assert_mem_access - Inspect the current runtime_pm state.
> + * @xe: xe device instance
> + *
> + * To be used before any kind of memory access. It will splat a debug warning
> + * if the device is currently sleeping. But it doesn't guarantee in any way
> + * that the device will remain awake. The Xe PM runtime get and put
> + * functions should be placed at the outer bound of the memory access, while
> + * this check is intended for inner usage, to splat a warning if the worst
> + * case has just happened.
> + */
> void xe_device_assert_mem_access(struct xe_device *xe)
> {
> - XE_WARN_ON(!xe_device_mem_access_ongoing(xe));
> + XE_WARN_ON(xe_pm_runtime_suspended(xe));
> }
>
> bool xe_device_mem_access_get_if_ongoing(struct xe_device *xe)
> diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> index cabed94a21873..45114e4e76a5a 100644
> --- a/drivers/gpu/drm/xe/xe_pm.c
> +++ b/drivers/gpu/drm/xe/xe_pm.c
> @@ -246,6 +246,22 @@ struct task_struct *xe_pm_read_callback_task(struct xe_device *xe)
> return READ_ONCE(xe->pm_callback_task);
> }
>
> +/**
> + * xe_pm_runtime_suspended - Inspect the current runtime_pm state.
> + * @xe: xe device instance
> + *
> + * This does not provide any guarantee that the device is going to remain
> + * suspended, as it might be racing with the runtime state transitions.
> + * It can only be used as a best-effort assertion, for instance to check that
> + * we are not in the sleep state while trying to access some memory.
> + *
> + * Returns true if PCI device is suspended, false otherwise.
> + */
> +bool xe_pm_runtime_suspended(struct xe_device *xe)
> +{
> + return pm_runtime_suspended(xe->drm.dev);
Would it not be better to check for active instead? That way we can
check for !active above and cast a wider net, with the SUSPENDING and
RESUMING states also being invalid, i.e. another task is about to
suspend or hasn't fully resumed yet. We might also need to check the
callback task, though.
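To make the distinction concrete, here is a small userspace sketch of the two checks: the patch's suspended-only test versus the wider !active test that also flags the transitional states. All names here are hypothetical stand-ins (the enum mirrors the kernel's enum rpm_status, and `current_task` stands in for `current`); this is an illustration, not the real implementation.

```c
#include <stdbool.h>
#include <stddef.h>

/* Mirrors the kernel's enum rpm_status, for illustration only. */
enum rpm_status { RPM_ACTIVE, RPM_RESUMING, RPM_SUSPENDED, RPM_SUSPENDING };

struct fake_dev {
	enum rpm_status status;
	const void *pm_callback_task; /* task running the PM callback, if any */
};

/* What the patch does: only the fully suspended state trips the check. */
static bool bad_state_narrow(const struct fake_dev *d)
{
	return d->status == RPM_SUSPENDED;
}

/*
 * What the suggestion above would do: any state other than RPM_ACTIVE is
 * invalid, except for the task that is itself running the suspend/resume
 * callback and is therefore allowed to touch the hardware mid-transition.
 */
static bool bad_state_wide(const struct fake_dev *d, const void *current_task)
{
	if (d->pm_callback_task == current_task)
		return false;
	return d->status != RPM_ACTIVE;
}
```

The point being that SUSPENDING and RESUMING slip through the narrow check but are caught by the wide one.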
> +}
> +
> /**
> * xe_pm_runtime_suspend - Prepare our device for D3hot/D3Cold
> * @xe: xe device instance
> diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> index 069f41c61505b..67a9bf3dd379b 100644
> --- a/drivers/gpu/drm/xe/xe_pm.h
> +++ b/drivers/gpu/drm/xe/xe_pm.h
> @@ -22,6 +22,7 @@ int xe_pm_resume(struct xe_device *xe);
>
> void xe_pm_init(struct xe_device *xe);
> void xe_pm_runtime_fini(struct xe_device *xe);
> +bool xe_pm_runtime_suspended(struct xe_device *xe);
> int xe_pm_runtime_suspend(struct xe_device *xe);
> int xe_pm_runtime_resume(struct xe_device *xe);
> void xe_pm_runtime_get(struct xe_device *xe);
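For reference, the split the commit message describes (get/put at the outer bound, assertion in the inner layers) can be modeled roughly as below. This is a toy userspace model; none of these names are the real helpers, which live in xe_pm.c and xe_device.c.

```c
#include <stdbool.h>
#include <stdio.h>

static int refs;               /* stands in for the runtime PM usage count */
static bool suspended = true;  /* device power state */
static bool warned;            /* whether the assertion fired */

static void model_runtime_get(void)
{
	refs++;
	suspended = false;     /* a held reference keeps the device awake */
}

static void model_runtime_put(void)
{
	if (--refs == 0)
		suspended = true; /* toy: suspend immediately when idle */
}

/* Inner check: warn if the device is asleep, as in the patch. */
static void model_assert_mem_access(void)
{
	if (suspended) {
		warned = true; /* the real code would WARN() */
		fprintf(stderr, "memory access while suspended!\n");
	}
}

static void outer_entry_point(void)
{
	model_runtime_get();
	model_assert_mem_access(); /* inner layer: fine, reference held */
	model_runtime_put();
}
```

An inner access reached without an outer get is exactly the "worst case" the assertion is meant to splat on.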