From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Matthew Auld <matthew.auld@intel.com>
Cc: <intel-xe@lists.freedesktop.org>
Subject: Re: [PATCH] drm/xe: Convert xe_pm_runtime_{get,put} to void and protect from recursion
Date: Fri, 1 Mar 2024 13:06:57 -0500 [thread overview]
Message-ID: <ZeIZQTZB7X0CusQB@intel.com> (raw)
In-Reply-To: <c64bab22-516e-4a9a-a3bf-e08dcef536e0@intel.com>
On Fri, Mar 01, 2024 at 05:52:02PM +0000, Matthew Auld wrote:
> On 27/02/2024 18:35, Rodrigo Vivi wrote:
> > With mem_access going away and pm_runtime getting called instead,
> > we need to protect these calls against recursion.
> >
> > For D3cold, the TTM migration helpers will trigger job execution.
> > Job execution is protected by direct runtime_pm calls, but those
> > cannot be taken again if we are already in a runtime suspend/resume
> > transition while evicting/restoring memory for D3cold. So, check
> > xe_pm_read_callback_task to detect that case.
> >
> > The put is asynchronous, so there is no need to block it. However, for
> > a proper balance, we need to ensure that the references are taken and
> > released regardless of the flow. So, let's convert them all to void and
> > use the direct linux/pm_runtime functions.
> >
> > Cc: Matthew Auld <matthew.auld@intel.com>
> > Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_pm.c | 25 ++++++++++++++-----------
> > drivers/gpu/drm/xe/xe_pm.h | 4 ++--
> > 2 files changed, 16 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
> > index b5511e3c3153..3664480b21ba 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.c
> > +++ b/drivers/gpu/drm/xe/xe_pm.c
> > @@ -408,26 +408,29 @@ int xe_pm_runtime_resume(struct xe_device *xe)
> > /**
> > * xe_pm_runtime_get - Get a runtime_pm reference and resume synchronously
> > * @xe: xe device instance
> > - *
> > - * Returns: Any number greater than or equal to 0 for success, negative error
> > - * code otherwise.
> > */
> > -int xe_pm_runtime_get(struct xe_device *xe)
> > +void xe_pm_runtime_get(struct xe_device *xe)
>
> Actually there is still the caller in intel_runtime_pm_get() compat. What is
> the correct patch order here? It's kind of hard to follow.
Sorry for the conflicting parallel shot.
Put them together now: https://patchwork.freedesktop.org/series/130625/
I hope this makes sense now.
>
> > {
> > - return pm_runtime_get_sync(xe->drm.dev);
> > + pm_runtime_get_noresume(xe->drm.dev);
> > +
> > + if (xe_pm_read_callback_task(xe) == current)
> > + return;
> > +
> > + pm_runtime_resume(xe->drm.dev);
> > }
> > /**
> > * xe_pm_runtime_put - Put the runtime_pm reference back and mark as idle
> > * @xe: xe device instance
> > - *
> > - * Returns: Any number greater than or equal to 0 for success, negative error
> > - * code otherwise.
> > */
> > -int xe_pm_runtime_put(struct xe_device *xe)
> > +void xe_pm_runtime_put(struct xe_device *xe)
> > {
> > - pm_runtime_mark_last_busy(xe->drm.dev);
> > - return pm_runtime_put(xe->drm.dev);
> > + if (xe_pm_read_callback_task(xe) == current) {
> > + pm_runtime_put_noidle(xe->drm.dev);
> > + } else {
> > + pm_runtime_mark_last_busy(xe->drm.dev);
> > + pm_runtime_put(xe->drm.dev);
> > + }
> > }
> > /**
> > diff --git a/drivers/gpu/drm/xe/xe_pm.h b/drivers/gpu/drm/xe/xe_pm.h
> > index 7f5884babb29..fdc2a49c1a1f 100644
> > --- a/drivers/gpu/drm/xe/xe_pm.h
> > +++ b/drivers/gpu/drm/xe/xe_pm.h
> > @@ -26,9 +26,9 @@ void xe_pm_runtime_fini(struct xe_device *xe);
> > bool xe_pm_runtime_suspended(struct xe_device *xe);
> > int xe_pm_runtime_suspend(struct xe_device *xe);
> > int xe_pm_runtime_resume(struct xe_device *xe);
> > -int xe_pm_runtime_get(struct xe_device *xe);
> > +void xe_pm_runtime_get(struct xe_device *xe);
> > int xe_pm_runtime_get_ioctl(struct xe_device *xe);
> > -int xe_pm_runtime_put(struct xe_device *xe);
> > +void xe_pm_runtime_put(struct xe_device *xe);
> > int xe_pm_runtime_get_if_active(struct xe_device *xe);
> > void xe_pm_assert_unbounded_bridge(struct xe_device *xe);
> > int xe_pm_set_vram_threshold(struct xe_device *xe, u32 threshold);
2024-02-27 18:35 [PATCH] drm/xe: Convert xe_pm_runtime_{get, put} to void and protect from recursion Rodrigo Vivi
2024-02-27 18:40 ` ✓ CI.Patch_applied: success for " Patchwork
2024-02-27 18:40 ` ✓ CI.checkpatch: " Patchwork
2024-02-27 18:41 ` ✓ CI.KUnit: " Patchwork
2024-02-27 18:45 ` ✗ CI.Build: failure " Patchwork
2024-03-01 17:44 ` [PATCH] drm/xe: Convert xe_pm_runtime_{get,put} " Matthew Auld
2024-03-01 17:52 ` Matthew Auld
2024-03-01 18:06 ` Rodrigo Vivi [this message]