From: Vinod Govindapillai <vinod.govindapillai@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: vinod.govindapillai@intel.com, imre.deak@intel.com,
arun.r.murthy@intel.com, rodrigo.vivi@intel.com,
uma.shankar@intel.com, ville.syrjala@intel.com
Subject: [PATCH v3 3/3] drm/xe/display: handle HPD polling in display runtime suspend/resume
Date: Tue, 20 Aug 2024 20:14:08 +0300 [thread overview]
Message-ID: <20240820171408.192309-4-vinod.govindapillai@intel.com> (raw)
In-Reply-To: <20240820171408.192309-1-vinod.govindapillai@intel.com>
In XE, display runtime suspend / resume routines are called only
if d3cold is allowed. This makes the driver unable to detect any
HPDs once the device goes into runtime suspend state in platforms
like LNL. Update the display runtime suspend / resume routines
to include HPD polling regardless of d3cold status.
While xe_display_pm_suspend/resume() performs steps during runtime
suspend/resume that shouldn't happen, like suspending MST, and is
missing other steps, like enabling DC9, this patchset is meant to
keep the current behavior wrt. these, leaving the corresponding
updates for a follow-up.
v2: have a separate function for display runtime s/r (Rodrigo)
v3: better streamlining of system s/r and runtime s/r calls (Imre)
Signed-off-by: Vinod Govindapillai <vinod.govindapillai@intel.com>
---
drivers/gpu/drm/xe/display/xe_display.c | 23 +++++++++++++++++++++++
drivers/gpu/drm/xe/display/xe_display.h | 4 ++++
drivers/gpu/drm/xe/xe_pm.c | 8 +++++---
3 files changed, 32 insertions(+), 3 deletions(-)
diff --git a/drivers/gpu/drm/xe/display/xe_display.c b/drivers/gpu/drm/xe/display/xe_display.c
index b2a0b4b5c45c..3dc55edd390a 100644
--- a/drivers/gpu/drm/xe/display/xe_display.c
+++ b/drivers/gpu/drm/xe/display/xe_display.c
@@ -308,6 +308,18 @@ static void xe_display_flush_cleanup_work(struct xe_device *xe)
}
}
+/* TODO: System and runtime suspend/resume sequences will be sanitized as a follow-up. */
+void xe_display_pm_runtime_suspend(struct xe_device *xe)
+{
+ if (!xe->info.probe_display)
+ return;
+
+ if (xe->d3cold.allowed)
+ xe_display_pm_suspend(xe, true);
+
+ intel_hpd_poll_enable(xe);
+}
+
void xe_display_pm_suspend(struct xe_device *xe, bool runtime)
{
struct intel_display *display = &xe->display;
@@ -355,6 +367,17 @@ void xe_display_pm_suspend_late(struct xe_device *xe)
intel_display_power_suspend_late(xe);
}
+void xe_display_pm_runtime_resume(struct xe_device *xe)
+{
+ if (!xe->info.probe_display)
+ return;
+
+ intel_hpd_poll_disable(xe);
+
+ if (xe->d3cold.allowed)
+ xe_display_pm_resume(xe, true);
+}
+
void xe_display_pm_resume_early(struct xe_device *xe)
{
if (!xe->info.probe_display)
diff --git a/drivers/gpu/drm/xe/display/xe_display.h b/drivers/gpu/drm/xe/display/xe_display.h
index 000fb5799df5..53d727fd792b 100644
--- a/drivers/gpu/drm/xe/display/xe_display.h
+++ b/drivers/gpu/drm/xe/display/xe_display.h
@@ -38,6 +38,8 @@ void xe_display_pm_suspend(struct xe_device *xe, bool runtime);
void xe_display_pm_suspend_late(struct xe_device *xe);
void xe_display_pm_resume_early(struct xe_device *xe);
void xe_display_pm_resume(struct xe_device *xe, bool runtime);
+void xe_display_pm_runtime_suspend(struct xe_device *xe);
+void xe_display_pm_runtime_resume(struct xe_device *xe);
#else
@@ -67,6 +69,8 @@ static inline void xe_display_pm_suspend(struct xe_device *xe, bool runtime) {}
static inline void xe_display_pm_suspend_late(struct xe_device *xe) {}
static inline void xe_display_pm_resume_early(struct xe_device *xe) {}
static inline void xe_display_pm_resume(struct xe_device *xe, bool runtime) {}
+static inline void xe_display_pm_runtime_suspend(struct xe_device *xe) {}
+static inline void xe_display_pm_runtime_resume(struct xe_device *xe) {}
#endif /* CONFIG_DRM_XE_DISPLAY */
#endif /* _XE_DISPLAY_H_ */
diff --git a/drivers/gpu/drm/xe/xe_pm.c b/drivers/gpu/drm/xe/xe_pm.c
index fcfb49af8c89..c247e1cb8aba 100644
--- a/drivers/gpu/drm/xe/xe_pm.c
+++ b/drivers/gpu/drm/xe/xe_pm.c
@@ -366,9 +366,9 @@ int xe_pm_runtime_suspend(struct xe_device *xe)
xe_bo_runtime_pm_release_mmap_offset(bo);
mutex_unlock(&xe->mem_access.vram_userfault.lock);
- if (xe->d3cold.allowed) {
- xe_display_pm_suspend(xe, true);
+ xe_display_pm_runtime_suspend(xe);
+ if (xe->d3cold.allowed) {
err = xe_bo_evict_all(xe);
if (err)
goto out;
@@ -431,12 +431,14 @@ int xe_pm_runtime_resume(struct xe_device *xe)
for_each_gt(gt, xe, id)
xe_gt_resume(gt);
+ xe_display_pm_runtime_resume(xe);
+
if (xe->d3cold.allowed) {
- xe_display_pm_resume(xe, true);
err = xe_bo_restore_user(xe);
if (err)
goto out;
}
+
out:
lock_map_release(&xe_pm_runtime_lockdep_map);
xe_pm_write_callback_task(xe, NULL);
--
2.34.1
Thread overview: 21+ messages
2024-08-20 17:14 [PATCH v3 0/3] handle HPD polling on display pm runtime s/r Vinod Govindapillai
2024-08-20 17:14 ` [PATCH v3 1/3] drm/xe: Suspend/resume user access only during system s/r Vinod Govindapillai
2024-08-23 4:46 ` Murthy, Arun R
2024-08-20 17:14 ` [PATCH v3 2/3] drm/xe: Handle polling only for system s/r in xe_display_pm_suspend/resume() Vinod Govindapillai
2024-08-23 5:30 ` Murthy, Arun R
2024-08-23 7:07 ` Govindapillai, Vinod
2024-08-23 8:43 ` Murthy, Arun R
2024-08-23 9:08 ` Govindapillai, Vinod
2024-08-23 9:15 ` Govindapillai, Vinod
2024-08-23 9:34 ` Murthy, Arun R
2024-08-23 12:51 ` Imre Deak
2024-08-20 17:14 ` Vinod Govindapillai [this message]
2024-08-23 5:30 ` [PATCH v3 3/3] drm/xe/display: handle HPD polling in display runtime suspend/resume Murthy, Arun R
2024-08-20 18:25 ` ✓ CI.Patch_applied: success for handle HPD polling on display pm runtime s/r Patchwork
2024-08-20 18:25 ` ✓ CI.checkpatch: " Patchwork
2024-08-20 18:26 ` ✓ CI.KUnit: " Patchwork
2024-08-20 18:38 ` ✓ CI.Build: " Patchwork
2024-08-20 18:40 ` ✓ CI.Hooks: " Patchwork
2024-08-20 18:41 ` ✓ CI.checksparse: " Patchwork
2024-08-20 19:02 ` ✓ CI.BAT: " Patchwork
2024-08-21 0:16 ` ✗ CI.FULL: failure " Patchwork