From: John.C.Harrison@Intel.com
To: Intel-GFX@Lists.FreeDesktop.Org
Cc: DRI-Devel@Lists.FreeDesktop.Org,
Vinay Belgaumkar <vinay.belgaumkar@intel.com>
Subject: [Intel-gfx] [PATCH 8/8] drm/i915: Get PM ref before accessing HW register
Date: Tue, 7 Sep 2021 18:42:59 -0700 [thread overview]
Message-ID: <20210908014259.50346-9-John.C.Harrison@Intel.com> (raw)
In-Reply-To: <20210908014259.50346-1-John.C.Harrison@Intel.com>
From: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
The following error is seen when the GT is likely in a suspended state:

"RPM wakelock ref not held during HW access"

Ensure the GT is awake before trying to access HW registers, and avoid
reading the register when it is not.
Signed-off-by: Vinay Belgaumkar <vinay.belgaumkar@intel.com>
---
drivers/gpu/drm/i915/gt/intel_rps.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/i915/gt/intel_rps.c b/drivers/gpu/drm/i915/gt/intel_rps.c
index 3489f5f0cac1..e1a198bbd135 100644
--- a/drivers/gpu/drm/i915/gt/intel_rps.c
+++ b/drivers/gpu/drm/i915/gt/intel_rps.c
@@ -1969,8 +1969,14 @@ u32 intel_rps_read_actual_frequency(struct intel_rps *rps)
 u32 intel_rps_read_punit_req(struct intel_rps *rps)
 {
 	struct intel_uncore *uncore = rps_to_uncore(rps);
+	struct intel_runtime_pm *rpm = rps_to_uncore(rps)->rpm;
+	intel_wakeref_t wakeref;
+	u32 freq = 0;
 
-	return intel_uncore_read(uncore, GEN6_RPNSWREQ);
+	with_intel_runtime_pm_if_in_use(rpm, wakeref)
+		freq = intel_uncore_read(uncore, GEN6_RPNSWREQ);
+
+	return freq;
 }
 
 static u32 intel_rps_get_req(u32 pureq)
--
2.25.1
Thread overview: 17+ messages
2021-09-08 1:42 [Intel-gfx] [PATCH 0/8] [CI] Enable GuC submission by default on DG1 John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 1/8] drm/i915: Do not define vma on stack John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 2/8] drm/i915/guc: put all guc objects in lmem when available John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 3/8] drm/i915/guc: Add DG1 GuC / HuC firmware defs John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 4/8] drm/i915/guc: Enable GuC submission by default on DG1 John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 5/8] Me: Allow relocs on DG1 for CI John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 6/8] Me: Workaround LMEM blow up John.C.Harrison
2021-09-08 1:42 ` [Intel-gfx] [PATCH 7/8] Me: Dump GuC log to dmesg on SLPC load failure John.C.Harrison
2021-09-08 1:42 ` John.C.Harrison [this message]
2021-09-08 1:53 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC submission by default on DG1 (rev3) Patchwork
2021-09-08 1:55 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-09-08 2:28 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-09-08 7:58 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
2021-09-09 16:23 ` [Intel-gfx] ✗ Fi.CI.CHECKPATCH: warning for Enable GuC submission by default on DG1 (rev4) Patchwork
2021-09-09 16:24 ` [Intel-gfx] ✗ Fi.CI.SPARSE: " Patchwork
2021-09-09 16:52 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2021-09-09 18:27 ` [Intel-gfx] ✗ Fi.CI.IGT: failure " Patchwork