From: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
To: intel-xe@lists.freedesktop.org
Cc: rodrigo.vivi@intel.com
Subject: [Intel-xe] [PATCH 2/3] drm/xe: Use spinlock in forcewake instead of mutex
Date: Fri, 1 Sep 2023 12:36:47 +0530
Message-ID: <20230901070648.1100049-3-aravind.iddamsetty@linux.intel.com>
In-Reply-To: <20230901070648.1100049-1-aravind.iddamsetty@linux.intel.com>
The PMU needs to access certain registers that fall under the GT power
domain, which requires holding forcewake. However, the PMU callbacks run
in atomic context and therefore cannot make sleeping calls. Convert the
forcewake mutex to a spinlock and switch the domain ack waits to the
atomic variant of xe_mmio_wait32() so that forcewake can be taken from
atomic context.
Reviewed-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Aravind Iddamsetty <aravind.iddamsetty@linux.intel.com>
---
drivers/gpu/drm/xe/xe_force_wake.c | 14 +++++++-------
drivers/gpu/drm/xe/xe_force_wake_types.h | 2 +-
2 files changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_force_wake.c b/drivers/gpu/drm/xe/xe_force_wake.c
index ef7279e0b006..53d5b013a80c 100644
--- a/drivers/gpu/drm/xe/xe_force_wake.c
+++ b/drivers/gpu/drm/xe/xe_force_wake.c
@@ -42,7 +42,7 @@ void xe_force_wake_init_gt(struct xe_gt *gt, struct xe_force_wake *fw)
struct xe_device *xe = gt_to_xe(gt);
fw->gt = gt;
- mutex_init(&fw->lock);
+ spin_lock_init(&fw->lock);
/* Assuming gen11+ so assert this assumption is correct */
XE_WARN_ON(GRAPHICS_VER(gt_to_xe(gt)) < 11);
@@ -116,7 +116,7 @@ static int domain_wake_wait(struct xe_gt *gt,
{
return xe_mmio_wait32(gt, domain->reg_ack, domain->val, domain->val,
XE_FORCE_WAKE_ACK_TIMEOUT_MS * USEC_PER_MSEC,
- NULL, false);
+ NULL, true);
}
static void domain_sleep(struct xe_gt *gt, struct xe_force_wake_domain *domain)
@@ -129,7 +129,7 @@ static int domain_sleep_wait(struct xe_gt *gt,
{
return xe_mmio_wait32(gt, domain->reg_ack, domain->val, 0,
XE_FORCE_WAKE_ACK_TIMEOUT_MS * USEC_PER_MSEC,
- NULL, false);
+ NULL, true);
}
#define for_each_fw_domain_masked(domain__, mask__, fw__, tmp__) \
@@ -147,7 +147,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
enum xe_force_wake_domains tmp, woken = 0;
int ret, ret2 = 0;
- mutex_lock(&fw->lock);
+ spin_lock(&fw->lock);
for_each_fw_domain_masked(domain, domains, fw, tmp) {
if (!domain->ref++) {
woken |= BIT(domain->id);
@@ -162,7 +162,7 @@ int xe_force_wake_get(struct xe_force_wake *fw,
domain->id, ret);
}
fw->awake_domains |= woken;
- mutex_unlock(&fw->lock);
+ spin_unlock(&fw->lock);
return ret2;
}
@@ -176,7 +176,7 @@ int xe_force_wake_put(struct xe_force_wake *fw,
enum xe_force_wake_domains tmp, sleep = 0;
int ret, ret2 = 0;
- mutex_lock(&fw->lock);
+ spin_lock(&fw->lock);
for_each_fw_domain_masked(domain, domains, fw, tmp) {
if (!--domain->ref) {
sleep |= BIT(domain->id);
@@ -191,7 +191,7 @@ int xe_force_wake_put(struct xe_force_wake *fw,
domain->id, ret);
}
fw->awake_domains &= ~sleep;
- mutex_unlock(&fw->lock);
+ spin_unlock(&fw->lock);
return ret2;
}
diff --git a/drivers/gpu/drm/xe/xe_force_wake_types.h b/drivers/gpu/drm/xe/xe_force_wake_types.h
index cb782696855b..ed0edc2cdf9f 100644
--- a/drivers/gpu/drm/xe/xe_force_wake_types.h
+++ b/drivers/gpu/drm/xe/xe_force_wake_types.h
@@ -76,7 +76,7 @@ struct xe_force_wake {
/** @gt: back pointers to GT */
struct xe_gt *gt;
/** @lock: protects everything force wake struct */
- struct mutex lock;
+ spinlock_t lock;
/** @awake_domains: mask of all domains awake */
enum xe_force_wake_domains awake_domains;
/** @domains: force wake domains */
--
2.25.1