From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: <intel-xe@lists.freedesktop.org>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>,
Matthew Auld <matthew.auld@intel.com>
Subject: [PATCH 09/14] drm/xe: Convert hwmon from mem_access to xe_pm_runtime calls
Date: Thu, 15 Feb 2024 14:34:25 -0500 [thread overview]
Message-ID: <20240215193430.130106-9-rodrigo.vivi@intel.com> (raw)
In-Reply-To: <20240215193430.130106-1-rodrigo.vivi@intel.com>
Continue the work of removing mem_access in favor of pure runtime PM:
convert every hwmon entry point to take an xe_pm_runtime reference
directly around its hardware access.
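The mechanical shape of the conversion below can be sketched in isolation. This is a minimal stand-alone illustration, not the actual xe driver code: the counter and helper bodies here are simplified stand-ins for the real runtime-PM usage count, and hwmon_read_attr() is a hypothetical accessor following the same bracket pattern each hunk applies.

```c
#include <assert.h>

static int pm_refcount; /* stand-in for the device's runtime-PM usage count */

/* Stubs mirroring the calls the patch switches to. */
static void xe_pm_runtime_get(void) { pm_refcount++; } /* keeps device awake */
static void xe_pm_runtime_put(void) { pm_refcount--; } /* allows suspend again */

/* Pattern applied to every hwmon entry point: take a PM reference for
 * the duration of the register access, drop it afterwards. Previously
 * the same spots called xe_device_mem_access_get()/put(). */
static int hwmon_read_attr(void)
{
	int value;

	xe_pm_runtime_get();	/* was: xe_device_mem_access_get(xe) */
	value = 42;		/* placeholder for the locked MMIO read */
	xe_pm_runtime_put();	/* was: xe_device_mem_access_put(xe) */

	return value;
}
```

The get/put pair must stay balanced on every path out of the function, which is why the patch keeps each put at the same point the old mem_access put sat.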
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
---
drivers/gpu/drm/xe/xe_hwmon.c | 25 +++++++++++++------------
1 file changed, 13 insertions(+), 12 deletions(-)
diff --git a/drivers/gpu/drm/xe/xe_hwmon.c b/drivers/gpu/drm/xe/xe_hwmon.c
index b82233a41606..a256af8c2012 100644
--- a/drivers/gpu/drm/xe/xe_hwmon.c
+++ b/drivers/gpu/drm/xe/xe_hwmon.c
@@ -18,6 +18,7 @@
#include "xe_pcode.h"
#include "xe_pcode_api.h"
#include "xe_sriov.h"
+#include "xe_pm.h"
enum xe_hwmon_reg {
REG_PKG_RAPL_LIMIT,
@@ -266,7 +267,7 @@ xe_hwmon_power1_max_interval_show(struct device *dev, struct device_attribute *a
u32 x, y, x_w = 2; /* 2 bits */
u64 r, tau4, out;
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
mutex_lock(&hwmon->hwmon_lock);
@@ -275,7 +276,7 @@ xe_hwmon_power1_max_interval_show(struct device *dev, struct device_attribute *a
mutex_unlock(&hwmon->hwmon_lock);
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
x = REG_FIELD_GET(PKG_PWR_LIM_1_TIME_X, r);
y = REG_FIELD_GET(PKG_PWR_LIM_1_TIME_Y, r);
@@ -354,7 +355,7 @@ xe_hwmon_power1_max_interval_store(struct device *dev, struct device_attribute *
rxy = REG_FIELD_PREP(PKG_PWR_LIM_1_TIME_X, x) | REG_FIELD_PREP(PKG_PWR_LIM_1_TIME_Y, y);
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
mutex_lock(&hwmon->hwmon_lock);
@@ -363,7 +364,7 @@ xe_hwmon_power1_max_interval_store(struct device *dev, struct device_attribute *
mutex_unlock(&hwmon->hwmon_lock);
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
return count;
}
@@ -384,12 +385,12 @@ static umode_t xe_hwmon_attributes_visible(struct kobject *kobj,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret = 0;
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
if (attr == &sensor_dev_attr_power1_max_interval.dev_attr.attr)
ret = xe_hwmon_get_reg(hwmon, REG_PKG_RAPL_LIMIT) ? attr->mode : 0;
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
return ret;
}
@@ -610,7 +611,7 @@ xe_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
struct xe_hwmon *hwmon = (struct xe_hwmon *)drvdata;
int ret;
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
switch (type) {
case hwmon_power:
@@ -630,7 +631,7 @@ xe_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
break;
}
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
return ret;
}
@@ -642,7 +643,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
switch (type) {
case hwmon_power:
@@ -662,7 +663,7 @@ xe_hwmon_read(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
return ret;
}
@@ -674,7 +675,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
struct xe_hwmon *hwmon = dev_get_drvdata(dev);
int ret;
- xe_device_mem_access_get(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_get(gt_to_xe(hwmon->gt));
switch (type) {
case hwmon_power:
@@ -688,7 +689,7 @@ xe_hwmon_write(struct device *dev, enum hwmon_sensor_types type, u32 attr,
break;
}
- xe_device_mem_access_put(gt_to_xe(hwmon->gt));
+ xe_pm_runtime_put(gt_to_xe(hwmon->gt));
return ret;
}
--
2.43.0
Thread overview: 25+ messages
2024-02-15 19:34 [PATCH 01/14] drm/xe: Document Xe PM component Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 02/14] drm/xe: Convert mem_access assertion towards the runtime_pm state Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 03/14] drm/xe: Runtime PM wake on every IOCTL Rodrigo Vivi
2024-02-16 9:38 ` Francois Dugast
2024-02-15 19:34 ` [PATCH 04/14] drm/xe: Convert kunit tests from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 05/14] drm/xe: Runtime PM wake on every sysfs call Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 06/14] drm/xe: Remove mem_access from guc_pc calls Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 07/14] drm/xe: Runtime PM wake on every debugfs call Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 08/14] drm/xe: Replace dma_buf mem_access per direct xe_pm_runtime calls Rodrigo Vivi
2024-02-15 19:34 ` Rodrigo Vivi [this message]
2024-02-15 19:34 ` [PATCH 10/14] drm/xe: Remove useless mem_access protection for query ioctls Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 11/14] drm/xe: Convert gsc_work from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 12/14] drm/xe: Remove mem_access from suspend and resume functions Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 13/14] drm/xe: Convert gt_reset from mem_access to xe_pm_runtime Rodrigo Vivi
2024-02-15 19:34 ` [PATCH 14/14] drm/xe: Remove useless mem_access on PAT dumps Rodrigo Vivi
2024-02-15 20:04 ` ✓ CI.Patch_applied: success for series starting with [01/14] drm/xe: Document Xe PM component Patchwork
2024-02-15 20:05 ` ✓ CI.checkpatch: " Patchwork
2024-02-15 20:05 ` ✗ CI.KUnit: failure " Patchwork
2024-02-21 12:07 ` [PATCH 01/14] " Francois Dugast
2024-02-21 14:41 ` Gupta, Anshuman
2024-02-21 19:00 ` Rodrigo Vivi
2024-02-22 15:06 ` Gupta, Anshuman
2024-02-22 16:29 ` Vivi, Rodrigo
2024-02-21 19:05 ` ✗ CI.Patch_applied: failure for series starting with [01/14] drm/xe: Document Xe PM component (rev2) Patchwork
-- strict thread matches above, loose matches on Subject: below --
2024-02-22 16:39 [PATCH 01/14] drm/xe: Document Xe PM component Rodrigo Vivi
2024-02-22 16:39 ` [PATCH 09/14] drm/xe: Convert hwmon from mem_access to xe_pm_runtime calls Rodrigo Vivi