From: "Nilawar, Badal" <badal.nilawar@intel.com>
To: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
Cc: linux-hwmon@vger.kernel.org, linux@roeck-us.net,
rodrigo.vivi@intel.com, intel-xe@lists.freedesktop.org
Subject: Re: [Intel-xe] [PATCH v6 1/5] drm/xe/hwmon: Expose power attributes
Date: Fri, 29 Sep 2023 12:07:35 +0530 [thread overview]
Message-ID: <c366920b-ec67-f5be-4b17-ae1be82bdae9@intel.com> (raw)
In-Reply-To: <87ttreucb0.wl-ashutosh.dixit@intel.com>
On 28-09-2023 10:25, Dixit, Ashutosh wrote:
> On Wed, 27 Sep 2023 01:39:46 -0700, Nilawar, Badal wrote:
>>
>
> Hi Badal,
>
>> On 27-09-2023 10:23, Dixit, Ashutosh wrote:
>>> On Mon, 25 Sep 2023 01:18:38 -0700, Badal Nilawar wrote:
>>>>
>>>> +static umode_t
>>>> +xe_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
>>>> + u32 attr, int channel)
>>>> +{
>>>> + struct xe_hwmon *hwmon = (struct xe_hwmon *)drvdata;
>>>> + int ret;
>>>> +
>>>> + xe_device_mem_access_get(gt_to_xe(hwmon->gt));
>>>
>>> Maybe we do xe_device_mem_access_get/put in xe_hwmon_process_reg where it
>>> is needed? E.g. xe_hwmon_is_visible doesn't need to do this because it
>>> doesn't read/write registers.
>> Agreed, but visible function is called only once while registering hwmon
>> interface, which happen during driver probe. During driver probe device
>> will be in resumed state. So no harm in keeping
>> xe_device_mem_access_get/put in visible function.
>
> To me it doesn't make any sense to keep xe_device_mem_access_get/put
> anywhere except in xe_hwmon_process_reg where the HW access actually
> happens. We can eliminate xe_device_mem_access_get/put's all over the place
> if we do it. Isn't it?
Agreed, the thought process here is to take the rpm wakeref at the lowest 
possible level. I already tried this in the RFC series, and to some extent 
in rev2, but there is a problem with this approach. See my comments below.
>
> The only restriction I have heard of (though not sure why) is that
> xe_device_mem_access_get/put should not be called under lock. Though I am
> not sure it is for spinlock or also mutex. So as we were saying the locking
> will also need to move to xe_hwmon_process_reg.
Yes, per the rev2 comments it is dangerous to take the mutex before 
xe_device_mem_access_get/put. With the code for "PL1 disable/restore 
during resume" I saw a deadlock. The scenario was: power1_max write -> 
mutex lock -> rpm resume -> disable PL1 -> mutex lock (deadlock here).
>
> So:
>
> xe_hwmon_process_reg()
> {
> xe_device_mem_access_get
> mutex_lock
> ...
> mutex_unlock
> xe_device_mem_access_put
> }
>
> So once again if this is not possible for some reason let's figure out why.
There are two problems with this approach.
Problem 1: Looking at the implementation of xe_hwmon_power_max_write, the 
register access happens 3 times, so there would be 3 rpm suspend/resume 
cycles. I observed exactly that with the RFC implementation, which is why 
in subsequent series xe_device_mem_access_get/put was moved to the 
top-level functions, i.e. the hwmon hooks.
Problem 2: If locking is moved inside xe_hwmon_process_reg, then between 
two subsequent register accesses there is a small window during which a 
race can happen.
As Anshuman suggested in the other thread, reads are sequential and 
protected by the sysfs layer, so let's apply locking only to the RW 
attributes.
+static int xe_hwmon_power_max_write(struct xe_hwmon *hwmon, long value)
+{
+	u32 reg_val;
+
+	/* Disable PL1 limit and verify, as limit cannot be disabled on all platforms */
+	if (value == PL1_DISABLE) {
+		xe_hwmon_process_reg(hwmon, REG_PKG_RAPL_LIMIT, REG_RMW, &reg_val,
+				     PKG_PWR_LIM_1_EN, 0);
+		xe_hwmon_process_reg(hwmon, REG_PKG_RAPL_LIMIT, REG_READ, &reg_val,
+				     PKG_PWR_LIM_1_EN, 0);
+
+		if (reg_val & PKG_PWR_LIM_1_EN)
+			return -EOPNOTSUPP;
+	}
+
+	/* Computation in 64-bits to avoid overflow. Round to nearest. */
+	reg_val = DIV_ROUND_CLOSEST_ULL((u64)value << hwmon->scl_shift_power, SF_POWER);
+	reg_val = PKG_PWR_LIM_1_EN | REG_FIELD_PREP(PKG_PWR_LIM_1, reg_val);
+
+	xe_hwmon_process_reg(hwmon, REG_PKG_RAPL_LIMIT, REG_RMW, &reg_val,
+			     PKG_PWR_LIM_1_EN | PKG_PWR_LIM_1, reg_val);
+
+	return 0;
+}
Regards,
Badal
>
>>>
>>> Also do we need to take forcewake? i915 had forcewake table so it would
>>> take forcewake automatically but XE doesn't do that.
>> Hwmon regs doesn't fall under GT domain so doesn't need forcewake.
>
> OK, great.
>
> Thanks.
> --
> Ashutosh