Intel-XE Archive on lore.kernel.org
From: "Nilawar, Badal" <badal.nilawar@intel.com>
To: "Dixit, Ashutosh" <ashutosh.dixit@intel.com>
Cc: linux-hwmon@vger.kernel.org, rodrigo.vivi@intel.com,
	intel-xe@lists.freedesktop.org, linux@roeck-us.net
Subject: Re: [Intel-xe] [PATCH v6 1/5] drm/xe/hwmon: Expose power attributes
Date: Wed, 4 Oct 2023 12:13:06 +0530	[thread overview]
Message-ID: <fa566c3c-ea7b-967e-b7ee-868317431140@intel.com> (raw)
In-Reply-To: <87jzs3ryxh.wl-ashutosh.dixit@intel.com>

Hi Ashutosh,

On 04-10-2023 06:22, Dixit, Ashutosh wrote:
> On Fri, 29 Sep 2023 14:41:22 -0700, Dixit, Ashutosh wrote:
>>
> 
> Hi Badal,
> 
> Why did you merge the hwmon patches when there is still open discussion
> below on the patches? According to upstream rules (I'm not sure if you know
> about this) you should not merge patches, even if you have R-b's on the
> patches, till all review comments are resolved.
> 
> Generally you are expected to either address the comments, reply to the
> comments, or at least inform that you are merging while disregarding the
> comments. IMO you should have done at least one of these before merging.

I merged selectively. I haven't merged the 5th patch yet, as the locking is 
still under discussion. I am working on addressing the locking and thought 
I would address some of your comments along with it.

Thanks,
Badal
> 
> Cc: @Vivi, Rodrigo
> 
> Thanks.
> --
> Ashutosh
> 
> 
>> On Fri, 29 Sep 2023 09:48:36 -0700, Dixit, Ashutosh wrote:
>>> On Thu, 28 Sep 2023 23:37:35 -0700, Nilawar, Badal wrote:
>>>>
>>>> On 28-09-2023 10:25, Dixit, Ashutosh wrote:
>>>>> On Wed, 27 Sep 2023 01:39:46 -0700, Nilawar, Badal wrote:
>>>>>
>>>>>> On 27-09-2023 10:23, Dixit, Ashutosh wrote:
>>>>>>> On Mon, 25 Sep 2023 01:18:38 -0700, Badal Nilawar wrote:
>>>>>>>>
>>>>>>>> +static umode_t
>>>>>>>> +xe_hwmon_is_visible(const void *drvdata, enum hwmon_sensor_types type,
>>>>>>>> +		    u32 attr, int channel)
>>>>>>>> +{
>>>>>>>> +	struct xe_hwmon *hwmon = (struct xe_hwmon *)drvdata;
>>>>>>>> +	int ret;
>>>>>>>> +
>>>>>>>> +	xe_device_mem_access_get(gt_to_xe(hwmon->gt));
>>>>>>>
>>>>>>> Maybe we do xe_device_mem_access_get/put in xe_hwmon_process_reg where it
>>>>>>> is needed? E.g. xe_hwmon_is_visible doesn't need to do this because it
>>>>>>> doesn't read/write registers.
>>>>>> Agreed, but the visible function is called only once, while registering the
>>>>>> hwmon interface, which happens during driver probe. During probe the device
>>>>>> will be in the resumed state, so there is no harm in keeping
>>>>>> xe_device_mem_access_get/put in the visible function.
>>>>>
>>>>> To me it doesn't make any sense to keep xe_device_mem_access_get/put
>>>>> anywhere except in xe_hwmon_process_reg, where the HW access actually
>>>>> happens. That way we can eliminate the xe_device_mem_access_get/put calls
>>>>> scattered all over the place. Isn't it?
>>>> Agreed, the thought process here is to take the rpm wakeref at the lowest
>>>> possible level. I already tried this in the RFC series and to some extent
>>>> in rev2. There is a problem with this approach. See my comments below.
>>>>>
>>>>> The only restriction I have heard of (though not sure why) is that
>>>>> xe_device_mem_access_get/put should not be called under lock. Though I am
>>>>> not sure if that applies to spinlocks or also to mutexes. So, as we were
>>>>> saying, the locking will also need to move to xe_hwmon_process_reg.
>>>> Yes, per the rev2 comments it's dangerous to take the mutex before
>>>> xe_device_mem_access_get/put. With the code for "PL1 disable/restore during
>>>> resume" I saw a deadlock. The scenario was: power1_max write -> mutex lock ->
>>>> rpm resume -> disable PL1 -> mutex lock (deadlock here).
>>>
>>> But this is already the wrong order as mentioned below. If we follow the
>>> below order do we still see deadlock?
>>>
>>>>>
>>>>> So:
>>>>>
>>>>> xe_hwmon_process_reg()
>>>>> {
>>>>> 	xe_device_mem_access_get
>>>>> 	mutex_lock
>>>>> 	...
>>>>> 	mutex_unlock
>>>>> 	xe_device_mem_access_put
>>>>> }
>>>>>
>>>>> So once again if this is not possible for some reason let's figure out why.
>>>> There are two problems with this approach.
>>>>
>>>> Problem 1: If you look at the implementation of xe_hwmon_power_max_write,
>>>> the reg access happens 3 times, so there will be 3 rpm suspend/resume
>>>> cycles. I observed the same with the RFC implementation. So in subsequent
>>>> series xe_device_mem_access_get/put was moved to the top-level functions,
>>>> i.e. the hwmon hooks.
>>>
>>> This is not exactly correct because there is also a 1 second autosuspend
>>> delay which will prevent such rpm suspend/resume cycles:
>>>
>>> xe_pm_runtime_init:
>>> 	pm_runtime_set_autosuspend_delay(dev, 1000);
>>>
>>>
>>>>
>>>> Problem 2: If the locking is moved inside xe_hwmon_process_reg, then
>>>> between two subsequent reg accesses a small window opens during which a
>>>> race can happen.
>>>> As Anshuman suggested in the other thread, reads are sequential and
>>>> protected by the sysfs layer, so let's apply locking only to the RW
>>>> attributes.
>>>
>>> But what is the locking trying to protect? As far as I understand it is
>>> just the registers which have to be atomically modified/read. So it seems
>>> sufficient to just protect the register accesses with the lock.
>>>
>>> So I am still not convinced.
>>
>> Let's figure out the locking first depending on what needs to be protected
>> (just registers or other data too). And then we can see where to put the
>> xe_device_mem_access_get/put's (following the rule that
>> xe_device_mem_access_get/put's should not be called under lock).


Thread overview: 39+ messages
2023-09-25  8:18 [Intel-xe] [PATCH v6 0/5] Add HWMON support for DGFX Badal Nilawar
2023-09-25  8:18 ` [Intel-xe] [PATCH v6 1/5] drm/xe/hwmon: Expose power attributes Badal Nilawar
2023-09-25  8:58   ` Andi Shyti
2023-09-27  4:45   ` Dixit, Ashutosh
2023-09-27 10:28     ` Nilawar, Badal
2023-09-28  4:54       ` Dixit, Ashutosh
2023-09-27  4:53   ` Dixit, Ashutosh
2023-09-27  8:39     ` Nilawar, Badal
2023-09-28  4:55       ` Dixit, Ashutosh
2023-09-29  6:37         ` Nilawar, Badal
2023-09-29 16:48           ` Dixit, Ashutosh
2023-09-29 21:41             ` Dixit, Ashutosh
2023-10-04  0:52               ` Dixit, Ashutosh
2023-10-04  6:43                 ` Nilawar, Badal [this message]
2023-10-04 15:56                   ` Rodrigo Vivi
2023-10-04 16:11                     ` Rodrigo Vivi
2023-10-04 10:18               ` Nilawar, Badal
2023-09-28  4:55   ` Dixit, Ashutosh
2023-09-25  8:18 ` [Intel-xe] [PATCH v6 2/5] drm/xe/hwmon: Expose card reactive critical power Badal Nilawar
2023-09-25  9:03   ` Andi Shyti
2023-09-25  8:18 ` [Intel-xe] [PATCH v6 3/5] drm/xe/hwmon: Expose input voltage attribute Badal Nilawar
2023-09-25  9:04   ` Andi Shyti
2023-09-25  8:18 ` [Intel-xe] [PATCH v6 4/5] drm/xe/hwmon: Expose hwmon energy attribute Badal Nilawar
2023-09-25 11:49   ` Andi Shyti
2023-09-25  8:18 ` [Intel-xe] [PATCH v6 5/5] drm/xe/hwmon: Expose power1_max_interval Badal Nilawar
2023-09-25 11:56   ` Andi Shyti
     [not found]     ` <e5801f36-2f9a-6d24-7af2-1e7174f2e0b4@intel.com>
2023-09-26  8:01       ` Andi Shyti
2023-09-26  9:00         ` Nilawar, Badal
2023-09-26 21:01           ` Andi Shyti
2023-09-27  3:32             ` Dixit, Ashutosh
2023-09-27  9:04               ` Nilawar, Badal
2023-09-27  9:31                 ` Gupta, Anshuman
2023-09-25  8:20 ` [Intel-xe] ✓ CI.Patch_applied: success for Add HWMON support for DGFX (rev6) Patchwork
2023-09-25  8:20 ` [Intel-xe] ✗ CI.checkpatch: warning " Patchwork
2023-09-25  8:21 ` [Intel-xe] ✓ CI.KUnit: success " Patchwork
2023-09-25  8:28 ` [Intel-xe] ✓ CI.Build: " Patchwork
2023-09-25  8:28 ` [Intel-xe] ✗ CI.Hooks: failure " Patchwork
2023-09-25  8:30 ` [Intel-xe] ✓ CI.checksparse: success " Patchwork
2023-09-25  9:04 ` [Intel-xe] ✗ CI.BAT: failure " Patchwork
