From: Rodrigo Vivi <rodrigo.vivi@intel.com>
To: Ashutosh Dixit <ashutosh.dixit@intel.com>
Cc: intel-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org
Subject: Re: [Intel-gfx] [PATCH 3/3] drm/i915/hwmon: Block waiting for GuC reset to complete
Date: Tue, 18 Apr 2023 01:35:58 -0400 [thread overview]
Message-ID: <ZD4sPiMDhhr1wO8+@intel.com> (raw)
In-Reply-To: <20230410223509.3593109-4-ashutosh.dixit@intel.com>
On Mon, Apr 10, 2023 at 03:35:09PM -0700, Ashutosh Dixit wrote:
> Instead of erroring out when GuC reset is in progress, block waiting for
> GuC reset to complete which is a more reasonable uapi behavior.
>
> v2: Avoid race between wake_up_all and waiting for wakeup (Rodrigo)
>
> Signed-off-by: Ashutosh Dixit <ashutosh.dixit@intel.com>
> ---
> drivers/gpu/drm/i915/i915_hwmon.c | 38 +++++++++++++++++++++++++++----
> 1 file changed, 33 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/gpu/drm/i915/i915_hwmon.c b/drivers/gpu/drm/i915/i915_hwmon.c
> index 9ab8971679fe3..8471a667dfc71 100644
> --- a/drivers/gpu/drm/i915/i915_hwmon.c
> +++ b/drivers/gpu/drm/i915/i915_hwmon.c
> @@ -51,6 +51,7 @@ struct hwm_drvdata {
> char name[12];
> int gt_n;
> bool reset_in_progress;
> + wait_queue_head_t waitq;
> };
>
> struct i915_hwmon {
> @@ -395,16 +396,41 @@ hwm_power_max_read(struct hwm_drvdata *ddat, long *val)
> static int
> hwm_power_max_write(struct hwm_drvdata *ddat, long val)
> {
> +#define GUC_RESET_TIMEOUT msecs_to_jiffies(2000)
> +
> + int ret = 0, timeout = GUC_RESET_TIMEOUT;
> struct i915_hwmon *hwmon = ddat->hwmon;
> intel_wakeref_t wakeref;
> - int ret = 0;
> + DEFINE_WAIT(wait);
> u32 nval;
>
> - mutex_lock(&hwmon->hwmon_lock);
> - if (hwmon->ddat.reset_in_progress) {
> - ret = -EAGAIN;
> - goto unlock;
> + /* Block waiting for GuC reset to complete when needed */
> + for (;;) {
> + mutex_lock(&hwmon->hwmon_lock);
I'm really concerned about how this mutex is handled together with the
wait queue. At first glance it looks like it is trying to reimplement
ww_mutex? All the other wait_queue usages like this that I could find
either didn't take locks at all or had a completely different flow that
I could not correlate with this one.
> +
> + prepare_to_wait(&ddat->waitq, &wait, TASK_INTERRUPTIBLE);
> +
> + if (!hwmon->ddat.reset_in_progress)
> + break;
If we break here, don't we leave the mutex locked?
> +
> + if (signal_pending(current)) {
> + ret = -EINTR;
> + break;
> + }
> +
> + if (!timeout) {
> + ret = -ETIME;
> + break;
> + }
> +
> + mutex_unlock(&hwmon->hwmon_lock);
do we need to hold the lock for the signal_pending() and timeout checks
as well? Or would wrapping only the hwmon->ddat access be enough?
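To illustrate the question: the timeout countdown and the pending-signal
state are per-task, so only the shared flag access looks like it needs
the lock. A minimal userspace sketch of that narrower locking (all names
hypothetical, with usleep() standing in for schedule_timeout()):

```c
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static bool reset_in_progress;

/* Only the shared flag is read under the lock. */
static bool check_reset_flag(void)
{
	bool busy;

	pthread_mutex_lock(&lock);
	busy = reset_in_progress;
	pthread_mutex_unlock(&lock);
	return busy;
}

/* timeout_ms is local to the calling thread: no lock needed for it.
 * Returns 0 once the flag clears, -1 on timeout (-ETIME analogue). */
static int poll_for_reset(int timeout_ms)
{
	while (check_reset_flag()) {
		if (timeout_ms <= 0)
			return -1;
		usleep(10000);	/* stand-in for schedule_timeout() */
		timeout_ms -= 10;
	}
	return 0;
}
```

Note this sketch side-steps the lost-wakeup problem by polling; the
patch's prepare_to_wait() loop avoids polling, which is exactly why its
lock placement relative to the condition check matters.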
> +
> + timeout = schedule_timeout(timeout);
> }
> + finish_wait(&ddat->waitq, &wait);
> + if (ret)
> + goto unlock;
> +
> wakeref = intel_runtime_pm_get(ddat->uncore->rpm);
>
> /* Disable PL1 limit and verify, because the limit cannot be disabled on all platforms */
> @@ -508,6 +534,7 @@ void i915_hwmon_power_max_restore(struct drm_i915_private *i915, bool old)
> intel_uncore_rmw(hwmon->ddat.uncore, hwmon->rg.pkg_rapl_limit,
> PKG_PWR_LIM_1_EN, old ? PKG_PWR_LIM_1_EN : 0);
> hwmon->ddat.reset_in_progress = false;
> + wake_up_all(&hwmon->ddat.waitq);
>
> mutex_unlock(&hwmon->hwmon_lock);
> }
> @@ -784,6 +811,7 @@ void i915_hwmon_register(struct drm_i915_private *i915)
> ddat->uncore = &i915->uncore;
> snprintf(ddat->name, sizeof(ddat->name), "i915");
> ddat->gt_n = -1;
> + init_waitqueue_head(&ddat->waitq);
>
> for_each_gt(gt, i915, i) {
> ddat_gt = hwmon->ddat_gt + i;
> --
> 2.38.0
>
Thread overview: 21+ messages
2023-04-10 22:35 [Intel-gfx] [PATCH 0/3] drm/i915/guc: Disable PL1 power limit when loading GuC firmware Ashutosh Dixit
2023-04-10 22:35 ` [Intel-gfx] [PATCH 1/3] drm/i915/hwmon: Get mutex and rpm ref just once in hwm_power_max_write Ashutosh Dixit
2023-04-10 22:35 ` [Intel-gfx] [PATCH 2/3] drm/i915/guc: Disable PL1 power limit when loading GuC firmware Ashutosh Dixit
2023-04-10 22:35 ` [Intel-gfx] [PATCH 3/3] drm/i915/hwmon: Block waiting for GuC reset to complete Ashutosh Dixit
2023-04-18 5:35 ` Rodrigo Vivi [this message]
2023-04-18 17:23 ` Dixit, Ashutosh
2023-04-19 19:40 ` Rodrigo Vivi
2023-04-19 22:13 ` Dixit, Ashutosh
2023-04-20 15:45 ` Rodrigo Vivi
2023-04-19 13:21 ` Tvrtko Ursulin
2023-04-19 22:10 ` Dixit, Ashutosh
2023-04-20 7:57 ` Tvrtko Ursulin
2023-04-20 15:43 ` Rodrigo Vivi
2023-04-20 16:26 ` Dixit, Ashutosh
2023-04-10 23:04 ` [Intel-gfx] ✗ Fi.CI.SPARSE: warning for drm/i915/guc: Disable PL1 power limit when loading GuC firmware Patchwork
2023-04-10 23:14 ` [Intel-gfx] ✓ Fi.CI.BAT: success " Patchwork
2023-04-11 0:30 ` [Intel-gfx] ✓ Fi.CI.IGT: " Patchwork
-- strict thread matches above, loose matches on Subject: below --
2023-04-20 16:40 [Intel-gfx] [PATCH v6 0/3] " Ashutosh Dixit
2023-04-20 16:40 ` [Intel-gfx] [PATCH 3/3] drm/i915/hwmon: Block waiting for GuC reset to complete Ashutosh Dixit
2023-04-06 4:45 [Intel-gfx] [PATCH v4 0/3] drm/i915/guc: Disable PL1 power limit when loading GuC firmware Ashutosh Dixit
2023-04-06 4:45 ` [Intel-gfx] [PATCH 3/3] drm/i915/hwmon: Block waiting for GuC reset to complete Ashutosh Dixit
2023-04-07 11:04 ` Rodrigo Vivi
2023-04-10 22:40 ` Dixit, Ashutosh