From: Gustavo Sousa <gustavo.sousa@intel.com>
To: intel-gfx@lists.freedesktop.org, intel-xe@lists.freedesktop.org
Cc: Luca Coelho <luciano.coelho@intel.com>,
Rodrigo Vivi <rodrigo.vivi@intel.com>
Subject: [PATCH 03/13] drm/i915/dmc_wl: Check for non-zero refcount in release work
Date: Mon, 21 Oct 2024 19:27:22 -0300
Message-ID: <20241021222744.294371-4-gustavo.sousa@intel.com>
In-Reply-To: <20241021222744.294371-1-gustavo.sousa@intel.com>
When the DMC wakelock refcount reaches zero, we know that there are no
users and that we can do the actual release operation on the hardware,
which is queued with a delayed work. The idea of the delayed work is to
avoid performing the release if a new lock user appears (i.e. refcount
gets incremented) in a very short period of time.
Based on the above, the release work should bail out if refcount is
non-zero (meaning new lock users appeared in the meantime), but our
current code actually does the opposite: it bails when refcount is zero.
That means that the wakelock is not released when it should be; and
that, when the work is not canceled in time, it ends up being released
when it should not.
Fix that by inverting the condition.
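To illustrate the fixed behavior, here is a minimal user-space model of
the delayed-release idea (illustrative only: plain variables stand in
for the kernel's refcount_t, spinlock and delayed work, and the
wl_get/wl_put/wl_work names are hypothetical, not the driver's API):

```c
#include <stdbool.h>

static int refcount;
static bool release_pending;
static bool hw_lock_held;

static void wl_get(void)
{
	refcount++;
	release_pending = false;	/* a new user cancels the pending release */
	hw_lock_held = true;
}

static void wl_put(void)
{
	if (--refcount == 0)
		release_pending = true;	/* queue the delayed release work */
}

/* The release work runs some time after the last put. */
static void wl_work(void)
{
	if (!release_pending)
		return;

	/* Bail out if new lock users appeared while the work was queued. */
	if (refcount != 0)
		return;

	hw_lock_held = false;		/* actual hardware release */
}
```

With the corrected (non-inverted) check, the work releases the hardware
lock only when no user showed up in the meantime; a get that races with
the queued work keeps the lock held.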
Signed-off-by: Gustavo Sousa <gustavo.sousa@intel.com>
---
drivers/gpu/drm/i915/display/intel_dmc_wl.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/i915/display/intel_dmc_wl.c b/drivers/gpu/drm/i915/display/intel_dmc_wl.c
index 8056a3c8666c..c298aef89449 100644
--- a/drivers/gpu/drm/i915/display/intel_dmc_wl.c
+++ b/drivers/gpu/drm/i915/display/intel_dmc_wl.c
@@ -72,8 +72,11 @@ static void intel_dmc_wl_work(struct work_struct *work)
spin_lock_irqsave(&wl->lock, flags);
- /* Bail out if refcount reached zero while waiting for the spinlock */
- if (!refcount_read(&wl->refcount))
+ /*
+ * Bail out if refcount became non-zero while waiting for the spinlock,
+ * meaning that the lock is now taken again.
+ */
+ if (refcount_read(&wl->refcount))
goto out_unlock;
__intel_de_rmw_nowl(display, DMC_WAKELOCK1_CTL, DMC_WAKELOCK_CTL_REQ, 0);
--
2.47.0
Thread overview: 55+ messages
2024-10-21 22:27 [PATCH 00/13] drm/i915/dmc_wl: Fixes and enablement for Xe3_LPD Gustavo Sousa
2024-10-21 22:27 ` [PATCH 01/13] drm/xe: Mimic i915 behavior for non-sleeping MMIO wait Gustavo Sousa
2024-11-01 10:57 ` Luca Coelho
2024-11-05 12:17 ` Gustavo Sousa
2024-10-21 22:27 ` [PATCH 02/13] drm/i915/dmc_wl: Use non-sleeping variant of " Gustavo Sousa
2024-10-22 9:34 ` Jani Nikula
2024-10-22 10:55 ` Gustavo Sousa
2024-11-01 11:04 ` Luca Coelho
2024-11-01 11:18 ` Luca Coelho
2024-10-21 22:27 ` Gustavo Sousa [this message]
2024-11-01 11:48 ` [PATCH 03/13] drm/i915/dmc_wl: Check for non-zero refcount in release work Luca Coelho
2024-10-21 22:27 ` [PATCH 04/13] drm/i915/dmc_wl: Get wakelock when disabling dynamic DC states Gustavo Sousa
2024-11-01 12:24 ` Luca Coelho
2024-11-05 12:44 ` Gustavo Sousa
2024-11-06 11:37 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 05/13] drm/i915/dmc_wl: Use sentinel item for range tables Gustavo Sousa
2024-11-01 12:25 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 06/13] drm/i915/dmc_wl: Extract intel_dmc_wl_addr_in_range() Gustavo Sousa
2024-10-21 22:27 ` [PATCH 07/13] drm/i915/dmc_wl: Check ranges specific to DC states Gustavo Sousa
2024-10-22 8:03 ` Jani Nikula
2024-10-22 11:06 ` Gustavo Sousa
2024-11-05 19:54 ` Gustavo Sousa
2024-10-22 8:03 ` Jani Nikula
2024-10-22 11:10 ` Gustavo Sousa
2024-10-22 11:14 ` Gustavo Sousa
2024-11-01 12:51 ` Luca Coelho
2024-11-05 13:00 ` Gustavo Sousa
2024-11-06 11:47 ` Luca Coelho
2024-11-06 13:56 ` Gustavo Sousa
2024-10-21 22:27 ` [PATCH 08/13] drm/i915/dmc_wl: Allow simpler syntax for single reg in range tables Gustavo Sousa
2024-11-01 12:58 ` Luca Coelho
2024-11-05 13:42 ` Gustavo Sousa
2024-11-06 12:23 ` Luca Coelho
2024-11-06 12:29 ` Gustavo Sousa
2024-11-06 12:35 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 09/13] drm/i915/dmc_wl: Deal with existing references when disabling Gustavo Sousa
2024-11-01 14:17 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 10/13] drm/i915/dmc_wl: Couple enable/disable with dynamic DC states Gustavo Sousa
2024-11-01 14:19 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 11/13] drm/i915/dmc_wl: Add and use HAS_DMC_WAKELOCK() Gustavo Sousa
2024-10-22 9:37 ` Jani Nikula
2024-10-22 11:03 ` Gustavo Sousa
2024-11-05 13:56 ` Gustavo Sousa
2024-11-06 9:25 ` Jani Nikula
2024-11-06 13:24 ` Gustavo Sousa
2024-10-21 22:27 ` [PATCH 12/13] drm/i915/dmc_wl: Sanitize enable_dmc_wl according to hardware support Gustavo Sousa
2024-11-01 14:25 ` Luca Coelho
2024-10-21 22:27 ` [PATCH 13/13] drm/i915/xe3lpd: Use DMC wakelock by default Gustavo Sousa
2024-11-01 14:27 ` Luca Coelho
2024-11-05 13:46 ` Gustavo Sousa
2024-11-05 21:12 ` Gustavo Sousa
2024-11-06 12:27 ` Luca Coelho
2024-10-21 22:54 ` ✗ Fi.CI.CHECKPATCH: warning for drm/i915/dmc_wl: Fixes and enablement for Xe3_LPD Patchwork
2024-10-21 22:54 ` ✗ Fi.CI.SPARSE: " Patchwork
2024-10-21 23:44 ` ✗ Fi.CI.BAT: failure " Patchwork