From: Matthew Auld <matthew.auld@intel.com>
To: intel-xe@lists.freedesktop.org
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Subject: [Intel-xe] [PATCH v14 10/10] drm/xe: add lockdep annotation for xe_device_mem_access_get()
Date: Mon, 17 Jul 2023 12:25:13 +0100
Message-ID: <20230717112502.32379-22-matthew.auld@intel.com>
In-Reply-To: <20230717112502.32379-12-matthew.auld@intel.com>
The atomics here might hide potential issues. Also, the rpm core is not
holding any lock when calling our rpm resume callback, so add a dummy
lock with the idea that xe_pm_runtime_resume() is eventually going to be
called while we are holding it. This only needs to happen once, and from
then on lockdep can validate all callers and their locks.
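
For reference, the priming pattern looks roughly like the sketch below.
This is an illustration only, not code from this patch; dummy_map,
enter_path() and maybe_call_resume_callback() are invented names:

	#include <linux/lockdep.h>

	#ifdef CONFIG_LOCKDEP
	static struct lockdep_map dummy_map = {
		.name = "dummy_map"
	};
	#endif

	void maybe_call_resume_callback(void); /* hypothetical */

	static void enter_path(void)
	{
		/*
		 * Teach lockdep that anything acquired inside the resume
		 * callback nests under dummy_map, even on runs where the
		 * callback never actually fires from this path.
		 */
		lock_map_acquire(&dummy_map);
		maybe_call_resume_callback();
		lock_map_release(&dummy_map);
	}

If a caller holds some lock when entering enter_path(), lockdep records
caller_lock -> dummy_map; if the callback ever takes that same lock, it
records dummy_map -> caller_lock, closing a cycle and producing a splat
without the deadlock ever needing to fire at runtime.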
v2: (Thomas Hellström)
- Prefer a static lockdep_map instead of a full-blown mutex.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Acked-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
---
drivers/gpu/drm/xe/xe_device.c | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/drivers/gpu/drm/xe/xe_device.c b/drivers/gpu/drm/xe/xe_device.c
index ba2b83925ded..1c57944014e0 100644
--- a/drivers/gpu/drm/xe/xe_device.c
+++ b/drivers/gpu/drm/xe/xe_device.c
@@ -35,6 +35,12 @@
#include "xe_vm_madvise.h"
#include "xe_wait_user_fence.h"
+#ifdef CONFIG_LOCKDEP
+static struct lockdep_map xe_device_mem_access_lockdep_map = {
+ .name = "xe_device_mem_access_lockdep_map"
+};
+#endif
+
static int xe_file_open(struct drm_device *dev, struct drm_file *file)
{
struct xe_file *xef;
@@ -458,10 +464,28 @@ void xe_device_mem_access_get(struct xe_device *xe)
 	if (xe_pm_read_callback_task(xe) == current)
 		return;
 
+	/*
+	 * Since the resume here is synchronous it can be quite easy to deadlock
+	 * if we are not careful. Also in practice it might be quite timing
+	 * sensitive to ever see the 0 -> 1 transition with the callers locks
+	 * held, so deadlocks might exist but are hard for lockdep to ever see.
+	 * With this in mind, help lockdep learn about the potentially scary
+	 * stuff that can happen inside the runtime_resume callback by acquiring
+	 * a dummy lock (it doesn't protect anything and gets compiled out on
+	 * non-debug builds). Lockdep then only needs to see the
+	 * mem_access_lockdep_map -> runtime_resume callback once, and then can
+	 * hopefully validate all the (callers_locks) -> mem_access_lockdep_map.
+	 * For example if the (callers_locks) are ever grabbed in the
+	 * runtime_resume callback, lockdep should give us a nice splat.
+	 */
+	lock_map_acquire(&xe_device_mem_access_lockdep_map);
+
 	xe_pm_runtime_get(xe);
 	ref = atomic_inc_return(&xe->mem_access.ref);
 
 	XE_WARN_ON(ref == S32_MAX);
+
+	lock_map_release(&xe_device_mem_access_lockdep_map);
 }
 
 void xe_device_mem_access_put(struct xe_device *xe)
--
2.41.0
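
To make the in-code comment concrete: a hypothetical caller/callback
pair like the one below is what the dummy map lets lockdep flag
(sketch only; caller_lock, some_ioctl() and the resume body here are
invented for illustration):

	static DEFINE_MUTEX(caller_lock);	/* hypothetical caller-side lock */

	static void some_ioctl(struct xe_device *xe)
	{
		mutex_lock(&caller_lock);
		/* lockdep records: caller_lock -> xe_device_mem_access_lockdep_map */
		xe_device_mem_access_get(xe);
		xe_device_mem_access_put(xe);
		mutex_unlock(&caller_lock);
	}

	/* ...and if the runtime_resume callback were ever to do this: */
	int xe_pm_runtime_resume(struct xe_device *xe)
	{
		/* lockdep records: xe_device_mem_access_lockdep_map -> caller_lock */
		mutex_lock(&caller_lock);
		mutex_unlock(&caller_lock);
		return 0;
	}

Lockdep then sees the cycle caller_lock -> lockdep_map -> caller_lock on
the very first run and splats, regardless of whether the timing-sensitive
0 -> 1 rpm transition ever happens with caller_lock held.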