public inbox for linux-kernel@vger.kernel.org
* [RFC PATCH 0/2] locking/ww_mutex, dma-buf/dma-resv: Improve detection of unheld locks
@ 2025-11-20 11:03 Thomas Hellström
  2025-11-20 11:03 ` [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers Thomas Hellström
  2025-11-20 11:03 ` [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks Thomas Hellström
  0 siblings, 2 replies; 5+ messages in thread
From: Thomas Hellström @ 2025-11-20 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Boqun Feng, Waiman Long, Sumit Semwal, Christian König, LKML,
	dri-devel, linaro-mm-sig

WW mutexes, and the dma-resv objects that embed them, typically have a
number of locks belonging to the same lock class. However,
code using them typically wants to verify locking at
object granularity, not lock-class granularity.

This series adds ww_mutex functions to facilitate that
(patch 1) and uses those functions in the dma-resv lock
checks (patch 2).

Thomas Hellström (2):
  kernel/locking/ww_mutex: Add per-lock lock-check helpers
  dma-buf/dma-resv: Improve the dma-resv lockdep checks

 include/linux/dma-resv.h |  7 +++++--
 include/linux/ww_mutex.h | 18 ++++++++++++++++++
 kernel/locking/mutex.c   | 10 ++++++++++
 3 files changed, 33 insertions(+), 2 deletions(-)

-- 
2.51.1


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers
  2025-11-20 11:03 [RFC PATCH 0/2] locking/ww_mutex, dma-buf/dma-resv: Improve detection of unheld locks Thomas Hellström
@ 2025-11-20 11:03 ` Thomas Hellström
  2025-11-20 11:38   ` Peter Zijlstra
  2025-11-20 11:03 ` [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks Thomas Hellström
  1 sibling, 1 reply; 5+ messages in thread
From: Thomas Hellström @ 2025-11-20 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Boqun Feng, Waiman Long, Sumit Semwal, Christian König, LKML,
	dri-devel, linaro-mm-sig

Code using ww_mutexes typically, by design, has a number of
such mutexes sharing the same ww_class, and within a ww transaction
they are all lockdep-annotated using a nest_lock, which means
that multiple ww_mutexes of the same lockdep class may be locked at
the same time. As a result, lock_is_held() returns true and
lockdep_assert_held() doesn't fire as long as a *single*
ww_mutex of the same class is held, IOW within a WW transaction.

Code using these mutexes typically wants to assert that an individual
ww_mutex is held, not that any ww_mutex of the same class is
held.

Introduce functions that can be used for that.

RFC: Placement of the functions? lockdep.c? Are the #ifdefs testing for
the correct config?

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/linux/ww_mutex.h | 18 ++++++++++++++++++
 kernel/locking/mutex.c   | 10 ++++++++++
 2 files changed, 28 insertions(+)

diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
index 45ff6f7a872b..7bc0f533dea6 100644
--- a/include/linux/ww_mutex.h
+++ b/include/linux/ww_mutex.h
@@ -380,4 +380,22 @@ static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
 	return ww_mutex_base_is_locked(&lock->base);
 }
 
+#ifdef CONFIG_PROVE_LOCKING
+
+bool ww_mutex_held(struct ww_mutex *lock);
+
+#else /* CONFIG_PROVE_LOCKING */
+
+static inline bool ww_mutex_held(struct ww_mutex *lock)
+{
+	return true;
+}
+
+#endif /* CONFIG_PROVE_LOCKING */
+
+static inline void ww_mutex_assert_held(struct ww_mutex *lock)
+{
+	lockdep_assert(ww_mutex_held(lock));
+}
+
 #endif
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index de7d6702cd96..37868b739efd 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -1174,3 +1174,13 @@ int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
 	return 1;
 }
 EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
+
+#ifdef CONFIG_PROVE_LOCKING
+
+bool ww_mutex_held(struct ww_mutex *lock)
+{
+	return __ww_mutex_owner(&lock->base) == current;
+}
+EXPORT_SYMBOL(ww_mutex_held);
+
+#endif /* CONFIG_PROVE_LOCKING */
-- 
2.51.1



* [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks
  2025-11-20 11:03 [RFC PATCH 0/2] locking/ww_mutex, dma-buf/dma-resv: Improve detection of unheld locks Thomas Hellström
  2025-11-20 11:03 ` [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers Thomas Hellström
@ 2025-11-20 11:03 ` Thomas Hellström
  2025-11-20 13:22   ` Christian König
  1 sibling, 1 reply; 5+ messages in thread
From: Thomas Hellström @ 2025-11-20 11:03 UTC (permalink / raw)
  To: intel-xe
  Cc: Thomas Hellström, Matthew Auld, Matthew Brost,
	Maarten Lankhorst, Peter Zijlstra, Ingo Molnar, Will Deacon,
	Boqun Feng, Waiman Long, Sumit Semwal, Christian König, LKML,
	dri-devel, linaro-mm-sig

Ensure that dma_resv_held() and dma_resv_assert_held() operate
on individual reservation objects within a WW transaction rather
than on the reservation WW class.
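As a usage sketch (hypothetical driver code in kernel style, not runnable standalone and not part of this series; my_obj and my_obj_add_fence are invented for illustration):

```c
/* Hypothetical driver helper, assuming the strengthened assert below. */
static void my_obj_add_fence(struct my_obj *obj, struct dma_fence *fence)
{
	/*
	 * With this patch, the assert fires unless obj->resv itself is
	 * locked by the current task, even when some other dma_resv of
	 * the same ww_class is held in the ongoing WW transaction.
	 */
	dma_resv_assert_held(&obj->resv);
	dma_resv_add_fence(&obj->resv, fence, DMA_RESV_USAGE_WRITE);
}
```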

Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
---
 include/linux/dma-resv.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index c5ab6fd9ebe8..001de3880fde 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -308,8 +308,11 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
 	     fence = dma_resv_iter_first(cursor); fence;	\
 	     fence = dma_resv_iter_next(cursor))
 
-#define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
-#define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
+#define dma_resv_held(obj) (lockdep_is_held(&(obj)->lock.base) && ww_mutex_held(&(obj)->lock))
+#define dma_resv_assert_held(obj) do {			\
+		lockdep_assert_held(&(obj)->lock.base); \
+		ww_mutex_assert_held(&(obj)->lock);	\
+	} while (0)
 
 #ifdef CONFIG_DEBUG_MUTEXES
 void dma_resv_reset_max_fences(struct dma_resv *obj);
-- 
2.51.1



* Re: [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers
  2025-11-20 11:03 ` [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers Thomas Hellström
@ 2025-11-20 11:38   ` Peter Zijlstra
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2025-11-20 11:38 UTC (permalink / raw)
  To: Thomas Hellström
  Cc: intel-xe, Matthew Auld, Matthew Brost, Maarten Lankhorst,
	Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long, Sumit Semwal,
	Christian König, LKML, dri-devel, linaro-mm-sig

On Thu, Nov 20, 2025 at 12:03:40PM +0100, Thomas Hellström wrote:
> Code using ww_mutexes typically, by design, has a number of
> such mutexes sharing the same ww_class, and within a ww transaction
> they are all lockdep-annotated using a nest_lock, which means
> that multiple ww_mutexes of the same lockdep class may be locked at
> the same time. As a result, lock_is_held() returns true and
> lockdep_assert_held() doesn't fire as long as a *single*
> ww_mutex of the same class is held, IOW within a WW transaction.
> 
> Code using these mutexes typically wants to assert that an individual
> ww_mutex is held, not that any ww_mutex of the same class is
> held.
> 
> Introduce functions that can be used for that.
> 
> RFC: Placement of the functions? lockdep.c? Are the #ifdefs testing for
> the correct config?

Yeah, I think so.

Ack on this.

> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
> ---
>  include/linux/ww_mutex.h | 18 ++++++++++++++++++
>  kernel/locking/mutex.c   | 10 ++++++++++
>  2 files changed, 28 insertions(+)
> 
> diff --git a/include/linux/ww_mutex.h b/include/linux/ww_mutex.h
> index 45ff6f7a872b..7bc0f533dea6 100644
> --- a/include/linux/ww_mutex.h
> +++ b/include/linux/ww_mutex.h
> @@ -380,4 +380,22 @@ static inline bool ww_mutex_is_locked(struct ww_mutex *lock)
>  	return ww_mutex_base_is_locked(&lock->base);
>  }
>  
> +#ifdef CONFIG_PROVE_LOCKING
> +
> +bool ww_mutex_held(struct ww_mutex *lock);
> +
> +#else /* CONFIG_PROVE_LOCKING */
> +
> +static inline bool ww_mutex_held(struct ww_mutex *lock)
> +{
> +	return true;
> +}
> +
> +#endif /* CONFIG_PROVE_LOCKING */
> +
> +static inline void ww_mutex_assert_held(struct ww_mutex *lock)
> +{
> +	lockdep_assert(ww_mutex_held(lock));
> +}
> +
>  #endif
> diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
> index de7d6702cd96..37868b739efd 100644
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -1174,3 +1174,13 @@ int atomic_dec_and_mutex_lock(atomic_t *cnt, struct mutex *lock)
>  	return 1;
>  }
>  EXPORT_SYMBOL(atomic_dec_and_mutex_lock);
> +
> +#ifdef CONFIG_PROVE_LOCKING
> +
> +bool ww_mutex_held(struct ww_mutex *lock)
> +{
> +	return __ww_mutex_owner(&lock->base) == current;
> +}
> +EXPORT_SYMBOL(ww_mutex_held);
> +
> +#endif /* CONFIG_PROVE_LOCKING */
> -- 
> 2.51.1
> 


* Re: [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks
  2025-11-20 11:03 ` [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks Thomas Hellström
@ 2025-11-20 13:22   ` Christian König
  0 siblings, 0 replies; 5+ messages in thread
From: Christian König @ 2025-11-20 13:22 UTC (permalink / raw)
  To: Thomas Hellström, intel-xe
  Cc: Matthew Auld, Matthew Brost, Maarten Lankhorst, Peter Zijlstra,
	Ingo Molnar, Will Deacon, Boqun Feng, Waiman Long, Sumit Semwal,
	LKML, dri-devel, linaro-mm-sig, Pelloux-Prayer, Pierre-Eric

On 11/20/25 12:03, Thomas Hellström wrote:
> Ensure that dma_resv_held() and dma_resv_assert_held() operate
> on individual reservation objects within a WW transaction rather
> than on the reservation WW class.
> 
> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>

I can't judge the lockdep backend changes, but this patch here makes a lot of sense.

Reviewed-by: Christian König <christian.koenig@amd.com>

That reminds me that Pierre-Eric stumbled over some odd lockdep behavior while working on TTM as well. @Pierre-Eric, what was that issue?

Regards,
Christian.

> ---
>  include/linux/dma-resv.h | 7 +++++--
>  1 file changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
> index c5ab6fd9ebe8..001de3880fde 100644
> --- a/include/linux/dma-resv.h
> +++ b/include/linux/dma-resv.h
> @@ -308,8 +308,11 @@ static inline bool dma_resv_iter_is_restarted(struct dma_resv_iter *cursor)
>  	     fence = dma_resv_iter_first(cursor); fence;	\
>  	     fence = dma_resv_iter_next(cursor))
>  
> -#define dma_resv_held(obj) lockdep_is_held(&(obj)->lock.base)
> -#define dma_resv_assert_held(obj) lockdep_assert_held(&(obj)->lock.base)
> +#define dma_resv_held(obj) (lockdep_is_held(&(obj)->lock.base) && ww_mutex_held(&(obj)->lock))
> +#define dma_resv_assert_held(obj) do {			\
> +		lockdep_assert_held(&(obj)->lock.base); \
> +		ww_mutex_assert_held(&(obj)->lock);	\
> +	} while (0)
>  
>  #ifdef CONFIG_DEBUG_MUTEXES
>  void dma_resv_reset_max_fences(struct dma_resv *obj);



end of thread, other threads:[~2025-11-20 13:22 UTC | newest]

Thread overview: 5+ messages
2025-11-20 11:03 [RFC PATCH 0/2] locking/ww_mutex, dma-buf/dma-resv: Improve detection of unheld locks Thomas Hellström
2025-11-20 11:03 ` [RFC PATCH 1/2] kernel/locking/ww_mutex: Add per-lock lock-check helpers Thomas Hellström
2025-11-20 11:38   ` Peter Zijlstra
2025-11-20 11:03 ` [RFC PATCH 2/2] dma-buf/dma-resv: Improve the dma-resv lockdep checks Thomas Hellström
2025-11-20 13:22   ` Christian König
