* [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
@ 2026-01-21 11:07 ` Peter Zijlstra
2026-01-21 13:09 ` Marco Elver
` (2 more replies)
2026-01-21 11:07 ` [RFC][PATCH 2/4] locking/mutex: Add context analysis Peter Zijlstra
` (4 subsequent siblings)
5 siblings, 3 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 11:07 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, peterz, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, bvanassche, llvm
Useful for things like unlock fastpaths, which on success release the
lock.
Suggested-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/compiler-context-analysis.h | 32 ++++++++++++++++++++++++++++++
1 file changed, 32 insertions(+)
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -320,6 +320,38 @@ static inline void _context_unsafe_alias
*/
#define __releases(...) __releases_ctx_lock(__VA_ARGS__)
+/*
+ * Clang's analysis does not care precisely about the value, only that it is
+ * either zero or non-zero. So the __cond_acquires() interface might be
+ * misleading if we say that @ret is the value returned if acquired. Instead,
+ * provide symbolic variants which we translate.
+ */
+#define __cond_acquires_impl_not_true(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_false(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_not_nonzero(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_0(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_not_nonnull(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_NULL(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+
+/**
+ * __cond_releases() - function attribute, function conditionally
+ * releases a context lock exclusively
+ * @ret: abstract value returned by function if the context lock is released
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function conditionally releases the
+ * given context lock instance @x exclusively. The associated context(s) must
+ * be active on entry. The function return value @ret denotes when the context
+ * lock is released.
+ *
+ * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
+ *
+ * NOTE: clang does not have a native attribute for this; instead implement
+ * it as an unconditional release and a conditional acquire for the
+ * inverted condition -- which is semantically equivalent.
+ */
+#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
+
/**
* __acquire() - function to acquire context lock exclusively
* @x: context lock instance pointer
* Re: [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 11:07 ` [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases() Peter Zijlstra
@ 2026-01-21 13:09 ` Marco Elver
2026-01-21 17:55 ` Bart Van Assche
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2 siblings, 0 replies; 30+ messages in thread
From: Marco Elver @ 2026-01-21 13:09 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, bvanassche, llvm
On Wed, 21 Jan 2026 at 12:13, Peter Zijlstra <peterz@infradead.org> wrote:
>
> Useful for things like unlock fastpaths, which on success release the
> lock.
>
> Suggested-by: Marco Elver <elver@google.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Marco Elver <elver@google.com>
> ---
> include/linux/compiler-context-analysis.h | 32 ++++++++++++++++++++++++++++++
> 1 file changed, 32 insertions(+)
>
> --- a/include/linux/compiler-context-analysis.h
> +++ b/include/linux/compiler-context-analysis.h
> @@ -320,6 +320,38 @@ static inline void _context_unsafe_alias
> */
> #define __releases(...) __releases_ctx_lock(__VA_ARGS__)
>
> +/*
> + * Clang's analysis does not care precisely about the value, only that it is
> + * either zero or non-zero. So the __cond_acquires() interface might be
> + * misleading if we say that @ret is the value returned if acquired. Instead,
> + * provide symbolic variants which we translate.
> + */
> +#define __cond_acquires_impl_not_true(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
> +#define __cond_acquires_impl_not_false(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
> +#define __cond_acquires_impl_not_nonzero(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
> +#define __cond_acquires_impl_not_0(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
> +#define __cond_acquires_impl_not_nonnull(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
> +#define __cond_acquires_impl_not_NULL(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
> +
> +/**
> + * __cond_releases() - function attribute, function conditionally
> + * releases a context lock exclusively
> + * @ret: abstract value returned by function if the context lock is released
> + * @x: context lock instance pointer
> + *
> + * Function attribute declaring that the function conditionally releases the
> + * given context lock instance @x exclusively. The associated context(s) must
> + * be active on entry. The function return value @ret denotes when the context
> + * lock is released.
> + *
> + * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
> + *
> + * NOTE: clang does not have a native attribute for this; instead implement
> + * it as an unconditional release and a conditional acquire for the
> + * inverted condition -- which is semantically equivalent.
> + */
> +#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
> +
> /**
> * __acquire() - function to acquire context lock exclusively
> * @x: context lock instance pointer
>
>
* Re: [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 11:07 ` [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases() Peter Zijlstra
2026-01-21 13:09 ` Marco Elver
@ 2026-01-21 17:55 ` Bart Van Assche
2026-01-21 18:35 ` Marco Elver
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2026-01-21 17:55 UTC (permalink / raw)
To: Peter Zijlstra, elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> +#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
Since we have __cond_acquires() and __cond_acquires_shared(), how about
also adding a __cond_releases_shared() macro?
Thanks,
Bart.
* Re: [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 17:55 ` Bart Van Assche
@ 2026-01-21 18:35 ` Marco Elver
2026-01-21 19:02 ` Peter Zijlstra
0 siblings, 1 reply; 30+ messages in thread
From: Marco Elver @ 2026-01-21 18:35 UTC (permalink / raw)
To: Bart Van Assche
Cc: Peter Zijlstra, linux-kernel, bigeasy, mingo, tglx, will,
boqun.feng, longman, hch, rostedt, llvm
On Wed, 21 Jan 2026 at 18:55, Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> > +#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
>
> Since we have __cond_acquires() and __cond_acquires_shared(), how about
> also adding a __cond_releases_shared() macro?
Since I'm a fan of less code, another option is to introduce it when
needed. It's unlikely to be needed, and until then I'd prefer fewer
lines of code to be included and maintained. Up to you.
* Re: [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 18:35 ` Marco Elver
@ 2026-01-21 19:02 ` Peter Zijlstra
2026-01-21 21:02 ` Bart Van Assche
0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 19:02 UTC (permalink / raw)
To: Marco Elver
Cc: Bart Van Assche, linux-kernel, bigeasy, mingo, tglx, will,
boqun.feng, longman, hch, rostedt, llvm
On Wed, Jan 21, 2026 at 07:35:14PM +0100, Marco Elver wrote:
> On Wed, 21 Jan 2026 at 18:55, Bart Van Assche <bvanassche@acm.org> wrote:
> >
> > On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> > > +#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
> >
> > Since we have __cond_acquires() and __cond_acquires_shared(), how about
> > also adding a __cond_releases_shared() macro?
>
> Since I'm a fan of less code, another option is to introduce it when
> needed. It's unlikely to be needed, and until then I'd prefer fewer
> lines of code to be included and maintained. Up to you.
Right, this was an on-demand thing. Let's add it when it becomes needed.
* Re: [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases()
2026-01-21 19:02 ` Peter Zijlstra
@ 2026-01-21 21:02 ` Bart Van Assche
0 siblings, 0 replies; 30+ messages in thread
From: Bart Van Assche @ 2026-01-21 21:02 UTC (permalink / raw)
To: Peter Zijlstra, Marco Elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/21/26 11:02 AM, Peter Zijlstra wrote:
> Right, this was an on-demand thing. Let's add it when it becomes needed.
Please take a look at vma_start_read() in mm/mmap_lock.c. I think that
it needs a __cond_releases_shared() annotation. From the documentation
block above that function:
* IMPORTANT: RCU lock must be held upon entering the function, but upon error
* IT IS RELEASED. The caller must handle this correctly.
Thanks,
Bart.
* [tip: locking/core] compiler-context-analysys: Add __cond_releases()
2026-01-21 11:07 ` [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases() Peter Zijlstra
2026-01-21 13:09 ` Marco Elver
2026-01-21 17:55 ` Bart Van Assche
@ 2026-03-09 19:48 ` tip-bot2 for Peter Zijlstra
2 siblings, 0 replies; 30+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2026-03-09 19:48 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Marco Elver, Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 07574b8ebaac7927e2355b4f343b03b50e04494c
Gitweb: https://git.kernel.org/tip/07574b8ebaac7927e2355b4f343b03b50e04494c
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 20 Jan 2026 13:40:30 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Sun, 08 Mar 2026 11:06:52 +01:00
compiler-context-analysys: Add __cond_releases()
Useful for things like unlock fastpaths, which on success release the
lock.
Suggested-by: Marco Elver <elver@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Marco Elver <elver@google.com>
Link: https://patch.msgid.link/20260121111213.634625032@infradead.org
---
include/linux/compiler-context-analysis.h | 32 ++++++++++++++++++++++-
1 file changed, 32 insertions(+)
diff --git a/include/linux/compiler-context-analysis.h b/include/linux/compiler-context-analysis.h
index 00c074a..a931757 100644
--- a/include/linux/compiler-context-analysis.h
+++ b/include/linux/compiler-context-analysis.h
@@ -320,6 +320,38 @@ static inline void _context_unsafe_alias(void **p) { }
*/
#define __releases(...) __releases_ctx_lock(__VA_ARGS__)
+/*
+ * Clang's analysis does not care precisely about the value, only that it is
+ * either zero or non-zero. So the __cond_acquires() interface might be
+ * misleading if we say that @ret is the value returned if acquired. Instead,
+ * provide symbolic variants which we translate.
+ */
+#define __cond_acquires_impl_not_true(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_false(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_not_nonzero(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_0(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+#define __cond_acquires_impl_not_nonnull(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(0, x)
+#define __cond_acquires_impl_not_NULL(x, ...) __try_acquires##__VA_ARGS__##_ctx_lock(1, x)
+
+/**
+ * __cond_releases() - function attribute, function conditionally
+ * releases a context lock exclusively
+ * @ret: abstract value returned by function if the context lock is released
+ * @x: context lock instance pointer
+ *
+ * Function attribute declaring that the function conditionally releases the
+ * given context lock instance @x exclusively. The associated context(s) must
+ * be active on entry. The function return value @ret denotes when the context
+ * lock is released.
+ *
+ * @ret may be one of: true, false, nonzero, 0, nonnull, NULL.
+ *
+ * NOTE: clang does not have a native attribute for this; instead implement
+ * it as an unconditional release and a conditional acquire for the
+ * inverted condition -- which is semantically equivalent.
+ */
+#define __cond_releases(ret, x) __releases(x) __cond_acquires_impl_not_##ret(x)
+
/**
* __acquire() - function to acquire context lock exclusively
* @x: context lock instance pointer
* [RFC][PATCH 2/4] locking/mutex: Add context analysis
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
2026-01-21 11:07 ` [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases() Peter Zijlstra
@ 2026-01-21 11:07 ` Peter Zijlstra
2026-01-21 17:11 ` Bart Van Assche
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2026-01-21 11:07 ` [RFC][PATCH 3/4] locking/rtmutex: " Peter Zijlstra
` (3 subsequent siblings)
5 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 11:07 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, peterz, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, bvanassche, llvm
Add compiler context analysis annotations.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/mutex_types.h | 2 +-
kernel/locking/Makefile | 2 ++
kernel/locking/mutex.c | 42 +++++++++++++++++++++++++++++++++++++-----
kernel/locking/mutex.h | 1 +
kernel/locking/ww_mutex.h | 12 ++++++++++++
5 files changed, 53 insertions(+), 6 deletions(-)
--- a/include/linux/mutex_types.h
+++ b/include/linux/mutex_types.h
@@ -44,7 +44,7 @@ context_lock_struct(mutex) {
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
struct optimistic_spin_queue osq; /* Spinner MCS lock */
#endif
- struct list_head wait_list;
+ struct list_head wait_list __guarded_by(&wait_lock);
#ifdef CONFIG_DEBUG_MUTEXES
void *magic;
#endif
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -3,6 +3,8 @@
# and is generally not a function of system call inputs.
KCOV_INSTRUMENT := n
+CONTEXT_ANALYSIS_mutex.o := y
+
obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
# Avoid recursion lockdep -> sanitizer -> ... -> lockdep & improve performance.
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -46,8 +46,9 @@
static void __mutex_init_generic(struct mutex *lock)
{
atomic_long_set(&lock->owner, 0);
- raw_spin_lock_init(&lock->wait_lock);
- INIT_LIST_HEAD(&lock->wait_list);
+ scoped_guard (raw_spinlock_init, &lock->wait_lock) {
+ INIT_LIST_HEAD(&lock->wait_list);
+ }
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
osq_lock_init(&lock->osq);
#endif
@@ -150,6 +151,7 @@ EXPORT_SYMBOL(mutex_init_generic);
* follow with a __mutex_trylock() before failing.
*/
static __always_inline bool __mutex_trylock_fast(struct mutex *lock)
+ __cond_acquires(true, lock)
{
unsigned long curr = (unsigned long)current;
unsigned long zero = 0UL;
@@ -163,6 +165,7 @@ static __always_inline bool __mutex_tryl
}
static __always_inline bool __mutex_unlock_fast(struct mutex *lock)
+ __cond_releases(true, lock)
{
unsigned long curr = (unsigned long)current;
@@ -195,6 +198,7 @@ static inline void __mutex_clear_flag(st
}
static inline bool __mutex_waiter_is_first(struct mutex *lock, struct mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
return list_first_entry(&lock->wait_list, struct mutex_waiter, list) == waiter;
}
@@ -206,6 +210,7 @@ static inline bool __mutex_waiter_is_fir
static void
__mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct list_head *list)
+ __must_hold(&lock->wait_lock)
{
hung_task_set_blocker(lock, BLOCKER_TYPE_MUTEX);
debug_mutex_add_waiter(lock, waiter, current);
@@ -217,6 +222,7 @@ __mutex_add_waiter(struct mutex *lock, s
static void
__mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
list_del(&waiter->list);
if (likely(list_empty(&lock->wait_list)))
@@ -259,7 +265,8 @@ static void __mutex_handoff(struct mutex
* We also put the fastpath first in the kernel image, to make sure the
* branch is predicted by the CPU as default-untaken.
*/
-static void __sched __mutex_lock_slowpath(struct mutex *lock);
+static void __sched __mutex_lock_slowpath(struct mutex *lock)
+ __acquires(lock);
/**
* mutex_lock - acquire the mutex
@@ -340,7 +347,7 @@ bool ww_mutex_spin_on_owner(struct mutex
* Similarly, stop spinning if we are no longer the
* first waiter.
*/
- if (waiter && !__mutex_waiter_is_first(lock, waiter))
+ if (waiter && !data_race(__mutex_waiter_is_first(lock, waiter)))
return false;
return true;
@@ -525,7 +532,8 @@ mutex_optimistic_spin(struct mutex *lock
}
#endif
-static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip);
+static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
+ __releases(lock);
/**
* mutex_unlock - release the mutex
@@ -544,6 +552,7 @@ static noinline void __sched __mutex_unl
* This function is similar to (but not equivalent to) up().
*/
void __sched mutex_unlock(struct mutex *lock)
+ __releases(lock)
{
#ifndef CONFIG_DEBUG_LOCK_ALLOC
if (__mutex_unlock_fast(lock))
@@ -565,6 +574,8 @@ EXPORT_SYMBOL(mutex_unlock);
* of a unlocked mutex is not allowed.
*/
void __sched ww_mutex_unlock(struct ww_mutex *lock)
+ __releases(lock)
+ __no_context_analysis
{
__ww_mutex_unlock(lock);
mutex_unlock(&lock->base);
@@ -578,6 +589,7 @@ static __always_inline int __sched
__mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+ __cond_acquires(0, lock)
{
DEFINE_WAKE_Q(wake_q);
struct mutex_waiter waiter;
@@ -772,6 +784,7 @@ __mutex_lock_common(struct mutex *lock,
static int __sched
__mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip)
+ __cond_acquires(0, lock)
{
return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false);
}
@@ -779,6 +792,7 @@ __mutex_lock(struct mutex *lock, unsigne
static int __sched
__ww_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
unsigned long ip, struct ww_acquire_ctx *ww_ctx)
+ __cond_acquires(0, lock)
{
return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true);
}
@@ -824,22 +838,27 @@ EXPORT_SYMBOL(ww_mutex_trylock);
#ifdef CONFIG_DEBUG_LOCK_ALLOC
void __sched
mutex_lock_nested(struct mutex *lock, unsigned int subclass)
+ __acquires(lock)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
+ __acquire(lock);
}
EXPORT_SYMBOL_GPL(mutex_lock_nested);
void __sched
_mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
+ __acquires(lock)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
+ __acquire(lock);
}
EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
int __sched
_mutex_lock_killable(struct mutex *lock, unsigned int subclass,
struct lockdep_map *nest)
+ __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_KILLABLE, subclass, nest, _RET_IP_);
}
@@ -854,6 +873,7 @@ EXPORT_SYMBOL_GPL(mutex_lock_interruptib
void __sched
mutex_lock_io_nested(struct mutex *lock, unsigned int subclass)
+ __acquires(lock)
{
int token;
@@ -862,12 +882,14 @@ mutex_lock_io_nested(struct mutex *lock,
token = io_schedule_prepare();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
subclass, NULL, _RET_IP_, NULL, 0);
+ __acquire(lock);
io_schedule_finish(token);
}
EXPORT_SYMBOL_GPL(mutex_lock_io_nested);
static inline int
ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_releases(nonzero, lock)
{
#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
unsigned tmp;
@@ -894,6 +916,7 @@ ww_mutex_deadlock_injection(struct ww_mu
int __sched
ww_mutex_lock(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_acquires(nonzero, lock)
{
int ret;
@@ -909,6 +932,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock);
int __sched
ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_acquires(0, lock)
{
int ret;
@@ -929,6 +953,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock_interrup
* Release the lock, slowpath:
*/
static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
+ __releases(lock)
{
struct task_struct *next = NULL;
DEFINE_WAKE_Q(wake_q);
@@ -936,6 +961,7 @@ static noinline void __sched __mutex_unl
unsigned long flags;
mutex_release(&lock->dep_map, ip);
+ __release(lock);
/*
* Release the lock before (potentially) taking the spinlock such that
@@ -1061,24 +1087,29 @@ EXPORT_SYMBOL_GPL(mutex_lock_io);
static noinline void __sched
__mutex_lock_slowpath(struct mutex *lock)
+ __acquires(lock)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
+ __acquire(lock);
}
static noinline int __sched
__mutex_lock_killable_slowpath(struct mutex *lock)
+ __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
}
static noinline int __sched
__mutex_lock_interruptible_slowpath(struct mutex *lock)
+ __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
}
static noinline int __sched
__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_acquires(0, lock)
{
return __ww_mutex_lock(&lock->base, TASK_UNINTERRUPTIBLE, 0,
_RET_IP_, ctx);
@@ -1087,6 +1118,7 @@ __ww_mutex_lock_slowpath(struct ww_mutex
static noinline int __sched
__ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
struct ww_acquire_ctx *ctx)
+ __cond_acquires(0, lock)
{
return __ww_mutex_lock(&lock->base, TASK_INTERRUPTIBLE, 0,
_RET_IP_, ctx);
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -7,6 +7,7 @@
* Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
*/
#ifndef CONFIG_PREEMPT_RT
+#include <linux/mutex.h>
/*
* This is the control structure for tasks blocked on mutex, which resides
* on the blocked task's kernel stack:
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -7,6 +7,7 @@
static inline struct mutex_waiter *
__ww_waiter_first(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
struct mutex_waiter *w;
@@ -19,6 +20,7 @@ __ww_waiter_first(struct mutex *lock)
static inline struct mutex_waiter *
__ww_waiter_next(struct mutex *lock, struct mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
w = list_next_entry(w, list);
if (list_entry_is_head(w, &lock->wait_list, list))
@@ -29,6 +31,7 @@ __ww_waiter_next(struct mutex *lock, str
static inline struct mutex_waiter *
__ww_waiter_prev(struct mutex *lock, struct mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
w = list_prev_entry(w, list);
if (list_entry_is_head(w, &lock->wait_list, list))
@@ -39,6 +42,7 @@ __ww_waiter_prev(struct mutex *lock, str
static inline struct mutex_waiter *
__ww_waiter_last(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
struct mutex_waiter *w;
@@ -51,6 +55,7 @@ __ww_waiter_last(struct mutex *lock)
static inline void
__ww_waiter_add(struct mutex *lock, struct mutex_waiter *waiter, struct mutex_waiter *pos)
+ __must_hold(&lock->wait_lock)
{
struct list_head *p = &lock->wait_list;
if (pos)
@@ -71,16 +76,19 @@ __ww_mutex_has_waiters(struct mutex *loc
}
static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
+ __acquires(&lock->wait_lock)
{
raw_spin_lock_irqsave(&lock->wait_lock, *flags);
}
static inline void unlock_wait_lock(struct mutex *lock, unsigned long *flags)
+ __releases(&lock->wait_lock)
{
raw_spin_unlock_irqrestore(&lock->wait_lock, *flags);
}
static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
lockdep_assert_held(&lock->wait_lock);
}
@@ -307,6 +315,7 @@ static bool __ww_mutex_wound(struct MUTE
struct ww_acquire_ctx *ww_ctx,
struct ww_acquire_ctx *hold_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct task_struct *owner = __ww_mutex_owner(lock);
@@ -371,6 +380,7 @@ static bool __ww_mutex_wound(struct MUTE
static void
__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct MUTEX_WAITER *cur;
@@ -464,6 +474,7 @@ __ww_mutex_kill(struct MUTEX *lock, stru
static inline int
__ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
struct ww_acquire_ctx *ctx)
+ __must_hold(&lock->wait_lock)
{
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
@@ -514,6 +525,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITE
struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct MUTEX_WAITER *cur, *pos = NULL;
bool is_wait_die;
* Re: [RFC][PATCH 2/4] locking/mutex: Add context analysis
2026-01-21 11:07 ` [RFC][PATCH 2/4] locking/mutex: Add context analysis Peter Zijlstra
@ 2026-01-21 17:11 ` Bart Van Assche
2026-01-21 18:59 ` Peter Zijlstra
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
1 sibling, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2026-01-21 17:11 UTC (permalink / raw)
To: Peter Zijlstra, elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> @@ -565,6 +574,8 @@ EXPORT_SYMBOL(mutex_unlock);
> * of a unlocked mutex is not allowed.
> */
> void __sched ww_mutex_unlock(struct ww_mutex *lock)
> + __releases(lock)
> + __no_context_analysis
> {
> __ww_mutex_unlock(lock);
> mutex_unlock(&lock->base);
"__releases()" annotations should be added to the declaration of a
function only instead of the definition, isn't it?
> @@ -824,22 +838,27 @@ EXPORT_SYMBOL(ww_mutex_trylock);
> #ifdef CONFIG_DEBUG_LOCK_ALLOC
> void __sched
> mutex_lock_nested(struct mutex *lock, unsigned int subclass)
> + __acquires(lock)
Same comment here and below for the "__acquires()" annotation: I think
this annotation should be added to the function declaration only.
Thanks,
Bart.
* Re: [RFC][PATCH 2/4] locking/mutex: Add context analysis
2026-01-21 17:11 ` Bart Van Assche
@ 2026-01-21 18:59 ` Peter Zijlstra
0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 18:59 UTC (permalink / raw)
To: Bart Van Assche
Cc: elver, linux-kernel, bigeasy, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, llvm
On Wed, Jan 21, 2026 at 09:11:53AM -0800, Bart Van Assche wrote:
> On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> > @@ -565,6 +574,8 @@ EXPORT_SYMBOL(mutex_unlock);
> > * of a unlocked mutex is not allowed.
> > */
> > void __sched ww_mutex_unlock(struct ww_mutex *lock)
> > + __releases(lock)
> > + __no_context_analysis
> > {
> > __ww_mutex_unlock(lock);
> > mutex_unlock(&lock->base);
>
> "__releases()" annotations should be added to the declaration of a
> function only instead of the definition, isn't it?
>
> > @@ -824,22 +838,27 @@ EXPORT_SYMBOL(ww_mutex_trylock);
> > #ifdef CONFIG_DEBUG_LOCK_ALLOC
> > void __sched
> > mutex_lock_nested(struct mutex *lock, unsigned int subclass)
> > + __acquires(lock)
>
> Same comment here and below for the "__acquires()" annotation: I think
> this annotation should be added to the function declaration only.
Yeah, I thought I went through and lifted most of them. Clearly I missed
a few :/
* [tip: locking/core] locking/mutex: Add context analysis
2026-01-21 11:07 ` [RFC][PATCH 2/4] locking/mutex: Add context analysis Peter Zijlstra
2026-01-21 17:11 ` Bart Van Assche
@ 2026-03-09 19:48 ` tip-bot2 for Peter Zijlstra
1 sibling, 0 replies; 30+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2026-03-09 19:48 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 5c4326231cde36fd5e90c41e403df9fac6238f4b
Gitweb: https://git.kernel.org/tip/5c4326231cde36fd5e90c41e403df9fac6238f4b
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 20 Jan 2026 10:06:08 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Sun, 08 Mar 2026 11:06:53 +01:00
locking/mutex: Add context analysis
Add compiler context analysis annotations.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260121111213.745353747@infradead.org
---
include/linux/mutex.h | 2 +-
include/linux/mutex_types.h | 2 +-
kernel/locking/Makefile | 2 ++
kernel/locking/mutex.c | 33 ++++++++++++++++++++++++++++-----
kernel/locking/mutex.h | 1 +
kernel/locking/ww_mutex.h | 12 ++++++++++++
6 files changed, 45 insertions(+), 7 deletions(-)
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index c471b12..734048c 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -183,7 +183,7 @@ static inline int __must_check __devm_mutex_init(struct device *dev, struct mute
*/
#ifdef CONFIG_DEBUG_LOCK_ALLOC
extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
-extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock) __acquires(lock);
extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
unsigned int subclass) __cond_acquires(0, lock);
extern int __must_check _mutex_lock_killable(struct mutex *lock,
diff --git a/include/linux/mutex_types.h b/include/linux/mutex_types.h
index a8f119f..24ed599 100644
--- a/include/linux/mutex_types.h
+++ b/include/linux/mutex_types.h
@@ -44,7 +44,7 @@ context_lock_struct(mutex) {
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
struct optimistic_spin_queue osq; /* Spinner MCS lock */
#endif
- struct mutex_waiter *first_waiter;
+ struct mutex_waiter *first_waiter __guarded_by(&wait_lock);
#ifdef CONFIG_DEBUG_MUTEXES
void *magic;
#endif
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index a114949..264447d 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -3,6 +3,8 @@
# and is generally not a function of system call inputs.
KCOV_INSTRUMENT := n
+CONTEXT_ANALYSIS_mutex.o := y
+
obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
# Avoid recursion lockdep -> sanitizer -> ... -> lockdep & improve performance.
diff --git a/kernel/locking/mutex.c b/kernel/locking/mutex.c
index 95f1822..427187f 100644
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -46,8 +46,9 @@
static void __mutex_init_generic(struct mutex *lock)
{
atomic_long_set(&lock->owner, 0);
- raw_spin_lock_init(&lock->wait_lock);
- lock->first_waiter = NULL;
+ scoped_guard (raw_spinlock_init, &lock->wait_lock) {
+ lock->first_waiter = NULL;
+ }
#ifdef CONFIG_MUTEX_SPIN_ON_OWNER
osq_lock_init(&lock->osq);
#endif
@@ -150,6 +151,7 @@ EXPORT_SYMBOL(mutex_init_generic);
* follow with a __mutex_trylock() before failing.
*/
static __always_inline bool __mutex_trylock_fast(struct mutex *lock)
+ __cond_acquires(true, lock)
{
unsigned long curr = (unsigned long)current;
unsigned long zero = 0UL;
@@ -163,6 +165,7 @@ static __always_inline bool __mutex_trylock_fast(struct mutex *lock)
}
static __always_inline bool __mutex_unlock_fast(struct mutex *lock)
+ __cond_releases(true, lock)
{
unsigned long curr = (unsigned long)current;
@@ -201,6 +204,7 @@ static inline void __mutex_clear_flag(struct mutex *lock, unsigned long flag)
static void
__mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
struct mutex_waiter *first)
+ __must_hold(&lock->wait_lock)
{
hung_task_set_blocker(lock, BLOCKER_TYPE_MUTEX);
debug_mutex_add_waiter(lock, waiter, current);
@@ -219,6 +223,7 @@ __mutex_add_waiter(struct mutex *lock, struct mutex_waiter *waiter,
static void
__mutex_remove_waiter(struct mutex *lock, struct mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
if (list_empty(&waiter->list)) {
__mutex_clear_flag(lock, MUTEX_FLAGS);
@@ -268,7 +273,8 @@ static void __mutex_handoff(struct mutex *lock, struct task_struct *task)
* We also put the fastpath first in the kernel image, to make sure the
* branch is predicted by the CPU as default-untaken.
*/
-static void __sched __mutex_lock_slowpath(struct mutex *lock);
+static void __sched __mutex_lock_slowpath(struct mutex *lock)
+ __acquires(lock);
/**
* mutex_lock - acquire the mutex
@@ -349,7 +355,7 @@ bool ww_mutex_spin_on_owner(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
* Similarly, stop spinning if we are no longer the
* first waiter.
*/
- if (waiter && lock->first_waiter != waiter)
+ if (waiter && data_race(lock->first_waiter != waiter))
return false;
return true;
@@ -534,7 +540,8 @@ mutex_optimistic_spin(struct mutex *lock, struct ww_acquire_ctx *ww_ctx,
}
#endif
-static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip);
+static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
+ __releases(lock);
/**
* mutex_unlock - release the mutex
@@ -574,6 +581,7 @@ EXPORT_SYMBOL(mutex_unlock);
* of a unlocked mutex is not allowed.
*/
void __sched ww_mutex_unlock(struct ww_mutex *lock)
+ __no_context_analysis
{
__ww_mutex_unlock(lock);
mutex_unlock(&lock->base);
@@ -587,6 +595,7 @@ static __always_inline int __sched
__mutex_lock_common(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip,
struct ww_acquire_ctx *ww_ctx, const bool use_ww_ctx)
+ __cond_acquires(0, lock)
{
DEFINE_WAKE_Q(wake_q);
struct mutex_waiter waiter;
@@ -780,6 +789,7 @@ err_early_kill:
static int __sched
__mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
struct lockdep_map *nest_lock, unsigned long ip)
+ __cond_acquires(0, lock)
{
return __mutex_lock_common(lock, state, subclass, nest_lock, ip, NULL, false);
}
@@ -787,6 +797,7 @@ __mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
static int __sched
__ww_mutex_lock(struct mutex *lock, unsigned int state, unsigned int subclass,
unsigned long ip, struct ww_acquire_ctx *ww_ctx)
+ __cond_acquires(0, lock)
{
return __mutex_lock_common(lock, state, subclass, NULL, ip, ww_ctx, true);
}
@@ -834,6 +845,7 @@ void __sched
mutex_lock_nested(struct mutex *lock, unsigned int subclass)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, subclass, NULL, _RET_IP_);
+ __acquire(lock);
}
EXPORT_SYMBOL_GPL(mutex_lock_nested);
@@ -842,6 +854,7 @@ void __sched
_mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
+ __acquire(lock);
}
EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock);
@@ -870,12 +883,14 @@ mutex_lock_io_nested(struct mutex *lock, unsigned int subclass)
token = io_schedule_prepare();
__mutex_lock_common(lock, TASK_UNINTERRUPTIBLE,
subclass, NULL, _RET_IP_, NULL, 0);
+ __acquire(lock);
io_schedule_finish(token);
}
EXPORT_SYMBOL_GPL(mutex_lock_io_nested);
static inline int
ww_mutex_deadlock_injection(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_releases(nonzero, lock)
{
#ifdef CONFIG_DEBUG_WW_MUTEX_SLOWPATH
unsigned tmp;
@@ -937,6 +952,7 @@ EXPORT_SYMBOL_GPL(ww_mutex_lock_interruptible);
* Release the lock, slowpath:
*/
static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigned long ip)
+ __releases(lock)
{
struct task_struct *next = NULL;
struct mutex_waiter *waiter;
@@ -945,6 +961,7 @@ static noinline void __sched __mutex_unlock_slowpath(struct mutex *lock, unsigne
unsigned long flags;
mutex_release(&lock->dep_map, ip);
+ __release(lock);
/*
* Release the lock before (potentially) taking the spinlock such that
@@ -1066,24 +1083,29 @@ EXPORT_SYMBOL_GPL(mutex_lock_io);
static noinline void __sched
__mutex_lock_slowpath(struct mutex *lock)
+ __acquires(lock)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, NULL, _RET_IP_);
+ __acquire(lock);
}
static noinline int __sched
__mutex_lock_killable_slowpath(struct mutex *lock)
+ __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_KILLABLE, 0, NULL, _RET_IP_);
}
static noinline int __sched
__mutex_lock_interruptible_slowpath(struct mutex *lock)
+ __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_INTERRUPTIBLE, 0, NULL, _RET_IP_);
}
static noinline int __sched
__ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
+ __cond_acquires(0, lock)
{
return __ww_mutex_lock(&lock->base, TASK_UNINTERRUPTIBLE, 0,
_RET_IP_, ctx);
@@ -1092,6 +1114,7 @@ __ww_mutex_lock_slowpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
static noinline int __sched
__ww_mutex_lock_interruptible_slowpath(struct ww_mutex *lock,
struct ww_acquire_ctx *ctx)
+ __cond_acquires(0, lock)
{
return __ww_mutex_lock(&lock->base, TASK_INTERRUPTIBLE, 0,
_RET_IP_, ctx);
diff --git a/kernel/locking/mutex.h b/kernel/locking/mutex.h
index 9ad4da8..b94ef40 100644
--- a/kernel/locking/mutex.h
+++ b/kernel/locking/mutex.h
@@ -7,6 +7,7 @@
* Copyright (C) 2004, 2005, 2006 Red Hat, Inc., Ingo Molnar <mingo@redhat.com>
*/
#ifndef CONFIG_PREEMPT_RT
+#include <linux/mutex.h>
/*
* This is the control structure for tasks blocked on mutex, which resides
* on the blocked task's kernel stack:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index a0847e9..c50ea5d 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -7,12 +7,14 @@
static inline struct mutex_waiter *
__ww_waiter_first(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
return lock->first_waiter;
}
static inline struct mutex_waiter *
__ww_waiter_next(struct mutex *lock, struct mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
w = list_next_entry(w, list);
if (lock->first_waiter == w)
@@ -23,6 +25,7 @@ __ww_waiter_next(struct mutex *lock, struct mutex_waiter *w)
static inline struct mutex_waiter *
__ww_waiter_prev(struct mutex *lock, struct mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
w = list_prev_entry(w, list);
if (lock->first_waiter == w)
@@ -33,6 +36,7 @@ __ww_waiter_prev(struct mutex *lock, struct mutex_waiter *w)
static inline struct mutex_waiter *
__ww_waiter_last(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
struct mutex_waiter *w = lock->first_waiter;
@@ -43,6 +47,7 @@ __ww_waiter_last(struct mutex *lock)
static inline void
__ww_waiter_add(struct mutex *lock, struct mutex_waiter *waiter, struct mutex_waiter *pos)
+ __must_hold(&lock->wait_lock)
{
__mutex_add_waiter(lock, waiter, pos);
}
@@ -60,16 +65,19 @@ __ww_mutex_has_waiters(struct mutex *lock)
}
static inline void lock_wait_lock(struct mutex *lock, unsigned long *flags)
+ __acquires(&lock->wait_lock)
{
raw_spin_lock_irqsave(&lock->wait_lock, *flags);
}
static inline void unlock_wait_lock(struct mutex *lock, unsigned long *flags)
+ __releases(&lock->wait_lock)
{
raw_spin_unlock_irqrestore(&lock->wait_lock, *flags);
}
static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
+ __must_hold(&lock->wait_lock)
{
lockdep_assert_held(&lock->wait_lock);
}
@@ -296,6 +304,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct ww_acquire_ctx *hold_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct task_struct *owner = __ww_mutex_owner(lock);
@@ -360,6 +369,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
static void
__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct MUTEX_WAITER *cur;
@@ -453,6 +463,7 @@ __ww_mutex_kill(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
static inline int
__ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
struct ww_acquire_ctx *ctx)
+ __must_hold(&lock->wait_lock)
{
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
@@ -503,6 +514,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct MUTEX_WAITER *cur, *pos = NULL;
bool is_wait_die;
* [RFC][PATCH 3/4] locking/rtmutex: Add context analysis
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
2026-01-21 11:07 ` [RFC][PATCH 1/4] compiler-context-analysys: Add __cond_releases() Peter Zijlstra
2026-01-21 11:07 ` [RFC][PATCH 2/4] locking/mutex: Add context analysis Peter Zijlstra
@ 2026-01-21 11:07 ` Peter Zijlstra
2026-01-21 17:15 ` Bart Van Assche
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2026-01-21 11:07 ` [RFC][PATCH 4/4] futex: Convert to compiler " Peter Zijlstra
` (2 subsequent siblings)
5 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 11:07 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, peterz, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, bvanassche, llvm
Add compiler context analysis annotations.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
include/linux/mutex.h | 2 +-
include/linux/rtmutex.h | 4 ++--
kernel/locking/Makefile | 2 ++
kernel/locking/mutex.c | 2 --
kernel/locking/rtmutex.c | 18 +++++++++++++++++-
kernel/locking/rtmutex_api.c | 3 +++
kernel/locking/rtmutex_common.h | 22 ++++++++++++++++------
kernel/locking/ww_mutex.h | 18 +++++++++++++-----
kernel/locking/ww_rt_mutex.c | 1 +
9 files changed, 55 insertions(+), 17 deletions(-)
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -183,7 +183,7 @@ static inline int __must_check __devm_mu
*/
#ifdef CONFIG_DEBUG_LOCK_ALLOC
extern void mutex_lock_nested(struct mutex *lock, unsigned int subclass) __acquires(lock);
-extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock);
+extern void _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest_lock) __acquires(lock);
extern int __must_check mutex_lock_interruptible_nested(struct mutex *lock,
unsigned int subclass) __cond_acquires(0, lock);
extern int __must_check _mutex_lock_killable(struct mutex *lock,
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -22,8 +22,8 @@ extern int max_lock_depth;
struct rt_mutex_base {
raw_spinlock_t wait_lock;
- struct rb_root_cached waiters;
- struct task_struct *owner;
+ struct rb_root_cached waiters __guarded_by(&wait_lock);
+ struct task_struct *owner __guarded_by(&wait_lock);
};
#define __RT_MUTEX_BASE_INITIALIZER(rtbasename) \
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -4,6 +4,8 @@
KCOV_INSTRUMENT := n
CONTEXT_ANALYSIS_mutex.o := y
+CONTEXT_ANALYSIS_rtmutex_api.o := y
+CONTEXT_ANALYSIS_ww_rt_mutex.o := y
obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
--- a/kernel/locking/mutex.c
+++ b/kernel/locking/mutex.c
@@ -848,7 +848,6 @@ EXPORT_SYMBOL_GPL(mutex_lock_nested);
void __sched
_mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
- __acquires(lock)
{
__mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
__acquire(lock);
@@ -858,7 +857,6 @@ EXPORT_SYMBOL_GPL(_mutex_lock_nest_lock)
int __sched
_mutex_lock_killable(struct mutex *lock, unsigned int subclass,
struct lockdep_map *nest)
- __cond_acquires(0, lock)
{
return __mutex_lock(lock, TASK_KILLABLE, subclass, nest, _RET_IP_);
}
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -94,6 +94,7 @@ static inline int __ww_mutex_check_kill(
static __always_inline struct task_struct *
rt_mutex_owner_encode(struct rt_mutex_base *lock, struct task_struct *owner)
+ __must_hold(&lock->wait_lock)
{
unsigned long val = (unsigned long)owner;
@@ -105,6 +106,7 @@ rt_mutex_owner_encode(struct rt_mutex_ba
static __always_inline void
rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
+ __must_hold(&lock->wait_lock)
{
/*
* lock->wait_lock is held but explicit acquire semantics are needed
@@ -114,12 +116,14 @@ rt_mutex_set_owner(struct rt_mutex_base
}
static __always_inline void rt_mutex_clear_owner(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
/* lock->wait_lock is held so the unlock provides release semantics. */
WRITE_ONCE(lock->owner, rt_mutex_owner_encode(lock, NULL));
}
static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
lock->owner = (struct task_struct *)
((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
@@ -127,6 +131,7 @@ static __always_inline void clear_rt_mut
static __always_inline void
fixup_rt_mutex_waiters(struct rt_mutex_base *lock, bool acquire_lock)
+ __must_hold(&lock->wait_lock)
{
unsigned long owner, *p = (unsigned long *) &lock->owner;
@@ -328,6 +333,7 @@ static __always_inline bool rt_mutex_cmp
}
static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
lock->owner = (struct task_struct *)
((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS);
@@ -1206,6 +1212,7 @@ static int __sched task_blocks_on_rt_mut
struct ww_acquire_ctx *ww_ctx,
enum rtmutex_chainwalk chwalk,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct task_struct *owner = rt_mutex_owner(lock);
struct rt_mutex_waiter *top_waiter = waiter;
@@ -1249,6 +1256,7 @@ static int __sched task_blocks_on_rt_mut
/* Check whether the waiter should back out immediately */
rtm = container_of(lock, struct rt_mutex, rtmutex);
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx, wake_q);
if (res) {
raw_spin_lock(&task->pi_lock);
@@ -1356,6 +1364,7 @@ static void __sched mark_wakeup_next_wai
}
static int __sched __rt_mutex_slowtrylock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
int ret = try_to_take_rt_mutex(lock, current, NULL);
@@ -1505,7 +1514,7 @@ static bool rtmutex_spin_on_owner(struct
* - the VCPU on which owner runs is preempted
*/
if (!owner_on_cpu(owner) || need_resched() ||
- !rt_mutex_waiter_is_top_waiter(lock, waiter)) {
+ !data_race(rt_mutex_waiter_is_top_waiter(lock, waiter))) {
res = false;
break;
}
@@ -1538,6 +1547,7 @@ static bool rtmutex_spin_on_owner(struct
*/
static void __sched remove_waiter(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
struct task_struct *owner = rt_mutex_owner(lock);
@@ -1613,6 +1623,8 @@ static int __sched rt_mutex_slowlock_blo
struct task_struct *owner;
int ret = 0;
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
+
lockevent_inc(rtmutex_slow_block);
for (;;) {
/* Try to acquire the lock: */
@@ -1658,6 +1670,7 @@ static int __sched rt_mutex_slowlock_blo
static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
struct rt_mutex_base *lock,
struct rt_mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
/*
* If the result is not -EDEADLOCK or the caller requested
@@ -1694,11 +1707,13 @@ static int __sched __rt_mutex_slowlock(s
enum rtmutex_chainwalk chwalk,
struct rt_mutex_waiter *waiter,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
struct ww_mutex *ww = ww_container_of(rtm);
int ret;
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
lockdep_assert_held(&lock->wait_lock);
lockevent_inc(rtmutex_slowlock);
@@ -1750,6 +1765,7 @@ static inline int __rt_mutex_slowlock_lo
struct ww_acquire_ctx *ww_ctx,
unsigned int state,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct rt_mutex_waiter waiter;
int ret;
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -169,6 +169,7 @@ int __sched rt_mutex_futex_trylock(struc
}
int __sched __rt_mutex_futex_trylock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
return __rt_mutex_slowtrylock(lock);
}
@@ -526,6 +527,7 @@ static __always_inline int __mutex_lock_
unsigned int subclass,
struct lockdep_map *nest_lock,
unsigned long ip)
+ __acquires(lock) __no_context_analysis
{
int ret;
@@ -647,6 +649,7 @@ EXPORT_SYMBOL(mutex_trylock);
#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
void __sched mutex_unlock(struct mutex *lock)
+ __releases(lock) __no_context_analysis
{
mutex_release(&lock->dep_map, _RET_IP_);
__rt_mutex_unlock(&lock->rtmutex);
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -79,12 +79,18 @@ struct rt_wake_q_head {
* PI-futex support (proxy locking functions, etc.):
*/
extern void rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
- struct task_struct *proxy_owner);
-extern void rt_mutex_proxy_unlock(struct rt_mutex_base *lock);
+ struct task_struct *proxy_owner)
+ __must_hold(&lock->wait_lock);
+
+extern void rt_mutex_proxy_unlock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock);
+
extern int __rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter,
struct task_struct *task,
- struct wake_q_head *);
+ struct wake_q_head *)
+ __must_hold(&lock->wait_lock);
+
extern int rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter,
struct task_struct *task);
@@ -109,6 +115,7 @@ extern void rt_mutex_postunlock(struct r
*/
#ifdef CONFIG_RT_MUTEXES
static inline int rt_mutex_has_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
return !RB_EMPTY_ROOT(&lock->waiters.rb_root);
}
@@ -120,6 +127,7 @@ static inline int rt_mutex_has_waiters(s
*/
static inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
struct rb_node *leftmost = rb_first_cached(&lock->waiters);
@@ -127,6 +135,7 @@ static inline bool rt_mutex_waiter_is_to
}
static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
struct rb_node *leftmost = rb_first_cached(&lock->waiters);
struct rt_mutex_waiter *w = NULL;
@@ -170,9 +179,10 @@ enum rtmutex_chainwalk {
static inline void __rt_mutex_base_init(struct rt_mutex_base *lock)
{
- raw_spin_lock_init(&lock->wait_lock);
- lock->waiters = RB_ROOT_CACHED;
- lock->owner = NULL;
+ scoped_guard (raw_spinlock_init, &lock->wait_lock) {
+ lock->waiters = RB_ROOT_CACHED;
+ lock->owner = NULL;
+ }
}
/* Debug functions */
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -4,6 +4,7 @@
#define MUTEX mutex
#define MUTEX_WAITER mutex_waiter
+#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->wait_lock)
static inline struct mutex_waiter *
__ww_waiter_first(struct mutex *lock)
@@ -97,9 +98,11 @@ static inline void lockdep_assert_wait_l
#define MUTEX rt_mutex
#define MUTEX_WAITER rt_mutex_waiter
+#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->rtmutex.wait_lock)
static inline struct rt_mutex_waiter *
__ww_waiter_first(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
struct rb_node *n = rb_first(&lock->rtmutex.waiters.rb_root);
if (!n)
@@ -127,6 +130,7 @@ __ww_waiter_prev(struct rt_mutex *lock,
static inline struct rt_mutex_waiter *
__ww_waiter_last(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
struct rb_node *n = rb_last(&lock->rtmutex.waiters.rb_root);
if (!n)
@@ -148,21 +152,25 @@ __ww_mutex_owner(struct rt_mutex *lock)
static inline bool
__ww_mutex_has_waiters(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
return rt_mutex_has_waiters(&lock->rtmutex);
}
static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
+ __acquires(&lock->rtmutex.wait_lock)
{
raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
}
static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
+ __releases(&lock->rtmutex.wait_lock)
{
raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, *flags);
}
static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
lockdep_assert_held(&lock->rtmutex.wait_lock);
}
@@ -315,7 +323,7 @@ static bool __ww_mutex_wound(struct MUTE
struct ww_acquire_ctx *ww_ctx,
struct ww_acquire_ctx *hold_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ MUST_HOLD_WAIT_LOCK
{
struct task_struct *owner = __ww_mutex_owner(lock);
@@ -380,7 +388,7 @@ static bool __ww_mutex_wound(struct MUTE
static void
__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ MUST_HOLD_WAIT_LOCK
{
struct MUTEX_WAITER *cur;
@@ -428,7 +436,7 @@ ww_mutex_set_context_fastpath(struct ww_
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!__ww_mutex_has_waiters(&lock->base)))
+ if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
return;
/*
@@ -474,7 +482,7 @@ __ww_mutex_kill(struct MUTEX *lock, stru
static inline int
__ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
struct ww_acquire_ctx *ctx)
- __must_hold(&lock->wait_lock)
+ MUST_HOLD_WAIT_LOCK
{
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
@@ -525,7 +533,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITE
struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ MUST_HOLD_WAIT_LOCK
{
struct MUTEX_WAITER *cur, *pos = NULL;
bool is_wait_die;
--- a/kernel/locking/ww_rt_mutex.c
+++ b/kernel/locking/ww_rt_mutex.c
@@ -90,6 +90,7 @@ ww_mutex_lock_interruptible(struct ww_mu
EXPORT_SYMBOL(ww_mutex_lock_interruptible);
void __sched ww_mutex_unlock(struct ww_mutex *lock)
+ __no_context_analysis
{
struct rt_mutex *rtm = &lock->base;
* Re: [RFC][PATCH 3/4] locking/rtmutex: Add context analysis
2026-01-21 11:07 ` [RFC][PATCH 3/4] locking/rtmutex: " Peter Zijlstra
@ 2026-01-21 17:15 ` Bart Van Assche
2026-01-21 19:01 ` Peter Zijlstra
2026-03-09 19:48 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
1 sibling, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2026-01-21 17:15 UTC (permalink / raw)
To: Peter Zijlstra, elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> Add compiler context analysis annotations.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
> include/linux/mutex.h | 2 +-
> include/linux/rtmutex.h | 4 ++--
> kernel/locking/Makefile | 2 ++
> kernel/locking/mutex.c | 2 --
> kernel/locking/rtmutex.c | 18 +++++++++++++++++-
> kernel/locking/rtmutex_api.c | 3 +++
> kernel/locking/rtmutex_common.h | 22 ++++++++++++++++------
> kernel/locking/ww_mutex.h | 18 +++++++++++++-----
> kernel/locking/ww_rt_mutex.c | 1 +
> 9 files changed, 55 insertions(+), 17 deletions(-)
The patch subject says "rtmutex" but this patch includes a change for
the header file include/linux/mutex.h. Shouldn't that change be moved
into the mutex patch?
> --- a/kernel/locking/mutex.c
> +++ b/kernel/locking/mutex.c
> @@ -848,7 +848,6 @@ EXPORT_SYMBOL_GPL(mutex_lock_nested);
>
> void __sched
> _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
> - __acquires(lock)
> {
> __mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
> __acquire(lock);
Shouldn't the "__acquires()" annotation be moved to the declaration in
<linux/mutex.h>?
> #define MUTEX mutex
> #define MUTEX_WAITER mutex_waiter
> +#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->wait_lock)
>
> static inline struct mutex_waiter *
> __ww_waiter_first(struct mutex *lock)
> @@ -97,9 +98,11 @@ static inline void lockdep_assert_wait_l
>
> #define MUTEX rt_mutex
> #define MUTEX_WAITER rt_mutex_waiter
> +#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->rtmutex.wait_lock)
Is it really necessary to introduce these two macros? I prefer to see
the __must_hold() annotations instead of the macro names.
Thanks,
Bart.
* Re: [RFC][PATCH 3/4] locking/rtmutex: Add context analysis
2026-01-21 17:15 ` Bart Van Assche
@ 2026-01-21 19:01 ` Peter Zijlstra
0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 19:01 UTC (permalink / raw)
To: Bart Van Assche
Cc: elver, linux-kernel, bigeasy, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, llvm
On Wed, Jan 21, 2026 at 09:15:53AM -0800, Bart Van Assche wrote:
> On 1/21/26 3:07 AM, Peter Zijlstra wrote:
> > Add compiler context analysis annotations.
> >
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> > include/linux/mutex.h | 2 +-
> > include/linux/rtmutex.h | 4 ++--
> > kernel/locking/Makefile | 2 ++
> > kernel/locking/mutex.c | 2 --
> > kernel/locking/rtmutex.c | 18 +++++++++++++++++-
> > kernel/locking/rtmutex_api.c | 3 +++
> > kernel/locking/rtmutex_common.h | 22 ++++++++++++++++------
> > kernel/locking/ww_mutex.h | 18 +++++++++++++-----
> > kernel/locking/ww_rt_mutex.c | 1 +
> > 9 files changed, 55 insertions(+), 17 deletions(-)
>
> The patch subject says "rtmutex" but this patch includes a change for
> the header file include/linux/mutex.h. Shouldn't that change be moved
> into the mutex patch?
Clearly I suffered from wandering hunks syndrome while doing these :/
> > --- a/kernel/locking/mutex.c
> > +++ b/kernel/locking/mutex.c
> > @@ -848,7 +848,6 @@ EXPORT_SYMBOL_GPL(mutex_lock_nested);
> > void __sched
> > _mutex_lock_nest_lock(struct mutex *lock, struct lockdep_map *nest)
> > - __acquires(lock)
> > {
> > __mutex_lock(lock, TASK_UNINTERRUPTIBLE, 0, nest, _RET_IP_);
> > __acquire(lock);
>
> Shouldn't the "__acquires()" annotation be moved to the declaration in
> <linux/mutex.h>?
It is, but yeah, this should be in the previous patch.
> > #define MUTEX mutex
> > #define MUTEX_WAITER mutex_waiter
> > +#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->wait_lock)
> > static inline struct mutex_waiter *
> > __ww_waiter_first(struct mutex *lock)
> > @@ -97,9 +98,11 @@ static inline void lockdep_assert_wait_l
> > #define MUTEX rt_mutex
> > #define MUTEX_WAITER rt_mutex_waiter
> > +#define MUST_HOLD_WAIT_LOCK __must_hold(&lock->rtmutex.wait_lock)
>
> Is it really necessary to introduce these two macros? I prefer to see
> the __must_hold() annotations instead of the macro names.
You'd rather see something like:
__must_hold(&lock->WAIT_LOCK)
? Can do I suppose.
* [tip: locking/core] locking/rtmutex: Add context analysis
2026-01-21 11:07 ` [RFC][PATCH 3/4] locking/rtmutex: " Peter Zijlstra
2026-01-21 17:15 ` Bart Van Assche
@ 2026-03-09 19:48 ` tip-bot2 for Peter Zijlstra
1 sibling, 0 replies; 30+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2026-03-09 19:48 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 90bb681dcdf7e69c90b56a18f06c0389a0810b92
Gitweb: https://git.kernel.org/tip/90bb681dcdf7e69c90b56a18f06c0389a0810b92
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Tue, 20 Jan 2026 18:17:50 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Sun, 08 Mar 2026 11:06:53 +01:00
locking/rtmutex: Add context analysis
Add compiler context analysis annotations.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260121111213.851599178@infradead.org
---
include/linux/rtmutex.h | 8 +++----
kernel/locking/Makefile | 2 ++-
kernel/locking/rtmutex.c | 18 ++++++++++++++-
kernel/locking/rtmutex_api.c | 2 ++-
kernel/locking/rtmutex_common.h | 27 ++++++++++++++++-------
kernel/locking/ww_mutex.h | 20 ++++++++++++-----
kernel/locking/ww_rt_mutex.c | 1 +-
scripts/context-analysis-suppression.txt | 1 +-
8 files changed, 61 insertions(+), 18 deletions(-)
diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index ede4c6b..78e7e58 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -22,8 +22,8 @@ extern int max_lock_depth;
struct rt_mutex_base {
raw_spinlock_t wait_lock;
- struct rb_root_cached waiters;
- struct task_struct *owner;
+ struct rb_root_cached waiters __guarded_by(&wait_lock);
+ struct task_struct *owner __guarded_by(&wait_lock);
};
#define __RT_MUTEX_BASE_INITIALIZER(rtbasename) \
@@ -41,7 +41,7 @@ struct rt_mutex_base {
*/
static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
{
- return READ_ONCE(lock->owner) != NULL;
+ return data_race(READ_ONCE(lock->owner) != NULL);
}
#ifdef CONFIG_RT_MUTEXES
@@ -49,7 +49,7 @@ static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
{
- unsigned long owner = (unsigned long) READ_ONCE(lock->owner);
+ unsigned long owner = (unsigned long) data_race(READ_ONCE(lock->owner));
return (struct task_struct *) (owner & ~RT_MUTEX_HAS_WAITERS);
}
diff --git a/kernel/locking/Makefile b/kernel/locking/Makefile
index 264447d..0c07de7 100644
--- a/kernel/locking/Makefile
+++ b/kernel/locking/Makefile
@@ -4,6 +4,8 @@
KCOV_INSTRUMENT := n
CONTEXT_ANALYSIS_mutex.o := y
+CONTEXT_ANALYSIS_rtmutex_api.o := y
+CONTEXT_ANALYSIS_ww_rt_mutex.o := y
obj-y += mutex.o semaphore.o rwsem.o percpu-rwsem.o
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index c80902e..ccaba61 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -94,6 +94,7 @@ static inline int __ww_mutex_check_kill(struct rt_mutex *lock,
static __always_inline struct task_struct *
rt_mutex_owner_encode(struct rt_mutex_base *lock, struct task_struct *owner)
+ __must_hold(&lock->wait_lock)
{
unsigned long val = (unsigned long)owner;
@@ -105,6 +106,7 @@ rt_mutex_owner_encode(struct rt_mutex_base *lock, struct task_struct *owner)
static __always_inline void
rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
+ __must_hold(&lock->wait_lock)
{
/*
* lock->wait_lock is held but explicit acquire semantics are needed
@@ -114,12 +116,14 @@ rt_mutex_set_owner(struct rt_mutex_base *lock, struct task_struct *owner)
}
static __always_inline void rt_mutex_clear_owner(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
/* lock->wait_lock is held so the unlock provides release semantics. */
WRITE_ONCE(lock->owner, rt_mutex_owner_encode(lock, NULL));
}
static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
lock->owner = (struct task_struct *)
((unsigned long)lock->owner & ~RT_MUTEX_HAS_WAITERS);
@@ -127,6 +131,7 @@ static __always_inline void clear_rt_mutex_waiters(struct rt_mutex_base *lock)
static __always_inline void
fixup_rt_mutex_waiters(struct rt_mutex_base *lock, bool acquire_lock)
+ __must_hold(&lock->wait_lock)
{
unsigned long owner, *p = (unsigned long *) &lock->owner;
@@ -328,6 +333,7 @@ static __always_inline bool rt_mutex_cmpxchg_release(struct rt_mutex_base *lock,
}
static __always_inline void mark_rt_mutex_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
lock->owner = (struct task_struct *)
((unsigned long)lock->owner | RT_MUTEX_HAS_WAITERS);
@@ -1206,6 +1212,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
struct ww_acquire_ctx *ww_ctx,
enum rtmutex_chainwalk chwalk,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct task_struct *owner = rt_mutex_owner(lock);
struct rt_mutex_waiter *top_waiter = waiter;
@@ -1249,6 +1256,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
/* Check whether the waiter should back out immediately */
rtm = container_of(lock, struct rt_mutex, rtmutex);
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
res = __ww_mutex_add_waiter(waiter, rtm, ww_ctx, wake_q);
if (res) {
raw_spin_lock(&task->pi_lock);
@@ -1356,6 +1364,7 @@ static void __sched mark_wakeup_next_waiter(struct rt_wake_q_head *wqh,
}
static int __sched __rt_mutex_slowtrylock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
int ret = try_to_take_rt_mutex(lock, current, NULL);
@@ -1505,7 +1514,7 @@ static bool rtmutex_spin_on_owner(struct rt_mutex_base *lock,
* - the VCPU on which owner runs is preempted
*/
if (!owner_on_cpu(owner) || need_resched() ||
- !rt_mutex_waiter_is_top_waiter(lock, waiter)) {
+ !data_race(rt_mutex_waiter_is_top_waiter(lock, waiter))) {
res = false;
break;
}
@@ -1538,6 +1547,7 @@ static bool rtmutex_spin_on_owner(struct rt_mutex_base *lock,
*/
static void __sched remove_waiter(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
bool is_top_waiter = (waiter == rt_mutex_top_waiter(lock));
struct task_struct *owner = rt_mutex_owner(lock);
@@ -1613,6 +1623,8 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
struct task_struct *owner;
int ret = 0;
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
+
lockevent_inc(rtmutex_slow_block);
for (;;) {
/* Try to acquire the lock: */
@@ -1658,6 +1670,7 @@ static int __sched rt_mutex_slowlock_block(struct rt_mutex_base *lock,
static void __sched rt_mutex_handle_deadlock(int res, int detect_deadlock,
struct rt_mutex_base *lock,
struct rt_mutex_waiter *w)
+ __must_hold(&lock->wait_lock)
{
/*
* If the result is not -EDEADLOCK or the caller requested
@@ -1694,11 +1707,13 @@ static int __sched __rt_mutex_slowlock(struct rt_mutex_base *lock,
enum rtmutex_chainwalk chwalk,
struct rt_mutex_waiter *waiter,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct rt_mutex *rtm = container_of(lock, struct rt_mutex, rtmutex);
struct ww_mutex *ww = ww_container_of(rtm);
int ret;
+ __assume_ctx_lock(&rtm->rtmutex.wait_lock);
lockdep_assert_held(&lock->wait_lock);
lockevent_inc(rtmutex_slowlock);
@@ -1750,6 +1765,7 @@ static inline int __rt_mutex_slowlock_locked(struct rt_mutex_base *lock,
struct ww_acquire_ctx *ww_ctx,
unsigned int state,
struct wake_q_head *wake_q)
+ __must_hold(&lock->wait_lock)
{
struct rt_mutex_waiter waiter;
int ret;
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 59dbd29..124219a 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -526,6 +526,7 @@ static __always_inline int __mutex_lock_common(struct mutex *lock,
unsigned int subclass,
struct lockdep_map *nest_lock,
unsigned long ip)
+ __acquires(lock) __no_context_analysis
{
int ret;
@@ -647,6 +648,7 @@ EXPORT_SYMBOL(mutex_trylock);
#endif /* !CONFIG_DEBUG_LOCK_ALLOC */
void __sched mutex_unlock(struct mutex *lock)
+ __releases(lock) __no_context_analysis
{
mutex_release(&lock->dep_map, _RET_IP_);
__rt_mutex_unlock(&lock->rtmutex);
diff --git a/kernel/locking/rtmutex_common.h b/kernel/locking/rtmutex_common.h
index cf6ddd1..c38b7bd 100644
--- a/kernel/locking/rtmutex_common.h
+++ b/kernel/locking/rtmutex_common.h
@@ -79,12 +79,18 @@ struct rt_wake_q_head {
* PI-futex support (proxy locking functions, etc.):
*/
extern void rt_mutex_init_proxy_locked(struct rt_mutex_base *lock,
- struct task_struct *proxy_owner);
-extern void rt_mutex_proxy_unlock(struct rt_mutex_base *lock);
+ struct task_struct *proxy_owner)
+ __must_hold(&lock->wait_lock);
+
+extern void rt_mutex_proxy_unlock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock);
+
extern int __rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter,
struct task_struct *task,
- struct wake_q_head *);
+ struct wake_q_head *)
+ __must_hold(&lock->wait_lock);
+
extern int rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter,
struct task_struct *task);
@@ -94,8 +100,9 @@ extern int rt_mutex_wait_proxy_lock(struct rt_mutex_base *lock,
extern bool rt_mutex_cleanup_proxy_lock(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter);
-extern int rt_mutex_futex_trylock(struct rt_mutex_base *l);
-extern int __rt_mutex_futex_trylock(struct rt_mutex_base *l);
+extern int rt_mutex_futex_trylock(struct rt_mutex_base *lock);
+extern int __rt_mutex_futex_trylock(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock);
extern void rt_mutex_futex_unlock(struct rt_mutex_base *lock);
extern bool __rt_mutex_futex_unlock(struct rt_mutex_base *lock,
@@ -109,6 +116,7 @@ extern void rt_mutex_postunlock(struct rt_wake_q_head *wqh);
*/
#ifdef CONFIG_RT_MUTEXES
static inline int rt_mutex_has_waiters(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
return !RB_EMPTY_ROOT(&lock->waiters.rb_root);
}
@@ -120,6 +128,7 @@ static inline int rt_mutex_has_waiters(struct rt_mutex_base *lock)
*/
static inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
struct rt_mutex_waiter *waiter)
+ __must_hold(&lock->wait_lock)
{
struct rb_node *leftmost = rb_first_cached(&lock->waiters);
@@ -127,6 +136,7 @@ static inline bool rt_mutex_waiter_is_top_waiter(struct rt_mutex_base *lock,
}
static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex_base *lock)
+ __must_hold(&lock->wait_lock)
{
struct rb_node *leftmost = rb_first_cached(&lock->waiters);
struct rt_mutex_waiter *w = NULL;
@@ -170,9 +180,10 @@ enum rtmutex_chainwalk {
static inline void __rt_mutex_base_init(struct rt_mutex_base *lock)
{
- raw_spin_lock_init(&lock->wait_lock);
- lock->waiters = RB_ROOT_CACHED;
- lock->owner = NULL;
+ scoped_guard (raw_spinlock_init, &lock->wait_lock) {
+ lock->waiters = RB_ROOT_CACHED;
+ lock->owner = NULL;
+ }
}
/* Debug functions */
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index c50ea5d..b1834ab 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -4,6 +4,7 @@
#define MUTEX mutex
#define MUTEX_WAITER mutex_waiter
+#define WAIT_LOCK wait_lock
static inline struct mutex_waiter *
__ww_waiter_first(struct mutex *lock)
@@ -86,9 +87,11 @@ static inline void lockdep_assert_wait_lock_held(struct mutex *lock)
#define MUTEX rt_mutex
#define MUTEX_WAITER rt_mutex_waiter
+#define WAIT_LOCK rtmutex.wait_lock
static inline struct rt_mutex_waiter *
__ww_waiter_first(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
struct rb_node *n = rb_first(&lock->rtmutex.waiters.rb_root);
if (!n)
@@ -116,6 +119,7 @@ __ww_waiter_prev(struct rt_mutex *lock, struct rt_mutex_waiter *w)
static inline struct rt_mutex_waiter *
__ww_waiter_last(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
struct rb_node *n = rb_last(&lock->rtmutex.waiters.rb_root);
if (!n)
@@ -137,21 +141,25 @@ __ww_mutex_owner(struct rt_mutex *lock)
static inline bool
__ww_mutex_has_waiters(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
return rt_mutex_has_waiters(&lock->rtmutex);
}
static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
+ __acquires(&lock->rtmutex.wait_lock)
{
raw_spin_lock_irqsave(&lock->rtmutex.wait_lock, *flags);
}
static inline void unlock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
+ __releases(&lock->rtmutex.wait_lock)
{
raw_spin_unlock_irqrestore(&lock->rtmutex.wait_lock, *flags);
}
static inline void lockdep_assert_wait_lock_held(struct rt_mutex *lock)
+ __must_hold(&lock->rtmutex.wait_lock)
{
lockdep_assert_held(&lock->rtmutex.wait_lock);
}
@@ -304,7 +312,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct ww_acquire_ctx *hold_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ __must_hold(&lock->WAIT_LOCK)
{
struct task_struct *owner = __ww_mutex_owner(lock);
@@ -369,7 +377,7 @@ static bool __ww_mutex_wound(struct MUTEX *lock,
static void
__ww_mutex_check_waiters(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ __must_hold(&lock->WAIT_LOCK)
{
struct MUTEX_WAITER *cur;
@@ -396,6 +404,7 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
{
DEFINE_WAKE_Q(wake_q);
unsigned long flags;
+ bool has_waiters;
ww_mutex_lock_acquired(lock, ctx);
@@ -417,7 +426,8 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!__ww_mutex_has_waiters(&lock->base)))
+ has_waiters = data_race(__ww_mutex_has_waiters(&lock->base));
+ if (likely(!has_waiters))
return;
/*
@@ -463,7 +473,7 @@ __ww_mutex_kill(struct MUTEX *lock, struct ww_acquire_ctx *ww_ctx)
static inline int
__ww_mutex_check_kill(struct MUTEX *lock, struct MUTEX_WAITER *waiter,
struct ww_acquire_ctx *ctx)
- __must_hold(&lock->wait_lock)
+ __must_hold(&lock->WAIT_LOCK)
{
struct ww_mutex *ww = container_of(lock, struct ww_mutex, base);
struct ww_acquire_ctx *hold_ctx = READ_ONCE(ww->ctx);
@@ -514,7 +524,7 @@ __ww_mutex_add_waiter(struct MUTEX_WAITER *waiter,
struct MUTEX *lock,
struct ww_acquire_ctx *ww_ctx,
struct wake_q_head *wake_q)
- __must_hold(&lock->wait_lock)
+ __must_hold(&lock->WAIT_LOCK)
{
struct MUTEX_WAITER *cur, *pos = NULL;
bool is_wait_die;
diff --git a/kernel/locking/ww_rt_mutex.c b/kernel/locking/ww_rt_mutex.c
index c7196de..e07fb3b 100644
--- a/kernel/locking/ww_rt_mutex.c
+++ b/kernel/locking/ww_rt_mutex.c
@@ -90,6 +90,7 @@ ww_mutex_lock_interruptible(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
EXPORT_SYMBOL(ww_mutex_lock_interruptible);
void __sched ww_mutex_unlock(struct ww_mutex *lock)
+ __no_context_analysis
{
struct rt_mutex *rtm = &lock->base;
diff --git a/scripts/context-analysis-suppression.txt b/scripts/context-analysis-suppression.txt
index fd8951d..1c51b61 100644
--- a/scripts/context-analysis-suppression.txt
+++ b/scripts/context-analysis-suppression.txt
@@ -24,6 +24,7 @@ src:*include/linux/mutex*.h=emit
src:*include/linux/rcupdate.h=emit
src:*include/linux/refcount.h=emit
src:*include/linux/rhashtable.h=emit
+src:*include/linux/rtmutex*.h=emit
src:*include/linux/rwlock*.h=emit
src:*include/linux/rwsem.h=emit
src:*include/linux/sched*=emit
^ permalink raw reply related [flat|nested] 30+ messages in thread
* [RFC][PATCH 4/4] futex: Convert to compiler context analysis
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
` (2 preceding siblings ...)
2026-01-21 11:07 ` [RFC][PATCH 3/4] locking/rtmutex: " Peter Zijlstra
@ 2026-01-21 11:07 ` Peter Zijlstra
2026-01-21 13:19 ` Peter Zijlstra
2026-03-18 8:02 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
2026-01-21 13:07 ` [RFC][PATCH 0/4] locking: Add/convert context analysis bits Marco Elver
2026-01-21 19:23 ` Peter Zijlstra
5 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 11:07 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, peterz, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, bvanassche, llvm
Convert the sparse annotations over to the new compiler context
analysis stuff.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260114110828.GE830229@noisy.programming.kicks-ass.net
---
kernel/futex/Makefile | 2 ++
kernel/futex/core.c | 9 ++++++---
kernel/futex/futex.h | 17 ++++++++++++++---
kernel/futex/pi.c | 9 +++++++++
kernel/futex/waitwake.c | 4 ++++
5 files changed, 35 insertions(+), 6 deletions(-)
--- a/kernel/futex/Makefile
+++ b/kernel/futex/Makefile
@@ -1,3 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
+CONTEXT_ANALYSIS := y
+
obj-y += core.o syscalls.o pi.o requeue.o waitwake.o
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -864,7 +864,6 @@ void __futex_unqueue(struct futex_q *q)
/* The key must be already stored in q->key. */
void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
- __acquires(&hb->lock)
{
/*
* Increment the counter before taking the lock so that
@@ -879,10 +878,10 @@ void futex_q_lock(struct futex_q *q, str
q->lock_ptr = &hb->lock;
spin_lock(&hb->lock);
+ __acquire(q->lock_ptr);
}
void futex_q_unlock(struct futex_hash_bucket *hb)
- __releases(&hb->lock)
{
futex_hb_waiters_dec(hb);
spin_unlock(&hb->lock);
@@ -1443,12 +1442,15 @@ static void futex_cleanup(struct task_st
void futex_exit_recursive(struct task_struct *tsk)
{
/* If the state is FUTEX_STATE_EXITING then futex_exit_mutex is held */
- if (tsk->futex_state == FUTEX_STATE_EXITING)
+ if (tsk->futex_state == FUTEX_STATE_EXITING) {
+ __assume_ctx_lock(&tsk->futex_exit_mutex);
mutex_unlock(&tsk->futex_exit_mutex);
+ }
tsk->futex_state = FUTEX_STATE_DEAD;
}
static void futex_cleanup_begin(struct task_struct *tsk)
+ __acquires(&tsk->futex_exit_mutex)
{
/*
* Prevent various race issues against a concurrent incoming waiter
@@ -1475,6 +1477,7 @@ static void futex_cleanup_begin(struct t
}
static void futex_cleanup_end(struct task_struct *tsk, int state)
+ __releases(&tsk->futex_exit_mutex)
{
/*
* Lockless store. The only side effect is that an observer might
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -217,7 +217,7 @@ enum futex_access {
extern int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
enum futex_access rw);
-extern void futex_q_lockptr_lock(struct futex_q *q);
+extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr);
extern struct hrtimer_sleeper *
futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
@@ -311,9 +311,11 @@ extern int futex_unqueue(struct futex_q
static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
struct task_struct *task)
__releases(&hb->lock)
+ __releases(q->lock_ptr)
{
__futex_queue(q, hb, task);
spin_unlock(&hb->lock);
+ __release(q->lock_ptr);
}
extern void futex_unqueue_pi(struct futex_q *q);
@@ -358,9 +360,12 @@ static inline int futex_hb_waiters_pendi
#endif
}
-extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb);
-extern void futex_q_unlock(struct futex_hash_bucket *hb);
+extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
+ __acquires(&hb->lock)
+ __acquires(q->lock_ptr);
+extern void futex_q_unlock(struct futex_hash_bucket *hb)
+ __releases(&hb->lock);
extern int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
union futex_key *key,
@@ -379,6 +384,9 @@ extern int fixup_pi_owner(u32 __user *ua
*/
static inline void
double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+ __acquires(&hb1->lock)
+ __acquires(&hb2->lock)
+ __no_context_analysis
{
if (hb1 > hb2)
swap(hb1, hb2);
@@ -390,6 +398,9 @@ double_lock_hb(struct futex_hash_bucket
static inline void
double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+ __releases(&hb1->lock)
+ __releases(&hb2->lock)
+ __no_context_analysis
{
spin_unlock(&hb1->lock);
if (hb1 != hb2)
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -389,6 +389,7 @@ static void __attach_to_pi_owner(struct
* Initialize the pi_mutex in locked state and make @p
* the owner of it:
*/
+ __assume_ctx_lock(&pi_state->pi_mutex.wait_lock);
rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
/* Store the key for possible exit cleanups: */
@@ -614,6 +615,8 @@ int futex_lock_pi_atomic(u32 __user *uad
static int wake_futex_pi(u32 __user *uaddr, u32 uval,
struct futex_pi_state *pi_state,
struct rt_mutex_waiter *top_waiter)
+ __must_hold(&pi_state->pi_mutex.wait_lock)
+ __releases(&pi_state->pi_mutex.wait_lock)
{
struct task_struct *new_owner;
bool postunlock = false;
@@ -670,6 +673,8 @@ static int wake_futex_pi(u32 __user *uad
static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
struct task_struct *argowner)
+ __must_hold(&q->pi_state->pi_mutex.wait_lock)
+ __must_hold(q->lock_ptr)
{
struct futex_pi_state *pi_state = q->pi_state;
struct task_struct *oldowner, *newowner;
@@ -966,6 +971,7 @@ int futex_lock_pi(u32 __user *uaddr, uns
* - EAGAIN: The user space value changed.
*/
futex_q_unlock(hb);
+ __release(q.lock_ptr);
/*
* Handle the case where the owner is in the middle of
* exiting. Wait for the exit to complete otherwise
@@ -1090,6 +1096,7 @@ int futex_lock_pi(u32 __user *uaddr, uns
if (res)
ret = (res < 0) ? res : 0;
+ __release(&hb->lock);
futex_unqueue_pi(&q);
spin_unlock(q.lock_ptr);
if (q.drop_hb_ref) {
@@ -1101,10 +1108,12 @@ int futex_lock_pi(u32 __user *uaddr, uns
out_unlock_put_key:
futex_q_unlock(hb);
+ __release(q.lock_ptr);
goto out;
uaddr_faulted:
futex_q_unlock(hb);
+ __release(q.lock_ptr);
ret = fault_in_user_writeable(uaddr);
if (ret)
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -462,6 +462,7 @@ int futex_wait_multiple_setup(struct fut
}
futex_q_unlock(hb);
+ __release(q->lock_ptr);
}
__set_current_state(TASK_RUNNING);
@@ -628,6 +629,7 @@ int futex_wait_setup(u32 __user *uaddr,
if (ret) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
ret = get_user(uval, uaddr);
if (ret)
@@ -641,11 +643,13 @@ int futex_wait_setup(u32 __user *uaddr,
if (uval != val) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
return -EWOULDBLOCK;
}
if (key2 && futex_match(&q->key, key2)) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
return -EINVAL;
}
* Re: [RFC][PATCH 4/4] futex: Convert to compiler context analysis
2026-01-21 11:07 ` [RFC][PATCH 4/4] futex: Convert to compiler " Peter Zijlstra
@ 2026-01-21 13:19 ` Peter Zijlstra
2026-03-18 8:02 ` [tip: locking/core] " tip-bot2 for Peter Zijlstra
1 sibling, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 13:19 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, bvanassche, llvm
On Wed, Jan 21, 2026 at 12:07:08PM +0100, Peter Zijlstra wrote:
> +extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
> + __acquires(&hb->lock)
> + __acquires(q->lock_ptr);
FWIW, this 'weirdness' is because the thing cannot tell they are the
same lock and we use both forms. It's a bit tedious but that's what it
is.
* [tip: locking/core] futex: Convert to compiler context analysis
2026-01-21 11:07 ` [RFC][PATCH 4/4] futex: Convert to compiler " Peter Zijlstra
2026-01-21 13:19 ` Peter Zijlstra
@ 2026-03-18 8:02 ` tip-bot2 for Peter Zijlstra
1 sibling, 0 replies; 30+ messages in thread
From: tip-bot2 for Peter Zijlstra @ 2026-03-18 8:02 UTC (permalink / raw)
To: linux-tip-commits; +Cc: Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the locking/core branch of tip:
Commit-ID: 16df04446e34a1e7dba57f657af6ad5f51199763
Gitweb: https://git.kernel.org/tip/16df04446e34a1e7dba57f657af6ad5f51199763
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Wed, 14 Jan 2026 12:08:28 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Mon, 16 Mar 2026 13:16:48 +01:00
futex: Convert to compiler context analysis
Convert the sparse annotations over to the new compiler context
analysis stuff.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://patch.msgid.link/20260121111213.950376128@infradead.org
---
kernel/futex/Makefile | 2 ++
kernel/futex/core.c | 9 ++++++---
kernel/futex/futex.h | 17 ++++++++++++++---
kernel/futex/pi.c | 9 +++++++++
kernel/futex/waitwake.c | 4 ++++
5 files changed, 35 insertions(+), 6 deletions(-)
diff --git a/kernel/futex/Makefile b/kernel/futex/Makefile
index b77188d..dce70f8 100644
--- a/kernel/futex/Makefile
+++ b/kernel/futex/Makefile
@@ -1,3 +1,5 @@
# SPDX-License-Identifier: GPL-2.0
+CONTEXT_ANALYSIS := y
+
obj-y += core.o syscalls.o pi.o requeue.o waitwake.o
diff --git a/kernel/futex/core.c b/kernel/futex/core.c
index cf7e610..4bacf55 100644
--- a/kernel/futex/core.c
+++ b/kernel/futex/core.c
@@ -864,7 +864,6 @@ void __futex_unqueue(struct futex_q *q)
/* The key must be already stored in q->key. */
void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
- __acquires(&hb->lock)
{
/*
* Increment the counter before taking the lock so that
@@ -879,10 +878,10 @@ void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
q->lock_ptr = &hb->lock;
spin_lock(&hb->lock);
+ __acquire(q->lock_ptr);
}
void futex_q_unlock(struct futex_hash_bucket *hb)
- __releases(&hb->lock)
{
futex_hb_waiters_dec(hb);
spin_unlock(&hb->lock);
@@ -1443,12 +1442,15 @@ static void futex_cleanup(struct task_struct *tsk)
void futex_exit_recursive(struct task_struct *tsk)
{
/* If the state is FUTEX_STATE_EXITING then futex_exit_mutex is held */
- if (tsk->futex_state == FUTEX_STATE_EXITING)
+ if (tsk->futex_state == FUTEX_STATE_EXITING) {
+ __assume_ctx_lock(&tsk->futex_exit_mutex);
mutex_unlock(&tsk->futex_exit_mutex);
+ }
tsk->futex_state = FUTEX_STATE_DEAD;
}
static void futex_cleanup_begin(struct task_struct *tsk)
+ __acquires(&tsk->futex_exit_mutex)
{
/*
* Prevent various race issues against a concurrent incoming waiter
@@ -1475,6 +1477,7 @@ static void futex_cleanup_begin(struct task_struct *tsk)
}
static void futex_cleanup_end(struct task_struct *tsk, int state)
+ __releases(&tsk->futex_exit_mutex)
{
/*
* Lockless store. The only side effect is that an observer might
diff --git a/kernel/futex/futex.h b/kernel/futex/futex.h
index 30c2afa..9f6bf6f 100644
--- a/kernel/futex/futex.h
+++ b/kernel/futex/futex.h
@@ -217,7 +217,7 @@ enum futex_access {
extern int get_futex_key(u32 __user *uaddr, unsigned int flags, union futex_key *key,
enum futex_access rw);
-extern void futex_q_lockptr_lock(struct futex_q *q);
+extern void futex_q_lockptr_lock(struct futex_q *q) __acquires(q->lock_ptr);
extern struct hrtimer_sleeper *
futex_setup_timer(ktime_t *time, struct hrtimer_sleeper *timeout,
int flags, u64 range_ns);
@@ -311,9 +311,11 @@ extern int futex_unqueue(struct futex_q *q);
static inline void futex_queue(struct futex_q *q, struct futex_hash_bucket *hb,
struct task_struct *task)
__releases(&hb->lock)
+ __releases(q->lock_ptr)
{
__futex_queue(q, hb, task);
spin_unlock(&hb->lock);
+ __release(q->lock_ptr);
}
extern void futex_unqueue_pi(struct futex_q *q);
@@ -358,9 +360,12 @@ static inline int futex_hb_waiters_pending(struct futex_hash_bucket *hb)
#endif
}
-extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb);
-extern void futex_q_unlock(struct futex_hash_bucket *hb);
+extern void futex_q_lock(struct futex_q *q, struct futex_hash_bucket *hb)
+ __acquires(&hb->lock)
+ __acquires(q->lock_ptr);
+extern void futex_q_unlock(struct futex_hash_bucket *hb)
+ __releases(&hb->lock);
extern int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
union futex_key *key,
@@ -379,6 +384,9 @@ extern int fixup_pi_owner(u32 __user *uaddr, struct futex_q *q, int locked);
*/
static inline void
double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+ __acquires(&hb1->lock)
+ __acquires(&hb2->lock)
+ __no_context_analysis
{
if (hb1 > hb2)
swap(hb1, hb2);
@@ -390,6 +398,9 @@ double_lock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
static inline void
double_unlock_hb(struct futex_hash_bucket *hb1, struct futex_hash_bucket *hb2)
+ __releases(&hb1->lock)
+ __releases(&hb2->lock)
+ __no_context_analysis
{
spin_unlock(&hb1->lock);
if (hb1 != hb2)
diff --git a/kernel/futex/pi.c b/kernel/futex/pi.c
index bc1f7e8..49ab5f4 100644
--- a/kernel/futex/pi.c
+++ b/kernel/futex/pi.c
@@ -389,6 +389,7 @@ static void __attach_to_pi_owner(struct task_struct *p, union futex_key *key,
* Initialize the pi_mutex in locked state and make @p
* the owner of it:
*/
+ __assume_ctx_lock(&pi_state->pi_mutex.wait_lock);
rt_mutex_init_proxy_locked(&pi_state->pi_mutex, p);
/* Store the key for possible exit cleanups: */
@@ -614,6 +615,8 @@ int futex_lock_pi_atomic(u32 __user *uaddr, struct futex_hash_bucket *hb,
static int wake_futex_pi(u32 __user *uaddr, u32 uval,
struct futex_pi_state *pi_state,
struct rt_mutex_waiter *top_waiter)
+ __must_hold(&pi_state->pi_mutex.wait_lock)
+ __releases(&pi_state->pi_mutex.wait_lock)
{
struct task_struct *new_owner;
bool postunlock = false;
@@ -670,6 +673,8 @@ out_unlock:
static int __fixup_pi_state_owner(u32 __user *uaddr, struct futex_q *q,
struct task_struct *argowner)
+ __must_hold(&q->pi_state->pi_mutex.wait_lock)
+ __must_hold(q->lock_ptr)
{
struct futex_pi_state *pi_state = q->pi_state;
struct task_struct *oldowner, *newowner;
@@ -966,6 +971,7 @@ retry_private:
* - EAGAIN: The user space value changed.
*/
futex_q_unlock(hb);
+ __release(q.lock_ptr);
/*
* Handle the case where the owner is in the middle of
* exiting. Wait for the exit to complete otherwise
@@ -1090,6 +1096,7 @@ no_block:
if (res)
ret = (res < 0) ? res : 0;
+ __release(&hb->lock);
futex_unqueue_pi(&q);
spin_unlock(q.lock_ptr);
if (q.drop_hb_ref) {
@@ -1101,10 +1108,12 @@ no_block:
out_unlock_put_key:
futex_q_unlock(hb);
+ __release(q.lock_ptr);
goto out;
uaddr_faulted:
futex_q_unlock(hb);
+ __release(q.lock_ptr);
ret = fault_in_user_writeable(uaddr);
if (ret)
diff --git a/kernel/futex/waitwake.c b/kernel/futex/waitwake.c
index 1c2dd03..ceed9d8 100644
--- a/kernel/futex/waitwake.c
+++ b/kernel/futex/waitwake.c
@@ -462,6 +462,7 @@ retry:
}
futex_q_unlock(hb);
+ __release(q->lock_ptr);
}
__set_current_state(TASK_RUNNING);
@@ -628,6 +629,7 @@ retry_private:
if (ret) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
ret = get_user(uval, uaddr);
if (ret)
@@ -641,11 +643,13 @@ retry_private:
if (uval != val) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
return -EWOULDBLOCK;
}
if (key2 && futex_match(&q->key, key2)) {
futex_q_unlock(hb);
+ __release(q->lock_ptr);
return -EINVAL;
}
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
` (3 preceding siblings ...)
2026-01-21 11:07 ` [RFC][PATCH 4/4] futex: Convert to compiler " Peter Zijlstra
@ 2026-01-21 13:07 ` Marco Elver
2026-01-21 19:23 ` Peter Zijlstra
5 siblings, 0 replies; 30+ messages in thread
From: Marco Elver @ 2026-01-21 13:07 UTC (permalink / raw)
To: Peter Zijlstra
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, bvanassche, llvm
On Wed, Jan 21, 2026 at 12:07PM +0100, Peter Zijlstra wrote:
> Hai
>
> This is on top of tip/locking/core with these patches on:
>
> https://lkml.kernel.org/r/20260119094029.1344361-1-elver@google.com
>
> and converts mutex, rtmutex, ww_mutex and futex to use the new context analysis
> bits.
>
> There is one snafu:
>
> ww_mutex_set_context_fastpath()'s data_race() usage doesn't stop the compiler
> from complaining when building a defconfig+PREEMPT_RT+LOCKDEP build:
>
> ../kernel/locking/ww_mutex.h:439:24: error: calling function '__ww_mutex_has_waiters' requires holding raw_spinlock 'lock->base.rtmutex.wait_lock' exclusively [-Werror,-Wthread-safety-analysis]
> 439 | if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
> | ^
> 1 error generated.
This works:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 73eed6b7f24e..561e2475954d 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -436,7 +436,8 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
+ bool has_waiters = data_race(__ww_mutex_has_waiters(&lock->base));
+ if (likely(!has_waiters))
return;
/*
It appears that the _Pragma annotations are ignored when the expression
appears inside __builtin_expect(...). That's a bit inconvenient.
Another option is this, given it's exclusively used without holding this
lock:
diff --git a/kernel/locking/ww_mutex.h b/kernel/locking/ww_mutex.h
index 73eed6b7f24e..45a9c394fe91 100644
--- a/kernel/locking/ww_mutex.h
+++ b/kernel/locking/ww_mutex.h
@@ -71,7 +71,7 @@ __ww_mutex_owner(struct mutex *lock)
}
static inline bool
-__ww_mutex_has_waiters(struct mutex *lock)
+__ww_mutex_has_waiters_lockless(struct mutex *lock)
{
return atomic_long_read(&lock->owner) & MUTEX_FLAG_WAITERS;
}
@@ -151,10 +151,9 @@ __ww_mutex_owner(struct rt_mutex *lock)
}
static inline bool
-__ww_mutex_has_waiters(struct rt_mutex *lock)
- __must_hold(&lock->rtmutex.wait_lock)
+__ww_mutex_has_waiters_lockless(struct rt_mutex *lock)
{
- return rt_mutex_has_waiters(&lock->rtmutex);
+ return data_race(rt_mutex_has_waiters(&lock->rtmutex));
}
static inline void lock_wait_lock(struct rt_mutex *lock, unsigned long *flags)
@@ -436,7 +435,7 @@ ww_mutex_set_context_fastpath(struct ww_mutex *lock, struct ww_acquire_ctx *ctx)
* __ww_mutex_add_waiter() and makes sure we either observe ww->ctx
* and/or !empty list.
*/
- if (likely(!data_race(__ww_mutex_has_waiters(&lock->base))))
+ if (likely(!__ww_mutex_has_waiters_lockless(&lock->base)))
return;
/*
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-21 11:07 [RFC][PATCH 0/4] locking: Add/convert context analysis bits Peter Zijlstra
` (4 preceding siblings ...)
2026-01-21 13:07 ` [RFC][PATCH 0/4] locking: Add/convert context analysis bits Marco Elver
@ 2026-01-21 19:23 ` Peter Zijlstra
2026-01-21 20:37 ` Bart Van Assche
2026-01-23 14:16 ` Sebastian Andrzej Siewior
5 siblings, 2 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-21 19:23 UTC (permalink / raw)
To: elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, bvanassche, llvm
On Wed, Jan 21, 2026 at 12:07:04PM +0100, Peter Zijlstra wrote:
> Hai
>
> This is on top of tip/locking/core with these patches on:
>
> https://lkml.kernel.org/r/20260119094029.1344361-1-elver@google.com
>
> and converts mutex, rtmutex, ww_mutex and futex to use the new context analysis
> bits.
>
Pushed out an updated/fixed series to:
git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-21 19:23 ` Peter Zijlstra
@ 2026-01-21 20:37 ` Bart Van Assche
2026-01-22 9:04 ` Peter Zijlstra
2026-01-23 14:16 ` Sebastian Andrzej Siewior
1 sibling, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2026-01-21 20:37 UTC (permalink / raw)
To: Peter Zijlstra, elver
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/21/26 11:23 AM, Peter Zijlstra wrote:
> Pushed out an updated/fixed series to:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
With CONFIG_WARN_CONTEXT_ANALYSIS=y, CONFIG_WARN_CONTEXT_ANALYSIS_ALL=n
and "+src:*include/*=emit" in scripts/context-analysis-suppression.txt
I see the following error messages for that tree:
In file included from kernel/locking/mutex.c:22:
In file included from ./include/linux/ww_mutex.h:21:
./include/linux/rtmutex.h:44:25: error: reading variable 'owner'
requires holding raw_spinlock '&rt_mutex_base::wait_lock'
[-Werror,-Wthread-safety-analysis]
44 | return READ_ONCE(lock->owner) != NULL;
| ^
./include/linux/rtmutex.h:52:56: error: reading variable 'owner'
requires holding raw_spinlock '&rt_mutex_base::wait_lock'
[-Werror,-Wthread-safety-analysis]
52 | unsigned long owner = (unsigned long)
READ_ONCE(lock->owner);
| ^
2 errors generated.
Should this series perhaps include changes for the file
scripts/context-analysis-suppression.txt?
Thanks,
Bart.
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-21 20:37 ` Bart Van Assche
@ 2026-01-22 9:04 ` Peter Zijlstra
2026-01-22 16:28 ` Bart Van Assche
0 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-22 9:04 UTC (permalink / raw)
To: Bart Van Assche
Cc: elver, linux-kernel, bigeasy, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, llvm
On Wed, Jan 21, 2026 at 12:37:21PM -0800, Bart Van Assche wrote:
> On 1/21/26 11:23 AM, Peter Zijlstra wrote:
> > Pushed out an updated/fixed series to:
> >
> > git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
>
> With CONFIG_WARN_CONTEXT_ANALYSIS=y, CONFIG_WARN_CONTEXT_ANALYSIS_ALL=n
> and "+src:*include/*=emit" in scripts/context-analysis-suppression.txt
> I see the following error messages for that tree:
>
> In file included from kernel/locking/mutex.c:22:
> In file included from ./include/linux/ww_mutex.h:21:
> ./include/linux/rtmutex.h:44:25: error: reading variable 'owner' requires
> holding raw_spinlock '&rt_mutex_base::wait_lock'
> [-Werror,-Wthread-safety-analysis]
> 44 | return READ_ONCE(lock->owner) != NULL;
> | ^
> ./include/linux/rtmutex.h:52:56: error: reading variable 'owner' requires
> holding raw_spinlock '&rt_mutex_base::wait_lock'
> [-Werror,-Wthread-safety-analysis]
> 52 | unsigned long owner = (unsigned long)
> READ_ONCE(lock->owner);
> | ^
> 2 errors generated.
>
> Should this series perhaps include changes for the file
> scripts/context-analysis-suppression.txt?
I'm having trouble reproducing :-(
You're speaking of something like the below, on a defconfig build,
right?
---
diff --git a/scripts/context-analysis-suppression.txt b/scripts/context-analysis-suppression.txt
index fd8951d06706..6c31eadd0244 100644
--- a/scripts/context-analysis-suppression.txt
+++ b/scripts/context-analysis-suppression.txt
@@ -14,6 +14,7 @@ src:*include/linux/*
src:*include/net/*
# Opt-in headers:
+src:*include/*=emit
src:*include/linux/bit_spinlock.h=emit
src:*include/linux/cleanup.h=emit
src:*include/linux/kref.h=emit
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-22 9:04 ` Peter Zijlstra
@ 2026-01-22 16:28 ` Bart Van Assche
2026-01-22 18:58 ` Nathan Chancellor
2026-01-23 11:15 ` Peter Zijlstra
0 siblings, 2 replies; 30+ messages in thread
From: Bart Van Assche @ 2026-01-22 16:28 UTC (permalink / raw)
To: Peter Zijlstra
Cc: elver, linux-kernel, bigeasy, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, llvm
On 1/22/26 1:04 AM, Peter Zijlstra wrote:
> I'm having trouble reproducing :-(
Hi Peter,
With this patch applied:
diff --git a/scripts/context-analysis-suppression.txt
b/scripts/context-analysis-suppression.txt
index fd8951d06706..1c51b6153f08 100644
--- a/scripts/context-analysis-suppression.txt
+++ b/scripts/context-analysis-suppression.txt
@@ -24,6 +24,7 @@ src:*include/linux/mutex*.h=emit
src:*include/linux/rcupdate.h=emit
src:*include/linux/refcount.h=emit
src:*include/linux/rhashtable.h=emit
+src:*include/linux/rtmutex*.h=emit
src:*include/linux/rwlock*.h=emit
src:*include/linux/rwsem.h=emit
src:*include/linux/sched*=emit
and if I run the following commands:
build-kernel-with-clang allmodconfig
build-kernel-with-clang kernel/locking/mutex.o
then the following output appears:
CALL scripts/checksyscalls.sh
DESCEND objtool
INSTALL libsubcmd_headers
CC kernel/locking/mutex.o
In file included from kernel/locking/mutex.c:22:
In file included from ./include/linux/ww_mutex.h:21:
./include/linux/rtmutex.h:44:25: error: reading variable 'owner' requires
holding raw_spinlock '&rt_mutex_base::wait_lock'
[-Werror,-Wthread-safety-analysis]
44 | return READ_ONCE(lock->owner) != NULL;
| ^
./include/linux/rtmutex.h:52:56: error: reading variable 'owner' requires
holding raw_spinlock '&rt_mutex_base::wait_lock'
[-Werror,-Wthread-safety-analysis]
52 | unsigned long owner = (unsigned long)
READ_ONCE(lock->owner);
| ^
2 errors generated.
The build-kernel-with-clang script is as follows (this may not be the
recommended way to build the kernel with clang):
#!/bin/bash
export CC=clang
export LD=ld.lld # Use LLVM's linker (optional but recommended)
export AR=llvm-ar
export NM=llvm-nm
export OBJCOPY=llvm-objcopy
export OBJDUMP=llvm-objdump
export STRIP=llvm-strip
export READELF=llvm-readelf
make LLVM=1 CC=clang "$@"
Please let me know if you need more information.
Bart.
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-22 16:28 ` Bart Van Assche
@ 2026-01-22 18:58 ` Nathan Chancellor
2026-01-23 11:06 ` Peter Zijlstra
2026-01-23 11:15 ` Peter Zijlstra
1 sibling, 1 reply; 30+ messages in thread
From: Nathan Chancellor @ 2026-01-22 18:58 UTC (permalink / raw)
To: Bart Van Assche
Cc: Peter Zijlstra, elver, linux-kernel, bigeasy, mingo, tglx, will,
boqun.feng, longman, hch, rostedt, llvm
On Thu, Jan 22, 2026 at 08:28:44AM -0800, Bart Van Assche wrote:
> The build-kernel-with-clang script is as follows (this may not be the
> recommended way to build the kernel with clang):
>
> #!/bin/bash
> export CC=clang
> export LD=ld.lld # Use LLVM's linker (optional but recommended)
> export AR=llvm-ar
> export NM=llvm-nm
> export OBJCOPY=llvm-objcopy
> export OBJDUMP=llvm-objdump
> export STRIP=llvm-strip
> export READELF=llvm-readelf
> make LLVM=1 CC=clang "$@"
FWIW, this could ultimately simplify to
make LLVM=1 "$@"
as LLVM=1 is the shorthand for
make CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm STRIP=llvm-strip \
OBJCOPY=llvm-objcopy OBJDUMP=llvm-objdump READELF=llvm-readelf \
HOSTCC=clang HOSTCXX=clang++ HOSTAR=llvm-ar HOSTLD=ld.lld
as noted in Documentation/kbuild/llvm.rst /
https://docs.kernel.org/kbuild/llvm.html.
Cheers,
Nathan
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-22 18:58 ` Nathan Chancellor
@ 2026-01-23 11:06 ` Peter Zijlstra
0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-23 11:06 UTC (permalink / raw)
To: Nathan Chancellor
Cc: Bart Van Assche, elver, linux-kernel, bigeasy, mingo, tglx, will,
boqun.feng, longman, hch, rostedt, llvm
On Thu, Jan 22, 2026 at 11:58:12AM -0700, Nathan Chancellor wrote:
> On Thu, Jan 22, 2026 at 08:28:44AM -0800, Bart Van Assche wrote:
> > The build-kernel-with-clang script is as follows (this may not be the
> > recommended way to build the kernel with clang):
> >
> > #!/bin/bash
> > export CC=clang
> > export LD=ld.lld # Use LLVM's linker (optional but recommended)
> > export AR=llvm-ar
> > export NM=llvm-nm
> > export OBJCOPY=llvm-objcopy
> > export OBJDUMP=llvm-objdump
> > export STRIP=llvm-strip
> > export READELF=llvm-readelf
> > make LLVM=1 CC=clang "$@"
>
> FWIW, this could ultimately simplify to
>
> make LLVM=1 "$@"
>
> as LLVM=1 is the shorthand for
>
> make CC=clang LD=ld.lld AR=llvm-ar NM=llvm-nm STRIP=llvm-strip \
> OBJCOPY=llvm-objcopy OBJDUMP=llvm-objdump READELF=llvm-readelf \
> HOSTCC=clang HOSTCXX=clang++ HOSTAR=llvm-ar HOSTLD=ld.lld
>
Right, for all of us on Debian, you can use:
make LLVM=-22 ...
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-22 16:28 ` Bart Van Assche
2026-01-22 18:58 ` Nathan Chancellor
@ 2026-01-23 11:15 ` Peter Zijlstra
2026-01-23 18:58 ` Bart Van Assche
1 sibling, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2026-01-23 11:15 UTC (permalink / raw)
To: Bart Van Assche
Cc: elver, linux-kernel, bigeasy, mingo, tglx, will, boqun.feng,
longman, hch, rostedt, llvm
On Thu, Jan 22, 2026 at 08:28:44AM -0800, Bart Van Assche wrote:
> then the following output appears:
>
> CALL scripts/checksyscalls.sh
> DESCEND objtool
> INSTALL libsubcmd_headers
> CC kernel/locking/mutex.o
> In file included from kernel/locking/mutex.c:22:
> In file included from ./include/linux/ww_mutex.h:21:
> ./include/linux/rtmutex.h:44:25: error: reading variable 'owner' requires
> holding raw_spinlock '&rt_mutex_base::wait_lock'
> [-Werror,-Wthread-safety-analysis]
> 44 | return READ_ONCE(lock->owner) != NULL;
> | ^
> ./include/linux/rtmutex.h:52:56: error: reading variable 'owner' requires
> holding raw_spinlock '&rt_mutex_base::wait_lock'
> [-Werror,-Wthread-safety-analysis]
> 52 | unsigned long owner = (unsigned long)
> READ_ONCE(lock->owner);
> | ^
> 2 errors generated.
>
Indeed; I shall fold the below into the rtmutex patch. Thanks!
---
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -41,7 +41,7 @@ struct rt_mutex_base {
*/
static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
{
- return READ_ONCE(lock->owner) != NULL;
+ return data_race(READ_ONCE(lock->owner) != NULL);
}
#ifdef CONFIG_RT_MUTEXES
@@ -49,7 +49,7 @@ static inline bool rt_mutex_base_is_lock
static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
{
- unsigned long owner = (unsigned long) READ_ONCE(lock->owner);
+ unsigned long owner = (unsigned long) data_race(READ_ONCE(lock->owner));
return (struct task_struct *) (owner & ~RT_MUTEX_HAS_WAITERS);
}
--- a/scripts/context-analysis-suppression.txt
+++ b/scripts/context-analysis-suppression.txt
@@ -24,6 +24,7 @@ src:*include/linux/mutex*.h=emit
src:*include/linux/rcupdate.h=emit
src:*include/linux/refcount.h=emit
src:*include/linux/rhashtable.h=emit
+src:*include/linux/rtmutex*.h=emit
src:*include/linux/rwlock*.h=emit
src:*include/linux/rwsem.h=emit
src:*include/linux/sched*=emit
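[A minimal userspace analogue of the pattern being annotated above. All
names here are hypothetical, and the kernel's READ_ONCE() is stood in
for by a relaxed C11 atomic load; the point is that the "is locked?"
fastpath deliberately snapshots the owner field without holding the
lock that normally guards it, which is exactly what the context
analysis flags:]

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for struct rt_mutex_base: owner is normally
 * guarded by wait_lock, but the is_locked() fastpath reads it without
 * the lock, tolerating a stale (but untorn) snapshot. */
struct fake_rt_mutex {
	pthread_mutex_t wait_lock;	/* guards owner on the slow path */
	_Atomic(void *) owner;		/* NULL when unlocked */
};

/* Lockless check: the relaxed load plays the role of the
 * data_race(READ_ONCE(lock->owner)) in the diff above. */
static bool fake_rt_mutex_is_locked(struct fake_rt_mutex *lock)
{
	return atomic_load_explicit(&lock->owner, memory_order_relaxed) != NULL;
}

/* Slow path: owner is only written with wait_lock held. */
static void fake_rt_mutex_lock(struct fake_rt_mutex *lock, void *task)
{
	pthread_mutex_lock(&lock->wait_lock);
	atomic_store_explicit(&lock->owner, task, memory_order_relaxed);
	pthread_mutex_unlock(&lock->wait_lock);
}
```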
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-23 11:15 ` Peter Zijlstra
@ 2026-01-23 18:58 ` Bart Van Assche
2026-01-23 20:15 ` Marco Elver
0 siblings, 1 reply; 30+ messages in thread
From: Bart Van Assche @ 2026-01-23 18:58 UTC (permalink / raw)
To: elver, Peter Zijlstra
Cc: linux-kernel, bigeasy, mingo, tglx, will, boqun.feng, longman,
hch, rostedt, llvm
On 1/23/26 3:15 AM, Peter Zijlstra wrote:
> --- a/include/linux/rtmutex.h
> +++ b/include/linux/rtmutex.h
> @@ -41,7 +41,7 @@ struct rt_mutex_base {
> */
> static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
> {
> - return READ_ONCE(lock->owner) != NULL;
> + return data_race(READ_ONCE(lock->owner) != NULL);
> }
>
> #ifdef CONFIG_RT_MUTEXES
> @@ -49,7 +49,7 @@ static inline bool rt_mutex_base_is_lock
>
> static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
> {
> - unsigned long owner = (unsigned long) READ_ONCE(lock->owner);
> + unsigned long owner = (unsigned long) data_race(READ_ONCE(lock->owner));
>
> return (struct task_struct *) (owner & ~RT_MUTEX_HAS_WAITERS);
> }
Marco, shouldn't lock context analysis be disabled for all READ_ONCE()
invocations? I expect that all code that uses READ_ONCE() to read a
structure member annotated with __guarded_by() will have to be annotated
with data_race(). Shouldn't the data_race() invocation be moved into the
READ_ONCE() definition?
Thanks,
Bart.
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-23 18:58 ` Bart Van Assche
@ 2026-01-23 20:15 ` Marco Elver
0 siblings, 0 replies; 30+ messages in thread
From: Marco Elver @ 2026-01-23 20:15 UTC (permalink / raw)
To: Bart Van Assche
Cc: Peter Zijlstra, linux-kernel, bigeasy, mingo, tglx, will,
boqun.feng, longman, hch, rostedt, llvm
On Fri, 23 Jan 2026 at 19:58, Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 1/23/26 3:15 AM, Peter Zijlstra wrote:
> > --- a/include/linux/rtmutex.h
> > +++ b/include/linux/rtmutex.h
> > @@ -41,7 +41,7 @@ struct rt_mutex_base {
> > */
> > static inline bool rt_mutex_base_is_locked(struct rt_mutex_base *lock)
> > {
> > - return READ_ONCE(lock->owner) != NULL;
> > + return data_race(READ_ONCE(lock->owner) != NULL);
> > }
> >
> > #ifdef CONFIG_RT_MUTEXES
> > @@ -49,7 +49,7 @@ static inline bool rt_mutex_base_is_lock
> >
> > static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
> > {
> > - unsigned long owner = (unsigned long) READ_ONCE(lock->owner);
> > + unsigned long owner = (unsigned long) data_race(READ_ONCE(lock->owner));
> >
> > return (struct task_struct *) (owner & ~RT_MUTEX_HAS_WAITERS);
> > }
> Marco, shouldn't lock context analysis be disabled for all READ_ONCE()
> invocations? I expect that all code that uses READ_ONCE() to read a
> structure member annotated with __guarded_by() will have to be annotated
> with data_race(). Shouldn't the data_race() invocation be moved into the
> READ_ONCE() definition?
Unclear; that a guarded variable is accessed with READ_ONCE() does
not necessarily imply that accessing it without said lock is safe.
Second, a READ_ONCE() paired with another marked primitive never
results in a data race, and using data_race() actually prevents KCSAN
from detecting real data races if any were introduced, but we're not
expecting any here.
In the above case, if there aren't actually any data races, the
data_race() addition does look odd.
Perhaps context_unsafe(READ_ONCE(..)) would be more appropriate if
there's no actual data race. That will signal context analysis this is
safe, and if there are any real data races introduced here (e.g. by a
concurrent plain write or such), we'd learn about it if KCSAN detects
it.
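[A hedged userspace sketch of the distinction being drawn here. The
macro names below are invented; they map the kernel's __guarded_by()
and context_unsafe() onto Clang's thread safety attributes (with GCC
they compile away), so the difference between "tell the analysis to
trust me" and "tell KCSAN this is not a data race" is visible:]

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical userspace mapping of the kernel annotations onto
 * Clang's thread safety attributes; inert under GCC. */
#ifdef __clang__
#define GUARDED_BY(lock)	__attribute__((guarded_by(lock)))
#define CONTEXT_UNSAFE		__attribute__((no_thread_safety_analysis))
#else
#define GUARDED_BY(lock)
#define CONTEXT_UNSAFE
#endif

struct obj {
	pthread_mutex_t lock;
	void *owner GUARDED_BY(lock);	/* analysis: hold lock to touch */
};

/* Checked accessor: with -Wthread-safety, Clang verifies the lock is
 * held around the access. */
static void *obj_owner_locked(struct obj *o)
{
	void *owner;

	pthread_mutex_lock(&o->lock);
	owner = o->owner;
	pthread_mutex_unlock(&o->lock);
	return owner;
}

/* Deliberately unchecked accessor, the moral equivalent of wrapping
 * the read in context_unsafe(): the context analysis is silenced for
 * this function only, while a data-race checker such as KCSAN would
 * still observe the access and could report a real race. */
static CONTEXT_UNSAFE void *obj_owner_peek(struct obj *o)
{
	return o->owner;
}
```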
* Re: [RFC][PATCH 0/4] locking: Add/convert context analysis bits
2026-01-21 19:23 ` Peter Zijlstra
2026-01-21 20:37 ` Bart Van Assche
@ 2026-01-23 14:16 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 30+ messages in thread
From: Sebastian Andrzej Siewior @ 2026-01-23 14:16 UTC (permalink / raw)
To: Peter Zijlstra
Cc: elver, linux-kernel, mingo, tglx, will, boqun.feng, longman, hch,
rostedt, bvanassche, llvm
On 2026-01-21 20:23:49 [+0100], Peter Zijlstra wrote:
> On Wed, Jan 21, 2026 at 12:07:04PM +0100, Peter Zijlstra wrote:
> > Hai
> >
> > This is on top of tip/locking/core with these patches on:
> >
> > https://lkml.kernel.org/r/20260119094029.1344361-1-elver@google.com
> >
> > and converts mutex, rtmutex, ww_mutex and futex to use the new context analysis
> > bits.
> >
>
> Pushed out an updated/fixed series to:
>
> git://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/core
I went through what you have there now and it looks good and I did not
find anything while testing. Full ACK.
Sebastian