From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bart Van Assche <bvanassche@acm.org>
To: Peter Zijlstra
Cc: Marco Elver, Nathan Chancellor, linux-kernel@vger.kernel.org,
 Bart Van Assche <bvanassche@acm.org>, Ingo Molnar, Will Deacon,
 Boqun Feng, Sebastian Andrzej Siewior, Clark Williams, Steven Rostedt,
 Joel Granados, Alexei Starovoitov, Vlastimil Babka
Subject: [PATCH v3] locking/rtmutex: Annotate API and implementation
Date: Fri, 8 May 2026 10:45:16 -0700
Message-ID: <20260508174520.1416285-1-bvanassche@acm.org>
X-Mailer: git-send-email 2.54.0.563.g4f69b47b94-goog
X-Mailing-List: linux-kernel@vger.kernel.org

Enable context analysis for struct rt_mutex and annotate all functions that
accept a struct rt_mutex pointer.
In the __rt_mutex_lock_common() callers, instead of adding the
__no_context_analysis annotation, emit a runtime warning if the
__rt_mutex_lock_common() return value is not zero and add an __acquire()
statement.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
Changes compared to v2:
- Fixed the CONFIG_DEBUG_LOCK_ALLOC=n build.
- Removed "CONTEXT_ANALYSIS_rtmutex.o := y" from the Makefile because it is
  not necessary.
- Converted the "__no_context_analysis" annotations on
  __rt_mutex_lock_common() callers into a WARN_ON_ONCE(ret != 0) +
  __acquire().
- Removed __no_context_analysis from __rt_mutex_unlock().

Changes compared to v1:
- Fixed the CONFIG_PREEMPT_RT=y build.

 include/linux/rtmutex.h      | 22 +++++++++++++++-------
 kernel/locking/rtmutex.c     |  4 ++++
 kernel/locking/rtmutex_api.c | 31 ++++++++++++++++++++++++++++---
 3 files changed, 47 insertions(+), 10 deletions(-)

diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 78e7e588817c..9e1f012f89db 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -56,6 +56,8 @@ static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
 #endif
 extern void rt_mutex_base_init(struct rt_mutex_base *rtb);
 
+context_lock_struct(rt_mutex);
+
 /**
  * The rt_mutex structure
  *
@@ -108,8 +110,10 @@ do { \
 extern void __rt_mutex_init(struct rt_mutex *lock, const char *name, struct lock_class_key *key);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern void rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass);
-extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock);
+extern void rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass)
+	__acquires(lock);
+extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock)
+	__acquires(lock);
 #define rt_mutex_lock(lock) rt_mutex_lock_nested(lock, 0)
 #define rt_mutex_lock_nest_lock(lock, nest_lock)	\
 do {							\
@@ -118,15 +122,19 @@ extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *
 } while (0)
 
 #else
-extern void rt_mutex_lock(struct rt_mutex *lock);
+extern void rt_mutex_lock(struct rt_mutex *lock) __acquires(lock);
 #define rt_mutex_lock_nested(lock, subclass) rt_mutex_lock(lock)
 #define rt_mutex_lock_nest_lock(lock, nest_lock) rt_mutex_lock(lock)
 #endif
 
-extern int rt_mutex_lock_interruptible(struct rt_mutex *lock);
-extern int rt_mutex_lock_killable(struct rt_mutex *lock);
-extern int rt_mutex_trylock(struct rt_mutex *lock);
+extern int rt_mutex_lock_interruptible(struct rt_mutex *lock)
+	__cond_acquires(0, lock);
+extern int rt_mutex_lock_killable(struct rt_mutex *lock)
+	__cond_acquires(0, lock);
+extern int rt_mutex_trylock(struct rt_mutex *lock)
+	__cond_acquires(true, lock);
 
-extern void rt_mutex_unlock(struct rt_mutex *lock);
+extern void rt_mutex_unlock(struct rt_mutex *lock)
+	__releases(lock);
 
 #endif
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4f386ea6c792..9147d6a31b78 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -484,6 +484,7 @@ static __always_inline bool __waiter_less(struct rb_node *a, const struct rb_nod
 
 static __always_inline void
 rt_mutex_enqueue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -492,6 +493,7 @@ rt_mutex_enqueue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
 
 static __always_inline void
 rt_mutex_dequeue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1092,6 +1094,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
 static int __sched
 try_to_take_rt_mutex(struct rt_mutex_base *lock, struct task_struct *task,
 		     struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1319,6 +1322,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
  */
 static void __sched mark_wakeup_next_waiter(struct rt_wake_q_head *wqh,
 					    struct rt_mutex_base *lock)
+	__must_hold(&lock->wait_lock)
 {
 	struct rt_mutex_waiter *waiter;
 
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 124219aea46e..7c40b91422a0 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -41,6 +41,7 @@ static __always_inline int __rt_mutex_lock_common(struct rt_mutex *lock,
 						  unsigned int state,
 						  struct lockdep_map *nest_lock,
 						  unsigned int subclass)
+	__cond_acquires(0, lock)
 {
 	int ret;
 
@@ -67,13 +68,27 @@ EXPORT_SYMBOL(rt_mutex_base_init);
  */
 void __sched rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, subclass);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, subclass) == 0)
+		return;
+	/*
+	 * The code below is never reached because __rt_mutex_lock_common() only
+	 * returns an error code if interrupted by a signal or upon a timeout.
+	 */
+	WARN_ON_ONCE(true);
+	__acquire(lock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_lock_nested);
 
 void __sched _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, nest_lock, 0);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, nest_lock, 0) == 0)
+		return;
+	/*
+	 * The code below is never reached because __rt_mutex_lock_common() only
+	 * returns an error code if interrupted by a signal or upon a timeout.
+	 */
+	WARN_ON_ONCE(true);
+	__acquire(lock);
 }
 EXPORT_SYMBOL_GPL(_rt_mutex_lock_nest_lock);
 
@@ -86,7 +101,14 @@ EXPORT_SYMBOL_GPL(_rt_mutex_lock_nest_lock);
  */
 void __sched rt_mutex_lock(struct rt_mutex *lock)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, 0);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, 0) == 0)
+		return;
+	/*
+	 * The code below is never reached because __rt_mutex_lock_common() only
+	 * returns an error code if interrupted by a signal or upon a timeout.
+	 */
+	WARN_ON_ONCE(true);
+	__acquire(lock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_lock);
 #endif
@@ -157,6 +179,7 @@ void __sched rt_mutex_unlock(struct rt_mutex *lock)
 {
 	mutex_release(&lock->dep_map, _RET_IP_);
 	__rt_mutex_unlock(&lock->rtmutex);
+	__release(lock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_unlock);
 
@@ -182,6 +205,7 @@ int __sched __rt_mutex_futex_trylock(struct rt_mutex_base *lock)
  */
 bool __sched __rt_mutex_futex_unlock(struct rt_mutex_base *lock,
 				     struct rt_wake_q_head *wqh)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -312,6 +336,7 @@ int __sched __rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
 					struct rt_mutex_waiter *waiter,
 					struct task_struct *task,
 					struct wake_q_head *wake_q)
+	__must_hold(&lock->wait_lock)
 {
 	int ret;
 