From: Bart Van Assche
Date: Wed, 6 May 2026 11:53:05 +0200
Message-ID: <6b472fc2-2e52-40f2-9c37-81bfd70b9d96@acm.org>
Subject: Re: [PATCH v2] locking/rtmutex: Annotate API and implementation
To: Sebastian Andrzej Siewior
Cc: Peter Zijlstra, Marco Elver, linux-kernel@vger.kernel.org, Ingo Molnar,
 Will Deacon, Boqun Feng, Clark Williams, Steven Rostedt, Joel Granados,
 Alexei Starovoitov, Vlastimil Babka
X-Mailing-List: linux-kernel@vger.kernel.org
In-Reply-To: <20260506073541.d8Ywsyl6@linutronix.de>
References: <20260505022649.870788-1-bvanassche@acm.org>
 <20260505161256.0NhG6_Hm@linutronix.de>
 <41878012-e4db-4199-a3d5-ed2dc5badc0b@acm.org>
 <20260506073541.d8Ywsyl6@linutronix.de>

On 5/6/26 9:35 AM, Sebastian Andrzej Siewior wrote:
> Hmm. This mostly reassembles __mutex_lock() from mutex.c which does the
> same thing. Couldn't we get away doing the same thing meaning a
> __cond_acquires() on those with a return value and a __acquire() in the
> void case? I think it would make sense to keep those two close in terms
> of annotations.

Please take a look at the rt_mutex_lock() changes in the diff below.

Thanks,

Bart.


diff --git a/include/linux/rtmutex.h b/include/linux/rtmutex.h
index 78e7e588817c..9e1f012f89db 100644
--- a/include/linux/rtmutex.h
+++ b/include/linux/rtmutex.h
@@ -56,6 +56,8 @@ static inline struct task_struct *rt_mutex_owner(struct rt_mutex_base *lock)
 #endif
 
 extern void rt_mutex_base_init(struct rt_mutex_base *rtb);
 
+context_lock_struct(rt_mutex);
+
 /**
  * The rt_mutex structure
  *
@@ -108,8 +110,10 @@ do { \
 extern void __rt_mutex_init(struct rt_mutex *lock, const char *name, struct lock_class_key *key);
 
 #ifdef CONFIG_DEBUG_LOCK_ALLOC
-extern void rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass);
-extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock);
+extern void rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass)
+	__acquires(lock);
+extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock)
+	__acquires(lock);
 #define rt_mutex_lock(lock) rt_mutex_lock_nested(lock, 0)
 #define rt_mutex_lock_nest_lock(lock, nest_lock)	\
 do {	\
@@ -118,15 +122,19 @@ extern void _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *
 } while (0)
 
 #else
-extern void rt_mutex_lock(struct rt_mutex *lock);
+extern void rt_mutex_lock(struct rt_mutex *lock) __acquires(lock);
 #define rt_mutex_lock_nested(lock, subclass) rt_mutex_lock(lock)
 #define rt_mutex_lock_nest_lock(lock, nest_lock) rt_mutex_lock(lock)
 #endif
 
-extern int rt_mutex_lock_interruptible(struct rt_mutex *lock);
-extern int rt_mutex_lock_killable(struct rt_mutex *lock);
-extern int rt_mutex_trylock(struct rt_mutex *lock);
+extern int rt_mutex_lock_interruptible(struct rt_mutex *lock)
+	__cond_acquires(0, lock);
+extern int rt_mutex_lock_killable(struct rt_mutex *lock)
+	__cond_acquires(0, lock);
+extern int rt_mutex_trylock(struct rt_mutex *lock)
+	__cond_acquires(true, lock);
 
-extern void rt_mutex_unlock(struct rt_mutex *lock);
+extern void rt_mutex_unlock(struct rt_mutex *lock)
+	__releases(lock);
 
 #endif
diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 4f386ea6c792..69759fde7d10 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -484,6 +484,7 @@ static __always_inline bool __waiter_less(struct rb_node *a, const struct rb_nod
 
 static __always_inline void
 rt_mutex_enqueue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -492,6 +493,7 @@ rt_mutex_enqueue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
 
 static __always_inline void
 rt_mutex_dequeue(struct rt_mutex_base *lock, struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1092,6 +1094,7 @@ static int __sched rt_mutex_adjust_prio_chain(struct task_struct *task,
 static int __sched
 try_to_take_rt_mutex(struct rt_mutex_base *lock, struct task_struct *task,
 		     struct rt_mutex_waiter *waiter)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -1319,6 +1322,7 @@ static int __sched task_blocks_on_rt_mutex(struct rt_mutex_base *lock,
  */
 static void __sched mark_wakeup_next_waiter(struct rt_wake_q_head *wqh,
 					    struct rt_mutex_base *lock)
+	__must_hold(&lock->wait_lock)
 {
 	struct rt_mutex_waiter *waiter;
 
@@ -1479,6 +1483,7 @@ static void __sched rt_mutex_slowunlock(struct rt_mutex_base *lock)
 }
 
 static __always_inline void __rt_mutex_unlock(struct rt_mutex_base *lock)
+	__no_context_analysis
 {
 	if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
 		return;
diff --git a/kernel/locking/rtmutex_api.c b/kernel/locking/rtmutex_api.c
index 124219aea46e..f9345fa22286 100644
--- a/kernel/locking/rtmutex_api.c
+++ b/kernel/locking/rtmutex_api.c
@@ -41,6 +41,7 @@ static __always_inline int __rt_mutex_lock_common(struct rt_mutex *lock,
 						  unsigned int state,
 						  struct lockdep_map *nest_lock,
 						  unsigned int subclass)
+	__cond_acquires(0, lock)
 {
 	int ret;
 
@@ -67,13 +68,19 @@ EXPORT_SYMBOL(rt_mutex_base_init);
  */
 void __sched rt_mutex_lock_nested(struct rt_mutex *lock, unsigned int subclass)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, subclass);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, subclass) == 0)
+		return;
+	WARN_ON_ONCE(true);
+	__acquire(lock); /* to keep the compiler happy */
 }
 EXPORT_SYMBOL_GPL(rt_mutex_lock_nested);
 
 void __sched _rt_mutex_lock_nest_lock(struct rt_mutex *lock, struct lockdep_map *nest_lock)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, nest_lock, 0);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, nest_lock, 0) == 0)
+		return;
+	WARN_ON_ONCE(true);
+	__acquire(lock); /* to keep the compiler happy */
 }
 EXPORT_SYMBOL_GPL(_rt_mutex_lock_nest_lock);
 
@@ -86,7 +93,10 @@ EXPORT_SYMBOL_GPL(_rt_mutex_lock_nest_lock);
  */
 void __sched rt_mutex_lock(struct rt_mutex *lock)
 {
-	__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, 0);
+	if (__rt_mutex_lock_common(lock, TASK_UNINTERRUPTIBLE, NULL, 0) == 0)
+		return;
+	WARN_ON_ONCE(true);
+	__acquire(lock); /* to keep the compiler happy */
 }
 EXPORT_SYMBOL_GPL(rt_mutex_lock);
 #endif
 
@@ -157,6 +167,7 @@ void __sched rt_mutex_unlock(struct rt_mutex *lock)
 {
 	mutex_release(&lock->dep_map, _RET_IP_);
 	__rt_mutex_unlock(&lock->rtmutex);
+	__release(lock);
 }
 EXPORT_SYMBOL_GPL(rt_mutex_unlock);
 
@@ -182,6 +193,7 @@ int __sched __rt_mutex_futex_trylock(struct rt_mutex_base *lock)
  */
 bool __sched __rt_mutex_futex_unlock(struct rt_mutex_base *lock,
 				     struct rt_wake_q_head *wqh)
+	__must_hold(&lock->wait_lock)
 {
 	lockdep_assert_held(&lock->wait_lock);
 
@@ -312,6 +324,7 @@ int __sched __rt_mutex_start_proxy_lock(struct rt_mutex_base *lock,
 					struct rt_mutex_waiter *waiter,
 					struct task_struct *task,
 					struct wake_q_head *wake_q)
+	__must_hold(&lock->wait_lock)
 {
 	int ret;