public inbox for linux-block@vger.kernel.org
* [PATCH 00/14] Enable lock context analysis
@ 2026-03-04 19:48 Bart Van Assche
  2026-03-04 19:48 ` [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices() Bart Van Assche
                   ` (13 more replies)
  0 siblings, 14 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche

Hi Jens,

During the most recent merge window the following patch series was merged:
[PATCH v5 00/36] Compiler-Based Context- and Locking-Analysis
(https://lore.kernel.org/lkml/20251219154418.3592607-1-elver@google.com/). That
patch series dropped support for verifying lock context annotations with sparse
and introduced support for verifying them with Clang. Clang's support for lock
context annotation and verification is better than that of sparse. Hence this
patch series, which enables lock context analysis for the block layer core and
all block drivers.

The first patch in this series fixes a bug discovered in DRBD while enabling
lock context analysis.

Please consider this patch series for the upstream kernel.

Thanks,

Bart.

Bart Van Assche (14):
  drbd: Balance RCU calls in drbd_adm_dump_devices()
  blk-ioc: Prepare for enabling thread-safety analysis
  block: Make the lock context annotations compatible with Clang
  aoe: Add a lock context annotation
  drbd: Make the lock context annotations compatible with Clang
  loop: Add lock context annotations
  nbd: Add lock context annotations
  null_blk: Add more lock context annotations
  rbd: Add lock context annotations
  rnbd: Add more lock context annotations
  ublk: Fix the lock context annotations
  zloop: Add a lock context annotation
  zram: Add lock context annotations
  block: Enable lock context analysis for all block drivers

 block/Makefile                     |  2 ++
 block/bdev.c                       |  7 +++--
 block/blk-cgroup.c                 |  7 +++--
 block/blk-crypto-profile.c         |  2 ++
 block/blk-ioc.c                    |  2 +-
 block/blk-iocost.c                 |  2 ++
 block/blk-mq-debugfs.c             | 12 ++++----
 block/blk-zoned.c                  |  1 +
 block/blk.h                        |  4 +++
 block/ioctl.c                      |  1 +
 block/kyber-iosched.c              |  4 +--
 block/mq-deadline.c                |  8 +++---
 drivers/block/Makefile             |  2 ++
 drivers/block/aoe/Makefile         |  2 ++
 drivers/block/aoe/aoecmd.c         |  1 +
 drivers/block/drbd/Makefile        |  3 ++
 drivers/block/drbd/drbd_bitmap.c   | 20 +++++++------
 drivers/block/drbd/drbd_int.h      | 46 ++++++++++++++----------------
 drivers/block/drbd/drbd_main.c     | 45 ++++++++++++++++++++++-------
 drivers/block/drbd/drbd_nl.c       | 13 ++++++---
 drivers/block/drbd/drbd_receiver.c | 20 +++++++------
 drivers/block/drbd/drbd_req.c      |  2 ++
 drivers/block/drbd/drbd_state.c    |  3 ++
 drivers/block/drbd/drbd_worker.c   |  6 ++--
 drivers/block/loop.c               |  4 +++
 drivers/block/mtip32xx/Makefile    |  2 ++
 drivers/block/nbd.c                |  3 ++
 drivers/block/null_blk/Makefile    |  2 ++
 drivers/block/null_blk/main.c      |  7 +++--
 drivers/block/null_blk/zoned.c     |  2 ++
 drivers/block/rbd.c                |  7 +++++
 drivers/block/rnbd/Makefile        |  2 ++
 drivers/block/rnbd/rnbd-clt.c      |  2 ++
 drivers/block/ublk_drv.c           |  6 +++-
 drivers/block/xen-blkback/Makefile |  3 ++
 drivers/block/zloop.c              |  1 +
 drivers/block/zram/Makefile        |  2 ++
 drivers/block/zram/zcomp.c         |  3 +-
 drivers/block/zram/zcomp.h         |  6 ++--
 drivers/block/zram/zram_drv.c      |  1 +
 include/linux/backing-dev.h        |  2 ++
 include/linux/blkdev.h             | 11 +++++--
 include/linux/bpf.h                |  1 +
 43 files changed, 197 insertions(+), 85 deletions(-)


^ permalink raw reply	[flat|nested] 39+ messages in thread

* [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices()
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 20:25   ` Damien Le Moal
  2026-03-04 19:48 ` [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis Bart Van Assche
                   ` (12 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Christoph Böhmwalder, Andreas Gruenbacher,
	Philipp Reisner, Lars Ellenberg, Nathan Chancellor

Make drbd_adm_dump_devices() call rcu_read_lock() before
rcu_read_unlock() is called. This imbalance was detected by the Clang
thread-safety analyzer. Compile-tested only.

Tested-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
Cc: Andreas Gruenbacher <agruen@linbit.com>
Fixes: a55bbd375d18 ("drbd: Backport the "status" command")
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/drbd/drbd_nl.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index e201f0087a0f..728ecc431b38 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -3378,8 +3378,10 @@ int drbd_adm_dump_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		if (resource_filter) {
 			retcode = ERR_RES_NOT_KNOWN;
 			resource = drbd_find_resource(nla_data(resource_filter));
-			if (!resource)
+			if (!resource) {
+				rcu_read_lock();
 				goto put_result;
+			}
 			cb->args[0] = (long)resource;
 		}
 	}
@@ -3628,8 +3630,10 @@ int drbd_adm_dump_peer_devices(struct sk_buff *skb, struct netlink_callback *cb)
 		if (resource_filter) {
 			retcode = ERR_RES_NOT_KNOWN;
 			resource = drbd_find_resource(nla_data(resource_filter));
-			if (!resource)
+			if (!resource) {
+				rcu_read_lock();
 				goto put_result;
+			}
 		}
 		cb->args[0] = (long)resource;
 	}

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
  2026-03-04 19:48 ` [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices() Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-05 10:10   ` Jan Kara
  2026-03-04 19:48 ` [PATCH 03/14] block: Make the lock context annotations compatible with Clang Bart Van Assche
                   ` (11 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Yu Kuai, Jan Kara, Nathan Chancellor

The Clang thread-safety analyzer does not support testing return values
with "< 0". Hence change the "< 0" test into "!= 0". This is fine since
the radix_tree_maybe_preload() return value is <= 0.

Cc: Yu Kuai <yukuai3@huawei.com>
Cc: Jan Kara <jack@suse.cz>
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/blk-ioc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/blk-ioc.c b/block/blk-ioc.c
index d15918d7fabb..0bf78aebc887 100644
--- a/block/blk-ioc.c
+++ b/block/blk-ioc.c
@@ -364,7 +364,7 @@ static struct io_cq *ioc_create_icq(struct request_queue *q)
 	if (!icq)
 		return NULL;
 
-	if (radix_tree_maybe_preload(GFP_ATOMIC) < 0) {
+	if (radix_tree_maybe_preload(GFP_ATOMIC) != 0) {
 		kmem_cache_free(et->icq_cache, icq);
 		return NULL;
 	}

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
  2026-03-04 19:48 ` [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices() Bart Van Assche
  2026-03-04 19:48 ` [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 20:03   ` Tejun Heo
  2026-03-04 19:48 ` [PATCH 04/14] aoe: Add a lock context annotation Bart Van Assche
                   ` (10 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Tejun Heo, Josef Bacik, Alexei Starovoitov,
	Daniel Borkmann, Andrii Nakryiko, Nathan Chancellor,
	Miklos Szeredi, Christian Brauner, Andreas Gruenbacher,
	Joanne Koong, Mateusz Guzik

Clang is stricter than sparse with regard to lock context annotation
checking. Hence this patch, which makes the lock context annotations
compatible with Clang. __release() annotations have been added after
indirect calls that unlock a mutex, because Clang does not support
annotating function pointers with __releases().

Enable context analysis in the block layer Makefile.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 block/Makefile              |  2 ++
 block/bdev.c                |  7 +++++--
 block/blk-cgroup.c          |  7 ++++---
 block/blk-crypto-profile.c  |  2 ++
 block/blk-iocost.c          |  2 ++
 block/blk-mq-debugfs.c      | 12 ++++++------
 block/blk-zoned.c           |  1 +
 block/blk.h                 |  4 ++++
 block/ioctl.c               |  1 +
 block/kyber-iosched.c       |  4 ++--
 block/mq-deadline.c         |  8 ++++----
 include/linux/backing-dev.h |  2 ++
 include/linux/blkdev.h      | 11 ++++++++---
 include/linux/bpf.h         |  1 +
 14 files changed, 44 insertions(+), 20 deletions(-)

diff --git a/block/Makefile b/block/Makefile
index c65f4da93702..407ea53e39b2 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -3,6 +3,8 @@
 # Makefile for the kernel block layer
 #
 
+CONTEXT_ANALYSIS := y
+
 obj-y		:= bdev.o fops.o bio.o elevator.o blk-core.o blk-sysfs.o \
 			blk-flush.o blk-settings.o blk-ioc.o blk-map.o \
 			blk-merge.o blk-timeout.o blk-lib.o blk-mq.o \
diff --git a/block/bdev.c b/block/bdev.c
index ed022f8c48c7..367f0f09a2e4 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -313,6 +313,7 @@ int bdev_freeze(struct block_device *bdev)
 	if (bdev->bd_holder_ops && bdev->bd_holder_ops->freeze) {
 		error = bdev->bd_holder_ops->freeze(bdev);
 		lockdep_assert_not_held(&bdev->bd_holder_lock);
+		__release(&bdev->bd_holder_lock);
 	} else {
 		mutex_unlock(&bdev->bd_holder_lock);
 		error = sync_blockdev(bdev);
@@ -356,6 +357,7 @@ int bdev_thaw(struct block_device *bdev)
 	if (bdev->bd_holder_ops && bdev->bd_holder_ops->thaw) {
 		error = bdev->bd_holder_ops->thaw(bdev);
 		lockdep_assert_not_held(&bdev->bd_holder_lock);
+		__release(&bdev->bd_holder_lock);
 	} else {
 		mutex_unlock(&bdev->bd_holder_lock);
 	}
@@ -1254,9 +1256,10 @@ EXPORT_SYMBOL(lookup_bdev);
 void bdev_mark_dead(struct block_device *bdev, bool surprise)
 {
 	mutex_lock(&bdev->bd_holder_lock);
-	if (bdev->bd_holder_ops && bdev->bd_holder_ops->mark_dead)
+	if (bdev->bd_holder_ops && bdev->bd_holder_ops->mark_dead) {
 		bdev->bd_holder_ops->mark_dead(bdev, surprise);
-	else {
+		__release(&bdev->bd_holder_lock);
+	} else {
 		mutex_unlock(&bdev->bd_holder_lock);
 		sync_blockdev(bdev);
 	}
diff --git a/block/blk-cgroup.c b/block/blk-cgroup.c
index b70096497d38..5aec000d3da6 100644
--- a/block/blk-cgroup.c
+++ b/block/blk-cgroup.c
@@ -774,6 +774,7 @@ EXPORT_SYMBOL_GPL(blkg_conf_init);
  * of @ctx->input. Returns -errno on error.
  */
 int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx)
+	__no_context_analysis /* conditional locking */
 {
 	char *input = ctx->input;
 	unsigned int major, minor;
@@ -819,6 +820,7 @@ int blkg_conf_open_bdev(struct blkg_conf_ctx *ctx)
  * for restoring the memalloc scope.
  */
 unsigned long __must_check blkg_conf_open_bdev_frozen(struct blkg_conf_ctx *ctx)
+	__must_hold(&ctx->bdev->bd_queue->rq_qos_mutex)
 {
 	int ret;
 	unsigned long memflags;
@@ -860,7 +862,7 @@ unsigned long __must_check blkg_conf_open_bdev_frozen(struct blkg_conf_ctx *ctx)
  */
 int blkg_conf_prep(struct blkcg *blkcg, const struct blkcg_policy *pol,
 		   struct blkg_conf_ctx *ctx)
-	__acquires(&bdev->bd_queue->queue_lock)
+	__cond_acquires(0, &ctx->bdev->bd_disk->queue->queue_lock)
 {
 	struct gendisk *disk;
 	struct request_queue *q;
@@ -974,8 +976,7 @@ EXPORT_SYMBOL_GPL(blkg_conf_prep);
  * blkg_conf_ctx's initialized with blkg_conf_init().
  */
 void blkg_conf_exit(struct blkg_conf_ctx *ctx)
-	__releases(&ctx->bdev->bd_queue->queue_lock)
-	__releases(&ctx->bdev->bd_queue->rq_qos_mutex)
+	__no_context_analysis /* conditional unlocking */
 {
 	if (ctx->blkg) {
 		spin_unlock_irq(&bdev_get_queue(ctx->bdev)->queue_lock);
diff --git a/block/blk-crypto-profile.c b/block/blk-crypto-profile.c
index 4ac74443687a..cf447ba4a66e 100644
--- a/block/blk-crypto-profile.c
+++ b/block/blk-crypto-profile.c
@@ -43,6 +43,7 @@ struct blk_crypto_keyslot {
 };
 
 static inline void blk_crypto_hw_enter(struct blk_crypto_profile *profile)
+	__acquires(&profile->lock)
 {
 	/*
 	 * Calling into the driver requires profile->lock held and the device
@@ -55,6 +56,7 @@ static inline void blk_crypto_hw_enter(struct blk_crypto_profile *profile)
 }
 
 static inline void blk_crypto_hw_exit(struct blk_crypto_profile *profile)
+	__releases(&profile->lock)
 {
 	up_write(&profile->lock);
 	if (profile->dev)
diff --git a/block/blk-iocost.c b/block/blk-iocost.c
index d145db61e5c3..081054ca8111 100644
--- a/block/blk-iocost.c
+++ b/block/blk-iocost.c
@@ -728,6 +728,7 @@ static void iocg_commit_bio(struct ioc_gq *iocg, struct bio *bio,
 }
 
 static void iocg_lock(struct ioc_gq *iocg, bool lock_ioc, unsigned long *flags)
+	__no_context_analysis /* conditional locking */
 {
 	if (lock_ioc) {
 		spin_lock_irqsave(&iocg->ioc->lock, *flags);
@@ -738,6 +739,7 @@ static void iocg_lock(struct ioc_gq *iocg, bool lock_ioc, unsigned long *flags)
 }
 
 static void iocg_unlock(struct ioc_gq *iocg, bool unlock_ioc, unsigned long *flags)
+	__no_context_analysis /* conditional locking */
 {
 	if (unlock_ioc) {
 		spin_unlock(&iocg->waitq.lock);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 047ec887456b..5c168e82273e 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -20,7 +20,7 @@ static int queue_poll_stat_show(void *data, struct seq_file *m)
 }
 
 static void *queue_requeue_list_start(struct seq_file *m, loff_t *pos)
-	__acquires(&q->requeue_lock)
+	__acquires(&((struct request_queue *)m->private)->requeue_lock)
 {
 	struct request_queue *q = m->private;
 
@@ -36,7 +36,7 @@ static void *queue_requeue_list_next(struct seq_file *m, void *v, loff_t *pos)
 }
 
 static void queue_requeue_list_stop(struct seq_file *m, void *v)
-	__releases(&q->requeue_lock)
+	__releases(&((struct request_queue *)m->private)->requeue_lock)
 {
 	struct request_queue *q = m->private;
 
@@ -298,7 +298,7 @@ int blk_mq_debugfs_rq_show(struct seq_file *m, void *v)
 EXPORT_SYMBOL_GPL(blk_mq_debugfs_rq_show);
 
 static void *hctx_dispatch_start(struct seq_file *m, loff_t *pos)
-	__acquires(&hctx->lock)
+	__acquires(&((struct blk_mq_hw_ctx *)m->private)->lock)
 {
 	struct blk_mq_hw_ctx *hctx = m->private;
 
@@ -314,7 +314,7 @@ static void *hctx_dispatch_next(struct seq_file *m, void *v, loff_t *pos)
 }
 
 static void hctx_dispatch_stop(struct seq_file *m, void *v)
-	__releases(&hctx->lock)
+	__releases(&((struct blk_mq_hw_ctx *)m->private)->lock)
 {
 	struct blk_mq_hw_ctx *hctx = m->private;
 
@@ -486,7 +486,7 @@ static int hctx_dispatch_busy_show(void *data, struct seq_file *m)
 
 #define CTX_RQ_SEQ_OPS(name, type)					\
 static void *ctx_##name##_rq_list_start(struct seq_file *m, loff_t *pos) \
-	__acquires(&ctx->lock)						\
+	__acquires(&((struct blk_mq_ctx *)m->private)->lock)		\
 {									\
 	struct blk_mq_ctx *ctx = m->private;				\
 									\
@@ -503,7 +503,7 @@ static void *ctx_##name##_rq_list_next(struct seq_file *m, void *v,	\
 }									\
 									\
 static void ctx_##name##_rq_list_stop(struct seq_file *m, void *v)	\
-	__releases(&ctx->lock)						\
+	__releases(&((struct blk_mq_ctx *)m->private)->lock)		\
 {									\
 	struct blk_mq_ctx *ctx = m->private;				\
 									\
diff --git a/block/blk-zoned.c b/block/blk-zoned.c
index e1a23c8b676d..df0800e69ad7 100644
--- a/block/blk-zoned.c
+++ b/block/blk-zoned.c
@@ -439,6 +439,7 @@ static int blkdev_truncate_zone_range(struct block_device *bdev,
  */
 int blkdev_zone_mgmt_ioctl(struct block_device *bdev, blk_mode_t mode,
 			   unsigned int cmd, unsigned long arg)
+	__cond_acquires(0, bdev->bd_mapping->host->i_rwsem)
 {
 	void __user *argp = (void __user *)arg;
 	struct blk_zone_range zrange;
diff --git a/block/blk.h b/block/blk.h
index f6053e9dd2aa..59321957f54b 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -736,16 +736,19 @@ static inline void blk_unfreeze_release_lock(struct request_queue *q)
  * reclaim from triggering block I/O.
  */
 static inline void blk_debugfs_lock_nomemsave(struct request_queue *q)
+	__acquires(&q->debugfs_mutex)
 {
 	mutex_lock(&q->debugfs_mutex);
 }
 
 static inline void blk_debugfs_unlock_nomemrestore(struct request_queue *q)
+	__releases(&q->debugfs_mutex)
 {
 	mutex_unlock(&q->debugfs_mutex);
 }
 
 static inline unsigned int __must_check blk_debugfs_lock(struct request_queue *q)
+	__acquires(&q->debugfs_mutex)
 {
 	unsigned int memflags = memalloc_noio_save();
 
@@ -755,6 +758,7 @@ static inline unsigned int __must_check blk_debugfs_lock(struct request_queue *q
 
 static inline void blk_debugfs_unlock(struct request_queue *q,
 				      unsigned int memflags)
+	__releases(&q->debugfs_mutex)
 {
 	blk_debugfs_unlock_nomemrestore(q);
 	memalloc_noio_restore(memflags);
diff --git a/block/ioctl.c b/block/ioctl.c
index 0b04661ac809..784f2965f8bd 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -518,6 +518,7 @@ static int blkdev_pr_read_reservation(struct block_device *bdev,
 
 static int blkdev_flushbuf(struct block_device *bdev, unsigned cmd,
 		unsigned long arg)
+	__cond_acquires(0, bdev->bd_holder_lock)
 {
 	if (!capable(CAP_SYS_ADMIN))
 		return -EACCES;
diff --git a/block/kyber-iosched.c b/block/kyber-iosched.c
index b84163d1f851..874791838cbc 100644
--- a/block/kyber-iosched.c
+++ b/block/kyber-iosched.c
@@ -894,7 +894,7 @@ static int kyber_##name##_tokens_show(void *data, struct seq_file *m)	\
 }									\
 									\
 static void *kyber_##name##_rqs_start(struct seq_file *m, loff_t *pos)	\
-	__acquires(&khd->lock)						\
+	__acquires(((struct kyber_hctx_data *)((struct blk_mq_hw_ctx *)m->private)->sched_data)->lock) \
 {									\
 	struct blk_mq_hw_ctx *hctx = m->private;			\
 	struct kyber_hctx_data *khd = hctx->sched_data;			\
@@ -913,7 +913,7 @@ static void *kyber_##name##_rqs_next(struct seq_file *m, void *v,	\
 }									\
 									\
 static void kyber_##name##_rqs_stop(struct seq_file *m, void *v)	\
-	__releases(&khd->lock)						\
+	__releases(((struct kyber_hctx_data *)((struct blk_mq_hw_ctx *)m->private)->sched_data)->lock)						\
 {									\
 	struct blk_mq_hw_ctx *hctx = m->private;			\
 	struct kyber_hctx_data *khd = hctx->sched_data;			\
diff --git a/block/mq-deadline.c b/block/mq-deadline.c
index 95917a88976f..b812708a86ee 100644
--- a/block/mq-deadline.c
+++ b/block/mq-deadline.c
@@ -798,7 +798,7 @@ static const struct elv_fs_entry deadline_attrs[] = {
 #define DEADLINE_DEBUGFS_DDIR_ATTRS(prio, data_dir, name)		\
 static void *deadline_##name##_fifo_start(struct seq_file *m,		\
 					  loff_t *pos)			\
-	__acquires(&dd->lock)						\
+	__acquires(&((struct deadline_data *)((struct request_queue *)m->private)->elevator->elevator_data)->lock)						\
 {									\
 	struct request_queue *q = m->private;				\
 	struct deadline_data *dd = q->elevator->elevator_data;		\
@@ -819,7 +819,7 @@ static void *deadline_##name##_fifo_next(struct seq_file *m, void *v,	\
 }									\
 									\
 static void deadline_##name##_fifo_stop(struct seq_file *m, void *v)	\
-	__releases(&dd->lock)						\
+	__releases(&((struct deadline_data *)((struct request_queue *)m->private)->elevator->elevator_data)->lock)						\
 {									\
 	struct request_queue *q = m->private;				\
 	struct deadline_data *dd = q->elevator->elevator_data;		\
@@ -921,7 +921,7 @@ static int dd_owned_by_driver_show(void *data, struct seq_file *m)
 }
 
 static void *deadline_dispatch_start(struct seq_file *m, loff_t *pos)
-	__acquires(&dd->lock)
+	__acquires(&((struct deadline_data *)((struct request_queue *)m->private)->elevator->elevator_data)->lock)
 {
 	struct request_queue *q = m->private;
 	struct deadline_data *dd = q->elevator->elevator_data;
@@ -939,7 +939,7 @@ static void *deadline_dispatch_next(struct seq_file *m, void *v, loff_t *pos)
 }
 
 static void deadline_dispatch_stop(struct seq_file *m, void *v)
-	__releases(&dd->lock)
+	__releases(&((struct deadline_data *)((struct request_queue *)m->private)->elevator->elevator_data)->lock)
 {
 	struct request_queue *q = m->private;
 	struct deadline_data *dd = q->elevator->elevator_data;
diff --git a/include/linux/backing-dev.h b/include/linux/backing-dev.h
index 0c8342747cab..34571d8b9dce 100644
--- a/include/linux/backing-dev.h
+++ b/include/linux/backing-dev.h
@@ -273,6 +273,7 @@ static inline struct bdi_writeback *inode_to_wb_wbc(
  */
 static inline struct bdi_writeback *
 unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
+	__no_context_analysis /* conditional locking */
 {
 	rcu_read_lock();
 
@@ -300,6 +301,7 @@ unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
  */
 static inline void unlocked_inode_to_wb_end(struct inode *inode,
 					    struct wb_lock_cookie *cookie)
+	__no_context_analysis /* conditional locking */
 {
 	if (unlikely(cookie->locked))
 		xa_unlock_irqrestore(&inode->i_mapping->i_pages, cookie->flags);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 8d93d8e356d8..7b05ea282435 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1092,15 +1092,19 @@ static inline unsigned int blk_boundary_sectors_left(sector_t offset,
  */
 static inline struct queue_limits
 queue_limits_start_update(struct request_queue *q)
+	__acquires(&q->limits_lock)
 {
 	mutex_lock(&q->limits_lock);
 	return q->limits;
 }
 int queue_limits_commit_update_frozen(struct request_queue *q,
-		struct queue_limits *lim);
+		struct queue_limits *lim)
+	__releases(&q->limits_lock);
 int queue_limits_commit_update(struct request_queue *q,
-		struct queue_limits *lim);
-int queue_limits_set(struct request_queue *q, struct queue_limits *lim);
+		struct queue_limits *lim)
+	__releases(&q->limits_lock);
+int queue_limits_set(struct request_queue *q, struct queue_limits *lim)
+	__must_not_hold(&q->limits_lock);
 int blk_validate_limits(struct queue_limits *lim);
 
 /**
@@ -1112,6 +1116,7 @@ int blk_validate_limits(struct queue_limits *lim);
  * starting update.
  */
 static inline void queue_limits_cancel_update(struct request_queue *q)
+	__releases(&q->limits_lock)
 {
 	mutex_unlock(&q->limits_lock);
 }
diff --git a/include/linux/bpf.h b/include/linux/bpf.h
index 05b34a6355b0..a3277bcf8d1d 100644
--- a/include/linux/bpf.h
+++ b/include/linux/bpf.h
@@ -2489,6 +2489,7 @@ bpf_prog_run_array(const struct bpf_prog_array *array,
 static __always_inline u32
 bpf_prog_run_array_uprobe(const struct bpf_prog_array *array,
 			  const void *ctx, bpf_prog_run_fn run_prog)
+	__no_context_analysis /* conditional locking */
 {
 	const struct bpf_prog_array_item *item;
 	const struct bpf_prog *prog;

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 04/14] aoe: Add a lock context annotation
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (2 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 03/14] block: Make the lock context annotations compatible with Clang Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang Bart Van Assche
                   ` (9 subsequent siblings)
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Justin Sanders, Nathan Chancellor

ktio() unlocks and relocks iocq[id].lock. Add a __must_hold() annotation
that reflects this. The annotation prevents Clang from complaining about
this function once lock context analysis is enabled.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/aoe/aoecmd.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index a4744a30a8af..54c57b9f8894 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -1193,6 +1193,7 @@ noskb:		if (buf)
  */
 static int
 ktio(int id)
+	__must_hold(&iocq[id].lock)
 {
 	struct frame *f;
 	struct list_head *pos;

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (3 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 04/14] aoe: Add a lock context annotation Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-09 10:08   ` Christoph Böhmwalder
  2026-03-04 19:48 ` [PATCH 06/14] loop: Add lock context annotations Bart Van Assche
                   ` (8 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Philipp Reisner, Lars Ellenberg,
	Christoph Böhmwalder, Nathan Chancellor

Clang performs stricter checking of lock context annotations than
sparse. This patch makes the DRBD lock context annotations compatible
with Clang and prepares for enabling lock context analysis.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/drbd/drbd_bitmap.c   | 20 +++++++------
 drivers/block/drbd/drbd_int.h      | 46 ++++++++++++++----------------
 drivers/block/drbd/drbd_main.c     | 45 ++++++++++++++++++++++-------
 drivers/block/drbd/drbd_nl.c       |  5 ++--
 drivers/block/drbd/drbd_receiver.c | 20 +++++++------
 drivers/block/drbd/drbd_req.c      |  2 ++
 drivers/block/drbd/drbd_state.c    |  3 ++
 drivers/block/drbd/drbd_worker.c   |  6 ++--
 8 files changed, 91 insertions(+), 56 deletions(-)

diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 65ea6ec66bfd..eeeeba9840ea 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -122,12 +122,14 @@ static void __bm_print_lock_info(struct drbd_device *device, const char *func)
 }
 
 void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
+	__acquires(&device->bitmap->bm_change)
 {
 	struct drbd_bitmap *b = device->bitmap;
 	int trylock_failed;
 
 	if (!b) {
 		drbd_err(device, "FIXME no bitmap in drbd_bm_lock!?\n");
+		__acquire(&b->bm_change);
 		return;
 	}
 
@@ -149,10 +151,12 @@ void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
 }
 
 void drbd_bm_unlock(struct drbd_device *device)
+	__releases(&device->bitmap->bm_change)
 {
 	struct drbd_bitmap *b = device->bitmap;
 	if (!b) {
 		drbd_err(device, "FIXME no bitmap in drbd_bm_unlock!?\n");
+		__release(&b->bm_change);
 		return;
 	}
 
@@ -987,7 +991,7 @@ static inline sector_t drbd_md_last_bitmap_sector(struct drbd_backing_dev *bdev)
 	}
 }
 
-static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_hold(local)
+static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr)
 {
 	struct drbd_device *device = ctx->device;
 	enum req_op op = ctx->flags & BM_AIO_READ ? REQ_OP_READ : REQ_OP_WRITE;
@@ -1060,7 +1064,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
 /*
  * bm_rw: read/write the whole bitmap from/to its on disk location.
  */
-static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned lazy_writeout_upper_idx) __must_hold(local)
+static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned lazy_writeout_upper_idx)
 {
 	struct drbd_bm_aio_ctx *ctx;
 	struct drbd_bitmap *b = device->bitmap;
@@ -1215,7 +1219,7 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
  * @device:	DRBD device.
  */
 int drbd_bm_read(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		 struct drbd_peer_device *peer_device)
 
 {
 	return bm_rw(device, BM_AIO_READ, 0);
@@ -1228,7 +1232,7 @@ int drbd_bm_read(struct drbd_device *device,
  * Will only write pages that have changed since last IO.
  */
 int drbd_bm_write(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		 struct drbd_peer_device *peer_device)
 {
 	return bm_rw(device, 0, 0);
 }
@@ -1240,7 +1244,7 @@ int drbd_bm_write(struct drbd_device *device,
  * Will write all pages.
  */
 int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
 {
 	return bm_rw(device, BM_AIO_WRITE_ALL_PAGES, 0);
 }
@@ -1250,7 +1254,7 @@ int drbd_bm_write_all(struct drbd_device *device,
  * @device:	DRBD device.
  * @upper_idx:	0: write all changed pages; +ve: page index to stop scanning for changed pages
  */
-int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local)
+int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx)
 {
 	return bm_rw(device, BM_AIO_COPY_PAGES, upper_idx);
 }
@@ -1267,7 +1271,7 @@ int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_ho
  * pending resync acks are still being processed.
  */
 int drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
 {
 	return bm_rw(device, BM_AIO_COPY_PAGES, 0);
 }
@@ -1276,7 +1280,7 @@ int drbd_bm_write_copy_pages(struct drbd_device *device,
  * drbd_bm_write_hinted() - Write bitmap pages with "hint" marks, if they have changed.
  * @device:	DRBD device.
  */
-int drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local)
+int drbd_bm_write_hinted(struct drbd_device *device)
 {
 	return bm_rw(device, BM_AIO_WRITE_HINTED | BM_AIO_COPY_PAGES, 0);
 }
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index f6d6276974ee..fea8e870781e 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -1056,14 +1056,14 @@ extern void conn_md_sync(struct drbd_connection *connection);
 extern void drbd_md_write(struct drbd_device *device, void *buffer);
 extern void drbd_md_sync(struct drbd_device *device);
 extern int  drbd_md_read(struct drbd_device *device, struct drbd_backing_dev *bdev);
-extern void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local);
-extern void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local);
-extern void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local);
-extern void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_md_set_flag(struct drbd_device *device, int flags) __must_hold(local);
-extern void drbd_md_clear_flag(struct drbd_device *device, int flags)__must_hold(local);
+extern void drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void drbd_uuid_new_current(struct drbd_device *device);
+extern void drbd_uuid_set_bm(struct drbd_device *device, u64 val);
+extern void drbd_uuid_move_history(struct drbd_device *device);
+extern void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void drbd_md_set_flag(struct drbd_device *device, int flags);
+extern void drbd_md_clear_flag(struct drbd_device *device, int flags);
 extern int drbd_md_test_flag(struct drbd_backing_dev *, int);
 extern void drbd_md_mark_dirty(struct drbd_device *device);
 extern void drbd_queue_bitmap_io(struct drbd_device *device,
@@ -1080,9 +1080,9 @@ extern int drbd_bitmap_io_from_worker(struct drbd_device *device,
 		char *why, enum bm_flag flags,
 		struct drbd_peer_device *peer_device);
 extern int drbd_bmio_set_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
 extern int drbd_bmio_clear_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
 
 /* Meta data layout
  *
@@ -1292,17 +1292,17 @@ extern void _drbd_bm_set_bits(struct drbd_device *device,
 extern int  drbd_bm_test_bit(struct drbd_device *device, unsigned long bitnr);
 extern int  drbd_bm_e_weight(struct drbd_device *device, unsigned long enr);
 extern int  drbd_bm_read(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
 extern void drbd_bm_mark_for_writeout(struct drbd_device *device, int page_nr);
 extern int  drbd_bm_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern void drbd_bm_reset_al_hints(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local);
+		struct drbd_peer_device *peer_device);
+extern void drbd_bm_reset_al_hints(struct drbd_device *device);
+extern int  drbd_bm_write_hinted(struct drbd_device *device);
+extern int  drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx);
 extern int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
 extern int  drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
 extern size_t	     drbd_bm_words(struct drbd_device *device);
 extern unsigned long drbd_bm_bits(struct drbd_device *device);
 extern sector_t      drbd_bm_capacity(struct drbd_device *device);
@@ -1389,7 +1389,8 @@ enum determine_dev_size {
 	DS_GREW_FROM_ZERO = 3,
 };
 extern enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *, enum dds_flags, struct resize_parms *) __must_hold(local);
+drbd_determine_dev_size(struct drbd_device *device, enum dds_flags,
+			struct resize_parms *);
 extern void resync_after_online_grow(struct drbd_device *);
 extern void drbd_reconsider_queue_parameters(struct drbd_device *device,
 			struct drbd_backing_dev *bdev, struct o_qlim *o);
@@ -1470,10 +1471,10 @@ extern bool drbd_rs_should_slow_down(struct drbd_peer_device *peer_device, secto
 		bool throttle_if_app_is_waiting);
 extern int drbd_submit_peer_request(struct drbd_peer_request *peer_req);
 extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
-extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
+extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *device, u64,
 						     sector_t, unsigned int,
 						     unsigned int,
-						     gfp_t) __must_hold(local);
+						     gfp_t);
 extern void drbd_free_peer_req(struct drbd_device *device, struct drbd_peer_request *req);
 extern struct page *drbd_alloc_pages(struct drbd_peer_device *, unsigned int, bool);
 extern void _drbd_clear_done_ee(struct drbd_device *device, struct list_head *to_be_freed);
@@ -1488,7 +1489,6 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size);
 static inline void drbd_submit_bio_noacct(struct drbd_device *device,
 					     int fault_type, struct bio *bio)
 {
-	__release(local);
 	if (!bio->bi_bdev) {
 		drbd_err(device, "drbd_submit_bio_noacct: bio->bi_bdev == NULL\n");
 		bio->bi_status = BLK_STS_IOERR;
@@ -1975,8 +1975,7 @@ static inline bool is_sync_state(enum drbd_conns connection_state)
  * You have to call put_ldev() when finished working with device->ldev.
  */
 #define get_ldev_if_state(_device, _min_state)				\
-	(_get_ldev_if_state((_device), (_min_state)) ?			\
-	 ({ __acquire(x); true; }) : false)
+	(_get_ldev_if_state((_device), (_min_state)))
 #define get_ldev(_device) get_ldev_if_state(_device, D_INCONSISTENT)
 
 static inline void put_ldev(struct drbd_device *device)
@@ -1991,7 +1990,6 @@ static inline void put_ldev(struct drbd_device *device)
 	/* This may be called from some endio handler,
 	 * so we must not sleep here. */
 
-	__release(local);
 	D_ASSERT(device, i >= 0);
 	if (i == 0) {
 		if (disk_state == D_DISKLESS)
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 200d464e984b..c014a89e224c 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -589,6 +589,7 @@ static void *__conn_prepare_command(struct drbd_connection *connection,
 }
 
 void *conn_prepare_command(struct drbd_connection *connection, struct drbd_socket *sock)
+	__cond_acquires(true, sock->mutex)
 {
 	void *p;
 
@@ -601,6 +602,7 @@ void *conn_prepare_command(struct drbd_connection *connection, struct drbd_socke
 }
 
 void *drbd_prepare_command(struct drbd_peer_device *peer_device, struct drbd_socket *sock)
+	__cond_acquires(true, sock->mutex)
 {
 	return conn_prepare_command(peer_device->connection, sock);
 }
@@ -646,6 +648,7 @@ static int __conn_send_command(struct drbd_connection *connection, struct drbd_s
 int conn_send_command(struct drbd_connection *connection, struct drbd_socket *sock,
 		      enum drbd_packet cmd, unsigned int header_size,
 		      void *data, unsigned int size)
+	__releases(sock->mutex)
 {
 	int err;
 
@@ -657,6 +660,7 @@ int conn_send_command(struct drbd_connection *connection, struct drbd_socket *so
 int drbd_send_command(struct drbd_peer_device *peer_device, struct drbd_socket *sock,
 		      enum drbd_packet cmd, unsigned int header_size,
 		      void *data, unsigned int size)
+	__releases(sock->mutex)
 {
 	int err;
 
@@ -667,6 +671,7 @@ int drbd_send_command(struct drbd_peer_device *peer_device, struct drbd_socket *
 }
 
 int drbd_send_ping(struct drbd_connection *connection)
+	__cond_acquires(true, connection->meta.mutex)
 {
 	struct drbd_socket *sock;
 
@@ -677,6 +682,7 @@ int drbd_send_ping(struct drbd_connection *connection)
 }
 
 int drbd_send_ping_ack(struct drbd_connection *connection)
+	__cond_acquires(true, connection->meta.mutex)
 {
 	struct drbd_socket *sock;
 
@@ -687,6 +693,7 @@ int drbd_send_ping_ack(struct drbd_connection *connection)
 }
 
 int drbd_send_sync_param(struct drbd_peer_device *peer_device)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_rs_param_95 *p;
@@ -800,6 +807,7 @@ int drbd_send_protocol(struct drbd_connection *connection)
 }
 
 static int _drbd_send_uuids(struct drbd_peer_device *peer_device, u64 uuid_flags)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_socket *sock;
@@ -862,6 +870,7 @@ void drbd_print_uuids(struct drbd_device *device, const char *text)
 }
 
 void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *peer_device)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_socket *sock;
@@ -888,6 +897,7 @@ void drbd_gen_and_send_sync_uuid(struct drbd_peer_device *peer_device)
 }
 
 int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enum dds_flags flags)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_socket *sock;
@@ -969,6 +979,7 @@ int drbd_send_sizes(struct drbd_peer_device *peer_device, int trigger_reply, enu
  * @peer_device:	DRBD peer device.
  */
 int drbd_send_current_state(struct drbd_peer_device *peer_device)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_state *p;
@@ -992,6 +1003,7 @@ int drbd_send_current_state(struct drbd_peer_device *peer_device)
  * want to send each intermediary state in the order it occurred.
  */
 int drbd_send_state(struct drbd_peer_device *peer_device, union drbd_state state)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_state *p;
@@ -1005,6 +1017,7 @@ int drbd_send_state(struct drbd_peer_device *peer_device, union drbd_state state
 }
 
 int drbd_send_state_req(struct drbd_peer_device *peer_device, union drbd_state mask, union drbd_state val)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_req_state *p;
@@ -1019,6 +1032,7 @@ int drbd_send_state_req(struct drbd_peer_device *peer_device, union drbd_state m
 }
 
 int conn_send_state_req(struct drbd_connection *connection, union drbd_state mask, union drbd_state val)
+	__cond_acquires(true, connection->data.mutex)
 {
 	enum drbd_packet cmd;
 	struct drbd_socket *sock;
@@ -1035,6 +1049,7 @@ int conn_send_state_req(struct drbd_connection *connection, union drbd_state mas
 }
 
 void drbd_send_sr_reply(struct drbd_peer_device *peer_device, enum drbd_state_rv retcode)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_req_state_reply *p;
@@ -1048,6 +1063,7 @@ void drbd_send_sr_reply(struct drbd_peer_device *peer_device, enum drbd_state_rv
 }
 
 void conn_send_sr_reply(struct drbd_connection *connection, enum drbd_state_rv retcode)
+	__cond_acquires(true, connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_req_state_reply *p;
@@ -1381,6 +1397,7 @@ int drbd_send_ack_ex(struct drbd_peer_device *peer_device, enum drbd_packet cmd,
 
 int drbd_send_rs_deallocated(struct drbd_peer_device *peer_device,
 			     struct drbd_peer_request *peer_req)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_block_desc *p;
@@ -1397,6 +1414,7 @@ int drbd_send_rs_deallocated(struct drbd_peer_device *peer_device,
 
 int drbd_send_drequest(struct drbd_peer_device *peer_device, int cmd,
 		       sector_t sector, int size, u64 block_id)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_block_req *p;
@@ -1413,6 +1431,7 @@ int drbd_send_drequest(struct drbd_peer_device *peer_device, int cmd,
 
 int drbd_send_drequest_csum(struct drbd_peer_device *peer_device, sector_t sector, int size,
 			    void *digest, int digest_size, enum drbd_packet cmd)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_block_req *p;
@@ -1430,6 +1449,7 @@ int drbd_send_drequest_csum(struct drbd_peer_device *peer_device, sector_t secto
 }
 
 int drbd_send_ov_request(struct drbd_peer_device *peer_device, sector_t sector, int size)
+	__cond_acquires(true, peer_device->connection->data.mutex)
 {
 	struct drbd_socket *sock;
 	struct p_block_req *p;
@@ -3282,7 +3302,7 @@ void drbd_md_mark_dirty(struct drbd_device *device)
 		mod_timer(&device->md_sync_timer, jiffies + 5*HZ);
 }
 
-void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local)
+void drbd_uuid_move_history(struct drbd_device *device)
 {
 	int i;
 
@@ -3290,7 +3310,7 @@ void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local)
 		device->ldev->md.uuid[i+1] = device->ldev->md.uuid[i];
 }
 
-void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
 {
 	if (idx == UI_CURRENT) {
 		if (device->state.role == R_PRIMARY)
@@ -3305,7 +3325,7 @@ void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(l
 	drbd_md_mark_dirty(device);
 }
 
-void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
 {
 	unsigned long flags;
 	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3313,7 +3333,7 @@ void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(lo
 	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
 }
 
-void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
 {
 	unsigned long flags;
 	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3332,7 +3352,7 @@ void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(loc
  * Creates a new current UUID, and rotates the old current UUID into
  * the bitmap slot. Causes an incremental resync upon next connect.
  */
-void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
+void drbd_uuid_new_current(struct drbd_device *device)
 {
 	u64 val;
 	unsigned long long bm_uuid;
@@ -3354,7 +3374,7 @@ void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
 	drbd_md_sync(device);
 }
 
-void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
+void drbd_uuid_set_bm(struct drbd_device *device, u64 val)
 {
 	unsigned long flags;
 	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3387,7 +3407,7 @@ void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
  * Sets all bits in the bitmap and writes the whole bitmap to stable storage.
  */
 int drbd_bmio_set_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
+			  struct drbd_peer_device *peer_device)
 
 {
 	int rv = -EIO;
@@ -3414,7 +3434,7 @@ int drbd_bmio_set_n_write(struct drbd_device *device,
  * Clears all bits in the bitmap and writes the whole bitmap to stable storage.
  */
 int drbd_bmio_clear_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
+			  struct drbd_peer_device *peer_device)
 
 {
 	drbd_resume_al(device);
@@ -3541,7 +3561,7 @@ int drbd_bitmap_io(struct drbd_device *device,
 	return rv;
 }
 
-void drbd_md_set_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_set_flag(struct drbd_device *device, int flag)
 {
 	if ((device->ldev->md.flags & flag) != flag) {
 		drbd_md_mark_dirty(device);
@@ -3549,7 +3569,7 @@ void drbd_md_set_flag(struct drbd_device *device, int flag) __must_hold(local)
 	}
 }
 
-void drbd_md_clear_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_clear_flag(struct drbd_device *device, int flag)
 {
 	if ((device->ldev->md.flags & flag) != 0) {
 		drbd_md_mark_dirty(device);
@@ -3649,6 +3669,7 @@ const char *cmdname(enum drbd_packet cmd)
  *		struct drbd_peer_request
  */
 int drbd_wait_misc(struct drbd_device *device, struct drbd_interval *i)
+	__must_hold(&device->resource->req_lock)
 {
 	struct net_conf *nc;
 	DEFINE_WAIT(wait);
@@ -3678,6 +3699,8 @@ int drbd_wait_misc(struct drbd_device *device, struct drbd_interval *i)
 }
 
 void lock_all_resources(void)
+	__acquires(&resources_mutex)
+	__no_context_analysis /* locking loop */
 {
 	struct drbd_resource *resource;
 	int __maybe_unused i = 0;
@@ -3689,6 +3712,8 @@ void lock_all_resources(void)
 }
 
 void unlock_all_resources(void)
+	__releases(&resources_mutex)
+	__no_context_analysis /* unlock loop */
 {
 	struct drbd_resource *resource;
 
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 728ecc431b38..cf505b31d040 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -927,7 +927,7 @@ void drbd_resume_io(struct drbd_device *device)
  * You should call drbd_md_sync() after calling this function.
  */
 enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct resize_parms *rs) __must_hold(local)
+drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct resize_parms *rs)
 {
 	struct md_offsets_and_sizes {
 		u64 last_agreed_sect;
@@ -3025,7 +3025,7 @@ static int drbd_adm_simple_request_state(struct sk_buff *skb, struct genl_info *
 }
 
 static int drbd_bmio_set_susp_al(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
 {
 	int rv;
 
@@ -3453,6 +3453,7 @@ int drbd_adm_dump_connections_done(struct netlink_callback *cb)
 enum { SINGLE_RESOURCE, ITERATE_RESOURCES };
 
 int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
+	__no_context_analysis /* too complex for Clang */
 {
 	struct nlattr *resource_filter;
 	struct drbd_resource *resource = NULL, *next_resource;
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 58b95bf4bdca..9c49b977bc22 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -175,7 +175,7 @@ You must not have the req_lock:
  * trim: payload_size == 0 */
 struct drbd_peer_request *
 drbd_alloc_peer_req(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-		    unsigned int request_size, unsigned int payload_size, gfp_t gfp_mask) __must_hold(local)
+		    unsigned int request_size, unsigned int payload_size, gfp_t gfp_mask)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_peer_request *peer_req;
@@ -287,6 +287,7 @@ static int drbd_finish_peer_reqs(struct drbd_device *device)
 
 static void _drbd_wait_ee_list_empty(struct drbd_device *device,
 				     struct list_head *head)
+	__must_hold(&device->resource->req_lock)
 {
 	DEFINE_WAIT(wait);
 
@@ -733,6 +734,7 @@ int drbd_connected(struct drbd_peer_device *peer_device)
  *  -2 We do not have a network config...
  */
 static int conn_connect(struct drbd_connection *connection)
+	__no_context_analysis /* conditional locking */
 {
 	struct drbd_socket sock, msock;
 	struct drbd_peer_device *peer_device;
@@ -1657,7 +1659,7 @@ static void drbd_csum_ee_size(struct crypto_shash *h,
  */
 static struct drbd_peer_request *
 read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-	      struct packet_info *pi) __must_hold(local)
+	      struct packet_info *pi)
 {
 	struct drbd_device *device = peer_device->device;
 	const sector_t capacity = get_capacity(device->vdisk);
@@ -1869,7 +1871,7 @@ static int e_end_resync_block(struct drbd_work *w, int unused)
 }
 
 static int recv_resync_read(struct drbd_peer_device *peer_device, sector_t sector,
-			    struct packet_info *pi) __releases(local)
+			    struct packet_info *pi)
 {
 	struct drbd_device *device = peer_device->device;
 	struct drbd_peer_request *peer_req;
@@ -2230,6 +2232,7 @@ static blk_opf_t wire_flags_to_bio(struct drbd_connection *connection, u32 dpf)
 
 static void fail_postponed_requests(struct drbd_device *device, sector_t sector,
 				    unsigned int size)
+	__must_hold(&device->resource->req_lock)
 {
 	struct drbd_peer_device *peer_device = first_peer_device(device);
 	struct drbd_interval *i;
@@ -2256,6 +2259,7 @@ static void fail_postponed_requests(struct drbd_device *device, sector_t sector,
 
 static int handle_write_conflicts(struct drbd_device *device,
 				  struct drbd_peer_request *peer_req)
+	__must_hold(&device->resource->req_lock)
 {
 	struct drbd_connection *connection = peer_req->peer_device->connection;
 	bool resolve_conflicts = test_bit(RESOLVE_CONFLICTS, &connection->flags);
@@ -2826,7 +2830,7 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
 /*
  * drbd_asb_recover_0p  -  Recover after split-brain with no remaining primaries
  */
-static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
 	int self, peer, rv = -100;
@@ -2909,7 +2913,7 @@ static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold
 /*
  * drbd_asb_recover_1p  -  Recover after split-brain with one remaining primary
  */
-static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
 	int hg, rv = -100;
@@ -2966,7 +2970,7 @@ static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold
 /*
  * drbd_asb_recover_2p  -  Recover after split-brain with two remaining primaries
  */
-static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device)
 {
 	struct drbd_device *device = peer_device->device;
 	int hg, rv = -100;
@@ -3044,7 +3048,7 @@ static void drbd_uuid_dump(struct drbd_device *device, char *text, u64 *uuid,
  */
 
 static int drbd_uuid_compare(struct drbd_peer_device *const peer_device,
-		enum drbd_role const peer_role, int *rule_nr) __must_hold(local)
+		enum drbd_role const peer_role, int *rule_nr)
 {
 	struct drbd_connection *const connection = peer_device->connection;
 	struct drbd_device *device = peer_device->device;
@@ -3264,7 +3268,7 @@ static int drbd_uuid_compare(struct drbd_peer_device *const peer_device,
  */
 static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
 					   enum drbd_role peer_role,
-					   enum drbd_disk_state peer_disk) __must_hold(local)
+					   enum drbd_disk_state peer_disk)
 {
 	struct drbd_device *device = peer_device->device;
 	enum drbd_conns rv = C_MASK;
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 70f75ef07945..ca7e511e83a6 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -952,6 +952,7 @@ static bool remote_due_to_read_balancing(struct drbd_device *device, sector_t se
  * Only way out: remove the conflicting intervals from the tree.
  */
 static void complete_conflicting_writes(struct drbd_request *req)
+	__must_hold(&req->device->resource->req_lock)
 {
 	DEFINE_WAIT(wait);
 	struct drbd_device *device = req->device;
@@ -1325,6 +1326,7 @@ static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request
 	bool submit_private_bio = false;
 
 	spin_lock_irq(&resource->req_lock);
+	__assume_ctx_lock(&req->device->resource->req_lock);
 	if (rw == WRITE) {
 		/* This may temporarily give up the req_lock,
 		 * but will re-aquire it before it returns here.
diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
index adcba7f1d8ea..2ab7208cce59 100644
--- a/drivers/block/drbd/drbd_state.c
+++ b/drivers/block/drbd/drbd_state.c
@@ -562,6 +562,7 @@ _req_st_cond(struct drbd_device *device, union drbd_state mask,
 static enum drbd_state_rv
 drbd_req_state(struct drbd_device *device, union drbd_state mask,
 	       union drbd_state val, enum chg_state_flags f)
+	__no_context_analysis /* conditional locking */
 {
 	struct completion done;
 	unsigned long flags;
@@ -699,6 +700,7 @@ int drbd_request_detach_interruptible(struct drbd_device *device)
 enum drbd_state_rv
 _drbd_request_state_holding_state_mutex(struct drbd_device *device, union drbd_state mask,
 		    union drbd_state val, enum chg_state_flags f)
+	__must_hold(&device->state_mutex)
 {
 	enum drbd_state_rv rv;
 
@@ -2292,6 +2294,7 @@ _conn_rq_cond(struct drbd_connection *connection, union drbd_state mask, union d
 enum drbd_state_rv
 _conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
 		    enum chg_state_flags flags)
+	__no_context_analysis /* conditional locking */
 {
 	enum drbd_state_rv rv = SS_SUCCESS;
 	struct after_conn_state_chg_work *acscw;
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index 0697f99fed18..6fec59bbf0e9 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -78,7 +78,7 @@ void drbd_md_endio(struct bio *bio)
 /* reads on behalf of the partner,
  * "submitted" by the receiver
  */
-static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req)
 {
 	unsigned long flags = 0;
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
@@ -99,7 +99,7 @@ static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req) __rele
 
 /* writes on behalf of the partner, or resync writes,
  * "submitted" by the receiver, final stage.  */
-void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req)
 {
 	unsigned long flags = 0;
 	struct drbd_peer_device *peer_device = peer_req->peer_device;
@@ -1923,10 +1923,8 @@ static void drbd_ldev_destroy(struct drbd_device *device)
 	lc_destroy(device->act_log);
 	device->act_log = NULL;
 
-	__acquire(local);
 	drbd_backing_dev_free(device, device->ldev);
 	device->ldev = NULL;
-	__release(local);
 
 	clear_bit(GOING_DISKLESS, &device->flags);
 	wake_up(&device->misc_wait);

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 06/14] loop: Add lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (4 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 07/14] nbd: " Bart Van Assche
                   ` (7 subsequent siblings)
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Nathan Chancellor

Prepare for enabling lock context analysis by adding lock context
annotations that are compatible with Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 0000913f7efc..9c7ed9e8a442 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -107,6 +107,8 @@ static DEFINE_MUTEX(loop_validate_mutex);
  * loop_configure()/loop_change_fd()/__loop_clr_fd() calls.
  */
 static int loop_global_lock_killable(struct loop_device *lo, bool global)
+	__cond_acquires(0, &lo->lo_mutex)
+	__no_context_analysis /* conditional locking */
 {
 	int err;
 
@@ -128,6 +130,8 @@ static int loop_global_lock_killable(struct loop_device *lo, bool global)
  * @global: true if @lo was about to bind another "struct loop_device", false otherwise
  */
 static void loop_global_unlock(struct loop_device *lo, bool global)
+	__releases(&lo->lo_mutex)
+	__no_context_analysis /* conditional locking */
 {
 	mutex_unlock(&lo->lo_mutex);
 	if (global)

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 07/14] nbd: Add lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (5 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 06/14] loop: Add lock context annotations Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 08/14] null_blk: Add more " Bart Van Assche
                   ` (6 subsequent siblings)
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Josef Bacik, Nathan Chancellor

Prepare for enabling lock context analysis by adding the lock context
annotations that Clang requires.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/nbd.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index fe63f3c55d0d..28bb89bc7de3 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1469,6 +1469,7 @@ static void nbd_config_put(struct nbd_device *nbd)
 }
 
 static int nbd_start_device(struct nbd_device *nbd)
+	__must_hold(&nbd->config_lock)
 {
 	struct nbd_config *config = nbd->config;
 	int num_connections = config->num_connections;
@@ -1541,6 +1542,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 }
 
 static int nbd_start_device_ioctl(struct nbd_device *nbd)
+	__must_hold(nbd->config_lock)
 {
 	struct nbd_config *config = nbd->config;
 	int ret;
@@ -1592,6 +1594,7 @@ static void nbd_set_cmd_timeout(struct nbd_device *nbd, u64 timeout)
 /* Must be called with config_lock held */
 static int __nbd_ioctl(struct block_device *bdev, struct nbd_device *nbd,
 		       unsigned int cmd, unsigned long arg)
+	__must_hold(nbd->config_lock)
 {
 	struct nbd_config *config = nbd->config;
 	loff_t bytesize;

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 08/14] null_blk: Add more lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (6 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 07/14] nbd: " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 09/14] rbd: Add " Bart Van Assche
                   ` (5 subsequent siblings)
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Nathan Chancellor, Keith Busch,
	Chaitanya Kulkarni, Johannes Thumshirn, Zheng Qixing,
	Matthew Wilcox (Oracle), Thorsten Blum, Nilay Shroff, Kees Cook

Prepare for enabling lock context analysis by adding the lock context
annotations that are required by Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/null_blk/main.c  | 7 +++++--
 drivers/block/null_blk/zoned.c | 2 ++
 2 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index f8c0fd57e041..677ac829ef80 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1004,8 +1004,7 @@ static struct nullb_page *null_lookup_page(struct nullb *nullb,
 
 static struct nullb_page *null_insert_page(struct nullb *nullb,
 					   sector_t sector, bool ignore_cache)
-	__releases(&nullb->lock)
-	__acquires(&nullb->lock)
+	__must_hold(&nullb->lock)
 {
 	u64 idx;
 	struct nullb_page *t_page;
@@ -1038,6 +1037,7 @@ static struct nullb_page *null_insert_page(struct nullb *nullb,
 }
 
 static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
+	__must_hold(&nullb->lock)
 {
 	int i;
 	unsigned int offset;
@@ -1087,6 +1087,7 @@ static int null_flush_cache_page(struct nullb *nullb, struct nullb_page *c_page)
 }
 
 static int null_make_cache_space(struct nullb *nullb, unsigned long n)
+	__must_hold(&nullb->lock)
 {
 	int i, err, nr_pages;
 	struct nullb_page *c_pages[FREE_BATCH];
@@ -1141,6 +1142,7 @@ static int null_make_cache_space(struct nullb *nullb, unsigned long n)
 
 static blk_status_t copy_to_nullb(struct nullb *nullb, void *source,
 				  loff_t pos, size_t n, bool is_fua)
+	__must_hold(&nullb->lock)
 {
 	size_t temp, count = 0;
 	struct nullb_page *t_page;
@@ -1242,6 +1244,7 @@ static blk_status_t null_handle_flush(struct nullb *nullb)
 static blk_status_t null_transfer(struct nullb *nullb, struct page *page,
 	unsigned int len, unsigned int off, bool is_write, loff_t pos,
 	bool is_fua)
+	__must_hold(&nullb->lock)
 {
 	struct nullb_device *dev = nullb->dev;
 	blk_status_t err = BLK_STS_OK;
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index 384bdce6a9b7..a7f94e76034f 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -32,6 +32,7 @@ static inline void null_init_zone_lock(struct nullb_device *dev,
 
 static inline void null_lock_zone(struct nullb_device *dev,
 				  struct nullb_zone *zone)
+	__no_context_analysis /* conditional locking */
 {
 	if (!dev->memory_backed)
 		spin_lock_irq(&zone->spinlock);
@@ -41,6 +42,7 @@ static inline void null_lock_zone(struct nullb_device *dev,
 
 static inline void null_unlock_zone(struct nullb_device *dev,
 				    struct nullb_zone *zone)
+	__no_context_analysis /* conditional locking */
 {
 	if (!dev->memory_backed)
 		spin_unlock_irq(&zone->spinlock);
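The two helpers above take the zone spinlock only when the device is not
memory backed, which is why they are marked __no_context_analysis: a
conditional acquire/release cannot be expressed with the static
annotations. A hypothetical userspace sketch of the same pattern (pthread
mutex instead of a kernel spinlock, all names invented):

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>

/* Stand-in for the kernel's __no_context_analysis annotation: it expands to
 * Clang's no_thread_safety_analysis attribute under Clang and to nothing
 * elsewhere, so the example builds with either compiler. */
#if defined(__clang__)
#define NO_CONTEXT_ANALYSIS __attribute__((no_thread_safety_analysis))
#else
#define NO_CONTEXT_ANALYSIS
#endif

/* Hypothetical mirror of nullb_zone: as in null_blk, the lock is only taken
 * when the device is not memory backed, so whether zone_lock() pairs with a
 * later zone_unlock() depends on a runtime flag that a purely static
 * analysis cannot track. */
struct zone {
	pthread_mutex_t lock;
	bool memory_backed;
	int wp;			/* zone write pointer */
};

static void zone_lock(struct zone *z) NO_CONTEXT_ANALYSIS
{
	if (!z->memory_backed)
		pthread_mutex_lock(&z->lock);
}

static void zone_unlock(struct zone *z) NO_CONTEXT_ANALYSIS
{
	if (!z->memory_backed)
		pthread_mutex_unlock(&z->lock);
}

static int zone_append(struct zone *z)
{
	zone_lock(z);
	int wp = z->wp++;	/* advance the write pointer under the lock */
	zone_unlock(z);
	return wp;
}
```

Annotating zone_lock() with __acquires() instead would make Clang complain
about the branch that skips the lock, hence the opt-out.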

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH 09/14] rbd: Add lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (7 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 08/14] null_blk: Add more " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 10/14] rnbd: Add more " Bart Van Assche
                   ` (4 subsequent siblings)
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Ilya Dryomov, Nathan Chancellor

Prepare for enabling lock context analysis by adding the lock context
annotations that are required by Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/rbd.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index e7da06200c1e..595e4ba720ec 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4185,6 +4185,7 @@ static void rbd_acquire_lock(struct work_struct *work)
 }
 
 static bool rbd_quiesce_lock(struct rbd_device *rbd_dev)
+	__must_hold(&rbd_dev->lock_rwsem)
 {
 	dout("%s rbd_dev %p\n", __func__, rbd_dev);
 	lockdep_assert_held_write(&rbd_dev->lock_rwsem);
@@ -4229,6 +4230,7 @@ static void __rbd_release_lock(struct rbd_device *rbd_dev)
  * lock_rwsem must be held for write
  */
 static void rbd_release_lock(struct rbd_device *rbd_dev)
+	__must_hold(&rbd_dev->lock_rwsem)
 {
 	if (!rbd_quiesce_lock(rbd_dev))
 		return;
@@ -4597,6 +4599,7 @@ static void rbd_unregister_watch(struct rbd_device *rbd_dev)
  * lock_rwsem must be held for write
  */
 static void rbd_reacquire_lock(struct rbd_device *rbd_dev)
+	__must_hold(&rbd_dev->lock_rwsem)
 {
 	struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
 	char cookie[32];
@@ -6789,6 +6792,7 @@ static void rbd_dev_device_release(struct rbd_device *rbd_dev)
  * upon return.
  */
 static int rbd_dev_device_setup(struct rbd_device *rbd_dev)
+	__releases(&rbd_dev->header_rwsem)
 {
 	int ret;
 
@@ -6890,6 +6894,7 @@ static void rbd_dev_image_release(struct rbd_device *rbd_dev)
  * with @depth == 0.
  */
 static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
+	__no_context_analysis /* conditional locking */
 {
 	bool need_watch = !rbd_is_ro(rbd_dev);
 	int ret;
@@ -7143,6 +7148,8 @@ static ssize_t do_rbd_add(const char *buf, size_t count)
 	if (rc < 0)
 		goto err_out_rbd_dev;
 
+	__acquire(&rbd_dev->header_rwsem);
+
 	if (rbd_dev->opts->alloc_size > rbd_dev->layout.object_size) {
 		rbd_warn(rbd_dev, "alloc_size adjusted to %u",
 			 rbd_dev->layout.object_size);


* [PATCH 10/14] rnbd: Add more lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (8 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 09/14] rbd: Add " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-06 13:09   ` Marco Elver
  2026-03-04 19:48 ` [PATCH 11/14] ublk: Fix the " Bart Van Assche
                   ` (3 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Md. Haris Iqbal, Jack Wang, Nathan Chancellor

Prepare for enabling lock context analysis by adding the lock context
annotations required by Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/rnbd/rnbd-clt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 4d6725a0035e..7f0f29b8e75a 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -833,6 +833,7 @@ static int wait_for_rtrs_connection(struct rnbd_clt_session *sess)
 static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
 	__releases(&sess_lock)
 	__acquires(&sess_lock)
+	__must_hold(sess_lock)
 {
 	DEFINE_WAIT(wait);
 
@@ -855,6 +856,7 @@ static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
 static struct rnbd_clt_session *__find_and_get_sess(const char *sessname)
 	__releases(&sess_lock)
 	__acquires(&sess_lock)
+	__must_hold(sess_lock)
 {
 	struct rnbd_clt_session *sess, *sn;
 	int err;

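The __releases()/__acquires()/__must_hold() triple on
wait_for_rtrs_disconnection() above describes a function that is entered
and left with the lock held but drops it while sleeping. A hypothetical
userspace sketch of that pattern (pthread primitives, invented names),
where pthread_cond_wait() performs exactly such a release-and-reacquire:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Invented userspace mirror of the rnbd-clt session wait: one thread waits
 * for a "disconnected" event with the session lock held, another thread
 * triggers the event. */
static pthread_mutex_t sess_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t sess_cond = PTHREAD_COND_INITIALIZER;
static int disconnected;

/* Caller must hold sess_lock; the lock is released while waiting and is
 * held again when this function returns, which is what the combined
 * __must_hold()/__releases()/__acquires() annotations express. */
static void wait_for_disconnection(void)
{
	while (!disconnected)
		pthread_cond_wait(&sess_cond, &sess_lock);
}

static void *disconnect(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&sess_lock);
	disconnected = 1;
	pthread_cond_signal(&sess_cond);
	pthread_mutex_unlock(&sess_lock);
	return NULL;
}

static int run_session_teardown(void)
{
	pthread_t t;

	pthread_mutex_lock(&sess_lock);
	pthread_create(&t, NULL, disconnect, NULL);
	wait_for_disconnection();	/* returns with sess_lock held */
	pthread_mutex_unlock(&sess_lock);
	pthread_join(t, NULL);
	return disconnected;
}
```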

* [PATCH 11/14] ublk: Fix the lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (9 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 10/14] rnbd: Add more " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 20:43   ` Caleb Sander Mateos
  2026-03-04 19:48 ` [PATCH 12/14] zloop: Add a " Bart Van Assche
                   ` (2 subsequent siblings)
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Ming Lei, Nathan Chancellor

Add the lock context annotations that are required by Clang. Remove the
__must_hold(&ub->mutex) annotation from ublk_mark_io_ready() because not
all callers hold ub->mutex.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/ublk_drv.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 34ed4f6a02ef..70f2ebde3be9 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -353,11 +353,13 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
 }
 
 static inline void ublk_io_lock(struct ublk_io *io)
+	__acquires(&io->lock)
 {
 	spin_lock(&io->lock);
 }
 
 static inline void ublk_io_unlock(struct ublk_io *io)
+	__releases(&io->lock)
 {
 	spin_unlock(&io->lock);
 }
@@ -2926,7 +2928,6 @@ static void ublk_queue_reset_io_flags(struct ublk_queue *ubq)
 
 /* device can only be started after all IOs are ready */
 static void ublk_mark_io_ready(struct ublk_device *ub, u16 q_id)
-	__must_hold(&ub->mutex)
 {
 	struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
 
@@ -3160,6 +3161,7 @@ static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
 
 static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
 			struct ublk_io *io, u16 q_id)
+	__must_hold(&ub->mutex)
 {
 	/* UBLK_IO_FETCH_REQ is only allowed before dev is setup */
 	if (ublk_dev_ready(ub))
@@ -3598,9 +3600,11 @@ static int ublk_batch_prep_io(struct ublk_queue *ubq,
 	}
 
 	ublk_io_lock(io);
+	__acquire(&data->ub->mutex);
 	ret = __ublk_fetch(data->cmd, data->ub, io, ubq->q_id);
 	if (!ret)
 		io->buf = buf;
+	__release(&data->ub->mutex);
 	ublk_io_unlock(io);
 
 	if (!ret)


* [PATCH 12/14] zloop: Add a lock context annotation
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (10 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 11/14] ublk: Fix the " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-04 19:48 ` [PATCH 13/14] zram: Add " Bart Van Assche
  2026-03-04 19:48 ` [PATCH 14/14] block: Enable lock context analysis for all block drivers Bart Van Assche
  13 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Nathan Chancellor

Prepare for enabling lock context analysis by adding a lock context
annotation that is required by Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/zloop.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/zloop.c b/drivers/block/zloop.c
index 51c043342127..b3654fd814c7 100644
--- a/drivers/block/zloop.c
+++ b/drivers/block/zloop.c
@@ -379,6 +379,7 @@ static void zloop_rw_complete(struct kiocb *iocb, long ret)
 }
 
 static void zloop_rw(struct zloop_cmd *cmd)
+	__no_context_analysis /* conditional locking */
 {
 	struct request *rq = blk_mq_rq_from_pdu(cmd);
 	struct zloop_device *zlo = rq->q->queuedata;


* [PATCH 13/14] zram: Add lock context annotations
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (11 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 12/14] zloop: Add a " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-05  1:23   ` Sergey Senozhatsky
  2026-03-04 19:48 ` [PATCH 14/14] block: Enable lock context analysis for all block drivers Bart Van Assche
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Minchan Kim, Sergey Senozhatsky,
	Nathan Chancellor

Prepare for enabling lock context analysis by adding the lock context
annotations that are required by Clang.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/zram/zcomp.c    | 3 ++-
 drivers/block/zram/zcomp.h    | 6 ++++--
 drivers/block/zram/zram_drv.c | 1 +
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index a771a8ecc540..c5dc9bb7046d 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -107,7 +107,8 @@ ssize_t zcomp_available_show(const char *comp, char *buf, ssize_t at)
 	return at;
 }
 
-struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
+struct zcomp_strm *__zcomp_stream_get(struct zcomp *comp)
+	__no_context_analysis /* acquire related to return value */
 {
 	for (;;) {
 		struct zcomp_strm *zstrm = raw_cpu_ptr(comp->stream);
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index eacfd3f7d61d..4814087e8ac9 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -85,8 +85,10 @@ bool zcomp_available_algorithm(const char *comp);
 struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
-struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp_strm *zstrm);
+#define zcomp_stream_get(...) __acquire_ret(__zcomp_stream_get(__VA_ARGS__), &__ret->lock)
+struct zcomp_strm *__zcomp_stream_get(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp_strm *zstrm)
+	__releases(&zstrm->lock);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
 		   const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index bca33403fc8b..41d3d2a2752d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2395,6 +2395,7 @@ static int scan_slots_for_recompress(struct zram *zram, u32 mode, u32 prio_max,
 static int recompress_slot(struct zram *zram, u32 index, struct page *page,
 			   u64 *num_recomp_pages, u32 threshold, u32 prio,
 			   u32 prio_max)
+	__no_context_analysis /* too complex for Clang */
 {
 	struct zcomp_strm *zstrm = NULL;
 	unsigned long handle_old;

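The zcomp_stream_get() wrapper above relies on __acquire_ret() from the
context-analysis series; its exact definition may differ, but the idea can
be sketched in userspace roughly as follows (hypothetical names, using a
GCC/Clang statement-expression macro that marks the lock inside the
*returned* object as acquired):

```c
#include <assert.h>
#include <pthread.h>

/* No-op stand-in for the kernel's __acquire() annotation. */
#define __acquire_sketch(x) (void)0

/* Sketch of the __acquire_ret() idea: evaluate the call, tell the analyzer
 * that the lock embedded in the returned object is now held, then yield
 * the return value. */
#define acquire_ret(call, lock_expr)		\
	({					\
		__auto_type __ret = (call);	\
		__acquire_sketch(lock_expr);	\
		__ret;				\
	})

struct strm {
	pthread_mutex_t lock;
	int busy;
};

static struct strm global_strm = { PTHREAD_MUTEX_INITIALIZER, 0 };

/* Mirrors __zcomp_stream_get(): acquires the per-stream lock and returns
 * the stream, so the acquire is tied to the return value rather than to a
 * parameter, which the plain __acquires() annotation cannot express. */
static struct strm *__strm_get(void)
{
	struct strm *s = &global_strm;

	pthread_mutex_lock(&s->lock);
	s->busy = 1;
	return s;
}

/* Callers use the wrapper; '__ret' names the value returned by the call. */
#define strm_get() acquire_ret(__strm_get(), &__ret->lock)

static void strm_put(struct strm *s)
{
	s->busy = 0;
	pthread_mutex_unlock(&s->lock);
}
```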

* [PATCH 14/14] block: Enable lock context analysis for all block drivers
  2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
                   ` (12 preceding siblings ...)
  2026-03-04 19:48 ` [PATCH 13/14] zram: Add " Bart Van Assche
@ 2026-03-04 19:48 ` Bart Van Assche
  2026-03-05  1:33   ` Sergey Senozhatsky
  13 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 19:48 UTC (permalink / raw)
  To: Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Bart Van Assche, Justin Sanders, Philipp Reisner, Lars Ellenberg,
	Christoph Böhmwalder, Md. Haris Iqbal, Jack Wang,
	Roger Pau Monné, Minchan Kim, Sergey Senozhatsky

Now that all locking functions in block drivers have been annotated,
enable lock context analysis for all block drivers.

Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/Makefile             | 2 ++
 drivers/block/aoe/Makefile         | 2 ++
 drivers/block/drbd/Makefile        | 3 +++
 drivers/block/mtip32xx/Makefile    | 2 ++
 drivers/block/null_blk/Makefile    | 2 ++
 drivers/block/rnbd/Makefile        | 2 ++
 drivers/block/xen-blkback/Makefile | 3 +++
 drivers/block/zram/Makefile        | 2 ++
 8 files changed, 18 insertions(+)

diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 2d8096eb8cdf..e17f6381b798 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -6,6 +6,8 @@
 # Rewritten to use lists instead of if-statements.
 # 
 
+CONTEXT_ANALYSIS := y
+
 # needed for trace events
 ccflags-y				+= -I$(src)
 
diff --git a/drivers/block/aoe/Makefile b/drivers/block/aoe/Makefile
index b7545ce2f1b0..27bff6359a56 100644
--- a/drivers/block/aoe/Makefile
+++ b/drivers/block/aoe/Makefile
@@ -3,5 +3,7 @@
 # Makefile for ATA over Ethernet
 #
 
+CONTEXT_ANALYSIS := y
+
 obj-$(CONFIG_ATA_OVER_ETH)	+= aoe.o
 aoe-y := aoeblk.o aoechr.o aoecmd.o aoedev.o aoemain.o aoenet.o
diff --git a/drivers/block/drbd/Makefile b/drivers/block/drbd/Makefile
index 67a8b352a1d5..8eaa83a7592b 100644
--- a/drivers/block/drbd/Makefile
+++ b/drivers/block/drbd/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
+
+CONTEXT_ANALYSIS := y
+
 drbd-y := drbd_buildtag.o drbd_bitmap.o drbd_proc.o
 drbd-y += drbd_worker.o drbd_receiver.o drbd_req.o drbd_actlog.o
 drbd-y += drbd_main.o drbd_strings.o drbd_nl.o
diff --git a/drivers/block/mtip32xx/Makefile b/drivers/block/mtip32xx/Makefile
index bff32b5d3c19..233961fdb41b 100644
--- a/drivers/block/mtip32xx/Makefile
+++ b/drivers/block/mtip32xx/Makefile
@@ -3,4 +3,6 @@
 # Makefile for  Block device driver for Micron PCIe SSD
 #
 
+CONTEXT_ANALYSIS := y
+
 obj-$(CONFIG_BLK_DEV_PCIESSD_MTIP32XX) += mtip32xx.o
diff --git a/drivers/block/null_blk/Makefile b/drivers/block/null_blk/Makefile
index 84c36e512ab8..282b0d51a477 100644
--- a/drivers/block/null_blk/Makefile
+++ b/drivers/block/null_blk/Makefile
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 
+CONTEXT_ANALYSIS := y
+
 # needed for trace events
 ccflags-y			+= -I$(src)
 
diff --git a/drivers/block/rnbd/Makefile b/drivers/block/rnbd/Makefile
index 208e5f865497..42c2cccdb53d 100644
--- a/drivers/block/rnbd/Makefile
+++ b/drivers/block/rnbd/Makefile
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-or-later
 
+CONTEXT_ANALYSIS := y
+
 ccflags-y := -I$(srctree)/drivers/infiniband/ulp/rtrs
 
 rnbd-client-y := rnbd-clt.o \
diff --git a/drivers/block/xen-blkback/Makefile b/drivers/block/xen-blkback/Makefile
index b0ea5ab5b9a1..864ef423226c 100644
--- a/drivers/block/xen-blkback/Makefile
+++ b/drivers/block/xen-blkback/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
+
+CONTEXT_ANALYSIS := y
+
 obj-$(CONFIG_XEN_BLKDEV_BACKEND) := xen-blkback.o
 
 xen-blkback-y	:= blkback.o xenbus.o
diff --git a/drivers/block/zram/Makefile b/drivers/block/zram/Makefile
index 0fdefd576691..a5663ab01653 100644
--- a/drivers/block/zram/Makefile
+++ b/drivers/block/zram/Makefile
@@ -1,5 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
+CONTEXT_ANALYSIS := y
+
 zram-y	:=	zcomp.o zram_drv.o
 
 zram-$(CONFIG_ZRAM_BACKEND_LZO)		+= backend_lzorle.o backend_lzo.o


* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 19:48 ` [PATCH 03/14] block: Make the lock context annotations compatible with Clang Bart Van Assche
@ 2026-03-04 20:03   ` Tejun Heo
  2026-03-04 20:29     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Tejun Heo @ 2026-03-04 20:03 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On Wed, Mar 04, 2026 at 11:48:22AM -0800, Bart Van Assche wrote:
> Clang is more strict than sparse with regard to lock context annotation
> checking. Hence this patch that makes the lock context annotations
> compatible with Clang. __release() annotations have been added below
> invocations of indirect calls that unlock a mutex because Clang does not
> support annotating function pointers with __releases().
> 
> Enable context analysis in the block layer Makefile.

Maybe I'm in the minority here but are these annotations actually useful?
What do these capture that lockdep can't? Can we just remove these?

Thanks.

-- 
tejun


* Re: [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices()
  2026-03-04 19:48 ` [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices() Bart Van Assche
@ 2026-03-04 20:25   ` Damien Le Moal
  2026-03-04 20:59     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Damien Le Moal @ 2026-03-04 20:25 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, Marco Elver, linux-block,
	Christoph Böhmwalder, Andreas Gruenbacher, Philipp Reisner,
	Lars Ellenberg, Nathan Chancellor

On 3/5/26 04:48, Bart Van Assche wrote:
> Make drbd_adm_dump_devices() call rcu_read_lock() before
> rcu_read_unlock() is called. This has been detected by the Clang
> thread-safety analyzer. Compile-tested only.
> 
> Tested-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
> Cc: Andreas Gruenbacher <agruen@linbit.com>
> Fixes: a55bbd375d18 ("drbd: Backport the "status" command")
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

This probably should be sent independently of this series, and immediately,
as this looks like a serious bug.

-- 
Damien Le Moal
Western Digital Research


* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 20:03   ` Tejun Heo
@ 2026-03-04 20:29     ` Bart Van Assche
  2026-03-04 20:58       ` Tejun Heo
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 20:29 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On 3/4/26 2:03 PM, Tejun Heo wrote:
> On Wed, Mar 04, 2026 at 11:48:22AM -0800, Bart Van Assche wrote:
>> Clang is more strict than sparse with regard to lock context annotation
>> checking. Hence this patch that makes the lock context annotations
>> compatible with Clang. __release() annotations have been added below
>> invocations of indirect calls that unlock a mutex because Clang does not
>> support annotating function pointers with __releases().
>>
>> Enable context analysis in the block layer Makefile.
> 
> Maybe I'm in the minority here but are these annotations actually useful?
> What do these capture that lockdep can't? Can we just remove these?

Every Linux kernel release cycle new locking bugs are introduced, often
in error paths. Clang can detect many of these bugs at compile time.
This is why I would like to enable lock context analysis for the entire
kernel tree. This patch series only covers the block layer and block
drivers. The entire patch series (needs to be split further) is
available here: https://github.com/bvanassche/linux/tree/thread-safety
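For readers unfamiliar with the analysis, here is a rough userspace
illustration of the kind of check involved, using Clang's thread-safety
attributes directly instead of the kernel macros (all names hypothetical):

```c
#include <assert.h>
#include <pthread.h>

/* With clang -Wthread-safety, calling mark_ready() at a site where 'lock'
 * is not held is reported at compile time. The macros expand to Clang's
 * thread-safety attributes under Clang and to nothing elsewhere, so the
 * example builds with either compiler. */
#if defined(__clang__)
#define CAPABILITY(x)	__attribute__((capability(x)))
#define REQUIRES(...)	__attribute__((requires_capability(__VA_ARGS__)))
#define ACQUIRES(...)	__attribute__((acquire_capability(__VA_ARGS__)))
#define RELEASES(...)	__attribute__((release_capability(__VA_ARGS__)))
#define NO_ANALYSIS	__attribute__((no_thread_safety_analysis))
#else
#define CAPABILITY(x)
#define REQUIRES(...)
#define ACQUIRES(...)
#define RELEASES(...)
#define NO_ANALYSIS
#endif

struct CAPABILITY("mutex") dev_mutex {
	pthread_mutex_t m;
};

static struct dev_mutex lock = { PTHREAD_MUTEX_INITIALIZER };
static int device_ready;	/* protected by 'lock' */

static void dev_lock(struct dev_mutex *l) ACQUIRES(*l) NO_ANALYSIS
{
	pthread_mutex_lock(&l->m);
}

static void dev_unlock(struct dev_mutex *l) RELEASES(*l) NO_ANALYSIS
{
	pthread_mutex_unlock(&l->m);
}

/* Userspace counterpart of a __must_hold() function: the analysis checks
 * at compile time that every caller holds 'lock'. */
static void mark_ready(void) REQUIRES(lock)
{
	device_ready = 1;
}

static int bring_up(void)
{
	dev_lock(&lock);
	mark_ready();		/* OK: 'lock' is held here */
	dev_unlock(&lock);
	/* a mark_ready() call here, outside the lock, would be flagged */
	return device_ready;
}
```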

Thanks,

Bart.


* Re: [PATCH 11/14] ublk: Fix the lock context annotations
  2026-03-04 19:48 ` [PATCH 11/14] ublk: Fix the " Bart Van Assche
@ 2026-03-04 20:43   ` Caleb Sander Mateos
  2026-03-04 20:55     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Caleb Sander Mateos @ 2026-03-04 20:43 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Ming Lei, Nathan Chancellor

On Wed, Mar 4, 2026 at 11:50 AM Bart Van Assche <bvanassche@acm.org> wrote:
>
> Add the lock context annotations that are required by Clang. Remove the
> __must_hold(&ub->mutex) annotation from ublk_mark_io_ready() because not
> all callers hold ub->mutex.
>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/block/ublk_drv.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 34ed4f6a02ef..70f2ebde3be9 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -353,11 +353,13 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
>  }
>
>  static inline void ublk_io_lock(struct ublk_io *io)
> +       __acquires(&io->lock)
>  {
>         spin_lock(&io->lock);
>  }
>
>  static inline void ublk_io_unlock(struct ublk_io *io)
> +       __releases(&io->lock)
>  {
>         spin_unlock(&io->lock);
>  }
> @@ -2926,7 +2928,6 @@ static void ublk_queue_reset_io_flags(struct ublk_queue *ubq)
>
>  /* device can only be started after all IOs are ready */
>  static void ublk_mark_io_ready(struct ublk_device *ub, u16 q_id)
> -       __must_hold(&ub->mutex)

I don't think this is right. Both callers of ublk_mark_io_ready() hold
the mutex: ublk_fetch() acquires it directly, and ublk_batch_prep_io()
is called with it having been acquired in
ublk_handle_batch_prep_cmd(). The stores to ub->unprivileged_daemons
and ub->nr_queue_ready would be data races if the mutex weren't held.

Best,
Caleb

>  {
>         struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
>
> @@ -3160,6 +3161,7 @@ static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
>
>  static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
>                         struct ublk_io *io, u16 q_id)
> +       __must_hold(&ub->mutex)
>  {
>         /* UBLK_IO_FETCH_REQ is only allowed before dev is setup */
>         if (ublk_dev_ready(ub))
> @@ -3598,9 +3600,11 @@ static int ublk_batch_prep_io(struct ublk_queue *ubq,
>         }
>
>         ublk_io_lock(io);
> +       __acquire(&data->ub->mutex);
>         ret = __ublk_fetch(data->cmd, data->ub, io, ubq->q_id);
>         if (!ret)
>                 io->buf = buf;
> +       __release(&data->ub->mutex);
>         ublk_io_unlock(io);
>
>         if (!ret)
>


* Re: [PATCH 11/14] ublk: Fix the lock context annotations
  2026-03-04 20:43   ` Caleb Sander Mateos
@ 2026-03-04 20:55     ` Bart Van Assche
  2026-03-04 21:03       ` Caleb Sander Mateos
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 20:55 UTC (permalink / raw)
  To: Caleb Sander Mateos
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Ming Lei, Nathan Chancellor

On 3/4/26 2:43 PM, Caleb Sander Mateos wrote:
> On Wed, Mar 4, 2026 at 11:50 AM Bart Van Assche <bvanassche@acm.org> wrote:
>>   /* device can only be started after all IOs are ready */
>>   static void ublk_mark_io_ready(struct ublk_device *ub, u16 q_id)
>> -       __must_hold(&ub->mutex)
> 
> I don't think this is right. Both callers of ublk_mark_io_ready() hold
> the mutex: ublk_fetch() acquires it directly, and ublk_batch_prep_io()
> is called with it having been acquired in
> ublk_handle_batch_prep_cmd(). The stores to ub->unprivileged_daemons
> and ub->nr_queue_ready would be data races if the mutex weren't held.

Does this patch look better to you?

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 34ed4f6a02ef..26c368e5358b 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -353,11 +353,13 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
  }

  static inline void ublk_io_lock(struct ublk_io *io)
+       __acquires(&io->lock)
  {
         spin_lock(&io->lock);
  }

  static inline void ublk_io_unlock(struct ublk_io *io)
+       __releases(&io->lock)
  {
         spin_unlock(&io->lock);
  }
@@ -3160,6 +3162,7 @@ static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)

  static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
                         struct ublk_io *io, u16 q_id)
+       __must_hold(&ub->mutex)
  {
         /* UBLK_IO_FETCH_REQ is only allowed before dev is setup */
         if (ublk_dev_ready(ub))
@@ -3581,6 +3584,7 @@ static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
  static int ublk_batch_prep_io(struct ublk_queue *ubq,
                               const struct ublk_batch_io_data *data,
                               const struct ublk_elem_header *elem)
+       __must_hold(&data->ub->mutex)
  {
         struct ublk_io *io = &ubq->ios[elem->tag];
         const struct ublk_batch_io *uc = &data->header;

Thanks,

Bart.


* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 20:29     ` Bart Van Assche
@ 2026-03-04 20:58       ` Tejun Heo
  2026-03-04 21:34         ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Tejun Heo @ 2026-03-04 20:58 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On Wed, Mar 04, 2026 at 02:29:06PM -0600, Bart Van Assche wrote:
> On 3/4/26 2:03 PM, Tejun Heo wrote:
> > On Wed, Mar 04, 2026 at 11:48:22AM -0800, Bart Van Assche wrote:
> > > Clang is more strict than sparse with regard to lock context annotation
> > > checking. Hence this patch that makes the lock context annotations
> > > compatible with Clang. __release() annotations have been added below
> > > invocations of indirect calls that unlock a mutex because Clang does not
> > > support annotating function pointers with __releases().
> > > 
> > > Enable context analysis in the block layer Makefile.
> > 
> > Maybe I'm in the minority here but are these annotations actually useful?
> > What do these capture that lockdep can't? Can we just remove these?
> 
> Every Linux kernel release cycle new locking bugs are introduced, often
> in error paths. Clang can detect many of these bugs at compile time.

I mean, yeah, static bug detection is nice but is error-prone manual
annotation the way to do it in this day and age? These annotations have
been around for as long as I can remember and I've never once found them
genuinely useful. Sure, maybe it can flag some latent error path bugs once
in a blue moon but for the most part they're unused and unmaintained
appendages that just add to noise.

Here's a challenge. Can it reliably and in a sustainable manner capture
anything that https://github.com/masoncl/review-prompts can't capture?

Thanks.

-- 
tejun


* Re: [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices()
  2026-03-04 20:25   ` Damien Le Moal
@ 2026-03-04 20:59     ` Bart Van Assche
  0 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 20:59 UTC (permalink / raw)
  To: Damien Le Moal, Jens Axboe
  Cc: Christoph Hellwig, Marco Elver, linux-block,
	Christoph Böhmwalder, Andreas Gruenbacher, Philipp Reisner,
	Lars Ellenberg, Nathan Chancellor

On 3/4/26 2:25 PM, Damien Le Moal wrote:
> On 3/5/26 04:48, Bart Van Assche wrote:
>> Make drbd_adm_dump_devices() call rcu_read_lock() before
>> rcu_read_unlock() is called. This has been detected by the Clang
>> thread-safety analyzer. Compile-tested only.
>>
>> Tested-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>
>> Cc: Andreas Gruenbacher <agruen@linbit.com>
>> Fixes: a55bbd375d18 ("drbd: Backport the "status" command")
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> 
> This probably should be sent independently of this series and immediately as
> this look like a serious bug.

Hi Damien,

If my analysis is correct, this patch fixes a bug introduced in August
2014, almost 12 years ago. Since the bug has gone unnoticed for that
long, I don't think that fixing it is super urgent.

Thanks,

Bart.





* Re: [PATCH 11/14] ublk: Fix the lock context annotations
  2026-03-04 20:55     ` Bart Van Assche
@ 2026-03-04 21:03       ` Caleb Sander Mateos
  2026-03-04 21:36         ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Caleb Sander Mateos @ 2026-03-04 21:03 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Ming Lei, Nathan Chancellor

On Wed, Mar 4, 2026 at 12:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 3/4/26 2:43 PM, Caleb Sander Mateos wrote:
> > On Wed, Mar 4, 2026 at 11:50 AM Bart Van Assche <bvanassche@acm.org> wrote:
> >>   /* device can only be started after all IOs are ready */
> >>   static void ublk_mark_io_ready(struct ublk_device *ub, u16 q_id)
> >> -       __must_hold(&ub->mutex)
> >
> > I don't think this is right. Both callers of ublk_mark_io_ready() hold
> > the mutex: ublk_fetch() acquires it directly, and ublk_batch_prep_io()
> > is called with it having been acquired in
> > ublk_handle_batch_prep_cmd(). The stores to ub->unprivileged_daemons
> > and ub->nr_queue_ready would be data races if the mutex weren't held.
>
> Does this patch look better to you?
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 34ed4f6a02ef..26c368e5358b 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -353,11 +353,13 @@ static inline bool ublk_support_batch_io(const struct ublk_queue *ubq)
>   }
>
>   static inline void ublk_io_lock(struct ublk_io *io)
> +       __acquires(&io->lock)
>   {
>          spin_lock(&io->lock);
>   }
>
>   static inline void ublk_io_unlock(struct ublk_io *io)
> +       __releases(&io->lock)
>   {
>          spin_unlock(&io->lock);
>   }
> @@ -3160,6 +3162,7 @@ static int ublk_check_fetch_buf(const struct ublk_device *ub, __u64 buf_addr)
>
>   static int __ublk_fetch(struct io_uring_cmd *cmd, struct ublk_device *ub,
>                          struct ublk_io *io, u16 q_id)
> +       __must_hold(&ub->mutex)
>   {
>          /* UBLK_IO_FETCH_REQ is only allowed before dev is setup */
>          if (ublk_dev_ready(ub))
> @@ -3581,6 +3584,7 @@ static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
>   static int ublk_batch_prep_io(struct ublk_queue *ubq,
>                                const struct ublk_batch_io_data *data,
>                                const struct ublk_elem_header *elem)
> +       __must_hold(&data->ub->mutex)
>   {
>          struct ublk_io *io = &ubq->ios[elem->tag];
>          const struct ublk_batch_io *uc = &data->header;

Sure, these annotations all look correct. But it's not clear to me how
you're deciding which functions need an annotation. Is clang just
unable to see that ub->mutex is held here because the function is
called indirectly through a function pointer?

Best,
Caleb

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 20:58       ` Tejun Heo
@ 2026-03-04 21:34         ` Bart Van Assche
  2026-03-04 21:45           ` Tejun Heo
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 21:34 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On 3/4/26 2:58 PM, Tejun Heo wrote:
> I mean, yeah, static bug detection is nice but is error-prone manual
> annotation the way to do it at this time and age?

Clang verifies the consistency of the annotations with the 
implementation of the annotated function at compile time. Hence, I don't
think that these annotations can be called error-prone.

> These annotations have
> been around for as long as I can remember and I've never once found them
> genuinely useful. Sure, maybe it can flag some latent error path bugs once
> in a blue moon but for the most part they're unused and unmaintained
> appendages that just add to noise.

Agreed that these annotations were not very useful as long as sparse was
the only tool for verifying these annotations. The verification
performed by Clang however seems very useful to me.

> Here's a challenge. Can it reliably and in a sustainable manner capture
> anything that https://github.com/masoncl/review-prompts can't capture?

The above URL points at a repository with AI review prompts. Today's AI
systems are based on LLMs. LLMs suffer from the following issues, issues
that do not apply to Clang's compile-time thread-safety analysis:
* Hallucinations. This means generating grammatically perfect but
   factually incorrect statements.
* The "lost in the middle" phenomenon. Ignoring information that occurs
   in the middle of a long prompt.
* Not being built into the compiler. Clang's thread-safety analysis is
   built into the compiler. Given the computational resources required
   to perform LLM inference, I do not expect AI review prompts to be
   integrated into any C compiler anytime soon.

Here are specific examples of what is possible with the Clang
thread-safety analysis and what falls outside the scope of any code
review software:
* Documenting which synchronization object protects which member
   variable (the __guarded_by() annotation). It can be very difficult
   or even ambiguous to derive from code which synchronization object is
   intended to protect which member variable. The Clang thread-safety
   support makes it possible to annotate member variables with
   __guarded_by().
* Whether or not it is intentional that some code paths unlock a
   synchronization object and other paths do not. The Clang
   thread-safety annotations include __acquires() and __cond_acquires().
   These annotations not only enable compile time checking of
   synchronization calls but are also useful as documentation to humans.
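To make the first point concrete, here is a minimal user-space sketch of the annotation style. All names are illustrative, and the macros are defined empty so the sketch builds with any compiler; the kernel's versions expand to Clang thread-safety attributes when context analysis is enabled:

```c
#include <assert.h>
#include <pthread.h>

/* Illustrative stand-ins for the kernel's annotation macros. Defined
 * empty here; with Clang's analysis enabled they expand to attributes
 * such as guarded_by() and requires_capability(). */
#define __guarded_by(l)
#define __must_hold(l)

struct counter {
	pthread_mutex_t lock;
	int value __guarded_by(lock);	/* documents: lock protects value */
};

/* Callers must hold c->lock; with the real macros, Clang warns at any
 * call site that does not. */
static int counter_inc(struct counter *c) __must_hold(c->lock)
{
	return ++c->value;
}

static int counter_inc_locked(struct counter *c)
{
	int v;

	pthread_mutex_lock(&c->lock);
	v = counter_inc(c);
	pthread_mutex_unlock(&c->lock);
	return v;
}
```

Even with the attributes compiled out, the annotations record which lock protects which member, which is the documentation value argued for above.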

Bart.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 11/14] ublk: Fix the lock context annotations
  2026-03-04 21:03       ` Caleb Sander Mateos
@ 2026-03-04 21:36         ` Bart Van Assche
  0 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-04 21:36 UTC (permalink / raw)
  To: Caleb Sander Mateos
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Ming Lei, Nathan Chancellor

On 3/4/26 3:03 PM, Caleb Sander Mateos wrote:
> On Wed, Mar 4, 2026 at 12:55 PM Bart Van Assche <bvanassche@acm.org> wrote:
>> @@ -3581,6 +3584,7 @@ static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
>>    static int ublk_batch_prep_io(struct ublk_queue *ubq,
>>                                 const struct ublk_batch_io_data *data,
>>                                 const struct ublk_elem_header *elem)
>> +       __must_hold(&data->ub->mutex)
>>    {
>>           struct ublk_io *io = &ubq->ios[elem->tag];
>>           const struct ublk_batch_io *uc = &data->header;
> 
> Sure, these annotations all look correct. But it's not clear to me how
> you're deciding which functions need an annotation. Is clang just
> unable to see that ub->mutex is held here because the function is
> called indirectly through a function pointer?

Clang performs more strict checking of lock context annotations than
sparse. If a function is annotated with __must_hold(), sparse only uses
the information from that annotation while checking the implementation
of that function. Clang not only checks the implementation but also
checks the caller(s).
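As a user-space sketch of why intermediate helpers such as ublk_batch_prep_io() need their own annotation (hypothetical names; the macro is defined empty here, while the kernel's expands to a Clang attribute that makes call sites checkable):

```c
#include <assert.h>
#include <pthread.h>

/* Empty stand-in for the kernel macro. */
#define __must_hold(l)

struct dev {
	pthread_mutex_t mutex;
	int nr_ready;
};

/* Leaf helper: requires dev->mutex. */
static void mark_ready(struct dev *d) __must_hold(d->mutex)
{
	d->nr_ready++;
}

/* Intermediate helper called with the mutex already held. Without its
 * own __must_hold() annotation, Clang cannot see that the lock is held
 * when mark_ready() is called and warns here, even though every real
 * caller acquires the mutex further up the call chain. */
static void prep_one(struct dev *d) __must_hold(d->mutex)
{
	mark_ready(d);
}

static int fetch(struct dev *d)
{
	pthread_mutex_lock(&d->mutex);	/* satisfies __must_hold() above */
	prep_one(d);
	pthread_mutex_unlock(&d->mutex);
	return d->nr_ready;
}
```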

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 21:34         ` Bart Van Assche
@ 2026-03-04 21:45           ` Tejun Heo
  2026-03-04 21:46             ` Tejun Heo
  0 siblings, 1 reply; 39+ messages in thread
From: Tejun Heo @ 2026-03-04 21:45 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On Wed, Mar 04, 2026 at 03:34:12PM -0600, Bart Van Assche wrote:
> Here are specific examples of what is possible with the Clang
> thread-safety analysis and what falls outside the scope of any code
> review software:
> * Documenting which synchronization object protects which member
>   variable (the __guarded_by() annotation). It can be very difficult
>   or even ambiguous to derive from code which synchronization object is
>   intended to protect which member variable. The Clang thread-safety
>   support allows to annotate member variables with __guarded_by().
> * Whether or not it is intentional that some code paths unlock a
>   synchronization object and other paths do not. The Clang
>   thread-safety annotations include __acquires() and __cond_acquires().
>   These annotations not only enable compile time checking of
>   synchronization calls but are also useful as documentation to humans.

I'm skeptical that the overhead justifies the likely constantly diminishing
benefits. I suppose it's up to each subsystem's choice.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 03/14] block: Make the lock context annotations compatible with Clang
  2026-03-04 21:45           ` Tejun Heo
@ 2026-03-04 21:46             ` Tejun Heo
  0 siblings, 0 replies; 39+ messages in thread
From: Tejun Heo @ 2026-03-04 21:46 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Josef Bacik, Alexei Starovoitov, Daniel Borkmann,
	Andrii Nakryiko, Nathan Chancellor, Miklos Szeredi,
	Christian Brauner, Andreas Gruenbacher, Joanne Koong,
	Mateusz Guzik

On Wed, Mar 04, 2026 at 11:45:09AM -1000, Tejun Heo wrote:
> On Wed, Mar 04, 2026 at 03:34:12PM -0600, Bart Van Assche wrote:
> > Here are specific examples of what is possible with the Clang
> > thread-safety analysis and what falls outside the scope of any code
> > review software:
> > * Documenting which synchronization object protects which member
> >   variable (the __guarded_by() annotation). It can be very difficult
> >   or even ambiguous to derive from code which synchronization object is
> >   intended to protect which member variable. The Clang thread-safety
> >   support allows to annotate member variables with __guarded_by().
> > * Whether or not it is intentional that some code paths unlock a
> >   synchronization object and other paths do not. The Clang
> >   thread-safety annotations include __acquires() and __cond_acquires().
> >   These annotations not only enable compile time checking of
> >   synchronization calls but are also useful as documentation to humans.
> 
> I'm skeptical that the overhead justifies the likely constantly diminishing
> benefits. I suppose it's upto each subsystem's choice.

Oops, I meant, benefits justifying overhead, not the other way around. At
least you know I actually wrote it.

Thanks.

-- 
tejun

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 13/14] zram: Add lock context annotations
  2026-03-04 19:48 ` [PATCH 13/14] zram: Add " Bart Van Assche
@ 2026-03-05  1:23   ` Sergey Senozhatsky
  0 siblings, 0 replies; 39+ messages in thread
From: Sergey Senozhatsky @ 2026-03-05  1:23 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Minchan Kim, Sergey Senozhatsky, Nathan Chancellor

On (26/03/04 11:48), Bart Van Assche wrote:
> Prepare for enabling lock context analysis by adding the lock context
> annotations that are required by Clang.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 14/14] block: Enable lock context analysis for all block drivers
  2026-03-04 19:48 ` [PATCH 14/14] block: Enable lock context analysis for all block drivers Bart Van Assche
@ 2026-03-05  1:33   ` Sergey Senozhatsky
  0 siblings, 0 replies; 39+ messages in thread
From: Sergey Senozhatsky @ 2026-03-05  1:33 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Justin Sanders, Philipp Reisner, Lars Ellenberg,
	Christoph Böhmwalder, Md. Haris Iqbal, Jack Wang,
	Roger Pau Monné, Minchan Kim, Sergey Senozhatsky

On (26/03/04 11:48), Bart Van Assche wrote:
> Now that all locking functions in block drivers have been annotated,
> enable lock context analysis for all block drivers.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org> # zram

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-04 19:48 ` [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis Bart Van Assche
@ 2026-03-05 10:10   ` Jan Kara
  2026-03-05 12:46     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Kara @ 2026-03-05 10:10 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, Marco Elver,
	linux-block, Yu Kuai, Jan Kara, Nathan Chancellor

On Wed 04-03-26 11:48:21, Bart Van Assche wrote:
> The Clang thread-safety analyzer does not support testing return values
> with "< 0". Hence change the "< 0" test into "!= 0". This is fine since
> the radix_tree_maybe_preload() return value is <= 0.
> 
> Cc: Yu Kuai <yukuai3@huawei.com>
> Cc: Jan Kara <jack@suse.cz>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>

Frankly, I dislike changing the idiomatic ways we use in the kernel for
checking for error returns just because some static checker tool is too
dumb... It may make life easier for the tool but it makes it harder for
humans which I don't think is a good tradeoff.

								Honza

> ---
>  block/blk-ioc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/block/blk-ioc.c b/block/blk-ioc.c
> index d15918d7fabb..0bf78aebc887 100644
> --- a/block/blk-ioc.c
> +++ b/block/blk-ioc.c
> @@ -364,7 +364,7 @@ static struct io_cq *ioc_create_icq(struct request_queue *q)
>  	if (!icq)
>  		return NULL;
>  
> -	if (radix_tree_maybe_preload(GFP_ATOMIC) < 0) {
> +	if (radix_tree_maybe_preload(GFP_ATOMIC) != 0) {
>  		kmem_cache_free(et->icq_cache, icq);
>  		return NULL;
>  	}
-- 
Jan Kara <jack@suse.com>
SUSE Labs, CR

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-05 10:10   ` Jan Kara
@ 2026-03-05 12:46     ` Bart Van Assche
  2026-03-05 13:18       ` Marco Elver
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-05 12:46 UTC (permalink / raw)
  To: Marco Elver, Jan Kara
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, linux-block,
	Yu Kuai, Nathan Chancellor

On 3/5/26 4:10 AM, Jan Kara wrote:
> On Wed 04-03-26 11:48:21, Bart Van Assche wrote:
>> The Clang thread-safety analyzer does not support testing return values
>> with "< 0". Hence change the "< 0" test into "!= 0". This is fine since
>> the radix_tree_maybe_preload() return value is <= 0.
>>
>> Cc: Yu Kuai <yukuai3@huawei.com>
>> Cc: Jan Kara <jack@suse.cz>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> 
> Frankly, I dislike changing the idiomatic ways we use in the kernel for
> checking for error returns just because some static checker tool is too
> dumb... It may make life easier for the tool but it makes it harder for
> humans which I don't think is a good tradeoff.

Marco, do you agree it would help to add a variant of
try_acquire_capability in Clang that accepts a range of successful
return values instead of a single boolean? That would not only address
Jan Kara's concern but would also make it possible to annotate the
many functions that return an ERR_PTR() value. With the current
thread-safety support in Clang the only option is to annotate these
functions with __no_context_analysis.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-05 12:46     ` Bart Van Assche
@ 2026-03-05 13:18       ` Marco Elver
  2026-03-05 14:35         ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Marco Elver @ 2026-03-05 13:18 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jan Kara, Jens Axboe, Christoph Hellwig, Damien Le Moal,
	linux-block, Yu Kuai, Nathan Chancellor

On Thu, 5 Mar 2026 at 13:46, Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 3/5/26 4:10 AM, Jan Kara wrote:
> > On Wed 04-03-26 11:48:21, Bart Van Assche wrote:
> >> The Clang thread-safety analyzer does not support testing return values
> >> with "< 0". Hence change the "< 0" test into "!= 0". This is fine since
> >> the radix_tree_maybe_preload() return value is <= 0.
> >>
> >> Cc: Yu Kuai <yukuai3@huawei.com>
> >> Cc: Jan Kara <jack@suse.cz>
> >> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> >
> > Frankly, I dislike changing the idiomatic ways we use in the kernel for
> > checking for error returns just because some static checker tool is too
> > dumb... It may make life easier for the tool but it makes it harder for
> > humans which I don't think is a good tradeoff.
>
> Marco, do you agree it would help to add a variant of
> try_acquire_capability in Clang that accepts a range of successful
> return values instead of a single boolean? That would not only address
> Jan Kara's concern but would also make it possible to annotate the
> many functions that return an ERR_PTR() value. With the current
> thread-safety support in Clang the only option is to annotate these
> functions with __no_context_analysis.

Yes some better support is needed. But this might also be wishful
thinking only - we can dream. :-)

This was discussed here:
https://lore.kernel.org/all/CANpmjNPquO=W1JAh1FNQb8pMQjgeZAKCPQUAd7qUg=5pjJ6x=Q@mail.gmail.com/
under point 4.

It's a tough one, and no clear solution exists yet. Exploring the
design space here is the first step - I don't think "accepts a range
of successful return values" is trivial, because we have to either
list those values, or encode the possible ranges as an expression
which we can then match against. Only the latter is usable IMHO, but
implementing that in the compiler is a big deal - we need some kind of
solver to match expressions - or severely limiting allowed
expressions.

Either way, getting that implemented and upstreamed is a ~2-3 months
effort. Which is why I have ignored this for now given the poor ROI -
the current infrastructure is opt-in, and my thoughts were to enable
in as many places as possible where we don't run into this issue. We'd
need an estimate of what % coverage we're missing and if it's worth it.
I understand you're working on global enablement, but this particular
problem needs careful design analysis before committing to anything.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-05 13:18       ` Marco Elver
@ 2026-03-05 14:35         ` Bart Van Assche
  2026-03-05 20:30           ` Marco Elver
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-05 14:35 UTC (permalink / raw)
  To: Marco Elver
  Cc: Jan Kara, Jens Axboe, Christoph Hellwig, Damien Le Moal,
	linux-block, Yu Kuai, Nathan Chancellor, Peter Zijlstra

On 3/5/26 7:18 AM, Marco Elver wrote:
> It's a tough one, and no clear solution exists yet. Exploring the
> design space here is the first step - I don't think "accepts a range
> of successful return values" is trivial, because we have to either
> list those values, or encode the possible ranges as an expression
> which we can then match against. Only the latter is usable IMHO, but
> implementing that in the compiler is a big deal - we need some kind of
> solver to match expressions - or severely limiting allowed
> expressions.
> 
> Either way, getting that implemented and upstreamed is a ~2-3 months
> effort. Which is why I have ignored this for now given the poor ROI -
> the current infrastructure is opt-in, and my thoughts were to enable
> in as many places as possible where we don't run into this issue. We'd
> need an estimate %% what coverage we're missing and if it's worth it.
> I understand you're working on global enablement, but this particular
> problem needs careful design analysis before committing to anything.

I'm interested in enabling compile-time thread-safety analysis for the
kernel in its entirety. If thread-safety analysis can't be enabled for
the kernel in its entirety I probably will stop working on this topic.

Regarding your questions, so far I have identified 17 functions that
perform conditional locking and that may return an error pointer:

$ git grep -nH '__no_context_analysis.*ERR_PTR' | wc -l
17

The most widely used among these functions is probably fc_mount(). The
direct and indirect callers of that function include
fc_mount_longterm(), vfs_kern_mount() and path_mount().

Another concern is the one Jan brought up: with the current support
for conditional locking a large number of callers of conditional locking
functions would have to be modified. If I counted correctly the patch
"treewide: Modify mutex_lock_interruptible() return value checks" on my
thread-safety branch includes 173 changes. There probably will be more
kernel maintainers than Jan who will protest against these changes.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis
  2026-03-05 14:35         ` Bart Van Assche
@ 2026-03-05 20:30           ` Marco Elver
  0 siblings, 0 replies; 39+ messages in thread
From: Marco Elver @ 2026-03-05 20:30 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jan Kara, Jens Axboe, Christoph Hellwig, Damien Le Moal,
	linux-block, Yu Kuai, Nathan Chancellor, Peter Zijlstra

On Thu, 5 Mar 2026 at 15:36, Bart Van Assche <bvanassche@acm.org> wrote:
>
> On 3/5/26 7:18 AM, Marco Elver wrote:
> > It's a tough one, and no clear solution exists yet. Exploring the
> > design space here is the first step - I don't think "accepts a range
> > of successful return values" is trivial, because we have to either
> > list those values, or encode the possible ranges as an expression
> > which we can then match against. Only the latter is usable IMHO, but
> > implementing that in the compiler is a big deal - we need some kind of
> > solver to match expressions - or severely limiting allowed
> > expressions.
> >
> > Either way, getting that implemented and upstreamed is a ~2-3 months
> > effort. Which is why I have ignored this for now given the poor ROI -
> > the current infrastructure is opt-in, and my thoughts were to enable
> > in as many places as possible where we don't run into this issue. We'd
> > need an estimate %% what coverage we're missing and if it's worth it.
> > I understand you're working on global enablement, but this particular
> > problem needs careful design analysis before committing to anything.
>
> I'm interested in enabling compile-time thread-safety analysis for the
> kernel in its entirety. If thread-safety analysis can't be enabled for
> the kernel in its entirety I probably will stop working on this topic.
>
> Regarding your questions, so far I have identified 17 functions that
> perform conditional locking and that may return an error pointer:
>
> $ git grep -nH '__no_context_analysis.*ERR_PTR' | wc -l
> 17
>
> The most widely used among these functions is probably fc_mount(). The
> direct and indirect callers of that function include
> fc_mount_longterm(), vfs_kern_mount() and path_mount().
>
> Another concern is the concern Jan brought up: with the current support
> for conditional locking a large number of callers of conditional locking
> functions would have to be modified. If I counted correctly the patch
> "treewide: Modify mutex_lock_interruptible() return value checks" on my
> thread-safety branch includes 173 changes. There probably will be more
> kernel maintainer than Jan who will protest against these changes.

I will take a look in a few weeks to see if it can be fixed with a new
Clang feature.

But I think you can still proceed to attempt enabling context analysis
tree-wide, but simply disable the analysis for some of these
problematic subsystems completely. Specifically, the Makefile
directive should also work as an opt-out:

  CONTEXT_ANALYSIS := n

, if you enable the WARN_CONTEXT_ANALYSIS_ALL option. Either way,
incremental enablement is the way we should pursue this, and not in an
all-or-nothing approach. I imagine the path that will work is:

1. Incrementally enable context analysis for more subsystems
(CONTEXT_ANALYSIS := y)
2. Add selective opt outs (CONTEXT_ANALYSIS := n) where
WARN_CONTEXT_ANALYSIS_ALL complains.
3. Once we have a clean WARN_CONTEXT_ANALYSIS_ALL build, make it default.
4. Remove all 'CONTEXT_ANALYSIS := y' lines.

I don't think we'll ever get 100% coverage, but even if we get e.g.
80% coverage, we're able to prevent several classes of locking bugs
from ever entering the kernel in 80% of the code. I'd take that any
day over 0% coverage (which is/was a risk when attempting to make it
perfect halts any progress). It's just a pragmatic trade-off we have
to make. New compiler features will of course help, but they shouldn't
stand in the way of getting to that XX% coverage.

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 10/14] rnbd: Add more lock context annotations
  2026-03-04 19:48 ` [PATCH 10/14] rnbd: Add more " Bart Van Assche
@ 2026-03-06 13:09   ` Marco Elver
  2026-03-06 14:11     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Marco Elver @ 2026-03-06 13:09 UTC (permalink / raw)
  To: Bart Van Assche
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, linux-block,
	Md. Haris Iqbal, Jack Wang, Nathan Chancellor

On Wed, 4 Mar 2026 at 20:49, Bart Van Assche <bvanassche@acm.org> wrote:
>
> Prepare for enabling lock context analysis by adding the lock context
> annotations required by Clang.
>
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/block/rnbd/rnbd-clt.c | 2 ++
>  1 file changed, 2 insertions(+)
>
> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
> index 4d6725a0035e..7f0f29b8e75a 100644
> --- a/drivers/block/rnbd/rnbd-clt.c
> +++ b/drivers/block/rnbd/rnbd-clt.c
> @@ -833,6 +833,7 @@ static int wait_for_rtrs_connection(struct rnbd_clt_session *sess)
>  static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
>         __releases(&sess_lock)
>         __acquires(&sess_lock)
> +       __must_hold(sess_lock)
>  {
>         DEFINE_WAIT(wait);
>
> @@ -855,6 +856,7 @@ static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
>  static struct rnbd_clt_session *__find_and_get_sess(const char *sessname)
>         __releases(&sess_lock)
>         __acquires(&sess_lock)
> +       __must_hold(sess_lock)
>  {
>         struct rnbd_clt_session *sess, *sn;
>         int err;

This has all 3: __releases, __acquires, __must_hold. Only either
__releases + __acquires OR __must_hold is sufficient. __must_hold
implies that the lock must be both held on entry and exit - if that
lock is released and re-acquired within the function is irrelevant.
For documentation purposes it might be that one or the other is
clearer (e.g. I've used both __release+__acquires in some cases where
I felt it's clearer).
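
A user-space sketch of a function with this contract (hypothetical names; macros defined empty): either __must_hold() alone, or the __releases()/__acquires() pair, expresses that the lock is held on entry and on exit; stacking all three is redundant.

```c
#include <assert.h>
#include <pthread.h>

/* Empty stand-ins for the kernel macros. */
#define __must_hold(l)
#define __releases(l)
#define __acquires(l)

static pthread_mutex_t sess_lock = PTHREAD_MUTEX_INITIALIZER;
static int generation;

/* Called with sess_lock held; temporarily drops it (e.g. to sleep) and
 * re-acquires it before returning. __must_hold(sess_lock) already
 * captures the externally visible contract; whether the lock is dropped
 * and re-taken internally is invisible to callers. The alternative
 * __releases(sess_lock) __acquires(sess_lock) pair documents the
 * internal drop explicitly. */
static void wait_for_event(void) __must_hold(sess_lock)
{
	pthread_mutex_unlock(&sess_lock);
	/* ... sleep / wait for the event here ... */
	pthread_mutex_lock(&sess_lock);
	generation++;
}

static int run(void)
{
	pthread_mutex_lock(&sess_lock);
	wait_for_event();
	pthread_mutex_unlock(&sess_lock);
	return generation;
}
```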

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 10/14] rnbd: Add more lock context annotations
  2026-03-06 13:09   ` Marco Elver
@ 2026-03-06 14:11     ` Bart Van Assche
  0 siblings, 0 replies; 39+ messages in thread
From: Bart Van Assche @ 2026-03-06 14:11 UTC (permalink / raw)
  To: Marco Elver
  Cc: Jens Axboe, Christoph Hellwig, Damien Le Moal, linux-block,
	Md. Haris Iqbal, Jack Wang, Nathan Chancellor


On 3/6/26 7:09 AM, Marco Elver wrote:
> On Wed, 4 Mar 2026 at 20:49, Bart Van Assche <bvanassche@acm.org> wrote:
>>
>> Prepare for enabling lock context analysis by adding the lock context
>> annotations required by Clang.
>>
>> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
>> ---
>>   drivers/block/rnbd/rnbd-clt.c | 2 ++
>>   1 file changed, 2 insertions(+)
>>
>> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
>> index 4d6725a0035e..7f0f29b8e75a 100644
>> --- a/drivers/block/rnbd/rnbd-clt.c
>> +++ b/drivers/block/rnbd/rnbd-clt.c
>> @@ -833,6 +833,7 @@ static int wait_for_rtrs_connection(struct rnbd_clt_session *sess)
>>   static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
>>          __releases(&sess_lock)
>>          __acquires(&sess_lock)
>> +       __must_hold(sess_lock)
>>   {
>>          DEFINE_WAIT(wait);
>>
>> @@ -855,6 +856,7 @@ static void wait_for_rtrs_disconnection(struct rnbd_clt_session *sess)
>>   static struct rnbd_clt_session *__find_and_get_sess(const char *sessname)
>>          __releases(&sess_lock)
>>          __acquires(&sess_lock)
>> +       __must_hold(sess_lock)
>>   {
>>          struct rnbd_clt_session *sess, *sn;
>>          int err;
> 
> This has all 3: __releases, __acquires, __must_hold. Only either
> __releases + __acquires OR __must_hold is sufficient. __must_hold
> implies that the lock must be both held on entry and exit - if that
> lock is released and re-acquired within the function is irrelevant.
> For documentation purposes it might be that one or the other is
> clearer (e.g. I've used both __release+__acquires in some cases where
> I felt it's clearer).

Let's not make more changes than strictly necessary. I will drop this patch.

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang
  2026-03-04 19:48 ` [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang Bart Van Assche
@ 2026-03-09 10:08   ` Christoph Böhmwalder
  2026-03-09 23:15     ` Bart Van Assche
  0 siblings, 1 reply; 39+ messages in thread
From: Christoph Böhmwalder @ 2026-03-09 10:08 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Philipp Reisner, Lars Ellenberg, Nathan Chancellor

Am 04.03.26 um 20:48 schrieb Bart Van Assche:
> Clang performs more strict checking of lock context annotations than
> sparse. This patch makes the DRBD lock context annotations compatible
> with Clang and prepares for enabling lock context analysis.
> 
> Signed-off-by: Bart Van Assche <bvanassche@acm.org>
> ---
>  drivers/block/drbd/drbd_bitmap.c   | 20 +++++++------
>  drivers/block/drbd/drbd_int.h      | 46 ++++++++++++++----------------
>  drivers/block/drbd/drbd_main.c     | 45 ++++++++++++++++++++++-------
>  drivers/block/drbd/drbd_nl.c       |  5 ++--
>  drivers/block/drbd/drbd_receiver.c | 20 +++++++------
>  drivers/block/drbd/drbd_req.c      |  2 ++
>  drivers/block/drbd/drbd_state.c    |  3 ++
>  drivers/block/drbd/drbd_worker.c   |  6 ++--
>  8 files changed, 91 insertions(+), 56 deletions(-)
[...]
>  
>  void drbd_send_sr_reply(struct drbd_peer_device *peer_device, enum drbd_state_rv retcode)
> +	__cond_acquires(true, peer_device->connection->data.mutex)
>  {
>  	struct drbd_socket *sock;
>  	struct p_req_state_reply *p;
> @@ -1048,6 +1063,7 @@ void drbd_send_sr_reply(struct drbd_peer_device *peer_device, enum drbd_state_rv
>  }
>  
>  void conn_send_sr_reply(struct drbd_connection *connection, enum drbd_state_rv retcode)
> +	__cond_acquires(true, connection->data.mutex)
>  {
>  	struct drbd_socket *sock;
>  	struct p_req_state_reply *p;

These are marked as acquiring connection->data.mutex, but actually use
connection->meta later. So the annotation should reference meta.mutex.

Also, I think the annotations on drbd_send_* are wrong. These functions
have no path where they return without releasing the mutex, but these
annotations would tell clang that they hold the mutex on non-zero return.

Regards,
Christoph

-- 
Christoph Böhmwalder
LINBIT | Keeping the Digital World Running
DRBD HA —  Disaster Recovery — Software defined Storage


^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang
  2026-03-09 10:08   ` Christoph Böhmwalder
@ 2026-03-09 23:15     ` Bart Van Assche
  2026-03-11 20:42       ` Christoph Böhmwalder
  0 siblings, 1 reply; 39+ messages in thread
From: Bart Van Assche @ 2026-03-09 23:15 UTC (permalink / raw)
  To: Christoph Böhmwalder, Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Philipp Reisner, Lars Ellenberg, Nathan Chancellor

On 3/9/26 3:08 AM, Christoph Böhmwalder wrote:
> These are marked as acquiring connection->data.mutex, but actually use
> connection->meta later. So the annotation should reference meta.mutex.
> 
> Also, I think the annotations on drbd_send_* are wrong. These functions
> have no path where they return without releasing the mutex, but these
> annotations would tell clang that they hold the mutex on non-zero return.

Thanks for the feedback. Does the patch below look better? Compared to
the previous version, all lock context annotations (except for static
functions) have been moved from .c into .h files. Lock context aliases
have been introduced if the synchronization object is not visible in the
.h file. Two function declarations have been moved from before to after
the struct drbd_device definition.

Thanks,

Bart.


diff --git a/drivers/block/drbd/drbd_bitmap.c b/drivers/block/drbd/drbd_bitmap.c
index 65ea6ec66bfd..3c521f0dc9ad 100644
--- a/drivers/block/drbd/drbd_bitmap.c
+++ b/drivers/block/drbd/drbd_bitmap.c
@@ -122,12 +122,16 @@ static void __bm_print_lock_info(struct drbd_device *device, const char *func)
  }

  void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
+	__acquires(&device->bitmap->bm_change)
  {
  	struct drbd_bitmap *b = device->bitmap;
  	int trylock_failed;

  	if (!b) {
  		drbd_err(device, "FIXME no bitmap in drbd_bm_lock!?\n");
+		/* Fake __acquire() to keep the compiler happy. */
+		__acquire(&b->bm_change);
+		__acquire(drbd_bitmap_lock);
  		return;
  	}

@@ -146,13 +150,18 @@ void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)

  	b->bm_why  = why;
  	b->bm_task = current;
+	__acquire(drbd_bitmap_lock);
  }

  void drbd_bm_unlock(struct drbd_device *device)
+	__releases(&device->bitmap->bm_change)
  {
  	struct drbd_bitmap *b = device->bitmap;
  	if (!b) {
  		drbd_err(device, "FIXME no bitmap in drbd_bm_unlock!?\n");
+		/* Fake __release() to keep the compiler happy. */
+		__release(&b->bm_change);
+		__release(drbd_bitmap_lock);
  		return;
  	}

@@ -163,6 +172,7 @@ void drbd_bm_unlock(struct drbd_device *device)
  	b->bm_why  = NULL;
  	b->bm_task = NULL;
  	mutex_unlock(&b->bm_change);
+	__release(drbd_bitmap_lock);
  }

  /* we store some "meta" info about our pages in page->private */
@@ -987,7 +997,7 @@ static inline sector_t drbd_md_last_bitmap_sector(struct drbd_backing_dev *bdev)
  	}
  }

-static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_hold(local)
+static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr)
  {
  	struct drbd_device *device = ctx->device;
  	enum req_op op = ctx->flags & BM_AIO_READ ? REQ_OP_READ : REQ_OP_WRITE;
@@ -1060,7 +1070,7 @@ static void bm_page_io_async(struct drbd_bm_aio_ctx *ctx, int page_nr) __must_ho
  /*
   * bm_rw: read/write the whole bitmap from/to its on disk location.
   */
-static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned lazy_writeout_upper_idx) __must_hold(local)
+static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned lazy_writeout_upper_idx)
  {
  	struct drbd_bm_aio_ctx *ctx;
  	struct drbd_bitmap *b = device->bitmap;
@@ -1215,7 +1225,7 @@ static int bm_rw(struct drbd_device *device, const unsigned int flags, unsigned
   * @device:	DRBD device.
   */
  int drbd_bm_read(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		 struct drbd_peer_device *peer_device)

  {
  	return bm_rw(device, BM_AIO_READ, 0);
@@ -1228,7 +1238,7 @@ int drbd_bm_read(struct drbd_device *device,
   * Will only write pages that have changed since last IO.
   */
  int drbd_bm_write(struct drbd_device *device,
-		 struct drbd_peer_device *peer_device) __must_hold(local)
+		 struct drbd_peer_device *peer_device)
  {
  	return bm_rw(device, 0, 0);
  }
@@ -1240,7 +1250,7 @@ int drbd_bm_write(struct drbd_device *device,
   * Will write all pages.
   */
  int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
  {
  	return bm_rw(device, BM_AIO_WRITE_ALL_PAGES, 0);
  }
@@ -1250,7 +1260,7 @@ int drbd_bm_write_all(struct drbd_device *device,
   * @device:	DRBD device.
   * @upper_idx:	0: write all changed pages; +ve: page index to stop scanning for changed pages
   */
-int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local)
+int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx)
  {
  	return bm_rw(device, BM_AIO_COPY_PAGES, upper_idx);
  }
@@ -1267,7 +1277,7 @@ int drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_ho
   * pending resync acks are still being processed.
   */
  int drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
  {
  	return bm_rw(device, BM_AIO_COPY_PAGES, 0);
  }
@@ -1276,7 +1286,7 @@ int drbd_bm_write_copy_pages(struct drbd_device *device,
   * drbd_bm_write_hinted() - Write bitmap pages with "hint" marks, if they have changed.
   * @device:	DRBD device.
   */
-int drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local)
+int drbd_bm_write_hinted(struct drbd_device *device)
  {
  	return bm_rw(device, BM_AIO_WRITE_HINTED | BM_AIO_COPY_PAGES, 0);
  }
diff --git a/drivers/block/drbd/drbd_int.h b/drivers/block/drbd/drbd_int.h
index f6d6276974ee..46546e6e9f6b 100644
--- a/drivers/block/drbd/drbd_int.h
+++ b/drivers/block/drbd/drbd_int.h
@@ -193,10 +193,14 @@ struct drbd_device_work {

  #include "drbd_interval.h"

-extern int drbd_wait_misc(struct drbd_device *, struct drbd_interval *);
+/*
+ * Alias for &resources_mutex because &resources_mutex is not visible in this
+ * context.
+ */
+token_context_lock(all_drbd_resources);

-extern void lock_all_resources(void);
-extern void unlock_all_resources(void);
+extern void lock_all_resources(void) __acquires(all_drbd_resources);
+extern void unlock_all_resources(void) __releases(all_drbd_resources);

  struct drbd_request {
  	struct drbd_work w;
@@ -1056,14 +1060,14 @@ extern void conn_md_sync(struct drbd_connection *connection);
  extern void drbd_md_write(struct drbd_device *device, void *buffer);
  extern void drbd_md_sync(struct drbd_device *device);
  extern int  drbd_md_read(struct drbd_device *device, struct drbd_backing_dev *bdev);
-extern void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local);
-extern void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local);
-extern void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local);
-extern void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local);
-extern void drbd_md_set_flag(struct drbd_device *device, int flags) __must_hold(local);
-extern void drbd_md_clear_flag(struct drbd_device *device, int flags)__must_hold(local);
+extern void drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void drbd_uuid_new_current(struct drbd_device *device);
+extern void drbd_uuid_set_bm(struct drbd_device *device, u64 val);
+extern void drbd_uuid_move_history(struct drbd_device *device);
+extern void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val);
+extern void drbd_md_set_flag(struct drbd_device *device, int flags);
+extern void drbd_md_clear_flag(struct drbd_device *device, int flags);
  extern int drbd_md_test_flag(struct drbd_backing_dev *, int);
  extern void drbd_md_mark_dirty(struct drbd_device *device);
  extern void drbd_queue_bitmap_io(struct drbd_device *device,
@@ -1080,9 +1084,15 @@ extern int drbd_bitmap_io_from_worker(struct drbd_device *device,
  		char *why, enum bm_flag flags,
  		struct drbd_peer_device *peer_device);
  extern int drbd_bmio_set_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
  extern int drbd_bmio_clear_n_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
+extern enum drbd_state_rv
+_drbd_request_state_holding_state_mutex(struct drbd_device *device, union drbd_state,
+					union drbd_state, enum chg_state_flags)
+	__must_hold(&device->state_mutex);
+extern int drbd_wait_misc(struct drbd_device *device, struct drbd_interval *)
+	__must_hold(&device->resource->req_lock);

  /* Meta data layout
   *
@@ -1292,17 +1302,17 @@ extern void _drbd_bm_set_bits(struct drbd_device *device,
  extern int  drbd_bm_test_bit(struct drbd_device *device, unsigned long bitnr);
  extern int  drbd_bm_e_weight(struct drbd_device *device, unsigned long enr);
  extern int  drbd_bm_read(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
  extern void drbd_bm_mark_for_writeout(struct drbd_device *device, int page_nr);
  extern int  drbd_bm_write(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
-extern void drbd_bm_reset_al_hints(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_hinted(struct drbd_device *device) __must_hold(local);
-extern int  drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx) __must_hold(local);
+		struct drbd_peer_device *peer_device);
+extern void drbd_bm_reset_al_hints(struct drbd_device *device);
+extern int  drbd_bm_write_hinted(struct drbd_device *device);
+extern int  drbd_bm_write_lazy(struct drbd_device *device, unsigned upper_idx);
  extern int drbd_bm_write_all(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
  extern int  drbd_bm_write_copy_pages(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local);
+		struct drbd_peer_device *peer_device);
  extern size_t	     drbd_bm_words(struct drbd_device *device);
  extern unsigned long drbd_bm_bits(struct drbd_device *device);
  extern sector_t      drbd_bm_capacity(struct drbd_device *device);
@@ -1321,8 +1331,16 @@ extern void drbd_bm_merge_lel(struct drbd_device *device, size_t offset,
  extern void drbd_bm_get_lel(struct drbd_device *device, size_t offset,
  		size_t number, unsigned long *buffer);

-extern void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags);
-extern void drbd_bm_unlock(struct drbd_device *device);
+/*
+ * Alias for &device->bitmap->bm_change because not all type information for
+ * &device->bitmap->bm_change is available in this context.
+ */
+token_context_lock(drbd_bitmap_lock);
+
+extern void drbd_bm_lock(struct drbd_device *device, char *why, enum bm_flag flags)
+	__acquires(drbd_bitmap_lock);
+extern void drbd_bm_unlock(struct drbd_device *device)
+	__releases(drbd_bitmap_lock);
  /* drbd_main.c */

  extern struct kmem_cache *drbd_request_cache;
@@ -1389,7 +1407,8 @@ enum determine_dev_size {
  	DS_GREW_FROM_ZERO = 3,
  };
  extern enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *, enum dds_flags, struct resize_parms *) __must_hold(local);
+drbd_determine_dev_size(struct drbd_device *device, enum dds_flags,
+			struct resize_parms *);
  extern void resync_after_online_grow(struct drbd_device *);
  extern void drbd_reconsider_queue_parameters(struct drbd_device *device,
  			struct drbd_backing_dev *bdev, struct o_qlim *o);
@@ -1473,7 +1492,7 @@ extern int drbd_free_peer_reqs(struct drbd_device *, struct list_head *);
  extern struct drbd_peer_request *drbd_alloc_peer_req(struct drbd_peer_device *, u64,
 						     sector_t, unsigned int,
 						     unsigned int,
-						     gfp_t) __must_hold(local);
+						     gfp_t);
  extern void drbd_free_peer_req(struct drbd_device *device, struct drbd_peer_request *req);
  extern struct page *drbd_alloc_pages(struct drbd_peer_device *, unsigned int, bool);
  extern void _drbd_clear_done_ee(struct drbd_device *device, struct list_head *to_be_freed);
@@ -1488,7 +1507,6 @@ void drbd_set_my_capacity(struct drbd_device *device, sector_t size);
  static inline void drbd_submit_bio_noacct(struct drbd_device *device,
  					     int fault_type, struct bio *bio)
  {
-	__release(local);
  	if (!bio->bi_bdev) {
  		drbd_err(device, "drbd_submit_bio_noacct: bio->bi_bdev == NULL\n");
  		bio->bi_status = BLK_STS_IOERR;
@@ -1839,14 +1857,18 @@ static inline void request_ping(struct drbd_connection *connection)
  	wake_ack_receiver(connection);
  }

-extern void *conn_prepare_command(struct drbd_connection *, struct drbd_socket *);
-extern void *drbd_prepare_command(struct drbd_peer_device *, struct drbd_socket *);
-extern int conn_send_command(struct drbd_connection *, struct drbd_socket *,
+extern void *conn_prepare_command(struct drbd_connection *, struct drbd_socket *sock)
+	__cond_acquires(nonnull, sock->mutex);
+extern void *drbd_prepare_command(struct drbd_peer_device *, struct drbd_socket *sock)
+	__cond_acquires(nonnull, sock->mutex);
+extern int conn_send_command(struct drbd_connection *, struct drbd_socket *sock,
 			     enum drbd_packet, unsigned int, void *,
-			     unsigned int);
-extern int drbd_send_command(struct drbd_peer_device *, struct drbd_socket *,
+			     unsigned int)
+	__releases(sock->mutex);
+extern int drbd_send_command(struct drbd_peer_device *, struct drbd_socket *sock,
 			     enum drbd_packet, unsigned int, void *,
-			     unsigned int);
+			     unsigned int)
+	__releases(sock->mutex);

  extern int drbd_send_ping(struct drbd_connection *connection);
  extern int drbd_send_ping_ack(struct drbd_connection *connection);
@@ -1975,8 +1997,7 @@ static inline bool is_sync_state(enum drbd_conns connection_state)
   * You have to call put_ldev() when finished working with device->ldev.
   */
  #define get_ldev_if_state(_device, _min_state)				\
-	(_get_ldev_if_state((_device), (_min_state)) ?			\
-	 ({ __acquire(x); true; }) : false)
+	(_get_ldev_if_state((_device), (_min_state)))
  #define get_ldev(_device) get_ldev_if_state(_device, D_INCONSISTENT)

  static inline void put_ldev(struct drbd_device *device)
@@ -1991,7 +2012,6 @@ static inline void put_ldev(struct drbd_device *device)
  	/* This may be called from some endio handler,
  	 * so we must not sleep here. */

-	__release(local);
  	D_ASSERT(device, i >= 0);
  	if (i == 0) {
  		if (disk_state == D_DISKLESS)
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 200d464e984b..0bbee2afb7e5 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -3282,7 +3282,7 @@ void drbd_md_mark_dirty(struct drbd_device *device)
  		mod_timer(&device->md_sync_timer, jiffies + 5*HZ);
  }

-void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local)
+void drbd_uuid_move_history(struct drbd_device *device)
  {
  	int i;

@@ -3290,7 +3290,7 @@ void drbd_uuid_move_history(struct drbd_device *device) __must_hold(local)
  		device->ldev->md.uuid[i+1] = device->ldev->md.uuid[i];
  }

-void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
  {
  	if (idx == UI_CURRENT) {
  		if (device->state.role == R_PRIMARY)
@@ -3305,7 +3305,7 @@ void __drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(l
  	drbd_md_mark_dirty(device);
  }

-void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
  {
  	unsigned long flags;
  	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3313,7 +3313,7 @@ void _drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(lo
  	spin_unlock_irqrestore(&device->ldev->md.uuid_lock, flags);
  }

-void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(local)
+void drbd_uuid_set(struct drbd_device *device, int idx, u64 val)
  {
  	unsigned long flags;
  	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3332,7 +3332,7 @@ void drbd_uuid_set(struct drbd_device *device, int idx, u64 val) __must_hold(loc
   * Creates a new current UUID, and rotates the old current UUID into
   * the bitmap slot. Causes an incremental resync upon next connect.
   */
-void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
+void drbd_uuid_new_current(struct drbd_device *device)
  {
  	u64 val;
  	unsigned long long bm_uuid;
@@ -3354,7 +3354,7 @@ void drbd_uuid_new_current(struct drbd_device *device) __must_hold(local)
  	drbd_md_sync(device);
  }

-void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
+void drbd_uuid_set_bm(struct drbd_device *device, u64 val)
  {
  	unsigned long flags;
  	spin_lock_irqsave(&device->ldev->md.uuid_lock, flags);
@@ -3387,7 +3387,7 @@ void drbd_uuid_set_bm(struct drbd_device *device, u64 val) __must_hold(local)
   * Sets all bits in the bitmap and writes the whole bitmap to stable storage.
   */
  int drbd_bmio_set_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
+			  struct drbd_peer_device *peer_device)

  {
  	int rv = -EIO;
@@ -3414,7 +3414,7 @@ int drbd_bmio_set_n_write(struct drbd_device *device,
   * Clears all bits in the bitmap and writes the whole bitmap to stable storage.
   */
  int drbd_bmio_clear_n_write(struct drbd_device *device,
-			  struct drbd_peer_device *peer_device) __must_hold(local)
+			  struct drbd_peer_device *peer_device)

  {
  	drbd_resume_al(device);
@@ -3541,7 +3541,7 @@ int drbd_bitmap_io(struct drbd_device *device,
  	return rv;
  }

-void drbd_md_set_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_set_flag(struct drbd_device *device, int flag)
  {
  	if ((device->ldev->md.flags & flag) != flag) {
  		drbd_md_mark_dirty(device);
@@ -3549,7 +3549,7 @@ void drbd_md_set_flag(struct drbd_device *device, int flag) __must_hold(local)
  	}
  }

-void drbd_md_clear_flag(struct drbd_device *device, int flag) __must_hold(local)
+void drbd_md_clear_flag(struct drbd_device *device, int flag)
  {
  	if ((device->ldev->md.flags & flag) != 0) {
  		drbd_md_mark_dirty(device);
@@ -3678,24 +3678,44 @@ int drbd_wait_misc(struct drbd_device *device, struct drbd_interval *i)
  }

  void lock_all_resources(void)
+	__acquires(all_drbd_resources)
+	__acquires(&resources_mutex)
  {
  	struct drbd_resource *resource;
  	int __maybe_unused i = 0;

  	mutex_lock(&resources_mutex);
  	local_irq_disable();
+	/*
+	 * context_unsafe() because the thread-safety analyzer does not support
+	 * locking inside loops.
+	 */
+	context_unsafe(
  	for_each_resource(resource, &drbd_resources)
  		spin_lock_nested(&resource->req_lock, i++);
+	);
+
+	__acquire(all_drbd_resources);
  }

  void unlock_all_resources(void)
+	__releases(all_drbd_resources)
+	__releases(&resources_mutex)
  {
  	struct drbd_resource *resource;

+	/*
+	 * context_unsafe() because the thread-safety analyzer does not support
+	 * locking inside loops.
+	 */
+	context_unsafe(
  	for_each_resource(resource, &drbd_resources)
  		spin_unlock(&resource->req_lock);
+	);
  	local_irq_enable();
  	mutex_unlock(&resources_mutex);
+
+	__release(all_drbd_resources);
  }

  #ifdef CONFIG_DRBD_FAULT_INJECTION
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 728ecc431b38..cf505b31d040 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -927,7 +927,7 @@ void drbd_resume_io(struct drbd_device *device)
   * You should call drbd_md_sync() after calling this function.
   */
  enum determine_dev_size
-drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct resize_parms *rs) __must_hold(local)
+drbd_determine_dev_size(struct drbd_device *device, enum dds_flags flags, struct resize_parms *rs)
  {
  	struct md_offsets_and_sizes {
  		u64 last_agreed_sect;
@@ -3025,7 +3025,7 @@ static int drbd_adm_simple_request_state(struct sk_buff *skb, struct genl_info *
  }

  static int drbd_bmio_set_susp_al(struct drbd_device *device,
-		struct drbd_peer_device *peer_device) __must_hold(local)
+		struct drbd_peer_device *peer_device)
  {
  	int rv;

@@ -3453,6 +3453,7 @@ int drbd_adm_dump_connections_done(struct netlink_callback *cb)
  enum { SINGLE_RESOURCE, ITERATE_RESOURCES };

  int drbd_adm_dump_connections(struct sk_buff *skb, struct netlink_callback *cb)
+	__no_context_analysis /* too complex for Clang */
  {
  	struct nlattr *resource_filter;
  	struct drbd_resource *resource = NULL, *next_resource;
diff --git a/drivers/block/drbd/drbd_receiver.c b/drivers/block/drbd/drbd_receiver.c
index 58b95bf4bdca..b0ef6c5470f8 100644
--- a/drivers/block/drbd/drbd_receiver.c
+++ b/drivers/block/drbd/drbd_receiver.c
@@ -175,7 +175,7 @@ You must not have the req_lock:
   * trim: payload_size == 0 */
  struct drbd_peer_request *
drbd_alloc_peer_req(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-		    unsigned int request_size, unsigned int payload_size, gfp_t gfp_mask) __must_hold(local)
+		    unsigned int request_size, unsigned int payload_size, gfp_t gfp_mask)
  {
  	struct drbd_device *device = peer_device->device;
  	struct drbd_peer_request *peer_req;
@@ -287,6 +287,7 @@ static int drbd_finish_peer_reqs(struct drbd_device *device)

  static void _drbd_wait_ee_list_empty(struct drbd_device *device,
  				     struct list_head *head)
+	__must_hold(&device->resource->req_lock)
  {
  	DEFINE_WAIT(wait);

@@ -896,6 +897,11 @@ static int conn_connect(struct drbd_connection *connection)
  	if (drbd_send_protocol(connection) == -EOPNOTSUPP)
  		return -1;

+	/*
+	 * context_unsafe() because the thread-safety analyzer does not support
+	 * locking inside loops.
+	 */
+	context_unsafe(
  	/* Prevent a race between resync-handshake and
  	 * being promoted to Primary.
  	 *
@@ -905,14 +911,21 @@ static int conn_connect(struct drbd_connection *connection)
  	 */
  	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
  		mutex_lock(peer_device->device->state_mutex);
+	);

  	/* avoid a race with conn_request_state( C_DISCONNECTING ) */
  	spin_lock_irq(&connection->resource->req_lock);
  	set_bit(STATE_SENT, &connection->flags);
  	spin_unlock_irq(&connection->resource->req_lock);

+	/*
+	 * context_unsafe() because the thread-safety analyzer does not support
+	 * locking inside loops.
+	 */
+	context_unsafe(
  	idr_for_each_entry(&connection->peer_devices, peer_device, vnr)
  		mutex_unlock(peer_device->device->state_mutex);
+	);

  	rcu_read_lock();
  	idr_for_each_entry(&connection->peer_devices, peer_device, vnr) {
@@ -1657,7 +1670,7 @@ static void drbd_csum_ee_size(struct crypto_shash *h,
   */
  static struct drbd_peer_request *
read_in_block(struct drbd_peer_device *peer_device, u64 id, sector_t sector,
-	      struct packet_info *pi) __must_hold(local)
+	      struct packet_info *pi)
  {
  	struct drbd_device *device = peer_device->device;
  	const sector_t capacity = get_capacity(device->vdisk);
@@ -1869,7 +1882,7 @@ static int e_end_resync_block(struct drbd_work *w, int unused)
  }

  static int recv_resync_read(struct drbd_peer_device *peer_device, sector_t sector,
-			    struct packet_info *pi) __releases(local)
+			    struct packet_info *pi)
  {
  	struct drbd_device *device = peer_device->device;
  	struct drbd_peer_request *peer_req;
@@ -2230,6 +2243,7 @@ static blk_opf_t wire_flags_to_bio(struct drbd_connection *connection, u32 dpf)

  static void fail_postponed_requests(struct drbd_device *device, sector_t sector,
  				    unsigned int size)
+	__must_hold(&device->resource->req_lock)
  {
  	struct drbd_peer_device *peer_device = first_peer_device(device);
  	struct drbd_interval *i;
@@ -2256,6 +2270,7 @@ static void fail_postponed_requests(struct drbd_device *device, sector_t sector,

  static int handle_write_conflicts(struct drbd_device *device,
  				  struct drbd_peer_request *peer_req)
+	__must_hold(&device->resource->req_lock)
  {
  	struct drbd_connection *connection = peer_req->peer_device->connection;
  	bool resolve_conflicts = test_bit(RESOLVE_CONFLICTS, &connection->flags);
@@ -2826,7 +2841,7 @@ static int receive_DataRequest(struct drbd_connection *connection, struct packet
  /*
   * drbd_asb_recover_0p  -  Recover after split-brain with no remaining primaries
   */
-static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device)
  {
  	struct drbd_device *device = peer_device->device;
  	int self, peer, rv = -100;
@@ -2909,7 +2924,7 @@ static int drbd_asb_recover_0p(struct drbd_peer_device *peer_device) __must_hold
  /*
   * drbd_asb_recover_1p  -  Recover after split-brain with one remaining primary
   */
-static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device)
  {
  	struct drbd_device *device = peer_device->device;
  	int hg, rv = -100;
@@ -2966,7 +2981,7 @@ static int drbd_asb_recover_1p(struct drbd_peer_device *peer_device) __must_hold
  /*
   * drbd_asb_recover_2p  -  Recover after split-brain with two remaining primaries
   */
-static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device) __must_hold(local)
+static int drbd_asb_recover_2p(struct drbd_peer_device *peer_device)
  {
  	struct drbd_device *device = peer_device->device;
  	int hg, rv = -100;
@@ -3044,7 +3059,7 @@ static void drbd_uuid_dump(struct drbd_device *device, char *text, u64 *uuid,
   */

  static int drbd_uuid_compare(struct drbd_peer_device *const peer_device,
-		enum drbd_role const peer_role, int *rule_nr) __must_hold(local)
+		enum drbd_role const peer_role, int *rule_nr)
  {
  	struct drbd_connection *const connection = peer_device->connection;
  	struct drbd_device *device = peer_device->device;
@@ -3264,7 +3279,7 @@ static int drbd_uuid_compare(struct drbd_peer_device *const peer_device,
   */
  static enum drbd_conns drbd_sync_handshake(struct drbd_peer_device *peer_device,
  					   enum drbd_role peer_role,
-					   enum drbd_disk_state peer_disk) __must_hold(local)
+					   enum drbd_disk_state peer_disk)
  {
  	struct drbd_device *device = peer_device->device;
  	enum drbd_conns rv = C_MASK;
diff --git a/drivers/block/drbd/drbd_req.c b/drivers/block/drbd/drbd_req.c
index 70f75ef07945..a758d0f66e3f 100644
--- a/drivers/block/drbd/drbd_req.c
+++ b/drivers/block/drbd/drbd_req.c
@@ -952,6 +952,7 @@ static bool remote_due_to_read_balancing(struct drbd_device *device, sector_t se
   * Only way out: remove the conflicting intervals from the tree.
   */
  static void complete_conflicting_writes(struct drbd_request *req)
+	__must_hold(&req->device->resource->req_lock)
  {
  	DEFINE_WAIT(wait);
  	struct drbd_device *device = req->device;
@@ -1325,6 +1326,8 @@ static void drbd_send_and_submit(struct drbd_device *device, struct drbd_request
  	bool submit_private_bio = false;

  	spin_lock_irq(&resource->req_lock);
+	/* Tell the compiler that &resource->req_lock == &req->device->resource->req_lock. */
+	__assume_ctx_lock(&req->device->resource->req_lock);
  	if (rw == WRITE) {
  		/* This may temporarily give up the req_lock,
  		 * but will re-aquire it before it returns here.
diff --git a/drivers/block/drbd/drbd_state.c b/drivers/block/drbd/drbd_state.c
index adcba7f1d8ea..1c18d9f81e03 100644
--- a/drivers/block/drbd/drbd_state.c
+++ b/drivers/block/drbd/drbd_state.c
@@ -562,6 +562,7 @@ _req_st_cond(struct drbd_device *device, union drbd_state mask,
  static enum drbd_state_rv
  drbd_req_state(struct drbd_device *device, union drbd_state mask,
  	       union drbd_state val, enum chg_state_flags f)
+	__no_context_analysis /* conditional locking */
  {
  	struct completion done;
  	unsigned long flags;
@@ -2292,6 +2293,7 @@ _conn_rq_cond(struct drbd_connection *connection, union drbd_state mask, union d
  enum drbd_state_rv
  _conn_request_state(struct drbd_connection *connection, union drbd_state mask, union drbd_state val,
  		    enum chg_state_flags flags)
+	__no_context_analysis /* conditional locking */
  {
  	enum drbd_state_rv rv = SS_SUCCESS;
  	struct after_conn_state_chg_work *acscw;
diff --git a/drivers/block/drbd/drbd_state.h b/drivers/block/drbd/drbd_state.h
index cbaeb8018dbf..e6fded8b14ee 100644
--- a/drivers/block/drbd/drbd_state.h
+++ b/drivers/block/drbd/drbd_state.h
@@ -123,10 +123,6 @@ extern enum drbd_state_rv _drbd_request_state(struct drbd_device *,
  					      union drbd_state,
  					      enum chg_state_flags);

-extern enum drbd_state_rv
-_drbd_request_state_holding_state_mutex(struct drbd_device *, union drbd_state,
-					union drbd_state, enum chg_state_flags);
-
  extern enum drbd_state_rv _drbd_set_state(struct drbd_device *, union drbd_state,
  					  enum chg_state_flags,
  					  struct completion *done);
diff --git a/drivers/block/drbd/drbd_worker.c b/drivers/block/drbd/drbd_worker.c
index 0697f99fed18..6fec59bbf0e9 100644
--- a/drivers/block/drbd/drbd_worker.c
+++ b/drivers/block/drbd/drbd_worker.c
@@ -78,7 +78,7 @@ void drbd_md_endio(struct bio *bio)
  /* reads on behalf of the partner,
   * "submitted" by the receiver
   */
-static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req)
  {
  	unsigned long flags = 0;
  	struct drbd_peer_device *peer_device = peer_req->peer_device;
@@ -99,7 +99,7 @@ static void drbd_endio_read_sec_final(struct drbd_peer_request *peer_req) __rele

  /* writes on behalf of the partner, or resync writes,
   * "submitted" by the receiver, final stage.  */
-void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req) __releases(local)
+void drbd_endio_write_sec_final(struct drbd_peer_request *peer_req)
  {
  	unsigned long flags = 0;
  	struct drbd_peer_device *peer_device = peer_req->peer_device;
@@ -1923,10 +1923,8 @@ static void drbd_ldev_destroy(struct drbd_device *device)
  	lc_destroy(device->act_log);
  	device->act_log = NULL;

-	__acquire(local);
  	drbd_backing_dev_free(device, device->ldev);
  	device->ldev = NULL;
-	__release(local);

  	clear_bit(GOING_DISKLESS, &device->flags);
  	wake_up(&device->misc_wait);



* Re: [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang
  2026-03-09 23:15     ` Bart Van Assche
@ 2026-03-11 20:42       ` Christoph Böhmwalder
  0 siblings, 0 replies; 39+ messages in thread
From: Christoph Böhmwalder @ 2026-03-11 20:42 UTC (permalink / raw)
  To: Bart Van Assche, Jens Axboe
  Cc: Christoph Hellwig, Damien Le Moal, Marco Elver, linux-block,
	Philipp Reisner, Lars Ellenberg, Nathan Chancellor



Am 10.03.26 um 00:15 schrieb Bart Van Assche:
> On 3/9/26 3:08 AM, Christoph Böhmwalder wrote:
>> These are marked as acquiring connection->data.mutex, but actually use
>> connection->meta later. So the annotation should reference meta.mutex.
>>
>> Also, I think the annotations on drbd_send_* are wrong. These functions
>> have no path where they return without releasing the mutex, but these
>> annotations would tell clang that they hold the mutex on non-zero return.
> 
> Thanks for the feedback. Does the patch below look better? Compared to
> the previous version, all lock context annotations (except for static
> functions) have been moved from .c into .h files. Lock context aliases
> have been introduced if the synchronization object is not visible in the
> .h file. Two function declarations have been moved from before to after
> the struct drbd_device definition.
> 
> Thanks,
> 
> Bart.
> 

Yes, looks better now, and I can confirm that no warnings remain (after
the other DRBD patch in this series).

Reviewed-by: Christoph Böhmwalder <christoph.boehmwalder@linbit.com>

-- 
Christoph Böhmwalder
LINBIT | Keeping the Digital World Running
DRBD HA —  Disaster Recovery — Software defined Storage



end of thread, other threads:[~2026-03-11 20:42 UTC | newest]

Thread overview: 39+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2026-03-04 19:48 [PATCH 00/14] Enable lock context analysis Bart Van Assche
2026-03-04 19:48 ` [PATCH 01/14] drbd: Balance RCU calls in drbd_adm_dump_devices() Bart Van Assche
2026-03-04 20:25   ` Damien Le Moal
2026-03-04 20:59     ` Bart Van Assche
2026-03-04 19:48 ` [PATCH 02/14] blk-ioc: Prepare for enabling thread-safety analysis Bart Van Assche
2026-03-05 10:10   ` Jan Kara
2026-03-05 12:46     ` Bart Van Assche
2026-03-05 13:18       ` Marco Elver
2026-03-05 14:35         ` Bart Van Assche
2026-03-05 20:30           ` Marco Elver
2026-03-04 19:48 ` [PATCH 03/14] block: Make the lock context annotations compatible with Clang Bart Van Assche
2026-03-04 20:03   ` Tejun Heo
2026-03-04 20:29     ` Bart Van Assche
2026-03-04 20:58       ` Tejun Heo
2026-03-04 21:34         ` Bart Van Assche
2026-03-04 21:45           ` Tejun Heo
2026-03-04 21:46             ` Tejun Heo
2026-03-04 19:48 ` [PATCH 04/14] aoe: Add a lock context annotation Bart Van Assche
2026-03-04 19:48 ` [PATCH 05/14] drbd: Make the lock context annotations compatible with Clang Bart Van Assche
2026-03-09 10:08   ` Christoph Böhmwalder
2026-03-09 23:15     ` Bart Van Assche
2026-03-11 20:42       ` Christoph Böhmwalder
2026-03-04 19:48 ` [PATCH 06/14] loop: Add lock context annotations Bart Van Assche
2026-03-04 19:48 ` [PATCH 07/14] nbd: " Bart Van Assche
2026-03-04 19:48 ` [PATCH 08/14] null_blk: Add more " Bart Van Assche
2026-03-04 19:48 ` [PATCH 09/14] rbd: Add " Bart Van Assche
2026-03-04 19:48 ` [PATCH 10/14] rnbd: Add more " Bart Van Assche
2026-03-06 13:09   ` Marco Elver
2026-03-06 14:11     ` Bart Van Assche
2026-03-04 19:48 ` [PATCH 11/14] ublk: Fix the " Bart Van Assche
2026-03-04 20:43   ` Caleb Sander Mateos
2026-03-04 20:55     ` Bart Van Assche
2026-03-04 21:03       ` Caleb Sander Mateos
2026-03-04 21:36         ` Bart Van Assche
2026-03-04 19:48 ` [PATCH 12/14] zloop: Add a " Bart Van Assche
2026-03-04 19:48 ` [PATCH 13/14] zram: Add " Bart Van Assche
2026-03-05  1:23   ` Sergey Senozhatsky
2026-03-04 19:48 ` [PATCH 14/14] block: Enable lock context analysis for all block drivers Bart Van Assche
2026-03-05  1:33   ` Sergey Senozhatsky
