* [PATCH v2 26/26] block: Enable lock context analysis for all block drivers
2026-03-25 21:44 [PATCH v2 00/26] Enable lock context analysis Bart Van Assche
@ 2026-03-25 21:45 ` Bart Van Assche
0 siblings, 0 replies; 2+ messages in thread
From: Bart Van Assche @ 2026-03-25 21:45 UTC (permalink / raw)
To: Jens Axboe
Cc: linux-block, Christoph Hellwig, Damien Le Moal, Tejun Heo,
Bart Van Assche
Now that all locking functions in block drivers have been annotated,
enable lock context analysis for all block drivers at the top level of
drivers/block/.
Signed-off-by: Bart Van Assche <bvanassche@acm.org>
---
drivers/block/Makefile | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/block/Makefile b/drivers/block/Makefile
index 2d8096eb8cdf..e17f6381b798 100644
--- a/drivers/block/Makefile
+++ b/drivers/block/Makefile
@@ -6,6 +6,8 @@
# Rewritten to use lists instead of if-statements.
#
+CONTEXT_ANALYSIS := y
+
# needed for trace events
ccflags-y += -I$(src)
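For readers unfamiliar with the feature being enabled: "lock context analysis" is Clang's thread-safety analysis (`-Wthread-safety`). The sketch below is a userspace analogue using Clang's raw attribute names directly; the kernel wraps equivalents of these in its own annotation macros, which are not reproduced here, and the `zone_lock`/`zone` names are illustrative, not from this series.

```c
/*
 * Userspace sketch of lock context analysis using Clang's raw
 * -Wthread-safety attributes. On non-Clang compilers the macros
 * expand to nothing, so the code still builds and runs normally.
 */
#include <pthread.h>

#ifdef __clang__
#define CAPABILITY(x)	__attribute__((capability(x)))
#define GUARDED_BY(x)	__attribute__((guarded_by(x)))
#define ACQUIRE(x)	__attribute__((acquire_capability(x)))
#define RELEASE(x)	__attribute__((release_capability(x)))
#else
#define CAPABILITY(x)
#define GUARDED_BY(x)
#define ACQUIRE(x)
#define RELEASE(x)
#endif

struct CAPABILITY("mutex") zone_lock {
	pthread_mutex_t m;
};

static void zone_lock_acquire(struct zone_lock *l) ACQUIRE(l)
{
	pthread_mutex_lock(&l->m);
}

static void zone_lock_release(struct zone_lock *l) RELEASE(l)
{
	pthread_mutex_unlock(&l->m);
}

/* A zone whose write pointer may only move while the lock is held. */
struct zone {
	struct zone_lock lock;
	unsigned long long wp GUARDED_BY(lock);
};

/* Clean for the analyzer: wp is accessed with the lock held on
 * every path, and the acquire/release pair is unconditional. */
static unsigned long long zone_advance_wp(struct zone *z,
					  unsigned long long n)
{
	unsigned long long wp;

	zone_lock_acquire(&z->lock);
	z->wp += n;
	wp = z->wp;
	zone_lock_release(&z->lock);
	return wp;
}
```

Compiling with `clang -Wthread-safety` makes the compiler verify, per path, that `wp` is never touched without `lock` held; GCC ignores the annotations entirely.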
* Re: [PATCH v2 26/26] block: Enable lock context analysis for all block drivers
@ 2026-03-27 21:20 kernel test robot
0 siblings, 0 replies; 2+ messages in thread
From: kernel test robot @ 2026-03-27 21:20 UTC (permalink / raw)
To: oe-kbuild; +Cc: lkp
::::::
:::::: Manual check reason: "only suspicious fbc files changed"
::::::
BCC: lkp@intel.com
CC: llvm@lists.linux.dev
CC: oe-kbuild-all@lists.linux.dev
In-Reply-To: <20260325214518.2854494-27-bvanassche@acm.org>
References: <20260325214518.2854494-27-bvanassche@acm.org>
TO: Bart Van Assche <bvanassche@acm.org>
TO: Jens Axboe <axboe@kernel.dk>
CC: linux-block@vger.kernel.org
CC: Christoph Hellwig <hch@lst.de>
CC: Damien Le Moal <dlemoal@kernel.org>
CC: Tejun Heo <tj@kernel.org>
CC: Bart Van Assche <bvanassche@acm.org>
Hi Bart,
kernel test robot noticed the following build warnings:
[auto build test WARNING on ceph-client/testing]
[also build test WARNING on ceph-client/for-linus linus/master v7.0-rc5]
[cannot apply to axboe/for-next hch-configfs/for-next next-20260326]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patches, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]
url: https://github.com/intel-lab-lkp/linux/commits/Bart-Van-Assche/block-Annotate-the-queue-limits-functions/20260327-071524
base: https://github.com/ceph/ceph-client.git testing
patch link: https://lore.kernel.org/r/20260325214518.2854494-27-bvanassche%40acm.org
patch subject: [PATCH v2 26/26] block: Enable lock context analysis for all block drivers
:::::: branch date: 22 hours ago
:::::: commit date: 22 hours ago
config: sparc64-randconfig-001-20260327 (https://download.01.org/0day-ci/archive/20260328/202603280554.fjeELK3q-lkp@intel.com/config)
compiler: clang version 23.0.0git (https://github.com/llvm/llvm-project 054e11d1a17e5ba88bb1a8ef32fad3346e80b186)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20260328/202603280554.fjeELK3q-lkp@intel.com/reproduce)
If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/r/202603280554.fjeELK3q-lkp@intel.com/
All warnings (new ones prefixed by >>):
>> drivers/block/zloop.c:490:12: warning: mutex 'blk_mq_rq_from_pdu(cmd).q->queuedata->zones[zone_no].lock' is not held on every path through here [-Wthread-safety-analysis]
490 | nr_bvec = blk_rq_nr_bvec(rq);
| ^
drivers/block/zloop.c:436:3: note: mutex acquired here
436 | mutex_lock(&zone->lock);
| ^
include/linux/mutex.h:193:26: note: expanded from macro 'mutex_lock'
193 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^
drivers/block/zloop.c:537:7: warning: mutex 'blk_mq_rq_from_pdu(cmd).q->queuedata->zones[zone_no].lock' is not held on every path through here [-Wthread-safety-analysis]
537 | if (!test_bit(ZLOOP_ZONE_CONV, &zone->flags) && is_write)
| ^
include/linux/bitops.h:60:29: note: expanded from macro 'test_bit'
60 | #define test_bit(nr, addr) bitop(_test_bit, nr, addr)
| ^
include/linux/bitops.h:43:4: note: expanded from macro 'bitop'
43 | ((__builtin_constant_p(nr) && \
| ^
drivers/block/zloop.c:436:3: note: mutex acquired here
436 | mutex_lock(&zone->lock);
| ^
include/linux/mutex.h:193:26: note: expanded from macro 'mutex_lock'
193 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^
drivers/block/zloop.c:537:7: warning: mutex 'blk_mq_rq_from_pdu(cmd).q->queuedata->zones[zone_no].lock' is not held on every path through here [-Wthread-safety-analysis]
537 | if (!test_bit(ZLOOP_ZONE_CONV, &zone->flags) && is_write)
| ^
include/linux/bitops.h:60:29: note: expanded from macro 'test_bit'
60 | #define test_bit(nr, addr) bitop(_test_bit, nr, addr)
| ^
include/linux/bitops.h:43:4: note: expanded from macro 'bitop'
43 | ((__builtin_constant_p(nr) && \
| ^
drivers/block/zloop.c:436:3: note: mutex acquired here
436 | mutex_lock(&zone->lock);
| ^
include/linux/mutex.h:193:26: note: expanded from macro 'mutex_lock'
193 | #define mutex_lock(lock) mutex_lock_nested(lock, 0)
| ^
>> drivers/block/zloop.c:538:3: warning: releasing mutex 'blk_mq_rq_from_pdu(cmd).q->queuedata->zones[zone_no].lock' that was not held [-Wthread-safety-analysis]
538 | mutex_unlock(&zone->lock);
| ^
4 warnings generated.
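The shape the analyzer objects to in zloop_rw() reduces to a few lines: the mutex is acquired only when the zone is sequential and the request is a write, and released at the `unlock:` label under a re-evaluation of the same condition. The code is correct at runtime, but the analysis reasons path by path and does not connect the two tests, so the locked and unlocked paths merge into a "not held on every path" state. A minimal userspace reproduction (hypothetical names, Clang's raw attributes rather than the kernel's macros):

```c
#include <pthread.h>

#ifdef __clang__
#define CAPABILITY(x)	__attribute__((capability(x)))
#define ACQUIRE(x)	__attribute__((acquire_capability(x)))
#define RELEASE(x)	__attribute__((release_capability(x)))
#else
#define CAPABILITY(x)
#define ACQUIRE(x)
#define RELEASE(x)
#endif

struct CAPABILITY("mutex") zone_lock { pthread_mutex_t m; };

static void zone_lock_acquire(struct zone_lock *l) ACQUIRE(l)
{ pthread_mutex_lock(&l->m); }

static void zone_lock_release(struct zone_lock *l) RELEASE(l)
{ pthread_mutex_unlock(&l->m); }

struct zone { struct zone_lock lock; int writes; };

/*
 * Same shape as zloop_rw(): acquire and release are both guarded by
 * the condition. Correct at runtime, but with -Wthread-safety the
 * analyzer merges the locked and unlocked paths at each join point
 * and emits "not held on every path" and "releasing mutex that was
 * not held" warnings like the ones above.
 */
static int zone_rw(struct zone *z, int is_seq_write)
{
	if (is_seq_write)
		zone_lock_acquire(&z->lock);

	if (is_seq_write)
		z->writes++;		/* work done under the lock */

	if (is_seq_write)
		zone_lock_release(&z->lock);
	return z->writes;
}
```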
vim +490 drivers/block/zloop.c
eb0570c7df23c2 Damien Le Moal 2025-04-07 380
eb0570c7df23c2 Damien Le Moal 2025-04-07 381 static void zloop_rw(struct zloop_cmd *cmd)
eb0570c7df23c2 Damien Le Moal 2025-04-07 382 {
eb0570c7df23c2 Damien Le Moal 2025-04-07 383 struct request *rq = blk_mq_rq_from_pdu(cmd);
eb0570c7df23c2 Damien Le Moal 2025-04-07 384 struct zloop_device *zlo = rq->q->queuedata;
eb0570c7df23c2 Damien Le Moal 2025-04-07 385 unsigned int zone_no = rq_zone_no(rq);
eb0570c7df23c2 Damien Le Moal 2025-04-07 386 sector_t sector = blk_rq_pos(rq);
eb0570c7df23c2 Damien Le Moal 2025-04-07 387 sector_t nr_sectors = blk_rq_sectors(rq);
eb0570c7df23c2 Damien Le Moal 2025-04-07 388 bool is_append = req_op(rq) == REQ_OP_ZONE_APPEND;
eb0570c7df23c2 Damien Le Moal 2025-04-07 389 bool is_write = req_op(rq) == REQ_OP_WRITE || is_append;
eb0570c7df23c2 Damien Le Moal 2025-04-07 390 int rw = is_write ? ITER_SOURCE : ITER_DEST;
eb0570c7df23c2 Damien Le Moal 2025-04-07 391 struct req_iterator rq_iter;
eb0570c7df23c2 Damien Le Moal 2025-04-07 392 struct zloop_zone *zone;
eb0570c7df23c2 Damien Le Moal 2025-04-07 393 struct iov_iter iter;
eb0570c7df23c2 Damien Le Moal 2025-04-07 394 struct bio_vec tmp;
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 395 unsigned long flags;
eb0570c7df23c2 Damien Le Moal 2025-04-07 396 sector_t zone_end;
71075d25ca5cae Chaitanya Kulkarni 2025-12-02 397 unsigned int nr_bvec;
eb0570c7df23c2 Damien Le Moal 2025-04-07 398 int ret;
eb0570c7df23c2 Damien Le Moal 2025-04-07 399
eb0570c7df23c2 Damien Le Moal 2025-04-07 400 atomic_set(&cmd->ref, 2);
eb0570c7df23c2 Damien Le Moal 2025-04-07 401 cmd->sector = sector;
eb0570c7df23c2 Damien Le Moal 2025-04-07 402 cmd->nr_sectors = nr_sectors;
eb0570c7df23c2 Damien Le Moal 2025-04-07 403 cmd->ret = 0;
eb0570c7df23c2 Damien Le Moal 2025-04-07 404
9236c5fdd5a8be Damien Le Moal 2025-11-15 405 if (WARN_ON_ONCE(is_append && !zlo->zone_append)) {
9236c5fdd5a8be Damien Le Moal 2025-11-15 406 ret = -EIO;
9236c5fdd5a8be Damien Le Moal 2025-11-15 407 goto out;
9236c5fdd5a8be Damien Le Moal 2025-11-15 408 }
9236c5fdd5a8be Damien Le Moal 2025-11-15 409
eb0570c7df23c2 Damien Le Moal 2025-04-07 410 /* We should never get an I/O beyond the device capacity. */
eb0570c7df23c2 Damien Le Moal 2025-04-07 411 if (WARN_ON_ONCE(zone_no >= zlo->nr_zones)) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 412 ret = -EIO;
eb0570c7df23c2 Damien Le Moal 2025-04-07 413 goto out;
eb0570c7df23c2 Damien Le Moal 2025-04-07 414 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 415 zone = &zlo->zones[zone_no];
eb0570c7df23c2 Damien Le Moal 2025-04-07 416 zone_end = zone->start + zlo->zone_capacity;
eb0570c7df23c2 Damien Le Moal 2025-04-07 417
eb0570c7df23c2 Damien Le Moal 2025-04-07 418 /*
eb0570c7df23c2 Damien Le Moal 2025-04-07 419 * The block layer should never send requests that are not fully
eb0570c7df23c2 Damien Le Moal 2025-04-07 420 * contained within the zone.
eb0570c7df23c2 Damien Le Moal 2025-04-07 421 */
eb0570c7df23c2 Damien Le Moal 2025-04-07 422 if (WARN_ON_ONCE(sector + nr_sectors > zone->start + zlo->zone_size)) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 423 ret = -EIO;
eb0570c7df23c2 Damien Le Moal 2025-04-07 424 goto out;
eb0570c7df23c2 Damien Le Moal 2025-04-07 425 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 426
eb0570c7df23c2 Damien Le Moal 2025-04-07 427 if (test_and_clear_bit(ZLOOP_ZONE_SEQ_ERROR, &zone->flags)) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 428 mutex_lock(&zone->lock);
eb0570c7df23c2 Damien Le Moal 2025-04-07 429 ret = zloop_update_seq_zone(zlo, zone_no);
eb0570c7df23c2 Damien Le Moal 2025-04-07 430 mutex_unlock(&zone->lock);
eb0570c7df23c2 Damien Le Moal 2025-04-07 431 if (ret)
eb0570c7df23c2 Damien Le Moal 2025-04-07 432 goto out;
eb0570c7df23c2 Damien Le Moal 2025-04-07 433 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 434
eb0570c7df23c2 Damien Le Moal 2025-04-07 435 if (!test_bit(ZLOOP_ZONE_CONV, &zone->flags) && is_write) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 436 mutex_lock(&zone->lock);
eb0570c7df23c2 Damien Le Moal 2025-04-07 437
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 438 spin_lock_irqsave(&zone->wp_lock, flags);
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 439
e3a96ca90462f8 Damien Le Moal 2025-11-15 440 /*
e3a96ca90462f8 Damien Le Moal 2025-11-15 441 * Zone append operations always go at the current write
e3a96ca90462f8 Damien Le Moal 2025-11-15 442 * pointer, but regular write operations must already be
e3a96ca90462f8 Damien Le Moal 2025-11-15 443 * aligned to the write pointer when submitted.
e3a96ca90462f8 Damien Le Moal 2025-11-15 444 */
eb0570c7df23c2 Damien Le Moal 2025-04-07 445 if (is_append) {
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 446 /*
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 447 * If ordered zone append is in use, we already checked
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 448 * and set the target sector in zloop_queue_rq().
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 449 */
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 450 if (!zlo->ordered_zone_append) {
a9637ab93c6cfd Damien Le Moal 2025-11-19 451 if (zone->cond == BLK_ZONE_COND_FULL ||
a9637ab93c6cfd Damien Le Moal 2025-11-19 452 zone->wp + nr_sectors > zone_end) {
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 453 spin_unlock_irqrestore(&zone->wp_lock,
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 454 flags);
cf28f6f923cb1d Damien Le Moal 2025-11-15 455 ret = -EIO;
cf28f6f923cb1d Damien Le Moal 2025-11-15 456 goto unlock;
cf28f6f923cb1d Damien Le Moal 2025-11-15 457 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 458 sector = zone->wp;
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 459 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 460 cmd->sector = sector;
e3a96ca90462f8 Damien Le Moal 2025-11-15 461 } else if (sector != zone->wp) {
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 462 spin_unlock_irqrestore(&zone->wp_lock, flags);
eb0570c7df23c2 Damien Le Moal 2025-04-07 463 pr_err("Zone %u: unaligned write: sect %llu, wp %llu\n",
eb0570c7df23c2 Damien Le Moal 2025-04-07 464 zone_no, sector, zone->wp);
eb0570c7df23c2 Damien Le Moal 2025-04-07 465 ret = -EIO;
eb0570c7df23c2 Damien Le Moal 2025-04-07 466 goto unlock;
eb0570c7df23c2 Damien Le Moal 2025-04-07 467 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 468
eb0570c7df23c2 Damien Le Moal 2025-04-07 469 /* Implicitly open the target zone. */
eb0570c7df23c2 Damien Le Moal 2025-04-07 470 if (zone->cond == BLK_ZONE_COND_CLOSED ||
eb0570c7df23c2 Damien Le Moal 2025-04-07 471 zone->cond == BLK_ZONE_COND_EMPTY)
eb0570c7df23c2 Damien Le Moal 2025-04-07 472 zone->cond = BLK_ZONE_COND_IMP_OPEN;
eb0570c7df23c2 Damien Le Moal 2025-04-07 473
eb0570c7df23c2 Damien Le Moal 2025-04-07 474 /*
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 475 * Advance the write pointer, unless ordered zone append is in
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 476 * use. If the write fails, the write pointer position will be
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 477 * corrected when the next I/O starts execution.
eb0570c7df23c2 Damien Le Moal 2025-04-07 478 */
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 479 if (!is_append || !zlo->ordered_zone_append) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 480 zone->wp += nr_sectors;
866d65745b6359 Damien Le Moal 2025-11-15 481 if (zone->wp == zone_end) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 482 zone->cond = BLK_ZONE_COND_FULL;
866d65745b6359 Damien Le Moal 2025-11-15 483 zone->wp = ULLONG_MAX;
866d65745b6359 Damien Le Moal 2025-11-15 484 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 485 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 486
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 487 spin_unlock_irqrestore(&zone->wp_lock, flags);
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 488 }
fcc6eaa3a03a0e Damien Le Moal 2025-11-15 489
71075d25ca5cae Chaitanya Kulkarni 2025-12-02 @490 nr_bvec = blk_rq_nr_bvec(rq);
eb0570c7df23c2 Damien Le Moal 2025-04-07 491
eb0570c7df23c2 Damien Le Moal 2025-04-07 492 if (rq->bio != rq->biotail) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 493 struct bio_vec *bvec;
eb0570c7df23c2 Damien Le Moal 2025-04-07 494
69050f8d6d075d Kees Cook 2026-02-20 495 cmd->bvec = kmalloc_objs(*cmd->bvec, nr_bvec, GFP_NOIO);
eb0570c7df23c2 Damien Le Moal 2025-04-07 496 if (!cmd->bvec) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 497 ret = -EIO;
eb0570c7df23c2 Damien Le Moal 2025-04-07 498 goto unlock;
eb0570c7df23c2 Damien Le Moal 2025-04-07 499 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 500
eb0570c7df23c2 Damien Le Moal 2025-04-07 501 /*
eb0570c7df23c2 Damien Le Moal 2025-04-07 502 * The bios of the request may be started from the middle of
eb0570c7df23c2 Damien Le Moal 2025-04-07 503 * the 'bvec' because of bio splitting, so we can't directly
eb0570c7df23c2 Damien Le Moal 2025-04-07 504 * copy bio->bi_iov_vec to new bvec. The rq_for_each_bvec
eb0570c7df23c2 Damien Le Moal 2025-04-07 505 * API will take care of all details for us.
eb0570c7df23c2 Damien Le Moal 2025-04-07 506 */
eb0570c7df23c2 Damien Le Moal 2025-04-07 507 bvec = cmd->bvec;
eb0570c7df23c2 Damien Le Moal 2025-04-07 508 rq_for_each_bvec(tmp, rq, rq_iter) {
eb0570c7df23c2 Damien Le Moal 2025-04-07 509 *bvec = tmp;
eb0570c7df23c2 Damien Le Moal 2025-04-07 510 bvec++;
eb0570c7df23c2 Damien Le Moal 2025-04-07 511 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 512 iov_iter_bvec(&iter, rw, cmd->bvec, nr_bvec, blk_rq_bytes(rq));
eb0570c7df23c2 Damien Le Moal 2025-04-07 513 } else {
eb0570c7df23c2 Damien Le Moal 2025-04-07 514 /*
eb0570c7df23c2 Damien Le Moal 2025-04-07 515 * Same here, this bio may be started from the middle of the
eb0570c7df23c2 Damien Le Moal 2025-04-07 516 * 'bvec' because of bio splitting, so offset from the bvec
eb0570c7df23c2 Damien Le Moal 2025-04-07 517 * must be passed to iov iterator
eb0570c7df23c2 Damien Le Moal 2025-04-07 518 */
eb0570c7df23c2 Damien Le Moal 2025-04-07 519 iov_iter_bvec(&iter, rw,
eb0570c7df23c2 Damien Le Moal 2025-04-07 520 __bvec_iter_bvec(rq->bio->bi_io_vec, rq->bio->bi_iter),
eb0570c7df23c2 Damien Le Moal 2025-04-07 521 nr_bvec, blk_rq_bytes(rq));
eb0570c7df23c2 Damien Le Moal 2025-04-07 522 iter.iov_offset = rq->bio->bi_iter.bi_bvec_done;
eb0570c7df23c2 Damien Le Moal 2025-04-07 523 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 524
eb0570c7df23c2 Damien Le Moal 2025-04-07 525 cmd->iocb.ki_pos = (sector - zone->start) << SECTOR_SHIFT;
eb0570c7df23c2 Damien Le Moal 2025-04-07 526 cmd->iocb.ki_filp = zone->file;
eb0570c7df23c2 Damien Le Moal 2025-04-07 527 cmd->iocb.ki_complete = zloop_rw_complete;
eb0570c7df23c2 Damien Le Moal 2025-04-07 528 if (!zlo->buffered_io)
eb0570c7df23c2 Damien Le Moal 2025-04-07 529 cmd->iocb.ki_flags = IOCB_DIRECT;
eb0570c7df23c2 Damien Le Moal 2025-04-07 530 cmd->iocb.ki_ioprio = IOPRIO_PRIO_VALUE(IOPRIO_CLASS_NONE, 0);
eb0570c7df23c2 Damien Le Moal 2025-04-07 531
eb0570c7df23c2 Damien Le Moal 2025-04-07 532 if (rw == ITER_SOURCE)
eb0570c7df23c2 Damien Le Moal 2025-04-07 533 ret = zone->file->f_op->write_iter(&cmd->iocb, &iter);
eb0570c7df23c2 Damien Le Moal 2025-04-07 534 else
eb0570c7df23c2 Damien Le Moal 2025-04-07 535 ret = zone->file->f_op->read_iter(&cmd->iocb, &iter);
eb0570c7df23c2 Damien Le Moal 2025-04-07 536 unlock:
eb0570c7df23c2 Damien Le Moal 2025-04-07 537 if (!test_bit(ZLOOP_ZONE_CONV, &zone->flags) && is_write)
eb0570c7df23c2 Damien Le Moal 2025-04-07 @538 mutex_unlock(&zone->lock);
eb0570c7df23c2 Damien Le Moal 2025-04-07 539 out:
eb0570c7df23c2 Damien Le Moal 2025-04-07 540 if (ret != -EIOCBQUEUED)
eb0570c7df23c2 Damien Le Moal 2025-04-07 541 zloop_rw_complete(&cmd->iocb, ret);
eb0570c7df23c2 Damien Le Moal 2025-04-07 542 zloop_put_cmd(cmd);
eb0570c7df23c2 Damien Le Moal 2025-04-07 543 }
eb0570c7df23c2 Damien Le Moal 2025-04-07 544
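One way such reports are commonly resolved without changing runtime behavior (a sketch under assumptions, not necessarily how this series will address zloop.c): evaluate the condition once, keep the acquire and release adjacent in a single scope, and move the work that needs the lock into a helper annotated as requiring it. Names and attributes below are illustrative userspace analogues of the kernel's annotation macros.

```c
#include <pthread.h>

#ifdef __clang__
#define CAPABILITY(x)	__attribute__((capability(x)))
#define ACQUIRE(x)	__attribute__((acquire_capability(x)))
#define RELEASE(x)	__attribute__((release_capability(x)))
#define REQUIRES(x)	__attribute__((requires_capability(x)))
#else
#define CAPABILITY(x)
#define ACQUIRE(x)
#define RELEASE(x)
#define REQUIRES(x)
#endif

struct CAPABILITY("mutex") zone_lock { pthread_mutex_t m; };

static void zone_lock_acquire(struct zone_lock *l) ACQUIRE(l)
{ pthread_mutex_lock(&l->m); }

static void zone_lock_release(struct zone_lock *l) RELEASE(l)
{ pthread_mutex_unlock(&l->m); }

struct zone { struct zone_lock lock; int writes; };

/* All work that needs the lock lives here; the analyzer verifies
 * that every caller holds it. */
static void zone_do_write(struct zone *z) REQUIRES(z->lock)
{
	z->writes++;
}

/* The condition is tested once, and the acquire/release pair forms
 * a straight-line scope the per-path analysis can follow, so no
 * warning is emitted. */
static int zone_rw(struct zone *z, int is_seq_write)
{
	if (is_seq_write) {
		zone_lock_acquire(&z->lock);
		zone_do_write(z);
		zone_lock_release(&z->lock);
	}
	return z->writes;
}
```

The trade-off is restructuring a long function; the alternative is a conditional-acquire annotation, which tends to be harder for both the analyzer and human readers to follow.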
--
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki