* [PATCH] block: recompute nr_integrity_segments in blk_insert_cloned_request
@ 2026-05-11 21:22 Casey Chen
From: Casey Chen @ 2026-05-11 21:22 UTC
To: Jens Axboe
Cc: Keith Busch, Christoph Hellwig, Martin K. Petersen, linux-block,
dm-devel, Casey Chen
blk_insert_cloned_request() already recomputes nr_phys_segments
against the bottom queue, because "the queue settings related to
segment counting may differ from the original queue." The exact same
reasoning applies to integrity segments: a stacked driver's underlying
queue can have tighter virt_boundary_mask, seg_boundary_mask, or
max_segment_size than the top queue, in which case
blk_rq_count_integrity_sg() against the bottom queue produces a
different count than the cached rq->nr_integrity_segments inherited
from the source request by blk_rq_prep_clone().
When the cached count is lower than the bottom queue's actual count,
blk_rq_map_integrity_sg() trips
	BUG_ON(segments > rq->nr_integrity_segments);
on dispatch. The same families of stacked setups that motivated the
existing nr_phys_segments recompute -- dm-multipath fanning out to
nvme-rdma in particular -- can produce this.
Mirror the nr_phys_segments handling: when the request carries
integrity, recompute nr_integrity_segments against the bottom queue
and reject the request if it exceeds the bottom queue's
max_integrity_segments. blk_rq_count_integrity_sg() and
queue_max_integrity_segments() are both already available via
<linux/blk-integrity.h>, which blk-mq.c includes.
This closes a latent gap in the stacking contract and brings the
integrity-segment accounting in line with the existing
phys-segment accounting.
Fixes: 76c313f658d2 ("blk-integrity: improved sg segment mapping")
Signed-off-by: Casey Chen <cachen@purestorage.com>
---
block/blk-mq.c | 19 +++++++++++++++++++
1 file changed, 19 insertions(+)
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 4c5c16cce4f8..d0c37daf568f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3307,6 +3307,25 @@ blk_status_t blk_insert_cloned_request(struct request *rq)
 		return BLK_STS_IOERR;
 	}
 
+	/*
+	 * Integrity segment counting depends on the same queue limits
+	 * (virt_boundary_mask, seg_boundary_mask, max_segment_size) that
+	 * vary across stacked queues, so recompute against the bottom
+	 * queue just like nr_phys_segments above.
+	 */
+	if (blk_integrity_rq(rq) && rq->bio) {
+		unsigned short max_int_segs = queue_max_integrity_segments(q);
+
+		rq->nr_integrity_segments =
+			blk_rq_count_integrity_sg(rq->q, rq->bio);
+		if (rq->nr_integrity_segments > max_int_segs) {
+			printk(KERN_ERR "%s: over max integrity segments limit. (%u > %u)\n",
+				__func__, rq->nr_integrity_segments,
+				max_int_segs);
+			return BLK_STS_IOERR;
+		}
+	}
+
 	if (q->disk && should_fail_request(q->disk->part0, blk_rq_bytes(rq)))
 		return BLK_STS_IOERR;
--
2.50.1
* Re: [PATCH] block: recompute nr_integrity_segments in blk_insert_cloned_request
From: Christoph Hellwig @ 2026-05-12 6:17 UTC
To: Casey Chen
Cc: Jens Axboe, Keith Busch, Christoph Hellwig, Martin K. Petersen,
linux-block, dm-devel
Looks good:
Reviewed-by: Christoph Hellwig <hch@lst.de>
* Re: [PATCH] block: recompute nr_integrity_segments in blk_insert_cloned_request
From: Jens Axboe @ 2026-05-12 15:27 UTC
To: Casey Chen
Cc: Keith Busch, Christoph Hellwig, Martin K. Petersen, linux-block,
dm-devel
On Mon, 11 May 2026 15:22:30 -0600, Casey Chen wrote:
> blk_insert_cloned_request() already recomputes nr_phys_segments
> against the bottom queue, because "the queue settings related to
> segment counting may differ from the original queue." The exact same
> reasoning applies to integrity segments: a stacked driver's underlying
> queue can have tighter virt_boundary_mask, seg_boundary_mask, or
> max_segment_size than the top queue, in which case
> blk_rq_count_integrity_sg() against the bottom queue produces a
> different count than the cached rq->nr_integrity_segments inherited
> from the source request by blk_rq_prep_clone().
>
> [...]
Applied, thanks!
[1/1] block: recompute nr_integrity_segments in blk_insert_cloned_request
commit: 2c6e6a18a37b905cb584eb0dda3ae482162a81ca
Best regards,
--
Jens Axboe