cgroups.vger.kernel.org archive mirror
* [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split
@ 2025-09-05  7:06 Yu Kuai
  2025-09-05  7:06 ` [PATCH for-6.18/block 01/16] block: cleanup bio_issue Yu Kuai
                   ` (16 more replies)
  0 siblings, 17 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Changes from RFC v3:
 - initialize bio->issue_time_ns in blk_mq_submit_bio, patch 2;
 - set/clear new queue_flag when iolatency is enabled/disabled, patch 3;
 - fix compile problem for md-linear, patch 12;
 - make should_fail_bio() non-static, and open code new helper, patch 14;
 - remove the checking for zoned disk, patch 15;
Changes from RFC v2:
 - add patch 1,2 to cleanup bio_issue;
 - add patch 3,4 to fix missing processing for split bio first;
 - bypass zoned device in patch 14;
Changes from RFC:
 - export a new helper bio_submit_split_bioset() instead of exporting
bio_submit_split() directly;
 - don't set no merge flag in the new helper;
 - add patch 7 and patch 10;
 - add patch 8 to skip bio checks for resubmitting split bio;

patches 1-5 clean up bio_issue, and fix missing processing for split bio;
patch 6 exports a bio split helper;
patches 7-13 unify the bio split code;
patches 14,15 convert the helper to insert the split bio at the head of the
current bio list;
patch 16 is a follow-up cleanup for raid0;

Yu Kuai (16):
  block: cleanup bio_issue
  block: initialize bio issue time in blk_mq_submit_bio()
  blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME
  md: fix missing blktrace bio split events
  blk-crypto: fix missing blktrace bio split events
  block: factor out a helper bio_submit_split_bioset()
  md/raid0: convert raid0_handle_discard() to use
    bio_submit_split_bioset()
  md/raid1: convert to use bio_submit_split_bioset()
  md/raid10: add a new r10bio flag R10BIO_Returned
  md/raid10: convert read/write to use bio_submit_split_bioset()
  md/raid5: convert to use bio_submit_split_bioset()
  md/md-linear: convert to use bio_submit_split_bioset()
  blk-crypto: convert to use bio_submit_split_bioset()
  block: skip unnecessary checks for split bio
  block: fix reordered IO in the case recursive split
  md/raid0: convert raid0_make_request() to use
    bio_submit_split_bioset()

 block/bio.c                 |  2 +-
 block/blk-cgroup.h          |  6 ----
 block/blk-core.c            | 19 ++++++-----
 block/blk-crypto-fallback.c | 16 ++++------
 block/blk-iolatency.c       | 19 +++++------
 block/blk-merge.c           | 64 +++++++++++++++++++++++++------------
 block/blk-mq-debugfs.c      |  1 +
 block/blk-mq.c              |  3 ++
 block/blk-throttle.c        |  2 +-
 block/blk.h                 | 45 ++------------------------
 drivers/md/md-linear.c      | 14 ++------
 drivers/md/raid0.c          | 30 ++++++-----------
 drivers/md/raid1.c          | 38 ++++++++--------------
 drivers/md/raid1.h          |  4 ++-
 drivers/md/raid10.c         | 54 ++++++++++++++-----------------
 drivers/md/raid10.h         |  2 ++
 drivers/md/raid5.c          | 10 +++---
 include/linux/blk_types.h   |  7 ++--
 include/linux/blkdev.h      |  3 ++
 19 files changed, 141 insertions(+), 198 deletions(-)

-- 
2.39.2



* [PATCH for-6.18/block 01/16] block: cleanup bio_issue
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05  7:06 ` [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio() Yu Kuai
                   ` (15 subsequent siblings)
  16 siblings, 0 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Now that bio->bi_issue is only used by blk-iolatency to get the bio issue
time, replace struct bio_issue with a plain u64 timestamp and remove the
bio_issue helpers to make the code cleaner.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 block/bio.c               |  2 +-
 block/blk-cgroup.h        |  2 +-
 block/blk-iolatency.c     | 14 +++----------
 block/blk.h               | 42 ---------------------------------------
 include/linux/blk_types.h |  7 ++-----
 5 files changed, 7 insertions(+), 60 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 44c43b970387..c8fce0d6e332 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -261,7 +261,7 @@ void bio_init(struct bio *bio, struct block_device *bdev, struct bio_vec *table,
 	bio->bi_private = NULL;
 #ifdef CONFIG_BLK_CGROUP
 	bio->bi_blkg = NULL;
-	bio->bi_issue.value = 0;
+	bio->issue_time_ns = 0;
 	if (bdev)
 		bio_associate_blkg(bio);
 #ifdef CONFIG_BLK_CGROUP_IOCOST
diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
index 81868ad86330..d73204d27d72 100644
--- a/block/blk-cgroup.h
+++ b/block/blk-cgroup.h
@@ -372,7 +372,7 @@ static inline void blkg_put(struct blkcg_gq *blkg)
 
 static inline void blkcg_bio_issue_init(struct bio *bio)
 {
-	bio_issue_init(&bio->bi_issue, bio_sectors(bio));
+	bio->issue_time_ns = blk_time_get_ns();
 }
 
 static inline void blkcg_use_delay(struct blkcg_gq *blkg)
diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 2f8fdecdd7a9..554b191a6892 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -485,19 +485,11 @@ static void blkcg_iolatency_throttle(struct rq_qos *rqos, struct bio *bio)
 		mod_timer(&blkiolat->timer, jiffies + HZ);
 }
 
-static void iolatency_record_time(struct iolatency_grp *iolat,
-				  struct bio_issue *issue, u64 now,
-				  bool issue_as_root)
+static void iolatency_record_time(struct iolatency_grp *iolat, u64 start,
+				  u64 now, bool issue_as_root)
 {
-	u64 start = bio_issue_time(issue);
 	u64 req_time;
 
-	/*
-	 * Have to do this so we are truncated to the correct time that our
-	 * issue is truncated to.
-	 */
-	now = __bio_issue_time(now);
-
 	if (now <= start)
 		return;
 
@@ -625,7 +617,7 @@ static void blkcg_iolatency_done_bio(struct rq_qos *rqos, struct bio *bio)
 		 * submitted, so do not account for it.
 		 */
 		if (iolat->min_lat_nsec && bio->bi_status != BLK_STS_AGAIN) {
-			iolatency_record_time(iolat, &bio->bi_issue, now,
+			iolatency_record_time(iolat, bio->issue_time_ns, now,
 					      issue_as_root);
 			window_start = atomic64_read(&iolat->window_start);
 			if (now > window_start &&
diff --git a/block/blk.h b/block/blk.h
index 46f566f9b126..0268deb22268 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -680,48 +680,6 @@ static inline ktime_t blk_time_get(void)
 	return ns_to_ktime(blk_time_get_ns());
 }
 
-/*
- * From most significant bit:
- * 1 bit: reserved for other usage, see below
- * 12 bits: original size of bio
- * 51 bits: issue time of bio
- */
-#define BIO_ISSUE_RES_BITS      1
-#define BIO_ISSUE_SIZE_BITS     12
-#define BIO_ISSUE_RES_SHIFT     (64 - BIO_ISSUE_RES_BITS)
-#define BIO_ISSUE_SIZE_SHIFT    (BIO_ISSUE_RES_SHIFT - BIO_ISSUE_SIZE_BITS)
-#define BIO_ISSUE_TIME_MASK     ((1ULL << BIO_ISSUE_SIZE_SHIFT) - 1)
-#define BIO_ISSUE_SIZE_MASK     \
-	(((1ULL << BIO_ISSUE_SIZE_BITS) - 1) << BIO_ISSUE_SIZE_SHIFT)
-#define BIO_ISSUE_RES_MASK      (~((1ULL << BIO_ISSUE_RES_SHIFT) - 1))
-
-/* Reserved bit for blk-throtl */
-#define BIO_ISSUE_THROTL_SKIP_LATENCY (1ULL << 63)
-
-static inline u64 __bio_issue_time(u64 time)
-{
-	return time & BIO_ISSUE_TIME_MASK;
-}
-
-static inline u64 bio_issue_time(struct bio_issue *issue)
-{
-	return __bio_issue_time(issue->value);
-}
-
-static inline sector_t bio_issue_size(struct bio_issue *issue)
-{
-	return ((issue->value & BIO_ISSUE_SIZE_MASK) >> BIO_ISSUE_SIZE_SHIFT);
-}
-
-static inline void bio_issue_init(struct bio_issue *issue,
-				       sector_t size)
-{
-	size &= (1ULL << BIO_ISSUE_SIZE_BITS) - 1;
-	issue->value = ((issue->value & BIO_ISSUE_RES_MASK) |
-			(blk_time_get_ns() & BIO_ISSUE_TIME_MASK) |
-			((u64)size << BIO_ISSUE_SIZE_SHIFT));
-}
-
 void bdev_release(struct file *bdev_file);
 int bdev_open(struct block_device *bdev, blk_mode_t mode, void *holder,
 	      const struct blk_holder_ops *hops, struct file *bdev_file);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 930daff207df..b8be751e16fc 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -198,10 +198,6 @@ static inline bool blk_path_error(blk_status_t error)
 	return true;
 }
 
-struct bio_issue {
-	u64 value;
-};
-
 typedef __u32 __bitwise blk_opf_t;
 
 typedef unsigned int blk_qc_t;
@@ -242,7 +238,8 @@ struct bio {
 	 * on release of the bio.
 	 */
 	struct blkcg_gq		*bi_blkg;
-	struct bio_issue	bi_issue;
+	/* Time that this bio was issued. */
+	u64			issue_time_ns;
 #ifdef CONFIG_BLK_CGROUP_IOCOST
 	u64			bi_iocost_cost;
 #endif
-- 
2.39.2



* [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
  2025-09-05  7:06 ` [PATCH for-6.18/block 01/16] block: cleanup bio_issue Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-06 15:27   ` kernel test robot
  2025-09-09  8:11   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME Yu Kuai
                   ` (14 subsequent siblings)
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

bio->issue_time_ns is only used by blk-iolatency, which can only be
enabled for rq-based disks, hence it's not necessary to initialize the
time for bio-based disks.

Meanwhile, if a bio is split by blk_crypto_fallback_split_bio_if_needed(),
the issue time is not initialized for the new split bio; this is fixed
as well.

Note that the next patch will optimize this further so that the bio issue
time is only recorded when blk-iolatency is actually enabled for the disk.
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-cgroup.h | 6 ------
 block/blk-core.c   | 1 -
 block/blk-merge.c  | 1 -
 block/blk-mq.c     | 1 +
 4 files changed, 1 insertion(+), 8 deletions(-)

diff --git a/block/blk-cgroup.h b/block/blk-cgroup.h
index d73204d27d72..5330cce51060 100644
--- a/block/blk-cgroup.h
+++ b/block/blk-cgroup.h
@@ -370,11 +370,6 @@ static inline void blkg_put(struct blkcg_gq *blkg)
 		if (((d_blkg) = blkg_lookup(css_to_blkcg(pos_css),	\
 					    (p_blkg)->q)))
 
-static inline void blkcg_bio_issue_init(struct bio *bio)
-{
-	bio->issue_time_ns = blk_time_get_ns();
-}
-
 static inline void blkcg_use_delay(struct blkcg_gq *blkg)
 {
 	if (WARN_ON_ONCE(atomic_read(&blkg->use_delay) < 0))
@@ -491,7 +486,6 @@ static inline struct blkg_policy_data *blkg_to_pd(struct blkcg_gq *blkg,
 static inline struct blkcg_gq *pd_to_blkg(struct blkg_policy_data *pd) { return NULL; }
 static inline void blkg_get(struct blkcg_gq *blkg) { }
 static inline void blkg_put(struct blkcg_gq *blkg) { }
-static inline void blkcg_bio_issue_init(struct bio *bio) { }
 static inline void blk_cgroup_bio_start(struct bio *bio) { }
 static inline bool blk_cgroup_mergeable(struct request *rq, struct bio *bio) { return true; }
 
diff --git a/block/blk-core.c b/block/blk-core.c
index 4201504158a1..83c262a3dfd9 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -728,7 +728,6 @@ static void __submit_bio_noacct_mq(struct bio *bio)
 void submit_bio_noacct_nocheck(struct bio *bio)
 {
 	blk_cgroup_bio_start(bio);
-	blkcg_bio_issue_init(bio);
 
 	if (!bio_flagged(bio, BIO_TRACE_COMPLETION)) {
 		trace_block_bio_queue(bio);
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 70d704615be5..5538356770a4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -119,7 +119,6 @@ static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
 			goto error;
 		}
 		split->bi_opf |= REQ_NOMERGE;
-		blkcg_bio_issue_init(split);
 		bio_chain(split, bio);
 		trace_block_split(split, bio->bi_iter.bi_sector);
 		WARN_ON_ONCE(bio_zone_write_plugging(bio));
diff --git a/block/blk-mq.c b/block/blk-mq.c
index ba3a4b77f578..d2538683c83d 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3168,6 +3168,7 @@ void blk_mq_submit_bio(struct bio *bio)
 	if (!bio_integrity_prep(bio))
 		goto queue_exit;
 
+	bio->issue_time_ns = blk_time_get_ns();
 	if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
 		goto queue_exit;
 
-- 
2.39.2



* [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
  2025-09-05  7:06 ` [PATCH for-6.18/block 01/16] block: cleanup bio_issue Yu Kuai
  2025-09-05  7:06 ` [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio() Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:12   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 04/16] md: fix missing blktrace bio split events Yu Kuai
                   ` (13 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

bio->issue_time_ns is initialized for every bio, but it is only used by
blk-iolatency. Add a new queue_flag and only set this flag when
blk-iolatency is enabled, so that the extra blk_time_get_ns() call can be
skipped for disks where blk-iolatency is not enabled.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-iolatency.c  | 5 +++++
 block/blk-mq-debugfs.c | 1 +
 block/blk-mq.c         | 4 +++-
 include/linux/blkdev.h | 1 +
 4 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/block/blk-iolatency.c b/block/blk-iolatency.c
index 554b191a6892..45bd18f68541 100644
--- a/block/blk-iolatency.c
+++ b/block/blk-iolatency.c
@@ -742,10 +742,15 @@ static void blkiolatency_enable_work_fn(struct work_struct *work)
 	 */
 	enabled = atomic_read(&blkiolat->enable_cnt);
 	if (enabled != blkiolat->enabled) {
+		struct request_queue *q = blkiolat->rqos.disk->queue;
 		unsigned int memflags;
 
 		memflags = blk_mq_freeze_queue(blkiolat->rqos.disk->queue);
 		blkiolat->enabled = enabled;
+		if (enabled)
+			blk_queue_flag_set(QUEUE_FLAG_BIO_ISSUE_TIME, q);
+		else
+			blk_queue_flag_clear(QUEUE_FLAG_BIO_ISSUE_TIME, q);
 		blk_mq_unfreeze_queue(blkiolat->rqos.disk->queue, memflags);
 	}
 }
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 32c65efdda46..4896525b1c05 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -96,6 +96,7 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(DISABLE_WBT_DEF),
 	QUEUE_FLAG_NAME(NO_ELV_SWITCH),
 	QUEUE_FLAG_NAME(QOS_ENABLED),
+	QUEUE_FLAG_NAME(BIO_ISSUE_TIME),
 };
 #undef QUEUE_FLAG_NAME
 
diff --git a/block/blk-mq.c b/block/blk-mq.c
index d2538683c83d..eaa18536333f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -3168,7 +3168,9 @@ void blk_mq_submit_bio(struct bio *bio)
 	if (!bio_integrity_prep(bio))
 		goto queue_exit;
 
-	bio->issue_time_ns = blk_time_get_ns();
+	if (test_bit(QUEUE_FLAG_BIO_ISSUE_TIME, &q->queue_flags))
+		bio->issue_time_ns = blk_time_get_ns();
+
 	if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
 		goto queue_exit;
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7709d55adc23..3c3b64684d14 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -657,6 +657,7 @@ enum {
 	QUEUE_FLAG_DISABLE_WBT_DEF,	/* for sched to disable/enable wbt */
 	QUEUE_FLAG_NO_ELV_SWITCH,	/* can't switch elevator any more */
 	QUEUE_FLAG_QOS_ENABLED,		/* qos is enabled */
+	QUEUE_FLAG_BIO_ISSUE_TIME,	/* record bio->issue_time_ns */
 	QUEUE_FLAG_MAX
 };
 
-- 
2.39.2



* [PATCH for-6.18/block 04/16] md: fix missing blktrace bio split events
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (2 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:44   ` Bart Van Assche
  2025-09-09  8:12   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 05/16] blk-crypto: fix missing " Yu Kuai
                   ` (12 subsequent siblings)
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

If a bio is split by internal handling such as chunksize or badblocks,
the corresponding trace_block_split() call is missing, so blktrace cannot
catch BIO split events, which makes it harder to analyze the BIO
sequence.
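
For reference, the split pattern that each hunk below converges on looks
roughly like this (a sketch only; 'sectors' and the bioset are
placeholders, and some call sites differ slightly):

	struct bio *split = bio_split(bio, sectors, GFP_NOIO, &bioset);

	/* error handling of IS_ERR(split) omitted in this sketch */
	bio_chain(split, bio);
	trace_block_split(split, bio->bi_iter.bi_sector);
	submit_bio_noacct(bio);
	bio = split;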

Cc: stable@vger.kernel.org
Fixes: 4b1faf931650 ("block: Kill bio_pair_split()")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 drivers/md/md-linear.c | 1 +
 drivers/md/raid0.c     | 4 ++++
 drivers/md/raid1.c     | 4 ++++
 drivers/md/raid10.c    | 8 ++++++++
 drivers/md/raid5.c     | 2 ++
 5 files changed, 19 insertions(+)

diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 5d9b08115375..59d7963c7843 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -266,6 +266,7 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio)
 		}
 
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		submit_bio_noacct(bio);
 		bio = split;
 	}
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index f1d8811a542a..1ba7d0c090f7 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -472,7 +472,9 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 			bio_endio(bio);
 			return;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		submit_bio_noacct(bio);
 		bio = split;
 		end = zone->zone_end;
@@ -620,7 +622,9 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
 			bio_endio(bio);
 			return true;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		raid0_map_submit_bio(mddev, bio);
 		bio = split;
 	}
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 408c26398321..29edb7b548f3 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1383,7 +1383,9 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 			error = PTR_ERR(split);
 			goto err_handle;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		submit_bio_noacct(bio);
 		bio = split;
 		r1_bio->master_bio = bio;
@@ -1591,7 +1593,9 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 			error = PTR_ERR(split);
 			goto err_handle;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		submit_bio_noacct(bio);
 		bio = split;
 		r1_bio->master_bio = bio;
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index b60c30bfb6c7..859c40a5ecf4 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1209,7 +1209,9 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 			error = PTR_ERR(split);
 			goto err_handle;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		allow_barrier(conf);
 		submit_bio_noacct(bio);
 		wait_barrier(conf, false);
@@ -1495,7 +1497,9 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 			error = PTR_ERR(split);
 			goto err_handle;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		allow_barrier(conf);
 		submit_bio_noacct(bio);
 		wait_barrier(conf, false);
@@ -1679,7 +1683,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 			bio_endio(bio);
 			return 0;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		allow_barrier(conf);
 		/* Resend the fist split part */
 		submit_bio_noacct(split);
@@ -1694,7 +1700,9 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 			bio_endio(bio);
 			return 0;
 		}
+
 		bio_chain(split, bio);
+		trace_block_split(split, bio->bi_iter.bi_sector);
 		allow_barrier(conf);
 		/* Resend the second split part */
 		submit_bio_noacct(bio);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 023649fe2476..0fb838879844 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5475,8 +5475,10 @@ static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio)
 
 	if (sectors < bio_sectors(raid_bio)) {
 		struct r5conf *conf = mddev->private;
+
 		split = bio_split(raid_bio, sectors, GFP_NOIO, &conf->bio_split);
 		bio_chain(split, raid_bio);
+		trace_block_split(split, raid_bio->bi_iter.bi_sector);
 		submit_bio_noacct(raid_bio);
 		raid_bio = split;
 	}
-- 
2.39.2



* [PATCH for-6.18/block 05/16] blk-crypto: fix missing blktrace bio split events
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (3 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 04/16] md: fix missing blktrace bio split events Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:45   ` Bart Van Assche
  2025-09-09  8:13   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset() Yu Kuai
                   ` (11 subsequent siblings)
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

The trace_block_split() call is missing, so blktrace cannot catch BIO
split events, which makes it harder to analyze the BIO sequence.

Cc: stable@vger.kernel.org
Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-crypto-fallback.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 005c9157ffb3..1f9a4c33d2bd 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -18,6 +18,7 @@
 #include <linux/module.h>
 #include <linux/random.h>
 #include <linux/scatterlist.h>
+#include <trace/events/block.h>
 
 #include "blk-cgroup.h"
 #include "blk-crypto-internal.h"
@@ -231,7 +232,9 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
 			bio->bi_status = BLK_STS_RESOURCE;
 			return false;
 		}
+
 		bio_chain(split_bio, bio);
+		trace_block_split(split_bio, bio->bi_iter.bi_sector);
 		submit_bio_noacct(bio);
 		*bio_ptr = split_bio;
 	}
-- 
2.39.2



* [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (4 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 05/16] blk-crypto: fix missing " Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:47   ` Bart Van Assche
  2025-09-09  8:13   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset() Yu Kuai
                   ` (10 subsequent siblings)
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

No functional changes are intended. Some drivers like mdraid split bios
as part of their internal processing; prepare to unify the bio split code.
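
A minimal sketch of the intended calling convention (everything except
bio_submit_split_bioset() and bio_sectors() is a placeholder for
driver-specific code):

	static void example_submit(struct example_dev *dev, struct bio *bio,
				   unsigned int sectors)
	{
		if (sectors < bio_sectors(bio)) {
			/* the remainder is resubmitted by the helper */
			bio = bio_submit_split_bioset(bio, sectors,
						      &dev->bio_split);
			/* on failure the original bio is already ended */
			if (!bio)
				return;
		}

		/* handle the first 'sectors' of the (possibly split) bio */
		example_map_and_submit(dev, bio);
	}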

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-merge.c      | 59 ++++++++++++++++++++++++++++--------------
 include/linux/blkdev.h |  2 ++
 2 files changed, 42 insertions(+), 19 deletions(-)

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 5538356770a4..51fe4ed5b7c0 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -104,33 +104,54 @@ static unsigned int bio_allowed_max_sectors(const struct queue_limits *lim)
 	return round_down(UINT_MAX, lim->logical_block_size) >> SECTOR_SHIFT;
 }
 
+/*
+ * bio_submit_split_bioset - Submit a bio, splitting it at a designated sector
+ * @bio:		the original bio to be submitted and split
+ * @split_sectors:	the sector count at which to split
+ * @bs:			the bio set used for allocating the new split bio
+ *
+ * The original bio is modified to contain the remaining sectors and submitted.
+ * The caller is responsible for submitting the returned bio.
+ *
+ * On success, the newly allocated bio representing the initial part is
+ * returned; on failure, NULL is returned and the original bio is failed.
+ */
+struct bio *bio_submit_split_bioset(struct bio *bio, unsigned int split_sectors,
+				    struct bio_set *bs)
+{
+	struct bio *split = bio_split(bio, split_sectors, GFP_NOIO, bs);
+
+	if (IS_ERR(split)) {
+		bio->bi_status = errno_to_blk_status(PTR_ERR(split));
+		bio_endio(bio);
+		return NULL;
+	}
+
+	bio_chain(split, bio);
+	trace_block_split(split, bio->bi_iter.bi_sector);
+	WARN_ON_ONCE(bio_zone_write_plugging(bio));
+	submit_bio_noacct(bio);
+
+	return split;
+}
+EXPORT_SYMBOL_GPL(bio_submit_split_bioset);
+
 static struct bio *bio_submit_split(struct bio *bio, int split_sectors)
 {
-	if (unlikely(split_sectors < 0))
-		goto error;
+	if (unlikely(split_sectors < 0)) {
+		bio->bi_status = errno_to_blk_status(split_sectors);
+		bio_endio(bio);
+		return NULL;
+	}
 
 	if (split_sectors) {
-		struct bio *split;
-
-		split = bio_split(bio, split_sectors, GFP_NOIO,
+		bio = bio_submit_split_bioset(bio, split_sectors,
 				&bio->bi_bdev->bd_disk->bio_split);
-		if (IS_ERR(split)) {
-			split_sectors = PTR_ERR(split);
-			goto error;
-		}
-		split->bi_opf |= REQ_NOMERGE;
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		WARN_ON_ONCE(bio_zone_write_plugging(bio));
-		submit_bio_noacct(bio);
-		return split;
+		if (bio)
+			bio->bi_opf |= REQ_NOMERGE;
 	}
 
 	return bio;
-error:
-	bio->bi_status = errno_to_blk_status(split_sectors);
-	bio_endio(bio);
-	return NULL;
 }
 
 struct bio *bio_split_discard(struct bio *bio, const struct queue_limits *lim,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 3c3b64684d14..0982874b65fa 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1000,6 +1000,8 @@ extern int blk_register_queue(struct gendisk *disk);
 extern void blk_unregister_queue(struct gendisk *disk);
 void submit_bio_noacct(struct bio *bio);
 struct bio *bio_split_to_limits(struct bio *bio);
+struct bio *bio_submit_split_bioset(struct bio *bio, unsigned int split_sectors,
+				    struct bio_set *bs);
 
 extern int blk_lld_busy(struct request_queue *q);
 extern int blk_queue_enter(struct request_queue *q, blk_mq_req_flags_t flags);
-- 
2.39.2



* [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (5 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset() Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:49   ` Bart Van Assche
  2025-09-05  7:06 ` [PATCH for-6.18/block 08/16] md/raid1: convert " Yu Kuai
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code, and prepare to fix reordered split IO.

Note that commit 319ff40a5427 ("md/raid0: Fix performance regression for large
sequential writes") already fixed reordered split IO by remapping the bio to
the underlying disks before resubmitting it, relying on the fact that
md_submit_bio() has already split it by sectors and that raid0_make_request()
splits at most once for unaligned IO. This is a bit hacky and will be
converted to the general solution later.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/raid0.c | 19 ++++++-------------
 1 file changed, 6 insertions(+), 13 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 1ba7d0c090f7..ca08ec2e1f27 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -463,23 +463,16 @@ static void raid0_handle_discard(struct mddev *mddev, struct bio *bio)
 	zone = find_zone(conf, &start);
 
 	if (bio_end_sector(bio) > zone->zone_end) {
-		struct bio *split = bio_split(bio,
-			zone->zone_end - bio->bi_iter.bi_sector, GFP_NOIO,
-			&mddev->bio_set);
-
-		if (IS_ERR(split)) {
-			bio->bi_status = errno_to_blk_status(PTR_ERR(split));
-			bio_endio(bio);
+		bio = bio_submit_split_bioset(bio,
+				zone->zone_end - bio->bi_iter.bi_sector,
+				&mddev->bio_set);
+		if (!bio)
 			return;
-		}
 
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		submit_bio_noacct(bio);
-		bio = split;
 		end = zone->zone_end;
-	} else
+	} else {
 		end = bio_end_sector(bio);
+	}
 
 	orig_end = end;
 	if (zone != conf->strip_zone)
-- 
2.39.2



* [PATCH for-6.18/block 08/16] md/raid1: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (6 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:13   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned Yu Kuai
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code, and prepare to fix reordered split IO.

Note that bio_submit_split_bioset() can fail the original bio directly
on split error; set R1BIO_Returned in this case to notify raid_end_bio_io()
that the original bio has already been returned.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/raid1.c | 38 +++++++++++---------------------------
 drivers/md/raid1.h |  4 +++-
 2 files changed, 14 insertions(+), 28 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 29edb7b548f3..f8434049f9b1 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1317,7 +1317,7 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	struct raid1_info *mirror;
 	struct bio *read_bio;
 	int max_sectors;
-	int rdisk, error;
+	int rdisk;
 	bool r1bio_existed = !!r1_bio;
 
 	/*
@@ -1376,18 +1376,13 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 	}
 
 	if (max_sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, max_sectors,
-					      gfp, &conf->bio_split);
-
-		if (IS_ERR(split)) {
-			error = PTR_ERR(split);
+		bio = bio_submit_split_bioset(bio, max_sectors,
+					      &conf->bio_split);
+		if (!bio) {
+			set_bit(R1BIO_Returned, &r1_bio->state);
 			goto err_handle;
 		}
 
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		submit_bio_noacct(bio);
-		bio = split;
 		r1_bio->master_bio = bio;
 		r1_bio->sectors = max_sectors;
 	}
@@ -1415,8 +1410,6 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
 
 err_handle:
 	atomic_dec(&mirror->rdev->nr_pending);
-	bio->bi_status = errno_to_blk_status(error);
-	set_bit(R1BIO_Uptodate, &r1_bio->state);
 	raid_end_bio_io(r1_bio);
 }
 
@@ -1459,7 +1452,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 {
 	struct r1conf *conf = mddev->private;
 	struct r1bio *r1_bio;
-	int i, disks, k, error;
+	int i, disks, k;
 	unsigned long flags;
 	int first_clone;
 	int max_sectors;
@@ -1563,10 +1556,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 				 * complexity of supporting that is not worth
 				 * the benefit.
 				 */
-				if (bio->bi_opf & REQ_ATOMIC) {
-					error = -EIO;
+				if (bio->bi_opf & REQ_ATOMIC)
 					goto err_handle;
-				}
 
 				good_sectors = first_bad - r1_bio->sector;
 				if (good_sectors < max_sectors)
@@ -1586,18 +1577,13 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 		max_sectors = min_t(int, max_sectors,
 				    BIO_MAX_VECS * (PAGE_SIZE >> 9));
 	if (max_sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, max_sectors,
-					      GFP_NOIO, &conf->bio_split);
-
-		if (IS_ERR(split)) {
-			error = PTR_ERR(split);
+		bio = bio_submit_split_bioset(bio, max_sectors,
+					      &conf->bio_split);
+		if (!bio) {
+			set_bit(R1BIO_Returned, &r1_bio->state);
 			goto err_handle;
 		}
 
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		submit_bio_noacct(bio);
-		bio = split;
 		r1_bio->master_bio = bio;
 		r1_bio->sectors = max_sectors;
 	}
@@ -1687,8 +1673,6 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 		}
 	}
 
-	bio->bi_status = errno_to_blk_status(error);
-	set_bit(R1BIO_Uptodate, &r1_bio->state);
 	raid_end_bio_io(r1_bio);
 }
 
diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
index d236ef179cfb..2ebe35aaa534 100644
--- a/drivers/md/raid1.h
+++ b/drivers/md/raid1.h
@@ -178,7 +178,9 @@ enum r1bio_state {
  * any write was successful.  Otherwise we call when
  * any write-behind write succeeds, otherwise we call
  * with failure when last write completes (and all failed).
- * Record that bi_end_io was called with this flag...
+ *
+ * And for bio_split errors, record that bi_end_io was called
+ * with this flag...
  */
 	R1BIO_Returned,
 /* If a write for this request means we can clear some
-- 
2.39.2



* [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (7 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 08/16] md/raid1: convert " Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:14   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset() Yu Kuai
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

The new helper bio_submit_split_bioset() can fail the original bio on
split errors; prepare to handle this case in raid_end_bio_io().

The flag name follows the corresponding r1bio flag name.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/raid10.c | 8 +++++---
 drivers/md/raid10.h | 2 ++
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 859c40a5ecf4..a775a1317635 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -322,10 +322,12 @@ static void raid_end_bio_io(struct r10bio *r10_bio)
 	struct bio *bio = r10_bio->master_bio;
 	struct r10conf *conf = r10_bio->mddev->private;
 
-	if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
-		bio->bi_status = BLK_STS_IOERR;
+	if (!test_and_set_bit(R10BIO_Returned, &r10_bio->state)) {
+		if (!test_bit(R10BIO_Uptodate, &r10_bio->state))
+			bio->bi_status = BLK_STS_IOERR;
+		bio_endio(bio);
+	}
 
-	bio_endio(bio);
 	/*
 	 * Wake up any possible resync thread that waits for the device
 	 * to go idle.
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index 3f16ad6904a9..da00a55f7a55 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -165,6 +165,8 @@ enum r10bio_state {
  * so that raid10d knows what to do with them.
  */
 	R10BIO_ReadError,
+/* For bio_split errors, record that bi_end_io was called. */
+	R10BIO_Returned,
 /* If a write for this request means we can clear some
  * known-bad-block records, we set this flag.
  */
-- 
2.39.2



* [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (8 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:14   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 11/16] md/raid5: convert " Yu Kuai
                   ` (6 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code and prepare to fix reordered split IO. The error
path is modified a bit, but no functional changes are intended:

- bio_submit_split_bioset() can fail the original bio directly on
  split error; set R10BIO_Returned in this case to notify
  raid_end_bio_io() that the original bio has already been returned.
- setting R10BIO_Uptodate and setting the error value to -EIO is
  useless now; for an r10_bio without R10BIO_Uptodate, -EIO will be
  returned for the original bio anyway.

Discard is not handled, because discard is only split for the unaligned
head and tail; this can be considered a slow path, and the reordering
there does not matter much.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/raid10.c | 42 +++++++++++++-----------------------------
 1 file changed, 13 insertions(+), 29 deletions(-)

diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index a775a1317635..69477be91b26 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1156,7 +1156,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 	int slot = r10_bio->read_slot;
 	struct md_rdev *err_rdev = NULL;
 	gfp_t gfp = GFP_NOIO;
-	int error;
 
 	if (slot >= 0 && r10_bio->devs[slot].rdev) {
 		/*
@@ -1205,19 +1204,15 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 				   rdev->bdev,
 				   (unsigned long long)r10_bio->sector);
 	if (max_sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, max_sectors,
-					      gfp, &conf->bio_split);
-		if (IS_ERR(split)) {
-			error = PTR_ERR(split);
+		allow_barrier(conf);
+		bio = bio_submit_split_bioset(bio, max_sectors,
+					      &conf->bio_split);
+		wait_barrier(conf, false);
+		if (!bio) {
+			set_bit(R10BIO_Returned, &r10_bio->state);
 			goto err_handle;
 		}
 
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		allow_barrier(conf);
-		submit_bio_noacct(bio);
-		wait_barrier(conf, false);
-		bio = split;
 		r10_bio->master_bio = bio;
 		r10_bio->sectors = max_sectors;
 	}
@@ -1245,8 +1240,6 @@ static void raid10_read_request(struct mddev *mddev, struct bio *bio,
 	return;
 err_handle:
 	atomic_dec(&rdev->nr_pending);
-	bio->bi_status = errno_to_blk_status(error);
-	set_bit(R10BIO_Uptodate, &r10_bio->state);
 	raid_end_bio_io(r10_bio);
 }
 
@@ -1355,7 +1348,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 	int i, k;
 	sector_t sectors;
 	int max_sectors;
-	int error;
 
 	if ((mddev_is_clustered(mddev) &&
 	     mddev->cluster_ops->area_resyncing(mddev, WRITE,
@@ -1469,10 +1461,8 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 				 * complexity of supporting that is not worth
 				 * the benefit.
 				 */
-				if (bio->bi_opf & REQ_ATOMIC) {
-					error = -EIO;
+				if (bio->bi_opf & REQ_ATOMIC)
 					goto err_handle;
-				}
 
 				good_sectors = first_bad - dev_sector;
 				if (good_sectors < max_sectors)
@@ -1493,19 +1483,15 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 		r10_bio->sectors = max_sectors;
 
 	if (r10_bio->sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, r10_bio->sectors,
-					      GFP_NOIO, &conf->bio_split);
-		if (IS_ERR(split)) {
-			error = PTR_ERR(split);
+		allow_barrier(conf);
+		bio = bio_submit_split_bioset(bio, r10_bio->sectors,
+					      &conf->bio_split);
+		wait_barrier(conf, false);
+		if (!bio) {
+			set_bit(R10BIO_Returned, &r10_bio->state);
 			goto err_handle;
 		}
 
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		allow_barrier(conf);
-		submit_bio_noacct(bio);
-		wait_barrier(conf, false);
-		bio = split;
 		r10_bio->master_bio = bio;
 	}
 
@@ -1537,8 +1523,6 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
 		}
 	}
 
-	bio->bi_status = errno_to_blk_status(error);
-	set_bit(R10BIO_Uptodate, &r10_bio->state);
 	raid_end_bio_io(r10_bio);
 }
 
-- 
2.39.2



* [PATCH for-6.18/block 11/16] md/raid5: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (9 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:14   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 12/16] md/md-linear: " Yu Kuai
                   ` (5 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code, prepare to fix reordered split IO.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/raid5.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0fb838879844..3c9825ad3f07 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -5468,7 +5468,6 @@ static int raid5_read_one_chunk(struct mddev *mddev, struct bio *raid_bio)
 
 static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio)
 {
-	struct bio *split;
 	sector_t sector = raid_bio->bi_iter.bi_sector;
 	unsigned chunk_sects = mddev->chunk_sectors;
 	unsigned sectors = chunk_sects - (sector & (chunk_sects-1));
@@ -5476,11 +5475,10 @@ static struct bio *chunk_aligned_read(struct mddev *mddev, struct bio *raid_bio)
 	if (sectors < bio_sectors(raid_bio)) {
 		struct r5conf *conf = mddev->private;
 
-		split = bio_split(raid_bio, sectors, GFP_NOIO, &conf->bio_split);
-		bio_chain(split, raid_bio);
-		trace_block_split(split, raid_bio->bi_iter.bi_sector);
-		submit_bio_noacct(raid_bio);
-		raid_bio = split;
+		raid_bio = bio_submit_split_bioset(raid_bio, sectors,
+						   &conf->bio_split);
+		if (!raid_bio)
+			return NULL;
 	}
 
 	if (!raid5_read_one_chunk(mddev, raid_bio))
-- 
2.39.2



* [PATCH for-6.18/block 12/16] md/md-linear: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (10 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 11/16] md/raid5: convert " Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:15   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 13/16] blk-crypto: " Yu Kuai
                   ` (4 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code, prepare to fix reordered split IO.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/md-linear.c | 15 +++------------
 1 file changed, 3 insertions(+), 12 deletions(-)

diff --git a/drivers/md/md-linear.c b/drivers/md/md-linear.c
index 59d7963c7843..76f85cc32942 100644
--- a/drivers/md/md-linear.c
+++ b/drivers/md/md-linear.c
@@ -256,19 +256,10 @@ static bool linear_make_request(struct mddev *mddev, struct bio *bio)
 
 	if (unlikely(bio_end_sector(bio) > end_sector)) {
 		/* This bio crosses a device boundary, so we have to split it */
-		struct bio *split = bio_split(bio, end_sector - bio_sector,
-					      GFP_NOIO, &mddev->bio_set);
-
-		if (IS_ERR(split)) {
-			bio->bi_status = errno_to_blk_status(PTR_ERR(split));
-			bio_endio(bio);
+		bio = bio_submit_split_bioset(bio, end_sector - bio_sector,
+					      &mddev->bio_set);
+		if (!bio)
 			return true;
-		}
-
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		submit_bio_noacct(bio);
-		bio = split;
 	}
 
 	md_account_bio(mddev, &bio);
-- 
2.39.2



* [PATCH for-6.18/block 13/16] blk-crypto: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (11 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 12/16] md/md-linear: " Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:50   ` Bart Van Assche
  2025-09-09  8:15   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio Yu Kuai
                   ` (3 subsequent siblings)
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Unify bio split code, prepare to fix reordered split IO.

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-crypto-fallback.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
index 1f9a4c33d2bd..88539e058bf0 100644
--- a/block/blk-crypto-fallback.c
+++ b/block/blk-crypto-fallback.c
@@ -18,7 +18,6 @@
 #include <linux/module.h>
 #include <linux/random.h>
 #include <linux/scatterlist.h>
-#include <trace/events/block.h>
 
 #include "blk-cgroup.h"
 #include "blk-crypto-internal.h"
@@ -223,20 +222,14 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
 		if (++i == BIO_MAX_VECS)
 			break;
 	}
-	if (num_sectors < bio_sectors(bio)) {
-		struct bio *split_bio;
 
-		split_bio = bio_split(bio, num_sectors, GFP_NOIO,
-				      &crypto_bio_split);
-		if (IS_ERR(split_bio)) {
-			bio->bi_status = BLK_STS_RESOURCE;
+	if (num_sectors < bio_sectors(bio)) {
+		bio = bio_submit_split_bioset(bio, num_sectors,
+					      &crypto_bio_split);
+		if (!bio)
 			return false;
-		}
 
-		bio_chain(split_bio, bio);
-		trace_block_split(split_bio, bio->bi_iter.bi_sector);
-		submit_bio_noacct(bio);
-		*bio_ptr = split_bio;
+		*bio_ptr = bio;
 	}
 
 	return true;
-- 
2.39.2



* [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (12 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 13/16] blk-crypto: " Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:16   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (2 subsequent siblings)
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Lots of checks are already done when this bio is submitted the first
time, and there is no need to repeat them when the bio is resubmitted
after a split.

Hence open code the checks that are still necessary, should_fail_bio()
and blk_throtl_bio(), in bio_submit_split_bioset().

Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-core.c  | 2 +-
 block/blk-merge.c | 6 +++++-
 block/blk.h       | 1 +
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 83c262a3dfd9..1021a09c5958 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -539,7 +539,7 @@ static inline void bio_check_ro(struct bio *bio)
 	}
 }
 
-static noinline int should_fail_bio(struct bio *bio)
+int should_fail_bio(struct bio *bio)
 {
 	if (should_fail_request(bdev_whole(bio->bi_bdev), bio->bi_iter.bi_size))
 		return -EIO;
diff --git a/block/blk-merge.c b/block/blk-merge.c
index 51fe4ed5b7c0..c411045fcf03 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -130,7 +130,11 @@ struct bio *bio_submit_split_bioset(struct bio *bio, unsigned int split_sectors,
 	bio_chain(split, bio);
 	trace_block_split(split, bio->bi_iter.bi_sector);
 	WARN_ON_ONCE(bio_zone_write_plugging(bio));
-	submit_bio_noacct(bio);
+
+	if (should_fail_bio(bio))
+		bio_io_error(bio);
+	else if (!blk_throtl_bio(bio))
+		submit_bio_noacct_nocheck(bio);
 
 	return split;
 }
diff --git a/block/blk.h b/block/blk.h
index 0268deb22268..18cc3c2afdd4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -615,6 +615,7 @@ extern const struct address_space_operations def_blk_aops;
 int disk_register_independent_access_ranges(struct gendisk *disk);
 void disk_unregister_independent_access_ranges(struct gendisk *disk);
 
+int should_fail_bio(struct bio *bio);
 #ifdef CONFIG_FAIL_MAKE_REQUEST
 bool should_fail_request(struct block_device *part, unsigned int bytes);
 #else /* CONFIG_FAIL_MAKE_REQUEST */
-- 
2.39.2



* [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (13 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-05 20:51   ` Bart Van Assche
  2025-09-09  8:16   ` Christoph Hellwig
  2025-09-05  7:06 ` [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset() Yu Kuai
  2025-09-09 15:28 ` [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Jens Axboe
  16 siblings, 2 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Currently, the split bio is chained to the original bio, and the original
bio is resubmitted to the tail of current->bio_list, waiting for the split
bio to be issued. However, if the split bio gets split again, the IO order
will be messed up. On the one hand, this causes performance degradation,
especially for mdraid with large IO sizes; on the other hand, it causes
write errors for zoned block devices[1].

For example, in raid456 IO will first be split by max_sectors from
md_submit_bio(), and then split again by chunksize for internal
handling:

For example, assume max_sectors is 1M, and chunksize is 512k

1) issue a 2M IO:

bio issuing: 0+2M
current->bio_list: NULL

2) md_submit_bio() split by max_sector:

bio issuing: 0+1M
current->bio_list: 1M+1M

3) chunk_aligned_read() split by chunksize:

bio issuing: 0+512k
current->bio_list: 1M+1M -> 512k+512k

4) after the first bio is issued, __submit_bio_noacct() will continue
issuing the next bio:

bio issuing: 1M+1M
current->bio_list: 512k+512k
bio issued: 0+512k

5) chunk_aligned_read() split by chunksize:

bio issuing: 1M+512k
current->bio_list: 512k+512k -> 1536k+512k
bio issued: 0+512k

6) no split afterwards, finally the issue order is:

0+512k -> 1M+512k -> 512k+512k -> 1536k+512k

This behaviour causes a large sequential read on raid456 to end up as
small discontinuous IO on the underlying disks. Fix this problem by
placing the split bio at the head of current->bio_list; with that change
the walkthrough above issues the bios in order:
0+512k -> 512k+512k -> 1M+512k -> 1536k+512k
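
Conceptually, the resubmission path changes as follows (a sketch only;
the actual patch plumbs a 'split' flag into submit_bio_noacct_nocheck(),
see the diff below):

	/*
	 * Remainder of a split: queue it at the head of the per-task
	 * list so it is picked before bios queued earlier at the tail.
	 */
	if (split)
		bio_list_add_head(&current->bio_list[0], bio);
	else
		bio_list_add(&current->bio_list[0], bio);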

Test script: test on 8 disk raid5 with 64k chunksize
dd if=/dev/md0 of=/dev/null bs=4480k iflag=direct

Test results:
Before this patch
1) iostat results:
Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz  aqu-sz  %util
md0           52430.00   3276.87     0.00   0.00    0.62    64.00   32.60  80.10
sd*           4487.00    409.00  2054.00  31.40    0.82    93.34    3.68  71.20
2) blktrace G stage:
  8,0    0   486445    11.357392936   843  G   R 14071424 + 128 [dd]
  8,0    0   486451    11.357466360   843  G   R 14071168 + 128 [dd]
  8,0    0   486454    11.357515868   843  G   R 14071296 + 128 [dd]
  8,0    0   486468    11.357968099   843  G   R 14072192 + 128 [dd]
  8,0    0   486474    11.358031320   843  G   R 14071936 + 128 [dd]
  8,0    0   486480    11.358096298   843  G   R 14071552 + 128 [dd]
  8,0    0   486490    11.358303858   843  G   R 14071808 + 128 [dd]
3) io seek for sdx:
Note that io seek is computed from the blktrace D stage, as the statistic of:
ABS((offset of next IO) - (offset + len of previous IO))

Read|Write seek
cnt 55175, zero cnt 25079
    >=(KB) .. <(KB)     : count       ratio |distribution                            |
         0 .. 1         : 25079       45.5% |########################################|
         1 .. 2         : 0            0.0% |                                        |
         2 .. 4         : 0            0.0% |                                        |
         4 .. 8         : 0            0.0% |                                        |
         8 .. 16        : 0            0.0% |                                        |
        16 .. 32        : 0            0.0% |                                        |
        32 .. 64        : 12540       22.7% |#####################                   |
        64 .. 128       : 2508         4.5% |#####                                   |
       128 .. 256       : 0            0.0% |                                        |
       256 .. 512       : 10032       18.2% |#################                       |
       512 .. 1024      : 5016         9.1% |#########                               |

After this patch:
1) iostat results:
Device            r/s     rMB/s   rrqm/s  %rrqm r_await rareq-sz  aqu-sz  %util
md0           87965.00   5271.88     0.00   0.00    0.16    61.37   14.03  90.60
sd*           6020.00    658.44  5117.00  45.95    0.44   112.00    2.68  86.50
2) blktrace G stage:
  8,0    0   206296     5.354894072   664  G   R 7156992 + 128 [dd]
  8,0    0   206305     5.355018179   664  G   R 7157248 + 128 [dd]
  8,0    0   206316     5.355204438   664  G   R 7157504 + 128 [dd]
  8,0    0   206319     5.355241048   664  G   R 7157760 + 128 [dd]
  8,0    0   206333     5.355500923   664  G   R 7158016 + 128 [dd]
  8,0    0   206344     5.355837806   664  G   R 7158272 + 128 [dd]
  8,0    0   206353     5.355960395   664  G   R 7158528 + 128 [dd]
  8,0    0   206357     5.356020772   664  G   R 7158784 + 128 [dd]
3) io seek for sdx
Read|Write seek
cnt 28644, zero cnt 21483
    >=(KB) .. <(KB)     : count       ratio |distribution                            |
         0 .. 1         : 21483       75.0% |########################################|
         1 .. 2         : 0            0.0% |                                        |
         2 .. 4         : 0            0.0% |                                        |
         4 .. 8         : 0            0.0% |                                        |
         8 .. 16        : 0            0.0% |                                        |
        16 .. 32        : 0            0.0% |                                        |
        32 .. 64        : 7161        25.0% |##############                          |

BTW, this looks like a long-standing problem that has existed from day one,
and large sequential reads are a pretty common case (e.g. video playback).

And even with this patch, in this test case IO is merged to at most 128k
due to the block layer plug limit BLK_PLUG_FLUSH_SIZE; increasing that
limit can give even better performance. However, we'll figure out how to do
this properly later.
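
(For reference, that limit is currently defined in include/linux/blkdev.h as
below; value at the time of writing, shown only for context:)

#define BLK_PLUG_FLUSH_SIZE (128 * 1024)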

[1] https://lore.kernel.org/all/e40b076d-583d-406b-b223-005910a9f46f@acm.org/
Fixes: d89d87965dcb ("When stacked block devices are in-use (e.g. md or dm), the recursive calls")
Reported-by: Tie Ren <tieren@fnnas.com>
Closes: https://lore.kernel.org/all/7dro5o7u5t64d6bgiansesjavxcuvkq5p2pok7dtwkav7b7ape@3isfr44b6352/
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 block/blk-core.c     | 16 ++++++++++------
 block/blk-merge.c    |  2 +-
 block/blk-throttle.c |  2 +-
 block/blk.h          |  2 +-
 4 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 1021a09c5958..dd39ff651095 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -725,7 +725,7 @@ static void __submit_bio_noacct_mq(struct bio *bio)
 	current->bio_list = NULL;
 }
 
-void submit_bio_noacct_nocheck(struct bio *bio)
+void submit_bio_noacct_nocheck(struct bio *bio, bool split)
 {
 	blk_cgroup_bio_start(bio);
 
@@ -744,12 +744,16 @@ void submit_bio_noacct_nocheck(struct bio *bio)
 	 * to collect a list of requests submited by a ->submit_bio method while
 	 * it is active, and then process them after it returned.
 	 */
-	if (current->bio_list)
-		bio_list_add(&current->bio_list[0], bio);
-	else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO))
+	if (current->bio_list) {
+		if (split)
+			bio_list_add_head(&current->bio_list[0], bio);
+		else
+			bio_list_add(&current->bio_list[0], bio);
+	} else if (!bdev_test_flag(bio->bi_bdev, BD_HAS_SUBMIT_BIO)) {
 		__submit_bio_noacct_mq(bio);
-	else
+	} else {
 		__submit_bio_noacct(bio);
+	}
 }
 
 static blk_status_t blk_validate_atomic_write_op_size(struct request_queue *q,
@@ -870,7 +874,7 @@ void submit_bio_noacct(struct bio *bio)
 
 	if (blk_throtl_bio(bio))
 		return;
-	submit_bio_noacct_nocheck(bio);
+	submit_bio_noacct_nocheck(bio, false);
 	return;
 
 not_supported:
diff --git a/block/blk-merge.c b/block/blk-merge.c
index c411045fcf03..77488f11a944 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -134,7 +134,7 @@ struct bio *bio_submit_split_bioset(struct bio *bio, unsigned int split_sectors,
 	if (should_fail_bio(bio))
 		bio_io_error(bio);
 	else if (!blk_throtl_bio(bio))
-		submit_bio_noacct_nocheck(bio);
+		submit_bio_noacct_nocheck(bio, true);
 
 	return split;
 }
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 397b6a410f9e..ead7b0eb4846 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -1224,7 +1224,7 @@ static void blk_throtl_dispatch_work_fn(struct work_struct *work)
 	if (!bio_list_empty(&bio_list_on_stack)) {
 		blk_start_plug(&plug);
 		while ((bio = bio_list_pop(&bio_list_on_stack)))
-			submit_bio_noacct_nocheck(bio);
+			submit_bio_noacct_nocheck(bio, false);
 		blk_finish_plug(&plug);
 	}
 }
diff --git a/block/blk.h b/block/blk.h
index 18cc3c2afdd4..d9efc8693aa4 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -54,7 +54,7 @@ bool blk_queue_start_drain(struct request_queue *q);
 bool __blk_freeze_queue_start(struct request_queue *q,
 			      struct task_struct *owner);
 int __bio_queue_enter(struct request_queue *q, struct bio *bio);
-void submit_bio_noacct_nocheck(struct bio *bio);
+void submit_bio_noacct_nocheck(struct bio *bio, bool split);
 void bio_await_chain(struct bio *bio);
 
 static inline bool blk_try_enter_queue(struct request_queue *q, bool pm)
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset()
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (14 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split Yu Kuai
@ 2025-09-05  7:06 ` Yu Kuai
  2025-09-09  8:17   ` Christoph Hellwig
  2025-09-09 15:28 ` [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Jens Axboe
  16 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-05  7:06 UTC (permalink / raw)
  To: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

From: Yu Kuai <yukuai3@huawei.com>

Currently, raid0_make_request() remaps the original bio to the underlying
disks directly to prevent reordered IO. Now that bio_submit_split_bioset()
puts the original bio at the head of current->bio_list, it is safe to convert
to this helper and the bios will still be issued in order.

CC: Jan Kara <jack@suse.cz>
Signed-off-by: Yu Kuai <yukuai3@huawei.com>
---
 drivers/md/raid0.c | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index ca08ec2e1f27..adc9e68d064d 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -607,19 +607,10 @@ static bool raid0_make_request(struct mddev *mddev, struct bio *bio)
 		 : sector_div(sector, chunk_sects));
 
 	if (sectors < bio_sectors(bio)) {
-		struct bio *split = bio_split(bio, sectors, GFP_NOIO,
+		bio = bio_submit_split_bioset(bio, sectors,
 					      &mddev->bio_set);
-
-		if (IS_ERR(split)) {
-			bio->bi_status = errno_to_blk_status(PTR_ERR(split));
-			bio_endio(bio);
+		if (!bio)
 			return true;
-		}
-
-		bio_chain(split, bio);
-		trace_block_split(split, bio->bi_iter.bi_sector);
-		raid0_map_submit_bio(mddev, bio);
-		bio = split;
 	}
 
 	raid0_map_submit_bio(mddev, bio);
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 04/16] md: fix mssing blktrace bio split events
  2025-09-05  7:06 ` [PATCH for-6.18/block 04/16] md: fix mssing blktrace bio split events Yu Kuai
@ 2025-09-05 20:44   ` Bart Van Assche
  2025-09-09  8:12   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:44 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi


On 9/5/25 12:06 AM, Yu Kuai wrote:
> If a bio is split by internal handling like chunksize or badblocks, the
> corresponding trace_block_split() is missing, resulting in blktrace being
> unable to catch BIO split events and making it harder to analyze the
> BIO sequence.

The bio splitting code in block/blk-crypto-fallback.c doesn't call
trace_block_split() either, but maybe that code falls outside the scope
of this patch?

Thanks,

Bart.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 05/16] blk-crypto: fix missing blktrace bio split events
  2025-09-05  7:06 ` [PATCH for-6.18/block 05/16] blk-crypto: fix missing " Yu Kuai
@ 2025-09-05 20:45   ` Bart Van Assche
  2025-09-09  8:13   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:45 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/5/25 12:06 AM, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
> 
> trace_block_split() is missing, resulting in blktrace being unable to catch
> BIO split events and making it harder to analyze the BIO sequence.
> 
> Cc: stable@vger.kernel.org
> Fixes: 488f6682c832 ("block: blk-crypto-fallback for Inline Encryption")
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>
> ---
>   block/blk-crypto-fallback.c | 3 +++
>   1 file changed, 3 insertions(+)
> 
> diff --git a/block/blk-crypto-fallback.c b/block/blk-crypto-fallback.c
> index 005c9157ffb3..1f9a4c33d2bd 100644
> --- a/block/blk-crypto-fallback.c
> +++ b/block/blk-crypto-fallback.c
> @@ -18,6 +18,7 @@
>   #include <linux/module.h>
>   #include <linux/random.h>
>   #include <linux/scatterlist.h>
> +#include <trace/events/block.h>
>   
>   #include "blk-cgroup.h"
>   #include "blk-crypto-internal.h"
> @@ -231,7 +232,9 @@ static bool blk_crypto_fallback_split_bio_if_needed(struct bio **bio_ptr)
>   			bio->bi_status = BLK_STS_RESOURCE;
>   			return false;
>   		}
> +
>   		bio_chain(split_bio, bio);
> +		trace_block_split(split_bio, bio->bi_iter.bi_sector);
>   		submit_bio_noacct(bio);
>   		*bio_ptr = split_bio;
>   	}

Ah, here it is. Hence:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset() Yu Kuai
@ 2025-09-05 20:47   ` Bart Van Assche
  2025-09-09  8:13   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:47 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/5/25 12:06 AM, Yu Kuai wrote:
> No functional changes are intended. Some drivers like mdraid split bios
> by internal processing; this prepares to unify the bio split code.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-05 20:49   ` Bart Van Assche
  2025-09-06  0:38     ` Damien Le Moal
  0 siblings, 1 reply; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:49 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/5/25 12:06 AM, Yu Kuai wrote:
> Unify bio split code, and prepare to fix disordered split IO

fix disordered split IO -> fix reordering of split IO



^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 13/16] blk-crypto: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 13/16] blk-crypto: " Yu Kuai
@ 2025-09-05 20:50   ` Bart Van Assche
  2025-09-06  2:42     ` Yu Kuai
  2025-09-09  8:15   ` Christoph Hellwig
  1 sibling, 1 reply; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:50 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/5/25 12:06 AM, Yu Kuai wrote:
> Unify bio split code, prepare to fix reordered split IO.

reordered split IO -> reordering of split IO

Anyway:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split
  2025-09-05  7:06 ` [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split Yu Kuai
@ 2025-09-05 20:51   ` Bart Van Assche
  2025-09-09  8:16   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Bart Van Assche @ 2025-09-05 20:51 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/5/25 12:06 AM, Yu Kuai wrote:
> Currently, a split bio will be chained to the original bio, and the original
> bio will be resubmitted to the tail of current->bio_list, waiting for the
> split bio to be issued. However, if the split bio gets split again, the IO
> order will be messed up. This problem, on the one hand, will cause
> performance degradation, especially for mdraid with large IO sizes; on
> the other hand, it will cause write errors for zoned block devices[1].

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset()
  2025-09-05 20:49   ` Bart Van Assche
@ 2025-09-06  0:38     ` Damien Le Moal
  0 siblings, 0 replies; 44+ messages in thread
From: Damien Le Moal @ 2025-09-06  0:38 UTC (permalink / raw)
  To: Bart Van Assche, Yu Kuai, hch, colyli, hare, tieren, axboe, tj,
	josef, song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

On 9/6/25 05:49, Bart Van Assche wrote:
> On 9/5/25 12:06 AM, Yu Kuai wrote:
>> Unify bio split code, and prepare to fix disordered split IO
> 
> fix disordered split IO -> fix reordering of split IO

-> fix ordering of split IO

the fix is to reorder IOs :)


-- 
Damien Le Moal
Western Digital Research

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 13/16] blk-crypto: convert to use bio_submit_split_bioset()
  2025-09-05 20:50   ` Bart Van Assche
@ 2025-09-06  2:42     ` Yu Kuai
  0 siblings, 0 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-06  2:42 UTC (permalink / raw)
  To: Bart Van Assche, Yu Kuai, hch, colyli, hare, dlemoal, tieren,
	axboe, tj, josef, song, yukuai3, satyat, ebiggers, kmo, akpm,
	neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

Hi,

On 2025/9/6 4:50, Bart Van Assche wrote:
> On 9/5/25 12:06 AM, Yu Kuai wrote:
>> Unify bio split code, prepare to fix reordered split IO.
>
> reordered split IO -> reordering of split IO


I'll fix this in the next version, and do the same for the other patches.

>
> Anyway:
>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
>
>
Thanks for the Review!
Kuai


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
  2025-09-05  7:06 ` [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio() Yu Kuai
@ 2025-09-06 15:27   ` kernel test robot
  2025-09-07  7:57     ` Yu Kuai
  2025-09-09  8:11   ` Christoph Hellwig
  1 sibling, 1 reply; 44+ messages in thread
From: kernel test robot @ 2025-09-06 15:27 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, bvanassche, axboe,
	tj, josef, song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: oe-kbuild-all, linux-block, linux-kernel, cgroups, linux-raid,
	yi.zhang, yangerkun, johnny.chenyi

Hi Yu,

kernel test robot noticed the following build errors:

[auto build test ERROR on axboe-block/for-next]
[also build test ERROR on linus/master v6.17-rc4 next-20250905]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/block-cleanup-bio_issue/20250905-153659
base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
patch link:    https://lore.kernel.org/r/20250905070643.2533483-3-yukuai1%40huaweicloud.com
patch subject: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
config: i386-buildonly-randconfig-003-20250906 (https://download.01.org/0day-ci/archive/20250906/202509062332.tqE0Bc8k-lkp@intel.com/config)
compiler: gcc-13 (Debian 13.3.0-16) 13.3.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250906/202509062332.tqE0Bc8k-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202509062332.tqE0Bc8k-lkp@intel.com/

All errors (new ones prefixed by >>):

   block/blk-mq.c: In function 'blk_mq_submit_bio':
>> block/blk-mq.c:3171:12: error: 'struct bio' has no member named 'issue_time_ns'
    3171 |         bio->issue_time_ns = blk_time_get_ns();
         |            ^~


vim +3171 block/blk-mq.c

  3097	
  3098	/**
  3099	 * blk_mq_submit_bio - Create and send a request to block device.
  3100	 * @bio: Bio pointer.
  3101	 *
  3102	 * Builds up a request structure from @q and @bio and send to the device. The
  3103	 * request may not be queued directly to hardware if:
  3104	 * * This request can be merged with another one
  3105	 * * We want to place request at plug queue for possible future merging
  3106	 * * There is an IO scheduler active at this queue
  3107	 *
  3108	 * It will not queue the request if there is an error with the bio, or at the
  3109	 * request creation.
  3110	 */
  3111	void blk_mq_submit_bio(struct bio *bio)
  3112	{
  3113		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
  3114		struct blk_plug *plug = current->plug;
  3115		const int is_sync = op_is_sync(bio->bi_opf);
  3116		struct blk_mq_hw_ctx *hctx;
  3117		unsigned int nr_segs;
  3118		struct request *rq;
  3119		blk_status_t ret;
  3120	
  3121		/*
  3122		 * If the plug has a cached request for this queue, try to use it.
  3123		 */
  3124		rq = blk_mq_peek_cached_request(plug, q, bio->bi_opf);
  3125	
  3126		/*
  3127		 * A BIO that was released from a zone write plug has already been
  3128		 * through the preparation in this function, already holds a reference
  3129		 * on the queue usage counter, and is the only write BIO in-flight for
  3130		 * the target zone. Go straight to preparing a request for it.
  3131		 */
  3132		if (bio_zone_write_plugging(bio)) {
  3133			nr_segs = bio->__bi_nr_segments;
  3134			if (rq)
  3135				blk_queue_exit(q);
  3136			goto new_request;
  3137		}
  3138	
  3139		/*
  3140		 * The cached request already holds a q_usage_counter reference and we
  3141		 * don't have to acquire a new one if we use it.
  3142		 */
  3143		if (!rq) {
  3144			if (unlikely(bio_queue_enter(bio)))
  3145				return;
  3146		}
  3147	
  3148		/*
  3149		 * Device reconfiguration may change logical block size or reduce the
  3150		 * number of poll queues, so the checks for alignment and poll support
  3151		 * have to be done with queue usage counter held.
  3152		 */
  3153		if (unlikely(bio_unaligned(bio, q))) {
  3154			bio_io_error(bio);
  3155			goto queue_exit;
  3156		}
  3157	
  3158		if ((bio->bi_opf & REQ_POLLED) && !blk_mq_can_poll(q)) {
  3159			bio->bi_status = BLK_STS_NOTSUPP;
  3160			bio_endio(bio);
  3161			goto queue_exit;
  3162		}
  3163	
  3164		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
  3165		if (!bio)
  3166			goto queue_exit;
  3167	
  3168		if (!bio_integrity_prep(bio))
  3169			goto queue_exit;
  3170	
> 3171		bio->issue_time_ns = blk_time_get_ns();
  3172		if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
  3173			goto queue_exit;
  3174	
  3175		if (bio_needs_zone_write_plugging(bio)) {
  3176			if (blk_zone_plug_bio(bio, nr_segs))
  3177				goto queue_exit;
  3178		}
  3179	
  3180	new_request:
  3181		if (rq) {
  3182			blk_mq_use_cached_rq(rq, plug, bio);
  3183		} else {
  3184			rq = blk_mq_get_new_requests(q, plug, bio);
  3185			if (unlikely(!rq)) {
  3186				if (bio->bi_opf & REQ_NOWAIT)
  3187					bio_wouldblock_error(bio);
  3188				goto queue_exit;
  3189			}
  3190		}
  3191	
  3192		trace_block_getrq(bio);
  3193	
  3194		rq_qos_track(q, rq, bio);
  3195	
  3196		blk_mq_bio_to_request(rq, bio, nr_segs);
  3197	
  3198		ret = blk_crypto_rq_get_keyslot(rq);
  3199		if (ret != BLK_STS_OK) {
  3200			bio->bi_status = ret;
  3201			bio_endio(bio);
  3202			blk_mq_free_request(rq);
  3203			return;
  3204		}
  3205	
  3206		if (bio_zone_write_plugging(bio))
  3207			blk_zone_write_plug_init_request(rq);
  3208	
  3209		if (op_is_flush(bio->bi_opf) && blk_insert_flush(rq))
  3210			return;
  3211	
  3212		if (plug) {
  3213			blk_add_rq_to_plug(plug, rq);
  3214			return;
  3215		}
  3216	
  3217		hctx = rq->mq_hctx;
  3218		if ((rq->rq_flags & RQF_USE_SCHED) ||
  3219		    (hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
  3220			blk_mq_insert_request(rq, 0);
  3221			blk_mq_run_hw_queue(hctx, true);
  3222		} else {
  3223			blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
  3224		}
  3225		return;
  3226	
  3227	queue_exit:
  3228		/*
  3229		 * Don't drop the queue reference if we were trying to use a cached
  3230		 * request and thus didn't acquire one.
  3231		 */
  3232		if (!rq)
  3233			blk_queue_exit(q);
  3234	}
  3235	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
  2025-09-06 15:27   ` kernel test robot
@ 2025-09-07  7:57     ` Yu Kuai
  2025-09-08  1:32       ` Yu Kuai
  0 siblings, 1 reply; 44+ messages in thread
From: Yu Kuai @ 2025-09-07  7:57 UTC (permalink / raw)
  To: kernel test robot, Yu Kuai, hch, colyli, hare, dlemoal, tieren,
	bvanassche, axboe, tj, josef, song, yukuai3, satyat, ebiggers,
	kmo, akpm, neil
  Cc: oe-kbuild-all, linux-block, linux-kernel, cgroups, linux-raid,
	yi.zhang, yangerkun, johnny.chenyi

Hi,

On 2025/9/6 23:27, kernel test robot wrote:
> Hi Yu,
>
> kernel test robot noticed the following build errors:
>
> [auto build test ERROR on axboe-block/for-next]
> [also build test ERROR on linus/master v6.17-rc4 next-20250905]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch#_base_tree_information]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Yu-Kuai/block-cleanup-bio_issue/20250905-153659
> base:   https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-next
> patch link:    https://lore.kernel.org/r/20250905070643.2533483-3-yukuai1%40huaweicloud.com
> patch subject: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
> config: i386-buildonly-randconfig-003-20250906 (https://download.01.org/0day-ci/archive/20250906/202509062332.tqE0Bc8k-lkp@intel.com/config)
> compiler: gcc-13 (Debian 13.3.0-16) 13.3.0
> reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20250906/202509062332.tqE0Bc8k-lkp@intel.com/reproduce)
>
> If you fix the issue in a separate patch/commit (i.e. not just a new version of
> the same patch/commit), kindly add following tags
> | Reported-by: kernel test robot <lkp@intel.com>
> | Closes: https://lore.kernel.org/oe-kbuild-all/202509062332.tqE0Bc8k-lkp@intel.com/
>
> All errors (new ones prefixed by >>):
>
>     block/blk-mq.c: In function 'blk_mq_submit_bio':
>>> block/blk-mq.c:3171:12: error: 'struct bio' has no member named 'issue_time_ns'
>      3171 |         bio->issue_time_ns = blk_time_get_ns();

This should be included inside BLK_CGROUP config, sorry about this.

Thanks,
Kuai

>           |            ^~
>
>
> vim +3171 block/blk-mq.c
>
>    3097	
>    3098	/**
>    3099	 * blk_mq_submit_bio - Create and send a request to block device.
>    3100	 * @bio: Bio pointer.
>    3101	 *
>    3102	 * Builds up a request structure from @q and @bio and send to the device. The
>    3103	 * request may not be queued directly to hardware if:
>    3104	 * * This request can be merged with another one
>    3105	 * * We want to place request at plug queue for possible future merging
>    3106	 * * There is an IO scheduler active at this queue
>    3107	 *
>    3108	 * It will not queue the request if there is an error with the bio, or at the
>    3109	 * request creation.
>    3110	 */
>    3111	void blk_mq_submit_bio(struct bio *bio)
>    3112	{
>    3113		struct request_queue *q = bdev_get_queue(bio->bi_bdev);
>    3114		struct blk_plug *plug = current->plug;
>    3115		const int is_sync = op_is_sync(bio->bi_opf);
>    3116		struct blk_mq_hw_ctx *hctx;
>    3117		unsigned int nr_segs;
>    3118		struct request *rq;
>    3119		blk_status_t ret;
>    3120	
>    3121		/*
>    3122		 * If the plug has a cached request for this queue, try to use it.
>    3123		 */
>    3124		rq = blk_mq_peek_cached_request(plug, q, bio->bi_opf);
>    3125	
>    3126		/*
>    3127		 * A BIO that was released from a zone write plug has already been
>    3128		 * through the preparation in this function, already holds a reference
>    3129		 * on the queue usage counter, and is the only write BIO in-flight for
>    3130		 * the target zone. Go straight to preparing a request for it.
>    3131		 */
>    3132		if (bio_zone_write_plugging(bio)) {
>    3133			nr_segs = bio->__bi_nr_segments;
>    3134			if (rq)
>    3135				blk_queue_exit(q);
>    3136			goto new_request;
>    3137		}
>    3138	
>    3139		/*
>    3140		 * The cached request already holds a q_usage_counter reference and we
>    3141		 * don't have to acquire a new one if we use it.
>    3142		 */
>    3143		if (!rq) {
>    3144			if (unlikely(bio_queue_enter(bio)))
>    3145				return;
>    3146		}
>    3147	
>    3148		/*
>    3149		 * Device reconfiguration may change logical block size or reduce the
>    3150		 * number of poll queues, so the checks for alignment and poll support
>    3151		 * have to be done with queue usage counter held.
>    3152		 */
>    3153		if (unlikely(bio_unaligned(bio, q))) {
>    3154			bio_io_error(bio);
>    3155			goto queue_exit;
>    3156		}
>    3157	
>    3158		if ((bio->bi_opf & REQ_POLLED) && !blk_mq_can_poll(q)) {
>    3159			bio->bi_status = BLK_STS_NOTSUPP;
>    3160			bio_endio(bio);
>    3161			goto queue_exit;
>    3162		}
>    3163	
>    3164		bio = __bio_split_to_limits(bio, &q->limits, &nr_segs);
>    3165		if (!bio)
>    3166			goto queue_exit;
>    3167	
>    3168		if (!bio_integrity_prep(bio))
>    3169			goto queue_exit;
>    3170	
>> 3171		bio->issue_time_ns = blk_time_get_ns();
>    3172		if (blk_mq_attempt_bio_merge(q, bio, nr_segs))
>    3173			goto queue_exit;
>    3174	
>    3175		if (bio_needs_zone_write_plugging(bio)) {
>    3176			if (blk_zone_plug_bio(bio, nr_segs))
>    3177				goto queue_exit;
>    3178		}
>    3179	
>    3180	new_request:
>    3181		if (rq) {
>    3182			blk_mq_use_cached_rq(rq, plug, bio);
>    3183		} else {
>    3184			rq = blk_mq_get_new_requests(q, plug, bio);
>    3185			if (unlikely(!rq)) {
>    3186				if (bio->bi_opf & REQ_NOWAIT)
>    3187					bio_wouldblock_error(bio);
>    3188				goto queue_exit;
>    3189			}
>    3190		}
>    3191	
>    3192		trace_block_getrq(bio);
>    3193	
>    3194		rq_qos_track(q, rq, bio);
>    3195	
>    3196		blk_mq_bio_to_request(rq, bio, nr_segs);
>    3197	
>    3198		ret = blk_crypto_rq_get_keyslot(rq);
>    3199		if (ret != BLK_STS_OK) {
>    3200			bio->bi_status = ret;
>    3201			bio_endio(bio);
>    3202			blk_mq_free_request(rq);
>    3203			return;
>    3204		}
>    3205	
>    3206		if (bio_zone_write_plugging(bio))
>    3207			blk_zone_write_plug_init_request(rq);
>    3208	
>    3209		if (op_is_flush(bio->bi_opf) && blk_insert_flush(rq))
>    3210			return;
>    3211	
>    3212		if (plug) {
>    3213			blk_add_rq_to_plug(plug, rq);
>    3214			return;
>    3215		}
>    3216	
>    3217		hctx = rq->mq_hctx;
>    3218		if ((rq->rq_flags & RQF_USE_SCHED) ||
>    3219		    (hctx->dispatch_busy && (q->nr_hw_queues == 1 || !is_sync))) {
>    3220			blk_mq_insert_request(rq, 0);
>    3221			blk_mq_run_hw_queue(hctx, true);
>    3222		} else {
>    3223			blk_mq_run_dispatch_ops(q, blk_mq_try_issue_directly(hctx, rq));
>    3224		}
>    3225		return;
>    3226	
>    3227	queue_exit:
>    3228		/*
>    3229		 * Don't drop the queue reference if we were trying to use a cached
>    3230		 * request and thus didn't acquire one.
>    3231		 */
>    3232		if (!rq)
>    3233			blk_queue_exit(q);
>    3234	}
>    3235	
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
  2025-09-07  7:57     ` Yu Kuai
@ 2025-09-08  1:32       ` Yu Kuai
  0 siblings, 0 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-08  1:32 UTC (permalink / raw)
  To: Yu Kuai, kernel test robot, Yu Kuai, hch, colyli, hare, dlemoal,
	tieren, bvanassche, axboe, tj, josef, song, satyat, ebiggers, kmo,
	akpm, neil
  Cc: oe-kbuild-all, linux-block, linux-kernel, cgroups, linux-raid,
	yi.zhang, yangerkun, johnny.chenyi, yukuai (C)

Hi,

On 2025/09/07 15:57, Yu Kuai wrote:
>> If you fix the issue in a separate patch/commit (i.e. not just a new 
>> version of
>> the same patch/commit), kindly add following tags
>> | Reported-by: kernel test robot <lkp@intel.com>
>> | Closes: 
>> https://lore.kernel.org/oe-kbuild-all/202509062332.tqE0Bc8k-lkp@intel.com/ 
>>
>>
>> All errors (new ones prefixed by >>):
>>
>>     block/blk-mq.c: In function 'blk_mq_submit_bio':
>>>> block/blk-mq.c:3171:12: error: 'struct bio' has no member named 
>>>> 'issue_time_ns'
>>      3171 |         bio->issue_time_ns = blk_time_get_ns();
> 
> This should be included inside BLK_CGROUP config, sorry about this.
> 

I should keep blkcg_bio_issue_init() and call it here. I'll wait for
some time and let people review the other patches before I send a new
version.
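
Roughly along the lines of the sketch below (just the idea, since
issue_time_ns only exists in struct bio under CONFIG_BLK_CGROUP in this
series; the final patch may look different):

#ifdef CONFIG_BLK_CGROUP
static inline void blkcg_bio_issue_init(struct bio *bio)
{
        bio->issue_time_ns = blk_time_get_ns();
}
#else
static inline void blkcg_bio_issue_init(struct bio *bio)
{
}
#endif

and then blk_mq_submit_bio() would call blkcg_bio_issue_init(bio) instead
of touching bio->issue_time_ns directly.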

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio()
  2025-09-05  7:06 ` [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio() Yu Kuai
  2025-09-06 15:27   ` kernel test robot
@ 2025-09-09  8:11   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:11 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME
  2025-09-05  7:06 ` [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME Yu Kuai
@ 2025-09-09  8:12   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:12 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 04/16] md: fix mssing blktrace bio split events
  2025-09-05  7:06 ` [PATCH for-6.18/block 04/16] md: fix mssing blktrace bio split events Yu Kuai
  2025-09-05 20:44   ` Bart Van Assche
@ 2025-09-09  8:12   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:12 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 05/16] blk-crypto: fix missing blktrace bio split events
  2025-09-05  7:06 ` [PATCH for-6.18/block 05/16] blk-crypto: fix missing " Yu Kuai
  2025-09-05 20:45   ` Bart Van Assche
@ 2025-09-09  8:13   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:13 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset() Yu Kuai
  2025-09-05 20:47   ` Bart Van Assche
@ 2025-09-09  8:13   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:13 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi


Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 08/16] md/raid1: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 08/16] md/raid1: convert " Yu Kuai
@ 2025-09-09  8:13   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:13 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned
  2025-09-05  7:06 ` [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned Yu Kuai
@ 2025-09-09  8:14   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:14 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

On Fri, Sep 05, 2025 at 03:06:36PM +0800, Yu Kuai wrote:
> From: Yu Kuai <yukuai3@huawei.com>
> 
> The new helper bio_submit_split_bioset() can fail the original bio on
> split errors; prepare to handle this case in raid_end_bio_io().
> 
> The flag name refers to the corresponding r1bio flag name.
> 
> Signed-off-by: Yu Kuai <yukuai3@huawei.com>

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-09  8:14   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:14 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 11/16] md/raid5: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 11/16] md/raid5: convert " Yu Kuai
@ 2025-09-09  8:14   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:14 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 12/16] md/md-linear: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 12/16] md/md-linear: " Yu Kuai
@ 2025-09-09  8:15   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:15 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 13/16] blk-crypto: convert to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 13/16] blk-crypto: " Yu Kuai
  2025-09-05 20:50   ` Bart Van Assche
@ 2025-09-09  8:15   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:15 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio
  2025-09-05  7:06 ` [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio Yu Kuai
@ 2025-09-09  8:16   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:16 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split
  2025-09-05  7:06 ` [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split Yu Kuai
  2025-09-05 20:51   ` Bart Van Assche
@ 2025-09-09  8:16   ` Christoph Hellwig
  1 sibling, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:16 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset()
  2025-09-05  7:06 ` [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-09  8:17   ` Christoph Hellwig
  0 siblings, 0 replies; 44+ messages in thread
From: Christoph Hellwig @ 2025-09-09  8:17 UTC (permalink / raw)
  To: Yu Kuai
  Cc: hch, colyli, hare, dlemoal, tieren, bvanassche, axboe, tj, josef,
	song, yukuai3, satyat, ebiggers, kmo, akpm, neil, linux-block,
	linux-kernel, cgroups, linux-raid, yi.zhang, yangerkun,
	johnny.chenyi

Looks good:

Reviewed-by: Christoph Hellwig <hch@lst.de>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split
  2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
                   ` (15 preceding siblings ...)
  2025-09-05  7:06 ` [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset() Yu Kuai
@ 2025-09-09 15:28 ` Jens Axboe
  2025-09-09 17:16   ` Yu Kuai
  16 siblings, 1 reply; 44+ messages in thread
From: Jens Axboe @ 2025-09-09 15:28 UTC (permalink / raw)
  To: Yu Kuai, hch, colyli, hare, dlemoal, tieren, bvanassche, tj,
	josef, song, yukuai3, satyat, ebiggers, kmo, akpm, neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

Can you spin a new version with the commit messages sorted out and with
the missing "if defined" for BLK_CGROUP fixed up too? Looks like this is
good to go.

-- 
Jens Axboe

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split
  2025-09-09 15:28 ` [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Jens Axboe
@ 2025-09-09 17:16   ` Yu Kuai
  0 siblings, 0 replies; 44+ messages in thread
From: Yu Kuai @ 2025-09-09 17:16 UTC (permalink / raw)
  To: Jens Axboe, Yu Kuai, hch, colyli, hare, dlemoal, tieren,
	bvanassche, tj, josef, song, yukuai3, satyat, ebiggers, kmo, akpm,
	neil
  Cc: linux-block, linux-kernel, cgroups, linux-raid, yi.zhang,
	yangerkun, johnny.chenyi

Hi,

On 2025/9/9 23:28, Jens Axboe wrote:
> Can you spin a new version with the commit messages sorted out and with
> the missing "if defined" for BLK_CGROUP fixed up too? Looks like this is
> good to go.

That's great! I'll do this first thing in the morning.

Thanks,
Kuai


^ permalink raw reply	[flat|nested] 44+ messages in thread

end of thread, other threads:[~2025-09-09 17:19 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz follow: Atom feed
-- links below jump to the message on this page --
2025-09-05  7:06 [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Yu Kuai
2025-09-05  7:06 ` [PATCH for-6.18/block 01/16] block: cleanup bio_issue Yu Kuai
2025-09-05  7:06 ` [PATCH for-6.18/block 02/16] block: initialize bio issue time in blk_mq_submit_bio() Yu Kuai
2025-09-06 15:27   ` kernel test robot
2025-09-07  7:57     ` Yu Kuai
2025-09-08  1:32       ` Yu Kuai
2025-09-09  8:11   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 03/16] blk-mq: add QUEUE_FLAG_BIO_ISSUE_TIME Yu Kuai
2025-09-09  8:12   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 04/16] md: fix mssing blktrace bio split events Yu Kuai
2025-09-05 20:44   ` Bart Van Assche
2025-09-09  8:12   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 05/16] blk-crypto: fix missing " Yu Kuai
2025-09-05 20:45   ` Bart Van Assche
2025-09-09  8:13   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 06/16] block: factor out a helper bio_submit_split_bioset() Yu Kuai
2025-09-05 20:47   ` Bart Van Assche
2025-09-09  8:13   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 07/16] md/raid0: convert raid0_handle_discard() to use bio_submit_split_bioset() Yu Kuai
2025-09-05 20:49   ` Bart Van Assche
2025-09-06  0:38     ` Damien Le Moal
2025-09-05  7:06 ` [PATCH for-6.18/block 08/16] md/raid1: convert " Yu Kuai
2025-09-09  8:13   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 09/16] md/raid10: add a new r10bio flag R10BIO_Returned Yu Kuai
2025-09-09  8:14   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 10/16] md/raid10: convert read/write to use bio_submit_split_bioset() Yu Kuai
2025-09-09  8:14   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 11/16] md/raid5: convert " Yu Kuai
2025-09-09  8:14   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 12/16] md/md-linear: " Yu Kuai
2025-09-09  8:15   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 13/16] blk-crypto: " Yu Kuai
2025-09-05 20:50   ` Bart Van Assche
2025-09-06  2:42     ` Yu Kuai
2025-09-09  8:15   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 14/16] block: skip unnecessary checks for split bio Yu Kuai
2025-09-09  8:16   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 15/16] block: fix reordered IO in the case recursive split Yu Kuai
2025-09-05 20:51   ` Bart Van Assche
2025-09-09  8:16   ` Christoph Hellwig
2025-09-05  7:06 ` [PATCH for-6.18/block 16/16] md/raid0: convert raid0_make_request() to use bio_submit_split_bioset() Yu Kuai
2025-09-09  8:17   ` Christoph Hellwig
2025-09-09 15:28 ` [PATCH for-6.18/block 00/16] block: fix reordered IO in the case recursive split Jens Axboe
2025-09-09 17:16   ` Yu Kuai

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).