From: "Chen Cheng" <chencheng@fnnas.com>
To: "Yu Kuai" <yukuai@fnnas.com>
Cc: "Chen Cheng" <chencheng@fnnas.com>, <linux-raid@vger.kernel.org>,
<linux-kernel@vger.kernel.org>
Subject: [PATCH v2 2/2] md/raid10: bound reused r10bio devs[] walks by used_nr_devs
Date: Fri, 15 May 2026 17:27:07 +0800
Message-ID: <20260515092707.3436464-3-chencheng@fnnas.com>
In-Reply-To: <20260515092707.3436464-1-chencheng@fnnas.com>
From: Chen Cheng <chencheng@fnnas.com>
After reshape changes raid_disks, an in-flight r10bio from the old geometry
can still be completed or freed later. In that case, using the current
geometry to walk r10_bio->devs[] is unsafe. A failure was reproduced with a
simple write workload while reshaping a raid10 array from 4 disks to 5 disks.
e.g.:
mdadm -C /dev/md777 -l10 -n4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md777
mount /dev/md777 /mnt/test
fsstress -d /mnt/test -n 24000 -p 8 -l 24 &
mdadm /dev/md777 --add /dev/sde
mdadm --grow /dev/md777 --raid-devices=5 \
--backup-file=/tmp/md-reshape-backup
The sequence above can trigger:
BUG: KASAN: slab-out-of-bounds in free_r10bio+0x1c4/0x260 [raid10]
Read of size 8 at addr ffff00008c2dfac8 by task ksoftirqd/0/15
free_r10bio
raid_end_bio_io
one_write_done
raid10_end_write_request
The buggy object was 200 bytes long, which matches an r10bio with space for
only four devs[] entries. However, put_all_bios() and find_bio_disk() walk
r10_bio->devs[] using the current conf->geo.raid_disks value. Once reshape
switches conf->geo.raid_disks from 4 to 5, an old 4-slot r10bio can be
completed or freed as if it had 5 slots, and the walk overruns devs[4]. The
same stale-width mismatch can also surface during a 5-disk to 4-disk reshape.
Track the number of valid devs[] entries in each reused r10bio with
used_nr_devs. Initialize it whenever an r10bio is prepared for regular I/O,
discard, or resync/recovery/reshape work, and use it to bound devs[] walks
in put_all_bios() and find_bio_disk().
Signed-off-by: Chen Cheng <chencheng@fnnas.com>
---
drivers/md/raid10.c | 8 ++++++--
drivers/md/raid10.h | 2 ++
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 886bbe6b1ebc..42865d822d95 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -275,7 +275,7 @@ static void put_all_bios(struct r10conf *conf, struct r10bio *r10_bio)
 {
 	int i;
 
-	for (i = 0; i < conf->geo.raid_disks; i++) {
+	for (i = 0; i < r10_bio->used_nr_devs; i++) {
 		struct bio **bio = & r10_bio->devs[i].bio;
 		if (!BIO_SPECIAL(*bio))
 			bio_put(*bio);
@@ -372,7 +372,7 @@ static int find_bio_disk(struct r10conf *conf, struct r10bio *r10_bio,
 	int slot;
 	int repl = 0;
 
-	for (slot = 0; slot < conf->geo.raid_disks; slot++) {
+	for (slot = 0; slot < r10_bio->used_nr_devs; slot++) {
 		if (r10_bio->devs[slot].bio == bio)
 			break;
 		if (r10_bio->devs[slot].repl_bio == bio) {
@@ -1555,6 +1555,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
 	r10_bio->sector = bio->bi_iter.bi_sector;
 	r10_bio->state = 0;
 	r10_bio->read_slot = -1;
+	r10_bio->used_nr_devs = conf->geo.raid_disks;
 	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) *
 			conf->geo.raid_disks);
 
@@ -1742,6 +1743,7 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
 	r10_bio->mddev = mddev;
 	r10_bio->state = 0;
 	r10_bio->sectors = 0;
+	r10_bio->used_nr_devs = geo->raid_disks;
 	memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
 
 	wait_blocked_dev(mddev, r10_bio);
@@ -3076,6 +3078,8 @@ static struct r10bio *raid10_alloc_init_r10buf(struct r10conf *conf)
 	else
 		nalloc = 2; /* recovery */
 
+	r10bio->used_nr_devs = nalloc;
+
 	for (i = 0; i < nalloc; i++) {
 		bio = r10bio->devs[i].bio;
 		rp = bio->bi_private;
diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
index b711626a5db7..4751119f9770 100644
--- a/drivers/md/raid10.h
+++ b/drivers/md/raid10.h
@@ -127,6 +127,8 @@ struct r10bio {
	 * if the IO is in READ direction, then this is where we read
	 */
	int read_slot;
+	/* Used to bound devs[] walks when the object is reused. */
+	unsigned int used_nr_devs;
	struct list_head retry_list;
 
	/*
--
2.54.0
Thread overview: 3+ messages
2026-05-15 9:27 [PATCH v2 0/2] md/raid10: fix r10bio width mismatches across reshape Chen Cheng
2026-05-15 9:27 ` [PATCH v2 1/2] md/raid10: make r10bio_pool use fixed-size objects Chen Cheng
2026-05-15 9:27 ` Chen Cheng [this message]