public inbox for linux-raid@vger.kernel.org
From: "Chen Cheng" <chencheng@fnnas.com>
To: "Paul Menzel" <pmenzel@molgen.mpg.de>
Cc: <linux-raid@vger.kernel.org>, <yukuai@fnnas.com>,
	 <chenchneg33@gmail.com>
Subject: Re: [PATCH 1/4] md/raid10: prepare per-r10bio dev slot tracking
Date: Fri, 24 Apr 2026 10:11:22 +0800	[thread overview]
Message-ID: <aerQ-L00v3c-7rNv@fedora> (raw)
In-Reply-To: <6e6e4340-2181-4a79-9284-7ed167aab807@molgen.mpg.de>

On Wed, Apr 22, 2026 at 08:40:42AM +0200, Paul Menzel wrote:

Hi Paul,

> Dear Cheng,
>
>
> Am 22.04.26 um 04:33 schrieb Chen Cheng:
> > From: Chen Cheng <chencheng@fnnas.com>
> >
> > raid10 reuses r10bio objects from both r10bio_pool and r10buf_pool. Track
> > the number of devs[] slots used by each request in the r10bio itself and
> > initialize it whenever one of these objects is reused.
> >
> > No functional change yet. A later patch will use this width when reshape
> > changes conf->geo.raid_disks.
>
> Your Signed-off-by: line is missing.

Yes, I missed it, thanks for pointing it out.

>
> > ---
> >   drivers/md/raid10.c | 4 ++++
> >   drivers/md/raid10.h | 1 +
> >   2 files changed, 5 insertions(+)
> >
> > diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> > index 0653b5d8545a..e93933632893 100644
> > --- a/drivers/md/raid10.c
> > +++ b/drivers/md/raid10.c
> > @@ -1540,6 +1540,7 @@ static void __make_request(struct mddev *mddev, struct bio *bio, int sectors)
> >     r10_bio->sector = bio->bi_iter.bi_sector;
> >     r10_bio->state = 0;
> >     r10_bio->read_slot = -1;
> > +   r10_bio->used_nr_devs = conf->geo.raid_disks;
> >     memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) *
> >                     conf->geo.raid_disks);
> > @@ -1727,6 +1728,7 @@ static int raid10_handle_discard(struct mddev *mddev, struct bio *bio)
> >     r10_bio->mddev = mddev;
> >     r10_bio->state = 0;
> >     r10_bio->sectors = 0;
> > +   r10_bio->used_nr_devs = geo->raid_disks;
> >     memset(r10_bio->devs, 0, sizeof(r10_bio->devs[0]) * geo->raid_disks);
> >     wait_blocked_dev(mddev, r10_bio);
> > @@ -3061,6 +3063,8 @@ static struct r10bio *raid10_alloc_init_r10buf(struct r10conf *conf)
> >     else
> >             nalloc = 2; /* recovery */
> > +   r10bio->used_nr_devs = nalloc;
> > +
> >     for (i = 0; i < nalloc; i++) {
> >             bio = r10bio->devs[i].bio;
> >             rp = bio->bi_private;
> > diff --git a/drivers/md/raid10.h b/drivers/md/raid10.h
> > index ec79d87fb92f..92e8743023e6 100644
> > --- a/drivers/md/raid10.h
> > +++ b/drivers/md/raid10.h
> > @@ -127,6 +127,7 @@ struct r10bio {
> >      * if the IO is in READ direction, then this is where we read
> >      */
> >     int                     read_slot;
> > +   unsigned int            used_nr_devs;
>
> Most entries have a comment describing the use. Maybe add one too, or at
> least a blank line, so it’s clear that the existing comment is just for
> `read_slot`?

Agreed.

>
> >     struct list_head        retry_list;
> >     /*
>
> From a performance and resource usage point of view, will increasing the
> struct have a negative impact?

On 64-bit platforms there is no resource usage impact: the new field
fits into the existing padding after read_slot, so
offsetof(struct r10bio, devs) stays unchanged.

On 32-bit platforms the struct may grow by 4 bytes per r10bio, but
that is negligible compared with the bios and pages allocated for
each request.

There is no performance impact either: the bottleneck is the IO
itself, and the IO path is unchanged.

>
> The diff looks good.
>
> Reviewed-by: Paul Menzel <pmenzel@molgen.mpg.de>
>

Thanks for the review.

>
> Kind regards,
>
> Paul


Thanks,
Cheng


Thread overview: 7+ messages
2026-04-22  2:33 [PATCH 1/4] md/raid10: prepare per-r10bio dev slot tracking Chen Cheng
2026-04-22  2:33 ` [PATCH 2/4] md/raid10: prepare r10bio allocation width tracking Chen Cheng
2026-04-22  2:33 ` [PATCH 3/4] md/raid10: fix r10bio devs overflow across reshape Chen Cheng
2026-04-22  2:33 ` [PATCH 4/4] md/raid10: reset read_slot when reusing r10bio for discard Chen Cheng
2026-04-22  6:40 ` [PATCH 1/4] md/raid10: prepare per-r10bio dev slot tracking Paul Menzel
2026-04-24  2:11   ` Chen Cheng [this message]
2026-04-24  7:04 ` Yu Kuai
