From: Guoqing Jiang <guoqing.jiang@linux.dev>
To: song@kernel.org
Cc: linux-raid@vger.kernel.org
Subject: [PATCH 3/6] md/raid1: use rdev in raid1_write_request directly
Date: Mon,  4 Oct 2021 23:34:50 +0800
Message-ID: <20211004153453.14051-4-guoqing.jiang@linux.dev>
In-Reply-To: <20211004153453.14051-1-guoqing.jiang@linux.dev>

We already get rdev from conf->mirrors[i].rdev at the beginning of the
loop, so use the local variable directly instead of dereferencing
conf->mirrors[i].rdev again for each access.

Signed-off-by: Guoqing Jiang <guoqing.jiang@linux.dev>
---
 drivers/md/raid1.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 6ba12f0f0f03..7dc8026cf6ee 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1529,13 +1529,12 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 
 		r1_bio->bios[i] = mbio;
 
-		mbio->bi_iter.bi_sector	= (r1_bio->sector +
-				   conf->mirrors[i].rdev->data_offset);
-		bio_set_dev(mbio, conf->mirrors[i].rdev->bdev);
+		mbio->bi_iter.bi_sector	= (r1_bio->sector + rdev->data_offset);
+		bio_set_dev(mbio, rdev->bdev);
 		mbio->bi_end_io	= raid1_end_write_request;
 		mbio->bi_opf = bio_op(bio) | (bio->bi_opf & (REQ_SYNC | REQ_FUA));
-		if (test_bit(FailFast, &conf->mirrors[i].rdev->flags) &&
-		    !test_bit(WriteMostly, &conf->mirrors[i].rdev->flags) &&
+		if (test_bit(FailFast, &rdev->flags) &&
+		    !test_bit(WriteMostly, &rdev->flags) &&
 		    conf->raid_disks - mddev->degraded > 1)
 			mbio->bi_opf |= MD_FAILFAST;
 		mbio->bi_private = r1_bio;
@@ -1546,7 +1545,7 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
 			trace_block_bio_remap(mbio, disk_devt(mddev->gendisk),
 					      r1_bio->sector);
 		/* flush_pending_writes() needs access to the rdev so...*/
-		mbio->bi_bdev = (void *)conf->mirrors[i].rdev;
+		mbio->bi_bdev = (void *)rdev;
 
 		cb = blk_check_plugged(raid1_unplug, mddev, sizeof(*plug));
 		if (cb)
-- 
2.31.1
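
As an aside for readers who don't have raid1.c open: the shape of the
change is easy to see in isolation. The following is a minimal,
self-contained sketch, not the kernel source; the structures and the
write_loop()/main() functions are simplified stand-ins invented for
illustration. It shows the pattern the patch applies: the rdev pointer
is loaded into a local once at the top of the per-disk loop, and the
loop body then uses that local instead of re-dereferencing
conf->mirrors[i].rdev at every access.

	#include <stdio.h>

	/* Simplified stand-ins for the raid1 structures. */
	struct md_rdev { long data_offset; };
	struct raid1_mirror { struct md_rdev *rdev; };
	struct r1conf { struct raid1_mirror *mirrors; int raid_disks; };

	static void write_loop(struct r1conf *conf, long sector)
	{
		int i;

		for (i = 0; i < conf->raid_disks; i++) {
			/* rdev is fetched once at the top of the loop... */
			struct md_rdev *rdev = conf->mirrors[i].rdev;

			if (!rdev)
				continue;

			/* ...so the body uses the cached local, where the
			 * old code dereferenced conf->mirrors[i].rdev for
			 * every field access and flag test. */
			printf("disk %d: remapped to sector %ld\n",
			       i, sector + rdev->data_offset);
		}
	}

	int main(void)
	{
		struct md_rdev r0 = { .data_offset = 2048 };
		struct raid1_mirror mirrors[1] = { { .rdev = &r0 } };
		struct r1conf conf = { .mirrors = mirrors, .raid_disks = 1 };

		write_loop(&conf, 100);
		return 0;
	}

There is no intended functional change: the win is shorter lines and a
single obvious name for the device, and it also avoids repeated pointer
chasing that the compiler may not always be able to elide on its own.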


