From: linan666@huaweicloud.com
To: song@kernel.org, guoqing.jiang@cloud.ionos.com, xni@redhat.com,
colyli@suse.de
Cc: linux-raid@vger.kernel.org, linux-kernel@vger.kernel.org,
linan122@huawei.com, yukuai3@huawei.com, yi.zhang@huawei.com,
houtao1@huawei.com, yangerkun@huawei.com
Subject: [PATCH v2 2/3] md/raid10: factor out dereference_rdev_and_rrdev()
Date: Sat, 1 Jul 2023 16:05:28 +0800 [thread overview]
Message-ID: <20230701080529.2684932-3-linan666@huaweicloud.com> (raw)
In-Reply-To: <20230701080529.2684932-1-linan666@huaweicloud.com>
From: Li Nan <linan122@huawei.com>
Factor out a helper to get 'rdev' and 'replacement' from conf->mirrors.
This makes the code cleaner and prepares for fixing the bug of I/O loss
when 'replacement' replaces 'rdev'.

There is no functional change.
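For context, a minimal sketch of the intended calling pattern, simplified
from the raid10_write_request() hunk below (the Faulty checks, bad-block
handling and error paths are elided; this is illustrative only, not part
of the patch):

	rcu_read_lock();
	for (i = 0; i < conf->copies; i++) {
		int d = r10_bio->devs[i].devnum;
		struct md_rdev *rdev, *rrdev;

		/*
		 * One call returns both pointers; the required ordering
		 * (read replacement, smp_mb(), then read rdev) is handled
		 * inside the helper.
		 */
		rdev = dereference_rdev_and_rrdev(&conf->mirrors[d], &rrdev);
		if (rdev)
			; /* queue the write to rdev */
		if (rrdev)
			; /* queue the write to rrdev as well */
	}
	rcu_read_unlock();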
Signed-off-by: Li Nan <linan122@huawei.com>
---
drivers/md/raid10.c | 29 ++++++++++++++++++++---------
1 file changed, 20 insertions(+), 9 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3e6a09aaaba6..a6c3806be903 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1346,6 +1346,25 @@ static void raid10_write_one_disk(struct mddev *mddev, struct r10bio *r10_bio,
}
}
+static struct md_rdev *dereference_rdev_and_rrdev(struct raid10_info *mirror,
+ struct md_rdev **prrdev)
+{
+ struct md_rdev *rdev, *rrdev;
+
+ rrdev = rcu_dereference(mirror->replacement);
+ /*
+ * Read replacement first to prevent reading both rdev and
+	 * replacement as NULL while replacement replaces rdev.
+ */
+ smp_mb();
+ rdev = rcu_dereference(mirror->rdev);
+ if (rdev == rrdev)
+ rrdev = NULL;
+
+ *prrdev = rrdev;
+ return rdev;
+}
+
static void wait_blocked_dev(struct mddev *mddev, struct r10bio *r10_bio)
{
int i;
@@ -1489,15 +1508,7 @@ static void raid10_write_request(struct mddev *mddev, struct bio *bio,
int d = r10_bio->devs[i].devnum;
struct md_rdev *rdev, *rrdev;
- rrdev = rcu_dereference(conf->mirrors[d].replacement);
- /*
- * Read replacement first to prevent reading both rdev and
- * replacement as NULL during replacement replace rdev.
- */
- smp_mb();
- rdev = rcu_dereference(conf->mirrors[d].rdev);
- if (rdev == rrdev)
- rrdev = NULL;
+ rdev = dereference_rdev_and_rrdev(&conf->mirrors[d], &rrdev);
if (rdev && (test_bit(Faulty, &rdev->flags)))
rdev = NULL;
if (rrdev && (test_bit(Faulty, &rrdev->flags)))
--
2.39.2
Thread overview: 5+ messages
2023-07-01 8:05 [PATCH v2 0/3] raid10 bugfix linan666
2023-07-01 8:05 ` [PATCH v2 1/3] md/raid10: check replacement and rdev to prevent submit the same io twice linan666
2023-07-01 8:05 ` linan666 [this message]
2023-07-01 8:05 ` [PATCH v2 3/3] md/raid10: use dereference_rdev_and_rrdev() to get devices linan666
2023-07-07 9:14 ` [PATCH v2 0/3] raid10 bugfix Song Liu