From: Namhyung Kim <namhyung@gmail.com>
To: Neil Brown <neilb@suse.de>
Cc: linux-raid@vger.kernel.org
Subject: [PATCH 5/5] md/raid10: spread read for subordinate r10bios during recovery
Date: Wed, 15 Jun 2011 11:02:04 +0900
Message-ID: <1308103324-2375-6-git-send-email-namhyung@gmail.com>
In-Reply-To: <1308103324-2375-1-git-send-email-namhyung@gmail.com>

In the current scheme, multiple read requests can be directed to the
first in-sync disk during recovery if several disks have failed at
the same time. Spreading those requests across the other in-sync
disks might be helpful. Do this by remembering which copy the
previous r10bio read from and starting the search for the next read
source just after it.
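
For illustration, a minimal user-space sketch of the selection logic
follows (pick_read_copy() and the in_sync[] array are made up for the
example; the kernel code below operates on conf->mirrors[] and the
rdev In_sync flag instead):

#include <stdio.h>

/* Return the first in-sync copy, starting just after 'last_read'. */
static int pick_read_copy(const int *in_sync, int copies, int last_read)
{
	int j;

	for (j = 0; j < copies; j++) {
		int c = (last_read + j + 1) % copies;	/* rotate start */

		if (in_sync[c])
			return c;
	}
	return -1;	/* no usable copy */
}

int main(void)
{
	int in_sync[4] = { 1, 1, 0, 1 };	/* copy 2 has failed */
	int last_read = -1;
	int i;

	/* Successive r10bios read from copies 0, 1, 3, 0, 1, 3, ... */
	for (i = 0; i < 6; i++) {
		last_read = pick_read_copy(in_sync, 4, last_read);
		printf("r10bio %d reads from copy %d\n", i, last_read);
	}
	return 0;
}

With last_read carried across r10bios, reads rotate over the in-sync
copies instead of always landing on the first one.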
Signed-off-by: Namhyung Kim <namhyung@gmail.com>
---
drivers/md/raid10.c | 10 +++++++---
1 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index dea73bdb99b8..d0188e49f881 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1832,6 +1832,7 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr,
 	if (!test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
 		/* recovery... the complicated one */
 		int j, k;
+		int last_read = -1;
 		r10_bio = NULL;
 
 		for (i=0 ; i<conf->raid_disks; i++) {
@@ -1891,7 +1892,9 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr,
						      &sync_blocks, still_degraded);
 
 			for (j=0; j<conf->copies;j++) {
-				int d = r10_bio->devs[j].devnum;
+				int c = (last_read + j + 1) % conf->copies;
+				int d = r10_bio->devs[c].devnum;
+
 				if (!conf->mirrors[d].rdev ||
 				    !test_bit(In_sync, &conf->mirrors[d].rdev->flags))
 					continue;
@@ -1902,13 +1905,14 @@ static sector_t sync_request(mddev_t *mddev, sector_t sector_nr,
 				bio->bi_private = r10_bio;
 				bio->bi_end_io = end_sync_read;
 				bio->bi_rw = READ;
-				bio->bi_sector = r10_bio->devs[j].addr +
+				bio->bi_sector = r10_bio->devs[c].addr +
 					conf->mirrors[d].rdev->data_offset;
 				bio->bi_bdev = conf->mirrors[d].rdev->bdev;
 				atomic_inc(&conf->mirrors[d].rdev->nr_pending);
 				atomic_inc(&r10_bio->remaining);
-				/* and we write to 'i' */
+				last_read = c;
+				/* and we write to 'i' */
 
 				for (k=0; k<conf->copies; k++)
 					if (r10_bio->devs[k].devnum == i)
 						break;
--
1.7.5.2