* [patch 0/4] Optimize raid1 read balance for SSD
From: Shaohua Li @ 2012-05-08 10:08 UTC
To: linux-raid; +Cc: neilb, axboe
raid1 read balance is an important algorithm for making read performance
optimal. It is distance based: for each request dispatched, we choose the
disk whose last completed request is closest to the new request. This works
well for hard disks, but SSDs have some special characteristics:
1. Non-rotational. Seek distance means nothing for an SSD, though merging
small requests into a big request is still a win. When requests can't be
merged, distributing them across as many raid disks as possible is better.
2. A very big request isn't always optimal. For a hard disk, data transfer
overhead is trivial compared to a spindle move, so we always prefer a bigger
request. For an SSD, once the request size exceeds a specific value,
performance no longer increases with request size. Readahead is an example:
if readahead merges requests into one that is too big and leaves some disks
idle, performance is worse than keeping all disks busy with smaller requests.
The patches try to address these issues. The first two patches are cleanups.
The third patch addresses the first item above; the fourth addresses the
second. The same idea can be applied to raid10 too, which is on my todo list.
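To make the contrast between the two policies concrete, here is a minimal
standalone sketch (hypothetical code: struct disk_state and pick_disk are
illustration-only names, not the md fields the patches touch):

	#include <stdlib.h>

	/* Hypothetical sketch: the HDD policy picks the disk with the
	 * smallest seek distance; the SSD policy picks the disk with the
	 * fewest outstanding requests. */
	struct disk_state {
		long long head_pos;	/* sector of last completed request */
		int pending;		/* requests in flight */
	};

	static int pick_disk(const struct disk_state *d, int ndisks,
			     long long sector, int is_ssd)
	{
		int i, best = -1;
		long long best_metric = 0;

		for (i = 0; i < ndisks; i++) {
			long long m = is_ssd ? d[i].pending :
				llabs(sector - d[i].head_pos);
			if (best < 0 || m < best_metric) {
				best_metric = m;
				best = i;
			}
		}
		return best;
	}

Patches 3 and 4 refine the SSD side of this picture with sequential merge
and request size considerations.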
* [patch 1/4] raid1: move distance based read balance to a separate function
From: Shaohua Li @ 2012-05-08 10:08 UTC
To: linux-raid; +Cc: neilb, axboe
Move the distance based read balance algorithm to a separate function. No
functional change.
Signed-off-by: Shaohua Li <shli@fusionio.com>
---
drivers/md/raid1.c | 42 ++++++++++++++++++++++++++++++------------
1 file changed, 30 insertions(+), 12 deletions(-)
Index: linux/drivers/md/raid1.c
===================================================================
--- linux.orig/drivers/md/raid1.c 2012-05-08 16:36:12.624232473 +0800
+++ linux/drivers/md/raid1.c 2012-05-08 16:36:14.476209200 +0800
@@ -463,6 +463,31 @@ static void raid1_end_write_request(stru
bio_put(to_put);
}
+static int read_balance_measure_distance(struct r1conf *conf,
+ struct r1bio *r1_bio, int disk, int *best_disk, sector_t *best_dist)
+{
+ const sector_t this_sector = r1_bio->sector;
+ struct md_rdev *rdev;
+ sector_t dist;
+
+ rdev = rcu_dereference(conf->mirrors[disk].rdev);
+
+ dist = abs(this_sector - conf->mirrors[disk].head_position);
+ /* Don't change to another disk for sequential reads */
+ if (conf->next_seq_sect == this_sector
+ || dist == 0
+ /* If device is idle, use it */
+ || atomic_read(&rdev->nr_pending) == 0) {
+ *best_disk = disk;
+ return 0;
+ }
+
+ if (dist < *best_dist) {
+ *best_dist = dist;
+ *best_disk = disk;
+ }
+ return 1;
+}
/*
* This routine returns the disk from which the requested read should
@@ -512,7 +537,6 @@ static int read_balance(struct r1conf *c
}
for (i = 0 ; i < conf->raid_disks * 2 ; i++) {
- sector_t dist;
sector_t first_bad;
int bad_sectors;
@@ -577,20 +601,14 @@ static int read_balance(struct r1conf *c
} else
best_good_sectors = sectors;
- dist = abs(this_sector - conf->mirrors[disk].head_position);
- if (choose_first
- /* Don't change to another disk for sequential reads */
- || conf->next_seq_sect == this_sector
- || dist == 0
- /* If device is idle, use it */
- || atomic_read(&rdev->nr_pending) == 0) {
+ if (choose_first) {
best_disk = disk;
break;
}
- if (dist < best_dist) {
- best_dist = dist;
- best_disk = disk;
- }
+
+ if (!read_balance_measure_distance(conf, r1_bio, disk,
+ &best_disk, &best_dist))
+ break;
}
if (best_disk >= 0) {
* [patch 2/4] raid1: make sequential read detection per disk based
From: Shaohua Li @ 2012-05-08 10:08 UTC
To: linux-raid; +Cc: neilb, axboe
Currently the sequential read detection is array wide. It's natural to make
it per-disk, which improves detection when there are multiple concurrent
sequential readers. The next patch will make SSD read balance stop using the
distance based algorithm, and this change helps it still detect truly
sequential reads for SSD.
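As an illustration (a hypothetical standalone sketch; next_seq[] and
is_sequential() are made-up names, the patch itself moves next_seq_sect into
struct mirror_info): with a single global counter, two interleaved
sequential streams keep overwriting each other's state and neither is
detected, while a per-disk counter lets each stream keep matching the disk
it last used.

	/* Hypothetical sketch: track the expected next sequential sector
	 * once per disk instead of once per array. */
	#define NDISKS 2

	static long long next_seq[NDISKS];

	static int is_sequential(int disk, long long sector, int nsectors)
	{
		int seq = (next_seq[disk] == sector);

		next_seq[disk] = sector + nsectors;
		return seq;
	}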
Signed-off-by: Shaohua Li <shli@fusionio.com>
---
drivers/md/raid1.c | 29 ++++++-----------------------
drivers/md/raid1.h | 11 +++++------
2 files changed, 11 insertions(+), 29 deletions(-)
Index: linux/drivers/md/raid1.c
===================================================================
--- linux.orig/drivers/md/raid1.c 2012-05-08 16:36:14.476209200 +0800
+++ linux/drivers/md/raid1.c 2012-05-08 16:36:31.559994400 +0800
@@ -474,7 +474,7 @@ static int read_balance_measure_distance
dist = abs(this_sector - conf->mirrors[disk].head_position);
/* Don't change to another disk for sequential reads */
- if (conf->next_seq_sect == this_sector
+ if (conf->mirrors[disk].next_seq_sect == this_sector
|| dist == 0
/* If device is idle, use it */
|| atomic_read(&rdev->nr_pending) == 0) {
@@ -508,7 +508,6 @@ static int read_balance(struct r1conf *c
const sector_t this_sector = r1_bio->sector;
int sectors;
int best_good_sectors;
- int start_disk;
int best_disk;
int i;
sector_t best_dist;
@@ -528,19 +527,16 @@ static int read_balance(struct r1conf *c
best_good_sectors = 0;
if (conf->mddev->recovery_cp < MaxSector &&
- (this_sector + sectors >= conf->next_resync)) {
+ (this_sector + sectors >= conf->next_resync))
choose_first = 1;
- start_disk = 0;
- } else {
+ else
choose_first = 0;
- start_disk = conf->last_used;
- }
for (i = 0 ; i < conf->raid_disks * 2 ; i++) {
sector_t first_bad;
int bad_sectors;
- int disk = start_disk + i;
+ int disk = i;
if (disk >= conf->raid_disks)
disk -= conf->raid_disks;
@@ -624,8 +620,7 @@ static int read_balance(struct r1conf *c
goto retry;
}
sectors = best_good_sectors;
- conf->next_seq_sect = this_sector + sectors;
- conf->last_used = best_disk;
+ conf->mirrors[best_disk].next_seq_sect = this_sector + sectors;
}
rcu_read_unlock();
*max_sectors = sectors;
@@ -2593,7 +2588,6 @@ static struct r1conf *setup_conf(struct
conf->recovery_disabled = mddev->recovery_disabled - 1;
err = -EIO;
- conf->last_used = -1;
for (i = 0; i < conf->raid_disks * 2; i++) {
disk = conf->mirrors + i;
@@ -2618,19 +2612,9 @@ static struct r1conf *setup_conf(struct
disk->head_position = 0;
if (disk->rdev)
conf->fullsync = 1;
- } else if (conf->last_used < 0)
- /*
- * The first working device is used as a
- * starting point to read balancing.
- */
- conf->last_used = i;
+ }
}
- if (conf->last_used < 0) {
- printk(KERN_ERR "md/raid1:%s: no operational mirrors\n",
- mdname(mddev));
- goto abort;
- }
err = -ENOMEM;
conf->thread = md_register_thread(raid1d, mddev, NULL);
if (!conf->thread) {
@@ -2880,7 +2864,6 @@ static int raid1_reshape(struct mddev *m
conf->raid_disks = mddev->raid_disks = raid_disks;
mddev->delta_disks = 0;
- conf->last_used = 0; /* just make sure it is in-range */
lower_barrier(conf);
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
Index: linux/drivers/md/raid1.h
===================================================================
--- linux.orig/drivers/md/raid1.h 2012-05-08 16:36:12.560233279 +0800
+++ linux/drivers/md/raid1.h 2012-05-08 16:36:31.559994400 +0800
@@ -4,6 +4,11 @@
struct mirror_info {
struct md_rdev *rdev;
sector_t head_position;
+
+ /* When choosing the best device for a read (read_balance())
+ * we try to keep sequential reads on the same device
+ */
+ sector_t next_seq_sect;
};
/*
@@ -29,12 +34,6 @@ struct r1conf {
*/
int raid_disks;
- /* When choose the best device for a read (read_balance())
- * we try to keep sequential reads one the same device
- * using 'last_used' and 'next_seq_sect'
- */
- int last_used;
- sector_t next_seq_sect;
/* During resync, read_balancing is only allowed on the part
* of the array that has been resynced. 'next_resync' tells us
* where that is.
* [patch 3/4] raid1: read balance chooses idlest disk
From: Shaohua Li @ 2012-05-08 10:08 UTC
To: linux-raid; +Cc: neilb, axboe
An SSD has no spindle, so the distance between requests means nothing, and
the original distance based algorithm can sometimes cause severe performance
issues for an SSD raid.
Consider two thread groups, one accessing file A and the other accessing
file B. The first group will end up on one disk and the second group on the
other, because requests within a group are near each other while requests
between groups are far apart. In this case read balance can keep one disk
very busy while the other stays relatively idle. For SSD we should instead
try to distribute requests across as many disks as possible; there is no
spindle move penalty anyway.
With the patch below, I can sometimes see more than 50% throughput
improvement, depending on the workload.
The one exception is small requests that can be merged into a big request,
which typically drives higher throughput for SSD too. Such small requests
are sequential reads. Unlike on a hard disk, a sequential read that can't be
merged (for example direct IO, or a read without readahead) can be ignored
for SSD; again there is no spindle move penalty. Readahead dispatches small
requests, and those can be merged.
The previous patch helps detect sequential reads well, at least as long as
the number of concurrent readers doesn't exceed the number of raid disks. In
that case the distance based algorithm doesn't work well either.
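The per-disk decision below is roughly the following (a simplified
standalone sketch; choose_ssd_disk and struct mirror are illustrative names,
not the patch's exact interfaces):

	/* Hypothetical sketch of the SSD policy:
	 * 1) stay on a disk whose in-flight queue the sequential read can
	 *    merge into,
	 * 2) otherwise take an idle disk,
	 * 3) otherwise take the disk with the fewest pending requests. */
	struct mirror {
		long long head_pos;	/* last completed sector */
		long long next_seq;	/* expected next sequential sector */
		unsigned int pending;	/* requests in flight */
	};

	static int choose_ssd_disk(const struct mirror *m, int ndisks,
				   long long sector)
	{
		unsigned int min_pending = (unsigned int)-1;
		int i, best = -1;

		for (i = 0; i < ndisks; i++) {
			/* sequential, and a request is still in flight to
			 * merge with: stay on this disk */
			if (m[i].next_seq == sector && m[i].head_pos != sector)
				return i;
			/* idle disk: use it */
			if (m[i].pending == 0)
				return i;
			if (m[i].pending < min_pending) {
				min_pending = m[i].pending;
				best = i;
			}
		}
		return best;
	}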
Signed-off-by: Shaohua Li <shli@fusionio.com>
---
drivers/md/raid1.c | 55 ++++++++++++++++++++++++++++++++++++++++++++++++++---
drivers/md/raid1.h | 2 +
2 files changed, 54 insertions(+), 3 deletions(-)
Index: linux/drivers/md/raid1.c
===================================================================
--- linux.orig/drivers/md/raid1.c 2012-05-08 16:36:31.559994400 +0800
+++ linux/drivers/md/raid1.c 2012-05-08 16:36:35.255946817 +0800
@@ -463,6 +463,43 @@ static void raid1_end_write_request(stru
bio_put(to_put);
}
+static int read_balance_measure_ssd(struct r1conf *conf, struct r1bio *r1_bio,
+ int disk, int *best_disk, unsigned int *min_pending)
+{
+ const sector_t this_sector = r1_bio->sector;
+ struct md_rdev *rdev;
+ unsigned int pending;
+
+ rdev = rcu_dereference(conf->mirrors[disk].rdev);
+ pending = atomic_read(&rdev->nr_pending);
+
+ /* big request IO helps SSD too, allow sequential IO merge */
+ if (conf->mirrors[disk].next_seq_sect == this_sector) {
+ sector_t dist;
+ dist = abs(this_sector - conf->mirrors[disk].head_position);
+ /*
+ * head_position is for finished request, such reqeust can't be
+ * merged with current request, so it means nothing for SSD
+ */
+ if (dist != 0)
+ goto done;
+ }
+
+ /* If device is idle, use it */
+ if (pending == 0)
+ goto done;
+
+ /* find device with less requests pending */
+ if (*min_pending > pending) {
+ *min_pending = pending;
+ *best_disk = disk;
+ }
+ return 1;
+done:
+ *best_disk = disk;
+ return 0;
+}
+
static int read_balance_measure_distance(struct r1conf *conf,
struct r1bio *r1_bio, int disk, int *best_disk, sector_t *best_dist)
{
@@ -511,6 +548,7 @@ static int read_balance(struct r1conf *c
int best_disk;
int i;
sector_t best_dist;
+ unsigned int min_pending;
struct md_rdev *rdev;
int choose_first;
@@ -524,6 +562,7 @@ static int read_balance(struct r1conf *c
sectors = r1_bio->sectors;
best_disk = -1;
best_dist = MaxSector;
+ min_pending = -1;
best_good_sectors = 0;
if (conf->mddev->recovery_cp < MaxSector &&
@@ -602,9 +641,15 @@ static int read_balance(struct r1conf *c
break;
}
- if (!read_balance_measure_distance(conf, r1_bio, disk,
- &best_disk, &best_dist))
- break;
+ if (!conf->nonrotational) {
+ if (!read_balance_measure_distance(conf, r1_bio, disk,
+ &best_disk, &best_dist))
+ break;
+ } else {
+ if (!read_balance_measure_ssd(conf, r1_bio, disk,
+ &best_disk, &min_pending))
+ break;
+ }
}
if (best_disk >= 0) {
@@ -2531,6 +2576,7 @@ static struct r1conf *setup_conf(struct
struct mirror_info *disk;
struct md_rdev *rdev;
int err = -ENOMEM;
+ bool nonrotational = true;
conf = kzalloc(sizeof(struct r1conf), GFP_KERNEL);
if (!conf)
@@ -2575,7 +2621,10 @@ static struct r1conf *setup_conf(struct
disk->rdev = rdev;
disk->head_position = 0;
+ if (!blk_queue_nonrot(bdev_get_queue(rdev->bdev)))
+ nonrotational = false;
}
+ conf->nonrotational = nonrotational;
conf->raid_disks = mddev->raid_disks;
conf->mddev = mddev;
INIT_LIST_HEAD(&conf->retry_list);
Index: linux/drivers/md/raid1.h
===================================================================
--- linux.orig/drivers/md/raid1.h 2012-05-08 16:36:31.559994400 +0800
+++ linux/drivers/md/raid1.h 2012-05-08 16:36:35.255946817 +0800
@@ -65,6 +65,8 @@ struct r1conf {
int nr_queued;
int barrier;
+ int nonrotational;
+
/* Set to 1 if a full sync is needed, (fresh device added).
* Cleared when a sync completes.
*/
* [patch 4/4] raid1: split large request for SSD
From: Shaohua Li @ 2012-05-08 10:08 UTC
To: linux-raid; +Cc: neilb, axboe
For SSD, once the request size exceeds a specific value (the optimal io
size), request size no longer matters for bandwidth. In that situation,
making the request bigger will actually drop total throughput if it leaves
some disks idle. A good example is readahead on a two-disk raid1 setup.
So when should we split a big request? We absolutely don't want to split it
into very small requests; even on SSD, a big request transfers more
efficiently. The patch below only considers requests whose size is above the
optimal io size.
If all disks are busy, is the split worth it? Say the optimal io size is
16k, and we have two 32k requests and two disks. We can let each disk run
one 32k request, or split the requests into four 16k requests so each disk
runs two. It's hard to say which is better; it depends on the hardware.
So we only consider the case where there are idle disks. For readahead,
splitting is always better in this case, and in my test the patch below
improves throughput by more than 30%. Not 100%, because the disks aren't
100% busy.
The same situation can arise outside readahead, for example with direct IO.
But I assume direct IO usually has a bigger IO depth and keeps all disks
busy, so I ignored it.
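The trigger condition in the patch below amounts to: the tracked sequential
run on a mirror (next_seq_sect - seq_start) has already reached the optimal
io size, so redirecting further sequential reads to an idle disk costs
nothing. A hypothetical standalone sketch (should_switch_to_idle is a
made-up name, and -1 stands in for the MaxSector sentinel the patch uses):

	/* Hypothetical sketch: once the sequential run is at least
	 * opt_iosize sectors long, a further sequential read may go to an
	 * idle disk instead, effectively splitting a large (readahead)
	 * request across disks. */
	static int should_switch_to_idle(long long seq_start,
					 long long next_seq,
					 long long opt_iosize)
	{
		return opt_iosize > 0 &&
		       seq_start != -1 &&	/* a run is being tracked */
		       next_seq - opt_iosize >= seq_start;
	}

For example, with an opt_iosize of 32 sectors (16k) and a run that started
at sector 1000, the switch triggers once next_seq reaches 1032, i.e. after
16k of sequential reads on that mirror.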
Signed-off-by: Shaohua Li <shli@fusionio.com>
---
drivers/md/raid1.c | 44 +++++++++++++++++++++++++++++++++++++-------
drivers/md/raid1.h | 2 ++
2 files changed, 39 insertions(+), 7 deletions(-)
Index: linux/drivers/md/raid1.c
===================================================================
--- linux.orig/drivers/md/raid1.c 2012-05-08 16:36:35.255946817 +0800
+++ linux/drivers/md/raid1.c 2012-05-08 16:36:37.471920320 +0800
@@ -464,31 +464,49 @@ static void raid1_end_write_request(stru
}
static int read_balance_measure_ssd(struct r1conf *conf, struct r1bio *r1_bio,
- int disk, int *best_disk, unsigned int *min_pending)
+ int disk, int *best_disk, unsigned int *min_pending, int *choose_idle)
{
const sector_t this_sector = r1_bio->sector;
struct md_rdev *rdev;
unsigned int pending;
+ struct mirror_info *mirror = &conf->mirrors[disk];
+ int ret = 0;
- rdev = rcu_dereference(conf->mirrors[disk].rdev);
+ rdev = rcu_dereference(mirror->rdev);
pending = atomic_read(&rdev->nr_pending);
/* big request IO helps SSD too, allow sequential IO merge */
- if (conf->mirrors[disk].next_seq_sect == this_sector) {
+ if (mirror->next_seq_sect == this_sector && *choose_idle == 0) {
sector_t dist;
- dist = abs(this_sector - conf->mirrors[disk].head_position);
+ dist = abs(this_sector - mirror->head_position);
/*
* head_position is for finished request, such request can't be
* merged with current request, so it means nothing for SSD
*/
- if (dist != 0)
+ if (dist != 0) {
+ /*
+ * If buffered sequential IO size exceeds optimal
+ * iosize and there is idle disk, choose idle disk
+ */
+ if (mirror->seq_start != MaxSector
+ && conf->opt_iosize > 0
+ && mirror->next_seq_sect > conf->opt_iosize
+ && mirror->next_seq_sect - conf->opt_iosize >=
+ mirror->seq_start) {
+ *choose_idle = 1;
+ ret = 1;
+ }
goto done;
+ }
}
/* If device is idle, use it */
if (pending == 0)
goto done;
+ if (*choose_idle == 1)
+ return 1;
+
/* find device with less requests pending */
if (*min_pending > pending) {
*min_pending = pending;
@@ -497,7 +515,7 @@ static int read_balance_measure_ssd(stru
return 1;
done:
*best_disk = disk;
- return 0;
+ return ret;
}
static int read_balance_measure_distance(struct r1conf *conf,
@@ -551,6 +569,7 @@ static int read_balance(struct r1conf *c
unsigned int min_pending;
struct md_rdev *rdev;
int choose_first;
+ int choose_idle;
rcu_read_lock();
/*
@@ -564,6 +583,7 @@ static int read_balance(struct r1conf *c
best_dist = MaxSector;
min_pending = -1;
best_good_sectors = 0;
+ choose_idle = 0;
if (conf->mddev->recovery_cp < MaxSector &&
(this_sector + sectors >= conf->next_resync))
@@ -647,7 +667,7 @@ static int read_balance(struct r1conf *c
break;
} else {
if (!read_balance_measure_ssd(conf, r1_bio, disk,
- &best_disk, &min_pending))
+ &best_disk, &min_pending, &choose_idle))
break;
}
}
@@ -665,6 +685,10 @@ static int read_balance(struct r1conf *c
goto retry;
}
sectors = best_good_sectors;
+
+ if (conf->mirrors[best_disk].next_seq_sect != this_sector)
+ conf->mirrors[best_disk].seq_start = this_sector;
+
conf->mirrors[best_disk].next_seq_sect = this_sector + sectors;
}
rcu_read_unlock();
@@ -2577,6 +2601,7 @@ static struct r1conf *setup_conf(struct
struct md_rdev *rdev;
int err = -ENOMEM;
bool nonrotational = true;
+ int opt_iosize = 0;
conf = kzalloc(sizeof(struct r1conf), GFP_KERNEL);
if (!conf)
@@ -2623,8 +2648,13 @@ static struct r1conf *setup_conf(struct
disk->head_position = 0;
if (!blk_queue_nonrot(bdev_get_queue(rdev->bdev)))
nonrotational = false;
+ else
+ opt_iosize = max(opt_iosize, bdev_io_opt(rdev->bdev));
+ disk->seq_start = MaxSector;
}
conf->nonrotational = nonrotational;
+ if (nonrotational)
+ conf->opt_iosize = opt_iosize >> 9;
conf->raid_disks = mddev->raid_disks;
conf->mddev = mddev;
INIT_LIST_HEAD(&conf->retry_list);
Index: linux/drivers/md/raid1.h
===================================================================
--- linux.orig/drivers/md/raid1.h 2012-05-08 16:36:35.255946817 +0800
+++ linux/drivers/md/raid1.h 2012-05-08 16:36:37.471920320 +0800
@@ -9,6 +9,7 @@ struct mirror_info {
* we try to keep sequential reads on the same device
*/
sector_t next_seq_sect;
+ sector_t seq_start;
};
/*
@@ -66,6 +67,7 @@ struct r1conf {
int barrier;
int nonrotational;
+ sector_t opt_iosize;
/* Set to 1 if a full sync is needed, (fresh device added).
* Cleared when a sync completes.