* write-behind performance ... or how behind can write-behind write
From: Georgi Alexandrov @ 2009-02-13 16:36 UTC
To: linux-raid
Hello,
I've been going through the list archives to find a similar thread, but
couldn't find one.
The setup:
debian lenny 2.6.26-1-amd64
RAID1 array with one SSD and one regular SATA disk. It will be used for
a DB.
What I'm trying to achieve is to have the system read from the SSD as
long as the array is healthy, and to delay writes to the regular SATA
disk so that the array benefits from the SSD's good random-write
performance.
So far I've only achieved the first goal - making the system read from
the SSD on a healthy array, via the write-mostly option.
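For reference, an array with these flags would typically be put together
roughly like this (a sketch, not my exact command line; /dev/md0 is
assumed, 256 is mdadm's default queue depth, and write-behind requires a
write-intent bitmap, here an internal one):

  # sda1 = SSD, sdc3 = SATA; --write-mostly applies to the devices listed
  # after it, and write-behind only takes effect for write-mostly members
  mdadm --create /dev/md0 --level=1 --raid-devices=2 \
        --bitmap=internal --write-behind=256 \
        /dev/sda1 --write-mostly /dev/sdc3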
The problem is with the second option - write-behind. I've enabled it
and ran two tests with tiobench (one with a healthy array and one with
the SATA disk marked as failed), and the results are not good (1).
Generally with the healthy array I'm getting the write performance of
the SATA disk alone (in terms of requests/sec issued to the disk and
bytes/sec written). The SATA disk is obviously a bottleneck even with
the write-behind option set(2).
With a degraded array (SSD active only) I'm getting the great write
performance of an SSD (3).
1. tiobench stats:
Unit information
================
File size = megabytes
Blk Size = bytes
Rate = megabytes per second
CPU% = percentage of CPU used during the test
Latency = milliseconds
Lat% = percent of requests that took longer than X seconds
CPU Eff = Rate divided by CPU% - throughput per cpu load
Healthy Array Random Writes (~4 MB/sec):

                File  Blk    Num                  Avg      Maximum  Lat%     Lat%     CPU
Identifier      Size  Size   Thr  Rate    (CPU%)  Latency  Latency  >2s      >10s     Eff
--------------  ----  -----  ---  ------  ------  -------  -------  -------  -------  ---
2.6.26-1-amd64  2000  11264  1    4.34    0.687%  0.007    0.03     0.00000  0.00000  632
2.6.26-1-amd64  2000  11264  2    4.26    2.299%  0.012    0.04     0.00000  0.00000  185
2.6.26-1-amd64  2000  11264  4    4.12    6.018%  0.019    0.13     0.00000  0.00000  68
2.6.26-1-amd64  2000  11264  8    4.13    7.611%  0.020    8.94     0.00000  0.00000  54
SSD-Only Array Random Writes (~135 MB/sec):

                File  Blk    Num                  Avg      Maximum  Lat%     Lat%     CPU
Identifier      Size  Size   Thr  Rate    (CPU%)  Latency  Latency  >2s      >10s     Eff
--------------  ----  -----  ---  ------  ------  -------  -------  -------  -------  ---
2.6.26-1-amd64  2000  11264  1    138.12  21.85%  0.007    0.03     0.00000  0.00000  632
2.6.26-1-amd64  2000  11264  2    135.82  70.80%  0.012    0.05     0.00000  0.00000  192
2.6.26-1-amd64  2000  11264  4    135.10  176.0%  0.018    0.07     0.00000  0.00000  77
2.6.26-1-amd64  2000  11264  8    145.38  380.3%  0.020    3.45     0.00000  0.00000  38
2. healthy array iostat output (sda1 being the SSD disk):

Device:     rrqm/s    wrqm/s   r/s     w/s  rMB/s  wMB/s avgrq-sz avgqu-sz   await svctm  %util
sda1          0.00  14183.50  0.00  220.50   0.00  56.27   522.59     5.23   23.70  1.25  27.60
sdc3          0.00  14272.00  0.00  129.50   0.00  56.99   901.22   137.77 1108.25  7.72 100.00
3. ssd-only array iostat output:

Device:     rrqm/s    wrqm/s   r/s     w/s  rMB/s  wMB/s avgrq-sz avgqu-sz   await svctm  %util
sda1          0.00  51941.50  0.00  483.00   0.00 209.70   889.16   140.81  282.74  2.07 100.00
sdc3          0.00      0.00  0.00    0.00   0.00   0.00     0.00     0.00    0.00  0.00   0.00
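(Output in this format is what iostat prints in extended, megabyte mode;
roughly the following, with an arbitrary 2-second interval:)

  # -x for extended per-device stats, -m to report MB/s instead of blocks
  iostat -xm 2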
So the question is: how behind can write-behind write? And can we get
better performance in a similar setup?
Thanks in advance!
--
regards,
Georgi Alexandrov
key server - pgp.mit.edu :: key id - 0x37B4B3EE
Key fingerprint = E429 BF93 FA67 44E9 B7D4 F89E F990 01C1 37B4 B3EE
* Re: write-behind performance ... or how behind can write-behind write
From: Paul Clements @ 2009-02-13 18:44 UTC
To: Georgi Alexandrov; +Cc: linux-raid
Georgi Alexandrov wrote:
> Generally with the healthy array I'm getting the write performance of
> the SATA disk alone (in terms of requests/sec issued to the disk and
> bytes/sec written). The SATA disk is obviously a bottleneck even with
> the write-behind option set(2).
write-behind can help with two things:
1) overcoming latency (say one disk is on the network -- it may be the
same speed as the source disk, but it takes longer round-trip for each
I/O to complete)
2) temporary slowness of a device (say at a peak in I/O) -- the queue
can temporarily hide the slowness of the secondary disk, but this won't
last very long -- if writes continue at a pace faster than the disk can
handle (i.e., the queue gets filled) then the array drops back to
non-write-behind behavior
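If you want to double-check what the array is actually doing, something
along these lines should show it (assuming /dev/md0 and an internal
bitmap):

  # write-mostly members are flagged with (W) in /proc/mdstat
  cat /proc/mdstat

  # the bitmap superblock records the write-behind setting -- look for a
  # "Write Mode : Allow write behind, max ..." line
  mdadm --examine-bitmap /dev/sdc3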
> So the question is: how behind can write-behind write? And can we get
> better performance in a similar setup?
By default, it queues up 256 writes. This can be increased, but I've
actually seen worse performance in some cases -- not sure why. I haven't
had the time to dig into it and figure it out.
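For reference, the knob is the --write-behind= option on the write-intent
bitmap; raising it on an existing array goes roughly like this (a sketch
-- /dev/md0 and 1024 are just placeholders):

  # drop the existing bitmap, then recreate it with a deeper write-behind queue
  mdadm --grow /dev/md0 --bitmap=none
  mdadm --grow /dev/md0 --bitmap=internal --write-behind=1024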
--
Paul
* Re: write-behind performance ... or how behind can write-behind write
From: Bill Davidsen @ 2009-02-14 13:38 UTC
To: Paul Clements; +Cc: Georgi Alexandrov, linux-raid
Paul Clements wrote:
> Georgi Alexandrov wrote:
>
>> Generally with the healthy array I'm getting the write performance of
>> the SATA disk alone (in terms of requests/sec issued to the disk and
>> bytes/sec written). The SATA disk is obviously a bottleneck even with
>> the write-behind option set(2).
>
> write-behind can help with two things:
>
> 1) overcoming latency (say one disk is on the network -- it may be the
> same speed as the source disk, but it takes longer round-trip for each
> I/O to complete)
>
> 2) temporary slowness of a device (say at a peak in I/O) -- the queue
> can temporarily hide the slowness of the secondary disk, but this
> won't last very long -- if writes continue at a pace faster than the
> disk can handle (i.e., the queue gets filled) then the array drops
> back to non-write-behind behavior
>
At least with write-mostly, all of the SATA disk's bandwidth goes into
saving data rather than serving reads. But, as you note below, if writes
arrive faster than the device can sustain, it will still be a bottleneck.
>> So the question is: how behind can write-behind write? And can we get
>> better performance in a similar setup?
>
> By default, it queues up 256 writes. This can be increased, but I've
> actually seen worse performance in some cases -- not sure why. I
> haven't had the time to dig into it and figure it out.
>
> --
> Paul
--
Bill Davidsen <davidsen@tmr.com>
"Woe unto the statesman who makes war without a reason that will still
be valid when the war is over..." Otto von Bismarck
* Re: write-behind performance ... or how behind can write-behind write
From: Georgi Alexandrov @ 2009-02-16 10:39 UTC
To: Bill Davidsen; +Cc: Paul Clements, linux-raid
Bill Davidsen wrote:
> Paul Clements wrote:
>> Georgi Alexandrov wrote:
>>
>>> Generally with the healthy array I'm getting the write performance of
>>> the SATA disk alone (in terms of requests/sec issued to the disk and
>>> bytes/sec written). The SATA disk is obviously a bottleneck even with
>>> the write-behind option set(2).
>>
>> write-behind can help with two things:
>>
>> 1) overcoming latency (say one disk is on the network -- it may be the
>> same speed as the source disk, but it takes longer round-trip for each
>> I/O to complete)
>>
>> 2) temporary slowness of a device (say at a peak in I/O) -- the queue
>> can temporarily hide the slowness of the secondary disk, but this
>> won't last very long -- if writes continue at a pace faster than the
>> disk can handle (i.e., the queue gets filled) then the array drops
>> back to non-write-behind behavior
>>
> At least with write-mostly, all of the SATA disk's bandwidth goes into
> saving data rather than serving reads. But, as you note below, if writes
> arrive faster than the device can sustain, it will still be a bottleneck.
<snip>
Well, at least write-mostly makes reading only from the SSD work in a
setup like mine. If writes become a real problem, it's probably better
to consider an SSD-only solution.
--
regards,
Georgi Alexandrov
key server - pgp.mit.edu :: key id - 0x37B4B3EE
Key fingerprint = E429 BF93 FA67 44E9 B7D4 F89E F990 01C1 37B4 B3EE