* Faster read performance DURING (?) resync on raid10
@ 2008-09-25 9:27 sminded
2008-09-25 18:40 ` Keld Jørn Simonsen
` (2 more replies)
0 siblings, 3 replies; 9+ messages in thread
From: sminded @ 2008-09-25 9:27 UTC (permalink / raw)
To: linux-raid
I have done some automated scripted tests with bonnie++ over RAID10 and
encryption, using different encryption cipher modes to evaluate the impact
on I/O operations.
The strange thing is that I can see a significant increase in read
performance (about 20%) when running the tests DURING the raid resync
phase directly after the raid creation, as opposed to running them after
the resync, or after I create the array with --assume-clean (which skips
the initial resync).
Has anyone noticed the same behaviour, and how can it be explained? Any
ideas?
Can someone else verify that they get the same result?
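For reference, the procedure was roughly as follows (a sketch only;
device names, cipher mode and bonnie++ arguments are illustrative, not
my exact script):

  # create the array; the initial resync starts immediately
  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1

  # optional dm-crypt layer on top (cipher mode is just an example)
  cryptsetup -c aes-cbc-essiv:sha256 create cryptmd /dev/md0
  mkfs.ext3 /dev/mapper/cryptmd
  mount /dev/mapper/cryptmd /mnt/test

  # run bonnie++ while /proc/mdstat still shows the resync
  bonnie++ -d /mnt/test -s 4G -u nobody

  # for the no-resync case, create with --assume-clean instead:
  #   mdadm --create /dev/md0 --level=10 --raid-devices=4 \
  #     --assume-clean /dev/sd[bcde]1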
* Re: Faster read performance DURING (?) resync on raid10
2008-09-25 9:27 Faster read performance DURING (?) resync on raid10 sminded
@ 2008-09-25 18:40 ` Keld Jørn Simonsen
2008-09-26 4:18 ` Neil Brown
2008-09-26 8:03 ` John Robinson
2 siblings, 0 replies; 9+ messages in thread
From: Keld Jørn Simonsen @ 2008-09-25 18:40 UTC (permalink / raw)
To: sminded; +Cc: linux-raid
On Thu, Sep 25, 2008 at 02:27:51AM -0700, sminded wrote:
>
> I have done some automated scripted tests with bonnie++ over RAID10 and
> encryption, using different encryption cipher modes to evaluate the impact
> on I/O operations.
> The strange thing is that I can see a significant increase in read
> performance (about 20%) when running the tests DURING the raid resync
> phase directly after the raid creation, as opposed to running them after
> the resync, or after I create the array with --assume-clean (which skips
> the initial resync).
>
> Has anyone noticed the same behaviour, and how can it be explained? Any
> ideas?
>
> Can someone else verify that they get the same result?
I noticed something similar, vastly enhanced performance just after
creation, but found out that some parameters needed to be changed,
setreadahead or some such.
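If it is the readahead, something like this is what I mean (a sketch;
the array device is assumed to be /dev/md0, and the value is in
512-byte sectors):

  blockdev --getra /dev/md0         # show the current readahead
  blockdev --setra 65536 /dev/md0   # raise it, then rerun the benchmark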
What are the actual figures?
Best regards
Keld
* Re: Faster read performance DURING (?) resync on raid10
2008-09-25 9:27 Faster read performance DURING (?) resync on raid10 sminded
2008-09-25 18:40 ` Keld Jørn Simonsen
@ 2008-09-26 4:18 ` Neil Brown
2008-09-26 23:19 ` Daniel Zetterman
2008-09-26 8:03 ` John Robinson
2 siblings, 1 reply; 9+ messages in thread
From: Neil Brown @ 2008-09-26 4:18 UTC (permalink / raw)
To: sminded; +Cc: linux-raid
On Thursday September 25, daniel.zetterman@gmail.com wrote:
>
> I have done some automated scripted tests with bonnie++ over RAID10 and
> encryption, using different encryption cipher modes to evaluate the impact
> on I/O operations.
> The strange thing is that I can see a significant increase in read
> performance (about 20%) when running the tests DURING the raid resync
> phase directly after the raid creation, as opposed to running them after
> the resync, or after I create the array with --assume-clean (which skips
> the initial resync).
Sounds like the read-balancing is doing exactly the wrong thing -
quite possible.
What layout are you using (near, far, offset?), how many devices?
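(Assuming the array is /dev/md0, both can be seen with:

  mdadm --detail /dev/md0 | grep -E 'Layout|Raid Devices'
  cat /proc/mdstat
)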
NeilBrown
* Re: Faster read performance DURING (?) resync on raid10
2008-09-25 9:27 Faster read performance DURING (?) resync on raid10 sminded
2008-09-25 18:40 ` Keld Jørn Simonsen
2008-09-26 4:18 ` Neil Brown
@ 2008-09-26 8:03 ` John Robinson
2 siblings, 0 replies; 9+ messages in thread
From: John Robinson @ 2008-09-26 8:03 UTC (permalink / raw)
To: Linux RAID
On 25/09/2008 10:27, sminded wrote:
> The strange thing is that I can see a significant increase in read
> performance (about 20%) when running the tests DURING the raid resync phase
> directly after the raid creation
Is it possible this is because the resync process has already read part
of the disc into cache when your test starts, effectively poisoning the
test results?
Cheers,
John.
* Re: Faster read performance DURING (?) resync on raid10
2008-09-26 4:18 ` Neil Brown
@ 2008-09-26 23:19 ` Daniel Zetterman
2008-09-28 2:23 ` Keld Jørn Simonsen
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Zetterman @ 2008-09-26 23:19 UTC (permalink / raw)
To: Neil Brown; +Cc: linux-raid
I've done some more tests and here are the results:
# bonnie++ during resync with readahead set to 512 (encrypted raid10)
kpax,4G,,,77308,21,34585,17,,,82073,33,384.8,1,,,,,,,,,,,,,
kpax,4G,,,77958,21,34810,17,,,83002,34,394.0,1,,,,,,,,,,,,,
kpax,4G,,,78273,21,34884,17,,,82758,34,389.0,1,,,,,,,,,,,,,
# bonnie++ during resync with readahead set to 65536 (encrypted raid10)
kpax,4G,,,77873,21,34927,18,,,82941,34,367.5,0,,,,,,,,,,,,,
kpax,4G,,,77072,21,34966,18,,,82234,34,357.6,1,,,,,,,,,,,,,
kpax,4G,,,78033,21,34904,18,,,83381,34,372.2,1,,,,,,,,,,,,,
As seen from above, the read throughput from bonnie++ is about 80 MiB/s
during resync, regardless of the readahead size on the encrypted
raid10.
# bonnie++ after resync with readahead set to 512 (encrypted raid10)
kpax,4G,,,81619,22,31528,16,,,70996,28,411.7,1,,,,,,,,,,,,,
kpax,4G,,,81013,22,31341,15,,,68451,28,354.1,1,,,,,,,,,,,,,
kpax,4G,,,81750,22,31291,16,,,69484,28,364.5,0,,,,,,,,,,,,,
# bonnie++ after resync with readahead set to 65536 (encrypted raid10)
kpax,4G,,,81464,22,31348,16,,,69140,28,356.4,1,,,,,,,,,,,,,
kpax,4G,,,81803,22,31039,16,,,67414,27,338.6,1,,,,,,,,,,,,,
kpax,4G,,,81263,22,31361,16,,,70491,28,326.7,0,,,,,,,,,,,,,
The above tests show that the read throughput drops about 15%, to
roughly 68 MiB/s, after the resync has finished.
If I run the same set of tests without encryption, I get:
# bonnie++ during resync with readahead set to 512 (normal raid10)
kpax,4G,,,90242,24,58586,20,,,138938,34,320.0,0,,,,,,,,,,,,,
kpax,4G,,,90101,24,53133,19,,,141617,36,414.7,1,,,,,,,,,,,,,
kpax,4G,,,88971,24,53107,19,,,135751,33,437.4,1,,,,,,,,,,,,,
kpax,4G,,,89262,24,53058,19,,,134046,33,411.3,1,,,,,,,,,,,,,
# bonnie++ during resync with readahead set to 65536 (normal raid10)
kpax,4G,,,87487,24,59635,22,,,139171,30,440.3,1,,,,,,,,,,,,,
kpax,4G,,,88879,24,60615,22,,,148133,32,426.5,1,,,,,,,,,,,,,
kpax,4G,,,88836,24,56569,21,,,139867,30,423.0,1,,,,,,,,,,,,,
kpax,4G,,,89166,24,58811,22,,,134982,30,422.2,1,,,,,,,,,,,,,
Now we see that we have a lot more juice without the encryption,
pumping about 136 MiB/s during reads.
# bonnie++ after resync with readahead set to 512 (normal raid10)
kpax,4G,,,95747,27,45329,17,,,123298,31,546.7,1,,,,,,,,,,,,,
kpax,4G,,,94950,26,45006,16,,,128461,31,476.8,1,,,,,,,,,,,,,
kpax,4G,,,95652,26,45202,16,,,130082,32,442.7,1,,,,,,,,,,,,,
kpax,4G,,,94900,26,44224,16,,,125801,31,455.1,1,,,,,,,,,,,,,
# bonnie++ after resync with readahead set to 65536 (normal raid10)
kpax,4G,,,95475,27,71200,28,,,172074,37,429.2,1,,,,,,,,,,,,,
kpax,4G,,,94724,26,68041,26,,,162545,34,447.5,1,,,,,,,,,,,,,
kpax,4G,,,95133,27,72185,28,,,160979,35,412.8,1,,,,,,,,,,,,,
kpax,4G,,,95090,27,71043,27,,,167199,36,421.7,1,,,,,,,,,,,,,
And the grand finale: running plain raid10 with the large readahead
shows a whopping 161 MiB/s.
To summarize:
1) There seems to be something fishy going on when using dm-crypt over
Linux raid, which makes read throughput go down after the resync is
done.
2) The readahead size does not seem to make any difference on
encrypted raid arrays.
Request:
Could someone try encryption over raid10 and run bonnie++ tests during
and after initial resync to see if we get similar results?
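Something along these lines should reproduce it (a sketch; device names
and the cipher are examples):

  mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1
  cryptsetup -c aes-cbc-essiv:sha256 create cryptmd /dev/md0
  mkfs.ext3 /dev/mapper/cryptmd && mount /dev/mapper/cryptmd /mnt/test
  bonnie++ -d /mnt/test -s 4G -u nobody   # run 1: during the resync
  # wait until /proc/mdstat no longer shows the resync, then:
  bonnie++ -d /mnt/test -s 4G -u nobody   # run 2: after the resync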
Does anyone have a clue as to what this could be?
On Fri, Sep 26, 2008 at 6:18 AM, Neil Brown <neilb@suse.de> wrote:
>
> On Thursday September 25, daniel.zetterman@gmail.com wrote:
> >
> > I have done some automated scripted tests with bonnie++ over RAID10 and
> > encryption, using different encryption cipher modes to evaluate the impact
> > on I/O operations.
> > The strange thing is that I can see a significant increase in read
> > performance (about 20%) when running the tests DURING the raid resync
> > phase directly after the raid creation, as opposed to running them
> > after the resync, or after I create the array with --assume-clean
> > (which skips the initial resync).
>
> Sounds like the read-balancing is doing exactly the wrong thing -
> quite possible.
>
> What layout are you using (near, far, offset?), how many devices?
>
> NeilBrown
* Re: Faster read performance DURING (?) resync on raid10
2008-09-26 23:19 ` Daniel Zetterman
@ 2008-09-28 2:23 ` Keld Jørn Simonsen
2008-09-28 10:45 ` Daniel Zetterman
0 siblings, 1 reply; 9+ messages in thread
From: Keld Jørn Simonsen @ 2008-09-28 2:23 UTC (permalink / raw)
To: Daniel Zetterman; +Cc: Neil Brown, linux-raid
On Sat, Sep 27, 2008 at 01:19:50AM +0200, Daniel Zetterman wrote:
> I've done some more tests and here are the results:
>
> # bonnie++ during resync with readahead set to 512 (encrypted raid10)
> kpax,4G,,,77308,21,34585,17,,,82073,33,384.8,1,,,,,,,,,,,,,
It would be nice to know which layout (near, far, offset)
you are using for the test. The expected results are quite
dependent on this.
best regards
keld
* Re: Faster read performance DURING (?) resync on raid10
2008-09-28 2:23 ` Keld J�rn Simonsen
@ 2008-09-28 10:45 ` Daniel Zetterman
2008-09-28 11:06 ` Daniel Zetterman
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Zetterman @ 2008-09-28 10:45 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Neil Brown, linux-raid
I'm using near=2, far=1, superblock version=00.90.03
I'm not sure about offset, but it's the default.
I want to clarify one thing about the data: it is not only a problem
with encryption over raid; you get the same drop without encryption,
but there you can remedy it by raising the readahead, which has no
effect when using encryption.
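(A note for anyone retesting: the dm-crypt device has its own readahead
setting, separate from the md device underneath, so it may be worth
setting it on both layers; device names here are examples:

  blockdev --setra 65536 /dev/md0
  blockdev --setra 65536 /dev/mapper/cryptmd
)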
On Sun, Sep 28, 2008 at 4:23 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> On Sat, Sep 27, 2008 at 01:19:50AM +0200, Daniel Zetterman wrote:
>> I've done some more tests and here are the results:
>>
>> # bonnie++ during resync with readahead set to 512 (encrypted raid10)
>> kpax,4G,,,77308,21,34585,17,,,82073,33,384.8,1,,,,,,,,,,,,,
>
> It would be nice to know which layout (near, far, offset)
> you are using for the test. The expected results are quite
> dependent on this.
>
> best regards
> keld
>
* Re: Faster read performance DURING (?) resync on raid10
2008-09-28 10:45 ` Daniel Zetterman
@ 2008-09-28 11:06 ` Daniel Zetterman
2008-09-28 17:35 ` Keld Jørn Simonsen
0 siblings, 1 reply; 9+ messages in thread
From: Daniel Zetterman @ 2008-09-28 11:06 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: Neil Brown, linux-raid
Sorry guys, the offset is of course as stated: near=2, far=1.
Isn't near=2, far=2 better for raid10?
On Sun, Sep 28, 2008 at 12:45 PM, Daniel Zetterman
<daniel.zetterman@gmail.com> wrote:
> I'm using near=2, far=1, superblock version=00.90.03
>
> I'm not sure about offset, but it's the default.
>
> I want to clarify one thing about the data: it is not only a problem
> with encryption over raid; you get the same drop without encryption,
> but there you can remedy it by raising the readahead, which has no
> effect when using encryption.
>
>
> On Sun, Sep 28, 2008 at 4:23 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
>> On Sat, Sep 27, 2008 at 01:19:50AM +0200, Daniel Zetterman wrote:
>>> I've done some more tests and here are the results:
>>>
>>> # bonnie++ during resync with readahead set to 512 (encrypted raid10)
>>> kpax,4G,,,77308,21,34585,17,,,82073,33,384.8,1,,,,,,,,,,,,,
>>
>> It would be nice to know which layout (near, far, offset)
>> you are using for the test. The expected results are quite
>> dependent on this.
>>
>> best regards
>> keld
>>
>
* Re: Faster read performance DURING (?) resync on raid10
2008-09-28 11:06 ` Daniel Zetterman
@ 2008-09-28 17:35 ` Keld Jørn Simonsen
0 siblings, 0 replies; 9+ messages in thread
From: Keld Jørn Simonsen @ 2008-09-28 17:35 UTC (permalink / raw)
To: Daniel Zetterman; +Cc: Neil Brown, linux-raid
On Sun, Sep 28, 2008 at 01:06:09PM +0200, Daniel Zetterman wrote:
> Sorry guys, the offset is of course as stated: near=2, far=1.
>
> Isn't near=2, far=2 better for raid10?
>
A number of people say that near=1, far=2 is better.
You could try it out and see if there is any difference.
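Creating a far-layout array for comparison would look something like
this (device names are examples; recreating the array destroys its
contents):

  mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=4 \
      /dev/sd[bcde]1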
best regards
keld
>
> On Sun, Sep 28, 2008 at 12:45 PM, Daniel Zetterman
> <daniel.zetterman@gmail.com> wrote:
> > I'm using near=2, far=1, superblock version=00.90.03
> >
> > I'm not sure about offset, but it's the default.
> >
> > I want to clarify one thing about the data: it is not only a problem
> > with encryption over raid; you get the same drop without encryption,
> > but there you can remedy it by raising the readahead, which has no
> > effect when using encryption.
> >
> >
> > On Sun, Sep 28, 2008 at 4:23 AM, Keld Jørn Simonsen <keld@dkuug.dk> wrote:
> >> On Sat, Sep 27, 2008 at 01:19:50AM +0200, Daniel Zetterman wrote:
> >>> I've done some more tests and here are the results:
> >>>
> >>> # bonnie++ during resync with readahead set to 512 (encrypted raid10)
> >>> kpax,4G,,,77308,21,34585,17,,,82073,33,384.8,1,,,,,,,,,,,,,
> >>
> >> It would be nice to know which layout (near, far, offset)
> >> you are using for the test. The expected results are quite
> >> dependent on this.
> >>
> >> best regards
> >> keld
> >>
> >