* Help with chunksize on raid10 -p o3 array
@ 2007-03-06 11:26 Peter Rabbitson
2007-03-07 0:31 ` Bill Davidsen
` (2 more replies)
0 siblings, 3 replies; 11+ messages in thread
From: Peter Rabbitson @ 2007-03-06 11:26 UTC (permalink / raw)
To: linux-raid
Hi,
I have been trying to figure out the best chunk size for raid10 before
migrating my server to it (currently raid1). I am looking at three
offset copies (-p o3), as I want two-drive-failure redundancy, and the
offset layout is said to have the best write performance, with read
performance equal to the far layout. Information on the internet is
scarce, so I decided to test chunk sizes myself. I used the script
http://rabbit.us/pool/misc/raid_test.txt to iterate through different
chunk sizes and dd the resulting array to /dev/null. I deliberately
did not make a filesystem on top of the array - I was just looking for
raw performance, and since the FS layer is not involved, no
caching/optimization is taking place. I also monitored the process
with dstat in a separate window, and memory usage confirmed that this
method is valid.
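In outline it boils down to something like this (a simplified sketch
rather than the exact script - device names are examples, and the real
run also logged dstat output):

  #!/bin/bash
  DISKS="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
  for chunk in 32 64 128 512 1024 4096 16384 32768 65536; do
      mdadm --create /dev/md0 --run --level=10 --layout=o3 \
            --chunk=$chunk --raid-devices=4 $DISKS
      # let the initial resync finish before timing any reads
      while grep -q resync /proc/mdstat; do sleep 10; done
      for bs in 4096 65536 1048576; do
          count=$(( 2 * 1024 * 1024 * 1024 / bs ))  # read 2GB per block size
          dd if=/dev/md0 of=/dev/null bs=$bs count=$count
      done
      mdadm --stop /dev/md0
      mdadm --zero-superblock $DISKS
  done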
I got some pretty weird results:
http://rabbit.us/pool/misc/raid_test_results.txt
From all my reading so far I thought that as the chunk size increases,
large-block throughput decreases while small-block read throughput
increases, and that it is just a matter of finding a "sweet spot" that
balances the two. The results, however, clearly show something else.
There are some inconsistencies, which I attribute to my non-scientific
approach, but the trend is clear.
Here are the questions I have:
* Why did the test show the best consistent performance over a 16k
chunk? Is there a way to determine this number without running a
lengthy benchmark, just from knowing the drive performance?
* Why, although every chunk of data exists in 3 identical copies, did
dstat never show simultaneous reading from more than 2 drives? Every
dd run maxed out one of the drives at 58MB/s while another one tried
to catch up to varying degrees depending on the chunk size. Then on
the next dd run two other drives would be (seemingly randomly)
selected and the process would repeat.
* What the test results don't show, but dstat did, is how the array
resync behaved after array creation. Although my system can sustain
reads from all 4 drives at the maximum speed of 58MB/s, here is what
the resync looked like at different chunk sizes:
32k - simultaneous reads from all 4 drives at 47MB/s sustained
64k - simultaneous reads from all 4 drives at 56MB/s sustained
128k - simultaneous reads from all 4 drives at 54MB/s sustained
512k - simultaneous reads from all 4 drives at 30MB/s sustained
1024k - simultaneous reads from all 4 drives at 38MB/s sustained
4096k - simultaneous reads from all 4 drives at 44MB/s sustained
16384k - simultaneous reads from all 4 drives at 46MB/s sustained
32768k - simultaneous reads from 2 drives at 58MB/s sustained and
the other two at 26MB/s sustained alternating the speed
between the pairs of drives every 3 seconds or so
65536k - All 4 drives started at 58MB/s sustained gradually
reducing to 44MB/s sustained at the same time
I repeated just the array creation step - the results are consistent.
Is there any explanation for this?
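For completeness, the figures above were read off roughly like this
(sda..sdd are example names; the exact dstat switches may have
differed):

  dstat -d -D sda,sdb,sdc,sdd 1    # per-disk throughput, one line per second
  watch -n1 cat /proc/mdstat       # resync progress and overall speed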
* Re: Help with chunksize on raid10 -p o3 array
2007-03-06 11:26 Help with chunksize on raid10 -p o3 array Peter Rabbitson
@ 2007-03-07 0:31 ` Bill Davidsen
2007-03-07 9:28 ` Peter Rabbitson
2007-03-12 4:28 ` Neil Brown
2007-03-19 14:14 ` raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array) Peter Rabbitson
2 siblings, 1 reply; 11+ messages in thread
From: Bill Davidsen @ 2007-03-07 0:31 UTC (permalink / raw)
To: Peter Rabbitson; +Cc: linux-raid
Peter Rabbitson wrote:
> Hi,
> I have been trying to figure out the best chunk size for raid10 before
> migrating my server to it (currently raid1). I am looking at 3 offset
> stripes, as I want to have two drive failure redundancy, and offset
> striping is said to have the best write performance, with read
> performance equal to far. Information on the internet is scarce so I
> decided to test chunking myself. I used the script
> http://rabbit.us/pool/misc/raid_test.txt to iterate through different
> chunk sizes, and try to dd the resulting array to /dev/null. I
> deliberately did not make a filesystem on top of the array - I was
> just looking for raw performance, and since the FS layer is not
> involved no caching/optimization is taking place. I also monitored the
> process with dstat in a separate window, and memory usage confirmed
> that this method is valid.
> I got some pretty weird results:
> http://rabbit.us/pool/misc/raid_test_results.txt
> From all my readings so far I thought that with chunk size increase
> the large block access throughput decreases while small block reads
> increase, and it is just a matter of finding a "sweet spot" balancing
> them out. The results, however, clearly show something else. There are
> some inconsistencies, which I attribute to my non-scientific approach,
> but the trend is clearly showing.
>
> Here are the questions I have:
>
> * Why did the test show best consistent performance over a 16k chunk?
> Is there a way to determine this number without running a lengthy
> benchmark, just from knowing the drive performance?
>
By any chance did you remember to increase stripe_cache_size to match
the chunk size? If not, there you go.
> * Why although I have 3 identical chunks of data at any time, dstat
> never showed simultaneous reading from more than 2 drives. Every dd
> run was accompanied by maxing out one of the drives at 58MB/s and
> another one was trying to catch up to various degrees depending on the
> chunk size. Then on the next dd run two other drives would be
> (seemingly random) selected and the process would repeat.
>
> * What the test results don't show but dstat did is how the array
> resync behaved after the array creation. Although my system can
> sustain reads from all 4 drives at the max speed of 58MB/s, here is
> what the resync at different chunk sizes looked like:
>
> 32k - simultaneous reads from all 4 drives at 47MB/s sustained
> 64k - simultaneous reads from all 4 drives at 56MB/s sustained
> 128k - simultaneous reads from all 4 drives at 54MB/s sustained
> 512k - simultaneous reads from all 4 drives at 30MB/s sustained
> 1024k - simultaneous reads from all 4 drives at 38MB/s sustained
> 4096k - simultaneous reads from all 4 drives at 44MB/s sustained
> 16384k - simultaneous reads from all 4 drives at 46MB/s sustained
> 32768k - simultaneous reads from 2 drives at 58MB/s sustained and
> the other two at 26MB/s sustained alternating the speed
> between the pairs of drives every 3 seconds or so
> 65536k - All 4 drives started at 58MB/s sustained gradually
> reducing to 44MB/s sustained at the same time
>
> I repeated just the creation of arrays - the results are consistent.
> Is there any explanation for this?
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
Doing interesting things with small computers since 1979
* Re: Help with chunksize on raid10 -p o3 array
2007-03-07 0:31 ` Bill Davidsen
@ 2007-03-07 9:28 ` Peter Rabbitson
0 siblings, 0 replies; 11+ messages in thread
From: Peter Rabbitson @ 2007-03-07 9:28 UTC (permalink / raw)
To: linux-raid
Bill Davidsen wrote:
> Peter Rabbitson wrote:
>> Hi,
>> I have been trying to figure out the best chunk size for raid10 before
> By any chance did you remember to increase stripe_cache_size to match
> the chunk size? If not, there you go.
At the end of /usr/src/linux/Documentation/md.txt it specifically says
that stripe_cache_size is raid5-specific, which made sense to me, as
caching stripes to avoid recomputing parity is clearly beneficial. I
will test later today with a higher cache setting. Are there any
guidelines on how large it should be in relation to the chunk size and
the number of drives for raid10?
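For reference, the knob is per-array under sysfs, so it is easy to
check whether a raid10 array exposes it at all before trying to tune
it (md0 is just an example, and the path is from memory):

  ls /sys/block/md0/md/stripe_cache_size            # absent => does not apply
  cat /sys/block/md0/md/stripe_cache_size           # current number of entries
  echo 4096 > /sys/block/md0/md/stripe_cache_size   # raise it (raid5/6 arrays)

Each entry costs roughly one page per member device, so 4096 entries
on a 4-drive array is on the order of 64MB.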
* Re: Help with chunksize on raid10 -p o3 array
2007-03-06 11:26 Help with chunksize on raid10 -p o3 array Peter Rabbitson
2007-03-07 0:31 ` Bill Davidsen
@ 2007-03-12 4:28 ` Neil Brown
2007-03-12 14:46 ` Peter Rabbitson
2007-03-19 14:14 ` raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array) Peter Rabbitson
2 siblings, 1 reply; 11+ messages in thread
From: Neil Brown @ 2007-03-12 4:28 UTC (permalink / raw)
To: Peter Rabbitson; +Cc: linux-raid
On Tuesday March 6, rabbit@rabbit.us wrote:
> Hi,
> I have been trying to figure out the best chunk size for raid10 before
> migrating my server to it (currently raid1). I am looking at 3 offset
> stripes, as I want to have two drive failure redundancy, and offset
> striping is said to have the best write performance, with read
> performance equal to far. Information on the internet is scarce so I
> decided to test chunking myself. I used the script
> http://rabbit.us/pool/misc/raid_test.txt to iterate through different
The different block sizes in the reads will make very little
difference to the results, as the kernel will be doing read-ahead for
you. If you really want to test throughput at different block sizes,
you need to insert random seeks.
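Something along these lines would do it (an untested sketch; /dev/md0,
the block size and the count are all just examples):

  #!/bin/bash
  # Read COUNT blocks of BS bytes from random offsets, with O_DIRECT so
  # that readahead cannot smooth things over.
  DEV=/dev/md0
  BS=$((64 * 1024))                               # block size under test
  COUNT=1000
  BLOCKS=$(( $(blockdev --getsize64 "$DEV") / BS ))
  for ((i = 0; i < COUNT; i++)); do
      skip=$(( (RANDOM * 32768 + RANDOM) % BLOCKS ))
      dd if="$DEV" of=/dev/null bs=$BS count=1 skip=$skip iflag=direct 2>/dev/null
  done

Time the whole loop and divide COUNT*BS by the elapsed time to get a
random-read figure for that block size.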
> chunk sizes, and try to dd the resulting array to /dev/null. I
> deliberately did not make a filesystem on top of the array - I was just
> looking for raw performance, and since the FS layer is not involved no
> caching/optimization is taking place. I also monitored the process with
> dstat in a separate window, and memory usage confirmed that this method
> is valid.
> I got some pretty weird results:
> http://rabbit.us/pool/misc/raid_test_results.txt
> From all my readings so far I thought that with chunk size increase the
> large block access throughput decreases while small block reads
> increase, and it is just a matter of finding a "sweet spot" balancing
> them out. The results, however, clearly show something else. There are
> some inconsistencies, which I attribute to my non-scientific approach,
> but the trend is clearly showing.
>
> Here are the questions I have:
>
> * Why did the test show best consistent performance over a 16k chunk? Is
> there a way to determine this number without running a lengthy
> benchmark, just from knowing the drive performance?
When you are doing a large sequential read from a raid10-offset array,
it will (should) read from all drives for the first chunk, then skip
over the offset copies of that chunk and read again. Thus you get a
read-seek-read-seek pattern.
The time that each read takes will be proportional to the chunk size.
The time that each seek takes will be pretty stable (mostly head
settling time, I believe). So for small chunks the seek time
dominates; for large chunks the read starts to dominate.
I think this is what you are seeing. When you get to 16M chunks
(16384 kilobytes) the seek time is a smaller fraction of the read time
and so you spend more time reading.
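To put very rough numbers on it - taking 58MB/s as the streaming rate
from your figures and guessing at 2ms of settling per skip (the exact
value is a guess; only the shape of the curve matters):

  # effective per-drive rate = rate * t_read / (t_read + t_seek)
  awk 'BEGIN {
      rate = 58 * 1024          # streaming rate, KB/s
      seek = 0.002              # guessed settle time per skip, seconds
      n = split("32 64 512 4096 16384 32768", c, " ")
      for (i = 1; i <= n; i++) {
          t = c[i] / rate
          printf "%6sk chunk -> ~%.0f MB/s per drive\n", c[i], 58 * t / (t + seek)
      }
  }'

With those assumptions a 32k chunk spends most of its time on the
skips (around 12MB/s per drive) while multi-megabyte chunks get close
to the full 58MB/s.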
The fact that it drops off again at 32M is probably due to some limit
in the amount of read-ahead that the kernel will initiate. If it
won't issue the request to the last drive before the request to the
first drive completes, you will obviously get slower throughput.
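It may be worth checking how much read-ahead the md device actually
gets, and whether raising it moves the 32M result - for example:

  blockdev --getra /dev/md0        # current read-ahead, in 512-byte sectors
  blockdev --setra 65536 /dev/md0  # 65536 sectors = 32MB of read-ahead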
>
> * Why although I have 3 identical chunks of data at any time, dstat
> never showed simultaneous reading from more than 2 drives. Every dd run
> was accompanied by maxing out one of the drives at 58MB/s and another
> one was trying to catch up to various degrees depending on the chunk
> size. Then on the next dd run two other drives would be (seemingly
> random) selected and the process would repeat.
Poor read-balancing code. It really needs more thought.
Possibly for raid10 we shouldn't try to balance at all. Just read
from the 'first' copy in each case....
>
> * What the test results don't show but dstat did is how the array resync
> behaved after the array creation. Although my system can sustain reads
> from all 4 drives at the max speed of 58MB/s, here is what the resync at
> different chunk sizes looked like:
>
> 32k - simultaneous reads from all 4 drives at 47MB/s sustained
> 64k - simultaneous reads from all 4 drives at 56MB/s sustained
> 128k - simultaneous reads from all 4 drives at 54MB/s sustained
> 512k - simultaneous reads from all 4 drives at 30MB/s sustained
> 1024k - simultaneous reads from all 4 drives at 38MB/s sustained
> 4096k - simultaneous reads from all 4 drives at 44MB/s sustained
> 16384k - simultaneous reads from all 4 drives at 46MB/s sustained
> 32768k - simultaneous reads from 2 drives at 58MB/s sustained and
> the other two at 26MB/s sustained alternating the speed
> between the pairs of drives every 3 seconds or so
> 65536k - All 4 drives started at 58MB/s sustained gradually
> reducing to 44MB/s sustained at the same time
>
> I repeated just the creation of arrays - the results are consistent. Is
> there any explanation for this?
A raid10 resync involves reading all copies of each block and doing
comparisons. If we find a difference, we write out one copy over the
rest.
We issue the requests in sequential order for the blocks. If you think
about how the blocks are laid out, you will see that this is not
always sequential order on each individual device. In some cases we
will ask to read a later device block before an earlier device block.
For small chunk sizes, the amount of backward seeking will be fairly
small and the elevator will probably absorb all of it.
For larger chunk sizes, you will get longer backward seeks that don't
get rearranged by the elevator, and so you will get lower throughput.
Exactly where the interesting 32M artifact comes from I don't know.
It could relate to the window size used by md - there is a limit to
how many outstanding resync requests there can be at one time.
It is limited to 32 requests of 64K each, which multiplies out to
2MB... not obvious how that connects, is it?
NeilBrown
* Re: Help with chunksize on raid10 -p o3 array
2007-03-12 4:28 ` Neil Brown
@ 2007-03-12 14:46 ` Peter Rabbitson
2007-03-12 18:45 ` Richard Scobie
0 siblings, 1 reply; 11+ messages in thread
From: Peter Rabbitson @ 2007-03-12 14:46 UTC (permalink / raw)
To: linux-raid
Neil Brown wrote:
> The different block sizes in the reads will make very little
> difference to the results as the kernel will be doing read-ahead for
> you. If you want to really test throughput at different block sizes
> you need to insert random seeks.
>
Neil, thank you for the time and effort you put into answering my
previous email. Excellent insights. I thought that read-ahead was
filesystem-specific, and that consequently I would be safe using the
raw device. I will definitely test with bonnie again.
>> * Why although I have 3 identical chunks of data at any time, dstat
>> never showed simultaneous reading from more than 2 drives. Every dd run
>> was accompanied by maxing out one of the drives at 58MB/s and another
>> one was trying to catch up to various degrees depending on the chunk
>> size. Then on the next dd run two other drives would be (seemingly
>> random) selected and the process would repeat.
>
> Poor read-balancing code. It really needs more thought.
> Possibly for raid10 we shouldn't try to balance at all. Just read
> from the 'first' copy in each case....
Is this anywhere near the top of the todo list, or are raid10 users
for now bound to the maximum read speed of a two-drive combination?
And a last question - earlier in this thread Bill Davidsen suggested
playing with stripe_cache_size. I tried increasing it (only two tests,
though) with no apparent effect. Does this setting apply to raid1/10
at all, or is it strictly in the raid5/6 domain? If it doesn't apply,
are there any tweaks apart from the chunk size and the layout that can
affect raid10 performance?
Once again thank you for the help.
Peter
* Re: Help with chunksize on raid10 -p o3 array
2007-03-12 14:46 ` Peter Rabbitson
@ 2007-03-12 18:45 ` Richard Scobie
2007-03-12 21:16 ` Peter Rabbitson
0 siblings, 1 reply; 11+ messages in thread
From: Richard Scobie @ 2007-03-12 18:45 UTC (permalink / raw)
To: linux-raid
Peter Rabbitson wrote:
> Is this anywhere near the top of the todo list, or for now raid10 users
> are bound to a maximum read speed of a two drive combination?
I have not done any testing with the md native RAID10 implementation,
so perhaps there are other advantages to it, but have you tried
setting up your 4 drives as a RAID0 made of a pair of RAID1s?
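Something along these lines (device names are only examples):

  # two RAID1 mirrors...
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
  # ...striped together with RAID0
  mdadm --create /dev/md0 --level=0 --chunk=64 --raid-devices=2 /dev/md1 /dev/md2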
I have had very good performance in this configuration - reads and
writes of over 140MB/s with 4 x 7200rpm SATA drives.
Regards,
Richard
* Re: Help with chunksize on raid10 -p o3 array
2007-03-12 18:45 ` Richard Scobie
@ 2007-03-12 21:16 ` Peter Rabbitson
0 siblings, 0 replies; 11+ messages in thread
From: Peter Rabbitson @ 2007-03-12 21:16 UTC (permalink / raw)
To: linux-raid
Richard Scobie wrote:
> Peter Rabbitson wrote:
>
>> Is this anywhere near the top of the todo list, or for now raid10
>> users are bound to a maximum read speed of a two drive combination?
>
> I have not done any testing with the md native RAID10 implementations,
> so perhaps there are some other advantages, but have you tried setting
> up your 4 drives as a RAID 0 made up of a pair of RAID1s?
The advantage is the higher redundancy: with the x3 layout any two
drives can fail, unlike with the raid1+0 setup, although I sacrifice
available disk space. But yes, I agree that if I were after pure
throughput, raid1+0 would be more beneficial, with the downside of
effectively 1.5-drive failure redundancy (a second failure is only
survivable if it hits the other mirror pair).
* raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array)
2007-03-06 11:26 Help with chunksize on raid10 -p o3 array Peter Rabbitson
2007-03-07 0:31 ` Bill Davidsen
2007-03-12 4:28 ` Neil Brown
@ 2007-03-19 14:14 ` Peter Rabbitson
2 siblings, 0 replies; 11+ messages in thread
From: Peter Rabbitson @ 2007-03-19 14:14 UTC (permalink / raw)
To: linux-raid
Peter Rabbitson wrote:
> I have been trying to figure out the best chunk size for raid10 before
> migrating my server to it (currently raid1). I am looking at 3 offset
> stripes, as I want to have two drive failure redundancy, and offset
> striping is said to have the best write performance, with read
> performance equal to far.
Incorporating suggestions from previous posts (thank you, everyone), I
used this modified script: http://rabbit.us/pool/misc/raid_test2.txt
To negate the effects of caching, memory was kept below 200MB free by
filling a tmpfs mount, with no swap enabled. Here is what I got with
the far layout (-p f3): http://rabbit.us/pool/misc/raid_far.html
The clear winner is a 1M chunk, and it is very consistent at any block
size. I was even more surprised to see that my read speed was
identical to that of a raid0, getting near the _maximum_ physical
speed of the 4 drives (roughly 55MB/s sustained across 1.2G). Unlike
the offset layout, far really shines at reading data back. The write
speed did not suffer noticeably compared to offset striping. Here are
the offset (-p o3) results for comparison:
http://rabbit.us/pool/misc/raid_offset.html - they roughly correlate
with my earlier testing using dd.
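The cache pinning itself was along these lines (sizes from memory,
assuming roughly 2GB of RAM in the box - adjust so about 200MB stays
free):

  swapoff -a
  mkdir -p /mnt/pin
  mount -t tmpfs -o size=1800m tmpfs /mnt/pin
  dd if=/dev/zero of=/mnt/pin/fill bs=1M   # fills the tmpfs, stops at ENOSPC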
So I guess the way to go for this system will be f3, although md(4)
says the offset layout should be more beneficial. Is there anything I
missed while setting up my o3 array that would explain why I got worse
performance for both read and write compared to f3?
Once again thanks everyone for the help.
Peter
* raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array)
@ 2008-12-17 12:41 Keld Jørn Simonsen
2008-12-17 12:50 ` Peter Rabbitson
0 siblings, 1 reply; 11+ messages in thread
From: Keld Jørn Simonsen @ 2008-12-17 12:41 UTC (permalink / raw)
To: linux-raid
I found this old message:
> Peter Rabbitson
> Mon, 19 Mar 2007 06:14:38 -0800
>
> Peter Rabbitson wrote:
>
> I have been trying to figure out the best chunk size for raid10
> before migrating my server to it (currently raid1). I am looking at 3
> offset stripes, as I want to have two drive failure redundancy, and
> offset striping is said to have the best write performance, with read
> performance equal to far.
>
> Incorporating suggestions from previous posts (thank you everyone), I
> used this modified script at http://rabbit.us/pool/misc/raid_test2.txt
> To negate effects of caching memory was jammed below 200mb free by using
> a full tmpfs mount with no swap. Here is what I got with far layout (-p
> f3): http://rabbit.us/pool/misc/raid_far.html The clear winner is 1M
> chunks, and is very consistent at any block size. I was surprised even
> more to see that my read speed was identical to that of a raid0 getting
> near the _maximum_ physical speed of 4 drives (roughly 55MB sustained
> across 1.2G). Unlike offset layout, far really shines at reading stuff
> back. The write speed did not suffer noticeably compared to offset
> striping. Here are the results (-p o3) for comparison:
> http://rabbit.us/pool/misc/raid_offset.html, and they roughly seem to
> correlate with my earlier testing using dd.
>
> So I guess the way to go for this system will be f3, although the md(4)
> says that offset layout should be more beneficial. Is there anything I
> missed while setting my o3 array, so that I got worse performance for
> both read and write compared to f3?
>
> Once again thanks everyone for the help.
> Peter
The links are not valid anymore. I wanted to see the results and
possibly include them in the performance wiki page, so I would
appreciate some new links.
Furthermore, some comments on the post: my take on o3 vs f3 is that
both in theory and in practice f3 should be much faster for sequential
reading, as the layout is equivalent to raid0. For random reading, and
for sequential and random writing, f3 and o3 should be about the same
(and the same goes for the more common f2 vs o2), especially when a
filesystem and its associated elevator algorithm are employed.
Best regards
keld
* Re: raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array)
2008-12-17 12:41 Keld Jørn Simonsen
@ 2008-12-17 12:50 ` Peter Rabbitson
2008-12-17 14:34 ` Keld Jørn Simonsen
0 siblings, 1 reply; 11+ messages in thread
From: Peter Rabbitson @ 2008-12-17 12:50 UTC (permalink / raw)
To: Keld Jørn Simonsen; +Cc: linux-raid
Keld Jørn Simonsen wrote:
> I found this old message:
>
>> Peter Rabbitson
>> Mon, 19 Mar 2007 06:14:38 -0800
>>
>
> The links were not valid anymore. I wanted to see the results and
> possibly include the results in the performance wiki page
> I would appreciate some new links here.
I apologize, I don't have the data available anymore.
> Furthermore some comments to the post: My take on o3 vs f3 is that both
> in theory and practice f3 should be much faster for sequential reading,
> as the layout is equivalent to raid0. For random reading and sequential
> and random writing f3 and o3 (and the same goes for the more normal f2
> vs o2) should be about the same, especially when a filesystem and
> its associated elevator algorithm is employed.
Yes, this is what I have also concluded since writing that email. I am
in the process of upgrading my raid setup, and while I am at it I am
leaving 5GB blank partitions at the start of all my workstations'
spindles, so that I can get some real testing done at night. I will
share my methodology with the list before I start testing (which
should take about 20 days the way I am planning it).
But first comes the vacation - happy holidays to you guys too.
* Re: raid10 far layout outperforms offset at writing? (was: Help with chunksize on raid10 -p o3 array)
2008-12-17 12:50 ` Peter Rabbitson
@ 2008-12-17 14:34 ` Keld Jørn Simonsen
0 siblings, 0 replies; 11+ messages in thread
From: Keld Jørn Simonsen @ 2008-12-17 14:34 UTC (permalink / raw)
To: Peter Rabbitson; +Cc: linux-raid
On Wed, Dec 17, 2008 at 01:50:24PM +0100, Peter Rabbitson wrote:
> Keld Jørn Simonsen wrote:
> > I found this old message:
> >
> >> Peter Rabbitson
> >> Mon, 19 Mar 2007 06:14:38 -0800
> >>
> >
> > The links were not valid anymore. I wanted to see the results and
> > possibly include the results in the performance wiki page
> > I would appreciate some new links here.
>
> I apologize, I don't have the data available anymore.
OK.
> > Furthermore some comments to the post: My take on o3 vs f3 is that both
> > in theory and practice f3 should be much faster for sequential reading,
> > as the layout is equivalent to raid0. For random reading and sequential
> > and random writing f3 and o3 (and the same goes for the more normal f2
> > vs o2) should be about the same, especially when a filesystem and
> > its associated elevator algorithm is employed.
>
> Yes, this is what I also concluded since I wrote this email. I am in the
> process of upgrading my raid setup, and while I am at it I am leaving
> 5GB blank partitions at the start of all my workstations spindles, so I
> can get some real testing at night. I will share my methodology with the
> list before I commence testing (which should take about 20 days the way
> I am planing it).
I have tried to persuade Neil to change the description for MD to
reflect the above, but so far without luck.
I look forward to seeing your new tests!
> But first comes the vacation - happy holidays to you too guys.
Yes, happy holidays to all!
Best regards
Keld