From: Bill Davidsen <davidsen@tmr.com>
To: Peter Rabbitson <rabbit@rabbit.us>
Cc: linux-raid@vger.kernel.org
Subject: Re: Help with chunksize on raid10 -p o3 array
Date: Tue, 06 Mar 2007 19:31:39 -0500	[thread overview]
Message-ID: <45EE07EB.8050604@tmr.com> (raw)
In-Reply-To: <45ED4FE1.5020105@rabbit.us>

Peter Rabbitson wrote:
> Hi,
> I have been trying to figure out the best chunk size for raid10 before 
> migrating my server to it (currently raid1). I am looking at three 
> offset copies (-p o3), since I want two-drive-failure redundancy, and 
> the offset layout is said to have the best write performance, with 
> read performance equal to the far layout. Information on the internet 
> is scarce, so I decided to test chunk sizes myself. I used the script 
> http://rabbit.us/pool/misc/raid_test.txt to iterate through different 
> chunk sizes and dd the resulting array to /dev/null. I deliberately 
> did not make a filesystem on top of the array - I was only after raw 
> performance, and with no FS layer involved there is no 
> filesystem-level caching or optimization taking place. I also 
> monitored the process with dstat in a separate window, and memory 
> usage confirmed that this method is valid.
> I got some pretty weird results: 
> http://rabbit.us/pool/misc/raid_test_results.txt
> From all my reading so far I expected that as the chunk size grows, 
> large block throughput decreases while small block reads improve, and 
> that it is just a matter of finding a "sweet spot" balancing the two. 
> The results, however, clearly show something else. There are some 
> inconsistencies, which I attribute to my non-scientific approach, but 
> the trend is clear.
>
> Here are the questions I have:
>
> * Why did the test show the best consistent performance at a 16k 
> chunk? Is there a way to determine this number without running a 
> lengthy benchmark, just from the drives' performance characteristics?
>
By any chance did you remember to increase stripe_cache_size to match 
the chunk size? If not, there you go.
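
For what it's worth, on kernels that expose it the value can be read and
set through sysfs. A sketch follows - /dev/md0 stands in for your array,
and note that the md driver only provides this tunable for the parity
levels (raid4/5/6), so it may simply not exist for a raid10:

  # Current stripe cache size, in pages (4KB each) per device;
  # the default of 256 is small next to multi-megabyte chunks
  cat /sys/block/md0/md/stripe_cache_size

  # Raise it, e.g. to 4096 pages = 16MB per device
  echo 4096 > /sys/block/md0/md/stripe_cache_size
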
> * Why, although there are 3 identical copies of every chunk, did 
> dstat never show simultaneous reading from more than 2 drives? Every 
> dd run maxed out one of the drives at 58MB/s while another tried to 
> catch up to varying degrees depending on the chunk size. Then on the 
> next dd run two other drives would be selected (seemingly at random) 
> and the process would repeat.
>
> * What the test results don't show, but dstat did, is how the array 
> resync behaved after creation. Although my system can sustain reads 
> from all 4 drives at the maximum of 58MB/s each, here is what the 
> resync looked like at different chunk sizes:
>
> 32k     -  simultaneous reads from all 4 drives at 47MB/s sustained
> 64k     -  simultaneous reads from all 4 drives at 56MB/s sustained
> 128k    -  simultaneous reads from all 4 drives at 54MB/s sustained
> 512k    -  simultaneous reads from all 4 drives at 30MB/s sustained
> 1024k   -  simultaneous reads from all 4 drives at 38MB/s sustained
> 4096k   -  simultaneous reads from all 4 drives at 44MB/s sustained
> 16384k  -  simultaneous reads from all 4 drives at 46MB/s sustained
> 32768k  -  simultaneous reads from 2 drives at 58MB/s sustained and
>            the other two at 26MB/s sustained, alternating the speeds
>            between the pairs of drives every 3 seconds or so
> 65536k  -  all 4 drives started at 58MB/s sustained, gradually
>            slowing to 44MB/s sustained at the same time
>
> I repeated just the array creation step - the results are consistent. 
> Is there any explanation for this?
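
I don't have a good explanation for the pairing behaviour, but if you
rerun the creation step it is worth capturing what md itself reports
instead of inferring everything from dstat. A sketch, again with
/dev/md0 as a placeholder:

  # Resync progress and rate as md accounts it
  watch -n1 cat /proc/mdstat
  cat /sys/block/md0/md/sync_speed

  # The throttles that cap resync; md drops toward the min limit
  # whenever it sees competing I/O, which by itself can explain
  # rates well below what the drives sustain
  cat /proc/sys/dev/raid/speed_limit_min
  cat /proc/sys/dev/raid/speed_limit_max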
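
And for anyone who wants to reproduce the numbers without fetching the
script, the test described at the top boils down to a loop like the
following (an untested sketch; the device names and chunk list are
placeholders, not necessarily what the script really uses):

  #!/bin/sh
  # Sweep chunk sizes on a 4-disk raid10 with 3 offset copies (-p o3)
  DEVS="/dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1"
  for CHUNK in 16 32 64 128 512 1024; do   # sizes in KB
      mdadm --create /dev/md0 --run --level=10 --layout=o3 \
            --chunk=$CHUNK --raid-devices=4 $DEVS
      # Let the initial resync finish first; poll /proc/mdstat
      # instead if your mdadm lacks --wait
      mdadm --wait /dev/md0
      dd if=/dev/md0 of=/dev/null bs=1M count=4096
      mdadm --stop /dev/md0
  done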


-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979

