From: Robin Hill <robin@robinhill.me.uk>
To: "P. Gautschi" <linuxlist@gautschi.net>
Cc: linux-raid@vger.kernel.org
Subject: Re: Bad sequential performance of RAID5 with a lot of disk seeks
Date: Tue, 7 Oct 2014 08:43:47 +0100	[thread overview]
Message-ID: <20141007074347.GA18786@cthulhu.home.robinhill.me.uk> (raw)
In-Reply-To: <54336FC1.6080306@gautschi.net>

On Tue Oct 07, 2014 at 06:44:49AM +0200, P. Gautschi wrote:

> I've created a RAID5 array on 5 identical SATA disks. Doing some performance
> measurements with dd, I get disappointing performance.
> A dd with bs=1M on a btrfs filesystem created on md0 transfers about
> 110 MB/s (both read and write).
> A dd directly on md0 has the same write speed but only about 20 MB/s on read.
> In all of the tests I hear the disks constantly seeking. This was also the
> case during creation of the array.
> I also created a RAID4 array to make sure I wasn't getting fooled by the
> stripe layout of RAID5. Now I get about 110 MB/s for write and 230 MB/s for
> read on md0, but the constant seeking is still present for both read and
> write, and during creation of the array.
> 
> Why do the disks perform so many seek operations? I would expect sequential
> access on md0 to cause sequential access on the individual disks.
> 
> I have to add that I did something unusual: I created the RAID4/5 with a
> chunk size of 4KiB. The idea was that since I'm going to use btrfs with the
> default nodesize of 16KiB, every node write will fill a full stripe and
> there won't be any read-modify-write (RMW) at all (good both for performance
> and for integrity in a power-loss situation).
> Nevertheless, I think sequential access on the array should cause sequential
> access on the disks for any chunk size, as long as the read/write block size
> is an exact multiple of (numdisks-1) * chunk size.
> 
> Is there any explanation for the seeks and how do I get rid of them?
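
The setup described above can be reproduced with something like the
following (an untested sketch; the device names and dd sizes are examples,
not taken from the original report):

    # Create a 5-disk RAID5 with a 4KiB chunk: 4 data disks x 4KiB chunk
    # = 16KiB full stripe, matching the default btrfs nodesize.
    mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=4 /dev/sd[b-f]

    # Sequential write and read directly on the array. O_DIRECT bypasses
    # the page cache so the figures reflect the disks rather than RAM;
    # bs=1M is an exact multiple of the 16KiB full stripe.
    dd if=/dev/zero of=/dev/md0 bs=1M count=4096 oflag=direct
    dd if=/dev/md0 of=/dev/null bs=1M count=4096 iflag=direct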
> 
After creating the arrays did you wait for them to finish syncing? The
array is created in degraded mode initially and then rebuilds onto the
additional disk (this is the fastest way to do things, unless you know
the disks are all zeroed initially). Until this rebuild is complete, it will
be competing with any other disk activity.
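
To check whether the initial resync is still running, and to wait for it
before benchmarking, something along these lines should work (a sketch;
md0 and the device list are just examples):

    # Show resync/recovery progress for all md arrays.
    cat /proc/mdstat

    # Block until any resync/recovery on /dev/md0 has finished.
    mdadm --wait /dev/md0

    # If the disks really are all zeroed, the initial sync can be skipped
    # at creation time (with non-zeroed disks the parity would be wrong):
    mdadm --create /dev/md0 --level=5 --raid-devices=5 --chunk=4 \
        --assume-clean /dev/sd[b-f]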

Cheers,
    Robin
-- 
     ___        
    ( ' }     |       Robin Hill        <robin@robinhill.me.uk> |
   / / )      | Little Jim says ....                            |
  // !!       |      "He fallen in de water !!"                 |


Thread overview: 7+ messages
2014-10-07  4:44 Bad sequential performance of RAID5 with a lot of disk seeks P. Gautschi
2014-10-07  7:43 ` Robin Hill [this message]
2014-10-07  7:54   ` P. Gautschi
2014-10-07  9:25     ` Robin Hill
2014-10-07 10:36       ` P. Gautschi
2014-10-07 11:05         ` Robin Hill
2014-10-08  9:05 ` XiaoNi
