linux-raid.vger.kernel.org archive mirror
From: mark delfman <markdelfman@googlemail.com>
To: Asdo <asdo@shiftmail.org>
Cc: linux-raid <linux-raid@vger.kernel.org>
Subject: Re: MD write performance issue
Date: Fri, 16 Oct 2009 16:46:12 +0100	[thread overview]
Message-ID: <66781b10910160846g1ff3e8ccq7c00a05442bb467a@mail.gmail.com> (raw)
In-Reply-To: <4AD84E08.7020807@shiftmail.org>

Quick update on the 2.6.30.x and later kernels, which are still showing reduced MD write performance:


Linux linux-tlfp 2.6.30-vanilla #1 SMP Fri Oct 16 14:22:54 BST 2009
x86_64 x86_64 x86_64 GNU/Linux

RAW: 1.1
XFS: 870 MB/s



Linux linux-tlfp 2.6.31.3-vanilla #1 SMP Fri Oct 16 14:52:09 BST 2009
x86_64 x86_64 x86_64 GNU/Linux

RAW: 1.1
XFS: 920 MB/s


linux-tlfp:/ # uname -a
Linux linux-tlfp 2.6.31.2-vanilla #1 SMP Fri Oct 16 15:44:44 BST 2009
x86_64 x86_64 x86_64 GNU/Linux


RAW: 1.1
XFS: 935 MB/s

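The throughput figures above were presumably gathered with a sequential `dd` write; the exact command is not in the thread, so the following is a sketch only, and the device and mount paths are assumptions. Writing to the raw device destroys its contents, so that variant is shown commented out.

```shell
# Sketch of a sequential-write benchmark like the one reported above.
# /dev/md0 is an assumed array name, not taken from the thread.
# Raw-device write (DESTROYS data, requires root) -- shown for reference only:
#   dd if=/dev/zero of=/dev/md0 bs=1M count=16384 oflag=direct

# Filesystem-level write: conv=fdatasync forces the data to disk before dd
# exits, so the page cache does not inflate the reported speed.
TARGET=${TARGET:-/tmp/md_bench_file}   # on a real test this would sit on the XFS mount
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -1
rm -f "$TARGET"
```

The final stderr line from `dd` reports the achieved throughput, which is the number being compared across kernels above.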

On Fri, Oct 16, 2009 at 11:42 AM, Asdo <asdo@shiftmail.org> wrote:
> mark delfman wrote:
>>
>> After further work we are sure that there is a significant write
>> performance issue with either the Kernel+MD or...
>
> Hm!
> The repeated ups and downs in speed across increasing kernel versions
> are pretty strange.
>
> Have you checked:
> - that the compile options are the same (preferably by taking the 2.6.31
> compile options and porting them down)
> - that the disk schedulers are the same
> - that the test ran long enough to level out jitter, e.g. 2-3 minutes
> Also: watching "iostat -x 1" during the transfer could show something...
>
> Apart from this, I can confirm that in my earlier 2.6.31-rc? tests,
> performance on xfs writes was very inconsistent.
> These were my benchmarks (I wrote them down at the time):
>
> stripe_cache_size was 1024, on a 13-device RAID-5:
>
> bs=1M -> 206MB/s
> bs=256K -> 229MB/s
>
> retrying soon after, identical settings:
>
> bs=1M -> 129MB/s
> bs=256K -> 140MB/s
>
>
> Transfer speed was hence very unreliable, depending on something that is
> not clearly user-visible... maybe the dirty page cache? I thought that,
> depending on the exact amount of data pushed out by pdflush on the first
> round, a sequence of read-modify-write operations could be triggered,
> causing further read-modify-writes and further instability later on. But
> I was doing that with RAID-5, while you, Mark, are using RAID-0, right?
> My theory doesn't hold for RAID-0.
>
>
>


Thread overview: 3 messages
2009-10-16  8:45 MD write performance issue mark delfman
2009-10-16 10:42 ` Asdo
2009-10-16 15:46   ` mark delfman [this message]
