linux-raid.vger.kernel.org archive mirror
From: Justin Piszcz <jpiszcz@lucidpixels.com>
To: Pallai Roland <dap@mail.index.hu>
Cc: Linux RAID Mailing List <linux-raid@vger.kernel.org>
Subject: Re: major performance drop on raid5 due to context switches caused by small max_hw_sectors [partially resolved]
Date: Sun, 22 Apr 2007 10:48:11 -0400 (EDT)	[thread overview]
Message-ID: <Pine.LNX.4.64.0704221047570.21579@p34.internal.lan> (raw)
In-Reply-To: <200704221638.11837.dap@mail.index.hu>



On Sun, 22 Apr 2007, Pallai Roland wrote:

>
> On Sunday 22 April 2007 13:42:43 Justin Piszcz wrote:
>> http://www.rhic.bnl.gov/hepix/talks/041019pm/schoen.pdf
>> Check page 13 of 20.
> Thanks, an interesting presentation. I'm working in the same area now: big media
> files and many clients. I spent a few days building a low-cost, high-
> performance server. In my experience, some of the results in this
> presentation can't be applied to recent kernels.
>
> It's off-topic in this thread, sorry, but I'd like to show off what can be done
> with Linux! :)
>
> ASUS P5B-E Plus, P4 641, 1024MB RAM, 6 disks on the 965P's south bridge, 1 disk on
> JMicron (both driven by the AHCI driver), 1 disk on Silicon Image 3132, 8 disks
> on HPT2320 (HPT's driver). 16x Seagate 500GB, 16MB cache.
> kernel 2.6.20.3
> anticipatory scheduler
> chunk size 64KB
> XFS file system
> file size is 400MB; I read 200 of them in each test
>
> The yellow points mark thrashing thresholds, which I computed from the
> process count and the RAM size. They are not exact thresholds.
>
> - now see the attached picture :)
>
> Awesome performance, near disk-platter speed with a big readahead! It's even
> better, by ~15%, if I use the -mm tree with the new adaptive readahead! Bigger
> files and a bigger chunk size also help, but in my case those are fixed
> (unfortunately).
>
> The rule for readahead size is simple: the bigger the better, as long as there's no thrashing.
>
>
> --
> d
>
>
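[Editor's note: the thrashing threshold Pallai computes "from the process count and RAM size" is not spelled out in the thread. A minimal sketch of one plausible model, assuming each concurrent reader keeps roughly its readahead window resident in the page cache, so the per-stream readahead is bounded by available cache RAM divided by the number of readers. The `cache_fraction` parameter and the formula itself are illustrative assumptions, not Pallai's exact calculation.]

```python
# Rough model of the readahead thrashing threshold described above:
# each of N concurrent readers keeps ~readahead_kb of data resident,
# so total readahead across all streams must fit in the RAM available
# to the page cache. This is an illustration, not the thread's formula.

def max_readahead_kb(ram_mb, nprocs, cache_fraction=0.8):
    """Largest per-stream readahead (KiB) before cache thrashing."""
    cache_kb = ram_mb * 1024 * cache_fraction  # RAM usable as page cache
    return int(cache_kb / nprocs)

# e.g. 1024 MB RAM and 200 reader processes, as in the test above:
print(max_readahead_kb(1024, 200))  # -> 4194
```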

Have you also optimized your stripe cache for writes?
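[Editor's note: the stripe cache Justin refers to is tuned through md's `stripe_cache_size` sysfs attribute (e.g. `echo 8192 > /sys/block/md0/md/stripe_cache_size`; the device name `md0` is illustrative). Each cache entry pins one page per member disk, so its memory cost can be estimated as sketched below.]

```python
# Memory consumed by the md raid5/raid6 stripe cache:
# stripe_cache_size entries, each holding one PAGE_SIZE buffer per
# member disk of the array.
PAGE_SIZE = 4096  # bytes; typical page size on x86

def stripe_cache_bytes(stripe_cache_size, nr_disks):
    """Approximate RAM used by the stripe cache, in bytes."""
    return stripe_cache_size * PAGE_SIZE * nr_disks

# e.g. 8192 entries on the 16-disk array discussed above:
print(stripe_cache_bytes(8192, 16) // (1024 * 1024))  # -> 512 (MiB)
```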


Thread overview: 15+ messages
2007-04-20 21:06 major performance drop on raid5 due to context switches caused by small max_hw_sectors Pallai Roland
     [not found] ` <5d96567b0704202247s60e4f2f1x19511f790f597ea0@mail.gmail.com>
2007-04-21 19:32   ` major performance drop on raid5 due to context switches caused by small max_hw_sectors [partially resolved] Pallai Roland
2007-04-22  0:18     ` Justin Piszcz
2007-04-22  0:42       ` Pallai Roland
2007-04-22  8:47         ` Justin Piszcz
2007-04-22  9:52           ` Pallai Roland
2007-04-22 10:23             ` Justin Piszcz
2007-04-22 11:38               ` Pallai Roland
2007-04-22 11:42                 ` Justin Piszcz
2007-04-22 14:38                   ` Pallai Roland
2007-04-22 14:48                     ` Justin Piszcz [this message]
2007-04-22 15:09                       ` Pallai Roland
2007-04-22 15:53                         ` Justin Piszcz
2007-04-22 19:01                           ` Mr. James W. Laferriere
2007-04-22 20:35                             ` Justin Piszcz
