linux-raid.vger.kernel.org archive mirror
From: Pallai Roland <dap@mail.index.hu>
To: Justin Piszcz <jpiszcz@lucidpixels.com>
Cc: Linux RAID Mailing List <linux-raid@vger.kernel.org>
Subject: Re: major performance drop on raid5 due to context switches caused by small max_hw_sectors [partially resolved]
Date: Sun, 22 Apr 2007 16:38:11 +0200	[thread overview]
Message-ID: <200704221638.11837.dap@mail.index.hu> (raw)
In-Reply-To: <Pine.LNX.4.64.0704220742080.14170@p34.internal.lan>

[-- Attachment #1: Type: text/plain, Size: 1334 bytes --]


On Sunday 22 April 2007 13:42:43 Justin Piszcz wrote:
> http://www.rhic.bnl.gov/hepix/talks/041019pm/schoen.pdf
> Check page 13 of 20.
 Thanks, interesting presentation. I'm working in the same area now: big media
files and many clients. I spent a few days building a low-cost, high-performance
server, and in my experience some of the results in that presentation no longer
apply to recent kernels.

 It's off-topic for this thread, sorry, but I'd like to show off what can be
done with Linux! :)

ASUS P5B-E Plus, Pentium 4 641, 1024MB RAM; 6 disks on the 965P's south bridge,
1 disk on the JMicron controller (both driven by the AHCI driver), 1 disk on a
Silicon Image 3132, 8 disks on a HighPoint HPT2320 (HighPoint's driver). 16x
Seagate 500GB, 16MB cache.
 kernel 2.6.20.3
 anticipatory I/O scheduler
 chunk size 64KB
 XFS file system
 file size is 400MB; I read 200 of them in each test
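
 Roughly, putting such an array together looks something like this (just a
sketch; the device names /dev/sd[b-q] and /dev/md0 and the mount point are
assumptions, not the exact ones from my box):

  # 16-disk RAID5 with 64KB chunks, XFS on top
  mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=16 /dev/sd[b-q]
  echo anticipatory > /sys/block/sdb/queue/scheduler   # repeat per member disk
  mkfs.xfs /dev/md0
  mount /dev/md0 /mnt/media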

 The yellow points mark the thrashing thresholds; I computed them from the
number of reader processes and the RAM size. They're not exact thresholds.
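
 A rough way to estimate it, assuming the readahead of all active streams
simply has to fit in the page cache at once (a back-of-the-envelope sketch,
not an exact formula):

  # rough thrashing threshold: per-stream readahead that still fits
  # when all 200 readers are active on a 1024MB box
  RAM_KB=$((1024 * 1024))
  READERS=200
  echo "per-stream readahead limit ~ $((RAM_KB / READERS)) KB"   # ~5242 KB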

- now see the attached picture :)

 Awesome performance, near disk-platter speed with a big readahead! It's about
15% better still if I use the -mm tree with the new adaptive readahead! Bigger
files and a bigger chunk size would also help, but in my case those are fixed
(unfortunately).

 The rule for readahead size is simple: the more, the better, as long as there
is no thrashing.
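
 For example, assuming the array shows up as /dev/md0, the readahead can be
checked and raised with blockdev (the value is in 512-byte sectors):

  blockdev --getra /dev/md0       # current readahead, in 512-byte sectors
  blockdev --setra 8192 /dev/md0  # 8192 * 512 bytes = 4MB, under the ~5MB limit above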


--
 d


[-- Attachment #2: r1.png --]
[-- Type: image/png, Size: 8277 bytes --]


Thread overview: 15+ messages
2007-04-20 21:06 major performance drop on raid5 due to context switches caused by small max_hw_sectors Pallai Roland
     [not found] ` <5d96567b0704202247s60e4f2f1x19511f790f597ea0@mail.gmail.com>
2007-04-21 19:32   ` major performance drop on raid5 due to context switches caused by small max_hw_sectors [partially resolved] Pallai Roland
2007-04-22  0:18     ` Justin Piszcz
2007-04-22  0:42       ` Pallai Roland
2007-04-22  8:47         ` Justin Piszcz
2007-04-22  9:52           ` Pallai Roland
2007-04-22 10:23             ` Justin Piszcz
2007-04-22 11:38               ` Pallai Roland
2007-04-22 11:42                 ` Justin Piszcz
2007-04-22 14:38                   ` Pallai Roland [this message]
2007-04-22 14:48                     ` Justin Piszcz
2007-04-22 15:09                       ` Pallai Roland
2007-04-22 15:53                         ` Justin Piszcz
2007-04-22 19:01                           ` Mr. James W. Laferriere
2007-04-22 20:35                             ` Justin Piszcz
