From: Stan Hoeppner <stan@hardwarefreak.com>
To: Steve Bergman <sbergman27@gmail.com>
Cc: Linux RAID <linux-raid@vger.kernel.org>
Subject: Re: Is this expected RAID10 performance?
Date: Fri, 07 Jun 2013 18:33:14 -0500	[thread overview]
Message-ID: <51B26DBA.2030009@hardwarefreak.com> (raw)
In-Reply-To: <CAO9HMNG0-hhgj1V+8yVe6F_mqsRAiJgZf+06vo-Pt0BUEYjS0A@mail.gmail.com>

On 6/7/2013 8:54 AM, Steve Bergman wrote:
> This is Scientific Linux (RHEL) 6.4. That's nominally kernel 2.6.32,
> but that doesn't tell one much. The RHEL kernel is the RHEL kernel,
> with features selected from far more recent kernels included, more
> being added with every point release. (e.g., I have the dm-thin target
> available in LVM2.)

Yeah, Red Hat marches to the beat of their own drummer.  Their version
string is meaningless.  They backport features into their "2.6.32" that
depend on infrastructure which only exists in later upstream kernels
and cannot exist in a real 2.6.32, so the core kernel isn't actually
based on 2.6.32 anymore.
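
An easy way to see what is actually in there, regardless of the
version string, is to grep the shipped config.  Using the dm-thin
target you mentioned as the example (the path below assumes a stock
RHEL/SL 6 install; the exact output is illustrative):

  $ grep DM_THIN /boot/config-$(uname -r)
  CONFIG_DM_THIN_PROVISIONING=m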

> I've used both CFQ and Deadline for testing. It doesn't make a
> measurable difference for either the multiple dd's or for the
> single-threaded C/ISAM rebuild. (In fact, deadline, while often better
> for servers, can have problems with mixed sequential/random access
> workloads. At least according to what I've seen over on the PostgreSQL
> lists. It's no surprise that deadline doesn't help my single-threaded
> workload. Also note that deadline has shown itself to be slightly
> superior to noop for SSD's in certain benchmarks.) There's no one size
> fits all answer. Until the particular workload is actually tested, it
> *is* guesswork. I/O scheduling is too complicated for it to be
> otherwise.
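
For anyone wanting to repeat those scheduler tests: the scheduler can
be flipped per device at runtime, no reboot needed (sdX below is a
placeholder for the device under test; the list RHEL 6 prints may
differ):

  $ cat /sys/block/sdX/queue/scheduler
  noop deadline [cfq]
  $ echo deadline > /sys/block/sdX/queue/scheduler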

Caveat:  I'm an XFS user.  CFQ gives subpar to horrible performance
with XFS regardless of workload or hardware, and the upstream XFS
developers haven't supported running on top of CFQ for many years.
See, e.g., the note at the bottom of this FAQ entry:
http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E

Considering that parallel IO is one of the main reasons, if not the
main reason, for using XFS, that speaks volumes about CFQ's suitability
for high performance workloads.
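
If you do run XFS on this box, the usual advice is to set the default
elevator on the kernel boot line rather than per device, e.g. by
appending elevator=deadline in grub.conf (the kernel version and root=
below are just example values):

  kernel /vmlinuz-2.6.32-358.el6.x86_64 ro root=/dev/sda2 elevator=deadline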

> The chipset supports AHCI, but unfortunately it's turned off on the

Wow.  A huge Intel partner castrating their integrated SATA controllers.

> PET310, and the setting is not exposed in the BIOS setup, despite the
> fact that Dell advertises AHCI capability. It would do AHCI if I
> bought one of the optional SAS controllers.

Interesting.  SAS controllers typically don't use the AHCI interface;
in fact I know of none that do.  Dell uses LSI SAS ASICs on their SAS
cards and on motherboard-down server controllers, and those use the
LSI mptsas driver, not AHCI.
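
Easy enough to check which driver the kernel actually bound to your
controller.  On that box I'd expect something like the following, with
the chipset SATA ports falling back to the legacy ata_piix driver
since AHCI is disabled (the exact controller string will differ per
machine):

  $ lspci -k | grep -A 2 -i 'sata\|sas'
  00:1f.2 IDE interface: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA IDE Controller
          Kernel driver in use: ata_piix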

> Since this is an unusual RAID10 situation, and I have plenty of spare
> processor available, I'm going to try RAID5 over the weekend. I've
> never used it. But I'm guessing that parity might come at a lower
> bandwidth cost than mirroring. Should be a fun weekend. :-)

If you're looking for increased performance, look elsewhere.  RMW
latency typically limits RAID5 random write throughput to about 1/3rd
to 1/5th that of RAID10 with the same drive count.  Sequential read may
be slightly faster than vanilla RAID10.  However, as many here are fond
of mentioning, the far layout gets RAID10 sequential read close to that
of pure striping, so RAID10,f2 will be faster than RAID5 all around.
There never has been and never will be a performance advantage for
RAID5, unless you're using SSDs, where RMW latency is effectively zero.
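
The arithmetic behind those numbers, using an illustrative 4-drive
array of ~150 IOPS disks: a sub-stripe random write on RAID5 costs 4
disk IOs (read old data, read old parity, write new data, write new
parity) where RAID10 costs 2 (one per mirror leg), so

  RAID10:  (4 drives * 150 IOPS) / 2 IOs per write = ~300 writes/sec
  RAID5:   (4 drives * 150 IOPS) / 4 IOs per write = ~150 writes/sec

and the extra RMW reads add seek latency on top of that, which is
where the 1/5th worst case comes from.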

> BTW, any recommendations on chunk size?

32KB works well for just about any workload.  Exceptions would be HPC or
media server workloads where you're writing files 10s of GB to TB in
size, especially in parallel.
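
I.e. something along these lines for the far layout discussed above
(device names and count below are placeholders; mdadm's --chunk takes
KiB):

  $ mdadm --create /dev/md0 --level=10 --layout=f2 --chunk=32 \
        --raid-devices=4 /dev/sd[bcde]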

-- 
Stan

