public inbox for linux-xfs@vger.kernel.org
From: Emmanuel Florac <eflorac@intellique.com>
To: Paul Anderson <pha@umich.edu>
Cc: xfs@oss.sgi.com
Subject: Re: XFS/Linux Sanity check
Date: Mon, 2 May 2011 19:13:23 +0200	[thread overview]
Message-ID: <20110502191323.417ef644@harpe.intellique.com> (raw)
In-Reply-To: <BANLkTik4YjSr7-VA+f9Sh+UxvKfFKMy=+w@mail.gmail.com>

On Mon, 2 May 2011 11:47:48 -0400,
Paul Anderson <pha@umich.edu> wrote:

> We are deploying five Dell 810s, 192GiB RAM, 12 core, each with three
> LSI 9200-8E SAS controllers, and three SuperMicro 847 45 drive bay
> cabinets with enterprise grade 2TB drives.

I have very little experience with these controllers. However, I have
a 9212 4i4e (same card generation and same chipset) in testing, and so
far I must say it looks like _utter_ _crap_. The performance is abysmal
(it's been busy rebuilding a 20TB array for... 6 days!); the server
regularly freezes and crashes for no reason (it's a pure dev system
with virtually zero load and zero I/O); and there has been plenty of
filesystem corruption. I'm running a plain vanilla 2.6.32.25 64-bit
kernel that poses no problem whatsoever with any other configuration.
 
> In isolated testing, I see around 5GiBytes/second raw (135 parallel dd
> reads), and with a benchmark test of 10 simultaneous 64GiByte dd
> commands, I can see just shy of 2 GiBytes/second reading, and around
> 1.4GiBytes/second writing through XFS.   The benchmark is crude, but
> fairly representative of our expected use.

I don't understand why there's such a gap between the raw and the XFS
numbers. XFS generally delivers 90% or more of raw performance.
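For what it's worth, a crude parallel-dd run like the one quoted can be
sketched as below. The paths and sizes here are small stand-ins so it
runs anywhere; point TARGETS at the raw devices (/dev/sdX) for the raw
number, then at big files on the XFS mount for the filesystem number:

```shell
# Stand-in scratch files; replace with raw devices or large files
# on the XFS mount for a real raw-vs-filesystem comparison.
TARGETS="/tmp/bench1 /tmp/bench2"

# Create the scratch files (skip this step when reading real devices).
for t in $TARGETS; do
    dd if=/dev/zero of="$t" bs=1M count=64 2>/dev/null
done

# Launch all the reads at once, then wait -- same spirit as the
# 10 simultaneous 64GiB dd commands.
start=$(date +%s)
for t in $TARGETS; do
    dd if="$t" of=/dev/null bs=1M 2>/dev/null &
done
wait
end=$(date +%s)
echo "read $TARGETS in $((end - start))s"
```

Total bytes moved divided by the elapsed time gives the aggregate
throughput; with real devices you'd also want to drop the page cache
first (echo 3 > /proc/sys/vm/drop_caches) so you're not measuring RAM.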
 
> md apparently does not support barriers, so we are badly exposed in
> that manner, I know.  As a test, I disabled write cache on all drives,
> performance dropped by 30% or so, but since md is apparently the
> problem, barriers still didn't work.

Frankly, I'd stay away from md at this array size. I'm pretty sure
you're exploring uncharted territory here. 
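On the write-cache side, this is roughly how I'd inspect each member
drive and the barrier situation (the device name is a placeholder;
hdparm speaks to SATA drives, sdparm to the SAS drives behind a
9200-8E, and the commands are query-only as written):

```shell
DEV=${DEV:-/dev/sda}   # placeholder; check each member drive in turn

# SATA drives: show the volatile write-cache state
# (hdparm -W0 "$DEV" would disable it)
command -v hdparm >/dev/null && hdparm -W "$DEV" || true

# SAS drives: the write cache is the WCE bit in the caching mode page
# (sdparm --clear=WCE "$DEV" would turn it off)
command -v sdparm >/dev/null && sdparm --get=WCE "$DEV" || true

# If the block layer drops barriers, XFS says so at mount time:
dmesg 2>/dev/null | grep -i barrier || true

echo "checked $DEV"
```

If the dmesg grep shows XFS disabling barriers on the md device,
that confirms the exposure: drive caches on with no way to flush
them in order.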

> Ideally, I'd firstly be able to find informed opinions about how I can
> improve this arrangement - we are mildly flexible on RAID controllers,
> very flexible on versions of Linux, etc, and can try other OS's as a
> last resort (but the leading contender here would be "something"
> running ZFS, and though I love ZFS, it really didn't seem to work well
> for our needs).

I can't be sure yet because I plan more testing with this card, but I'd
ditch the LSI controllers for LSI/3ware or Adaptec (or possibly Areca),
stay away from md RAID, and use hardware RAID instead. Call me a
hardware RAID freak, but hardware RAID allows a proper write cache,
for a start (because it has a BBU).

-- 
------------------------------------------------------------------------
Emmanuel Florac     |   Direction technique
                    |   Intellique
                    |	<eflorac@intellique.com>
                    |   +33 1 78 94 84 02
------------------------------------------------------------------------

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 12+ messages
2011-05-02 15:47 XFS/Linux Sanity check Paul Anderson
2011-05-02 17:09 ` Andi Kleen
2011-05-02 17:13 ` Emmanuel Florac [this message]
2011-06-11  1:33   ` FYI: LSI rebuilding; and XFS speed V. raw - hints on maxing out 'dd'....(if not already obvious) Linda Walsh
2011-06-11  9:30     ` Emmanuel Florac
2011-06-11 16:48       ` Linda Walsh
2011-05-03  3:18 ` XFS/Linux Sanity check Dave Chinner
2011-05-03  8:58   ` Michael Monnerie
2011-05-03 16:05   ` Paul Anderson
2011-05-04 10:36     ` Stan Hoeppner
2011-05-04  6:18   ` Stan Hoeppner
2011-05-04  1:10 ` Stan Hoeppner
