public inbox for linux-xfs@vger.kernel.org
From: Linda Walsh <xfs@tlinx.org>
To: Emmanuel Florac <eflorac@intellique.com>
Cc: Paul Anderson <pha@umich.edu>, xfs@oss.sgi.com
Subject: FYI: LSI rebuilding; and XFS speed V. raw - hints on maxing out 'dd'....(if not already obvious)...
Date: Fri, 10 Jun 2011 18:33:08 -0700	[thread overview]
Message-ID: <4DF2C5D4.6030507@tlinx.org> (raw)
In-Reply-To: <20110502191323.417ef644@harpe.intellique.com>



Emmanuel Florac wrote:
> Le Mon, 2 May 2011 11:47:48 -0400
> Paul Anderson <pha@umich.edu> écrivait:
> 
>> We are deploying five Dell 810s, 192GiB RAM, 12 core, each with three
>> LSI 9200-8E SAS controllers, and three SuperMicro 847 45 drive bay
>> cabinets with enterprise grade 2TB drives.
> 
> I have very little experience with these RAID controllers. However I
> have a 9212 4i4e (same card generation and same chipset) in test, and so
> far I must say it looks like _utter_ _crap_. The performance is abysmal
> (it's been busy rebuilding a 20TB array for... 6 days!); the server
----
By default the card only allocates about 20% of its I/O bandwidth to
rebuilds, with the rest reserved for 'real work'.   It's not so smart as
to use 100% when there is no real work...   If you enter the control software
(it runs under X on Linux -- it even displays over Cygwin/X) and raise the
rebuild rate to something like 90%, you'll find your rebuilds go much faster,
but you can expect any real access to the device to suffer accordingly.
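
On the MegaRAID-based cards the same rebuild-rate knob is also reachable from
the command line; a rough sketch, assuming LSI's MegaCli utility is installed
(adapter numbering and exact binary name vary by install -- check your card's
documentation):

```shell
# Show the current rebuild rate (the percentage of controller I/O
# reserved for rebuilds) on all adapters:
MegaCli -AdpGetProp RebuildRate -aALL

# Raise it to 90% so an otherwise-idle array rebuilds much faster;
# expect foreground I/O to suffer while this is in effect:
MegaCli -AdpSetProp RebuildRate -90 -aALL
```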

I have a 9285-8E and have been pretty happy with its performance, but I only
have 10 data disks (2 x 6-disk RAID5 => RAID50) of 2TB SATAs and get about
1GB/s -- roughly what I'd expect from disks that do around 120MB/s each,
with two RAID5 parity calculations in the path...

----

The only other things I can think of when benchmarking XFS for max throughput:

1) A realtime subvolume might be an option (I've never tried it, but thought
I'd mention it).
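
For reference, a realtime subvolume is set up at mkfs/mount time; a rough
sketch, untested by me, with made-up device names:

```shell
# Dedicate a separate device as the XFS realtime subvolume; data extents
# for realtime files go there, while metadata stays on the data device.
mkfs.xfs -r rtdev=/dev/sdc1 /dev/sdb1
mount -o rtdev=/dev/sdc1 /dev/sdb1 /mnt/test

# Mark a directory rtinherit ("t") so new files created in it
# automatically become realtime files:
xfs_io -c "chattr +t" /mnt/test
```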

2) For "dd", if you are testing write performance, try pre-allocating the file
first (filling in the variables):

   xfs_io -f -c "truncate $size" -c "resvsp 0 $size" "$Newfile"

Then test for fragmentation (ideally the file sits in a single extent;
xfs_bmap prints one extent per line):

	xfs_bmap "$Newfile"

If needed, defragment it:

	xfs_fsr "$Newfile"

Then run "dd" with the conv="nocreat,notrunc" flags -- that way the I/O goes
straight into the existing file without dd having to create or allocate it...
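
Putting step 2) together, a sketch of the whole flow (file name and size are
made up, and far too small for a real benchmark; on a non-XFS scratch
filesystem, plain truncate stands in for the xfs_io preallocation):

```shell
FILE=/tmp/ddtest.bin
SIZE=$((16 * 1024 * 1024))   # 16 MiB -- use something much larger for real tests

# Pre-allocate.  On XFS you'd reserve real blocks with:
#   xfs_io -f -c "truncate $SIZE" -c "resvsp 0 $SIZE" "$FILE"
# plain truncate only creates a sparse file, but keeps this runnable anywhere:
truncate -s "$SIZE" "$FILE"

# On XFS, check the extent layout (one extent per line) and defrag if needed:
#   xfs_bmap "$FILE"
#   xfs_fsr  "$FILE"

# Write into the existing file without re-creating or truncating it, so dd
# measures pure write throughput rather than allocation overhead:
dd if=/dev/zero of="$FILE" bs=1M count=16 conv=nocreat,notrunc

rm -f "$FILE"
```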







_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 12+ messages
2011-05-02 15:47 XFS/Linux Sanity check Paul Anderson
2011-05-02 17:09 ` Andi Kleen
2011-05-02 17:13 ` Emmanuel Florac
2011-06-11  1:33   ` Linda Walsh [this message]
2011-06-11  9:30     ` FYI: LSI rebuilding; and XFS speed V. raw - hints on maxing out 'dd'....(if not already obvious) Emmanuel Florac
2011-06-11 16:48       ` Linda Walsh
2011-05-03  3:18 ` XFS/Linux Sanity check Dave Chinner
2011-05-03  8:58   ` Michael Monnerie
2011-05-03 16:05   ` Paul Anderson
2011-05-04 10:36     ` Stan Hoeppner
2011-05-04  6:18   ` Stan Hoeppner
2011-05-04  1:10 ` Stan Hoeppner
