public inbox for linux-xfs@vger.kernel.org
From: "Bryan J. Smith" <thebs413@yahoo.com>
To: Ralf Gross <Ralf-Lists@ralfgross.de>
Cc: linux-xfs@oss.sgi.com
Subject: Re: mkfs options for a 16x hw raid5 and xfs (mostly large files)
Date: Tue, 25 Sep 2007 09:28:29 -0700 (PDT)	[thread overview]
Message-ID: <152219.84729.qm@web32906.mail.mud.yahoo.com> (raw)
In-Reply-To: <20070925160737.GC20499@p15145560.pureserver.info>

Ralf Gross <Ralf-Lists@ralfgross.de> wrote:
> The hardware is fixed to one PCI-X FC HBA (4Gb) and two 48x shelfs.
> The performance I get with this setup is ok for us. The data will
> be stored in bunches of multiple TB. Only few clients will access
> the data, maybe 5-10 clients at the same time.

If raw performance is your ultimate goal, the closer you are to the
hardware, and the less overhead in the protocol, the better.

Direct SATA channels (software RAID-10), or taking advantage of the
3Ware ASIC+SRAM (hardware RAID-10), is ideal.  I've put in a setup
myself that used three 3Ware Escalade 9550SX cards on three different
PCI-X channels, and then striped RAID-0 across all three volumes (I
found little difference between using the OS LVM or the 3Ware manager
for the RAID-0 stripe across volumes).
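
The multi-controller stripe described above can be sketched with LVM;
the device names and stripe size below are illustrative assumptions,
not details of the actual setup:

```sh
# Illustrative only: /dev/sdb..sdd stand in for the three volumes the
# 3Ware controllers export; pick a stripe size to suit your workload.
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate vg_stripe /dev/sdb /dev/sdc /dev/sdd
# -i 3 = stripe across all three PVs, -I 256 = 256 KiB stripe size
lvcreate -i 3 -I 256 -l 100%FREE -n lv_data vg_stripe
```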

Using a buffered RAID-5 hardware solution is not going to get you the
best latency or direct DTR, if those are what matter.  In most cases
they do not, depending on your application.
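
Since the thread is about mkfs options for a 16-drive hardware
RAID-5, here is a hedged sketch of deriving the XFS alignment values;
the 64 KiB chunk size is an assumption, not the controller's actual
setting -- check yours before running mkfs:

```shell
# Assumed geometry (NOT taken from the actual controller config):
DISKS=16           # drives in the RAID-5 set
CHUNK_KB=64        # controller chunk (stripe unit) per drive, in KiB
SW=$((DISKS - 1))  # RAID-5 loses one drive to parity: 15 data disks
# su = controller chunk size, sw = number of data disks
echo "mkfs.xfs -d su=${CHUNK_KB}k,sw=${SW} /dev/sdX1"
```

This prints the command rather than running it, since /dev/sdX1 is a
placeholder.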

> I always use SW-RAID for RAID0 and RAID1. But for RAID 5/6 I choose
> either external arrays or internal controllers (Areca).

Areca is the Intel IOP + firmware.  Intel's XScale storage
processing engines (SPEs) seem to best 3Ware's AMCC PowerPC engine. 
The off-load is massive when I/O is an issue.  Unfortunately, I still
find I prefer 3Ware's firmware and software support in Linux over
Areca's, and Intel clearly does not have the dedication to addressing
issues that 3Ware does (just like back in the IOP30x/i960 days,
sigh).

To me, support is key.  I've yet to drop a 3Ware volume myself.  The
only people who seem to drop a volume are typically using 3Ware in
JBOD mode, or are "early adopters" of new products.  I don't care if
it's hardware or software, "early adoption" of anything is just not
worth it.  I'd rather have reduced performance for "peace of mind." 
3Ware has a solid history on Linux, and seven years of my own
experience back that up.**

[ **NOTE:  Don't get me started.  The common "proprietary" or
"hardware reliance" argument doesn't hold, because 3Ware's volume
upward compatibility is proven (I've moved volumes of ATA 6000 to
7000 series, SATA 8000 to 9000, etc...), and they have shared the
data organization so you can read them with dmraid as well.  I.e.,
you can always fall back to reading your data off a 3Ware volume with
dmraid these days.  I've also _never_ had an "ATA timeout" issue with
3Ware cards, because 3Ware updates its firmware regularly to "deal"
with troublesome [S]ATA drives.  That has bitten me far too many
times in Linux with direct [S]ATA -- not Linux's fault, just the
fault of hardware [S]ATA PHY chips and their on-drive IDE firmware,
something 3Ware has mitigated for me time and time again. ]
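
The dmraid fallback mentioned above looks roughly like this; the
output and set names depend on the metadata dmraid finds, so treat
it as a sketch:

```sh
dmraid -r    # list block devices carrying recognized RAID metadata
dmraid -s    # summarize the discovered RAID sets
dmraid -ay   # activate all sets as /dev/mapper/ devices
```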

I'm completely biased, though:  I assemble file and database servers,
not web or other CPU-bound systems.  Turning my system interconnect
(not the CPU -- a PC CPU crunches XOR very fast) into a bottlenecked
PIO operation is not ideal for NFS writes or large-record SQL commits
in my experience.  Heck, one look at NetApp's volume w/NVRAM and
SPE-accelerated RAID-4 designs will quickly change your opinion as
well (and make you wonder if they aren't worth the cost at times ;).


-- 
Bryan J. Smith   Professional, Technical Annoyance
b.j.smith@ieee.org    http://thebs413.blogspot.com
--------------------------------------------------
     Fission Power:  An Inconvenient Solution

