From: Emmanuel Florac <eflorac@intellique.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Dave Hall <kdhall@binghamton.edu>,
stan@hardwarefreak.com, "xfs@oss.sgi.com" <xfs@oss.sgi.com>
Subject: Re: XFS/LVM/Multipath on a single RAID volume
Date: Wed, 25 Feb 2015 12:49:46 +0100
Message-ID: <20150225124946.1784b9ca@harpe.intellique.com>
In-Reply-To: <20150224223344.GE4251@dastard>
On Wed, 25 Feb 2015 09:33:44 +1100,
Dave Chinner <david@fromorbit.com> wrote:
> > On an existing array based on similar but slightly slower hardware,
> > I'm getting miserable performance. The bottleneck seems to be on
> > the server side. For specifics, the array is laid out as a single
> > 26TB volume and attached by a single 3Gbps SAS.
>
> So, 300MB/s max throughput.
>
Ah yes, maybe external RAID controllers can only use one SAS channel
out of the 4 available; that would definitely limit performance badly.
This limitation doesn't apply to internal RAID controllers (Adaptec,
LSI, Areca) driving a JBOD, though.
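The 300 MB/s ceiling quoted above follows directly from the link
arithmetic. A minimal sketch, assuming a 3 Gbit/s SAS 1.0 lane with
8b/10b line coding and a four-lane wide port (lane count and encoding
are assumptions, not stated in the original thread):

```python
# Back-of-the-envelope check of the "3Gbps SAS -> 300MB/s" figure.
# SAS 1.0 signals at 3 Gbit/s per lane and uses 8b/10b line coding,
# so only 8 of every 10 transmitted bits carry payload.

line_rate_bps = 3e9       # SAS 1.0 per-lane signalling rate
payload_ratio = 8 / 10    # 8b/10b encoding overhead
lanes = 4                 # a typical "x4" SAS wide port

per_lane_mb_s = line_rate_bps * payload_ratio / 8 / 1e6
wide_port_mb_s = per_lane_mb_s * lanes

print(f"single lane:  {per_lane_mb_s:.0f} MB/s")    # -> 300 MB/s
print(f"x4 wide port: {wide_port_mb_s:.0f} MB/s")   # -> 1200 MB/s
```

This is why using only one of the four lanes is such a severe
bottleneck: the same enclosure wired as a wide port would have roughly
four times the bandwidth.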
A short digression on external storage enclosures: they're mostly
useful for providing redundant controllers. If you're using only one
controller, cheap ones (such as Infortrend, Promise, and the like) will
always perform poorly compared to a modern PCIe RAID controller.
High-end storage enclosures (DotHill, NetApp, etc.) with high-bandwidth
attachments (FC or IB) provide better performance AND redundancy, but
at a hefty price.
So if you want fast, cheap arrays, definitely use an Adaptec/LSI/Areca
controller and a simple JBOD chassis like Supermicro's.
--
------------------------------------------------------------------------
Emmanuel Florac | Direction technique
| Intellique
| <eflorac@intellique.com>
| +33 1 78 94 84 02
------------------------------------------------------------------------
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs