public inbox for linux-xfs@vger.kernel.org
From: Dave Chinner <david@fromorbit.com>
To: Michael Monnerie <michael.monnerie@is.it-management.at>
Cc: Angelo McComis <angelo@mccomis.com>, xfs@oss.sgi.com
Subject: Re: XFS use within multi-threaded apps
Date: Mon, 25 Oct 2010 10:08:11 +1100	[thread overview]
Message-ID: <20101024230811.GL12506@dastard> (raw)
In-Reply-To: <201010242022.46693@zmi.at>

On Sun, Oct 24, 2010 at 08:22:46PM +0200, Michael Monnerie wrote:
> On Samstag, 23. Oktober 2010 Angelo McComis wrote:
> > They quoted having 10+TB databases running OLTP on EXT3 with
> > 4-5GB/sec sustained throughput (not XFS).
> 
> Which servers and storage are these? This is nothing you can do with 
> "normal" storage. An 8Gb/s Fibre Channel link gives about 1GB/s, if 
> you can do full-speed I/O. So you'd need at least 5 parallel Fibre 
> Channel storage arrays running without any overhead. A single server 
> can't do rates that high either, so there must be several front-end 
> servers. That in turn means their database must be specially organised 
> for that type of load (shared-nothing or similar).
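
As a quick sanity check of the bandwidth arithmetic quoted above (my own back-of-the-envelope sketch, not from the original mail; the ~1GB/s-per-link figure ignores FC encoding overhead):

```python
import math

# Nominal 8Gb/s Fibre Channel: 8 bits per byte -> ~1GB/s per link,
# ignoring 8b/10b encoding overhead, as the quoted text does.
link_gbit = 8
link_gbyte = link_gbit / 8          # ~1.0 GB/s per link

# Middle of the claimed 4-5GB/s sustained throughput range.
target_gbyte = 4.5

links_needed = target_gbyte / link_gbyte
print(math.ceil(links_needed))      # -> 5 parallel FC links, minimum
```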

Have a look at IBM's TPC-C submission here on RHEL5.2:

http://www.tpc.org/tpcc/results/tpcc_result_detail.asp?id=108081902

That's got 8x 4Gb FC connections to 40 storage arrays with 1920 disks
behind them. It uses 80x 24-disk raid0 luns, with each lun split
into 12 data partitions on the outer edge of the lun. That gives
960 data partitions for the benchmark.

Now, this result uses raw devices for this specific benchmark, but
it could just as easily use files in ext3 filesystems. With 960 ext3
filesystems, you could easily max out the 3.2GB/s of IO that sucker
has, as that works out to less than 4MB/s per filesystem.
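
The per-filesystem arithmetic can be sketched like so (my numbers, derived from the figures in this mail rather than taken from the TPC report itself):

```python
# Aggregate bandwidth: 8 links of 4Gb/s FC, at the commonly quoted
# ~400MB/s usable per 4Gb link -> 3.2GB/s total.
fc_links = 8
gbyte_per_link = 0.4
aggregate_gbs = fc_links * gbyte_per_link       # 3.2 GB/s

# 80 raid0 luns, each split into 12 data partitions.
luns = 80
partitions_per_lun = 12
partitions = luns * partitions_per_lun          # 960 data partitions

# Per-filesystem share if each partition held one ext3 filesystem.
per_fs_mbs = aggregate_gbs * 1024 / partitions
print(partitions, round(per_fs_mbs, 1))         # -> 960 3.4
```

So each of the 960 filesystems only needs to sustain roughly 3.4MB/s to saturate the links, which is why a single-filesystem throughput claim would be a very different result.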

So I'm pretty sure IBM are not quoting a single-filesystem
throughput result. While you could get that sort of result from a
single filesystem with XFS, I think it's an order of magnitude
higher than a single ext3 filesystem can achieve....

Cheers,

Dave.
-- 
Dave Chinner
david@fromorbit.com

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs


Thread overview: 11+ messages
2010-10-18 13:42 XFS use within multi-threaded apps Angelo McComis
2010-10-19  1:12 ` Dave Chinner
2010-10-19  4:24 ` Stewart Smith
2010-10-20 12:00   ` Angelo McComis
2010-10-23 19:56     ` Peter Grandi
2010-10-23 20:59       ` Angelo McComis
2010-10-23 21:01         ` Angelo McComis
2010-10-24  2:13           ` Stan Hoeppner
2010-10-24 18:22           ` Michael Monnerie
2010-10-24 23:08             ` Dave Chinner [this message]
2010-10-25  3:12               ` Stan Hoeppner
