public inbox for linux-xfs@vger.kernel.org
From: Michael Monnerie <michael.monnerie@is.it-management.at>
To: xfs@oss.sgi.com
Cc: Stan Hoeppner <stan@hardwarefreak.com>,
	John Bokma <contact@johnbokma.com>
Subject: Re: 30 TB RAID6 + XFS slow write performance
Date: Sun, 24 Jul 2011 10:47:14 +0200	[thread overview]
Message-ID: <201107241047.19745@zmi.at> (raw)
In-Reply-To: <4E2BB859.1050200@hardwarefreak.com>



On Sonntag, 24. Juli 2011 Stan Hoeppner wrote:
> Is the NetApp FC/iSCSI attachment performance still competitive for
> large file/streaming IO, given that one can't optimize XFS stripe
> alignment, and with no indication of where the file fragments are
> actually written on the media?  Or does it lag behind something like
> a roughly equivalent class Infinite Storage array, or IBM DS?

I can't speak to the performance difference. But I'd like to explain two 
fundamental differences from all other storage arrays:

1) WAFL *never* overwrites an existing block. Whenever there's a write to 
an existing block, that block is instead written to a new location, and 
afterwards the old block is remapped to the new one. This is a key factor 
in keeping performance up when using snapshots and deduplication.
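To illustrate the idea (a toy sketch in Python, not actual WAFL/ONTAP code; all names here are made up for illustration):

```python
# Toy copy-on-write block store: a logical block is never overwritten
# in place. New data always goes to a fresh physical location, and the
# block map is updated afterwards. The old physical copy survives,
# which is what makes snapshots cheap.

class CowStore:
    def __init__(self):
        self.physical = []   # append-only "physical" blocks
        self.block_map = {}  # logical block number -> physical index

    def write(self, lbn, data):
        self.physical.append(data)                    # write to a new location
        self.block_map[lbn] = len(self.physical) - 1  # remap afterwards

    def read(self, lbn):
        return self.physical[self.block_map[lbn]]

store = CowStore()
store.write(0, b"v1")
snapshot = dict(store.block_map)  # a snapshot is just a copy of the map
store.write(0, b"v2")             # does NOT touch the old physical block
```

After the second write, `store.read(0)` returns the new data, while the snapshot's map still points at the untouched old block.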

2) WAFL never does small or random writes. All writes are collected in 
NVRAM and then written out as one large sequential write; a full stripe 
is always written.
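Again as a toy sketch (not NetApp code; the stripe width and all names are assumptions for illustration): small writes accumulate in a buffer standing in for NVRAM, and the backend only ever sees full-stripe flushes.

```python
# Toy write coalescer: incoming small writes are buffered ("NVRAM")
# and only pushed to "disk" once a full stripe's worth has accumulated,
# so the disks see one large sequential write instead of many small
# random ones -- avoiding the RAID-6 read-modify-write penalty.

STRIPE_BLOCKS = 8  # assumed stripe width, for illustration only

class WriteCoalescer:
    def __init__(self):
        self.nvram = []        # pending small writes
        self.disk_writes = []  # each entry is one full-stripe write

    def write(self, data):
        self.nvram.append(data)
        if len(self.nvram) >= STRIPE_BLOCKS:
            self.flush()

    def flush(self):
        # one large sequential write of a complete stripe
        self.disk_writes.append(list(self.nvram))
        self.nvram.clear()

c = WriteCoalescer()
for i in range(20):
    c.write(i)
```

Here 20 small writes become just two full-stripe writes to disk, with the last 4 blocks still pending in the buffer.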

That means that for workloads with lots of small random writes, NetApp 
arrays beat the hell out of the disks compared to other storage.
I can't tell for large sequential writes, though; I don't have such a 
workload.

-- 
with kind regards,
Michael Monnerie, Ing. BSc

it-management Internet Services: Protéger
http://proteger.at [pronounced: Prot-e-schee]
Tel: +43 660 / 415 6531

// House for sale: http://zmi.at/langegg/



Thread overview: 17+ messages
2011-07-18 19:58 30 TB RAID6 + XFS slow write performance John Bokma
2011-07-19  0:00 ` Eric Sandeen
2011-07-19  8:37 ` Emmanuel Florac
2011-07-19 22:37   ` Stan Hoeppner
2011-07-20  0:20     ` Dave Chinner
2011-07-20  5:16       ` Stan Hoeppner
2011-07-20  6:44         ` Dave Chinner
2011-07-20 12:10           ` Stan Hoeppner
2011-07-20 14:04             ` Michael Monnerie
2011-07-20 23:01               ` Dave Chinner
2011-07-21  6:19                 ` Michael Monnerie
2011-07-21  6:48                   ` Dave Chinner
2011-07-22  6:10                     ` Michael Monnerie
2011-07-22 18:05                       ` Stan Hoeppner
2011-07-22 23:10                         ` Dave Chinner
2011-07-24  6:14                           ` Stan Hoeppner
2011-07-24  8:47                             ` Michael Monnerie [this message]
