From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Michael Monnerie <michael.monnerie@is.it-management.at>,
xfs@oss.sgi.com, John Bokma <contact@johnbokma.com>
Subject: Re: 30 TB RAID6 + XFS slow write performance
Date: Sun, 24 Jul 2011 01:14:49 -0500 [thread overview]
Message-ID: <4E2BB859.1050200@hardwarefreak.com> (raw)
In-Reply-To: <20110722231040.GD13963@dastard>
On 7/22/2011 6:10 PM, Dave Chinner wrote:
> On Fri, Jul 22, 2011 at 01:05:14PM -0500, Stan Hoeppner wrote:
>> I've never used a NetApp filer myself. However, that said, I would
>> assume that WAFL is only in play for NFS/CIFS transactions since WAFL is
>> itself a filesystem.
>
> Netapp's website is busted, so here's a cached link:
>
> http://webcache.googleusercontent.com/search?q=cache:9DdO2a16hdIJ:blogs.netapp.com/extensible_netapp/2008/10/what-is-wafl--3.html+netapp+san+wafl&cd=1&hl=en&ct=clnk&source=www.google.com
This is interesting:
http://communities.netapp.com/community/netapp-blogs/dave/blog/2008/12/08/is-wafl-a-filesystem
The author implemented WAFL in two layers. The bottom layer handles
block-level functions, including volume management, dedup, and snapshots;
the top layer functions as multiple file systems, amongst other duties.
> If you can be bothered trolling for that entire series of blog posts
> in the google cache, it's probably a good idea so you can get a
> basic understanding of what WAFL actually is.
It's never a bother to learn something new. :)
>> When exposing LUNs from the same filer to FC and iSCSI hosts I would
>> assume the filer acts just as any other SAN controller would.
>
> It has its own quirks, just like any other FC attached RAID array...
>
>> In this case I would think you'd probably still want to align your
>> XFS filesystem to the underlying RAID stripe from which the LUN
>> was carved.
>
> Which actually matters very little when WAFL sits between the FS and
> the disk, because WAFL uses copy-on-write and stages all its writes
> through NVRAM, so you've got no idea what the alignment of any
> given address in the filesystem maps to, anyway.
Is the NetApp FC/iSCSI attachment performance still competitive for
large file/streaming IO, given that one can't optimize XFS stripe
alignment, and with no indication of where the file fragments are
actually written on the media? Or does it lag behind something like a
roughly equivalent-class Infinite Storage array, or IBM DS?
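For contrast, on a plain RAID array where the geometry *is* known, the
stripe alignment discussed above is set at mkfs time via su/sw. A minimal
sketch, assuming a hypothetical 8+2 RAID6 with a 64 KiB chunk size (the
device path and geometry here are illustrative, not from this thread):

```shell
# Hypothetical geometry: RAID6 over 10 disks = 8 data + 2 parity,
# with a 64 KiB per-disk chunk (stripe unit).
CHUNK_KB=64
DATA_DISKS=8

# su = per-disk chunk size, sw = number of data-bearing disks,
# so a full stripe is su * sw = 512 KiB.
SU="${CHUNK_KB}k"
SW="$DATA_DISKS"

# Print the invocation rather than running it (mkfs would destroy data).
echo "mkfs.xfs -d su=${SU},sw=${SW} /dev/sdX"
```

With WAFL's copy-on-write remapping underneath a LUN, these values have
nothing stable to line up with, which is the point Dave is making.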
--
Stan
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs