From: Dave Chinner <david@fromorbit.com>
To: Stan Hoeppner <stan@hardwarefreak.com>
Cc: Michael Monnerie <michael.monnerie@is.it-management.at>,
John Bokma <contact@johnbokma.com>,
xfs@oss.sgi.com
Subject: Re: 30 TB RAID6 + XFS slow write performance
Date: Sat, 23 Jul 2011 09:10:40 +1000 [thread overview]
Message-ID: <20110722231040.GD13963@dastard> (raw)
In-Reply-To: <4E29BBDA.3000603@hardwarefreak.com>
On Fri, Jul 22, 2011 at 01:05:14PM -0500, Stan Hoeppner wrote:
> On 7/22/2011 1:10 AM, Michael Monnerie wrote:
>
> > Yes, I just wanted to know about the corner cases, and how XFS behaves.
> > Actually, we're changing over to using NetApps, and with their WAFL
> > anyway I should drop all su/sw usage and just use 4KB blocks.
>
> I've never used a NetApp filer myself. However, that said, I would
> assume that WAFL is only in play for NFS/CIFS transactions since WAFL is
> itself a filesystem.
Netapp's website is busted, so here's a cached link:
http://webcache.googleusercontent.com/search?q=cache:9DdO2a16hdIJ:blogs.netapp.com/extensible_netapp/2008/10/what-is-wafl--3.html+netapp+san+wafl&cd=1&hl=en&ct=clnk&source=www.google.com
"The point is that WAFL is the part of the code that provides the
'read or write from-disk' mechanisms to both NFS and CIFS and SAN.
The semantics of how the blocks are accessed are provided by
higher level code not by WAFL, which means WAFL is not a file
system."
If you can be bothered trolling for that entire series of blog posts
in the google cache, it's probably a good idea so you can get a
basic understanding of what WAFL actually is.
> When exposing LUNs from the same filer to FC and iSCSI hosts I would
> assume the filer acts just as any other SAN controller would.
It has its own quirks, just like any other FC attached RAID array...
> In this case I would think you'd probably still want to align your
> XFS filesystem to the underlying RAID stripe from which the LUN
> was carved.
Which actually matters very little when WAFL sits between the FS
and the disk, because WAFL uses copy-on-write and stages all its
writes through NVRAM, so you've got no idea what the alignment of
any given address in the filesystem maps to, anyway.
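For context on what su/sw mean on a conventional (non-WAFL) RAID6 array, here is a minimal sketch of deriving them; the 10-drive array with a 64KiB chunk size is a hypothetical example, not the poster's actual hardware:

```shell
# Hypothetical RAID6 array: 10 drives, 64KiB chunk (stripe unit).
chunk_kib=64
ndrives=10
ndata=$((ndrives - 2))   # RAID6 spends two drives' worth on parity

# su = per-disk chunk size, sw = number of data disks in the stripe
echo "su=${chunk_kib}k sw=${ndata}"

# The corresponding mkfs invocation would look like (illustrative
# device name, do not run as-is):
echo "mkfs.xfs -d su=${chunk_kib}k,sw=${ndata} /dev/sdX"
```

The point of the thread is that behind a WAFL-backed LUN this arithmetic buys you nothing, since copy-on-write placement breaks any fixed mapping from filesystem offsets to disk stripes.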
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 17+ messages
2011-07-18 19:58 30 TB RAID6 + XFS slow write performance John Bokma
2011-07-19 0:00 ` Eric Sandeen
2011-07-19 8:37 ` Emmanuel Florac
2011-07-19 22:37 ` Stan Hoeppner
2011-07-20 0:20 ` Dave Chinner
2011-07-20 5:16 ` Stan Hoeppner
2011-07-20 6:44 ` Dave Chinner
2011-07-20 12:10 ` Stan Hoeppner
2011-07-20 14:04 ` Michael Monnerie
2011-07-20 23:01 ` Dave Chinner
2011-07-21 6:19 ` Michael Monnerie
2011-07-21 6:48 ` Dave Chinner
2011-07-22 6:10 ` Michael Monnerie
2011-07-22 18:05 ` Stan Hoeppner
2011-07-22 23:10 ` Dave Chinner [this message]
2011-07-24 6:14 ` Stan Hoeppner
2011-07-24 8:47 ` Michael Monnerie