From: Dave Chinner <david@fromorbit.com>
To: Stan Hoeppner <stan@hardwarefreak.com>
Cc: Paul Anderson <pha@umich.edu>,
Christoph Hellwig <hch@infradead.org>, xfs-oss <xfs@oss.sgi.com>
Subject: Re: I/O hang, possibly XFS, possibly general
Date: Sat, 4 Jun 2011 20:32:47 +1000
Message-ID: <20110604103247.GG561@dastard>
In-Reply-To: <4DE9E97D.30500@hardwarefreak.com>
On Sat, Jun 04, 2011 at 03:14:53AM -0500, Stan Hoeppner wrote:
> On 6/3/2011 10:59 AM, Paul Anderson wrote:
> > Not sure what I can do about the log - the man page says xfs_growfs
> > doesn't implement log moving. I can rebuild the filesystems, but for
> > the one mentioned in this thread, that will take a long time.
>
> See the logdev mount option. Using two mirrored drives was recommended,
> I'd go a step further and use two quality "consumer grade", i.e. MLC
> based, SSDs, such as:
>
> http://www.cdw.com/shop/products/Corsair-Force-Series-F40-solid-state-drive-40-GB-SATA-300/2181114.aspx
>
> Rated at 50K 4K write IOPS, about 150 times greater than a 15K SAS drive.
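
For reference, the external log is chosen at mkfs time and then named
again on every mount via the logdev option, which is why xfs_growfs
can't move an existing log - it really does mean a re-mkfs. A sketch,
with hypothetical device names (/dev/md1 as the mirrored log pair,
/dev/sdb1 as the data device):

```shell
# Make the filesystem with an external log on the mirrored pair.
# The log tops out around 2GB, so 512MB is plenty for most workloads.
mkfs.xfs -l logdev=/dev/md1,size=512m /dev/sdb1

# The log device is not recorded anywhere the kernel can find it on
# its own, so it must be named on every mount (fstab included):
mount -o logdev=/dev/md1 /dev/sdb1 /mnt/data
```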
If you are using delayed logging, then a pair of mirrored 7200rpm
SAS or SATA drives would be sufficient for most workloads as the log
bandwidth rarely gets above 50MB/s in normal operation.
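
For anyone following along: delayed logging is the delaylog mount
option - merged as an experimental option in 2.6.35 and, as far as I
recall, the default from 2.6.39 - so on older kernels you have to ask
for it. Hypothetical mount point and device:

```shell
# Enable delayed logging explicitly on kernels where it is not yet
# the default (device and mount point are hypothetical).
mount -o delaylog /dev/sdb1 /mnt/data

# Confirm the option took effect:
grep /mnt/data /proc/mounts
```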
If you have fsync heavy workloads, or are not using delayed logging,
then you really need to use the RAID5/6 device behind a BBWC because
the log is -seriously- bandwidth intensive. I can drive >500MB/s of
log throughput on metadata-intensive workloads on 2.6.39 when not
using delayed logging, or when the log is regularly forced via fsync.
You sure as hell don't want to be running a sustained, long-term
write load like that on consumer grade SSDs.
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs