public inbox for linux-xfs@vger.kernel.org
From: Stan Hoeppner <stan@hardwarefreak.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Paul Anderson <pha@umich.edu>,
	Christoph Hellwig <hch@infradead.org>, xfs-oss <xfs@oss.sgi.com>
Subject: Re: I/O hang, possibly XFS, possibly general
Date: Sat, 04 Jun 2011 07:11:50 -0500	[thread overview]
Message-ID: <4DEA2106.5000900@hardwarefreak.com> (raw)
In-Reply-To: <20110604103247.GG561@dastard>

On 6/4/2011 5:32 AM, Dave Chinner wrote:
> On Sat, Jun 04, 2011 at 03:14:53AM -0500, Stan Hoeppner wrote:
>> On 6/3/2011 10:59 AM, Paul Anderson wrote:
>>> Not sure what I can do about the log - man page says xfs_growfs
>>> doesn't implement log moving.  I can rebuild the filesystems, but for
>>> the one mentioned in this thread, this will take a long time.
>>
>> See the logdev mount option.  Using two mirrored drives was recommended,
>> I'd go a step further and use two quality "consumer grade", i.e. MLC
>> based, SSDs, such as:
>>
>> http://www.cdw.com/shop/products/Corsair-Force-Series-F40-solid-state-drive-40-GB-SATA-300/2181114.aspx
>>
>> Rated at 50K 4K write IOPS, about 150 times greater than a 15K SAS drive.
> 
> If you are using delayed logging, then a pair of mirrored 7200rpm
> SAS or SATA drives would be sufficient for most workloads as the log
> bandwidth rarely gets above 50MB/s in normal operation.

Hi Dave.  I made the first reply to Paul's post, recommending that he
enable delayed logging as a possible solution to his I/O hang problem.
I recommended this because of the super heavy metadata operations he
described at the time, on his setup of all-md RAID60 behind plain HBAs.
Paul did not list delaylog when he submitted his 2.6.38.5 mount options:

inode64,largeio,logbufs=8,noatime
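
For reference, enabling it would have been a one-option change at mount
time (the device and mount point below are placeholders, not Paul's
actual paths):

```shell
# Enable XFS delayed logging on a 2.6.38-era kernel by adding
# "delaylog" to the existing mount options.  /dev/md0 and /data
# are hypothetical names used only for illustration.
mount -o delaylog,inode64,largeio,logbufs=8,noatime /dev/md0 /data
```

(On later kernels delayed logging became the default and the option was
eventually removed, but on 2.6.38 it has to be requested explicitly.)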

Since you are the author of the delayed logging code, I had expected
you to comment on this, either expounding on my recommendation or
shooting it down, and giving your reasons why.

So, might delayed logging have prevented his hang problem, or not?  I
always read your replies at least twice, and I don't recall you
touching on delayed logging in this thread.  If you did and I missed it,
my apologies.

Paul will soon, hopefully, have 3 of LSI's newest RAID cards with a
combined 3GB of BBWC to test with.  With that much cache, is an
external log device still needed, with and/or without delayed logging
enabled?
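
For what it's worth, the external log is a mkfs-time layout decision
plus a mount option, roughly like this (all device names below are
placeholders, not anyone's real configuration):

```shell
# Sketch of an external XFS log on a small mirrored pair.
# /dev/sdx, /dev/sdy, /dev/md9, /dev/md0 and /data are hypothetical.
mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdx /dev/sdy

# The log device and its size are fixed when the filesystem is made...
mkfs.xfs -l logdev=/dev/md9,size=128m /dev/md0

# ...and the log device must be named again on every mount.
mount -o logdev=/dev/md9,noatime /dev/md0 /data
```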

> If you have fsync heavy workloads, or are not using delayed logging,
> then you really need to use the RAID5/6 device behind a BBWC because
> the log is -seriously- bandwidth intensive. I can drive >500MB/s of
> log throughput on metadata intensive workloads on 2.6.39 when not
> using delayed logging or I'm regularly forcing the log via fsync.
> You sure as hell don't want to be running a sustained long term
> write load like that on consumer grade SSDs.....

Given that the max log size is 2GB, IIRC, and that most recommendations
I've seen here are against using a log that big, I figure such MLC
drives would be fine.  AIUI, modern wear leveling spreads writes across
the entire flash array before going back and overwriting the first
sector.  Published MTBF rates on most MLC drives are roughly equivalent
to those of enterprise SRDs: 1+ million hours.

Do you believe MLC based SSDs are simply never appropriate for anything
but consumer use, and that only SLC devices should be used for real
storage applications?  AIUI, SLC flash cells have roughly 10x the write
lifetime of MLC cells.  However, a number of articles/posts have
demonstrated math showing that a current generation SandForce based MLC
SSD, under a constant 100MB/s write stream, will run for 20+ years,
IIRC, before enough live+reserved spare cells burn out to cause hard
write errors, necessitating drive replacement.  Under your 500MB/s
load, assuming it is constant, the drives would theoretically last 4+
years.  If that 500MB/s load ran for only 12 hours each day, the drives
would last 8+ years.  I wish I had one of those articles bookmarked...
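
The arithmetic is easy to check.  Taking the articles' 20-years-at-
100MB/s figure as the premise (their claim, not a measurement of mine),
total endurance scales inversely with write rate:

```shell
# Back-of-envelope check of the SSD endurance figures above.
# Premise from the cited articles: ~20 years of sustained 100 MB/s
# writes before hard write errors force drive replacement.
endurance=$(awk 'BEGIN { print 100 * 20 }')   # endurance in (MB/s)*years

awk -v e="$endurance" 'BEGIN { printf "24x7 at 500 MB/s:    %.1f years\n", e / 500 }'
awk -v e="$endurance" 'BEGIN { printf "12h/day at 500 MB/s: %.1f years\n", e / (500 * 0.5) }'
```

which reproduces the 4+ and 8+ year figures quoted above.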

-- 
Stan

_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs

Thread overview: 25+ messages
2011-06-02 14:42 I/O hang, possibly XFS, possibly general Paul Anderson
2011-06-02 16:17 ` Stan Hoeppner
2011-06-02 18:56 ` Peter Grandi
2011-06-02 21:24   ` Paul Anderson
2011-06-02 23:59     ` Phil Karn
2011-06-03  0:39       ` Dave Chinner
2011-06-03  2:11         ` Phil Karn
2011-06-03  2:54           ` Dave Chinner
2011-06-03 22:28             ` Phil Karn
2011-06-04  3:12               ` Dave Chinner
2011-06-03 22:19     ` Peter Grandi
2011-06-06  7:29       ` Michael Monnerie
2011-06-07 14:09         ` Peter Grandi
2011-06-08  5:18           ` Dave Chinner
2011-06-08  8:32           ` Michael Monnerie
2011-06-03  0:06   ` Phil Karn
2011-06-03  0:42 ` Christoph Hellwig
2011-06-03  1:39   ` Dave Chinner
2011-06-03 15:59     ` Paul Anderson
2011-06-04  3:15       ` Dave Chinner
2011-06-04  8:14       ` Stan Hoeppner
2011-06-04 10:32         ` Dave Chinner
2011-06-04 12:11           ` Stan Hoeppner [this message]
2011-06-04 23:10             ` Dave Chinner
2011-06-05  1:31               ` Stan Hoeppner
