From: "Josef 'Jeff' Sipek" <jeffpc@josefsipek.net>
To: Dave Chinner <david@fromorbit.com>
Cc: xfs@oss.sgi.com
Subject: Re: task blocked for more than 120 seconds
Date: Tue, 24 Apr 2012 11:10:10 -0400 [thread overview]
Message-ID: <20120424151010.GA21619@poseidon.cudanet.local> (raw)
In-Reply-To: <20120423232759.GO9541@dastard>
On Tue, Apr 24, 2012 at 09:27:59AM +1000, Dave Chinner wrote:
> On Mon, Apr 23, 2012 at 04:24:41PM -0400, Josef 'Jeff' Sipek wrote:
> > On Sat, Apr 21, 2012 at 10:29:32AM +1000, Dave Chinner wrote:
> > ...
> > > but also given the length of the incident, some other data is definitely
> > > needed:
> > > - a 30s event trace - it'll compress pretty well
> > > (trace-cmd record -e 'xfs*' sleep 30; trace-cmd report > output.txt)
> >
> > http://31bits.net/download/output-1335211829.txt.bz2
>
> Ok, that's instructive. Inode RMW cycles in the xfsaild:
...
> Every buffer read is followed by a queuing for write. It is also
> showing that it is typically taking 10-25ms per inode read IO, which
> is exactly what I'd expect for your given storage. There are 2500 of
> these over the 30s period, which translates to about one every 12ms
> across the 30s sample.
>
> So, yes, your hangs are definitely due to inode buffer RMW cycles
> when trying to flush dirty inodes from the cache. I have a few
> ideas on how to fix this - but I'm not sure whether a current TOT
> solution will be easily back-portable.
If it is too much effort to backport, we should be able to move the box to a
3.3 stable kernel (assuming no driver problems).
> The simplest solution is a readahead based solution - AIL pushing is
> async, and will cycle back to inodes that it failed to flush the first
> time past, so triggering readahead on the first pass might work just fine.
...
This makes sense. Though with a large enough log, couldn't the buffers
brought in by readahead be evicted again by the time the pusher cycles back
to them?
> That way the xfsaild will make a pass across the AIL doing
> readahead and doesn't block on RMW cycles. Effectively we get async
> RMW cycles occurring, and the latency of a single cycle will no
> longer be the performance limiting factor. I'll start to prototype
> something to address this - it isn't a new idea, and I've seen it
> done before, so I should be able to get something working.
Cool. Let me know when you have something we can try. I don't know exactly
what's causing this giant backlog of inode modifications - I suspect the
rsync is what pushes it over the edge. Regardless, I'm interested in testing
the fix.
Thanks!
Jeff.
--
Fact: 30.3% of all statistics are generated randomly.
_______________________________________________
xfs mailing list
xfs@oss.sgi.com
http://oss.sgi.com/mailman/listinfo/xfs
Thread overview: 13+ messages
2012-04-18 15:11 task blocked for more than 120 seconds Josef 'Jeff' Sipek
2012-04-18 18:28 ` Ben Myers
2012-04-18 23:48 ` Dave Chinner
2012-04-19 15:46 ` Josef 'Jeff' Sipek
2012-04-19 22:56 ` Dave Chinner
2012-04-20 13:58 ` Josef 'Jeff' Sipek
2012-04-21 0:29 ` Dave Chinner
2012-04-23 17:16 ` Josef 'Jeff' Sipek
2012-04-23 20:24 ` Josef 'Jeff' Sipek
2012-04-23 23:27 ` Dave Chinner
2012-04-24 15:10 ` Josef 'Jeff' Sipek [this message]
2012-09-27 12:49 ` Josef 'Jeff' Sipek
2012-09-27 22:50 ` Dave Chinner