From: Dave Chinner <david@fromorbit.com>
To: "J.H." <warthog9@kernel.org>
Cc: linux-kernel <linux-kernel@vger.kernel.org>,
jaxboe@fusionio.com, Christoph Hellwig <hch@infradead.org>
Subject: Re: More XFS resource starvation?
Date: Tue, 16 Nov 2010 10:49:40 +1100 [thread overview]
Message-ID: <20101115234940.GE22876@dastard> (raw)
In-Reply-To: <4CE17C4E.7010206@kernel.org>
On Mon, Nov 15, 2010 at 10:30:38AM -0800, J.H. wrote:
> So apparently I'm having fun tripping over all kinds of bugs lately.
> I've seen this a couple of times now on the box in question. Usually
> happens after a few days, or after particularly heavy rsync traffic on
> the box.
>
> http://pastebin.osuosl.org/36014
>
> Christoph seemed to think it's a memory exhaustion problem, so I've
> included the /proc/meminfo and as you can see there's plenty of memory
> around on the system.
That looks very much like some IOs have not been completed and XFS
is waiting around for them to complete. Both the xfsbufd and the
flush daemons are stuck in get_request_wait(), which implies that
the request queue is full and not being serviced. Various rsync
processes are stuck waiting for log buffer completion, waiting for
buffer reads to complete, etc., which implies that IO is simply not
being completed.
My experience with such hangs is that they are typically caused by a
storage problem (e.g. lost interrupt, IO not completed, controller
firmware problem, etc).
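To confirm where tasks are parked, one quick unprivileged check is to list processes in uninterruptible (D-state) sleep along with their kernel wait channel; tasks blocked in get_request_wait should show up there. A minimal sketch (ps/awk invocation only, not specific to this report):

```shell
# List tasks in uninterruptible sleep (D state) with their kernel
# wait channel; processes stuck in get_request_wait() would appear
# here with that symbol in the WCHAN column.
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /D/'
```

For full stack traces, `echo w > /proc/sysrq-trigger` (as root, with sysrq enabled) dumps all blocked tasks to the kernel log.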
> Loads have, as expected, climbed; currently around 1250.05 and growing slowly.
>
> Quick overview of the underlying storage:
>
> xfs -> md (raid 0) -+--> P812 hardware raid6 (cciss driver)
> |
> +--> P812 hardware raid6 (cciss driver)
>
> This is running on an HP DL380 G7.
>
> I saw this both on an older 2.6.30.10-105.2.23.fc11.x86_64, and
> currently on 2.6.34.7-61.fc13.x86_64 (both being Fedora stock kernels)
>
> I have not seen this on a very similar DL380 G6, with the same storage
> setup and it is currently running the 2.6.30 kernel from above.
>
> Christoph suggested increasing the nr_requests values for each of the
> underlying devices, but this didn't seem to change anything
> significantly on the system.
>
> Anyone have any ideas on what's going on?
Any other information in the log (e.g. from the cciss driver)? Are
the raid controllers all running the latest (and same) firmware? I'd
be wanting to make sure that all the storage below the filesystem is
working correctly before looking at anything filesystem related...
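For reference, the nr_requests tuning mentioned above is done through the block-layer sysfs attributes. A minimal sketch for inspecting the current queue depth on each block device (device paths will differ; on this box the cciss devices would appear under e.g. /sys/block/cciss!c0d0/):

```shell
# Print the request queue depth for every block device that exposes
# the attribute; skip anything unreadable.
for f in /sys/block/*/queue/nr_requests; do
    [ -r "$f" ] || continue
    echo "$f: $(cat "$f")"
done

# To raise the depth on a given device (as root), e.g.:
#   echo 512 > /sys/block/<dev>/queue/nr_requests
```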
Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com
Thread overview: 3+ messages
2010-11-15 18:30 More XFS resource starvation? J.H.
2010-11-15 19:08 ` Simon Kirby
2010-11-15 23:49 ` Dave Chinner [this message]