From: Dave Chinner <david@fromorbit.com>
To: Andreas Dilger <adilger@dilger.ca>
Cc: Christoph Hellwig <hch@infradead.org>,
xfs@oss.sgi.com,
"linux-fsdevel@vger.kernel.org Devel"
<linux-fsdevel@vger.kernel.org>
Subject: Re: XFS status update for May 2012
Date: Tue, 19 Jun 2012 11:11:08 +1000
Message-ID: <20120619011108.GF25389@dastard>
In-Reply-To: <AD997E9D-2C1E-4EE4-80D7-2A5C998B6E9E@dilger.ca>

On Mon, Jun 18, 2012 at 12:25:37PM -0600, Andreas Dilger wrote:
> On 2012-06-18, at 6:08 AM, Christoph Hellwig wrote:
> > May saw the release of Linux 3.4, including a decent-sized XFS update.
> > Notable XFS features in Linux 3.4 include moving all metadata
> > updates over to transactions, and the addition of a workqueue for
> > the low-level allocator code to avoid stack overflows due to
> > extreme stack use in the Linux VM/VFS call chain, ...
>
> This is essentially a workaround for too-small stacks in the kernel,
> isn't it? We've had to do the same at times: push the work off to a
> separate thread (with a new stack) and wait for the results. This is
> a generic problem that any reasonably complex filesystem will hit
> when running under memory pressure on a complex storage stack (e.g.
> LVM + iSCSI), but the workaround causes unnecessary context switching.

I've seen no performance issues from the context switching. The
overhead is so small as to be unmeasurable in most cases, because a
typical allocation already incurs context switches for contended
locks and metadata IO....
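
As a rough sketch of the hand-off pattern under discussion (the names
here are illustrative, not the actual XFS code; the real Linux 3.4
implementation also uses its own dedicated workqueue created with
WQ_MEM_RECLAIM, rather than system_wq, so it can make progress under
memory pressure):

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/completion.h>

struct deep_alloc_args {
	struct work_struct	work;	/* queued on the workqueue */
	struct completion	done;	/* caller sleeps on this */
	int			result;	/* worker's return value */
	/* ... the real allocation arguments would live here ... */
};

static void deep_alloc_worker(struct work_struct *work)
{
	struct deep_alloc_args *args =
		container_of(work, struct deep_alloc_args, work);

	/* This runs on the worker thread's almost-empty stack. */
	args->result = 0;	/* ... do the stack-heavy work ... */
	complete(&args->done);
}

static int deep_alloc(struct deep_alloc_args *args)
{
	/* Called from deep in the VM/VFS call chain: defer and wait. */
	INIT_WORK_ONSTACK(&args->work, deep_alloc_worker);
	init_completion(&args->done);
	queue_work(system_wq, &args->work);
	wait_for_completion(&args->done);
	return args->result;
}

The wait_for_completion() is where the extra context switches come
from, and as above they disappear into the noise next to the switches
an allocation already incurs.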

> Any thoughts on a better way to handle this, or will there continue
> to be a 4kB stack limit

We were blowing 8k stacks on x86-64 with alarming ease. Even the
flusher threads were overflowing.

> and will we keep hacking around it with repeated kmallocs on call
> paths for any struct over a few tens of bytes, memory pools all
> over the place, and "forking" over to other threads to pick up
> another 4kB of stack?
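
The kmalloc workaround being described looks roughly like this (a
sketch with hypothetical names and sizes, not code from any
particular filesystem):

#include <linux/slab.h>
#include <linux/mempool.h>

struct big_ctx {
	char	buf[512];	/* too big to put on a small kernel stack */
};

static int do_deep_op(void)
{
	struct big_ctx *ctx;
	int error = 0;

	/*
	 * GFP_NOFS: this may run under memory pressure from within
	 * filesystem writeback, so don't recurse into the filesystem.
	 */
	ctx = kmalloc(sizeof(*ctx), GFP_NOFS);
	if (!ctx)
		return -ENOMEM;

	/* ... use ctx where an on-stack variable used to live ... */

	kfree(ctx);
	return error;
}

The "memory pools" variant swaps the kmalloc()/kfree() pair for
mempool_alloc()/mempool_free() against a pool created with
mempool_create_kmalloc_pool(), so forward progress is guaranteed even
when kmalloc() would fail.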

I mentioned that we needed to consider 16k stacks at last year's
Kernel Summit, and the response was along the lines of "you've got to
be kidding - fix your broken filesystem". That's the perception you
have to change, and I don't feel like having a 4k stacks battle
again...

Cheers,
Dave.
--
Dave Chinner
david@fromorbit.com

Thread overview: 9+ messages
2012-06-18 12:08 XFS status update for May 2012 Christoph Hellwig
2012-06-18 18:25 ` Andreas Dilger
2012-06-18 18:43 ` Ben Myers
2012-06-18 20:36 ` Andreas Dilger
2012-06-19 1:20 ` Dave Chinner
2012-06-18 21:11 ` Eric Sandeen
2012-06-18 21:16 ` Eric Sandeen
2012-06-19 1:27 ` Dave Chinner
2012-06-19 1:11 ` Dave Chinner [this message]