public inbox for linux-xfs@vger.kernel.org
From: David Chinner <dgc@sgi.com>
To: Chris Wedgwood <cw@f00f.org>
Cc: Eric Sandeen <sandeen@sandeen.net>, xfs@oss.sgi.com
Subject: Re: a modest proposal for 4kstacks & xfs
Date: Thu, 15 Feb 2007 07:48:03 +1100
Message-ID: <20070214204803.GR44411608@melbourne.sgi.com>
In-Reply-To: <20070213201306.GA10237@tuatara.stupidest.org>

On Tue, Feb 13, 2007 at 12:13:06PM -0800, Chris Wedgwood wrote:
> On Tue, Feb 13, 2007 at 10:05:48AM -0800, Eric Sandeen wrote:
> 
> > XFS continues to come up against 4k stacks, despite the best efforts
> > of several people to slim down xfs a bit (and in fact it seems ok
> > over simple storage these days), people are always able to stack up
> > enough IO path to push the limits of a 4k stack.
> 
> i'll argue "if XFS is enabled 4K stacks should be disabled"

Only way to be sure, IMO....

> the only people this is really going to bother surely are people
> running stock RH kernels where XFS isn't supported anyhow
> 
> > modprobe xfs 4kstacks_may_break=1
> >
> > or somesuch; and without this modprobe would fail on a 4kstacks
> > kernel with a "helpful" message.
> 
> won't people just add the param and continue anyhow w/o thinking about
> the issue(s)?
> 
> > I hate to further the meme of "xfs won't work with 4kstacks" but the
> > truth is that there are IO path scenarios where it can lead to
> > problems.
> 
> i have a setup here where ENOSPC conditions will wedge up tight when
> using 4k stacks (the allocator has a path where it calls back into
> itself i think)

Memory reclaim will call the writeback path if you are not within
filesystem code when a kernel allocation fails. So if you fail a
memory allocation and enter reclaim when you are already using half
the stack....

> > What do folks think; useful?  pointless?  too heavy-handed?
> 
> i prefer kconfig magic to simply disallow compilation of XFS w/ 4k
> stacks. it's not like in the past you had to try hard to break xfs in
> these conditions --- is it really much harder now?
> 
> also, someone claimed gcc 4.1+ reused stack slots --- if that's the
> case it might make things a lot better than older compilers?
> checkstack.pl should show a difference though

gcc 4.0+ inlines single-use static functions by default, so it defeats
the code we moved out of the bad functions to reduce their stack usage.
IOWs, it undoes a lot of the previous work we've done to reduce stack
usage in the critical path. That's why we had to add the "noinline"
keyword to all our "STATIC" declarations....

Basically, the only real compiler thing you can do that changes stack
usage is turn on "optimise for size", which reduces stack usage by
20-25%. That is significant, but you can probably still blow a 4k
stack if you add enough layers....

Cheers,

Dave.
-- 
Dave Chinner
Principal Engineer
SGI Australian Software Group

Thread overview: 3+ messages
2007-02-13 18:05 a modest proposal for 4kstacks & xfs Eric Sandeen
2007-02-13 20:13 ` Chris Wedgwood
2007-02-14 20:48   ` David Chinner [this message]
