public inbox for linux-kernel@vger.kernel.org
From: Matt Mackall <mpm@selenic.com>
To: linux-kernel <linux-kernel@vger.kernel.org>
Subject: Light-weight dynamically extended stacks
Date: Sun, 18 Dec 2005 16:12:49 -0800
Message-ID: <20051219001249.GD11856@waste.org>

Perhaps the time for this has come and gone, but it occurred to me
that it should be relatively straightforward to make a form of
dynamically extended stacks that are appropriate to the kernel.

While we have a good handle on most of the worst stack offenders, we
can still run into trouble with pathological cases (say, symlink
recursion for XFS on a RAID built from loopback mounts over NFS
tunneled over IPSEC through GRE). So there's probably no
one-size-fits-all when it comes to stack size.

Rather than relying on guard pages and VM faults like userspace, we
can use a cooperative scheme where we "label" call paths that might be
extra deep (recursion through the block layer, network tunnels,
symlinks, etc.) with something like the following:

	  ret = grow_stack(function, arg, GFP_ATOMIC);

This is much like cond_resched() except for stack usage rather than
CPU usage. grow_stack() checks if we're in the danger zone for stack
usage (say 1k remaining), and if so, allocates a new stack and
swizzles the stack pointer over to it.

Then, whether we allocated a new stack page or not, we call
function(arg) to continue with our operation. When function() returns,
we deallocate the new stack (if we built one), switch back to the old
one, and propagate function's return value.

We only get into trouble with this scheme when we can't allocate a new
stack, which will only happen when we're completely out of memory[1]
and can't sleep waiting for more. In that case, we print a warning of
impending doom and proceed to run with our current stack. This is the
same as our current behavior but with a warning message. For safety,
we can keep a small mempool of extra stacks on hand to avoid hitting
this wall when dealing with OOM in an atomic context.

We can also easily instrument the scheme to print warnings when a
process has allocated more than a couple stacks, with a hard limit to
catch any unbounded recursion.

[1] Assuming we're using 4k stacks, where fragmentation is not an
issue. But there's no reason not to use single-page stacks with this
scheme.
-- 
Mathematics is the supreme nostalgia of our time.

Thread overview: 8+ messages
2005-12-19  0:12 Matt Mackall [this message]
2005-12-19  8:45 ` Light-weight dynamically extended stacks Arjan van de Ven
2005-12-19 18:36 ` Adrian Bunk
2005-12-20  0:27   ` Matt Mackall
2005-12-20 16:43     ` Adrian Bunk
2005-12-20 18:30       ` Matt Mackall
2005-12-20 19:40         ` Patrick McLean
2005-12-21  5:57           ` Valdis.Kletnieks
