From: James Bottomley <James.Bottomley@HansenPartnership.com>
To: James Hogan <james.hogan@imgtec.com>
Cc: Helge Deller <deller@gmx.de>,
	linux-kernel@vger.kernel.org, linux-parisc@vger.kernel.org,
	John David Anglin <dave.anglin@bell.net>,
	linux-metag@vger.kernel.org
Subject: Re: [PATCH] parisc,metag: Do not hardcode maximum userspace stack size
Date: Thu, 01 May 2014 10:50:36 -0700
Message-ID: <1398966636.2174.21.camel@dabdike>
In-Reply-To: <53622DBA.807@imgtec.com>


> +
> +config MAX_STACK_SIZE_MB
> +	int "Maximum user stack size (MB)"
> +	default 80
> +	range 8 256 if METAG
> +	range 8 2048
> +	depends on STACK_GROWSUP
> +	help
> +	  This is the maximum stack size in Megabytes in the VM layout of user
> +	  processes when the stack grows upwards (currently only on parisc and
> +	  metag arch). The stack will be located at the highest memory address
> +	  minus the given value, unless the RLIMIT_STACK hard limit is changed
> +	  to a smaller value in which case that is used.
> +
> +	  A sane initial value is 80 MB.
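
(For illustration only, here is a minimal sketch of the placement rule
the help text describes; the names below are made up for the example,
not the actual parisc/metag code:)

	#define MAX_STACK_SIZE_MB 80UL	/* the Kconfig default above */

	/* Stack base = highest user address minus the configured limit,
	 * unless the RLIMIT_STACK hard limit is smaller. */
	static unsigned long stack_base(unsigned long task_size,
					unsigned long stack_hard_limit)
	{
		unsigned long limit = MAX_STACK_SIZE_MB << 20;	/* MB -> bytes */

		if (stack_hard_limit < limit)
			limit = stack_hard_limit;

		return task_size - limit;
	}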

There's one final issue with this: placement of the stack only really
matters on 32 bits.  We have three expanding memory areas: stack, heap
and maps.  On 64 bits these are placed well separated from each other,
so an artificial limit like this doesn't matter.

Also, even on 32 bits, I can't help feeling we could simply lay out the
binary better ... the problem is we have three upward growing regions:
stack, maps and heap.  However, if you look at the current standard ELF
layout for downward growing stacks, the maps grow up from the bottom
until they hit the mapped binary, the heap grows up from the mapped
binary and the stack grows down from the top.  You run out of memory
when the stack and heap cross or when the maps hit the binary.
Obviously with three upwardly growing regions it's problematic, but we
could do something like make the maps grow down (we can't,
unfortunately, make the heap grow down since sbrk depends on the upward
behaviour).
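
(As a quick illustration of that classic downward-stack layout, a small
userspace probe like the one below prints roughly where the three
regions land; the exact addresses depend on ASLR and on whether the
kernel uses the legacy bottom-up or the top-down mmap layout:)

	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <sys/mman.h>

	int main(void)
	{
		int on_stack;			/* stack: grows down from the top */
		void *on_heap = malloc(16);	/* heap: grows up from the binary */
		void *map = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		printf("stack ~ %p\n", (void *)&on_stack);
		printf("heap  ~ %p (brk end %p)\n", on_heap, sbrk(0));
		printf("mmap  ~ %p\n", map);

		munmap(map, 4096);
		free(on_heap);
		return 0;
	}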


James



