public inbox for linux-kernel@vger.kernel.org
From: Christopher Snook <csnook@redhat.com>
To: Wappler Marcel <Marcel.Wappler@bridgeco.net>
Cc: Alex Riesen <raa.lkml@gmail.com>, linux-kernel@vger.kernel.org
Subject: Re: Behaviour of the VM on a embedded linux
Date: Fri, 22 Aug 2008 17:37:18 -0400
Message-ID: <48AF318E.1060508@redhat.com>
In-Reply-To: <83116F0A4FF67A4F97BA0B6E408C48E30289D979@zuerich.BC-Int.NET>

Wappler Marcel wrote:
> Alex Riesen wrote:
>>> I'm trying to figure out what's going on on an embedded system I have
>>> to deal with. It's running a 2.6.24.7 kernel on 32 MB of RAM. There is
>>> no swap. There are some daemons and shells running and, on top of that,
>>> a big monolithic C++ application.
>>> 
>>> The application runs a lot of pthreads at different real-time priority
>>> levels. It looks like the application consumes a huge amount of real
>>> memory, contrary to the assumption that a large code size is no problem
>>> because pages holding unused code can be paged out.
>> Maybe the kernel won't page anything if paging support is compiled out.
>> IOW, you still need the paging code even if there are no swap partitions.
> 
> Alex, that is the case - I do observe normal operation of the VM
> subsystem; it moves memory pages around dynamically. But when I create a
> large file on the tmpfs, a kernel OOM occurs and kills the big monolithic
> application instead of stealing pages from it. That is what puzzles me.
> In the past everyone told me that code size is no problem on systems with
> an MMU, because under memory pressure the system can steal pages
> containing the application's code. But in my situation that is not
> happening.
> 
> Any ideas?
> 
> Marcel
> 
> PS: please CC me on replies

All these things you're doing in userspace have a memory footprint in 
kernelspace as well, and that memory can't be swapped.  Page tables for your 
tmpfs mappings aren't free.  Kernel stacks and task_structs for your threads 
aren't free.
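A rough back-of-the-envelope sketch of where that kernelspace memory goes (a hedged illustration: it assumes 4 KiB pages, 4-byte PTEs, and 8 KiB kernel stacks, which are typical for 32-bit x86 but not measured from your board, and the 16 MiB file and 50 threads are hypothetical numbers):

```python
PAGE_SIZE = 4096           # typical 4 KiB page on a 32-bit MMU
PTE_SIZE = 4               # one 32-bit page-table entry per mapped page

# Page tables for a hypothetical 16 MiB tmpfs file mapped into one process
mapping = 16 * 1024 ** 2
ptes = mapping // PAGE_SIZE        # 4096 entries
pt_overhead = ptes * PTE_SIZE      # 16 KiB of unswappable page tables
print(pt_overhead)                 # 16384

# Kernel stacks for a hypothetical 50-thread application
threads = 50
KSTACK = 8192                      # per-thread kernel stack, 32-bit x86
stack_overhead = threads * KSTACK  # 400 KiB pinned in kernel memory
print(stack_overhead)              # 409600
```

None of this counts task_structs or the upper levels of the page tables, so the real cost is somewhat higher - and on a 32 MB machine it all comes out of the same small pool.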

Also, there are many places in the kernel where a thread cannot sleep to 
wait for memory to be freed.  The kernel runs asynchronous reclaim (kswapd) 
to keep some memory free and avoid this problem, but if you're churning 
through your big monolithic binary, its code is being paged back in as fast 
as the kernel can page it out.
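You can watch that reclaim-and-refault churn directly; a 2.6.24-era kernel exposes counters along these lines in /proc/vmstat (exact counter names vary between kernel versions):

```shell
# Sample reclaim and fault counters twice, a couple of seconds apart.
# pgmajfault climbing rapidly alongside pgsteal/pgscan means pages are
# being reclaimed and immediately faulted back in - i.e. thrashing.
grep -E 'pgmajfault|pgsteal|pgscan' /proc/vmstat
sleep 2
grep -E 'pgmajfault|pgsteal|pgscan' /proc/vmstat
```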

That said, the modern VM is tuned with larger systems in mind, so you may be 
able to improve the situation by tweaking the vm.* sysctls, particularly 
vm.min_free_kbytes.  You can also change oom-killer settings for your process 
via the /proc/$PID/oom_* parameters.  It might help, or it might replace a 
recoverable userspace oom-kill with an unrecoverable kernel oom panic.
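For example (the specific values here are illustrative guesses, not recommendations; on 2.6.24 the per-process knob is /proc/$PID/oom_adj):

```shell
# Keep a larger reserve free so background reclaim kicks in earlier
# (value in kB; 2048 is only an illustrative choice for a 32 MB machine)
sysctl -w vm.min_free_kbytes=2048

# Make the big application a less attractive OOM-kill target
# (oom_adj ranges from -16 to +15 on 2.6.24; -17 disables OOM kills,
# which is exactly the setting that risks turning a kill into a panic)
echo -5 > /proc/$PID/oom_adj

# Check the badness score the OOM killer currently assigns it
cat /proc/$PID/oom_score
```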

Either way, I'd be a little more conservative about code size on very small 
systems with no swap.

-- Chris

Thread overview: 7+ messages
2008-08-22 12:31 Behaviour of the VM on a embedded linux Wappler Marcel
2008-08-22 12:59 ` Alex Riesen
2008-08-22 15:25   ` Wappler Marcel
2008-08-22 21:37     ` Christopher Snook [this message]
2008-08-24 23:35     ` Ingo Oeser
2008-08-24 23:55       ` Arjan van de Ven
2008-08-25  7:58         ` Wappler Marcel
