From: "Bruno Prémont" <bonbons@linux-vserver.org>
To: Pekka Enberg <penberg@kernel.org>
Cc: Mike Frysinger <vapier.adi@gmail.com>,
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: 2.6.39-rc4+: Kernel leaking memory during FS scanning, regression?
Date: Mon, 25 Apr 2011 12:34:44 +0200
Message-ID: <20110425123444.639aad34@neptune.home>
In-Reply-To: <BANLkTi=2DK+iq-5NEFKexe0QhpW8G0RL8Q@mail.gmail.com>
On Mon, 25 April 2011 Pekka Enberg <penberg@kernel.org> wrote:
> On Mon, Apr 25, 2011 at 12:17 PM, Bruno Prémont
> <bonbons@linux-vserver.org> wrote:
> > On Mon, 25 April 2011 Mike Frysinger wrote:
> >> On Sun, Apr 24, 2011 at 22:42, KOSAKI Motohiro wrote:
> >> >> On Sun, 24 April 2011 Bruno Prémont wrote:
> >> >> > On an older system I've been running Gentoo's revdep-rebuild to check
> >> >> > system linking/*.la consistency, and after doing most of the work the
> >> >> > system more or less starved, just complaining about stuck tasks now and
> >> >> > then.
> >> >> > The memory usage graph as seen from userspace showed a sudden, rapid
> >> >> > increase in memory usage, though only a very few MB were swapped out
> >> >> > (cf. the attached RRD graph).
> >> >>
> >> >> Seems I've hit it once again (though this time I detected it before the
> >> >> system fully stalled trying, without success, to reclaim memory).
> >> >>
> >> >> This time it was during simple compiling...
> >> >> Gathered info below:
> >> >>
> >> >> /proc/meminfo:
> >> >> MemTotal:         480660 kB
> >> >> MemFree:           64948 kB
> >> >> Buffers:           10304 kB
> >> >> Cached:             6924 kB
> >> >> SwapCached:         4220 kB
> >> >> Active:            11100 kB
> >> >> Inactive:          15732 kB
> >> >> Active(anon):       4732 kB
> >> >> Inactive(anon):     4876 kB
> >> >> Active(file):       6368 kB
> >> >> Inactive(file):    10856 kB
> >> >> Unevictable:          32 kB
> >> >> Mlocked:              32 kB
> >> >> SwapTotal:        524284 kB
> >> >> SwapFree:         456432 kB
> >> >> Dirty:                80 kB
> >> >> Writeback:             0 kB
> >> >> AnonPages:          6268 kB
> >> >> Mapped:             2604 kB
> >> >> Shmem:                 4 kB
> >> >> Slab:             250632 kB
> >> >> SReclaimable:      51144 kB
> >> >> SUnreclaim:       199488 kB  <--- looks big as well...
> >> >> KernelStack:      131032 kB  <--- what???
> >> >
> >> > KernelStack uses 8 kB per thread, so your system should have about
> >> > 16,000 threads, but your ps only showed about 80 processes.
> >> > Hmm... stack leak?
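
(That arithmetic is easy to cross-check from userspace; a rough sketch,
assuming 8 kB stacks as on this x86-32 box:

  awk '/KernelStack/ { print $2 / 8, "stacks" }' /proc/meminfo
  ps -eLf | wc -l    # visible threads, plus one header line

131032 / 8 is ~16400 stacks' worth, while ps shows only about 80
processes here.)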
> >>
> >> I might have a similar report for 2.6.39-rc4 (things seem to work fine
> >> on 2.6.38.4), but for embedded Blackfin systems running gdbserver
> >> processes over and over (so lots of short-lived forks).
> >>
> >> I wonder if you have a lot of zombies or otherwise unclaimed resources?
> >> Does `ps aux` show anything unusual?
> >
> > I've not seen anything special (no big number of threads behind my about
> > 80 processes, and even after the kernel OOM-killed nearly all processes
> > the hogged memory was not freed; and no, there are no zombies around).
> >
> > Here it seems to happen when I run two intensive tasks in parallel, e.g.
> > (re)emerging gimp while running revdep-rebuild -pi in another terminal.
> > This produces a fork rate of about 100-300 forks per second.
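
(Fork rate estimated from the cumulative fork counter in /proc/stat,
along the lines of:

  f1=$(awk '/^processes/ { print $2 }' /proc/stat); sleep 1
  f2=$(awk '/^processes/ { print $2 }' /proc/stat)
  echo "$((f2 - f1)) forks/s"
)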
> >
> > Suddenly kmalloc-128 slabs stop being freed and things degrade.
> >
> > Trying to trace some of the kmalloc-128 slab allocations (SLUB debug
> > tracing; how I enabled it is sketched below the trace) I end up seeing
> > lots of allocations like this:
> >
> > [ 1338.554429] TRACE kmalloc-128 alloc 0xc294ff00 inuse=30 fp=0xc294ff00
> > [ 1338.554434] Pid: 1573, comm: collectd Tainted: G W 2.6.39-rc4-jupiter-00187-g686c4cb #1
> > [ 1338.554437] Call Trace:
> > [ 1338.554442] [<c10aef47>] trace+0x57/0xa0
> > [ 1338.554447] [<c10b07b3>] alloc_debug_processing+0xf3/0x140
> > [ 1338.554452] [<c10b0972>] T.999+0x172/0x1a0
> > [ 1338.554455] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554459] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554464] [<c10b0a52>] kmem_cache_alloc+0xb2/0x100
> > [ 1338.554468] [<c10c08b5>] ? path_put+0x15/0x20
> > [ 1338.554472] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.554476] [<c10b95d8>] get_empty_filp+0x58/0xc0
> > [ 1338.554481] [<c10c323f>] path_openat+0x1f/0x320
> > [ 1338.554485] [<c10a0a4e>] ? __access_remote_vm+0x19e/0x1d0
> > [ 1338.554490] [<c10c3620>] do_filp_open+0x30/0x80
> > [ 1338.554495] [<c10b0a30>] ? kmem_cache_alloc+0x90/0x100
> > [ 1338.554500] [<c10c16f8>] ? getname_flags+0x28/0xe0
> > [ 1338.554505] [<c10cd522>] ? alloc_fd+0x62/0xe0
> > [ 1338.554509] [<c10c1731>] ? getname_flags+0x61/0xe0
> > [ 1338.554514] [<c10b781d>] do_sys_open+0xed/0x1e0
> > [ 1338.554519] [<c10b7979>] sys_open+0x29/0x40
> > [ 1338.554524] [<c1391390>] sysenter_do_call+0x12/0x26
> > [ 1338.556764] TRACE kmalloc-128 alloc 0xc294ff80 inuse=31 fp=0xc294ff80
> > [ 1338.556774] Pid: 1332, comm: bash Tainted: G W 2.6.39-rc4-jupiter-00187-g686c4cb #1
> > [ 1338.556779] Call Trace:
> > [ 1338.556794] [<c10aef47>] trace+0x57/0xa0
> > [ 1338.556802] [<c10b07b3>] alloc_debug_processing+0xf3/0x140
> > [ 1338.556807] [<c10b0972>] T.999+0x172/0x1a0
> > [ 1338.556812] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556817] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556821] [<c10b0a52>] kmem_cache_alloc+0xb2/0x100
> > [ 1338.556826] [<c10b95d8>] ? get_empty_filp+0x58/0xc0
> > [ 1338.556830] [<c10b95d8>] get_empty_filp+0x58/0xc0
> > [ 1338.556841] [<c121fca8>] ? tty_ldisc_deref+0x8/0x10
> > [ 1338.556849] [<c10c323f>] path_openat+0x1f/0x320
> > [ 1338.556857] [<c11e2b3e>] ? fbcon_cursor+0xfe/0x180
> > [ 1338.556863] [<c10c3620>] do_filp_open+0x30/0x80
> > [ 1338.556868] [<c10b0a30>] ? kmem_cache_alloc+0x90/0x100
> > [ 1338.556873] [<c10c5e8e>] ? do_vfs_ioctl+0x7e/0x580
> > [ 1338.556878] [<c10c16f8>] ? getname_flags+0x28/0xe0
> > [ 1338.556886] [<c10cd522>] ? alloc_fd+0x62/0xe0
> > [ 1338.556891] [<c10c1731>] ? getname_flags+0x61/0xe0
> > [ 1338.556898] [<c10b781d>] do_sys_open+0xed/0x1e0
> > [ 1338.556903] [<c10b7979>] sys_open+0x29/0x40
> > [ 1338.556913] [<c1391390>] sysenter_do_call+0x12/0x26
> >
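For completeness, the TRACE lines above come from SLUB's debug tracing;
roughly what I enabled (see Documentation/vm/slub.txt for the exact
syntax):

  slub_debug=T,kmalloc-128                     # on the kernel command line
  echo 1 > /sys/kernel/slab/kmalloc-128/trace  # or switched on at runtime
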
> > Collectd is a system monitoring daemon that counts processes, tracks
> > memory usage and much more, reading lots of files under /proc every 10
> > seconds.
> > Maybe it opens a process-related file at a racy moment and thus
> > prevents the kmalloc-128 slabs and kernel stacks from being released?
> >
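A quick way to check what collectd is actually holding open (plain
procfs, nothing fancier assumed):

  ls -l /proc/$(pidof collectd)/fd                      # open fds and targets
  ls -l /proc/$(pidof collectd)/fd | grep -c '/proc/'   # fds into /proc
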
> > Replaying the scenario I'm at:
> > Slab: 43112 kB
> > SReclaimable: 25396 kB
> > SUnreclaim: 17716 kB
> > KernelStack: 16432 kB
> > PageTables: 1320 kB
> >
> > with
> > kmalloc-256     55     64    256   16   1 : tunables    0    0    0 : slabdata      4      4      0
> > kmalloc-128  66656  66656    128   32   1 : tunables    0    0    0 : slabdata   2083   2083      0
> > kmalloc-64    3902   3904     64   64   1 : tunables    0    0    0 : slabdata     61     61      0
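
(That is, kmalloc-128 alone holds 66656 * 128 bytes = ~8.1 MiB across
2083 one-page slabs, with every single object in use: active equals
total.)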
> >
> > (the compiling process trees are SIGSTOPped for now so the system does
> > not starve immediately and I can look around for information)
> >
> > If I resume one of the compiling process trees, both KernelStack and
> > slab (kmalloc-128) usage increase quite quickly (and never seem to
> > come down again), probably at the same rate as processes are born (no
> > matter when they end).
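
(I'm watching the growth with a trivial loop:

  while sleep 5; do grep -E '^Slab|KernelStack|SUnreclaim' /proc/meminfo; done

At 100-300 forks/s and 8 kB per stack, growth at the fork rate works out
to roughly 0.8-2.4 MB of KernelStack per second.)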
>
> Looks like it might be a leak in VFS. You could try kmemleak to narrow
> it down some more. See Documentation/kmemleak.txt for details.
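For reference, the steps I'm following, as per Documentation/kmemleak.txt
(kernel built with CONFIG_DEBUG_KMEMLEAK=y):

  mount -t debugfs nodev /sys/kernel/debug/    # if not already mounted
  echo scan > /sys/kernel/debug/kmemleak       # trigger an immediate scan
  cat /sys/kernel/debug/kmemleak               # dump suspected leaks
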
Hm, the system seems not to be willing to let me get that far... each
time I apply my load scenario I get "BUG: unable to handle kernel " on
the console as a last breath from the system (the rest of the trace
never shows up).
Going to try harder to get at least a complete trace...
Bruno
> Pekka