From: Olivier Bonvalet <xen.list@daevel.fr>
To: Ian Campbell <Ian.Campbell@citrix.com>
Cc: xen-users@lists.xen.org, xen-devel@lists.xen.org,
Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [Xen-users] unexpected Out Of Memory (OOM)
Date: Thu, 08 Aug 2013 13:43:08 +0200
Message-ID: <1375962188.13572.58.camel@localhost>
In-Reply-To: <1375957095.970.34.camel@kazak.uk.xensource.com>
On Thursday, 08 August 2013 at 11:18 +0100, Ian Campbell wrote:
> On Thu, 2013-08-08 at 12:10 +0200, Olivier Bonvalet wrote:
> >
> > On Thursday, 08 August 2013 at 09:58 +0100, Ian Campbell wrote:
> > > On Wed, 2013-08-07 at 23:37 +0200, Olivier Bonvalet wrote:
> > > > So I recompiled a kernel with the kmemleak feature. I obtained this
> > > > kind of list, but I'm not sure it's useful:
> > >
> > > These look to me like valid things to be allocating at boot time, and
> > > even if they are leaked there isn't enough here to exhaust 8GB by a long
> > > way.
> > >
> > > It'd be worth monitoring to see if it grows at all or if anything
> > > interesting shows up after running for a while with the leak.
> > >
> > > Likewise it'd be worth keeping an eye on the process list and slabtop
> > > and seeing if anything appears to be growing without bound.
> > >
> > > Other than that I'm afraid I don't have many smart ideas.
> > >
> > > Ian.
> > >
> > >
> >
> > OK, this is driving me crazy: when I start the kernel with kmemleak=on,
> > there is no memory leak at all. The memory usage stays near 300MB.
> >
> > Then I reboot into the same kernel, without kmemleak=on, and the memory
> > usage jumps to 600MB and keeps growing.
> >
> > Olivier
> >
> > PS: I retried several times to confirm this.
>
> *boggles*
>
> Ian.
>
>
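(As an aside, kmemleak's report is exposed through its debugfs file; the
sketch below shows roughly how it can be read. It assumes a kernel built
with CONFIG_DEBUG_KMEMLEAK and debugfs mounted at /sys/kernel/debug, and
is not necessarily the exact procedure used here.)

    # Minimal sketch: trigger a kmemleak scan and dump any suspected leaks.
    # Assumes CONFIG_DEBUG_KMEMLEAK and debugfs mounted at /sys/kernel/debug
    # (adjust the path if it is mounted elsewhere); needs root.
    KMEMLEAK = "/sys/kernel/debug/kmemleak"

    with open(KMEMLEAK, "w") as f:
        f.write("scan")           # ask kmemleak to scan for new leaks now

    with open(KMEMLEAK) as f:
        report = f.read()         # empty output means no suspected leaks

    print(report if report else "no suspected leaks")
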
So I retried the slabtop test, this time with more memory leaked, to get
better visibility:
--- a 2013-08-08 12:29:48.437966407 +0200
+++ c 2013-08-08 13:33:41.213711305 +0200
@@ -1,23 +1,23 @@
- Active / Total Objects (% used) : 186382 / 189232 (98.5%)
- Active / Total Slabs (% used) : 6600 / 6600 (100.0%)
- Active / Total Caches (% used) : 100 / 151 (66.2%)
- Active / Total Size (% used) : 111474.55K / 113631.58K (98.1%)
- Minimum / Average / Maximum Object : 0.33K / 0.60K / 8.32K
+ Active / Total Objects (% used) : 2033635 / 2037851 (99.8%)
+ Active / Total Slabs (% used) : 70560 / 70560 (100.0%)
+ Active / Total Caches (% used) : 101 / 151 (66.9%)
+ Active / Total Size (% used) : 1289959.44K / 1292725.98K (99.8%)
+ Minimum / Average / Maximum Object : 0.33K / 0.63K / 8.32K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
- 55048 55038 99% 0.56K 1966 28 31456K filp
- 29536 29528 99% 0.50K 923 32 14768K cred_jar
- 22909 22909 100% 0.51K 739 31 11824K dentry
+831572 831552 99% 0.56K 29699 28 475184K filp
+501664 501635 99% 0.50K 15677 32 250832K cred_jar
+172453 172432 99% 0.51K 5563 31 89008K dentry
+150920 150906 99% 0.91K 4312 35 137984K proc_inode_cache
+ 54686 54652 99% 0.43K 1478 37 23648K task_delay_info
+ 54656 54651 99% 1.98K 3416 16 109312K task_struct
+ 54652 54651 99% 1.19K 2102 26 67264K task_xstate
+ 54648 54644 99% 0.44K 1518 36 24288K pid
+ 54648 54645 99% 1.38K 2376 23 76032K signal_cache
+ 38200 38188 99% 0.38K 1910 20 15280K kmalloc-64
11803 11774 99% 0.43K 319 37 5104K sysfs_dir_cache
- 7350 7327 99% 0.91K 210 35 6720K proc_inode_cache
- 5520 5465 99% 0.38K 276 20 2208K anon_vma_chain
- 5216 5137 98% 0.50K 163 32 2608K vm_area_struct
- 3984 3978 99% 0.33K 166 24 1328K kmalloc-8
- 3811 3798 99% 0.84K 103 37 3296K inode_cache
- 3384 3359 99% 0.44K 94 36 1504K pid
- 3381 3362 99% 1.38K 147 23 4704K signal_cache
- 3380 3366 99% 1.19K 130 26 4160K task_xstate
- 3376 3366 99% 1.98K 211 16 6752K task_struct
- 3367 3367 100% 0.43K 91 37 1456K task_delay_info
- 2886 2864 99% 0.42K 78 37 1248K buffer_head
- 2720 2714 99% 0.93K 80 34 2560K shmem_inode_cache
+ 7920 7676 96% 0.38K 396 20 3168K anon_vma_chain
+ 7808 7227 92% 0.50K 244 32 3904K vm_area_struct
+ 5624 5581 99% 0.42K 152 37 2432K buffer_head
+ 4316 4308 99% 1.22K 166 26 5312K ext4_inode_cache
+ 3984 3977 99% 0.33K 166 24 1328K kmalloc-8
So in one hour, "filp" and "cred_jar" have eaten a lot of memory.
But I have no idea what they are...
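
If anyone wants to reproduce this kind of comparison, something along these
lines should do it. It's a rough sketch that samples /proc/slabinfo twice
and sorts the caches by how much their total size grew; it assumes the
standard slabinfo 2.x column layout and needs root to read the file:

    #!/usr/bin/env python3
    # Rough sketch: take two /proc/slabinfo samples and report which slab
    # caches grew the most in total size, much like diffing slabtop output.
    import time

    def snapshot():
        sizes = {}
        with open("/proc/slabinfo") as f:
            for line in f.readlines()[2:]:          # skip the two header lines
                fields = line.split()
                num_objs, objsize = int(fields[2]), int(fields[3])
                sizes[fields[0]] = num_objs * objsize   # total bytes per cache
        return sizes

    before = snapshot()
    time.sleep(600)                                 # wait 10 minutes
    after = snapshot()

    growth = sorted(((after[n] - before.get(n, 0), n) for n in after),
                    reverse=True)
    for delta, name in growth[:10]:
        print("%-30s +%d KiB" % (name, delta // 1024))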