From: Wei Liu <wei.liu2@citrix.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
Wei Liu <wei.liu2@citrix.com>,
Ian Jackson <ian.jackson@eu.citrix.com>,
David Scott <dave@recoil.org>
Subject: Re: xenstored memory leak
Date: Wed, 13 Jul 2016 14:52:06 +0100 [thread overview]
Message-ID: <20160713135206.GI31770@citrix.com> (raw)
In-Reply-To: <57864159.3090805@suse.com>
On Wed, Jul 13, 2016 at 03:25:45PM +0200, Juergen Gross wrote:
> On 13/07/16 15:07, Wei Liu wrote:
> > On Wed, Jul 13, 2016 at 02:21:38PM +0200, Juergen Gross wrote:
> >> On 06/07/16 09:31, Juergen Gross wrote:
> >>> While testing some patches for support of ballooning in Mini-OS by using
> >>> the xenstore domain I realized that each xl create/destroy pair would
> >>> increase memory consumption in Mini-OS by about 5kB. Wondering whether
> >>> this is a xenstore domain only effect I did the same test with xenstored
> >>> and oxenstored daemons.
> >>>
> >>> xenstored showed the same behavior: the "referenced" size shown by
> >>> the pmap command grew by about 5kB for each create/destroy pair.
> >>>
> >>> oxenstored seemed to be even worse in the beginning (about 6kB for each
> >>> pair), but after about 100 create/destroys the value seemed to be
> >>> rather stable.
> >>>
> >>> Did anyone notice this memory leak before?
> >>
> >> I think I've found the problem:
> >>
> >> qemu as the device model is setting up a xenstore watch for each backend
> >> type it supports. Unfortunately those watches are never removed
> >> again. This adds up to the observed memory leak.
> >>
> >> I'm not sure how oxenstored is avoiding the problem, maybe by testing
> >> whether socket connections are still alive and so detecting that qemu
> >> has gone. OTOH this won't help for oxenstored running in a different
> >> domain from the device model (either due to oxenstore-stubdom, or a
> >> driver domain with a qemu based device model).
> >>
> >
> > How unfortunate.
> >
> > My gut feeling is that xenstored shouldn't have the knowledge to
> > associate a watch with a "process". The concept of a process is only
> > meaningful to the OS, which wouldn't work in a cross-domain xenstored
> > setup.
>
> Right.
>
> > Maybe the OS xenbus driver should reap all watches on behalf of the
> > dead process. This would also avoid a crashed QEMU leaking resources.
> >
> > And xenstored should have proper quota support so that a domain can't
> > set up excessive numbers of watches.
>
> This would be dom0 unless you arrange for the device model to be
> accounted to the domid it is running for. But this is problematic with
> a xenstore domain again.
>
The quota could be based on "connection" (ring or socket) and
counted per connection? Just throwing ideas around, not necessarily
saying this is the way to go.
Wei.
>
> Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 21+ messages
2016-07-06 7:31 xenstored memory leak Juergen Gross
2016-07-06 13:48 ` Andrew Cooper
2016-07-06 13:55 ` Juergen Gross
2016-07-06 13:59 ` Andrew Cooper
2016-07-07 16:22 ` Wei Liu
2016-07-13 12:21 ` Juergen Gross
2016-07-13 12:40 ` Andrew Cooper
2016-07-13 13:21 ` Juergen Gross
2016-07-13 13:30 ` Ian Jackson
2016-07-13 13:07 ` Wei Liu
2016-07-13 13:17 ` David Vrabel
2016-07-13 13:32 ` Juergen Gross
2016-07-13 13:37 ` David Vrabel
2016-07-13 14:28 ` Ian Jackson
2016-07-13 14:50 ` Juergen Gross
2016-07-13 13:20 ` Ian Jackson
2016-07-13 13:47 ` Wei Liu
2016-07-13 13:25 ` Juergen Gross
2016-07-13 13:52 ` Wei Liu [this message]
2016-07-13 14:09 ` Juergen Gross
2016-07-13 14:18 ` Wei Liu