From: Juergen Gross <jgross@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>,
Ian Jackson <ian.jackson@eu.citrix.com>,
Wei Liu <wei.liu2@citrix.com>, David Scott <dave@recoil.org>
Subject: Re: xenstored memory leak
Date: Wed, 13 Jul 2016 14:21:38 +0200 [thread overview]
Message-ID: <57863252.1070308@suse.com> (raw)
In-Reply-To: <577CB3DA.1090003@suse.com>
On 06/07/16 09:31, Juergen Gross wrote:
> While testing some patches for support of ballooning in Mini-OS by using
> the xenstore domain, I realized that each xl create/destroy pair would
> increase memory consumption in Mini-OS by about 5kB. Wondering whether
> this was a xenstore-domain-only effect, I did the same test with the
> xenstored and oxenstored daemons.
>
> xenstored showed the same behavior: the "referenced" size shown by the
> pmap command grew by about 5kB for each create/destroy pair.
>
> oxenstored seemed even worse at first (about 6kB for each pair), but
> after about 100 create/destroys the value stabilized.
>
> Did anyone notice this memory leak before?
I think I've found the problem:
qemu as the device model sets up a xenstore watch for each backend type
it supports. Unfortunately those watches are never removed again, which
adds up to the observed memory leak.
I'm not sure how oxenstored avoids the problem; maybe by testing whether
socket connections are still alive and thus detecting that qemu has gone.
OTOH this won't help when oxenstored runs in a different domain than the
device model (either due to oxenstore-stubdom, or a driver domain with
a qemu-based device model).
I'll post a qemu patch to remove those watches on exit soon.
To find the problem I've added a debug aid to xenstored: when a special
parameter is specified on invocation, it will dump its memory allocation
structure via talloc_report_full() to a file whenever it receives a
SIGUSR1 signal. Is anybody interested in this patch?
Juergen
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 21+ messages
2016-07-06 7:31 xenstored memory leak Juergen Gross
2016-07-06 13:48 ` Andrew Cooper
2016-07-06 13:55 ` Juergen Gross
2016-07-06 13:59 ` Andrew Cooper
2016-07-07 16:22 ` Wei Liu
2016-07-13 12:21 ` Juergen Gross [this message]
2016-07-13 12:40 ` Andrew Cooper
2016-07-13 13:21 ` Juergen Gross
2016-07-13 13:30 ` Ian Jackson
2016-07-13 13:07 ` Wei Liu
2016-07-13 13:17 ` David Vrabel
2016-07-13 13:32 ` Juergen Gross
2016-07-13 13:37 ` David Vrabel
2016-07-13 14:28 ` Ian Jackson
2016-07-13 14:50 ` Juergen Gross
2016-07-13 13:20 ` Ian Jackson
2016-07-13 13:47 ` Wei Liu
2016-07-13 13:25 ` Juergen Gross
2016-07-13 13:52 ` Wei Liu
2016-07-13 14:09 ` Juergen Gross
2016-07-13 14:18 ` Wei Liu