From: PGNet Dev <pgnet.dev@gmail.com>
To: George Dunlap <george.dunlap@citrix.com>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch
Date: Mon, 4 Jul 2016 07:58:18 -0700 [thread overview]
Message-ID: <e320d6f1-dbca-9fbf-fc22-06c921c6529c@gmail.com> (raw)
In-Reply-To: <CAFLBxZY7NALLqRig8Bh70Znc4qOJgpQZr1vH+-0YpuRRd7xg6w@mail.gmail.com>
On 07/04/2016 04:22 AM, George Dunlap wrote:
> Thanks for your persistence. :-)
I appreciate the reply :-)
> It's likely that this is related to a known problem with the interface
> between the balloon driver and the toolstack. The warning itself is
> benign: it simply means that the balloon driver asked Xen for another
> page (thinking incorrectly it was a few pages short), and was told
> "No" by Xen.
Reading
https://blog.xenproject.org/2014/02/14/ballooning-rebooting-and-the-feature-youve-never-heard-of/
"... Populate-on-demand comes into play in Xen whenever you start
an HVM guest with maxmem and memory set to different values. ..."
Which sounds like you can turn ballooning in the DomU off.
But, currently, my DomUs are all PVHVM, and all have
maxmem = 2048
memory = 2048
It appears that having 'maxmem' == 'memory' results in the '"No" by Xen'
answer, rather than the balloon driver simply not being used at all.
Which is the intended case?
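If I read that blog post correctly, PoD only comes into play when the two
values differ. A sketch of the distinction as I understand it (values
illustrative, not from my actual configs):

```
# PoD/ballooning engaged: guest boots with 2048M populated,
# balloon driver manages the gap up to the 4096M ceiling
maxmem = 4096
memory = 2048

# my current case: no gap, so (I'd have expected) no balloon activity
maxmem = 2048
memory = 2048
```

Yet it's the second, equal-values case that produces the warnings here.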
> Fixing it properly requires a re-architecting of the interface between
> all the different components that use memory (Xen, qemu, the
> toolstack, the guest balloon driver, &c). This is on the to-do list,
> but since it's quite a complicated problem,
Sounds like the 'fix is in'. Eventually.
> If the log space is an issue for you your best bet for now is to turn
> down the loglevel so that this warning doesn't show up.
It's less an issue of space, and more that the incessant noise makes
picking out actually important/useful debugging info more of a
challenge. These guests are PVHVM-on-EFI, and the host is Xen 4.7+ on
EFI. The combo has its fair share of issues; hence the debugging
loglevels are raised.
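On turning down the loglevel: my reading of docs/misc/xen-command-line is
that guest-context messages are gated by the 'guest_loglvl' boot parameter
(assuming this particular warning is emitted in guest context), e.g. on
the hypervisor command line:

```
# Xen boot options (EFI cfg / GRUB entry): keep full hypervisor logging,
# but show only errors from guest context
loglvl=all guest_loglvl=error
```

Though with the loglevels here deliberately raised for debugging, that
somewhat defeats the purpose.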
> and the main side-effect
> is mostly just warnings like this it hasn't been a high priority.
That there's no functional ill-effect is the valuable info here.
Warning or not, having tens of thousands of them does not signal 'all is
well' ...
btw, is there a relevant tracking bug for this?
Thanks for the comments!
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Thread overview: 12+ messages
2016-06-29 0:06 repeating 'd1v0 Over-allocation for domain 1' messages in xen 4.7 Host logs on PVHVM Guest launch PGNet Dev
2016-06-29 10:07 ` Jan Beulich
2016-06-29 12:58 ` PGNet Dev
2016-06-29 14:10 ` PGNet Dev
2016-06-29 14:17 ` Jan Beulich
2016-06-29 15:38 ` PGNet Dev
2016-06-29 15:59 ` Jan Beulich
2016-06-29 16:27 ` PGNet Dev
2016-07-04 11:22 ` George Dunlap
2016-07-04 14:58 ` PGNet Dev [this message]
2016-07-05 13:35 ` George Dunlap
2016-07-05 14:13 ` PGNet Dev