From: Konrad Rzeszutek Wilk <konrad@darnok.org>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xen.org>
Subject: Re: (XEN) page_alloc.c:1148:d0 Over-allocation for domain 0: 694017 > 694016
Date: Tue, 1 May 2012 16:12:52 -0400
Message-ID: <20120501201252.GA29872@andromeda.dapyr.net>
In-Reply-To: <20120427143122.GD9186@phenom.dumpdata.com>

On Fri, Apr 27, 2012 at 10:31:22AM -0400, Konrad Rzeszutek Wilk wrote:
> > How would that be? 2711MiB = 2776064kiB, which is 446k off the value
> > above. And apart from that, the value above isn't even divisible by 4
> 
> I messed up on that. Redid the numbers and I was off.
> 
> > (i.e. not an even number of pages).
> 
> To make this a bit easier I used 'dom0_mem=max:3G', which means
> (with the Swiss-cheese-type E820 map on this Intel box):
> 
> [    0.000000] Released 75745 pages of unused memory
> 
> so I should have 75745 pages left to play with. But what I found is that
> I can only balloon up to 786415 pages, which is 17 pages short of the 786432 goal.
> 
> Here are the steps:
> 
> $ cat `find /sys -name current_kb`                 # balloon driver's current size
> 2842816
> $ echo $((3*1024*1024))                            # 3G expressed in KiB
> 3145728
> $ echo "3145728" > `find /sys -name target_kb`     # ask to balloon up to 3G
> $ cat `find /sys -name current_kb`
> 3145660
> $ xl dmesg | tail
> (XEN) page_alloc.c:1148:d0 Over-allocation for domain 0: 786433 (786432) > 786432
> (XEN) memory.c:133:d0 Could not allocate order=0 extent: id=0 memflags=0 (0 of 17)
> 
> > > Any ideas of what that might be? Could it be that the shared_info, hypercall
> > > page, start_info, xenconsole, and some other ones are the magic 6 pages
> > > which limit how far we can balloon up?
> > 
> > Not likely: the hypercall page is in kernel (image) memory, and there's
> > no console page at all for Dom0.
> 
> 17 pages... Hmm.
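
To recap the arithmetic behind that shortfall:

  3G target : 3*1024*1024 KiB = 3145728 KiB = 786432 pages (4 KiB each)
  reached   :                   3145660 KiB = 786415 pages
  short     :         786432 - 786415       =     17 pages

(As I read page_alloc.c, an allocation is refused as soon as d->tot_pages
plus the requested pages would exceed d->max_pages, which is exactly the
"786433 (786432) > 786432" line above.)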

I am still not exactly sure what the problem is, but by running this on
various machines I found that I can be off by 1, 2, 3, 4, or 17 pages. The
shortfall seemed to vary with the number of ACPI tables that showed up in
the MADT.

So I think what is happening is that the initial domain gets the (shared)
ACPI regions accounted against its d->tot_pages, though I can't pinpoint
the exact piece of hypervisor code that does this.

But what I did do on the Linux side was to use the current_reservation
hypercall (which reports d->tot_pages) and, based on that, populate
exactly start_info->nr_pages - d->tot_pages pages; with that I did not
get any of those errors.
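
In rough form the Linux-side logic is something like this (a sketch only,
not the exact patch, against the 3.x-era headers; pages_left_to_populate()
is just an illustrative name):

#include <xen/interface/xen.h>      /* domid_t, DOMID_SELF */
#include <xen/interface/memory.h>   /* XENMEM_current_reservation */
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op() */
#include <asm/xen/hypervisor.h>     /* xen_start_info */

static long pages_left_to_populate(void)
{
	domid_t domid = DOMID_SELF;
	long tot_pages;

	/* XENMEM_current_reservation hands back d->tot_pages for the caller. */
	tot_pages = HYPERVISOR_memory_op(XENMEM_current_reservation, &domid);
	if (tot_pages < 0)
		return tot_pages;	/* hypercall error */

	/*
	 * nr_pages is what the domain builder gave us; anything the
	 * hypervisor has already accounted against us (the shared ACPI
	 * regions, apparently) eats into the remaining headroom.
	 */
	return (long)xen_start_info->nr_pages - tot_pages;
}

The balloon loop then populates at most that many pages instead of blindly
going all the way up to nr_pages.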

> > 
> > Jan
