From: Juergen Gross <jgross@suse.com>
To: Wei Liu <wei.liu2@citrix.com>,
	xen-devel@lists.xenproject.org,
	David Vrabel <david.vrabel@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	Ian Campbell <ian.campbell@citrix.com>
Subject: Re: Linux 4.1 reports wrong number of pages to toolstack
Date: Fri, 4 Sep 2015 05:38:13 +0200
Message-ID: <55E91225.4090500@suse.com>
In-Reply-To: <20150904004039.GA23402@zion.uk.xensource.com>

On 09/04/2015 02:40 AM, Wei Liu wrote:
> Hi David
>
> This issue is exposed by the introduction of migration v2. The symptom is that
> a guest with a 32-bit 4.1 kernel can't be restored because it's asking for too
> many pages.
>
> Note that all guests have 512MB of memory, which means they have 131072 pages.
>
> Both 3.14 tests [2] [3] get the correct number of pages. For example:
>
>     xc: detail: max_pfn 0x1ffff, p2m_frames 256
>     ...
>     xc: detail: Memory: 2048/131072    1%
>     ...
>
> However, in both 4.1 tests [0] [1] the number of pages is quite wrong.
>
> 4.1 32-bit:
>
>     xc: detail: max_pfn 0xfffff, p2m_frames 1024
>     ...
>     xc: detail: Memory: 11264/1048576    1%
>     ...
>
> It thinks it has 4096MB of memory.
>
> 4.1 64-bit:
>
>     xc: detail: max_pfn 0x3ffff, p2m_frames 512
>     ...
>     xc: detail: Memory: 3072/262144    1%
>     ...
>
> It thinks it has 1024MB of memory.
>
> The total number of pages is determined in libxc by calling
> xc_domain_nr_gpfns, which yanks shared_info->arch.max_pfn from the
> hypervisor. And that value is clearly modified by Linux in some way.

Sure. shared_info->arch.max_pfn holds the number of pfns the p2m list
can handle. This is not the memory size of the domain.
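
To make the numbers from your logs concrete, here is the arithmetic as a
small stand-alone C program (just an illustrative sketch, not the actual
kernel code; it only assumes 4K pages and the p2m entry size implied by
the guest's word width):

    /* p2m list capacity vs. domain size -- illustration only. */
    #include <stdio.h>

    #define PAGE_SIZE 4096ULL

    int main(void)
    {
        /* One p2m frame is a page of mfn entries: 4 bytes on a
         * 32-bit guest, 8 bytes on a 64-bit guest. */
        unsigned long long entries32 = PAGE_SIZE / 4;  /* 1024 */
        unsigned long long entries64 = PAGE_SIZE / 8;  /*  512 */

        /* p2m_frames as reported by the 4.1 kernels in the logs. */
        unsigned long long frames32 = 1024, frames64 = 512;

        /* Capacity of the p2m list in pfns -- this is what ends up in
         * shared_info->arch.max_pfn, not the domain's actual
         * allocation of 131072 pages. */
        printf("32-bit: %llu pfns = %llu MB\n",
               frames32 * entries32,
               frames32 * entries32 * PAGE_SIZE >> 20);
        printf("64-bit: %llu pfns = %llu MB\n",
               frames64 * entries64,
               frames64 * entries64 * PAGE_SIZE >> 20);
        return 0;
    }

This prints 1048576 pfns / 4096 MB and 262144 pfns / 1024 MB, i.e.
exactly the bogus totals the toolstack computed.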

> I now think this is a bug in Linux kernel. The biggest suspect is the
> introduction of the linear P2M. If you think this is a bug in the
> toolstack, please let me know.

I absolutely think it is a toolstack bug. Even without the linear p2m,
things would go wrong when a ballooned-down guest is migrated, as
shared_info->arch.max_pfn would then hold the guest's upper memory
limit and not its current size.
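
Just to illustrate the direction I mean (a rough sketch only; the helper
name is made up, but xc_domain_getinfo() and the nr_pages field are the
existing libxc interface; whether migration v2 should really obtain the
value this way is a separate discussion):

    /* Sketch: ask the hypervisor for the pages actually allocated
     * to the domain instead of trusting shared_info->arch.max_pfn. */
    #include <xenctrl.h>

    static int get_allocated_pages(xc_interface *xch, uint32_t domid,
                                   unsigned long *nr_pages)
    {
        xc_dominfo_t info;

        /* xc_domain_getinfo() returns the number of domains found;
         * verify we really got the one we asked for. */
        if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 ||
             info.domid != domid )
            return -1;

        /* nr_pages reflects the current allocation, so it tracks
         * ballooning, unlike shared_info->arch.max_pfn. */
        *nr_pages = info.nr_pages;
        return 0;
    }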


Juergen


Thread overview: 15+ messages
2015-09-04  0:40 Linux 4.1 reports wrong number of pages to toolstack Wei Liu
2015-09-04  3:38 ` Juergen Gross [this message]
2015-09-04  8:28   ` Jan Beulich
2015-09-04  9:35     ` Andrew Cooper
2015-09-04 11:35       ` Wei Liu
2015-09-04 18:39         ` Andrew Cooper
2015-09-04 19:46           ` Wei Liu
2015-09-04 20:32             ` Andrew Cooper
2015-09-04 11:40     ` Wei Liu
2015-09-04  8:53 ` Ian Campbell
2015-09-04  9:28   ` Ian Campbell
2015-09-04 14:42   ` David Vrabel
2015-09-04 14:53     ` Wei Liu
2015-09-04 14:58       ` David Vrabel
2015-09-07  7:09   ` Jan Beulich
