From: Dario Faggioli <dario.faggioli@citrix.com>
To: Saurabh Mishra <saurabh.globe@gmail.com>
Cc: xen-devel@lists.xen.org
Subject: Re: Xen NUMA memory allocation policy
Date: Wed, 18 Dec 2013 02:38:29 +0100
Message-ID: <1387330709.3880.52.camel@Solace>
In-Reply-To: <CAMnwyJ1KTyouCJVBh7YMPG4aezpdvDhtY-1VxpgR_Y1AV5RpxA@mail.gmail.com>



On Tue, 2013-12-17 at 11:41 -0800, Saurabh Mishra wrote:
> Hi --
>
Hi,

> We are using Xen 4.2.2_06 on SLES SP3 Updates and wanted to know if
> there is a simple way to gather information about physical pages
> allocated for a HVM guest. 
>
In general, no, there is no simple way to retrieve such information.
Actually, putting something together that would allow one to get much
more info on the memory layout of a guest (wrt NUMA) has been on my
TODO list for quite some time, but I haven't got there yet... I'll get
there eventually, and any help is appreciated! :-)

> We are trying to figure whether XL is better off in allocating
> contiguous huge/large pages for a guest or XM. I guess it does not
> matter since Xen's hypervisor would be implementing page allocation
> polices.
> 
Indeed. What changes between xl and xm/xend is whether and how they
build up a vcpu-to-pcpu pinning mask when the domain is created. In
fact, as of now, that is all that matters as far as allocating pages on
NUMA nodes (which happens in the hypervisor) is concerned.

In both cases, if you specify a vcpu-to-pcpu pinning mask in the domain
config file, it is passed directly to the hypervisor, which then
allocates memory by striping the pages across the NUMA nodes to which
the pcpus in the mask belong.
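
To make that concrete, here is a minimal sketch of what such a pinning
looks like in an xl domain config file (the file name and the pcpu
range are purely illustrative):

    # guest.cfg -- illustrative snippet, not a complete config
    vcpus  = 4
    memory = 4096
    # pin all vcpus to pcpus 0-7: the guest's memory is then striped
    # across the NUMA node(s) those pcpus belong to
    cpus   = "0-7"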

Also, in case no pinning is specified in the config file, both
toolstacks try to come up with the best possible placement of the new
guest on the host NUMA nodes, build up a suitable vcpu-to-pcpu pinning
mask, pass it to the hypervisor, and... see above. :-)
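
If you want to check what pinning the automatic placement ended up
with, something like the following should do (the domain name is just
an example):

    xl vcpu-list mydomu

and then look at the affinity column of the output.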

What differs between xl and xm is the algorithm used to come up with
such an automatic placement (i.e., both algorithms are based on
heuristics, but the heuristics are different). I'd say that xl's
algorithm is better, but that's a very biased opinion, as I'm the one
who wrote it! :-P
However, since xl is the default toolstack, while xm is already
deprecated and will soon not even be built by default, my advice is
definitely to try xl and, if anything doesn't work or seems wrong,
please report it here (putting me in Cc).

Hope this clarifies things a bit for you...

> With xl debug-key u, we know how much memory was allocated from each
> NUMA node, but we would also like to know whether how much of them
> were huge pages and were they contiguous or not. 
>
I'm not aware of any tool giving this sort of information.
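
(Just for completeness, the per-node breakdown you quote below is what
the hypervisor prints on its console when poked with that debug key,
roughly:

    xl debug-key u
    xl dmesg | tail -n 50

but that only gives per-node totals, nothing about huge pages or
contiguity.)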

> Basically we need to retrieve machine pfn and VM's pfn to do some
> comparison.
> 
Well, at some point, for debugging and understanding purposes, I wrote
something called xen-mfndump, which is in tree:
http://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=ae763e4224304983a1cde2fbb3d6e0c4d60b2688

It does allow you to get some insight into PFNs and MFNs, but not as
much as you need, I'm afraid (not to mention that I wrote it mostly with
PV guests in mind, and tested it mostly on them).

> (XEN) Memory location of each domain:
> (XEN) Domain 0 (total: 603765):
> (XEN)     Node 0: 363652
> (XEN)     Node 1: 240113
> (XEN) Domain 1 (total: 2096119):
> (XEN)     Node 0: 1047804
> (XEN)     Node 1: 1048315
> (XEN) Domain 2 (total: 25164798):
> (XEN)     Node 0: 12582143
> (XEN)     Node 1: 12582655
> 
> 
Mmm... BTW, if I may ask, what do the config files for these domains look like?

Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)

