From: Andre Przywara <andre.przywara@amd.com>
To: Dulloor <dulloor@gmail.com>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>
Subject: Re: [vNUMA v2][PATCH 6/8] Build NUMA HVM
Date: Fri, 13 Aug 2010 17:24:46 +0200 [thread overview]
Message-ID: <4C6563BE.50101@amd.com> (raw)
In-Reply-To: <AANLkTika2fMf8a2dhbPYnUTPNrK=hzb0JM_rMp5qUeDk@mail.gmail.com>
Dulloor wrote:
> Allocate the memory for the HVM based on the scheme and the selection
> of nodes. Also, disable PoD for NUMA allocation schemes.
>
Sorry for the delay; I finally found some time to play a bit with the code.
It looks quite mature to me, so sometimes it is hard to see why things
were done in a certain way, although it mostly becomes clearer later on.
Some general comments:
1. I didn't manage to get striping to work. I tried several settings;
they all ended up in an almost endless loop of:
xc: info: PHYSICAL MEMORY ALLOCATION (NODE {7,6,4,5}):
4KB PAGES: 0x00000000000000c0
2MB PAGES: 0x0000000000000000
1GB PAGES: 0x0000000000000000
and then it stopped creating the guest. I didn't investigate further, though.
2. I don't like the limitation imposed on the guest's NUMA layout.
Requiring the number of nodes and the number of VCPUs to be a power of 2
is too restrictive in my eyes. My older code could cope with wild
combinations of memory, nodes and VCPUs. I remember testing a rather
big matrix, including things like 3.5 GB of memory over 3 nodes and 5 VCPUs.
As your patches 6 and 7 touch my work anyway, I'd also volunteer to fix this
by basically rebasing my code onto your foundation. I left out the SLIT
part in the first round, but I suppose this could easily be added at
the end.
I have already started to hack on this and moved the "hole-punching" (VGA
hole and PCI hole) from libxc into hvmloader. I then removed the
limitation check and tried some setups, although there still seems to be
an issue with the memory layout, as the guest Linux kernel crashes early
(although the same guest setup works with QEMU). A rough sketch of the
node-splitting scheme I have in mind follows below.
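For illustration, a minimal sketch (not the actual patch code; the helper
name and interface are made up) of the kind of rounding-free split I mean:
divide the guest's page count evenly over an arbitrary number of vnodes
and spread the remainder over the first nodes, so no power-of-2
restriction is needed:

#include <stdint.h>
#include <stdio.h>

/* Split nr_pages as evenly as possible across nr_vnodes vnodes. */
static void split_pages_across_vnodes(uint64_t nr_pages,
                                      unsigned int nr_vnodes,
                                      uint64_t vnode_pages[])
{
    uint64_t base = nr_pages / nr_vnodes;
    uint64_t rem  = nr_pages % nr_vnodes;
    unsigned int i;

    for (i = 0; i < nr_vnodes; i++)
        vnode_pages[i] = base + (i < rem ? 1 : 0);
}

int main(void)
{
    /* Example: 3.5 GB (0xE0000 4k pages) over 3 vnodes, as in the matrix above. */
    uint64_t pages[3];
    unsigned int i;

    split_pages_across_vnodes(0xE0000, 3, pages);
    for (i = 0; i < 3; i++)
        printf("vnode %u: %#llx pages\n", i, (unsigned long long)pages[i]);
    return 0;
}

With something like that, 3.5 GB over 3 nodes just gives the first nodes
one extra page each instead of failing the power-of-2 check.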
3. Is it really necessary to clutter the hvm_info_table with so much
information? Until now it has been really small and static. I'd prefer to
only put in the values that are really needed: the vCPU->vnode mapping,
the per-vnode memory size and the SLIT information.
AFAIK there is no compatibility promise for this interface between
hvmloader and the Xen tools, so we could even declare the arrays here
statically at compile time.
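To make that concrete, a sketch of what I mean by "only the values really
needed" could look like the following (field names and the limits are
purely illustrative, not a proposal for the actual hvm_info_table layout):

#include <stdint.h>

#define HVM_MAX_VCPUS   128   /* illustrative compile-time limits only */
#define HVM_MAX_VNODES    8

struct hvm_info_numa {
    uint8_t  nr_vnodes;                         /* number of virtual NUMA nodes */
    uint8_t  vcpu_to_vnode[HVM_MAX_VCPUS];      /* vCPU -> vnode mapping */
    uint64_t vnode_mem_mb[HVM_MAX_VNODES];      /* memory per vnode, in MB */
    uint8_t  slit[HVM_MAX_VNODES * HVM_MAX_VNODES]; /* node distances (SLIT) */
};

Since the interface is private between the tools and hvmloader, fixed-size
arrays like these would keep the table small and simple.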
Regards,
Andre.
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12