From: Dulloor <dulloor@gmail.com>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Andre Przywara <andre.przywara@amd.com>,
	xen-devel@lists.xensource.com,
	"Nakajima, Jun" <jun.nakajima@intel.com>
Subject: Re: [vNUMA v2][PATCH 0/8] VM memory mgmt for NUMA
Date: Mon, 2 Aug 2010 10:03:22 -0700
Message-ID: <AANLkTimU9LACopZ-RCRxEZyXis5mzr4_Rg89wz969zGa@mail.gmail.com>
In-Reply-To: <20100802161638.GB6961@phenom.dumpdata.com>

On Mon, Aug 2, 2010 at 9:16 AM, Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
> On Sun, Aug 01, 2010 at 03:00:31PM -0700, Dulloor wrote:
>> Sorry for the delay. I have been busy with other things.
>
> Np. Can you CC these patches in the future to Andre?
> His email is Andre Przywara <andre.przywara@amd.com>
Sure, thanks! :) I should be CC'ing Jun Nakajima too.

>
> In the meantime, I am CC-ing him here.

>>
>> Summary of the patches:
>> In this patch series, we implement the following:
>>
>> [1] Memory allocation schemes for VMs on NUMA platforms: The specific
>> allocation strategies, available as configuration parameters, are listed
>> below (an illustrative config sketch follows the list).
>>
>>         * CONFINE - Confine the VM memory to a single NUMA node.
>>           [config]
>>           strategy = "confine"
>>
>>         * STRIPE - Stripe the VM memory across a specified number of nodes.
>>           [config]
>>           strategy = "stripe"
>>           vnodes = <num>
>>           stripesz = <stripe size, in pages>
>>
>>         * SPLIT - Split the VM memory across a specified number of nodes
>>           to construct virtual nodes, which are then exposed to the VM.
>>           For now, we require the numbers of vnodes and vcpus to be
>>           powers of 2 (for a symmetric distribution), rather than merely
>>           multiples of each other.
>>           [config]
>>           strategy = "split"
>>           vnodes = <num>
>>
>>         * AUTO - Choose a scheme automatically, based on the memory
>>           distribution across the nodes. The strategy attempts CONFINE,
>>           then STRIPE (dividing memory into equal parts); if both fail,
>>           it reverts to the existing non-NUMA allocation.
>>           [config]
>>           strategy = "auto"
>>
>>         * No Configuration - No change from existing behaviour.
>>
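>> As an illustration, a config that splits a 4GB, 4-vcpu guest across two
>> virtual nodes (vcpus and vnodes both powers of 2, per the SPLIT
>> requirement) would look like this; the memory and vcpus values are
>> arbitrary examples, and only strategy and vnodes are new in this series:
>>
>>           memory = 4096
>>           vcpus = 4
>>           strategy = "split"
>>           vnodes = 2
>>
>> The AUTO decision chain, as a sketch (every name below is hypothetical,
>> for illustration only; see xc_dom_numa.c for the real code):
>>
>>           /* Hypothetical names: try strategies in decreasing locality. */
>>           static int numa_pick_auto(struct numa_layout *layout)
>>           {
>>               if (numa_try_confine(layout) == 0)
>>                   return 0;    /* whole guest fits on one node */
>>               if (numa_try_stripe_equal(layout) == 0)
>>                   return 0;    /* memory striped in equal parts */
>>               return 0;        /* revert to existing non-NUMA allocation */
>>           }
>>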
>> [2] HVM NUMA guests: If the user specifies the "split" strategy, we expose
>> the virtual nodes to the HVM guest via ACPI tables (SRAT/SLIT).
>>
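>> Roughly, the per-vnode SRAT entries emitted by hvmloader have the layouts
>> below. This is a sketch following the ACPI spec's affinity structures,
>> not the exact definitions from acpi2_0.h in this series:
>>
>>           #include <stdint.h>
>>
>>           /* One entry per virtual node's memory range. */
>>           struct acpi_srat_memory_affinity {
>>               uint8_t  type;             /* 1 = memory affinity */
>>               uint8_t  length;           /* 40 bytes */
>>               uint32_t proximity_domain; /* vnode id */
>>               uint16_t reserved;
>>               uint64_t base_address;     /* start of the vnode's range */
>>               uint64_t range_length;     /* size of the range */
>>               uint32_t reserved2;
>>               uint32_t flags;            /* bit 0: entry enabled */
>>               uint64_t reserved3;
>>           } __attribute__((packed));
>>
>>           /* One entry per vcpu, mapping its APIC ID to a vnode. */
>>           struct acpi_srat_processor_affinity {
>>               uint8_t  type;                  /* 0 = processor affinity */
>>               uint8_t  length;                /* 16 bytes */
>>               uint8_t  proximity_domain_lo;   /* vnode id (low byte) */
>>               uint8_t  apic_id;               /* vcpu's local APIC ID */
>>               uint32_t flags;                 /* bit 0: entry enabled */
>>               uint8_t  local_sapic_eid;
>>               uint8_t  proximity_domain_hi[3];
>>               uint32_t reserved;
>>           } __attribute__((packed));
>>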
>> [3] Disable migration: For now, the allocation information is not preserved
>> across migration, so we just disable migration. We will address this in the
>> next patch series.
>>
>> [4] PoD (Populate-on-Demand): For now, PoD is disabled internally if a NUMA
>> allocation strategy is specified and applied to a VM. We will address this
>> in the next patch series.
>>
>> Changes from the previous version:
>> [1] The guest interface structure has been modified per Keir's suggestions;
>> most of the changes from the previous version are due to this.
>> [2] Cleaned up debug code in setup_guest (spotted by George).
>>
>> -Dulloor
>>
>> Signed-off-by: Dulloor <dulloor@gmail.com>
>>
>> --
>>  tools/firmware/hvmloader/acpi/acpi2_0.h |   64 ++++++
>>  tools/firmware/hvmloader/acpi/build.c   |  122 ++++++++++++
>>  tools/libxc/Makefile                    |    2 +
>>  tools/libxc/ia64/xc_ia64_hvm_build.c    |    1 +
>>  tools/libxc/xc_cpumap.c                 |   88 +++++++++
>>  tools/libxc/xc_cpumap.h                 |  113 +++++++++++
>>  tools/libxc/xc_dom_numa.c               |  901 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>>  tools/libxc/xc_dom_numa.h               |   73 +++++++
>>  tools/libxc/xc_hvm_build.c              |  574 ++++++++++++++++++++++++++++++++++++++++++++--------------------------
>>  tools/libxc/xenctrl.h                   |   19 +
>>  tools/libxc/xenguest.h                  |    1 +
>>  tools/libxl/libxl.h                     |    1 +
>>  tools/libxl/libxl_dom.c                 |    1 +
>>  tools/libxl/xl_cmdimpl.c                |   44 ++++
>>  tools/python/xen/lowlevel/xc/xc.c       |    2 +-
>>  xen/include/public/arch-x86/dom_numa.h  |   91 +++++++++
>>  xen/include/public/dom_numa.h           |   33 +++
>>  xen/include/public/hvm/hvm_info_table.h |   10 +-
>>  18 files changed, 1954 insertions(+), 186 deletions(-)