From: George Dunlap
Subject: Re: [PATCH RFC v2 0/7] xen: vNUMA introduction
Date: Fri, 13 Sep 2013 12:19:02 +0100
Message-ID: <5232F4A6.9050303@eu.citrix.com>
In-Reply-To: <1379062177-13681-1-git-send-email-ufimtseva@gmail.com>
To: Elena Ufimtseva
Cc: keir@xen.org, Ian.Campbell@citrix.com, stefano.stabellini@eu.citrix.com, dario.faggioli@citrix.com, lccycc123@gmail.com, ian.jackson@eu.citrix.com, xen-devel@lists.xen.org, JBeulich@suse.com, sw@linux.com
List-Id: xen-devel@lists.xenproject.org

On 13/09/13 09:49, Elena Ufimtseva wrote:
> This series of patches introduces vNUMA topology awareness and
> provides interfaces and data structures to enable vNUMA for
> PV domU guests.
>
> vNUMA topology support requires a vNUMA-aware PV guest kernel;
> the corresponding guest-side patches should be applied.
>
> Introduction
> -------------
>
> vNUMA topology is exposed to the PV guest to improve performance when
> running workloads on NUMA machines.
> The Xen vNUMA implementation provides a way to create vNUMA-enabled
> guests on NUMA/UMA machines and to map the vNUMA topology onto the
> physical NUMA topology in an optimal way.
>
> Xen vNUMA support
>
> The current set of patches introduces a subop hypercall that is
> available to enlightened PV guests with the vNUMA patches applied.
>
> The domain structure has been modified to hold the per-domain vNUMA
> topology for use by other vNUMA-aware subsystems (e.g. ballooning).
>
> libxc
>
> libxc provides interfaces to build PV guests with vNUMA support and,
> on NUMA machines, performs the initial memory allocation on physical
> NUMA nodes. This is implemented by utilizing the nodemap formed by
> automatic NUMA placement. Details are in patch #3.
>
> libxl
>
> libxl provides a way to predefine the vNUMA topology in the VM config:
> number of vnodes, memory arrangement, vcpu-to-vnode assignment, and
> distance map.
>
> PV guest
>
> As of now, only PV guests can take advantage of the vNUMA
> functionality. The vNUMA Linux patches should be applied and NUMA
> support should be compiled into the kernel.
>
> Example of booting a vNUMA-enabled PV domU:
>
> NUMA machine:
> cpu_topology :
> cpu:    core    socket    node
>   0:       0        0       0
>   1:       1        0       0
>   2:       2        0       0
>   3:       3        0       0
>   4:       0        1       1
>   5:       1        1       1
>   6:       2        1       1
>   7:       3        1       1
> numa_info :
> node:    memsize    memfree    distances
>    0:      17664      12243    10,20
>    1:      16384      11929    20,10
>
> VM config:
>
> memory = 16384
> vcpus = 8
> name = "rcbig"
> vnodes = 8
> vnumamem = "2g, 2g, 2g, 2g, 2g, 2g, 2g, 2g"
> vcpu_to_vnode = "5 6 7 4 3 2 1 0"

This was a bit confusing for me, as the table above and the config below
it don't seem to match up.

> Patchset applies to the latest Xen tree,
> commit e008e9119d03852020b93e1d4da9a80ec1af9c75
> Available at http://git.gitorious.org/xenvnuma/xenvnuma.git

Thanks for the git repo.  It's probably a good idea in the future to
make a branch for each series of patches you post -- e.g., vnuma-v2 or
something like that -- so that even if you do more updates /
development, people can still have access to the old set of patches.
(Or have access to the old set while you are preparing the new set.)

 -George
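
P.S. Just to check my understanding of the libxl side: would a config
that actually mirrors the machine above look something like the
following?  The vdistance option name and its format are only my guess
at how the distance map mentioned in the cover letter might be
expressed, so correct me if the actual syntax is different.

  memory = 16384
  vcpus = 8
  name = "rcbig"
  vnodes = 2
  vnumamem = "8g, 8g"
  vcpu_to_vnode = "0 0 0 0 1 1 1 1"
  # guessed option name/format for the distance map
  vdistance = "10 20, 20 10"

That is, two vnodes of 8g each, vcpus 0-3 on vnode 0 and 4-7 on vnode 1,
with the distance matrix copied from numa_info.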