xen-devel.lists.xenproject.org archive mirror
* Re: [PATCH 00 of 10 [RFC]] Automatically place guest on host's NUMA nodes with xl
@ 2013-03-06 10:49 butian huang
  2013-03-06 11:16 ` Dario Faggioli
  0 siblings, 1 reply; 3+ messages in thread
From: butian huang @ 2013-03-06 10:49 UTC (permalink / raw)
  To: dario.faggioli
  Cc: andre.przywara, juergen.gross, Ian.Jackson, xen-devel, stephen,
	JBeulich

Hello,

I am using your patch series "[Xen-devel] [PATCH 00 of 10 [RFC]] Automatically place guest on host's NUMA nodes with xl",
but I have run into a problem. I use one of the three NUMA placement policies to put the VM on a selected NUMA node;
however, according to the output of "xl info -n", the VM's memory is not placed on the selected NUMA node, but is
spread evenly across the four NUMA nodes.
For example, the VM has 2 GiB of memory, and each node ends up holding 512 MiB of it. Why is that?
Thanks,
Regards,
Butian Huang
Zhejiang University

2013-03-06

* [PATCH 00 of 10 [RFC]] Automatically place guest on host's NUMA nodes with xl
@ 2012-04-11 13:17 Dario Faggioli
  0 siblings, 0 replies; 3+ messages in thread
From: Dario Faggioli @ 2012-04-11 13:17 UTC (permalink / raw)
  To: xen-devel
  Cc: Andre Przywara, Ian Campbell, Stefano Stabellini, George Dunlap,
	Juergen Gross, Ian Jackson, Jan Beulich

Hello Everybody,

This is the first take of the automatic placement of guests on the host's NUMA
nodes that I've been working on for a while. Right now, it takes into account
the amount of memory the guest needs compared with the amount of free memory
on the various nodes.
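
Just to make that criterion concrete, the basic filtering step looks something
like the sketch below. This is purely illustrative (node count, numbers and
names are made up here), not code from the series:

  /* Keep only the NUMA nodes whose free memory can accommodate the guest. */
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>

  #define NR_NODES 4

  int main(void)
  {
      /* Hypothetical per-node free memory, in MiB. */
      uint64_t free_mem[NR_NODES] = { 3072, 512, 2048, 1024 };
      uint64_t guest_mem = 2048;   /* memory the guest needs, in MiB */

      for (int node = 0; node < NR_NODES; node++) {
          if (free_mem[node] >= guest_mem)
              printf("node %d can host the guest (%" PRIu64 " MiB free)\n",
                     node, free_mem[node]);
      }
      return 0;
  }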

It's still in [RFC] status, as there are quite a few design choices I'd like
to discuss, and quite a few changes I've made on which I'd really like to have
a second opinion. :-P

Just very quickly, these are refactorings of existing data structures and code,
paving the way for the real "meat":

 1 of 10  libxc: Generalize xenctl_cpumap to just xenctl_map
 2 of 10  libxl: Generalize libxl_cpumap to just libxl_map
 3 of 10  libxc, libxl: Introduce xc_nodemap_t and libxl_nodemap
 4 of 10  libxl: Introduce libxl_get_numainfo() calling xc_numainfo()
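
As an aside, once 4 of 10 is in place, per-node memory information can be
fetched from libxl roughly as sketched below. This is only an illustration:
the field names, the units (I believe size and free are reported in bytes)
and the exact libxl_get_numainfo() signature are assumptions and may not
match the patch exactly:

  #include <stdio.h>
  #include <inttypes.h>
  #include <xentoollog.h>
  #include <libxl.h>

  int main(void)
  {
      xentoollog_logger_stdiostream *lg;
      libxl_ctx *ctx = NULL;
      libxl_numainfo *info;
      int nr_nodes, i;

      /* libxl wants a logger; log only errors to stderr. */
      lg = xtl_createlogger_stdiostream(stderr, XTL_ERROR, 0);
      if (!lg || libxl_ctx_alloc(&ctx, LIBXL_VERSION, 0,
                                 (xentoollog_logger *)lg)) {
          fprintf(stderr, "cannot initialise libxl\n");
          return 1;
      }

      info = libxl_get_numainfo(ctx, &nr_nodes);
      if (!info) {
          fprintf(stderr, "libxl_get_numainfo() failed\n");
          libxl_ctx_free(ctx);
          return 1;
      }

      for (i = 0; i < nr_nodes; i++)
          printf("node %d: size %" PRIu64 " free %" PRIu64 "\n",
                 i, info[i].size, info[i].free);

      libxl_numainfo_list_free(info, nr_nodes);
      libxl_ctx_free(ctx);
      return 0;
  }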


These enable NUMA affinity to be explicitly specified with xl, both via the
config file and on the command line:

 5 of 10  xl: Explicit node affinity specification for guests via config file
 6 of 10  xl: Allow user to set or change node affinity on-line
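
Just to give an idea of what 5 of 10 enables, a guest config could carry an
explicit node affinity along these lines (illustrative only: the option name
"nodes" and its syntax here are assumptions; see the actual patch for the
precise ones):

  # Hypothetical guest config fragment
  name   = "numa-guest"
  memory = 2048
  vcpus  = 4
  nodes  = [ 0, 1 ]   # request affinity to NUMA nodes 0 and 1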


And this is where the fun happens, as these patches contain the core of the
automatic placement logic and the modifications to the (credit) scheduler
needed for taking NUMA node affinity into account:

 7 of 10  sched_credit: Let the scheduler know about `node affinity`
 8 of 10  xl: Introduce First Fit memory-wise placement of guests on nodes
 9 of 10  xl: Introduce Best and Worst Fit guest placement algorithms
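
To make the terminology concrete, this is how the three fitting policies
differ when choosing among the nodes that can host the guest. Again, this is
just an illustrative sketch with invented names, not the actual code from the
series:

  #include <stdint.h>

  #define NR_NODES 4

  enum fit_policy { FIRST_FIT, BEST_FIT, WORST_FIT };

  /* Return the chosen node, or -1 if no node can host the guest. */
  int pick_node(enum fit_policy policy, uint64_t guest_mem,
                const uint64_t free_mem[NR_NODES])
  {
      int chosen = -1;

      for (int node = 0; node < NR_NODES; node++) {
          if (free_mem[node] < guest_mem)
              continue;                    /* not enough free memory here */
          if (policy == FIRST_FIT)
              return node;                 /* first suitable node wins */
          if (chosen == -1 ||
              (policy == BEST_FIT  && free_mem[node] < free_mem[chosen]) ||
              (policy == WORST_FIT && free_mem[node] > free_mem[chosen]))
              chosen = node;               /* tightest / loosest fit so far */
      }
      return chosen;
  }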

Finally, here comes some rationale and user-level documentation:
 10 of 10 xl: Some automatic NUMA placement documentation


Some of the changelogs contain a TODO list, with things that need to be
considered, thought about, or just added, perhaps in the next version of the
series. Also, the various patches have quite a few 'XXX'-marked code
comments, to better highlight the spots where I think I might have done
something scary, or where I would like the discussion to concentrate.
Any feedback on these design and coding decisions (I mean the TODOs and
XXXs) will be of great help to me! :-)

As for the timing... I know we're in feature freeze, and I don't see much about
the issue this series tackles in the release plan. So, I'll be more than happy
if (even just some of) the patches become 4.2 material, and I can commit to
giving them as much testing and benchmarking as possible, but I understand if
this is judged too immature to be considered.

I did some benchmarking of the current performance of Xen on a (small) NUMA
machine, and you can see the results here:

 http://xenbits.xen.org/people/dariof/benchmarks/specjbb2005/

This is _before_ any of these patches, just xen-unstable with plain
vcpu-pinning.  As the changelogs say, I'm benchmarking the various features
this series introduces as well, and I'll share the results as soon as they're
ready.
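
For reference, "plain vcpu-pinning" here means statically binding the guest's
vCPUs to the pCPUs of one node, either in the guest config or at run time. The
values below are just illustrative, not the ones used for these runs:

  # In the guest config: pin all vCPUs to pCPUs 0-3
  cpus = "0-3"

  # Or at run time:
  xl vcpu-pin <domain> all 0-3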

Thanks and Regards,
Dario

-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-------------------------------------------------------------------
Dario Faggioli, Ph.D, http://retis.sssup.it/people/faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)


