xen-devel.lists.xenproject.org archive mirror
* NUMA guest: best-fit-nodes algorithm (was Re: [PATCH 00/11] PV NUMA Guests)
@ 2010-04-23 12:45 Andre Przywara
  2010-04-24  6:51 ` Dulloor
  0 siblings, 1 reply; 2+ messages in thread
From: Andre Przywara @ 2010-04-23 12:45 UTC (permalink / raw)
  To: Dulloor, Cui, Dexuan; +Cc: xen-devel, Nakajima, Jun

Dulloor wrote:
 > Cui, Dexuan <dexuan.cui@intel.com> wrote:
 >> xc_select_best_fit_nodes() decides the "min-set" of host nodes that
 >> will be used for the guest. It only considers the current memory
 >> usage of the system. Maybe we should also consider the cpu load?
 >> And must the number of nodes be 2^n? And how to handle the case
 >> where #vcpu < #vnode?
 >> And it looks like your patches only consider the guest's memory
 >> requirement -- the guest's vcpu requirement is neglected? E.g., a
 >> guest may not need a very large amount of memory while it needs
 >> many vcpus.
 >> xc_select_best_fit_nodes() should consider this when
 >> determining the number of vnode.
 > I agree with you. I was planning to consider vcpu load as the next
 > step. Also, I am looking for a good heuristic. I looked at the
 > nodeload heuristic (currently in xen), but found it too naive.
 > But, if you/Andre think it is a good heuristic, I will add the
 > support. Actually, I think that in the future we should do away
 > with strict vcpu-affinities and rely more on a scheduler with
 > necessary NUMA support to complement our placement strategies.
 >
 > As of now, we don't SPLIT if #vcpu < #vnode. We use STRIPING in
 > that case.
Determining the current load of a node is quite hard to do in Xen at
the moment. If guests are pinned to nodes (which I'd consider
necessary with the current credit scheduler), then using this affinity
is a good heuristic for finding good nodes, at least the best I can
think of. So until we have a NUMA-aware scheduler, we should go with
this solution. Of course, it only measures the theoretical load of a
node and doesn't distinguish between idle and loaded guests. One would
need something like a permanently running xm top to gather statistics
about the guests' load, but that is something for a future patch.
(Or is there a guest load metric already measured in Xen?)

Regards,
Andre.


-- 
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448-3567-12


* Re: NUMA guest: best-fit-nodes algorithm (was Re: [PATCH 00/11] PV NUMA Guests)
  2010-04-23 12:45 NUMA guest: best-fit-nodes algorithm (was Re: [PATCH 00/11] PV NUMA Guests) Andre Przywara
@ 2010-04-24  6:51 ` Dulloor
  0 siblings, 0 replies; 2+ messages in thread
From: Dulloor @ 2010-04-24  6:51 UTC (permalink / raw)
  To: Andre Przywara; +Cc: xen-devel, Nakajima, Jun, Cui, Dexuan

On Fri, Apr 23, 2010 at 8:45 AM, Andre Przywara <andre.przywara@amd.com> wrote:
> Dulloor wrote:
>> Cui, Dexuan <dexuan.cui@intel.com> wrote:
>>> xc_select_best_fit_nodes() decides the "min-set" of host nodes that
>>> will be used for the guest. It only considers the current memory
>>> usage of the system. Maybe we should also consider the cpu load?
>>> And must the number of nodes be 2^n? And how to handle the case
>>> where #vcpu < #vnode?
>>> And it looks like your patches only consider the guest's memory
>>> requirement -- the guest's vcpu requirement is neglected? E.g., a
>>> guest may not need a very large amount of memory while it needs
>>> many vcpus.
>>> xc_select_best_fit_nodes() should consider this when
>>> determining the number of vnode.
>> I agree with you. I was planning to consider vcpu load as the next
>> step. Also, I am looking for a good heuristic. I looked at the
>> nodeload heuristic (currently in xen), but found it too naive.
>> But, if you/Andre think it is a good heuristic, I will add the
>> support. Actually, I think that in the future we should do away
>> with strict vcpu-affinities and rely more on a scheduler with
>> necessary NUMA support to complement our placement strategies.
>>
>> As of now, we don't SPLIT if #vcpu < #vnode. We use STRIPING in
>> that case.
> Determining the current load of a node is quite hard to do in Xen at
> the moment. If guests are pinned to nodes (which I'd consider
> necessary with the current credit scheduler), then using this affinity
> is a good heuristic for finding good nodes, at least the best I can
> think of. So until we have a NUMA-aware scheduler, we should go with
> this solution. Of course, it only measures the theoretical load of a
> node and doesn't distinguish between idle and loaded guests. One would
> need something like a permanently running xm top to gather statistics
> about the guests' load, but that is something for a future patch.
> (Or is there a guest load metric already measured in Xen?)
Yeah, for the current credit scheduler it looks like we can only use
affinity for the load heuristic. I will add that to the node selection
algorithm, similar to how you calculate nodeload.
Gathering guest load statistics over a period of time could be useful
too, but it is unclear how any temporal behaviour could aid a
permanent memory placement.

I have started looking into load balancing and NUMA-related work for
credit2. I hope to send out something in the coming weeks.

>
> Regards,
> Andre.
>
>
> --
> Andre Przywara
> AMD-Operating System Research Center (OSRC), Dresden, Germany
> Tel: +49 351 448-3567-12
>
>
-dulloor

