From: Juergen Gross
Subject: Re: [PATCH 03 of 10 v2] xen: sched_credit: let the scheduler know about node-affinity
Date: Thu, 20 Dec 2012 09:25:58 +0100
Message-ID: <50D2CB96.4030509@ts.fujitsu.com>
References: <06d2f322a6319d8ba212.1355944039@Solace>
 <50D2B3DE.70206@ts.fujitsu.com>
 <1355991370.28419.15.camel@Abyss>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"; Format="flowed"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1355991370.28419.15.camel@Abyss>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Dario Faggioli
Cc: Marcus Granado, Dan Magenheimer, Ian Campbell, Anil Madhavapeddy,
 George Dunlap, Andrew Cooper, Ian Jackson, xen-devel@lists.xen.org,
 Jan Beulich, Daniel De Graaf, Matt Wilson
List-Id: xen-devel@lists.xenproject.org

On 20.12.2012 09:16, Dario Faggioli wrote:
> On Thu, 2012-12-20 at 07:44 +0100, Juergen Gross wrote:
>> On 19.12.2012 20:07, Dario Faggioli wrote:
>>> [...]
>>>
>>> This change modifies the VCPU load balancing algorithm (for the
>>> credit scheduler only), introducing a two-step logic.
>>> During the first step, we use the node-affinity mask. The aim is
>>> to give precedence to the CPUs where it is known to be preferable
>>> for the domain to run. If that fails to find a valid PCPU, the
>>> node-affinity is just ignored and, in the second step, we fall
>>> back to using cpu-affinity only.
>>>
>>> Signed-off-by: Dario Faggioli
>>> ---
>>> Changes from v1:
>>>  * CPU mask variables moved off the stack, as requested during
>>>    review. As per the comments in the code, having them in the private
>>>    (per-scheduler instance) struct could have been enough, but it would be
>>>    racy (again, see comments). For that reason, use a global bunch of
>>>    them (via per_cpu());
>>
>> Wouldn't it be better to put the mask in the scheduler's private per-pcpu
>> area? This could be applied to several other instances of cpu masks on
>> the stack, too.
>>
> Yes, as I tried to explain, if it's per-cpu it should be fine, since
> credit has one runq per CPU and hence the runq lock is enough for
> serialization.
>
> BTW, can you be a little bit more specific about where you're suggesting
> to put it? I'm sorry, but I'm not sure I understand what you mean by "the
> scheduler private per-pcpu area"... Do you perhaps mean making it a
> member of `struct csched_pcpu'?

Yes, that's what I would suggest.


Juergen

-- 
Juergen Gross                 Principal Developer Operating Systems
PBG PDG ES&S SWE OS6                   Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions           e-mail: juergen.gross@ts.fujitsu.com
Domagkstr. 28                          Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html
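
To make the idea being discussed concrete, below is a minimal, standalone C
sketch of the suggestion: keep a scratch cpumask inside the scheduler's
per-pcpu structure (the analogue of `struct csched_pcpu') and use it for the
two-step pick, first intersecting with the node-affinity and falling back to
plain cpu-affinity if that intersection is empty. All names here
(toy_csched_pcpu, scratch_mask, pick_cpu) and the 64-bit toy mask standing in
for Xen's cpumask_t are illustrative assumptions, not the identifiers used in
the actual patch or in sched_credit.c.

/*
 * Toy model (not Xen code): each physical CPU owns a private scratch mask
 * inside its per-pcpu scheduler data, so intermediate masks never live on
 * the stack and never race with other CPUs.
 */
#include <stdint.h>
#include <stdio.h>

typedef uint64_t cpumask_t;            /* toy mask: up to 64 CPUs */

struct toy_csched_pcpu {
    /* the runq lock would live here in the real scheduler ... */
    cpumask_t scratch_mask;            /* private to this pcpu */
};

static struct toy_csched_pcpu pcpu[64];

/* Pick a CPU for a vcpu: node-affinity first, plain cpu-affinity as fallback. */
static int pick_cpu(int this_cpu, cpumask_t cpu_affinity, cpumask_t node_affinity)
{
    cpumask_t *scratch = &pcpu[this_cpu].scratch_mask;

    /* Step 1: restrict to CPUs satisfying both affinities. */
    *scratch = cpu_affinity & node_affinity;

    /* Step 2: if that leaves nothing, ignore node-affinity. */
    if (*scratch == 0)
        *scratch = cpu_affinity;

    /* Return the first CPU in the resulting mask (real code balances load). */
    for (int cpu = 0; cpu < 64; cpu++)
        if (*scratch & (1ULL << cpu))
            return cpu;
    return -1;                         /* empty cpu-affinity: no valid CPU */
}

int main(void)
{
    /* vcpu may run on CPUs 4-7, but prefers the node holding CPUs 0-3. */
    cpumask_t cpu_aff = 0xF0, node_aff = 0x0F;
    printf("picked CPU %d\n", pick_cpu(0, cpu_aff, node_aff)); /* falls back: 4 */
    return 0;
}

Since the scratch mask belongs to exactly one pcpu and, as noted in the
thread, the credit scheduler takes the runq lock of that pcpu around the
balancing decision, no further serialization would be needed; the same effect
can also be obtained with per_cpu() variables, as the v2 patch does.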