From: George Dunlap
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Tue, 5 Nov 2013 16:56:06 +0000
Message-ID: <52792326.4050206@eu.citrix.com>
In-Reply-To: <5279114B.9080405@eu.citrix.com>
References: <20131105142844.30446.78671.stgit@Solace> <20131105143500.30446.9976.stgit@Solace> <5279143702000078000FFB15@nat28.tlf.novell.com> <527908B2.5090208@eu.citrix.com> <52790A93.4020903@eu.citrix.com> <52791B8702000078000FFBC4@nat28.tlf.novell.com> <5279114B.9080405@eu.citrix.com>
To: Jan Beulich
Cc: Marcus Granado, Justin Weaver, Ian Campbell, Li Yechen, Andrew Cooper, Dario Faggioli, Ian Jackson, Matt Wilson, xen-devel, Daniel De Graaf, Keir Fraser, Elena Ufimtseva, Juergen Gross

On 11/05/2013 03:39 PM, George Dunlap wrote:
> On 11/05/2013 03:23 PM, Jan Beulich wrote:
>>>>> On 05.11.13 at 16:11, George Dunlap wrote:
>>> Or, we could internally change the names to "cpu_hard_affinity" and
>>> "cpu_soft_affinity", since that's effectively what the scheduler will
>>> do. It's possible someone might want to set soft affinities for some
>>> other reason besides NUMA performance.
>>
>> I like that.
>
> A potential problem with that is the "auto domain numa" thing. In this
> patch, if the domain numa affinity is not set but the vcpu numa affinity
> is, the domain numa affinity (which will be used to allocate memory for
> the domain) will be set based on the vcpu numa affinity. That seems like
> a useful feature (though perhaps it's starting to violate the "policy
> should be in the tools" principle). If we change this to just "hard
> affinity" and "soft affinity", we'll lose the natural logical connection
> there. It might have an impact on how we end up doing vNUMA as well. So
> I'm a bit torn ATM.
>
> Dario, any thoughts?

[Coming back after going through the whole series]

This is basically the main architectural question that needs to be
sorted out for the series: do we bake in that the "soft affinity" is
specifically for NUMA-ness, or not?

The patch as it stands does make this connection, and that has several
implications:

* There is no longer a concept of a separate "domain numa affinity"
  (Patch 06); the domain numa affinity is just a pre-calculated union
  of the vcpu affinities (a rough sketch of this is in the P.S. below).

* The interface to this "soft affinity" is a bitmask of numa nodes,
  not a bitmask of cpus.

If we're OK with that direction, then I think this patch series looks
pretty good.

Release-wise, I think that as long as we're OK with libxl providing a
"set_vcpu_numa_affinity", we can always come back and change the
implementation later if we want to maintain that distinction
internally.

 -George
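
P.S. Just to make the first bullet above concrete, here is a minimal,
purely illustrative sketch of "domain node affinity as the union of the
per-vcpu soft affinities". The names (vcpu_info, soft_affinity,
domain_node_affinity) are made up for the example and are not the
actual fields or helpers from the series:

/* Illustrative sketch only -- not the code from the series.
 * Derives a domain-wide node affinity as the union of each vcpu's
 * "soft" (NUMA) affinity bitmask. */
#include <stdio.h>

typedef unsigned long long nodemask_t;   /* one bit per NUMA node */

struct vcpu_info {
    nodemask_t soft_affinity;   /* nodes this vcpu prefers to run on */
};

/* Union of all per-vcpu soft affinities; if no vcpu expresses a
 * preference, fall back to "all nodes". */
static nodemask_t domain_node_affinity(const struct vcpu_info *vcpus,
                                       unsigned int nr_vcpus)
{
    nodemask_t mask = 0;
    unsigned int i;

    for ( i = 0; i < nr_vcpus; i++ )
        mask |= vcpus[i].soft_affinity;

    return mask ? mask : ~0ULL;
}

int main(void)
{
    /* vcpu0 prefers node 0; vcpu1 prefers nodes 1 and 2. */
    struct vcpu_info vcpus[2] = {
        { .soft_affinity = 1ULL << 0 },
        { .soft_affinity = (1ULL << 1) | (1ULL << 2) },
    };

    /* Expected union: nodes {0,1,2} -> 0x7 */
    printf("domain node affinity = %#llx\n",
           domain_node_affinity(vcpus, 2));
    return 0;
}

The point being that once the per-vcpu soft affinities exist, the
domain-level value is derived, not independently stored.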