From: George Dunlap <george.dunlap@eu.citrix.com>
Subject: Re: [PATCH RESEND 05/12] xen: numa-sched: make space for per-vcpu node-affinity
Date: Tue, 5 Nov 2013 15:11:15 +0000
Message-ID: <52790A93.4020903@eu.citrix.com>
References: <20131105142844.30446.78671.stgit@Solace> <20131105143500.30446.9976.stgit@Solace> <5279143702000078000FFB15@nat28.tlf.novell.com> <527908B2.5090208@eu.citrix.com>
In-Reply-To: <527908B2.5090208@eu.citrix.com>
To: Jan Beulich
Cc: Marcus Granado, Justin Weaver, Ian Campbell, Li Yechen, Andrew Cooper, Dario Faggioli, Ian Jackson, Matt Wilson, xen-devel, Daniel De Graaf, Keir Fraser, Elena Ufimtseva, Juergen Gross
List-Id: xen-devel@lists.xenproject.org

On 11/05/2013 03:03 PM, George Dunlap wrote:
> On 11/05/2013 02:52 PM, Jan Beulich wrote:
>>>>> On 05.11.13 at 15:35, Dario Faggioli wrote:
>>> @@ -197,6 +199,13 @@ struct vcpu
>>>      /* Used to restore affinity across S3. */
>>>      cpumask_var_t cpu_affinity_saved;
>>>
>>> +    /*
>>> +     * Bitmask of CPUs on which this VCPU prefers to run. For both this
>>> +     * and auto_node_affinity access is serialized against
>>> +     * v->domain->node_affinity_lock.
>>> +     */
>>> +    cpumask_var_t node_affinity;
>>
>> This all looks quite sensible, except for the naming here: We
>> already have a node_affinity field in struct domain, having a
>> meaning that one can expect with this name. So you break
>> both consistency and the rule of least surprise here. How
>> about just "preferred_cpus"?
>
> Actually, would it make more sense to remove node_affinity from the
> domain struct, and have the tools manually set the node_affinity for
> the various vcpus if the user attempts to set the "domain numa
> affinity"?

Sorry, speaking before I had thought it through. Of course we need a
NUMA affinity for the domain for allocating memory.

How about "cpu_node_affinity"? Or, we could internally change the names
to "cpu_hard_affinity" and "cpu_soft_affinity", since that's effectively
what the scheduler will do. It's possible someone might want to set soft
affinities for some other reason besides NUMA performance.

 -George