From: Juergen Gross
Subject: Re: PV-vNUMA issue: topology is misinterpreted by the guest
Date: Fri, 24 Jul 2015 18:10:31 +0200
Message-ID: <55B26377.4060807@suse.com>
In-Reply-To: <1437753509.4682.78.camel@citrix.com>
To: Dario Faggioli
Cc: Elena Ufimtseva, Wei Liu, Andrew Cooper, David Vrabel, Jan Beulich,
 "xen-devel@lists.xenproject.org", Boris Ostrovsky
List-Id: xen-devel@lists.xenproject.org

On 07/24/2015 05:58 PM, Dario Faggioli wrote:
> On Fri, 2015-07-24 at 17:24 +0200, Juergen Gross wrote:
>> On 07/24/2015 05:14 PM, Juergen Gross wrote:
>>> On 07/24/2015 04:44 PM, Dario Faggioli wrote:
>
>>>> In fact, I think that it is the topology, i.e., what comes from MSRs,
>>>> that needs to adapt, and follow vNUMA, as much as possible. Do we agree
>>>> on this?
>>>
>>> I think we have to be very careful here. I see two possible scenarios:
>>>
>>> 1) The vcpus are not pinned 1:1 on physical cpus. The hypervisor will
>>>    try to schedule the vcpus according to their numa affinity. So they
>>>    can change pcpus at any time in case of very busy guests. I don't
>>>    think the linux kernel should treat the cpus differently in this
>>>    case, as it would be in vain regarding the Xen scheduler's activity.
>>>    So we should use the "null" topology in this case.
>>
>> Sorry, the topology should reflect the vcpu<->numa-node relations, of
>> course, but nothing else (so flat topology in each numa node).
>>
> Yeah, I was replying to this point saying something like this right
> now... Luckily, I've seen this email! :-P
>
> With this semantics, I fully agree.
>
>>> 2) The vcpus of the guest are all pinned 1:1 to physical cpus. The Xen
>>>    scheduler can't move vcpus between pcpus, so the linux kernel should
>>>    see the real topology of the used pcpus in order to optimize for
>>>    this picture.
>>>
>>
> Mmm... I did think about this too, but I'm not sure. I see the value of
> this of course, and the reason why it makes sense. However, pinning can
> change on-line, via `xl vcpu-pin' and stuff. Also migration could make
> things less certain, I think. What happens if we build on top of the
> initial pinning, and then things change?

If we can fiddle with the masks on boot, we could do it in a running
system, too. Another advantage of not relying on cpuid. :-)

> To be fair, there is stuff building on top of the initial pinning
> already, e.g., from which physical NUMA node we allocate the memory
> depends exactly on that. That being said, I'm not sure I'm
> comfortable with adding more of this...
>
> Perhaps introduce an 'immutable_pinning' flag, which will prevent
> affinity from being changed, and then bind the topology to pinning only
> if that one is set?

Hmm, this would require disabling migration as well. Or enable it with
"--force" only.

>
>>>> Maybe there is room for "fixing" this at this level, hooking up inside
>>>> the scheduler code... but I'm shooting in the dark, without having
>>>> checked whether and how this could really be feasible, should I?
>>>
>>> Uuh, I don't think a change of the scheduler on behalf of Xen is really
>>> appreciated. :-)
>>>
> I'm sure it would (have been! :-)) a true and giant nightmare!! :-D
>
>>>> One thing I don't like about this approach is that it would potentially
>>>> solve vNUMA and other scheduling anomalies, but...
>>>>
>>>>> The cpuid instruction is available for user mode as well.
>>>>>
>>>> ...it would not do any good for other subsystems, and user level code
>>>> and apps.
>>>
>>> Indeed. I think the optimal solution would be two-fold: give the
>>> scheduler the information it needs to react correctly via a
>>> kernel patch not relying on cpuid values, and fiddle with the cpuid
>>> values from the xen tools according to any needs of other subsystems
>>> and/or user code (e.g. licensing).
>>
> So, just to check if my understanding is correct: you'd like to add an
> abstraction layer, in Linux, e.g. in generic (or, perhaps, scheduling)
> code, to hide the direct interaction with CPUID.
> Such a layer, on baremetal, would just read CPUID while, on PV-ops, it'd
> check with Xen/match vNUMA/whatever... Is this what you are saying?

Sort of, yes. I just wouldn't add it, as it already exists (more or
less). It can deal right now with AMD and Intel, we would "just" have
to add Xen.

>
> If yes, I think I like it...

I hope e.g. the KVM guys will like it, too. :-)


Juergen
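For illustration only, here is a rough user-space sketch of the kind of
indirection discussed above (all names and numbers are invented for the
example; this is not actual Linux or Xen code): topology consumers only
talk to a small backend interface, which would be CPUID-derived on bare
metal and filled from vNUMA information in a PV guest, so the
scheduler-facing code never issues cpuid itself.

/*
 * Hypothetical sketch, not real Linux or Xen code.  It only shows the
 * shape of the indirection: consumers ask a "topology backend" for
 * node/sibling information; the backend is either CPUID-based (bare
 * metal) or filled from vNUMA data (PV guest).
 */
#include <stdio.h>

#define MAX_CPUS 8

struct topo_backend {
        const char *name;
        /* which NUMA node a (v)cpu belongs to */
        int (*cpu_to_node)(unsigned int cpu);
        /* whether two (v)cpus should be treated as core/cache siblings */
        int (*cpus_are_siblings)(unsigned int a, unsigned int b);
};

/* --- bare metal: in reality derived from the CPUID topology leaves --- */
static int native_cpu_to_node(unsigned int cpu)
{
        return cpu / 4;                 /* pretend: 4 cpus per node */
}

static int native_cpus_are_siblings(unsigned int a, unsigned int b)
{
        return a / 2 == b / 2;          /* pretend: 2 threads per core */
}

/* --- PV guest: filled from vNUMA info, CPUID is ignored entirely --- */
static int vnuma_vcpu_to_node[MAX_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

static int xen_cpu_to_node(unsigned int cpu)
{
        return vnuma_vcpu_to_node[cpu];
}

static int xen_cpus_are_siblings(unsigned int a, unsigned int b)
{
        /* flat topology inside each node: no siblings beyond the cpu itself */
        return a == b;
}

static const struct topo_backend native_topo = {
        .name = "native/cpuid",
        .cpu_to_node = native_cpu_to_node,
        .cpus_are_siblings = native_cpus_are_siblings,
};

static const struct topo_backend xen_pv_topo = {
        .name = "xen-pv/vnuma",
        .cpu_to_node = xen_cpu_to_node,
        .cpus_are_siblings = xen_cpus_are_siblings,
};

static void dump_topology(const struct topo_backend *tb)
{
        unsigned int cpu;

        printf("backend %s:\n", tb->name);
        for (cpu = 0; cpu < MAX_CPUS; cpu++)
                printf("  cpu %u -> node %d, sibling of cpu0: %s\n",
                       cpu, tb->cpu_to_node(cpu),
                       tb->cpus_are_siblings(cpu, 0) ? "yes" : "no");
}

int main(void)
{
        /* the scheduler-facing code only ever sees the backend interface */
        dump_topology(&native_topo);
        dump_topology(&xen_pv_topo);
        return 0;
}

The point of the sketch is only the separation: which backend gets
installed is decided once at boot (or when the pinning/vNUMA layout
changes), and everything above it stays unaware of raw CPUID values.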