From: Juergen Gross
Subject: Re: PV-vNUMA issue: topology is misinterpreted by the guest
Date: Fri, 24 Jul 2015 17:24:57 +0200
Message-ID: <55B258C9.4040400@suse.com>
In-Reply-To: <55B25650.4030402@suse.com>
To: Dario Faggioli
Cc: Elena Ufimtseva, Wei Liu, Andrew Cooper, David Vrabel, Jan Beulich,
 xen-devel@lists.xenproject.org, Boris Ostrovsky
List-Id: xen-devel@lists.xenproject.org

On 07/24/2015 05:14 PM, Juergen Gross wrote:
> On 07/24/2015 04:44 PM, Dario Faggioli wrote:
>> On Fri, 2015-07-24 at 12:28 +0200, Juergen Gross wrote:
>>> On 07/23/2015 04:07 PM, Dario Faggioli wrote:
>>
>>>> FWIW, I was thinking that the kernel was a better place, as Juergen is
>>>> saying, while now I'm more convinced that the tools would be more
>>>> appropriate, as Boris is saying.
>>>
>>> I've collected some information from the Linux kernel sources as a base
>>> for the discussion:
>>>
>> That's great, thanks for this!
>>
>>> The complete NUMA information (cpu->node and memory->node relations) is
>>> taken from the ACPI tables (SRAT, and SLIT for "distances").
>>>
>> Ok. And I already have a question (as I lost track of things a bit).
>> What you just said about ACPI tables is certainly true for baremetal and
>> HVM guests, but what about PV? At the time I was looking into it,
>> together with Elena, there were Linux patches being produced for the PV
>> case, which makes sense.
>> However, ISTR that both Wei and Elena mentioned recently that those
>> patches have not been upstreamed in Linux yet... Is that the case? Maybe
>> not all of them, but at least some of them are there? Because if not,
>> I'm not sure I see how a PV guest would even see a vNUMA topology (which
>> it does).
>>
>> Of course, I can go and check, but since you just looked, you may have
>> it fresh and clear already. :-)
>
> I checked "bottom up", so when I found the ACPI scan stuff I stopped
> searching for how the kernel obtains NUMA info. During my search I found
> no clue of any PV NUMA stuff in the kernel, and a quick "grep -i numa"
> in arch/x86/xen and drivers/xen didn't reveal anything. Same for a
> complete kernel source search for "vnuma".
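
For reference, an easy way to see which cpu->node relation a guest kernel
actually ended up with (whatever source it was built from) is the sysfs
node directories. Below is a minimal, illustrative C sketch; it only
assumes the standard /sys/devices/system/node/nodeN/cpulist layout and is
not taken from any existing tool:

/* Print the cpu->node relation the running kernel exposes via sysfs.
 * Illustrative sketch only. */
#include <ctype.h>
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *base = "/sys/devices/system/node";
    struct dirent *e;
    DIR *d = opendir(base);

    if (!d) {
        perror(base);
        return 1;
    }

    while ((e = readdir(d)) != NULL) {
        char path[256], cpus[256];
        FILE *f;

        /* Only look at the nodeN directories. */
        if (strncmp(e->d_name, "node", 4) != 0 ||
            !isdigit((unsigned char)e->d_name[4]))
            continue;

        snprintf(path, sizeof(path), "%s/%s/cpulist", base, e->d_name);
        f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(cpus, sizeof(cpus), f))
            printf("%s: cpus %s", e->d_name, cpus); /* cpulist ends in \n */
        fclose(f);
    }

    closedir(d);
    return 0;
}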
>
>>
>>> The topology information is obtained via:
>>> - Intel:
>>>   + cpuid leaf 0xb with subleafs, leaf 4
>>>   + cpuid leaf 2 and/or leaf 1 if leaf 0xb and/or leaf 4 isn't available
>>> - AMD:
>>>   + cpuid leaf 0x8000001e, leaf 0x8000001d, leaf 4
>>>   + MSR 0xc001100c
>>>   + cpuid leaf 2 and/or leaf 1 if the leaves above aren't available
>>>
>>> The scheduler is aware of:
>>> - SMT siblings (from topology)
>>> - last-level-cache siblings (from topology)
>>> - node siblings (from NUMA information)
>>>
>> Right. So, this confirms what we were guessing: we need to "reconcile"
>> these two sources of information (from the guest point of view).
>>
>> Both the 'in kernel' and 'in toolstack' approaches should have all the
>> necessary information to make things match, I think. In fact, in the
>> toolstack, we know what the vNUMA topology is (we're parsing it and
>> actually putting it in place!). In the kernel, we know it as we read it
>> from tables or hypercalls (isn't that so, for PV guests?).
>>
>> In fact, I think that it is the topology, i.e., what comes from CPUID
>> and MSRs, that needs to adapt and follow vNUMA, as much as possible.
>> Do we agree on this?
>
> I think we have to be very careful here. I see two possible scenarios:
>
> 1) The vcpus are not pinned 1:1 to physical cpus. The hypervisor will
>    try to schedule the vcpus according to their NUMA affinity, so they
>    can change pcpus at any time in case of very busy guests. I don't
>    think the Linux kernel should treat the cpus differently in this
>    case, as it would be in vain regarding the Xen scheduler's activity.
>    So we should use the "null" topology in this case.

Sorry, the topology should reflect the vcpu<->numa-node relations, of
course, but nothing else (so a flat topology within each NUMA node).

>
> 2) The vcpus of the guest are all pinned 1:1 to physical cpus. The Xen
>    scheduler can't move vcpus between pcpus, so the Linux kernel should
>    see the real topology of the used pcpus in order to optimize for this
>    picture.
>
> This only covers the scheduling aspect, of course.
>
>>
>> IMO, the thing boils down to these:
>>
>> 1) From where (kernel vs. toolstack) is it easiest and most effective
>>    to enact the CPUID fiddling? As in, can we do that in the toolstack?
>>    (Andrew was not so sure, and Boris found issues, although Jan seems
>>    to think they're no showstopper.)
>>    I'm quite certain that we can do that from inside the kernel,
>>    although, how early would we need to be doing it? Do we have the
>>    vNUMA info already at that point?
>>
>> 2) When tweaking the values of CPUID and other MSRs, are there other
>>    vNUMA (and topology in general) constraints and requirements we
>>    should take into account? For instance, do we want, for licensing
>>    reasons, all (or most) of the vcpus to be siblings, rather than full
>>    sockets? Etc.
>>    2a) If yes, how and where are these constraints specified?
>>
>> If looking at 1) only, it still looks to me that doing things within the
>> kernel would be the way to go.
>>
>> When looking at 2), OTOH, the toolstack variants start to be more
>> appealing, especially depending on our answer to 2a). In fact,
>> in case we want to give the user a way to specify this
>> siblings-vs-cores-vs-sockets information, it IMO would be good to deal
>> with that in the tools, rather than having to involve Xen or Linux!
>>
>>> It will especially move tasks from one cpu to another first between SMT
>>> siblings, second between LLC siblings, third between node siblings, and
>>> last among all cpus.
>>>
>> Yep, this part, I knew.
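
To make the Intel part of the detection quoted above a bit more concrete:
leaf 0xb is enumerated subleaf by subleaf, each subleaf reporting a level
type (1 = SMT, 2 = core) and the APIC-ID shift that separates that level
from the next. A small user-space sketch using GCC's <cpuid.h>; this is
only an illustration of what the guest-visible values look like, not the
kernel's actual parsing code:

/* Walk CPUID leaf 0xb subleaves and print the topology levels the
 * detection code would see.  Illustrative sketch only. */
#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx, subleaf;

    if (__get_cpuid_max(0, NULL) < 0xb) {
        printf("CPUID leaf 0xb not available\n");
        return 1;
    }

    for (subleaf = 0; ; subleaf++) {
        unsigned int level_type, shift;

        __cpuid_count(0xb, subleaf, eax, ebx, ecx, edx);

        level_type = (ecx >> 8) & 0xff; /* 0 = invalid, 1 = SMT, 2 = core */
        if (!level_type)
            break;

        shift = eax & 0x1f;             /* APIC-ID shift to the next level */
        printf("subleaf %u: type %u, shift %u, logical cpus %u, x2apic id %u\n",
               subleaf, level_type, shift, ebx & 0xffff, edx);
    }

    return 0;
}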
>>
>> Maybe there is room for "fixing" this at this level, hooking up inside
>> the scheduler code... but I'm shooting in the dark, without having
>> checked whether and how this could really be feasible... should I?
>
> Uuh, I don't think a change of the scheduler on behalf of Xen is really
> appreciated. :-)
>
> I'd rather fiddle with the cpu masks on the different levels to let the
> scheduler do the right thing.
>
>> One thing I don't like about this approach is that it would potentially
>> solve vNUMA and other scheduling anomalies, but...
>>
>>> The cpuid instruction is available for user mode as well.
>>>
>> ...it would not do any good for other subsystems, and user level code
>> and apps.
>
> Indeed. I think the optimal solution would be two-fold: give the
> scheduler the information it needs to react correctly via a kernel
> patch not relying on cpuid values, and fiddle with the cpuid values
> from the Xen tools according to any needs of other subsystems and/or
> user code (e.g. licensing).

Juergen
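
As a rough illustration of the first half of that (giving the scheduler
correct information without relying on cpuid): recent kernels (since about
3.15) already let architecture code replace the scheduler's topology
levels via set_sched_topology(), so a PV guest in the unpinned case could
register a single level built from cpu_cpu_mask(), i.e. flat within each
(v)NUMA node, instead of the default SMT/MC/DIE hierarchy. This is an
untested sketch of the idea only, not an actual patch:

/* Rough sketch: have a PV guest drop the SMT and MC scheduling domains
 * and keep only a per-node level, as discussed for the unpinned case.
 * Untested illustration, not an actual patch. */
#include <linux/init.h>
#include <linux/sched.h>
#include <linux/topology.h>

static struct sched_domain_topology_level xen_pv_topology[] = {
        /* cpu_cpu_mask() is cpumask_of_node(cpu_to_node(cpu)), so this
         * gives a flat topology within each NUMA node. */
        { cpu_cpu_mask, SD_INIT_NAME(DIE) },
        { NULL, },
};

static void __init xen_pv_setup_sched_topology(void)
{
        /* Only meaningful when vcpus are not pinned 1:1 to pcpus. */
        set_sched_topology(xen_pv_topology);
}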