From: Juergen Gross
Subject: Re: PV-vNUMA issue: topology is misinterpreted by the guest
Date: Wed, 29 Jul 2015 08:04:13 +0200
To: Dario Faggioli
Cc: Elena Ufimtseva, Wei Liu, Andrew Cooper, David Vrabel, Jan Beulich,
 "xen-devel@lists.xenproject.org", Boris Ostrovsky

On 07/28/2015 06:17 PM, Dario Faggioli wrote:
> On Tue, 2015-07-28 at 17:11 +0200, Juergen Gross wrote:
>> On 07/28/2015 06:29 AM, Juergen Gross wrote:
>
>>> I'll make some performance tests on a big machine (4 sockets, 60 cores,
>>> 120 threads) regarding topology information:
>>>
>>> - bare metal
>>> - "random" topology (like today)
>>> - "simple" topology (all vcpus regarded as equal)
>>> - "real" topology with all vcpus pinned
>>>
>>> This should show:
>>>
>>> - how intrusive would the topology patch(es) be?
>>> - what is the performance impact of a "wrong" scheduling database?
>>
>> On the above box I used a pvops kernel 4.2-rc4 plus a rather small patch
>> (see attachment). I did 5 kernel builds in each environment:
>>
>> make clean
>> time make -j 120
>>
> Right. If you have time, can you try '-j60' and '-j30' (maybe even -j45
> and -j15, if you've got _a_lot_ of time! :-)).

The test machine can do this without me watching, so I've just started
the first configuration...

> I'm asking this because, with hyperthreading involved, I've sometimes
> seen things being worse when *not* (over)saturating the CPU
> capacity.

Hmm, oversaturation shouldn't happen here. I've added -j 240 to let it
happen.
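Roughly what I've set running for the sweep is something like the
following (just a sketch; the kernel tree path and the log file names
are placeholders, not what I actually use):

    #!/bin/sh
    # Run 5 builds for each make -j level and record the wall clock
    # times via GNU time. Tree path and log names are placeholders.
    cd /path/to/linux-4.2-rc4 || exit 1
    for j in 15 30 45 60 120 240; do
        for run in 1 2 3 4 5; do
            make clean >/dev/null
            /usr/bin/time -o "build-j${j}-run${run}.log" \
                make -j "$j" >/dev/null 2>&1
        done
    done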
...

> So, basically, as far as Dom0 on my test box is concerned, "random"
> actually matches the host topology.

Okay, I have to check that on my box. I think I'll have another try
with a domU. This could be much more "random" than dom0.

> Sure, without pinning, this looks equally wrong, as Xen's scheduler can
> well execute, say, vcpu 0 and vcpu 4, which are not siblings, on the
> same core. But then again, if the load is small, it just won't happen
> (e.g., if there are only those two busy vcpus, Xen will send them to
> !sibling cores), while if it's too huge, it won't matter... :-/
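Right. For the "real" topology case I'm avoiding exactly that by
pinning each vcpu to the matching host thread, along these lines (the
domain name and the identity vcpu->pcpu mapping are just an example,
of course):

    # Pin vcpu N of the guest to pcpu N, so vcpus the guest believes
    # to be siblings really share a host core.
    for v in $(seq 0 119); do
        xl vcpu-pin domU "$v" "$v"
    done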