From: Tim Chen
To: speck for Thomas Gleixner <speck@linutronix.de>
Date: Tue, 29 May 2018 12:29:09 -0700
Subject: Re: L1D-Fault KVM mitigation

On 05/26/2018 12:14 PM, speck for Thomas Gleixner wrote:
> On Thu, 24 May 2018, speck for Tim Chen wrote:
>
>> [...] of load time.
>>
>> We may need to do the co-scheduling only when the VM exit rate is low, and
>> turn off SMT when the VM exit rate becomes too high.
>
> You cannot do that during runtime. That will destroy placement schemes and
> whatever. The SMT off decision needs to be done at a quiescent moment,
> i.e. before starting VMs.

Taking SMT offline is a bit much and too big a hammer. Andi and I thought
instead about having the scheduler force the sibling thread idle for the
high VM exit rate scenario.
We don't have to bother with syncing against the other, idle thread. But
we do have a fairness issue, as we will be starving the other run queue.

> Running the same compile single threaded (offlining vCPU1 in the guest)
> increases the time to 107 seconds.
>
> 107 / 88 = 1.22
>
> I.e. it's 20% slower than the one using two threads. That means that it is
> the same slowdown as having two threads synchronized (your number).

Yes, with the compile workload, the HT speedup was mostly eaten up by the
synchronization overhead.

> So if I take the above example and assume that the overhead of
> synchronization is ~20% then the average vmenter/vmexit time is close to
> 50us.
>
> So I can see the usefulness for scenarios which David Woodhouse described
> where vCPU and host CPU have a fixed relationship and the guests exit once
> in a while. But that should really be done with ucode assistance which
> avoids all the nasty synchronization hackery more or less completely.

The ucode guys are looking into such possibilities. It is tough, as they
have to work within the constraint of limited ucode headroom.

Thanks.

Tim