From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: from mga14.intel.com ([192.55.52.115]) by Galois.linutronix.de with esmtps (TLS1.2:DHE_RSA_AES_256_CBC_SHA256:256) (Exim 4.80) (envelope-from ) id 1fMg27-0003nO-Ke for speck@linutronix.de; Sat, 26 May 2018 22:43:24 +0200
Date: Sat, 26 May 2018 13:43:19 -0700
From: Andi Kleen
Subject: [MODERATED] Re: L1D-Fault KVM mitigation
Message-ID: <20180526204319.GB4486@tassilo.jf.intel.com>
References: <20180424090630.wlghmrpasn7v7wbn@suse.de> <20180424093537.GC4064@hirez.programming.kicks-ass.net> <1524563292.8691.38.camel@infradead.org> <20180424110445.GU4043@hirez.programming.kicks-ass.net> <1527068745.8186.89.camel@infradead.org> <20180524094526.GE12198@hirez.programming.kicks-ass.net>
MIME-Version: 1.0
In-Reply-To:
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
To: speck@linutronix.de
List-ID:

> The PIO case _IS_ interesting because it highlights the problem with the
> synchronization overhead. And it does not matter at all whether you VMEXIT
> because of a PIO access or due to any other reason. So even if you optimize
> it then you still have a gazillion of vm_exits on boot. The simple boot
> tests I did have ~250k vm_exits in 5 seconds and only half of them are PIO.

Keep in mind that we don't need to synchronize when the other CPU is
idle in the guest, so it's only a problem when all the CPUs are busy.
That should be the common case for boot.

Right now something doesn't seem to be working right with this, so the
PIO overhead is still high.

> Nevertheless it gave me very interesting insights via tracing the
> synchronization mechanics. The interesting thing is that halfways
> synchronous vmexits on both vCPUs are rather cheap. The slightly async ones

What's an async vmexit? One that blocks? I didn't think we had that
many of those.

What exactly are you seeing?

-Andi