From mboxrd@z Thu Jan 1 00:00:00 1970
From: Konrad Rzeszutek Wilk
Subject: Re: [PATCH] x86: vmx: Allow direct access to MSR_IA32_SPEC_CTRL
Date: Mon, 29 Jan 2018 12:27:41 -0500
Message-ID: <20180129172741.GN22045@char.us.oracle.com>
References: <6b9a1ec2-5ebd-4624-a825-3f31db5cefb5@default> <1517215563.6624.118.camel@infradead.org> <8bed4a5a-afc6-1569-d9bf-a3e1103e92f8@amazon.com> <1517222264.6624.131.camel@infradead.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Transfer-Encoding: quoted-printable
Cc: KarimAllah Ahmed , Liran Alon , luto@kernel.org, tglx@linutronix.de, torvalds@linux-foundation.org, gregkh@linuxfoundation.org, asit.k.mallick@intel.com, dave.hansen@intel.com, karahmed@amazon.de, jun.nakajima@intel.com, dan.j.williams@intel.com, ashok.raj@intel.com, daniel.kiper@oracle.com, arjan.van.de.ven@intel.com, tim.c.chen@linux.intel.com, pbonzini@redhat.com, linux-kernel@vger.kernel.org, ak@linux.intel.com, kvm@vger.kernel.org, aarcange@redhat.com
To: David Woodhouse , daniel.kiper@oracle.com, Mihai Carabas
Return-path: 
Content-Disposition: inline
In-Reply-To: <1517222264.6624.131.camel@infradead.org>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On Mon, Jan 29, 2018 at 10:37:44AM +0000, David Woodhouse wrote:
> On Mon, 2018-01-29 at 10:43 +0100, KarimAllah Ahmed wrote:
> > On 01/29/2018 09:46 AM, David Woodhouse wrote:
> > > Reading the code and comparing with the SDM, I can't see where we're
> > > ever setting VM_EXIT_MSR_STORE_{ADDR,COUNT} except in the nested
> > > case...
> > Hmmm ... you are probably right! I think all users of this interface
> > always trap + update save area and never passthrough the MSR. That is
> > why only LOAD is needed *so far*.
> > 
> > Okay, let me sort this out in v3 then.
> 
> I'm starting to think a variant of Ashok's patch might actually be the
> simpler approach, and not "premature optimisation".
> Especially if we
> need to support the !cpu_has_vmx_msr_bitmaps() case?
> 
> Start with vmx->spec_ctrl set to zero. When first touched, make it
> passthrough (but not atomically switched) and set a flag (e.g.
> "spec_ctrl_live") which triggers the 'restore_branch_speculation' and
> 'save_and_restrict_branch_speculation' behaviours. Except don't use
> those macros. Those can look something like
> 
>  /* If this vCPU has touched SPEC_CTRL then restore its value if needed */
>  if (vmx->spec_ctrl_live && vmx->spec_ctrl)
>      wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>  /* vmentry is serialising on affected CPUs, so the conditional branch is safe */
> 
> 
> ... and, respectively, ...
> 
> /* If this vCPU has touched SPEC_CTRL then save its value and ensure we have zero */
> if (vmx->spec_ctrl_live) {
>     rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>     if (vmx->spec_ctrl)
>         wrmsrl(MSR_IA32_SPEC_CTRL, 0);
> }
> 
> 
> Perhaps we can ditch the separate 'spec_ctrl_live' flag and check the
> pass-through MSR bitmap directly, in the case that it exists?

Or the cpuid_flag, as that would determine whether the MSR bitmap intercept is set or not.