From mboxrd@z Thu Jan  1 00:00:00 1970
Date: Mon, 29 Jan 2018 12:27:41 -0500
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: David Woodhouse, daniel.kiper@oracle.com, Mihai Carabas
Cc: KarimAllah Ahmed, Liran Alon, luto@kernel.org, tglx@linutronix.de,
	torvalds@linux-foundation.org, gregkh@linuxfoundation.org,
	asit.k.mallick@intel.com, dave.hansen@intel.com, karahmed@amazon.de,
	jun.nakajima@intel.com, dan.j.williams@intel.com, ashok.raj@intel.com,
	arjan.van.de.ven@intel.com, tim.c.chen@linux.intel.com,
	pbonzini@redhat.com, linux-kernel@vger.kernel.org, ak@linux.intel.com,
	kvm@vger.kernel.org, aarcange@redhat.com
Subject: Re: [PATCH] x86: vmx: Allow direct access to MSR_IA32_SPEC_CTRL
Message-ID: <20180129172741.GN22045@char.us.oracle.com>
References: <6b9a1ec2-5ebd-4624-a825-3f31db5cefb5@default>
	<1517215563.6624.118.camel@infradead.org>
	<8bed4a5a-afc6-1569-d9bf-a3e1103e92f8@amazon.com>
	<1517222264.6624.131.camel@infradead.org>
In-Reply-To: <1517222264.6624.131.camel@infradead.org>
X-Mailing-List: linux-kernel@vger.kernel.org

On Mon, Jan 29, 2018 at 10:37:44AM +0000, David Woodhouse wrote:
> On Mon, 2018-01-29 at 10:43 +0100, KarimAllah Ahmed wrote:
> > On 01/29/2018 09:46 AM, David Woodhouse wrote:
> > > Reading the code and comparing with the SDM, I can't see where we're
> > > ever setting VM_EXIT_MSR_STORE_{ADDR,COUNT} except in the nested
> > > case...
> > Hmmm ... you are probably right!
I think all users of this interface
> > always trap + update save area and never passthrough the MSR. That is
> > why only LOAD is needed *so far*.
> >
> > Okay, let me sort this out in v3 then.
>
> I'm starting to think a variant of Ashok's patch might actually be the
> simpler approach, and not "premature optimisation". Especially if we
> need to support the !cpu_has_vmx_msr_bitmaps() case?
>
> Start with vmx->spec_ctrl set to zero. When first touched, make it
> passthrough (but not atomically switched) and set a flag (e.g.
> "spec_ctrl_live") which triggers the 'restore_branch_speculation' and
> 'save_and_restrict_branch_speculation' behaviours. Except don't use
> those macros. Those can look something like
>
>  /* If this vCPU has touched SPEC_CTRL then restore its value if needed */
>  if (vmx->spec_ctrl_live && vmx->spec_ctrl)
>      wrmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>  /* vmentry is serialising on affected CPUs, so the conditional branch is safe */
>
> ... and, respectively, ...
>
>  /* If this vCPU has touched SPEC_CTRL then save its value and ensure we have zero */
>  if (vmx->spec_ctrl_live) {
>      rdmsrl(MSR_IA32_SPEC_CTRL, vmx->spec_ctrl);
>      if (vmx->spec_ctrl)
>          wrmsrl(MSR_IA32_SPEC_CTRL, 0);
>  }
>
> Perhaps we can ditch the separate 'spec_ctrl_live' flag and check the
> pass-through MSR bitmap directly, in the case that it exists?

Or the cpuid_flag as that would determine whether the MSR bitmap intercept
is set or not.