From: Binbin Wu <binbin.wu@linux.intel.com>
To: "Huang, Kai" <kai.huang@intel.com>
Cc: "kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"robert.hu@linux.intel.com" <robert.hu@linux.intel.com>,
"pbonzini@redhat.com" <pbonzini@redhat.com>, "Christopherson,,
Sean" <seanjc@google.com>, "Gao, Chao" <chao.gao@intel.com>
Subject: Re: [PATCH v8 2/6] KVM: x86: Virtualize CR4.LAM_SUP
Date: Fri, 12 May 2023 09:33:57 +0800 [thread overview]
Message-ID: <abbb7938-0615-8578-0072-a96d21df3b4d@linux.intel.com> (raw)
In-Reply-To: <67a20fe2a41fbe99de1470254b14f282f72571c7.camel@intel.com>
On 5/11/2023 8:50 PM, Huang, Kai wrote:
> On Wed, 2023-05-10 at 14:06 +0800, Binbin Wu wrote:
>> From: Robert Hoo <robert.hu@linux.intel.com>
>>
>> Add support to allow guests to set the new CR4 control bit to enable the new
>> Intel CPU feature Linear Address Masking (LAM) on supervisor pointers.
>>
>> LAM modifies the checking that is applied to 64-bit linear addresses, allowing
>> software to use the untranslated address bits for metadata, and masks the
>> metadata bits before using them as linear addresses to access memory. LAM uses
>> CR4.LAM_SUP (bit 28) to configure LAM for supervisor pointers. LAM also changes
>> VMENTER to allow the bit to be set in VMCS's HOST_CR4 and GUEST_CR4 for
>> virtualization.
>>
>> Move CR4.LAM_SUP out of CR4_RESERVED_BITS so that its reservation depends on
>> whether the vCPU supports the LAM feature. Leave the bit intercepted to avoid a
>> vmread every time KVM fetches its value, with the expectation that the guest
>> won't toggle the bit frequently.
>>
>> Set CR4.LAM_SUP bit in the emulated IA32_VMX_CR4_FIXED1 MSR for guests to allow
>> guests to enable LAM for supervisor pointers in nested VMX operation.
>>
>> Hardware is not required to do a TLB flush when CR4.LAM_SUP is toggled, so KVM
>> doesn't need to emulate a TLB flush based on it.
>> There's no connection to other features/vmx_exec_controls, so no other code is
>> needed in {kvm,vmx}_set_cr4().
>>
>> Signed-off-by: Robert Hoo <robert.hu@linux.intel.com>
>> Co-developed-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
>> Reviewed-by: Chao Gao <chao.gao@intel.com>
>> Tested-by: Xuelian Guo <xuelian.guo@intel.com>
>> ---
>> arch/x86/include/asm/kvm_host.h | 3 ++-
>> arch/x86/kvm/vmx/vmx.c | 3 +++
>> arch/x86/kvm/x86.h | 2 ++
>> 3 files changed, 7 insertions(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index fb9d1f2d6136..c6f03d151c31 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -125,7 +125,8 @@
>> | X86_CR4_PGE | X86_CR4_PCE | X86_CR4_OSFXSR | X86_CR4_PCIDE \
>> | X86_CR4_OSXSAVE | X86_CR4_SMEP | X86_CR4_FSGSBASE \
>> | X86_CR4_OSXMMEXCPT | X86_CR4_LA57 | X86_CR4_VMXE \
>> - | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP))
>> + | X86_CR4_SMAP | X86_CR4_PKE | X86_CR4_UMIP \
>> + | X86_CR4_LAM_SUP))
>>
>> #define CR8_RESERVED_BITS (~(unsigned long)X86_CR8_TPR)
>>
>> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
>> index 44fb619803b8..362b2dce7661 100644
>> --- a/arch/x86/kvm/vmx/vmx.c
>> +++ b/arch/x86/kvm/vmx/vmx.c
>> @@ -7603,6 +7603,9 @@ static void nested_vmx_cr_fixed1_bits_update(struct kvm_vcpu *vcpu)
>> cr4_fixed1_update(X86_CR4_UMIP, ecx, feature_bit(UMIP));
>> cr4_fixed1_update(X86_CR4_LA57, ecx, feature_bit(LA57));
>>
>> + entry = kvm_find_cpuid_entry_index(vcpu, 0x7, 1);
>> + cr4_fixed1_update(X86_CR4_LAM_SUP, eax, feature_bit(LAM));
>> +
>> #undef cr4_fixed1_update
>> }
>>
>> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
>> index c544602d07a3..fe67b641cce4 100644
>> --- a/arch/x86/kvm/x86.h
>> +++ b/arch/x86/kvm/x86.h
>> @@ -529,6 +529,8 @@ bool kvm_msr_allowed(struct kvm_vcpu *vcpu, u32 index, u32 type);
>> __reserved_bits |= X86_CR4_VMXE; \
>> if (!__cpu_has(__c, X86_FEATURE_PCID)) \
>> __reserved_bits |= X86_CR4_PCIDE; \
>> + if (!__cpu_has(__c, X86_FEATURE_LAM)) \
>> + __reserved_bits |= X86_CR4_LAM_SUP; \
>> __reserved_bits; \
>> })
>>
> LAM only applies to 64-bit linear address, which means LAM can only be enabled
> when CPU is in 64-bit mode with either 4-level or 5-level paging enabled.
>
> What's the hardware behaviour if we set CR4.LAM_SUP when CPU isn't in 64-bit
> mode? And how does VMENTRY check GUEST_CR4.LAM_SUP and 64-bit mode?
>
> Looks they are not clear in the spec you pasted in the cover letter:
>
> https://cdrdv2.intel.com/v1/dl/getContent/671368
>
> Or I am missing something?
Yes, it is not clearly described in the LAM spec.
We had some internal discussions and also did some tests on the host:
if the processor supports LAM, CR4.LAM_SUP is allowed to be set even
when the CPU isn't in 64-bit mode.
There was a statement about this in the commit message of the last
version, but I missed it in this version. I'll add it back:
"CR4.LAM_SUP is allowed to be set even outside 64-bit mode, but it will not
take effect since LAM only applies to 64-bit linear addresses."
Also, I will ask the Intel folks whether it's possible to update the document.