From: Sean Christopherson <seanjc@google.com>
To: Binbin Wu <binbin.wu@linux.intel.com>
Cc: kvm@vger.kernel.org, linux-kernel@vger.kernel.org,
pbonzini@redhat.com, chao.gao@intel.com, kai.huang@intel.com,
David.Laight@aculab.com, robert.hu@linux.intel.com,
guang.zeng@intel.com
Subject: Re: [PATCH v10 8/9] KVM: x86: Untag address for vmexit handlers when LAM applicable
Date: Wed, 16 Aug 2023 15:10:31 -0700
Message-ID: <ZN1JV2TR277zGevl@google.com>
In-Reply-To: <20230719144131.29052-9-binbin.wu@linux.intel.com>

On Wed, Jul 19, 2023, Binbin Wu wrote:
> Untag address for 64-bit memory operand in VMExit handlers when LAM is applicable.
>
> For VMExit handlers related to 64-bit linear address:
> - Cases need to untag address (handled in get_vmx_mem_address())
> Operand(s) of VMX instructions and INVPCID.
> Operand(s) of SGX ENCLS.
> - Cases LAM doesn't apply to (no change needed)
> Operand of INVLPG.
> Linear address in INVPCID descriptor.
> Linear address in INVVPID descriptor.
> BASEADDR specified in SESC of ECREATE.
>
> Note:
> LAM doesn't apply to the writes to control registers or MSRs.
> LAM masking applies before paging, so the faulting linear address in CR2
> doesn't contain the metadata.
> The guest linear address saved in VMCS doesn't contain metadata.
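
For anyone reading this in the archive: "untag" here just means dropping the
LAM metadata bits via sign-extension.  A rough worked example, assuming
LAM_U57 is active for user pointers (metadata in bits 62:57); the constants
below are mine, purely for illustration:

	#include <assert.h>
	#include <stdint.h>

	int main(void)
	{
		/* Bit 63 clear => user pointer; bits 62:57 carry LAM_U57 metadata. */
		uint64_t tagged = 0x7e00000012345678ULL;

		/* Untag: sign-extend bit 56 into bits 62:57, keep bit 63 as-is. */
		uint64_t sext = (uint64_t)((int64_t)(tagged << 7) >> 7);
		uint64_t untagged = (sext & ~(1ULL << 63)) | (tagged & (1ULL << 63));

		assert(untagged == 0x0000000012345678ULL);
		return 0;
	}
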
>
> Signed-off-by: Binbin Wu <binbin.wu@linux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@intel.com>
> ---
> arch/x86/kvm/vmx/nested.c | 2 ++
> arch/x86/kvm/vmx/sgx.c | 1 +
> arch/x86/kvm/vmx/vmx.c | 3 +--
> arch/x86/kvm/vmx/vmx.h | 2 ++
> arch/x86/kvm/x86.c | 1 +
> 5 files changed, 7 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/nested.c b/arch/x86/kvm/vmx/nested.c
> index 76c9904c6625..bd2c8936953a 100644
> --- a/arch/x86/kvm/vmx/nested.c
> +++ b/arch/x86/kvm/vmx/nested.c
> @@ -4980,6 +4980,7 @@ int get_vmx_mem_address(struct kvm_vcpu *vcpu, unsigned long exit_qualification,
> else
> *ret = off;
>
> + *ret = vmx_get_untagged_addr(vcpu, *ret, 0);
> /* Long mode: #GP(0)/#SS(0) if the memory address is in a
> * non-canonical form. This is the only check on the memory
> * destination for long mode!
> @@ -5797,6 +5798,7 @@ static int handle_invvpid(struct kvm_vcpu *vcpu)
> vpid02 = nested_get_vpid02(vcpu);
> switch (type) {
> case VMX_VPID_EXTENT_INDIVIDUAL_ADDR:
> + /* LAM doesn't apply to the address in descriptor of invvpid */
Nit, if we're going to bother with a comment, I think it makes sense to explain
that LAM doesn't apply to any TLB invalidation input, i.e. as opposed to just
saying that INVVPID is special.
		/*
		 * LAM doesn't apply to addresses that are inputs to TLB
		 * invalidation.
		 */
And then when LAM and LASS collide:
		/*
		 * LAM and LASS don't apply to ...
		 */
> if (!operand.vpid ||
> is_noncanonical_address(operand.gla, vcpu))
> return nested_vmx_fail(vcpu,
> diff --git a/arch/x86/kvm/vmx/sgx.c b/arch/x86/kvm/vmx/sgx.c
> index 3e822e582497..6fef01e0536e 100644
> --- a/arch/x86/kvm/vmx/sgx.c
> +++ b/arch/x86/kvm/vmx/sgx.c
> @@ -37,6 +37,7 @@ static int sgx_get_encls_gva(struct kvm_vcpu *vcpu, unsigned long offset,
> if (!IS_ALIGNED(*gva, alignment)) {
> fault = true;
> } else if (likely(is_64_bit_mode(vcpu))) {
> + *gva = vmx_get_untagged_addr(vcpu, *gva, 0);
> fault = is_noncanonical_address(*gva, vcpu);
> } else {
> *gva &= 0xffffffff;
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index abf6d42672cd..f18e610c4363 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -8177,8 +8177,7 @@ static void vmx_vm_destroy(struct kvm *kvm)
> free_pages((unsigned long)kvm_vmx->pid_table, vmx_get_pid_table_order(kvm));
> }
>
> -static gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva,
> - unsigned int flags)
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags)
> {
> unsigned long cr3_bits;
> int lam_bit;
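
Unrelated to this patch's changes, but since the helper is being exposed here:
for readers who don't have patch 7 in front of them, the untagging boils down
to sign-extending the address from the LAM bit (47 or 56) while preserving
bit 63.  A minimal standalone sketch of just that math; the real helper also
does the mode/flag and CR3/CR4 LAM-bit checks from earlier in the series:

	#include <stdint.h>

	/*
	 * Sign-extend the linear address from lam_bit (47 or 56) through the
	 * metadata bits, but keep the original bit 63 so that untagging can't
	 * turn a user access into a supervisor access, or vice versa.
	 * (Relies on arithmetic right shift of a signed value, as the kernel
	 * itself does.)
	 */
	static inline uint64_t lam_untag(uint64_t gva, int lam_bit)
	{
		uint64_t sext = (uint64_t)((int64_t)(gva << (63 - lam_bit)) >>
					   (63 - lam_bit));

		return (sext & ~(1ULL << 63)) | (gva & (1ULL << 63));
	}

Callers then run the existing canonical checks on the untagged address, which
is why the new call in get_vmx_mem_address() lands before the canonicality
check.
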
> diff --git a/arch/x86/kvm/vmx/vmx.h b/arch/x86/kvm/vmx/vmx.h
> index 32384ba38499..6fb612355769 100644
> --- a/arch/x86/kvm/vmx/vmx.h
> +++ b/arch/x86/kvm/vmx/vmx.h
> @@ -421,6 +421,8 @@ void vmx_enable_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr, int type);
> u64 vmx_get_l2_tsc_offset(struct kvm_vcpu *vcpu);
> u64 vmx_get_l2_tsc_multiplier(struct kvm_vcpu *vcpu);
>
> +gva_t vmx_get_untagged_addr(struct kvm_vcpu *vcpu, gva_t gva, unsigned int flags);
> +
> static inline void vmx_set_intercept_for_msr(struct kvm_vcpu *vcpu, u32 msr,
> int type, bool value)
> {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index 339a113b45af..d2a0cdfb77a5 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -13370,6 +13370,7 @@ int kvm_handle_invpcid(struct kvm_vcpu *vcpu, unsigned long type, gva_t gva)
>
> switch (type) {
> case INVPCID_TYPE_INDIV_ADDR:
> + /* LAM doesn't apply to the address in descriptor of invpcid */
Same thing here.
> if ((!pcid_enabled && (operand.pcid != 0)) ||
> is_noncanonical_address(operand.gla, vcpu)) {
> kvm_inject_gp(vcpu, 0);
> --
> 2.25.1
>