* Re: [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
@ 2020-02-21 6:56 linmiaohe
0 siblings, 0 replies; 3+ messages in thread
From: linmiaohe @ 2020-02-21 6:56 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel, kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Sean Christopherson <sean.j.christopherson@intel.com> writes:
>Directly invoke vpid_sync_context() to do a global INVVPID when the
>individual address variant is not supported instead of deferring such
>behavior to the caller. This allows for additional consolidation of
>code as the logic is basically identical to the emulation of the
>individual address variant in handle_invvpid().
>
>No functional change intended.
>
>Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
>---
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
^ permalink raw reply [flat|nested] 3+ messages in thread
* [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code
@ 2020-02-20 20:43 Sean Christopherson
2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
0 siblings, 1 reply; 3+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel, kvm, linux-kernel
This series is technically x86 wide, but it only superficially affects
SVM; the motivation and primary touchpoints are all about VMX.
The goal of this series is ultimately to clean up __vmx_flush_tlb(),
which, for me, manages to be extremely confusing despite being only ten
lines of code.
The most confusing aspect of __vmx_flush_tlb() is that it is overloaded
for multiple uses:
1) TLB flushes in response to a change in KVM's MMU
2) TLB flushes during nested VM-Enter/VM-Exit when VPID is enabled
3) Guest-scoped TLB flushes for paravirt TLB flushing
Handling (2) and (3) in the same flow as (1) is kludgy, because the rules
for (1) are quite different from the rules for (2) and (3). They're all
squeezed into __vmx_flush_tlb() via the @invalidate_gpa param, which means
"invalidate gpa mappings", not "invalidate a specific gpa"; it took me
forever and a day to realize that.
To clean things up, handle (2) by directly calling vpid_sync_context()
instead of bouncing through __vmx_flush_tlb(), and handle (3) via a
dedicated kvm_x86_ops hook. This allows for a less tricky implementation
of vmx_flush_tlb() for (1), and (hopefully) clarifies the rules for what
mappings must be invalidated when.
Sean Christopherson (10):
KVM: VMX: Use vpid_sync_context() directly when possible
KVM: VMX: Move vpid_sync_vcpu_addr() down a few lines
KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
KVM: VMX: Fold vpid_sync_vcpu_{single,global}() into
vpid_sync_context()
KVM: nVMX: Use vpid_sync_vcpu_addr() to emulate INVVPID with address
KVM: x86: Move "flush guest's TLB" logic to separate kvm_x86_ops hook
KVM: VMX: Clean up vmx_flush_tlb_gva()
KVM: x86: Drop @invalidate_gpa param from kvm_x86_ops' tlb_flush()
KVM: VMX: Drop @invalidate_gpa from __vmx_flush_tlb()
KVM: VMX: Fold __vmx_flush_tlb() into vmx_flush_tlb()
arch/x86/include/asm/kvm_host.h | 8 +++++++-
arch/x86/kvm/mmu/mmu.c | 2 +-
arch/x86/kvm/svm.c | 14 ++++++++++----
arch/x86/kvm/vmx/nested.c | 12 ++++--------
arch/x86/kvm/vmx/ops.h | 32 +++++++++-----------------------
arch/x86/kvm/vmx/vmx.c | 26 +++++++++++++++++---------
arch/x86/kvm/vmx/vmx.h | 19 ++++++++++---------
arch/x86/kvm/x86.c | 8 ++++----
8 files changed, 62 insertions(+), 59 deletions(-)
--
2.24.1
^ permalink raw reply [flat|nested] 3+ messages in thread

* [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
@ 2020-02-20 20:43 ` Sean Christopherson
2020-02-21 13:26 ` Vitaly Kuznetsov
0 siblings, 1 reply; 3+ messages in thread
From: Sean Christopherson @ 2020-02-20 20:43 UTC (permalink / raw)
To: Paolo Bonzini
Cc: Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
Joerg Roedel, kvm, linux-kernel
Directly invoke vpid_sync_context() to do a global INVVPID when the
individual address variant is not supported instead of deferring such
behavior to the caller. This allows for additional consolidation of
code as the logic is basically identical to the emulation of the
individual address variant in handle_invvpid().
No functional change intended.
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
arch/x86/kvm/vmx/ops.h | 12 +++++-------
arch/x86/kvm/vmx/vmx.c | 3 +--
2 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
index a2b0689e65e3..612df1bdb26b 100644
--- a/arch/x86/kvm/vmx/ops.h
+++ b/arch/x86/kvm/vmx/ops.h
@@ -276,17 +276,15 @@ static inline void vpid_sync_context(int vpid)
vpid_sync_vcpu_global();
}
-static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
+static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
{
if (vpid == 0)
- return true;
+ return;
- if (cpu_has_vmx_invvpid_individual_addr()) {
+ if (cpu_has_vmx_invvpid_individual_addr())
__invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
- return true;
- }
-
- return false;
+ else
+ vpid_sync_context(vpid);
}
static inline void ept_sync_global(void)
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 9a6664886f2e..349a6e054e0e 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -2826,8 +2826,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
{
int vpid = to_vmx(vcpu)->vpid;
- if (!vpid_sync_vcpu_addr(vpid, addr))
- vpid_sync_context(vpid);
+ vpid_sync_vcpu_addr(vpid, addr);
/*
* If VPIDs are not supported or enabled, then the above is a no-op.
--
2.24.1
^ permalink raw reply related [flat|nested] 3+ messages in thread

* Re: [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr()
2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
@ 2020-02-21 13:26 ` Vitaly Kuznetsov
0 siblings, 0 replies; 3+ messages in thread
From: Vitaly Kuznetsov @ 2020-02-21 13:26 UTC (permalink / raw)
To: Sean Christopherson
Cc: Paolo Bonzini, Wanpeng Li, Jim Mattson, Joerg Roedel, kvm,
linux-kernel
Sean Christopherson <sean.j.christopherson@intel.com> writes:
> Directly invoke vpid_sync_context() to do a global INVVPID when the
> individual address variant is not supported instead of deferring such
> behavior to the caller. This allows for additional consolidation of
> code as the logic is basically identical to the emulation of the
> individual address variant in handle_invvpid().
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
> ---
> arch/x86/kvm/vmx/ops.h | 12 +++++-------
> arch/x86/kvm/vmx/vmx.c | 3 +--
> 2 files changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx/ops.h b/arch/x86/kvm/vmx/ops.h
> index a2b0689e65e3..612df1bdb26b 100644
> --- a/arch/x86/kvm/vmx/ops.h
> +++ b/arch/x86/kvm/vmx/ops.h
> @@ -276,17 +276,15 @@ static inline void vpid_sync_context(int vpid)
> vpid_sync_vcpu_global();
> }
>
> -static inline bool vpid_sync_vcpu_addr(int vpid, gva_t addr)
> +static inline void vpid_sync_vcpu_addr(int vpid, gva_t addr)
> {
> if (vpid == 0)
> - return true;
> + return;
>
> - if (cpu_has_vmx_invvpid_individual_addr()) {
> + if (cpu_has_vmx_invvpid_individual_addr())
> __invvpid(VMX_VPID_EXTENT_INDIVIDUAL_ADDR, vpid, addr);
> - return true;
> - }
> -
> - return false;
> + else
> + vpid_sync_context(vpid);
> }
>
> static inline void ept_sync_global(void)
> diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
> index 9a6664886f2e..349a6e054e0e 100644
> --- a/arch/x86/kvm/vmx/vmx.c
> +++ b/arch/x86/kvm/vmx/vmx.c
> @@ -2826,8 +2826,7 @@ static void vmx_flush_tlb_gva(struct kvm_vcpu *vcpu, gva_t addr)
> {
> int vpid = to_vmx(vcpu)->vpid;
>
> - if (!vpid_sync_vcpu_addr(vpid, addr))
> - vpid_sync_context(vpid);
> + vpid_sync_vcpu_addr(vpid, addr);
>
> /*
> * If VPIDs are not supported or enabled, then the above is a no-op.
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly
^ permalink raw reply [flat|nested] 3+ messages in thread
end of thread, other threads:[~2020-02-21 13:26 UTC | newest]
Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-02-21 6:56 [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() linmiaohe
-- strict thread matches above, loose matches on Subject: below --
2020-02-20 20:43 [PATCH 00/10] KVM: x86: Clean up VMX's TLB flushing code Sean Christopherson
2020-02-20 20:43 ` [PATCH 03/10] KVM: VMX: Handle INVVPID fallback logic in vpid_sync_vcpu_addr() Sean Christopherson
2020-02-21 13:26 ` Vitaly Kuznetsov