* Re: [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly
2019-11-12 18:33 [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly Liran Alon
@ 2019-11-12 19:02 ` Sean Christopherson
2019-11-13 9:20 ` Vitaly Kuznetsov
2019-11-13 15:17 ` Paolo Bonzini
2 siblings, 0 replies; 5+ messages in thread
From: Sean Christopherson @ 2019-11-12 19:02 UTC (permalink / raw)
To: Liran Alon; +Cc: pbonzini, rkrcmar, kvm, jmattson, vkuznets, Bhavesh Davda
On Tue, Nov 12, 2019 at 08:33:00PM +0200, Liran Alon wrote:
> When KVM emulates a nested VMEntry (L1->L2 VMEntry), it switches mmu root
> page. If nEPT is used, this will happen from
> kvm_init_shadow_ept_mmu()->__kvm_mmu_new_cr3() and otherwise it will
> happen from nested_vmx_load_cr3()->kvm_mmu_new_cr3(). In either case,
> __kvm_mmu_new_cr3() will use fast_cr3_switch() in an attempt to switch to a
> previously cached root page.
>
> In case fast_cr3_switch() finds a matching cached root page, it will
> set it in mmu->root_hpa and request KVM_REQ_LOAD_CR3 such that on
> next entry to guest, KVM will set root HPA in appropriate hardware
> fields (e.g. vmcs->eptp). In addition, fast_cr3_switch() calls
> kvm_x86_ops->tlb_flush() in order to flush TLB as MMU root page
> was replaced.
>
> This works because mmu->root_hpa, which vmx_flush_tlb() uses, was
> already replaced in cached_root_available(). However, this may
> result in unnecessary INVEPT execution because a KVM_REQ_TLB_FLUSH
> may have already been requested; for example, by prepare_vmcs02()
> when L1 doesn't use VPID.
>
> Therefore, change fast_cr3_switch() to just request TLB flush on
> next entry to guest.
>
> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>
> ---
> arch/x86/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 24c23c66b226..150d982ec1d2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4295,7 +4295,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
> kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
> if (!skip_tlb_flush) {
> kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
> - kvm_x86_ops->tlb_flush(vcpu, true);
> + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
Ha, I brought this up in the original review[*]. Junaid was concerned with
maintaining the existing behavior for vcpu->stat.tlb_flush, so we kept the
direct call to ->tlb_flush(). Incrementing tlb_flush seems a-ok, so:
Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>
[*] https://patchwork.kernel.org/patch/10461319/#21985483
> }
>
> /*
> --
> 2.20.1
>
^ permalink raw reply	[flat|nested] 5+ messages in thread

* Re: [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly
2019-11-12 18:33 [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly Liran Alon
2019-11-12 19:02 ` Sean Christopherson
@ 2019-11-13 9:20 ` Vitaly Kuznetsov
2019-11-13 15:17 ` Paolo Bonzini
2 siblings, 0 replies; 5+ messages in thread
From: Vitaly Kuznetsov @ 2019-11-13 9:20 UTC (permalink / raw)
To: Liran Alon
Cc: sean.j.christopherson, jmattson, Liran Alon, Bhavesh Davda,
pbonzini, rkrcmar, kvm
Liran Alon <liran.alon@oracle.com> writes:
> When KVM emulates a nested VMEntry (L1->L2 VMEntry), it switches mmu root
> page. If nEPT is used, this will happen from
> kvm_init_shadow_ept_mmu()->__kvm_mmu_new_cr3() and otherwise it will
> happen from nested_vmx_load_cr3()->kvm_mmu_new_cr3(). In either case,
> __kvm_mmu_new_cr3() will use fast_cr3_switch() in an attempt to switch to a
> previously cached root page.
>
> In case fast_cr3_switch() finds a matching cached root page, it will
> set it in mmu->root_hpa and request KVM_REQ_LOAD_CR3 such that on
> next entry to guest, KVM will set root HPA in appropriate hardware
> fields (e.g. vmcs->eptp). In addition, fast_cr3_switch() calls
> kvm_x86_ops->tlb_flush() in order to flush TLB as MMU root page
> was replaced.
>
> This works because mmu->root_hpa, which vmx_flush_tlb() uses, was
> already replaced in cached_root_available(). However, this may
> result in unnecessary INVEPT execution because a KVM_REQ_TLB_FLUSH
> may have already been requested; for example, by prepare_vmcs02()
> when L1 doesn't use VPID.
>
> Therefore, change fast_cr3_switch() to just request TLB flush on
> next entry to guest.
>
> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>
> ---
> arch/x86/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 24c23c66b226..150d982ec1d2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4295,7 +4295,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
> kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
> if (!skip_tlb_flush) {
> kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
> - kvm_x86_ops->tlb_flush(vcpu, true);
> + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
> }
>
> /*
Reviewed-by: Vitaly Kuznetsov <vkuznets@redhat.com>
--
Vitaly
* Re: [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly
2019-11-12 18:33 [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly Liran Alon
2019-11-12 19:02 ` Sean Christopherson
2019-11-13 9:20 ` Vitaly Kuznetsov
@ 2019-11-13 15:17 ` Paolo Bonzini
2019-11-13 15:20 ` Liran Alon
2 siblings, 1 reply; 5+ messages in thread
From: Paolo Bonzini @ 2019-11-13 15:17 UTC (permalink / raw)
To: Liran Alon, rkrcmar, kvm
Cc: sean.j.christopherson, jmattson, vkuznets, Bhavesh Davda
On 12/11/19 19:33, Liran Alon wrote:
> When KVM emulates a nested VMEntry (L1->L2 VMEntry), it switches mmu root
> page. If nEPT is used, this will happen from
> kvm_init_shadow_ept_mmu()->__kvm_mmu_new_cr3() and otherwise it will
> happen from nested_vmx_load_cr3()->kvm_mmu_new_cr3(). In either case,
> __kvm_mmu_new_cr3() will use fast_cr3_switch() in an attempt to switch to a
> previously cached root page.
>
> In case fast_cr3_switch() finds a matching cached root page, it will
> set it in mmu->root_hpa and request KVM_REQ_LOAD_CR3 such that on
> next entry to guest, KVM will set root HPA in appropriate hardware
> fields (e.g. vmcs->eptp). In addition, fast_cr3_switch() calls
> kvm_x86_ops->tlb_flush() in order to flush TLB as MMU root page
> was replaced.
>
> This works because mmu->root_hpa, which vmx_flush_tlb() uses, was
> already replaced in cached_root_available(). However, this may
> result in unnecessary INVEPT execution because a KVM_REQ_TLB_FLUSH
> may have already been requested; for example, by prepare_vmcs02()
> when L1 doesn't use VPID.
>
> Therefore, change fast_cr3_switch() to just request TLB flush on
> next entry to guest.
>
> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com>
> Signed-off-by: Liran Alon <liran.alon@oracle.com>
> ---
> arch/x86/kvm/mmu.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 24c23c66b226..150d982ec1d2 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4295,7 +4295,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
> kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
> if (!skip_tlb_flush) {
> kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
> - kvm_x86_ops->tlb_flush(vcpu, true);
> + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
> }
>
> /*
>
Queued, thanks.
(I should get kvm/queue properly tested and pushed by the end of this week).
Paolo
* Re: [PATCH] KVM: x86: Optimization: Request TLB flush in fast_cr3_switch() instead of doing it directly
2019-11-13 15:17 ` Paolo Bonzini
@ 2019-11-13 15:20 ` Liran Alon
0 siblings, 0 replies; 5+ messages in thread
From: Liran Alon @ 2019-11-13 15:20 UTC (permalink / raw)
To: Paolo Bonzini
Cc: rkrcmar, kvm, sean.j.christopherson, jmattson, vkuznets,
Bhavesh Davda
> On 13 Nov 2019, at 17:17, Paolo Bonzini <pbonzini@redhat.com> wrote:
>
> On 12/11/19 19:33, Liran Alon wrote:
>> When KVM emulates a nested VMEntry (L1->L2 VMEntry), it switches mmu root
>> page. If nEPT is used, this will happen from
>> kvm_init_shadow_ept_mmu()->__kvm_mmu_new_cr3() and otherwise it will
>> happen from nested_vmx_load_cr3()->kvm_mmu_new_cr3(). In either case,
>> __kvm_mmu_new_cr3() will use fast_cr3_switch() in an attempt to switch to a
>> previously cached root page.
>>
>> In case fast_cr3_switch() finds a matching cached root page, it will
>> set it in mmu->root_hpa and request KVM_REQ_LOAD_CR3 such that on
>> next entry to guest, KVM will set root HPA in appropriate hardware
>> fields (e.g. vmcs->eptp). In addition, fast_cr3_switch() calls
>> kvm_x86_ops->tlb_flush() in order to flush TLB as MMU root page
>> was replaced.
>>
>> This works because mmu->root_hpa, which vmx_flush_tlb() uses, was
>> already replaced in cached_root_available(). However, this may
>> result in unnecessary INVEPT execution because a KVM_REQ_TLB_FLUSH
>> may have already been requested; for example, by prepare_vmcs02()
>> when L1 doesn't use VPID.
>>
>> Therefore, change fast_cr3_switch() to just request TLB flush on
>> next entry to guest.
>>
>> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com>
>> Signed-off-by: Liran Alon <liran.alon@oracle.com>
>> ---
>> arch/x86/kvm/mmu.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 24c23c66b226..150d982ec1d2 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -4295,7 +4295,7 @@ static bool fast_cr3_switch(struct kvm_vcpu *vcpu, gpa_t new_cr3,
>> kvm_make_request(KVM_REQ_LOAD_CR3, vcpu);
>> if (!skip_tlb_flush) {
>> kvm_make_request(KVM_REQ_MMU_SYNC, vcpu);
>> - kvm_x86_ops->tlb_flush(vcpu, true);
>> + kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
>> }
>>
>> /*
>>
>
> Queued, thanks.
>
> (I should get kvm/queue properly tested and pushed by the end of this week).
>
> Paolo
>
Thanks.
Also note that I have sent another trivial patch that didn't get any response ("KVM: VMX: Consume pending LAPIC INIT event when exit on INIT_SIGNAL").
See: https://patchwork.kernel.org/patch/11236869/
I have also sent kvm-unit-tests for my recent patches (the INIT_SIGNAL fix and the nVMX TPR-Threshold issue).
See: https://patchwork.kernel.org/patch/11236951/ and https://patchwork.kernel.org/patch/11236961/
-Liran