From: Maxim Levitsky
Date: Wed, 27 Oct 2021 13:40:40 +0000
Subject: Re: [PATCH v2 11/43] KVM: Don't block+unblock when halt-polling is successful
References: <20211009021236.4122790-1-seanjc@google.com>
 <20211009021236.4122790-12-seanjc@google.com>
In-Reply-To: <20211009021236.4122790-12-seanjc@google.com>
To: Sean Christopherson, Marc Zyngier, Huacai Chen, Aleksandar Markovic,
 Paul Mackerras, Anup Patel, Paul Walmsley, Palmer Dabbelt, Albert Ou,
 Christian Borntraeger, Janosch Frank, Paolo Bonzini
Cc: James Morse, Alexandru Elisei, Suzuki K Poulose, Atish Patra,
 David Hildenbrand, Cornelia Huck, Claudio Imbrenda, Vitaly Kuznetsov,
 Wanpeng Li, Jim Mattson, Joerg Roedel, linux-arm-kernel@lists.infradead.org,
 kvmarm@lists.cs.columbia.edu, linux-mips@vger.kernel.org, kvm@vger.kernel.org,
 kvm-ppc@vger.kernel.org, kvm-riscv@lists.infradead.org,
 linux-riscv@lists.infradead.org, linux-kernel@vger.kernel.org,
 David Matlack, Oliver Upton, Jing Zhang

On Fri, 2021-10-08 at 19:12 -0700, Sean Christopherson wrote:
> Invoke the arch hooks for block+unblock if and only if KVM actually
> attempts to block the vCPU. The only non-nop implementation is on x86,
> specifically SVM's AVIC, and there is no need to put the AVIC prior to
> halt-polling as KVM x86's kvm_vcpu_has_events() will scour the full vIRR
> to find pending IRQs regardless of whether the AVIC is loaded/"running".
> 
> The primary motivation is to allow future cleanup to split out "block"
> from "halt", but this is also likely a small performance boost on x86 SVM
> when halt-polling is successful.
> 
> Adjust the post-block path to update "cur" after unblocking, i.e. include
> AVIC load time in halt_wait_ns and halt_wait_hist, so that the behavior
> is consistent. Moving just the pre-block arch hook would result in only
> the AVIC put latency being included in the halt_wait stats. There is no
> obvious evidence that one way or the other is correct, so just ensure KVM
> is consistent.
> 
> Note, x86 has two separate paths for handling APICv with respect to vCPU
> blocking. VMX uses hooks in x86's vcpu_block(), while SVM uses the arch
> hooks in kvm_vcpu_block(). Prior to this patch, the two paths were more
> or less functionally identical. That is very much not the case after
> this patch, as the hooks used by VMX _must_ fire before halt-polling.
> x86's entire mess will be cleaned up in future patches.
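
(For readers skimming the thread: below is a toy, compilable sketch of the
ordering this change establishes, not the kernel code; the stub hook names
and printouts are made up purely for illustration.)

	/*
	 * Toy model (not kernel code) of kvm_vcpu_block() after this patch:
	 * the arch blocking/unblocking hooks run only if the vCPU actually
	 * goes to sleep, so a successful halt-poll never puts the AVIC.
	 * Build with e.g.: gcc -o block-order block-order.c
	 */
	#include <stdbool.h>
	#include <stdio.h>

	static void arch_vcpu_blocking(void)   { puts("  blocking hook (AVIC put)"); }
	static void arch_vcpu_unblocking(void) { puts("  unblocking hook (AVIC load)"); }

	static void vcpu_block(bool poll_succeeds)
	{
		/* 1) Halt-poll first, with the AVIC still loaded/"running". */
		if (poll_succeeds) {
			puts("  halt-poll found a pending event");
			goto out;	/* no block/unblock hooks at all */
		}
		puts("  halt-poll failed");

		/* 2) Only now run the pre-block arch hook... */
		arch_vcpu_blocking();

		puts("  sleep until a wake event");	/* rcuwait + schedule() in KVM */

		/*
		 * 3) ...and the post-block hook before "cur" is sampled, so the
		 *    AVIC load time lands in halt_wait_ns/halt_wait_hist.
		 */
		arch_vcpu_unblocking();
		puts("  cur = ktime_get() -> halt_wait stats");
	out:
		puts("  block_ns accounted");
	}

	int main(void)
	{
		puts("successful poll:");
		vcpu_block(true);
		puts("failed poll (vCPU actually blocks):");
		vcpu_block(false);
		return 0;
	}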
> 
> Signed-off-by: Sean Christopherson
> ---
>  virt/kvm/kvm_main.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f90b3ed05628..227f6bbe0716 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -3235,8 +3235,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  	bool waited = false;
>  	u64 block_ns;
>  
> -	kvm_arch_vcpu_blocking(vcpu);
> -
>  	start = cur = poll_end = ktime_get();
>  	if (do_halt_poll) {
>  		ktime_t stop = ktime_add_ns(ktime_get(), vcpu->halt_poll_ns);
> @@ -3253,6 +3251,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		} while (kvm_vcpu_can_poll(cur, stop));
>  	}
>  
> +	kvm_arch_vcpu_blocking(vcpu);
>  
>  	prepare_to_rcuwait(wait);
>  	for (;;) {
> @@ -3265,6 +3264,9 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		schedule();
>  	}
>  	finish_rcuwait(wait);
> +
> +	kvm_arch_vcpu_unblocking(vcpu);
> +
>  	cur = ktime_get();
>  	if (waited) {
>  		vcpu->stat.generic.halt_wait_ns +=
> @@ -3273,7 +3275,6 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  			ktime_to_ns(cur) - ktime_to_ns(poll_end));
>  	}
>  out:
> -	kvm_arch_vcpu_unblocking(vcpu);
>  	block_ns = ktime_to_ns(cur) - ktime_to_ns(start);
>  
>  	/*

Makes sense.

Reviewed-by: Maxim Levitsky

Best regards,
	Maxim Levitsky