From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH] KVM: x86: lower default for halt_poll_ns
Date: Wed, 17 May 2017 02:54:18 -0400 (EDT)
Message-ID: <1631735288.8411893.1495004058615.JavaMail.zimbra@redhat.com>
References: <20170418104118.21719-1-pbonzini@redhat.com>
 <4385bc02-2456-e8a7-a22b-cc2783ad6822@redhat.com>
 <20170516204109.GA15344@potion>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
To: Radim Krčmář
Return-path: 
In-Reply-To: <20170516204109.GA15344@potion>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

> Still, I think we have dynamic polling to mitigate this overhead;
> how was it behaving?

Correctly: the polling stopped as soon as the benchmark ended. :)

> I noticed a questionable decision in growing the window:
> we know how long the polling should have been (block_ns), but we do not
> use that information to set the next halt_poll_ns.
>
> Has something like this been tried?
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0fe9d02f6bb..d8dbf50957fc 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2193,7 +2193,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>  		/* we had a short halt and our poll time is too small */
>  		else if (vcpu->halt_poll_ns < halt_poll_ns &&
>  			block_ns < halt_poll_ns)
> -			grow_halt_poll_ns(vcpu);
> +			vcpu->halt_poll_ns = block_ns /* + x ? */;

IIUC the idea was to grow slower than just, say, 10 ns -> 150 ns.
Taking into account block_ns might also be useful, but it shouldn't
matter much since the shrinking is very aggressive.
Paolo

>  	} else
>  		vcpu->halt_poll_ns = 0;
>
>
> It would avoid a case where several halts in a row were interrupted
> after 300 us, but on the first one we'd schedule out after 10 us, then
> after 20, 40, 80, 160, and finally have the successful poll at 320 us,
> but we have just wasted time if the window is reset at any point before
> that.
>
> (I really don't like benchmarking ...)
>
> Thanks.