Date: Wed, 17 May 2017 02:54:18 -0400 (EDT)
From: Paolo Bonzini <pbonzini@redhat.com>
To: Radim Krčmář
Cc: linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Message-ID: <1631735288.8411893.1495004058615.JavaMail.zimbra@redhat.com>
In-Reply-To: <20170516204109.GA15344@potion>
References: <20170418104118.21719-1-pbonzini@redhat.com> <4385bc02-2456-e8a7-a22b-cc2783ad6822@redhat.com> <20170516204109.GA15344@potion>
Subject: Re: [PATCH] KVM: x86: lower default for halt_poll_ns

> Still, I think we have dynamic polling to mitigate this overhead;
> how was it behaving?

Correctly: the polling stopped as soon as the benchmark ended.
:)

> I noticed a questionable decision in growing the window:
> we know how long the polling should have been (block_ns), but we do not
> use that information to set the next halt_poll_ns.
>
> Has something like this been tried?
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0fe9d02f6bb..d8dbf50957fc 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2193,7 +2193,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
> 		/* we had a short halt and our poll time is too small */
> 		else if (vcpu->halt_poll_ns < halt_poll_ns &&
> 			block_ns < halt_poll_ns)
> -			grow_halt_poll_ns(vcpu);
> +			vcpu->halt_poll_ns = block_ns /* + x ? */;

IIUC the idea was to grow more slowly than jumping straight from, say,
10 ns to 150 ns.  Taking block_ns into account might also be useful, but
it shouldn't matter much, since the shrinking is very aggressive.

Paolo

> 		} else
> 			vcpu->halt_poll_ns = 0;
>
> It would avoid a case where several halts in a row are interrupted
> after 300 us: on the first one we'd schedule out after 10 us, then
> after 20, 40, 80, and 160, and finally have the successful poll at
> 320 us -- but all that polling is just wasted time if the window is
> reset at any point before that.
>
> (I really don't like benchmarking ...)
>
> Thanks.