From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH] KVM: MMU: Inform users of mmio generation wraparound
Date: Thu, 20 Jun 2013 12:59:54 +0200
Message-ID: <51C2E0AA.7060404@redhat.com>
References: <20130620175914.4e4f9eb3.yoshikawa_takuya_b1@lab.ntt.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=ISO-8859-1
Content-Transfer-Encoding: 7bit
Cc: gleb@redhat.com, kvm@vger.kernel.org, xiaoguangrong@linux.vnet.ibm.com
To: Takuya Yoshikawa
Return-path:
Received: from mail-ea0-f178.google.com ([209.85.215.178]:37130 "EHLO
	mail-ea0-f178.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S964966Ab3FTLAD (ORCPT );
	Thu, 20 Jun 2013 07:00:03 -0400
Received: by mail-ea0-f178.google.com with SMTP id l15so3805169eak.37
	for ; Thu, 20 Jun 2013 04:00:01 -0700 (PDT)
In-Reply-To: <20130620175914.4e4f9eb3.yoshikawa_takuya_b1@lab.ntt.co.jp>
Sender: kvm-owner@vger.kernel.org
List-ID:

On 20/06/2013 10:59, Takuya Yoshikawa wrote:
> Without this information, users will just see unexpected performance
> problems and there is little chance we will get good reports from them:
> note that mmio generation is increased even when we just start, or stop,
> dirty logging for some memory slot, in which case users should never
> expect all shadow pages to be zapped.
> 
> Signed-off-by: Takuya Yoshikawa
> ---
>  arch/x86/kvm/mmu.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index c60c5da..bc8302f 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -4385,8 +4385,10 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
> 	 * The max value is MMIO_MAX_GEN - 1 since it is not called
> 	 * when mark memslot invalid.
> 	 */
> -	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> +	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1))) {
> +		printk(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound");

This should at least be rate-limited, because it is guest triggerable.

But why isn't the kvm_mmu_invalidate_zap_all_pages tracepoint enough?

Paolo

> 		kvm_mmu_invalidate_zap_all_pages(kvm);
> +	}
>  }
> 
>  static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
> 