From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [PATCH] KVM: MMU: Inform users of mmio generation wraparound
Date: Thu, 20 Jun 2013 15:54:38 +0300
Message-ID: <20130620125438.GM5832@redhat.com>
References: <20130620175914.4e4f9eb3.yoshikawa_takuya_b1@lab.ntt.co.jp>
 <51C2E0AA.7060404@redhat.com>
 <20130620114504.GG5832@redhat.com>
 <20130620212837.185c5d4f5a9adbbd44c6f1ad@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Paolo Bonzini, Takuya Yoshikawa, kvm@vger.kernel.org,
 xiaoguangrong@linux.vnet.ibm.com
To: Takuya Yoshikawa
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:51255 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1757686Ab3FTMyo
 (ORCPT); Thu, 20 Jun 2013 08:54:44 -0400
Content-Disposition: inline
In-Reply-To: <20130620212837.185c5d4f5a9adbbd44c6f1ad@gmail.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Jun 20, 2013 at 09:28:37PM +0900, Takuya Yoshikawa wrote:
> On Thu, 20 Jun 2013 14:45:04 +0300
> Gleb Natapov wrote:
> 
> > On Thu, Jun 20, 2013 at 12:59:54PM +0200, Paolo Bonzini wrote:
> > > On 20/06/2013 10:59, Takuya Yoshikawa wrote:
> > > > Without this information, users will just see unexpected performance
> > > > problems and there is little chance we will get good reports from them:
> > > > note that mmio generation is increased even when we just start, or stop,
> > > > dirty logging for some memory slot, in which case users should never
> > > > expect all shadow pages to be zapped.
> > > >
> > > > Signed-off-by: Takuya Yoshikawa
> > > > ---
> > > >  arch/x86/kvm/mmu.c | 4 +++-
> > > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> > > > index c60c5da..bc8302f 100644
> > > > --- a/arch/x86/kvm/mmu.c
> > > > +++ b/arch/x86/kvm/mmu.c
> > > > @@ -4385,8 +4385,10 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
> > > >  	 * The max value is MMIO_MAX_GEN - 1 since it is not called
> > > >  	 * when mark memslot invalid.
> > > >  	 */
> > > > -	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1)))
> > > > +	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1))) {
> > > > +		printk(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound");
> > >
> > > This should at least be rate-limited, because it is guest triggerable.
> > >
> > It will be hard for a guest to trigger it 1 << 19 times too fast, though.
> 
> I think guest-triggerable zap_all itself is a threat for the host, rather
> than a matter of log flooding, even if it can be preempted.
> 
There is not much we can do about it. Slot removal/creation is triggerable
through HW emulation registers.

> > > But why isn't the kvm_mmu_invalidate_zap_all_pages tracepoint enough?
> > >
> > This one will trigger during slot deletion/move too.
> >
> > I would put it in to see if it actually triggers in some real world
> > workloads (skipping the first wraparound since it is intentional);
> > we can always drop it if it turns out to create a lot of noise.
> 
> This patch is not for developers but for end users: of course they do not
> use tracers while running their services normally.
> 
> If they see mysterious performance problems induced by this wraparound, the
> only way to know the cause later is this kind of information in the syslog.
> So even the first wraparound may be better printed out IMO.

Think about starting hundreds of VMs on a freshly booted host.
You will see hundreds of those messages pretty quickly.

> 
> I want to let administrators know the cause if possible; any better way?
> 
Not that I can think of. Paolo, what about printk_once() and ignoring the
first wraparound?

--
			Gleb.