From: Xiao Guangrong
Subject: Re: [PATCH] KVM: x86: Avoid zapping mmio sptes twice for generation wraparound
Date: Wed, 03 Jul 2013 16:50:36 +0800
Message-ID: <51D3E5DC.5020902@linux.vnet.ibm.com>
References: <20130703171804.89d6cc2c.yoshikawa_takuya_b1@lab.ntt.co.jp> <51D3E093.3020408@redhat.com> <51D3E33D.1090704@linux.vnet.ibm.com>
To: Xiao Guangrong
Cc: Paolo Bonzini, Takuya Yoshikawa, gleb@redhat.com, kvm@vger.kernel.org
In-Reply-To: <51D3E33D.1090704@linux.vnet.ibm.com>

On 07/03/2013 04:39 PM, Xiao Guangrong wrote:
> On 07/03/2013 04:28 PM, Paolo Bonzini wrote:
>> On 03/07/2013 10:18, Takuya Yoshikawa wrote:
>>> Since kvm_arch_prepare_memory_region() is called right after installing
>>> the slot marked invalid, the wraparound check should be there, to avoid
>>> zapping mmio sptes when the mmio generation is still MMIO_MAX_GEN - 1.
>>>
>>> Signed-off-by: Takuya Yoshikawa
>>> ---
>>> This seems to be the simplest solution for fixing the off-by-one issue
>>> we discussed before.
>>>
>>>  arch/x86/kvm/mmu.c | 5 +----
>>>  arch/x86/kvm/x86.c | 7 +++++++
>>>  2 files changed, 8 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>>> index 0d094da..bf7af1e 100644
>>> --- a/arch/x86/kvm/mmu.c
>>> +++ b/arch/x86/kvm/mmu.c
>>> @@ -4383,11 +4383,8 @@ void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm)
>>>  	/*
>>>  	 * The very rare case: if the generation-number is round,
>>>  	 * zap all shadow pages.
>>> -	 *
>>> -	 * The max value is MMIO_MAX_GEN - 1 since it is not called
>>> -	 * when mark memslot invalid.
>>>  	 */
>>> -	if (unlikely(kvm_current_mmio_generation(kvm) >= (MMIO_MAX_GEN - 1))) {
>>> +	if (unlikely(kvm_current_mmio_generation(kvm) >= MMIO_MAX_GEN)) {
>>>  		printk_ratelimited(KERN_INFO "kvm: zapping shadow pages for mmio generation wraparound\n");
>>>  		kvm_mmu_invalidate_zap_all_pages(kvm);
>>>  	}
>>> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>>> index 7d71c0f..9ddd4ff 100644
>>> --- a/arch/x86/kvm/x86.c
>>> +++ b/arch/x86/kvm/x86.c
>>> @@ -7046,6 +7046,13 @@ int kvm_arch_prepare_memory_region(struct kvm *kvm,
>>>  		memslot->userspace_addr = userspace_addr;
>>>  	}
>>>
>>> +	/*
>>> +	 * In these cases, slots->generation has been increased for marking the
>>> +	 * slot invalid, so we need wraparound checking here.
>>> +	 */
>>> +	if ((change == KVM_MR_DELETE) || (change == KVM_MR_MOVE))
>>> +		kvm_mmu_invalidate_mmio_sptes(kvm);
>>> +
>>>  	return 0;
>>>  }
>>>
>>
>> Applied, thanks.
>
> Please wait a while. I cannot understand it very clearly.
>
> This conditional check can cause an overflowed value to be cached into an
> mmio spte. The simple case: if kvm adds new slots many times, the mmio
> generation easily exceeds MMIO_MAX_GEN.

Actually, the double zapping can be avoided by moving
kvm_mmu_invalidate_mmio_sptes() to the end of install_new_memslots().