From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [patch 3/4] KVM: MMU: reload request from GET_DIRTY_LOG path
Date: Mon, 21 Jul 2014 16:14:24 +0300
Message-ID: <20140721131424.GZ18167@minantech.com>
References: <20140709191250.408928362@amt.cnet> <20140709191611.280800634@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, ak@linux.intel.com, pbonzini@redhat.com,
	xiaoguangrong@linux.vnet.ibm.com, avi.kivity@gmail.com
To: mtosatti@redhat.com
Return-path: Received: from mail-we0-f174.google.com ([74.125.82.174]:64321
	"EHLO mail-we0-f174.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S932372AbaGUNOa (ORCPT ); Mon, 21 Jul 2014 09:14:30 -0400
Received: by mail-we0-f174.google.com with SMTP id x48so7537699wes.19 for ;
	Mon, 21 Jul 2014 06:14:29 -0700 (PDT)
Content-Disposition: inline
In-Reply-To: <20140709191611.280800634@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Wed, Jul 09, 2014 at 04:12:53PM -0300, mtosatti@redhat.com wrote:
> Reload remote vcpus MMU from GET_DIRTY_LOG codepath, before
> deleting a pinned spte.
>
> Signed-off-by: Marcelo Tosatti
>
> ---
>  arch/x86/kvm/mmu.c |   29 +++++++++++++++++++++++------
>  1 file changed, 23 insertions(+), 6 deletions(-)
>
> Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.pinned-sptes.orig/arch/x86/kvm/mmu.c	2014-07-09 11:23:59.290744490 -0300
> +++ kvm.pinned-sptes/arch/x86/kvm/mmu.c	2014-07-09 11:24:58.449632435 -0300
> @@ -1208,7 +1208,8 @@
>   *
>   * Return true if tlb need be flushed.
>   */
> -static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect)
> +static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect,
> +			       bool skip_pinned)
>  {
>  	u64 spte = *sptep;
>
> @@ -1218,6 +1219,22 @@
>
>  	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
>
> +	if (is_pinned_spte(spte)) {
> +		/* keep pinned spte intact, mark page dirty again */
> +		if (skip_pinned) {
> +			struct kvm_mmu_page *sp;
> +			gfn_t gfn;
> +
> +			sp = page_header(__pa(sptep));
> +			gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
> +
> +			mark_page_dirty(kvm, gfn);
> +			return false;

Why not mark all pinned gfns as dirty in kvm_vm_ioctl_get_dirty_log()
while populating dirty_bitmap_buffer?

> +		} else
> +			mmu_reload_pinned_vcpus(kvm);

Can you explain why you need this?

> +	}
> +
> +
>  	if (pt_protect)
>  		spte &= ~SPTE_MMU_WRITEABLE;
>  	spte = spte & ~PT_WRITABLE_MASK;
> @@ -1226,7 +1243,7 @@
>  }
>
>  static bool __rmap_write_protect(struct kvm *kvm, unsigned long *rmapp,
> -				 bool pt_protect)
> +				 bool pt_protect, bool skip_pinned)
>  {
>  	u64 *sptep;
>  	struct rmap_iterator iter;
> @@ -1235,7 +1252,7 @@
>  	for (sptep = rmap_get_first(*rmapp, &iter); sptep;) {
>  		BUG_ON(!(*sptep & PT_PRESENT_MASK));
>
> -		flush |= spte_write_protect(kvm, sptep, pt_protect);
> +		flush |= spte_write_protect(kvm, sptep, pt_protect, skip_pinned);
>  		sptep = rmap_get_next(&iter);
>  	}
>
> @@ -1261,7 +1278,7 @@
>  	while (mask) {
>  		rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
>  				      PT_PAGE_TABLE_LEVEL, slot);
> -		__rmap_write_protect(kvm, rmapp, false);
> +		__rmap_write_protect(kvm, rmapp, false, true);
>
>  		/* clear the first set bit */
>  		mask &= mask - 1;
> @@ -1280,7 +1297,7 @@
>  	for (i = PT_PAGE_TABLE_LEVEL;
>  	     i < PT_PAGE_TABLE_LEVEL + KVM_NR_PAGE_SIZES; ++i) {
>  		rmapp = __gfn_to_rmap(gfn, i, slot);
> -		write_protected |= __rmap_write_protect(kvm, rmapp, true);
> +		write_protected |= __rmap_write_protect(kvm, rmapp, true, false);
>  	}
>
>  	return write_protected;
> @@ -4565,7 +4582,7 @@
>
>  	for (index = 0; index <= last_index; ++index, ++rmapp) {
>  		if (*rmapp)
> -			__rmap_write_protect(kvm, rmapp, false);
> +			__rmap_write_protect(kvm, rmapp, false, false);
>
>  		if (need_resched() || spin_needbreak(&kvm->mmu_lock))
>  			cond_resched_lock(&kvm->mmu_lock);
>

--
	Gleb.