From mboxrd@z Thu Jan 1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [patch 3/4] KVM: MMU: reload request from GET_DIRTY_LOG path
Date: Sat, 4 Oct 2014 10:23:32 +0300
Message-ID: <20141004072332.GS26540@minantech.com>
References: <20140709191250.408928362@amt.cnet>
 <20140709191611.280800634@amt.cnet>
 <20140721131424.GZ18167@minantech.com>
 <20140909152811.GA4153@amt.cnet>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, ak@linux.intel.com, pbonzini@redhat.com,
 xiaoguangrong@linux.vnet.ibm.com, avi.kivity@gmail.com
To: Marcelo Tosatti
Return-path: 
Received: from mail-wg0-f51.google.com ([74.125.82.51]:63981 "EHLO
 mail-wg0-f51.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
 with ESMTP id S1750823AbaJDHXm (ORCPT );
 Sat, 4 Oct 2014 03:23:42 -0400
Received: by mail-wg0-f51.google.com with SMTP id b13so3056663wgh.22
 for ; Sat, 04 Oct 2014 00:23:40 -0700 (PDT)
Content-Disposition: inline
In-Reply-To: <20140909152811.GA4153@amt.cnet>
Sender: kvm-owner@vger.kernel.org
List-ID: 

On Tue, Sep 09, 2014 at 12:28:11PM -0300, Marcelo Tosatti wrote:
> On Mon, Jul 21, 2014 at 04:14:24PM +0300, Gleb Natapov wrote:
> > On Wed, Jul 09, 2014 at 04:12:53PM -0300, mtosatti@redhat.com wrote:
> > > Reload remote vcpus MMU from GET_DIRTY_LOG codepath, before
> > > deleting a pinned spte.
> > > 
> > > Signed-off-by: Marcelo Tosatti 
> > > 
> > > ---
> > >  arch/x86/kvm/mmu.c |   29 +++++++++++++++++++++++------
> > >  1 file changed, 23 insertions(+), 6 deletions(-)
> > > 
> > > Index: kvm.pinned-sptes/arch/x86/kvm/mmu.c
> > > ===================================================================
> > > --- kvm.pinned-sptes.orig/arch/x86/kvm/mmu.c	2014-07-09 11:23:59.290744490 -0300
> > > +++ kvm.pinned-sptes/arch/x86/kvm/mmu.c	2014-07-09 11:24:58.449632435 -0300
> > > @@ -1208,7 +1208,8 @@
> > >  	 *
> > >  	 * Return true if tlb need be flushed.
> > >  	 */
> > > -static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect)
> > > +static bool spte_write_protect(struct kvm *kvm, u64 *sptep, bool pt_protect,
> > > +			       bool skip_pinned)
> > >  {
> > >  	u64 spte = *sptep;
> > > 
> > > @@ -1218,6 +1219,22 @@
> > > 
> > >  	rmap_printk("rmap_write_protect: spte %p %llx\n", sptep, *sptep);
> > > 
> > > +	if (is_pinned_spte(spte)) {
> > > +		/* keep pinned spte intact, mark page dirty again */
> > > +		if (skip_pinned) {
> > > +			struct kvm_mmu_page *sp;
> > > +			gfn_t gfn;
> > > +
> > > +			sp = page_header(__pa(sptep));
> > > +			gfn = kvm_mmu_page_get_gfn(sp, sptep - sp->spt);
> > > +
> > > +			mark_page_dirty(kvm, gfn);
> > > +			return false;
> > 
> > Why not mark all pinned gfns as dirty in kvm_vm_ioctl_get_dirty_log() while
> > populating dirty_bitmap_buffer?
> 
> The pinned gfns are per-vcpu. Requires looping all vcpus (not
> scalable).
> 
OK, but do they really have to be per-vcpu? What is the advantage?

> > > +		} else
> > > +			mmu_reload_pinned_vcpus(kvm);
> > 
> > Can you explain why you need this?
> 
> Because if skip_pinned = false, we want vcpus to exit (called
> from the enable dirty logging codepath).
> 
I guess what I wanted to ask is: why do we need skip_pinned at all? As far
as I can see, it is set to false in two cases:

1: a page is write protected for shadow MMU needs. This should not happen,
   since the feature is not supported with shadow MMU (it can happen with
   nested EPT, but then the page will be marked as accessed during the next
   vcpu entry anyway, so how will that work?).

2: a slot is marked as read only. Such a slot cannot contain PEBS pages,
   and if it does, the guest will die during the next guest entry anyway.
   So why not maintain a list of pinned pages per slot, and kill the guest
   if a slot with pinned pages is marked read only?

--
			Gleb.