From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [patch 2/4] KVM: MMU: allow pinning spte translations (TDP-only)
Date: Thu, 17 Jul 2014 18:38:23 -0300
Message-ID: <20140717213823.GA18770@amt.cnet>
References: <20140709191250.408928362@amt.cnet>
 <20140709191611.207208253@amt.cnet>
 <53C8054B.5080604@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: kvm@vger.kernel.org, ak@linux.intel.com, pbonzini@redhat.com,
 xiaoguangrong@linux.vnet.ibm.com, gleb@kernel.org, avi.kivity@gmail.com
To: Nadav Amit
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:42347 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1752507AbaGQViu (ORCPT ); Thu, 17 Jul 2014 17:38:50 -0400
Content-Disposition: inline
In-Reply-To: <53C8054B.5080604@gmail.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Thu, Jul 17, 2014 at 08:18:03PM +0300, Nadav Amit wrote:
> Small question if I may regarding kvm_mmu_pin_pages:
>
> On 7/9/14, 10:12 PM, mtosatti@redhat.com wrote:
> >+
> >+static int kvm_mmu_pin_pages(struct kvm_vcpu *vcpu)
> >+{
> >+	struct kvm_pinned_page_range *p;
> >+	int r = 1;
> >+
> >+	if (is_guest_mode(vcpu))
> >+		return r;
> >+
> >+	if (!vcpu->arch.mmu.direct_map)
> >+		return r;
> >+
> >+	ASSERT(VALID_PAGE(vcpu->arch.mmu.root_hpa));
> >+
> >+	list_for_each_entry(p, &vcpu->arch.pinned_mmu_pages, link) {
> >+		gfn_t gfn_offset;
> >+
> >+		for (gfn_offset = 0; gfn_offset < p->npages; gfn_offset++) {
> >+			gfn_t gfn = p->base_gfn + gfn_offset;
> >+			int r;
> >+			bool pinned = false;
> >+
> >+			r = vcpu->arch.mmu.page_fault(vcpu, gfn << PAGE_SHIFT,
> >+						      PFERR_WRITE_MASK, false,
> >+						      true, &pinned);
>
> I understand that the current use-case is for pinning only a few
> pages. Yet, wouldn't it be better (for performance) to check whether
> the gfn uses a large page and, if so, to skip forward, increasing
> gfn_offset to point to the next large page?

Sure, but that can be a lazy optimization, performed when necessary
(feel free to do it in advance if you're interested in doing it now).