From mboxrd@z Thu Jan  1 00:00:00 1970
From: Marcelo Tosatti
Subject: Re: [patch 03/10] KVM: MMU: do not write-protect large mappings
Date: Sat, 20 Sep 2008 21:41:43 -0300
Message-ID: <20080921004143.GB9262@dmt.cnet>
References: <20080918212749.800177179@localhost.localdomain> <20080918213336.633920548@localhost.localdomain> <48D443E8.3090505@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Avi Kivity, kvm@vger.kernel.org, "David S. Ahern"
To: Avi Kivity
Return-path: Received: from mx2.redhat.com ([66.187.237.31]:51110 "EHLO mx2.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751259AbYIUAnS (ORCPT); Sat, 20 Sep 2008 20:43:18 -0400
Content-Disposition: inline
In-Reply-To: <48D443E8.3090505@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Fri, Sep 19, 2008 at 05:29:28PM -0700, Avi Kivity wrote:
> Marcelo Tosatti wrote:
>> There is not much point in write-protecting large mappings. This
>> can only happen when a page is shadowed during the window between
>> is_largepage_backed and mmu_lock acquisition. Zap the entry instead, so
>> the next pagefault will find a shadowed page via is_largepage_backed and
>> fall back to 4k translations.
>>
>> Simplifies out of sync shadow.
>>
>> Signed-off-by: Marcelo Tosatti
>>
>> Index: kvm/arch/x86/kvm/mmu.c
>> ===================================================================
>> --- kvm.orig/arch/x86/kvm/mmu.c
>> +++ kvm/arch/x86/kvm/mmu.c
>> @@ -1180,11 +1180,16 @@ static int set_spte(struct kvm_vcpu *vcp
>>  	    || (write_fault && !is_write_protection(vcpu) && !user_fault)) {
>>  		struct kvm_mmu_page *shadow;
>>
>> +		if (largepage && has_wrprotected_page(vcpu->kvm, gfn)) {
>> +			ret = 1;
>> +			spte = shadow_trap_nonpresent_pte;
>> +			goto set_pte;
>> +		}
>
> Don't we need to drop a reference to the page?
mmu_set_spte will do it if necessary:

	if (!was_rmapped) {
		rmap_add(vcpu, shadow_pte, gfn, largepage);
		if (!is_rmap_pte(*shadow_pte))
			kvm_release_pfn_clean(pfn);
	}

But as noted, this part is wrong:

@@ -307,11 +307,10 @@ static int FNAME(shadow_walk_entry)(stru
 		return 1;
 	}

-	if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep))
+	if (is_shadow_present_pte(*sptep))
 		return 0;

-	if (is_large_pte(*sptep))
-		rmap_remove(vcpu->kvm, sptep);
+	WARN_ON(is_large_pte(*sptep));

Since it might still be necessary to replace a read-only large mapping
(which rmap_write_protect will not nuke) with a normal pte pointer.