From mboxrd@z Thu Jan  1 00:00:00 1970
From: Gleb Natapov
Subject: Re: [patch 2/3] KVM: dont require read-only host ptes
Date: Tue, 5 Oct 2010 15:43:01 +0200
Message-ID: <20101005134301.GQ11145@redhat.com>
References: <20101005115458.792126399@redhat.com>
 <20101005121555.827500635@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
To: Marcelo Tosatti
Cc: avi@redhat.com, aarcange@redhat.com, kvm@vger.kernel.org
In-Reply-To: <20101005121555.827500635@redhat.com>
Content-Disposition: inline

On Tue, Oct 05, 2010 at 08:55:00AM -0300, Marcelo Tosatti wrote:
> gfn_to_pfn requires a writable host pte, failing otherwise.
> 
> Change it to fall back to read-only acquisition, informing the callers.
> 
> Hopefully the ptes are cache-hot so the overhead is minimal.
> 
> Signed-off-by: Marcelo Tosatti
> 
> Index: kvm/arch/ia64/kvm/kvm-ia64.c
> ===================================================================
> --- kvm.orig/arch/ia64/kvm/kvm-ia64.c
> +++ kvm/arch/ia64/kvm/kvm-ia64.c
> @@ -1589,7 +1589,7 @@ int kvm_arch_prepare_memory_region(struc
>  		return -ENOMEM;
> 
>  	for (i = 0; i < npages; i++) {
> -		pfn = gfn_to_pfn(kvm, base_gfn + i);
> +		pfn = gfn_to_pfn(kvm, base_gfn + i, NULL);
>  		if (!kvm_is_mmio_pfn(pfn)) {
>  			kvm_set_pmt_entry(kvm, base_gfn + i,
>  					pfn << PAGE_SHIFT,
> Index: kvm/arch/x86/kvm/mmu.c
> ===================================================================
> --- kvm.orig/arch/x86/kvm/mmu.c
> +++ kvm/arch/x86/kvm/mmu.c
> @@ -2273,6 +2273,7 @@ static int nonpaging_map(struct kvm_vcpu
>  {
>  	int r;
>  	int level;
> +	int writable;
>  	pfn_t pfn;
>  	unsigned long mmu_seq;
> 
> @@ -2289,10 +2290,10 @@ static int nonpaging_map(struct kvm_vcpu
> 
>  	mmu_seq = vcpu->kvm->mmu_notifier_seq;
>  	smp_rmb();
> -	pfn = gfn_to_pfn(vcpu->kvm, gfn);
> +	pfn = gfn_to_pfn(vcpu->kvm, gfn, &writable);
> 
>  	/* mmio */
> -	if (is_error_pfn(pfn))
> +	if (is_error_pfn(pfn) || !writable)
>  		return kvm_handle_bad_page(vcpu->kvm, gfn, pfn);
> 
>  	spin_lock(&vcpu->kvm->mmu_lock);
> @@ -2581,6 +2582,8 @@ static int tdp_page_fault(struct kvm_vcp
>  	pfn_t pfn;
>  	int r;
>  	int level;
> +	int writable;
> +	int write = error_code & PFERR_WRITE_MASK;
>  	gfn_t gfn = gpa >> PAGE_SHIFT;
>  	unsigned long mmu_seq;
> 
> @@ -2597,15 +2600,14 @@ static int tdp_page_fault(struct kvm_vcp
> 
>  	mmu_seq = vcpu->kvm->mmu_notifier_seq;
>  	smp_rmb();
> -	pfn = gfn_to_pfn(vcpu->kvm, gfn);
> -	if (is_error_pfn(pfn))
> +	pfn = gfn_to_pfn(vcpu->kvm, gfn, &writable);
> +	if (is_error_pfn(pfn) || !writable)

Why would we fail read-only access to read-only memory? Shouldn't we
check the access type here?

--
			Gleb.