From: Junaid Shahid
Subject: Re: [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
Date: Wed, 25 Jan 2017 12:05:02 -0800
Message-ID: <12702206.fSC68b8oy2@js-desktop.mtv.corp.google.com>
References: <1482380972-25573-1-git-send-email-junaids@google.com>
 <1482380972-25573-6-git-send-email-junaids@google.com>
 <2cf4d0ad-e022-c0b7-a07a-e9b6e82fa5ba@redhat.com>
In-Reply-To: <2cf4d0ad-e022-c0b7-a07a-e9b6e82fa5ba@redhat.com>
To: Paolo Bonzini
Cc: kvm@vger.kernel.org, andreslc@google.com, pfeiner@google.com, guangrong.xiao@linux.intel.com
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit

Looks good. Thanks.

On Wednesday, January 25, 2017 12:49:06 PM Paolo Bonzini wrote:
> > -		remove_acc_track = is_access_track_spte(spte);
> > -
> > -		/* Verify that the fault can be handled in the fast path */
> > -		if (!remove_acc_track && !remove_write_prot)
> > -			break;
> > +		new_spte = is_access_track_spte(spte)
> > +				? restore_acc_track_spte(spte)
> > +				: spte;
> 
> Just a tiny stylistic change to match the "new_spte |= PT_WRITABLE_MASK"
> in the block below:
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index aa3b0d14c019..2fd7586aad4d 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -3135,9 +3135,10 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
>  			break;
>  		}
>  
> -		new_spte = is_access_track_spte(spte)
> -				? restore_acc_track_spte(spte)
> -				: spte;
> +		new_spte = spte;
> +
> +		if (is_access_track_spte(spte))
> +			new_spte = restore_acc_track_spte(new_spte);
>  
>  		/*
>  		 * Currently, to simplify the code, write-protection can
> 
> Paolo
> 
> >  		/*
> > -		 * Do not fix write-permission on the large spte since we only
> > -		 * dirty the first page into the dirty-bitmap in
> > -		 * fast_pf_fix_direct_spte() that means other pages are missed
> > -		 * if its slot is dirty-logged.
> > -		 *
> > -		 * Instead, we let the slow page fault path create a normal spte
> > -		 * to fix the access.
> > -		 *
> > -		 * See the comments in kvm_arch_commit_memory_region().
> > +		 * Currently, to simplify the code, write-protection can
> > +		 * be removed in the fast path only if the SPTE was
> > +		 * write-protected for dirty-logging or access tracking.
> >  		 */
> > -		if (sp->role.level > PT_PAGE_TABLE_LEVEL && remove_write_prot)
> > +		if ((error_code & PFERR_WRITE_MASK) &&
> > +		    spte_can_locklessly_be_made_writable(spte))
> > +		{
> > +			new_spte |= PT_WRITABLE_MASK;
> > +
> > +			/*
> > +			 * Do not fix write-permission on the large spte since
> > +			 * we only dirty the first page into the dirty-bitmap in
> > +			 * fast_pf_fix_direct_spte() that means other pages are
> > +			 * missed if its slot is dirty-logged.
> > +			 *
> > +			 * Instead, we let the slow page fault path create a
> > +			 * normal spte to fix the access.
> > +			 *
> > +			 * See the comments in kvm_arch_commit_memory_region().
> > +			 */
> > +			if (sp->role.level > PT_PAGE_TABLE_LEVEL)
> > +				break;
> > +		}
> > +
> > +		/* Verify that the fault can be handled in the fast path */
> > +		if (new_spte == spte ||
> > +		    !is_access_allowed(error_code, new_spte))
> >  			break;
> > 
> >  		/*
> > @@ -3166,8 +3175,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> >  		 */
> >  		fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
> >  							iterator.sptep, spte,
> > -							remove_write_prot,
> > -							remove_acc_track);
> > +							new_spte);
> >  		if (fault_handled)
> >  			break;