From mboxrd@z Thu Jan  1 00:00:00 1970
From: Paolo Bonzini
Subject: Re: [PATCH 5/5] kvm: x86: mmu: Verify that restored PTE has needed perms in fast page fault
Date: Wed, 25 Jan 2017 12:49:06 +0100
Message-ID: <2cf4d0ad-e022-c0b7-a07a-e9b6e82fa5ba@redhat.com>
References: <1482380972-25573-1-git-send-email-junaids@google.com>
 <1482380972-25573-6-git-send-email-junaids@google.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Cc: andreslc@google.com, pfeiner@google.com, guangrong.xiao@linux.intel.com
To: Junaid Shahid <junaids@google.com>, kvm@vger.kernel.org
Return-path:
Received: from mail-lf0-f65.google.com ([209.85.215.65]:33171 "EHLO
	mail-lf0-f65.google.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1751406AbdAYLtK (ORCPT );
	Wed, 25 Jan 2017 06:49:10 -0500
Received: by mail-lf0-f65.google.com with SMTP id x1so20677910lff.0
	for ; Wed, 25 Jan 2017 03:49:09 -0800 (PST)
In-Reply-To: <1482380972-25573-6-git-send-email-junaids@google.com>
Sender: kvm-owner@vger.kernel.org
List-ID:

> -		remove_acc_track = is_access_track_spte(spte);
> -
> -		/* Verify that the fault can be handled in the fast path */
> -		if (!remove_acc_track && !remove_write_prot)
> -			break;
> +		new_spte = is_access_track_spte(spte)
> +				? restore_acc_track_spte(spte)
> +				: spte;

Just a tiny stylistic change to match the "new_spte |= PT_WRITABLE_MASK" in
the block below:

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index aa3b0d14c019..2fd7586aad4d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3135,9 +3135,10 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
 			break;
 		}
 
-		new_spte = is_access_track_spte(spte)
-				? restore_acc_track_spte(spte)
-				: spte;
+		new_spte = spte;
+
+		if (is_access_track_spte(spte))
+			new_spte = restore_acc_track_spte(new_spte);
 
 		/*
 		 * Currently, to simplify the code, write-protection can

Paolo

> 		/*
> -		 * Do not fix write-permission on the large spte since we only
> -		 * dirty the first page into the dirty-bitmap in
> -		 * fast_pf_fix_direct_spte() that means other pages are missed
> -		 * if its slot is dirty-logged.
> -		 *
> -		 * Instead, we let the slow page fault path create a normal spte
> -		 * to fix the access.
> -		 *
> -		 * See the comments in kvm_arch_commit_memory_region().
> +		 * Currently, to simplify the code, write-protection can
> +		 * be removed in the fast path only if the SPTE was
> +		 * write-protected for dirty-logging or access tracking.
> 		 */
> -		if (sp->role.level > PT_PAGE_TABLE_LEVEL && remove_write_prot)
> +		if ((error_code & PFERR_WRITE_MASK) &&
> +		    spte_can_locklessly_be_made_writable(spte))
> +		{
> +			new_spte |= PT_WRITABLE_MASK;
> +
> +			/*
> +			 * Do not fix write-permission on the large spte since
> +			 * we only dirty the first page into the dirty-bitmap in
> +			 * fast_pf_fix_direct_spte() that means other pages are
> +			 * missed if its slot is dirty-logged.
> +			 *
> +			 * Instead, we let the slow page fault path create a
> +			 * normal spte to fix the access.
> +			 *
> +			 * See the comments in kvm_arch_commit_memory_region().
> +			 */
> +			if (sp->role.level > PT_PAGE_TABLE_LEVEL)
> +				break;
> +		}
> +
> +		/* Verify that the fault can be handled in the fast path */
> +		if (new_spte == spte ||
> +		    !is_access_allowed(error_code, new_spte))
> 			break;
> 
> 		/*
> @@ -3166,8 +3175,7 @@ static bool fast_page_fault(struct kvm_vcpu *vcpu, gva_t gva, int level,
> 		 */
> 		fault_handled = fast_pf_fix_direct_spte(vcpu, sp,
> 							iterator.sptep, spte,
> -							remove_write_prot,
> -							remove_acc_track);
> +							new_spte);
> 		if (fault_handled)
> 			break;
> 
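[For readers outside the thread: the two formulations build the same new_spte; the suggestion is purely about keeping the "start from spte, then adjust it step by step" shape so the later "new_spte |= PT_WRITABLE_MASK" reads naturally. A minimal standalone sketch of the equivalence, using a hypothetical SPTE_ACC_TRACK_FLAG bit and stub helpers that do NOT match the real definitions in arch/x86/kvm/mmu.c:]

```c
#include <stdint.h>

typedef uint64_t u64;

/* Hypothetical stand-ins for the real KVM flag and helpers. */
#define SPTE_ACC_TRACK_FLAG	(1ULL << 62)
#define PT_WRITABLE_MASK	(1ULL << 1)

static int is_access_track_spte(u64 spte)
{
	return (spte & SPTE_ACC_TRACK_FLAG) != 0;
}

static u64 restore_acc_track_spte(u64 spte)
{
	return spte & ~SPTE_ACC_TRACK_FLAG;
}

/* Style in the patch as posted: a single conditional expression. */
static u64 new_spte_ternary(u64 spte)
{
	return is_access_track_spte(spte) ? restore_acc_track_spte(spte)
					  : spte;
}

/* Style suggested in the reply: start from spte and mutate it in
 * steps, matching the later "new_spte |= PT_WRITABLE_MASK" block. */
static u64 new_spte_stepwise(u64 spte)
{
	u64 new_spte = spte;

	if (is_access_track_spte(spte))
		new_spte = restore_acc_track_spte(new_spte);
	return new_spte;
}
```

Both functions return the same value for every input; the step-by-step form just leaves an obvious place to append further adjustments such as setting the writable bit.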