From mboxrd@z Thu Jan  1 00:00:00 1970
From: Bandan Das
Subject: [PATCH 1/5] mmu: mark spte present if the x bit is set
Date: Tue, 28 Jun 2016 00:32:36 -0400
Message-ID: <1467088360-10186-2-git-send-email-bsd@redhat.com>
References: <1467088360-10186-1-git-send-email-bsd@redhat.com>
Cc: pbonzini@redhat.com, guangrong.xiao@linux.intel.com,
	linux-kernel@vger.kernel.org
To: kvm@vger.kernel.org
Return-path:
In-Reply-To: <1467088360-10186-1-git-send-email-bsd@redhat.com>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

This is safe because is_shadow_present_pte() is only called on
host-controlled page tables, where we know the spte is valid.

Signed-off-by: Bandan Das
---
 arch/x86/kvm/mmu.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index def97b3..a50af79 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -304,7 +304,8 @@ static int is_nx(struct kvm_vcpu *vcpu)
 
 static int is_shadow_present_pte(u64 pte)
 {
-	return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
+	return pte & (PT_PRESENT_MASK | shadow_x_mask) &&
+		!is_mmio_spte(pte);
 }
 
 static int is_large_pte(u64 pte)
-- 
2.5.5