From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bandan Das
Subject: Re: [RFC PATCH 3/4] mmu: don't set the present bit unconditionally
Date: Wed, 22 Jun 2016 12:21:24 -0400
Message-ID:
References: <1466478746-14153-1-git-send-email-bsd@redhat.com> <1466478746-14153-4-git-send-email-bsd@redhat.com> <576A144D.90007@linux.intel.com>
Mime-Version: 1.0
Content-Type: text/plain
Cc: kvm@vger.kernel.org, pbonzini@redhat.com
To: Xiao Guangrong
Return-path: 
Received: from mx1.redhat.com ([209.132.183.28]:50546 "EHLO mx1.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1752756AbcFVQVm (ORCPT ); Wed, 22 Jun 2016 12:21:42 -0400
In-Reply-To: <576A144D.90007@linux.intel.com> (Xiao Guangrong's message of "Wed, 22 Jun 2016 12:30:05 +0800")
Sender: kvm-owner@vger.kernel.org
List-ID: 

Xiao Guangrong writes:

> On 06/21/2016 11:12 AM, Bandan Das wrote:
>> To support execute-only mappings on behalf of L1 hypervisors,
>> we teach set_spte to honor L1's valid XWR bits. This is done only
>> if the host supports EPT execute-only. Use ACC_USER_MASK to signify
>> that the L1 hypervisor has the present bit set.
>>
>> Signed-off-by: Bandan Das
>> ---
>>  arch/x86/kvm/mmu.c         | 11 ++++++++---
>>  arch/x86/kvm/paging_tmpl.h |  2 +-
>>  2 files changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index 57d8696..3ca1a99 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -2528,7 +2528,8 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>>  	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
>>  		return 0;
>>
>> -	spte = PT_PRESENT_MASK;
>> +	if (!shadow_xonly_valid)
>> +		spte = PT_PRESENT_MASK;
>
> The xonly info can be fetched from vcpu->mmu, so it looks like
> shadow_xonly_valid can be dropped.

I added shadow_xonly_valid mainly for is_shadow_present_pte; since it
seems it isn't needed there, I will drop it.
>>  	if (!speculative)
>>  		spte |= shadow_accessed_mask;
>>
>> @@ -2537,8 +2538,12 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>>  	else
>>  		spte |= shadow_nx_mask;
>>
>> -	if (pte_access & ACC_USER_MASK)
>> -		spte |= shadow_user_mask;
>> +	if (pte_access & ACC_USER_MASK) {
>> +		if (shadow_xonly_valid)
>> +			spte |= PT_PRESENT_MASK;
>> +		else
>> +			spte |= shadow_user_mask;
>> +	}
>
> It can be simplified by setting shadow_user_mask to PT_PRESENT_MASK
> if ept is enabled.

Ok, sounds good.