From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [RFC PATCH 3/4] mmu: don't set the present bit unconditionally
Date: Wed, 22 Jun 2016 12:30:05 +0800
Message-ID: <576A144D.90007@linux.intel.com>
References: <1466478746-14153-1-git-send-email-bsd@redhat.com> <1466478746-14153-4-git-send-email-bsd@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Cc: pbonzini@redhat.com
To: Bandan Das, kvm@vger.kernel.org
Received: from mga09.intel.com ([134.134.136.24]:60842 "EHLO mga09.intel.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1750761AbcFVEge (ORCPT); Wed, 22 Jun 2016 00:36:34 -0400
In-Reply-To: <1466478746-14153-4-git-send-email-bsd@redhat.com>
Sender: kvm-owner@vger.kernel.org

On 06/21/2016 11:12 AM, Bandan Das wrote:
> To support execute only mappings on behalf of L1 hypervisors,
> we teach set_spte to honor L1's valid XWR bits. This is only
> if host supports EPT execute only. Use ACC_USER_MASK to signify
> if the L1 hypervisor has the present bit set.
>
> Signed-off-by: Bandan Das
> ---
>  arch/x86/kvm/mmu.c         | 11 ++++++++---
>  arch/x86/kvm/paging_tmpl.h |  2 +-
>  2 files changed, 9 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index 57d8696..3ca1a99 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -2528,7 +2528,8 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>  	if (set_mmio_spte(vcpu, sptep, gfn, pfn, pte_access))
>  		return 0;
>
> -	spte = PT_PRESENT_MASK;
> +	if (!shadow_xonly_valid)
> +		spte = PT_PRESENT_MASK;

The xonly info can be fetched from vcpu->mmu; shadow_xonly_valid looks like it can be dropped.
>  	if (!speculative)
>  		spte |= shadow_accessed_mask;
>
> @@ -2537,8 +2538,12 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
>  	else
>  		spte |= shadow_nx_mask;
>
> -	if (pte_access & ACC_USER_MASK)
> -		spte |= shadow_user_mask;
> +	if (pte_access & ACC_USER_MASK) {
> +		if (shadow_xonly_valid)
> +			spte |= PT_PRESENT_MASK;
> +		else
> +			spte |= shadow_user_mask;
> +	}

It can be simplified by setting shadow_user_mask to PT_PRESENT_MASK if EPT is enabled.
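To illustrate the suggested simplification outside the kernel: instead of branching on shadow_xonly_valid inside set_spte(), the decision can be folded into shadow_user_mask once at setup time. The sketch below is a self-contained userspace mock-up; the mask values, set_mask_ptes(), and make_spte() are made up for the example and are not the real definitions in arch/x86/kvm/mmu.c.

```c
#include <stdint.h>

/* Illustrative bit values only; the real masks live in mmu.c/vmx.c. */
#define PT_PRESENT_MASK (1ULL << 0)
#define PT_USER_MASK    (1ULL << 2)
#define ACC_USER_MASK   (1U  << 1)

static uint64_t shadow_user_mask;

/* Setup-time choice: with EPT there is no user/supervisor bit in the
 * SPTE format, so reuse shadow_user_mask to carry the present bit. */
static void set_mask_ptes(int ept_enabled)
{
	shadow_user_mask = ept_enabled ? PT_PRESENT_MASK : PT_USER_MASK;
}

/* The hot path then needs no ept/xonly branch at all. */
static uint64_t make_spte(unsigned int pte_access)
{
	uint64_t spte = 0;

	if (pte_access & ACC_USER_MASK)
		spte |= shadow_user_mask;
	return spte;
}
```

With this shape, the "if (shadow_xonly_valid) ... else ..." block in the patch collapses back into the single "spte |= shadow_user_mask;" line.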