From: Andrew Cooper
Subject: Re: [PATCH v2 1/2] x86/mem_event: Deliver gla fault EPT violation information
Date: Thu, 07 Aug 2014 23:58:07 +0100
Message-ID: <53E4047F.8050409@citrix.com>
References: <1407440824-3281-1-git-send-email-tamas.lengyel@zentific.com>
 <53E3F215.7050205@oracle.com>
To: Tamas Lengyel, Boris Ostrovsky
Cc: kevin.tian@intel.com, Ian Campbell, Stefano Stabellini,
 eddie.dong@intel.com, Ian Jackson, Aravind.Gopalakrishnan@amd.com,
 Jun Nakajima, "xen-devel@lists.xenproject.org", suravee.suthikulpanit@amd.com
List-Id: xen-devel@lists.xenproject.org

On 07/08/2014 22:53, Tamas Lengyel wrote:
> On Thu, Aug 7, 2014 at 11:39 PM, Boris Ostrovsky wrote:
>> On 08/07/2014 03:47 PM, Tamas K Lengyel wrote:
>>
>>> On Intel EPT the exit qualification generated by a violation also
>>> includes a bit (EPT_GLA_FAULT) which describes the following information:
>>> Set if the access causing the EPT violation is to a guest-physical
>>> address that is the translation of a linear address. Clear if the access
>>> causing the EPT violation is to a paging-structure entry as part of a page
>>> walk or the update of an accessed or dirty bit.
>>>
>>> For more information see Table 27-7 in the Intel SDM.
>>>
>>> This patch extends the mem_event system to deliver this extra
>>> information, which could be useful for determining the cause of a violation.
>>>
>>> v2: Split gla_fault into fault_in_gpt and fault_gla to be more compatible
>>> with the AMD implementation.
>>>
>>> Signed-off-by: Tamas K Lengyel
>>> ---
>>>   xen/arch/x86/hvm/hvm.c         |  8 ++++++--
>>>   xen/arch/x86/hvm/svm/svm.c     |  2 +-
>>>   xen/arch/x86/hvm/vmx/vmx.c     | 23 ++++++++++++++++++++++-
>>>   xen/arch/x86/mm/p2m.c          |  5 ++++-
>>>   xen/include/asm-x86/hvm/hvm.h  |  5 ++++-
>>>   xen/include/asm-x86/p2m.h      |  3 ++-
>>>   xen/include/public/mem_event.h |  4 +++-
>>>   7 files changed, 42 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index e834406..d7b5e2b 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -2725,6 +2725,8 @@ void hvm_inject_page_fault(int errcode, unsigned long cr2)
>>>   int hvm_hap_nested_page_fault(paddr_t gpa,
>>>                                 bool_t gla_valid,
>>>                                 unsigned long gla,
>>> +                              bool_t fault_in_gpt,
>>> +                              bool_t fault_gla,
>>>                                 bool_t access_r,
>>>                                 bool_t access_w,
>>>                                 bool_t access_x)
>>> @@ -2832,8 +2834,10 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>>>       if ( violation )
>>>       {
>>> -        if ( p2m_mem_access_check(gpa, gla_valid, gla, access_r,
>>> -                                  access_w, access_x, &req_ptr) )
>>> +        if ( p2m_mem_access_check(gpa, gla_valid, gla,
>>> +                                  fault_in_gpt, fault_gla,
>>> +                                  access_r, access_w, access_x,
>>> +                                  &req_ptr) )
>>>           {
>>>               fall_through = 1;
>>>           } else {
>>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>>> index 76616ac..9e35e7a 100644
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -1403,7 +1403,7 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>>       p2m_access_t p2ma;
>>>       struct p2m_domain *p2m = NULL;
>>> -    ret = hvm_hap_nested_page_fault(gpa, 0, ~0ul,
>>> +    ret = hvm_hap_nested_page_fault(gpa, 0, ~0ul, 0, 0,
>>>
>>
>> Why not pass the actual bits that the HW provides?
>>
> The actual bits could be passed but it makes no difference at this point
> since the AMD side isn't setup to work with mem_event. When it is
> integrated, those bits could and should be passed accordingly.
>
> Tamas

There is a lot more than mem_event which might want these bits from AMD.
If the bits are easily available at this point, you should fill them in.

~Andrew
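
For reference, the vmx.c hunk is trimmed from the quote above.  Below is a
minimal sketch of how the exit qualification described in the commit message
could be decoded into the two new flags, assuming Xen's existing
EPT_GLA_VALID / EPT_GLA_FAULT exit-qualification masks and the v2 parameter
order shown in the hvm.c hunk; it is illustrative only and not necessarily
what the actual vmx.c change does:

    /* Sketch: decode the EPT violation exit qualification in
     * ept_handle_violation().  EPT_GLA_FAULT is only meaningful when
     * EPT_GLA_VALID is set. */
    unsigned long gla = ~0ul;
    bool_t fault_gla = 0, fault_in_gpt = 0;

    if ( qualification & EPT_GLA_VALID )
    {
        __vmread(GUEST_LINEAR_ADDRESS, &gla);
        if ( qualification & EPT_GLA_FAULT )
            fault_gla = 1;     /* access to the translation of a linear address */
        else
            fault_in_gpt = 1;  /* access during a guest page-table walk or A/D update */
    }

    ret = hvm_hap_nested_page_fault(gpa, !!(qualification & EPT_GLA_VALID), gla,
                                    fault_in_gpt, fault_gla,
                                    !!(qualification & EPT_READ_VIOLATION),
                                    !!(qualification & EPT_WRITE_VIOLATION),
                                    !!(qualification & EPT_EXEC_VIOLATION));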
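
On the AMD side, the information in question is carried in bits 32 and 33 of
the #NPF error code delivered in EXITINFO1 (per the AMD APM Vol. 2: bit 32 is
set for a fault on the final guest-physical translation, bit 33 for a fault
while translating the guest page tables).  A sketch of how
svm_do_nested_pgfault() could forward them, assuming the 64-bit error code is
available there as pfec; the NPT_PFEC_* names are illustrative and not
defined in the tree at this point:

    /* Sketch: map the #NPF error code (EXITINFO1) onto the new flags.
     * Bits 32/33 are described in the AMD APM; the mask names below are
     * made up for illustration. */
    #define NPT_PFEC_with_gla (1UL << 32)  /* fault on the final translation */
    #define NPT_PFEC_in_gpt   (1UL << 33)  /* fault during the guest page-table walk */

    bool_t fault_gla    = !!(pfec & NPT_PFEC_with_gla);
    bool_t fault_in_gpt = !!(pfec & NPT_PFEC_in_gpt);

    /* ... then pass fault_in_gpt/fault_gla in place of the literal 0, 0 in
     * the hvm_hap_nested_page_fault() call quoted above. */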