xen-devel.lists.xenproject.org archive mirror
From: Christoph Egger <Christoph.Egger@amd.com>
To: Tim Deegan <tim@xen.org>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH] nestedhvm: fix write access fault on ro mapping
Date: Thu, 2 Aug 2012 14:32:42 +0200	[thread overview]
Message-ID: <501A736A.7080300@amd.com> (raw)
In-Reply-To: <20120802104524.GA11437@ocelot.phlegethon.org>

[-- Attachment #1: Type: text/plain, Size: 2065 bytes --]

On 08/02/12 12:45, Tim Deegan wrote:

> At 13:57 +0200 on 27 Jul (1343397454), Christoph Egger wrote:
>>>> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
>>>>              if ( !handle_mmio() )
>>>>                  hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>>              return 1;
>>>> +        case NESTEDHVM_PAGEFAULT_READONLY:
>>>> +            break;
>>>
>>> Don't we have to translate the faulting PA into an L1 address before
>>> letting the rest of this fault handler run?  It explicitly operates on
>>> the hostp2m.  
>>>
>>> If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
>>> rather than special-casing READONLY.  That way any other
>>> automatically-fixed types (like the p2m_access magic) will be covered
>>> too.
>>
>> How do you differentiate if the error happened from walking l1 npt or
>> host npt ?
>> In the first case it isn't possible to provide l1 address.
> 
> It must be _possible_; after all we managed to detect the error. :)  In
> any case it's definitely wrong to carry on with this handler with the
> wrong address in hand.  So I wonder why this patch actually works for
> you.  Does replacing the 'break' above with 'return 1' also fix the
> problem?


No. Two things have to happen:

1. Calling paging_mark_dirty() and
2. using the same p2mt from the hostp2m in the nestedp2m.

>

> In the short term, do you only care about pages that are read-only for
> log-dirty tracking?  For the L1 walk, that should be handled by the PT
> walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> could potentially take care of the other case by calling
> paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().


Ok, I consider this a performance improvement rather than a bugfix.

New version is attached.

Christoph

-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85689 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632

[-- Attachment #2: xen_nh_p2m.diff --]
[-- Type: text/plain, Size: 5861 bytes --]

diff -r 8330198c3240 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/hvm/hvm.c	Thu Aug 02 13:42:15 2012 +0200
@@ -1278,12 +1278,14 @@ int hvm_hap_nested_page_fault(unsigned l
          * into l1 guest if not fixable. The algorithm is
          * the same as for shadow paging.
          */
-        rv = nestedhvm_hap_nested_page_fault(v, gpa,
+        rv = nestedhvm_hap_nested_page_fault(v, &gpa,
                                              access_r, access_w, access_x);
         switch (rv) {
         case NESTEDHVM_PAGEFAULT_DONE:
             return 1;
-        case NESTEDHVM_PAGEFAULT_ERROR:
+        case NESTEDHVM_PAGEFAULT_L1_ERROR:
+            /* An error occurred while translating gpa from
+             * l2 guest address to l1 guest address. */
             return 0;
         case NESTEDHVM_PAGEFAULT_INJECT:
             return -1;
@@ -1291,6 +1293,10 @@ int hvm_hap_nested_page_fault(unsigned l
             if ( !handle_mmio() )
                 hvm_inject_hw_exception(TRAP_gp_fault, 0);
             return 1;
+        case NESTEDHVM_PAGEFAULT_L0_ERROR:
+            /* gpa is now translated to l1 guest address, update gfn. */
+            gfn = gpa >> PAGE_SHIFT;
+            break;
         }
     }
 
diff -r 8330198c3240 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/mm/hap/nested_hap.c	Thu Aug 02 13:42:15 2012 +0200
@@ -141,26 +141,29 @@ nestedhap_fix_p2m(struct vcpu *v, struct
  */
 static int
 nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
-                      unsigned int *page_order)
+                      p2m_type_t *p2mt,
+                      unsigned int *page_order,
+                      bool_t access_r, bool_t access_w, bool_t access_x)
 {
     mfn_t mfn;
-    p2m_type_t p2mt;
     p2m_access_t p2ma;
     int rc;
 
     /* walk L0 P2M table */
-    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma, 
+    mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma, 
                               0, page_order);
 
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( p2m_is_mmio(p2mt) )
+    if ( p2m_is_mmio(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
-    if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
+    rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
+    if ( access_w && p2m_is_readonly(*p2mt) )
         goto out;
 
-    rc = NESTEDHVM_PAGEFAULT_ERROR;
+    if ( p2m_is_paging(*p2mt) || p2m_is_shared(*p2mt) || !p2m_is_ram(*p2mt) )
+        goto out;
+
     if ( !mfn_valid(mfn) )
         goto out;
 
@@ -207,7 +210,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
  * Returns:
  */
 int
-nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x)
 {
     int rv;
@@ -215,19 +218,20 @@ nestedhvm_hap_nested_page_fault(struct v
     struct domain *d = v->domain;
     struct p2m_domain *p2m, *nested_p2m;
     unsigned int page_order_21, page_order_10, page_order_20;
+    p2m_type_t p2mt_10;
 
     p2m = p2m_get_hostp2m(d); /* L0 p2m */
     nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
 
     /* walk the L1 P2M table */
-    rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa, &page_order_21,
+    rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
         access_r, access_w, access_x);
 
     /* let caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L1_ERROR:
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -237,13 +241,16 @@ nestedhvm_hap_nested_page_fault(struct v
     }
 
     /* ==> we have to walk L0 P2M */
-    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
+    rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
+        &p2mt_10, &page_order_10,
+        access_r, access_w, access_x);
 
     /* let upper level caller to handle these two cases */
     switch (rv) {
     case NESTEDHVM_PAGEFAULT_INJECT:
         return rv;
-    case NESTEDHVM_PAGEFAULT_ERROR:
+    case NESTEDHVM_PAGEFAULT_L0_ERROR:
+        *L2_gpa = L1_gpa;
         return rv;
     case NESTEDHVM_PAGEFAULT_DONE:
         break;
@@ -257,9 +264,9 @@ nestedhvm_hap_nested_page_fault(struct v
     page_order_20 = min(page_order_21, page_order_10);
 
     /* fix p2m_get_pagetable(nested_p2m) */
-    nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
-        p2m_ram_rw,
-        p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
+    nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
+        p2mt_10,
+        p2m_access_rwx /* FIXME: Should use minimum permission. */);
 
     return NESTEDHVM_PAGEFAULT_DONE;
 }
diff -r 8330198c3240 xen/include/asm-x86/hvm/nestedhvm.h
--- a/xen/include/asm-x86/hvm/nestedhvm.h	Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/include/asm-x86/hvm/nestedhvm.h	Thu Aug 02 13:42:15 2012 +0200
@@ -47,11 +47,12 @@ bool_t nestedhvm_vcpu_in_guestmode(struc
     vcpu_nestedhvm(v).nv_guestmode = 0
 
 /* Nested paging */
-#define NESTEDHVM_PAGEFAULT_DONE   0
-#define NESTEDHVM_PAGEFAULT_INJECT 1
-#define NESTEDHVM_PAGEFAULT_ERROR  2
-#define NESTEDHVM_PAGEFAULT_MMIO   3
-int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+#define NESTEDHVM_PAGEFAULT_DONE       0
+#define NESTEDHVM_PAGEFAULT_INJECT     1
+#define NESTEDHVM_PAGEFAULT_L1_ERROR   2
+#define NESTEDHVM_PAGEFAULT_L0_ERROR   3
+#define NESTEDHVM_PAGEFAULT_MMIO       4
+int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
     bool_t access_r, bool_t access_w, bool_t access_x);
 
 /* IO permission map */

[-- Attachment #3: Type: text/plain, Size: 126 bytes --]

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

Thread overview: 6+ messages
2012-07-19 14:15 [PATCH] nestedhvm: fix write access fault on ro mapping Christoph Egger
2012-07-26 18:21 ` Tim Deegan
2012-07-27 11:57   ` Christoph Egger
2012-08-02 10:45     ` Tim Deegan
2012-08-02 12:32       ` Christoph Egger [this message]
2012-08-02 13:40         ` Tim Deegan
