* [PATCH] nestedhvm: fix write access fault on ro mapping
From: Christoph Egger @ 2012-07-19 14:15 UTC
To: xen-devel@lists.xen.org; +Cc: Tim Deegan
Fix a write access fault when the host NPT maps the page read-only.
In this case, let the host handle the #NPF.
Apply the host p2m type to the hap-on-hap pagetable entry.
This fixes the L2 guest's graphics display refresh problem.
Signed-off-by: Christoph Egger <Christoph.Egger@amd.com>
CC: Tim Deegan <tim@xen.org>
[-- Attachment #2: xen_nh_p2m.diff --]
diff -r ae0e96e156f3 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c Thu Jul 19 12:12:12 2012 +0200
+++ b/xen/arch/x86/hvm/hvm.c Thu Jul 19 15:30:04 2012 +0200
@@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
if ( !handle_mmio() )
hvm_inject_hw_exception(TRAP_gp_fault, 0);
return 1;
+ case NESTEDHVM_PAGEFAULT_READONLY:
+ break;
}
}
diff -r ae0e96e156f3 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c Thu Jul 19 12:12:12 2012 +0200
+++ b/xen/arch/x86/mm/hap/nested_hap.c Thu Jul 19 15:30:04 2012 +0200
@@ -141,23 +141,28 @@ nestedhap_fix_p2m(struct vcpu *v, struct
*/
static int
nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
- unsigned int *page_order)
+ p2m_type_t *p2mt,
+ unsigned int *page_order,
+ bool_t access_r, bool_t access_w, bool_t access_x)
{
mfn_t mfn;
- p2m_type_t p2mt;
p2m_access_t p2ma;
int rc;
/* walk L0 P2M table */
- mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma,
+ mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma,
0, page_order);
rc = NESTEDHVM_PAGEFAULT_MMIO;
- if ( p2m_is_mmio(p2mt) )
+ if ( p2m_is_mmio(*p2mt) )
+ goto out;
+
+ rc = NESTEDHVM_PAGEFAULT_READONLY;
+ if ( access_w && p2m_is_readonly(*p2mt) )
goto out;
rc = NESTEDHVM_PAGEFAULT_ERROR;
- if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
+ if ( p2m_is_paging(*p2mt) || p2m_is_shared(*p2mt) || !p2m_is_ram(*p2mt) )
goto out;
rc = NESTEDHVM_PAGEFAULT_ERROR;
@@ -215,6 +220,7 @@ nestedhvm_hap_nested_page_fault(struct v
struct domain *d = v->domain;
struct p2m_domain *p2m, *nested_p2m;
unsigned int page_order_21, page_order_10, page_order_20;
+ p2m_type_t p2mt_10;
p2m = p2m_get_hostp2m(d); /* L0 p2m */
nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
@@ -237,7 +243,9 @@ nestedhvm_hap_nested_page_fault(struct v
}
/* ==> we have to walk L0 P2M */
- rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
+ rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
+ &p2mt_10, &page_order_10,
+ access_r, access_w, access_x);
/* let upper level caller to handle these two cases */
switch (rv) {
@@ -249,6 +257,8 @@ nestedhvm_hap_nested_page_fault(struct v
break;
case NESTEDHVM_PAGEFAULT_MMIO:
return rv;
+ case NESTEDHVM_PAGEFAULT_READONLY:
+ return rv;
default:
BUG();
break;
@@ -258,8 +268,8 @@ nestedhvm_hap_nested_page_fault(struct v
/* fix p2m_get_pagetable(nested_p2m) */
nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
- p2m_ram_rw,
- p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
+ p2mt_10,
+ p2m_access_rwx /* FIXME: Should use minimum permission. */);
return NESTEDHVM_PAGEFAULT_DONE;
}
diff -r ae0e96e156f3 xen/include/asm-x86/hvm/nestedhvm.h
--- a/xen/include/asm-x86/hvm/nestedhvm.h Thu Jul 19 12:12:12 2012 +0200
+++ b/xen/include/asm-x86/hvm/nestedhvm.h Thu Jul 19 15:30:04 2012 +0200
@@ -47,10 +47,11 @@ bool_t nestedhvm_vcpu_in_guestmode(struc
vcpu_nestedhvm(v).nv_guestmode = 0
/* Nested paging */
-#define NESTEDHVM_PAGEFAULT_DONE 0
-#define NESTEDHVM_PAGEFAULT_INJECT 1
-#define NESTEDHVM_PAGEFAULT_ERROR 2
-#define NESTEDHVM_PAGEFAULT_MMIO 3
+#define NESTEDHVM_PAGEFAULT_DONE 0
+#define NESTEDHVM_PAGEFAULT_INJECT 1
+#define NESTEDHVM_PAGEFAULT_ERROR 2
+#define NESTEDHVM_PAGEFAULT_MMIO 3
+#define NESTEDHVM_PAGEFAULT_READONLY 4
int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
bool_t access_r, bool_t access_w, bool_t access_x);
* Re: [PATCH] nestedhvm: fix write access fault on ro mapping
From: Tim Deegan @ 2012-07-26 18:21 UTC
To: Christoph Egger; +Cc: xen-devel@lists.xen.org
At 16:15 +0200 on 19 Jul (1342714507), Christoph Egger wrote:
>
> Fix a write access fault when the host NPT maps the page read-only.
> In this case, let the host handle the #NPF.
> Apply the host p2m type to the hap-on-hap pagetable entry.
> This fixes the L2 guest's graphics display refresh problem.
> diff -r ae0e96e156f3 xen/arch/x86/hvm/hvm.c
> --- a/xen/arch/x86/hvm/hvm.c Thu Jul 19 12:12:12 2012 +0200
> +++ b/xen/arch/x86/hvm/hvm.c Thu Jul 19 15:30:04 2012 +0200
> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
> if ( !handle_mmio() )
> hvm_inject_hw_exception(TRAP_gp_fault, 0);
> return 1;
> + case NESTEDHVM_PAGEFAULT_READONLY:
> + break;
Don't we have to translate the faulting PA into an L1 address before
letting the rest of this fault handler run? It explicitly operates on
the hostp2m.
If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
rather than special-casing READONLY. That way any other
automatically-fixed types (like the p2m_access magic) will be covered
too.
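Concretely, something along these lines in hvm_hap_nested_page_fault() (a
rough, untested sketch; the extra out-parameter for the translated address is
an assumption, and it glosses over telling L1-walk errors apart from L0-walk
ones):

    paddr_t gpa_l1 = gpa;

    rv = nestedhvm_hap_nested_page_fault(v, gpa, &gpa_l1,
                                         access_r, access_w, access_x);
    if ( rv == NESTEDHVM_PAGEFAULT_ERROR )
    {
        /* The fault was against the host p2m: let the rest of this
         * handler run, but on the translated L1 address. */
        gfn = gpa_l1 >> PAGE_SHIFT;
    }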
Cheers,
Tim.
* Re: [PATCH] nestedhvm: fix write access fault on ro mapping
From: Christoph Egger @ 2012-07-27 11:57 UTC
To: Tim Deegan; +Cc: xen-devel@lists.xen.org
On 07/26/12 20:21, Tim Deegan wrote:
> At 16:15 +0200 on 19 Jul (1342714507), Christoph Egger wrote:
>>
>> Fix a write access fault when the host NPT maps the page read-only.
>> In this case, let the host handle the #NPF.
>> Apply the host p2m type to the hap-on-hap pagetable entry.
>> This fixes the L2 guest's graphics display refresh problem.
>
>> diff -r ae0e96e156f3 xen/arch/x86/hvm/hvm.c
>> --- a/xen/arch/x86/hvm/hvm.c Thu Jul 19 12:12:12 2012 +0200
>> +++ b/xen/arch/x86/hvm/hvm.c Thu Jul 19 15:30:04 2012 +0200
>> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
>> if ( !handle_mmio() )
>> hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> return 1;
>> + case NESTEDHVM_PAGEFAULT_READONLY:
>> + break;
>
> Don't we have to translate the faulting PA into an L1 address before
> letting the rest of this fault handler run? It explicitly operates on
> the hostp2m.
>
> If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
> rather than special-casing READONLY. That way any other
> automatically-fixed types (like the p2m_access magic) will be covered
> too.
How do you differentiate whether the error happened while walking the L1 NPT
or the host NPT?
In the first case it isn't possible to provide an L1 address.
Christoph
* Re: [PATCH] nestedhvm: fix write access fault on ro mapping
From: Tim Deegan @ 2012-08-02 10:45 UTC
To: Christoph Egger; +Cc: xen-devel@lists.xen.org
At 13:57 +0200 on 27 Jul (1343397454), Christoph Egger wrote:
> >> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
> >> if ( !handle_mmio() )
> >> hvm_inject_hw_exception(TRAP_gp_fault, 0);
> >> return 1;
> >> + case NESTEDHVM_PAGEFAULT_READONLY:
> >> + break;
> >
> > Don't we have to translate the faulting PA into an L1 address before
> > letting the rest of this fault handler run? It explicitly operates on
> > the hostp2m.
> >
> > If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
> > rather than special-casing READONLY. That way any other
> > automatically-fixed types (like the p2m_access magic) will be covered
> > too.
>
> How do you differentiate whether the error happened while walking the L1 NPT
> or the host NPT?
> In the first case it isn't possible to provide an L1 address.
It must be _possible_; after all we managed to detect the error. :) In
any case it's definitely wrong to carry on with this handler with the
wrong address in hand. So I wonder why this patch actually works for
you. Does replacing the 'break' above with 'return 1' also fix the
problem?
In the short term, do you only care about pages that are read-only for
log-dirty tracking? For the L1 walk, that should be handled by the PT
walker's own calls to paging_mark_dirty(), and the nested-p2m handler
could potentially take care of the other case by calling
paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().
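Loosely, something like this at the end of nestedhap_walk_L0_p2m() once the
walk has produced the mfn (an untested sketch, not a patch; because
paging_mark_dirty() wants an MFN, the sketch puts the call just after the L0
walk rather than strictly before it):

    /* A write hit a read-only (log-dirty) entry in the host p2m:
     * record the dirty bit and carry on, instead of handing a
     * special return value back to the caller. */
    if ( access_w && p2m_is_readonly(*p2mt) )
        paging_mark_dirty(p2m->domain, mfn_x(mfn));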
Tim.
* Re: [PATCH] nestedhvm: fix write access fault on ro mapping
From: Christoph Egger @ 2012-08-02 12:32 UTC
To: Tim Deegan; +Cc: xen-devel@lists.xen.org
On 08/02/12 12:45, Tim Deegan wrote:
> At 13:57 +0200 on 27 Jul (1343397454), Christoph Egger wrote:
>>>> @@ -1291,6 +1291,8 @@ int hvm_hap_nested_page_fault(unsigned l
>>>> if ( !handle_mmio() )
>>>> hvm_inject_hw_exception(TRAP_gp_fault, 0);
>>>> return 1;
>>>> + case NESTEDHVM_PAGEFAULT_READONLY:
>>>> + break;
>>>
>>> Don't we have to translate the faulting PA into an L1 address before
>>> letting the rest of this fault handler run? It explicitly operates on
>>> the hostp2m.
>>>
>>> If we do that, we should probably do it for NESTEDHVM_PAGEFAULT_ERROR,
>>> rather than special-casing READONLY. That way any other
>>> automatically-fixed types (like the p2m_access magic) will be covered
>>> too.
>>
>> How do you differentiate if the error happened from walking l1 npt or
>> host npt ?
>> In the first case it isn't possible to provide l1 address.
>
> It must be _possible_; after all we managed to detect the error. :) In
> any case it's definitely wrong to carry on with this handler with the
> wrong address in hand. So I wonder why this patch actually works for
> you. Does replacing the 'break' above with 'return 1' also fix the
> problem?
No. Two things have to happen:
1. Calling paging_mark_dirty() and
2. using the same p2mt from the hostp2m in the nestedp2m.
>
> In the short term, do you only care about pages that are read-only for
> log-dirty tracking? For the L1 walk, that should be handled by the PT
> walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> could potentially take care of the other case by calling
> paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().
OK, I consider this a performance improvement rather than a bugfix.
New version is attached.
Christoph
[-- Attachment #2: xen_nh_p2m.diff --]
diff -r 8330198c3240 xen/arch/x86/hvm/hvm.c
--- a/xen/arch/x86/hvm/hvm.c Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/hvm/hvm.c Thu Aug 02 13:42:15 2012 +0200
@@ -1278,12 +1278,14 @@ int hvm_hap_nested_page_fault(unsigned l
* into l1 guest if not fixable. The algorithm is
* the same as for shadow paging.
*/
- rv = nestedhvm_hap_nested_page_fault(v, gpa,
+ rv = nestedhvm_hap_nested_page_fault(v, &gpa,
access_r, access_w, access_x);
switch (rv) {
case NESTEDHVM_PAGEFAULT_DONE:
return 1;
- case NESTEDHVM_PAGEFAULT_ERROR:
+ case NESTEDHVM_PAGEFAULT_L1_ERROR:
+ /* An error occurred while translating gpa from
+ * l2 guest address to l1 guest address. */
return 0;
case NESTEDHVM_PAGEFAULT_INJECT:
return -1;
@@ -1291,6 +1293,10 @@ int hvm_hap_nested_page_fault(unsigned l
if ( !handle_mmio() )
hvm_inject_hw_exception(TRAP_gp_fault, 0);
return 1;
+ case NESTEDHVM_PAGEFAULT_L0_ERROR:
+ /* gpa is now translated to l1 guest address, update gfn. */
+ gfn = gpa >> PAGE_SHIFT;
+ break;
}
}
diff -r 8330198c3240 xen/arch/x86/mm/hap/nested_hap.c
--- a/xen/arch/x86/mm/hap/nested_hap.c Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/arch/x86/mm/hap/nested_hap.c Thu Aug 02 13:42:15 2012 +0200
@@ -141,26 +141,29 @@ nestedhap_fix_p2m(struct vcpu *v, struct
*/
static int
nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
- unsigned int *page_order)
+ p2m_type_t *p2mt,
+ unsigned int *page_order,
+ bool_t access_r, bool_t access_w, bool_t access_x)
{
mfn_t mfn;
- p2m_type_t p2mt;
p2m_access_t p2ma;
int rc;
/* walk L0 P2M table */
- mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, &p2mt, &p2ma,
+ mfn = get_gfn_type_access(p2m, L1_gpa >> PAGE_SHIFT, p2mt, &p2ma,
0, page_order);
rc = NESTEDHVM_PAGEFAULT_MMIO;
- if ( p2m_is_mmio(p2mt) )
+ if ( p2m_is_mmio(*p2mt) )
goto out;
- rc = NESTEDHVM_PAGEFAULT_ERROR;
- if ( p2m_is_paging(p2mt) || p2m_is_shared(p2mt) || !p2m_is_ram(p2mt) )
+ rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
+ if ( access_w && p2m_is_readonly(*p2mt) )
goto out;
- rc = NESTEDHVM_PAGEFAULT_ERROR;
+ if ( p2m_is_paging(*p2mt) || p2m_is_shared(*p2mt) || !p2m_is_ram(*p2mt) )
+ goto out;
+
if ( !mfn_valid(mfn) )
goto out;
@@ -207,7 +210,7 @@ nestedhap_walk_L1_p2m(struct vcpu *v, pa
* Returns:
*/
int
-nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
bool_t access_r, bool_t access_w, bool_t access_x)
{
int rv;
@@ -215,19 +218,20 @@ nestedhvm_hap_nested_page_fault(struct v
struct domain *d = v->domain;
struct p2m_domain *p2m, *nested_p2m;
unsigned int page_order_21, page_order_10, page_order_20;
+ p2m_type_t p2mt_10;
p2m = p2m_get_hostp2m(d); /* L0 p2m */
nested_p2m = p2m_get_nestedp2m(v, nhvm_vcpu_hostcr3(v));
/* walk the L1 P2M table */
- rv = nestedhap_walk_L1_p2m(v, L2_gpa, &L1_gpa, &page_order_21,
+ rv = nestedhap_walk_L1_p2m(v, *L2_gpa, &L1_gpa, &page_order_21,
access_r, access_w, access_x);
/* let caller to handle these two cases */
switch (rv) {
case NESTEDHVM_PAGEFAULT_INJECT:
return rv;
- case NESTEDHVM_PAGEFAULT_ERROR:
+ case NESTEDHVM_PAGEFAULT_L1_ERROR:
return rv;
case NESTEDHVM_PAGEFAULT_DONE:
break;
@@ -237,13 +241,16 @@ nestedhvm_hap_nested_page_fault(struct v
}
/* ==> we have to walk L0 P2M */
- rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa, &page_order_10);
+ rv = nestedhap_walk_L0_p2m(p2m, L1_gpa, &L0_gpa,
+ &p2mt_10, &page_order_10,
+ access_r, access_w, access_x);
/* let upper level caller to handle these two cases */
switch (rv) {
case NESTEDHVM_PAGEFAULT_INJECT:
return rv;
- case NESTEDHVM_PAGEFAULT_ERROR:
+ case NESTEDHVM_PAGEFAULT_L0_ERROR:
+ *L2_gpa = L1_gpa;
return rv;
case NESTEDHVM_PAGEFAULT_DONE:
break;
@@ -257,9 +264,9 @@ nestedhvm_hap_nested_page_fault(struct v
page_order_20 = min(page_order_21, page_order_10);
/* fix p2m_get_pagetable(nested_p2m) */
- nestedhap_fix_p2m(v, nested_p2m, L2_gpa, L0_gpa, page_order_20,
- p2m_ram_rw,
- p2m_access_rwx /* FIXME: Should use same permission as l1 guest */);
+ nestedhap_fix_p2m(v, nested_p2m, *L2_gpa, L0_gpa, page_order_20,
+ p2mt_10,
+ p2m_access_rwx /* FIXME: Should use minimum permission. */);
return NESTEDHVM_PAGEFAULT_DONE;
}
diff -r 8330198c3240 xen/include/asm-x86/hvm/nestedhvm.h
--- a/xen/include/asm-x86/hvm/nestedhvm.h Fri Jul 27 12:24:03 2012 +0200
+++ b/xen/include/asm-x86/hvm/nestedhvm.h Thu Aug 02 13:42:15 2012 +0200
@@ -47,11 +47,12 @@ bool_t nestedhvm_vcpu_in_guestmode(struc
vcpu_nestedhvm(v).nv_guestmode = 0
/* Nested paging */
-#define NESTEDHVM_PAGEFAULT_DONE 0
-#define NESTEDHVM_PAGEFAULT_INJECT 1
-#define NESTEDHVM_PAGEFAULT_ERROR 2
-#define NESTEDHVM_PAGEFAULT_MMIO 3
-int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t L2_gpa,
+#define NESTEDHVM_PAGEFAULT_DONE 0
+#define NESTEDHVM_PAGEFAULT_INJECT 1
+#define NESTEDHVM_PAGEFAULT_L1_ERROR 2
+#define NESTEDHVM_PAGEFAULT_L0_ERROR 3
+#define NESTEDHVM_PAGEFAULT_MMIO 4
+int nestedhvm_hap_nested_page_fault(struct vcpu *v, paddr_t *L2_gpa,
bool_t access_r, bool_t access_w, bool_t access_x);
/* IO permission map */
* Re: [PATCH] nestedhvm: fix write access fault on ro mapping
From: Tim Deegan @ 2012-08-02 13:40 UTC
To: Christoph Egger; +Cc: xen-devel@lists.xen.org
At 14:32 +0200 on 02 Aug (1343917962), Christoph Egger wrote:
> > It must be _possible_; after all we managed to detect the error. :) In
> > any case it's definitely wrong to carry on with this handler with the
> > wrong address in hand. So I wonder why this patch actually works for
> > you. Does replacing the 'break' above with 'return 1' also fix the
> > problem?
>
> No. Two things have to happen:
>
> 1. Calling paging_mark_dirty() and
> 2. using the same p2mt from the hostp2m in the nestedp2m.
>
> >
>
> > In the short term, do you only care about pages that are read-only for
> > log-dirty tracking? For the L1 walk, that should be handled by the PT
> > walker's own calls to paging_mark_dirty(), and the nested-p2m handler
> > could potentially take care of the other case by calling
> > paging_mark_dirty() (for writes!) before calling nestedhap_walk_L0_p2m().
>
> OK, I consider this a performance improvement rather than a bugfix.
>
> New version is attached.
Applied, thanks.
Tim.