* [PATCH v4 1/8] KVM: MMU: Add link_shadow_page() helper
2010-07-13 11:27 [PATCH v4 0/8] Simplify and fix fetch() Avi Kivity
@ 2010-07-13 11:27 ` Avi Kivity
2010-07-13 11:27 ` [PATCH v4 2/8] KVM: MMU: Use __set_spte to link shadow pages Avi Kivity
` (8 subsequent siblings)
9 siblings, 0 replies; 11+ messages in thread
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
To simplify the process of fetching an spte, add a helper that links
a shadow page to an spte.
Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 10 ++++++++++
arch/x86/kvm/paging_tmpl.h | 7 ++-----
2 files changed, 12 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b93b94f..ae35853 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1482,6 +1482,16 @@ static void shadow_walk_next(struct kvm_shadow_walk_iterator *iterator)
--iterator->level;
}
+static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
+{
+ u64 spte;
+
+ spte = __pa(sp->spt)
+ | PT_PRESENT_MASK | PT_ACCESSED_MASK
+ | PT_WRITABLE_MASK | PT_USER_MASK;
+ *sptep = spte;
+}
+
static void kvm_mmu_page_unlink_children(struct kvm *kvm,
struct kvm_mmu_page *sp)
{
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6daeacf..4606c88 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -309,7 +309,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
{
unsigned access = gw->pt_access;
struct kvm_mmu_page *sp;
- u64 spte, *sptep = NULL;
+ u64 *sptep = NULL;
int direct;
gfn_t table_gfn;
int r;
@@ -394,10 +394,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
}
}
- spte = __pa(sp->spt)
- | PT_PRESENT_MASK | PT_ACCESSED_MASK
- | PT_WRITABLE_MASK | PT_USER_MASK;
- *sptep = spte;
+ link_shadow_page(sptep, sp);
}
return sptep;
--
1.7.1
* [PATCH v4 2/8] KVM: MMU: Use __set_spte to link shadow pages
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
To avoid split accesses to 64 bit sptes on i386, use __set_spte() to link
shadow pages together.
(This is not strictly required, since shadow pages are GFP_KERNEL allocations, so
the upper 32 bits are always clear.)
Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ae35853..75cfb79 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1489,7 +1489,7 @@ static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
spte = __pa(sp->spt)
| PT_PRESENT_MASK | PT_ACCESSED_MASK
| PT_WRITABLE_MASK | PT_USER_MASK;
- *sptep = spte;
+ __set_spte(sptep, spte);
}
static void kvm_mmu_page_unlink_children(struct kvm *kvm,
--
1.7.1
* [PATCH v4 3/8] KVM: MMU: Add drop_large_spte() helper
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
To clarify spte fetching code, move large spte handling into a helper.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 8 ++++++++
arch/x86/kvm/paging_tmpl.h | 5 +----
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 75cfb79..747af72 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1492,6 +1492,14 @@ static void link_shadow_page(u64 *sptep, struct kvm_mmu_page *sp)
__set_spte(sptep, spte);
}
+static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
+{
+ if (is_large_pte(*sptep)) {
+ drop_spte(vcpu->kvm, sptep, shadow_trap_nonpresent_pte);
+ kvm_flush_remote_tlbs(vcpu->kvm);
+ }
+}
+
static void kvm_mmu_page_unlink_children(struct kvm *kvm,
struct kvm_mmu_page *sp)
{
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 4606c88..69dcac0 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -360,10 +360,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
kvm_flush_remote_tlbs(vcpu->kvm);
}
- if (is_large_pte(*sptep)) {
- drop_spte(vcpu->kvm, sptep, shadow_trap_nonpresent_pte);
- kvm_flush_remote_tlbs(vcpu->kvm);
- }
+ drop_large_spte(vcpu, sptep);
if (level <= gw->level) {
direct = 1;
--
1.7.1
* [PATCH v4 4/8] KVM: MMU: Add validate_direct_spte() helper
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
Add a helper to verify that a direct shadow page is valid with respect to the
required access permissions; drop the page if it is not valid.
Reviewed-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/mmu.c | 23 +++++++++++++++++++++++
arch/x86/kvm/paging_tmpl.h | 27 ++++++---------------------
2 files changed, 29 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 747af72..d16efbe 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1500,6 +1500,29 @@ static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
}
}
+static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
+ unsigned direct_access)
+{
+ if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
+ struct kvm_mmu_page *child;
+
+ /*
+ * For the direct sp, if the guest pte's dirty bit
+ * changed from clean to dirty, it will corrupt the
+ * sp's access: allow writable in the read-only sp,
+ * so we should update the spte at this point to get
+ * a new sp with the correct access.
+ */
+ child = page_header(*sptep & PT64_BASE_ADDR_MASK);
+ if (child->role.access == direct_access)
+ return;
+
+ mmu_page_remove_parent_pte(child, sptep);
+ __set_spte(sptep, shadow_trap_nonpresent_pte);
+ kvm_flush_remote_tlbs(vcpu->kvm);
+ }
+}
+
static void kvm_mmu_page_unlink_children(struct kvm *kvm,
struct kvm_mmu_page *sp)
{
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 69dcac0..893a75c 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -338,30 +338,15 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
break;
}
- if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)) {
- struct kvm_mmu_page *child;
-
- if (level != gw->level)
- continue;
-
- /*
- * For the direct sp, if the guest pte's dirty bit
- * changed form clean to dirty, it will corrupt the
- * sp's access: allow writable in the read-only sp,
- * so we should update the spte at this point to get
- * a new sp with the correct access.
- */
- child = page_header(*sptep & PT64_BASE_ADDR_MASK);
- if (child->role.access == direct_access)
- continue;
-
- mmu_page_remove_parent_pte(child, sptep);
- __set_spte(sptep, shadow_trap_nonpresent_pte);
- kvm_flush_remote_tlbs(vcpu->kvm);
- }
+ if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)
+ && level == gw->level)
+ validate_direct_spte(vcpu, sptep, direct_access);
drop_large_spte(vcpu, sptep);
+ if (is_shadow_present_pte(*sptep))
+ continue;
+
if (level <= gw->level) {
direct = 1;
access = direct_access;
--
1.7.1
* [PATCH v4 5/8] KVM: MMU: Add gpte_changed() helper
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
Move the code that checks whether a gpte has changed since we fetched it into
a helper.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 25 +++++++++++++++++--------
1 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 893a75c..22159e6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -299,6 +299,17 @@ static void FNAME(update_pte)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
gpte_to_gfn(gpte), pfn, true, true);
}
+static bool FNAME(gpte_changed)(struct kvm_vcpu *vcpu,
+ struct guest_walker *gw, int level)
+{
+ int r;
+ pt_element_t curr_pte;
+
+ r = kvm_read_guest_atomic(vcpu->kvm, gw->pte_gpa[level - 1],
+ &curr_pte, sizeof(curr_pte));
+ return r || curr_pte != gw->ptes[level - 1];
+}
+
/*
* Fetch a shadow pte for a specific level in the paging hierarchy.
*/
@@ -312,11 +323,9 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
u64 *sptep = NULL;
int direct;
gfn_t table_gfn;
- int r;
int level;
bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
unsigned direct_access;
- pt_element_t curr_pte;
struct kvm_shadow_walk_iterator iterator;
if (!is_present_gpte(gw->ptes[gw->level - 1]))
@@ -364,17 +373,17 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
}
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
direct, access, sptep);
- if (!direct) {
- r = kvm_read_guest_atomic(vcpu->kvm,
- gw->pte_gpa[level - 2],
- &curr_pte, sizeof(curr_pte));
- if (r || curr_pte != gw->ptes[level - 2]) {
+ if (!direct)
+ /*
+ * Verify that the gpte in the page we've just write
+ * protected is still there.
+ */
+ if (FNAME(gpte_changed)(vcpu, gw, level - 1)) {
kvm_mmu_put_page(sp, sptep);
kvm_release_pfn_clean(pfn);
sptep = NULL;
break;
}
- }
link_shadow_page(sptep, sp);
}
--
1.7.1
* [PATCH v4 6/8] KVM: MMU: Simplify spte fetch() function
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
Instead of the current spaghetti, partition the function into three sections:
- fetching indirect shadow pages (host_level > guest_level)
- fetching direct shadow pages (page_level < host_level <= guest_level)
- the final spte (page_level == host_level)
A slight change from the original code is that we call validate_direct_spte()
more often: previously we called it only for gw->level, now we also call it for
lower levels. The change should have no effect.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 92 +++++++++++++++++++++++--------------------
1 files changed, 49 insertions(+), 43 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 22159e6..a550957 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -321,9 +321,7 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
unsigned access = gw->pt_access;
struct kvm_mmu_page *sp;
u64 *sptep = NULL;
- int direct;
- gfn_t table_gfn;
- int level;
+ int uninitialized_var(level);
bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
unsigned direct_access;
struct kvm_shadow_walk_iterator iterator;
@@ -335,60 +333,68 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!dirty)
direct_access &= ~ACC_WRITE_MASK;
- for_each_shadow_entry(vcpu, addr, iterator) {
+ for (shadow_walk_init(&iterator, vcpu, addr);
+ shadow_walk_okay(&iterator) && iterator.level > gw->level;
+ shadow_walk_next(&iterator)) {
+ gfn_t table_gfn;
+
level = iterator.level;
sptep = iterator.sptep;
- if (iterator.level == hlevel) {
- mmu_set_spte(vcpu, sptep, access,
- gw->pte_access & access,
- user_fault, write_fault,
- dirty, ptwrite, level,
- gw->gfn, pfn, false, true);
- break;
- }
-
- if (is_shadow_present_pte(*sptep) && !is_large_pte(*sptep)
- && level == gw->level)
- validate_direct_spte(vcpu, sptep, direct_access);
drop_large_spte(vcpu, sptep);
if (is_shadow_present_pte(*sptep))
continue;
- if (level <= gw->level) {
- direct = 1;
- access = direct_access;
-
- /*
- * It is a large guest pages backed by small host pages,
- * So we set @direct(@sp->role.direct)=1, and set
- * @table_gfn(@sp->gfn)=the base page frame for linear
- * translations.
- */
- table_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
- } else {
- direct = 0;
- table_gfn = gw->table_gfn[level - 2];
- }
+ table_gfn = gw->table_gfn[level - 2];
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
- direct, access, sptep);
- if (!direct)
- /*
- * Verify that the gpte in the page we've just write
- * protected is still there.
- */
- if (FNAME(gpte_changed)(vcpu, gw, level - 1)) {
- kvm_mmu_put_page(sp, sptep);
- kvm_release_pfn_clean(pfn);
- sptep = NULL;
- break;
- }
+ false, access, sptep);
+
+ /*
+ * Verify that the gpte in the page we've just write
+ * protected is still there.
+ */
+ if (FNAME(gpte_changed)(vcpu, gw, level - 1))
+ goto out_gpte_changed;
+
+ link_shadow_page(sptep, sp);
+ }
+
+ for (;
+ shadow_walk_okay(&iterator) && iterator.level > hlevel;
+ shadow_walk_next(&iterator)) {
+ gfn_t direct_gfn;
+ level = iterator.level;
+ sptep = iterator.sptep;
+
+ drop_large_spte(vcpu, sptep);
+
+ if (is_shadow_present_pte(*sptep))
+ continue;
+
+ validate_direct_spte(vcpu, sptep, direct_access);
+
+ direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+
+ sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, level-1,
+ true, direct_access, sptep);
link_shadow_page(sptep, sp);
}
+ sptep = iterator.sptep;
+ level = iterator.level;
+
+ mmu_set_spte(vcpu, sptep, access, gw->pte_access & access,
+ user_fault, write_fault, dirty, ptwrite, level,
+ gw->gfn, pfn, false, true);
+
return sptep;
+
+out_gpte_changed:
+ kvm_mmu_put_page(sp, sptep);
+ kvm_release_pfn_clean(pfn);
+ return NULL;
}
/*
--
1.7.1
* [PATCH v4 7/8] KVM: MMU: Validate all gptes during fetch, not just those used for new pages
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
Currently, when we fetch an spte, we only verify that gptes match those that
the walker saw if we build new shadow pages for them.
However, this misses the following race:
vcpu1                vcpu2

walk
                     change gpte
                     walk
                     instantiate sp
fetch existing sp
Fix by validating every gpte, regardless of whether it is used for building
a new sp or not.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 33 ++++++++++++++++++++++++---------
1 files changed, 24 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index a550957..284ab16 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -319,10 +319,11 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
int *ptwrite, pfn_t pfn)
{
unsigned access = gw->pt_access;
- struct kvm_mmu_page *sp;
+ struct kvm_mmu_page *sp = NULL;
u64 *sptep = NULL;
int uninitialized_var(level);
bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
+ int top_level;
unsigned direct_access;
struct kvm_shadow_walk_iterator iterator;
@@ -333,6 +334,18 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (!dirty)
direct_access &= ~ACC_WRITE_MASK;
+ top_level = vcpu->arch.mmu.root_level;
+ if (top_level == PT32E_ROOT_LEVEL)
+ top_level = PT32_ROOT_LEVEL;
+ /*
+ * Verify that the top-level gpte is still there. Since the page
+ * is a root page, it is either write protected (and cannot be
+ * changed from now on) or it is invalid (in which case, we don't
+ * really care if it changes underneath us after this point).
+ */
+ if (FNAME(gpte_changed)(vcpu, gw, top_level))
+ goto out_gpte_changed;
+
for (shadow_walk_init(&iterator, vcpu, addr);
shadow_walk_okay(&iterator) && iterator.level > gw->level;
shadow_walk_next(&iterator)) {
@@ -343,12 +356,12 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
drop_large_spte(vcpu, sptep);
- if (is_shadow_present_pte(*sptep))
- continue;
-
- table_gfn = gw->table_gfn[level - 2];
- sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
- false, access, sptep);
+ sp = NULL;
+ if (!is_shadow_present_pte(*sptep)) {
+ table_gfn = gw->table_gfn[level - 2];
+ sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
+ false, access, sptep);
+ }
/*
* Verify that the gpte in the page we've just write
@@ -357,7 +370,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (FNAME(gpte_changed)(vcpu, gw, level - 1))
goto out_gpte_changed;
- link_shadow_page(sptep, sp);
+ if (sp)
+ link_shadow_page(sptep, sp);
}
for (;
@@ -392,7 +406,8 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
return sptep;
out_gpte_changed:
- kvm_mmu_put_page(sp, sptep);
+ if (sp)
+ kvm_mmu_put_page(sp, sptep);
kvm_release_pfn_clean(pfn);
return NULL;
}
--
1.7.1
* [PATCH v4 8/8] KVM: MMU: Eliminate redundant temporaries in FNAME(fetch)
From: Avi Kivity @ 2010-07-13 11:27 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
'level' and 'sptep' are aliases for 'iterator.level' and 'iterator.sptep'; there
is no need for them.
Signed-off-by: Avi Kivity <avi@redhat.com>
---
arch/x86/kvm/paging_tmpl.h | 59 ++++++++++++++++++--------------------------
1 files changed, 24 insertions(+), 35 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 284ab16..a09e04c 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -320,12 +320,10 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
{
unsigned access = gw->pt_access;
struct kvm_mmu_page *sp = NULL;
- u64 *sptep = NULL;
- int uninitialized_var(level);
bool dirty = is_dirty_gpte(gw->ptes[gw->level - 1]);
int top_level;
unsigned direct_access;
- struct kvm_shadow_walk_iterator iterator;
+ struct kvm_shadow_walk_iterator it;
if (!is_present_gpte(gw->ptes[gw->level - 1]))
return NULL;
@@ -346,68 +344,59 @@ static u64 *FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
if (FNAME(gpte_changed)(vcpu, gw, top_level))
goto out_gpte_changed;
- for (shadow_walk_init(&iterator, vcpu, addr);
- shadow_walk_okay(&iterator) && iterator.level > gw->level;
- shadow_walk_next(&iterator)) {
+ for (shadow_walk_init(&it, vcpu, addr);
+ shadow_walk_okay(&it) && it.level > gw->level;
+ shadow_walk_next(&it)) {
gfn_t table_gfn;
- level = iterator.level;
- sptep = iterator.sptep;
-
- drop_large_spte(vcpu, sptep);
+ drop_large_spte(vcpu, it.sptep);
sp = NULL;
- if (!is_shadow_present_pte(*sptep)) {
- table_gfn = gw->table_gfn[level - 2];
- sp = kvm_mmu_get_page(vcpu, table_gfn, addr, level-1,
- false, access, sptep);
+ if (!is_shadow_present_pte(*it.sptep)) {
+ table_gfn = gw->table_gfn[it.level - 2];
+ sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
+ false, access, it.sptep);
}
/*
* Verify that the gpte in the page we've just write
* protected is still there.
*/
- if (FNAME(gpte_changed)(vcpu, gw, level - 1))
+ if (FNAME(gpte_changed)(vcpu, gw, it.level - 1))
goto out_gpte_changed;
if (sp)
- link_shadow_page(sptep, sp);
+ link_shadow_page(it.sptep, sp);
}
for (;
- shadow_walk_okay(&iterator) && iterator.level > hlevel;
- shadow_walk_next(&iterator)) {
+ shadow_walk_okay(&it) && it.level > hlevel;
+ shadow_walk_next(&it)) {
gfn_t direct_gfn;
- level = iterator.level;
- sptep = iterator.sptep;
+ drop_large_spte(vcpu, it.sptep);
- drop_large_spte(vcpu, sptep);
-
- if (is_shadow_present_pte(*sptep))
+ if (is_shadow_present_pte(*it.sptep))
continue;
- validate_direct_spte(vcpu, sptep, direct_access);
+ validate_direct_spte(vcpu, it.sptep, direct_access);
- direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(level) - 1);
+ direct_gfn = gw->gfn & ~(KVM_PAGES_PER_HPAGE(it.level) - 1);
- sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, level-1,
- true, direct_access, sptep);
- link_shadow_page(sptep, sp);
+ sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
+ true, direct_access, it.sptep);
+ link_shadow_page(it.sptep, sp);
}
- sptep = iterator.sptep;
- level = iterator.level;
-
- mmu_set_spte(vcpu, sptep, access, gw->pte_access & access,
- user_fault, write_fault, dirty, ptwrite, level,
+ mmu_set_spte(vcpu, it.sptep, access, gw->pte_access & access,
+ user_fault, write_fault, dirty, ptwrite, it.level,
gw->gfn, pfn, false, true);
- return sptep;
+ return it.sptep;
out_gpte_changed:
if (sp)
- kvm_mmu_put_page(sp, sptep);
+ kvm_mmu_put_page(sp, it.sptep);
kvm_release_pfn_clean(pfn);
return NULL;
}
--
1.7.1
* Re: [PATCH v4 0/8] Simplify and fix fetch()
From: Avi Kivity @ 2010-07-13 11:50 UTC (permalink / raw)
To: Xiao Guangrong, Marcelo Tosatti, kvm
On 07/13/2010 02:27 PM, Avi Kivity wrote:
> fetch() has become a monster, with a zillion continues, breaks, and gotos.
> Simplify it before Xiao adds even more.
>
> Also fix the gpte validation race.
>
>
Survives autotest, btw.
--
error compiling committee.c: too many arguments to function
* Re: [PATCH v4 0/8] Simplify and fix fetch()
From: Marcelo Tosatti @ 2010-07-14 0:09 UTC (permalink / raw)
To: Avi Kivity; +Cc: Xiao Guangrong, kvm
On Tue, Jul 13, 2010 at 02:27:03PM +0300, Avi Kivity wrote:
> fetch() has become a monster, with a zillion continues, breaks, and gotos.
> Simplify it before Xiao adds even more.
>
> Also fix the gpte validation race.
>
> Avi Kivity (8):
> KVM: MMU: Add link_shadow_page() helper
> KVM: MMU: Use __set_spte to link shadow pages
> KVM: MMU: Add drop_large_spte() helper
> v2: s/drop_spte_if_large/drop_large_spte/g
> KVM: MMU: Add validate_direct_spte() helper
> KVM: MMU: Add gpte_changed() helper
> v4: change name to gpte_changed(), move error handling back to fetch()
> v2: add comments
> make validate_indirect_spte() look at spte in its own level, not one
> below (but adjust caller so no net effect)
> KVM: MMU: Simplify spte fetch() function
> v2: add comments
> update 'sptep' and 'level' for last level
> KVM: MMU: Validate all gptes during fetch, not just those used for
> new pages
> v4: fix freeing of incorrect shadow page on changed gpte
> v3: fix top-level gpte validation oops
> v2: validate top-level gpte (in root page)
> KVM: MMU: Eliminate redundant temporaries in FNAME(fetch)
> v2: new patch
>
> arch/x86/kvm/mmu.c | 41 ++++++++++++
> arch/x86/kvm/paging_tmpl.h | 150 ++++++++++++++++++++++----------------------
> 2 files changed, 115 insertions(+), 76 deletions(-)
Applied, thanks.