linux-kernel.vger.kernel.org archive mirror
* [PATCH 5/10] KVM MMU: cleanup invlpg code
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
@ 2010-04-22  6:12 ` Xiao Guangrong
  2010-04-22  6:13 ` [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page Xiao Guangrong
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:12 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

Use is_last_spte() to clean up the invlpg code

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |    4 +---
 1 files changed, 1 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fac7c09..fd027a6 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2271,9 +2271,7 @@ static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 		level = iterator.level;
 		sptep = iterator.sptep;
 
-		if (level == PT_PAGE_TABLE_LEVEL  ||
-		    ((level == PT_DIRECTORY_LEVEL && is_large_pte(*sptep))) ||
-		    ((level == PT_PDPE_LEVEL && is_large_pte(*sptep)))) {
+		if (is_last_spte(*sptep, level)) {
 			struct kvm_mmu_page *sp = page_header(__pa(sptep));
 			int offset = 0;
 
-- 
1.6.1.2
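
For reference, the helper this hunk switches to, is_last_spte(), looks roughly
like this in the KVM MMU of this era (paraphrased, not copied verbatim from any
tree), which is why the single call covers all three of the removed
level/large-pte checks:

	static int is_last_spte(u64 pte, int level)
	{
		if (level == PT_PAGE_TABLE_LEVEL)
			return 1;		/* 4KB leaf entry */
		if (is_large_pte(pte))
			return 1;		/* large mapping at a higher level */
		return 0;
	}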




* [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
  2010-04-22  6:12 ` [PATCH 5/10] KVM MMU: cleanup invlpg code Xiao Guangrong
@ 2010-04-22  6:13 ` Xiao Guangrong
  2010-04-22 19:29   ` Marcelo Tosatti
  2010-04-23 11:35   ` Avi Kivity
  2010-04-22  6:13 ` [PATCH 7/10] KVM MMU: allow more page become unsync at gfn mapping time Xiao Guangrong
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:13 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

If there is a new mapping to an unsync page (i.e. a new parent is added), just
update the page from sp->gfn without write-protecting the gfn; only if we need
to create a new shadow page for sp->gfn should we sync it

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   27 +++++++++++++++++++--------
 1 files changed, 19 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fd027a6..8607a64 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1196,16 +1196,20 @@ static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 
 static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp);
 
-static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
+			 bool clear_unsync)
 {
 	if (sp->role.cr4_pae != !!is_pae(vcpu)) {
 		kvm_mmu_zap_page(vcpu->kvm, sp);
 		return 1;
 	}
 
-	if (rmap_write_protect(vcpu->kvm, sp->gfn))
-		kvm_flush_remote_tlbs(vcpu->kvm);
-	kvm_unlink_unsync_page(vcpu->kvm, sp);
+	if (clear_unsync) {
+		if (rmap_write_protect(vcpu->kvm, sp->gfn))
+			kvm_flush_remote_tlbs(vcpu->kvm);
+		kvm_unlink_unsync_page(vcpu->kvm, sp);
+	}
+
 	if (vcpu->arch.mmu.sync_page(vcpu, sp)) {
 		kvm_mmu_zap_page(vcpu->kvm, sp);
 		return 1;
@@ -1293,7 +1297,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 			kvm_flush_remote_tlbs(vcpu->kvm);
 
 		for_each_sp(pages, sp, parents, i) {
-			kvm_sync_page(vcpu, sp);
+			kvm_sync_page(vcpu, sp, true);
 			mmu_pages_clear_parents(&parents);
 		}
 		cond_resched_lock(&vcpu->kvm->mmu_lock);
@@ -1313,7 +1317,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	unsigned index;
 	unsigned quadrant;
 	struct hlist_head *bucket;
-	struct kvm_mmu_page *sp;
+	struct kvm_mmu_page *sp, *unsync_sp = NULL;
 	struct hlist_node *node, *tmp;
 
 	role = vcpu->arch.mmu.base_role;
@@ -1332,12 +1336,16 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
 		if (sp->gfn == gfn) {
 			if (sp->unsync)
-				if (kvm_sync_page(vcpu, sp))
-					continue;
+				unsync_sp = sp;
 
 			if (sp->role.word != role.word)
 				continue;
 
+			if (unsync_sp && kvm_sync_page(vcpu, unsync_sp, false)) {
+				unsync_sp = NULL;
+				continue;
+			}
+
 			mmu_page_add_parent_pte(vcpu, sp, parent_pte);
 			if (sp->unsync_children) {
 				set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests);
@@ -1346,6 +1354,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			trace_kvm_mmu_get_page(sp, false);
 			return sp;
 		}
+	if (unsync_sp)
+		kvm_sync_page(vcpu, unsync_sp, true);
+
 	++vcpu->kvm->stat.mmu_cache_miss;
 	sp = kvm_mmu_alloc_page(vcpu, parent_pte);
 	if (!sp)
-- 
1.6.1.2




* [PATCH 7/10] KVM MMU: allow more page become unsync at gfn mapping time
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
  2010-04-22  6:12 ` [PATCH 5/10] KVM MMU: cleanup invlpg code Xiao Guangrong
  2010-04-22  6:13 ` [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page Xiao Guangrong
@ 2010-04-22  6:13 ` Xiao Guangrong
  2010-04-22  6:13 ` [PATCH 8/10] KVM MMU: allow more page become unsync at getting sp time Xiao Guangrong
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:13 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

In the current code, a shadow page can become unsync only if it is the sole
shadow page for its gfn. This rule is too strict: in fact, we can let every
last-level mapping page (i.e. the pte page) become unsync, and sync them at
invlpg or TLB flush time.

This patch allows more pages to become unsync at gfn mapping time.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   81 +++++++++++++++++++++++----------------------------
 1 files changed, 37 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8607a64..13378e7 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1166,26 +1166,6 @@ static int mmu_unsync_walk(struct kvm_mmu_page *sp,
 	return __mmu_unsync_walk(sp, pvec);
 }
 
-static struct kvm_mmu_page *kvm_mmu_lookup_page(struct kvm *kvm, gfn_t gfn)
-{
-	unsigned index;
-	struct hlist_head *bucket;
-	struct kvm_mmu_page *sp;
-	struct hlist_node *node;
-
-	pgprintk("%s: looking for gfn %lx\n", __func__, gfn);
-	index = kvm_page_table_hashfn(gfn);
-	bucket = &kvm->arch.mmu_page_hash[index];
-	hlist_for_each_entry(sp, node, bucket, hash_link)
-		if (sp->gfn == gfn && !sp->role.direct
-		    && !sp->role.invalid) {
-			pgprintk("%s: found role %x\n",
-				 __func__, sp->role.word);
-			return sp;
-		}
-	return NULL;
-}
-
 static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
 {
 	WARN_ON(!sp->unsync);
@@ -1734,47 +1714,60 @@ u8 kvm_get_guest_memory_type(struct kvm_vcpu *vcpu, gfn_t gfn)
 }
 EXPORT_SYMBOL_GPL(kvm_get_guest_memory_type);
 
-static int kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+static void __kvm_unsync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
+{
+	trace_kvm_mmu_unsync_page(sp);
+	++vcpu->kvm->stat.mmu_unsync;
+	sp->unsync = 1;
+
+	kvm_mmu_mark_parents_unsync(sp);
+	mmu_convert_notrap(sp);
+}
+
+static void kvm_unsync_pages(struct kvm_vcpu *vcpu,  gfn_t gfn)
 {
-	unsigned index;
 	struct hlist_head *bucket;
 	struct kvm_mmu_page *s;
 	struct hlist_node *node, *n;
+	unsigned index;
 
-	index = kvm_page_table_hashfn(sp->gfn);
+	index = kvm_page_table_hashfn(gfn);
 	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
-	/* don't unsync if pagetable is shadowed with multiple roles */
+
 	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
-		if (s->gfn != sp->gfn || s->role.direct)
+		if (s->gfn != gfn || s->role.direct || s->unsync)
 			continue;
-		if (s->role.word != sp->role.word)
-			return 1;
+		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
+		__kvm_unsync_page(vcpu, s);
 	}
-	trace_kvm_mmu_unsync_page(sp);
-	++vcpu->kvm->stat.mmu_unsync;
-	sp->unsync = 1;
-
-	kvm_mmu_mark_parents_unsync(sp);
-
-	mmu_convert_notrap(sp);
-	return 0;
 }
 
 static int mmu_need_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
 				  bool can_unsync)
 {
-	struct kvm_mmu_page *shadow;
+	unsigned index;
+	struct hlist_head *bucket;
+	struct kvm_mmu_page *s;
+	struct hlist_node *node, *n;
+	bool need_unsync = false;
+
+	index = kvm_page_table_hashfn(gfn);
+	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
+	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
+		if (s->gfn != gfn || s->role.direct)
+			continue;
 
-	shadow = kvm_mmu_lookup_page(vcpu->kvm, gfn);
-	if (shadow) {
-		if (shadow->role.level != PT_PAGE_TABLE_LEVEL)
+		if (s->role.level != PT_PAGE_TABLE_LEVEL)
 			return 1;
-		if (shadow->unsync)
-			return 0;
-		if (can_unsync && oos_shadow)
-			return kvm_unsync_page(vcpu, shadow);
-		return 1;
+
+		if (!need_unsync && !s->unsync) {
+			if (!can_unsync || !oos_shadow)
+				return 1;
+			need_unsync = true;
+		}
 	}
+	if (need_unsync)
+		kvm_unsync_pages(vcpu, gfn);
 	return 0;
 }
 
-- 
1.6.1.2
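
The policy change above can be condensed as follows. This is only a sketch:
gfn_requires_write_protect() and for_each_shadow_page_of() are made-up names
used to restate the patched mmu_need_write_protect()/kvm_unsync_pages() logic,
not functions that exist in the tree:

	/*
	 * Old rule: any second shadow page for the gfn forced write protection.
	 * New rule: only a shadowed non-leaf page table forces write protection;
	 * every last-level (PT_PAGE_TABLE_LEVEL) shadow page may go unsync.
	 */
	static bool gfn_requires_write_protect(struct kvm_vcpu *vcpu, gfn_t gfn,
					       bool can_unsync)
	{
		struct kvm_mmu_page *s;
		bool need_unsync = false;

		for_each_shadow_page_of(vcpu->kvm, gfn, s) {	/* made-up iterator */
			if (s->role.direct)
				continue;
			if (s->role.level != PT_PAGE_TABLE_LEVEL)
				return true;	/* gfn backs a non-leaf page table */
			if (!s->unsync) {
				if (!can_unsync || !oos_shadow)
					return true;	/* unsyncing not permitted */
				need_unsync = true;
			}
		}
		if (need_unsync)
			kvm_unsync_pages(vcpu, gfn);	/* mark all leaf pages unsync */
		return false;
	}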




* [PATCH 8/10] KVM MMU: allow more page become unsync at getting sp time
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
                   ` (2 preceding siblings ...)
  2010-04-22  6:13 ` [PATCH 7/10] KVM MMU: allow more page become unsync at gfn mapping time Xiao Guangrong
@ 2010-04-22  6:13 ` Xiao Guangrong
  2010-04-23 12:08   ` Avi Kivity
  2010-04-22  6:13 ` [PATCH 9/10] KVM MMU: separate invlpg code from kvm_mmu_pte_write() Xiao Guangrong
  2010-04-22  6:14 ` [PATCH 10/10] KVM MMU: optimize sync/update unsync-page Xiao Guangrong
  5 siblings, 1 reply; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:13 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

Allow more pages to become unsync when getting a shadow page. If we need to
create a new shadow page for a gfn but it is not allowed to be unsync
(level > PT_PAGE_TABLE_LEVEL), we should sync all of the gfn's unsync pages

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   22 ++++++++++++++++++++--
 1 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 13378e7..e0bb4d8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1199,6 +1199,23 @@ static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	return 0;
 }
 
+static void kvm_sync_pages(struct kvm_vcpu *vcpu,  gfn_t gfn)
+{
+	struct hlist_head *bucket;
+	struct kvm_mmu_page *s;
+	struct hlist_node *node, *n;
+	unsigned index;
+
+	index = kvm_page_table_hashfn(gfn);
+	bucket = &vcpu->kvm->arch.mmu_page_hash[index];
+	hlist_for_each_entry_safe(s, node, n, bucket, hash_link) {
+		if (s->gfn != gfn || !s->unsync)
+			continue;
+		WARN_ON(s->role.level != PT_PAGE_TABLE_LEVEL);
+		kvm_sync_page(vcpu, s, true);
+	}
+}
+
 struct mmu_page_path {
 	struct kvm_mmu_page *parent[PT64_ROOT_LEVEL-1];
 	unsigned int idx[PT64_ROOT_LEVEL-1];
@@ -1334,8 +1351,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
 			trace_kvm_mmu_get_page(sp, false);
 			return sp;
 		}
-	if (unsync_sp)
-		kvm_sync_page(vcpu, unsync_sp, true);
+
+	if (!direct && level > PT_PAGE_TABLE_LEVEL && unsync_sp)
+		kvm_sync_pages(vcpu, gfn);
 
 	++vcpu->kvm->stat.mmu_cache_miss;
 	sp = kvm_mmu_alloc_page(vcpu, parent_pte);
-- 
1.6.1.2




* [PATCH 9/10] KVM MMU: separate invlpg code from kvm_mmu_pte_write()
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
                   ` (3 preceding siblings ...)
  2010-04-22  6:13 ` [PATCH 8/10] KVM MMU: allow more page become unsync at getting sp time Xiao Guangrong
@ 2010-04-22  6:13 ` Xiao Guangrong
  2010-04-22  6:14 ` [PATCH 10/10] KVM MMU: optimize sync/update unsync-page Xiao Guangrong
  5 siblings, 0 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:13 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

Make invlpg no longer depend on the kvm_mmu_pte_write() path; a later patch
will need this.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   40 ++++++++++++++++++++++++----------------
 1 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e0bb4d8..f092e71 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2278,14 +2278,21 @@ static bool is_rsvd_bits_set(struct kvm_vcpu *vcpu, u64 gpte, int level)
 	return (gpte & vcpu->arch.mmu.rsvd_bits_mask[bit7][level-1]) != 0;
 }
 
+static void mmu_guess_page_from_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
+					  u64 gpte);
+static void mmu_pte_write_new_pte(struct kvm_vcpu *vcpu,
+				  struct kvm_mmu_page *sp,
+				  u64 *spte,
+				  const void *new);
+
 static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 {
+	struct kvm_mmu_page *sp = NULL;
 	struct kvm_shadow_walk_iterator iterator;
-	gpa_t pte_gpa = -1;
-	int level;
-	u64 *sptep;
-	int need_flush = 0;
+	gfn_t gfn = -1;
+	u64 *sptep = NULL,  gentry;
 	unsigned pte_size = 0;
+	int invlpg_counter, level, offset = 0, need_flush = 0;
 
 	spin_lock(&vcpu->kvm->mmu_lock);
 
@@ -2294,14 +2301,14 @@ static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 		sptep = iterator.sptep;
 
 		if (is_last_spte(*sptep, level)) {
-			struct kvm_mmu_page *sp = page_header(__pa(sptep));
-			int offset = 0;
+
+			sp = page_header(__pa(sptep));
 
 			if (!sp->role.cr4_pae)
 				offset = sp->role.quadrant << PT64_LEVEL_BITS;;
 			pte_size = sp->role.cr4_pae ? 8 : 4;
-			pte_gpa = (sp->gfn << PAGE_SHIFT);
-			pte_gpa += (sptep - sp->spt + offset) * pte_size;
+			gfn = (sp->gfn << PAGE_SHIFT);
+			offset = (sptep - sp->spt + offset) * pte_size;
 
 			if (is_shadow_present_pte(*sptep)) {
 				rmap_remove(vcpu->kvm, sptep);
@@ -2320,16 +2327,22 @@ static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 	if (need_flush)
 		kvm_flush_remote_tlbs(vcpu->kvm);
 
-	atomic_inc(&vcpu->kvm->arch.invlpg_counter);
+	invlpg_counter = atomic_add_return(1, &vcpu->kvm->arch.invlpg_counter);
 
 	spin_unlock(&vcpu->kvm->mmu_lock);
 
-	if (pte_gpa == -1)
+	if (gfn == -1)
 		return;
 
 	if (mmu_topup_memory_caches(vcpu))
 		return;
-	kvm_mmu_pte_write(vcpu, pte_gpa, NULL, pte_size, 0);
+
+	kvm_read_guest_page(vcpu->kvm, gfn, &gentry, offset, pte_size);
+	mmu_guess_page_from_pte_write(vcpu, gfn_to_gpa(gfn) + offset, gentry);
+	spin_lock(&vcpu->kvm->mmu_lock);
+	if (atomic_read(&vcpu->kvm->arch.invlpg_counter) == invlpg_counter)
+		mmu_pte_write_new_pte(vcpu, sp, sptep, &gentry);
+	spin_unlock(&vcpu->kvm->mmu_lock);
 }
 
 #define PTTYPE 64
@@ -2675,12 +2688,9 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 	int flooded = 0;
 	int npte;
 	int r;
-	int invlpg_counter;
 
 	pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);
 
-	invlpg_counter = atomic_read(&vcpu->kvm->arch.invlpg_counter);
-
 	/*
 	 * Assume that the pte write on a page table of the same type
 	 * as the current vcpu paging mode.  This is nearly always true
@@ -2713,8 +2723,6 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 
 	mmu_guess_page_from_pte_write(vcpu, gpa, gentry);
 	spin_lock(&vcpu->kvm->mmu_lock);
-	if (atomic_read(&vcpu->kvm->arch.invlpg_counter) != invlpg_counter)
-		gentry = 0;
 	kvm_mmu_access_page(vcpu, gfn);
 	kvm_mmu_free_some_pages(vcpu);
 	++vcpu->kvm->stat.mmu_pte_write;
-- 
1.6.1.2
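
The reason for the counter handling is easier to see with the patched flow laid
out end to end. This is a restatement of the hunks above with comments added
(simplified: the gfn handling is written in terms of sp->gfn here), not
independent code:

	spin_lock(&vcpu->kvm->mmu_lock);
	/* ... walk the shadow page table and zap the stale last-level spte ... */
	invlpg_counter = atomic_add_return(1, &vcpu->kvm->arch.invlpg_counter);
	spin_unlock(&vcpu->kvm->mmu_lock);

	/* the guest pte is read with mmu_lock dropped, so it may become stale */
	kvm_read_guest_page(vcpu->kvm, sp->gfn, &gentry, offset, pte_size);
	mmu_guess_page_from_pte_write(vcpu, gfn_to_gpa(sp->gfn) + offset, gentry);

	spin_lock(&vcpu->kvm->mmu_lock);
	/* install the new pte only if no other invlpg ran while the lock was
	 * dropped; otherwise the counter has moved on and gentry is discarded */
	if (atomic_read(&vcpu->kvm->arch.invlpg_counter) == invlpg_counter)
		mmu_pte_write_new_pte(vcpu, sp, sptep, &gentry);
	spin_unlock(&vcpu->kvm->mmu_lock);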




* [PATCH 10/10] KVM MMU: optimize sync/update unsync-page
       [not found] <4BCFE581.8050305@cn.fujitsu.com>
                   ` (4 preceding siblings ...)
  2010-04-22  6:13 ` [PATCH 9/10] KVM MMU: separate invlpg code from kvm_mmu_pte_write() Xiao Guangrong
@ 2010-04-22  6:14 ` Xiao Guangrong
  5 siblings, 0 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-22  6:14 UTC (permalink / raw)
  To: Avi Kivity; +Cc: Marcelo Tosatti, KVM list, LKML

invlpg only needs to update unsync pages; sp->unsync and sp->unsync_children
help us find them.

Now a gfn may have many shadow pages. When one sp needs to be synced, we
write-protect sp->gfn and sync that sp, but we keep the other shadow pages
unsync.

So when the gfn takes a page fault, do not touch the unsync pages; they are
only updated at invlpg/TLB flush time.

Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
---
 arch/x86/kvm/mmu.c |   10 ++++++----
 1 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f092e71..5bdcc17 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2299,10 +2299,11 @@ static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 	for_each_shadow_entry(vcpu, gva, iterator) {
 		level = iterator.level;
 		sptep = iterator.sptep;
+		sp = page_header(__pa(sptep));
 
 		if (is_last_spte(*sptep, level)) {
-
-			sp = page_header(__pa(sptep));
+			if (!sp->unsync)
+				break;
 
 			if (!sp->role.cr4_pae)
 				offset = sp->role.quadrant << PT64_LEVEL_BITS;;
@@ -2320,7 +2321,7 @@ static void paging_invlpg(struct kvm_vcpu *vcpu, gva_t gva)
 			break;
 		}
 
-		if (!is_shadow_present_pte(*sptep))
+		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
 			break;
 	}
 
@@ -2744,7 +2745,8 @@ void kvm_mmu_pte_write(struct kvm_vcpu *vcpu, gpa_t gpa,
 
 restart:
 	hlist_for_each_entry_safe(sp, node, n, bucket, hash_link) {
-		if (sp->gfn != gfn || sp->role.direct || sp->role.invalid)
+		if (sp->gfn != gfn || sp->role.direct || sp->role.invalid ||
+			sp->unsync)
 			continue;
 		pte_size = sp->role.cr4_pae ? 8 : 4;
 		misaligned = (offset ^ (offset + bytes - 1)) & ~(pte_size - 1);
-- 
1.6.1.2
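
Taking the two hunks together, the walk in paging_invlpg() now behaves as
sketched below (a commented restatement of the patched code, not a separate
implementation):

	for_each_shadow_entry(vcpu, gva, iterator) {
		level = iterator.level;
		sptep = iterator.sptep;
		sp = page_header(__pa(sptep));

		if (is_last_spte(*sptep, level)) {
			if (!sp->unsync)
				break;	/* synced leaf page: nothing to patch up */
			/* ... zap the spte and refill it from the guest pte ... */
			break;
		}

		if (!is_shadow_present_pte(*sptep) || !sp->unsync_children)
			break;		/* no unsync page below this level: stop early */
	}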




* Re: [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page
  2010-04-22  6:13 ` [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page Xiao Guangrong
@ 2010-04-22 19:29   ` Marcelo Tosatti
  2010-04-23  3:35     ` Xiao Guangrong
  2010-04-23 11:35   ` Avi Kivity
  1 sibling, 1 reply; 10+ messages in thread
From: Marcelo Tosatti @ 2010-04-22 19:29 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Avi Kivity, KVM list, LKML

On Thu, Apr 22, 2010 at 02:13:04PM +0800, Xiao Guangrong wrote:
> If there is a new mapping to an unsync page (i.e. a new parent is added), just
> update the page from sp->gfn without write-protecting the gfn; only if we need
> to create a new shadow page for sp->gfn should we sync it
> 
> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> ---
>  arch/x86/kvm/mmu.c |   27 +++++++++++++++++++--------
>  1 files changed, 19 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index fd027a6..8607a64 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -1196,16 +1196,20 @@ static void kvm_unlink_unsync_page(struct kvm *kvm, struct kvm_mmu_page *sp)
>  
>  static int kvm_mmu_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp);
>  
> -static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
> +static int kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
> +			 bool clear_unsync)
>  {
>  	if (sp->role.cr4_pae != !!is_pae(vcpu)) {
>  		kvm_mmu_zap_page(vcpu->kvm, sp);
>  		return 1;
>  	}
>  
> -	if (rmap_write_protect(vcpu->kvm, sp->gfn))
> -		kvm_flush_remote_tlbs(vcpu->kvm);
> -	kvm_unlink_unsync_page(vcpu->kvm, sp);
> +	if (clear_unsync) {
> +		if (rmap_write_protect(vcpu->kvm, sp->gfn))
> +			kvm_flush_remote_tlbs(vcpu->kvm);
> +		kvm_unlink_unsync_page(vcpu->kvm, sp);
> +	}
> +
>  	if (vcpu->arch.mmu.sync_page(vcpu, sp)) {
>  		kvm_mmu_zap_page(vcpu->kvm, sp);
>  		return 1;
> @@ -1293,7 +1297,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>  			kvm_flush_remote_tlbs(vcpu->kvm);
>  
>  		for_each_sp(pages, sp, parents, i) {
> -			kvm_sync_page(vcpu, sp);
> +			kvm_sync_page(vcpu, sp, true);
>  			mmu_pages_clear_parents(&parents);
>  		}
>  		cond_resched_lock(&vcpu->kvm->mmu_lock);
> @@ -1313,7 +1317,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  	unsigned index;
>  	unsigned quadrant;
>  	struct hlist_head *bucket;
> -	struct kvm_mmu_page *sp;
> +	struct kvm_mmu_page *sp, *unsync_sp = NULL;
>  	struct hlist_node *node, *tmp;
>  
>  	role = vcpu->arch.mmu.base_role;
> @@ -1332,12 +1336,16 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
>  		if (sp->gfn == gfn) {
>  			if (sp->unsync)
> -				if (kvm_sync_page(vcpu, sp))
> -					continue;
> +				unsync_sp = sp;

Xiao,

I don't see a reason why you can't create a new mapping to an unsync
page. The code already creates shadow pte entries using unsync
pagetables.

So all you would need is to call kvm_sync_pages before write-protecting.

Also make sure kvm_sync_pages is in place here before multiple unsync
shadows are enabled in the patch series.

>  
>  			if (sp->role.word != role.word)
>  				continue;
>  
> +			if (unsync_sp && kvm_sync_page(vcpu, unsync_sp, false)) {
> +				unsync_sp = NULL;
> +				continue;
> +			}
> +
>  			mmu_page_add_parent_pte(vcpu, sp, parent_pte);
>  			if (sp->unsync_children) {
>  				set_bit(KVM_REQ_MMU_SYNC, &vcpu->requests);
> @@ -1346,6 +1354,9 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>  			trace_kvm_mmu_get_page(sp, false);
>  			return sp;
>  		}
> +	if (unsync_sp)
> +		kvm_sync_page(vcpu, unsync_sp, true);
> +
>  	++vcpu->kvm->stat.mmu_cache_miss;
>  	sp = kvm_mmu_alloc_page(vcpu, parent_pte);
>  	if (!sp)
> -- 
> 1.6.1.2
> 


* Re: [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page
  2010-04-22 19:29   ` Marcelo Tosatti
@ 2010-04-23  3:35     ` Xiao Guangrong
  0 siblings, 0 replies; 10+ messages in thread
From: Xiao Guangrong @ 2010-04-23  3:35 UTC (permalink / raw)
  To: Marcelo Tosatti; +Cc: Avi Kivity, KVM list, LKML



Marcelo Tosatti wrote:

>>  	role = vcpu->arch.mmu.base_role;
>> @@ -1332,12 +1336,16 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
>>  	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
>>  		if (sp->gfn == gfn) {
>>  			if (sp->unsync)
>> -				if (kvm_sync_page(vcpu, sp))
>> -					continue;
>> +				unsync_sp = sp;
> 

Hi Marcelo,

Thanks for your comments. Maybe the changelog is not clear, please allow me
to explain here.

Two cases can happen in the kvm_mmu_get_page() function (see the sketch
after this list):

- In the first case, the wanted sp is already in the cache. If that sp is
  unsync, we only need to update it to make sure the mapping is valid, but
  we neither mark it sync nor write-protect sp->gfn, since reusing it does
  not break the unsync rule (one shadow page per gfn).

- In the second case, the wanted sp does not exist and we need to create a
  new sp for the gfn, i.e. the gfn (may) have another shadow page. To keep
  the unsync rule, we should sync (mark sync and write-protect) the gfn's
  unsync shadow page. After enabling multiple unsync shadows, we sync those
  shadow pages only when the new sp is not allowed to become unsync (again
  because of the unsync rule; the new rule is: allow all pte pages to become
  unsync).
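
The resulting lookup, restating the hunk from the patch with comments added
and the unchanged parts elided:

	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
		if (sp->gfn == gfn) {
			if (sp->unsync)
				unsync_sp = sp;		/* remember it, do not sync yet */

			if (sp->role.word != role.word)
				continue;		/* not the sp we are looking for */

			/* case 1: an existing sp is reused; only update the unsync
			 * page (clear_unsync == false), no write protection */
			if (unsync_sp && kvm_sync_page(vcpu, unsync_sp, false)) {
				unsync_sp = NULL;	/* it was zapped, forget it */
				continue;
			}
			/* ... add parent_pte and return sp ... */
		}

	/* case 2: no matching sp, a new one will be created below, so the
	 * unsync page must be fully synced (clear_unsync == true) first */
	if (unsync_sp)
		kvm_sync_page(vcpu, unsync_sp, true);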

> 
> I don't see a reason why you can't create a new mapping to an unsync
> page. The code already creates shadow pte entries using unsync
> pagetables.

Do you mean case 2? In the original code, the gfn's unsync page was synced
first, regardless of whether the new mapping actually broke the unsync rule:

|	hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
|		if (sp->gfn == gfn) {
|			if (sp->unsync)
|				if (kvm_sync_page(vcpu, sp))

And my English is poor, sorry if I misunderstood your comment :-(

Xiao


* Re: [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page
  2010-04-22  6:13 ` [PATCH 6/10] KVM MMU: don't write-protect if have new mapping to unsync page Xiao Guangrong
  2010-04-22 19:29   ` Marcelo Tosatti
@ 2010-04-23 11:35   ` Avi Kivity
  1 sibling, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2010-04-23 11:35 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, KVM list, LKML

On 04/22/2010 09:13 AM, Xiao Guangrong wrote:
> If there is a new mapping to an unsync page (i.e. a new parent is added), just
> update the page from sp->gfn without write-protecting the gfn; only if we need
> to create a new shadow page for sp->gfn should we sync it
>    

Sorry, I don't understand this patch.  Can you clarify?  For example, the 
situation before adding the new parent, what the guest action was, 
and the new situation.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



* Re: [PATCH 8/10] KVM MMU: allow more page become unsync at getting sp time
  2010-04-22  6:13 ` [PATCH 8/10] KVM MMU: allow more page become unsync at getting sp time Xiao Guangrong
@ 2010-04-23 12:08   ` Avi Kivity
  0 siblings, 0 replies; 10+ messages in thread
From: Avi Kivity @ 2010-04-23 12:08 UTC (permalink / raw)
  To: Xiao Guangrong; +Cc: Marcelo Tosatti, KVM list, LKML

On 04/22/2010 09:13 AM, Xiao Guangrong wrote:
> Allow more pages to become unsync when getting a shadow page. If we need to
> create a new shadow page for a gfn but it is not allowed to be unsync
> (level > PT_PAGE_TABLE_LEVEL), we should sync all of the gfn's unsync pages
>    

This is something I have wanted for a long time.

-- 
Do not meddle in the internals of kernels, for they are subtle and quick to panic.



