public inbox for kvm@vger.kernel.org
* [PATCH 0/3] KVM: MMU: Simple cleanups and macro fixes
From: Takuya Yoshikawa @ 2013-03-06  7:03 UTC
  To: mtosatti, gleb; +Cc: kvm

Note: the first two patches are from my other work.

Takuya Yoshikawa (3):
  KVM: MMU: Fix and clean up for_each_gfn_* macros
  KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
  KVM: MMU: Introduce a helper function for FIFO zapping

 arch/x86/kvm/mmu.c |   78 ++++++++++++++++++++++------------------------------
 1 files changed, 33 insertions(+), 45 deletions(-)

-- 
1.7.5.4



* [PATCH 1/3] KVM: MMU: Fix and clean up for_each_gfn_* macros
From: Takuya Yoshikawa @ 2013-03-06  7:05 UTC
  To: mtosatti, gleb; +Cc: kvm

The member access (sp)->gfn in the macro bodies must not be expanded
using the @gfn argument: since the parameter is named gfn, the
preprocessor substitutes the argument even for the token after "->".
No caller currently passes anything other than a variable literally
named gfn, but this should be fixed before anyone sees strange errors.
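
As an illustration (a hypothetical snippet, not from this patch) of
how an unprefixed parameter captures a struct member name during
expansion:

	struct page_info { unsigned long gfn; };

	/* BAD: the parameter shares its name with the member it reads. */
	#define match_gfn(p, gfn) ((p)->gfn == (gfn))

	/*
	 * match_gfn(info, start) expands to ((info)->start == (start)):
	 * the token after "->" is the macro parameter gfn, so it is
	 * substituted as well and the member access breaks (or silently
	 * reads the wrong field).  Renaming the parameter to _gfn
	 * leaves (p)->gfn intact.
	 */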

Note: ignored the following checkpatch errors:
  ERROR: Macros with complex values should be enclosed in parenthesis
  ERROR: trailing statements should be on next line

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |   18 ++++++++----------
 1 files changed, 8 insertions(+), 10 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 956ca35..3e4822b 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1644,16 +1644,14 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list);
 
-#define for_each_gfn_sp(kvm, sp, gfn)					\
-  hlist_for_each_entry(sp,						\
-   &(kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)], hash_link)	\
-	if ((sp)->gfn != (gfn)) {} else
-
-#define for_each_gfn_indirect_valid_sp(kvm, sp, gfn)			\
-  hlist_for_each_entry(sp,						\
-   &(kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(gfn)], hash_link)	\
-		if ((sp)->gfn != (gfn) || (sp)->role.direct ||		\
-			(sp)->role.invalid) {} else
+#define for_each_gfn_sp(_kvm, _sp, _gfn)				\
+	hlist_for_each_entry(_sp,					\
+	  &(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
+		if ((_sp)->gfn != (_gfn)) {} else
+
+#define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn)			\
+	for_each_gfn_sp(_kvm, _sp, _gfn)				\
+		if ((_sp)->role.direct || (_sp)->role.invalid) {} else
 
 /* @sp->gfn should be write-protected at the call site */
 static int __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
-- 
1.7.5.4



* [PATCH 2/3] KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
From: Takuya Yoshikawa @ 2013-03-06  7:05 UTC
  To: mtosatti, gleb; +Cc: kvm

We traverse the linked list invalid_list while deleting each entry
with kvm_mmu_free_page(); the _safe iterator variant exists for
exactly this case.
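
For reference, a minimal sketch of the two patterns (the broken form
is shown only for contrast):

	struct kvm_mmu_page *sp, *nsp;

	/* BROKEN: after kvm_mmu_free_page(sp), the iterator would read
	 * sp->link.next from freed memory in order to advance. */
	list_for_each_entry(sp, invalid_list, link)
		kvm_mmu_free_page(sp);

	/* OK: nsp caches the next entry before sp is freed. */
	list_for_each_entry_safe(sp, nsp, invalid_list, link)
		kvm_mmu_free_page(sp);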

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |    7 +++----
 1 files changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3e4822b..0f42645 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2087,7 +2087,7 @@ static int kvm_mmu_prepare_zap_page(struct kvm *kvm, struct kvm_mmu_page *sp,
 static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 				    struct list_head *invalid_list)
 {
-	struct kvm_mmu_page *sp;
+	struct kvm_mmu_page *sp, *nsp;
 
 	if (list_empty(invalid_list))
 		return;
@@ -2104,11 +2104,10 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	 */
 	kvm_flush_remote_tlbs(kvm);
 
-	do {
-		sp = list_first_entry(invalid_list, struct kvm_mmu_page, link);
+	list_for_each_entry_safe(sp, nsp, invalid_list, link) {
 		WARN_ON(!sp->role.invalid || sp->root_count);
 		kvm_mmu_free_page(sp);
-	} while (!list_empty(invalid_list));
+	}
 }
 
 /*
-- 
1.7.5.4



* [PATCH 3/3] KVM: MMU: Introduce a helper function for FIFO zapping
From: Takuya Yoshikawa @ 2013-03-06  7:06 UTC
  To: mtosatti, gleb; +Cc: kvm

Move the code that zaps the oldest mmu page, i.e. the one at the tail
of the active list, into a separate helper function.
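
As background, a sketch of why the tail holds the FIFO victim
(assuming, as mmu.c does, that newly allocated pages are linked in
with list_add(), i.e. at the head of the active list):

	struct list_head *head = &kvm->arch.active_mmu_pages;

	/* New pages enter at the head, so head->prev is the page that
	 * has been on the list longest, i.e. the oldest one. */
	struct kvm_mmu_page *oldest =
		list_entry(head->prev, struct kvm_mmu_page, link);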

Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
---
 arch/x86/kvm/mmu.c |   55 +++++++++++++++++++++------------------------------
 1 files changed, 23 insertions(+), 32 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 0f42645..fdacabb 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2110,6 +2110,21 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 	}
 }
 
+static bool prepare_zap_oldest_mmu_page(struct kvm *kvm,
+					struct list_head *invalid_list)
+{
+	struct kvm_mmu_page *sp;
+
+	if (list_empty(&kvm->arch.active_mmu_pages))
+		return false;
+
+	sp = list_entry(kvm->arch.active_mmu_pages.prev,
+			struct kvm_mmu_page, link);
+	kvm_mmu_prepare_zap_page(kvm, sp, invalid_list);
+
+	return true;
+}
+
 /*
  * Changing the number of mmu pages allocated to the vm
  * Note: if goal_nr_mmu_pages is too small, you will get dead lock
@@ -2117,23 +2132,15 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int goal_nr_mmu_pages)
 {
 	LIST_HEAD(invalid_list);
-	/*
-	 * If we set the number of mmu pages to be smaller be than the
-	 * number of actived pages , we must to free some mmu pages before we
-	 * change the value
-	 */
 
 	spin_lock(&kvm->mmu_lock);
 
 	if (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages) {
-		while (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages &&
-			!list_empty(&kvm->arch.active_mmu_pages)) {
-			struct kvm_mmu_page *page;
+		/* Need to free some mmu pages to achieve the goal. */
+		while (kvm->arch.n_used_mmu_pages > goal_nr_mmu_pages)
+			if (!prepare_zap_oldest_mmu_page(kvm, &invalid_list))
+				break;
 
-			page = container_of(kvm->arch.active_mmu_pages.prev,
-					    struct kvm_mmu_page, link);
-			kvm_mmu_prepare_zap_page(kvm, page, &invalid_list);
-		}
 		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 		goal_nr_mmu_pages = kvm->arch.n_used_mmu_pages;
 	}
@@ -4007,13 +4014,10 @@ void __kvm_mmu_free_some_pages(struct kvm_vcpu *vcpu)
 {
 	LIST_HEAD(invalid_list);
 
-	while (kvm_mmu_available_pages(vcpu->kvm) < KVM_REFILL_PAGES &&
-	       !list_empty(&vcpu->kvm->arch.active_mmu_pages)) {
-		struct kvm_mmu_page *sp;
+	while (kvm_mmu_available_pages(vcpu->kvm) < KVM_REFILL_PAGES) {
+		if (!prepare_zap_oldest_mmu_page(vcpu->kvm, &invalid_list))
+			break;
 
-		sp = container_of(vcpu->kvm->arch.active_mmu_pages.prev,
-				  struct kvm_mmu_page, link);
-		kvm_mmu_prepare_zap_page(vcpu->kvm, sp, &invalid_list);
 		++vcpu->kvm->stat.mmu_recycled;
 	}
 	kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
@@ -4182,19 +4186,6 @@ restart:
 	spin_unlock(&kvm->mmu_lock);
 }
 
-static void kvm_mmu_remove_some_alloc_mmu_pages(struct kvm *kvm,
-						struct list_head *invalid_list)
-{
-	struct kvm_mmu_page *page;
-
-	if (list_empty(&kvm->arch.active_mmu_pages))
-		return;
-
-	page = container_of(kvm->arch.active_mmu_pages.prev,
-			    struct kvm_mmu_page, link);
-	kvm_mmu_prepare_zap_page(kvm, page, invalid_list);
-}
-
 static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 {
 	struct kvm *kvm;
@@ -4229,7 +4220,7 @@ static int mmu_shrink(struct shrinker *shrink, struct shrink_control *sc)
 		idx = srcu_read_lock(&kvm->srcu);
 		spin_lock(&kvm->mmu_lock);
 
-		kvm_mmu_remove_some_alloc_mmu_pages(kvm, &invalid_list);
+		prepare_zap_oldest_mmu_page(kvm, &invalid_list);
 		kvm_mmu_commit_zap_page(kvm, &invalid_list);
 
 		spin_unlock(&kvm->mmu_lock);
-- 
1.7.5.4



* Re: [PATCH 0/3] KVM: MMU: Simple cleanups and macro fixes
From: Gleb Natapov @ 2013-03-07 11:53 UTC
  To: Takuya Yoshikawa; +Cc: mtosatti, kvm

On Wed, Mar 06, 2013 at 04:03:22PM +0900, Takuya Yoshikawa wrote:
> Note: the first two patches are from my other work.
> 
> Takuya Yoshikawa (3):
>   KVM: MMU: Fix and clean up for_each_gfn_* macros
>   KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
>   KVM: MMU: Introduce a helper function for FIFO zapping
> 
>  arch/x86/kvm/mmu.c |   78 ++++++++++++++++++++++------------------------------
>  1 files changed, 33 insertions(+), 45 deletions(-)
> 
Reviewed-by: Gleb Natapov <gleb@redhat.com>

--
			Gleb.


* Re: [PATCH 0/3] KVM: MMU: Simple cleanups and macro fixes
From: Marcelo Tosatti @ 2013-03-07 20:26 UTC
  To: Gleb Natapov; +Cc: Takuya Yoshikawa, kvm

On Thu, Mar 07, 2013 at 01:53:13PM +0200, Gleb Natapov wrote:
> On Wed, Mar 06, 2013 at 04:03:22PM +0900, Takuya Yoshikawa wrote:
> > Note: the first two patches are from my other work.
> > 
> > Takuya Yoshikawa (3):
> >   KVM: MMU: Fix and clean up for_each_gfn_* macros
> >   KVM: MMU: Use list_for_each_entry_safe in kvm_mmu_commit_zap_page()
> >   KVM: MMU: Introduce a helper function for FIFO zapping
> > 
> >  arch/x86/kvm/mmu.c |   78 ++++++++++++++++++++++------------------------------
> >  1 files changed, 33 insertions(+), 45 deletions(-)
> > 
> Reviewed-by: Gleb Natapov <gleb@redhat.com>

Applied, thanks.


