* [PATCH for 3.3] KVM: Fix write protection race during dirty logging
@ 2012-02-05 11:42 Takuya Yoshikawa
From: Takuya Yoshikawa @ 2012-02-05 11:42 UTC (permalink / raw)
  To: avi, mtosatti; +Cc: kvm

From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>

This patch fixes a race introduced by:

  commit 95d4c16ce78cb6b7549a09159c409d52ddd18dae
  KVM: Optimize dirty logging by rmap_write_protect()

While pages are being write protected for dirty logging, other threads
may also try to write protect a page, in mmu_sync_children() or
kvm_mmu_get_page().

In such a case, because get_dirty_log releases mmu_lock before flushing
the TLBs, the following race can happen:

  A (get_dirty_log)     B (another thread)

  lock(mmu_lock)
  clear pte.w
  unlock(mmu_lock)
                        lock(mmu_lock)
                        pte.w is already cleared
                        unlock(mmu_lock)
                        skip TLB flush
                        return
  ...
  TLB flush

Thread B assumes the page has already been write protected when it
returns, but the stale TLB entry breaks that assumption: writes can
still go through it.
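
The reason thread B can skip the flush is sketched below in simplified
form; write_protect_gfn() is a hypothetical stand-in for the rmap
write-protect helpers reached from mmu_sync_children() and
kvm_mmu_get_page(), assumed to return true only when it cleared pte.w
itself (an illustration, not the literal 3.3 code):

  /* write_protect_gfn(): hypothetical helper, assumed to return true
   * only if it cleared pte.w itself. */
  spin_lock(&kvm->mmu_lock);
  if (write_protect_gfn(kvm, gfn))      /* pte.w already clear: false */
          kvm_flush_remote_tlbs(kvm);   /* ... so the flush is skipped */
  spin_unlock(&kvm->mmu_lock);
  /* If thread A cleared pte.w but has not flushed yet, thread B
   * returns here while a stale, writable TLB entry still exists. */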

This patch fixes the problem by making get_dirty_log hold mmu_lock
until it has flushed the TLBs.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
 arch/x86/kvm/x86.c |   11 +++++------
 1 files changed, 5 insertions(+), 6 deletions(-)
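
For reference, write_protect_slot() after this change reads roughly as
follows (reconstructed from the diff below; the struct kvm_memory_slot
*memslot parameter is not visible in the hunk headers and is assumed
from the function body):

  static void write_protect_slot(struct kvm *kvm,
                                 struct kvm_memory_slot *memslot, /* assumed */
                                 unsigned long *dirty_bitmap,
                                 unsigned long nr_dirty_pages)
  {
          spin_lock(&kvm->mmu_lock);

          /* Not many dirty pages compared to # of shadow pages. */
          if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
                  unsigned long gfn_offset;

                  for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
                          unsigned long gfn = memslot->base_gfn + gfn_offset;

                          kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
                  }
                  /* flush before dropping mmu_lock, closing the race */
                  kvm_flush_remote_tlbs(kvm);
          } else
                  kvm_mmu_slot_remove_write_access(kvm, memslot->id);

          spin_unlock(&kvm->mmu_lock);
  }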

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 2bd77a3..048fe6c 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3022,6 +3022,8 @@ static void write_protect_slot(struct kvm *kvm,
 			       unsigned long *dirty_bitmap,
 			       unsigned long nr_dirty_pages)
 {
+	spin_lock(&kvm->mmu_lock);
+
 	/* Not many dirty pages compared to # of shadow pages. */
 	if (nr_dirty_pages < kvm->arch.n_used_mmu_pages) {
 		unsigned long gfn_offset;
@@ -3029,16 +3031,13 @@ static void write_protect_slot(struct kvm *kvm,
 		for_each_set_bit(gfn_offset, dirty_bitmap, memslot->npages) {
 			unsigned long gfn = memslot->base_gfn + gfn_offset;
 
-			spin_lock(&kvm->mmu_lock);
 			kvm_mmu_rmap_write_protect(kvm, gfn, memslot);
-			spin_unlock(&kvm->mmu_lock);
 		}
 		kvm_flush_remote_tlbs(kvm);
-	} else {
-		spin_lock(&kvm->mmu_lock);
+	} else
 		kvm_mmu_slot_remove_write_access(kvm, memslot->id);
-		spin_unlock(&kvm->mmu_lock);
-	}
+
+	spin_unlock(&kvm->mmu_lock);
 }
 
 /*
-- 
1.7.5.4

