public inbox for kvm@vger.kernel.org
* [PATCH] KVM: mmu: spte_write_protect optimization
@ 2022-05-25 19:12 Venkatesh Srinivas
  2022-05-25 20:07 ` Sean Christopherson
  0 siblings, 1 reply; 4+ messages in thread
From: Venkatesh Srinivas @ 2022-05-25 19:12 UTC (permalink / raw)
  To: kvm; +Cc: seanjc, venkateshs, Junaid Shahid

From: Junaid Shahid <junaids@google.com>

In the common case (!pt_protect), spte_write_protect() only needs to
clear the writable bit, so use a lighter-weight test-and-clear helper
instead of the full mmu_spte_update(). This speeds up the
KVM_GET_DIRTY_LOG ioctl.

Performance: dirty_log_perf_test with 32 GB VM size
             Avg IOCTL time over 10 passes
             Haswell: ~0.23s vs ~0.4s
             IvyBridge: ~0.8s vs ~1s

Signed-off-by: Venkatesh Srinivas <venkateshs@chromium.org>
Signed-off-by: Junaid Shahid <junaids@google.com>
---
 arch/x86/kvm/mmu/mmu.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index efe5a3dca1e0..a6db9dfaf7c3 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -1151,6 +1151,22 @@ static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
 	}
 }
 
+static bool spte_test_and_clear_writable(u64 *sptep)
+{
+	u64 spte = *sptep;
+
+	if (spte & PT_WRITABLE_MASK) {
+		clear_bit(PT_WRITABLE_SHIFT, (unsigned long *)sptep);
+
+		if (!spte_ad_enabled(spte))
+			kvm_set_pfn_dirty(spte_to_pfn(spte));
+
+		return true;
+	}
+
+	return false;
+}
+
 /*
  * Write-protect on the specified @sptep, @pt_protect indicates whether
  * spte write-protection is caused by protecting shadow page table.
@@ -1174,11 +1190,11 @@ static bool spte_write_protect(u64 *sptep, bool pt_protect)
 
 	rmap_printk("spte %p %llx\n", sptep, *sptep);
 
-	if (pt_protect)
-		spte &= ~shadow_mmu_writable_mask;
-	spte = spte & ~PT_WRITABLE_MASK;
-
-	return mmu_spte_update(sptep, spte);
+	if (pt_protect) {
+		spte &= ~(shadow_mmu_writable_mask | PT_WRITABLE_MASK);
+		return mmu_spte_update(sptep, spte);
+	}
+	return spte_test_and_clear_writable(sptep);
 }
 
 static bool rmap_write_protect(struct kvm_rmap_head *rmap_head,
-- 
2.36.1.124.g0e6072fb45-goog



Thread overview: 4+ messages
2022-05-25 19:12 [PATCH] KVM: mmu: spte_write_protect optimization Venkatesh Srinivas
2022-05-25 20:07 ` Sean Christopherson
2022-05-26 17:18   ` David Matlack
2022-05-26 17:33     ` Ben Gardon
