* [PATCH 0/2] RFC: KVM: Simple optimization based on Xiao's patch
From: Takuya Yoshikawa @ 2013-08-30 3:50 UTC
To: gleb, pbonzini; +Cc: kvm, xiaoguangrong
I think this patch set answers Gleb's comment.
Takuya
* [PATCH 1/2] KVM: Take mmu_lock only while write-protecting pages in get_dirty_log
From: Takuya Yoshikawa @ 2013-08-30 3:51 UTC
To: gleb, pbonzini; +Cc: kvm, xiaoguangrong
Xiao's "KVM: MMU: flush tlb if the spte can be locklessly modified"
allows us to release mmu_lock before flushing TLBs.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
Note: Xiao could later convert the remaining mmu_lock critical section
here to RCU's read-side lock; since the section now covers only a short
rmap walk, the grace period should stay reasonably limited.
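For reference, the resulting flow looks roughly like this (a simplified
sketch reconstructed from this patch plus the surrounding kernel code of
that era, not a compilable excerpt; the hunks below are authoritative):

    /*
     * kvm_mmu_write_protect_pt_masked() now takes mmu_lock itself,
     * only around the rmap walk for one word of the dirty bitmap.
     */
    void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
                                         struct kvm_memory_slot *slot,
                                         gfn_t gfn_offset, unsigned long mask)
    {
            unsigned long *rmapp;

            spin_lock(&kvm->mmu_lock);
            while (mask) {
                    rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
                                          PT_PAGE_TABLE_LEVEL, slot);
                    __rmap_write_protect(kvm, rmapp, false);

                    /* clear the lowest set bit */
                    mask &= mask - 1;
            }
            spin_unlock(&kvm->mmu_lock);
    }

The caller, kvm_vm_ioctl_get_dirty_log(), thus holds mmu_lock for at
most one 64-page mask at a time instead of across the whole bitmap scan,
and kvm_flush_remote_tlbs() runs with the lock already dropped, which
Xiao's patch above makes safe.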
arch/x86/kvm/mmu.c | 4 ++++
arch/x86/kvm/x86.c | 4 ----
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5d9efb1..c6da9ba 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1249,6 +1249,8 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
{
unsigned long *rmapp;
+ spin_lock(&kvm->mmu_lock);
+
while (mask) {
rmapp = __gfn_to_rmap(slot->base_gfn + gfn_offset + __ffs(mask),
PT_PAGE_TABLE_LEVEL, slot);
@@ -1257,6 +1259,8 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
/* clear the first set bit */
mask &= mask - 1;
}
+
+ spin_unlock(&kvm->mmu_lock);
}
static bool rmap_write_protect(struct kvm *kvm, u64 gfn)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e5ca72a..1d1f6df 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3543,8 +3543,6 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
memset(dirty_bitmap_buffer, 0, n);
- spin_lock(&kvm->mmu_lock);
-
for (i = 0; i < n / sizeof(long); i++) {
unsigned long mask;
gfn_t offset;
@@ -3563,8 +3561,6 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
if (is_dirty)
kvm_flush_remote_tlbs(kvm);
- spin_unlock(&kvm->mmu_lock);
-
r = -EFAULT;
if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
goto out;
--
1.7.9.5
* [PATCH 2/2] KVM: Stop using extra buffer for copying dirty_bitmap to user-space
From: Takuya Yoshikawa @ 2013-08-30 3:52 UTC
To: gleb, pbonzini; +Cc: kvm, xiaoguangrong
Now that mmu_lock is taken only inside kvm_mmu_write_protect_pt_masked(),
we can use __put_user() to copy each 64-bit (or 32-bit) word of dirty
bits directly to user-space. This eliminates the need to copy the whole
bitmap into an extra kernel buffer first, and the resulting code is much
more cache friendly than before.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya_b1@lab.ntt.co.jp>
Cc: Xiao Guangrong <xiaoguangrong@linux.vnet.ibm.com>
---
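The shape of the new copy path, as a simplified sketch (error paths and
surrounding declarations are trimmed; the hunks below are
authoritative):

    /*
     * Pre-zero the user buffer with clear_user(); besides zeroing, this
     * validates the whole range, so the cheaper __put_user(), which on
     * x86 skips the access check, is safe inside the loop.
     */
    r = -EFAULT;
    if (clear_user(log->dirty_bitmap, n))
            goto out;

    p_user = (unsigned long __user *)log->dirty_bitmap;
    for (i = 0; i < n / sizeof(long); i++, p_user++) {
            unsigned long mask;

            if (!dirty_bitmap[i])
                    continue;       /* clean word: already zero for user */

            is_dirty = true;
            mask = xchg(&dirty_bitmap[i], 0);  /* atomically grab and clear */

            if (__put_user(mask, p_user))      /* publish this word directly */
                    goto out;

            kvm_mmu_write_protect_pt_masked(kvm, memslot,
                                            i * BITS_PER_LONG, mask);
    }

Because the buffer was pre-zeroed, only words that actually contain
dirty bits are written at all, and no intermediate kernel buffer is
touched, which is where the better cache behavior comes from.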
arch/x86/kvm/x86.c | 18 ++++++++----------
virt/kvm/kvm_main.c | 6 +-----
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1d1f6df..79e8ad0 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -3522,7 +3522,7 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
struct kvm_memory_slot *memslot;
unsigned long n, i;
unsigned long *dirty_bitmap;
- unsigned long *dirty_bitmap_buffer;
+ unsigned long __user *p_user;
bool is_dirty = false;
mutex_lock(&kvm->slots_lock);
@@ -3539,11 +3539,12 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
goto out;
n = kvm_dirty_bitmap_bytes(memslot);
+ r = -EFAULT;
+ if (clear_user(log->dirty_bitmap, n))
+ goto out;
- dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
- memset(dirty_bitmap_buffer, 0, n);
-
- for (i = 0; i < n / sizeof(long); i++) {
+ p_user = (unsigned long __user *)log->dirty_bitmap;
+ for (i = 0; i < n / sizeof(long); i++, p_user++) {
unsigned long mask;
gfn_t offset;
@@ -3553,7 +3554,8 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
is_dirty = true;
mask = xchg(&dirty_bitmap[i], 0);
- dirty_bitmap_buffer[i] = mask;
+ if (__put_user(mask, p_user))
+ goto out;
offset = i * BITS_PER_LONG;
kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
@@ -3561,10 +3563,6 @@ int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
if (is_dirty)
kvm_flush_remote_tlbs(kvm);
- r = -EFAULT;
- if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
- goto out;
-
r = 0;
out:
mutex_unlock(&kvm->slots_lock);
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index bf040c4..c919f58 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -626,14 +626,10 @@ static int kvm_vm_release(struct inode *inode, struct file *filp)
return 0;
}
-/*
- * Allocation size is twice as large as the actual dirty bitmap size.
- * See x86's kvm_vm_ioctl_get_dirty_log() why this is needed.
- */
static int kvm_create_dirty_bitmap(struct kvm_memory_slot *memslot)
{
#ifndef CONFIG_S390
- unsigned long dirty_bytes = 2 * kvm_dirty_bitmap_bytes(memslot);
+ unsigned long dirty_bytes = kvm_dirty_bitmap_bytes(memslot);
memslot->dirty_bitmap = kvm_kvzalloc(dirty_bytes);
if (!memslot->dirty_bitmap)
--
1.7.9.5
* Re: [PATCH 0/2] RFC: KVM: Simple optimization based on Xiao's patch
From: Gleb Natapov @ 2013-09-01 14:16 UTC
To: Takuya Yoshikawa; +Cc: pbonzini, kvm, xiaoguangrong
On Fri, Aug 30, 2013 at 12:50:11PM +0900, Takuya Yoshikawa wrote:
> I think this patch set answers Gleb's comment.
>
It does. Thanks.
--
Gleb.