* [PATCH 1/2] KVM: MMU: Clean up gpte reading with copy_from_user()
@ 2011-05-07 7:31 Takuya Yoshikawa
From: Takuya Yoshikawa @ 2011-05-07 7:31 UTC (permalink / raw)
To: avi, mtosatti; +Cc: andi, kvm, yoshikawa.takuya
From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
When we optimized walk_addr_generic() by not using the generic guest
memory reader, we replaced copy_from_user() with get_user():
commit e30d2a170506830d5eef5e9d7990c5aedf1b0a51
KVM: MMU: Optimize guest page table walk
commit 15e2ac9a43d4d7d08088e404fddf2533a8e7d52e
KVM: MMU: Fix 64-bit paging breakage on x86_32
But as Andi pointed out later, copy_from_user() compiles down to the
same code as get_user() as long as it is given a constant size.
So use copy_from_user() to clean up the code.
The only noticeable regression this introduces is in the 64-bit gpte
read on x86_32 hosts, which is needed for PAE guests.
This can be mitigated by implementing an 8-byte get_user() for x86_32,
if needed.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/x86/kvm/paging_tmpl.h | 16 +---------------
1 files changed, 1 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index f9d9af1..0803e36 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -113,20 +113,6 @@ static unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, pt_element_t gpte)
return access;
}
-static int FNAME(read_gpte)(pt_element_t *pte, pt_element_t __user *ptep_user)
-{
-#if defined(CONFIG_X86_32) && (PTTYPE == 64)
- u32 *p = (u32 *)pte;
- u32 __user *p_user = (u32 __user *)ptep_user;
-
- if (unlikely(get_user(*p, p_user)))
- return -EFAULT;
- return get_user(*(p + 1), p_user + 1);
-#else
- return get_user(*pte, ptep_user);
-#endif
-}
-
/*
* Fetch a guest pte for a guest virtual address
*/
@@ -197,7 +183,7 @@ walk:
}
ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
- if (unlikely(FNAME(read_gpte)(&pte, ptep_user))) {
+ if (unlikely(copy_from_user(&pte, ptep_user, sizeof(pte)))) {
present = false;
break;
}
--
1.7.1
* [PATCH 2/2] KVM: Validate userspace_addr of memslot when registered
From: Takuya Yoshikawa @ 2011-05-07 7:35 UTC (permalink / raw)
To: avi, mtosatti; +Cc: andi, kvm, yoshikawa.takuya
From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
This way, we can avoid checking the userspace address every time we
read guest memory.
Although we could do the same for writes by checking which slots are
writable, we do not bother with writes for now: reading guest memory
happens more often than writing to it.
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
arch/x86/kvm/paging_tmpl.h | 2 +-
virt/kvm/kvm_main.c | 7 +++++--
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 0803e36..6c4dc01 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -183,7 +183,7 @@ walk:
}
ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
- if (unlikely(copy_from_user(&pte, ptep_user, sizeof(pte)))) {
+ if (unlikely(__copy_from_user(&pte, ptep_user, sizeof(pte)))) {
present = false;
break;
}
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 5814645..0402b6b 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -648,7 +648,10 @@ int __kvm_set_memory_region(struct kvm *kvm,
goto out;
if (mem->guest_phys_addr & (PAGE_SIZE - 1))
goto out;
- if (user_alloc && (mem->userspace_addr & (PAGE_SIZE - 1)))
+ /* We can read the guest memory with __xxx_user() later on. */
+ if (user_alloc &&
+ ((mem->userspace_addr & (PAGE_SIZE - 1)) ||
+ !access_ok(VERIFY_READ, mem->userspace_addr, mem->memory_size)))
goto out;
if (mem->slot >= KVM_MEMORY_SLOTS + KVM_PRIVATE_MEM_SLOTS)
goto out;
@@ -1283,7 +1286,7 @@ int kvm_read_guest_page(struct kvm *kvm, gfn_t gfn, void *data, int offset,
addr = gfn_to_hva(kvm, gfn);
if (kvm_is_error_hva(addr))
return -EFAULT;
- r = copy_from_user(data, (void __user *)addr + offset, len);
+ r = __copy_from_user(data, (void __user *)addr + offset, len);
if (r)
return -EFAULT;
return 0;
--
1.7.1
* Re: [PATCH 2/2] KVM: Validate userspace_addr of memslot when registered
From: Avi Kivity @ 2011-05-09 8:37 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: mtosatti, andi, kvm, yoshikawa.takuya
On 05/07/2011 10:35 AM, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
>
> This way, we can avoid checking the userspace address every time we
> read guest memory.
>
> Although we could do the same for writes by checking which slots are
> writable, we do not bother with writes for now: reading guest memory
> happens more often than writing to it.
Thanks, applied. I changed VERIFY_READ to VERIFY_WRITE, since the
checks are exactly the same.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.