public inbox for kvm@vger.kernel.org
* [PATCH 1/2] KVM: MMU: Clean up gpte reading with copy_from_user()
@ 2011-05-07  7:31 Takuya Yoshikawa
  2011-05-07  7:35 ` [PATCH 2/2] KVM: Validate userspace_addr of memslot when registered Takuya Yoshikawa
  0 siblings, 1 reply; 3+ messages in thread
From: Takuya Yoshikawa @ 2011-05-07  7:31 UTC (permalink / raw)
  To: avi, mtosatti; +Cc: andi, kvm, yoshikawa.takuya

From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>

When we optimized walk_addr_generic() by not using the generic guest
memory reader, we replaced copy_from_user() with get_user():

  commit e30d2a170506830d5eef5e9d7990c5aedf1b0a51
  KVM: MMU: Optimize guest page table walk

  commit 15e2ac9a43d4d7d08088e404fddf2533a8e7d52e
  KVM: MMU: Fix 64-bit paging breakage on x86_32

But as Andi pointed out later, copy_from_user() generates the same code
as get_user() as long as we pass it a constant size, since the compiler
inlines the constant-size copy.

So we use copy_from_user() to clean up the code.

The only noticeable regression this introduces is in 64-bit gpte
reading on x86_32 hosts, which is needed for PAE guests.

But this can be mitigated by implementing 8-byte get_user() for x86_32,
if needed.

Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
 arch/x86/kvm/paging_tmpl.h |   16 +---------------
 1 files changed, 1 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index f9d9af1..0803e36 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -113,20 +113,6 @@ static unsigned FNAME(gpte_access)(struct kvm_vcpu *vcpu, pt_element_t gpte)
 	return access;
 }
 
-static int FNAME(read_gpte)(pt_element_t *pte, pt_element_t __user *ptep_user)
-{
-#if defined(CONFIG_X86_32) && (PTTYPE == 64)
-	u32 *p = (u32 *)pte;
-	u32 __user *p_user = (u32 __user *)ptep_user;
-
-	if (unlikely(get_user(*p, p_user)))
-		return -EFAULT;
-	return get_user(*(p + 1), p_user + 1);
-#else
-	return get_user(*pte, ptep_user);
-#endif
-}
-
 /*
  * Fetch a guest pte for a guest virtual address
  */
@@ -197,7 +183,7 @@ walk:
 		}
 
 		ptep_user = (pt_element_t __user *)((void *)host_addr + offset);
-		if (unlikely(FNAME(read_gpte)(&pte, ptep_user))) {
+		if (unlikely(copy_from_user(&pte, ptep_user, sizeof(pte)))) {
 			present = false;
 			break;
 		}
-- 
1.7.1



end of thread, other threads:[~2011-05-09  8:37 UTC | newest]

Thread overview: 3+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2011-05-07  7:31 [PATCH 1/2] KVM: MMU: Clean up gpte reading with copy_from_user() Takuya Yoshikawa
2011-05-07  7:35 ` [PATCH 2/2] KVM: Validate userspace_addr of memslot when registered Takuya Yoshikawa
2011-05-09  8:37   ` Avi Kivity
