* [PATCH] KVM: MMU: Use ptep_user for cmpxchg_gpte()
@ 2011-04-30 14:50 Takuya Yoshikawa
From: Takuya Yoshikawa @ 2011-04-30 14:50 UTC (permalink / raw)
To: avi, mtosatti; +Cc: kvm, yoshikawa.takuya
From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
The address of the gpte was already calculated and stored in ptep_user
before entering cmpxchg_gpte().
This patch makes cmpxchg_gpte() use it, to make it clear that we
are operating on the same address throughout walk_addr_generic().
Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
---
Note: I tested this but could not see cmpxchg_gpte() being called.
Is there any good test case?
arch/x86/kvm/paging_tmpl.h | 32 +++++++++++---------------------
1 files changed, 11 insertions(+), 21 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 52450a6..73cdf98 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -79,20 +79,16 @@ static gfn_t gpte_to_gfn_lvl(pt_element_t gpte, int lvl)
}
static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
- gfn_t table_gfn, unsigned index,
- pt_element_t orig_pte, pt_element_t new_pte)
+ pt_element_t __user *ptep_user, unsigned index,
+ pt_element_t orig_pte, pt_element_t new_pte)
{
+ int npages;
pt_element_t ret;
pt_element_t *table;
struct page *page;
- gpa_t gpa;
- gpa = mmu->translate_gpa(vcpu, table_gfn << PAGE_SHIFT,
- PFERR_USER_MASK|PFERR_WRITE_MASK);
- if (gpa == UNMAPPED_GVA)
- return -EFAULT;
-
- page = gfn_to_page(vcpu->kvm, gpa_to_gfn(gpa));
+ npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page);
+ BUG_ON(npages != 1);
table = kmap_atomic(page, KM_USER0);
ret = CMPXCHG(&table[index], orig_pte, new_pte);
@@ -234,12 +230,9 @@ walk:
int ret;
trace_kvm_mmu_set_accessed_bit(table_gfn, index,
sizeof(pte));
- ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn,
- index, pte, pte|PT_ACCESSED_MASK);
- if (ret < 0) {
- present = false;
- break;
- } else if (ret)
+ ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+ pte, pte|PT_ACCESSED_MASK);
+ if (ret)
goto walk;
mark_page_dirty(vcpu->kvm, table_gfn);
@@ -293,12 +286,9 @@ walk:
int ret;
trace_kvm_mmu_set_dirty_bit(table_gfn, index, sizeof(pte));
- ret = FNAME(cmpxchg_gpte)(vcpu, mmu, table_gfn, index, pte,
- pte|PT_DIRTY_MASK);
- if (ret < 0) {
- present = false;
- goto error;
- } else if (ret)
+ ret = FNAME(cmpxchg_gpte)(vcpu, mmu, ptep_user, index,
+ pte, pte|PT_DIRTY_MASK);
+ if (ret)
goto walk;
mark_page_dirty(vcpu->kvm, table_gfn);
--
1.7.1
* Re: [PATCH] KVM: MMU: Use ptep_user for cmpxchg_gpte()
From: Avi Kivity @ 2011-04-30 19:48 UTC (permalink / raw)
To: Takuya Yoshikawa; +Cc: mtosatti, kvm, yoshikawa.takuya
On 04/30/2011 05:50 PM, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
>
> The address of the gpte was already calculated and stored in ptep_user
> before entering cmpxchg_gpte().
>
> This patch makes cmpxchg_gpte() use it, to make it clear that we
> are operating on the same address throughout walk_addr_generic().
>
> Signed-off-by: Takuya Yoshikawa <yoshikawa.takuya@oss.ntt.co.jp>
> ---
> Note: I tested this but could not see cmpxchg_gpte() being called.
> Is there any good test case?
Run with npt or ept disabled. With a normal OS you'll see A and D bits
being set when the guest is under memory pressure, or when the guest
uses a shared writeable mapping. You can also try kvm-unit-tests.git's
x86/access.flat test (also with npt/ept disabled).
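Concretely, forcing KVM onto the shadow-paging path (so that walk_addr_generic() and cmpxchg_gpte() are exercised) might look like the following; the module parameters are real, but the exact reload sequence and test-runner invocation are typical rather than taken from this thread:

```shell
# Disable two-dimensional paging so KVM falls back to shadow paging.
modprobe -r kvm_amd   && modprobe kvm_amd npt=0     # AMD hosts
modprobe -r kvm_intel && modprobe kvm_intel ept=0   # Intel hosts

# Then run the page-table access-bit test from kvm-unit-tests.git:
./x86-run x86/access.flat
```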
> static int FNAME(cmpxchg_gpte)(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
> - gfn_t table_gfn, unsigned index,
> - pt_element_t orig_pte, pt_element_t new_pte)
> + pt_element_t __user *ptep_user, unsigned index,
> + pt_element_t orig_pte, pt_element_t new_pte)
> {
> + int npages;
> pt_element_t ret;
> pt_element_t *table;
> struct page *page;
> - gpa_t gpa;
>
> - gpa = mmu->translate_gpa(vcpu, table_gfn << PAGE_SHIFT,
> - PFERR_USER_MASK|PFERR_WRITE_MASK);
> - if (gpa == UNMAPPED_GVA)
> - return -EFAULT;
> -
> - page = gfn_to_page(vcpu->kvm, gpa_to_gfn(gpa));
> + npages = get_user_pages_fast((unsigned long)ptep_user, 1, 1, &page);
> + BUG_ON(npages != 1);
This BUG_ON() is user triggerable - have one thread munmap() guest
memory while another thread runs guest code which triggers this path.
You need to return an error here. Or maybe you can just return - the
user is doing something meaningless and there's no correct action here.
--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.