From: Marcelo Tosatti <mtosatti@redhat.com>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Avi Kivity <avi@redhat.com>, LKML <linux-kernel@vger.kernel.org>,
KVM <kvm@vger.kernel.org>
Subject: Re: [PATCH v3 18/19] KVM: MMU: mmio page fault support
Date: Thu, 7 Jul 2011 05:49:06 -0300
Message-ID: <20110707084906.GD8334@amt.cnet>
In-Reply-To: <4E14BE90.8070508@cn.fujitsu.com>
On Thu, Jul 07, 2011 at 03:59:12AM +0800, Xiao Guangrong wrote:
> On 07/07/2011 02:52 AM, Marcelo Tosatti wrote:
>
> >> +/*
> >> + * If it is a real mmio page fault, return 1 and emulate the instruction
> >> + * directly; return 0 to let the CPU fault again on the address; return
> >> + * -1 if a bug is detected.
> >> + */
> >> +int handle_mmio_page_fault_common(struct kvm_vcpu *vcpu, u64 addr, bool direct)
> >> +{
> >> + u64 spte;
> >> +
> >> + if (quickly_check_mmio_pf(vcpu, addr, direct))
> >> + return 1;
> >> +
> >> + spte = walk_shadow_page_get_mmio_spte(vcpu, addr);
> >> +
> >> + if (is_mmio_spte(spte)) {
> >> + gfn_t gfn = get_mmio_spte_gfn(spte);
> >> + unsigned access = get_mmio_spte_access(spte);
> >> +
> >> + if (direct)
> >> + addr = 0;
> >> + vcpu_cache_mmio_info(vcpu, addr, gfn, access);
> >> + return 1;
> >> + }
> >> +
> >> + /*
> >> + * It's ok if the gva is remapped by other cpus on a shadow guest;
> >> + * it's a BUG if the gfn is not an mmio page.
> >> + */
> >> + if (direct && is_shadow_present_pte(spte))
> >> + return -1;
> >
>
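For context, a toy model of how an mmio spte could pack the gfn and the access bits so the fault path above can recover them without re-walking the guest page tables. The mark bit, field widths, and encoding here are illustrative only, not the kernel's actual spte layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy layout (NOT the kernel's): tag the spte with a mark bit and
 * stash the gfn and access bits in the remaining bits. */
#define MMIO_SPTE_MARK   (1ull << 62)   /* hypothetical mmio tag     */
#define MMIO_ACCESS_BITS 0x7ull         /* hypothetical access field */

static uint64_t make_mmio_spte(uint64_t gfn, unsigned access)
{
	return MMIO_SPTE_MARK | (gfn << 3) | (access & MMIO_ACCESS_BITS);
}

static bool is_mmio_spte(uint64_t spte)
{
	return (spte & MMIO_SPTE_MARK) != 0;
}

static uint64_t get_mmio_spte_gfn(uint64_t spte)
{
	return (spte & ~MMIO_SPTE_MARK) >> 3;
}

static unsigned get_mmio_spte_access(uint64_t spte)
{
	return (unsigned)(spte & MMIO_ACCESS_BITS);
}
```

With such an encoding, handle_mmio_page_fault_common() can decide "this was an mmio access to gfn X with access Y" from the spte alone.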
> Marcelo,
>
> > This is only going to generate an spte dump, for a genuine EPT
> > misconfig, if the present bit is set.
> >
> > Should be:
> >
> > /*
> > * Page table zapped by other cpus, let CPU fault again on
> > * the address.
> > */
> > if (*spte == 0ull)
> > return 0;
>
> Cannot use "*spte == 0ull" here; it should use !is_shadow_present_pte(spte)
> instead, since on an x86-32 host we can see the high 32 bits set while the
> present bit is cleared.
OK, then check for 0ull on x86-64, and for the low 32 bits zeroed but the high
32 bits carrying the mmio pattern on x86-32.
It is not acceptable to disable the spte dump for sptes without the present bit.
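The split-spte point can be sketched like this (the mmio pattern constant is hypothetical; this is the idea under discussion, not the committed code): on x86-64 a zapped spte reads back as 0, but on an x86-32 host the two 32-bit halves are written separately, so a racing lockless reader can see the low word already cleared while the high word still carries the mmio pattern.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MMIO_HIGH_PATTERN 0x40000000u  /* hypothetical tag in the high word */

/* x86-64: the spte is written atomically, so zapped means exactly 0. */
static bool spte_zapped_64(uint64_t spte)
{
	return spte == 0ull;
}

/* x86-32: the halves are cleared one at a time, so also accept
 * "low word zero, stale mmio pattern still visible in the high word". */
static bool spte_zapped_32(uint64_t spte)
{
	uint32_t lo = (uint32_t)spte;
	uint32_t hi = (uint32_t)(spte >> 32);

	return lo == 0 && (hi == 0 || (hi & MMIO_HIGH_PATTERN));
}
```

Treating such values as "zapped, fault again" keeps the spte dump available for every other non-present spte, which is the point being insisted on above.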
> > /* BUG if gfn is not an mmio page */
> > return -1;
>
> We cannot detect the bug for soft-mmu, since the shadow page table can be
> changed at any time, for example:
>
> VCPU 0 VCPU1
> mmio page is intercepted
> change the guest page table and map the virtual
> address to the RAM region
> walk shadow page table, and
> detect the gfn is RAM, spurious
> BUG is reported
>
> In theory, this can happen.
Yes, with TDP only assigning a memory slot can change an spte from mmio
to valid, which can only happen under SRCU, so this path is protected.
> >> +static bool sync_mmio_spte(u64 *sptep, gfn_t gfn, unsigned access,
> >> + int *nr_present)
> >> +{
> >> + if (unlikely(is_mmio_spte(*sptep))) {
> >> + if (gfn != get_mmio_spte_gfn(*sptep)) {
> >> + mmu_spte_clear_no_track(sptep);
> >> + return true;
> >> + }
> >> +
> >> + (*nr_present)++;
> >
> > Can increase nr_present in the caller.
> >
>
> Yes, we should increase it to avoid the unsync shadow page being freed.
>
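The agreed restructuring can be sketched as follows (spte layout, helper names, and the caller `sync_page_one()` are all hypothetical): sync_mmio_spte() only decides whether the cached mmio spte still matches the gfn, and the caller bumps nr_present so the unsync shadow page is not freed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MMIO_MARK (1ull << 62)  /* hypothetical mmio tag */

static bool is_mmio(uint64_t spte)      { return (spte & MMIO_MARK) != 0; }
static uint64_t mmio_gfn(uint64_t spte) { return spte & ~MMIO_MARK; }

/* Returns true if the spte had to be dropped because the gfn changed. */
static bool sync_mmio_spte(uint64_t *sptep, uint64_t gfn)
{
	if (!is_mmio(*sptep))
		return false;
	if (mmio_gfn(*sptep) != gfn) {
		*sptep = 0;  /* stands in for mmu_spte_clear_no_track() */
		return true;
	}
	return false;
}

/* Caller side: the nr_present accounting now lives here. */
static int sync_page_one(uint64_t *sptep, uint64_t gfn, int *nr_present)
{
	if (sync_mmio_spte(sptep, gfn))
		return 0;            /* dropped; the CPU will fault again */
	if (is_mmio(*sptep))
		(*nr_present)++;     /* moved out of sync_mmio_spte() */
	return 1;
}
```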
> >> @@ -6481,6 +6506,13 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
> >> if (!kvm->arch.n_requested_mmu_pages)
> >> nr_mmu_pages = kvm_mmu_calculate_mmu_pages(kvm);
> >>
> >> + /*
> >> + * If a new memory slot is created, we need to clear all
> >> + * mmio sptes.
> >> + */
> >> + if (npages && old.base_gfn != mem->guest_phys_addr >> PAGE_SHIFT)
> >> + kvm_arch_flush_shadow(kvm);
> >> +
> >
> > This should be in __kvm_set_memory_region.
> >
>
> Um, __kvm_set_memory_region is a common function, and only x86 supports
> mmio sptes; it seems there is no need to do this for all architectures?
Please do it in __kvm_set_memory_region, after
kvm_arch_prepare_memory_region (other arches won't mind the flush).
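A minimal sketch of the suggested placement, with stand-in types and a counter in place of kvm_arch_flush_shadow() (names simplified; this is not the committed code): inside the generic __kvm_set_memory_region(), after the arch prepare hook, flush shadow pages whenever a new slot is created so stale mmio sptes are dropped; other architectures just pay a harmless flush.

```c
#include <assert.h>
#include <stdint.h>

struct slot {
	uint64_t base_gfn;
	unsigned long npages;
};

static int flushes;  /* stand-in for kvm_arch_flush_shadow() */

static void flush_shadow(void)
{
	flushes++;
}

static void set_memory_region(struct slot *old, struct slot *new_slot)
{
	/* ... kvm_arch_prepare_memory_region() would run here ... */

	/* A new slot is being created (or moved): clear all mmio sptes,
	 * since gfns that used to be mmio may now back RAM. */
	if (new_slot->npages && old->base_gfn != new_slot->base_gfn)
		flush_shadow();

	/* ... commit the new slot ... */
}
```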