From: Paolo Bonzini <pbonzini@redhat.com>
To: Tiejun Chen <tiejun.chen@intel.com>
Cc: kvm@vger.kernel.org
Subject: Re: [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte
Date: Fri, 14 Nov 2014 11:11:08 +0100 [thread overview]
Message-ID: <5465D53C.9040007@redhat.com> (raw)
In-Reply-To: <1415957488-27490-2-git-send-email-tiejun.chen@intel.com>
On 14/11/2014 10:31, Tiejun Chen wrote:
> In PAE case maxphyaddr may be 52bit as well, we also need to
> disable mmio page fault. Here we can check MMIO_SPTE_GEN_HIGH_SHIFT
> directly to determine if we should set the present bit, and
> bring a little cleanup.
>
> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/mmu.c | 23 +++++++++++++++++++++++
> arch/x86/kvm/x86.c | 30 ------------------------------
> 3 files changed, 24 insertions(+), 30 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index dc932d3..667f2b6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -809,6 +809,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn_offset, unsigned long mask);
> void kvm_mmu_zap_all(struct kvm *kvm);
> +void kvm_set_mmio_spte_mask(void);
> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
> unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
> void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ac1c4de..8e4be36 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -295,6 +295,29 @@ static bool check_mmio_spte(struct kvm *kvm, u64 spte)
> return likely(kvm_gen == spte_gen);
> }
>
> +/*
> + * Set the reserved bits and the present bit of an paging-structure
> + * entry to generate page fault with PFER.RSV = 1.
> + */
> +void kvm_set_mmio_spte_mask(void)
> +{
> + u64 mask;
> + int maxphyaddr = boot_cpu_data.x86_phys_bits;
> +
> + /* Mask the reserved physical address bits. */
> + mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);
> +
> + /* Magic bits are always reserved for 32bit host. */
> + mask |= 0x3ull << 62;
This should be enough to trigger the page fault on PAE systems.
The problem is specific to non-EPT 64-bit hosts, where the only reserved
PTE bits are 51:MAXPHYADDR — so once MAXPHYADDR reaches 52 there are none
left. On EPT we use WX- permissions to trigger an EPT misconfiguration;
on 32-bit hosts we have bit 62.
> + /* Set the present bit to enable mmio page fault. */
> + if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
> + mask = PT_PRESENT_MASK;
Shouldn't this be "|=" anyway, instead of "="?
Paolo
> +
> + kvm_mmu_set_mmio_spte_mask(mask);
> +}
> +EXPORT_SYMBOL_GPL(kvm_set_mmio_spte_mask);
> +
> void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
> u64 dirty_mask, u64 nx_mask, u64 x_mask)
> {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f85da5c..550f179 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5596,36 +5596,6 @@ void kvm_after_handle_nmi(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_after_handle_nmi);
>
> -static void kvm_set_mmio_spte_mask(void)
> -{
> - u64 mask;
> - int maxphyaddr = boot_cpu_data.x86_phys_bits;
> -
> - /*
> - * Set the reserved bits and the present bit of an paging-structure
> - * entry to generate page fault with PFER.RSV = 1.
> - */
> - /* Mask the reserved physical address bits. */
> - mask = rsvd_bits(maxphyaddr, 51);
> -
> - /* Bit 62 is always reserved for 32bit host. */
> - mask |= 0x3ull << 62;
> -
> - /* Set the present bit. */
> - mask |= 1ull;
> -
> -#ifdef CONFIG_X86_64
> - /*
> - * If reserved bit is not supported, clear the present bit to disable
> - * mmio page fault.
> - */
> - if (maxphyaddr == 52)
> - mask &= ~1ull;
> -#endif
> -
> - kvm_mmu_set_mmio_spte_mask(mask);
> -}
> -
> #ifdef CONFIG_X86_64
> static void pvclock_gtod_update_fn(struct work_struct *work)
> {
>
Thread overview: 8+ messages
2014-11-14 9:31 [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Tiejun Chen
2014-11-14 9:31 ` [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte Tiejun Chen
2014-11-14 10:11 ` Paolo Bonzini [this message]
2014-11-17 1:55 ` Chen, Tiejun
2014-11-14 10:06 ` [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Paolo Bonzini
2014-11-17 1:34 ` Chen, Tiejun
2014-11-17 9:22 ` Paolo Bonzini
2014-11-17 9:27 ` Chen, Tiejun