* [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte
@ 2014-11-17 11:31 Tiejun Chen
2014-11-17 11:40 ` Paolo Bonzini
0 siblings, 1 reply; 4+ messages in thread
From: Tiejun Chen @ 2014-11-17 11:31 UTC (permalink / raw)
To: pbonzini; +Cc: kvm
In non-ept 64-bit of PAE case maxphyaddr may be 52bit as well,
so we also need to disable the mmio page fault. Here we can check
MMIO_SPTE_GEN_HIGH_SHIFT directly to determine whether we should
set the present bit, which also brings a little cleanup.
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
---
v2:
* Correct code comments
* Use "|=" to set the present bit
 arch/x86/include/asm/kvm_host.h |  1 +
 arch/x86/kvm/mmu.c              | 25 +++++++++++++++++++++++++
 arch/x86/kvm/x86.c              | 30 ------------------------------
 3 files changed, 26 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dc932d3..667f2b6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -809,6 +809,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
 				     struct kvm_memory_slot *slot,
 				     gfn_t gfn_offset, unsigned long mask);
 void kvm_mmu_zap_all(struct kvm *kvm);
+void kvm_set_mmio_spte_mask(void);
 void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
 unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
 void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ac1c4de..fe9a917 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -295,6 +295,31 @@ static bool check_mmio_spte(struct kvm *kvm, u64 spte)
 	return likely(kvm_gen == spte_gen);
 }
 
+/*
+ * Set the reserved bits and the present bit of a paging-structure
+ * entry to generate a page fault with PFER.RSV = 1.
+ */
+void kvm_set_mmio_spte_mask(void)
+{
+	u64 mask;
+	int maxphyaddr = boot_cpu_data.x86_phys_bits;
+
+	/* Mask the reserved physical address bits. */
+	mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);
+
+	/* Magic bits are always reserved to identify mmio spte.
+	 * On 32 bit systems we have bit 62.
+	 */
+	mask |= 0x3ull << 62;
+
+	/* Set the present bit to enable mmio page fault. */
+	if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
+		mask |= PT_PRESENT_MASK;
+
+	kvm_mmu_set_mmio_spte_mask(mask);
+}
+EXPORT_SYMBOL_GPL(kvm_set_mmio_spte_mask);
+
 void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 		u64 dirty_mask, u64 nx_mask, u64 x_mask)
 {
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f85da5c..550f179 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5596,36 +5596,6 @@ void kvm_after_handle_nmi(struct kvm_vcpu *vcpu)
 }
 EXPORT_SYMBOL_GPL(kvm_after_handle_nmi);
 
-static void kvm_set_mmio_spte_mask(void)
-{
-	u64 mask;
-	int maxphyaddr = boot_cpu_data.x86_phys_bits;
-
-	/*
-	 * Set the reserved bits and the present bit of an paging-structure
-	 * entry to generate page fault with PFER.RSV = 1.
-	 */
-	/* Mask the reserved physical address bits. */
-	mask = rsvd_bits(maxphyaddr, 51);
-
-	/* Bit 62 is always reserved for 32bit host. */
-	mask |= 0x3ull << 62;
-
-	/* Set the present bit. */
-	mask |= 1ull;
-
-#ifdef CONFIG_X86_64
-	/*
-	 * If reserved bit is not supported, clear the present bit to disable
-	 * mmio page fault.
-	 */
-	if (maxphyaddr == 52)
-		mask &= ~1ull;
-#endif
-
-	kvm_mmu_set_mmio_spte_mask(mask);
-}
-
 #ifdef CONFIG_X86_64
 static void pvclock_gtod_update_fn(struct work_struct *work)
 {
--
1.9.1
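[For readers outside the kernel tree, the mask computation in the new kvm_set_mmio_spte_mask() can be sketched in plain C. The constants and rsvd_bits() below are stand-ins for the kernel's definitions, with MMIO_SPTE_GEN_HIGH_SHIFT assumed to be 52 to match the "52bit" case discussed in the changelog; this is an illustration, not the kernel code itself.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the kernel definitions used by the patch (assumed values). */
#define MMIO_SPTE_GEN_HIGH_SHIFT 52
#define PT_PRESENT_MASK (1ULL << 0)

/* Mask with bits s..e set, modeled on the kernel's rsvd_bits(); valid for s <= e + 1. */
static uint64_t rsvd_bits(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

static uint64_t mmio_spte_mask(int maxphyaddr)
{
	/* Reserved physical-address bits above maxphyaddr, up to bit 51. */
	uint64_t mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);

	/* Bits 62-63 are always set to identify an MMIO spte. */
	mask |= 0x3ULL << 62;

	/* Set the present bit only when a reserved PA bit actually exists. */
	if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
		mask |= PT_PRESENT_MASK;

	return mask;
}
```

[With maxphyaddr == 52 no physical-address bit is reserved, so the sketch leaves the present bit clear and the mask is just the two magic bits; with a smaller maxphyaddr the present bit is set so an MMIO access faults on the reserved bits.]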
^ permalink raw reply related [flat|nested] 4+ messages in thread
* Re: [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-17 11:31 [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte Tiejun Chen
@ 2014-11-17 11:40 ` Paolo Bonzini
2014-11-18 9:23 ` Chen, Tiejun
0 siblings, 1 reply; 4+ messages in thread
From: Paolo Bonzini @ 2014-11-17 11:40 UTC (permalink / raw)
To: Tiejun Chen; +Cc: kvm
On 17/11/2014 12:31, Tiejun Chen wrote:
> In non-ept 64-bit of PAE case maxphyaddr may be 52bit as well,
There is no such thing as 64-bit PAE.
On 32-bit PAE hosts, PTEs have bit 62 reserved, as in your patch:
> + /* Magic bits are always reserved for 32bit host. */
> + mask |= 0x3ull << 62;
so there is no need to disable the MMIO page fault.
Paolo
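[Paolo's point can be restated with a tiny model. The hardware behaviour is simplified here purely for illustration: on 32-bit PAE the CPU treats bit 62 of a PTE as reserved, so any spte carrying the patch's magic bits takes a reserved-bit (PFEC.RSVD) fault even when the present bit is left clear.]

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: on 32-bit PAE hosts, bit 62 of a PTE is hardware-reserved. */
#define PAE_HW_RSVD_MASK (1ULL << 62)

/* The magic bits the patch always ORs into an MMIO spte. */
#define MMIO_MAGIC_BITS (0x3ULL << 62)
#define PRESENT_BIT 1ULL

/* Does this spte hit a hardware-reserved bit, triggering PFEC.RSVD? */
static int pae_rsvd_fault(uint64_t spte)
{
	return (spte & PAE_HW_RSVD_MASK) != 0;
}
```

[The magic bits alone are enough: pae_rsvd_fault() is true for an MMIO spte with or without the present bit, which is why a single reserved bit suffices on 32-bit PAE and the MMIO page fault need not be disabled there.]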
* Re: [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-17 11:40 ` Paolo Bonzini
@ 2014-11-18 9:23 ` Chen, Tiejun
2014-11-18 10:06 ` Paolo Bonzini
0 siblings, 1 reply; 4+ messages in thread
From: Chen, Tiejun @ 2014-11-18 9:23 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm
On 2014/11/17 19:40, Paolo Bonzini wrote:
>
>
> On 17/11/2014 12:31, Tiejun Chen wrote:
>> In non-ept 64-bit of PAE case maxphyaddr may be 52bit as well,
>
> There is no such thing as 64-bit PAE.
>
Definitely.
> On 32-bit PAE hosts, PTEs have bit 62 reserved, as in your patch:
>
Do you mean just one reserved bit is fine enough in this case?
Thanks
Tiejun
* Re: [v2][PATCH] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-18 9:23 ` Chen, Tiejun
@ 2014-11-18 10:06 ` Paolo Bonzini
0 siblings, 0 replies; 4+ messages in thread
From: Paolo Bonzini @ 2014-11-18 10:06 UTC (permalink / raw)
To: Chen, Tiejun; +Cc: kvm
On 18/11/2014 10:23, Chen, Tiejun wrote:
>> On 32-bit PAE hosts, PTEs have bit 62 reserved, as in your patch:
>
> Do you mean just one reserved bit is fine enough in this case?
Yes.
Paolo