* [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()
@ 2014-11-14 9:31 Tiejun Chen
2014-11-14 9:31 ` [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte Tiejun Chen
2014-11-14 10:06 ` [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Paolo Bonzini
0 siblings, 2 replies; 8+ messages in thread
From: Tiejun Chen @ 2014-11-14 9:31 UTC (permalink / raw)
To: pbonzini; +Cc: kvm
In some real scenarios 'start' may not be less than 'end', e.g. when
maxphyaddr = 52.
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
---
arch/x86/kvm/mmu.h | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index bde8ee7..0e98b5e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -58,6 +58,8 @@
static inline u64 rsvd_bits(int s, int e)
{
+ if (unlikely(s > e))
+ return 0;
return ((1ULL << (e - s + 1)) - 1) << s;
}
--
1.9.1
* [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-14 9:31 [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Tiejun Chen
@ 2014-11-14 9:31 ` Tiejun Chen
2014-11-14 10:11 ` Paolo Bonzini
2014-11-14 10:06 ` [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Paolo Bonzini
1 sibling, 1 reply; 8+ messages in thread
From: Tiejun Chen @ 2014-11-14 9:31 UTC (permalink / raw)
To: pbonzini; +Cc: kvm
In the PAE case maxphyaddr may be 52 bits as well, so we also need to
disable the mmio page fault there. Here we can check MMIO_SPTE_GEN_HIGH_SHIFT
directly to determine whether we should set the present bit, which
also brings a little cleanup.
Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.c | 23 +++++++++++++++++++++++
arch/x86/kvm/x86.c | 30 ------------------------------
3 files changed, 24 insertions(+), 30 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index dc932d3..667f2b6 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -809,6 +809,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
struct kvm_memory_slot *slot,
gfn_t gfn_offset, unsigned long mask);
void kvm_mmu_zap_all(struct kvm *kvm);
+void kvm_set_mmio_spte_mask(void);
void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ac1c4de..8e4be36 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -295,6 +295,29 @@ static bool check_mmio_spte(struct kvm *kvm, u64 spte)
return likely(kvm_gen == spte_gen);
}
+/*
+ * Set the reserved bits and the present bit of a paging-structure
+ * entry to generate a page fault with PFER.RSV = 1.
+ */
+void kvm_set_mmio_spte_mask(void)
+{
+ u64 mask;
+ int maxphyaddr = boot_cpu_data.x86_phys_bits;
+
+ /* Mask the reserved physical address bits. */
+ mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);
+
+ /* Magic bits are always reserved for 32bit host. */
+ mask |= 0x3ull << 62;
+
+ /* Set the present bit to enable mmio page fault. */
+ if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
+ mask = PT_PRESENT_MASK;
+
+ kvm_mmu_set_mmio_spte_mask(mask);
+}
+EXPORT_SYMBOL_GPL(kvm_set_mmio_spte_mask);
+
void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
u64 dirty_mask, u64 nx_mask, u64 x_mask)
{
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index f85da5c..550f179 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -5596,36 +5596,6 @@ void kvm_after_handle_nmi(struct kvm_vcpu *vcpu)
}
EXPORT_SYMBOL_GPL(kvm_after_handle_nmi);
-static void kvm_set_mmio_spte_mask(void)
-{
- u64 mask;
- int maxphyaddr = boot_cpu_data.x86_phys_bits;
-
- /*
- * Set the reserved bits and the present bit of an paging-structure
- * entry to generate page fault with PFER.RSV = 1.
- */
- /* Mask the reserved physical address bits. */
- mask = rsvd_bits(maxphyaddr, 51);
-
- /* Bit 62 is always reserved for 32bit host. */
- mask |= 0x3ull << 62;
-
- /* Set the present bit. */
- mask |= 1ull;
-
-#ifdef CONFIG_X86_64
- /*
- * If reserved bit is not supported, clear the present bit to disable
- * mmio page fault.
- */
- if (maxphyaddr == 52)
- mask &= ~1ull;
-#endif
-
- kvm_mmu_set_mmio_spte_mask(mask);
-}
-
#ifdef CONFIG_X86_64
static void pvclock_gtod_update_fn(struct work_struct *work)
{
--
1.9.1
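Assuming MMIO_SPTE_GEN_HIGH_SHIFT is 52, as in kernels of this era, the
present-bit decision above sketches out as follows. This is a user-space
illustration only; the patch assigns "mask =" where the review below
questions it, so the sketch already uses "|=".

#include <stdio.h>
#include <stdint.h>

#define MMIO_SPTE_GEN_HIGH_SHIFT 52	/* assumption: value in this era */
#define PT_PRESENT_MASK 1ULL

/* rsvd_bits() with the guard from patch 1. */
static uint64_t rsvd_bits(int s, int e)
{
	if (s > e)
		return 0;
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	for (int maxphyaddr = 40; maxphyaddr <= 52; maxphyaddr += 6) {
		uint64_t mask = rsvd_bits(maxphyaddr,
					  MMIO_SPTE_GEN_HIGH_SHIFT - 1);
		/* Bits 62-63, reserved on a 32-bit (PAE) host. */
		mask |= 0x3ULL << 62;
		/* Set present only while a real reserved bit remains. */
		if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
			mask |= PT_PRESENT_MASK;
		printf("maxphyaddr=%2d mask=%#018llx present=%d\n",
		       maxphyaddr, (unsigned long long)mask,
		       (int)(mask & PT_PRESENT_MASK));
	}
	return 0;
}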
* Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()
2014-11-14 9:31 [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Tiejun Chen
2014-11-14 9:31 ` [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte Tiejun Chen
@ 2014-11-14 10:06 ` Paolo Bonzini
2014-11-17 1:34 ` Chen, Tiejun
1 sibling, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-11-14 10:06 UTC (permalink / raw)
To: Tiejun Chen; +Cc: kvm
On 14/11/2014 10:31, Tiejun Chen wrote:
> In some real scenarios 'start' may not be less than 'end', e.g. when
> maxphyaddr = 52.
>
> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
> ---
> arch/x86/kvm/mmu.h | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
> index bde8ee7..0e98b5e 100644
> --- a/arch/x86/kvm/mmu.h
> +++ b/arch/x86/kvm/mmu.h
> @@ -58,6 +58,8 @@
>
> static inline u64 rsvd_bits(int s, int e)
> {
> + if (unlikely(s > e))
> + return 0;
> return ((1ULL << (e - s + 1)) - 1) << s;
> }
>
>
s == e + 1 is supported:
(1ULL << (e - (e + 1) + 1)) - 1) << s ==
(1ULL << 0) << s ==
0
Is there any case where s is even bigger?
Paolo
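For reference, a minimal user-space check of the two boundary cases (a
hypothetical test harness, not kernel code):

#include <stdio.h>
#include <stdint.h>

/* rsvd_bits() as it stands before patch 1, without the guard. */
static uint64_t rsvd_bits_unguarded(int s, int e)
{
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	/* s == e + 1: the shift count is 0, so the result is already 0. */
	printf("rsvd_bits(52, 51) = %llu\n",
	       (unsigned long long)rsvd_bits_unguarded(52, 51));
	/*
	 * s == e + 2 would shift by -1, which is undefined behaviour in C;
	 * that is the only kind of case the proposed guard changes.
	 */
	return 0;
}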
* Re: [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-14 9:31 ` [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte Tiejun Chen
@ 2014-11-14 10:11 ` Paolo Bonzini
2014-11-17 1:55 ` Chen, Tiejun
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-11-14 10:11 UTC (permalink / raw)
To: Tiejun Chen; +Cc: kvm
On 14/11/2014 10:31, Tiejun Chen wrote:
> In the PAE case maxphyaddr may be 52 bits as well, so we also need to
> disable the mmio page fault there. Here we can check MMIO_SPTE_GEN_HIGH_SHIFT
> directly to determine whether we should set the present bit, which
> also brings a little cleanup.
>
> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 +
> arch/x86/kvm/mmu.c | 23 +++++++++++++++++++++++
> arch/x86/kvm/x86.c | 30 ------------------------------
> 3 files changed, 24 insertions(+), 30 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index dc932d3..667f2b6 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -809,6 +809,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
> struct kvm_memory_slot *slot,
> gfn_t gfn_offset, unsigned long mask);
> void kvm_mmu_zap_all(struct kvm *kvm);
> +void kvm_set_mmio_spte_mask(void);
> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
> unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
> void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
> index ac1c4de..8e4be36 100644
> --- a/arch/x86/kvm/mmu.c
> +++ b/arch/x86/kvm/mmu.c
> @@ -295,6 +295,29 @@ static bool check_mmio_spte(struct kvm *kvm, u64 spte)
> return likely(kvm_gen == spte_gen);
> }
>
> +/*
> + * Set the reserved bits and the present bit of a paging-structure
> + * entry to generate a page fault with PFER.RSV = 1.
> + */
> +void kvm_set_mmio_spte_mask(void)
> +{
> + u64 mask;
> + int maxphyaddr = boot_cpu_data.x86_phys_bits;
> +
> + /* Mask the reserved physical address bits. */
> + mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);
> +
> + /* Magic bits are always reserved for 32bit host. */
> + mask |= 0x3ull << 62;
This should be enough to trigger the page fault on PAE systems.
The problem is specific to non-EPT 64-bit hosts, where the PTEs have no
reserved bits beyond 51:MAXPHYADDR. On EPT we use WX- permissions to
trigger an EPT misconfig; on 32-bit systems we have bit 62.
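A small user-space sketch of the resulting masks (hypothetical program;
only the constants mirror the patch):

#include <stdio.h>
#include <stdint.h>

/* rsvd_bits() with the guard from patch 1. */
static uint64_t rsvd_bits(int s, int e)
{
	if (s > e)
		return 0;
	return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
	int widths[] = { 36, 46, 52 };

	for (int i = 0; i < 3; i++) {
		int maxphyaddr = widths[i];
		/* Reserved physical-address bits 51:maxphyaddr. */
		uint64_t mask = rsvd_bits(maxphyaddr, 51);
		/* Bits 62-63, reserved only on a 32-bit (PAE) host. */
		mask |= 0x3ULL << 62;
		/*
		 * At maxphyaddr == 52 the rsvd_bits() contribution is zero,
		 * so a 64-bit non-EPT host has no reserved bit left to
		 * trip a reserved-bit fault with.
		 */
		printf("maxphyaddr=%2d mask=%#018llx\n", maxphyaddr,
		       (unsigned long long)mask);
	}
	return 0;
}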
> + /* Set the present bit to enable mmio page fault. */
> + if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
> + mask = PT_PRESENT_MASK;
Shouldn't this be "|=" anyway, instead of "="?
Paolo
> +
> + kvm_mmu_set_mmio_spte_mask(mask);
> +}
> +EXPORT_SYMBOL_GPL(kvm_set_mmio_spte_mask);
> +
> void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
> u64 dirty_mask, u64 nx_mask, u64 x_mask)
> {
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index f85da5c..550f179 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -5596,36 +5596,6 @@ void kvm_after_handle_nmi(struct kvm_vcpu *vcpu)
> }
> EXPORT_SYMBOL_GPL(kvm_after_handle_nmi);
>
> -static void kvm_set_mmio_spte_mask(void)
> -{
> - u64 mask;
> - int maxphyaddr = boot_cpu_data.x86_phys_bits;
> -
> - /*
> - * Set the reserved bits and the present bit of an paging-structure
> - * entry to generate page fault with PFER.RSV = 1.
> - */
> - /* Mask the reserved physical address bits. */
> - mask = rsvd_bits(maxphyaddr, 51);
> -
> - /* Bit 62 is always reserved for 32bit host. */
> - mask |= 0x3ull << 62;
> -
> - /* Set the present bit. */
> - mask |= 1ull;
> -
> -#ifdef CONFIG_X86_64
> - /*
> - * If reserved bit is not supported, clear the present bit to disable
> - * mmio page fault.
> - */
> - if (maxphyaddr == 52)
> - mask &= ~1ull;
> -#endif
> -
> - kvm_mmu_set_mmio_spte_mask(mask);
> -}
> -
> #ifdef CONFIG_X86_64
> static void pvclock_gtod_update_fn(struct work_struct *work)
> {
>
* Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()
2014-11-14 10:06 ` [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits() Paolo Bonzini
@ 2014-11-17 1:34 ` Chen, Tiejun
2014-11-17 9:22 ` Paolo Bonzini
0 siblings, 1 reply; 8+ messages in thread
From: Chen, Tiejun @ 2014-11-17 1:34 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm
On 2014/11/14 18:06, Paolo Bonzini wrote:
>
>
> On 14/11/2014 10:31, Tiejun Chen wrote:
>> In some real scenarios 'start' may not be less than 'end', e.g. when
>> maxphyaddr = 52.
>>
>> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
>> ---
>> arch/x86/kvm/mmu.h | 2 ++
>> 1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>> index bde8ee7..0e98b5e 100644
>> --- a/arch/x86/kvm/mmu.h
>> +++ b/arch/x86/kvm/mmu.h
>> @@ -58,6 +58,8 @@
>>
>> static inline u64 rsvd_bits(int s, int e)
>> {
>> + if (unlikely(s > e))
>> + return 0;
>> return ((1ULL << (e - s + 1)) - 1) << s;
>> }
>>
>>
>
> s == e + 1 is supported:
>
> (1ULL << (e - (e + 1) + 1)) - 1) << s ==
(1ULL << (e - (e + 1) + 1)) - 1) << s
= (1ULL << (e - e - 1) + 1)) - 1) << s
= (1ULL << (-1) + 1)) - 1) << s
= (1ULL << (0) - 1) << s
= (1ULL << (- 1) << s
Am I missing something?
Thanks
Tiejun
> (1ULL << 0) << s ==
> 0
>
> Is there any case where s is even bigger?
>
> Paolo
* Re: [RFC][PATCH 2/2] kvm: x86: mmio: fix setting the present bit of mmio spte
2014-11-14 10:11 ` Paolo Bonzini
@ 2014-11-17 1:55 ` Chen, Tiejun
0 siblings, 0 replies; 8+ messages in thread
From: Chen, Tiejun @ 2014-11-17 1:55 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm
On 2014/11/14 18:11, Paolo Bonzini wrote:
>
>
> On 14/11/2014 10:31, Tiejun Chen wrote:
>> In the PAE case maxphyaddr may be 52 bits as well, so we also need to
>> disable the mmio page fault there. Here we can check MMIO_SPTE_GEN_HIGH_SHIFT
>> directly to determine whether we should set the present bit, which
>> also brings a little cleanup.
>>
>> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
>> ---
>> arch/x86/include/asm/kvm_host.h | 1 +
>> arch/x86/kvm/mmu.c | 23 +++++++++++++++++++++++
>> arch/x86/kvm/x86.c | 30 ------------------------------
>> 3 files changed, 24 insertions(+), 30 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index dc932d3..667f2b6 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -809,6 +809,7 @@ void kvm_mmu_write_protect_pt_masked(struct kvm *kvm,
>> struct kvm_memory_slot *slot,
>> gfn_t gfn_offset, unsigned long mask);
>> void kvm_mmu_zap_all(struct kvm *kvm);
>> +void kvm_set_mmio_spte_mask(void);
>> void kvm_mmu_invalidate_mmio_sptes(struct kvm *kvm);
>> unsigned int kvm_mmu_calculate_mmu_pages(struct kvm *kvm);
>> void kvm_mmu_change_mmu_pages(struct kvm *kvm, unsigned int kvm_nr_mmu_pages);
>> diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
>> index ac1c4de..8e4be36 100644
>> --- a/arch/x86/kvm/mmu.c
>> +++ b/arch/x86/kvm/mmu.c
>> @@ -295,6 +295,29 @@ static bool check_mmio_spte(struct kvm *kvm, u64 spte)
>> return likely(kvm_gen == spte_gen);
>> }
>>
>> +/*
>> + * Set the reserved bits and the present bit of a paging-structure
>> + * entry to generate a page fault with PFER.RSV = 1.
>> + */
>> +void kvm_set_mmio_spte_mask(void)
>> +{
>> + u64 mask;
>> + int maxphyaddr = boot_cpu_data.x86_phys_bits;
>> +
>> + /* Mask the reserved physical address bits. */
>> + mask = rsvd_bits(maxphyaddr, MMIO_SPTE_GEN_HIGH_SHIFT - 1);
>> +
>> + /* Magic bits are always reserved for 32bit host. */
>> + mask |= 0x3ull << 62;
>
> This should be enough to trigger the page fault on PAE systems.
>
> The problem is specific to non-EPT 64-bit hosts, where the PTEs have no
> reserved bits beyond 51:MAXPHYADDR. On EPT we use WX- permissions to
> trigger EPT misconfig, on 32-bit systems we have bit 62.
Thanks for your explanation.
>
>> + /* Set the present bit to enable mmio page fault. */
>> + if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
>> + mask = PT_PRESENT_MASK;
>
> Shouldn't this be "|=" anyway, instead of "="?
>
Yeah, I just missed this. Thanks a lot, I will fix it in the next revision.
Thanks
Tiejun
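The corrected hunk would presumably read as follows (a sketch of the
expected v2, not a posted patch):

	/* Set the present bit to enable mmio page fault. */
	if (maxphyaddr < MMIO_SPTE_GEN_HIGH_SHIFT)
		mask |= PT_PRESENT_MASK;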
* Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()
2014-11-17 1:34 ` Chen, Tiejun
@ 2014-11-17 9:22 ` Paolo Bonzini
2014-11-17 9:27 ` Chen, Tiejun
0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2014-11-17 9:22 UTC (permalink / raw)
To: Chen, Tiejun; +Cc: kvm
On 17/11/2014 02:34, Chen, Tiejun wrote:
> On 2014/11/14 18:06, Paolo Bonzini wrote:
>>
>>
>> On 14/11/2014 10:31, Tiejun Chen wrote:
>>> In some real scenarios 'start' may not be less than 'end', e.g. when
>>> maxphyaddr = 52.
>>>
>>> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
>>> ---
>>> arch/x86/kvm/mmu.h | 2 ++
>>> 1 file changed, 2 insertions(+)
>>>
>>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>>> index bde8ee7..0e98b5e 100644
>>> --- a/arch/x86/kvm/mmu.h
>>> +++ b/arch/x86/kvm/mmu.h
>>> @@ -58,6 +58,8 @@
>>>
>>> static inline u64 rsvd_bits(int s, int e)
>>> {
>>> + if (unlikely(s > e))
>>> + return 0;
>>> return ((1ULL << (e - s + 1)) - 1) << s;
>>> }
>>>
>>>
>>
>> s == e + 1 is supported:
>>
>> (1ULL << (e - (e + 1) + 1)) - 1) << s ==
>
> (1ULL << (e - (e + 1) + 1)) - 1) << s
> = (1ULL << (e - e - 1) + 1)) - 1) << s
> = (1ULL << (-1) + 1)) - 1) << s
no,
((1ULL << (-1 + 1)) - 1) << s
> = (1ULL << (0) - 1) << s
((1ULL << (0)) - 1) << s
> = (1ULL << (- 1) << s
(1 - 1) << s
0 << s
Paolo
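A one-line numeric check of the parenthesization (hypothetical harness):

#include <stdio.h>

int main(void)
{
	int s = 5, e = s - 1;	/* the s == e + 1 case under discussion */
	/* ((1ULL << (e - s + 1)) - 1) << s == ((1ULL << 0) - 1) << 5 == 0 */
	printf("%llu\n", ((1ULL << (e - s + 1)) - 1) << s);
	return 0;
}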
>
> Am I missing something?
>
> Thanks
> Tiejun
>
>> (1ULL << 0) << s ==
>> 0
>>
>> Is there any case where s is even bigger?
>>
>> Paolo
* Re: [RFC][PATCH 1/2] kvm: x86: mmu: return zero if s > e in rsvd_bits()
2014-11-17 9:22 ` Paolo Bonzini
@ 2014-11-17 9:27 ` Chen, Tiejun
0 siblings, 0 replies; 8+ messages in thread
From: Chen, Tiejun @ 2014-11-17 9:27 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: kvm
On 2014/11/17 17:22, Paolo Bonzini wrote:
>
>
> On 17/11/2014 02:34, Chen, Tiejun wrote:
>> On 2014/11/14 18:06, Paolo Bonzini wrote:
>>>
>>>
>>> On 14/11/2014 10:31, Tiejun Chen wrote:
>>>> In some real scenarios 'start' may not be less than 'end', e.g. when
>>>> maxphyaddr = 52.
>>>>
>>>> Signed-off-by: Tiejun Chen <tiejun.chen@intel.com>
>>>> ---
>>>> arch/x86/kvm/mmu.h | 2 ++
>>>> 1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>>>> index bde8ee7..0e98b5e 100644
>>>> --- a/arch/x86/kvm/mmu.h
>>>> +++ b/arch/x86/kvm/mmu.h
>>>> @@ -58,6 +58,8 @@
>>>>
>>>> static inline u64 rsvd_bits(int s, int e)
>>>> {
>>>> + if (unlikely(s > e))
>>>> + return 0;
>>>> return ((1ULL << (e - s + 1)) - 1) << s;
>>>> }
>>>>
>>>>
>>>
>>> s == e + 1 is supported:
>>>
>>> (1ULL << (e - (e + 1) + 1)) - 1) << s ==
>>
>> (1ULL << (e - (e + 1) + 1)) - 1) << s
>> = (1ULL << (e - e - 1) + 1)) - 1) << s
>> = (1ULL << (-1) + 1)) - 1) << s
>
> no,
You're right, I was reading the "()" wrongly.
Sorry to bother you.
Thanks
Tiejun