* [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
@ 2014-08-18 9:50 Wanpeng Li
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 9:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Wanpeng Li
The EPT misconfig handler in KVM checks which reason led to the EPT
misconfiguration after a vmexit. One of the reasons is that an EPT
paging-structure entry is configured with settings reserved for
future functionality. However, the handler cannot identify whether
the reserved bits of a 1-GByte page entry are configured, since a
PDPTE that maps a 1-GByte page reserves bits 29:12, instead of
bits 7:3 which are reserved for a PDPTE that references an EPT
page directory. This patch fixes it by treating bits 29:12 as
reserved for 1-GByte pages.
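For reference, the per-level reserved-bit computation the patch arrives at can be sketched outside the kernel as follows. This is an illustrative standalone sketch, not the kernel code itself; `phys_bits` stands in for `boot_cpu_data.x86_phys_bits`, and note that Paolo's review in this thread points out that level-4 entries also reach the `level > 2` branch, where only the 0xf8 mask applies.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the reserved-bit mask for an EPT paging-structure entry
 * after this patch.  level 4 = PML4E, 3 = PDPTE, 2 = PDE, 1 = PTE;
 * bit 7 set in a PDPTE/PDE marks a 1-GByte/2-MByte page mapping.
 */
static uint64_t ept_rsvd_mask_sketch(uint64_t spte, int level, int phys_bits)
{
	uint64_t mask = 0;
	int i;

	/* bits above the physical address width are always reserved */
	for (i = 51; i > phys_bits; i--)
		mask |= 1ULL << i;

	if (level > 2) {
		if (spte & (1ULL << 7))
			/* 1-GByte page: bits 29:12 reserved */
			mask |= 0x3ffff000;
		else
			/* bits 7:3 reserved */
			mask |= 0xf8;
	} else if (level == 2) {
		if (spte & (1ULL << 7))
			/* 2-MByte page: bits 20:12 reserved */
			mask |= 0x1ff000;
		else
			mask |= 0xf8;
	}
	return mask;
}
```

With a 48-bit physical address width, a PDPTE with bit 7 set now yields the 29:12 range instead of 7:3, which is what lets the misconfig handler recognize a bad 1-GByte mapping.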
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
arch/x86/kvm/vmx.c | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index bfe11cf..71cbee5 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -5521,9 +5521,14 @@ static u64 ept_rsvd_mask(u64 spte, int level)
for (i = 51; i > boot_cpu_data.x86_phys_bits; i--)
mask |= (1ULL << i);
- if (level > 2)
- /* bits 7:3 reserved */
- mask |= 0xf8;
+ if (level > 2) {
+ if (spte & (1 << 7))
+ /* 1GB ref, bits 29:12 */
+ mask |= 0x3ffff000;
+ else
+ /* bits 7:3 reserved */
+ mask |= 0xf8;
+ }
else if (level == 2) {
if (spte & (1ULL << 7))
/* 2MB ref, bits 20:12 reserved */
@@ -5561,7 +5566,8 @@ static void ept_misconfig_inspect_spte(struct kvm_vcpu *vcpu, u64 spte,
WARN_ON(1);
}
- if (level == 1 || (level == 2 && (spte & (1ULL << 7)))) {
+ if (level == 1 || ((level == 3 || level == 2)
+ && (spte & (1ULL << 7)))) {
u64 ept_mem_type = (spte & 0x38) >> 3;
if (ept_mem_type == 2 || ept_mem_type == 3 ||
--
1.9.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
@ 2014-08-18 9:50 ` Wanpeng Li
2014-08-18 10:20 ` Paolo Bonzini
2014-08-18 10:52 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode Wanpeng Li
` (4 subsequent siblings)
5 siblings, 2 replies; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 9:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Wanpeng Li
The fpu_activate hook was introduced by commit 6b52d186 (KVM: Activate fpu
on clts); however, it currently has no users, so this patch drops it.
Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
arch/x86/include/asm/kvm_host.h | 1 -
arch/x86/kvm/svm.c | 1 -
arch/x86/kvm/vmx.c | 1 -
3 files changed, 3 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 5724601..b68f3e5 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -710,7 +710,6 @@ struct kvm_x86_ops {
void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
- void (*fpu_activate)(struct kvm_vcpu *vcpu);
void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
void (*tlb_flush)(struct kvm_vcpu *vcpu);
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index ddf7427..1f49c86 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
.cache_reg = svm_cache_reg,
.get_rflags = svm_get_rflags,
.set_rflags = svm_set_rflags,
- .fpu_activate = svm_fpu_activate,
.fpu_deactivate = svm_fpu_deactivate,
.tlb_flush = svm_flush_tlb,
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 71cbee5..2963303 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
.cache_reg = vmx_cache_reg,
.get_rflags = vmx_get_rflags,
.set_rflags = vmx_set_rflags,
- .fpu_activate = vmx_fpu_activate,
.fpu_deactivate = vmx_fpu_deactivate,
.tlb_flush = vmx_flush_tlb,
--
1.9.1
* [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
@ 2014-08-18 9:50 ` Wanpeng Li
2014-08-18 10:24 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs Wanpeng Li
` (3 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 9:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Wanpeng Li
vmx_segment_cache_clear() is already called by vmx_set_segment(),
which leads to vmx_segment_cache_clear() being called twice in
enter_pmode(). This patch removes the duplicate call site.
Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
arch/x86/kvm/vmx.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 2963303..c97c44c 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -3183,8 +3183,6 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
vmx->rmode.vm86_active = 0;
- vmx_segment_cache_clear(vmx);
-
vmx_set_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
flags = vmcs_readl(GUEST_RFLAGS);
--
1.9.1
* [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
2014-08-18 9:50 ` [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode Wanpeng Li
@ 2014-08-18 9:50 ` Wanpeng Li
2014-08-18 10:28 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits " Wanpeng Li
` (2 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 9:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Wanpeng Li
The first entry in each pair (IA32_MTRR_PHYSBASEn) defines the base
address and memory type for the range; the second entry (IA32_MTRR_PHYSMASKn)
contains a mask used to determine the address range. The legal values
for the type field of IA32_MTRR_PHYSBASEn are 0, 1, 4, 5, and 6. However,
IA32_MTRR_PHYSMASKn has no type field, so this patch avoids checking
whether the type field is legal for IA32_MTRR_PHYSMASKn.
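The base/mask decoding the patch relies on can be sketched standalone as below. The helper names are illustrative, not from the kernel; 0x200 is the MSR number of IA32_MTRR_PHYSBASE0, and the type check mirrors the legal values listed above.

```c
#include <assert.h>

/*
 * Decode a variable-range MTRR MSR (0x200 = IA32_MTRR_PHYSBASE0,
 * 0x201 = IA32_MTRR_PHYSMASK0, 0x202 = IA32_MTRR_PHYSBASE1, ...).
 * Returns the pair index; *is_mask is 1 for a PHYSMASK register.
 */
static int var_mtrr_decode(unsigned msr, int *is_mask)
{
	int idx = (msr - 0x200) / 2;

	*is_mask = msr - 0x200 - 2 * idx;	/* equivalently: msr & 1 */
	return idx;
}

/* Only PHYSBASEn carries a memory type; the legal values are 0, 1, 4, 5, 6. */
static int valid_mtrr_type(unsigned type)
{
	return type < 7 && type != 2 && type != 3;
}
```

Only when `*is_mask` is 0 (a PHYSBASEn register) does the type-field check apply; a PHYSMASKn write skips it, which is exactly the behavior the patch restores.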
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
arch/x86/kvm/x86.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 204422d..30f55cc 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1767,7 +1767,15 @@ static bool mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
}
/* variable MTRRs */
- return valid_mtrr_type(data & 0xff);
+ if (msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR) {
+ int idx, is_mtrr_mask;
+
+ idx = (msr - 0x200) / 2;
+ is_mtrr_mask = msr - 0x200 - 2 * idx;
+ if (!is_mtrr_mask)
+ return valid_mtrr_type(data & 0xff);
+ }
+ return true;
}
static int set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr, u64 data)
--
1.9.1
* [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits of Variable Range MTRRs
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
` (2 preceding siblings ...)
2014-08-18 9:50 ` [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs Wanpeng Li
@ 2014-08-18 9:50 ` Wanpeng Li
2014-08-18 12:27 ` Wanpeng Li
2014-08-18 10:18 ` [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Paolo Bonzini
2014-08-18 10:52 ` Xiao Guangrong
5 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 9:50 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Wanpeng Li
Section 11.11.2.3 of the SDM mentions: "All other bits in the IA32_MTRR_PHYSBASEn
and IA32_MTRR_PHYSMASKn registers are reserved; the processor generates a
general-protection exception (#GP) if software attempts to write to them". This
patch implements that behavior in KVM.
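The reserved-bit masks the patch constructs can be sketched in isolation as follows. This is an illustrative helper, not the kernel function: bits 11:8 of PHYSBASEn are reserved (bits 7:0 hold the type), bits 10:0 of PHYSMASKn are reserved (bit 11 is the valid flag), and everything above the physical address width is reserved in both.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Reserved bits for a variable-range MTRR MSR: all bits above the
 * physical address width, plus bits 11:8 (0xf00) for PHYSBASEn or
 * bits 10:0 (0x7ff) for PHYSMASKn.
 */
static uint64_t var_mtrr_rsvd(int is_mask, int phys_bits)
{
	uint64_t mask = 0;
	int i;

	for (i = 63; i > phys_bits; i--)
		mask |= 1ULL << i;

	mask |= is_mask ? 0x7ff : 0xf00;
	return mask;
}
```

A write whose data intersects this mask would then raise #GP, matching the SDM wording quoted above (the test below assumes a 36-bit physical address width for concreteness).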
Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
---
arch/x86/kvm/x86.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index caaffeb..aa64c70 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -1769,11 +1769,22 @@ static bool mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
/* variable MTRRs */
if (msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR) {
int idx, is_mtrr_mask;
+ u64 mask = 0;
idx = (msr - 0x200) / 2;
is_mtrr_mask = msr - 0x200 - 2 * idx;
- if (!is_mtrr_mask)
- return valid_mtrr_type(data & 0xff);
+ for (i = 63; i > boot_cpu_data.x86_phys_bits; i--)
+ mask |= (1ULL << i);
+ if (!is_mtrr_mask) {
+ if (!valid_mtrr_type(data & 0xff))
+ return false;
+ mask |= 0xf00;
+ } else
+ mask |= 0x7ff;
+ if (data & mask) {
+ kvm_inject_gp(vcpu, 0);
+ return false;
+ }
}
return true;
}
--
1.9.1
* Re: [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
` (3 preceding siblings ...)
2014-08-18 9:50 ` [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits " Wanpeng Li
@ 2014-08-18 10:18 ` Paolo Bonzini
2014-08-19 2:17 ` Wanpeng Li
2014-08-18 10:52 ` Xiao Guangrong
5 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:18 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 11:50, Wanpeng Li ha scritto:
> EPT misconfig handler in kvm will check which reason lead to EPT
> misconfiguration after vmexit. One of the reasons is that an EPT
> paging-structure entry is configured with settings reserved for
> future functionality. However, the handler can't identify if
> paging-structure entry of reserved bits for 1-GByte page are
> configured, since PDPTE which point to 1-GByte page will reserve
> bits 29:12 instead of bits 7:3 which are reserved for PDPTE that
> references an EPT Page Directory. This patch fix it by reserve
> bits 29:12 for 1-GByte page.
>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> arch/x86/kvm/vmx.c | 14 ++++++++++----
> 1 file changed, 10 insertions(+), 4 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index bfe11cf..71cbee5 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -5521,9 +5521,14 @@ static u64 ept_rsvd_mask(u64 spte, int level)
> for (i = 51; i > boot_cpu_data.x86_phys_bits; i--)
> mask |= (1ULL << i);
>
> - if (level > 2)
> - /* bits 7:3 reserved */
> - mask |= 0xf8;
> + if (level > 2) {
level can be 4 here. You have to return 0xf8 for level == 4.
The same "if" statement then can cover both 2MB and 1GB pages, like
if (spte & (1ULL << 7))
/* 1GB/2MB page, bits 29:12 or 20:12 reserved respectively */
mask |= (PAGE_SIZE << ((level - 1) * 9)) - PAGE_SIZE;
else
/* bits 6:3 reserved */
mask |= 0x78;
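The suggested expression can be sanity-checked on its own (a quick standalone check, with PAGE_SIZE spelled out as 4096):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/*
 * Reserved address bits of a large-page leaf entry:
 * level 2 (2 MB page) -> bits 20:12, level 3 (1 GB page) -> bits 29:12.
 */
static uint64_t large_page_rsvd(int level)
{
	return (PAGE_SIZE << ((level - 1) * 9)) - PAGE_SIZE;
}
```

For level 2 this yields 0x1ff000 and for level 3 it yields 0x3ffff000, i.e. the same constants used in the patch, so one expression covers both page sizes.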
> - if (level == 1 || (level == 2 && (spte & (1ULL << 7)))) {
> + if (level == 1 || ((level == 3 || level == 2)
> + && (spte & (1ULL << 7)))) {
This condition can be simplified by checking the return value of ept_rsvd_mask.
If it includes 0x38, this is a large page. Otherwise it is a leaf page and
you can go down the "if".
Paolo
> u64 ept_mem_type = (spte & 0x38) >> 3;
>
> if (ept_mem_type == 2 || ept_mem_type == 3 ||
>
* Re: [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
@ 2014-08-18 10:20 ` Paolo Bonzini
2014-08-18 10:26 ` Avi Kivity
2014-08-18 10:52 ` Paolo Bonzini
1 sibling, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:20 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel, Avi Kivity
Il 18/08/2014 11:50, Wanpeng Li ha scritto:
> fpu_activate hook is introduced by commit 6b52d186 (KVM: Activate fpu on
> clts), however, there is no user currently, this patch drop it.
>
> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/svm.c | 1 -
> arch/x86/kvm/vmx.c | 1 -
> 3 files changed, 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5724601..b68f3e5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -710,7 +710,6 @@ struct kvm_x86_ops {
> void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
> unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
> void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
> - void (*fpu_activate)(struct kvm_vcpu *vcpu);
> void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
>
> void (*tlb_flush)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index ddf7427..1f49c86 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
> .cache_reg = svm_cache_reg,
> .get_rflags = svm_get_rflags,
> .set_rflags = svm_set_rflags,
> - .fpu_activate = svm_fpu_activate,
> .fpu_deactivate = svm_fpu_deactivate,
>
> .tlb_flush = svm_flush_tlb,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 71cbee5..2963303 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
> .cache_reg = vmx_cache_reg,
> .get_rflags = vmx_get_rflags,
> .set_rflags = vmx_set_rflags,
> - .fpu_activate = vmx_fpu_activate,
> .fpu_deactivate = vmx_fpu_deactivate,
>
> .tlb_flush = vmx_flush_tlb,
>
Avi/Gleb, do you remember any particular reason for this?
Paolo
* Re: [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode
2014-08-18 9:50 ` [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode Wanpeng Li
@ 2014-08-18 10:24 ` Paolo Bonzini
0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:24 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 11:50, Wanpeng Li ha scritto:
> vmx_segment_cache_clear() will be called by vmx_set_segment()
> which lead to vmx_segment_cache_clear() is called twice in
> enter_pmode(). This patch remove the duplicate call site.
>
> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> arch/x86/kvm/vmx.c | 2 --
> 1 file changed, 2 deletions(-)
>
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 2963303..c97c44c 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -3183,8 +3183,6 @@ static void enter_pmode(struct kvm_vcpu *vcpu)
>
> vmx->rmode.vm86_active = 0;
>
> - vmx_segment_cache_clear(vmx);
> -
> vmx_set_segment(vcpu, &vmx->rmode.segs[VCPU_SREG_TR], VCPU_SREG_TR);
>
> flags = vmcs_readl(GUEST_RFLAGS);
>
I think this is not useful and obscures the code a bit.
It would be a bigger improvement to see when fix_pmode_seg really needs
to call vmx_set_segment, and avoid the call. Are you seeing enter_pmode
high in any profiles?
Paolo
* Re: [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 10:20 ` Paolo Bonzini
@ 2014-08-18 10:26 ` Avi Kivity
2014-08-18 10:51 ` Paolo Bonzini
0 siblings, 1 reply; 19+ messages in thread
From: Avi Kivity @ 2014-08-18 10:26 UTC (permalink / raw)
To: Paolo Bonzini, Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
On 08/18/2014 01:20 PM, Paolo Bonzini wrote:
> Il 18/08/2014 11:50, Wanpeng Li ha scritto:
>> fpu_activate hook is introduced by commit 6b52d186 (KVM: Activate fpu on
>> clts), however, there is no user currently, this patch drop it.
>>
>> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>> arch/x86/include/asm/kvm_host.h | 1 -
>> arch/x86/kvm/svm.c | 1 -
>> arch/x86/kvm/vmx.c | 1 -
>> 3 files changed, 3 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 5724601..b68f3e5 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -710,7 +710,6 @@ struct kvm_x86_ops {
>> void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
>> unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
>> void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
>> - void (*fpu_activate)(struct kvm_vcpu *vcpu);
>> void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
>>
>> void (*tlb_flush)(struct kvm_vcpu *vcpu);
>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>> index ddf7427..1f49c86 100644
>> --- a/arch/x86/kvm/svm.c
>> +++ b/arch/x86/kvm/svm.c
>> @@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
>> .cache_reg = svm_cache_reg,
>> .get_rflags = svm_get_rflags,
>> .set_rflags = svm_set_rflags,
>> - .fpu_activate = svm_fpu_activate,
>> .fpu_deactivate = svm_fpu_deactivate,
>>
>> .tlb_flush = svm_flush_tlb,
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index 71cbee5..2963303 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
>> .cache_reg = vmx_cache_reg,
>> .get_rflags = vmx_get_rflags,
>> .set_rflags = vmx_set_rflags,
>> - .fpu_activate = vmx_fpu_activate,
>> .fpu_deactivate = vmx_fpu_deactivate,
>>
>> .tlb_flush = vmx_flush_tlb,
>>
> Avi/Gleb, do you remember any particular reason for this?
>
IIRC (vaguely) if we expect the fpu to be used in the near future, we
activate it eagerly so that we don't fault when it is used.
Prevents the sequence:
guest user: use fpu
#NM
host: reflect #NM to guest
guest kernel: CLTS
guest kernel: switch fpu state
#NM
host: switch fpu
guest kernel: switch fpu state (restarted)
guest user: use fpu (restarted)
Why was the user removed? Full-time eager fpu?
* Re: [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs
2014-08-18 9:50 ` [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs Wanpeng Li
@ 2014-08-18 10:28 ` Paolo Bonzini
0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:28 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 11:50, Wanpeng Li ha scritto:
> + if (msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR) {
This should be a WARN_ON, and the base/mask can be separated just with
an "&".
WARN_ON(!(msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR));
if ((msr & 1) == 0)
/* MTRR base */
return valid_mtrr_type(data & 0xff);
/* MTRR mask */
return true;
Please provide a unit test for this patch and for patch 1.
Paolo
> + int idx, is_mtrr_mask;
> +
> + idx = (msr - 0x200) / 2;
> + is_mtrr_mask = msr - 0x200 - 2 * idx;
> + if (!is_mtrr_mask)
> + return valid_mtrr_type(data & 0xff);
> + }
> + return true;
> }
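As an aside, the equivalence between the patch's index arithmetic and the `msr & 1` test suggested above can be verified directly (a standalone check; the function name is illustrative, and KVM_NR_VAR_MTRR is assumed to be 8 as in the kernel of that era):

```c
#include <assert.h>

/*
 * For msr in [0x200, 0x200 + 2 * KVM_NR_VAR_MTRR), the patch computes
 * is_mtrr_mask = msr - 0x200 - 2 * ((msr - 0x200) / 2); since 0x200 is
 * even, this is just the low bit of the MSR number.
 */
static int check_mask_bit_equivalence(void)
{
	unsigned msr;

	for (msr = 0x200; msr < 0x200 + 2 * 8; msr++) {
		int idx = (msr - 0x200) / 2;
		int is_mtrr_mask = msr - 0x200 - 2 * idx;

		if (is_mtrr_mask != (int)(msr & 1))
			return 0;
	}
	return 1;
}
```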
* Re: [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 10:26 ` Avi Kivity
@ 2014-08-18 10:51 ` Paolo Bonzini
2014-08-18 11:15 ` Avi Kivity
0 siblings, 1 reply; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:51 UTC (permalink / raw)
To: Avi Kivity, Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 12:26, Avi Kivity ha scritto:
>
> On 08/18/2014 01:20 PM, Paolo Bonzini wrote:
>> Il 18/08/2014 11:50, Wanpeng Li ha scritto:
>>> fpu_activate hook is introduced by commit 6b52d186 (KVM: Activate fpu on
>>> clts), however, there is no user currently, this patch drop it.
>>>
>>> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
>>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>> ---
>>> arch/x86/include/asm/kvm_host.h | 1 -
>>> arch/x86/kvm/svm.c | 1 -
>>> arch/x86/kvm/vmx.c | 1 -
>>> 3 files changed, 3 deletions(-)
>>>
>>> diff --git a/arch/x86/include/asm/kvm_host.h
>>> b/arch/x86/include/asm/kvm_host.h
>>> index 5724601..b68f3e5 100644
>>> --- a/arch/x86/include/asm/kvm_host.h
>>> +++ b/arch/x86/include/asm/kvm_host.h
>>> @@ -710,7 +710,6 @@ struct kvm_x86_ops {
>>> void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
>>> unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
>>> void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
>>> - void (*fpu_activate)(struct kvm_vcpu *vcpu);
>>> void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
>>> void (*tlb_flush)(struct kvm_vcpu *vcpu);
>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>> index ddf7427..1f49c86 100644
>>> --- a/arch/x86/kvm/svm.c
>>> +++ b/arch/x86/kvm/svm.c
>>> @@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
>>> .cache_reg = svm_cache_reg,
>>> .get_rflags = svm_get_rflags,
>>> .set_rflags = svm_set_rflags,
>>> - .fpu_activate = svm_fpu_activate,
>>> .fpu_deactivate = svm_fpu_deactivate,
>>> .tlb_flush = svm_flush_tlb,
>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>> index 71cbee5..2963303 100644
>>> --- a/arch/x86/kvm/vmx.c
>>> +++ b/arch/x86/kvm/vmx.c
>>> @@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
>>> .cache_reg = vmx_cache_reg,
>>> .get_rflags = vmx_get_rflags,
>>> .set_rflags = vmx_set_rflags,
>>> - .fpu_activate = vmx_fpu_activate,
>>> .fpu_deactivate = vmx_fpu_deactivate,
>>> .tlb_flush = vmx_flush_tlb,
>>>
>> Avi/Gleb, do you remember any particular reason for this?
>>
>
> IIRC (vaguely) if we expect the fpu to be used in the near future, we
> activate it eagerly so that we don't fault when it is used.
>
> Prevents the sequence:
>
> guest user: use fpu
> #NM
> host: reflect #NM to guest
> guest kernel: CLTS
> guest kernel: switch fpu state
> #NM
> host: switch fpu
> guest kernel: switch fpu state (restarted)
> guest user: use fpu (restarted)
>
> Why was the user removed? Full-time eager fpu?
No, I meant whether there is any reason to keep the hooks. In the meantime I found it myself:
commit 2d04a05bd7e93c13f13a82ac40de4065a99d069b
Author: Avi Kivity <avi@redhat.com>
Date: Wed Apr 20 15:32:49 2011 +0300
KVM: x86 emulator: emulate CLTS internally
Avoid using ctxt->vcpu; we can do everything with ->get_cr() and ->set_cr().
A side effect is that we no longer activate the fpu on emulated CLTS; but that
should be very rare.
Signed-off-by: Avi Kivity <avi@redhat.com>
vmx_fpu_activate and svm_fpu_activate are still called on #NM and CLTS, but
never from common code after the above patch.
Activation on CLTS is currently VMX only; I guess on AMD we could check the
decode assists' CR_VALID bit and instruction length to detect CLTS.
Paolo
* Re: [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
` (4 preceding siblings ...)
2014-08-18 10:18 ` [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Paolo Bonzini
@ 2014-08-18 10:52 ` Xiao Guangrong
2014-08-18 11:18 ` Paolo Bonzini
5 siblings, 1 reply; 19+ messages in thread
From: Xiao Guangrong @ 2014-08-18 10:52 UTC (permalink / raw)
To: Wanpeng Li, Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
On 08/18/2014 05:50 PM, Wanpeng Li wrote:
> EPT misconfig handler in kvm will check which reason lead to EPT
> misconfiguration after vmexit. One of the reasons is that an EPT
> paging-structure entry is configured with settings reserved for
> future functionality. However, the handler can't identify if
> paging-structure entry of reserved bits for 1-GByte page are
> configured, since PDPTE which point to 1-GByte page will reserve
> bits 29:12 instead of bits 7:3 which are reserved for PDPTE that
> references an EPT Page Directory. This patch fix it by reserve
> bits 29:12 for 1-GByte page.
That mask is only set in the lowest-level PTE for a 4K page; I think it
is not a problem, no?
* Re: [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
2014-08-18 10:20 ` Paolo Bonzini
@ 2014-08-18 10:52 ` Paolo Bonzini
1 sibling, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 10:52 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 11:50, Wanpeng Li ha scritto:
> fpu_activate hook is introduced by commit 6b52d186 (KVM: Activate fpu on
> clts), however, there is no user currently, this patch drop it.
>
> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
> ---
> arch/x86/include/asm/kvm_host.h | 1 -
> arch/x86/kvm/svm.c | 1 -
> arch/x86/kvm/vmx.c | 1 -
> 3 files changed, 3 deletions(-)
>
> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
> index 5724601..b68f3e5 100644
> --- a/arch/x86/include/asm/kvm_host.h
> +++ b/arch/x86/include/asm/kvm_host.h
> @@ -710,7 +710,6 @@ struct kvm_x86_ops {
> void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
> unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
> void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
> - void (*fpu_activate)(struct kvm_vcpu *vcpu);
> void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
>
> void (*tlb_flush)(struct kvm_vcpu *vcpu);
> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
> index ddf7427..1f49c86 100644
> --- a/arch/x86/kvm/svm.c
> +++ b/arch/x86/kvm/svm.c
> @@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
> .cache_reg = svm_cache_reg,
> .get_rflags = svm_get_rflags,
> .set_rflags = svm_set_rflags,
> - .fpu_activate = svm_fpu_activate,
> .fpu_deactivate = svm_fpu_deactivate,
>
> .tlb_flush = svm_flush_tlb,
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 71cbee5..2963303 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
> .cache_reg = vmx_cache_reg,
> .get_rflags = vmx_get_rflags,
> .set_rflags = vmx_set_rflags,
> - .fpu_activate = vmx_fpu_activate,
> .fpu_deactivate = vmx_fpu_deactivate,
>
> .tlb_flush = vmx_flush_tlb,
>
Applying shortly to kvm/queue, thanks.
Paolo
* Re: [PATCH 2/5] KVM: x86: drop fpu_activate hook
2014-08-18 10:51 ` Paolo Bonzini
@ 2014-08-18 11:15 ` Avi Kivity
0 siblings, 0 replies; 19+ messages in thread
From: Avi Kivity @ 2014-08-18 11:15 UTC (permalink / raw)
To: Paolo Bonzini, Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
On 08/18/2014 01:51 PM, Paolo Bonzini wrote:
> Il 18/08/2014 12:26, Avi Kivity ha scritto:
>> On 08/18/2014 01:20 PM, Paolo Bonzini wrote:
>>> Il 18/08/2014 11:50, Wanpeng Li ha scritto:
>>>> fpu_activate hook is introduced by commit 6b52d186 (KVM: Activate fpu on
>>>> clts), however, there is no user currently, this patch drop it.
>>>>
>>>> Reviewed-by: Yang Zhang <yang.z.zhang@intel.com>
>>>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>>>> ---
>>>> arch/x86/include/asm/kvm_host.h | 1 -
>>>> arch/x86/kvm/svm.c | 1 -
>>>> arch/x86/kvm/vmx.c | 1 -
>>>> 3 files changed, 3 deletions(-)
>>>>
>>>> diff --git a/arch/x86/include/asm/kvm_host.h
>>>> b/arch/x86/include/asm/kvm_host.h
>>>> index 5724601..b68f3e5 100644
>>>> --- a/arch/x86/include/asm/kvm_host.h
>>>> +++ b/arch/x86/include/asm/kvm_host.h
>>>> @@ -710,7 +710,6 @@ struct kvm_x86_ops {
>>>> void (*cache_reg)(struct kvm_vcpu *vcpu, enum kvm_reg reg);
>>>> unsigned long (*get_rflags)(struct kvm_vcpu *vcpu);
>>>> void (*set_rflags)(struct kvm_vcpu *vcpu, unsigned long rflags);
>>>> - void (*fpu_activate)(struct kvm_vcpu *vcpu);
>>>> void (*fpu_deactivate)(struct kvm_vcpu *vcpu);
>>>> void (*tlb_flush)(struct kvm_vcpu *vcpu);
>>>> diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
>>>> index ddf7427..1f49c86 100644
>>>> --- a/arch/x86/kvm/svm.c
>>>> +++ b/arch/x86/kvm/svm.c
>>>> @@ -4349,7 +4349,6 @@ static struct kvm_x86_ops svm_x86_ops = {
>>>> .cache_reg = svm_cache_reg,
>>>> .get_rflags = svm_get_rflags,
>>>> .set_rflags = svm_set_rflags,
>>>> - .fpu_activate = svm_fpu_activate,
>>>> .fpu_deactivate = svm_fpu_deactivate,
>>>> .tlb_flush = svm_flush_tlb,
>>>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>>>> index 71cbee5..2963303 100644
>>>> --- a/arch/x86/kvm/vmx.c
>>>> +++ b/arch/x86/kvm/vmx.c
>>>> @@ -8896,7 +8896,6 @@ static struct kvm_x86_ops vmx_x86_ops = {
>>>> .cache_reg = vmx_cache_reg,
>>>> .get_rflags = vmx_get_rflags,
>>>> .set_rflags = vmx_set_rflags,
>>>> - .fpu_activate = vmx_fpu_activate,
>>>> .fpu_deactivate = vmx_fpu_deactivate,
>>>> .tlb_flush = vmx_flush_tlb,
>>>>
>>> Avi/Gleb, do you remember any particular reason for this?
>>>
>> IIRC (vaguely) if we expect the fpu to be used in the near future, we
>> activate it eagerly so that we don't fault when it is used.
>>
>> Prevents the sequence:
>>
>> guest user: use fpu
>> #NM
>> host: reflect #NM to guest
>> guest kernel: CLTS
>> guest kernel: switch fpu state
>> #NM
>> host: switch fpu
>> guest kernel: switch fpu state (restarted)
>> guest user: use fpu (restarted)
>>
>> Why was the user removed? Full-time eager fpu?
> No, I mean any reason to keep the hooks.
If there are no callers, I can't think of any.
> In the meanwhile I found it myself:
>
> commit 2d04a05bd7e93c13f13a82ac40de4065a99d069b
> Author: Avi Kivity <avi@redhat.com>
> Date: Wed Apr 20 15:32:49 2011 +0300
>
> KVM: x86 emulator: emulate CLTS internally
>
> Avoid using ctxt->vcpu; we can do everything with ->get_cr() and ->set_cr().
>
> A side effect is that we no longer activate the fpu on emulated CLTS; but that
> should be very rare.
>
> Signed-off-by: Avi Kivity <avi@redhat.com>
>
> vmx_fpu_activate and svm_fpu_activate are still called on #NM and CLTS, but
> never from common code after the above patch.
>
> Activation on CLTS is currently VMX only; I guess on AMD we could check the
> decode assists' CR_VALID bit and instruction length to detect CLTS.
* Re: [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
2014-08-18 10:52 ` Xiao Guangrong
@ 2014-08-18 11:18 ` Paolo Bonzini
0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 11:18 UTC (permalink / raw)
To: Xiao Guangrong, Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Il 18/08/2014 12:52, Xiao Guangrong ha scritto:
>> > EPT misconfig handler in kvm will check which reason lead to EPT
>> > misconfiguration after vmexit. One of the reasons is that an EPT
>> > paging-structure entry is configured with settings reserved for
>> > future functionality. However, the handler can't identify if
>> > paging-structure entry of reserved bits for 1-GByte page are
>> > configured, since PDPTE which point to 1-GByte page will reserve
>> > bits 29:12 instead of bits 7:3 which are reserved for PDPTE that
>> > references an EPT Page Directory. This patch fix it by reserve
>> > bits 29:12 for 1-GByte page.
> That mask is only set in the lowest pte for 4K page, i think it
> is not a problem, no?
It will cause KVM to WARN. The EPT memory type will also be ignored for
gigabyte pages.
Paolo
* Re: [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits of Variable Range MTRRs
2014-08-18 9:50 ` [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits " Wanpeng Li
@ 2014-08-18 12:27 ` Wanpeng Li
2014-08-18 12:49 ` Paolo Bonzini
0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2014-08-18 12:27 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
Hi Paolo,
On Mon, Aug 18, 2014 at 05:50:31PM +0800, Wanpeng Li wrote:
>Section 11.11.2.3 of the SDM mentions "All other bits in the IA32_MTRR_PHYSBASEn
>and IA32_MTRR_PHYSMASKn registers are reserved; the processor generates a
>general-protection exception(#GP) if software attempts to write to them". This
>patch does the same in kvm.
>
How about this one?
Regards,
Wanpeng Li
>Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>---
> arch/x86/kvm/x86.c | 15 +++++++++++++--
> 1 file changed, 13 insertions(+), 2 deletions(-)
>
>diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>index caaffeb..aa64c70 100644
>--- a/arch/x86/kvm/x86.c
>+++ b/arch/x86/kvm/x86.c
>@@ -1769,11 +1769,22 @@ static bool mtrr_valid(struct kvm_vcpu *vcpu, u32 msr, u64 data)
> /* variable MTRRs */
> if (msr >= 0x200 && msr < 0x200 + 2 * KVM_NR_VAR_MTRR) {
> int idx, is_mtrr_mask;
>+ u64 mask = 0;
>
> idx = (msr - 0x200) / 2;
> is_mtrr_mask = msr - 0x200 - 2 * idx;
>- if (!is_mtrr_mask)
>- return valid_mtrr_type(data & 0xff);
>+ for (i = 63; i > boot_cpu_data.x86_phys_bits; i--)
>+ mask |= (1ULL << i);
>+ if (!is_mtrr_mask) {
>+ if (!valid_mtrr_type(data & 0xff))
>+ return false;
>+ mask |= 0xf00;
>+ } else
>+ mask |= 0x7ff;
>+ if (data & mask) {
>+ kvm_inject_gp(vcpu, 0);
>+ return false;
>+ }
> }
> return true;
> }
>--
>1.9.1
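The reserved-bit mask built in the quoted patch can be sketched in isolation. The helper below is a hypothetical reconstruction, not the actual mtrr_valid() code (which uses boot_cpu_data.x86_phys_bits rather than a parameter), but it shows the resulting bit layout for IA32_MTRR_PHYSBASEn vs. IA32_MTRR_PHYSMASKn:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the reserved-bit mask from the patch above.
 * Returns the mask of bits that must be zero in a write to a
 * variable-range MTRR MSR, given the physical address width. */
static uint64_t mtrr_rsvd_mask(int phys_bits, int is_mtrr_mask)
{
	uint64_t mask = 0;
	int i;

	/* Bits above the physical address width are always reserved. */
	for (i = 63; i > phys_bits; i--)
		mask |= 1ULL << i;

	if (!is_mtrr_mask)
		mask |= 0xf00;	/* IA32_MTRR_PHYSBASEn: bits 11:8 reserved */
	else
		mask |= 0x7ff;	/* IA32_MTRR_PHYSMASKn: bits 10:0 reserved
				 * (bit 11 is the Valid bit, writable) */

	return mask;
}
```

A write where `data & mask` is nonzero would then trigger the #GP injection the patch adds.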
* Re: [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits of Variable Range MTRRs
2014-08-18 12:27 ` Wanpeng Li
@ 2014-08-18 12:49 ` Paolo Bonzini
0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-18 12:49 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Gleb Natapov, hpa, x86, kvm, linux-kernel
On 18/08/2014 14:27, Wanpeng Li wrote:
> > Section 11.11.2.3 of the SDM mentions "All other bits in the IA32_MTRR_PHYSBASEn
> > and IA32_MTRR_PHYSMASKn registers are reserved; the processor generates a
> > general-protection exception(#GP) if software attempts to write to them". This
> > patch does the same in kvm.
>
> How about this one?
It's okay, but it is going to change after you modify patch 4.
Paolo
* Re: [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
2014-08-18 10:18 ` [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Paolo Bonzini
@ 2014-08-19 2:17 ` Wanpeng Li
2014-08-19 6:11 ` Paolo Bonzini
0 siblings, 1 reply; 19+ messages in thread
From: Wanpeng Li @ 2014-08-19 2:17 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: Xiao Guangrong, Gleb Natapov, hpa, x86, kvm, linux-kernel
Hi Paolo,
On Mon, Aug 18, 2014 at 12:18:59PM +0200, Paolo Bonzini wrote:
>On 18/08/2014 11:50, Wanpeng Li wrote:
>> The EPT misconfig handler in kvm checks which reason led to the EPT
>> misconfiguration after a vmexit. One of the reasons is that an EPT
>> paging-structure entry is configured with settings reserved for
>> future functionality. However, the handler can't tell whether the
>> reserved bits of a paging-structure entry for a 1-GByte page are
>> set, since a PDPTE that points to a 1-GByte page reserves
>> bits 29:12 instead of bits 7:3, which are reserved for a PDPTE that
>> references an EPT page directory. This patch fixes it by reserving
>> bits 29:12 for 1-GByte pages.
>>
>> Signed-off-by: Wanpeng Li <wanpeng.li@linux.intel.com>
>> ---
>> arch/x86/kvm/vmx.c | 14 ++++++++++----
>> 1 file changed, 10 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
>> index bfe11cf..71cbee5 100644
>> --- a/arch/x86/kvm/vmx.c
>> +++ b/arch/x86/kvm/vmx.c
>> @@ -5521,9 +5521,14 @@ static u64 ept_rsvd_mask(u64 spte, int level)
>> for (i = 51; i > boot_cpu_data.x86_phys_bits; i--)
>> mask |= (1ULL << i);
>>
>> - if (level > 2)
>> - /* bits 7:3 reserved */
>> - mask |= 0xf8;
>> + if (level > 2) {
>
>level can be 4 here. You have to return 0xf8 for level == 4.
>
>The same "if" statement then can cover both 2MB and 1GB pages, like
>
> if (spte & (1ULL << 7))
> /* 1GB/2MB page, bits 29:12 or 20:12 reserved respectively */
> mask |= (PAGE_SIZE << ((level - 1) * 9)) - PAGE_SIZE;
> else
> /* bits 6:3 reserved */
> mask |= 0x78;
>
>> - if (level == 1 || (level == 2 && (spte & (1ULL << 7)))) {
>> + if (level == 1 || ((level == 3 || level == 2)
>> + && (spte & (1ULL << 7)))) {
>
>This condition can be simplified by checking the return value of ept_rsvd_mask.
>If it includes 0x38, this is a large page. Otherwise it is a leaf page and
>you can go down the "if".
As you know, bits 5:3, which are used for the EPT memory type, are not
reserved bits, so I fail to understand why checking the return value of
ept_rsvd_mask tells us it is a large page if it includes 0x38. Could you
explain in more detail? ;-)
Regards,
Wanpeng Li
>
>Paolo
>
>> u64 ept_mem_type = (spte & 0x38) >> 3;
>>
>> if (ept_mem_type == 2 || ept_mem_type == 3 ||
>>
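Paolo's suggested mask computation can be checked standalone. The sketch below is a hypothetical reconstruction, not the actual vmx.c code, showing that the PAGE_SIZE shift yields exactly bits 20:12 for a 2MB PDE (level 2) and bits 29:12 for a 1GB PDPTE (level 3):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096ULL

/* Sketch of the large-page reserved-bit range from Paolo's suggestion:
 * for a leaf entry with bit 7 set, the guest-physical bits below the
 * page size are reserved.  (PAGE_SIZE << ((level - 1) * 9)) is the page
 * size at that level (2MB for level 2, 1GB for level 3); subtracting
 * PAGE_SIZE sets every bit from 12 up to just below that boundary. */
static uint64_t large_page_rsvd_bits(int level)
{
	return (PAGE_SIZE << ((level - 1) * 9)) - PAGE_SIZE;
}
```

This matches the hard-coded constants in the original patch (0x1ff000 for 2MB, 0x3ffff000 for 1GB) while covering both levels with one expression.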
* Re: [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page
2014-08-19 2:17 ` Wanpeng Li
@ 2014-08-19 6:11 ` Paolo Bonzini
0 siblings, 0 replies; 19+ messages in thread
From: Paolo Bonzini @ 2014-08-19 6:11 UTC (permalink / raw)
To: Wanpeng Li; +Cc: Xiao Guangrong, Gleb Natapov, hpa, x86, kvm, linux-kernel
On 19/08/2014 04:17, Wanpeng Li wrote:
>>> >> - if (level == 1 || (level == 2 && (spte & (1ULL << 7)))) {
>>> >> + if (level == 1 || ((level == 3 || level == 2)
>>> >> + && (spte & (1ULL << 7)))) {
>> >
>> >This condition can be simplified by checking the return value of ept_rsvd_mask.
>> >If it includes 0x38, this is a large page.
Oops, a "not" was missing. If it includes 0x38, this is _not_ a large
page (it is a page directory / page directory pointer / PML4).
>> Otherwise it is a leaf page and
>> you can go down the "if".
> As you know, bits 5:3, which are used for the EPT memory type, are not
> reserved bits, so I fail to understand why checking the return value of
> ept_rsvd_mask tells us it is a large page if it includes 0x38. Could you
> explain in more detail? ;-)
A non-leaf page will always have 0x38 in the ept_rsvd_mask. A leaf page
will never have 0x38 in the ept_rsvd_mask.
Paolo
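The rule Paolo states can be expressed as a tiny hypothetical helper (not the actual kvm code): non-leaf entries (page directory / PDPT / PML4) reserve bits 5:3, so their reserved-bit mask contains 0x38, while leaf entries use bits 5:3 for the EPT memory type, so their mask never does:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch of the simplification: classify an entry as leaf
 * or non-leaf purely from its reserved-bit mask.  If bits 5:3 (0x38)
 * are reserved, the entry is a non-leaf paging structure; otherwise it
 * is a leaf (4K, 2MB, or 1GB) whose bits 5:3 hold the memory type. */
static int is_leaf_from_rsvd_mask(uint64_t rsvd_mask)
{
	return (rsvd_mask & 0x38) == 0;
}
```

With this, the multi-level condition in ept_misconfig_inspect_spte collapses to a single check on the mask already computed by ept_rsvd_mask.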
end of thread, other threads:[~2014-08-19 6:11 UTC | newest]
Thread overview: 19+ messages
2014-08-18 9:50 [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Wanpeng Li
2014-08-18 9:50 ` [PATCH 2/5] KVM: x86: drop fpu_activate hook Wanpeng Li
2014-08-18 10:20 ` Paolo Bonzini
2014-08-18 10:26 ` Avi Kivity
2014-08-18 10:51 ` Paolo Bonzini
2014-08-18 11:15 ` Avi Kivity
2014-08-18 10:52 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 3/5] KVM: vmx: don't vmx_segment_cache_clear twice in enter_pmode Wanpeng Li
2014-08-18 10:24 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 4/5] KVM: x86: fix check legal type of Variable Range MTRRs Wanpeng Li
2014-08-18 10:28 ` Paolo Bonzini
2014-08-18 9:50 ` [PATCH 5/5] KVM: x86: #GP when attempts to write reserved bits " Wanpeng Li
2014-08-18 12:27 ` Wanpeng Li
2014-08-18 12:49 ` Paolo Bonzini
2014-08-18 10:18 ` [PATCH 1/5] KVM: vmx: fix ept reserved bits for 1-GByte page Paolo Bonzini
2014-08-19 2:17 ` Wanpeng Li
2014-08-19 6:11 ` Paolo Bonzini
2014-08-18 10:52 ` Xiao Guangrong
2014-08-18 11:18 ` Paolo Bonzini