* [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini @ 2019-12-04 15:40 UTC
To: linux-kernel, kvm; +Cc: thomas.lendacky, sean.j.christopherson, stable

The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually
true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally if
available, it is simplest and works even if memory is not encrypted.

Cc: stable@vger.kernel.org
Reported-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 arch/x86/kvm/mmu/mmu.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index 6f92b40d798c..1e4ee4f8de5f 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -538,16 +538,20 @@ void kvm_mmu_set_mask_ptes(u64 user_mask, u64 accessed_mask,
 static u8 kvm_get_shadow_phys_bits(void)
 {
        /*
-        * boot_cpu_data.x86_phys_bits is reduced when MKTME is detected
-        * in CPU detection code, but MKTME treats those reduced bits as
-        * 'keyID' thus they are not reserved bits.  Therefore for MKTME
-        * we should still return physical address bits reported by CPUID.
+        * boot_cpu_data.x86_phys_bits is reduced when MKTME or SME are detected
+        * in CPU detection code, but the processor treats those reduced bits as
+        * 'keyID' thus they are not reserved bits.  Therefore KVM needs to look at
+        * the physical address bits reported by CPUID.
         */
-       if (!boot_cpu_has(X86_FEATURE_TME) ||
-           WARN_ON_ONCE(boot_cpu_data.extended_cpuid_level < 0x80000008))
-               return boot_cpu_data.x86_phys_bits;
+       if (likely(boot_cpu_data.extended_cpuid_level >= 0x80000008))
+               return cpuid_eax(0x80000008) & 0xff;

-       return cpuid_eax(0x80000008) & 0xff;
+       /*
+        * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
+        * custom CPUID.  Proceed with whatever the kernel found since these features
+        * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
+        */
+       return boot_cpu_data.x86_phys_bits;
 }

 static void kvm_mmu_reset_all_pte_masks(void)
--
1.8.3.1
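For readers following the thread outside the kernel tree, here is a minimal
user-space sketch of the lookup the patch prefers: query CPUID leaf 0x8000_0008,
take EAX[7:0] as MAXPHYADDR, and derive a page-table reserved-bit mask from it.
This is not KVM code; rsvd_bits() is a hypothetical helper that only mirrors the
idea behind KVM's macro of the same name, and the fallback value of 36 is purely
illustrative.

#include <stdio.h>
#include <stdint.h>
#include <cpuid.h>

/* Hypothetical helper: mask with bits s..e (inclusive) set. */
static uint64_t rsvd_bits(int s, int e)
{
        return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
        unsigned int eax, ebx, ecx, edx;
        unsigned int phys_bits;

        /* CPUID[0x8000_0008].EAX[7:0] is MAXPHYADDR, when the leaf exists. */
        if (__get_cpuid(0x80000008, &eax, &ebx, &ecx, &edx))
                phys_bits = eax & 0xff;
        else
                phys_bits = 36;  /* illustrative fallback, not the kernel's logic */

        /* Bits [51:MAXPHYADDR] of a page-table entry are then treated as reserved. */
        uint64_t rsvd = phys_bits < 52 ? rsvd_bits(phys_bits, 51) : 0;

        printf("MAXPHYADDR=%u, reserved PTE bit mask=0x%016llx\n",
               phys_bits, (unsigned long long)rsvd);
        return 0;
}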
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Sean Christopherson @ 2019-12-04 15:48 UTC
To: Paolo Bonzini; +Cc: linux-kernel, kvm, thomas.lendacky, stable

On Wed, Dec 04, 2019 at 04:40:37PM +0100, Paolo Bonzini wrote:
> The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually
> true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally if
> available, it is simplest and works even if memory is not encrypted.
[...]
> +       /*
> +        * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
> +        * custom CPUID.  Proceed with whatever the kernel found since these features
> +        * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).

No love for MKTME?  :-D

Reviewed-by: Sean Christopherson <sean.j.christopherson@intel.com>

> +        */
> +       return boot_cpu_data.x86_phys_bits;
>  }
>
>  static void kvm_mmu_reset_all_pte_masks(void)
> --
> 1.8.3.1
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini @ 2019-12-10 9:19 UTC
To: Sean Christopherson; +Cc: linux-kernel, kvm, thomas.lendacky, stable

On 04/12/19 16:48, Sean Christopherson wrote:
>> +       /*
>> +        * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
>> +        * custom CPUID.  Proceed with whatever the kernel found since these features
>> +        * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
> No love for MKTME? :-D

I admit I didn't check, but does MKTME really require CPUID leaves in
the "AMD range"?  (My machines have 0x80000008 as the highest supported
leaf in that range).

Paolo
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Sean Christopherson @ 2019-12-10 15:37 UTC
To: Paolo Bonzini; +Cc: linux-kernel, kvm, thomas.lendacky, stable

On Tue, Dec 10, 2019 at 10:19:57AM +0100, Paolo Bonzini wrote:
> On 04/12/19 16:48, Sean Christopherson wrote:
> >> +       /*
> >> +        * Quite weird to have VMX or SVM but not MAXPHYADDR; probably a VM with
> >> +        * custom CPUID.  Proceed with whatever the kernel found since these features
> >> +        * aren't virtualizable (SME/SEV also require CPUIDs higher than 0x80000008).
> > No love for MKTME? :-D
>
> I admit I didn't check, but does MKTME really require CPUID leaves in
> the "AMD range"?  (My machines have 0x80000008 as the highest supported
> leaf in that range).

Ah, no, I just misunderstood what the blurb in the parentheses was saying.
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Tom Lendacky @ 2019-12-04 15:57 UTC
To: Paolo Bonzini, linux-kernel, kvm; +Cc: sean.j.christopherson, stable

On 12/4/19 9:40 AM, Paolo Bonzini wrote:
> The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually
> true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally if
> available, it is simplest and works even if memory is not encrypted.

This isn't correct for AMD. The reduction in physical addressing is
correct. You can't set, e.g. bit 45, in the nested page table, because
that will be considered a reserved bit and generate an NPF. When memory
encryption is enabled today, bit 47 is the encryption indicator bit and
bits 46:43 must be zero or else an NPF is generated. The hardware uses
these bits internally based on whether running as the hypervisor or
based on the ASID of the guest.

Thanks,
Tom

[...]
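The SME/SEV layout described above is discoverable at run time. Below is a
minimal sketch that assumes AMD's documented CPUID Fn8000_001F encoding
(EAX[0] = SME, EAX[1] = SEV, EBX[5:0] = C-bit position, EBX[11:6] = physical
address bit reduction); treat those field positions as assumptions to verify
against the APM rather than as authoritative. On the parts Tom describes this
would presumably report a C-bit of 47, with the reduction accounting for the
drop from 48 usable bits to 43.

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
        unsigned int eax, ebx, ecx, edx;

        /* Leaf 0x8000001F describes AMD memory encryption, when present. */
        if (!__get_cpuid(0x8000001f, &eax, &ebx, &ecx, &edx)) {
                puts("CPUID 0x8000001F not available");
                return 0;
        }

        unsigned int sme       = eax & 1;               /* assumed: EAX[0] */
        unsigned int sev       = (eax >> 1) & 1;        /* assumed: EAX[1] */
        unsigned int cbit      = ebx & 0x3f;            /* assumed: EBX[5:0] */
        unsigned int reduction = (ebx >> 6) & 0x3f;     /* assumed: EBX[11:6] */

        printf("SME=%u SEV=%u C-bit=%u phys-addr reduction=%u bits\n",
               sme, sev, cbit, reduction);
        return 0;
}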
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Huang, Kai @ 2019-12-10 3:53 UTC
To: thomas.lendacky@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, pbonzini@redhat.com
Cc: Christopherson, Sean J, stable@vger.kernel.org

On Wed, 2019-12-04 at 09:57 -0600, Tom Lendacky wrote:
> On 12/4/19 9:40 AM, Paolo Bonzini wrote:
> > The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is
> > actually true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0]
> > unconditionally if available, it is simplest and works even if memory is
> > not encrypted.
>
> This isn't correct for AMD. The reduction in physical addressing is
> correct. You can't set, e.g. bit 45, in the nested page table, because
> that will be considered a reserved bit and generate an NPF. When memory
> encryption is enabled today, bit 47 is the encryption indicator bit and
> bits 46:43 must be zero or else an NPF is generated. The hardware uses
> these bits internally based on whether running as the hypervisor or
> based on the ASID of the guest.

Right. Although both MKTME and SME/SEV reduce physical bits, they treat
those reduced bits differently: MKTME treats those as keyID thus they can be
set, but SME/SEV treats those as reserved bits so you cannot set any of them.

Maybe the naming of shadow_phys_bits is a little bit confusing here. The purpose
of it was to determine the first reserved bit, not the actual physical address
bits. Therefore for MKTME it should include the keyID bits, but for SME/SEV it
should not.

Thanks,
-Kai

[...]
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini @ 2019-12-10 9:18 UTC
To: Huang, Kai, thomas.lendacky@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Christopherson, Sean J, stable@vger.kernel.org

On 10/12/19 04:53, Huang, Kai wrote:
> Right. Although both MKTME and SME/SEV reduce physical bits, they treat
> those reduced bits differently: MKTME treats those as keyID thus they can be
> set, but SME/SEV treats those as reserved bits so you cannot set any of them.
>
> Maybe the naming of shadow_phys_bits is a little bit confusing here. The purpose
> of it was to determine the first reserved bit, not the actual physical address
> bits. Therefore for MKTME it should include the keyID bits, but for SME/SEV it
> should not.

Not just the first reserved bit, but _some_ reserved bit such that all
consecutive bits up to bit 51 are reserved.

Paolo
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini @ 2019-12-10 9:17 UTC
To: Tom Lendacky, linux-kernel, kvm; +Cc: sean.j.christopherson, stable

On 04/12/19 16:57, Tom Lendacky wrote:
> On 12/4/19 9:40 AM, Paolo Bonzini wrote:
>> The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is actually
>> true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0] unconditionally if
>> available, it is simplest and works even if memory is not encrypted.
>
> This isn't correct for AMD. The reduction in physical addressing is
> correct. You can't set, e.g. bit 45, in the nested page table, because
> that will be considered a reserved bit and generate an NPF. When memory
> encryption is enabled today, bit 47 is the encryption indicator bit and
> bits 46:43 must be zero or else an NPF is generated. The hardware uses
> these bits internally based on whether running as the hypervisor or
> based on the ASID of the guest.

kvm_get_shadow_phys_bits() must be conservative in that:

1) if a bit is reserved it _can_ return a value higher than its index

2) if a bit is used by the processor (for physical address or anything
else) it _must_ return a value higher than its index.

In the SEV case we're not obeying (2), because the function returns 43
when the C bit is bit 47. The patch fixes that.

Paolo

[...]
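A toy illustration of condition (2), reusing the hypothetical rsvd_bits()
helper from the earlier sketch and assuming the numbers from Tom's description
(MAXPHYADDR 48, C-bit 47, x86_phys_bits reduced to 43): a 43-bit return value
puts the C bit inside the reserved-bit mask, while the raw CPUID value keeps it
out. Whether that still holds if the C bit ever moves above MAXPHYADDR is
exactly what the rest of the thread discusses.

#include <stdio.h>
#include <stdint.h>

/* Same hypothetical helper as in the earlier sketch: bits s..e inclusive. */
static uint64_t rsvd_bits(int s, int e)
{
        return ((1ULL << (e - s + 1)) - 1) << s;
}

int main(void)
{
        const int cbit = 47;          /* assumed SEV C-bit position */
        const int reduced_bits = 43;  /* x86_phys_bits on an SME/SEV host, per Tom */
        const int cpuid_bits = 48;    /* assumed raw CPUID[0x8000_0008].EAX[7:0] */

        uint64_t mask_reduced = rsvd_bits(reduced_bits, 51);
        uint64_t mask_cpuid   = rsvd_bits(cpuid_bits, 51);

        /* Condition (2): a bit the processor actually uses must not land in the mask. */
        printf("C bit flagged as reserved with %d: %s\n", reduced_bits,
               (mask_reduced >> cbit) & 1 ? "yes (violates condition 2)" : "no");
        printf("C bit flagged as reserved with %d: %s\n", cpuid_bits,
               (mask_cpuid >> cbit) & 1 ? "yes" : "no");
        return 0;
}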
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Huang, Kai @ 2019-12-11 0:11 UTC
To: thomas.lendacky@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, pbonzini@redhat.com
Cc: Christopherson, Sean J, stable@vger.kernel.org

On Tue, 2019-12-10 at 10:17 +0100, Paolo Bonzini wrote:
> On 04/12/19 16:57, Tom Lendacky wrote:
> > On 12/4/19 9:40 AM, Paolo Bonzini wrote:
> > > The comment in kvm_get_shadow_phys_bits refers to MKTME, but the same is
> > > actually true of SME and SEV. Just use CPUID[0x8000_0008].EAX[7:0]
> > > unconditionally if available, it is simplest and works even if memory is
> > > not encrypted.
> >
> > This isn't correct for AMD. The reduction in physical addressing is
> > correct. You can't set, e.g. bit 45, in the nested page table, because
> > that will be considered a reserved bit and generate an NPF. When memory
> > encryption is enabled today, bit 47 is the encryption indicator bit and
> > bits 46:43 must be zero or else an NPF is generated. The hardware uses
> > these bits internally based on whether running as the hypervisor or
> > based on the ASID of the guest.
>
> kvm_get_shadow_phys_bits() must be conservative in that:
>
> 1) if a bit is reserved it _can_ return a value higher than its index
>
> 2) if a bit is used by the processor (for physical address or anything
> else) it _must_ return a value higher than its index.
>
> In the SEV case we're not obeying (2), because the function returns 43
> when the C bit is bit 47. The patch fixes that.

Could we guarantee that C-bit is always below bits reported by CPUID?

Thanks,
-Kai

[...]
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Paolo Bonzini @ 2019-12-11 9:07 UTC
To: Huang, Kai, thomas.lendacky@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Christopherson, Sean J, stable@vger.kernel.org

On 11/12/19 01:11, Huang, Kai wrote:
>> kvm_get_shadow_phys_bits() must be conservative in that:
>>
>> 1) if a bit is reserved it _can_ return a value higher than its index
>>
>> 2) if a bit is used by the processor (for physical address or anything
>> else) it _must_ return a value higher than its index.
>>
>> In the SEV case we're not obeying (2), because the function returns 43
>> when the C bit is bit 47. The patch fixes that.
> Could we guarantee that C-bit is always below bits reported by CPUID?

That's a question for AMD. :)  The C bit can move (and probably will,
otherwise they wouldn't have bothered adding it to CPUID) in future
generations of the processor.

Paolo
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Tom Lendacky @ 2019-12-11 20:48 UTC
To: Paolo Bonzini, Huang, Kai, linux-kernel@vger.kernel.org, kvm@vger.kernel.org
Cc: Christopherson, Sean J, stable@vger.kernel.org

On 12/11/19 3:07 AM, Paolo Bonzini wrote:
> On 11/12/19 01:11, Huang, Kai wrote:
>>> kvm_get_shadow_phys_bits() must be conservative in that:
>>>
>>> 1) if a bit is reserved it _can_ return a value higher than its index
>>>
>>> 2) if a bit is used by the processor (for physical address or anything
>>> else) it _must_ return a value higher than its index.
>>>
>>> In the SEV case we're not obeying (2), because the function returns 43
>>> when the C bit is bit 47. The patch fixes that.
>> Could we guarantee that C-bit is always below bits reported by CPUID?
>
> That's a question for AMD. :)  The C bit can move (and probably will,
> otherwise they wouldn't have bothered adding it to CPUID) in future
> generations of the processor.

Right, there's no way to guarantee that it is always below bits reported
by CPUID. As Paolo stated, the position is reported by CPUID so that it
can easily move and be accounted for programmatically.

Thanks,
Tom
* Re: [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits
From: Huang, Kai @ 2019-12-11 23:16 UTC
To: thomas.lendacky@amd.com, linux-kernel@vger.kernel.org, kvm@vger.kernel.org, pbonzini@redhat.com
Cc: Christopherson, Sean J, stable@vger.kernel.org

On Wed, 2019-12-11 at 14:48 -0600, Tom Lendacky wrote:
> On 12/11/19 3:07 AM, Paolo Bonzini wrote:
> > On 11/12/19 01:11, Huang, Kai wrote:
> > > > kvm_get_shadow_phys_bits() must be conservative in that:
> > > >
> > > > 1) if a bit is reserved it _can_ return a value higher than its index
> > > >
> > > > 2) if a bit is used by the processor (for physical address or anything
> > > > else) it _must_ return a value higher than its index.
> > > >
> > > > In the SEV case we're not obeying (2), because the function returns 43
> > > > when the C bit is bit 47. The patch fixes that.
> > > Could we guarantee that C-bit is always below bits reported by CPUID?
> >
> > That's a question for AMD. :)  The C bit can move (and probably will,
> > otherwise they wouldn't have bothered adding it to CPUID) in future
> > generations of the processor.
>
> Right, there's no way to guarantee that it is always below bits reported
> by CPUID. As Paolo stated, the position is reported by CPUID so that it
> can easily move and be accounted for programmatically.

Then I don't think this patch could fix the issue Paolo described?

Thanks,
-Kai
Thread overview: 12+ messages (newest: ~2019-12-11 23:16 UTC)
2019-12-04 15:40 [PATCH v2] KVM: x86: use CPUID to locate host page table reserved bits Paolo Bonzini
2019-12-04 15:48 ` Sean Christopherson
2019-12-10  9:19   ` Paolo Bonzini
2019-12-10 15:37     ` Sean Christopherson
2019-12-04 15:57 ` Tom Lendacky
2019-12-10  3:53   ` Huang, Kai
2019-12-10  9:18     ` Paolo Bonzini
2019-12-10  9:17   ` Paolo Bonzini
2019-12-11  0:11     ` Huang, Kai
2019-12-11  9:07       ` Paolo Bonzini
2019-12-11 20:48         ` Tom Lendacky
2019-12-11 23:16           ` Huang, Kai