Kernel KVM virtualization development
* [kvm-unit-tests PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
@ 2026-05-15 11:19 Igor Mammedov
  2026-05-15 12:44 ` Sean Christopherson
  0 siblings, 1 reply; 2+ messages in thread
From: Igor Mammedov @ 2026-05-15 11:19 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, babu.moger, seanjc

AMD Family 17h models 0x00-0x0f (Naples/Zen+) and 0x30-0x3f (Rome/Zen2)
have a hardware bug where Virtual VMLOAD/VMSAVE causes spurious VMEXITs
on VMLOAD/VMSAVE, even with intercepts disabled, when the VMCB physical
address as seen by L1 falls in the [0x78000000, 0x80000000) range (a
128 MiB window, i.e. 32768 4 KiB pages). Reserve this range on affected
CPUs so that the page allocator never allocates the VMCB there.

Given that these are relatively old CPUs, the bug only shows up in
nested environments, and the impact is performance only, it's not worth
fixing on the KVM side (which would involve messing with the allocator,
or reallocating the VMCB until it falls outside the affected range).
And since the test acts as L1 here, the issue can't be fixed in the
kernel (L0) anyway. Hence a quirk here, to prevent test failures that
we can't do anything about.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
CCed AMD folks for awareness, and in case they would be willing to look
into fixing the issue on the CPU side.

There are no official EOL dates for either part, but judging by the
release dates of the newer generation(s), both appear to have been
effectively discontinued for ~3-5 years.
---
 x86/svm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/x86/svm.c b/x86/svm.c
index a85da905..706ff2ef 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -334,6 +334,21 @@ static void setup_npt(void)
 	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
 }
 
+#define VLS_BUG_START	0x78000000ULL
+#define VLS_BUG_END	0x80000000ULL
+
+static bool has_vls_bug(void)
+{
+	u32 sig = cpuid(1).a;
+	u32 fam = x86_family(sig);
+	u32 model = x86_model(sig);
+
+	if (fam != 0x17)
+		return false;
+
+	return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
@@ -413,6 +428,13 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 		return report_summary();
 	}
 
+	if (has_vls_bug()) {
+		phys_addr_t addr;
+
+		for (addr = VLS_BUG_START; addr < VLS_BUG_END; addr += PAGE_SIZE)
+			reserve_pages(addr, 1);
+	}
+
 	setup_svm();
 
 	vmcb = alloc_page();
-- 
2.43.0



* Re: [kvm-unit-tests PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
  2026-05-15 11:19 [kvm-unit-tests PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome Igor Mammedov
@ 2026-05-15 12:44 ` Sean Christopherson
  0 siblings, 0 replies; 2+ messages in thread
From: Sean Christopherson @ 2026-05-15 12:44 UTC (permalink / raw)
  To: Igor Mammedov; +Cc: kvm, pbonzini, babu.moger

On Fri, May 15, 2026, Igor Mammedov wrote:
> AMD Family 17h models 0x00-0x0f (Naples/Zen+) and 0x30-0x3f (Rome/Zen2)
> have a hardware bug where Virtual VMLOAD/VMSAVE causes spurious VMEXITs
> on VMLOAD/VMSAVE, even with intercepts disabled, when the VMCB physical
> address as seen by L1 falls in the [0x78000000, 0x80000000) range (a
> 128 MiB window, i.e. 32768 4 KiB pages). Reserve this range on affected
> CPUs so that the page allocator never allocates the VMCB there.
> 
> Given that these are relatively old CPUs, the bug only shows up in
> nested environments, and the impact is performance only, it's not worth
> fixing on the KVM side (which would involve messing with the allocator,
> or reallocating the VMCB until it falls outside the affected range).
> And since the test acts as L1 here, the issue can't be fixed in the
> kernel (L0) anyway. Hence a quirk here, to prevent test failures that
> we can't do anything about.
> 
> Signed-off-by: Igor Mammedov <imammedo@redhat.com>
> ---
> CCed AMD folks for awareness, and in case they would be willing to look
> into fixing the issue on the CPU side.

LOL.  Don't hold your breath.  According to AMD, Zen4 erratum 1454 is a firmware
issue.

https://lore.kernel.org/all/20241105160234.1300702-1-superm1@kernel.org

> There are no official EOL dates for either part, but judging by the
> release dates of the newer generation(s), both appear to have been
> effectively discontinued for ~3-5 years.
> ---
>  x86/svm.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/x86/svm.c b/x86/svm.c
> index a85da905..706ff2ef 100644
> --- a/x86/svm.c
> +++ b/x86/svm.c
> @@ -334,6 +334,21 @@ static void setup_npt(void)
>  	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
>  }
>  
> +#define VLS_BUG_START	0x78000000ULL
> +#define VLS_BUG_END	0x80000000ULL
> +
> +static bool has_vls_bug(void)

Probably need to be more specific, because there appear to be multiple flavors
of vls bugs.  :-/
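
E.g. something along these lines (sketch only; the name and the comment
are illustrative, the body is unchanged from the patch):

	/*
	 * Fam 17h models 0x0x (Naples) and 0x3x (Rome) take spurious
	 * VMLOAD/VMSAVE #VMEXITs when the VMCB's PA is in a magic range,
	 * even when the VMLOAD/VMSAVE intercepts are disabled.
	 */
	static bool has_vls_spurious_vmexit_bug(void)
	{
		u32 sig = cpuid(1).a;
		u32 fam = x86_family(sig);
		u32 model = x86_model(sig);

		if (fam != 0x17)
			return false;

		return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
	}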

> +{
> +	u32 sig = cpuid(1).a;
> +	u32 fam = x86_family(sig);
> +	u32 model = x86_model(sig);
> +
> +	if (fam != 0x17)
> +		return false;
> +
> +	return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
> +}
> +
>  static void setup_svm(void)
>  {
>  	void *hsave = alloc_page();
> @@ -413,6 +428,13 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
>  		return report_summary();
>  	}
>  
> +	if (has_vls_bug()) {
> +		phys_addr_t addr;
> +
> +		for (addr = VLS_BUG_START; addr < VLS_BUG_END; addr += PAGE_SIZE)
> +			reserve_pages(addr, 1);
> +	}
> +
>  	setup_svm();
>  
>  	vmcb = alloc_page();
> -- 
> 2.43.0
> 

