Kernel KVM virtualization development
* [PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
@ 2026-05-14 11:34 Igor Mammedov
  2026-05-15 11:11 ` Igor Mammedov
  0 siblings, 1 reply; 2+ messages in thread
From: Igor Mammedov @ 2026-05-14 11:34 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, babu.moger

AMD Family 17h models 0x00-0x0f (Naples/Zen+) and 0x30-0x3f (Rome/Zen2)
have a hardware bug where Virtual VMLOAD/VMSAVE causes spurious VMEXITs
on VMLOAD/VMSAVE even with intercepts disabled, when the VMCB physical
address as seen by L1 falls in the [0x78000000, 0x80000000) range.
Reserve this range on affected CPUs so the page allocator never
allocates the VMCB there.

Given that these are relatively old CPUs, the bug only triggers in a
nested environment, and the impact is performance only, it's not worth
fixing on the KVM side (which could involve messing with the allocator
or reallocating the VMCB until it falls outside the affected range).
Hence a quirk here, to prevent test failures we can't do anything
about.

Signed-off-by: Igor Mammedov <imammedo@redhat.com>
---
There are no official EOL dates for either CPU, but judging by the
release dates of the following generation(s), both have effectively
been discontinued for ~3-5 years.
---
 x86/svm.c | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/x86/svm.c b/x86/svm.c
index a85da905..706ff2ef 100644
--- a/x86/svm.c
+++ b/x86/svm.c
@@ -334,6 +334,21 @@ static void setup_npt(void)
 	__setup_mmu_range(pml4e, 0, size, X86_MMU_MAP_USER);
 }
 
+#define VLS_BUG_START	0x78000000ULL
+#define VLS_BUG_END	0x80000000ULL
+
+static bool has_vls_bug(void)
+{
+	u32 sig = cpuid(1).a;
+	u32 fam = x86_family(sig);
+	u32 model = x86_model(sig);
+
+	if (fam != 0x17)
+		return false;
+
+	return model <= 0x0f || (model >= 0x30 && model <= 0x3f);
+}
+
 static void setup_svm(void)
 {
 	void *hsave = alloc_page();
@@ -413,6 +428,13 @@ int run_svm_tests(int ac, char **av, struct svm_test *svm_tests)
 		return report_summary();
 	}
 
+	if (has_vls_bug()) {
+		phys_addr_t addr;
+
+		for (addr = VLS_BUG_START; addr < VLS_BUG_END; addr += PAGE_SIZE)
+			reserve_pages(addr, 1);
+	}
+
 	setup_svm();
 
 	vmcb = alloc_page();
-- 
2.43.0



* Re: [PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome
  2026-05-14 11:34 [PATCH] x86/svm: work around Virtual VMLOAD/VMSAVE bug on Naples and Rome Igor Mammedov
@ 2026-05-15 11:11 ` Igor Mammedov
  0 siblings, 0 replies; 2+ messages in thread
From: Igor Mammedov @ 2026-05-15 11:11 UTC (permalink / raw)
  To: kvm; +Cc: pbonzini, babu.moger

On Thu, 14 May 2026 13:34:24 +0200
Igor Mammedov <imammedo@redhat.com> wrote:

Forgot to tag it properly, please ignore. I'll repost.

